StarfishETL Epicor Connector & Integration Hints

I recently worked on a StarfishETL project to integrate a client’s Epicor ERP system with their Pipedrive sales CRM. On initial evaluation it seemed like a fairly straightforward project, but as work continued, I stumbled on several issues that forced us to rethink our process and change how we were retrieving data from Epicor. I’ll walk you through a few items to watch out for and offer some hints for adjusting your process if you are not receiving full record sets through the API.

#1 Setting the correct Origin Connection and SQL Selection Statement when working with Epicor

a. The Epicor API is complex in how it is structured, so be sure you are setting the correct Origin connection. There are several different “services” available through the API, and you have to use the correct service to reach the data you need.

b. If your integration is straightforward, you will most likely be retrieving data from multiple services in the BO namespace, such as CustomerSvc, ARInvoiceSvc, TermsSvc, and SalesRepSvc. To select the CustomerSvc service, you would click Edit Connection, add your credentials, and select Erp.BO.CustomerSvc as your Service. Note that these services may be named differently depending on your version of Epicor.

[Image: editing the connector]

c. You may also need to pass a pagesize parameter in the Additional Connection String Parameters field. We ran into issues with the API defaulting to returning only 100 records when we needed all of them. If you set pagesize to a number larger than your full record count, you should never run into this limit.
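For example, with tens of thousands of records in Epicor, something like the following in the Additional Connection String Parameters field should be safe (the value here is arbitrary; just pick one comfortably above your row count):

    pagesize=1000000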

d. The final step in setting up the Origin is to build your SQL Selection Statement. You can use the Query Builder link on the right-hand side to help find the columns you need. In one instance, we needed to sum the invoice amounts of Last Year Sales for each customer, so the statement would look something like this:

[Image: building the SQL Selection Statement]
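A rough sketch of that statement follows; the table name, column names, and date filter (InvcHead, InvoiceAmt, and the date range) are illustrative assumptions and will vary with your Epicor schema:

    SELECT Company, CustNum, SUM(InvoiceAmt) AS LastYearSales
    FROM InvcHead
    WHERE InvoiceDate >= '2022-01-01' AND InvoiceDate < '2023-01-01'
    GROUP BY Company, CustNum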

#2 Using the BAQ service to retrieve full record lists

a. A big issue we didn’t stumble upon until working with the client was that the API was sending back only a partial list of the records. We realized this while meeting with the client and comparing the record lists being pulled through the API against what was actually in Epicor.

b. After investigating further, we believe (though we are not certain) that these issues stem from additional fields or objects being returned when certain Epicor API endpoints are used. Here’s an example of some of the additional fields returned with the Customer endpoint:

[Image: additional fields returned by the Customer endpoint]

c. As you will notice, a number of the fields listed above do not have a column type listed in the documentation. Looking in Epicor, the fields without types do not exist on the Customer table, so it may be that the API is joining in other tables and thereby reducing the number of records sent back. In some cases we were seeing only 5,000 or so records come through the API when there were actually well over 10,000 records in Epicor.

d. To work around the problem, we moved the queries into Epicor by creating a BAQ (Business Activity Query) and accessing that query’s dataset through the BAQ API service. If you make the BAQ accessible to the API, you will find the query listed as an endpoint under its name. In this example, the BAQ was named qryARInvoices in Epicor, and as you can see below, that is also the name of the table you would select in StarfishETL.

[Image: the BAQ appearing in the Query Builder table list]
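Once the BAQ is exposed, the SQL Selection Statement becomes a simple select against it. A sketch, using the Table_Column field names our BAQ produced (your BAQ’s field names will depend on how you built it):

    SELECT InvcHead_Company, InvcHead_CustNum, LastYearSales
    FROM qryARInvoices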

#3 Working with Epicor’s complex API

a. This last hint isn’t related to an issue we encountered, but we did find it helpful for reducing the number of API calls and data writes to the destination, which in this case is Pipedrive. Epicor’s API is organized into multiple namespaces, each namespace containing multiple services, and each service exposing multiple endpoints. This setup necessitates multiple jobs in StarfishETL, since each endpoint needs its own Origin Connection and query.
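To illustrate the nesting, here is roughly what a REST path for the Customer service mentioned earlier looks like (the host and application segments are hypothetical, and exact paths depend on your Epicor version):

    https://your-server/EpicorERP/api/v1/Erp.BO.CustomerSvc/Customers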

b. Instead of writing to Pipedrive within each job, though, we decided to write the data to Xrefs (cross-reference lists), which let us temporarily store the data from each job and use it in the final job that writes to Pipedrive.

c. Each job pulls the necessary data from Epicor and then writes that data to an Xref along with an identifier that can be used later to retrieve the data for each customer. In this example we used the Company and Customer Number columns to uniquely identify each record.

[Image: process example]

d. So, for this example we are storing the Company and Customer Number as the Old ID (or key) and the summed total of Last Year Sales as the New ID (or value): Old ID @@ORG:InvcHead_Company@@@@ORG:InvcHead_CustNum@@ and New ID @@ORG:LastYearSales@@. This way we can easily retrieve the Last Year Sales from the Xref list by just passing in the Company and Customer Number.
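A concrete sketch of one resulting Xref row (the values are hypothetical):

    Old ID (key):    EPIC1042    (Company "EPIC" concatenated with CustNum 1042)
    New ID (value):  15300.25    (summed Last Year Sales for that customer)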

e. Then, all you need to do for the last StarfishETL job is set up an Xref read for each field you will be passing data into. You select the Xref List from the dropdown and set the Old ID to the Company and Customer Number. For each row that runs in the job, StarfishETL passes in the Company and Customer Number and gets back the Last Year Sales value, which is then passed to the API and updated in the destination. So, in this particular case, the Last Year Sales value is passed along and updated in Pipedrive.
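Conceptually, each Xref read behaves like a key/value lookup. In SQL terms it is roughly equivalent to the following (the table and column names here are only an analogy, not actual StarfishETL objects):

    SELECT NewID          -- the stored Last Year Sales value
    FROM XrefList         -- the Xref list selected in the dropdown
    WHERE OldID = Company + CustNum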
