Try as we may, many of us still run into unexpected errors when executing integrations. In the world of unpredictable data, these errors are inevitable. They can occur from human miscalculations, incorrect information, malfunctioning systems, or any number of other random events.
Sometimes it may make sense to kill the Job entirely and retry the record in 5 minutes; other times it's better to let the Job keep going and skip over the error record. Whichever you choose, first ask yourself, "Can the integration handle it when the Origin or the Destination is down?" If it can, you may not need to spend as much time stressing about such issues. Below, I've listed two common errors you may run into when integrating, and some big-picture strategies for a smoother integration.
ERROR: Failure When Running Records
Sometimes you’ll encounter an error during a data run. The internet goes out, or the Origin or Destination goes down, and you find yourself in a pickle. What should you do in these cases? What if records were missed due to the outage? You’ll need to re-run the records.
There are three ways you could approach the resolution of this error:
- Log the IDs of missed records and re-run them later
- Save the IDs of every successful record. Then, when you re-run the records, have the system ignore the ones you marked successful.
- Mark properly run records as “Processed” in your Origin, and only run records that show as “Unprocessed”.
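The third approach, tracking which records have already run, can be sketched in a few lines. This is a minimal Python illustration, not StarfishETL's actual API: `push_to_destination` is a hypothetical stand-in for your real Destination connector, and the processed IDs are persisted to a local JSON file so a re-run after an outage skips everything that already succeeded.

```python
import json
from pathlib import Path

# State file holding the IDs of records that ran successfully.
STATE_FILE = Path("processed_ids.json")

def push_to_destination(record: dict) -> None:
    """Stand-in for your real Destination connector call."""
    if record.get("fail"):  # simulate an outage for illustration
        raise ConnectionError("Destination unavailable")

def load_processed() -> set:
    """Load IDs of records already marked as Processed."""
    return set(json.loads(STATE_FILE.read_text())) if STATE_FILE.exists() else set()

def run_records(records: list) -> set:
    processed = load_processed()
    for record in records:
        if record["id"] in processed:
            continue  # already ran successfully; skip on re-run
        try:
            push_to_destination(record)
        except ConnectionError:
            # Outage: stop the Job. Unprocessed IDs stay out of the
            # state file, so they will be picked up on the next run.
            break
        processed.add(record["id"])
    STATE_FILE.write_text(json.dumps(sorted(processed)))
    return processed
```

On the first run after an outage, only the records whose IDs are missing from the state file are pushed again, which is exactly the "Processed"/"Unprocessed" behavior described above, just tracked outside the Origin.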
ERROR: Lookup Failure
If you’re storing a local cross-reference (XRef) list for faster lookups (such as Origin AccountID to Destination AccountID) what happens if the lookup fails?
Perhaps your local cross-reference database or file is in use or unavailable. Do you perform a backup lookup into the destination system?
I use my local XRef to look for the issue first. If it returns nothing, then I initiate a Smart Lookup in my destination to ask, “Are you SURE this QuickBooks ID doesn’t already exist?”. A thorough check of these two areas will help you avoid inserting an unnecessary duplicate record.
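That two-tier check can be expressed as a short helper. This is a hedged sketch, not StarfishETL's Smart Lookup itself: `lookup_in_destination` is a hypothetical stand-in for a live query against the Destination, and the local XRef is modeled as a plain dict. The key behaviors are (1) try the fast local cache first, (2) fall back to the Destination before assuming the record is new, and (3) repair the local cache when the fallback finds a match.

```python
def lookup_in_destination(origin_id: str):
    """Stand-in for a live Smart Lookup query against the Destination."""
    remote = {"ACC-100": "QB-55"}  # pretend remote data for illustration
    return remote.get(origin_id)

def resolve_destination_id(origin_id: str, xref: dict):
    # 1. Fast path: local cross-reference (XRef) cache.
    if origin_id in xref:
        return xref[origin_id]
    # 2. Fallback: "Are you SURE this ID doesn't already exist?"
    dest_id = lookup_in_destination(origin_id)
    if dest_id is not None:
        xref[origin_id] = dest_id  # repair the local XRef for next time
        return dest_id
    # 3. Both checks came up empty: safe to insert a new record.
    return None
```

Only when both lookups come back empty do you insert, which is what prevents the unnecessary duplicate.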
What happens if your local XRef lists or database are lost?
This can happen if you’re using the on-premises version of StarfishETL and a server crashes, files get deleted, or the integration is incorrectly moved to a new server. If you’re using the Cloud platform, this should not be an issue.
STRATEGY: Re-Check Your Work
In some cases, it may be useful to set up jobs that run periodically (such as over the weekend) to verify that all transactions from the last 1-3 weeks ran correctly. Running lookups on 25 fields to confirm each one is correct is much slower than simply re-running the record, and some tables are small enough that it makes sense to re-run every record in the table each weekend.
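A weekend audit job along those lines can be sketched as follows. This is an assumption-laden illustration: `upsert` stands in for whatever idempotent push your integration actually uses (re-sending an unchanged record must be a no-op, otherwise re-running creates duplicates), and `window_days` is the audit window.

```python
from datetime import datetime, timedelta

def weekend_audit(records, upsert, window_days=14):
    """Re-run every record modified within the audit window.

    `upsert` is assumed idempotent: re-sending an already-correct
    record changes nothing, so blanket re-runs are safe.
    """
    cutoff = datetime.now() - timedelta(days=window_days)
    recent = [r for r in records if r["modified"] >= cutoff]
    for record in recent:
        upsert(record)
    return len(recent)  # how many records were re-pushed
```

The idempotent `upsert` is the design choice that makes this cheap: instead of field-by-field verification, you simply replay recent history and let the Destination converge to the correct state.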
STRATEGY: Set Expectations
What if bad/strange data in the Origin or connection issues cause 0.1% of the items to fail? Is this a problem? If so, what should happen when discrepancies are found? Create a plan for how your team will address those instances and the procedures you or your integration partner should take to resolve them.
Many software companies release upgrades of their systems multiple times per year. If you have an ongoing integration, it’s important to prepare for the possibility that one of those upgrades could break your integration. What is the action plan if that occurs? Working with an implementation partner can help mitigate some of your biggest integration issues.
Concerns about errors and integration breaks can be discussed with, and subsequently handled by, your integration partner to put less stress on you and your team. Just make sure everyone is on the same page with the expectations and plans. That goes for any data project you undertake!