Navigating the Data Quality Minefield
Your organisation never stands still, with data continually flowing in and out of your systems at an impressive rate. But there is one issue that most organisations struggle to cope with: controlling the accuracy of data received from third parties. The quality of your data is vital to the value it can provide you with.
The consumption of poor-quality data can cause significant knock-on effects. Bad data leads to bad decision-making, often resulting in the need for time-consuming and costly corrective action later on. It therefore makes sense to check the quality of incoming data at the point of arrival.
With seemingly few options available, many organisations turn to performing these data quality processes manually. This is far from ideal. The processes are often the remit of a small team of specialists, leaving the organisation vulnerable to knowledge shortages if members of the team move on.
Additionally, these specialist teams frequently struggle to keep up with the volume of requests for their time, meaning data users have to wait longer to access the intelligence contained within the data. All of these small delays add up to cause significant levels of disruption for the wider organisation.
Using Automation to Improve Data Quality
What’s needed is an automated way to process the incoming data before it reaches significant systems, so that users aren’t caught in a slow and costly queue. What would be even better is a system that could take this one step further by defining how any non-compliant data is handled – perhaps returning incorrect data to the issuer for review.
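To make the idea concrete, here is a minimal sketch of that "validate on arrival, route non-compliant data back" pattern in Python. The record fields and validation rules (`id`, `postcode`) are hypothetical placeholders; a real deployment would use your organisation's own criteria, or a dedicated tool such as FME.

```python
def validate_record(record):
    """Return a list of problems found in one incoming record (empty if it passes)."""
    problems = []
    # Hypothetical example rules; replace with your organisation's own checks.
    if not record.get("id"):
        problems.append("missing id")
    if not record.get("postcode", "").strip():
        problems.append("missing postcode")
    return problems


def partition_records(records):
    """Split incoming records into accepted data and rejects to return to the issuer."""
    accepted, rejected = [], []
    for record in records:
        problems = validate_record(record)
        if problems:
            # Non-compliant data is held back, with reasons the issuer can act on.
            rejected.append({"record": record, "problems": problems})
        else:
            accepted.append(record)
    return accepted, rejected
```

Run against each incoming batch, the accepted list flows on to downstream systems while the rejected list, each entry carrying its reasons, goes back to the issuer for review.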
A system like this would drastically increase the volume of processes that could be performed. Better still, it would also remove the risk of human error. This results in far greater confidence in the data and strengthens any decisions made using it. In addition, by removing manual processing, you can benefit from significant time and cost savings. Any delays further downstream are also minimised, or even eliminated altogether.
East Hampshire District Council applied this logic, using FME to create a process to QA a large collection of historic datasets ready for loading into their SQL database. Automating the process increased the overall accuracy and reliability of the data, and also enabled significant savings in staff costs.
It’s a simple approach, but it has the potential to solve an issue that causes organisations complex problems. It may not work for all scenarios, but it would definitely be a good starting point for most.