
The Cost of Poor Data Quality


Did you know that the quality of crude oil depends more on its refining than on where it was found? Whether crude comes from the oil-rich fields of Tapis, Malaysia, or is painstakingly extracted from the Athabasca oil sands of Canada, refiners build their infrastructure around the characteristics of the crude closest to them, producing final products of similar quality. The raw product, in other words, isn’t nearly as important as the processing method.

As data solidifies its status as one of the most valuable commodities in the world, the quality of your data (and how you process it) becomes all the more relevant. Like oil, raw data is certainly valuable. Still, the systems you use to refine it, maintain its quality, and distribute it to the right parties are what extract the real business value.

There are many ways to measure that value. Specialists estimate that businesses with low data quality maturity lose around $3.1 trillion annually in the U.S. alone, or as much as 20% of their revenue. But why is the cost of poor data quality so great? We’d like to explore the financial side of data quality and answer that very question.

One of the best ways to save everyone time is automated data quality. With a tool like data observability, you can automate most manual data quality tasks and keep a consistent pulse on your data systems (see how it works here).

Poor data quality wastes everyone’s time

Unmanaged data is a massive time-waster in every department, from core data people like data scientists and engineers to end data consumers like salespeople. According to Gartner, poor data quality reduces overall labor productivity by as much as 20%.


Data scientists spend about 60% of their time verifying, cleaning, correcting, or even wholly scrapping and reworking data, or, as they like to call it, “data wrangling,” “data munging,” or “data janitor work.” They spend another 19% of their time hunting down the information they need. The result: fewer machine learning models get tested and deployed, because machine learning and data quality go hand in hand.

Therefore, one of the biggest pain points for any data-driven company is getting the right data to the right people in a workable format.

This is a common problem among companies because, as they expand, their data gets recopied and distributed to different applications, systems, and departments. Amid all this change and distribution, critical business data can become inconsistent, and no one knows which application or system holds the most up-to-date version. For example, a salesperson trying to call potential customers can waste up to 27% of their time on contact details that are incorrect or incomplete.

With higher data quality, you can deliver data that’s ready to use on arrival. Decision-makers can respond to issues and market changes in real time instead of waiting for their data to be verified and made actionable. High data quality also makes it easier to automate data integration, so quality isn’t lost when exchanging data between applications and systems as your company grows.

Bad data leads to bad analytics

Since analysis is one of the primary ways to extract value from data, it’s essential that those analytics be as accurate as possible. After all, poor data quality leads to poor business decisions and failed projects and initiatives. Businesses have prioritized using data to drive their decision-making: after the 2008 financial crisis, the global market saw a 30% increase in the use of business metrics for decision-making. Now, firms base around 45% of their business decisions on the data they’re collecting.

If you base these decisions on dirty data, it can lead to unexpected costs and failures for your company, like a wasted marketing budget. Marketers waste 21 cents of every media dollar because of poor data quality; in other words, one-fifth of the media budget delivers zero ROI. More specifically, missing or invalid customer contact information can cost an average large company up to $238,000 per campaign, and the lack of a single customer view or master record can run from $85,000 to $425,000 per campaign.

IT modernization projects fail because of poor data quality

As business processes become more automated, data quality is often the difference between a successful initiative and a failed one. More automation means building machine learning models and self-sufficient applications on the data your company has available, and implementing these systems on poor-quality data will inevitably lead to setbacks and failures.

The same is true for IT and data modernization projects, like migration to the cloud.

88% of all data integration projects fail entirely or significantly overrun their budgets because of poor data quality.

33% of organizations have delayed or canceled new IT systems for the same reason.

This problem with IT systems is alarming when you consider the importance of data infrastructure modernization for the contemporary business: 70% of productivity growth comes from IT projects. Poor data quality will erode about 10% of the savings you get from IT initiatives, meaning that if you save $21.7 million in labor productivity, roughly $2.2 million of it will be lost.

Risk and noncompliance

A significant financial risk of poor data quality comes from compliance and regulatory reporting. Suppose you store personal data that falls under the jurisdiction of the GDPR or other data protection laws. In that case, you need to maintain a certain level of data quality in terms of accuracy and integrity. If anyone requests their personal data from your company, you need to provide it promptly and at a quality they can understand. This also means keeping master records of all the personal data you hold.

You’ll also have to ensure all data under GDPR jurisdiction is validated upon entry and use, as up to 70% of contact data can become outdated annually as people change jobs, addresses, etc. Companies whose data doesn’t meet these standards risk severe fines: up to €20 million or 4% of annual turnover (whichever is higher) under the GDPR.
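To make “validate on entry” concrete, here’s a minimal sketch of an entry-point check in Python. The field names (email, last_verified) and the 12-month re-verification window are illustrative assumptions, not a prescription:

# A minimal validate-on-entry sketch. Field names and the 12-month
# re-verification window are hypothetical, for illustration only.
import re
from datetime import date

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_contact(record: dict) -> list[str]:
    """Return a list of data quality issues found in one contact record."""
    issues = []
    if not EMAIL_RE.match(record.get("email", "")):
        issues.append("invalid or missing email")
    last_verified = record.get("last_verified")
    if last_verified is None or (date.today() - last_verified).days > 365:
        issues.append("contact details not re-verified in the last 12 months")
    return issues

# Example: flag or reject the record before it enters downstream systems.
print(validate_contact({"email": "jane.doe@example", "last_verified": date(2020, 1, 15)}))

Running checks like this at every entry point is far cheaper than cleaning up a regulated dataset after the fact.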

In regulated industries like pharmaceuticals or banking, you also risk bad data skewing your reports, leading to fines for delivering false information. Basel II, the second of the Basel accords on banking supervision, requires “a framework to measure and manage data integrity” and “quantified and documented targets and robust processes to test the accuracy of data.” In pharmaceuticals, you’ll work with lots of personal data, especially in drug trials, so you must comply with the GDPR and similar regulations for proper storage and protection.

Here are other relevant financial figures related to non-compliant data:

  • Business disruption: $5.1 million
  • Productivity loss: $3.8 million
  • Revenue loss: $4 million
  • Fines, penalties, and others: $2 million

Instead, you can pay the average cost of staying compliant: $5.5 million. It’s much cheaper to pay for compliance upfront than to face the consequences later.

Poor data quality ruins customer experience

There are several ways poor data quality can ruin your customer experience and send potential buyers running for the hills. Remember that customer data typically degrades at about 2% per month, or roughly 25% per year. Gartner names poor data quality as the number one cause of CRM system failure.
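Those monthly and annual figures are roughly consistent once you compound them; here’s a quick back-of-the-envelope check in Python, using the 2% monthly rate cited above:

# Compounding a 2% monthly decay rate over a year.
monthly_decay = 0.02
still_accurate = (1 - monthly_decay) ** 12   # share of records still accurate after 12 months
print(f"{1 - still_accurate:.1%} of customer records degraded")   # ~21.5%, close to the ~25% annual figure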

The data you collect about your customers, their buying habits, and how they interact with your products and platform can all help you improve your user experience. Tailoring your business to customers will make them feel more confident in their preferences and help them get to the items they want to buy faster. However, if you try to build your customer experience around flawed data, it will have the opposite effect. Individuals will receive offers they aren’t interested in, sometimes two or three times over if a customer duplicate exists in your database.
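Catching those duplicates often starts with something as simple as normalizing a matching key before comparing records. A minimal sketch, using hypothetical customer records and email as the match key:

# A minimal duplicate-detection sketch; records and field names are hypothetical.
from collections import defaultdict

customers = [
    {"id": 1, "email": "Jane.Doe@example.com"},
    {"id": 2, "email": " jane.doe@example.com"},   # same person, different casing and whitespace
    {"id": 3, "email": "john@example.com"},
]

groups = defaultdict(list)
for c in customers:
    groups[c["email"].strip().lower()].append(c["id"])   # normalize the key before grouping

duplicates = {email: ids for email, ids in groups.items() if len(ids) > 1}
print(duplicates)   # {'jane.doe@example.com': [1, 2]}

Real-world matching usually needs fuzzier rules (name similarity, address standardization), but even an exact-match pass like this catches the duplicates that trigger repeated offers.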

For example, let’s say your organization runs primarily on donations, as a university does. You don’t want to resend offers to donors, or send them the wrong offer, and make them feel undervalued.

Better data quality also prevents common retail problems that turn customers away, like overpaying for a product or excessively slow delivery times. For example, low-quality geolocation data from your delivery drivers could lead them to take longer routes, deliver items to the wrong warehouse, and so on.

Other losses, in the hundreds of thousands, can come from your collections department due to missing or invalid customer data, or collateral data that isn’t linked to a contract and therefore can’t be reevaluated. The same goes for online retail, where companies can lose millions from scanning the wrong item prices, inventory-data inaccuracies, and phantom “stock-outs.”

View your data quality as an asset

Forrester says that data performance management is essential to proving the ROI of data. Data is now a business driver, resource, and economic stimulator. Business intelligence capabilities like dashboards and financial statements provide insight into the health of the business. As a primary resource, data needs the same kind of capabilities to show whether it is productive, where it can be improved, and whether it meets expectations. In other words, high-quality data should be treated as an asset.

Organizations that continue to make false assumptions about their data quality will only continue to experience inefficiencies, excessive costs, compliance risks, and customer satisfaction issues:

  • Customers can share negative experiences and write reviews about them.
  • Stakeholders can lose faith in data-based business decisions.
  • Employees can begin to question the validity of the data they’re working with, forcing them to double- and triple-check before using it.

Here is the takeaway:

Data quality impacts your reputation.

While the value of data quality management is apparent, many companies are still failing to take action.

Just look at the numbers:

Less than 50% of companies are confident in their internal data quality, and only 15% are confident in the data provided by third parties.

Despite this, companies still overestimate their overall DQ performance and underestimate the costs of errors in data, spending massive amounts of time firefighting immediate problems instead of investing in long-term solutions.

That’s why we recommend getting out in front of the problem. Business processes, customer expectations, source systems, and compliance rules change constantly. Your DQ initiative should reflect this by staying consistently updated with the data climate. Maintaining high data quality will give you some of the most valuable data on the market.

Right now, Gartner recommends maintaining a 3.5 sigma quality level for your data, or about 22,800 defects per million data points. At this rate, quality initiatives will cost about 20% of the total cost of a business process. For example, if you have a marketing process that costs $100 million a year to run, maintaining quality for that process would cost about $20 million. Keep in mind that this cost could be significantly reduced by new innovations in data quality automation. It’s not cheap, but it’s worth it.
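If you’re curious where a figure like 22,800 defects per million comes from, it’s the upper tail of a normal distribution at the stated sigma level, conventionally adjusted by a 1.5 sigma long-term shift. A quick sketch (the shift convention is our assumption, not something Gartner specifies):

# Back-of-the-envelope check of the 3.5 sigma figure, assuming the
# conventional Six Sigma 1.5-sigma long-term shift.
from math import erfc, sqrt

sigma_level = 3.5
shift = 1.5                              # standard long-term process shift convention
z = sigma_level - shift

defect_rate = 0.5 * erfc(z / sqrt(2))    # upper-tail probability of the standard normal
print(f"{defect_rate * 1_000_000:,.0f} defects per million")   # ~22,750, matching the ~22,800 cited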

If you want to get started with data quality, you can begin here.

Don’t lose money! Get ahead of poor data quality

How does the average business lose $9.7 million annually because of poor data quality? It’s a combination of lost productivity, lousy analytics, project failures, not valuing data quality as an asset, the cost of non-compliance, and ruined customer relationships. If you want to save yourself millions, you should invest in a data quality management initiative as soon as possible.

You can start with Ataccama. Ataccama ONE Data Quality Suite is a self-driving data quality solution. It lets you understand the state of your data, validate and improve it, prevent bad data from entering your systems, and continuously monitor data quality.

By using metadata and AI, Ataccama ONE simplifies solution configuration, deployment, and scaling. Both small teams and enterprise-wide deployments with complex data landscapes see quick ROI and fast time to production. We make data quality accessible to anyone.

Automate the delivery of high-quality data

Related articles

What Is Data Quality and Why Is It Important?
Learn about the importance of data quality management regarding the EU GDPR and…

How to Get Started with Data Quality: The 3 Steps You Should Take First
Starting out with data quality can be hard. We're breaking it down for you in…

The Evolution and Future of Data Quality
Learn about the evolution of data quality from SQL to the Data…