Data is an asset. Those who understand this and position their organisation to realise its inherent value have a massive advantage over competitors who do not.
Data quality matters. Good quality data instils confidence in business decisions based on it. Timely delivery of good quality data allows organisations to react quickly to change through faster decisions. Accurate data, particularly for financial institutions, assists with regulatory compliance and reporting.
Data quality issues, on the other hand, not only leave organisations at a competitive disadvantage but also expose them to reputational risk and suboptimal customer experiences compared with those who get it right. A Gartner report titled “Measuring the Business Value of Data Quality” stated that “40% of business initiatives fail due to poor data quality”. So how does an organisation maximise the value of its data asset by ensuring the quality of that data is fit for purpose?
I believe high quality data is the outcome of an organisation effectively using appropriate capabilities (internal and external), tools, processes and systems to manage and leverage information. Possibly controversially, I also believe that responsibility for this, and ultimately for data quality, lies with business leaders and not IT. It is the business leaders who must define what constitutes good quality in the data they use to make business decisions.
"Whilst IT provide the resources to assess and improve data quality, it is the business leaders who need to define and own the information they use to make decisions."
The dimensions I typically consider align with those described in the DAMA UK white paper on “data quality dimensions”. IT is critical in implementing improvements across all of these dimensions. However, the end users of the data insights need to have confidence that the data their decisions are based on is of a quality that is fit for purpose, and they can only achieve this by defining the acceptance criteria for each dimension or measure.
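The idea of a business-owned acceptance criterion per dimension can be sketched in code. In this minimal illustration, the dimension names (completeness, uniqueness) follow the DAMA UK vocabulary, but the threshold values, the field names and the `account_id` key are purely illustrative assumptions, not anything from a real programme:

```python
# Sketch: business-defined acceptance criteria per data quality dimension.
# Thresholds and field names are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class DimensionCheck:
    dimension: str
    threshold: float                     # acceptance criterion, owned by the business
    measure: Callable[[list], float]     # returns a score between 0 and 1

def completeness(records: list) -> float:
    """Share of records with no missing (None) fields."""
    return sum(all(v is not None for v in r.values()) for r in records) / len(records)

def uniqueness(records: list) -> float:
    """Share of records with a distinct 'account_id' (hypothetical key field)."""
    ids = [r["account_id"] for r in records]
    return len(set(ids)) / len(ids)

checks = [
    DimensionCheck("completeness", 0.98, completeness),
    DimensionCheck("uniqueness", 1.00, uniqueness),
]

def assess(records: list) -> dict:
    """Score each dimension and flag whether it meets its acceptance criterion."""
    return {
        c.dimension: (c.measure(records), c.measure(records) >= c.threshold)
        for c in checks
    }

records = [
    {"account_id": 1, "balance": 100.0},
    {"account_id": 2, "balance": None},   # incomplete record
    {"account_id": 2, "balance": 50.0},   # duplicate key
]
report = assess(records)
```

The point of the sketch is the split of responsibilities: IT implements the `measure` functions, while the business owns the `threshold` values that decide whether the data is fit for purpose.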
Organisations that participate in any of RFi Analytics’ benchmarking programmes have a distinct advantage in the data quality stakes over those that do not. These organisations benefit not only from RFi Analytics using a suite of data quality tools, processes, systems and data stewards, but also from their data being independently validated and benchmarked against their industry peers using a common set of data definitions and processes. When a new institution joins one of our programmes, we work incredibly hard with their analysts to obtain a single month of data using our common data return specification. This process always involves a number of clarification questions and several iterations, leading to an initial draft report.
Presenting the initial draft benchmarking reports to new members of our benchmarking programme is one of my favourite activities. This is when, often for the first time, we shine a light on how an organisation’s portfolio compares to its peers across a raft of metrics and segments. These initial reports invariably highlight areas where the organisation is an outlier, or where there are data issues we need to address before seeking historical data and formally including their data in the programme. They almost always bust a number of internal myths around perceived relative performance. The quest for data quality doesn’t stop there; rather, it is an iterative process of continuous improvement. Over time these programmes improve data quality not only for individual organisations, but for the industry as a whole.
The most useful counsel I can offer any organisation on the topic of data quality is to invest in it formally by having the end users of the data, and of the insights it provides, take ownership of its quality. If a credit risk or marketing department identifies an issue with the data, rather than simply pulling the data into a discrete system and fixing it for that business unit alone, it is far more helpful and efficient to have the issue addressed at its source. By doing this, the organisation’s data quality will improve across all functions.
No matter how big the data set, how good the machine learning environment, how innovative the predictive analytics process, or how talented the data scientists and analysts, insights based on poor quality data will not be trusted or, potentially worse, will result in incorrect decisions. To trust the data behind any insight or decision, the owner of that decision needs to define, and be responsible for, the level of data quality that sits behind it. Organisations that invest in data quality have a competitive advantage across all data-based processes and decisions.