Author: Dale Henry, Head of Client Services
With over 75% of businesses worldwide relying on market research to guide their decision making, we wanted to understand whether they could be making poor, detrimental decisions on the back of unchecked or poorly checked data, and what the cost to organisations could be.
A recent report estimated that close to £3bn of business spend was directly misplaced due to poor data representation from online survey respondents. Beyond impacting the bottom line, bad data has wider business implications, such as reputational damage and productivity inefficiencies, so its true cost is difficult to measure.
At STRAT7 Audiences, we pledged our commitment to the integrity of the data we deliver to our clients with the launch of our Data Integrity unit. The team has grown rapidly since its inception in July 2023 and is underpinned by our unique framework, leveraging human intervention and the latest technology to validate every single respondent who enters our survey ecosystem.
To show the importance of this, we’ve put our framework to the test.
What we did
We ran experimental quantitative research in the UK, US & China to look at the impact of applying augmented data integrity checks on the data outputs from three ‘typical’ research modules:
- Concept testing
- Campaign evaluation
- Audience sizing and profiling
We created two fake concepts for a new toothpaste brand – one focused on scientific claims and one with more herbal messaging. We also created a fake out-of-home (OOH) creative for this brand.
With a market worth an estimated £563m in 2022, and the biggest brand used by 12.5 million Brits a year, the toothpaste category represents a product that most of the adult population uses daily. It is also heavily fragmented, with price points ranging from £1 upwards, so it felt representative of a typical CPG category.
What did we want to understand?
The relative size and scale of any differences in the data between the two sample cells, based on the weight of the data integrity checks applied to each.
What did we find out from the ‘up to industry standard’ vs the ‘clean’ sample?
Concept testing
Outright rejection of concept 1 at the purchase question declined in the UK, and neutrality increased.
In the US, top 2 box likelihood to purchase decreased significantly, with neutrality seeing a significant increase.
Average claimed frequency of purchase per month was also lower (1.25x vs 1.67x for concept 1, and 1.23x vs 1.57x for concept 2). This equates to a difference of around 5 tubes per year for concept 1 and 4 tubes per year for concept 2.
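As a quick back-of-the-envelope check on those tube figures (a minimal sketch using only the monthly frequencies quoted above; the pairing of values follows the order of the comparison in the text):

```python
# Annual purchase gap implied by the claimed monthly frequencies quoted above.
# Each pair follows the order of the comparison given in the text.
monthly_frequency = {
    "concept 1": (1.25, 1.67),
    "concept 2": (1.23, 1.57),
}

for concept, (lower, higher) in monthly_frequency.items():
    tubes_per_year_gap = (higher - lower) * 12  # 12 months in a year
    print(f"{concept}: ~{tubes_per_year_gap:.0f} tubes per year difference")

# concept 1: ~5 tubes per year difference
# concept 2: ~4 tubes per year difference
```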
In the pricing question, we saw a wholesale increase in the percentage who selected that they would pay within the higher price ranges for each product (£2.50 and £3). We know that the price point at which a product launches signals its position in the market to consumers and supports the product at launch, so it’s imperative that it feels in tune with what consumers expect to pay.
So what?
Using the industry standard data, propensity to purchase either concept was lower overall, which might have underestimated the potential of the concepts tested. The enhanced data integrity checks unearthed a greater level of neutrality, which may have led to further optimisation of the concepts or an additional layer of research.
If you were to apply rudimentary volumetric analysis to this data to understand the potential income from each concept, the difference between the cells would be great enough to change the outcome of the concept test (winner highlighted in green):
[Table: volumetric analysis outcomes under standard data integrity checks vs enhanced data integrity checks]
The industry standard data overinflated the potential value of each concept by a significant margin, through inaccuracies in purchase intent, the price people are willing to pay and frequency of purchase. This shows how inaccuracies across multiple data points can compound to be greater than the sum of their parts.
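To make the compounding effect concrete, here is a minimal sketch of the kind of rudimentary volumetric analysis referred to above. Both the formula structure (audience x purchase intent x annual frequency x price) and the input figures are illustrative assumptions for demonstration only, not figures from the study.

```python
# Illustrative volumetric sketch: projected annual revenue for a concept.
# All inputs are hypothetical placeholders, not study data.

def projected_revenue(audience, purchase_intent, tubes_per_year, price):
    """Audience size x share likely to buy x tubes bought per year x price per tube."""
    return audience * purchase_intent * tubes_per_year * price

audience = 10_000_000  # hypothetical addressable buyers

# 'Clean' cell estimate vs a cell where intent, frequency and price are each modestly overstated.
clean = projected_revenue(audience, purchase_intent=0.20, tubes_per_year=15.0, price=2.50)
inflated = projected_revenue(audience, purchase_intent=0.24, tubes_per_year=20.0, price=3.00)

print(f"Clean estimate:    £{clean:,.0f}")
print(f"Inflated estimate: £{inflated:,.0f}")
print(f"Overstatement:     {inflated / clean - 1:.0%}")

# Three modest overstatements (+20% intent, +33% frequency, +20% price)
# compound multiplicatively to roughly +92% on projected revenue.
```

The point is not the specific numbers but that modest inaccuracies on each input multiply together rather than add.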
With expenditure on R&D in the UK hitting £70.7bn in 2022, and estimates that 95% of new product launches fail, extrapolating this type of erroneous decision making up to even a sector level could represent an impact of billions of pounds.
Campaign evaluation
Enhanced data integrity checks reduced the level of false creative recall from 9% to 6%. That may not seem like a significant amount, but if you have a campaign targeting a population of (for example) 18-24 year olds, you would be looking at a total audience of 5,550,000 UK adults. A three-percentage-point difference in recall among that audience represents 166,500 people.
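As a quick sanity check on that UK figure (a minimal sketch using only the numbers quoted in this section):

```python
# False creative recall applied to the example UK 18-24 audience quoted above.
audience = 5_550_000

false_recall_standard = 0.09  # up-to-industry-standard checks
false_recall_enhanced = 0.06  # enhanced checks

overclaim_standard = audience * false_recall_standard  # 499,500 people
overclaim_enhanced = audience * false_recall_enhanced  # 333,000 people

print(f"Difference in claimed recall: {overclaim_standard - overclaim_enhanced:,.0f} people")
# -> 166,500 people, the gap referenced above
```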
In the US, this reduction is doubled – from 9% to 3% with enhanced data integrity checks.
If looking to optimise the creative, we see that enhanced checks present a different heatmap of appeal – with ‘confidence’ and the 24hr fresh breath claim seeing more appeal. This would change the recommendations given to the client, and relying on the less rigorously checked data could lead to emphasis on the wrong aspects of the creative.
So what?
Enhanced data integrity checks help to reduce overclaim in creative recall, leading to a more accurate picture of a creative or campaign’s cut through.
They also support a greater understanding of how individual creative elements perform, which can help the team optimise individual creative treatments, up-weight and down-weight formats, and identify the messages that resonate most.
Given that the more messages you try to communicate, the lower the likelihood of landing any single one, it is important to understand which message to put in front of consumers to give it the best chance of cutting through.
Brands spent £1.3bn on OOH advertising in 2024 (source: Advertising Association), so there is a lot to play for in ensuring the optimal campaign.
This is just the tip of the iceberg. When we look deeper into the impact enhanced data integrity checks have on things like the audience type and market being sampled, it is concerning to think that brands are working with this kind of data, and to consider the impact it could be having.
Debrah Harding, Managing Director at the Market Research Society (MRS), said of data quality issues in the industry during a recent podcast: “If left unchecked, this could become an existential issue” and we have to agree… but we think we’re already there.
So what should you do?
In a nutshell: don’t scrimp on price. If it looks and sounds cheap, then it probably is, and you’ll likely pay for it in the long run. Be more selective and creative with your research, and work with suppliers that offer transparency and veracity when it comes to delivering data that will have legitimate impact and decision-making influence on your business.