A TV advertising idea is rarely so brilliant that no-one feels the need to run some research before they commit the production money. But it is not uncommon that, by the time you get to a script you’re comfortable with, there’s almost no time for research before you have to start production. Or, if you do have time, you may not have the budget. Either way, if you only do one stage of research before you commit, what should it be: quant or qual?
In my view, it absolutely has to be qualitative development research rather than quantitative pre-testing.
As Mandy Rice-Davies famously remarked, “Well, he would say that, wouldn’t he?”. ‘He’ has, after all, spent most of his 30-plus years in marketing plying his trade as a qualitative researcher.
But it’s true, nonetheless. Most people in this predicament will run a quantitative pre-test, classically an Ipsos Next or a Millward Brown Link Test. However, it surely makes no sense to run a one-off test from which you will learn nothing about how to develop the ad to maximise its effectiveness. At best, the ad will receive a good ‘score’, but you still won’t know what to do to make it even better. If the test gives the ad an ‘OK’ score, you’re none the wiser about what needs to change. And, if it scores poorly, you don’t know whether it’s the strategy, the idea or the execution that’s at fault.
The whole purpose of qual is to help us understand how the idea works and give us a clear sense of how it could be improved. Why would one not want to do this?
If you don’t buy this argument, here are four more reasons why you have to commission qualitative research rather than a quantitative pre-test.
1 You need to research the idea, not the stimulus
Typically, quantitative pre-tests are run using animatics. Many animatics are just terrible. Some ideas are almost impossible to represent effectively in animatic form. You end up testing the animatic, not the idea.
The notion that testing an animatic gets the ‘consumer’ closer to what the final commercial will look like is, in most cases, completely spurious. More often, it does just the opposite, giving a misleading sense of ‘looking like’ the finished ad, when a respondent is more likely to create a realistic sense of the finished ad in their own mind in response to a vividly written narrative.
I have seen poor results from Link that made no sense to me until I saw the animatic. The original ad in what became the long-running and hugely successful ‘Adam’ campaign for BT is a case in point. The idea showed great potential when I researched it in script form, being found original, engaging and relevant. But it did not Link Test well, with engagement dropping off quickly over the opening 10 seconds. Mystified by this result, I asked the client to send me the animatic. As soon as I watched it, I could see the problem. The opening section of the commercial consisted of the internal dialogue in Adam’s mind as he sat contemplating his options. In the script I had researched, this was vividly described, so that my respondents could imagine the subtly changing expressions and body language that would help express Adam’s thoughts and make the opening sequence very watchable. But, in the animatic, the camera slowly zoomed in on a static image of a bloke sat on a chair – no movement, no emotion, no expression. In short, the opening scene of the animatic had none of the things that were clearly essential to engagement, but which would unquestionably be there in the finished film.
Once you saw the animatic, it was blindingly obvious why the score was poor: what was being researched was a terrible animatic for a good idea, and people were judging the stimulus, not the idea.
I pointed this out to the client and asked them if they thought the finished film was likely to be like the first 10 seconds of the animatic or, instead, like the first 10 seconds as they had imagined it from the description in the script. The film got made in spite of its poor Link Test scores, and the rest is history.
2 You need to allow the advertising to be processed in its own way, not within a straitjacket
Regardless of the big players’ protestations to the contrary, pre-testing evaluates all ads against essentially the same criteria, as if all ads work in a similar way. They don’t. The questions are also highly prescriptive, forcing every ad into a framework that may well be entirely inappropriate for how the specific piece of copy will work. Much of the time, quant pre-testing measures things because they can be measured, rather than because they are relevant. To quote my favourite qualitative research guru, Albert Einstein:
‘Not everything you can count counts; and not everything that counts can be counted’.
Qualitative research, owing to the open-ended and responsive way in which ideas are explored, allows for a more natural response that is driven by the way the specific idea works. In a sense, the ad creates its own agenda, just as communications do in the ‘real world’, rather than having one imposed upon it by a set question protocol.
3 You need to assess the ad against its own objectives, not ‘norms’
Many clients find the ability to score their ideas against ‘norms’ reassuring. ‘Norms’ are dangerous nonsense. Comparing your ad’s score to those of other ads in the market, or to other supposedly ‘fair’ comparison points, is completely spurious. Every ad must be assessed on its own terms. Only if there were another ad for this brand, trying to achieve the same thing at the same point in time against the same target, would a comparison be meaningful.
For example, a pre-testing agency will have a ‘norm’ for ‘carbonated drinks’ advertising. Why on earth would it be meaningful to compare the scores for a new Coke ad with ‘norms’ derived from advertising for other CSDs such as Pepsi and Rubicon? They’re ads for different brands with different objectives that are intended to work in different ways. What would such comparisons mean? Not a lot.
The only meaningful way to appraise an advertising idea is against the objectives set for that specific idea. The ad must always be assessed against the brief, and the brief is unique to the specific ad. Quant pre-testing does not do this; qual research can.
4 You need to use people who understand their own ‘data’
In my experience, most ‘researchers’ presenting findings from quant pre-testing don’t understand their own data. For example, I recall the ‘researcher’ in a Link Test debrief giving us a ‘Comprehension’ score that was derived from a question asking respondents how easy they thought the ad was to understand. Perceived ease of understanding is not the same thing as comprehension at all: 100% of your respondents could think the ad is easy to understand when in fact they have completely misunderstood it, and vice versa. When I pointed this out to the guy presenting the slides, he just couldn’t see the difference. His ‘Comprehension’ score was clearly zero.
I would prefer to use a researcher who has worked with advertising for over 30 years, researching upwards of 1500 scripts, and is trusted by agencies and clients alike for his ability to discern idea from execution and give great clarity about how best to develop the finished advertising. Can you think of anyone… ?
I realise my entreaties to use qual rather than defaulting to quant may fall on deaf ears. I am all too aware of the pressure in organisations for ‘numbers’ to back decisions. I know that, in many companies, an ad won’t get made unless it passes the threshold test score in Next or Link. I know it’s a brave marketing person who sticks their neck out for understanding rather than percentages.
But isn’t that what a good marketing person is supposed to do?
I would simply urge you to screw your courage to the sticking place and let the finished ad’s performance prove that, when you had your back to the wall, your choice of qualitative development research was the right one.
A version of this post was previously published in the Movement blog. For many more posts about marketing, brand communications and strategic and creative development research, visit movementmuse