Concept testing, in its many guises, is one of the core methodologies in our industry. Companies invest significant amounts of money to find ‘winning ideas’: to determine which ideas to back because they represent an opportunity for growth, and to reduce the risk of launching weaker ones.
However, we find ourselves increasingly questioning whether clients are getting true value from those methodologies.
Imagine how ridiculous the following scenario would be…
… A company develops a new treatment for hay fever. They then test the treatment on a general population sample – some of whom are clearly interested in better hay fever treatments and some of whom are not. In testing how well the treatment is received, they don’t analyse its performance with the part of the sample who have hay fever. They also don’t bother to investigate how the new treatment compares to the solutions people are currently using. Instead they benchmark how well the treatment has scored vs. every other treatment they have ever tested, irrespective of the therapy area.
They conclude that the treatment is too focused on hay fever, and in order to improve its performance it needs to be “less hay-fevery”.
Sadly, we see lots of the mistakes above being made in concept testing.
By way of context, before I started working in marketing, I was a research scientist, part of a team developing drug therapies for bleeding disorders. There is much I was pleased to leave behind, but it’s fair to say I do miss the rigour and accuracy – and dare I say it, common sense – of our evaluation approach.
We always tested potential treatments with those they were designed to impact. We always benchmarked performance against the other solutions people were currently using.
We never tested an idea with anyone who ultimately wouldn’t be the target and we didn’t measure success by whether new solutions did a better job versus what we tested previously!
If a treatment didn’t deliver a superior benefit in some way, it was unlikely to progress. Superiority could come in different forms: faster acting, longer lasting, greater efficacy, a superior side-effect profile… If it couldn’t prove its worth and relevance to its target, it didn’t progress.
Leaving aside, for the moment, any issues you might have about concepts being the best way to test the potential of ideas, isn’t it time for how we test concepts to evolve?
Here are some areas where we believe there’s ‘room for improvement’.
#1. Current concept testing doesn’t reflect reality well enough.
In most situations, consumers will have a solution to whatever problem you are looking to solve, or opportunity you want to create. Changing behaviour and getting new ideas used usually means you need to displace something. That means knowing what the true competition is and understanding what they do well and what they do less well.
You want to know…
- How well has your idea performed vs. what people are doing or using now?
- How well has it performed against the key drivers and metrics of choice?
If you have developed ideas against a particular target, then analysis needs to better reflect their point of view. This can mean attitudes as much as demographics.
This level of analysis simply doesn’t happen often enough. Knowing the answers to these questions is a better measure of idea performance in the real world and likelihood to drive behaviour change.
#2. Current concept testing doesn’t reflect the context of strategy enough.
More effort is needed to understand likelihood to displace key competition or performance against the key drivers of a need, occasion or consumer target.
What is the point of businesses investing significant resource and effort in developing propositions against particular parts of a market if we don’t feed back on how well ideas can deliver on that?
By all means test ideas against a uniform sample if you want, but give yourself the opportunity to look at key performance metrics by the areas you are really looking to win against.
Whilst benchmarking against historical data provides comfort, it is simply not reflective of consumer decision-making.
#3. Concept testing currently drives businesses to look for the wrong things.
‘Green boxes’ are the currency of idea progression, so it’s not surprising that the desire for ‘green boxes’ is king.
‘Green boxes’ do not indicate whether ideas are on strategy and they can reward the wrong things. ‘Green boxes’ are driven by familiarity and average appeal across a large group in the ‘middle of the market’.
Stretchy ideas are punished. Ideas that are loved by a specific cohort are punished.
But in reality, what is most likely to drive behaviour change & incremental growth?
Isn’t a focused idea, with a tight target who really like it and see it as superior to existing offers, more likely to drive behaviour change than an idea that lots of people like 'a bit'?
Given the surge in new technologies, it’s getting ever easier to create a more realistic context for evaluating ideas; one that benchmarks against the true competition and measures how well a new idea performs.
In truth, some of this necessary evolution could simply be achieved by better application of things we already have the ability to do.
So there really isn’t any excuse: let’s start putting our trust in these new, better methods and let go of the comfort blanket of the old normative databases!