The 3 key mistakes CMOs make when commissioning research – and how to avoid them


Picture the scene: a CMO facing a rapidly changing market needs to get to the heart of why too many target prospects are buying from competitors – and to work out how to finesse their own proposition so that more prospects entering the top of the funnel actually make it to the point of purchase and beyond.


Rather than second-guess the market, the CMO commissions a large-scale quantitative study to understand how core personas in a variety of target segments make decisions at key stages of the buying process.


It might be the only time in the annual planning cycle that the CMO gets the chance to take the pulse of market sentiment – and so they are keen to maximise the value the company gets from the research.


Having agreed the research objectives, audience and methodology, the next step is to agree the questionnaire – and it’s here, for some unwitting CMOs, that the wheels can start to fall off.





Mistake #1: Writing the questionnaire yourself

It’s surprising how many times clients approach researchers with a list of fully fleshed-out questions they want to put to their market.


Quite often they do it out of a sincere wish to be helpful, or to save time – but it can be counterproductive.


Think of it like this: writing your own questionnaire and then asking your researcher to critique it is like writing your own ad copy and then asking your comms agency what they think of what you’ve written.


A wee bit like buying a dog and then asking it what it thinks of your barking.



How to avoid mistake #1


Provide the researcher with the business context in which you are commissioning the research.


Don’t pre-judge what questions you think you should be asking. Tell the researcher what it is you want to find out, from whom, why, and what you will do with the information when the results come in.


Be clear about what you need to find out, but leave the writing of the actual questions to the researcher.


A good researcher well versed in your business context should be able to craft a succinct questionnaire that intelligently draws out the themes you wish to explore and gives you the answers you need.


They will structure it logically, phrase questions in a way that maintains the respondent’s engagement, and avoid questions that lead to respondent bias.


In return you should get genuine insights that help to drive your business forward, rather than a set of numbers that simply confirm existing prejudices within your business.





Mistake #2: Asking too many questions

As we’ve said, quite often the research being commissioned will be the only opportunity the CMO gets to gain a window on their customers’ world – especially in B2B, where research budgets can be much more restrictive than in the B2C world.


Which means that there can be a huge temptation to ask as many questions as possible in order to wring every last drop of value from the project.


Paradoxically, this can be quite counterproductive (though for some reason not all agencies seem keen for their clients to understand why).


The reason is actually to do with the mechanics of quantitative research.


Sad to say, most respondents taking part in a quantitative survey are far less interested in the outcome than the marketers who commission the research in the first place.


Which means – even if the respondents are incentivised – that their commitment to concentrating on every question and giving it the full consideration that the commissioning marketer expects can wane surprisingly quickly.


You only have to listen to a respondent being taken through a lengthy, directionless questionnaire to sense their growing frustration at how long the process is taking. The boredom in their voice rises as their concentration diminishes, and the thought they give to each answer declines with every passing minute.


It can get even worse with questionnaires that are completed online. Within a remarkably short time, respondents can get bored and start to click on random answers just to get to the end.


Researchers can generally spot the point at which this starts happening, as the response data gets flatter and flatter as the questionnaire progresses – but what they can’t do is separate the respondents who were genuinely paying attention from the ones who got bored.


So the client is left with a number of questions where the results are inconsequential or – worse still – unreliable, contradictory or misleading.



How to avoid mistake #2


In B2B research, a good researcher given a solid brief should be able to construct a meaningful quantitative survey that gets to the nub of the client’s core issues within around 15 questions.


It should really take a respondent no more than 15 minutes to complete.


(To be clear, a question that presents six statements and asks the respondent to indicate to what extent they agree or disagree with each one would count as a single question.)


Choosing your questions wisely and keeping them crisp has the added advantage of speeding up data collection. The longer the questionnaire, the greater the drop-off in completion rates – and so the more respondents the researcher has to approach, and the longer it takes to gather a quorate number of responses.


(It’s for this reason that shorter questionnaires tend to cost less to put into field, by the way).


This is particularly important in B2B research if the personas you wish to survey are relatively niche. High respondent drop-off is less of an issue if you are researching UK households who buy mayonnaise, for example. If some respondents drop out, there’s probably another 20 million households you could go out to. But if you need to survey Procurement Directors in the UK telco infrastructure supply chain, for example, there may only be a respondent universe of a few hundred respondents qualified to answer your questions – and so you are going to need them to remain engaged throughout the process.
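The mechanics above can be sketched with some back-of-envelope arithmetic. This is a purely illustrative model – the 2% per-question drop-off rate is a hypothetical figure chosen for the sketch, not an industry benchmark – but it shows why question count compounds so quickly:

```python
import math

def invites_needed(target_completes: int, n_questions: int,
                   per_question_dropoff: float = 0.02) -> int:
    """Estimate invitations required to hit a target number of completed
    surveys, assuming a fixed fraction of respondents abandons at each
    question (hypothetical model, not an industry benchmark)."""
    completion_rate = (1 - per_question_dropoff) ** n_questions
    return math.ceil(target_completes / completion_rate)

# Aiming for 200 completes: a 15-question survey vs a 40-question one.
print(invites_needed(200, 15))
print(invites_needed(200, 40))
```

With these assumed figures, the 40-question survey needs roughly two-thirds more invitations than the 15-question one – and against a niche B2B universe of only a few hundred qualified respondents, that difference can make the longer survey impossible to field.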



Mistake #3: Relying on the “Other” box

When devising multi-answer questions it’s always tempting to add a catch-all “Other (please write in)” response option.


The thinking is straightforward enough: “We may have missed something important – the respondents will tell us if this is the case”.


However, most clients are disappointed by how few respondents actually bother to complete the “Other” box.


As we’ve already explained, respondents’ commitment to completing the questionnaire can fall frustratingly short of the client’s commitment to uncovering the answers they need. So questions that ask respondents to think and type for themselves tend to be given less thought than questions where the answer options have already been worked out for the respondent to consider.


Not only that, a question with an open answer field creates a challenge in accurately interpreting the answers, because it blends prompted and unprompted answer sets in the same question. Had any relevant “please write in” answer been included in the original prompted set, the chances are that more respondents would have chosen it.


How to avoid mistake #3


The best way to mitigate this is to conduct some qualitative interviews with your core audience before constructing the quantitative questionnaire. By asking for the views of a handful of subject matter experts beforehand, a skilled interviewer can tease out the most likely answer set for the quantitative survey.


The second-best way is to pilot the questionnaire with a relatively small sample of respondents before rolling out the full survey. This is good practice in any case – it can show up unanticipated flaws in the questionnaire logic, or reveal where an unintentionally ambiguous question is throwing up some bizarre answers.


By reviewing where pilot respondents have completed any “please write in” answers, the client and researcher can decide whether these extra answers are sufficiently insightful to be included in the prompted answer set when the questionnaire is rolled out. (Note, though, that even this can distort the validity of the final data unless the pilot respondents’ answers are discarded and replaced with an equivalent cohort of additional respondents, so that every final respondent is prompted with the same answer set.)


So there it is. In summary, if you are thinking of commissioning market research:


  • The best research briefs inform the researcher what the client needs to find out, not what questions need to be asked.


  • The best questionnaires are crisp and engaging. Beyond a certain boredom threshold, the more questions you ask, the less reliable the answers to the later questions – and the more contradictory the data.


  • It’s always wise to do some Qual research prior to running the Quant – even if it’s only amongst 4 or 5 subject matter experts. It’ll tease out angles that neither the client nor the researcher has anticipated, and ultimately lead to better insights.




Simon Hayhurst

February 2023

