7 Deadly Sins Of Online Surveys
Online surveys have become the digital marketer’s chainsaw.
They're a terrific research tool, but one that can butcher marketing strategy when wielded with bad techniques and bad assumptions.
“Research is the foundation of marketing strategy, providing real insight into customer needs and motivators. However, with do-it-yourself online surveys, it can go seriously wrong,” says John Parikhal, partner with Breakthrough Management. Parikhal has been conducting marketing research for over 35 years for organizations such as Viacom, Scholastic Publishing, and Pepsi.
“With so much at stake, it’s really important to involve experts in research design and analysis. Unfortunately, there are too many online “research” studies that just don’t meet statistical criteria for reliability, and are often filled with bias-generating questions. Sometimes, I suggest that marketers might want to shout ‘Get this Survey Monkey off my back’.”
Why do so many earnest survey efforts go off the rails?
Here are the seven deadly sins of online surveys. Each one can poison marketing strategy.
1. The Wrong People Are Surveyed
The pros call this “Frame Error”: the people who are actually surveyed don’t match the market being studied.
“Typically a marketer will send a web survey invite to their house list of email addresses, but this house list may vary in substantial ways from the wider market they are trying to understand,” says Jeffrey Henning, PRC, with Researchscape International in Norwell, MA.
The problem: brand awareness, for instance, measures significantly stronger on a house list than it does among respondents from a third-party panel.
2. The Wrong Decisions Emerge From Results
Data analysis, scoring, and interpretation are fraught with peril. Preconceptions shade interpretation.
Data validation, response partitioning, and ordinal and nominal data analysis can be overlooked, given short shrift, or misapplied.
Insignificant statistical differences can be amplified and used to validate decisions.
And human nature can be overlooked.
“People are poor predictors of their own behavior,” says Tom Ewing, Senior Director with BrainJuicer, a brand strategy and research agency.
“Think about Facebook and parties – if you’re having a party and invite people via Facebook, you don’t actually believe that everyone who says ‘Yes’, let alone ‘Maybe’, will turn up.”
Much the same holds true for survey questions dealing with product trial or purchase intention. Without comparative data, what respondents tell us is by and large worthless.
3. The Wrong Design Is Used For A Questionnaire
Well-intentioned amateurs who believe they’re gathering precise data when they use a 0-to-10 scale with only the endpoints labeled are actually setting themselves up to collect less reliable data.
A 5-point, fully labeled scale, with choices such as "Not At All Likely", "Slightly Likely", "Somewhat Likely", "Very Likely", and "Completely Likely", will typically provide more accurate and more actionable feedback.
A commonplace and problematic approach is the use of agreement scales.
Agreement scales can easily corrupt results because of acquiescence bias, something professional researchers have been aware of since the 1950s.
Our natural tendency to agree rather than to disagree can easily skew survey data.
One online survey mistake marketers make is to offer respondents a list of answer choices but leave out common ones.
Another widespread design flaw appears when there’s no opportunity for the respondent who doesn’t find an appropriate choice to weigh in with feedback.
“Always include an ‘Other (please specify)’ choice in such lists,” says Henning. “If the marketer really isn’t comfortable that their list of choices is exhaustive, just use an open-ended question instead.”
4. The Wrong Questions Are Asked
“One common mistake is leaving out important contextual information,” says Tom Ewing. “If you’re asking about alcoholic drinks, it matters a lot whether you’re buying to take home or to drink in a bar. The brands you consider will be completely different. ‘What’s your favorite beer brand?’ just isn’t a good enough question.”
5. The Insight Of The Respondent Is Overestimated
Steve Jobs has been quoted ad infinitum on his observation that research comes saddled with significant limits.
“You just can’t ask customers what they want and then try to give that to them. By the time you get it built, they’ll want something new.”
One cornerstone of behavioral economics is the notion that we have little or no awareness of the social, cognitive, and emotional factors that shape our decisions to buy.
We can’t identify these factors let alone explain them.
Online surveys can amplify this noise. But there is something of a workaround, according to Jeffrey Henning...
“If you ask customers how satisfied they are overall and on a range of attributes, you can derive the relative importance of the different attributes using correlations.”
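As a rough sketch of that derived-importance idea, the Python below uses entirely hypothetical ratings and attribute names: it correlates each attribute’s satisfaction score with overall satisfaction and ranks the attributes by correlation strength.

```python
# Minimal sketch of derived importance via correlation (hypothetical data).
# Each respondent rates overall satisfaction plus several attributes on a 1-5 scale.
import pandas as pd

# Hypothetical survey responses; real data would come from your survey export.
responses = pd.DataFrame({
    "overall":     [5, 4, 3, 4, 2, 5, 3, 4, 1, 4],
    "price":       [4, 3, 3, 4, 2, 4, 2, 3, 1, 3],
    "support":     [5, 4, 2, 4, 3, 5, 3, 4, 2, 4],
    "ease_of_use": [3, 4, 3, 3, 2, 4, 4, 3, 1, 4],
})

attributes = [c for c in responses.columns if c != "overall"]

# Correlate each attribute rating with overall satisfaction;
# a stronger correlation suggests greater derived importance.
derived_importance = (
    responses[attributes]
    .corrwith(responses["overall"])
    .sort_values(ascending=False)
)

print(derived_importance)
```

With ordinal rating scales, a rank-based measure such as Spearman correlation (or a regression-based driver analysis) may be a better fit than the default Pearson correlation shown here.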
6. Trying To Research Something That’s Better Off Being Tested
The classic example is price.
What consumers say they will spend rarely aligns with what they will actually spend.
Testing trumps research. The results are in dollars and cents, and open to little or no debate.
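To make the contrast concrete, here is a minimal, hypothetical sketch of what “results in dollars and cents” can look like: two price points shown to randomly split traffic, compared on revenue per visitor. All figures are invented for illustration.

```python
# Hypothetical A/B price test: compare revenue per visitor at two price points.
# All figures are invented for illustration.

def revenue_per_visitor(visitors: int, purchases: int, price: float) -> float:
    """Average revenue generated per visitor shown this price."""
    return purchases * price / visitors

# Randomly split traffic saw one of two prices.
rpv_a = revenue_per_visitor(visitors=5000, purchases=400, price=19.00)  # $19 price point
rpv_b = revenue_per_visitor(visitors=5000, purchases=310, price=29.00)  # $29 price point

print(f"$19 price: ${rpv_a:.2f} revenue per visitor")
print(f"$29 price: ${rpv_b:.2f} revenue per visitor")
```

Unlike a survey answer about willingness to pay, the output of a test like this is observed buying behavior, which leaves far less room for debate.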
7. The Wrong Job Is Assigned
If we can’t use a survey to determine price...
If we agree with Steve Jobs and shy away from the process of asking prospective customers what they want so we can give it to them...
Why even bother with online surveys?
Concept validation and identification are two of the best jobs research can be given.
“The most important surveys may be concept tests, providing information about whether to launch or kill new products and services,” says Jeffrey Henning. “Get the concept test wrong and you’re going to launch a product that will fail in the market – or, perhaps worse, miss a huge opportunity by killing a product before launch.”
Professional researchers like Parikhal, Henning, and Ewing can steer do-it-yourself researchers in the right direction.
But they agree on the value of an outsider, and not only because of a deeper grasp of the mechanics and the methodologies.
As Jeffrey Henning explains...
“An independent researcher has the advantage of having much less strongly held beliefs about the market being studied than the marketer commissioning the study may have.”
This article was written by Paul Talbot from Forbes and was legally licensed through the NewsCred publisher network.