An earlier version of this article was published on ESOMAR Research World and the Liveminds blog.
Each day, businesses use insights drawn from the UK’s £4bn market research industry to make decisions that can make or break them.
But how useful is the data?
A particular concern is ‘professional participants’: people who frequently take part in online research in order to make money. One study concluded that fewer than 1% of members of the US’s ten largest panels were responsible for 34% of survey completes, while another found that the average British panellist belonged to four panels.
Psychology research wrestles with the fact that many of its participants are university students, resulting in “WEIRD” samples – that is, Western, educated, and from industrialised, rich and democratic countries. As a result, psychological research may often fail to be generalisable or replicable.
Why should the situation be different with the online panels used in market research – are these respondents truly representative of the population, or do they suffer from their own kind of ‘weirdness’?
Studies of online samples have, for example, found that these participants skew female, younger, and lower-income, and that they are more curious, but less entrepreneurial, in their personalities. Of course, businesses can work to overcome these issues through careful profiling and weighting. However, three questions still remain around whether these ‘professional participants’ are answering in a generalisable way.
First is the issue of habitual responding. We are all cognitive misers with limited attention spans, meaning there’s only so much our brains can do at any given time. Novel stimuli produce interest, and an increase in brain activity, whereas we habituate to familiar stimuli and respond with low-effort, heuristic thinking.
This is important because ‘professional participants’ have high familiarity with online surveys and are therefore likely to be less engaged and perform more poorly. In support of this, researchers have found that curiosity is a significant predictor of response effort in surveys: if a survey is new (and thus interesting) to a respondent, it sparks cognitive engagement.
The second concern is implicit memory. Broadly speaking, there are two types of memory – explicit, which we can consciously recall, and implicit, which affects our perceptions and behaviours but which we may not be consciously aware of. For example, in card-sorting tasks, participants are often able to predict which card will appear next in the series, even if they cannot consciously verbalise the pattern that allowed them to predict it – they have learned the pattern implicitly.
When ‘professional participants’ take multiple surveys, they are likely to be implicitly learning patterns in survey design and researcher expectations, as well as implicitly remembering their answers to questions; in future surveys, these implicit learnings may manifest in respondents answering in the way they have learned to respond rather than giving an accurate response for that specific question.
Research has indeed found, for example, that having participants respond twice to the same survey significantly reduces effect sizes the second time round.
The final reason to scrutinise online panels is motivation. There are two broad reasons why people take part in surveys: intrinsic motivations such as curiosity or enjoyment; and extrinsic motivations like financial incentives.
Psychology research is consistent in finding that extrinsic rewards are limited in motivating behaviour – for example, children spend more time (i.e., effort) on a drawing task when given praise than when given money. In fact, extrinsic rewards may even ‘cancel out’ intrinsic motivation, with another study finding that, compared to control groups, children who were told they would be rewarded financially for their work spent less time on the task.
This is largely because of our need to feel that we are in control of our actions: if we are being rewarded, the behaviour is perceived as externally coerced and not something to do for the pleasure of doing it, resulting in less effort and worse performance.
In other words, ‘professional participants’, taking the survey for financial gain rather than personal interest, are likely to put less effort into their answers. Indeed, research has found that survey-takers with extrinsic motivations show the lowest response rates, the least effort and the poorest performance.
In conclusion, better data is likely to come from fresh participants who do not regularly take part in market research.
Matching new participants for every project can be achieved through targeted advertising of individual surveys on social media and search engines, where audiences of over 2 billion people can be reached to take part in one-off surveys. The added benefit of recruiting participants through these advertising networks is that targeting is based on demonstrated behaviour, interests and demographics rather than self-reported data. People only see adverts for a survey if they genuinely match the specifications for that project, so they are more likely to take part out of personal interest than primarily for money.
More details about such behavioural recruitment methods can be found on the Liveminds website.