The last two Forums of the season – on B2B panels and on election polling – have got me thinking about sampling, one of the fundamental elements of market research.

At the panels session, the question that inevitably came up was: are panel members representative of the particular industry or occupation that they belong to? As for polling, the trials and tribulations of the pollsters are well known, and Lucy Davison’s excellent recent blog gave a good account of them. The difficulties of obtaining a representative sample from a shrinking pool of respondents, as she put it – particularly when those who answer surveys are (in Dr Nick Baker’s words) “a bit weird” – are huge.

After the panels session, I ended up chatting in the pub with one of the speakers, Pete Cape of SSI. Now Pete is what I would call an “old school” researcher (though not as old as me), trying to maintain methodological discipline in an increasingly fast and frantic marketplace. We talked about how many of the random sampling methods that we were taught are now just confined to textbooks, gathering dust. For example, hands up if you know what a Kish grid is.
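For anyone whose memory needs jogging: a Kish grid removes interviewer discretion by pre-assigning each sampled address a selection table that dictates which listed adult to interview. The sketch below is a simplified illustration in Python – the table versions and proportions are invented for the example, not the ones Kish published.

```python
import random

# Simplified Kish selection tables: for each household size (1-5 adults),
# each table version says which listed adult to interview. These versions
# are illustrative only; real Kish grids use a fixed set of tables
# assigned to addresses in controlled proportions.
KISH_TABLES = [
    {1: 1, 2: 1, 3: 1, 4: 1, 5: 1},
    {1: 1, 2: 2, 3: 2, 4: 2, 5: 3},
    {1: 1, 2: 1, 3: 3, 4: 3, 5: 5},
    {1: 1, 2: 2, 3: 1, 4: 4, 5: 2},
]

def kish_select(adults, table):
    """adults: names in the prescribed listing order (conventionally
    males oldest first, then females oldest first). Returns the adult
    the pre-assigned table says to interview."""
    n = min(len(adults), max(table))  # cap at the largest size the table covers
    return adults[table[n] - 1]

# Each address is assigned its table version in advance, so the
# interviewer has no choice to make on the doorstep.
table = random.choice(KISH_TABLES)
household = ["John", "Mary", "Sue"]
print(kish_select(household, table))
```

The point of the grid is that the randomisation happens in the office, before fieldwork, so the within-household selection probability is known rather than left to whoever answers the door.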

I was recently contacted for a telephone survey and was pleasantly surprised when the interviewer applied the ‘next birthday’ rule to select a respondent at random from the household, rather than just interviewing the first person who happened to pick up the phone. I fear, though, that this is very much the exception rather than the rule.
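The next-birthday rule is simple enough to state in a few lines of code. In practice the interviewer just asks who in the household has the next birthday; the sketch below (names and dates invented, leap-day birthdays ignored for simplicity) only formalises that question.

```python
from datetime import date

def next_birthday_selection(members, today):
    """Pick the household member whose birthday comes soonest after today.
    `members` maps name -> date of birth."""
    def days_until_birthday(dob):
        bday = dob.replace(year=today.year)
        if bday <= today:  # birthday already passed this year
            bday = dob.replace(year=today.year + 1)
        return (bday - today).days

    return min(members, key=lambda name: days_until_birthday(members[name]))

household = {
    "Alice": date(1970, 3, 14),
    "Bob": date(1985, 11, 2),
    "Carol": date(1999, 7, 21),
}
print(next_birthday_selection(household, today=date(2016, 7, 1)))  # Carol
```

Because birthdays are effectively random with respect to anything a survey measures, the rule gives each household member an approximately equal chance of selection without needing any listing or look-up table.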

The simple fact is that we are facing a “double whammy”: quota sampling replacing random sampling, and poor response rates. Quota sampling can never truly replicate random sampling, because it can only control how the sample is divided up between quota groups. If there is no control of selection probabilities within each quota group, there can be no assurance that the sample will be representative. If a quota fills up with people who are unrepresentative of that group (for example, because they are the ones that are most easily available or willing to participate), no further respondents can be allowed on the survey to compensate.
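This blind spot can be seen in a toy simulation (every number below is invented purely for illustration): a quota on age group is filled exactly, yet because the most readily contactable people within each group hold different views from the rest, the estimate is still biased.

```python
import random
random.seed(1)

# Synthetic population: within each age group, 30% of people are "easy
# to reach", and easy-to-reach people approve of something at 80%
# versus 40% for everyone else. All figures are illustrative.
def make_person(age_group):
    easy = random.random() < 0.3
    approves = random.random() < (0.8 if easy else 0.4)
    return {"age_group": age_group, "easy": easy, "approves": approves}

population = [make_person("18-39") for _ in range(5000)] + \
             [make_person("40+") for _ in range(5000)]

true_rate = sum(p["approves"] for p in population) / len(population)

# Quota sample: exactly 100 per age group, but the easy-to-reach
# respond first and fill the quota before anyone else is contacted.
sample = []
for group in ("18-39", "40+"):
    members = [p for p in population if p["age_group"] == group]
    members.sort(key=lambda p: not p["easy"])  # easy responders come first
    sample.extend(members[:100])

sample_rate = sum(p["approves"] for p in sample) / len(sample)
print(f"population approval ~{true_rate:.0%}, quota sample ~{sample_rate:.0%}")
```

The quota targets are met perfectly, and yet the sample estimate sits far above the population figure – controlling the split between groups does nothing about who gets in within each group.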

Furthermore, with response rates becoming ever lower, a huge base of sample contacts is required to achieve the required number of interviews, and those that participate are essentially self-selected. The possibilities of bias creeping in are immense.

Yet how many people within the MR industry are seriously concerned about this? The hot topics in training seem mostly to focus on ways of putting across a more powerful message – the buzzwords are insight, storytelling, data visualisation and so on. However, for quantitative research, if the insight and the stories are based on dodgy data, then this is just a case of putting lipstick on a pig.

Even on sampling and statistics courses, I suspect that the main emphasis is on statistical reliability, sample sizes, significance testing and weighting, rather than understanding the fundamental importance of giving every member of the population a known (and ideally equal) probability of selection.

A notable aspect of the recent Brexit campaign was the extent to which sampling methods, particularly the pros and cons of telephone and online approaches, were being discussed in the public domain. We have generally assumed that scepticism about opinion polls won’t affect the MR industry more widely, but will this always be the case? Will clients begin to ask more searching questions about how we select our samples, and how will we answer those questions?

Worse still is the possibility that there is already an underlying assumption among clients and researchers that the data we work with is pretty shaky because of lax sampling, but that it’s the best we can do within the time and budget available – a kind of conspiracy of silence in which we just carry on as if everything is OK. Perhaps that is one reason why the industry perennially struggles to raise its status and the value placed upon its work.

Either way, isn’t it time that we revisited our sampling methods and aspired to a more rigorous approach?

Mike Joseph