It is 10 years since Nassim Nicholas Taleb published his influential book, “The Black Swan”, which popularised the idea of our inability to predict those random events that have so much impact on our lives.

After the last 12 months or so, which seem to have produced more black swans than white ones, I’ve been dipping into the book again to see how well its ideas stand up.

One chapter that struck me, with particular reference to market research, is entitled “The Narrative Fallacy”. In it, Taleb describes “our vulnerability to overinterpretation and our predilection for compact stories over raw truths”.

Narrative, or storytelling, has become an established part of the researcher’s toolbox. There is even an MRS course on “Storytelling for Researchers”, designed to teach us how to “deliver compelling and inspiring narratives”.

The urge to summarise and simplify data into a clear and logical structure is entirely understandable, especially in an age of information overload. There are limits on how much the human mind can take in, and compression into a simple story is necessary if clients are to absorb the messages and insights arising from our research.

But researchers must also be aware of the dangers of storytelling. As Taleb puts it, “the Black Swan is what we leave out of simplification”.

The risk becomes even greater when the research data is far from clear-cut. Without a clear appreciation of probability and uncertainty, it is easy to decide on the conclusions of a piece of research first, then seek out every piece of evidence that supports that post hoc rationalisation, whilst ignoring all the evidence that points in the opposite direction – all in the interests of creating a good ‘story’.

In the 2016 US election, for example, those who boldly forecast a Clinton victory often cited her gains among Hispanic voters, whilst overlooking indications that African-American turnout and support were declining compared with the levels Barack Obama had achieved.

The renowned forecaster Nate Silver came in for considerable criticism during that election for giving Trump a much higher probability of victory (around 30%) than other forecasters did. His model differed from the others because it made greater allowance for the fact, based on historical evidence, that polls sometimes suffer from systematic errors, which creates more uncertainty about the poll findings. In the event, his caution proved well founded.
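To make the point concrete, here is a minimal Monte Carlo sketch in Python of why this matters. It is purely illustrative – the figures are invented for the example, and this is not Silver’s actual model or the real 2016 polling data – but it shows how allowing for a shared, systematic polling error raises the trailing candidate’s chances far more than independent sampling noise alone would suggest.

```python
import random

def upset_probability(n_sims=100_000, poll_margin=0.03,
                      sampling_sd=0.02, systematic_sd=0.0):
    """Estimate how often the poll leader loses.

    poll_margin   -- leader's observed margin (e.g. +3 points)
    sampling_sd   -- ordinary, independent polling noise
    systematic_sd -- industry-wide error shared by all polls
    All numbers are illustrative, not real 2016 figures.
    """
    upsets = 0
    for _ in range(n_sims):
        # A systematic error shifts every poll the same way...
        shared_error = random.gauss(0, systematic_sd)
        # ...on top of the usual independent sampling noise.
        true_margin = poll_margin + shared_error + random.gauss(0, sampling_sd)
        if true_margin < 0:
            upsets += 1
    return upsets / n_sims

# Treating poll error as purely independent makes an upset look remote;
# allowing for a plausible systematic error several points wide
# multiplies the trailing candidate's chances.
print(f"Sampling noise only:     {upset_probability():.1%} upset chance")
print(f"Plus systematic error:   {upset_probability(systematic_sd=0.03):.1%} upset chance")
```

With these made-up inputs, the upset probability roughly triples once the shared error term is included – the same direction of effect, if not the exact magnitude, as the gap between Silver’s forecast and the more confident ones.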

Understating the level of uncertainty in research findings – moving, say, from ‘Hillary will probably win but has a chance of losing’ to ‘Hillary WILL win’ – has two obvious dangers: firstly, it creates complacency and stops people taking the steps necessary to prevent a Trump victory; secondly, it leaves people completely unprepared, and at a loss to know what to do, when the unthinkable happens and Trump actually wins.

Thus, whilst not losing sight of the need for powerful storytelling, the intelligent researcher should also alert clients to that uncertainty and to the possibility of different outcomes. Perhaps, as in John Fowles’ postmodernist novel “The French Lieutenant’s Woman”, the narrative in our client presentations should have two alternative endings!

Mike Joseph