Does The Law of Small Numbers Explain Many Myths?

What Is The Law of Small Numbers?
The psychologist Daniel Kahneman points out that many social scientists, psychologists and market researchers often fall foul of a common human bias – the law of small numbers! Is this bias to blame for many modern-day myths and the inability to replicate many well-known psychological studies?

The law of small numbers is a general cognitive bias that makes people favour certainty over doubt. Most people, including many experts, don’t appreciate that research based upon small samples or small populations often generates extreme observations. As a result, people tend to believe that a relatively small number of observations will closely reflect the general population. This is reinforced by a common misconception that random numbers don’t generate patterns or form clusters. In reality they often do.
Kahneman makes this observation:
“We are far too willing to reject the belief that much of what we see in life is random.” Daniel Kahneman, Thinking, Fast and Slow.
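To see why small samples mislead, here is a minimal Python sketch (my own illustration, not Kahneman’s). It repeatedly surveys a population that is split exactly 50/50 on some question and records the most extreme result seen at each sample size:

```python
import random

# Simulate a population in which exactly 50% of people hold a view,
# then draw repeated samples of different sizes and record the most
# extreme sample proportions observed.
random.seed(42)

for n in (10, 100, 1000):
    proportions = []
    for _ in range(1000):  # 1,000 repeated surveys of size n
        hits = sum(random.random() < 0.5 for _ in range(n))
        proportions.append(hits / n)
    print(f"n={n:5d}  min={min(proportions):.2f}  max={max(proportions):.2f}")

# In a typical run, samples of 10 swing between roughly 0.10 and 0.90,
# while samples of 1,000 stay within a few points of the true 0.50.
```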
Why are researchers prone to the law?
Kahneman acknowledges that researchers (social and behavioural scientists in his case) are prone to the law of small numbers because they have too much faith in what they learn from a few observations:
- They select too small a sample size, which leaves their results subject to a potentially large sampling error.
- Experts don’t pay enough attention to calculating the required sample size and instead rely on rules of thumb (the sketch after this list shows how little the calculation takes).
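For anyone tempted by rules of thumb, the standard formula for the sample size needed to estimate a proportion is straightforward to apply. A minimal sketch (the function name and defaults are my own):

```python
import math

def required_sample_size(margin_of_error, z=1.96, p=0.5):
    """Minimum n to estimate a proportion p to within +/- margin_of_error.
    z=1.96 corresponds to 95% confidence; p=0.5 is the most
    conservative (largest-sample) assumption."""
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

print(required_sample_size(0.05))   # 385: +/-5% at 95% confidence
print(required_sample_size(0.025))  # 1537: halving the margin ~quadruples n
```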
A well-known example of the law of small numbers is the supposed ‘Mozart effect’. A small-scale study suggested that playing classical music to babies and young children might make them smarter. The findings spawned a whole cottage industry of books, CDs and videos.
The study by psychologist Frances Rauscher was based upon observations of just 36 college students. In a single test, students who had listened to Mozart “seemed” to show a significant improvement in their performance on an IQ test. This was picked up by the media and by various organisations involved in promoting music. However, in 2007 a review of the relevant studies by Germany’s Ministry of Education and Research concluded that the phenomenon was “nonexistent”.
What is to blame for the bias?
Kahneman puts much of the blame for people falling foul of the law of small numbers on System 1. This is because System 1:
- Eliminates doubt by suppressing ambiguity and automatically constructs coherent stories that help us explain our observations.
- Embellishes scraps of information to produce a much richer image than the facts often justify.
- Is prone to jumping to conclusions and constructs a vision of reality that is too coherent and believable.
- Seeks patterns and looks for meaning in observations.
- Does not expect regular patterns to emerge from a random process, so when a potential correlation appears it is far too quick to reject the assumption that the process is entirely random.
Overall, Kahneman believes people are prone to exaggerating the consistency and meaning of what they see. A tendency towards causal thinking also leads people to see a relationship where none exists, and this feeds the law of small numbers.
Questions for researchers
Kahneman’s work on the law of small numbers raises some important questions for researchers and customer insight specialists.
- We are pattern seekers, and we often use small samples in qualitative research and usability testing. Is there a tendency to extrapolate the findings from small-scale studies to the wider population?
- Do researchers sometimes select too small a sample size in quantitative studies and experiments? Is this because they use a rule of thumb rather than calculating the statistically required sample size?
- Are we too quick to reject the possibility that what we are seeing is purely random? (The coin-flip sketch after this list shows how streaky pure chance can look.)
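On that last question, a quick illustrative sketch (my own) shows how long a run of identical results a fair coin produces entirely by chance:

```python
import random

# 100 flips of a fair coin: how long is the longest run of
# identical outcomes produced purely by chance?
random.seed(1)
flips = [random.choice("HT") for _ in range(100)]

longest = current = 1
for prev, nxt in zip(flips, flips[1:]):
    current = current + 1 if nxt == prev else 1
    longest = max(longest, current)

print("".join(flips[:40]), "...")
print("longest run:", longest)
# Runs of 6-8 identical flips are entirely normal in 100 fair flips,
# yet they look anything but random to the naked eye.
```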
As with all forms of bias, reality is characterised by a spectrum of behaviours, from the rigorous to the lax. From my experience on the client side of research, there are a number of reasons why research sometimes falls foul of the law of small numbers.
Observations from a client-side researcher!

-
Usability tests evaluate actual behaviour, so normal sampling rules don’t apply!
I read this claim recently in a blog about website usability testing. It is a myth. The reason for undertaking only a small number of tests is that there are diminishing returns: after 5 to 10 tests, few new usability risks tend to be uncovered. The law of small numbers still applies even when it involves observing human behaviour.
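The diminishing-returns pattern is often expressed through the 1 - (1 - p)^n model popularised by Jakob Nielsen and Tom Landauer, where p is the probability that a single user encounters a given problem. A rough sketch (p = 0.31 is their often-quoted average, though it varies widely between studies):

```python
# Proportion of usability problems expected to be found after n users,
# using the 1 - (1 - p)^n model popularised by Nielsen and Landauer.
# p = 0.31 is the often-quoted average discovery rate per user.
p = 0.31
for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} users -> {1 - (1 - p) ** n:.0%} of problems found")

# Typical output: 5 users surface roughly 84% of problems, and each
# additional user adds less and less, hence the diminishing returns.
```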
Like any form of qualitative research, usability testing is a valuable way of uncovering potential risks and perceptions of a new design. However, just like traditional qualitative research, usability testing still benefits from being complemented by quantitative techniques (e.g. A/B or multivariate testing).
-
Treating qualitative findings like quantitative data!
I wasn’t going to include this as it seemed too obvious. I changed my mind when I read a post on a LinkedIn group which asked: “Can qualitative become quantitative?”
The answer is normally no. Qualitative studies usually rely on small numbers and a less structured approach to questionnaire design. This means that each interview is unlikely to be identical to the others, and so the interviews are not directly comparable.
However, the post reminded me of a number of occasions on which I witnessed people latching onto the number of respondents choosing an option in a qualitative study as indicative of the frequency of behaviour in the wider population. This is the risk of quoting numbers or proportions in qualitative research: non-researchers frequently interpret them as estimates of actual customer behaviour, which leads to mistakes based upon the law of small numbers.
-
Senior management’s comprehension of sampling & statistics:
When working for a life insurance company I was constantly challenged about the reliability of findings from small samples. The reason was simple: almost all the senior management were actuaries, so they had an excellent grasp of the potential error introduced by sampling. The benefit was that other departments were unlikely to be able to misuse research based upon small samples, because they would meet the same challenges as I did.
-
DIY research tools (e.g. SurveyMonkey):

DIY tools have given non-researchers easy access to the means of conducting and analysing their own surveys. I am not against the use of these tools. Unfortunately, though, many non-researchers who use them may not have sufficient knowledge of sampling and statistics to correctly design surveys or analyse the resulting data. If so, non-researchers may be particularly prone to the law of small numbers.
-
Correlation does not mean causation!
Key driver/multiple regression analysis models the influence of independent variables on a single dependent variable. However, such models can only infer a causal relationship; further experimentation and analysis are needed to support it.
The nature of survey data (e.g. independent variables are often correlated) and the sample sizes involved do not always justify the use of such statistical techniques. Big data can play a key role here in providing more robust evidence for causal relationships. But without evidence to suggest a reason for a causal relationship, a correlation between two variables should be treated with the utmost caution. In a similar vein, always be sceptical about a trend line that fits too well, as this could be a Procrustean solution: only data that fits the trend has been selected, data that doesn’t has been discarded, and decisions again end up resting on the law of small numbers.
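To illustrate how easily a “relationship” can appear without any causation, here is a sketch of my own: two trending series built from pure noise, with no connection whatsoever, will frequently show a strong correlation.

```python
import random

def random_walk(steps, seed):
    """A series with no signal at all: a cumulative sum of random noise."""
    rng = random.Random(seed)
    level, walk = 0.0, []
    for _ in range(steps):
        level += rng.gauss(0, 1)
        walk.append(level)
    return walk

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Two completely unrelated trending series...
a = random_walk(200, seed=7)
b = random_walk(200, seed=99)
print(f"r = {pearson_r(a, b):.2f}")  # often far from zero despite no link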
-
Death by PowerPoint!
This is a training issue, but I frequently see PowerPoint slides that highlight differences between sub-samples that are not statistically significant. In most research agencies the modelling and analytics are carried out by a department separate from the account executives. That is not a problem provided the account executives who present the data have a sufficient understanding of the nature and limitations of the analysis. From my experience this is not always the case.
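As a sanity check anyone presenting such slides could apply, here is a minimal sketch of a two-proportion z-test, a standard first test for a difference between sub-samples (the example figures are hypothetical):

```python
import math

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """z-statistic for the difference between two sample proportions."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# 60% vs 50% looks like a headline-worthy gap, but with only 50
# respondents per cell z is about 1.0, well short of the ~1.96
# threshold for significance at the 95% level.
print(f"z = {two_proportion_z(30, 50, 25, 50):.2f}")
```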
-
Budgets and treating research like a commodity!
When companies treat research like a commodity and constantly expect to make cost savings, there is a danger that sample sizes will be cut to the bone. As a result, studies don’t deliver the required level of reliability. I briefly worked on a multi-country brand and advertising tracking study that only had sufficient sample to analyse on a three-monthly basis. This proved very frustrating, as it wasn’t sufficiently sensitive to measure the short-term (i.e. monthly) impact of bursts of advertising activity.
-
Pressure to identify insights!
Researchers are by nature pattern seekers, and this can make them susceptible to seeing phenomena that are generated by a purely random process. There is nothing wrong with this provided we treat such patterns with caution and seek further data or more robust research to test our hypotheses. This is why researchers need the training to present results in a balanced and critical way, so that management don’t jump to conclusions.
-
Reporting continuous data too frequently!
There is a growing tendency to expect data on tap. This is a characteristic of the digital age, but it sometimes creates pressure to analyse and communicate continuous survey data too frequently.
I came across a continuous customer satisfaction survey a few years ago. High-level key performance indicators were communicated to each business area on a monthly basis, despite most of the base sizes being far too small to identify any significant differences. The Customer Insight Manager had to comment on changes from the previous month’s score. This encouraged the invention of reasons for movements that were not statistically significant, a textbook case of the law of small numbers.
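A simple simulation makes the problem vivid (the base size and score here are hypothetical): even a KPI that is truly flat will appear to move every month when the base is small.

```python
import random

# A satisfaction KPI that is truly flat at 75%, measured every month
# from a small base of 80 respondents.
random.seed(3)
monthly_scores = []
for month in range(12):
    satisfied = sum(random.random() < 0.75 for _ in range(80))
    monthly_scores.append(100 * satisfied / 80)

print(" ".join(f"{s:.0f}" for s in monthly_scores))
# In a typical run the score bounces around by several points from
# month to month purely through sampling noise; any "explanation"
# of those movements is a story about randomness.
```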
Implications:
The law of small numbers gives researchers an interesting insight into our own potential fallibility. It warns us against listening to our intuition and relying on rules of thumb when determining sample size. Kahneman also provides a useful reminder to be careful about how findings are communicated when dealing with data from small samples.
Research and experimentation are, after all, iterative processes. We should always be looking to validate results, whether from large or small-scale studies. It is only through trial and error that we are ultimately able to separate insights from myths. However, the law of small numbers should teach us to treat any insight based upon a small sample with extreme caution.
Further reading:
Thinking, Fast and Slow by Daniel Kahneman.