The Myth of The Average User

Have you heard how the US Air Force used the average dimensions of pilots to design their cockpits?

Averages are everywhere in digital marketing. Mobile designers use average thumb size to determine button height, and project teams often base decisions on the average user. Many key metrics are also averages, such as click-through rate, open rate, conversion rate and average basket value. Whether we like it or not, most websites are designed for the average user. But is there really such a thing as an average customer or visitor?

Should we use averages for design purposes?

Well, back in the 1940s the US Air Force had a serious problem. For no obvious reason, pilots were frequently losing control and crashing their aircraft. This was, of course, a period of tremendous change with the advent of the jet engine; aircraft were getting much faster and more complicated.

Initially, pilot error was blamed, as the planes seldom suffered mechanical breakdowns. But attention soon turned to the cockpit design, which was based upon the average physical dimensions of hundreds of male pilots measured in 1926. Was it possible that pilots had simply got bigger over the intervening twenty-odd years?

Data-informed decision-making:

In 1950 they decided to find out. Researchers at Wright Air Force Base in Ohio measured over 4,000 pilots on 140 dimensions of size, including torso length, arm length, crotch height and even thumb length. Almost everyone expected the new measurements to produce a better-designed cockpit that would reduce the number of non-combat accidents.

However, a 23-year-old scientist, Lt Gilbert Daniels, who had recently joined the Aero Medical Laboratory straight from college, had a different theory. He had studied physical anthropology at Harvard, where his thesis had involved measuring the shape of 250 male students’ hands.

Although the students were all from similar ethnic and socio-cultural backgrounds, he noted that their hands were very different in size and shape. Further, when he calculated the average hand size he found that it did not match any individual’s measurements.

“When I left Harvard, it was clear to me that if you wanted to design something for an individual human being, the average was completely useless.” – Lt Gilbert Daniels

To prove whether or not he was right, Daniels selected ten physical dimensions that he thought would be most important for cockpit design. Using the data from the 4,063 pilots who had been measured, Daniels defined someone as average if their measurements fell within the middle 30% of the range for each dimension.

He then compared each individual pilot to the averages he had calculated. Most of his colleagues expected the vast majority of pilots to fall within the average range on over half the dimensions. In fact, Daniels’ analysis showed that not one of the 4,063 pilots fell within the average range on all ten dimensions. Even when he looked at just three dimensions, fewer than 3.5% of pilots were average on all three.
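To get a feel for why this happens, here is a toy simulation (not Daniels’ actual data) assuming independent, normally distributed measurements. Real body dimensions are correlated, but the basic multiplicative effect is the same: every extra dimension you require someone to be average on shrinks the qualifying group.

```python
# Toy simulation (not Daniels' actual data): how rare is it to be "average"
# on every dimension when "average" means the middle 30% of each dimension?
# Assumes independent, normally distributed measurements.
import numpy as np

rng = np.random.default_rng(42)
n_pilots, n_dims = 4063, 10
measurements = rng.standard_normal((n_pilots, n_dims))

# "Average" here = between the 35th and 65th percentile on a dimension
lo = np.percentile(measurements, 35, axis=0)
hi = np.percentile(measurements, 65, axis=0)
in_band = (measurements >= lo) & (measurements <= hi)

for k in (3, 10):
    n_average = int(in_band[:, :k].all(axis=1).sum())
    print(f"Average on all {k} dimensions: {n_average} of {n_pilots} "
          f"(independence predicts ~{0.3 ** k:.4%})")
```

With three dimensions roughly 3% of the simulated pilots qualify, close to Daniels’ figure; with ten dimensions you would expect essentially nobody out of 4,063, which is exactly what he found.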

Implications for digital marketing:

Daniels concluded that any system designed around the average person is doomed to fail. There is no such thing as an average user, and so we need to stop creating user profiles or personas based upon averages.

This creates a problem for website designers and optimisers, because websites are normally designed for the average user. Most websites display identical content to all visitors, and yet people arrive with different intentions and goals. Treating everyone the same, based upon some illusory average person, is a dangerous approach to design and conversion rate optimisation.

How do we individualise the user experience?

If one hundred users visited the Amazon website, they would each see a different version of the Amazon homepage. This is because Amazon understands the benefit of adjusting the customer experience according to each user’s past behaviour and intent.

Amazon uses real-time content personalisation and behavioural targeting to serve a version of their site that responds to each visitor’s unique needs. This generates huge benefits for the likes of Amazon because visitors are much more responsive to a website that adjusts to their intent and interests than a generic site that does not respond to their individual needs.

Personalisation can take many forms, but the main criteria often used include demographics (e.g. gender or age), purchase history, device, media consumption, source of traffic, service history, browser, engagement and psychographics.

When I mention personalisation to web developers, they often tell me that it’s “difficult” or “complex” to target content using such characteristics. This might be the case if you rely on developers to build content, but if you have an enterprise web analytics platform or an A/B testing solution, it can be relatively straightforward to set up and test personalisation criteria.
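To make this less abstract, here is a minimal sketch of rule-based targeting. The attribute names and variant labels are invented for illustration; in practice you would express equivalent audience rules through your analytics or testing platform’s own interface.

```python
# Minimal sketch of rule-based content targeting. Attribute names and variant
# labels are hypothetical; real personalisation/A/B testing tools let you
# define equivalent audience rules through their own UI or API.

def choose_homepage_variant(visitor: dict) -> str:
    """Pick a homepage variant from simple audience rules."""
    if visitor.get("is_returning") and visitor.get("has_purchased"):
        return "repeat-customer-offers"   # reward loyalty, cross-sell
    if visitor.get("traffic_source") == "ppc":
        return "campaign-landing-hero"    # match the message of the ad
    if visitor.get("device") == "mobile":
        return "mobile-first-hero"        # shorter copy, thumb-friendly CTA
    return "default-hero"                 # fallback for everyone else

print(choose_homepage_variant({"device": "mobile", "traffic_source": "seo"}))
# -> "mobile-first-hero"
```

The point is not the code itself but the principle: a handful of rules driven by behaviour, source and device already moves you beyond a single page for the mythical average visitor.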

With the introduction of artificial intelligence (AI) based personalisation tools, there is scope for even greater sophistication. Companies that invest in AI are likely to benefit from a first-mover advantage because the technology lends itself so well to personalisation. Don’t be left behind; start investing now, as Amazon and Booking.com won’t wait for their competitors to catch on to the potential benefits of using AI for personalisation.

Personas:

Many organisations like to use buyer personas to help their teams visualise real customers. However, if these are based upon average users, they too will be potentially highly misleading. Ensure your buyer personas are based upon real customer segments, using research and analytics to guide you. Although personas have their critics, they can be useful if organisations follow an evidence-based process to create them.

What about analytics?

When it comes to tracking digital performance, many organisations still rely on averages. But just as averages are dangerous when designing a website, they are also potentially highly misleading when measuring its performance. Let’s take the average conversion rate that many companies monitor on a daily basis.

1. Not all visitors are able to buy: 

When I was asked to set up conversion reporting for an online gaming brand, I noticed their web analytics were tracking all visitors, including those from countries that were prevented from signing up. No one had thought to set up filters to exclude visitors from outside the company’s business area, and so the conversion rate included many visitors who were unable to sign up.

BJ Fogg’s behaviour model points out that users will only complete a task if they have both the motivation and the ability to complete a conversion goal. In addition, there needs to be a trigger to nudge the user towards the goal. If any of these criteria are lacking, the user will not convert.

When reviewing a web analytics report, consider whether these criteria are present and, where possible, remove users who clearly lack at least one of them. For example, if there is no prominent call to action on the page for a particular customer segment (e.g. logged-in users), exclude these visitors from your analysis; a short sketch of this kind of filtering follows the figure below.

[Image: BJ Fogg’s behaviour model. Image source: BJ Fogg]
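Here is a minimal sketch of that kind of exclusion, assuming a hypothetical sessions export; the column names, the eligible-country list and the segment label are invented, and most analytics platforms can apply equivalent filters natively.

```python
# Minimal sketch: recompute conversion rate after excluding visitors who
# cannot convert. The file name, column names, eligible-country list and
# segment label are hypothetical - adapt them to your own analytics export.
import pandas as pd

sessions = pd.read_csv("sessions.csv")      # one row per session/visit

eligible_countries = {"GB", "IE", "MT"}     # markets where sign-up is allowed

filtered = sessions[
    sessions["country"].isin(eligible_countries)       # able to sign up
    & (sessions["segment"] != "logged_in_customer")    # no CTA shown to them
]

print(f"Raw conversion rate:      {sessions['converted'].mean():.2%}")
print(f"Eligible-only conversion: {filtered['converted'].mean():.2%}")
```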

2. Users access your site in different ways:

Your conversion rate is highly likely to vary significantly according to how visitors access your site. The type of device used often reflects different intent and behaviour. Unless you analyse your conversion rate by device and browser, you will probably miss large variations in your key metrics that could provide valuable insights for improving sales or lead conversion.
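A minimal sketch of this kind of breakdown, again assuming a hypothetical sessions export with device, browser and conversion columns:

```python
# Minimal sketch: conversion rate broken down by device and browser rather
# than reported as one site-wide average. Column names are hypothetical.
import pandas as pd

sessions = pd.read_csv("sessions.csv")

by_device = (
    sessions.groupby(["device", "browser"])["converted"]
            .agg(visits="count", conversion_rate="mean")
            .sort_values("conversion_rate", ascending=False)
)
print(by_device)   # e.g. desktop/Chrome vs mobile/Safari can differ widely
```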

3. Source of traffic matters:

Similarly, the source of traffic often has a massive impact on conversion rates, and it is fairly common for the average conversion rate to plummet if you pump lots of money into a new, untested source. Affiliates and paid search (PPC) can deliver large amounts of extra traffic to a site, but the intent of these visitors can sometimes be very poor.

A TV campaign can also boost traffic volume significantly, but again the intent of such visitors will be different from that of existing traffic sources. This makes it essential to break down conversion rates by source of traffic to understand performance at a more granular level.
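To see why the blended average is so sensitive, here is a small worked example with invented numbers: a new, low-intent source drags the overall rate down even though every existing source converts exactly as before.

```python
# Worked example with invented numbers: adding a high-volume, low-intent
# traffic source drags the blended average down even though the existing
# sources perform exactly as before.
existing = {"seo": (50_000, 0.040), "email": (10_000, 0.060)}   # (visits, conversion rate)
new_source = {"untested_ppc": (40_000, 0.005)}

def blended_rate(sources):
    visits = sum(v for v, _ in sources.values())
    conversions = sum(v * r for v, r in sources.values())
    return conversions / visits

print(f"Before the new source: {blended_rate(existing):.2%}")                    # ~4.33%
print(f"After the new source:  {blended_rate({**existing, **new_source}):.2%}")  # ~2.80%
```

Nothing about the site got worse, yet the headline average almost halved; only a per-source breakdown reveals that.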

4. New and returning visitors:

In one company I worked for, managers noticed that a majority of visitors were returning visitors and assumed that many of these would be existing customers. They were concerned that including returning visitors in reporting was depressing their conversion rate, as customers couldn’t sign up more than once. So they decided to exclude returning visitors from their calculation of the conversion rate.

But as I pointed out to them when I became responsible for the brand, returning visitors normally convert at a higher rate than new visitors. This means you should look at new and returning visitor conversion rates separately, but use new visitor conversion as a guide for paid campaigns. When I looked at the returning visitors to the site, it was also clear that relatively few were existing customers, and so they were not having a significant impact on the conversion rate.

5. Visitors are at different stages of the buying process:

Most websites have a mixture of informational content and transactional or lead generation content. This reflects the fact that visitors arrive with different intent and are at different stages of the buying process.

Not everyone is ready to buy when they arrive on your site, so it is necessary to create custom segments in your analytics to allocate people to an appropriate group. You should then set appropriate success metrics for customers at different stages of the buying process and not expect your overall conversion rate to be identical for all visitor segments.
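A minimal sketch of this kind of stage-based segmentation, using invented page types and segment names; in practice you would define equivalent custom segments inside your analytics tool and attach a different success metric to each.

```python
# Minimal sketch: allocate a session to a buying-stage segment from the page
# types viewed, then judge each segment against its own success metric.
# Page types, segment names and suggested metrics are hypothetical.

def buying_stage(pages_viewed: list[str]) -> str:
    if "checkout" in pages_viewed or "basket" in pages_viewed:
        return "ready_to_buy"   # success metric: purchase conversion
    if "product" in pages_viewed or "pricing" in pages_viewed:
        return "evaluating"     # success metric: add-to-basket rate
    return "researching"        # success metric: newsletter sign-up, return visit

print(buying_stage(["blog", "product"]))                 # -> "evaluating"
print(buying_stage(["product", "basket", "checkout"]))   # -> "ready_to_buy"
```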

Conclusion:

Averages are a tidy way of summarising data, but as Daniels identified over half a century ago, they are misleading and potentially fatal when designing systems or interfaces for people to use. It’s time we stopped designing websites for average users and employed personalisation and behavioural targeting to better meet customer needs.

We shouldn’t be surprised that, according to Millward Brown Digital, Amazon Prime members convert around 74% of the time, compared with an e-commerce average of 3.1%. Even non-Prime Amazon visitors convert around 13% of the time. This is mainly because Amazon is so good at testing and personalising its site to respond to individual customer needs.

Amazon runs thousands of A/B and multivariate tests to achieve this level of sophistication, because finding high-impact experiments means trying a lot of things. Most retailers, by contrast, run only a few hundred tests a year.

Companies such as Amazon, Netflix and Booking.com also use highly segmented web analytics reports to explore user behaviour. They don’t rely on average conversion rates, because averages hide the real insights.

