Why Are Most A/B Test Results A Lie?


In a report on A/B testing Martin Goodson at Qubit suggests that “most A/B winning test results are illusory”. Andre Morys at Web Arts goes even further and argues that “90% of test results are a lie.” If their estimates are correct then a lot of decisions are being made based upon invalid A/B test results. Does this explain why many non-CRO managers are sceptical about the sustainability of A/B test results? Why are some A/B test results invalid and what can we do about it?

1. Confirmation Bias:

Andre Morys suggests that confirmation bias results in many false positives. This is because optimisers naturally base most of their test hypotheses and designs on their own attitudes and opinions. They ignore information that contradicts their ideas. As a result they become too emotionally attached to their design and stop experiments as soon as the testing software indicates they have a winner.

Stopping a test as soon as statistical confidence is achieved can be highly misleading. It does not allow for the length of the business cycle. It is important to also consider source of traffic and current marketing campaigns. You should run tests for a minimum of two business cycles. Make allowances for day-to-day fluctuations and changing behaviour at weekends. Tests should run for at least 2 to 4 weeks depending upon your test design and business cycle.

2. Survivorship Bias:

This refers to our propensity to focus on people who survive a journey (e.g. returning visitors or VIP customers) but ignore the fact that the process they have been through influences their characteristics and behaviour. Even returning visitors have survived in the sense that they weren’t put off by any negative aspects of the user experience. VIP customers may be your most profitable users but they are not a fixed pool of visitors. Their level of intent will often be much higher than that of your normal user.

Survivorship bias is a logical error that means people focus too much on survivors and forget about those who did not go the distance.

The danger is that if you include returning users in a landing page experiment, they will behave very differently from new users who have never seen the site before. For tests relating to existing users, the process of excluding outliers can reduce problems with VIP customers influencing test results. Consider excluding VIP customers completely from your A/B tests as they do not reflect normal users.

For more on survivorship bias see my post: Don’t let this bias destroy your optimisation strategy!

3. Statistical Power:

Statistical power refers to the probability of identifying a genuine difference between two experiences in an experiment. To achieve a high level of statistical power, we have to build up a sample size that is sufficiently large relative to the uplift we want to detect. However, when working in a commercial organisation there is often a lot of pressure to achieve quick results and move on to the next idea. Unfortunately this can sometimes undermine the testing process.

Image of confidence interval at 95% level

Before you begin a test you should estimate the sample size required to achieve a high level of statistical power, normally around 90%. This means that you should identify 9 out of 10 genuine test differences. Because tests are based on a sample of observations, natural random variation means they will also sometimes generate false positives. By convention the acceptable false positive rate is normally set at 5%.

According to an analysis of 1700 A/B tests by convert.com, only around 10% of tests achieve a statistically significant uplift. This means that if we run 100 tests we might expect 10 of those tests to generate a genuine uplift. However, given current traffic levels for each site we estimate that we would need to run each test for 2 months to achieve 90% power. This means we should in theory identify 90% of genuine uplifts, i.e. 9 tests. Using the p-value cut-off of 5% we would also anticipate 5 false positives. Thus our tests generate 9 + 5 = 14 winning tests.

The Problem:

The danger here is that people are too impatient to allow a test showing an uplift to run for a full two months and so they decide to stop the test after just two weeks. The problem with this is that the much smaller sample size decreases the power of the test from 90% to perhaps as low as 30%. Under this scenario we would now expect to achieve 3 genuine uplifts and still get 5 false positives. This means that 63% of your winning tests are not genuine uplifts.

Before running a test always use a sample size calculator and estimate the length of time needed to achieve your required statistical power. This allows you to consider the implications for the power of the test if you do decide to cut it short. Assume a much greater risk of a false positive if you do stop tests early.
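As a rough sketch of both calculations (the function names and the 3% to 3.5% example are illustrative, and the z-scores are hard-coded for a 95% confidence level and 90% power):

```js
// Approximate sample size per variant for detecting a given uplift,
// using the standard two-proportion formula. Hard-coded z-scores:
// 1.96 = 95% confidence (two-sided), 1.28 = 90% power.
function sampleSizePerVariant(p1, p2, zAlpha = 1.96, zBeta = 1.28) {
  const pBar = (p1 + p2) / 2;
  const root = zAlpha * Math.sqrt(2 * pBar * (1 - pBar)) +
               zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2));
  return Math.ceil((root * root) / ((p2 - p1) ** 2));
}
console.log(sampleSizePerVariant(0.03, 0.035)); // ≈ 26,400 users per variant

// Share of "winning" tests that are false positives, as in the example above
// (100 tests, 10 genuine uplifts, 5% false positive rate):
function falseWinnerShare(tests, genuine, power, alpha = 0.05) {
  const trueWins = genuine * power;
  const falseWins = (tests - genuine) * alpha;
  return falseWins / (trueWins + falseWins);
}
console.log(falseWinnerShare(100, 10, 0.9)); // ≈ 0.33 at 90% power
console.log(falseWinnerShare(100, 10, 0.3)); // ≈ 0.60 at 30% power
// (≈63% in the text above, which rounds the 4.5 expected false positives up to 5)
```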

4. Simpson’s Paradox:

Once you begin a test avoid altering the settings or the designs of the variant or the control, and don’t change the traffic allocated to the variants during the experiment. Adjusting the traffic split for a variant during a test can undermine the test result because of a phenomenon known as Simpson’s Paradox. This occurs when a trend that appears in different groups of data vanishes or reverses when the groups are combined.

Experimenters at Microsoft experienced this issue when they allocated just 1% of traffic to a test variant on Friday, but increased this to 50% on Saturday. The site received one million daily visitors. Although the variant had a higher conversion rate than the Control on both days, when the data was aggregated the variant appeared to have a lower overall conversion rate.

Image of A/B test from Microsoft which suffers from Simpson's Paradox

Image Source: Microsoft

This occurs because we are dealing with weighted averages. Saturday had a much lower conversion rate. As the variant had 50 times more traffic that day than it did on Friday it had a much greater impact on the overall result.
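The arithmetic is easy to check with made-up numbers (these are illustrative, not Microsoft’s actual figures):

```js
// Illustrative figures showing the Simpson's Paradox reversal.
const days = [
  { day: 'Friday',   control: { n: 990000, conv: 19800 }, variant: { n: 10000,  conv: 230  } },
  { day: 'Saturday', control: { n: 500000, conv: 5000  }, variant: { n: 500000, conv: 6000 } },
];

const pct = (g) => (100 * g.conv / g.n).toFixed(2) + '%';
days.forEach((d) =>
  console.log(`${d.day}: control ${pct(d.control)}, variant ${pct(d.variant)}`));
// Friday:   control 2.00%, variant 2.30%  -> variant wins
// Saturday: control 1.00%, variant 1.20%  -> variant wins

// Aggregate both days: the weighted averages flip the result.
const overall = (side) => ({
  n: days.reduce((s, d) => s + d[side].n, 0),
  conv: days.reduce((s, d) => s + d[side].conv, 0),
});
console.log(`Overall: control ${pct(overall('control'))}, variant ${pct(overall('variant'))}`);
// Overall: control 1.66%, variant 1.22%  -> variant appears to lose
```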

All About The Tests

Simpson’s Paradox occurs when sampling is not uniform, so avoid making decisions about sub-groups (e.g. different sources of traffic or types of device) using aggregate data. This demonstrates the benefit of targeted tests, for example where you only look at a single traffic source or device type.

When you do run tests for multiple sources of traffic or user segments it is best to avoid using aggregate data. Instead treat each source/page as a separate test variant. You can then arrange to run the test for each variant until you achieve the desired statistically significant result.

Altering traffic allocation during a test will also bias your results because it changes the sampling of your returning visitors. Traffic allocation only affects new users, so a change in the share of traffic won’t adjust for any difference in returning visitor numbers that the initial traffic split generated.

5. Don’t validate your A/B testing software:

Sometimes companies begin using A/B testing software without proper validation that it is accurately measuring all key metrics for all user journeys. It is not uncommon for A/B test software not to be universally integrated. Different teams are often responsible for platform, registration and check-out.

During the process of integration be careful to check that all different user journeys have been included. People have a tendency to prioritise what they perceive to be the most important paths. However, users rarely, if ever, follow the preferred “happy path”.

Once integration is complete it is then necessary to validate this through either running A/A tests or using web analytics to confirm metrics are being measured correctly. It is also advisable to check that both testing software and web analytics align with your data warehouse. If there is any discrepancy it is better to know before you run a test rather than when you present results to senior management.
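For the statistical side of that validation, a simple two-proportion z-test is often enough to sanity-check an A/A result. A minimal sketch with illustrative numbers:

```js
// Two-proportion z-test: handy for sanity-checking an A/A result.
// |z| > 1.96 would be "significant" at the 5% level, which should
// happen in only ~5% of healthy A/A tests.
function zTest(convA, nA, convB, nB) {
  const pA = convA / nA;
  const pB = convB / nB;
  const pPool = (convA + convB) / (nA + nB);
  const se = Math.sqrt(pPool * (1 - pPool) * (1 / nA + 1 / nB));
  return (pA - pB) / se;
}
console.log(Math.abs(zTest(480, 16000, 510, 16100))); // ≈ 0.87: just noise
```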

6. Regression To The Mean:

There is a real danger when looking at test results in the first few days: you see a large uplift (or fall) and mention it to your boss or other members of the team. Everyone then gets excited and there is pressure on you to end the test early to benefit from the uplift or to limit the damage. Often, though, this large difference in performance gradually evaporates over a number of days or weeks.

Image of A/B test showing signs of regression to the mean

Never fall into this trap, as what you are seeing here is regression to the mean. This is the phenomenon whereby a metric that is extreme the first time it is measured tends to move towards the mean on subsequent observations. Small sample sizes of course tend to generate extreme results. Be careful not to read anything into your conversion rate when a test first starts to generate numbers.
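A quick simulation makes the effect visible; this sketch assumes a hypothetical variant with a true 3% conversion rate and 200 visitors a day:

```js
// Hypothetical simulation: a variant with a true 3% conversion rate,
// tracked as the cumulative rate while the test collects traffic.
function simulate(trueRate, visitorsPerDay, dayCount) {
  let conversions = 0;
  let visitors = 0;
  for (let day = 1; day <= dayCount; day++) {
    for (let i = 0; i < visitorsPerDay; i++) {
      if (Math.random() < trueRate) conversions++;
    }
    visitors += visitorsPerDay;
    console.log(`Day ${day}: ${(100 * conversions / visitors).toFixed(2)}%`);
  }
}
simulate(0.03, 200, 14);
// Early days regularly show rates well away from 3% (e.g. 1.5% or 4.5%);
// the cumulative rate settles towards the true 3% as the sample grows.
```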

7. The Fallacy Of Session Based Metrics:

Most A/B testing software uses standard statistical tests to determine whether the performance of a variant is likely to be significantly different from the Control. This is based upon the assumption that each observation is independent.

However, if you use session level metrics such as conversion per session you have a problem. A/B testing software allocates users into either group A or B to prevent the same visitor seeing both variants and to ensure a consistent user experience. Sessions are therefore not independent as a user can have multiple sessions.

Analysis by Skyscanner has shown that a visitor is more likely to have converted if they have had multiple sessions. On the other hand an individual session is less likely to have converted if made by a user with many sessions.

This lack of independence is a concern, and Skyscanner simulated how it affects their conversion rate estimates. They discovered that when users rather than sessions are randomly assigned, as occurs in an A/B test, the variance of session-level metrics is greater than assumed in significance calculations.

Image of estimated variance for session randomised and user randomised demonstrates the fallacy of session based metrics

Source Image: Skyscanner


Skyscanner found that the effect grows with longer experiments because the average number of sessions per user is higher. What this means is that month-long tests measuring session conversion rate while randomising on users would have three times as many false positives as normally expected. However, when sessions themselves were randomised (each session assigned to a variant irrespective of user), the variance conformed to that predicted by significance calculations.
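The mechanism is easy to reproduce in a simulation. In this sketch (all parameters are hypothetical, not Skyscanner’s), each user gets a personal conversion propensity and a random number of sessions, so sessions from the same user are correlated:

```js
// Sketch: sessions from the same user are correlated, so the session
// conversion rate varies more across experiments than a standard test assumes.
function sessionConversionRate(users) {
  let sessions = 0, conversions = 0;
  for (let u = 0; u < users; u++) {
    const nSessions = 1 + Math.floor(Math.random() * 10);  // 1-10 sessions
    const propensity = Math.random() < 0.2 ? 0.10 : 0.01;  // per-user rate
    for (let s = 0; s < nSessions; s++) {
      sessions++;
      if (Math.random() < propensity) conversions++;
    }
  }
  return { rate: conversions / sessions, sessions };
}

// Run many simulated "experiments" and compare the observed variance of the
// metric with the binomial variance p(1-p)/n that a z-test would assume.
const runs = Array.from({ length: 2000 }, () => sessionConversionRate(500));
const mean = runs.reduce((s, r) => s + r.rate, 0) / runs.length;
const observedVar = runs.reduce((s, r) => s + (r.rate - mean) ** 2, 0) / runs.length;
const meanSessions = runs.reduce((s, r) => s + r.sessions, 0) / runs.length;
const assumedVar = mean * (1 - mean) / meanSessions;
console.log(observedVar / assumedVar); // > 1: the test understates the variance
```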

Furthermore, this problem also occurs whenever you use a rate metric that is not defined by what you randomised on. So per-page view, per-click or click-through rate metrics will also be subject to the same problem if you are randomising on users. The team at Skyscanner suggest three ways of avoiding being misled by this statistical phenomenon.

  1. Keep to user-level metrics when randomising on users and you will normally avoid an increased rate of false positives.
  2. When you must use a metric that is subject to this increased propensity for false positives, there are methods of estimating the true variance and calculating accurate p-values. Check this paper from the Microsoft team and also this one.
  3. Estimating the true variance and calculating accurate p-values can be computationally complex and time consuming, so you can instead accept a higher false positive rate. Use A/A tests to estimate how much the metric’s variance is inflated by this phenomenon.

Conclusion:

When trying to avoid these pitfalls of A/B testing, the key is to have a strong process for developing and running your experiments. A good framework for testing ensures that hypotheses are based on evidence rather than intuition and that you agree the test parameters upfront. Make sure you calculate the sample size and how long your test will need to run to achieve the statistical power you want.

There is an opportunity cost to running tests to their full length, so sometimes you may want to end tests early. This can be fine provided you allow for the lower level of statistical confidence and the increased risk of a false positive. There is also evidence to suggest that if you are continuously running tests, this can largely compensate for any increase in the rate of false positives.

How To Get Started With Google Optimize


Google Optimize is a powerful free A/B testing and personalisation solution that is doing for online experimentation what Google Analytics did for web analytics. Optimize allows webmasters to conduct A/B and multivariate tests without the need for large budgets. Create a free account and you can start running online experiments and personalisation campaigns within minutes. Find out what does and what doesn’t work on your site and stop having to rely on best practice!

How to set up and run experiments in Google Optimize. 

What is Google Optimize?

Google Optimize is the free version of Google Optimize 360 which is an A/B testing and personalisation platform. Optimize allows marketers to run up to 3 A/B tests (or multivariate tests) at a time and provides a simple to use visual editor that enables a non-technical marketer to set up experiments in a matter of minutes. It also integrates fully with Google Analytics.

Google Optimize (free) and Google Optimize 360

So, what are the main differences between Google Optimize 360 and the free Google Optimize?

Limit on number of tests

Optimize only allows you to run up to three concurrent experiments. For many small and medium sized sites with no one dedicated to conversion rate optimisation this may not prove a major restriction. I know of even large websites that struggle to run more than a couple of tests at once and would save tens of thousands of pounds if they switched to Google Optimize.

Optimize allows you to conduct multivariate tests, but it limits you to 16 variations. Again, for many sites this may not be a problem because the more variations you have the more traffic you need to complete the test within a reasonable time scale.

No audiences

The free version of Google Optimize does not allow the use of Google Analytics audiences to target which visitors to include in tests. However, there are other targeting features available to use.

Objectives have to be pre-set

Unlike Optimize 360, there is no ability to analyse additional goals after the test has been set up. Although post-test goal analysis is a useful feature, it can lead to lazy thinking, as you should have a single success metric linked to a strong hypothesis.

How to get started with Google Optimize?

Assuming that you already have Google Analytics, implementation of Google Optimize is a simple process that looks like this:

  • Create an account and container
  • Link Optimize to Google Analytics
  • Paste the Optimize snippet into your Google Analytics script
  • Add a snippet of code to eliminate page flicker during A/B tests

When you register for Optimize you will be asked to create an account for your business and then a container for each website. You should then link your container (the individual website) to your Google Analytics account as this allows the two tools to share data. I recommend you do this as it allows for analysis of your tests within Google Analytics.

Optimize then prompts you to add the Optimize snippet to your site. This is only a single line of code that is inserted before the last line of your Google Analytics JavaScript.

Image of snippet of code for implementing Google Optimize

To minimise page flickering from A/B tests Optimize also recommends that you add some additional code to each page immediately before the Google Analytics code.
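As a hedged sketch of what the combined result looks like for the classic analytics.js tag (the GTM-XXXXXXX container ID and UA-XXXXXXX-Y property ID are placeholders; consult the current Optimize documentation for your own setup):

```html
<!-- Page-hiding (anti-flicker) snippet: placed immediately before the GA code.
     GTM-XXXXXXX is a placeholder for your Optimize container ID. -->
<style>.async-hide { opacity: 0 !important }</style>
<script>(function(a,s,y,n,c,h,i,d,e){s.className+=' '+y;h.start=1*new Date;
h.end=i=function(){s.className=s.className.replace(RegExp(' ?'+y),'')};
(a[n]=a[n]||[]).hide=h;setTimeout(function(){i();h.end=null},c);h.timeout=c;
})(window,document.documentElement,'async-hide','dataLayer',4000,
{'GTM-XXXXXXX':true});</script>

<!-- Google Analytics tag with the one-line Optimize snippet added
     before the final pageview line. -->
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-XXXXXXX-Y', 'auto');
ga('require', 'GTM-XXXXXXX');  // the one-line Optimize snippet
ga('send', 'pageview');
</script>
```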

Create an experiment:

You are now ready to set up an experiment and Optimize gives you three types of tests to run.

  • A/B tests: Two or more variants of a single page
  • Multivariate test: Two or more different sections of a page to be tested
  • Redirect test: Sometimes called a split test, where you test one or more entirely new pages or paths on a separate URL.

Image of types of experiments in Google Optimize

My example here is an A/B test where I have changed the heading to make it shorter and snappier, which also brings more content above the fold.

Image of heading A/B test from Conversion-uplift.co.uk

Visual editor:

Optimize has a simple what-you-see-is-what-you-get (WYSIWYG) visual editor which allows you to add, remove or change content. To access the visual editor you will need to download the Chrome extension for Optimize or use a browser that supports CSS3 selectors.

You can now create the variant you want to test using the visual editor or specify the URLs you want to test if you plan a redirect test. To make changes using the visual editor click on the heading or container you wish to amend. This will then open up the menu with quick tools to make simple changes to text, typography and orientation. If you select the Edit Element button you will see more advanced options which include Remove, edit text, edit HTML and insert HTML.

Image of edit and advanced edit options

Make sure you save your changes and confirm you are “Done” to create your variant.

Setting Objectives & Targets:

Before you publish your experiment you must set your objectives and decide what audience you want to target. If you have linked Google Analytics to your account you can use any goals that you have set up in GA as an objective. Optimize also has Pageviews, Bounces and Session Duration as default objective options.

Optimize allows you to select up to three objectives for each experiment. For my A/B test I selected Bounces and Pageviews. You should then decide which users you want to target as this needs to be set before the experiment begins. Click on the “CREATE RULE” button to open the side menu.

Image of how to create a rule in Google Optimize

For many tests you may want to only target new visitors to your site. This ensures that visitors won’t have previously seen the default experience which could otherwise skew your test results. Google Analytics sets a cookie on the user’s first visit to your site. This means you can target an experiment to new unique first time users by specifying a short value for Time since first arrival. To set this up create a behaviour targeting rule like this:

Targeting new visitors – Example 1

  Variable: Time since first arrival
  Match type: Less than
  Value: 10 seconds

To target a test to any page that a new user visits in the first hour since they first landed on your site, create this behaviour targeting rule:

Targeting new visitors – Example 2

  Variable: Time since first arrival
  Match type: Less than
  Value: 60 minutes

The current targeting options are as follows:

URLs

Target individual pages and sets of pages. URL targeting enables you to pick the page where your experiment is to run. This allows you to target a single page, a narrow subset of pages, or Hosts and Paths.

Behaviour

Target visitors arriving on a site from a specific channel or source. It allows you to target first time users and visitors from a specific referrer.

Geo

Target users from a specific city, region or country. When you type in the Values field, you will see suggestions from the AdWords Geographical Targeting API to speed up rule creation.

Technology

Target visitors using a specific browser, operating system or device. Optimize tracks the browser’s user agent string to identify which browser a visitor is on, what version and on which operating system.

JavaScript Variable

Target pages using JavaScript variable values. This allows you to target according to a value in the source code of the page in the form of a JavaScript variable.

First-party cookie

Target the value of a first-party cookie in the user’s browser. This allows you to target returning visitors who will already have a first-party cookie from your site.

Custom JavaScript

Target pages using a value returned by custom JavaScript. This allows you to inject JavaScript into a page and then target your test based upon the value the JavaScript returns. For example, if you wanted to target users visiting your site during the morning hours you could write a JavaScript function that returns the current hour, then set a targeting condition that looks for a returned value that is less than 12.
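For example, a minimal sketch of that morning-hours rule (the function name is a placeholder; Optimize evaluates your function and matches rules against its return value):

```js
// Custom JavaScript targeting value: returns the current hour (0-23).
function customJavaScript() {
  return new Date().getHours();
}
// A rule of "less than" 12 on this value then targets morning visitors.
```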

Query Parameter

Target specific pages and sets of pages. Query parameter targeting explicitly targets values that occur in the query string of a URL. These are found between the question mark and the hash mark in the URL query string.

Data Layer Variable

Rather than referencing JavaScript variables in your targeting rules, you can reference key-value pairs contained in the data layer. You may want to create a targeting rule that uses shopping cart data or other information available on the page. For example, you might want to target users who have just completed a purchase of more than £100. This information could be stored in the data layer, and Optimize can retrieve it from there.
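A hedged sketch of that purchase example (the transactionTotal key is hypothetical; use whatever your site actually pushes into the data layer):

```js
// On the order confirmation page, the site might push the order value:
window.dataLayer = window.dataLayer || [];
window.dataLayer.push({ transactionTotal: 125.50 }); // hypothetical key

// In Optimize, a Data Layer Variable rule on "transactionTotal" with match
// type "greater than" and value 100 would then target users who have just
// completed a purchase of more than £100.
```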

Personalisation:

These targeting options allow you to easily use Google Optimize for personalisation as well as for testing. For example, you could use Optimize to display a different image or heading for new visitors compared to returning visitors. Alternatively you could change the heading or message for visitors arriving from a specific source of traffic or customize text according to the user’s location.

Reporting test results:

To view the performance of your test variants simply go to your experiment and select the Reporting tab in the top left-hand menu. Alternatively you can view results in Google Analytics by selecting Behaviour>Experiments. This provides a simple improvement overview which compares your variant with the original experience.

Image of reporting from Google Optimize

Here we can see that in my headline test, variant 1 currently has a 69% chance of being the best performing experience. However, the test had only been running a few days, so it was far too early to draw any definite conclusions.

Length of tests:

Google Optimize recommends that all tests are run for at least two weeks. This allows for the weekend effect as people often behave differently during the week when they are at work compared to when they are at home for the weekend. It is also important to consider how long your business cycle is so that you don’t end a test before a full cycle has ended.

After the test has been running a reasonable length of time and you have a sufficiently large sample of users included in the test Optimize will display a definitive recommendation about the test. This is very useful if you are new to testing.

Conclusion:

For a free tool, Google Optimize is a powerful and easy to use A/B testing engine that will meet the needs of most small and medium sized websites. It is by far the best free testing solution currently on the market and it has most of the functions and capabilities of paid-for solutions.

It allows companies with small or even non-existent budgets to conduct tests and begin to personalise their user experience. Google Optimize may be a game-changer as far as A/B testing is concerned; expect to see more organisations begin to run tests and experiment with personalisation. Given the cost of some paid-for solutions, I would also expect organisations whose current testing solution is not being fully utilised to consider switching to Optimize, potentially saving thousands of pounds a year.


Should You Optimise For Your VIP Customers?


What if most revenues are generated by a few VIP customers?

Some websites get most of their revenues from a relatively small proportion of high value VIP customers. This raises the question: should you optimise your site design around your most profitable segment of customers or for the mass of ordinary ones?

How do we optimise the conversion rate?

One of the most scientific methods we use to improve site design and increase the conversion rate is online experiments (i.e. A/B tests and multivariate tests). However, when we run the analysis for such tests the standard practice is to remove outliers, such as high value players (VIPs), to prevent the results from being overly skewed by abnormal observations. Is this practice consistent with a website where a small minority of customers generate the vast majority of revenues?

I was recently asked this question on behalf of an online gambling site. 5% of their users generate over 50% of revenues. Here is what they asked:

“How can you reliably test revenue uplifts in an industry which is driven by outliers? We are removing the top 5% of outliers from tests but that 5% of users is generating ~50% of the revenue. So variants could be winning which aren’t suitable for VIPs, and if they don’t like the changes we could lose a lot of revenue!”

Pareto Principle:

As the Pareto Principle tells us, most sectors have a similar issue: around 80% of profit often comes from 20% of customers. Online gambling may or may not be more concentrated than this, but it is not an uncommon problem. However, trying to predict who the high value customers are when they first land on your site is more problematic.

Image of the Pareto Principle

Moving Target:

Indeed, a key characteristic of high value VIP customers is that most begin their journey looking and behaving the same as the majority of new visitors. However, survivorship bias means that we have a tendency to ignore this fact, and so we concentrate on the characteristics of those who remain rather than considering the nature of those who have been eliminated along the way.

For example, a majority of first time deposits from customers who become VIPs are relatively low. The most frequent amount is often on or near the minimum deposit level. Sure, you get a tiny minority who come in with large first deposits, but they are probably already VIPs on other sites or have a windfall. They do not represent the majority of VIPs.

Think about it: if a large supermarket noticed that high value customers shop more regularly and have more items in their basket, would they re-design the store and remove lines only purchased by lower value customers? Nope, that would be stupid, as lower value customers might one day become high value customers. It would also potentially annoy low value customers and they might shop elsewhere. Higher value customers have the same basic needs; they just happen to have a higher disposable income or a windfall.

High value (VIP) visitors do not represent a fixed pool of customers. The pool is in a constant state of flux as users’ circumstances and behaviour change over time. Very few people, if any, will remain true VIP users throughout their customer life cycle. Their income, luck, assets, lifestyle, attitudes and other factors change as they progress through different life stages.

VIP Customer Intent Is High:

Image of Starburst slot game

Do drug addicts worry about the user experience? Nope, their intent is so strong they will do almost anything to get a fix. Most VIP customers on gambling sites (or other kinds of sites for that matter) are demonstrating similar addictive behaviour.

Like any addict they will jump through hoops to achieve their goal. I doubt very much that many VIPs will be put off by a long form or poorly designed check-out. If they are then god help your other customers.

Conclusion:

VIP or high-value customers certainly need your attention. But that should be through CRM and personalisation to improve their customer experience and retention. However, as such customers are not a fixed group of people you should definitely remove outliers from A/B and multivariate tests.

It would also be counter-productive to optimise a site just for your high value customers. You would potentially turn off non-VIP customers and you would not have the opportunity to nurture customers as they progress through different value segments. In gambling the pool of VIP customers is usually too small to conduct robust experiments, so you would also be in danger of drawing false conclusions due to the law of small numbers.

Conversion Rate Optimisation Strategy Mistakes


10 Top Conversion Rate Optimisation Strategy Mistakes:

There is plenty of advice about conversion rate optimisation strategy on Twitter and other social media, from ConversionXL, Widerfunnel and Hubspot to name but a few. Despite this, many organisations continue to make some basic errors that limit their ability to improve sales and revenues from their conversion rate optimisation strategy. Below are ten of the most fundamental mistakes that organisations tend to make with conversion rate optimisation strategy:

1. Don’t fully integrate web analytics tracking and reporting

Image of Google Analytics behaviour flow report

The saying that if you don’t measure something you can’t tell whether you are improving rings true for website optimisation. Unless you have reliable web analytics monitoring and reporting of your KPIs from the beginning to the end of the user journey, you will never really know how your site is performing and what impact tactical changes have on your revenues. You will also struggle to prioritise effectively, as you need web analytics to identify the value of each step in the user journey. Conversion optimisation strategy depends upon comprehensive and reliable web analytics to inform decision making.

They are also important to validate test results and check the robustness of uplifts. A/B testing solutions only support certain browsers and devices and need to be configured to ensure they cover all important use cases. What if your test doesn’t include an alternative user journey? Your web analytics can help identify these kinds of problems so that you can fix them.

2. Conversion Rate Optimisation Strategy = A/B Testing:

Although A/B testing can be a useful optimisation technique, it is only one of many activities that an organisation needs to use for an effective conversion rate optimisation strategy. The chart below shows the many activities companies use to improve conversion rates. Companies that have an effective strategy will do all of these and more. Furthermore, they won’t begin A/B testing until they have completed a thorough user experience audit to identify and fix problems with the customer experience.

conversion rate optimisation strategy from Econsultancy

Image Source: Econsultancy

3. Rely on Before & After Measures:

This kind of measurement can be misleading because conversion rates continuously fluctuate due to many factors. Competitor activity, website bugs, traffic source, advertising spend and the weather are just a few that can cause your conversion rate to change. Because of this you can only be confident that a change to your website is the reason for a significant uplift or decline in conversion by running an A/B or multivariate test.

These kinds of experiments allow you to isolate the impact of the difference in the customer experience by having a control. This is achieved by randomly splitting traffic between the experiences, so that all other drivers of your conversion rate influence both variants equally.

4. Don’t A/B Test.

Source: Freeimages.com

OK, you’ve fixed your user experience problems. What’s next? Provided you have enough traffic and conversions A/B testing allows you to learn from your mistakes and identify what improves conversion. There are many reasons why organisations don’t conduct A/B testing, but the lack of online experiments can hinder your ability to reduce acquisition and retention costs.

A/B testing enables you to remove subjective opinions from decisions about which design or journey is better at meeting the organisation’s objectives. It also helps develop an evidence-based decision making culture, which is key to a successful conversion rate optimisation strategy.

5. Only track a single measure of conversion:

Image of tape measure

Source: Freeimages.com

It is beneficial to agree a single success metric for your conversion optimisation strategy. This is especially useful for A/B tests as it provides clear direction to everyone creating experiments. But if your success metric is total revenues or sales leads, that doesn’t mean you should ignore other metrics that could suggest a change is counter-productive. For example if you are optimising to increase sales it would be appropriate to also measure average basket value and total revenues to understand how this affects overall profitability. For a conversion rate optimisation strategy to be sustainable it needs to improve long term profitability and not just short-term sales. This means having a long-term vision and suitable metrics to target.

For ecommerce this means monitoring metrics such as average order value, number of items per basket, sales from returning customers and returns. You will then get a better understanding of how the new customer experience influences user behaviour and your bottom line.

A High Bounce Rate

With content marketing a high bounce rate is often seen as an indication of low engagement. But because of the way most web analytics tools calculate bounce rates and time on page, this may not be the case. Google Analytics defines a bounce as a session with a single engagement hit and counts the session time for such a visitor as zero. What if some of those visitors are spending several minutes engrossed in a post before exiting your site? Are they not engaged?

To understand true levels of engagement you need to also track how long bounced visitors spend on a page. This can be done by adding some extra script to your GA tag and setting up events in your web analytics. The point here is that no single metric will ever give you the whole story and it is essential to delve deeper into customer behaviour to truly understand the impact of changes you make to your site.
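One common approach, sketched here for the standard analytics.js ga() tag, is to fire an interaction event after a delay so that an engaged single-page visitor no longer counts as a bounce:

```js
// Fire an engagement event 30 seconds after the page loads. Because it is
// an interaction event, GA no longer counts the session as a bounce, and
// the extra hit also gives GA a second timestamp for time on page.
setTimeout(function () {
  ga('send', 'event', 'Engagement', 'Read for 30 seconds or more');
}, 30000);
```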

It is also essential to segment metrics as there is no such thing as an average customer. Device, browser, and new versus returning visitors are all dimensions that can significantly influence how your conversion rate performs. It’s important to analyse the conversion rate by these segments as otherwise you could draw the wrong conclusions.

6. Don’t have a dedicated team for CRO.

Image of skills required for website optimization

Without a dedicated conversion rate optimisation (CRO) specialist (or a team in larger enterprises), you will not achieve the full potential from optimisation because generalists will struggle to develop the necessary skills or allocate sufficient time to the task. CRO requires specialist skills (e.g. web analytics and heuristic analysis) that take time to acquire and benefit from regular updating.

Developing strong hypotheses for testing is also a time consuming process. As your A/B testing programme matures you may notice that between 50% and 80% of tests fail to generate a significant uplift in conversion. As a consequence you will need to run more tests to generate a reasonable return on investment (ROI).

Marketing generalists should be able to deliver landing page and other tactical tests, but they are unlikely to have the time or expertise to develop the more strategic optimisation roadmap that is required to achieve the full benefits of CRO. Generalists also often fail to develop strong hypotheses or have the time to build more complex tests, as their time horizons may be too short.

Strong Test Ideas

It is essential to have a continuous supply of strong test ideas in your pipeline to achieve the necessary scale of testing required for a good ROI. A centralised CRO team can easily allocate the necessary resource for the development of test ideas and ensure priority is given to websites or pages with the most potential for generating a high ROI. This minimises duplication of effort and facilitates the sharing of test results with all CRO specialists in the organisation.

A fragmented or silo-based approach to CRO is prone to failure because of its inefficient use of resources, often resulting in duplication of effort, and a focus on tactical rather than strategic optimisation. A lack of co-ordination and control of CRO also tends to prevent the implementation of a structured approach to optimisation, as each silo develops its own ad-hoc processes and KPIs. This is generally a recipe for disaster and a reason why CRO will fail to deliver a good ROI.

7. Put junior people in charge of optimisation.

Image of boy dressed in business clothes

Source: Freeimages.com

A/B testing is a form of experimental research and as such should be seen as part of your innovation strategy. It needs to be headed up by a senior person to deal with all the obstacles that prevent change in an organisation. A junior person is unlikely to have the clout to deal with office politics, and almost certainly won’t have the authority to optimise product, sales channels, Customer Services or prioritise development projects.

This is something few companies get: for website optimisation to achieve its true potential you need to look at the whole customer journey, and optimise all the inputs, not just the new customer sign-up-to-buy process. Look at the companies that excel at optimisation. Organisations like Amazon, Spotify, Skyscanner and Netflix all have directors or senior managers in charge of their testing strategy and don’t limit themselves to new customer journeys.

If you don’t have a senior role in your organisation for conversion rate optimisation, consider hiring a conversion rate optimisation consultant. They can review your processes and ensure your conversion rate optimisation strategy is on solid ground.

8. Don’t formulate hypotheses.

Image of a question mark

Source: Freeimages.com

When generating ideas for A/B tests it is important to base the experiment on a hypothesis about how and why the change will influence user behaviour. A hypothesis explains the rationale and also predicts the outcome of the test, so that you know which success metrics to set for the test. The hypothesis needs to be based upon evidence gathered from an agreed optimisation process rather than pure gut feeling, as otherwise you may struggle to learn from successful tests. Without strong hypotheses, A/B testing becomes a random and undirected process that will fail to generate the full benefits of CRO.

9. Don’t have a clear strategy for testing.

6 types of tests to optimise a website page

There is no point relying on low hanging fruit and best practice to direct your A/B testing as these sources will soon run dry and you will lack direction in your testing programme. It’s important that you follow a recognised and structured optimisation process that draws insights from a range of sources, especially from customers.

And yet companies are often more concerned about competitors and copying their ideas than listening to customers. This is a serious mistake and will lead to a sub-optimal testing programme. Customer insight and usability research is vital because to develop strong testing ideas you need to have a good understanding of customer personas, goals, tasks that lead towards goals and how users interact with your website or app.

Otherwise how can you expect to develop hypotheses that predict user behaviour? You could be making assumptions about customers which might not have any basis in reality. The more insights you can get from your customers, the greater the chance you have of identifying a significant problem or improvement you can make to improve conversions.

10. Think it’s all about design.

Image of craigslist.co.uk homepage

Source: craigslist.co.uk

I’ve heard this so many times, but do your visitors really come to your site to look at its design? I don’t think so. People come to your site to complete a task and are rarely interested in your “cool” design. In fact most conversion rate experts agree that all too often ugly wins over beautiful designs.

Just look at Amazon.com and ebay.com; neither is what anyone would call an aesthetically great design. They are functional, they may offer a great deal, and most importantly of all they let users do what they want to do without having to think too much. Conversion rate optimisation strategy must focus on the customer first and not the subjective opinions of designers.

Designers may be good at composing a new webpage or app screen, but that doesn’t mean they understand your main customer segments or know what improves conversions or revenues. Conversion optimisation strategy requires a collaborative process and so designers must work closely with CRO experts to deliver new experiences based upon evidence rather than subjective opinions. Otherwise you will end up with new experiences that are based upon design principles rather than CRO insights and there will be limited, if any, learning from the process.

Does Usability Research Reflect Real Behaviour?


Does Usability Research Measure Reality?

Usability research is essential for checking whether a site or app is intuitive and easy to navigate, helping to create a great customer experience. It helps inform our decisions about the choice architecture. Remote usability research solutions and face-to-face user interviews identify the main usability problems. But do these methods of research reflect real behaviour?

How many usability research proposals acknowledge that the process of undertaking usability research can influence the behaviour we observe? We may have taken users out of their natural environment and set them objectives that lead them to behave in a certain way.

Behavioural scientists have found that many of our decisions are made automatically by our unconscious brain. The context and our underlying psychological goals heavily influence the choices we make. We also behave differently when we are aware that we are being observed.

Asking respondents direct questions is especially problematic as people over-think issues. They switch to their slow, rational brain when encountering a mentally demanding task. Unfortunately most of the time when we are browsing a website we rely on our fast, intuitive, unconscious brain to make decisions without really engaging our conscious thought process. The implication here is that we cannot even access the rationale behind much of our behaviour when interacting with a website.

As Daniel Kahneman, author of Thinking, Fast and Slow, puts it:

“People don’t have reliable insight into their mental processes, so there is no point asking them what they want.”

UserZoom.com prototype testing methods

Source: UserZoom

Context is important:

Avoid taking people away from their natural environment if at all possible. Certainly don’t use focus groups, as this is about as far away from normal browsing behaviour as you can get. How often do you search the web with a group of people you have never met and discuss your likes and dislikes of a site?

This is why remote user testing methods have an advantage over some face-to-face methods. Participants can be in their normal environment, with their normal distractions and so their behaviour is less likely to be influenced by the testing process. Don’t get me wrong, there will still be some bias as a result of the testing method. But it may be substantially less than techniques which take the user out of their normal browsing environment.

Observe and listen rather than ask:

You will get more meaningful insights from simply observing and listening to your users during a usability test as past behaviour is a more reliable indicator of future behaviour. Try to avoid verbal interventions as much as possible. People don’t like to admit when they do something wrong and you are likely to influence how they then behave in any future tasks. If you do want some verbal feedback, just ask your testers to say what they are doing as they go through the task.

But always keep in the back of your mind that usability testing is about informing your judgement, and not to prove or disprove someone’s opinions. It is also an iterative process that should begin early on in the development of a design.

5 second test UsabilityHub.com

Source: UsabilityHub

Implicit Research Methods:

Most of our daily choices are made by our fast, intuitive brain which means we don’t have time to rationalise why we are making those decisions. New implicit research techniques such as functional MRI, EEG, biometrics, eye tracking, facial decoding and implicit reaction time studies (IRTs) are allowing marketers to access the sub-conscious part of the brain to better understand how we respond to communications and designs.

Eye tracking research helps identify which specific elements of a page or message attract our attention, as well as the communication hierarchy of messages. Heatmaps allow us to display this data, revealing the proportion of visitors who noticed each key element on a page, plus the frequency and duration of gaze on each element.

Click and mouse movement heatmaps from visual analytics solutions such as Hotjar and Decibel Insights can provide similar insights for existing pages. For true eye tracking research, though, solutions from Affectiva and Sticky allow you to evaluate both new and existing web page designs.

Clicktale.com heatmaps

Source: Clicktale

A/B Test Usability Testing Results:

In the final analysis, the only way you will know whether a change identified through usability research improves your agreed success metrics is to conduct an online experiment in the form of an A/B test. It is only when visitors are acting on their own impulses and with their own money that you will see how they really behave.

Prioritise the insights you get from usability testing to decide which are worthy of A/B testing. A/B testing will give you the evidence to show exactly how much impact your usability findings have on your conversion success metrics.

How Culture influences Website Design


This post explores the science of how culture influences website design and conversion rate optimisation. Marketing is about persuading visitors to take action. But what if your visitors come from a range of different countries and cultures? Will one strategy work for all visitors even though they come from different cultures? Design and culture are highly interrelated and yet little allowance is often made for cross-cultural differences.

Culture: 

Culture has a deep and pervasive influence on how people perceive and react to web content. For global brands it is important to consider how culture influences website design because they attract visitors from many different countries and cultures. They need to understand how people from different cultures interpret, and respond to such variants as colour, language, images and technology to be able to serve optimal content.

Design does not evolve in a cultural vacuum. For example, McDonald’s has a separate website and uses different colours for every country they operate in. They do not attempt to have a consistent brand design and website for consistency’s sake. They appreciate that culture influences website design because culture affects how people respond to different design and communications.

Singapore/Russia

Image of McDonalds homepage for Singapore and Russia showing how design and culture are interrelated

Germany/Brazil

Image of McDonalds homepage for Germany and Brazil showing how design and culture are interrelated

The most influential research studies on cultural differences in communication were conducted by Geert Hofstede while at IBM and by the anthropologist Edward T Hall when he taught intercultural communication skills at the US State Department. Their work is a must for anyone wanting to understand how culture influences website design, providing many important insights into how deeply design and culture are interrelated.

A Framework for Understanding Culture:

Professor Geert Hofstede conducted probably the most comprehensive study of how cultural values vary by country, analysing data from over 70 countries between 1967 and 1973 whilst working for IBM. He has since used studies, such as the World Values Survey, to validate and refine his cultural dimensions theory. This identifies six cultural dimensions that can be used to explain observed differences between cultures, and can help align design and culture to avoid mistakes when creating an experience for a specific culture.

Hofstede’s 6 Cultural Dimensions: 

1. The Power Distance Index

How is power distributed in a culture? The Power Distance Index is the degree to which people accept and expect inequality in a society. Cultures that score low on this dimension will seek to reduce the level of inequality and expect justification for where it does exist.

2. Individualism versus collectivism

Is a person’s self-image defined by “I” or “we”? Western cultures tend to focus on the needs and wants of the individual. Conversely, Eastern cultures place the needs of the collective ahead of the individual’s.

3. Masculinity

Does a culture have a preference for achievement, heroism, assertiveness and material rewards? If so, to what degree? In this context, femininity translates to collaboration, modesty, caring and quality of life.

4. Uncertainty Avoidance

How comfortable does a society feel with uncertainty and ambiguity? A high score indicates a society that has formal rules and policies and is often intolerant of unorthodox behaviour and ideas. Such societies also like to plan for every eventuality and are more concerned about product specifications than societies that score lower on this dimension.

5. Long Term Orientation

This describes a culture’s time orientation – long-term vs short term. Scoring low means a culture favours long-standing norms and is suspicious of societal change. Cultures that score high are pragmatic and take a long-term view of business.

6. Indulgence versus Restraint

Does a culture restrain or indulge in fun and instant gratification? A high score means a culture encourages instant gratification and enjoying life and having fun. Low scores reflect strict social norms which suppress indulgent behaviour.

Free Resource on Cultural Differences:

By measuring how different cultures compare on these six dimensions we can better understand the common ways culture influences website design. Data from over 100 countries has been made available by the Hofstede Centre. This is very useful if you’re trying to boost conversions by aligning design and culture to improve the customer experience in a cross-cultural context.

For instance, this chart shows us that Japan scores much lower on individualism than the United States. This suggests that web content in Japan needs to focus more on the community and relationships, rather than showing pictures of individuals in isolation. Japanese people don’t like to stand out from the crowd and are more likely to put the needs of society before personal preferences.

Their high score for masculinity reflects their competitive drive for excellence and perfection, together with a strong work ethic. These values should be reflected in web content through both high quality imagery and messaging about how the product quality cannot be beaten.

At 92, Japan is one of the most uncertainty avoiding countries in the World as they like to plan for every eventuality. This means Japanese people usually won’t make a decision until they have reviewed all the facts and figures. Risk assessment and planning tools, as well as detailed and fact based information, could help boost conversions in this cultural context. Design and culture must be aligned here as otherwise visitors will seek the information they are looking for elsewhere.

6 Dimensions of Culture – Country Comparison

Image of table showing Hofstede’s 6 cultural dimension values by country that can be used to align design and culture

Cultural Preferences and Facebook

Art preferences are affected by cultural norms and trends. For example, a study of over 400 Western and East Asian portraits found that the subject’s face on average made up around 15% of the total area of the picture in Western art, compared to just 4% in East Asian portraits.

Similarly, a study that analysed Facebook profile photos found that 12% of Americans’ photos lacked any background, compared to only 1% of photos from the Far East. Both our art and our Facebook profiles reflect cultural ideals and preoccupations that influence our behaviour in all kinds of ways. This is just another way that design and culture are interrelated, and it occurs in all aspects of society.

Western culture emphasizes individualistic and independent traits. People focus on their own face and pay less attention to the background. Eastern culture emphasizes communal and interdependent traits. There is more of a tendency to include context (e.g. the background) and other people in their pictures.

Image of how culture influences how people frame photos - design and culture

Low Context vs High Context Cultures:

The anthropologist Edward T Hall identified differences between high and low-context cultures in how they communicate routine messages:

  • High-context cultures (e.g. China and Japan) have many ‘unwritten rules’.
  • Low-context cultures (e.g. the United States) leave little to interpretation. “It is what it is.”

Low context and high context cultures relate to a number of cultural traits, including commitment, trust, overtness – and even time. Design and culture can be easily aligned here by identifying whether the society has many unwritten rules or people leave little to interpretation.

Monochronic vs Polychronic Cultures:

People in low context cultures often have a monochronic perception of time. This means they see time as tangible and sequential. They follow strict time schedules, focus on one task at a time and set deadlines that they aim to meet at all costs.

High context cultures tend to have a polychronic perception of time, where it is more fluid. Punctuality and structure are less important, deadlines are seen as more flexible and people work on multiple tasks at once.

Monochronic Societies Prefer Simplicity:

So how can we apply these insights to ensure culture appropriately influences website design when we launch in a new country?

Since monochronic societies dislike clutter and fluidity, a simple design with a clear action should work well. Things like:

  • A clear hero image.
  • Short bullet point messaging.
  • Clear focus on the product.

In polychronic cultures, rich context can be displayed using:

  • Multiple graphics, icons, boxes and animation.
  • Animated navigation.
  • Greater complexity.
Check out Chinese e-commerce website Taobao on the left and compare it with the UK’s John Lewis site. Both are very successful e-commerce sites, but vastly different website design approaches due to the cultural values of the countries they operate in. It is wise to consider monochronic and polychronic cultures when designing a user experience for cross-cultural websites. This will ensure culture influences website design in an appropriate and sympathetic way.

Taobao – China/John Lewis – UK

image of Chinese and UK ecommerce homepages from Taobao and John Lewis - design and culture

Colours of our culture:

Colours have different meanings according to where you are in the world (nope, there’s not a colour that converts best). Yet many organisations insist on consistent brand colours across different markets. It could be that you’re losing conversions by not accounting for cultural variations in the associations of colours in different countries.

Brands that align design and culture are normally more successful because their websites and apps are designed according to local cultural preferences rather than trying to impose the cultural norms and traditions of the brand’s home country.

In his book, Drunk Tank Pink, the American psychologist Adam Alter suggests that colours have meaning partly because they are associated with practically every pleasant and unpleasant object on Earth.

As a result our interpretation and preference for colours is strongly influenced by factors such as language, climate, gender, age and context. For example, the way languages categorise colours are not universal (e.g. Russian has two words for blue). Some colours are also used to express moods and feelings in some languages which inevitably affects how we perceive them.

If you’re curious, you can see which colours mean what here: Colours Across Cultures, Translating colours in interactive marketing communications by Global Propaganda.

Colours Mean Different Things to Different Cultures: 

In 1999 American researchers investigated how people from 8 countries perceive different colours. The analysis allowed the researchers to generate a colour spectrum of meaning, with red at one end and the blue-green-white cluster at the other. Red is associated with hot/vibrant meanings, and the spectrum gradually moves towards the calm/gentle/peaceful associations of the blue-green-white cluster.

Testing by international search and conversion agency Oban International suggests that cultural preferences for particular colours may also be driven by strong national associations and brand identities taken from individual sectors of the economy. Joe Doveton tested this hypothesis in Germany where brands such as Siemens, Mercedes and Audi are renowned for promoting engineering excellence as an integral part of their USP.

In tests for global air charter company Chapman Freeborn, they discovered a strong preference among German visitors for a silver button and a big dislike for a red button. Silver in Germany is synonymous with the Mercedes brand. Red may be associated with the old Soviet Union which at one time controlled East Germany. Again, this is why it is important to align design and culture.

Germany – Silver CTA/UK – Red CTA

image of Chapman-Freeborn.com homepage for Germany and UK with different CTA colours according to cultural preferences - design and culture

Use Localised Copy For Personalisation & Conversions:

Your value proposition is the most important element of your communication. The danger of using direct translation, especially for keywords, is that you will end up with copy that uses words out of context. The term “mobile”, for example, is fine in the UK, but people in the United States refer to mobile phones as “cellphones”. In Germany people use a different word again, “Handy”, and in France “portable”. The same term can also have multiple meanings in a language.
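
To make this concrete, here is a minimal sketch of keeping terminology per market rather than translating a single master copy deck. Everything in it – the locale codes, the `phoneCopy` map and the `headlineFor` helper – is a hypothetical illustration, not a recommendation for any particular i18n library:

```typescript
// Hypothetical locale-keyed copy map: each market owns its own
// terminology instead of receiving a direct translation.
type Locale = "en-GB" | "en-US" | "de-DE" | "fr-FR";

const phoneCopy: Record<Locale, string> = {
  "en-GB": "Compare mobile deals",
  "en-US": "Compare cellphone deals",
  "de-DE": "Handy-Angebote vergleichen",
  "fr-FR": "Comparez les offres de portable",
};

// Fall back to the default market when the visitor's locale is unknown.
function headlineFor(locale: string): string {
  return phoneCopy[locale as Locale] ?? phoneCopy["en-GB"];
}

console.log(headlineFor("de-DE")); // "Handy-Angebote vergleichen"
```

The structural point is that “Handy” is never derived from “mobile” by translation; each market’s wording is researched and stored in its own right.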

Understanding your customers is the best way to craft a great value proposition. However, your customers’ preferences will likely vary according to their culture. This is where you can use qualitative research to learn new insights and validate or challenge your existing ideas on how to improve conversions by aligning design and culture. You can then use A/B testing to evaluate different copy and images to identify the best performing messages.

Pro tip: use loanwords in your copy – they’re often left out of copy that is directly translated.

Fonts and Font Sizes:

Fonts often carry visceral connotations, and these vary from culture to culture. For example, in the United States people associate Helvetica with the US Government and the IRS because it is commonly used on tax forms. This again demonstrates how design and culture can heavily influence how visitors view something as simple as a font.

Another example is how logographic language cultures use smaller, tightly packed text, which can confuse American readers. That’s because the language itself (e.g. Japanese) communicates a lot of information in just a few characters. Further, as Japanese doesn’t have italics or capital letters, it is more difficult to create a clear visual hierarchy to organise information. So web designers often use decoration or graphic text to create emphasis where required.

For more on font psychology read this post by Alex Bulat.

Further complicating the issue of conversion across cultures, we have the distinction between bi-culturalism and multi-culturalism.

Bi-Culturalism and Multi-Culturalism: 

In the 2010 US Census around 9 million people (close to 3 percent of the population) associated themselves with two or more ethnic or racial groups. Psychologists have discovered that bi-cultural people engage in frame switching, which means they can perceive the world through a different cultural lens depending upon the context of the situation and whether it reminds them of one culture or another.

So we can’t assume people from a different cultural background (e.g. Vietnamese Americans) will retain all the same preferences as individuals still living in their native culture. Web analytics may help you identify potential bi-cultural visitors.
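
As a rough illustration of the kind of heuristic web analytics makes possible – the country-to-language table below is a hypothetical stand-in, and this should be treated as a signal to test rather than a reliable classifier – you could flag visitors whose browser language does not match the primary language of the country they are browsing from:

```typescript
// Hypothetical primary-language lookup per ISO country code.
const primaryLanguage: Record<string, string> = {
  US: "en",
  VN: "vi",
  DE: "de",
};

// Flag a possible bi-cultural visitor when the browser language
// disagrees with the country's primary language.
function maybeBiCultural(countryCode: string, browserLang: string): boolean {
  const expected = primaryLanguage[countryCode];
  return expected !== undefined && !browserLang.startsWith(expected);
}

// e.g. a Vietnamese-language browser on a US connection.
console.log(maybeBiCultural("US", "vi-VN")); // true
```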

Even among monocultural groups there are strong contrasts in values and behaviour. The concept of honour tends to be more strongly associated with East Asia than with the West. However, even in the United States honour is known to influence behaviour more in southern and western states than in northern states. All this goes back to understanding your customers’ journey and aligning design and culture.

Other Considerations: 

Technology:

We can’t assume people will all be using the same technology in different geographical markets.

  • In Africa, for example, mobile commerce is much more established in certain sectors (e.g. banking) because of a lack of fixed-line internet infrastructure.
  • For various reasons, iPhones have failed to establish a large market share in Spain, so Android and other operating systems are more relevant to the Spanish mobile user.

Browsers:

Browser usage is also fragmented at an international level.

For more detailed information check out data from StatCounter.

Search Engines:

The major search engines use different algorithms for different countries and languages.

  • Although Google has increased its penetration in Russia, the local player, Yandex, is still an important search engine in the country.
  • In China, Google has effectively no presence; Baidu is the top search engine with a market share of over 50%.

For more details of search engine market share see an article from extraDigital.

Payment Methods:

Preferred payment methods vary from country to country. This means having a single cashier or ecommerce check-out design is unlikely to be optimal for a global audience.

  • In Europe, credit card penetration is much lower in Germany, the Netherlands and Poland. For cultural reasons many Germans dislike credit, and as a result the single most popular payment method (38%) is direct debit (ELV).
  • In the Netherlands a similar payment option, iDEAL, is the preferred method of payment for 55% of online shoppers.
  • Security-conscious Russians still like to use cash, and a quarter of them use Qiwi to make online payments. This allows people to deposit cash into ATM-style machines and then make payments online without having to transmit sensitive bank or credit card numbers over the internet.
  • Even in Turkey where credit and debit cards are very popular (87% market share) you won’t see Visa or MasterCard on most cards.
  • In Islamic countries Sharia law prohibits the acceptance of interest or fees for loans and so potentially limits the use of credit cards and other Western style financial products. The expansion of Islamic banking is making e-commerce more accessible to Muslims, but again adds to the complexity of online payment processes and demonstrates the importance of aligning design and culture.

6. Cultural Implications for Optimisation:

Websites that use identical content and colours across all countries and cultures are at a major disadvantage because of the impact that diversity of values, norms and other differences has on how we interpret the world. Here are the key takeaways for optimising a global website by aligning design and culture:

1. Research competitors:

To get a feel for whether your website is out of sync with the local culture, conduct a competitor review of sites in the country concerned. This will give you the opportunity to look for similarities across your competitors’ websites that may indicate areas for A/B testing. (Just don’t copy your competitors; they don’t know what they’re doing either).

2. Focus on colours and words:

There is sometimes a tendency to focus on purely transactional matters (e.g. payment methods) when adapting websites for an international audience. This is a mistake and I would recommend paying attention to your website colours and the language you use to ensure the site conforms to local preferences.

3. Use qualitative research to get a local perspective:

In addition, use local contacts, such as colleagues and suppliers, to obtain feedback on your site in different countries. I’m surprised how often I come across websites and apps where it is obvious that a key page or journey has not had input from someone in the targeted country. Don’t fall into this trap, as it is dangerous to rely solely on website experts who are not embedded in the local culture.

4. Consider cultural dimensions and context:

Utilise the country comparison tool to understand the cultural dimensions of your audience and how contextualised your website needs to be. The more your website reflects local cultural preferences, the more likely your visitors will happily engage and interact with your content. However, use testing to validate your hypotheses; there needs to be a return on investment, otherwise you may be better off spending your money elsewhere.

5. Serve targeted content:

A/B testing is also ideal for evaluating the use of dynamic content to target images and messages that are responsive to how different cultures see the world. This allows you to increase conversions by using geo-targeting (i.e. based upon country IP address) or other cultural indicators and letting the data guide your website design. A minimal sketch of this approach follows the Hertz example below.

Singapore/Chile

Image of Hertz homepage for Singapore and Chile - design and culture

Source: Hertz.com

Both of these Hertz websites sit on the same domain and root directory (Hertz.com), but serve different languages, visuals and locally appropriate text.
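
Here is the minimal sketch promised above of the geo-targeting idea: pick a content variant from the visitor’s country code, falling back to a generic default. The variant content is invented for illustration, and the country code is assumed to come from whatever GeoIP lookup or CDN header your platform exposes:

```typescript
// Culturally targeted content keyed by ISO country code.
interface Variant {
  heroImage: string;
  headline: string;
}

const variants: Record<string, Variant> = {
  SG: { heroImage: "/img/hero-sg.jpg", headline: "Rent a car in Singapore" },
  CL: { heroImage: "/img/hero-cl.jpg", headline: "Arrienda un auto en Chile" },
};

const defaultVariant: Variant = {
  heroImage: "/img/hero-global.jpg",
  headline: "Rent a car worldwide",
};

// Unknown countries fall back to the generic experience. Log the
// assignment so an A/B test can measure any uplift per market.
function variantFor(countryCode: string): Variant {
  return variants[countryCode] ?? defaultVariant;
}

console.log(variantFor("SG").headline); // "Rent a car in Singapore"
```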

6. Analyse customer behaviour:

Cultural targeting has perhaps the greatest potential for your existing customers, where you can track and analyse their behaviour over time. Use your customer database to analyse behaviour by cultural indicators to see if you can identify key cultural drivers of behaviour. Alternatively, try A/B testing personalisation based upon cultural differences to see what impact this has on your KPIs.
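
As a toy sketch of the kind of analysis meant here – assuming a customer export with a hypothetical `locale` field as the cultural indicator and a `converted` flag as the KPI – you could compare conversion rates across cultural segments:

```typescript
// One row per customer from a hypothetical CRM export.
interface CustomerRow {
  locale: string;     // cultural indicator, e.g. "vi-US" or "de-DE"
  converted: boolean; // whether the customer completed the target action
}

// Conversion rate per cultural segment.
function conversionByLocale(rows: CustomerRow[]): Map<string, number> {
  const totals = new Map<string, { n: number; conversions: number }>();
  for (const row of rows) {
    const t = totals.get(row.locale) ?? { n: 0, conversions: 0 };
    t.n += 1;
    if (row.converted) t.conversions += 1;
    totals.set(row.locale, t);
  }
  const rates = new Map<string, number>();
  for (const [locale, t] of totals) {
    rates.set(locale, t.conversions / t.n);
  }
  return rates;
}
```

Large gaps between segments are hypotheses to test, not conclusions: sample sizes and confounds such as traffic source, device and season still apply.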

7. Multiculturalism:

Due to the increasing influence and spread of cultural preferences across the globe, there are likely to be opportunities to segment by cultural indicators even in your home country. There are strong cultural and racial indicators, such as customer names, that you can use to segment your customers and test the performance of targeted content.

Given the complexity of the human psyche and the pervasive power of cultural influences on our behaviour it is dangerous to assume anything when trying to improve website performance. Make A/B and multivariate testing your friend and guide in the multicultural jungle.

For more of our blogs visit conversion-uplift.co.uk/post/.

Voice of The Customer Tools To Boost Conversions

1 comment

Do you want to use Voice of the Customer tools to get feedback from your visitors, but aren’t sure how to go about it? I’ve outlined below a best practice guide on how to use online Voice of the Customer tools to gain insights and increase conversions. I’ve also reviewed over 15 online survey tools for you to use.

When to use Voice of the Customer tools?

Asking people questions hours, days, weeks or even months after a visit to your website is not going to deliver very accurate feedback on your customer experience. Our memories have to be reconstructed every time we recall them, and as a result they change on each occasion they are retrieved.

Voice of the Customer tools, though, allow you to gather data during the actual experience, letting customers express opinions and feelings when, or immediately after, an event occurs. This provides much richer and more accurate feedback on your site. Online survey tools can catch users in the moment when it is best to obtain feedback.

Surveymonkey.com customer satisfaction

Image Source: Surveymonkey.com

How To Use Voice of the Customer tools?

Online Voice of the Customer tools provide an important input into the overall conversion rate optimisation process. They can provide valuable insights to reduce friction and help you develop hypotheses to be validated using A/B tests. Online survey tools can also help in a number of areas, including:

Why
  • What are visitors looking for when they come to your site and is it meeting their expectations? Identify the main use cases – what are people trying to achieve and are they successful?

Barriers
  • What is preventing users from completing their task? Find out what is stopping visitors from doing everything they set out to do.

Missing information
  • Are visitors finding everything they need on a particular webpage? For example, audit your homepage to compare the content with what customers say they are looking for on your website. Segment the data by new and returning visitors as they may have different requirements. This can help identify unnecessary content on your homepage and highlight other information that you should consider replacing it with.

Competitors
  • Which of your competitors’ sites do your customers use? Digital marketing is a zero-sum game: if you can’t convince your visitors to buy from your website, one of your competitors may be more persuasive.
  • Voice of the Customer tools can be used to identify which competitor sites visitors are going to as their expectations will be influenced by these other sites. If your value proposition and customer experience does not compare favourably with these competitor sites you may struggle to convince visitors to convert.

Typeform.com mobile survey

Image Source: Typeform.com

Value proposition
  • What attracted new visitors to your website? Online survey tools can be used to identify which aspects of your value proposition are most appealing to new customers, as this may not be the same as what you have on your website. Use this feedback to develop and test different proposition messages to see if they resonate better with customers.

Nebula by Kampyle

Image Source: Kampyle.com

Bugs
  • When your site is broken visitors can provide you with the evidence you need to fix it. Some online tools automate this process so that you can get screenshots and technical details sent directly to an inbox for quick and efficient resolution of problems.

Bugmuncher.com homepage

Exit surveys
  • Online survey tools are ideal for finding out why visitors leave your site. When users have decided to leave you have nothing to lose by asking them for feedback on what they thought of your site. Ask them if they found what they were looking for, or what would make them return (a bare-bones trigger sketch follows this list).

Abandon basket
  • When someone abandons their basket this is a great opportunity to get their feedback to understand what is behind this behaviour. Has something on your site raised concerns, or are they struggling to get the delivery date they require? Any feedback from these customers may help you identify issues that you can resolve to improve your conversion rate. Online survey tools allow you to create a questionnaire, and you can then email your customers a link to the survey to find out why they abandoned their basket.
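
For the curious, here is a bare-bones sketch of one common exit-intent trigger of the kind these tools implement for you – firing when the cursor leaves the top of the viewport, typically heading for the address bar or close button. It uses only standard browser events; `showExitSurvey` is a placeholder for whichever survey widget you use:

```typescript
// Show an exit survey at most once per page view.
let surveyShown = false;

function showExitSurvey(): void {
  // Placeholder: open your survey modal here.
  console.log("Did you find what you were looking for today?");
}

// relatedTarget === null means the cursor left the document entirely;
// clientY <= 0 means it left through the top of the viewport.
document.addEventListener("mouseout", (event: MouseEvent) => {
  if (event.relatedTarget === null && event.clientY <= 0 && !surveyShown) {
    surveyShown = true;
    showExitSurvey();
  }
});
```

Note that this desktop heuristic has no mouse-based equivalent on touch devices, which is one reason dedicated tools typically also offer timed or scroll-based triggers.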

A word of caution about Voice of the Customer tools:

Online survey tools are great, but don’t take what your visitors say literally. People are complex and we are not always fully aware of our own motivations and the reasons for the decisions we make. Psychology shows us that cognitive short-cuts (e.g. stereotypes and confirmation bias) and our social networks are important drivers of our behaviour. This is why people will say one thing and do something completely different.

For this reason it is a good idea to validate insights from Voice of the Customer tools by looking for supporting evidence from your web analytics, and also by reviewing session recordings from user experience tools. If you have sufficient traffic you may also want to run A/B or multivariate tests to measure the real impact on behaviour. Never rely only on online survey tools to inform decision making, as user insights should be supported by other sources of evidence.

Over 15 Online Survey Tools Compared:

1. Bugmuncher:

Enables users to report problems & automatically sends your company screenshots with details of the browser, the operating system, the path they took & even which browser plug-ins they have installed. An ideal solution for any site that has more than its fair share of bugs to fix. Free trial available.

Price:

Plans range from $19 a month for a single user (Personal plan) to $99 for the Corporate plan with up to 5 users. For most small to medium sized companies the Start Up plan at $49 per month offers good value as it allows up to 3 users and 400 reports per month.

Bugmuncher prices page image
2. Feedbackify:

Voice of the Customer tools like Feedbackify use a fully customisable widget to deliver short online surveys for your visitors to complete. The Feedback Dashboard allows you to view answers with full context, including which page it was submitted from, your customer’s geographic location, browser, operating system, screen size etc.

Price:

Offers a Free full-featured 15 day trial. A single subscription plan costs just $19 a month.

Feedbackify price page image
3. Hotjar:

This is a great solution that offers a range of visual analytics tools (e.g. heatmaps, session recordings & form analytics) together with customer polls, surveys and an on-site usability recruitment tool. See our review of Hotjar analytics for conversion optimisation.

The Free basic service offers up to 3 on-site polls, surveys and recruiters for live usability testing each month. The Pro and Business packages both offer unlimited polls, surveys and usability test recruitment. The Business service also allows you to remove Hotjar branding from the feedback widget.

Price:

The Free Basic plan allows you to run up to 3 polls or surveys a month and obtain up to 300 responses. Pro plans start from €89 a month for up to 20,000 page views a day.

4. InMoment: 

A Voice of the Customer tool that uses an “omnichannel” approach to gathering customer feedback, drawing from various channels such as text, email, video, social media, and more. Most powerfully, through machine-learning “active listening” technology, their platform encourages more in-depth responses from your customers by automatically formulating follow-up questions based on customer input. Finally, their robust analytics and reporting features will gather all your data to show valuable insights, allowing you to make informed business decisions.

Image of inmoment.com homepage

Price:
InMoment is more geared toward enterprise-level companies, and you can request a customised demo. Pricing is determined by location and a company’s specific needs.
5. i-Perceptions:

One of a number of free online survey tools. This delivers a pop up that asks three simple questions to website visitors. The three questions could include: “How would you rate your site experience?”, “What describes the primary purpose of visit?” and “Were you able to complete the purpose of your visit today?” You can use the feedback to understand how people engage with your website and find opportunities for improvement.

Price:

A Free and Enterprise plan. No prices on the website.

6. Omniconvert:

An optimisation solution that also provides Voice of the Customer tools, including a flexible and professional online survey tool. This provides on-click surveys (triggered by clicks on a designated HTML asset), branching logic that ensures the questionnaire responds to the user’s answers, and a segmentation engine for targeting specific user groups. You can either serve pop-up surveys or use a widget which appears at the bottom of the page.

Image of www.omniconvert.com survey tools page

Price:

Free for up to 5,000 tested views and offers flexible paid plan (no pricing guidelines shown).

7. Qualaroo:

Offers a customisable widget for desktop and mobile devices. You can target questions at visitors anywhere on your site, and it includes exit surveys to capture insights from visitors who are leaving your website.

They offer a Free trial, and subscription plans start from $63 a month for desktop. The Professional plan costs $199 a month and includes exit surveys and a mobile survey add-on. The Enterprise plan ($499) provides integration with CRM tools and advanced segmentation.

Qualaroo.com pricing page image
8. Qualtrics:

Offers enterprise Voice of the Customer tools that include Site Interceptor, which allows you to survey visitors as they browse your website. A fully flexible offering that includes over 100 different types of questions, drag-and-drop ordering, advanced flow logic, rich text editing, and the ability to include images, videos and audio in surveys. It also allows you to randomise the order of response categories, set quotas and set up email alerts.

No pricing information on the site.

9. Upwave:

Voice of the Customer tools for those who have never designed questionnaires before and want some advice to complete the process. The tool finds respondents who match your target audience across 17 countries, by age, gender, geography and custom attributes.

You write your survey questions and build your questionnaire using their self-service survey tool, and an analyst will then review it and suggest edits based upon industry best practice. Upwave will then find the respondents for your survey and provide raw data in an Excel spreadsheet and in Statwing, a free partner analysis tool.

Price:

Plans range from $200 a month for Quick Read, which covers up to 200 respondents per survey, to $2,000 a month for Deep Read, which offers up to 2,000 respondents.

10. Alchemer:

Comprehensive online survey tools that can be used to create fully-customisable surveys for distribution through email campaigns in HTML and plain text, on Twitter, Facebook and by embedding them on your website using JavaScript or iFrames.

For mobile forms Alchemer automatically re-formats questions for the device and only displays one question at a time. Mobile surveys also enable use of their File Upload question to gain access to the respondent’s camera, allowing you to capture photos for the study.

Automated reporting tools offer one-click advanced reports and cross-tabs for full analysis of your data. Export data to other data analysis packages. You can also schedule reports and email results to fully automate the reporting process.

Price:

Plans range from just $25 a month for Basic, which offers over 30 question types, to $95 a month for Premier. An Enterprise plan offers multi-user access for an unspecified price.

11. SurveyMonkey:

One of the most well-known and popular online survey tools that enables the creation of most types of surveys, including web, email, mobile, social media, and automated telephone surveys. If you need to find respondents, SurveyMonkey Audience allows you to define your target audience and will then provide you with the feedback you require.

Offers 15+ types of questions, customisable logo and branding and the ability to set skip logic by page and question. Fully integrated with the likes of MailChimp and Eventbrite. Comprehensive real-time reporting available, together with text analysis, SPSS integration, custom reporting, cross-tabs and presentation-ready charts and reports. A Free plan is available for 10 questions and up to 100 responses per survey.

Price:

Subscription plans range from £26 a month (Select), which allows up to 1,000 responses, to £65 a month for Platinum, which offers an unlimited number of responses.

Surveymonkey.com pricing plan page image

12. Surveypal:

One of the most popular Voice of the Customer tools. It positions itself as an enterprise survey tool that uses an intuitive drag and drop style editor to make it easy to build high quality online and cross-device surveys. You can also choose to edit one of their professionally designed templates if you prefer.

They also offer customer support via phone, email and built-in live chat to make the process as stress-free as possible. All support staff are engineers, which means you can expect a high level of technical support to quickly resolve any problems.

Allows you to set up automated email alerts based upon your own business rules to instantly respond to certain types of customers or responses. A flexible reporting tool which provides automated visual presentations in a variety of formats such as PowerPoint, Word, Excel, SPSS and as an interactive dashboard.

Surveypal integrates with Slack, Zendesk, Salesforce and many other apps. Their API also allows you to send, receive and track surveys. A Free plan is available for up to 100 responses.

Price:

Subscription plans cost $40 a month for Premium for 1,000 responses per month. An Enterprise plan is also available with an unlimited number of responses per month.

Surveypal.com pricing page image
13. Temper:

People are emotional creatures, and Temper uses smiley faces in its Voice of the Customer tools to measure how customers feel about your organisation and the topics you ask them about.

It offers three options for delivery of surveys.

  1. Tab – Shows up at the bottom right of every page you install it on.
  2. Inline – Is positioned within your page anywhere you’d like to get feedback on a specific item or experience.
  3. Email – At the end of an email which is great for gauging how your customer support interactions are performing.
Price:

A 60 day money-back guarantee is available on all plans. Subscription plans range from Hobby at $12 a month to White Label at $199 a month.

14. Typeform:

Voice of the Customer tools that aim to delight respondents by keeping them focused on one question at a time and through the versatility of their forms. Provides an enterprise survey tool for use across all devices. Offers a Free plan (Core) for basic users.

Price:

The Pro plan costs $20 a month with unlimited typeforms and responses. A Pro+ plan for teams is currently under development.

15. UserReport:

A Free tool that offers both online surveys and feedback forums. The online survey tool allows you to ask for feedback about your website and gather visitor demographics in over 60 languages. You can either use the ready-to-go survey or customise it with your own logo, colours and questions. Survey results are presented in intuitive reports that can be easily shared and exported as PDF or raw data.

The feedback forums give you the opportunity to gather ideas on how to improve your website. It also allows users to report bugs, submit issues, comment on and vote for ideas online. It works across devices and is fully customisable.

Price:

The solution is currently Free. 

16. UserEcho:

Offers a suite of online survey tools for better customer service and engagement. The main customer feedback tool is their Ideas Forum, which enables customers to ask questions, share ideas and learn. Customers can vote, and you can gather critical feedback on what they like or dislike.

Users can log in via popular social networks, which eliminates the need to go through a registration process. In addition, the Knowledge Base will automatically search for answers when a user writes a query and, in the case of a match, will display the item to the user.

UserEcho also enables live chat conversations with visitors on your site. A 15-day Free trial is available.

Price:

A single plan is available for £15 a month.

17. Uservoice:

Voice of the Customer tools that offer an all-in-one product management platform, making it easy to give customers, partners or internal teams a voice with private-labelled feedback forums. You can collect customer feedback on web or mobile with a native user experience.

Uservoice does not require your customers to register, which encourages participation. The forums work by visitors raising a ticket and then voting on or discussing ideas and possible solutions. The tickets contain useful information on the user, including their OS, browser and the page from which the ticket was raised.

Price:

Basic plan costs $499 a month and the Premium is $999 a month. An Enterprise solution is also available with quotations on request.

18. Voice Polls:

Create questions or use existing templates to poll your website visitors by embedding surveys onto your website or blog. If you agree to run sponsored polls behind your own polls you will earn revenue for every sponsored opinion collected from your site. You can also browse trending polls from other users and add those to your website to see if they improve engagement.

Voice Polls is a Free tool for online publishers. It can help you grow your traffic, engage your readers, learn from them, discover who they are and bring some interactivity to your pages.

Price:

For non-publishers each question is priced at $12.50 and $0.05 per completed survey.

19. WebEngage:

Voice of the Customer tools that offer surveys, feedback forms with in-depth information (including screen grabs) to capture and resolve customer problems, and notifications to display messages to specific audiences (e.g. shopping cart drop-off).

Survey:
  • Collect insights from visitors. Target questionnaires at specific audiences using a rule builder. Get real-time analytics and reports.
Feedback:
  • Add context to your feedback form with custom fields and automatic screen grab features.
Notification:
  • A push messaging tool which lets you display offers, discount codes, product launch announcements etc. to visitors with real-time statistics.
Price:

Plans range from $49 for Basic to $949 per month for the Enterprise Lite solution.

Use Voice of the Customer tools today!

Many of these online survey tools provide free trials and many have free plans, so there is no reason not to give online Voice of the Customer tools a go. Further, using such tools can help encourage a more customer-centric approach to optimisation and website development. People are naturally curious about what potential and actual customers think about their ideas and designs, so assist this process by giving your colleagues the opportunity to capture such feedback.