How To Set Up and Run Experiments With Google Optimize

Google Optimize is a free A/B testing and personalisation solution that is revolutionising how small to medium-sized organisations optimise their websites. Unlike most A/B testing tools, Optimize doesn’t require any financial commitment to conduct online experiments or personalisation campaigns.
You can conduct up to five simultaneous online experiments and ten personalisation campaigns on your website with Google Optimize. This may seem like a small number of tests, but many organisations struggle to have more than five A/B tests running on a single website at any one time. That’s often because of the need to avoid conflicts with other experiments, and because it takes time to develop strong hypotheses and create designs to deliver new experiences.
1. Create an account:
If you have a Google-registered email address, you can log in to Google Optimize and create an account within a few seconds. As you may want to create an account for your organisation, ask your IT department to provide you with an appropriate new email inbox.
Register your new email address with Google by going to your personal profile, clicking ‘Add another account’ and then ‘Create account’. You can register it using your organisation’s domain rather than ‘gmail.com’.
Now you can create a container for your website. You should create a separate container for each domain on which you wish to conduct experiments or personalisation. A container has a 9-digit ID which you will need when integrating with Google Tag Manager (GTM). Each container also has its own unique JavaScript snippet, but you won’t need that if you are implementing Optimize with GTM.
To find the container ID, click on ‘Settings’ in the top right-hand corner of your container. Copy and paste it into your Tag Plan. Below this, click on ‘Link to Analytics’ to choose the correct Google Analytics property and the view you want the data to go into. This will allow you to analyse results in GA.
To use the visual editor to create new experiences you will need the Google Optimize Chrome extension. Click the button to add the extension to your browser. This is one of the more limiting aspects of Optimize, as there is no ability to edit or preview experiences in other browsers.
2. Notifications and Permissions:
Scroll down to ‘Additional settings’ and first invite colleagues to join Optimize. Here you also set permissions: publish, edit or read. Limit publishing rights to only the most senior personnel. It’s a good idea to discuss the approval process with stakeholders, but we will cover this later.
You can now set email notifications for when an experiment or personalisation is published or stopped. By inviting IT or other senior colleagues to join Optimize, you can use this functionality to reassure anxious IT colleagues that you won’t forget to inform them when a campaign goes live.
3. Implement Optimize:
When implementing Google Optimize you can either add a snippet of code directly to your site or use GTM to manage the tag. From a performance perspective it is better to add the code directly to your site using either the synchronous or the asynchronous snippet, because it loads more quickly when it doesn’t have to wait for the GTM container to respond. In most instances you won’t need the anti-flicker snippet with the synchronous script.
To install Optimize directly on a site, ensure the code is implemented in this order (a combined sketch follows the list):
- <meta charset>
- Data layer initialization (can be used in Optimize and GTM)
- Setting cookies
- Any scripts to set JavaScript variables that you want to use in Optimize experiments.
- Anti-flicker snippet
- Google Optimize script
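Putting that order together, a minimal sketch of the top of the <head> might look like this. The individual snippets are covered below; OPT_CONTAINER_ID is a placeholder for your own container ID.
<head>
  <meta charset="utf-8">
  <!-- 1. Data layer initialisation -->
  <script>window.dataLayer = window.dataLayer || [];</script>
  <!-- 2. Any cookie-setting scripts go here -->
  <!-- 3. Scripts that set JavaScript variables used in your experiments -->
  <!-- 4. Anti-flicker snippet, if required (see section 7) -->
  <!-- 5. Google Optimize script -->
  <script src="https://www.googleoptimize.com/optimize.js?id=OPT_CONTAINER_ID"></script>
  <!-- Other tags (e.g. the GTM container) follow -->
</head>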
Data Layer Initialization:
Use this script for the data layer initialisation (note the camel case in dataLayer):
<script>
window.dataLayer = window.dataLayer || [];
</script>
Synchronous Code Sample:
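The synchronous snippet follows the same pattern as the asynchronous one shown below, just without the async attribute, so the browser waits for optimize.js before rendering:
<script src="https://www.googleoptimize.com/optimize.js?id=OPT_CONTAINER_ID"></script>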
Note: Replace OPT_CONTAINER_ID with your Optimize container ID.
Asynchronous Code Sample:
The advantage of the asynchronous code is that it doesn’t block the rest of your page from loading while Google Optimize loads, so it has the least impact on your site’s load-speed performance. However, you may see some page flicker, which you can prevent by adding the anti-flicker snippet.
<script async src="https://www.googleoptimize.com/optimize.js?id=OPT_CONTAINER_ID"></script>
Note: Replace OPT_CONTAINER_ID with your Optimize container ID.
However, the performance difference between a direct implementation and GTM is a small fraction of a second. If you work for an organisation where getting a code snippet added to the website takes several months, you may prefer to compromise and implement Optimize via GTM instead.
Implementing Google Optimize via GTM
Go to your GTM container and Tags.
- Click ‘New’
- Tag Configuration > select ‘Google Optimize’ from the list of options
- Input Optimize container ID
- Select > Google Analytics Settings Variable
- Save without trigger
Now select your Page view tag:
- Scroll to select ‘Tag Configuration’ > Advanced Settings > Tag Sequencing
- Check ‘Fire a tag before this tag fires’
- Click menu to select the Optimize Tag
- Set the Optimize tag to fire once per page
- Save the tag
Now publish your GTM container so that you can test the implementation in Optimize.
4. Select the type of experiment:
Go back to your Optimize account and select the container for the domain you have implemented in GTM. To create an experiment or personalisation click on ‘Create experiences’ on the far right-hand side of the screen.
Give it a suitable name and consider including a suitable prefix to help you with documenting and retrieving a test or personalisation. For example, T001 might be used for a test, whilst P001 would be used for a personalisation. Keep it simple, as if you make it too complex people are bound to make mistakes.
Paste the full URL of the page you wish to use for the new experience. You now have at least four options to choose from.
- A/B tests to compare at least one variant with the default page
- Multivariate to test two or more sections of a page (called recipes)
- Redirect or split test where you test different pages using their URLs
- Personalisation where you target content to specific visitors
Be careful about multivariate tests because you need lots of traffic to run a conclusive test and even then, you should validate the winning recipe before implementing it. An ex-colleague, who is now Director of Experiments at Netflix, once told me it was better to do lots of well-designed A/B tests than try and cut corners with multivariate tests. I thought that was wise advice.
Make your choice and click ‘Create’ and you will be taken to the draft experience screen.
5. Making changes with the visual editor:
Scroll down to ‘Description’ and summarise the nature of your campaign. If you have chosen an A/B test you can now create variants by clicking on the ‘Add variant’ button. This launches the visual editor and allows you to select and edit the HTML on the targeted page.
When the page loads you will see the app bar (at the top of the screen) which tells you which variant you are editing and allows you to change the device category view of the page. On the lower right of the screen you will see the editor panel which provides a menu of the types of changes you can make with the visual editor.
To make changes to the page:
- Select the element on the page that you wish to change (e.g. ‘Get a Free Consultation’ button).
- Click ‘Edit’ on the editor panel.
- Select the type of change from the editor panel (e.g. edit text).
- Make the change (e.g. remove the existing text and enter the new copy).
- Click ‘Save’ on the app bar.
- Click ‘Done’ on the app bar and this will return you to the Optimize console.
6. Draft experience settings:
Scroll down to the ‘Measurement and objectives’ section of the console.
- Set the correct Google Analytics property and view if not already selected.
- Choose the objectives for your experiment. You can choose from a pre-selected list of objectives or use a GA custom event that you have configured in GTM.
- Validate your installation is working by clicking on the ‘Check installation’ button.
- Turn on email notifications so that you can’t forget to inform your colleagues using Optimize when you begin or end an experiment.
- Adjust the traffic allocation if you really must. Personally, I find it counterproductive to start a test on a small proportion of traffic. It means you have to wait longer to know whether a test is causing a large decline in your primary objective, and you are more likely to react to a large but statistically insignificant fall due to the law of small numbers.
- Set your activation event, which is normally a page load. If your experience targets a dynamic page or single-page application, the page is likely to have multiple components that load asynchronously from each other. Fortunately, you can use an activation event to trigger the experiment instead of a page load. An activation event requires a data layer push using the code below, which you can fire wherever it is needed.
dataLayer.push({'event': 'optimize.activate'});
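For example, on a single-page application you might push the activation event once a dynamic view has finished rendering. The callback name below is purely illustrative:
<script>
// Hypothetical callback, called once a dynamic view has rendered
function onViewRendered() {
  window.dataLayer = window.dataLayer || [];
  // Activates any Optimize experience configured to use this custom event
  window.dataLayer.push({'event': 'optimize.activate'});
}
</script>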
Personalisation campaigns
It is a similar process to configure a personalisation campaign.
- Define the page for editing of the experience.
- Set page targeting rules, such as URL, host, page path or fragment.
- Configure your target audience from a pre-set list including UTM parameter, device category, geography or behaviour. There are also advanced rules which include query parameter and data layer variable.
7. Anti-Flicker Snippet:
If you notice a flicker when testing your experiments you can install an anti-flicker snippet on your website. This hides the page for a split second to allow Optimize to respond before your webpage is shown, thus preventing any flicker.
It cannot be implemented via GTM as it must be placed directly on the page. If you have installed Optimize using GTM, you will need to input your GTM container ID into the code where it shows the Optimize container ID. You may find Simo’s blog on how to measure the impact of A/B test flicker useful, as there are always compromises when adding an anti-flicker snippet.
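For reference, Google’s page-hiding snippet looks roughly like the version below; check the current Optimize documentation before copying it. Replace 'OPT_CONTAINER_ID' with your Optimize container ID (or your GTM container ID if you implemented Optimize via GTM), and note that 4000 is the timeout in milliseconds after which the page is shown regardless.
<!-- Anti-flicker (page-hiding) snippet; place it before the Optimize script -->
<style>.async-hide { opacity: 0 !important} </style>
<script>(function(a,s,y,n,c,h,i,d,e){s.className+=' '+y;h.start=1*new Date;
h.end=i=function(){s.className=s.className.replace(RegExp(' ?'+y),'')};
(a[n]=a[n]||[]).hide=h;setTimeout(function(){i();h.end=null},c);h.timeout=c;
})(window,document.documentElement,'async-hide','dataLayer',4000,
{'OPT_CONTAINER_ID':true});</script>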
8. Test your experience:
Although you are probably keen to get your new experience live, always allow plenty of time for testing and debugging. Click on ‘Preview’ mode to view exactly what your users will see when they enter your experiment. This gives you the option to open in a web, tablet or mobile mode. If they are relevant, check each device type and then use the ‘Debug’ mode.
The ‘Debug’ mode allows you to review the experience and the targeting rules before you begin the experiment in a live environment. It opens in a new browser tab so that you can check how it renders in Chrome. It also helps with resolving problems because it displays which rules are evaluated as true or false.
9. Get approval to start your experience:
Solutions like Optimize can cause anxiety amongst developers and other IT professionals. That’s partly why having a clear process to obtain approval for experiments and personalisation campaigns is essential to obtain the support of senior stakeholders and IT. It’s important to only launch campaigns once all the agreed steps have been completed and you have approval of the relevant stakeholders.
In some organisations campaigns are started by a designated person in IT. This might seem somewhat bureaucratic, but if that’s what it takes to proceed with experimentation and personalisation, it’s better than getting bogged down in internal politics. Once you have data to demonstrate the benefits of experiments and personalisation you are likely to find it much easier to proceed with your campaigns.
When approval has been obtained get the designated person to click the ‘Start’ button and begin collecting data.
10. Reporting:
Never report results from an experiment in the first few days because your data will suffer from the law of small numbers. You can almost guarantee that any large uplift in the first day or so won’t persist as small samples tend to generate extremes. Optimize will normally begin to show results after a couple of days.
To review how your experiment is performing click on the ‘Reporting’ tab. You can also access raw data in Google Analytics.
Bayesian Statistics:
Optimize employs Bayesian inference to produce its reports. This assigns a probability to a hypothesis and updates that probability as more data is collected.
Bayesian inference has the advantage that it calculates the probability that one variant is better than the other experiences, which is easier to explain and comprehend than the output of traditional hypothesis-testing approaches. It also enables you to end a test as soon as Optimize estimates there is little more to gain by continuing to run the experiment.
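To give a feel for the idea, the sketch below estimates the probability that a variant beats the original by sampling conversion rates from Beta posteriors. It is an illustration of the general Bayesian approach, not Optimize’s actual model, and the conversion counts are made up:
// Illustrative only: estimate P(variant beats original) from conversion counts
// using Beta posteriors and Monte Carlo sampling.
function randNormal() {
  // Box-Muller transform for a standard normal draw
  let u = 0, v = 0;
  while (u === 0) u = Math.random();
  while (v === 0) v = Math.random();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}
function randGamma(shape) {
  // Marsaglia-Tsang method, valid for shape >= 1
  const d = shape - 1 / 3;
  const c = 1 / Math.sqrt(9 * d);
  while (true) {
    let x, v;
    do { x = randNormal(); v = 1 + c * x; } while (v <= 0);
    v = v * v * v;
    const u = Math.random();
    if (Math.log(u) < 0.5 * x * x + d - d * v + d * Math.log(v)) return d * v;
  }
}
function randBeta(a, b) {
  const x = randGamma(a);
  return x / (x + randGamma(b));
}
// Made-up conversion data for the original (A) and the variant (B)
const a = { conversions: 120, sessions: 2400 };
const b = { conversions: 138, sessions: 2400 };
let bWins = 0;
const draws = 100000;
for (let i = 0; i < draws; i++) {
  // Beta(1 + conversions, 1 + non-conversions) posterior with a uniform prior
  const rateA = randBeta(1 + a.conversions, 1 + a.sessions - a.conversions);
  const rateB = randBeta(1 + b.conversions, 1 + b.sessions - b.conversions);
  if (rateB > rateA) bWins++;
}
console.log('Estimated P(variant beats original): ' + (bWins / draws).toFixed(3));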
Reporting Tab:
In the header of the report you will see the test’s primary objective, status, start date and a recommendation, for example ‘Keep your experiment running’ together with an estimated end date. There is also a link to the Google Analytics report.
In the body of the report you will see the modelled primary metric per session and the modelled improvement. The latter is shown as a range and indicates how much better or worse the variant is performing than the default experience.
It is generally good practice to run any experiment for at least two weeks. This allows for data over two weekends because many sites see different user behaviour during weekdays compared to weekends. For this reason, Optimize won’t display an outcome until it has collected data for at least two weeks.
Recommendations From Optimize:
Optimize will display recommendations about your experiment including:
- ‘Keep your experiment running’ – This means the modelled improvement is not sufficient to show a conclusive result. You should continue with the experiment until one variant has a 95% probability of beating the default experience.
- ‘No leader found’ – This indicates there is insufficient data to declare a winning variant. However, this means you could use the variant experience as it is unlikely to do any worse than the default.
- ‘At least one variant is better than the original’ – There isn’t sufficient data to identify which variant is best and so you could continue with the test to see if you can get a conclusive result.
- ‘The original is the leader’ – This suggests none of the variants are better than the default.
- ‘One or more leaders found’ – Indicates that more than one of your variants is clearly performing better than the default, although it is not yet certain which of those leaders is best.
- ‘A variant is the leader’ – This shows that only one of your variants is conclusively better than the default experience.
Don’t forget you can also analyse your experiment or personalisation in Google Analytics. This may help you identify new insights by segmenting the data by relevant dimensions. It also helps you avoid Simpson’s Paradox, which can occur when we aggregate data rather than drilling down to relevant segments.
Conclusion:
Given that Google Optimize is a free A/B testing and personalisation solution, you can’t expect it to be as sophisticated as paid-for tools. I do find the installation validation a little buggy, and the visual editor is limited because it only runs via the Chrome extension.
The cap of five simultaneous experiments may also be restrictive for some organisations, but most organisations I have come across don’t have the capacity to run that many tests at once anyway. They could save literally thousands of pounds a month by switching to Google Optimize.
That’s why Google Optimize is the ideal solution for websites wishing to take their first steps into experimentation and personalisation. It gives you the opportunity to gain experience in what is a complex process and allows you to learn what works and what doesn’t.
Too often organisations splash out large sums on expensive experimentation platforms before they have established the necessary teams and internal procedures. It is important that you agree a systematic approach to conversion rate optimisation because otherwise you will struggle to achieve sustainable uplifts and you may lose the support of key stakeholders.
Experimentation requires a culture change in an organisation and consequently it needs to be managed like a change programme. It challenges many existing practices and can create anxiety amongst managers who have previously had control over content and front-end development. For this reason, it’s important to plan, engage, collaborate and review to get the most out of experimentation and personalisation.