A/A Test

Definition
An A/A test is an online experiment in which both variants are identical. It is normally conducted to check whether your A/B testing software is measuring conversions accurately. In such a test you would expect the conversion rate to be flat, with no significant difference between the two variants.
Why run A/A tests?
- To check that new A/B testing software is processing data accurately.
- To ensure the software has been implemented correctly.
- To cross-check test results against your web analytics platform, which can reveal potential issues with your A/B tests and show whether the tool integrates fully with your web analytics so that you can analyse your data at a more granular level and get different insights from your tests.
- To establish a baseline for your conversion rate and monitor changes over time. This gives you a confidence interval, so you understand the range your conversion rate normally fluctuates within (see the sketch after this list). This can help you set internal A/B testing guidelines on the lift threshold required before you consider a change statistically significant.
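As a rough illustration of that last point, the sketch below uses a standard normal approximation to estimate the range a baseline conversion rate would normally fluctuate within; the visitor and conversion counts are made-up numbers, not figures from any particular tool.

```python
import math

def conversion_rate_ci(conversions, visitors, z=1.96):
    """Normal-approximation confidence interval for a conversion rate.

    z = 1.96 corresponds to a 95% confidence level.
    """
    rate = conversions / visitors
    # Standard error of a proportion
    se = math.sqrt(rate * (1 - rate) / visitors)
    return rate, rate - z * se, rate + z * se

# Hypothetical baseline: 1,200 conversions from 40,000 visitors
rate, low, high = conversion_rate_ci(1200, 40000)
print(f"Baseline conversion rate: {rate:.2%} (95% CI {low:.2%} to {high:.2%})")
```

Any "lift" that sits comfortably inside this interval is indistinguishable from normal fluctuation, which is why a baseline interval is useful when setting internal significance guidelines.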
What do A/A tests tell you?
If there is a significant difference between the two identical variants in an A/A test, this could be an indication that your A/B testing software is not recording or calculating results accurately, or that it has not been implemented properly.
However, there is also the possibility that the test was not run properly, or that the difference is simply random variance, which naturally occurs because a sample of visitors, rather than all visitors, is measured (i.e. sampling error). For example, a 95% confidence level means that in one out of 20 cases a winning result will occur due to sampling error rather than a true difference in performance between the two variants, as the simulation below illustrates.
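A minimal simulation makes this concrete. It assumes a simple two-proportion z-test and made-up traffic and conversion numbers: both variants share the same true conversion rate, yet roughly 5% of the simulated A/A tests still come out "significant" at the 95% confidence level.

```python
import math
import random

def z_test_significant(conv_a, n_a, conv_b, n_b, z_crit=1.96):
    """Two-proportion z-test: True if the difference is 'significant' at ~95%."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return False
    z = ((conv_a / n_a) - (conv_b / n_b)) / se
    return abs(z) > z_crit

random.seed(42)
true_rate = 0.03            # same true conversion rate for both variants
visitors_per_variant = 5_000
runs = 1_000

false_positives = 0
for _ in range(runs):
    conv_a = sum(random.random() < true_rate for _ in range(visitors_per_variant))
    conv_b = sum(random.random() < true_rate for _ in range(visitors_per_variant))
    if z_test_significant(conv_a, visitors_per_variant, conv_b, visitors_per_variant):
        false_positives += 1

print(f"'Winning' A/A tests: {false_positives / runs:.1%} (expect roughly 5%)")
```

So a single "significant" A/A result is not, on its own, proof that the tool is broken; it may simply be one of those one-in-twenty cases.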
The disadvantages of A/A tests
There is an opportunity cost to running A/A tests: the traffic and time spent could instead go to an A/B test that might actually improve your conversion rate, whereas an A/A test only validates your software and establishes benchmarks.
A/A tests also require much larger sample sizes than a normal A/B test, because you need enough data to be confident there is no statistical difference between two identical versions, and detecting very small differences demands very large samples (see the sketch below). This will inevitably reduce your A/B testing activity and the potential return on investment you get from your tool.
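To see why sample sizes grow so quickly as the detectable difference shrinks, here is a rough sketch using the standard two-proportion sample-size formula; the baseline rate, power and lift values are illustrative assumptions, not recommendations.

```python
import math

def sample_size_per_variant(p1, p2, alpha_z=1.96, power_z=0.84):
    """Approximate visitors needed per variant to tell p1 from p2
    at ~95% confidence and ~80% power (two-proportion formula)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((alpha_z + power_z) ** 2 * variance / (p1 - p2) ** 2)

baseline = 0.03  # hypothetical 3% conversion rate
for lift in (0.20, 0.10, 0.05, 0.01):  # relative lifts to detect
    target = baseline * (1 + lift)
    n = sample_size_per_variant(baseline, target)
    print(f"Detecting a {lift:.0%} relative lift needs ~{n:,} visitors per variant")
```

The smaller the difference you want to rule out, the larger the sample: ruling out a 1% relative difference needs millions of visitors per variant, which is why "proving" two identical pages perform the same ties up far more traffic than a typical A/B test.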
As a result, some optimisation experts advocate triangulating data instead of running A/A tests. This involves using two sets of analytics (e.g. your A/B testing software and your web analytics) to cross-check metrics. If you have an existing web analytics platform fully integrated on your website, use it to compare against your testing tool's data and identify whether the tool is giving you an accurate picture.
Conclusion:
A/A tests can be a useful tool for validating new A/B testing software, but bear in mind that all experiments suffer from bias to some degree. Ideally you should not rely on a single method for checking the accuracy of your tests. Ensure you have web analytics integrated with your A/B testing software so that you can compare metrics on a regular basis.