Tips and Tricks for Coding Experiments For CRO


A Guide for front-end developers on coding experiments.

I’ve been a self-taught developer since the age of 15, and I’ve spent 10 years of my career working as a freelance web designer and developer. I fell into coding experiments for CRO and A/B testing.

But when I started coding experiments, I found them to be an unusual mix of the familiar and the new. Today I want to share some of the quirks of coding for A/B tests.

1. Disposable code:

The first thing I realised about coding experiments was that whatever code I wrote was likely to be thrown away after the test ended. There are a couple of reasons for this:

  • The experiment can (and often will) fail. A failed test is still a learning experience, but as a developer it can be disheartening to realise that the complex trigger you spent a day building is now irrelevant.
  • If you’re building a client-side experiment in a third-party tool, the code you’re writing manipulates the DOM, so it is unlikely to be useful to the engineers. Manipulation involves using JavaScript to modify the page after it loads, which is just not how websites are built in the source. I’ve had my code called ‘hacky’, which hurt, but it was technically true: I’m hacking the DOM to make a change.
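In practice, a client-side variant often boils down to a small script like this minimal sketch (the selector and the new copy are invented for illustration; a real experiment targets whatever element the test design calls for):

```javascript
// Hypothetical DOM change for a variant. The '.cta-button' selector
// and the replacement copy are assumptions, not from any real site.
function applyVariant(doc) {
  var cta = doc.querySelector('.cta-button');
  if (cta) {
    // Rewrite the call-to-action copy for the variant.
    cta.textContent = 'Start your free trial';
  }
  return !!cta; // true if the change applied (handy when QA-ing)
}

// In a real build you would call applyVariant(document) from the
// variant-code panel of your testing tool.
if (typeof document !== 'undefined') {
  applyVariant(document);
}
```

Returning whether the change applied makes it easy to log or debug cases where the target element never appeared.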
It’s A Learning Experience

It can be difficult to accept that all the time spent struggling to target a certain element, or getting the experiment to trigger at the right moment, was a waste of effort. But the whole point of experimentation is to learn. Even the failures teach us something, and your disposable code is vital to that process.


2. Limitations:

There are always going to be limitations when building an experiment, but there are plenty of workarounds to get the required results. Firstly, don’t use the WYSIWYG (what you see is what you get) editor in third-party tools. It’s usually rubbish and inserts untidy, unreliable jQuery. Nearly all experimentation tools have a developer mode that lets you insert CSS and JavaScript written by you, the expert.

Secondly, if you’re running client-side testing (as opposed to server-side testing), you will be limited to modifying what is already present in the DOM on load. You will have to learn to work within this limitation.

For example:

Creating new blocks of HTML, or editing copy or layouts.

However, if a change is to be generated dynamically, that content needs to be exposed via a data layer.

For example:

Say you want to move a product’s pricing or availability to an earlier stage in the funnel. Without a data layer exposing that information on the page where it’s required, you won’t be able to build the experience. Work with your engineers to ensure a data layer is available for testing. Ideally this should be done early on, when installing your experimentation tool, so that dynamic content can be pulled throughout the site.
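To make this concrete, here is a hedged sketch of reading a product price out of a data layer. The shape of the entries (a `product` object with a numeric `price`) is an assumption; check what your engineers actually expose:

```javascript
// Walk the data layer from newest to oldest and return the first
// product price found. The entry shape here is hypothetical.
function getProductPrice(dataLayer) {
  for (var i = dataLayer.length - 1; i >= 0; i--) {
    var entry = dataLayer[i];
    if (entry && entry.product && typeof entry.product.price === 'number') {
      return entry.product.price;
    }
  }
  return null; // price not exposed, so the experience cannot be built
}

// Usage in the browser might look like:
// var price = getProductPrice(window.dataLayer || []);
```

Returning `null` when nothing is found gives you a clean signal to abort the experience rather than render a broken variant.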

3. Targeting and triggering:

URL targeting in Google Optimize

Making sure the experiment fires at the right time is likely to be the hardest part of a test build, with multiple moving parts to consider. The most common approach is to target the URL. This can be precise, using “equals” or “exact”, which will not allow parameters or trailing slashes. Alternatively, you can target with URL “contains” or “substring”, which matches a URL if a set of characters is present anywhere in it. This is useful for targeting pages with parameters (e.g. referral traffic from social or email). Some tools also give you the option to use regex to target URLs.
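The three matching modes can be sketched in a few lines. This is an illustrative sketch, not any particular tool’s implementation; the `rule` object shape is invented:

```javascript
// Minimal sketch of the common URL-matching modes:
// 'exact' allows no parameters or trailing slash, 'contains' matches
// a substring anywhere, 'regex' applies a regular expression.
function urlMatches(url, rule) {
  switch (rule.type) {
    case 'exact':
      return url === rule.value;
    case 'contains':
      return url.indexOf(rule.value) !== -1;
    case 'regex':
      return new RegExp(rule.value).test(url);
    default:
      return false;
  }
}
```

Note how strict “exact” is: even a trailing slash on an otherwise identical URL fails the match, which is a very common reason an experiment silently never fires.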

You are not restricted to URLs for targeting

Sometimes you may need to use information available in the data layer to ensure your experience fires for specific products only (relevant if the URL structure is unrelated to product type). Or you may need to search for a unique element on the page, or check whether a cookie is or is not present (based on previous exposure to an experiment, for example). And, of course, there are audience segments (new vs returning, referral, device type etc.).
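A cookie-presence check is a good example of the kind of snippet this involves. A minimal sketch, assuming a cookie string in the usual `document.cookie` format (the cookie name is invented):

```javascript
// Return true if a cookie with the given name is present.
// Pass document.cookie in the browser; taking the string as an
// argument keeps the function easy to test.
function hasCookie(cookieString, name) {
  return cookieString.split('; ').some(function (pair) {
    return pair.indexOf(name + '=') === 0;
  });
}

// Usage in the browser, e.g. to exclude previously exposed users:
// if (hasCookie(document.cookie, 'exp_checkout_v2')) { /* skip */ }
```

Matching on `name + '='` from the start of each pair avoids false positives from cookies whose names merely contain yours as a prefix or substring.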

Tests can also be triggered by a user’s action (e.g. clicking on an element, or reaching a certain threshold of items in their basket). Again, most testing tools offer a way to ensure these triggers are met, sometimes via audience segmentation/audience behaviour selectors, sometimes via a custom JavaScript trigger section. Check your tool’s documentation for the solution offered.
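As a sketch of how a custom JavaScript trigger might work, here is a basket-threshold activation that fires at most once per page view. How the basket count reaches this function, and what the `activate` callback does, depend entirely on your tool and site, so both are assumptions:

```javascript
// Build a trigger that calls `activate` the first time the basket
// count reaches `minItems`. Subsequent calls are ignored so the
// experiment activates at most once per page view.
function makeBasketTrigger(minItems, activate) {
  var fired = false;
  return function onBasketChange(count) {
    if (!fired && count >= minItems) {
      fired = true;
      activate();
    }
  };
}

// Hypothetical wiring: call the returned function whenever your
// site's basket updates, e.g. onBasketChange(getBasketCount());
```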

4. QA Experiments:

You cannot QA your experiment enough. Test it throughout the build and check it on the devices it will fire on. Go forward and backward through the user journey, run through the whole journey, and try jumping around to unexpected pages; not all users follow the funnel laid out for them. And make sure you have a peer QA it as well.

Use Ghostery, or an alternative third-party tracker-blocking tool. What I love about it is that if something looks broken on a website, and I’m not sure whether an A/B test is causing the problem, I can use Ghostery to simply stop the testing tool from loading. If the bug still appears, it’s not the experiment. I recommend this browser add-on to all product owners, engineers and CRO specialists. It stops A/B tests getting the blame for every UI problem that appears on a website when, nine times out of ten, the experiment isn’t the problem.

A Word of Warning About Coding Experiments

Sometimes there will be something called an ‘edge case bug’, where the experiment does not fire when it should, or perhaps it looks a little crooked going from one screen size to another. It is vital to weigh the effort of fixing that bug against the chance that a user might see it.

This is where the analytics team are your friends. Get them to check the number of users on that screen size, or how many go from page A to page H and back again. If the impact on users will be very low but the effort to fix the bug is high, it’s often best to simply run the test. Remember, it is temporary: it will be turned off and your code disposed of within a few weeks, and the learnings will outweigh the risk of an edge case bug. But do remember to QA your experiments.


5. Goals:

Goals / KPIs should be established early, within the experiment hypothesis. Do not forget to add them, and confirm that the tracking is working, either via the testing tool itself or via the analytics solution. Adding the goals is part of building a test and should be on your checklist; if you think something should be tracked that is not in the spec, ask. Once a test is live, data tracking cannot be retrofitted.

Goals are usually a form of event tracking, such as clicks on elements or visits to certain pages. Depending on the tool, you may be able to use the WYSIWYG to select the element to track, or you may need to code this in directly. You might even use a tool such as Google Tag Manager to create the event tag and trigger. Check your experimentation tool documentation for instructions.

element.addEventListener("click", fnClickTrack);

function fnClickTrack() {
  console.log("Add the tracking code for your tool here");
}

Goals can also be revenue related, such as average order value, or engagement rates like bounce rate or session time.

6. Miscellaneous: 

If your company isn’t set up for server-side testing, you can still use experimentation tools to switch features that have been coded in the main code base on and off. This is great for testing new features, for larger changes such as element redesigns, or when the experimentation tool itself is clunky for a developer to use (Google Optimize, I’m looking at you). Expose both the control and the variation code in the DOM, then use CSS or JavaScript to show and hide them based on whether the user is bucketed into the variant or not.

if (window.lp__test) {
  console.log('Experiment loaded - activate');
  lp__test.init();
} else {
  console.log('Experiment not loaded yet - set flag');
  window.lp__experiment_flag_test = true;
}

That said, using an experimentation tool to test a full page redesign is not the best use of A/B testing: it is difficult to ascertain which element caused the impact. You should encourage CRO specialists to use qualitative measures, such as user testing, to validate design decisions.

7. Create a code library:

Finally, if you spend any time coding experiments, you’ll quickly realise how much copy/pasting you do. Keep a library of code snippets for quick access to common commands.

For example:

OnClick tracking, setting and retrieving cookies, and checking whether a URL parameter contains a value. These are snippets I use often enough to want them to hand, but not often enough to know them off by heart, so it’s easier to copy/paste them from my code library. Any note-taking tool that suits you can be used to create a code snippet library.
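As one snippet-library staple, here is a sketch of reading a query-string parameter. It takes the search string as an argument rather than reading `window.location` directly, purely so it stays easy to test; the parameter names in the usage comment are illustrative:

```javascript
// Return the decoded value of a query-string parameter, or null if
// the parameter is absent. Works without URLSearchParams, which some
// older tool sandboxes and browsers lack.
function getQueryParam(search, name) {
  var pairs = search.replace(/^\?/, '').split('&');
  for (var i = 0; i < pairs.length; i++) {
    var parts = pairs[i].split('=');
    if (decodeURIComponent(parts[0]) === name) {
      return decodeURIComponent(parts[1] || '');
    }
  }
  return null;
}

// Usage in the browser:
// var source = getQueryParam(window.location.search, 'utm_source');
```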

Conclusion:

Whether you’re new to coding experiments or a seasoned optimisation developer, I hope these tips were useful. There is a lot to consider when coding CRO experiments, and it can be frustrating when someone asks you to “just” make a change without understanding the complexities of manipulating the DOM and QA-ing the work.

Remember to communicate clearly with your team, especially when blockers occur in the experiment coding process. Use these tips to know when to push back and ask for clarification about the work you’ve been asked to do.

Featured image by Pankaj Patel on Unsplash

More reading

How To Set Up and Run Experiments With Google Optimize

How to Track SMS Marketing Campaigns in Google Analytics
