Conversion Rate Optimisation Strategy

I’ve previously written about conversion rate optimisation strategy mistakes, where I outlined some fundamental errors organisations make when implementing digital optimisation programs. I have also written about the conversion rate optimisation strategies used by successful companies. Here I bring the two together to look at conversion rate optimisation strategies from a clever/stupid perspective.

[Image: conversion rate optimisation strategies mapped on a clever/stupid matrix]

Really clever – sounds stupid:

Do you need a user acceptance testing (UAT) team? Not if you ask your developers to test their own changes so that they get them right the first time, and then A/B test each change before it is fully rolled out. This makes developers more accountable, as they can’t rely on a UAT team to identify their bugs.
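To make the A/B testing step concrete, here is a minimal sketch of a two-proportion z-test of the kind you might run before deciding on a full rollout. This is an illustration, not a prescribed method, and all the visitor and conversion numbers are made up.

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from control A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se                         # z-score

# Illustrative numbers: 10,000 visitors per arm
z = two_proportion_z(conv_a=520, n_a=10_000, conv_b=580, n_b=10_000)
print(f"z = {z:.2f}")  # |z| > 1.96 ~ significant at the 5% level (two-sided)
```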

Take most of the control over tactical changes to your sites away from the highest paid person’s opinion (HiPPO) and committees by agreeing to use online experiments to inform teams about the effectiveness of proposed changes.

To short-cut building your own internal team, consider bringing in expert consultants who have the experience and credibility to shake the organisation up and get things done.

Sounds stupid – really stupid:

Changing content is not an optimisation strategy; it is content management, even though some marketers like to call it optimisation.

Vanity metrics, such as likes and shares, are meaningless if they don’t impact the bottom line. Monitoring such metrics invites the cobra effect, where people optimise the metric itself in ways that damage the business.

Listen to customers; they are your most important stakeholders. But don’t take what they say literally, and don’t do what they ask without first testing the idea to measure real behaviour. People are poor at predicting their own future behaviour because the choice architecture influences their decision making, and many complex, contributory factors shape the final outcome.

Dismissing usability testing as just common sense is as stupid as it sounds. And focus groups are not usability testing, so don’t use them as a substitute. Enough said.

Sounds clever – really clever:

With the development of AI solutions and evolutionary algorithms, it is now feasible to optimise the whole customer journey at once (see the sketch below).
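As an illustration only, here is a toy evolutionary algorithm in Python that searches for the best combination of page variants across several journey steps. Everything in it is an assumption: the step and variant counts are arbitrary, and the fitness function is a stand-in for what, in a real program, would be measured conversion data.

```python
import random

STEPS = 4       # journey steps, e.g. landing, product, basket, checkout
VARIANTS = 3    # candidate variants per step

def fitness(genome):
    """Stand-in for the measured conversion rate of a variant combination.
    In a live program this would come from experiment data, not a formula."""
    return sum((v + 1) * (s + 1) for s, v in enumerate(genome))  # toy objective

def evolve(pop_size=20, generations=30, mutation_rate=0.2):
    # Start from a random population of variant combinations
    pop = [[random.randrange(VARIANTS) for _ in range(STEPS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, STEPS)    # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate: # occasional random mutation
                child[random.randrange(STEPS)] = random.randrange(VARIANTS)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())  # best combination found, one variant index per journey step
```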

Establishing a culture of experimentation and learning through testing ideas out should be a given.

Having a central team of conversion rate optimisation experts who work closely with stakeholders and seek input from the wider business is the most efficient and effective way of using such expertise.

Diversity of people and inputs is key to a successful innovation and change management program. CRO is inherently collaborative, so it needs to be run as a collaborative process.

CRO needs senior people with clout to cut through the crap of the highest paid person’s opinion (HiPPO) and the internal politics generated by trying to use evidence, rather than subjective opinion, to make decisions.

Sounds clever – really stupid:

Trying to control everything is stupid and unrealistic in any context. To develop a culture of experimentation, you need to seek ideas and help from all parts of the organisation.

IT alone won’t solve optimisation – it needs the support of the whole organisation.

Keeping experiments secret and not circulating results just limits the organisation’s ability to develop the right culture.

Relying on departmental specialisms ignores the expertise of conversion rate optimisers, who bring together skills from a number of disciplines. It is a very counter-productive approach to optimisation.

Optimising sites separately. When you have more than one digital brand, the last thing you should do is allocate separate optimisation resource to each site or app. Why test on a small brand with little traffic when you can complete the same test much more quickly, and with a higher degree of confidence, on a larger, more profitable brand? Prioritise resources according to where they can have the most impact rather than creating silos for each brand; the sketch below shows why traffic matters so much.
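A rough sample size calculation shows the scale of the difference. The sketch below uses the standard normal-approximation formula for comparing two proportions at roughly 5% significance and 80% power; the baseline rate, target lift, and daily traffic figures are purely illustrative.

```python
from math import ceil

def sample_size_per_arm(base_rate, rel_lift, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per arm to detect a relative lift,
    at ~5% significance (two-sided) and ~80% power."""
    p1 = base_rate
    p2 = base_rate * (1 + rel_lift)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil(((z_alpha + z_beta) ** 2) * var / (p2 - p1) ** 2)

n = sample_size_per_arm(base_rate=0.05, rel_lift=0.10)  # 10% relative lift
for daily in (2_000, 50_000):  # small brand vs large brand, per arm
    print(f"{daily:>6} visitors/day per arm -> about {ceil(n / daily)} days")
```

With a 5% baseline rate and a 10% relative lift, each arm needs roughly 31,000 visitors: a little over two weeks on the small brand, but about a day on the large one.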

Why on earth would you want to stop testing at peak times? Sure, you shouldn’t assume that a peak-time result will replicate in non-peak periods. However, peak time is when you have the most traffic and the greatest potential to improve revenue. With high traffic levels you can also complete tests more quickly than at any other time, so you would have to be stupid to waste the opportunity. If you want to maximise revenue while the test is running, you could use multi-armed bandit testing to facilitate this, as sketched below.
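For illustration, here is a minimal Thompson sampling bandit. Each arm keeps a Beta posterior over its conversion rate, and every visitor is routed to the arm with the highest posterior draw, so traffic concentrates on the better variant as evidence accumulates. The conversion rates below are simulated, not real data.

```python
import random

def thompson_sampling(true_rates, visitors=10_000):
    """Route each visitor to the arm with the highest draw from its
    Beta(wins + 1, losses + 1) posterior, then record the outcome."""
    wins = [0] * len(true_rates)
    losses = [0] * len(true_rates)
    for _ in range(visitors):
        draws = [random.betavariate(wins[i] + 1, losses[i] + 1)
                 for i in range(len(true_rates))]
        arm = draws.index(max(draws))
        if random.random() < true_rates[arm]:   # simulated conversion
            wins[arm] += 1
        else:
            losses[arm] += 1
    return wins, losses

# Illustrative: control converts at 5%, challenger at 6%
wins, losses = thompson_sampling([0.05, 0.06])
for i, (w, l) in enumerate(zip(wins, losses)):
    print(f"arm {i}: {w + l} visitors, {w} conversions")
```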

More reading

What Was The Top Blog Post of 2016

Artificial Intelligence & Conversion Optimisation
