
24.05.2017

4 min read

SMX London: Conversion Rates & The Law of Diminishing Astonishment – Joe Doveton

This article was updated on: 07.02.2022

Why do your CRO programmes “fail”?

Plenty of people within the industry say they’ve tried A/B testing but claim it doesn’t work, and this is frustrating to hear. CRO is a journey, and even a failed test is a success because you learn from it.

1. Programmes aren’t setting the right level of expectation.

Understand that a test programme is for life. Your website is never finished, and there is never a bad time to start testing.

Make sure you get an executive sponsor within your business who gives you a licence to succeed by failing forward.

Between 50 and 90 percent of tests don’t win, and the point is that this is a good thing. Winning is good, losing is good, drawing is good, so make sure that person in your business understands this. If a test wins, you can do more of it; if it loses, ask why it did. Get your sponsor to understand this.

Be flexible.

Get agreement to push at the elastic properties of your brand. Most brand guidelines have some give in them; discuss with your brand team what you’re able to change.

Web development and optimisation teams may need to compromise and meet in the middle.

You may have experienced issues such as not having the resource to build complex tests, or to apply test winners. Meet with your development team upfront, explain the test programme to them, and get them on board.

Have realistic targets based on evidence.

Begin collecting industry conversion rates. Plenty of companies, such as Smart Insights, publish global conversion benchmark reports every year.

Identify pages to start on.

Use the leakage calculation: unique page exits × average page value. This gives an estimated revenue value of the drop-out on each page.
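As a rough sketch of this calculation in practice (the page names, exit counts and values below are invented for illustration):

```python
# A minimal sketch of the leakage calculation: unique page exits
# multiplied by average page value estimates the revenue lost to
# drop-out on each page. All figures here are hypothetical.
pages = [
    # (page, unique exits, average page value in GBP)
    ("/basket", 4_200, 18.50),
    ("/delivery-options", 1_900, 22.10),
    ("/payment", 950, 35.00),
]

for page, exits, value in sorted(pages, key=lambda p: -p[1] * p[2]):
    print(f"{page}: {exits:,} exits x £{value:.2f} = £{exits * value:,.2f} estimated drop-out")
```

Sorting by estimated leakage gives you a shortlist of pages to start testing on.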

2. Testing the wrong things

Do you feel pressured by your boss to test something they want to do? This is called a vanity test. It shows they are interested, but make sure tests are prioritised by their impact on the business.

Having incomplete research

Make sure you back up the changes with evidence. Test pages that show problems in Google Analytics, heatmaps, video recordings and customer surveys.

Having a bad hypothesis.

Having a bad hypothesis is worse than having no hypothesis. Don’t presume the problem; structure the hypothesis as “changing X into Y will get you Z”:

X – what your analysis indicates the problem is

Y – the change you think will solve that problem

Z – the effect you expect that change to have on your KPI

For example: if analytics shows heavy drop-off on your delivery page (X), simplifying the delivery options (Y) should increase completed checkouts (Z).

3. Not reading the data correctly

Measuring the wrong things or having inaccurate data

You need to be able to trust your baseline data, so validate it. Getting a bot traffic report is a good start, as bot traffic may be skewing your data.
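As a rough illustration of the kind of spot-check you could run yourself, here is a minimal sketch; the file name, column name and bot patterns are all assumptions, not an exhaustive bot filter:

```python
# Hypothetical sketch: estimate what share of logged hits look like
# bots from their user-agent strings before trusting baseline data.
import csv
import re

BOT_PATTERN = re.compile(r"bot|crawl|spider|slurp", re.IGNORECASE)

total = bots = 0
with open("hits.csv", newline="") as f:  # assumed export of raw hits
    for row in csv.DictReader(f):
        total += 1
        if BOT_PATTERN.search(row.get("user_agent", "")):
            bots += 1

if total:
    print(f"{bots}/{total} hits ({bots / total:.1%}) look like bot traffic")
```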

Finishing tests early.

Run the test until you reach the minimum appetite for statistical significance, which is around 95%, and aim to capture at least 80% of your sales cycle. Also consider seasonality within your business, as you don’t want your results to be skewed by it.
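For a feel of what clearing that bar means, here is a sketch of a two-proportion z-test, one common way of checking significance; the visitor and conversion counts are invented:

```python
# Minimal two-proportion z-test sketch for an A/B result.
# Returns the two-sided confidence that B truly differs from A.
from math import sqrt
from statistics import NormalDist

def significance(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)               # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * NormalDist().cdf(abs(z)) - 1                # 0.95 means 95%

# Hypothetical numbers: 10,000 visitors per variant
print(significance(conv_a=400, n_a=10_000, conv_b=460, n_b=10_000))  # ~0.96
```

With these made-up numbers the result only just clears 95%, which is exactly the situation where stopping early would have misled you.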

Not looking at segmented data.

Understand how your tests perform by device, by channel and by country. This may lead to new tests.
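A toy sketch of what that segmentation looks like (the rows and field names are hypothetical; real data would come from your testing tool’s export):

```python
# Segmenting A/B results by device: a variant that wins overall can
# still lose on mobile, which is worth a follow-up test. Toy data.
from collections import defaultdict

rows = [  # (variant, device, converted) - illustrative only
    ("A", "mobile", 0), ("B", "mobile", 1),
    ("A", "desktop", 1), ("B", "desktop", 0),
]

totals = defaultdict(lambda: [0, 0])  # (variant, device) -> [conversions, visitors]
for variant, device, converted in rows:
    totals[(variant, device)][0] += converted
    totals[(variant, device)][1] += 1

for (variant, device), (conv, n) in sorted(totals.items()):
    print(f"{variant}/{device}: {conv}/{n} = {conv / n:.0%}")
```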

4. Not having the right workflow

Not QAing experiments properly. There are plenty of guides on this, including this one.

Document everything.

Just in case there’s ever an issue with your website, thorough documentation means you’re able to prove whether or not a test was responsible.

Running before you can walk.

Don’t spend lots of money on tools if they’re not required; you may have more fundamental issues, such as site speed, to fix first.

Trying to test without enough traffic.

The lower the traffic, the longer it takes to get a meaningful result. Don’t run multiple variations if you have low traffic; stick to a simple A/B. Sometimes it’s not worth testing at all, and it may be better to use other CRO tools such as heatmaps.
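To get a feel for why traffic matters so much, here is a rough sample-size sketch using the standard two-proportion approximation at 95% significance and 80% power; the base rate, lift and daily traffic are all hypothetical:

```python
# Approximate visitors needed per variant to detect a given relative
# lift in conversion rate. Standard two-proportion formula; the
# numbers below are illustrative, not a recommendation.
from math import ceil
from statistics import NormalDist

def visitors_per_variant(base_rate, relative_lift, alpha=0.05, power=0.80):
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95%
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * var / (p2 - p1) ** 2)

n = visitors_per_variant(base_rate=0.03, relative_lift=0.10)  # 3% CR, +10% lift
print(f"{n:,} visitors per variant")
print(f"~{ceil(2 * n / 1_000)} days at 1,000 visitors/day split across A and B")
```

Because the required sample scales roughly with 1/lift², halving the detectable lift quadruples the traffic you need, which is also why the “not being brave” point below matters.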

5. Not being brave

Smaller changes take longer to detect. Be aware of this in planning and aspire to make bigger changes: to get headline figures, you need to make headline changes.

Learning from mistakes.

Planning changes you can’t build, promising uplifts that aren’t going to happen, ending tests too early and not killing underperforming tests: these are all common mistakes. Mistakes are an important part of the process, as you learn from them.

Some tests lose – so what?

VWO say only one in seven tests wins. A loss is as valuable as a win because you get the data either way. Win, lose or draw, always ask yourself why you got that result.

Summary

Optimisation is not a magic money tap; it’s a better way of working.

Optimisation and personalisation are a journey, not a destination.

Your site is never finished.

A great quote from Elon Musk summarises this way of thinking:

“There’s a silly notion that failure is not an option at NASA. Failure is an option at Tesla, because if things aren’t failing, you’re not innovating enough.”