So, you have a feature-packed a/b testing tool and you have run countless tests, but as time goes on your conversion rate doesn't seem to shift. Suddenly, all of your time and effort feels wasted and you're beginning to get disheartened. What's going wrong?
If you are not seeing the results you want from your CRO efforts, you may be falling prey to some common CRO mistakes.
In this list, I’ll run through 5 common CRO mistakes you might be making and how you can fix them.
5 CRO mistakes you may be making
1) You are a/b testing everything
When starting out practising conversion rate optimisation, it’s easy to think that the more you test, the more results you will see. Whilst this is partly true, the most important aspect of testing is making sure that you are testing the right things.
Sure, it's fun to make ad-hoc changes to buttons and text in the hope that you're going to strike gold. The temptation of a quick win is always enticing. But in reality, you are most likely going to waste your time and end up frustrated. Unfortunately, this is a common CRO mistake that many digital marketers make.
Should you be testing capitalisation vs non-capitalisation on your main banner heading? Probably not; just because you can doesn't mean you should.
How to figure out what to test
To conduct meaningful tests you need to be able to identify where and why issues are occurring. Don’t think of running an a/b test until you have done this.
To find out where issues are occurring, look into the following:
- Google Analytics reports – Look at different segments. How do different devices stack up against each other? Inspect funnels to see where you are leaking users.
- Click, Heat & Scroll maps – Are people clicking on things that aren't clickable? Is all the important information above the drop-off points on the page?
- Recordings – Try to identify areas of user frustration. What is stopping people from taking action? Are they getting confused?
- Heuristic Analysis – Could the page be clearer? Is value clearly communicated? What areas of friction are there?
To find out why issues are occurring look into the following:
- Polls & Surveys – Use on-page polls and surveys to find out what's stopping your users from taking action. Make sure your questions evoke actionable responses.
- User testing – Remote or internal. Get real people to carry out tasks on your site. Observe their actions and note where they are having issues.
- Internal communication – Speaking to your customer-facing staff can be a great way to gain insights. Your customer care or sales team are communicating with potential customers every day – don’t let this go to waste.
Once you have carried out this research phase, you should have some clear, data-driven insights that you can turn into testing ideas.
2) You are not following a structured CRO framework
So you’re testing some button colours here, reviewing some heatmaps there, but how do you join everything up to create a meaningful conversion optimisation plan?
You need to be following a data-driven, structured framework.
I like to think of frameworks like recipes.
Sure you can have a crack at baking a delicious cake, hoping that it all comes together, but you will have a much better chance of success if you hunt down a good recipe first. The same can be said when using frameworks for your CRO process. You will have a far greater chance of getting the results you want if you follow a thought-out process rather than trying to wing it.
Luckily, there are lots of effective CRO processes out there that you can follow or use to enhance your existing workflow.
Examples of effective CRO frameworks
To achieve optimal results, it’s best to follow a structured process.
Following a framework allows you to ensure you are using a methodical, logical approach to your entire conversion optimisation process. There are a number of CRO frameworks out there – they generally involve a conversion research phase followed by a testing plan.
Here are some great frameworks to get started.
- ResearchXL by CXL – This is a go-to framework as it emphasises the importance of data-driven research to gain insights. There is also a worksheet and prioritisation system to help you take action.
- The Infinity Optimization process by Widerfunnel – Split into two phases, ‘Explore’ and ‘Validate’. This framework helps to generate key insights that can be used to produce meaningful uplifts.
- The Acquire Convert conversion rate optimization process – This is a repeatable process covering 8 key steps.
- invesp Conversion Optimization System – A comprehensive 12-step optimisation process that covers heuristic analysis and qualitative and quantitative research, along with the creation of a conversion road-map.
By studying the different processes that different CRO experts use, you can be confident that you are following an efficient path to achieving your goals.
3) You are not constructing testing hypotheses
How should you conduct an a/b test?
If your answer was to jump right into launching a test, you may need to think again.
Just like the overall CRO process, there needs to be logical thought put into each test that is run.
One of the most common CRO mistakes people make is to start tests with no hypothesis.
A hypothesis is a prediction that you make before you construct your test.
Conducting tests with no written hypothesis is an easy route to frustration and wasted time. By launching a test with no structured hypothesis, you are essentially hoping for the best.
How to construct a solid testing hypothesis
Before you start a test, you will want to consider the following questions:
- What are you going to change?
- How will this affect your user?
- What impact will this have?
Let’s take the following example case study and create a hypothesis using the three factors above.
Let’s say you run a SaaS company and you are looking at ways to increase the number of users that sign up to a free trial of your software.
Before you begin to construct a hypothesis, you conduct your conversion research. You look at a range of quantitative and qualitative data and discover that certain users aren't signing up as they feel they don't understand what your product does or how it can help them.
You now have some data-backed findings that present you with a problem: your value proposition isn't clear enough. So, armed with this knowledge, you can begin to put together your hypothesis as follows:
What are you going to change?
Users don't seem to understand what your product does or the benefit it can provide to them. To remedy this, you may propose changing your banner copy to create a more effective value proposition.
How will this affect your user?
With your value proposition souped-up, your users will now understand that they simply cannot live another day without your product.
What impact will this have?
Sign-ups for your free trial will explode through the roof!
Ok, I may have exaggerated the last two points, so let's put this into a more complete, sensible hypothesis:
If we improve the clarity of our banner copy, then users will clearly understand what our product does and how it can benefit them. As a result, we will see sign-ups for free trials increase.
With a data-driven, structured hypothesis in place, you can still extract meaningful insights, even if your test loses. For example, if your test failed in this instance, it may mean that your new value proposition wasn’t effective enough. From this, you will then be able to create another iteration of the test.
4) You are not segmenting your test results
Your a/b test has just concluded and the variation has beaten the control by 6%. Fantastic news, pop the champagne and apply the winning variant. A winner is a winner, right? Well, not quite.
To get the most out of your testing program, you need to develop insights from all concluded tests – both winning and losing. To do this, it’s important to understand how your test has affected the behaviour of different types of users.
For example, one homepage layout may perform really well for desktop users, while for mobile users it doesn’t have such a profound impact. You may then conduct a second test with a different layout for mobile.
How to segment your data
When concluding tests, consider looking into the following segments:
- Device type – How did different devices perform (mobile vs tablet vs desktop)?
- Geolocation – How did users from different locations stack up against each other?
- Age – How did different age brackets respond to the test?
- New vs returning visitors – Did returning users or new users respond better?
- Logged in vs not logged in – How did logged in users compare to those who didn’t log in?
Some a/b testing programs such as Optimizely allow you to do some segmentation analysis inside of the interface. However, to really slice and dice test results, I always bring my results into Google Analytics.
You can normally import your experiment data into Google Analytics as custom dimensions or, in some cases, advanced segments. Once in place, you can create custom reports to drill down into numerous advanced segments. Grab your pickaxe and mine those results for golden insights.
ConversionXL has a great post about analysing your a/b test results inside of Google Analytics.
Looking into these different user segments will give you valuable insights into how your users have reacted to your tests. You can then use this information to craft new testing ideas and create personalisation strategies.
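Once you have the raw experiment data exported (for example, one row per user with the variant seen, the segment, and whether they converted), the per-segment breakdown is simple to compute. Here's a minimal sketch using pandas – the column names and data are entirely made up for illustration:

```python
import pandas as pd

# Hypothetical export of raw experiment data (e.g. from Google Analytics
# custom dimensions): one row per user.
data = pd.DataFrame({
    "variant":   ["control", "control", "variation", "variation"] * 3,
    "device":    ["desktop"] * 6 + ["mobile"] * 6,
    "converted": [1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0],
})

# Conversion rate per variant, split by device segment.
rates = (data.groupby(["device", "variant"])["converted"]
             .mean()
             .unstack("variant"))
print(rates)
```

A table like this makes it immediately obvious when a variation wins on desktop but flatlines (or loses) on mobile – exactly the kind of insight a single blended conversion rate hides.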
5) You don’t understand when your a/b test should end
When should you stop an a/b test?
Should you stop it once it reaches a 95% statistical significance rating? Or perhaps when it has been running for a few weeks?
The answer is neither.
Whilst both statistical significance and time are important factors to consider when understanding how long your test should run for, they should not be used to independently stop tests.
Don’t stop your test just because it reaches statistical significance.
Mats Stafseng Einarsen put together a really interesting simulation where he launched A/A tests (the same variation tested against itself). He ran 1,000 experiments, each with 200,000 fake participants divided randomly into the two variants.
The results clearly show why you should not end a test just because it has reached statistical significance.
- 771 experiments out of 1,000 reached 90% significance at some point
- 531 experiments out of 1,000 reached 95% significance at some point
When a pre-calculated sample size was taken into consideration, there was a huge reduction in false positive rates:
- 100 experiments out of 1,000 were significant at 90%
- 51 experiments out of 1,000 were significant at 95%
This highlights that you cannot simply stop tests because they are ‘statistically significant’. You need to consider more than that if you are to obtain the most accurate results.
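You can reproduce Einarsen's point in miniature. The sketch below runs a smaller batch of A/A tests (400 experiments of 10,000 users per arm, with an assumed 10% conversion rate – all illustrative numbers, not his exact setup) and compares two stopping rules: declaring a winner at the first "significant" peek versus testing only once, at the planned sample size:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

def aa_false_positive_rates(n_experiments=400, n_per_arm=10_000,
                            p=0.10, looks=10, alpha=0.05):
    """Run A/A tests (identical conversion rate in both arms) and compare
    'stop at the first significant peek' vs 'test once, at full sample'."""
    step = n_per_arm // looks
    peeking_hits = fixed_hits = 0
    for _ in range(n_experiments):
        a = rng.random(n_per_arm) < p   # arm A conversions
        b = rng.random(n_per_arm) < p   # arm B: same true rate
        sig_at_any_look = False
        final_p = 1.0
        for n in range(step, n_per_arm + 1, step):
            ra, rb = a[:n].mean(), b[:n].mean()
            pool = (a[:n].sum() + b[:n].sum()) / (2 * n)
            se = np.sqrt(2 * pool * (1 - pool) / n)
            final_p = 2 * norm.sf(abs(ra - rb) / se) if se > 0 else 1.0
            sig_at_any_look = sig_at_any_look or final_p < alpha
        peeking_hits += sig_at_any_look
        fixed_hits += final_p < alpha
    return peeking_hits / n_experiments, fixed_hits / n_experiments

peek_rate, fixed_rate = aa_false_positive_rates()
print(f"peeking: {peek_rate:.1%}, fixed horizon: {fixed_rate:.1%}")
```

With ten peeks per experiment, the "stop when significant" rule typically flags a difference in well over 15% of these A/A tests, while testing once at the planned sample size stays near the expected 5% – even though the two arms are identical.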
How to know when your test should end
To understand how long your test should run for, you need to first calculate the sample size you will need to conclude your experiment. Evan Miller has an awesome sample size calculator that will help you understand how many participants you need to stop your experiment.
First of all, you will need to enter the conversion rate of your control page. Then enter the lowest uplift (minimum detectable effect) you would like to see. In CRO, most conversion uplifts are referred to in relative terms, so be sure to select the relative option when entering your minimum detectable effect.
In this example, to detect a minimum 5% relative uplift you would need 25,255 users in each variation. In total, this equates to 50,510 visitors needed to participate in the experiment.
This brings into question another important topic (and potential CRO mistake) of a/b testing when you don’t have enough traffic. Looking at the above example, if your page was receiving 50 unique page views a day it would take over 2.5 years to conclude your experiment.
The higher the conversion rate, the lower the sample size you need – so more profound changes may work better for sites with lower traffic. For more information on how to do CRO on sites with little traffic, check out this post from ConversionXL and Slipstream Digital.
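If you'd rather script this than use a calculator, the standard two-proportion formula gets you in the same ballpark. The sketch below assumes a 20% baseline conversion rate (an assumption on my part – different calculators also vary slightly in the exact formula they implement, so expect numbers close to, not identical to, the 25,255 above):

```python
import math
from scipy.stats import norm

def sample_size_per_variant(baseline_rate, relative_mde,
                            alpha=0.05, power=0.80):
    """Approximate sample size per variant for a two-proportion z-test."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_mde)   # relative MDE, as in CRO
    delta = abs(p2 - p1)
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    pooled = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * pooled * (1 - pooled))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / delta ** 2)
    return math.ceil(n)

n = sample_size_per_variant(0.20, 0.05)   # assumed 20% baseline, 5% MDE
days = math.ceil(2 * n / 50)              # total sample at 50 views/day
print(n, days)
```

Doubling `n` for the two variants and dividing by daily traffic gives the run-time estimate from above: at 50 unique page views a day, you are looking at roughly a thousand days – which is why low-traffic pages need larger minimum detectable effects.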
Minimum time to run tests
Even if you can reach your sample size and statistical significance in just a week, it doesn’t mean that you should conclude your experiment. Your experiment needs to account for the regular changes in user behaviour that occur over your business cycle.
For example, if you launch your tests the same weekend that a big sale is happening or when an offline marketing campaign is launching, you will need to consider that this will change user behaviour.
To combat this, you need to run your tests to account for the following (to name a few):
- Changes in traffic types (organic, paid, social etc)
- Social promotions
- Offline marketing campaigns
A good rule of thumb is to run tests for a full month, but you will still need to assess your own business cycle and ensure that your test accounts for the variances that will occur over time.
Before you start your next test, take some time to think about these points:
- How large does my sample size need to be?
- What is the minimum time my test should run to account for variations in user behaviour?
With these two questions answered, you will now have a clear idea of when your test should end.
The common theme across all the mistakes in this list is failing to take a structured approach.
CRO is a complex practice that requires plenty of forethought before you take action. Whether you are constructing a single hypothesis or your entire testing plan, a systematic outlook on CRO will serve you well.
Do any of these CRO mistakes look familiar to you? What are some of the CRO mistakes you have made in the past? Let me know in the comments below!