A/B split tests are excellent tools for discovering which page or feature performs best. However, most A/B split tests are done wrong. Learn why.
What is an A/B split test?
With an A/B split test you discover which of two different versions of a page, feature, etc. performs best.
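As a minimal sketch of the mechanics (hypothetical Python, not tied to any specific testing tool), each visitor is assigned to one of the two variants. Hashing the visitor ID rather than picking randomly is one common choice, so a returning visitor always sees the same variant:

```python
import hashlib

def assign_variant(visitor_id: str) -> str:
    """Deterministically assign a visitor to variant A or B.

    Hashing the visitor ID (instead of random.choice) ensures the
    same visitor sees the same variant across sessions.
    """
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

print(assign_variant("visitor-42"))
```

The `visitor_id` would typically come from a cookie; that detail is omitted here.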
This sounds like a tempting and quick solution; however, this type of testing has a few, yet rather essential, pitfalls:
- The number of visitors is often too low to determine with statistical confidence which page performs best
- To reach enough visitors the test often must run for quite some time, which makes it a slow, rather than a fast, tool
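To make the first pitfall concrete, here is a hedged sketch of a two-proportion z-test in Python; the conversion numbers are made up for illustration. The same relative lift that looks promising with a few hundred visitors per variant is nowhere near significant, while it crosses the conventional |z| > 1.96 threshold with ten times the traffic:

```python
import math

def z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is the difference in conversion
    rates between variants A and B statistically meaningful?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# 5% vs 7% conversion with 200 visitors per variant: not significant
print(z_test(10, 200, 14, 200))
# Identical rates with 2,000 visitors per variant: significant
print(z_test(100, 2000, 140, 2000))
```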
A/B split tests are a guessing game
However, an even more important pitfall overshadows the above: A/B split tests are often a guessing game based on assumptions rather than facts, and they are carried out in the wrong part of the process.
Many Conversion Rate Optimization (CRO) processes look like this:
- A/B split testing different pages
- Conclusions are made based on which page performs the best
- The winning page is implemented
The fundamental problem with this process is that you do not know exactly why:
- You are A/B split testing
- One page performs better than the other
And this leads back to the guessing game: the data cannot explain why one page performed better than the other, or whether you are A/B split testing the right thing in the first place. Furthermore, this process will cost you time and money (not pennies), which is why I would like to share my recommendations on how to improve it.
How to A/B split test correctly
To improve the process you would like to know:
- What to A/B split test
- Which pages or features to A/B split test, and why
The best way to get good answers to these questions is to involve your customers – real people – by carrying out a user test.
What is a user test?
A user test is also called a “think-aloud test” because the users in the test speak about how they perceive the user interface: what they like, why they get stuck, etc.
Most user tests are based on 5 representative customers, which will help you locate approximately 85% of all usability issues within the given tasks, as illustrated by this graph (Source):
A user test is the most powerful conversion optimization tool out there. It does not take a lot of time to test, and it is based on real customers interacting with your e-commerce store. I have conducted more than 100 user tests, and I learn something new every time. This makes me very humble in terms of just how effective (and often very lucrative) user tests are.
Create a “data” mirror
Once you have user tested your e-commerce store (qualitative data), you can compare the findings from the user test with quantitative data such as Google Analytics tracking data.
User tests provide qualitative data, as they “qualify” findings based on observed customer behavior, while tracking tools such as Google Analytics provide quantitative data showing how customers click around your online store.
By matching data from the user test with website tracking data, you can clearly identify data clusters that point out exactly where to improve your e-commerce store. Since you user tested your shop to begin with, you now know why customers get stuck, which helps you find a way to fix the issues.
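As an illustration of such a “data mirror” (all findings and metrics below are hypothetical, not a real Google Analytics export), the matching can be as simple as keeping the pages where a qualitative finding coincides with a suspicious quantitative signal:

```python
# Hypothetical findings from a 5-person user test (qualitative)
user_test_findings = {
    "checkout": "3 of 5 users could not find the shipping cost",
    "search": "2 of 5 users got zero results for plural terms",
}

# Hypothetical per-page metrics from a tracking tool (quantitative)
tracking_data = {
    "checkout": {"exit_rate": 0.62},
    "product": {"exit_rate": 0.25},
    "search": {"exit_rate": 0.48},
}

# Keep pages where a user-test finding coincides with a high exit
# rate (threshold of 40% chosen arbitrarily for this sketch)
matches = {
    page: (finding, tracking_data[page]["exit_rate"])
    for page, finding in user_test_findings.items()
    if tracking_data.get(page, {}).get("exit_rate", 0) > 0.40
}
print(matches)
```

Here both “checkout” and “search” surface as fix-first candidates, while “product” does not, because no user-test finding backs up its metrics.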
Once you have improved your online shop based on this approach, then you should A/B split test pages and feature variations, if needed.
I therefore suggest that you do not start out by A/B split testing your e-commerce store, but instead follow this process:
- User test your e-commerce store
- Compare user test findings with your Google Analytics tracking data (create a “data mirror”)
- Fix issues in the interface when there is a match between qualitative and quantitative data
- A/B split test variations, if needed
I really hope you have enjoyed this post, and do feel free to ask any questions.