We’re going to look at how statistical validity can be used to create the perfect CRO test and how it can help to speed up your process of running split tests.
Some people will tell you that a slow steady pace for a conversion rate optimization test (CRO test for short) is good – but I’m here to tell you that fast is better.
I’ve always believed this, in spite of the trouble it’s caused me.
Being strapped to a rocket will always be better than slithering down a garden path.
That is why God made fast motorcycles, face-melting g-force and conversion rate optimization tests.
The problem that most businesses face is that they’re so concerned about doing as many tests as possible, as fast as possible, that they end up with experiments that yield poor results or skewed data from a split test.
Very few people understand this, including many who claim to be experts in conversion rate optimization.
As far as I’m concerned, it’s a damned shame that a field as potentially dynamic and vital as conversion rate optimization should be overrun with dullards, hag-ridden with myopia and complacence, and generally stuck in a bog of stagnant mediocrity.
Perhaps Conversion Sciences said it best:
“You might know just enough about split testing statistics to dupe yourself into making major errors.” – Jacob McMillen, Write Minds
This is why I’ve written this article about statistical validity and my aim is to help you to avoid making errors with your CRO tests.
CRO tests are a great way to make your website more efficient at selling.
We’ve all heard of split testing, but how does it work? What are the benefits of a CRO test? How can you use ab testing to improve your business?
Split testing (also known as ab testing) is a type of validity test where you test different versions of your website and find out which one gets the best results.
You can split-test anything from the colours of buttons and the images shown on a page all the way through to copywriting and layout changes.
It’s a form of statistical test that results in you knowing that more people click one type of button, more people join your newsletter list when a particular headline is used or perhaps more people complete a purchase when your checkout form has fewer fields.
The possibilities are endless and we’re going to break down everything you need to know about this powerful conversion rate optimization strategy.
You’ll learn what split testing is, why it works so well and how you can use statistical validity to implement it correctly in your own marketing strategy today!
By the end of this article, you will be an expert on creating the perfect CRO split test.
So let’s get started!
Your data will tell you what CRO split tests to do.
Lee Stafford is a successful ecommerce store that sells a wide range of hair products such as shampoo and straight shine serum.
However, when you look at the screenshot above, you’ll see that the button with the text “Start Shopping” doesn’t stand out against the other shades of pink.
As a CRO expert, one of my first suggestions would be to change it to a high-contrast colour so that it is more distinguishable such as this:
This simple change to the colour of the “Start Shopping” button is a CRO test that I would fully expect to produce more conversions.
However, the website’s analytical data must support this hypothesis before any changes are made to the page.
Every business eventually comes to the conclusion that every visitor to their website who doesn’t convert is a potential customer that they’re missing out on.
And sure, getting more people to buy your products or services may sound easy, but it’s actually a lot harder than it looks.
Experience with conversion rate optimization does help, but it’s important to consider that every website and every industry behaves differently – at times wildly different – with no obvious explanation for why conversion rates vary.
This means that there is no such thing as “one size fits all” when it comes to conversion rate optimization.
Here lies the problem with CRO tests.
It isn’t until you begin to collect and study your website’s data that you begin to understand which CRO tests/experiments you should do.
However, even if you know what to test, very few split tests are done correctly because most don’t use statistical validity.
Further down the data hole.
Okay so before we dive into what statistical validity is, I want to issue a minor warning when it comes to knowing which ab test you should start with.
It all starts by looking at your web data (and yes, this is where the warning comes in).
When it comes to a website, we use web data collected by platforms such as Google Analytics to understand how people use a website.
A more formal way of looking at this is through convergent validation and discriminant validation.
Convergent validation is a measurement that refers to different variables that should have an effect on one another and indeed do have an effect on one another.
A hypothetical example of convergent validation in web data may be that a make-up store gets more women visiting the website than men.
In psychology, discriminant validation tests to see if concepts or measurements that are not expected to be related are actually unrelated.
Taking the above example of Google Analytics data for a make-up store, a discriminant validation test might be used to confirm that a variable you would expect to have no bearing on conversions, such as the visitor’s browser, really is unrelated.
So why is this important?
Over the past decade and a half, I’ve dealt with hundreds of clients. While they may have successful businesses, their preconceived ideas about who their customers are and why they buy their products and services are more often than not wrong.
Take the example of the make-up brand.
I’ve seen make-up brands make almost the same amount of revenue from men and women. Even when the brand is completely focused on women in a certain age group, men have had double or triple the conversion rate of women.
After surveying male customers, we found that many of them made birthday or holiday-related purchases. Therefore, we started segmenting this traffic and used personalisation to increase conversion rates, average order values and customer lifetime values.
Therefore, while you may think that you know your customers inside and out, the reality is that you can never truly know your customers until you do convergent and discriminant validation tests.
The greatest mania of all is trying to understand web data. To become a natural at it, you need a strong hypothesis, a gut for numbers and the right conditions for a CRO test, along with well-designed experiments for websites, popups and emails.
This may sound simple, but web data is one mucky rabbit hole after another, and without the correct processes in place it can skew the results of an experiment.
You may think that your experiment is a success or a failure when, in reality, it is the opposite.
Statistical validity is a critical part of ensuring a successful CRO test or website experiment and sadly it is mostly overlooked by conversion specialists.
What is statistical validity?
In data science, statistical validity is a fancy-pants term for ensuring that accurate conclusions to a hypothesis can be drawn from a fair and unbiased experiment.
Variables are any characteristics that can take on different values, such as height, age, species, or exam score.
However, when applied to web data, these variables become age, gender, the device used, location, interests and much more.
In conversion rate optimization, we regularly study the effect of one variable on another one. For example, we may want to know the relationship between different age groups and average order values.
There are two different types of variables in a study of a cause-and-effect relationship and they are called the independent and dependent variables.
- The independent variable is the cause. Its value is independent of the other variables in your experiment
- The dependent variable is the effect. Its value depends on changes in the independent variable
In data science, there are three things that must be true for a study to have statistical validity:
1. Statistical validity must have random selection.
The first thing that you need for statistical validity is a randomly selected sample group.
In other words, the people who participate in the split test must be chosen at random and must not know they are participating in an a/b test.
Otherwise, they may act differently than they normally would because they know what is happening to them.
By ensuring that each website visitor believes that they have the same user experience as everyone else, they will not form a bias or expectation when using your website which may influence the results of your experiment.
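To make this concrete, here is a minimal sketch of how random assignment is often done in practice (the `assign_variant` helper and the visitor IDs are hypothetical, not tied to any particular CRO tool): hashing each visitor’s ID splits traffic effectively at random, while guaranteeing the same visitor always sees the same variant.

```python
import hashlib

def assign_variant(visitor_id: str, variants=("control", "treatment")) -> str:
    """Deterministically assign a visitor to a split-test variant.

    Hashing the visitor ID gives an even, effectively random split,
    while the same visitor always lands in the same bucket, so they
    never notice they are part of a test.
    """
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same visitor gets the same experience across sessions
variant = assign_variant("visitor-42")
```

Using a hash rather than a fresh coin flip on every page view matters: a returning visitor who saw the control yesterday must not see the treatment today, or the bias described above creeps in.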
2. Statistical validity must have controlled experiments.
Secondly, website experiments need to be conducted under strict control so you can isolate one variable at a time and see how it impacts an outcome – like testing bounce rates, click-through rates or how many people click an add-to-cart button when you use different types of headlines.
3. Statistical conclusion validity.
Statistical conclusion validity is the accuracy of the results from an experiment with your website.
More often than not, when it comes to split testing websites, statistical conclusion validity is very straightforward: one variation will perform better than the others.
For most conversion rate optimization specialists, this is enough validation to make the appropriate changes to the website to get more conversions.
However, part of our process at GoGoChimp is to test with a significant amount of traffic over a long period of time.
This is because we cannot come to a valid statistical conclusion unless we’re confident that the volume of website traffic, the source of that traffic and the time of year have not influenced the split test.
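As a rough illustration of what a valid statistical conclusion looks like in code, here is a minimal sketch of the standard two-proportion z-test commonly used to compare a control against a variant (the function name and the conversion numbers are made up for illustration):

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare two conversion rates with a two-proportion z-test.

    Returns the z statistic and a two-tailed p-value; p below 0.05 is
    the conventional threshold for declaring a winner.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical test: 120/2400 conversions on control, 156/2400 on the variant
z, p = two_proportion_z_test(120, 2400, 156, 2400)
```

With these made-up numbers the p-value lands below 0.05, but as noted above, a significant result on its own says nothing about whether traffic volume, traffic source or seasonality skewed the test.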
What are the different kinds of statistical validation?
Now that we know why statistical validation is important for improving conversion rates, it’s time to look at the different kinds of statistical validation that are relevant to running website experiments.
Construct validity is a type of statistical validity that ensures that the actual experimentation and data collection conforms to the theory that is being studied.
For example, an exit questionnaire about why someone decided to not make a purchase on your website, must provide an accurate picture of what people really think.
Content validity is important for statistical validity because it ensures that the split test (or questionnaire) covers all aspects of the variable being studied.
This is important because if the test is too narrow, then it will not predict what it claims.
For example, in a CRO split test, the colour of a button should not be limited to just one or two colours, as the experiment will only produce a narrow set of results.
Therefore, to satisfy content validity, a valid split test would run many variations of the button using numerous colours and hues.
Face validity is one way for an experimenter to evaluate his/her initial thinking about what needs to be studied in an assessment program.
The face-validity criterion is the extent to which it appears that a given tool, implementation or test actually measures what it purports to measure.
In other words, if you were to study the different open rates between two email subject lines, face validity asks whether, on the face of it, measuring open rates in your chosen newsletter platform is a sensible way to judge which subject line performs better.
Conclusion validity ensures that the conclusion reached from the data sets of the experiment is correct.
For example, the sample size should be large enough to predict any meaningful relationships between the variables being studied.
If not, then conclusion validity is being violated.
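To sketch what “large enough” can mean in practice, here is a rough per-variant sample-size estimate using the usual normal-approximation formula for a two-proportion test at 95% confidence and 80% power (the helper name and the example rates are assumptions for illustration; real planning tools refine this):

```python
import math

def sample_size_per_variant(base_rate: float, relative_uplift: float) -> int:
    """Rough visitors needed per variant to detect a relative uplift.

    Uses the normal-approximation formula for a two-proportion test at
    95% confidence (z = 1.96, two-tailed) and 80% power (z = 0.84).
    """
    p1 = base_rate
    p2 = base_rate * (1 + relative_uplift)
    z_alpha, z_beta = 1.96, 0.84
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# e.g. a 3% baseline conversion rate, hoping to detect a 20% relative lift
n = sample_size_per_variant(0.03, 0.20)
```

Even a modest 20% relative lift on a 3% baseline demands well over ten thousand visitors per variant, which is exactly why underpowered split tests violate conclusion validity.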
Internal validity ensures that the end result is not distorted by the design, manipulation or statistical analysis of the experiment itself.
This is a concept that refers to how confident we can be about the cause and effect relationships in an experiment and in some cases a CRO tool such as Unbounce will automatically assign a confidence rating to each variant tested.
Just because there is a statistical relationship between two factors does not necessarily mean that one factor has caused the other to change; this statistical relationship could just have occurred by chance.
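You can see this chance effect with a quick A/A simulation: both “variants” below share exactly the same true conversion rate, yet the observed rates will almost always differ a little (a minimal sketch; the rate and sample size are made up):

```python
import random

random.seed(7)  # fixed seed so the simulation is repeatable

def simulate_aa_test(true_rate=0.05, n=1000):
    """Simulate an A/A test: both 'variants' share the same true rate.

    Any gap between the observed conversion rates is pure chance,
    which is why a statistical relationship alone proves nothing.
    """
    conv_a = sum(random.random() < true_rate for _ in range(n))
    conv_b = sum(random.random() < true_rate for _ in range(n))
    return conv_a / n, conv_b / n

rate_a, rate_b = simulate_aa_test()
```

If your testing tool would have declared one of these identical pages the “winner”, that is exactly the trap internal validity guards against.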
It’s internal validity to which we refer when we want to know whether one phenomenon really does cause the other.
Let us assume that you wanted to find out if displaying your business telephone number and live chat increases the chances of closing more sales.
If you were not genuinely interested in answering questions from customers, that attitude may influence how prominently these elements are displayed on your web pages and could therefore compromise statistical validity.
CRO experts often have a difficult time proving that their findings represent the wider population in real-world situations.
They face many difficulties, such as small sample sizes and setting up experiments with artificial conditions rather than observing natural ones.
The main criterion of external validity is process generalization: whether or not results obtained from one set of subjects can be extended across various segments to make predictions about an entire segment’s behaviour.
For example, you might assume that a landing page variant shown to people living in Glasgow will also produce the same set of results for people located throughout the United Kingdom.
The external validity of an experiment is determined by how well the results can be applied to other segments and populations.
If there are some significant limitations in applying this specific set of data, then it may not be able to give us accurate insights into what might happen with more generalized situations or people.
It’s time to use statistical validity in your CRO tests.
While all of the above may seem a little complicated, each stage of statistical validity is a form of common sense.
Now it’s time for you to buy the ticket and take the ride. Use statistical validity to enhance your CRO split tests and speed up the process of finding what will improve your conversion rates.
Thanks for reading and don’t forget to discuss this article below.