Comparing A/B split testing and MVT

Working in an email marketing environment, I have been exposed to two distinct sides of testing: traditional A/B testing and Multi-Variate Testing (or MVT). Personally, I like them both, as each has its own advantages and limitations in what it can do. This post will take a look at the two and how they can help you achieve some amazing results, all from an email marketing point of view.

A/B Testing
A/B testing, also known as ‘split testing’, is a method of email optimization in which the conversion rates of two email campaigns are compared to one another using live traffic, with customers being bucketed into one version or the other. By tracking the unique confirmed opens and unique clickthroughs, you are able to determine which version of your email generated the better level of engagement, and thus, was more successful.
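As a rough illustration of that comparison, a minimal sketch might look like the following. The helper function and all the numbers here are hypothetical, not taken from any particular platform or campaign:

```python
# Hypothetical numbers for illustration -- not from a real campaign.
def engagement_rate(unique_opens, unique_clicks, delivered):
    """Return (open rate, clickthrough rate) as fractions of delivered emails."""
    return unique_opens / delivered, unique_clicks / delivered

# Version A vs. version B, each delivered to 5,000 subscribers.
open_a, ctr_a = engagement_rate(unique_opens=1200, unique_clicks=180, delivered=5000)
open_b, ctr_b = engagement_rate(unique_opens=950, unique_clicks=210, delivered=5000)

print(f"A: {open_a:.1%} opens, {ctr_a:.1%} clicks")  # A: 24.0% opens, 3.6% clicks
print(f"B: {open_b:.1%} opens, {ctr_b:.1%} clicks")  # B: 19.0% opens, 4.2% clicks
```

Note that in this made-up example version A "wins" on opens while version B wins on clicks, which is why it matters to decide up front which metric defines success for the test.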

What are some of the common uses?
One of the main things I have tested with A/B testing is two different subject lines. For example, one subject line could address the customer directly, while the other could be phrased as a question. Subject line A could be personalised with the customer's first name, while subject line B could be generic. A/B testing is a great way to determine which approach drives the better open rate.

You can also test elements within the email itself, for example two different header banners. In one campaign, we sent two emails that were identical in content except for the header banner: in version A, the banner combined copy with products from the email, while version B was copy alone. The results showed that version A, with both copy and images, was better received than the copy-only banner. It is important to note that while only the banner differed, it's the email as a whole that is tracked, not individual elements.

When running an A/B test, I would recommend sending to 10% of your database. This segment is small enough to gauge which version will perform better, yet leaves enough subscribers in the remainder of your database for the winning version to have a real impact.
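The arithmetic behind that 10% recommendation can be sketched as follows (the database size here is hypothetical):

```python
# Sketch of the 10% test split described above (numbers are hypothetical).
database_size = 50_000
test_fraction = 0.10

test_segment = int(database_size * test_fraction)  # 5,000 receive the test
per_version = test_segment // 2                    # 2,500 each for versions A and B
remainder = database_size - test_segment           # 45,000 receive the winning version

print(per_version, remainder)  # 2500 45000
```

The point of the split is visible in the numbers: only 2,500 subscribers ever see the losing version, while 45,000 get the winner.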

A/B testing is a powerful and widely used testing method. Keeping to only two or three variables at any one time means that tests can deliver accurate data quickly. And because you're only sending to a small subset of customers to begin with, it doesn't take long to produce a winning result.

The main limitation of A/B testing is summed up by the name. It is best used to measure the impact of between two and four variables within your email campaign. Tests with more variables take longer to run, and if you have a small database, you won't get accurate results from the split test.

If you need information about how many different elements interact with one another, multivariate testing is the optimal approach!

Multi-Variate Testing
MVT uses the same core mechanism as A/B testing, but compares a larger number of variables and reveals more about how those variables interact with one another. The purpose of a multivariate test is to measure the effect each design combination has on the ultimate goal. Once enough customers have opened the email for the test to run effectively, the data from each variation is compared to find not only the most successful design, but also which elements have the greatest positive or negative impact on a customer's engagement.

As an example, I have worked on a number of campaigns that used multi-variate testing; the most successful one tested the call-to-action button in the email. The campaign used five different colours for the button and five different messages, a mix of urgency-driven and more informational copy.

What are some of the common uses?
Multi-variate testing is most commonly used to test multiple elements in an email, for example the call-to-action button, a product icon and a header banner. To test this properly, you need to create every version of each element you want to test: using the example above, all 25 call-to-action buttons (every combination of colour and message), as many product icons as required and the full set of header banners. You would then send your email out to your database so that all possible combinations of these elements are received.
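To see how quickly those combinations add up, here is a quick back-of-the-envelope calculation. The button counts come from the campaign example above; the icon and banner counts are hypothetical:

```python
# 5 colours x 5 messages = 25 call-to-action buttons, as in the campaign above.
button_colours = 5
button_messages = 5
product_icons = 3    # hypothetical count
header_banners = 2   # hypothetical count

cta_buttons = button_colours * button_messages            # 25 button variants
combinations = cta_buttons * product_icons * header_banners

print(cta_buttons, combinations)  # 25 150
```

Adding just three icons and two banners to the button test multiplies 25 variants into 150 distinct emails that need enough opens each.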

How long it takes for the winning elements to be determined depends on the platform you use for your multi-variate test (8Seconds is a great platform I've used in the past). But what does emerge is a clear picture of which combination performs best, and which elements are most responsible for that performance.

Multi-variate testing is a powerful way to target your redesign efforts at the right elements of your email campaign. Unlike A/B testing, you can test multiple versions of many different variables, creating as many elements as required.

The single biggest limitation of multi-variate testing is the amount of traffic needed to complete the test. Essentially, the more elements you have to test, the longer it takes to set up and the longer it takes for a winner to be determined. Too many changing elements at once can quickly add up to a very large number of possible combinations that must be tested.
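To get a feel for how the traffic requirement grows, here is a rough sketch. The per-combination sample size and the open rate are hypothetical rules of thumb I've chosen for illustration, not statistical prescriptions:

```python
import math

# Rough traffic estimate: each combination needs a minimum number of opens
# before its results mean anything. Both constants below are hypothetical.
min_opens_per_combination = 200
open_rate = 0.20  # assumed average open rate

def recipients_needed(combinations):
    """Sends required (rounded up) so every combination reaches the minimum opens."""
    return math.ceil(combinations * min_opens_per_combination / open_rate)

# A simple 4-combination test vs. the 150-combination example above.
print(recipients_needed(4), recipients_needed(150))  # 4000 150000
```

The growth is linear in the number of combinations, so a test that multiplies out to 150 combinations needs a database over thirty times larger than a 4-combination test to finish in the same number of sends.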

If you’re unsure whether you should run A/B testing or multi-variate testing, ask yourself how each will fit into your testing cycle and your campaign as a whole. You might find you need to use both.

A/B testing and MVT are two powerful optimization methods that complement one another. Pick one or the other, or use them both together to help you get the most out of your email campaign.

Please let me know if you’ve used MVT or if you prefer the more traditional testing method – leave a comment below.