How do I set up an A/B test?

Send variations of a campaign to different recipient groups to see which changes have the most impact.

What is A/B testing?

A/B testing involves creating multiple variations of an email campaign and sending them to different subsets of your subscribers, with the aim of finding out which variation of the campaign gives the best results.

A/B tests are usually set up by selecting a single variable (e.g. subject line or email content) and creating variations that differ only in that variable. These variations are then sent to your list, and a winner is chosen based on the intended outcome (e.g. for a subject line test, you would measure the number of email opens).
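At its core, the winner is simply the variation that scores best on the chosen metric. A minimal sketch of that comparison, with made-up numbers (illustrative only, not Machine Labs' internal logic):

```python
# Illustrative only: pick the winning variation by the chosen outcome metric.
# For a subject line test the outcome is opens; for a content test it would be clicks.
variations = [
    {"name": "A", "sent": 2000, "opens": 420},  # hypothetical send/open counts
    {"name": "B", "sent": 2000, "opens": 510},
]

def open_rate(variation):
    return variation["opens"] / variation["sent"]

winner = max(variations, key=open_rate)
print(f"Winner: {winner['name']} ({open_rate(winner):.1%} open rate)")
```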

How to set up an A/B test

Setting up an A/B test in Machine Labs is simple. When creating a new campaign, select the email channel and choose either a Split A/B or Sample A/B test from the dropdown.


Types of A/B test

Machine Labs offers two types of A/B test: Split and Sample.

A Split A/B test divides your recipient list into equal-sized sets. Each variation is sent to the same number of recipients at the same time.

A Sample A/B test takes a subset of your recipient list (10%) and splits that sample equally between the variations. The variations are sent immediately, and the test then runs over a user-selected time period (usually 4-12 hours). A winner is chosen and the remaining recipients receive the winning variation. Machine Labs automatically picks the winning variant and sends the campaign for you; there's no need to come back at the end of the test to do this manually.
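To make the difference concrete, here is a rough sketch of the two flows. This is purely illustrative: the 10% sample and the 4-12 hour window come from this article, while the function names and even-split details are our own simplification.

```python
import random

def split_test(recipients, variants):
    """Split test: divide the whole list into equal-sized groups, one per variant,
    and send every group its variation at the same time."""
    random.shuffle(recipients)
    group_size = len(recipients) // len(variants)  # leftovers after equal division ignored here
    return {variant: recipients[i * group_size:(i + 1) * group_size]
            for i, variant in enumerate(variants)}

def sample_test(recipients, variants, sample_fraction=0.10):
    """Sample test: send the variants to a 10% sample now, the winner to the rest later."""
    random.shuffle(recipients)
    sample_size = int(len(recipients) * sample_fraction)
    sample, remainder = recipients[:sample_size], recipients[sample_size:]
    groups = split_test(sample, variants)  # variants split equally within the sample
    # ...each group is emailed immediately, the test runs for the selected 4-12 hours,
    # Machine Labs picks the winner automatically, then `remainder` gets the winning variant.
    return groups, remainder

groups, rest = sample_test([f"user{i}@example.com" for i in range(40_000)], ["A", "B"])
print({v: len(g) for v, g in groups.items()}, len(rest))  # {'A': 2000, 'B': 2000} 36000
```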

Variables 

Machine Labs supports testing five different variables:

  • Sender identity
  • Subject line
  • Preview text
  • Email content
  • Campaign offer

The goal of the test (and the metric used to decide the winner) depends on the variable chosen:

  • Sender identity, Subject line and Preview text use the Open rate metric.
  • Email content uses the Click rate metric.
  • Campaign offer uses the Revenue metric.

Setting up a split test

Once you've chosen Split test from the channel selection dropdown, you'll be asked to name your campaign and choose the variable you'd like to test. Once chosen, click the Save and continue button at the top right of the page.

Next, you'll be asked to select your audience. Simply choose the mailing list and segments that you'd like to send the campaign to, just as you would when setting up a standard email campaign.

Note that for split tests, we require the final recipient list to contain at least 4,000 contacts. This ensures that the results of the test are statistically significant.

Once you've built your recipient list, you can continue on to creating your variants.

Variants consist of a sender identity, subject line, preview text, coupon code and email template. Depending on which variable was chosen, some of these fields may be optional.

You can add additional variants using the + ADD VARIANT button underneath the last variant's box. You must have a minimum of 2 variants and can have up to 4 when running an A/B test. Note that if you do not have enough contacts in your recipient list, you may be limited to 2 or 3 variants. If this is the case, you can increase the size of your recipient list to add additional variants.

You must also ensure that the variable chosen earlier in the creation process differs between all variants. 

Once you've created your variants, you can move on to the review stage. When you're happy with your A/B test campaign setup, you can either save it as a draft or continue on to schedule the sending of the test.

Setting up a sample test

Once you've chosen Sample test from the channel selection dropdown, you'll be asked to name your campaign and choose the variable you'd like to test. Once chosen, click the Save and continue button at the top right of the page.

Next, you'll be asked to select your audience. Simply choose the mailing list and segments that you'd like to send the campaign to, just as you would when setting up a standard email campaign.

Note that for sample tests, we require the final recipient list to contain at least 40,000 contacts, and each variant will initially be sent to a minimum of 2,000 contacts. This ensures that the results of the test are statistically significant.
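As a quick sanity check on those figures (a sketch assuming the 10% sample is divided evenly between variants, as described above):

```python
# Sample-test sizing: 10% of the list is split evenly across variants, and each
# variant should initially reach at least 2,000 contacts.
list_size = 40_000                      # the minimum list size for a sample test
sample = int(list_size * 0.10)          # 4,000 contacts in the initial sample
for variant_count in (2, 3, 4):
    per_variant = sample // variant_count
    print(variant_count, "variants ->", per_variant, "contacts each")
# At the 40,000 minimum, only a 2-variant test reaches 2,000 contacts per variant;
# with 3 or 4 variants you would need a larger list (our reading of the figures above).
```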

Once you've built your recipient list, you can continue on to creating your variants.

Variants consist of a sender identity, subject line, preview text, coupon code and email template. Depending on which variable was chosen, some of these fields may be optional.

You can add additional variants using the + ADD VARIANT button underneath the last variant's box. You must have a minimum of 2 variants and can have up to 4 when running an A/B test. Note that if you do not have enough contacts in your recipient list, you may be limited to 2 or 3 variants. If this is the case, you can increase the size of your recipient list to add additional variants.

You must also ensure that the variable chosen earlier in the creation process differs between all variants. 

Once you've created your variants, you can move on to the review stage. When you're happy with your A/B test campaign setup, you can either save it as a draft or continue on to schedule the sending of the test.

When scheduling a sample test, the initial test emails are sent IMMEDIATELY, and the winning variant is sent at the scheduled time you enter. If you do not want the email to be sent immediately, we recommend running a split test instead. You must also leave at least 1 hour between the current time and the scheduled time, so that enough data can be collected to choose a winner.
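In other words, the scheduled time is when the winning variant goes out, not when the test starts. A small sketch of the timing rule, using hypothetical times:

```python
from datetime import datetime, timedelta

now = datetime(2021, 6, 1, 10, 0)               # when you launch the sample test
winner_send_time = datetime(2021, 6, 1, 14, 0)  # the scheduled time you enter

test_window = winner_send_time - now            # variants are sent at `now`, immediately
assert test_window >= timedelta(hours=1), "leave at least 1 hour before the scheduled time"
print(f"Test runs for {test_window} before the winner goes to the remaining recipients")
```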

Reporting

Once you've run an A/B test, it's important to look at the results to determine what worked best. This will help you improve your future campaigns and tests. Machine Labs has a central view of all your past, current and upcoming A/B tests, accessible from Reports -> A/B Tests on the left navigation bar of the app.

You can select any A/B test that you've set up and view all variations alongside each other, helping you to compare the metrics of each variation such as the open and click rates, number of orders and total sales generated from the email. 

Our campaign report and email report can also be filtered by a specific variant, or left as an overall view of campaign or email performance.

Confidence score

Before sending the winning variation of your A/B test at scale, it's important to understand whether the test can be considered statistically significant.

The confidence score in the test summary section of your A/B test report reflects the statistical significance of your test. 

From the number of people who received each variation, and how much one variation outperforms the other(s), Machine Labs mathematically determines the confidence score of your test.

A confidence score greater than 90% indicates that one variation of your test is highly likely to outperform the other option(s). You could reproduce the results of this test, and you can apply what you've learned to your forthcoming campaigns.

A confidence score between 75% and 89% indicates that the result of your test is encouraging. You should perform the test again to reach a more reliable conclusion. If your second test shows a similar confidence score and confirms your first result, then implementing the winning variation is likely to have a slight but positive impact on your subscribers.

A confidence score below 75% means that your test is not statistically significant. This happens when one variation beats the other(s) by only a small margin, which is not enough for the test to be considered valid. In this case, you should re-run the test over a longer time period or choose another variable to test.
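Machine Labs doesn't publish the exact formula behind the confidence score, but a common way to derive such a figure for an open-rate test is a two-proportion z-test. A hedged sketch with hypothetical numbers, for intuition only:

```python
# Illustrative only: NOT Machine Labs' published formula. A two-proportion z-test
# estimates how confident we can be that the better-performing variant really is better.
from math import sqrt, erf

def confidence_score(opens_a, sent_a, opens_b, sent_b):
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    p_pool = (opens_a + opens_b) / (sent_a + sent_b)              # pooled open rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))  # standard error of the difference
    z = abs(p_a - p_b) / se                                       # standardised difference
    return 0.5 * (1 + erf(z / sqrt(2)))                           # one-sided confidence level

# Hypothetical counts: variant B opens better, but is the gap meaningful?
score = confidence_score(opens_a=420, sent_a=2000, opens_b=460, sent_b=2000)
print(f"Confidence: {score:.0%}")  # about 94% here, clearing the 90% bar described above
```

The larger the recipient groups and the bigger the gap between variations, the higher the score climbs, which matches the guidance above about list sizes and about re-running inconclusive tests.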