Are you testing your emails to improve them over time? You should be! As marketers, we can easily fall into the trap of trusting our gut or thinking that what we’ve seen in the past holds true in the present. But to get the most out of your emails, it’s important to trust the data, not your gut. An email testing strategy will ensure you have the data you need to make smart decisions on future email campaigns.
Furthermore, marketers who split-test their emails see higher open rates, click rates and revenue than their non-testing counterparts, according to MailChimp research presented by Alex Kelly at Litmus Live 2017. Have we convinced you yet to experiment with an email split test?
Getting Started With Split Testing
There are two ways to test your email campaigns. With a standard split test, you divide your recipient list in half, send a slightly different version of the email to each audience and compare the results. Split tests are great options for marketers with small lists or little time to test. With a split test with champion, the marketer designates a small, equal percentage of the list to receive version A and version B. For example, version A could go to 5 percent of the list and version B to another 5 percent, leaving 90 percent of the list to receive the top-performing version. At the end of the test window, say four hours, or whatever time is available, the marketer selects the winning version, or champion, based on key performance indicators and deploys it to the remainder of the list.
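The 5/5/90 champion flow described above amounts to a simple random partition of your list. Here's a minimal sketch in Python; the function name and the 5 percent default are illustrative, not part of any email service provider's API:

```python
import random

def champion_split(recipients, test_pct=0.05, seed=42):
    """Partition a list into test group A, test group B, and a remainder
    that later receives the winning (champion) version."""
    pool = list(recipients)
    random.Random(seed).shuffle(pool)   # randomize to avoid ordering bias
    n_test = int(len(pool) * test_pct)  # size of each test group
    group_a = pool[:n_test]
    group_b = pool[n_test:2 * n_test]
    remainder = pool[2 * n_test:]       # gets the champion after the test window
    return group_a, group_b, remainder

subscribers = [f"user{i}@example.com" for i in range(1000)]
a, b, rest = champion_split(subscribers)
print(len(a), len(b), len(rest))  # 50 50 900
```

Shuffling before slicing matters: if your list is sorted by signup date or engagement, unrandomized slices would bias the test groups.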
Not sure where to begin? Here are some key things to consider if you’re just getting started with split testing.
1. Make a plan.
This may seem like an obvious recommendation, but it’s easy to get caught up in the day-to-day tasks of email production and simply throw in a split test here and there without looking at the bigger picture. Your split-testing plan doesn’t need to be overly complex, but you do need to create one. Your plan should address these questions:
- What variables do you want to test? This could include subject lines, preheader text, email content, calls to action, send time, from name … the list goes on.
- What questions do you want to answer? This gets into the detail of what you need to know from your tests. Which subject line results in higher open rates: percentage off or dollars off? What type of call-to-action language (hard sell versus soft sell) results in higher revenue? Form a hypothesis and then create a split test around that hypothesis.
- When will you test each variable? Knowing your target send dates for split tests helps keep your testing plan on track. Be sure to plan ahead for the tested emails to ensure you’re ready with an A and B version.
2. Test one variable at a time.
In order to determine if your tested variable impacted key performance indicators, test only one variable at a time. In other words, if you’re testing a subject-line variable, the subject line should be the only email element that differs between version A and version B. The same rule applies if you’re testing preheader copy, body copy or a call to action.
You may be tempted to create an A/B split test with different subject lines and different calls to action since one is measured on open rates and the other on click rates. After all, that would save time by testing two different hypotheses in one email, right? Wrong. You’ll end up spending that extra time trying to decipher your metrics and ultimately guessing which variable influenced the higher-performing email version. It’s important to realize that an enticing subject line not only impacts your open rates but also can carry through to your click rates. Keep it simple by testing only one variable at a time.
3. Test multiple times. And then test some more.
Let’s say you run an A/B split test on a subject line where subject line A is your standard “August E-newsletter” and subject line B includes something about the specific content enclosed. Now let’s say subject line A is the clear winner on open rates. So you’re thinking, “Great — default subject line wins. We’ll use that moving forward.”
Not so fast. A single test may not tell you everything you need to know. I recommend testing a particular variable or question a minimum of three times. Repetition helps control for factors that can skew a single send, such as current events or competing emails in the inbox.
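One reason a single send can mislead is that open-rate differences are often within random noise. A quick way to gut-check a result is a standard two-proportion z-test; the send counts and open counts below are hypothetical:

```python
from math import sqrt

def two_proportion_z(opens_a, sent_a, opens_b, sent_b):
    """z-statistic for the difference in open rates between two versions."""
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    p_pool = (opens_a + opens_b) / (sent_a + sent_b)  # pooled open rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    return (p_a - p_b) / se

# 22% vs. 20% opens on 1,000 sends each: |z| < 1.96, so the difference
# isn't significant at the 95% confidence level
z = two_proportion_z(220, 1000, 200, 1000)
print(round(z, 2))  # 1.1
```

A "winner" like this one could easily flip on the next send, which is exactly why repeating the test matters.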
If you’re sending a monthly newsletter, that means it can take a while to cycle through your testing plan, but it’s worth the time and effort. If you’re sending multiple emails per week, you can get results a little more quickly and apply those to your future emails.
But sending three tests, picking a winner and incorporating that into all future emails forever isn't a great idea, either. If there's seasonality to your email calendar, make sure you're testing your chosen variables within each season. If you have widely different audience segments, don't assume that the findings from one segment will apply to another. Plus, your subscribers change over time, so it's a good idea to retest key variables regularly rather than treat any result as permanent.
Advanced Testing Strategies
Once you’ve mastered A/B split testing, there are two other tools to add to your email optimization toolbox: multivariate testing and holdouts.
Multivariate testing allows you to test more than two variations at a time. You can choose to select a champion or split your list equally across the variations. With multivariate testing, I still encourage you to test one email element or hypothesis at a time. For example, you could use multivariate testing to find the best way to communicate urgency in a subject line. Your variations could be:
- Test A: Your Discount Is Good Through Friday
- Test B: Take 10% Off Through Friday
- Test C: Two Days Left! Your Discount Expires on Friday
- Test D: 48 Hours Left! Your Discount Expires on Friday
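Picking a champion from an equal four-way split like the one above comes down to comparing per-variant open rates. A minimal sketch with hypothetical results:

```python
# Hypothetical results from an equal four-way subject-line split.
results = {
    "A": {"sent": 500, "opens": 95},
    "B": {"sent": 500, "opens": 110},
    "C": {"sent": 500, "opens": 124},
    "D": {"sent": 500, "opens": 131},
}

# Champion = the variant with the highest open rate (opens / sent).
champion = max(results, key=lambda v: results[v]["opens"] / results[v]["sent"])
print(champion)  # D
```

In practice you'd pick the champion on whichever key performance indicator your hypothesis targets: open rate for subject lines, click rate or revenue for content and calls to action.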
Many email service providers tout multivariate testing with support for a large number of variations. I would caution that testing strategy matters more than the number of variations you can run.
Holdouts help you determine whether an email created a lift in revenue. By holding out a certain percentage of your list from an email campaign, you can determine whether that campaign generated more revenue than your company would have made without sending it. Let's say, for example, you're running a cart-abandon campaign. By withholding the campaign from 10 percent of those who abandon carts, you can compare the revenue generated by subscribers who received the cart-abandon email to the revenue generated by those who did not. You can run holdout tests on any email that aims to convert subscribers: purchases, registrations and RSVPs, for example. Holdout tests can be instrumental in helping marketers determine which email campaigns contribute most to sales.
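The holdout math is straightforward: compare revenue per subscriber in the mailed group to revenue per subscriber in the held-out group. A sketch with hypothetical cart-abandon numbers:

```python
def holdout_lift(treated_revenue, treated_count, holdout_revenue, holdout_count):
    """Revenue per subscriber in the mailed group vs. the held-out group,
    plus the incremental (lift) per subscriber attributable to the email."""
    rev_treated = treated_revenue / treated_count
    rev_holdout = holdout_revenue / holdout_count
    lift = rev_treated - rev_holdout  # incremental revenue per subscriber
    return rev_treated, rev_holdout, lift

# Hypothetical campaign: 9,000 cart abandoners mailed, 1,000 held out
treated, holdout, lift = holdout_lift(27_000, 9_000, 2_200, 1_000)
print(f"${treated:.2f} vs ${holdout:.2f} -> ${lift:.2f} lift per subscriber")
# $3.00 vs $2.20 -> $0.80 lift per subscriber
```

Note that the holdout group still generates some revenue on its own (people who return without a reminder), which is exactly why a holdout, not raw campaign revenue, is what isolates the email's true contribution.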