A pitfall of A/B Testing in Email Marketing

There are 101 ways to make mistakes with A/B testing in email marketing. Most articles on email marketing that touch on testing end with some version of "you should test, test, test!".

Why is this conclusion only sort of right?

As we've stated before, too much email marketing advice today focuses on optimizing a single campaign rather than optimizing your entire email marketing program. What works for one tiny part of your program might not work in other contexts. Worse, if you run one A/B test and then simply follow its result, you're not encouraged to be creative or to vary your program appropriately.

Let's take an example: you sell men's and women's socks. You decide to run an A/B test on subject lines, with one focused on men's socks and the other on women's socks.

The results?

The women’s campaign gets a 32% open rate and the men’s campaign gets a 28% open rate. Which won?

What if I told you the list was 60% women and 40% men? And that each group opened campaigns about their own sex at a 40% rate, while campaigns about the other sex were opened only 20% of the time?

That would perfectly explain the above results. So yes, for a single campaign you might be better off just sending about women's socks, but the real answer is that you have a large set of subscribers with different interests who should be segmented separately.
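To make the arithmetic concrete, here is a minimal sketch that reproduces the numbers above. The segment shares and per-segment open rates are the hypothetical figures from the example, not real campaign data.

```python
# Hypothetical numbers from the example above, not real data.
segments = {"women": 0.60, "men": 0.40}  # share of the list per segment

own_interest_rate = 0.40    # open rate when the subject matches a subscriber's interest
other_interest_rate = 0.20  # open rate when it doesn't

def campaign_open_rate(target_segment: str) -> float:
    """Blended open rate for a campaign whose subject targets one segment."""
    return sum(
        share * (own_interest_rate if seg == target_segment else other_interest_rate)
        for seg, share in segments.items()
    )

print(f"Women's socks campaign: {campaign_open_rate('women'):.0%}")  # 32%
print(f"Men's socks campaign:   {campaign_open_rate('men'):.0%}")    # 28%
```

The "winning" 32% is just a weighted average: the larger segment drags the blended rate up, even though both campaigns perform identically within their own segment.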

Now, such obvious mistakes probably aren't made that often, but scratch the surface just a little and subtler versions of them are being made all the time. Different people react to different words differently. What's the difference between using "sale" and "discount"? Probably a few people's preferences. Over the course of your email marketing program you're probably better off using both at different times rather than settling on just one.


How Do Unique Subscriber Open Counts Help?

They help by encouraging us to engage different subscribers from campaign to campaign. Take the men's and women's socks example again. If we just send the winning women's socks subject line twice, we still end up with only 32% of our subscribers opening. However, if we send both, we end up with 40% of subscribers opening. Specifically, rather than engaging only 20% of the men on the list, we'd engage 40% of them across the two campaigns.
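Here is a rough sketch of that comparison, using the same hypothetical figures as before and the simplifying assumption that a repeated campaign is opened by the same subscribers who opened it the first time.

```python
# Same hypothetical numbers as the example above.
women_share, men_share = 0.60, 0.40
own_rate, other_rate = 0.40, 0.20

# Strategy A: send the "winning" women's subject line twice.
# Assumption: the same 32% of subscribers open each send,
# so unique opens stay at 32%.
strategy_a_uniques = women_share * own_rate + men_share * other_rate

# Strategy B: send one women's campaign and one men's campaign.
# Each segment opens the campaign aimed at it at the 40% rate,
# so 40% of the whole list opens at least once.
strategy_b_uniques = women_share * own_rate + men_share * own_rate

print(f"Repeat the winner:  {strategy_a_uniques:.0%} unique opens")  # 32%
print(f"Send both subjects: {strategy_b_uniques:.0%} unique opens")  # 40%
```

Under these assumptions the single-campaign "winner" leaves unique engagement flat, while alternating subjects lifts it, because each send reaches the segment the other one missed.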

This is the power of measuring unique subscriber opens across your whole program rather than focusing only on individual campaign open rates. Where before you were ignoring a chunk of your subscribers, now you're engaging more of them. Over time you'll also learn how to segment your subscribers based on how they respond to different content.

Remember: just because one email tests better in a single campaign, blindly reusing that result may actually harm your email program in the longer term. That is why the advice is "test, test, test!": writers sense that a single test result could mislead your program, but they don't know how to explain why.

Do you have any A/B Testing experiences like this? Please feel free to comment or ask any questions below.
