If you've been sending to your email list for a while but feel you could be getting better results, this article explains how email testing, such as simple a/b tests, can improve your open rates, click-through rates and conversions.

Email Testing and Email Marketing: Partners in Success

You've got your email campaign up and running, and now you want to make it even better. Or perhaps you've read some advice from Comm100 and said, "That's not right. I KNOW my audience would respond better to an image-based email." For whatever reason, you're not convinced that your email is providing the best results, and you want to see if you can improve it. That's great, because Comm100 believes that marketing can always be improved.

One of the greatest benefits of Email Marketing is that it makes testing your marketing concepts easy: with email, it's simple to keep your test groups truly random.

Email Testing Best Performer: a/b Test

When it comes to email testing, a/b tests come first. An a/b test is simply a kind of email test in which you present one option to one randomized segment of your audience and a different option to the other segment. Then you see which one performs better. It's that simple!

It's common to perform an a/b test on a website, randomly serving different offers or creatives using JavaScript or a tool such as Google Website Optimizer. Email, however, offers an alternative way to test, without the risk that a repeat visitor to your website sees something different from what they saw the first time and skews your response rate.

Setting up an email a/b test is simple. Just divide your email list into two parts and send each part a different email, varying only the element you're testing. The email that gets the better response tells you which offer, creative or send time to use in future campaigns.
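The setup above can be sketched in a few lines. This is a minimal illustration, not a definitive implementation: the subscriber list, the fixed random seed and the open counts are all hypothetical, and in practice your email service provider would handle the send itself.

```python
import random

# Hypothetical subscriber list; in practice this comes from your
# email platform's export.
subscribers = [f"user{i}@example.com" for i in range(1000)]

# Shuffle, then split in half so each group is a random sample.
random.seed(42)  # fixed seed only so this example is reproducible
shuffled = subscribers[:]
random.shuffle(shuffled)
midpoint = len(shuffled) // 2
group_a = shuffled[:midpoint]   # receives version A (e.g. subject line A)
group_b = shuffled[midpoint:]   # receives version B (e.g. subject line B)

# After the send, compare response rates for each group.
def open_rate(opens: int, sends: int) -> float:
    return opens / sends

# Illustrative numbers: 106 of 500 opened version A, 124 of 500 opened B.
rate_a = open_rate(106, len(group_a))
rate_b = open_rate(124, len(group_b))
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}")
```

Whichever version wins becomes the control for your next test.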

Email Testing Metrics: What Kinds of Things Should be Tested?

If it's anything that can impact the success of your email campaign in any way, then it should be included in your email testing! This may include:

  • Subject lines
  • Day of week of the send
  • Time of day of the send
  • Offer or email content
  • Email creative and layout
  • HTML template vs. plain text email
  • Headlines within the email
  • From address
  • Concepts such as personalization, email tone, etc.

The list above is fairly comprehensive. However, your industry and the end goal of your email campaign may mean that there are other factors that you want to add into your email testing.

Email Testing Don'ts: Common Mistakes with a/b Tests

There are some common pitfalls to email a/b tests that you'll want to avoid.

Make sure your list is truly random: To do this, we recommend taking your email list and putting every other name in one test group. A good coder can write a script to do this easily. One of the most common mistakes we see is mailers simply dividing their list in half and using those two halves as the a group and the b group. The problem with this approach is that one half of the list will be made up of older sign-ups, and that portion will almost always perform worse than the newer segment of the list.
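The every-other-name script mentioned above might look something like this. The subscriber names are hypothetical; the only assumption is that the exported list is ordered by sign-up date, which is exactly why alternating names spreads old and new sign-ups evenly across both groups.

```python
def alternate_split(email_list):
    """Put every other name in one group, so sign-up age is
    spread evenly across both halves of the test."""
    group_a = email_list[0::2]  # names at even positions
    group_b = email_list[1::2]  # names at odd positions
    return group_a, group_b

# Illustrative list, ordered oldest sign-up first:
subscribers = ["ann@example.com", "bob@example.com",
               "cara@example.com", "dan@example.com",
               "eve@example.com"]
a, b = alternate_split(subscribers)
print(a)  # ['ann@example.com', 'cara@example.com', 'eve@example.com']
print(b)  # ['bob@example.com', 'dan@example.com']
```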

Also, once you have split your list into two, do a quick check of each half of the list as well. If one half has a predominance of, for example, Hotmail addresses, then you may experience email deliverability issues with only one half of your list. That will impact the end results of your email testing. In general, pulling every other name and splitting your list that way will give you the best randomized representation.
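A quick way to run the check described above is to count the email domains in each half and eyeball the mix. This is a rough sketch with made-up addresses; the idea is simply to spot a group that is skewed toward one provider, such as Hotmail, before you send.

```python
from collections import Counter

def domain_mix(group):
    """Count how many addresses in a test group belong to each domain."""
    return Counter(address.split("@")[-1].lower() for address in group)

# Illustrative halves of a split list:
group_a = ["a@hotmail.com", "b@gmail.com", "c@hotmail.com"]
group_b = ["d@gmail.com", "e@yahoo.com", "f@hotmail.com"]

print(domain_mix(group_a))  # Counter({'hotmail.com': 2, 'gmail.com': 1})
print(domain_mix(group_b))  # Counter({'gmail.com': 1, 'yahoo.com': 1, 'hotmail.com': 1})
```

If one half is noticeably heavier in a single domain, re-shuffle and split again before sending.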

Only test one thing at a time: Another common mistake is trying to test more than one thing at a time. For example, we've seen people try to test a subject line and an image within the email in the same test. The only way to know for certain that the item you're testing is what created the change in results is to limit the test to a single item. If you test more than one item at a time, you can't determine what impact each element had on performance. As tempting as it may be to collect more data in less time, testing multiple factors at once makes that data less valuable, less accurate and less usable.

Don't over-analyze: If you've set your test up correctly by making sure your list is completely random and limiting your test factor to just one element, then the results are the results. You've just learned something. Don't muddy the waters with a lot of "what if" and "but". The beauty of a truly random a/b test is that the results are typically quite conclusive. Remember, in the numbers game of email marketing, it only takes a small percentage increase to make a big difference.
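If you do want one objective check before declaring a winner, a standard two-proportion z-test tells you whether the gap between two open rates is bigger than chance. The open counts below are purely illustrative; this is a sketch using only the Python standard library, not a substitute for whatever reporting your email platform provides.

```python
from math import sqrt, erf

def two_proportion_z(opens_a, n_a, opens_b, n_b):
    """Two-proportion z-test: is the difference in open rates
    likely real, or just noise? Returns (z, two-sided p-value)."""
    p_a, p_b = opens_a / n_a, opens_b / n_b
    pooled = (opens_a + opens_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative results: 106/500 opens for A vs. 140/500 for B.
z, p = two_proportion_z(106, 500, 140, 500)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A p-value below 0.05 means the lift is very unlikely to be random noise, so you can act on the result and move on rather than second-guessing it.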

What to Do With Your Email Testing Results

If you've never run an a/b test on your audience before, the first thing to do is start saving your results. After you've run tests across all areas of email optimization, you'll have enough data to create a best practice manual you can follow in the future for creating the best headlines, subject lines, email creatives, offers and more. Then you'll know you're running the best holistic email program that you can.

Once you've got a best practice manual, however, there's no reason to stop your email testing. Your audience will continue to grow and change, as will things like price-point acceptance for your product, competitor activity and more. Marketing, and Email Marketing in particular, follows just one rule: test, then test again.

