Avoid These 6 Common Errors in Marketing A/B Testing

If you’re not testing, you’re not improving. But tests can do worse than fail: they can tell you lies. Here are six tips I’ve learned for avoiding errors in A/B testing.

6 Tips To Avoid Errors In A/B Testing

1. Failing to test

The first failure is easy: failing to test at all. You should be conducting A/B tests on the things you do most often, including:

  • Email sends
  • Home page and any other busy page
  • Landing pages
  • Ad groups
  • Social media posts (sponsored)
  • Retargeting ads and landing pages

If you’re not measuring those things, how are you improving them?

For every one of those high-volume activities, there should be at least two versions being tested. One will turn out to be better than the other, and it’s often not the one you think.

2. Confusing the source

The next error is a really common one: confusing the source of the change.

Each month, I send out two versions of the same email. One version is long-form and the other is short-form, and I’m trying to work out which one works better.

So, I send out the long-form one month, and a short-form one the next month. The short-form one gets a much better response, and I conclude that short is better than long.

Or, should I conclude that month one is better than month two? Or, should I conclude that I shouldn’t send out an email when there is a big sporting event on?

There are so many other variables that could explain why one version worked and the other didn’t. That’s why you need to clinically isolate the variable you’re testing so nothing else is different. Then you can pinpoint exactly where your A/B test results are coming from.
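One practical way to isolate the variable is to randomize a single audience into two groups within the same send, so timing, sporting events, and everything else hit both versions equally. A minimal sketch in Python (the recipient list and seed are placeholders, not from the original):

```python
import random

def ab_split(recipients, seed=42):
    """Randomly split one audience into two groups for the SAME send,
    so external factors affect both versions equally."""
    pool = list(recipients)
    random.Random(seed).shuffle(pool)  # fixed seed keeps the split reproducible
    half = len(pool) // 2
    return pool[:half], pool[half:]

# Hypothetical audience: send long-form to group_a and short-form to
# group_b in the same campaign, so only the copy differs.
group_a, group_b = ab_split([f"user{i}@example.com" for i in range(100)])
```

Because both groups receive their email at the same time, any difference in response can be attributed to the copy rather than the month or the news cycle.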

3. Small numbers tell lies

Let’s say we have a small list of high-quality targets, and we send email version A to 20 of them, and email version B to 20 of them. The results:

  • Version A gets two responses
  • Version B gets only one response

Can we conclude that version A is better than version B?

The answer is no, because the companies we sent version A to could easily have just been better prospects. With numbers this small, random variability sneaks in.

The results can also be situational.

Imagine you’ve just received an email you weren’t expecting. Most of the time, you’ll delete it. But every now and then you read one, simply because you happen not to be busy at that moment.

So can we treat version A’s two responses against version B’s one as even a weak indicator that A is better? No. If version B had come in with four responses and version A with one, you would conclude the complete reverse. A result that fragile isn’t just short of proof; it isn’t even an indicator.

You need to run tests that can tell you the truth, or don’t run them at all.
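To put a number on how little two-versus-one tells you, here is a sketch of a one-sided Fisher’s exact test on the scenario above, using only Python’s standard library (the test choice is mine, not from the original post):

```python
from math import comb

def fisher_one_sided(a_resp, a_total, b_resp, b_total):
    """One-sided Fisher's exact test: probability of a result at least
    as extreme as a_resp responses in group A, given the totals."""
    n = a_total + b_total   # total recipients across both versions
    k = a_resp + b_resp     # total responses across both versions
    denom = comb(n, a_total)
    # P(X >= a_resp) for X ~ Hypergeometric(n, k, a_total)
    return sum(comb(k, x) * comb(n - k, a_total - x)
               for x in range(a_resp, min(k, a_total) + 1)) / denom

# 2 of 20 replied to version A, 1 of 20 to version B
p = fisher_one_sided(2, 20, 1, 20)
print(round(p, 2))  # 0.5 -- a coin flip; no evidence A is better
```

A p-value of 0.5 means a result this lopsided happens half the time by pure chance, which is exactly why small numbers tell lies.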

4. Confusing proof with failure

Don’t confuse failure of the test with proof of failure.

Maybe you form a view that you can improve something in a material way, and you run a test to find out. But the test doesn’t come back with a result you can believe: perhaps the results are the same, or so close that you really can’t tell them apart.

Certainly, the test failed. But did it also prove that the difference doesn’t work? If the test was long-form versus short-form and the results were similar, that doesn’t mean we proved there was no difference.

We ran it once, and we failed to prove that we can get an improvement. We did not prove that you can’t get an improvement.

It’s critical that you draw the right conclusions from your tests, and failure to prove is a perfectly okay outcome. Acknowledge that the test failed to provide proof, and simply try again.

5. Testing without a hypothesis

Putting a test together without a hypothesis is a waste of time.

Let’s say version A of your email beat version B. So what? You learned that version A worked this time, but you didn’t reach any general conclusion.

If you run a test without a hypothesis, you don’t get a clear, generalizable conclusion that you can put in the bank.

Start with a hypothesis like this:

I think I can improve open rates by 10 percent by using more assertive copy in the headline.

Then build your test around proving or disproving that hypothesis, and execute and measure.
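A hypothesis like the one above also tells you how big the test needs to be before it can deliver a believable answer. Here is a rough sample-size sketch using the standard two-proportion normal approximation; the 20% baseline open rate, 5% significance, and 80% power are my assumptions, not figures from the original:

```python
from math import ceil

def sample_size_per_arm(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Recipients needed per version to detect a lift from rate p1 to p2
    (normal approximation; defaults give 5% significance, 80% power)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Hypothesis: assertive headline copy lifts a 20% open rate by 10
# percent (relative), i.e. to 22%. The baseline rate is an assumption.
print(sample_size_per_arm(0.20, 0.22))  # 6500 recipients per version
```

Roughly 6,500 recipients per version to reliably detect a 10 percent relative lift: another reminder of why the 20-email test in tip 3 could never tell the truth.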

6. Lacking an overall context for the processes you’re trying to improve

The sixth and final error is lacking any context for testing. You should be asking yourself these 5 questions:

  1. What tactics came before the one that we’re testing?
  2. What tactics are going to come after?
  3. Strategically, which market are we marketing to?
  4. What are we offering to that market?
  5. What problem are we trying to solve for that market?

You need to know the answers to these questions so you can do each of the tactics really well, and that’s the job of the Funnel Plan.

Funnel Plan is a way of building your overall Sales and Marketing process together. It’s not a marketing plan or sales plan, it’s a Sales and Marketing process map.

Now if you already have a Funnel Plan, you know what I’m talking about. If you don’t, go to Funnelplan.com and grab yourself a Funnel Plan.

I hope you got lots of value out of today’s Funnel Vision Blog. We’ll have a new blog up next month in the same place. Until then, may your funnel be full and always flowing.

If you prefer to watch this content, subscribe to our YouTube channel.


Our thanks this week to:

Brittany Shipton for blog production

Amy Dethick for video production

Hugh Macfarlane for scripting and presenting this week’s show