Wednesday 2 September 2009

A/B Testing: Marketing as Science

Recently I've been asked about A/B testing for online marketing, and it occurs to me that many marketers spend the vast majority of their time working on new campaigns to drive conversion, acquisition, retention or monetisation, and not nearly enough time perfecting existing channels and communications with the more scientific approach of A/B testing.

What is A/B testing and why is it important?
A/B testing, also known as split testing, brings some science to the practice of marketing. Essentially, A/B testing involves making small, incremental changes, one at a time, so that you can see what impact each change has on conversion, click-through rates, sales or other targets. In this way marketers can prove the impact of specific changes, rather than making recommendations based on gut instinct.

Before you start with A/B testing, you need to consider a number of questions:
  • What do you want to test and in which medium or channel?
  • How will you track metrics and measure improvements?
  • What is your goal or objective?

What do you want to test and in which medium or channel?

A/B testing can be used across almost any online marketing medium, including emails, triggered messages, online advertising and web pages such as purchase flows or landing pages. In deciding where to start, consider what has the greatest impact on your business objective. A 1% improvement in the rate at which shoppers complete transactions is likely to have a greater and more immediate impact on the bottom line than a similar improvement to a landing page shown to new visitors.
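To make that concrete, here is a rough back-of-the-envelope comparison as a Python sketch; every figure in it, from traffic volumes to order value, is a hypothetical assumption rather than anything from a real campaign:

# Back-of-the-envelope: where does a 1% relative uplift matter most?
# Every figure below is a hypothetical assumption, purely for illustration.

monthly_checkout_visits = 50_000   # visitors reaching the purchase flow each month
checkout_conversion = 0.30         # 30% of them currently complete the purchase
landing_page_visits = 50_000       # new visitors hitting a landing page each month
landing_conversion = 0.02          # 2% of them currently go on to buy
average_order_value = 40.0         # revenue per completed sale
uplift = 0.01                      # the 1% relative improvement being considered

extra_checkout_revenue = monthly_checkout_visits * checkout_conversion * uplift * average_order_value
extra_landing_revenue = landing_page_visits * landing_conversion * uplift * average_order_value

print(f"Extra revenue from checkout uplift:     {extra_checkout_revenue:,.0f}")   # 6,000
print(f"Extra revenue from landing-page uplift: {extra_landing_revenue:,.0f}")    # 400

With these made-up numbers the same 1% uplift is worth fifteen times more at the checkout, simply because that page sits closer to the money.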

How will you track metrics and measure improvements?
Tracking is essential for A/B testing. What you’ll need to track will depend on the campaign, channel and objectives; however, you should think about tracking some of the following:
  • No. of emails sent / page views
  • Clicks
  • Click-through rate (CTR)
  • Sales
  • Conversion
You also need to establish a baseline by measuring the performance of the current version of the communication. The current version acts as the control and is run against a number of test versions, so the baseline provides the marker against which improvements are measured.
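As a minimal sketch of how those raw counts become comparable metrics (Python, with made-up treatment names and figures), per treatment it comes down to a couple of ratios:

# Minimal sketch: turning raw tracking counts into the metrics listed above.
# Treatment names and counts are hypothetical.

def summarise(treatment, sent_or_views, clicks, sales):
    """Compute click-through rate and conversion for one treatment."""
    return {
        "treatment": treatment,
        "ctr": clicks / sent_or_views,        # clicks per email sent / page view
        "conversion": sales / sent_or_views,  # sales per email sent / page view
    }

control = summarise("control", sent_or_views=10_000, clicks=420, sales=55)    # the baseline
variant = summarise("variant_a", sent_or_views=10_000, clicks=460, sales=63)

for row in (control, variant):
    print(f"{row['treatment']}: CTR {row['ctr']:.2%}, conversion {row['conversion']:.2%}")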

What is your goal or objective?
Before beginning tests, decide on the key performance indicators for each one. By being clear about the objective of a particular communication in advance, you’ll be able to choose a winner from your results easily and without bias.

Running your A/B Test

The golden rule with A/B testing is to make one change at a time so you can see the incremental improvement achieved by each individual change; this allows you to isolate the revenue, conversion or CTR impact of each one. The next thing to do, therefore, is to create your test versions. In creating them you should look at changes to the following areas:
  • Headline
  • Call to action
  • Copy
  • Graphics
  • Colour
  • Configuration / Layout of elements
  • Headings
The temptation to make numerous changes is hard to resist, but remember: in order to truly know which element caused an improvement, you must be patient and make only one change at a time.

Next you’ll need to decide on the proportional split for your traffic, e.g. 80/20 or 50/50. Business considerations come into play here: when testing on a page which drives the majority of your sales, you’ll want to expose only a small percentage of your audience to the test versions, to avoid damaging conversion rates and therefore revenue. However, to have confidence in decisions based on the A/B test you’ll need an adequate sample size, so if your audience is small you have two options: test fewer variants or run the test for a longer period.
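To get a feel for what "adequate" means in practice, here is a minimal sketch using the standard normal-approximation formula for comparing two proportions; the baseline conversion rate, target uplift, confidence level and power are all hypothetical assumptions:

# Rough estimate of the sample needed per treatment to detect a given uplift.
# Baseline rate, target rate, confidence and power below are hypothetical.
import math

def sample_size_per_treatment(p_control, p_variant, z_alpha=1.96, z_power=0.84):
    """Approximate visitors needed in each treatment.

    z_alpha = 1.96 corresponds to ~95% confidence, z_power = 0.84 to ~80% power;
    uses the common normal-approximation formula for two proportions.
    """
    variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
    effect = p_variant - p_control
    return math.ceil((z_alpha + z_power) ** 2 * variance / effect ** 2)

# e.g. baseline conversion of 3.0%, hoping to detect an uplift to 3.5%
print(sample_size_per_treatment(0.030, 0.035))  # roughly 20,000 visitors per treatment

The smaller the uplift you want to detect, the larger the sample needs to be, which is exactly why a small audience forces the choice between fewer variants and a longer test.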

Now you’re ready to start, and the treatments (including your control) should be run concurrently. Essentially this means running them in parallel, randomising which treatment is shown to each user so that traffic is split between the treatments in your chosen proportions. If this isn’t possible you can split sequentially, showing one version for a set amount of time followed by another version for the same amount of time; the results will be less reliable but still useful.

The actual implementation of A/B tests can be done in a number of ways, ranging from simple, cheap scripts to sophisticated applications such as Offermatica, Inceptor, Optimost or Visual Sciences.
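At the simple-script end of that spectrum, the following Python sketch shows one common approach: hash a stable user identifier into a bucket so that traffic is split in your chosen proportions, yet each visitor always sees the same treatment. The treatment names and the 80/10/10 split are hypothetical.

# Minimal sketch of deterministic traffic splitting for an A/B test.
# The treatment names and the 80/10/10 split are hypothetical examples.
import hashlib

TREATMENTS = [("control", 0.80), ("variant_a", 0.10), ("variant_b", 0.10)]

def assign_treatment(user_id, treatments=TREATMENTS):
    """Map a user id to a treatment; the same user always gets the same one."""
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    bucket = (int(digest, 16) % 10_000) / 10_000.0   # pseudo-random value in [0, 1)
    cumulative = 0.0
    for name, weight in treatments:
        cumulative += weight
        if bucket < cumulative:
            return name
    return treatments[-1][0]  # guard against floating-point rounding

print(assign_treatment("user-12345"))  # the same user id always returns the same treatment

Hashing, rather than picking at random on every request, means a returning visitor isn’t bounced between versions mid-test.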

When the tests are complete you’ll need to look at the metrics for each treatment. The winner, judged against your established key performance indicators, becomes the hero. The hero should then go live and be shown to your main audience in place of your previous control.
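Before crowning a hero, it’s worth checking that the gap you’ve measured is larger than random noise. The post doesn’t prescribe a method, but a common choice is a two-proportion z-test; a minimal sketch follows, with hypothetical counts:

# Minimal sketch: is the variant's conversion genuinely better than the control's,
# or could the gap be noise? The counts below are hypothetical.
import math

def two_proportion_z(conversions_a, visitors_a, conversions_b, visitors_b):
    """z-score for the difference between two conversion rates."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    std_err = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    return (p_b - p_a) / std_err

z = two_proportion_z(conversions_a=300, visitors_a=10_000,
                     conversions_b=360, visitors_b=10_000)
print(f"z = {z:.2f}")  # about 2.4: above 1.96, so significant at roughly the 95% level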

But that’s not the end: now you start again, making individual improvements to the hero. And so the cycle of A/B testing continues.

And what if you don’t have the patience?

A/B testing requires a great deal of patience; however, there is an alternative for those of us with a little less than the required measure. At the beginning of your testing you can instead create and test a number of radical re-designs against your current version or control. The design which gets the best results becomes the hero, and the control going forward, and you then make incremental, individual changes to it in the normal way. Although less controlled and scientific to begin with, this approach may deliver greater improvements in a shorter timeframe.
