6 Tips to Kick-Start Your A/B Testing Program
What marketing tool boasts efficiency and potential for immediate payback? A well-executed A/B testing program!
Yes, A/B testing has been a standard in direct marketing for decades. So in today’s data-rich, multi-platform marketing environment, with consumers relying on digital channels and testing tools easier to use than ever, the question is: why aren’t more marketers A/B testing?
Sadly, 56% of companies today report that they are NOT using any form of A/B testing to optimize their marketing efforts. (Source: Unbounce)
Why so few A/B testing marketers? Perhaps those companies feel resource-constrained. Maybe there’s too much pressure to simply get one campaign out the door, without the time to develop an alternate version. Still other marketers might be under the misconception that A/B testing is too complicated. Whatever the reason, here are some ways to kick-start testing and results-optimization efforts.
In case you’re not an A/B tester, let’s start with a definition: A/B testing involves producing two versions of a creative asset – the “A” version (Control, your current approach) and the “B” version (Test, the challenger). Consumers are randomly split into two groups, and each group is exposed to one version or the other.
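The split itself can be as simple as a coin flip per consumer. Here’s a minimal sketch in Python (the function name and 50/50 split are illustrative assumptions; most testing platforms handle assignment for you, often by hashing user IDs for stable bucketing):

```python
import random

def assign_groups(user_ids, seed=42):
    """Randomly split users into an A (Control) and B (Test) group.

    Hypothetical helper for illustration only; a fixed seed makes
    the assignment reproducible from run to run.
    """
    rng = random.Random(seed)
    groups = {"A": [], "B": []}
    for user_id in user_ids:
        # Each user has a 50% chance of landing in either group
        groups["A" if rng.random() < 0.5 else "B"].append(user_id)
    return groups

split = assign_groups(range(1000))
print(len(split["A"]), len(split["B"]))  # roughly a 50/50 split
```

The key property to preserve, whatever tool you use, is that assignment is random — if you let consumers self-select (say, by device or time of day), any difference you measure may reflect the audience rather than the creative.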
You can execute an A/B test on a range of marketing vehicles, from direct mail to email, from e-commerce sites to simple form fields and data capture.
We recommend the following Best Practices, regardless of the marketing vehicle you’re testing or your specific business goals.
- Assemble a team and get multiple points of view. For companies not doing much (or any) A/B testing, establishing a culture of testing is critical. This begins by engaging stakeholders from around the organization. I’ve always found the best ideas come from unexpected places. Your customer service agents and sales team are on the front lines; because they’re often closest to the customer, they’ll have great ideas on improving the experience.
- Define a clear business problem you’re trying to solve and its financial value. When getting input, remember that just because you can test something doesn’t mean you should. You need to focus on a problem worth solving. For example, perhaps your email marketing campaign suffers from a stagnant open rate, or the add-to-cart ratio on your e-commerce site has declined. The business problem must be clear, singularly focused and measurable. Once you have a problem, create a baseline for the metric(s) used to track current performance and clearly define the financial value of solving the problem.
- Establish a hypothesis for your test. If you want to improve email campaign open rates, one hypothesis could be that subject lines don’t feature a strong enough call to action. Or that you’re not sending at the right time, or on the right day of the week. The hypothesis you define determines how to construct the test, what you’ll measure and how you’ll determine success.
- Set goals and plan for what’s next. To determine whether the test “worked,” it must be measured against a specific, pre-established goal. Communicate goals to the organization and evaluate based on them. Before starting the test, clearly identify what happens after. For example, if you’re testing a free shipping offer on your home page, will you extend this offer permanently to all consumers if it wins?
- Identify segments for each test beyond the general population. Rarely will A/B tests apply across your entire customer population. If your test is designed to improve click-through rates on your email marketing campaigns, your best customers may respond differently to certain creative assets than promotionally-driven customers. Best customers may prefer content and lifestyle images. Promotionally-driven customers might respond better to hard-hitting graphic copy touting a sale offer.
- Set up for proper reading of results. This sounds simple, but incorrect tagging on a website or running too many variants at once – which dilutes the sample behind each one – will compromise your A/B test.
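Reading results properly also means checking that a difference between Test and Control is larger than chance would explain. A standard approach is a two-proportion z-test; here’s a minimal sketch with illustrative numbers (the conversion counts are made up for the example, and the normal approximation assumes reasonably large samples):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: did B's conversion rate differ from A's?

    Returns the z statistic and an approximate two-sided p-value.
    Illustrative sketch only; assumes large enough samples for the
    normal approximation to hold.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)      # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))    # two-sided p-value
    return z, p_value

# Hypothetical results: Control converted 200 of 5,000; Test 250 of 5,000
z, p = two_proportion_z(conv_a=200, n_a=5000, conv_b=250, n_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 here, so B's lift looks real
```

A p-value below your pre-chosen threshold (0.05 is conventional) suggests the variant’s lift isn’t just noise — which is also why declaring a winner on a handful of conversions, or peeking and stopping early, leads to false positives.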
Other considerations
You’ll want to take a systematic approach to your test plan, working through every element of the conversion funnel. For example, increasing click-through on your email campaign doesn’t necessarily improve the conversion rate on your website once visitors land. So once you’ve optimized click-through, consider testing different landing page concepts or on-site conversion elements. Improving every step in the journey will ultimately drive better overall performance for the business.
As you get some wins on the board and grow your A/B testing credibility, start expanding beyond the immediate, often tactical, quick wins into more strategic territory. Build a test roadmap that prioritizes testing based on a combination of strategic value and complexity of execution. This will organize your plan into what can be done immediately (high importance, low complexity), what should be tabled (low importance, high complexity) and what has longer-term requirements (high importance, high complexity). Those last tests may affect business processes or require more significant investment, but the long-term opportunity may be far greater.
It’s important to remember that your test isn’t a failure if the test hypothesis doesn’t beat your control. It simply means you need to either adjust the hypothesis and test again, or alter the execution. And conversely, if your test hypothesis does win, it doesn’t mean you’re done. There’s always room to optimize. Look for another challenger to take on the new champion and see if the metrics you’re using can be further improved.
With the availability of free testing tools on the market, there’s no reason to sit on the A/B testing sidelines. It’s much easier and cheaper to increase performance from existing customers and visitors, so get started optimizing their experiences across every interaction they have with you.
Need some help? LiftPoint Consulting’s team of professionals can get you started with strategy and implementation of your A/B testing initiative. Message me directly for more information.