Split Testing is For Losers

I used to spend weeks debating with team members and agencies over how a page should function to best convert visitors into leads (form fill-outs). We've all been there. Some concepts, like keeping the call to action clear, simple, and singular, need no further explanation, but others will cause rifts as personal opinions and results from sample-size-of-one tests take over. The best way to combat an opinion is with cold, hard facts.

Ready. Set. Wait.

It never fails: one round of reviews turns into five, as each stakeholder separately has "just one more small change." You thought the page was launching two weeks ago, but now everyone is stuck debating whether the form slides in from the left or the top, or whether the CTA should be "Download Now" or "Download Here." In effect, you're doing blind conversion rate optimization (CRO): you're attempting to predict how the page will perform without any data whatsoever. That isn't necessarily a bad thing (we have to start somewhere), but it is bad when it holds up the launch of the page, because I can tell you this: "Download Now" will always convert better than "Page Not Found." You can end the debates with a single sentence: "We'll test that."

If you're not A/B testing, you're not optimizing.

Split testing (a.k.a. A/B testing or multivariate testing) is the only real way to identify the losing ideas. Everyone has an opinion, and a split test gives you the opportunity to test those opinions as hypotheses. Blind conversion rate optimization is blind because there's no data to back it up: you had an opinion or a hypothesis, but you didn't bother to test it; you skipped right past GO. Wouldn't it be more powerful if you could say, "We saw a 3% improvement in conversions by changing the button text from 'Download Now' to 'Free Trial'"?
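For the curious, here's what "we'll test that" can look like in practice. This is a minimal Python sketch of a two-proportion z-test, one common way to check whether the gap between two variants is more than noise; the visitor and conversion counts below are made up for illustration, not results from any real test.

```python
import math

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided z-test for the difference between two conversion rates."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    # Pool both variants to estimate the standard error under the
    # null hypothesis that the true conversion rates are equal.
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    std_err = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / std_err
    # Two-sided p-value from the standard normal CDF, written with erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return rate_b - rate_a, p_value

# Hypothetical traffic split: 5,000 visitors see each variant.
# Variant A ("Download Now") converts 200; variant B ("Free Trial") converts 260.
lift, p = two_proportion_z_test(200, 5000, 260, 5000)
print(f"Absolute lift: {lift:.1%}  p-value: {p:.4f}")
# A p-value below the usual 0.05 threshold suggests B's lead isn't just noise.
```

The point isn't the statistics; it's that the debate moves from "I think" to a number anyone on the team can check.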

"Best Practice" doesn't mean sure thing.

I've read a lot of best practice guides for landing pages. The good ones always give the hypothesis they started with and the results of the split test; in other words, they're backed by data. But that doesn't mean you should blindly adopt an idea without testing it yourself. Think of best practices as starting points and known hypotheses for your tests, not as absolute rules that must be followed. Each page is different, each audience is different. Your mileage may vary. Your mileage will likely vary.

So the next time someone talks to you about optimizations, the first question should be: how do we test that?
