When Experimentation is Not the Answer
Recently I've been doing some contract work, and in every engagement I'm posed a similar problem: "Our conversions are tanking. Why?"
Most of you would guess correctly if I told you I suggest they adopt a culture of experimentation. But in the rush to test and implement, there's a critical yet often overlooked question: is the site even ready for testing?
Sometimes the issue isn't which version of an experience is better, or how best to address a UX friction point. Sometimes the real problem lies buried in the website's foundation: bugs, glitches, and errors that undermine the very principles of good user experience. Most of these go undetected because the site performs well for the business's key stakeholders, but has never been reviewed and tested across the range of devices, operating systems, and browser versions customers actually use.
Every company I know runs some form of QA before going live. However, most forget to do routine, ongoing QA to ensure that, across rounds of releases, everything still works as originally intended.
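To make this concrete, here is a minimal sketch of what one routine, repeatable check might look like: a link audit that can run after every release to catch regressions launch-time QA would have caught. The class and function names are illustrative, not from any particular QA tool.

```python
from html.parser import HTMLParser

class LinkAuditor(HTMLParser):
    """Collects anchor targets from a page so each release can be re-checked."""
    def __init__(self):
        super().__init__()
        self.broken = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            # An empty or placeholder href is exactly the kind of defect
            # that passes the initial launch QA, then silently reappears
            # in a later release and goes unnoticed by stakeholders.
            if not href or href in ("#", "javascript:void(0)"):
                self.broken.append(href)

def audit(html: str):
    """Return the list of broken/placeholder link targets found in the page."""
    auditor = LinkAuditor()
    auditor.feed(html)
    return auditor.broken
```

Wired into a scheduled job against key templates (home page, product page, checkout), even a check this small turns "perpetual QA" from an intention into a habit.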
I've been testing for some time now, and I am always shocked at the number of bugs and defects we find. When we think of bugs, we often think of minor inconveniences: a button that doesn't work, or a page that takes a few extra seconds to load. But in reality, bugs can have massive consequences, costing businesses reputational damage and millions in missed revenue. If you think it's only the small end of town, you would be mistaken.
According to a Google study, 53% of mobile users abandon a site if it takes more than 3 seconds to load. Slow-loading pages, broken links, and inconsistent functionality might seem trivial but have a profound impact on key metrics like bounce rate, session duration, and conversion rates.
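A load-time budget like the 3-second figure above can be checked automatically rather than eyeballed. Below is a hedged sketch: the function name and the injectable `fetch` callable are my own illustration, and this measures response time only, not full page render, so treat it as a coarse first check.

```python
import time

def within_budget(fetch, budget_seconds=3.0):
    """Time a page fetch and report whether it beat the budget.

    `fetch` is any zero-argument callable that loads the page,
    e.g. lambda: urlopen(url).read(). The 3.0 second default is
    the abandonment threshold from the Google study cited above.
    """
    start = time.monotonic()
    fetch()
    return (time.monotonic() - start) <= budget_seconds
```

A real audit would use a tool that measures full render in a real browser, but even a server-response timer run on a schedule will flag the slow regressions before your bounce rate does.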
Experimentation works best when the environment is controlled, stable, and optimised. Running experiments on a buggy website introduces a serious risk: you can no longer tell whether your results reflect the variant you are testing or the defect lurking underneath it.
It’s always tempting to dive headfirst into experimentation and fix the dropping conversion rate. But a site riddled with bugs is like a house built on a shaky foundation—no matter how beautifully you decorate it, it’s bound to crumble.
Fixing bugs, optimising performance, and ensuring a stable environment are not just technical necessities; they are business imperatives. Only when the foundation is solid can experimentation truly thrive, delivering meaningful insights and driving sustainable growth.
In addition to the potential bugs, I would add that checking whether the right events and parameters are correctly fired is worth the time before testing intensively, and that the collected data then flows to the right tools. It's surprising how many issues a review like that can surface.
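The check the commenter describes can be sketched in a few lines. Everything here is illustrative: the event names, the required parameters, and the function name are hypothetical, not any analytics vendor's actual schema or API.

```python
# Hypothetical expected schema: each event name maps to the parameters
# the downstream reporting tools need in order to attribute revenue.
EXPECTED_EVENTS = {
    "add_to_cart": {"item_id", "value", "currency"},
    "purchase":    {"transaction_id", "value", "currency"},
}

def missing_parameters(fired_events):
    """Given (event_name, params_dict) pairs captured from the site,
    return (event_name, missing_params) for any event that fired
    without the parameters the reporting tools expect."""
    problems = []
    for name, params in fired_events:
        required = EXPECTED_EVENTS.get(name)
        if required is None:
            continue  # event not under audit
        missing = required - set(params)
        if missing:
            problems.append((name, sorted(missing)))
    return problems
```

Run against a sample of real sessions before a testing programme starts, a check like this catches the silent tracking gaps that would otherwise masquerade as a conversion problem.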
I'm a simple man: I see a Nima Yassini article drop... well, you have my attention. It feels like a lot of issues labelled "CRO" aren't really CRO (as you've outlined). But the funny thing is, there is still a "foundation" of data collection that should be in place whether you're A/B testing or not. That solid foundation of data, if set up properly, would have given clear signals that the captcha was the source of the drop-off, and in the Amazon example a clear AOV signal would have let you know something weird was happening.