Over the past 4 years I’ve met somewhere between 500 and 1000 conversion optimization practitioners. I meet more every week, and from every person I meet I try to learn a little something. I ask lots of questions. One of my favourite questions is this:
What percentage of your A/B tests are “winners”?
It seems simple enough. The ultimate KPI for any conversion optimization program is uplift. Without uplift, there is no measurable ROI from conversion optimization, and no tangible reason for management to take it seriously as a function. So you’d think that people in CRO – who spend all day looking at metrics and data – would know their own numbers, right? Yet in most cases, the people I talk to have only a rough idea of what their win rate is, and many don’t really know. (FYI: the reported win rates range from 20% to around 70%.)
I also ask about the cost of testing. Most people think about the cost of testing in terms of the wages for people involved in CRO, and the price of the tools they use.
I rarely hear about the less tangible factors like opportunity cost. For any online business, there’s only enough traffic to run a limited number of tests per month. Every testing slot you use is one that can’t be spent on another idea that might have a bigger impact. And every time you run a test where one or more variants underperform the control, you lose real revenue. It’s called “testing”, but there are real customer experiences and real cash at stake. These less obvious costs can be huge.
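To make that concrete, here’s a rough back-of-envelope sketch in Python. Every number in it – the traffic, the conversion rate, the order value, the size of the drop – is a hypothetical assumption for illustration, not data from any real program:

```python
# Back-of-envelope cost of a single losing A/B test.
# All numbers below are hypothetical, purely for illustration.

visitors_in_test = 20_000   # traffic committed to the test over its run
conversion_rate = 0.03      # baseline conversion rate of the control
avg_order_value = 80.0      # average revenue per conversion

# A 50/50 split sends half of the traffic to the variant.
variant_visitors = visitors_in_test / 2

# Suppose the variant underperforms the control by 10% (relative).
variant_drop = 0.10
lost_revenue = variant_visitors * conversion_rate * avg_order_value * variant_drop

# Opportunity cost: the same slot could have run an idea worth, say, +5%.
forgone_lift = 0.05
opportunity_cost = variant_visitors * conversion_rate * avg_order_value * forgone_lift

print(f"Revenue lost to the losing variant: ${lost_revenue:,.0f}")
print(f"Opportunity cost of the occupied slot: ${opportunity_cost:,.0f}")
```

Even with these modest made-up numbers, a single losing test costs a few thousand dollars – which is exactly the comparison the next paragraph makes.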
By comparison, whatever you’re paying for your Optimizely (or other) subscription is fairly insignificant. And yet it’s amazing how many people will happily run an A/B test on a hunch rather than pay a few bucks for some user testing (for example) to validate the idea before committing real traffic and revenue to it. That’s why I think CRO has grown steadily over the past 5 years but has never really seen “hockey stick” growth as a discipline. In most cases, testing programs aren’t run like a “business”, i.e. a deliberate process with carefully considered benefits and costs.
But what if they were?
If more teams ran their testing programs this way, I think conversion optimization would gain a lot more traction in the C-suite, and we’d see a huge increase in resources dedicated to testing. And wouldn’t that benefit all of us?