Ten Conversion Testing Questions Answered
We talk a lot about conversion optimization tactics on Get Elastic. While they may be backed up by research, case studies or even what may be considered common sense, they can always be punctuated by one phrase: "You should test that."
I recently asked Chris Goward, author of You Should Test That, some hard-hitting conversion optimization testing questions -- answers below. Chris is a pioneer of conversion rate and landing page optimization and the founder of WiderFunnel, a testing consultancy that's worked with brands like Google, Electronic Arts, SAP, Shutterfly and BabyAge.com.
GetElastic: The beauty of A/B testing is to slaughter sacred cows and bust through the myth of best practices. But, in your experience, are there any true "best practices" that hold true across tests? Is there anything that's safe to call a best practice for ecommerce?
Chris: Well, it’s a safe bet that you should have some sort of checkout page...
I’m joking (a little) but, in all seriousness, I think there are some pretty well-founded “better” and “worse” ways to do things, but very few marketing and UX rules that should be enshrined as “best practices.”
I believe we are still in the early days of testing new e-commerce experiences. Innovative designers and UX architects are coming up with new experiences all the time. Some are terrible ideas and others are going to work better than our current standards. The beauty of the scientific approach is that you can kill the losers much more quickly and definitively without having to worry about them turning into sacred cows. The pace of business is too fast today to allow them to develop.
GetElastic: Aside from checkout and the home page, what pages or elements on an ecommerce site have the highest conversion potential?
Chris: We’re seeing lots of great conversion rate and profit lift with A/B tests of category pages, landing pages, product detail pages (PDPs), search results templates, and site-wide elements like Persistent Calls to Action (PCTAs).
The checkout is a purely transactional area that is often not the highest priority, and the home page is often too much of a political hot potato. But we’ll test there if we have good organizational support. We just finished home page tests for two different clients last week, with big double-digit lifts on each, so they can be fun too.
GetElastic: There's some debate around whether you should aim for a high confidence level, or accept lower confidence to reduce test length. What's your opinion on this tradeoff?
Chris: That debate happens when dealing with low traffic websites. If there isn’t a traffic constraint, there’s no downside to aiming for the 95% confidence level. But, not every website has millions of monthly visitors, so tradeoffs are needed.
This should be a business decision that weighs the pros and cons: higher certainty at a 95% confidence level versus more tests at 80%, for example.
First, people with lower traffic sites should follow some basic rules for getting faster results:
- Limit the number of variations per test to 3-4
- Test dramatic differences
- Don’t even think about multivariate testing
- Test important insights on higher traffic pages, then validate with a simple A/B on lower traffic areas
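As a rough illustration of why those rules shorten tests, here is a back-of-envelope sample-size sketch. The formula is a standard rule of thumb for roughly 95% confidence and 80% power, and the numbers are hypothetical, not from the interview:

```python
def visitors_needed(baseline, relative_lift, variations=2):
    """Rough total traffic for a test at ~95% confidence / 80% power,
    using the common per-arm rule of thumb n ≈ 16·p(1−p)/d²."""
    d = baseline * relative_lift  # absolute difference to detect
    per_arm = 16 * baseline * (1 - baseline) / d ** 2
    return round(per_arm * variations)

# Hypothetical 2% baseline conversion rate:
subtle = visitors_needed(0.02, 0.10)    # detect a modest 10% relative lift
dramatic = visitors_needed(0.02, 0.50)  # detect a dramatic 50% relative lift
print(subtle, dramatic)  # the dramatic change needs ~25x less traffic
```

The quadratic term in the denominator is the whole story: halving the effect size you want to detect quadruples the traffic you need, which is why dramatic differences and fewer variations get low-traffic sites to answers faster.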
But when you’re already following the basic principles for low-traffic testing, you can also experiment with lower confidence levels.
But, you should be very cautious about doing that. The only thing worse than not testing is getting unreliable results from your tests.
The 95% confidence level will give a true result 19 times out of 20. An 80% confidence level will only be true 16 times out of 20. That’s certainly better than a gut-feeling or HiPPO approach, but it’s quite a bit less accurate.
At WiderFunnel, we use a 95% confidence level as our standard benchmark for most of our clients. There are exceptions where we need to stop tests early if they’re taking too long and there’s an opportunity cost, but then we take the results as directional, not statistically significant.
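For readers who want to see the tradeoff concretely, here is a minimal sketch using a standard two-proportion z-test. This is my own illustration with hypothetical numbers, not WiderFunnel's methodology:

```python
from math import erf, sqrt

def ab_test_p_value(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided z-test on the difference between two conversion rates."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # two-sided p-value via the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical results: 2.0% vs 2.4% conversion on 10,000 visitors each.
p_val = ab_test_p_value(200, 10_000, 240, 10_000)
# p ≈ 0.054: a "winner" at the 80% confidence level (p < 0.20),
# but not at 95% (p < 0.05).
```

The same data declares a winner or an inconclusive test depending purely on the confidence level chosen, which is why Chris frames it as a business decision rather than a statistical one.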
GetElastic: Many CRO experts believe rotating banners on a home page is a waste of space. Do you agree?
Chris: I was the first to come out publicly against rotating home page carousels in Oct 2011. Since then, others have jumped on the bandwagon based on our data.
For us at WiderFunnel, it’s less about opinion and entirely about the tests we’ve run. We love running A/B split tests against home page rotators! They often lose badly.
GetElastic: What in your opinion is the most underrated, OR overrated metric in CRO?
Chris: You may be surprised to learn that I hate the term “conversion rate optimization” even though that is the industry we’ve pioneered. The problem is that it focuses attention on a metric rather than an outcome. We as an industry should be talking about the purpose rather than the technical details. The outcome should be “marketing optimization” or “scientific marketing” with the conversion rate just being a part of the process.
The most important metric is different for each business and talking about a “conversion rate” implies that it’s a simple, single metric. That’s not true. For e-commerce, revenue and profit are much more important than simple purchase conversion rate. For some businesses, revenue per visitor is most important. For others, Return on Ad Spend (ROAS) may be, or Average Revenue Per User (ARPU).
So, the most important metric is unique to the business. But, if you’re looking for an overrated metric, it’s bounce rate. Your bounce rate doesn’t matter.
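Chris's point about revenue per visitor versus raw conversion rate is easy to demonstrate. These figures are hypothetical, for illustration only:

```python
# Hypothetical A/B test results: variation B converts better
# but discounts heavily, so average order value drops.
variations = {
    "A (control)":  {"visitors": 5000, "orders": 100, "revenue": 9000.0},
    "B (discount)": {"visitors": 5000, "orders": 130, "revenue": 8450.0},
}

for name, v in variations.items():
    conversion_rate = v["orders"] / v["visitors"]
    revenue_per_visitor = v["revenue"] / v["visitors"]
    print(f"{name}: {conversion_rate:.1%} conversion, "
          f"${revenue_per_visitor:.2f} revenue/visitor")
# B wins on conversion rate (2.6% vs 2.0%) yet loses on
# revenue per visitor ($1.69 vs $1.80).
```

Declare the winner on conversion rate alone and you ship the variation that makes less money, which is exactly the trap of treating "conversion rate" as the single metric.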
GetElastic: What elements of a mobile website or app are most important to test?
Chris: Optimization is even more important for mobile than for full-size sites. A mobile user’s frustration threshold is much lower, and they’ll exit even faster than they would on a desktop.
Think about their experience: they’re out on the road with many distractions, noises, people talking and obstacles to avoid. They may even be driving! (Not recommended.)
They also don’t have problem-solving options as easily available. For example, I tried to buy an iPad mini yesterday on Apple’s Store app on my iPhone. I got all the way to the end of the process and hit an error message on the credit card number. There was no reason given that I could see, and I tried three times before giving up. On a desktop, I would have tried other options or opened a chat window to figure out the problem, but that wasn’t available on mobile.
So, error messages have to be thought through more carefully, all user interface elements must be obvious and easy to use, and messaging must minimize distraction and maximize clarity. And, of course, load times must be minimized.
We’ve found that the LIFT Model works just as well for improving mobile sites and apps as for the desktop experiences we developed it for. At a recent conference, I presented a session on mobile optimization and showed how to evaluate a mobile landing page for “iphone cases” as an example. Here are a couple of slides from that mobile conversion optimization presentation.
GetElastic: What is the biggest mistake a marketer could make in an A/B or MVT test for an ecommerce site?
Chris: There are many mistakes I see commonly. Here are a few of the biggest:
- Starting without a proper process and losing organizational support
- Optimizing for the wrong goals, like add to cart or reducing bounce rate
- Not tracking average revenue per variation
- Focusing only on landing pages and checkout and ignoring all the gold in the middle informational area of the funnel (More about prioritizing testing here.)
- Using before & after (or pre & post) testing instead of controlled, scientific testing
GetElastic: What is your favorite non-testing tool for improving CRO?
Chris: Eye tracking sucks. I agree with Jared Spool, who said:
Turns out that eye trackers are the most expensive Ouija Boards available to science.
— Jared M. Spool (@jmspool) April 6, 2013
We look for tools that help our Conversion Strategists generate better hypotheses.
- Eye trackers don’t.
- Click heatmap trackers like CrazyEgg sometimes do, in certain situations.
- User testing often does and we use that regularly as part of our conversion rate optimization system.
- Website popup surveys sometimes do, when they reveal UX problems.
- Post-purchase surveys often do.
We’re constantly evaluating new tools that come out and are optimistic about some of the interesting new ones we’re seeing today. I think we’ll see some really interesting CRO support tools in the next couple years.
GetElastic: Beyond conversion optimization testing, what do you think is or could be the next big trend for improving online sales?
Chris: The best thing about scientific marketing (or conversion optimization) as a framework for decision-making is that it can hold all future innovations. Every decision can be approached as a hypothesis to test, whether it’s a website UX challenge or a business model concept.
More and more people from all business areas are talking about the scientific approach to business. For example, Eric Ries’ The Lean Startup is an awesome book about testing business ideas. He’s essentially promoting conversion rate optimization, although he doesn’t use that term.
So, whether the next big thing is HTML5 mobile sites, Responsive Web Design, Big Data, Same-day Delivery, or Google Glass browsing, the ideas can be tested.
GetElastic: Where can our readers find you speaking on conversion optimization this year?
Chris: So far, here’s where I know I’ll be speaking:
- Mn Search in Minneapolis, May 29
- Conversion Conference Chicago, June 12 and Boston, Oct 1
- SES Toronto, June 14
- Shop.org Huntington Beach, July 17 (where I know you will be presenting a great session too, Linda!)
- Content Marketing World Cleveland, Sept 11
- eMetrics Boston, Oct 1
Get Elastic readers, if you want to boost your conversion improvement skills, check out You Should Test That and the fine conferences above (and please subscribe to Get Elastic if this is your first time here).