5 Critical Mistakes When A/B Testing

A/B testing has become so common in the product management community that most mid-level product managers understand the basic concepts and know how to put them into practice. Yet when pressed on how to push A/B testing to its limits and really move the needle for the business, most product managers fall flat. That gap is dangerous: an optimizer watching the wrong metrics will make careless decisions for the company.

As someone who has been doing this for several years, I have made mistakes and watched others make the same ones. So below are what I consider the top five critical mistakes in A/B testing.

1. Not A/B testing the application

You might be surprised to hear that most A/B testing happens at the top of the funnel. The obvious candidates are home pages, landing pages, and product pages. Those are well worth testing, but testing shouldn't end there: you want to test every step of the funnel, from the top down through the bowels of your application where customers perform critical activation and engagement activities. If a customer isn't activating or engaging, chances are they're not paying, so why not A/B test those experiences too?

2. Watching conversion, but not RPV (revenue per visitor)

A/B testing is not all about conversion. A basic conversion metric (conversions divided by visitors) is fine for testing creatives on a homepage or a registration event, but it is useless on its own when testing the purchase process. Especially when testing new products and pricing, you may increase conversion while lowering your revenue per visitor (RPV). The obvious case: lower the price and you may increase volume, but not sell enough extra units to make up the difference. This is where following RPV is critical in A/B testing.
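To make the conversion-versus-RPV gap concrete, here is a minimal sketch with made-up numbers for a hypothetical price test (variant B drops the price from $50 to $35):

```python
def conversion_rate(purchases, visitors):
    """Fraction of visitors who purchased."""
    return purchases / visitors

def revenue_per_visitor(revenue, visitors):
    """Total revenue divided by all visitors, converters or not."""
    return revenue / visitors

# Hypothetical test arms: B converts better at the lower price,
# but doesn't sell enough extra units to make up the revenue gap.
a = {"visitors": 10_000, "purchases": 300, "price": 50.0}
b = {"visitors": 10_000, "purchases": 380, "price": 35.0}

for name, arm in (("A", a), ("B", b)):
    cr = conversion_rate(arm["purchases"], arm["visitors"])
    rpv = revenue_per_visitor(arm["purchases"] * arm["price"], arm["visitors"])
    print(f"{name}: conversion={cr:.1%}  RPV=${rpv:.2f}")
# B wins on conversion (3.8% vs 3.0%) but loses on RPV ($1.33 vs $1.50).
```

A conversion-only dashboard would call B the winner here; the RPV column tells the opposite story.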

3. Not paying attention to Lifetime Value

Beyond not watching revenue per visitor (RPV), teams often watch that metric but fail to measure a change's impact on lifetime value (LTV), as reflected in average revenue per user (ARPU). For example, say you raise your prices by 20% and 24-hour revenue goes up, but because the monthly cost is now too high, customers churn out sooner. Your RPV looks higher, yet your ARPU goes down. You need to watch this metric as well.
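A rough sketch of the price-increase example above, with made-up numbers and a simplifying assumption that churn is a constant monthly rate (so mean customer lifetime is 1 / churn months):

```python
def lifetime_revenue(monthly_price, monthly_churn):
    """Expected revenue per customer over their lifetime,
    assuming a constant monthly churn rate (mean lifetime = 1 / churn months)."""
    return monthly_price / monthly_churn

# Hypothetical price test: a 20% increase lifts short-term revenue,
# but doubles monthly churn because the plan now feels too expensive.
before = lifetime_revenue(10.00, 0.05)  # $10/mo at 5% monthly churn
after = lifetime_revenue(12.00, 0.10)   # $12/mo at 10% monthly churn
print(f"before: ${before:.0f}, after: ${after:.0f}")
```

The first 24 hours of revenue look better at $12, but each customer is worth far less over their lifetime, which is exactly the effect a short test window hides.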

4. Not allowing the test to run long enough

This is a fun one, and very common. A product manager or marketer watches an A/B test for a day, sees that it is winning (or losing), and turns the test off or rolls it out to 100%. The truth is, you need to see at least one to two weekly cycles to understand how customer behavior changes within the week. Most A/B testing tools let you follow the metrics on a daily basis to see whether one experience wins or loses consistently. Declaring a winner too soon can have disastrous effects on revenue over time.
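Here is a toy illustration, with invented daily conversion rates, of how a day-one verdict can flip once a full weekly cycle is in: the variant beats control on weekdays, but control's weekend traffic converts far better.

```python
# Hypothetical daily conversion rates over two weeks (Mon-Sun, twice).
# Variant B wins every weekday, but control A spikes on weekends.
daily_a = [0.030, 0.031, 0.030, 0.029, 0.030, 0.050, 0.052] * 2
daily_b = [0.034, 0.035, 0.034, 0.033, 0.034, 0.035, 0.034] * 2

day_one_winner = "B" if daily_b[0] > daily_a[0] else "A"

mean_a = sum(daily_a) / len(daily_a)
mean_b = sum(daily_b) / len(daily_b)
full_cycle_winner = "B" if mean_b > mean_a else "A"

print(day_one_winner, full_cycle_winner)  # prints: B A
```

Stopping on day one ships B; waiting two full weeks shows A ahead. (In a real test you would also check statistical significance before calling it, which this sketch deliberately omits.)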

5. Not properly segmenting customers

Segmenting your A/B tests is a key way to learn about differences in customers' buying patterns. What works for a female customer may not work for a male one. What works for someone older than 50 might not work for someone in their 20s. By watching how an A/B test performs across various customer and technical segments, you may (and should) end up with different experiences for different customers.

For example, your A/B test experience could lose by 5% overall. But what might actually be happening is that it's winning by 20% for people on the east coast and losing by 25% for people on the west coast. By understanding the segments, then targeting the right experience to the right audience, you'll end up ahead overall.
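Reworking the coastal example with made-up visitor and conversion counts shows how the blended number hides both segment stories:

```python
# Hypothetical segment-level results for one A/B test.
segments = {
    "east": {"visitors_a": 5_000, "conv_a": 150, "visitors_b": 5_000, "conv_b": 180},
    "west": {"visitors_a": 5_000, "conv_a": 200, "visitors_b": 5_000, "conv_b": 150},
}

def lift(s):
    """Relative change in conversion rate, variant B vs control A."""
    rate_a = s["conv_a"] / s["visitors_a"]
    rate_b = s["conv_b"] / s["visitors_b"]
    return (rate_b - rate_a) / rate_a

# Blend the segments to get the overall (unsegmented) view.
overall = {
    "visitors_a": sum(s["visitors_a"] for s in segments.values()),
    "conv_a": sum(s["conv_a"] for s in segments.values()),
    "visitors_b": sum(s["visitors_b"] for s in segments.values()),
    "conv_b": sum(s["conv_b"] for s in segments.values()),
}

print(f"overall lift: {lift(overall):+.0%}")  # about -6%
for name, s in segments.items():
    print(f"{name} lift: {lift(s):+.0%}")     # east +20%, west -25%
```

The unsegmented report says "kill the variant"; the segmented report says "ship it east, keep control west", which is the winning move.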
