You Can’t A/B Test Your Way to Greatness
A/B testing is an awesome tool, but it has severe limitations
First things first: I love A/B testing. For those unfamiliar with the method, A/B testing involves changing a product (or its packaging, messaging, pricing…) and then measuring how the changed version performs, in terms of some measurable KPIs, compared to the original version (the “control”). The test works by randomly assigning the population of users (or customers, or visitors) to either the control or the test group, and then measuring how the two groups behave differently. Because the allocation is random, the two groups are very likely to look similar in their characteristics (e.g., demographics), so any measurable difference between them must be caused by the change made to the product.
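To make this concrete, here is a minimal sketch of the two building blocks described above: a deterministic 50/50 random assignment and a comparison of the two groups’ conversion rates with a standard two-proportion z-test. The function names and the sample numbers are my own illustration, not from any particular testing platform.

```python
import hashlib
import math

def assign_group(user_id: str) -> str:
    # Deterministic 50/50 split: hashing the user id means the same
    # user always lands in the same group, while the population as a
    # whole is divided effectively at random.
    digest = hashlib.md5(user_id.encode()).digest()
    return "test" if digest[0] % 2 else "control"

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Compare conversion rates of the control (a) and test (b) groups."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of "no difference".
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical example: 1,000 users per group, 100 vs. 130 conversions.
z, p = two_proportion_z_test(100, 1000, 130, 1000)
print(f"z = {z:.2f}, p-value = {p:.4f}")
```

With these hypothetical numbers the difference (10% vs. 13% conversion) comes out statistically significant at the usual 5% level, which is exactly the kind of verdict an A/B test delivers: did the change move the KPI, and can we trust that the movement is not noise?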
A/B testing is based on the randomized controlled trials used in medicine to establish the effectiveness of a new treatment. Obviously, we hold medical trials to very high standards, and A/B testing has therefore been dubbed the “scientific method of product development”.
It is an inconvenient truth of digital product development that most of our ideas fail. We therefore have to validate that our ideas have the impact we hoped for, and catch those that fail to live up to our hopes. A/B testing is absolutely a viable way to do that.
If the alternative is to just ship our ideas without validating that they have the expected impact, then I would always argue in favor of A/B testing. In fact, I believe that for a lot of changes, running an A/B test to understand the impact (positive and negative) as accurately as possible is a great idea.
However, A/B testing has its limitations and risks. Most importantly: You can’t A/B test your way to greatness.
Building something great and novel requires creative destruction, overthrowing some aspects of the status quo that are no longer needed or valid and establishing new fundamental assumptions. For example, the iPhone — possibly the…