Outcomes Over Outputs: The Long-Term Challenge
When you can’t immediately measure everything you’d like to improve
Focusing on outcomes over outputs is a product management mantra that has been getting a lot of attention recently. At its core, the idea is extremely powerful. Instead of focusing on product development “velocity” (how fast product and engineering teams can design new features and ship them), we should focus on outcomes for the customer and/or the business. For example, instead of managing a team by how many features it has shipped, we should manage it by how much it improved certain outcomes (for example, user engagement or retention).
The shift toward SaaS products and apps with deep instrumentation, and the ability to A/B test and experiment, has made this focus on outcomes much more feasible. Goal-setting frameworks like Objectives and Key Results (OKRs) are also most effective when the goals are outcomes, not outputs.
Focusing on outcomes over outputs empowers the team: instead of just “cracking the whip” and demanding ever faster delivery, it allows the team to own a problem and find the best possible solution. It’s “work smarter, not harder” for product teams.
However, the real test of focusing on outcomes over outputs is how we deal with desired long-term outcomes that are hard to influence or measure within a goal-setting period (e.g., a quarter). For example, it’s easy to focus on “increasing conversion rates on a landing page” over “shipping improvements to the landing page”. It’s much harder to focus on “increasing long-term subscriber retention” over “shipping features that we think will increase long-term retention”. It’s of course possible to measure, let’s say, one-year retention, but by definition, it takes one year to measure it. That time frame is too long to wait for A/B test results, or to feed any information back into your development cycle.
The best we can do to address this issue with data is to establish leading indicators that predict the long-term outcomes we want to achieve. We can do that analytically, by looking at historical data, or less scientifically, by reasoning about user behavior. We can then manage by measuring these leading indicators (such as short-term engagement as a leading indicator of long-term retention).
The biggest risk of this approach is that we start “hacking” the short-term outcomes with things that hurt the long-term ones. For example, sending lots of push notifications will likely increase short-term engagement but hurt long-term retention (which was our originally desired outcome). This risk is especially pronounced if the team’s performance is evaluated based on how much impact it has had on these metrics.
Avoiding short-termism and “hacking” leading indicators is much more an art than a science. When managing by leading indicator, we should always ask ourselves during the design and development process: “does this risk hurting the long-term objectives we are trying to achieve?” We can also use qualitative research to help identify and assess these risks (by understanding how users and customers react to the changes we are planning to make). In the end, though, the team and management will need to make judgment calls that are no more than informed guesses, whose results will often be known only months later.