Looking for Best Practices? Try Best Principles.

Why implementing hyped frameworks won’t get you far

Jens-Fabian Goetzmann

The product development world is enamored with best practices and frameworks. Hardly a week goes by without another set of processes and practices being discussed and at times even hyped: from the Spotify model to Basecamp's Shape Up, from Scrum to SAFe, from Design Sprints to RICE prioritization, people are looking for plug-and-play solutions that will make them more effective and efficient at designing and shipping products that benefit their customers and their business.

However, it is a delusion to think that implementing any of these best practices and frameworks from top to bottom will be a silver bullet that makes you more effective, lean, or agile. The reasons for this are manifold. You or your team might lack the right mindset or experience to make the practice a success. Your organizational context might be completely different, from seniority to decision-making authority to structure. Your culture might be incompatible with the chosen practice. The practice might require a different industry or type of product than yours. In short, there are many reasons why implementing a given practice might fail to realize the results you are hoping for.

Additionally, adding best practices on top of what you are already doing is guaranteed to slow you down unless you also take something away, and these best practices often don't detail what the best-in-class companies following them aren't doing.

Trying to implement best practices like this can even lead to practice theater, where a misplaced emphasis on implementing some framework du jour means paying more attention to how closely you adhere to the practice and less to whether the practice improves organizational effectiveness and efficiency. This often happens when leaders try to impose an (often overhyped) practice top-down without true buy-in from the organization.

So, are these best practices meaningless, then? Should we ignore them? Of course not. These practices do show how others have succeeded in producing great outcomes. We should do the best we can to learn from them to increase our own effectiveness and efficiency. It’s just that it requires a bit of interpretation and adaptation.

Often, the best way to adapt best practices is by elevating them to the level of the underlying principles, understanding how these principles are applied in the practice, and then considering how they might be applied in a different — your own — context.

Let's look at some examples of practices big and small, understand the underlying principles, and see what context they might require.

Let's start with an easy one: the sprint retrospective, which is part of Scrum. At the end of every development sprint, the team conducts a retrospective in which they identify ways their way of working could improve and decide how to implement those improvements in the next sprint.

The underlying principle is continuous improvement: not being content with the status quo of processes and always looking for ways to make them more efficient and effective. Continuous improvement requires a growth mindset (believing that one can get better through practice and adaptation) and a culture of trying out new things. The nice thing about the retrospective (and continuous improvement in general) is that it can be implemented on a very localized level: an individual team can, to a certain extent, improve the way they work within the given set of organizational constraints without asking for permission, even if the required culture and mindset aren't a strong suit of the broader organization.

Another example of a best practice is validating the impact of product changes through A/B testing. A/B testing is often seen as the "gold standard" for evidence-based decision making about product changes. In this practice, a change is introduced only to a random subset of the user population, and the change in their behavior is measured and compared against the rest of the user population, which forms the control group. However, A/B testing is not without its issues and might simply be impossible to implement for a given product or in a given context.
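To make the mechanics of that comparison concrete, here is a minimal sketch, not from the original article: a standard two-proportion z-test on hypothetical conversion numbers, which is one common way to judge whether a variant's change in behavior is more than noise.

```python
from statistics import NormalDist


def ab_test_significance(control_conv, control_n, variant_conv, variant_n):
    """Two-proportion z-test: compare the variant's conversion rate
    against the control group's. Returns (observed lift, two-sided p-value)."""
    p1 = control_conv / control_n
    p2 = variant_conv / variant_n
    # Pooled conversion rate under the null hypothesis of no difference
    pooled = (control_conv + variant_conv) / (control_n + variant_n)
    # Standard error of the difference in proportions under the null
    se = (pooled * (1 - pooled) * (1 / control_n + 1 / variant_n)) ** 0.5
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p2 - p1, p_value


# Hypothetical numbers: 100/1000 conversions in control, 150/1000 in variant
lift, p_value = ab_test_significance(100, 1000, 150, 1000)
```

In this made-up example, the lift is 5 percentage points and the p-value falls well below the conventional 0.05 threshold, so the variant's effect would be judged statistically significant. The point of the sketch is only to show what "measured and compared against the control group" means in practice; real experimentation setups also need to worry about sample size, multiple comparisons, and novelty effects.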

Can elevating the practice of A/B testing to the level of principles help, then? The underlying principles of A/B testing are evidence-based decision making and validating the outcomes of product changes. Implementing these principles requires a mindset of outcome orientation (as opposed to output orientation) as well as an understanding of the inherent uncertainty of product development. The difficulty is that these principles require a culture that accepts failure. If the context works against this, the best bet is to start small and base adaptations of the practice on the principle of evidence-based decision making, which resonates in a lot of contexts (after all, few managers would argue for making decisions against existing evidence). Start collecting evidence, qualitative or quantitative, for the outcomes that product changes produce, and use that evidence to influence product decisions. Over time, the appetite for more thorough validation might grow.

A last example is the (Google Ventures) Design Sprint. There has been quite a bit of hype around this practice, and it is therefore particularly prone to practice theater, where a sprint is conducted more for show than for results. This might be the case if, for example, the necessary decision makers can only make themselves available for a very limited time, access to real customers is impossible, or the functions involved are limited to those that regularly work together anyway.

What are some underlying principles of the design sprint that might help adapt this practice to your context? Given that the sprint is quite a complex practice, this is hard to summarize. Some examples might be blocking substantial focus time, speed to validation (validating very rough prototypes), validating with real customers, cross-functional collaboration, and involving the decision maker in the ideation process. All of these might face barriers based on mindset, organizational context, or culture. If such impediments are identified, it is probably better to implement some of these principles in a way that fits the context rather than running a full-blown design sprint while acting against some of its core principles.

In conclusion, trying to implement best practices and frameworks by blindly following what others have done, without regard for your specific context, is unlikely to realize the outcomes you are hoping for. At best, you distract the organization with practice theater; at worst, you add processes and complexity without removing any, slowing the organization down further.

Instead of following best practices to the letter, regardless of whether you are curious about a practice or are being pushed by someone higher up, try to elevate the conversation to the principles underlying the practice. Then, discuss what these principles might require in terms of the mindset of individuals and teams, what structural requirements the principles have, and what elements of the company culture might support or impede them. Only after having had that discussion should you consider how the principles might be applied in your specific context; the implementation might look quite different from the original best practice.

That is not to say, of course, that you shouldn’t consider changing some aspects of mindset, structure, and organizational culture to better fit with what is considered best practice. Depending on the general willingness to adapt as well as the level of the organization at which this change is initiated, some of the aspects might be malleable. However, especially aspects of mindset and culture can only be changed slowly and bit by bit. Expecting too big an adjustment from the organization will most likely just lead to outright rejection of the practice.

A practice that is based on the same principles as one of the "best practice" case examples, but adjusted to the organizational context and meeting the organization where it currently is, is much more likely to yield the results you are hoping for. It won't be a silver bullet, and the adjustment will take time, but thinking that there exists a framework or best practice that is said silver bullet is a delusion anyway.

I hope you found this article interesting. If you did, feel free to follow me on Twitter where I share thoughts and articles on product management and leadership.

Jens-Fabian Goetzmann

Head of Product at RevenueCat; previously at 8fit, Yammer, BCG.