Interviewing Product Managers for Product Sense
Evaluating “hard” product skills
I’ve been involved in interview panels for a lot of PMs, and have set up hiring processes for product managers in two different startups. One of the critical components of the interview loop should be what I call the “Product Sense” interview, in which the candidate’s “hard” product management skills are evaluated: the ability to assess products, identify improvement opportunities, and reason about them and their tradeoffs.
Product management, overall, is a “soft skills” job — particularly in larger organizations. Product managers don’t write code or produce pixel-perfect designs — their job is predominantly to ensure that the team is moving in one direction, aligned with the company’s strategy. However, there are some “harder” skills in the discovery part of the product lifecycle, and those are what this interview is meant to test.
The interview is typically conducted as a “case study” exercise, in which the candidate plays through an abbreviated product discovery process for some product. In this article, I will share my current take on this interview, the questions I typically ask, and what sets apart bad, good, and great responses.
The case prompt
Before diving into the actual “product sense” part of the interview, a case prompt has to be set up. There are various ways to do this. Some people prefer prompting candidates with a specific opportunity for the company’s own product. Some use well-known third-party products. Others use physical examples, like “design an alarm clock for the blind”. Personally, I am not a big fan of the latter, since physical products would in reality be product-managed quite differently from digital ones.
My own preference for the case prompt is having the candidate list some apps that they either use regularly or have recently started using, and then I pick one of them as the example to investigate in the case study. This has a couple of advantages in my eyes: firstly, the candidate is guaranteed to be discussing a product they are familiar with, which means they don’t have to start from zero. Secondly, it lets me see how well the candidate differentiates between their own needs and use cases and those of the product’s broader audience.
Moreover, this question itself already gives me some signal about whether the candidate thinks like a great product manager. The worst response is a candidate who can’t come up with any apps that they use. Either the candidate truly doesn’t use technology much (in which case, why are you a tech product manager?) or, more likely, they don’t pay much attention to their own behavior and the products they use. Contrast this with the best responses, which highlight curiosity and the drive to understand what makes products work or not. There are different telltale signs for this: the candidate might tell you about some new products they have recently discovered. They may immediately link their list of apps to the problems those apps solve for them. Or they might say what they admire about a particular app they are using (or, conversely, what annoys them).
A side benefit of asking candidates for the apps they use is that I might learn about interesting products I didn’t know about before, which — as a curious product person myself — I always appreciate.
One downside to this question is that the candidate might come in with rehearsed answers. There are a couple of ways around this. Firstly, don’t just pick the first app the candidate mentions; ask them for a few more examples. Secondly, if you get the feeling that the candidate is giving rehearsed responses, try to guide them in a different direction when asking about product improvements later on, for example by focusing on a particular use case or customer segment.
Product strategy questions
The first few questions I ask are higher-level “strategy” questions to see how well the candidate can reason about the context and goals of a product.
My first question is what customer or user problem the app is solving. This is a bit of a softball to start with, but it still gives a lot of signal on strategic understanding and user empathy. The worst responses are ones that either lack a good summary or are obviously wrong. Less bad, but still not great, are use case descriptions that are too narrow, for example because they focus only on the candidate’s personal use case. The best responses don’t just summarize the use case well and succinctly, but also consider different user groups or personas — for example, renters and hosts in the case of Airbnb.
Next, I ask what top 3–5 metrics they would include in a dashboard for the product. This question gives a good signal on how well the candidate understands how to measure the key drivers of product success. Bad responses include metrics that don’t make sense for the product, vanity metrics, metrics that are too detailed and specific to individual product features, or metrics that are too qualitative or ill-defined (e.g., “how are users using the different features”). On the other end of the spectrum, the best answers are metrics that cover all relevant aspects of the product (for example, along the AARRR funnel). It’s striking how many candidates do not include monetization-related metrics in their response. Moreover, the best responses include not only meaningful metrics but also a prioritization — for example, saying something like “at this stage of the company, they are probably focused mostly on growth, so the most important metric would be X”.
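To make the shape of a balanced dashboard concrete, here is a minimal sketch for a hypothetical subscription app, with one metric per AARRR stage, ordered by an assumed growth-stage priority. Every metric name and definition below is an illustrative assumption, not a template for the “right” answer.

```python
# Illustrative dashboard sketch for a hypothetical subscription app:
# one metric per AARRR stage, ordered by assumed priority for a
# growth-stage company. All names and definitions are assumptions.
dashboard = [
    # (AARRR stage, metric, why it might matter)
    ("Acquisition", "weekly signups",                         "top-of-funnel growth"),
    ("Activation",  "% of signups reaching first key action", "do new users get value?"),
    ("Retention",   "week-4 cohort retention",                "does the value persist?"),
    ("Revenue",     "trial-to-paid conversion rate",          "often forgotten by candidates"),
    ("Referral",    "invites sent per active user",           "organic growth loop"),
]

for stage, metric, rationale in dashboard:
    print(f"{stage:<12} {metric:<40} ({rationale})")
```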
The metrics question is followed by asking what the candidate thinks the product goals might be. This question is ambiguous on purpose, and requires the candidate to make some assumptions. It tests not only the candidate’s understanding of the product and its strategic context, but also their tolerance for ambiguity (which is critical for product managers as well). The worst answers just replay use cases or metrics without prioritization. Equally bad are answers that are too narrowly focused, make highly improbable assumptions, or don’t fit with the company’s strategy (as evident to outsiders). On the flip side, the best candidates have already started answering this question in their previous response; here they expand further on their ideas and call out any assumptions they are making.
The strategy section is wrapped up by asking what the major competitors are. This further tests the candidate’s strategic thinking and their ability to think about a product from the perspective of jobs to be done, not just its features. Competitive dynamics should of course not drive most product decisions, but having an idea why customers choose one product over another, and what competitive threats a company might be facing, are valuable product management skills nonetheless. The worst responses miss obvious competitors. The best responses do not just list obvious competitors but also potential new market entrants, substitutes including non-digital ones (for example, in the case of Facebook, other solutions for the customer need “I am bored and have nothing else to do”), non-consumption, and/or homegrown solutions.
Product improvement questions
The second part of the product sense interview is more free-flowing than the first, and the candidates who perform best in this part need hardly any guiding questions. I start off this section by telling the candidate to pretend they are the product manager for this product, and then ask them how they would go about identifying areas for improvement and coming up with some improvement ideas. I also call out that neither of us has all the context about the product, and that they are free to make assumptions.
This first question is often the highest-signal one in terms of the maturity of the candidate’s product sense. The worst responses jump straight to feature ideas, often focusing only on the candidate’s own use case of the product. As a product manager, you should know that you are at best an imperfect proxy for the product’s user base, and at worst completely atypical. Also, it’s not the job of the product manager to have all the ideas, but rather to focus on the most important opportunities and customer pain points and then solve those.
Slightly better responses start by outlining problems to solve, make and state assumptions about why these problems are relevant, and then ideate solutions for them.
The best-performing candidates own this process almost independently from here. They tie all of the problems and solutions they identify back to the goals identified earlier.
Another signal is the quality of the assumptions, opportunities, and solution ideas. For example, if the candidate assumes that a company focused on growth should pursue opportunities to improve its margins, that’s not a very good assumption.
On the feature idea side, I try to push the candidate (if necessary) to come up with at least 3–4 ideas. Again, there is a spectrum of responses here. On the worst end, the candidate struggles to come up with ideas, provides ideas that are all variations on a theme, or offers ideas that are just not great (for example, very complex and expensive solutions for a problem that is unlikely to have a big impact). On the best end, candidates come up with a variety of ideas that range from incremental to substantial and that could all work in theory. I am not looking for “genius” ideas here — in any case, they would be hard to identify. I am rather looking for some reasonable ways to impact the key drivers of success for the product.
The best-performing candidates move on to the next step independently; otherwise, I ask the candidate directly how they would prioritize these ideas.
The worst responses waver and offer no reasonable way to prioritize the ideas, or avoid prioritizing altogether. Good responses talk about impact vs. effort, or do an ICE/RICE assessment. The best responses tie the prioritization back to strategy and goals, and proactively address the fact that these ideas carry a lot of uncertainty and therefore need to be validated.
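For candidates who reach for a framework like RICE, the arithmetic itself is simple; what matters is the quality of the estimates. Here is a minimal sketch with made-up ideas and made-up numbers: the point is the relative ranking, not the absolute scores.

```python
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    reach: float       # users affected per quarter (estimate)
    impact: float      # relative impact, e.g. 0.25 (low) to 3 (massive)
    confidence: float  # 0.0 to 1.0: how sure are we about the estimates?
    effort: float      # person-months (estimate)

    @property
    def rice(self) -> float:
        # RICE = (Reach * Impact * Confidence) / Effort
        return self.reach * self.impact * self.confidence / self.effort

# Hypothetical ideas with invented estimates.
ideas = [
    Idea("Streamline onboarding flow", 5000, 2.0, 0.8, 3),
    Idea("Redesign settings screen",   2000, 0.5, 0.9, 2),
    Idea("Add referral incentives",    3000, 1.0, 0.5, 4),
]

for idea in sorted(ideas, key=lambda i: i.rice, reverse=True):
    print(f"{idea.rice:8.1f}  {idea.name}")
```

Note how the low-confidence referral idea drops to the bottom despite decent reach; that uncertainty is exactly what the best candidates then propose to validate.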
If the candidate hasn’t brought it up themselves, I then ask how they would validate the highest-priority idea. The worst responses immediately jump to building and A/B testing the feature, or something similar. That’s a very expensive way to validate most ideas! The best responses cover a variety of validation approaches at different stages, including user research, prototype testing, looking at existing data, and only then potentially A/B testing.
The last question I ask is what success criteria they would look at if they launched the feature. The worst responses either have no good idea for post-launch evaluation, mention metrics that aren’t really measurable, or mention only feature-level adoption metrics. In particular, if the candidate is talking about A/B testing, I watch out for success criteria that aren’t A/B-testable. For example, you can’t A/B test the adoption of a new feature: if the feature isn’t present in the control group, adoption in the control group is guaranteed to be 0. If you only want to measure adoption, you don’t need an A/B test!
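To illustrate the difference, here is a sketch of a success criterion that is A/B-testable: a behavior-change metric that exists in both groups, such as week-1 retention, compared with a simple two-proportion z-test. The numbers are invented, and in practice you would rely on your experimentation platform’s statistics rather than rolling your own.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(hits_a: int, n_a: int, hits_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference in proportions between two groups."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    p_pool = (hits_a + hits_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * NormalDist().cdf(-abs(z))

# Invented numbers: week-1 retention exists in BOTH groups, so it is
# A/B-testable, unlike "adoption of the new feature", which is 0 in
# control by construction.
retained_control, n_control = 4120, 10000
retained_treatment, n_treatment = 4310, 10000

p = two_proportion_z_test(retained_control, n_control,
                          retained_treatment, n_treatment)
print(f"control:   {retained_control / n_control:.1%} retained")
print(f"treatment: {retained_treatment / n_treatment:.1%} retained")
print(f"p-value:   {p:.4f}")
```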
The best responses to this question are the ones that tie the success metrics back to the product metrics and goals identified in the very beginning, and that attempt to measure behavior change. They also mention how success measures can only be proxies and that it is good to look at a set of measures and countermeasures to avoid Goodhart’s Law as much as possible.
Wrapping up
To wrap up, I often ask candidates to summarize the discussion. This gives a little bit of extra signal in terms of structured communication, but mostly, it ties up the discussion nicely before I give the candidate the opportunity to ask me some questions.