Good product management is 70% science. And 30% judgement.

Tom Hazeldine
5 min read · Jul 2, 2020

When I think of science I think of hypotheses, experiments, evidence and analysis, ideally in that order. Indeed, having a clearly defined methodology is a cornerstone of modern science. As any scientist will tell you, if you repeat a well-designed experiment you should get a similar result each time. If you don't, your hypothesis won't stand up to scrutiny and it'll be dismissed as just another outlier or debunked theory.

Now, the 70% science thing is a somewhat interesting observation. To me at least. Perhaps more interesting is that the other 30% isn't science, and trying to make it so is a recipe for wasted effort and misplaced confidence. More on that later.

So which parts of product management are the 70% science? And why?

In simple terms, it's everything that relates to identifying and understanding market problems, assessing the severity of those problems and solving them. I'd also add most elements of constructing messaging to the list. Here are three examples to illustrate my point:

Example 1: Let's say I interview 10+ people that belong to a given segment and I ask them the same set of probing questions in a neutral fashion. If I take care to understand the 'why', not just the 'what', then experience has taught me I will get a strong, replicable understanding of that segment. Those firm foundations mean everything I do afterwards (designing solutions, positioning, messaging and launch) has a chance to be successful.

Example 2: The same would apply if I ran a survey to test some feature ideas and got >25 responses. Provided I'd phrased the questions carefully and been smart with the format of the questions (multiple choice where possible), I'd have confidence my results would be robust and replicable. Not only that, but if I'd included some segmentation questions I'd be able to slice and dice the results to understand the niches where the greatest opportunities lie.
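As a minimal sketch of that slice-and-dice step (the file name and column names here are invented for illustration, not from any real survey), the niche-level view can be as simple as a grouped average:

```python
# Hypothetical survey export with columns: respondent_id, segment,
# feature, interest (a 1-5 rating from a multiple-choice question).
import pandas as pd

responses = pd.read_csv("survey_responses.csv")

# Average interest per feature within each segment, so the niches
# with the greatest opportunity stand out.
opportunity = (
    responses
    .groupby(["segment", "feature"])["interest"]
    .agg(["mean", "count"])
    .sort_values("mean", ascending=False)
)

# Only trust slices with a reasonable sample size behind them.
print(opportunity[opportunity["count"] >= 10])
```

The point isn't the tooling; it's that the same structured inputs produce the same ranking every time you run it, which is exactly the replicability the 70% relies on.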

Example 3: Very similar themes apply when designing solutions. If I've ensured the most pertinent problems are being solved, the target personas' preferences are understood (e.g. expectations around information density and navigation), the user experience is consistent and the designs are tested, it's likely the solution will be effective and well received.

In other words there’s a big chunk of product management where using tried and tested techniques, in a structured fashion, will yield robust, repeatable results time and time again. That sounds a lot like science to me.

And, what's more, just like science, all of the above can definitely be taught. I've learnt those approaches and I've coached other people to use them.

What about the other 30%? What is it and is it really so different?

The other 30% can loosely be termed 'product strategy'. Or, to put it another way, the stuff everyone has an opinion on, well reasoned or otherwise. It includes roadmap prioritisation, whether to enter a segment or not, whether to invest in growth or in reducing churn, how to react to competitive threats and when to partner, acquire or build. And much more besides.

Why the 30% is different and why it’s judgement rather than science is best illustrated by a couple of examples:

Example 4: Let's assume I've got a variety of features on the roadmap that we know solve pertinent problems for our users and the wider segments we're targeting (because we tested them and they were popular when we had users rank feature ideas). So which do we build first? The answer: it depends. And unfortunately it depends on numerous variables: company goals, where the company is in its lifecycle (seed-funded start-up, scale-up etc.), the competitive landscape, the size of segments, the lifetime value of one type of client vs another, risk appetite, various macro-economic factors (e.g. which segments might thrive during a deep recession in a COVID-scarred world)… I could go on, and on.

The key takeaway from example 4 is that judgements have to be made. And whilst frameworks can be used to facilitate those judgements, it's not something where a hypothesis can be formulated and tested for your circumstances. That would entail actually solving the problem and measuring the impact, at which point the judgement has already been made! So 'science' as I defined it for the 70% can't help (unfortunately).

Example 5: Imagine you're desperate to add a particular feature set. It'll round out your proposition and, combined with your product's existing functionality, it'll deliver a tonne of value to users. So do you build the features yourself, acquire a company that does it well, or partner? Again, it depends. Is the Board in for the long haul or looking to sell up in a year or so? Are there viable acquisition options? Do the owners of those viable options want to sell at a sensible price? Are the data models compatible? Are your investors good for more funding? Do you have the expertise to build the features yourself? Do you have a right to play in the new domain?

Just like example 4, that isn't a choice you can research your way out of. There'll be data that can inform the judgement but, unlike the 70%, you can't form a hypothesis, test it, then make the call.

But what about modelling? Isn’t that the next best thing to testing?

This is where my earlier point around wasted effort comes into play… Unfortunately the 30% aren't situations you can accurately model; there are far too many variables (see example 4). Believe me, I've tried, multiple times in different contexts!

In fact, after the early years of my career were spent hunched over a laptop, building spreadsheet models and plugging in assumption after assumption, the notion of an 'accurate model' feels like an oxymoron. Particularly with the time and resources available to the average product manager. My more recent efforts to build accurate financial projections for products I've managed have reaffirmed that view!
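To see why stacked assumptions are so corrosive, here's a toy sketch (the inputs and error bands are invented, not from any real model): give each of four 'reasonable' assumptions a modest ±20% error and watch what happens to the projection.

```python
# A toy illustration of assumption-stacking in a simple revenue model.
import random

def projected_revenue(market_size, reachable_share, conversion, price):
    return market_size * reachable_share * conversion * price

def with_noise(estimate, error=0.2):
    # Each assumption is only right to within +/- error.
    return estimate * random.uniform(1 - error, 1 + error)

runs = sorted(
    projected_revenue(
        with_noise(500_000),   # assumed market size
        with_noise(0.10),      # assumed reachable share
        with_noise(0.02),      # assumed conversion rate
        with_noise(300),       # assumed annual price
    )
    for _ in range(10_000)
)

print(f"5th percentile:  {runs[500]:,.0f}")
print(f"95th percentile: {runs[9500]:,.0f}")
```

Four individually plausible assumptions produce roughly a 2x spread between the pessimistic and optimistic ends of the range, and real spreadsheet models stack far more than four.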

Ok … How about multi-criteria analysis?

As a consultant I found multi-criteria analysis handy when I wanted to quickly narrow down a long list of policy or technology options. It 'proved' we'd cast the net wide, which was a common ask. However, I don't recall a single situation where it delivered any new insight. Quite the opposite. What I certainly recall is the temptation to fudge scores or weightings to ensure preferred options scored well.

Hence my earlier comment about misplaced confidence in analysis of the 30%. Multi-criteria analysis can be dangerous, particularly in the hands of decision makers who don't grasp the impact the choice of criteria weightings or scoring methodologies will have on the results. An opinion or perspective (and multi-criteria analysis is brimming with both) can be mistaken for fact or science.
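To make that concrete, here's a hypothetical scoring matrix (the options, criteria, scores and weights are all invented) where simply swapping the emphasis between 'cost' and 'fit' flips the winning option:

```python
# Criterion scores out of 10, in the order (cost, speed, fit).
scores = {
    "Build":   (4, 3, 9),
    "Buy":     (2, 9, 6),
    "Partner": (7, 7, 4),
}

def rank(weights):
    # Weighted total per option, highest first.
    totals = {
        option: sum(w * s for w, s in zip(weights, crits))
        for option, crits in scores.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print(rank((0.2, 0.3, 0.5)))  # fit-led weights: "Build" wins
print(rank((0.5, 0.3, 0.2)))  # cost-led weights: "Partner" wins
```

Both rankings look equally 'scientific' on a slide, yet the only thing that changed between them is an opinion about which criterion matters most.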

So, in summary, the 30% is all about making judgements. You can draw on experience and relevant data (where you can find it), game out different scenarios, consult colleagues with a different perspective or background from your own, reflect on your company's risk appetite, put the decisions in the context of your goals… and much, much more besides.

The one thing you can’t do with the 30% is leverage a truly scientific approach. My advice is to embrace that rather than fight it.
