The parallels between science and product management

I'm several months into my time working in tech for GlaxoSmithKline, a pharmaceutical company. The reason I joined was the chance to work in a field where I have the potential to impact people's lives for the better. OK, I'm not directly working in a laboratory analysing proteins and biomarkers, but the hope is that indirectly I, with my teams, can contribute to helping people do more, feel better and live longer. A big part of achieving that is understanding the domain in all of its complexity. Scratching the surface in these initial months has helped me realise the parallels between the scientific method and modern product management. This should be no surprise given that many of the tech industry's processes and methodologies are adapted from more established industries, owing to tech's relative immaturity. Agile and Lean, for example, have their roots in manufacturing: Six Sigma and The Toyota Way. Nevertheless, we stand on the shoulders of giants, and for good reason.

Creating the pill before understanding the disease

The old and failing mentality of product development is to build a roadmap of projects, typically defined by stakeholders or sponsors. The input is likely to come via a sales channel (“give us this feature and we will give you money”), via a HiPPO (highest paid person's opinion) or simply gut feel. The problem with each of these inputs is that they're not focussed on solving your customers' needs, and they're rife with assumptions and biases. Falling into this trap ultimately leads you to an 80/20 product (where 80% of the users only use 20% of the product's features), the Pareto principle at work. You build a product that does not “spark joy”, and that's a quick route to the bottom, where you find yourself thanking the product and then putting it in a box (god bless Marie Kondo).

To combat this we've seen a shift in product management for several years now towards a more calculated, scientific, data-informed approach. A shift from a focus on delivery and feature factories to a relentless focus on customer outcomes, sometimes described as “outputs to outcomes”. Working in this way requires a huge shift in how we think about building and developing products. A more experimental approach. Fundamentally, though, it requires TRUST. Real trust. Not the kind of trust we thoughtlessly nod along to as a touted organisational value. With trust we give our teams the outcome (e.g. increase registration conversion by 4% in Q1) rather than the output (e.g. build a new streamlined registration page).

Of course, with any process or methodology it's easy to cargo cult: to use the lingo but not live it with action. I see this a lot. We convert our list of “projects” to a list of “hypotheses”. Or pressure teams to deliver a feature rebranded as an “MVP”. In the words of Alan Partridge: “They've just rebadged it, you fool!”

This way of working requires a deep and relentless focus on your customers/users/patients. This insight can come in many forms, whether it's quantitative research through data and analytics or qualitative research through user research: interviews, observation, focus groups and so on. Much like drug development, a deep understanding of the problem domain is critical and should be devoid of confirmation bias. We're not looking to prove what we think is right. We're looking to test our informed hypotheses. A hypothesis is not a guess or an idea. It cannot be formed without effective insight and research. Pharmaceutical companies spend billions of dollars on research to identify proteins and pathways that are affected by a disease. Yet the majority of technology companies are still devising product roadmaps based on perception, guesses, biases or requests from customers. Customers know the problem, not the solution. Companies too often know neither.

Forming effective hypotheses and experiments

Sadly, pharma companies don't invest in curing every disease. A candidate has to fit with the company's strategy and objectives as well as being financially and competitively viable, with no doubt many more complex parameters I'm unaware of added to the equation. So it is in product management. The products, initiatives and features we build must be driven by the organisational objectives and strategy that the business sets. These should cascade down the organisation to drive alignment and ultimately inform our product hypotheses as an additional parameter. OKRs are one mechanism to achieve this alignment across an organisation:

[Image: company objectives cascading down to team OKRs]
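To make the cascade concrete, here's a minimal sketch of objectives flowing down to a team-level target as structured data. The statements, field names and numbers are illustrative assumptions of mine, not anyone's actual OKRs:

```typescript
// Illustrative sketch only: objective statements, field names and targets
// are made up to show the cascade, not taken from a real OKR system.

interface KeyResult {
  metric: string;
  target: number; // e.g. 0.04 for a 4% increase
}

interface Objective {
  statement: string;
  keyResults: KeyResult[];
  children?: Objective[]; // objectives of teams aligned underneath
}

const companyOkrs: Objective = {
  statement: "Grow our e-commerce customer base",
  keyResults: [{ metric: "new customer growth", target: 0.1 }],
  children: [
    {
      statement: "Improve the registration experience",
      keyResults: [{ metric: "registration conversion rate", target: 0.04 }],
    },
  ],
};
```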

As we understand the business objectives and pair them with a deep understanding of our customers (a process that never ends, because our customer cohorts are fluid) we can start to form our product hypotheses. A hypothesis attempts to propose an explanation for a particular phenomenon. A typical format might be:

We believe [this capability]

Will result in [this outcome]

We will have confidence to proceed when [we see these measurable signals]

Critically, the outcome must tie directly to your product strategy. If your initiative, for example, is to increase registration conversions on your e-commerce website, then an outcome that attempts to increase traffic to the registration page is a poor proxy outcome. Although doing so might indirectly have an effect on registration conversion, it's not a hypothesis based on observed behaviour. It's purely a guess. You should be testing specific, informed assumptions, not ideas. We may look at the data and analytics for our registration page and realise that potential customers start filling in the required fields but don't always complete them, dropping off at different points in the page. Knowing this, we may establish a hypothesis that:

We believe [that certain fields put potential customers off completing registration, and that reducing the number of fields on the registration page]

Will result in [potential customers being more likely to complete all the fields]

We will have confidence to proceed when [we see a 4% increase in all field completion]
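A hypothesis written in this template also lends itself to being captured as data, so experiments can be tracked and reviewed consistently. A hypothetical sketch (the field names are my own, not an established schema):

```typescript
// Hypothetical representation of the hypothesis above as structured data.
// Field names are my own invention for illustration.

interface Hypothesis {
  belief: string;          // the capability or change we believe in
  expectedOutcome: string; // the behaviour we expect to observe
  signal: {
    metric: string;
    minimumLift: number;   // threshold that gives us confidence to proceed
  };
}

const registrationHypothesis: Hypothesis = {
  belief:
    "Certain fields put potential customers off, so reduce the number of fields on the registration page",
  expectedOutcome:
    "Potential customers are more likely to complete all the fields",
  signal: { metric: "all-field completion rate", minimumLift: 0.04 },
};
```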

With a hypothesis in place we can focus on experiment design. A good way to approach this is to ask “what is the cheapest and quickest way we can validly test our hypothesis?”. Again, the goal here, much like in the scientific method, is not to try and prove your idea. When you focus on doing that it's very hard not to get married to the idea, and you continue to create new experiments until you've satisfied your confirmation bias. You end up making excuses for what's missing in the idea and keep adding to it. Continue down the rabbit hole and you hit the sunk cost fallacy: your decisions are tainted by the emotional investments you accumulate, and the more you invest in something the harder it becomes to abandon it.

In drug development we use techniques like control groups and placebos to ensure the results are meaningful. The same techniques can be applied in product development. In the language of the scientific method, the page design is the independent variable (the thing we change) and the completion rate is the dependent variable (the thing we measure). To ensure that any change we see in our experiment is directly related to our change and not something else, it's important to measure not only the new state (the reduced number of fields on the registration page) but also the existing state: a control group, in scientific terms. A popular way to do this in tech is by A/B testing the two, or more, paths. In our example of the registration page we would keep the current page in place as variant A, the control, and serve the new registration page design as variant B, the treatment. Some users are directed at the old and some at the new.
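One common way to split that traffic is to hash a stable user identifier into a bucket, so the same user always sees the same variant. A minimal sketch, assuming a stable userId is available (not a production experimentation framework):

```typescript
// Minimal deterministic A/B assignment sketch: hash a stable user id into
// a [0, 1) bucket so a given user consistently gets the same variant.

type Variant = "A" | "B"; // A = control (current page), B = treatment

function assignVariant(userId: string, treatmentShare = 0.5): Variant {
  // Simple FNV-1a string hash; fine for illustration, not cryptographic.
  let hash = 2166136261;
  for (let i = 0; i < userId.length; i++) {
    hash ^= userId.charCodeAt(i);
    hash = Math.imul(hash, 16777619);
  }
  const bucket = (hash >>> 0) / 0xffffffff; // normalise to [0, 1]
  return bucket < treatmentShare ? "B" : "A";
}

// Usage: route the user to the matching page variant.
const variant = assignVariant("user-1234");
console.log(variant === "B" ? "new registration page" : "current page");
```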

Measurement is key

Measurement is the hard bit. Without effective measurement we might as well go back to the old project delivery mentality, and this is the biggest hurdle to adopting experimentation. The first hurdle is usually a desire from the executive team to translate any measurement into direct financial return. The measurement of revenue is important, but in product management it's too lagging a metric to be useful, and therefore not quick enough to inform a decision. Instead we should deduce customer-focussed metrics that we can directly correlate to driving our outcomes and use those as a proxy for revenue. If we take the example we've been using, increasing registration conversion by 4% in Q1, as our primary metric, then we can translate that into customer lifetime value as a quantifier of revenue.
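As a back-of-the-envelope sketch of that translation, with numbers that are entirely illustrative assumptions of mine:

```typescript
// Back-of-the-envelope sketch linking the proxy metric to revenue.
// Every number here is an illustrative assumption, not a real figure.

const monthlyVisitors = 50_000;
const baselineConversion = 0.1;    // 10% of visitors register today
const targetLift = 0.04;           // the 4% relative increase we target
const customerLifetimeValue = 120; // assumed average CLV in dollars

const extraRegistrations = monthlyVisitors * baselineConversion * targetLift;
const projectedRevenue = extraRegistrations * customerLifetimeValue;

// 200 extra registrations a month, ≈ $24,000 of lifetime value
console.log(extraRegistrations, projectedRevenue);
```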

Having a measurement is still not enough. As with the scientific method, it's important to know our results are statistically significant. An increase or decrease in a single variant of a test does not mean confirmation. Knowing what statistical significance means for each test lets us adjust the test until we can trust the result, either by increasing the number of subjects (patients, users or customers) or by increasing the duration of the test.
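For a conversion experiment like ours, one standard check is a two-proportion z-test. A minimal sketch with made-up counts (a real analysis would lean on an experimentation platform or stats library):

```typescript
// Minimal two-proportion z-test sketch for the registration experiment.
// Counts below are invented purely to illustrate the significance check.

function twoProportionZ(
  conversionsA: number, usersA: number, // control
  conversionsB: number, usersB: number  // treatment
): number {
  const pA = conversionsA / usersA;
  const pB = conversionsB / usersB;
  const pooled = (conversionsA + conversionsB) / (usersA + usersB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / usersA + 1 / usersB));
  return (pB - pA) / se; // |z| > 1.96 ≈ significant at the 95% level
}

// Example: 480/4,000 registrations on the control vs 540/4,000 on the treatment.
const z = twoProportionZ(480, 4000, 540, 4000);
console.log(z > 1.96 ? "significant" : "keep the test running");
```

If the result isn't yet significant, the levers are exactly the ones above: more subjects or a longer test.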

Capitalising on the differences

Despite the value in applying the scientific method to product management and development, thankfully there are some huge differences. Most of us aren't building digital products that put people's lives at risk. This is a difference we should embrace, but not at the cost of losing scientific rigour. The average drug takes 14 years and $2.6 billion to reach the market, with an average success rate of 12%. In contrast, the low cost of computation and the shift to cloud computing have enabled us to develop products in vastly reduced cycle times. There are two cycle times in product development that we should be aggressively focussed on reducing: deployment cycle time, the time it takes for code to be put in front of customers, and user feedback cycle time, the time it takes to learn about our changes from our users (either directly via interviews or indirectly via data). In product development you will hear this called dual-track Agile.

[Image: dual-track Agile]

Companies like Amazon and Google are relentlessly focussed on these two metrics. Amazon has been reported to deploy code every 11.6 seconds, and Google X has seen discovery cycles of 15 minutes. In some circumstances they have hired a shop in a shopping mall, interviewed people in the mall, and had their cross-functional teams ideate and deliver features out of the shop. We're not all Amazon or Google, but if we don't shift towards this mindset we stand to be disrupted by them quicker than we could ever expect.