Product managers are under constant pressure to deliver features faster, improve product performance, and optimise team efficiency. Unfortunately, the outcome of product management is quite hard to gauge, which is a problem for every product manager who wants to demonstrate that decisions were well-founded, that salaries are justified, and, most importantly, that the effectiveness of their processes is improving. What is the effect of choosing a particular feature, of conducting three interviews a week, of drawing up a roadmap for the next few quarters? We need a way to rationalise the many repetitive but hopefully compounding steps that PMs take over time.
We argue that PMs can adopt principles from software engineering, a domain in which it was long similarly difficult to map the concrete effects of written code onto generated value and revenue. Costs, by contrast, were never hard to measure: LOCs, function points, or velocity have long been used to put engineering effort in relation to project or product budgets, yet actual value creation was never tracked.
Nowadays, there is a great way to measure how engineering organisations improve: the DORA (DevOps Research and Assessment) metrics, developed by the DORA research programme that is now part of Google Cloud. Originally designed to evaluate software delivery performance, DORA metrics like deployment frequency, change failure rate, and lead time for changes can help product teams track experiment success, improve iteration cycles, and make data-driven decisions. In this post, we'll explore how applying DevOps principles can transform product management strategies, ensuring better feature delivery, customer satisfaction, and business impact.
DORA is all about delivery and there are four key metrics:
Deployment Frequency (DF) – How often code changes are successfully deployed to production. (Higher is better; indicates rapid iteration and continuous delivery.)
Lead Time for Changes (LT) – The time it takes for a committed code change to reach production. (Shorter is better; reflects efficiency in development and release processes.)
Change Failure Rate (CFR) – The percentage of deployments that cause failures in production. (Lower is better; signifies stability and code quality.)
Time to Restore Service (TTR) – The time it takes to recover from a failure in production. (Faster is better; shows resilience and strong incident response.)
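To make these tangible, here is a minimal sketch of how the four metrics could be computed from a log of deployments. The `Deployment` record and its fields are assumptions made for this illustration, not the schema of any real CI/CD tool.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# One production deployment; the fields are assumptions for this sketch,
# not the schema of any real CI/CD tool.
@dataclass
class Deployment:
    committed_at: datetime                   # when the change was committed
    deployed_at: datetime                    # when it reached production
    failed: bool = False                     # caused a production failure?
    restored_at: Optional[datetime] = None   # when service came back, if it failed

def dora_metrics(deployments: list[Deployment], window_days: int = 30) -> dict:
    if not deployments:
        raise ValueError("need at least one deployment in the window")
    n = len(deployments)
    deploy_frequency = n / window_days  # DF: deployments per day
    lead_time = sum((d.deployed_at - d.committed_at for d in deployments),
                    timedelta()) / n    # LT: average commit-to-production time
    failures = [d for d in deployments if d.failed]
    change_failure_rate = len(failures) / n  # CFR: share of failing deployments
    time_to_restore = (sum((d.restored_at - d.deployed_at for d in failures),
                           timedelta()) / len(failures)
                       if failures else timedelta(0))  # TTR: average recovery time
    return {"DF": deploy_frequency, "LT": lead_time,
            "CFR": change_failure_rate, "TTR": time_to_restore}
```

Lead time and restore time are averaged here for simplicity; in practice, teams often report medians to dampen outliers.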
All four metrics are designed to optimise the process of delivery rather than the delivery artefacts themselves, which, as noted above, were always hard to gauge directly. Through this indirect approach, managers and engineers get a clear set of indicators that can be improved by applying a number of established practices.
Engineers have broadly accepted that a high-performing organisation can be recognised, and achieved, by tracking and tweaking the DORA indicators. You still have to believe that, for instance, highly frequent deployments are a sign of outstanding teams; but there is research and experience that supports this idea, and for now we can assume that the DevOps community is on the right track.
In product management, activities typically fall into one of two categories: discovery and delivery. Delivery-related tasks include defining concrete features and assigning time frames for when something is built or shipped, whereas discovery involves research, interviews, and observation. Delivery is a joint activity of product management and engineering in which the responsibilities differ, but since both disciplines act on the same outcome, classic DORA is the way to go in product management as well.
For discovery, we need to redefine what is being measured. In principle, the ultimate outcome is a gain in insight or, to put it in a startup context, the removal of uncertainty. We can map discovery tasks to a new set of indicators:
Experiment Frequency (EF) – How often product teams run experiments to validate hypotheses. (Higher is better; promotes data-driven decision-making and continuous learning.)
Validation Lead Time (VLT) – The time it takes to confirm whether a new feature or experiment delivers expected results. (Shorter is better; ensures quick iteration and avoids wasted effort on unsuccessful ideas.)
Rate of Well-Guessing (RWG) – The percentage of product hypotheses that are validated as correct. (Higher is better; reflects the effectiveness of product intuition and user research.)
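As a sketch of how these indicators might be tracked, assume each experiment is logged with a start date, a conclusion date, and the validation outcome. The `Experiment` record and its fields are invented for this illustration.

```python
from dataclasses import dataclass
from datetime import date

# One discovery experiment; the fields are invented for this sketch.
@dataclass
class Experiment:
    started: date
    concluded: date
    hypothesis_confirmed: bool

def discovery_metrics(experiments: list[Experiment], window_weeks: int) -> dict:
    n = len(experiments)
    ef = n / window_weeks                                               # EF: experiments per week
    vlt = sum((e.concluded - e.started).days for e in experiments) / n  # VLT: average days to a verdict
    rwg = sum(e.hypothesis_confirmed for e in experiments) / n          # RWG: share of confirmed hypotheses
    return {"EF": ef, "VLT_days": vlt, "RWG": rwg}
```

One caveat: taken alone, a very high RWG may simply indicate that the team only tests safe bets, so it is best read alongside EF.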
As "time to restore service" is hard to map to a product discovery activity, we can have another replacement metric:
🔍 Hypothesis Validation Value (HVV) = Importance × Uncertainty × Validation Rate
This metric quantifies the value gained from validating or invalidating a hypothesis. It builds on the principle that a hypothesis has inherent value based on:
Importance – How critical the hypothesis is to product success (e.g., user adoption, revenue impact).
Uncertainty – How little is known about the outcome (higher uncertainty = higher potential learning).
Validation Rate – The fraction of the hypothesis that has been confirmed or disproven through data.
How it Works:
A hypothesis with high importance but low uncertainty provides limited new insights.
A hypothesis with high uncertainty but low importance may be interesting but not impactful.
A high HVV score indicates a hypothesis that delivers substantial learning and strategic direction, making it a top priority for experimentation.
By incorporating HVV into product decision-making, teams can prioritize high-value insights, focus on the most meaningful experiments, and systematically reduce uncertainty in product strategy.
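As a worked illustration, here is a small sketch that scores a handful of hypothetical hypotheses on 0-to-1 scales and ranks them by HVV. The hypotheses and their scores are invented for this example.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    importance: float       # 0..1: how critical to product success
    uncertainty: float      # 0..1: how little is known about the outcome
    validation_rate: float  # 0..1: fraction confirmed or disproven by data

    @property
    def hvv(self) -> float:
        # Hypothesis Validation Value = Importance x Uncertainty x Validation Rate
        return self.importance * self.uncertainty * self.validation_rate

# Hypothetical backlog entries, invented for illustration only.
backlog = [
    Hypothesis("Users will pay for offline mode", importance=0.9, uncertainty=0.8, validation_rate=0.5),
    Hypothesis("Dark mode increases retention", importance=0.3, uncertainty=0.9, validation_rate=0.7),
    Hypothesis("Checkout flow is too long", importance=0.8, uncertainty=0.2, validation_rate=0.9),
]

for h in sorted(backlog, key=lambda h: h.hvv, reverse=True):
    print(f"{h.name}: HVV = {h.hvv:.2f}")
```

Running this ranks the high-importance, high-uncertainty hypothesis first (HVV = 0.36) and the well-understood checkout hypothesis last (HVV = 0.14), mirroring the rules above.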
At their core, product discovery metrics aren’t just about measuring feature success—they’re about accelerating organisational learning. Just like DORA metrics revolutionised DevOps by quantifying delivery performance, these new product management metrics help teams validate assumptions, reduce uncertainty, and prioritise impactful decisions.
By tracking Experiment Frequency (EF), Validation Lead Time (VLT), Rate of Well-Guessing (RWG), and Hypothesis Validation Value (HVV), product teams can shift from opinion-driven decision-making to a systematic, data-informed approach.
Ultimately, the best product organisations aren’t just shipping features—they're continuously learning about their users, their market, and what truly drives value. The faster an organisation learns, the better its products become.