Thursday, June 23, 2022

How to Misuse and Abuse DORA DevOps Metrics

In the presentation How To Measure Software Delivery Using DORA Metrics (YouTube), Dave Farley, author of "Continuous Delivery" and "Modern Software Engineering", describes how DORA measurements can be applied to drive software development toward this state-of-the-art approach, and also explores a few of the common mistakes that can trip us up along the way.

I found the reference to Bryan Finster's October 2021 presentation, How to Misuse DORA DevOps Metrics, especially useful.

Bryan contrasts common pitfalls & fallacies with pragmatic and realistic advice.


He also points out that the 4 prominent DORA metrics constitute only the tip of the iceberg.
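For concreteness, the four outcome metrics are straightforward to compute once you have deployment records. A minimal Python sketch follows; the record shape and field names are my own illustration, not taken from any DORA tooling:

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import median
from typing import Optional

@dataclass
class Deployment:
    committed_at: datetime            # first commit of the change
    deployed_at: datetime             # when the change reached production
    failed: bool = False              # did it cause a production failure?
    restored_at: Optional[datetime] = None  # when service was restored, if failed

def dora_metrics(deploys, window_days):
    """Compute the four DORA outcome metrics over a reporting window."""
    failures = [d for d in deploys if d.failed]
    restores = [d.restored_at - d.deployed_at for d in failures if d.restored_at]
    hours = lambda td: td.total_seconds() / 3600
    return {
        "deployment_frequency_per_day": len(deploys) / window_days,
        "median_lead_time_hours": median(
            hours(d.deployed_at - d.committed_at) for d in deploys),
        "change_failure_rate": len(failures) / len(deploys),
        "median_time_to_restore_hours": (
            median(hours(td) for td in restores) if restores else None),
    }
```

The point of both presentations, though, is that a dashboard of these four numbers is where the work starts, not where it ends.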

My earlier blog article on Software Productivity Metrics provides further details on these additional metrics.

Slide #29 in Bryan's deck puts these metrics into perspective ("To improve flow, we must improve CI.") and makes the case for a set of balanced metrics (#34):


Summary ("Closing Thoughts")

  • The 4 outcome metrics are only the tip of the iceberg.
  • Product development is a complex interaction of people, process, and products. There are no simple metrics.
  • Measures require guardrails to avoid perverse incentives.
  • Metrics are a critical part of the improvement toolbox, but…
    • We cannot measure our way to improvement.
    • We use them to monitor and inform the next improvement experiment.
  • Don’t measure people, invest in them. They are our most valuable asset.
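The guardrails point lends itself to a concrete check: a speed gain only counts as an improvement if quality holds. A sketch, where the metric-dict shape and the 15% failure-rate threshold are arbitrary choices of my own, not values from the presentation:

```python
def improvement_is_real(before, after, max_failure_rate=0.15):
    """Credit a deployment-frequency gain only if the quality guardrail holds:
    change failure rate must stay within bounds and must not get worse.
    Otherwise the 'speed' gain is a perverse incentive, not an improvement."""
    faster = (after["deployment_frequency_per_day"]
              > before["deployment_frequency_per_day"])
    quality_holds = (after["change_failure_rate"] <= max_failure_rate
                     and after["change_failure_rate"] <= before["change_failure_rate"])
    return faster and quality_holds
```

The same guard pattern applies to any speed metric: pair it with at least one quality metric before declaring a trend an improvement.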


[July 26, 2022 -- Update:

Abi Noda discusses Finster's recent article on the same topic in The DevOps Enterprise Journal | Spring 2022 (itrevolution.com) edition:

Common misuses of the DORA metrics
  • Focusing too much on speed.
    • “Measuring deployment frequency without using quality metrics as guardrails will result in poor outcomes.”
  • Setting goals around DORA metrics. 
    • “The goal isn’t better DORA metrics… OKRs should be focused on desirable business outcomes.”
    • Choose goals, then choose metrics that align with those goals. 
  • Mistaking measuring DORA metrics as a way to improve. 
    • “[DORA metrics] don’t fix things.
      If we simply get a dashboard and do not buy into using it to identify improvement items, then nothing will get better.” 
  • Using DORA metrics as vanity metrics. 
    • “[DORA dashboards] are often used as ‘vanity radiators’ instead of information we can use to help us improve.”
  • Not including other signals in addition to the four key DORA metrics.
    • “The four key metrics DORA used to correlate behaviors of high-performing organizations are a small subset of the metrics recommended in the book Accelerate. They also represent only one aspect of the health of a system…”
]


[January 25, 2023 -- Update:

In his LinkedIn article, Abi Noda summarizes common pitfalls of the DORA metrics, according to Nathen Harvey, who helps lead DORA at Google:

1. Comparing teams to each other based on the four key metrics. Different projects have different needs, so we can think more critically about whether a team's metrics should fall in the low, medium, or high performance category given that context.

2. Setting goals for improving the DORA metrics, and in turn creating the wrong incentives. Instead set goals to improve the capabilities or factors that drive the DORA metrics.

3. Spending more effort on pulling data into dashboards than on actually improving.

4. Not using the metrics to guide improvement at the team level. When the teams doing the work aren’t using the metrics to improve, this defeats the purpose of the metrics.

5. Using "industry" as an excuse for not improving. Even companies in well-regulated industries can focus on improvement.

6. Assuming you’re already world-class, so your organization doesn’t need to focus on improving. If software delivery is no longer the constraint, then what is? Identify what is preventing teams from making progress and focus on that.

7. Fixating on the four DORA metrics (which are outcomes) and forgetting about the capabilities. “We don’t get better at those outcomes by focusing on the outcomes. We have to focus on the capabilities that drive those outcomes.”

The big takeaways:
  • the DORA metrics are outcomes not goals,
  • context matters, and
  • a team must look to understand and improve the factors that drive the DORA outcomes.

P.S. I like the "You might also deliver wrong things 10x faster" statement in the "Fantastic Facts and How to Use Them" presentation referenced in one of the comments.
]
