I have been reading about data-driven developer productivity and effectiveness for quite some time now, trying to look at it both from a developer’s shoes, as I have been one for a large part of my career, and from an engineering manager’s eyes, where continuous improvement, effectiveness and productivity are constant topics when it comes to ‘achieving more with less’.

So, DORA, “DevOps Research and Assessment,” is this super cool brainchild of a Google research group. They were like, “Hey, let’s figure out what makes software delivery tick!” And voila, DORA was born! It’s got these four metrics: Deployment Frequency, Lead Time for Changes, Change Failure Rate, and Mean Time to Recovery. They’re like the Fantastic Four of software engineering, each with their own superpower to give you the lowdown on your team’s performance.

  1. Deployment Frequency (DF): Measures how often a team successfully releases code into production
  2. Change Failure Rate (CFR): The percentage of changes released to production that result in failures
  3. Lead Time for Changes (LTTC): The time it takes from a commit to reach production
  4. Mean Time to Recovery (MTTR): The time it takes to restore service during downtime
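The four metrics above can be sketched in a few lines of code. This is a minimal illustration, not a definitive implementation: the `deployments` and `incidents` records are hypothetical, and real pipelines would pull this data from CI/CD and incident-management tooling.

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: (commit_time, deploy_time, caused_failure)
deployments = [
    (datetime(2023, 8, 1, 9), datetime(2023, 8, 1, 15), False),
    (datetime(2023, 8, 2, 10), datetime(2023, 8, 3, 11), True),
    (datetime(2023, 8, 4, 8), datetime(2023, 8, 4, 12), False),
    (datetime(2023, 8, 7, 9), datetime(2023, 8, 8, 9), False),
]

# Hypothetical incidents: (outage_start, service_restored)
incidents = [
    (datetime(2023, 8, 3, 11), datetime(2023, 8, 3, 13)),
]

period_days = 7  # observation window

# 1. Deployment Frequency: production releases per day over the window
df = len(deployments) / period_days

# 2. Change Failure Rate: share of releases that resulted in a failure
cfr = sum(1 for _, _, failed in deployments if failed) / len(deployments)

# 3. Lead Time for Changes: average commit-to-production time
lead_times = [deploy - commit for commit, deploy, _ in deployments]
lttc = sum(lead_times, timedelta()) / len(lead_times)

# 4. Mean Time to Recovery: average time to restore service
mttr = sum((end - start for start, end in incidents), timedelta()) / len(incidents)

print(f"DF:   {df:.2f} deploys/day")
print(f"CFR:  {cfr:.0%}")
print(f"LTTC: {lttc}")
print(f"MTTR: {mttr}")
```

Even this toy version shows why the metrics are usually reported as distributions or percentiles in practice: a single slow deploy or long outage can skew a plain average.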

DORA metrics are a perfect example of data-driven software development in action. They provide quantifiable data that can guide strategic decisions and help work towards organizational goals. By tracking these metrics, teams can identify areas of their software development pipeline that need improvement and optimize their processes accordingly.

For instance, Lead Time for Changes gives an overview of the software delivery pipeline’s overall health and can help identify bottlenecks in the pipeline. Deployment Frequency measures engineering velocity, indicating the organization’s overall efficiency. Change Failure Rate is a measure of the overall quality of the codebase, and Mean Time to Recovery (MTTR) is a measure of the organization’s resilience, indicating how quickly and effectively teams can resolve production issues.

But here’s the thing. While DORA metrics are all groovy and data-driven, they can sometimes feel like that overbearing health-freak friend who always fusses: “70g of protein, 10g of carbs and 10g of fat every single day” 🙄

Why, you ask? Well, let’s break it down:

  1. Creativity Crusher: DORA metrics are like a one-size-fits-all t-shirt. It might fit most, but what about the folks who like to wear their shirts baggy or those who prefer a snug fit? Standardised metrics can sometimes stifle the creative genius in developers who might have unique, out-of-the-box solutions.
  2. Speed Over Substance: With metrics like Deployment Frequency and Lead Time for Changes, there’s a lot of emphasis on speed. It’s like being in a constant race, and sometimes, in the rush to cross the finish line, the quality of the code might take a hit.
  3. Micromanagement Mayhem: DORA metrics can sometimes feel like someone’s constantly peeping over your shoulder, watching your every move. And let’s be real, nobody likes a peeping Tom, right?
  4. Growth Grinch: DORA metrics are all about output and performance. But what about individual growth, learning, and those “Eureka!” moments that might take a bit more time but lead to something truly innovative?

So, while DORA metrics are a nifty tool for understanding and improving software delivery performance, it’s important to remember that they’re not the be-all and end-all. It’s like having a balanced diet – you need your proteins, carbs, and fats, but you also need your vitamins and minerals and in the end what matters most is you need to be mentally cheerful and physically fit for 8-10 hours of the day. Similarly, while adhering to DORA metrics, it’s crucial to foster an environment that encourages creativity, innovation, and individual growth.

In the age of peak AI/Co-pilots/LLM’s it’s a mindset shift from “I write code” to “I deliver value”

And hey, remember, DORA metrics are like a treasure map. They can guide you to the treasure, but you’ve got to dig deep and interpret the signs to find the real gold! 🏴‍☠️🦜🏝️💰. They are one of the tools in your toolbox to better understand your team but they should never appear in your team’s performance discussions.

Among the first movers like Waydev, Athenian, and LinearB, there is a promising new contender, Faros, which has a Community Edition (hate to call it…the duct-tape version) for open-source enthusiasts that pulls in data from GitHub and Atlassian Jira to wrangle it into productivity dashboards.


If you liked what you read and want to explore other productivity schools of thought, then SPACE is another framework worth exploring.

Update: 30 Aug 2023

Just weeks after my commentary on developer productivity metrics, McKinsey released a report (lol, no linkage between the two events), which Gergely Orosz then dissected in further detail with a lot of real-world nuance. Both are worth a read. It is important to note that the McKinsey report provides a broad framework for measuring productivity, in the spirit of the adage “If you cannot measure it, you cannot improve it” – Lord Kelvin.