Are You Leading Your Dev Team in the Wrong Direction?
When it's your job to help engineers ship efficiently, you need to know what "efficient" is.
In college, my buddy Ethan did something both hilarious and instructive. He and his friends decided to drive from New York to New Orleans for Mardi Gras. They took shifts at the wheel so they wouldn’t have to stop.
Somewhere in Virginia, one guy took over to drive through the night. Ethan woke up the following morning and asked the driver, “How’s it going?”
“Good,” the driver replied. “I’ve been keeping it steady at 80 the whole time.” They should have been in Alabama or Mississippi by then, but when Ethan looked out the window, something didn’t seem right. “I didn’t know the South had this many cornfields,” he thought.
Then they passed a sign displaying the highway number, but it wasn’t one that he remembered being on the route. When they checked the map, they realized: The driver had made a wrong turn. The whole night they had been driving west, not south.
The driver had been so focused on one metric – speed – that he neglected all the others, including direction.
A similar thing happens on engineering teams all the time. Using the wrong metrics, a limited set of metrics, or metrics without context results in anti-patterns – and anti-patterns are what get you lost.
Leading Engineering Teams With Only DORA Metrics is Like Driving Without a Map
By quantifying key productivity and reliability indicators, DORA metrics have played an essential role in raising awareness among engineering leaders. Now there is broader agreement about which metrics to pay attention to, and the discussion has evolved to how best to leverage them.
But DORA is just a starting point in the transition to data-driven software engineering, not the end. As Ori Keren has laid out, there are limitations to DORA metrics.
The three limitations of DORA metrics:
The DORA team did not build a framework, product or methodology that enables you to measure the metrics in practice.
They’re all lagging indicators. To change outcomes, you need leading indicators to understand the underlying causes so that you can take action and achieve improvement.
They lack context. DORA metrics can tell you how fast you’re going and how stable you are getting there, but they don’t tell you whether you’re moving in the right direction.
If you’re relying on DORA metrics alone, it’s like only paying attention to the speedometer when you’re driving. You still need a map—something to benchmark the success of your team against—or else it’s easy to get lost.
Engineering Benchmarks Provide Destinations for Dev Teams
Metrics tell you where you are, but benchmarks give you a map that shows you where you can go. Benchmarks reveal your areas for improvement, guiding you to where you should focus your efforts.
Historically, benchmarks for engineering metrics have been lacking. We’ve heard some variation of the following from hundreds of engineering leaders: “We’ve started to monitor our metrics, but now what? Is this good? How do we improve?”
That’s why, in 2022, we studied ~2,000 dev teams and created our engineering benchmarks report, building on the seminal DORA research. We’ve since followed that report with updated 2023 figures, and at the end of Q3 2023 we’ll release a deep dive into the data behind the report, breaking down key areas of interest along with our methodology. (Edit: Sign up here and we’ll send the full report to you the day it’s released!)
To make going deeper into the data as easy as possible, we also break cycle time (aka lead time for changes under the DORA framework) into its four sub-phases: coding time, pickup time, review time, and deploy time. This allows you to automatically filter and surface the information you’re looking for, so you can see the individual issues causing bottlenecks in any of those sub-phases and understand where to target improvement efforts based on benchmarking data.
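To make the breakdown concrete, here’s a minimal sketch of how those four sub-phases can be computed from PR timestamps. The PullRequest fields are hypothetical stand-ins for data you’d pull from your git provider’s API; this illustrates the idea, not LinearB’s implementation:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class PullRequest:
    # Hypothetical fields; in practice these timestamps come from your
    # git provider's API (commit, review, and deployment events).
    first_commit_at: datetime
    opened_at: datetime
    first_review_at: datetime
    merged_at: datetime
    deployed_at: datetime


def cycle_time_phases(pr: PullRequest) -> dict[str, timedelta]:
    """Split total cycle time into the four sub-phases."""
    return {
        "coding_time": pr.opened_at - pr.first_commit_at,   # first commit -> PR opened
        "pickup_time": pr.first_review_at - pr.opened_at,   # PR opened -> first review
        "review_time": pr.merged_at - pr.first_review_at,   # first review -> merged
        "deploy_time": pr.deployed_at - pr.merged_at,       # merged -> in production
    }
```

Summing the four phases gives total cycle time, which makes it easy to see at a glance which phase dominates for a given team.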
What Are the Leading Indicators?
I mentioned earlier that DORA metrics are lagging indicators, which means plenty of other metrics feed into them. We also knew that users would want visibility into those leading metrics: PR review depth, the number of active branches (i.e., WIP), and, most importantly, rework rate and PR size.
Keeping these leading indicators healthy enables engineering teams to shorten cycle time, prevent downstream problems, and boost application performance and reliability.
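Definitions for these indicators vary from tool to tool, so here’s a rough sketch of one way to compute them. The per-line review-depth formula and the 21-day rework window are illustrative assumptions, not official definitions:

```python
from dataclasses import dataclass


@dataclass
class PRStats:
    lines_added: int
    lines_deleted: int
    review_comments: int
    reworked_lines: int  # changed lines touching code written in the last ~21 days (assumed window)


def pr_size(pr: PRStats) -> int:
    # Total lines changed; smaller PRs are generally healthier.
    return pr.lines_added + pr.lines_deleted


def review_depth(pr: PRStats) -> float:
    # Review comments per changed line: a rough proxy for review thoroughness.
    return pr.review_comments / max(pr_size(pr), 1)


def rework_rate(prs: list[PRStats]) -> float:
    # Share of all changed lines that rework recently written code.
    # A climbing rework rate often signals churn or unstable requirements.
    total_changed = sum(pr_size(p) for p in prs)
    total_reworked = sum(p.reworked_lines for p in prs)
    return total_reworked / max(total_changed, 1)
```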
Improving Productivity Starts with Knowing Where to Go
Although learning where you’re underperforming is not a pleasant experience, it's ultimately for the better. Benchmarks are valuable precisely because they reveal often-overlooked areas for improvement—it’s hard to self-evaluate objectively.
This is also a key reason why LinearB focuses our metrics on teams, not individuals. We are here to provide tools to improve engineering teams, not micromanage individuals.
Some engineering leaders have also expressed concerns that benchmarks would take away their decision-making power. But benchmarks don’t tell you where to go. They’re a map that tells you where you can go. Ultimately, it’s up to leaders and their teams to decide what route is best for them.
This is part of why we found almost no teams were elite across every metric we measured. Different teams have different needs. For example, a team may have long pickup or review times, but this could be because they’re working on a sensitive part of the code base, and they need two reviewers for every PR.
Having benchmarks across ten different metrics (so far!) gives your teams the flexibility to choose the metrics that are critical for you and to establish a path to improvement by setting customized team goals.
Small Pull Requests = The Quickest Path to Shorter Cycle Time
Our benchmarks study revealed that the average cycle time is 7 days, and between 3 and 4 of those days are taken up by the PR lifecycle. In other words, roughly half of cycle time is spent in the PR review process!
Yet if PRs are holding cycle time back, they’re also the area with the most room for improvement. When we crunched the numbers, we found that as PR size grows, pickup time grows exponentially.
Based on interviews (and personal experience), engineers know that reviewing a big PR requires a big block of uninterrupted time, so they often put off the review. Bigger PRs can also let bugs slip by: when PRs with more lines of code land on busy reviewers, the chance of skimming goes up, which hurts MTTR and CFR downstream.
Bottom line: The most effective way to shorten PR lifecycles is to decrease PR size.
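Benchmarks make this actionable: one lightweight option is a CI check that flags oversized PRs before they reach a reviewer. Here’s a minimal sketch; the 250-line threshold is an arbitrary assumption you’d tune against your own goals, not a figure from our report:

```python
import subprocess
import sys

# Hypothetical threshold; tune it to your team's benchmark-driven goal.
MAX_CHANGED_LINES = 250


def changed_lines(base: str = "origin/main") -> int:
    # git diff --shortstat prints: "N files changed, M insertions(+), K deletions(-)"
    out = subprocess.run(
        ["git", "diff", "--shortstat", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    counts = [int(tok) for tok in out.replace(",", " ").split() if tok.isdigit()]
    # Skip the leading file count; sum insertions and deletions.
    return sum(counts[1:]) if len(counts) > 1 else 0


if __name__ == "__main__":
    size = changed_lines()
    if size > MAX_CHANGED_LINES:
        print(f"PR changes {size} lines (limit {MAX_CHANGED_LINES}). Consider splitting it.")
        sys.exit(1)
    print(f"PR size OK: {size} lines changed.")
```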
The silver lining? Free workflow tooling like gitStream is making it easy to accelerate and customize your PR lifecycle. Between better tooling and processes and the outsize impact of the PR process, there are quick, high-leverage ways to raise your team’s throughput. PR workflow tooling is already showing promising results in early testing: our data shows streamlined, customizable workflows led to a ~61% improvement in team cycle time.
Anecdotally, we’re also seeing the implementation of Continuous Merge processes via PR workflow tooling make a major impact. DevCycle’s VP of Engineering Nick Leblanc recently joined me on the Dev Interrupted Podcast to discuss how his team has implemented Continuous Merge.
I can’t wait to see what the data shows us in six months; there is so much potential for progress, provided we enable dev-driven change rather than top-down mandates.
Want Improvement? Let Your Devs Take the Wheel
In engineering orgs, individual engineers have to make changes in their workflows to drive improvement at the team level. Direction can come top-down, but actual improvement comes bottom-up. This means that engineering leaders should focus on building systems that empower developers to take the wheel and drive improvement in their processes to accomplish the team’s goals.
The first piece of this is making your goals visible, not hidden in some spreadsheet only to be seen in a bi-weekly meeting. Your team should set explicit goals and track progress against them in real-time charts. Visibility like this fosters a sense of ownership over improving processes and holds teams accountable, which helps everyone. We’ve also seen that visibility alone can improve team performance.
The second key is providing developers with tools that help them change how they work. At LinearB, we use WorkerB, an automation bot that knows the goals our teams have set in our platform and automatically takes action to help accomplish them. Like Google Maps updating your route after a wrong turn, WorkerB enables your team members to course-correct without stopping.
If you’re an engineering leader, empowering your developers to improve demonstrates that you empathize with them and understand their pain points. You earn their trust, which enhances your ability to influence and lead your team. And if you’re a dev? Well, I think we’d all like to spend a little less time in Jira.
Benchmarking your Metrics is the Route to Data-Driven Engineering
All told, DORA metrics were a huge step forward for engineering teams. DORA ushered in the era of data-driven engineering by providing a small set of clear metrics.
Nine years later, the data is becoming clear: if you track key metrics like DORA, provide context with engineering benchmarks, and leverage automation to unstick devs’ workflows, you build high-performing dev teams.
If you build a high-performing dev team, and give them the right map, you’ll deliver better quality software, faster—software that’s valuable to your customers.
In modern business, that’s how you win.
See the Benchmarks Elite Teams Are Using - Read: “What Metrics Make Engineering Teams Elite: A Free Look Inside The Top 10% Of Engineering Org”
Understand where your team stands and benchmark its success against the industry standard for elite dev teams with LinearB’s free deep dive into the foundations of our engineering benchmarks. Based on our analysis of over 2,000 dev teams, we created this report to cover:
How elite engineering teams measure their performance beyond DORA metrics.
What makes engineering teams elite in terms of the development lifecycle, developer workflow, and business alignment.
How your team stacks up on metrics like cycle time, deployment frequency, and planning accuracy.
Which tools elite software teams use to hit these industry-leading benchmarks.
Really enjoyed writing this blog: what do you think I got wrong? What did I miss?
PS - if you want to be on the list for the full data report - let me know!