Are these metrics actionable?

What you can do with Velocity insights.

Written by Mike Koeneke

Yes, in context. 

Metrics aren’t meaningful in a vacuum. You’ll need to pair your findings with conversations with team members to get a full understanding of what causes a particular metric to increase or decrease. Qualitative and quantitative data together will help you determine the best way to take action.

Here’s a breakdown of some of our core metrics and possible courses of action:

  • Impact: This metric reports the estimated difficulty of a change to the codebase, and thus the impact a given change has on a project. Three variables factor into this metric: the location of the change, its size, and its complexity. You might expect a new engineer to perform increasingly well on this metric as they go through the onboarding process. If their progress doesn’t meet expectations, you might take action to restructure onboarding or set the new hire up with more pairing.

  • Rework: This metric shows how much of their own code an individual contributor edits within three weeks after pushing. You may notice that a contributor has a high Rework percentage relative to their peers, and decide to pair them with a more experienced developer to help them ramp up.

  • Pull Requests Merged: This metric represents how often your team is delivering value to customers, so it’s your engineering team’s speedometer. A dip in Pull Requests Merged is a flag that something recently changed and is causing a lapse in productivity. You may find that this lapse correlates with a new code review policy and decide to roll back that decision.

  • Abandoned Pull Requests: This metric represents the number of pull requests that haven’t been touched for over three days. Abandoned pull requests indicate wasted effort, so a high count can be a cause for concern. Digging deeper, you might discover a lack of clarity around a particular feature. You may choose to encourage limiting story points or to establish best practices for how product and engineering break features down.

  • WIP / Contributor: This metric shows the ratio of open pull requests to active contributors. If this ratio is higher than normal for your team, it’s possible that your team is spread thin. You may choose to re-prioritize the sprint and give your team fewer tracks of work. (For how count- and ratio-based metrics like this one can be derived from raw pull-request data, see the sketch after this list.)

  • Initial Review Ratio: This metric can help diagnose problems hiding within your code review process. If “approved w/o comment” is high and corresponds with an increase in bugs, you might take action to incentivize a more thorough review process.
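
To make the count- and ratio-based metrics above more concrete, here is a minimal sketch of how figures like Abandoned Pull Requests and WIP / Contributor could be computed from a list of pull-request records. The PullRequest record, the three-day threshold, and the helper functions are illustrative assumptions for this sketch, not Velocity’s actual data model or implementation.

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    # Hypothetical, simplified pull-request record (not Velocity's real data model).
    @dataclass
    class PullRequest:
        author: str
        opened_at: datetime
        last_activity_at: datetime
        merged: bool

    def abandoned_count(prs, now, threshold=timedelta(days=3)):
        """Count open PRs with no activity for longer than the threshold (three days here)."""
        return sum(1 for pr in prs if not pr.merged and now - pr.last_activity_at > threshold)

    def wip_per_contributor(prs):
        """Ratio of open PRs to the number of distinct contributors in the data."""
        open_prs = [pr for pr in prs if not pr.merged]
        contributors = {pr.author for pr in prs}
        return len(open_prs) / len(contributors) if contributors else 0.0

    # Example with toy data
    now = datetime(2024, 1, 15)
    prs = [
        PullRequest("ana", datetime(2024, 1, 2), datetime(2024, 1, 10), merged=False),
        PullRequest("ben", datetime(2024, 1, 8), datetime(2024, 1, 14), merged=False),
        PullRequest("ana", datetime(2024, 1, 5), datetime(2024, 1, 9), merged=True),
    ]
    print(abandoned_count(prs, now))   # 1 (one open PR idle for five days)
    print(wip_per_contributor(prs))    # 1.0 (2 open PRs / 2 contributors)

The point of the sketch is that each of these metrics is a simple aggregation over the same underlying pull-request activity; the insight comes from watching how the numbers move over time and pairing them with conversations, as described above.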

The larger your team grows, the harder it is to make well-informed decisions. The handful of conversations you have each week may no longer represent the largest and most pressing problems that need to be addressed. Velocity helps you fill in the blanks and determine the best course of action, based on a complete understanding of your team’s work patterns.
