Code Review

An in-depth look at your team’s code review process.

Written by James McGill
Updated over 6 months ago

Overview

The Code Review page provides team and individual contributor metrics around your code review process, so you can improve the thoroughness and efficiency of your collaborative practices.

This page is made up of four components:

  • Initial Review: The portion of initial reviews that were approvals or requests for changes. Approvals without any comments are broken out separately to track “rubber stamp” approvals.

  • Coverage: The percentage of files receiving at least one code review comment, which serves as an indicator of review thoroughness.

  • Speed: Also known as Time to Review, defined as the time from when the pull request is opened to the initial review.

  • Influence: The percentage of review comments that are addressed either by a response comment or a change to the code. This is a great way to ensure the review feedback is meaningful to the PR authors.

Notes:

  • Initial Review, Coverage, and Speed display review metrics for initial reviews.

  • Review metrics for Bitbucket and GitLab repos will differ from those seen for GitHub repos. For more information, refer to this doc.

Initial Review

The Initial Review component helps you understand how often pull requests get stuck in the code review process. It breaks initial reviews into four categories (see the sketch after this list):

  • Approved: the number of pull requests opened during this period that were approved during their initial review (with a comment)

  • Approve w/o Comment: the number of pull requests opened during this period that were approved without a comment (a possible “rubber stamp” approval)

  • Comment: the number of pull requests opened during this period where the first action was a comment, without either approving or requesting changes

  • Changes requested: the number of pull requests opened during this period that had changes requested
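
As a rough illustration, here is how a pull request's first review action might be bucketed into these four categories. This is a minimal sketch using hypothetical field names, not the product's actual data model:

    from dataclasses import dataclass

    @dataclass
    class Review:
        state: str         # "approved", "changes_requested", or "commented"
        has_comment: bool  # whether the review included any comment text

    def classify_initial_review(review: Review) -> str:
        """Bucket a PR's first review into the four Initial Review categories."""
        if review.state == "approved":
            return "Approved" if review.has_comment else "Approve w/o Comment"
        if review.state == "changes_requested":
            return "Changes requested"
        return "Comment"

    # An approval with no comment text counts as a possible rubber stamp
    print(classify_initial_review(Review(state="approved", has_comment=False)))
    # -> Approve w/o Comment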

Coverage

Coverage denotes the percentage of files in a pull request that receive at least one comment, so it's a good representation of review thoroughness. You’ll also see the relative size of comments:

  • Large (20 or more words)

  • Regular (between 8 and 20 words)

  • Trivial (fewer than 8 words)

Knowing the size of pull request comments helps paint a picture of the amount of time and thought put into the code review. If a team has a poor coverage percentage and a high approval percentage, for instance, it could mean that code reviews aren’t as thorough as they could be.
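
To make the arithmetic concrete, here is a minimal sketch of how coverage and the comment-size buckets could be computed. The inputs are hypothetical, not the product's actual schema:

    def coverage(files_in_pr: set[str], commented_files: set[str]) -> float:
        """Percentage of a PR's files that received at least one review comment."""
        if not files_in_pr:
            return 0.0
        return 100 * len(commented_files & files_in_pr) / len(files_in_pr)

    def comment_size(comment: str) -> str:
        """Bucket a comment by word count, matching the thresholds above."""
        words = len(comment.split())
        if words >= 20:
            return "Large"
        if words >= 8:
            return "Regular"
        return "Trivial"

    print(coverage({"a.py", "b.py", "c.py", "d.py"}, {"a.py", "c.py"}))  # -> 50.0
    print(comment_size("Nit: rename this variable."))                    # -> Trivial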

Note: Coverage considers initial code reviews only.

Speed

Review speed, reported as the Time to Review metric, measures the turnaround time of a review: the time from when a pull request is opened to the first review action. The following actions count toward this metric: an approval, a comment, or a request for changes.
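
As a sketch, Time to Review for a single pull request could be computed like this (the timestamps and inputs are hypothetical):

    from datetime import datetime

    def time_to_review(opened_at: datetime, review_actions: list[datetime]) -> float | None:
        """Hours from PR open to the first qualifying review action
        (an approval, a comment, or a request for changes)."""
        if not review_actions:
            return None  # the PR has not been reviewed yet
        return (min(review_actions) - opened_at).total_seconds() / 3600

    opened = datetime(2024, 1, 8, 9, 0)
    actions = [datetime(2024, 1, 8, 15, 30), datetime(2024, 1, 9, 10, 0)]
    print(time_to_review(opened, actions))  # -> 6.5 (hours)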

Note: Review speed considers initial code reviews only. Also, Bitbucket comments, requests for changes, and labels do not count as reviews.

Influence

Review influence displays Review Cycles, the average number of back-and-forth exchanges between the author and a reviewer in a pull request.

The Comments Addressed section displays the percentage of comments by that reviewer that lead to changes in code or further comments. A high percentage of comments addressed by code changes can suggest that the reviewer's feedback is actionable and drives concrete improvements, while a high percentage of comments addressed by further comments can suggest that the feedback tends to prompt discussion or clarification rather than code changes.
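
A minimal sketch of the Comments Addressed calculation, assuming a hypothetical record of how (or whether) each review comment was resolved:

    def comments_addressed(resolutions: list[str]) -> float:
        """Percentage of review comments addressed by a code change or a reply.
        Each entry is "code_change", "reply", or "unaddressed" (hypothetical labels)."""
        if not resolutions:
            return 0.0
        addressed = sum(r in ("code_change", "reply") for r in resolutions)
        return 100 * addressed / len(resolutions)

    print(comments_addressed(["code_change", "reply", "unaddressed", "code_change"]))
    # -> 75.0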

Note: Review influence considers all code reviews.

Reviewers

The Reviewers section breaks down the Code Review metrics by contributor or team. You can filter by name or sort by whichever metric is most valuable to you.

All of these metrics appear on a per-individual and per-team basis at the bottom of the page.

This section also displays the Involvement metric, which is the percentage of pull requests from your organization that a particular contributor (or team) has reviewed. Ideally, involvement should be spread evenly across a team. If involvement is heavily weighted towards one contributor, that person may become a bottleneck in the review process.
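
For illustration, Involvement could be computed as follows (a sketch with hypothetical inputs):

    def involvement(org_pr_ids: set[int], reviewed_pr_ids: set[int]) -> float:
        """Percentage of the organization's PRs that this contributor reviewed."""
        if not org_pr_ids:
            return 0.0
        return 100 * len(reviewed_pr_ids & org_pr_ids) / len(org_pr_ids)

    print(involvement({1, 2, 3, 4, 5}, {2, 5}))  # -> 40.0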

Understanding Code Review Metrics

  • When viewing review metrics over a period of time, the data set is reviews created over that period of time, rather than reviews for pull requests created over that period of time

  • Self-reviews (when an author leaves a review on their own PR) are always excluded from review metrics

  • Review metrics are scoped to either initial code reviews, or initial and subsequent code reviews. An initial code review is the first code review left by a reviewer on a particular PR. For example: if André reviews a PR twice, and Antwan reviews that same PR once, the PR has 2 "initial reviews" (André's first review, and Antwan's first review), but 3 "total reviews." This scoping is done on a metric-by-metric basis, based on which makes the most sense for that metric (see the sketch below).
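
The André/Antwan example can be expressed as a quick count (hypothetical data, with self-reviews already excluded):

    # Reviewers who left each review on the PR, in chronological order
    reviews = ["André", "Antwan", "André"]

    total_reviews = len(reviews)         # every review counts: 3
    initial_reviews = len(set(reviews))  # first review per distinct reviewer: 2

    print(initial_reviews, total_reviews)  # -> 2 3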
