Metrics Catalog

Recommended Metrics

If you are overwhelmed by the number of things to measure, below is a recommended smaller set of metrics to start with, based on the nature of your project and/or the aspect of measurement.

 

SCRUM

Ensure predictable, incremental/iterative delivery of the most valuable scope on a sprint-by-sprint cadence, with a quality level that allows shipping to production every iteration.

KANBAN / SUPPORT

Monitor the input and output flows of your service, and retrospectively drill down into your pipeline to ensure there are no bottlenecks, SLAs are met, and "first-time-right" quality is acceptable.

FIXED SCOPE / PRICE

Keep the forecasted delivery date "On Track" to meet deadlines, with strong change management to control scope creep and a high level of quality to ensure a safe rollout.



QUALITY

View defect trends to sign off releases with greater confidence, analyze overall testing efficiency, and ensure a proper balance between manual and automated testing.

ENGINEERING / TECHNOLOGY

Make sure your dev teams are fast and reliable (volume and frequency of code commits), help each other (peer reviews), and keep quality gates over code and CI/CD pipelines green.

DEVOPS INDUSTRY METRICS

Boost the performance of your teams via faster lead time for changes to production, increased deployment frequency, faster time to restore service, and a reduced change failure rate.







Metrics by Data Source Types

Release and task tracking

PERF supports these release and task tracking management tools:

Summary

Purpose

Area: Requirements

Sprint Plan Change, %

The deviation of the sprint scope from the initially planned one (additions to and reductions of the iteration scope). The more mature the sprint planning process, the closer this metric is to 0.

Sprint Scope Creep, %

The number of items added to / removed from the scope of a given iteration. The more mature the sprint planning process, the closer this metric is to 0.

Sprint Scope Stability, %

Shows how much of the initially planned sprint scope remained in the sprint by its end. The target is 100%, which means commitments turn into reliable delivery.
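As an illustration (a sketch of the idea, not how PERF computes it internally; the item keys and numbers are hypothetical), scope stability can be derived from the planned and end-of-sprint item sets:

```python
def scope_stability(planned_items, items_at_sprint_end):
    """% of the initially planned sprint scope still present at sprint end."""
    planned = set(planned_items)
    if not planned:
        return 100.0
    kept = planned & set(items_at_sprint_end)
    return 100.0 * len(kept) / len(planned)

# 4 items planned; "A-4" was dropped mid-sprint, "A-5" was added later.
print(scope_stability(["A-1", "A-2", "A-3", "A-4"],
                      ["A-1", "A-2", "A-3", "A-5"]))  # 75.0
```

Note that items added mid-sprint (like "A-5" above) do not raise stability; only the survival of the original commitment counts.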

Backlog Health

Shows the amount of "ready for development" scope at each sprint start.

Scope ready for development

The amount of scope in the "Ready for Development" state divided by the average team velocity, showing for how many sprints or weeks the team has a productive load. Recommended level: 2+ sprints/weeks ahead.
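The arithmetic behind this is simple; here is a minimal sketch (the figures are illustrative, and the unit just has to match on both sides, e.g. story points per sprint):

```python
def sprints_of_ready_scope(ready_scope_sp, avg_velocity_sp):
    """How many sprints (or weeks) of productive load the
    'Ready for Development' backlog provides, given the average
    velocity per sprint (or week)."""
    return ready_scope_sp / avg_velocity_sp

# 60 story points ready, average velocity 20 SP per sprint -> 3 sprints of runway
print(sprints_of_ready_scope(60, 20))  # 3.0
```

A result below 2 would signal that refinement is falling behind the recommended level.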

Improper Dependencies

The number of planning collisions due to wrong dependencies between work items in the backlog.

Area: Progress

Burn Up with a forecasted completion date for a Project, Release or Sprint

Shows the current project performance and overall tendency, and predicts future performance. The forecast of the scope and completion date (for a whole project, a specific release, or a specific sprint) is based on the team's delivery speed over past iterations.

Remaining Estimate by weeks

Shows, week by week, the total amount of the “Remaining Estimate” in the selected scope of work.

Area: Productivity

Committed vs. Completed

Shows the amount of work planned and completed per iteration. This helps you determine your team's velocity and estimate the work your team can realistically achieve in future iterations. Works by Sprints and Releases.

Commitment Rate

Shows completion ratio by sprints or releases in work items, story points, or man hours of remaining or original estimate.
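A sketch of the underlying ratio (illustrative numbers; the unit can be work items, story points, or man-hours, as long as both totals use the same one):

```python
def commitment_rate(completed, committed):
    """Completion ratio for a sprint or release, in whatever unit
    both totals share (items, story points, or man-hours)."""
    return 100.0 * completed / committed if committed else 0.0

# Team committed to 40 SP and completed 34 SP
print(commitment_rate(34, 40))  # 85.0
```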

Lead and Cycle Time

To find out how much time, on average, a team spends completing a task.

Flow Efficiency, %

The ratio of Cycle Time to Lead Time. The closer to 100%, the better: it means a task spends most of its time being actively processed rather than waiting.
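As a sketch (hypothetical numbers, not PERF's internal implementation):

```python
def flow_efficiency(cycle_time_days, lead_time_days):
    """Share of the lead time a task spends actually being processed."""
    return 100.0 * cycle_time_days / lead_time_days

# Task created on day 0, work started on day 6, finished on day 10:
# lead time = 10 days, cycle time = 4 days -> the task waited 60% of its life.
print(flow_efficiency(4, 10))  # 40.0
```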

Reaction Time

To find out how quickly, on average, a team picks up a task.

Target Fulfillment on Reaction/Resolution Time

The percentage of work (e.g. scope of support tickets) which met the SLA (if defined) vs. not met.

Throughput

To know the number of completed tasks vs. created tasks per iteration (week, month).

Time in Status

To see how much time, on average, a work item spends in each stage of its workflow.

Created vs. Resolved

A cumulative trend between created work items against resolved work items - for last 7 / 30 / 90 days

Cumulative Flow Diagram

Visualization of an overall team effort, a classic view on execution of Kanban projects. Available for the whole project, selected release and selected sprint in the following variations:

  1. view by Status Buckets - per how those are set up in Perf for a project: "Blocked" / "To Do" / "Ready for Dev" / "In Progress" / "Done" over time.

  2. view by Statuses - per native statuses of each ticket in their respective tracking tool (Jira, TFS, etc.).

Issues Aging

To see how much work is pending implementation in the backlog, and for how long.

"Not Done" Bucket Review

Shows how issues that were in a "Not Done" status at the end of an iteration changed status during the iteration. Helps to understand the initial status of issues which were not done by the end of the iteration. Works for a Sprint and a Release.

Number of open work stoppages (if tracked) and their trend

To show how often a team gets blocked in its work and how quickly stoppages are resolved. Work stoppages - and their efficient resolution - are one of the key factors enabling a team's high velocity on a project.

Work stoppages lifetime

To show the average lifetime of stoppages on a project, counted when they are closed. The closer to 0, the better: it means a quick turnaround on resolving the things that block a team from moving forward.

General Velocity

To see an overall velocity trend, by months and by weeks. Applicable for Kanban as well.

Average Velocity

To compare the team's productivity over a long-term versus a short-term interval to see performance improvement or degradation.

Area: Quality

Bug Growth

Shows the number of fixed defects vs. the number of logged defects, by days/weeks/months or by iterations.

Quality Debt (in estimated man-Hours)

To quickly (approximately) answer the question "How much time does my team need to fix all / critical defects?"
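One simple way to approximate this (a sketch under the assumption that defects without a remaining estimate get an assumed average fix effort; the 4-hour default is illustrative, not a PERF setting):

```python
def quality_debt_hours(open_defect_remaining_hours, default_fix_hours=4.0):
    """Rough total effort to fix all open defects; defects without a
    remaining estimate fall back to an assumed average fix effort."""
    return sum(h if h is not None else default_fix_hours
               for h in open_defect_remaining_hours)

# Three open defects: two estimated (3h, 5h), one unestimated (assumed 4h)
print(quality_debt_hours([3.0, 5.0, None]))  # 12.0
```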

Open Bugs Over Time (by priorities and in total)

To see a trend of open defects over time (within last 180 days) on a project.

Defect Containment %

The % of defects submitted by the internal team vs. all logged defects (which usually include Production defects too), to assess the overall efficiency of the QA team and QA processes.

Re-Opened Defects %

To see the % of defects that have been re-opened at least once (i.e. not accepted by testers and returned to developers for rework).

Invalid Defects %

The % of rejected defects, i.e. defects which cannot be reproduced, are duplicates, or are rejected for any other reason.

Top-priority Defects Age Alert

To check if high priority defects are fixed quickly enough.

First Time Right %

The ratio of items, month over month, accepted the first time, i.e. with no returns to earlier stages of the pipeline.

Defect Leakage

The percentage of defects not detected by the testing team.

Defect Density

The ratio of defects to software size, estimated in story points or items.

Area: Estimation

Overall Estimation Accuracy of a team, in %, for a selected time frame.

To identify team estimation mistakes in a timely manner via amber and red segments, and retrospect on them with the team to improve estimation accuracy going forward.

NOTE: this only works if effort estimation in hours is used on a project and team reports their spent time.

The % of work items closed within a selected time frame which:

  • met original estimate

  • exceeded original estimate (>+20%)

  • exceeded original estimate significantly (>+100%)

To see the contribution of each part into the overall estimation accuracy % on a project.
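The three buckets above can be sketched as a simple classifier (illustrative only; the thresholds mirror the list above, the numbers are hypothetical):

```python
def estimate_bucket(original_estimate, actual_effort):
    """Classify a closed item by how far actuals drifted above
    the original estimate."""
    overrun = (actual_effort - original_estimate) / original_estimate
    if overrun > 1.0:
        return "exceeded significantly (>+100%)"
    if overrun > 0.2:
        return "exceeded (>+20%)"
    return "met original estimate"

print(estimate_bucket(10, 11))  # met original estimate
print(estimate_bucket(10, 15))  # exceeded (>+20%)
print(estimate_bucket(10, 25))  # exceeded significantly (>+100%)
```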

Un-estimated work done, %

To see how much work is accomplished without being estimated.

Effort Variance in %, month over month

To show the deviation, in %, between the initial estimates of tasks/stories/etc. on a project and the actual effort logged to those items.
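The variance formula is straightforward; here is a sketch with hypothetical numbers:

```python
def effort_variance_pct(original_estimate_hours, actual_effort_hours):
    """Deviation in % between the initial estimate and the actual
    logged effort. Positive = overrun, negative = underrun."""
    return (100.0 * (actual_effort_hours - original_estimate_hours)
            / original_estimate_hours)

# Estimated at 40h, actually took 50h -> +25% variance
print(effort_variance_pct(40, 50))  # 25.0
```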

Area: Team Workload

For Sprint or Release - Workload Summary consisting of: 

  • the % of overall team load with tasks compared to the team's available capacity

  • % of unassigned work items

  • % of un-estimated items

To monitor the quality of sprint plannings and assignments to team members. Most valuable while using it for the active sprint or during a sprint planning.

NOTE: this only works if effort estimation in hours is used on a project

Per-person Capacity vs. Actual time reported (all in hours)

To see a per-person utilization over 2-week time intervals.

Reported Hours by weeks

Week by week total hours reported by a team.

Area: Best practices compliance - Tracking & Reporting Hygiene

Items not "Ready" but in active sprint(s)

Gives the number of issues which are in active sprint(s) but still not marked as "ready for development".

Tasks not estimated but Effort logged

Shows the number of items without estimates but with effort logged. This indicator is counted for the whole project duration and is expected to be zero. Calculation is based on issues created within the last 90 days.

Number of issues in Backlog (out of Sprints) but already completed

Shows the number of issues which are already completed but sit in the backlog, not assigned to any sprint. Only valid items are counted, i.e. issues resolved as "Duplicate", "Rejected", "Cannot reproduce" or similar are not taken into consideration. This indicator can point out a contribution lost from sprint velocity and is expected to be zero.

Completed work items with Remaining Estimate > 0

Shows the number of items which are already closed but still have some remaining time in the tracking system. Non-zero remaining time on completed tasks distorts burn-down charts, making them show incorrect status even though the planned effort within the iteration is completed. This indicator is counted for the whole project duration and is expected to be zero. Calculation is based on issues created within the last 90 days.

Effort logged on completion

Shows a percentage of work items closed within last 90 days and having any effort logged in a tracking system

Estimated items in SP at sprint start

Gives the percentage of estimated items at sprint start day relative to all issues in active sprint at sprint start day

Estimated items in hours at sprint start

Gives the percentage of estimated items at sprint start day relative to all issues in active sprint at sprint start day

Completed work items estimated in Hours (last 3 months)

This indicator shows the percentage of work items (e.g. tasks, sub-tasks and whatever is applicable in your project) which are estimated in hours. Checked for the last 3 months.

Completed work items estimated in Story Points (last 3 months)

This indicator shows the percentage of work items (e.g. User Stories, Epics, Improvements and whatever is applicable in your project) which are estimated in Story Points. Checked for the last 3 months.

Scope "ready for dev" in active sprints

Gives the percentage of issues marked as "ready for development" on the 1st day of active sprint(s).

Items without Story Points but in Active sprints

Shows the number of issues included in an active sprint but without story point estimates. This indicator is expected to be zero.

Stories which are in Open/Ready state but with all sub-tasks already Closed

Shows the number of stories (and other issues) which are still in Open/Ready/In Progress status but with all their sub-tasks closed. This indicator is expected to be zero. Calculation is based on issues created within the last 90 days.

Stories which are Closed but with any sub-tasks still incomplete

Shows the number of stories (and other issues) which are already closed but with all or some of their sub-tasks still incomplete. This indicator is expected to be zero. Calculation is based on issues created within the last 90 days.

Number of items with effort logged after items were closed

Shows the number of issues for which effort was logged after the issues had been closed. This indicator is expected to be zero. Calculation is based on issues created within the last 90 days.

Bugs not attached to 'Affects Version'

Shows the number of bugs not attached to any "Affects Version", so it is not clear in what version/release those bugs were found. This indicator is expected to be zero.

Bugs closed without a 'Fix Version'

Shows the number of bugs without a 'Fix Version', i.e. it is not clear in which version/release they are fixed (or planned to be fixed). This indicator is expected to be zero. Calculation is based on issues completed within the last 90 days and excludes invalid bugs.

Items with Story Points changed during last month

Shows the number of issues for which story point estimates were changed during the last month. Changing story point estimates means estimate creep and is a bad practice. This metric does not cover setting initial estimates, only correcting existing ones. This indicator is expected to be zero.

Items with Original Estimate changed during last month

Shows the number of issues for which Original Estimates (in hours) were changed during the last month. Changing Original Estimates means estimate creep and is a bad practice; good practice is to adjust the Remaining Estimate to reflect the effort required to complete a task. This indicator is expected to be zero.

Items in progress with no changes during last 2 weeks

Shows the number of issues in progress with no changes during the last 2 weeks. Over such a long period some updates are expected: people log their time, post updates, or ask questions via comments.

Work logged with dates in the future

Shows the amount of work log (effort) submitted with future dates (most likely by mistake), i.e. where the date of a work log entry is later than 'Today'. Such entries require correction to avoid affecting other metrics.

Area: Best practices compliance - Functional Quality

Number of not completed defects (top priorities)

Shows the number of not completed defects with Blocker or Critical priority. This indicator is expected to be zero.

Total number of not completed defects (all priorities)

Shows the number of defects not completed at the moment. The suggested acceptable value is fewer than 10 open defects.

Top priority vs Total defects

Shows the ratio of top-priority defects to all defects open at the moment. Top priority usually means defects of "Blocker" and "Critical" priority; the exact rule defining which defects count as top priority is specified in the project configuration (Data Sources - JIRA - Quality Management).

Unresolved bugs older than a month (top priorities)

Shows the number of defects (Blockers and Criticals) which have remained unresolved for more than a month. This indicator is expected to be zero. Calculation is based on issues created within the last 90 days.

Unresolved bugs older than a month (all priorities)

Shows the number of defects (all priorities) which have remained unresolved for more than a month. The suggested acceptable value is <20. Calculation is based on issues created within the last 90 days.

Time spent on bug fixing, %

Shows the percentage of time spent on fixing bugs out of the total effort spent on a project. The suggested acceptable value is less than 20%. Calculation takes all work logs submitted within the last 90 days.
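A sketch of this calculation over work logs (the issue-type label "Bug" and the hours are hypothetical; real work logs would come from the tracking tool):

```python
def bugfix_time_pct(worklogs):
    """worklogs: (issue_type, hours) pairs, e.g. from the last 90 days.
    Returns the share of total effort spent on issues of type 'Bug'."""
    total = sum(hours for _, hours in worklogs)
    bugs = sum(hours for issue_type, hours in worklogs if issue_type == "Bug")
    return 100.0 * bugs / total if total else 0.0

logs = [("Story", 30), ("Bug", 6), ("Task", 10), ("Bug", 4)]
print(bugfix_time_pct(logs))  # 20.0
```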

Defects average lifetime (top priorities)

Shows the average lifetime (in days) of top priority defects. Calculation is based on issues in "Open" and "In Progress" states, closed within the last 90 days.

Defects average lifetime (all priorities)

Shows the average lifetime (in days) of defects of all priorities on a project. Calculation is based on defects closed within the last 90 days.

Didn't find a metric you need?

If you need something not mentioned above, check out the Custom Metrics feature and its ability to Configure advanced custom metrics over PERF data.

Code quality

Supported code quality tools:

Summary

Purpose

Code Quality Summary (roll up report over multiple Sonar projects)

Bird's-eye view of key Sonar code quality metrics, with the ability to drill down.

Unit Testing:

  • Unit Test Count

  • Unit Test Success Rate, %

  • Unit Test Coverage, % (for overall codebase, and for the new code only)

  • Unit Test Coverage % over multiple repositories on one chart

  • Unit Test per Class

A view of the unit testing perspective of the project.

Code Quality:

  • Technical Debt, hours

  • Duplicated Lines, %

  • Duplicated Lines % over multiple repositories on one chart

  • Code Maintainability Rating A..E

  • Code Reliability Rating A..E

  • Violations per severity (for overall codebase, and for the new code only)

  • Code Complexity per File / Class / Function / Method

  • Code Complexity per File over multiple repositories on one chart

A view of the code quality metrics and ratings of a project.

Security:

  • OWASP security vulnerabilities

  • Security Rating A..E

A measure of how reliable the code is, based on the security vulnerabilities detected in it.

Code Documentation:

  • Code Comments, %

  • Public Documented API, %

A view of the quality of the source code documentation.

Build pipeline and CI/CD

Supported CI/CD tools:

Summary

Purpose

CI/CD Summary (roll up report over multiple Jobs/Pipelines)

Bird's-eye view of key CI/CD metrics, with the ability to drill down.

Deployment Frequency

The number of deployments per calendar month/week, to assess the average pace. According to Agile principles, a rule of thumb is to deploy smaller increments more frequently.
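Counting deployments per calendar month can be sketched like this (dates are hypothetical; real timestamps would come from the CI/CD tool):

```python
from collections import Counter
from datetime import date

def deployments_per_month(deploy_dates):
    """Count production deployments per (year, month)."""
    return Counter((d.year, d.month) for d in deploy_dates)

deploys = [date(2024, 1, 5), date(2024, 1, 19), date(2024, 2, 2)]
freq = deployments_per_month(deploys)
print(freq[(2024, 1)], freq[(2024, 2)])  # 2 1
```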

Commits Lead To Broken Builds, %

The ratio of commits that led to broken builds within a day to the total number of commits for that day. Allows you to check how well quality gates control the code before it is committed. The higher this figure, the worse.

Average Build Time

The average time, in minutes, of the build process per pipeline, including all automated tests verifying the build. The lower the better.

Build Results

Shows how fragile the code base of a project is; extremely helpful during the stabilization phase of a project/release, when bug fixing rather than active development is expected.

Avg Build Success for last 7d, %

Shows, on a daily basis, the percentage of successful builds over the last 7 days. A consistently high figure gives confidence that the build pipeline is stable and that developers perform pre-commit validation of code changes well.

Red Pipeline Time, %

Overall health of the CI/CD build chain, expressed as time wasted due to build failures. The lower the better.
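One way to derive this from build status transitions (a sketch, assuming a chronological event log of red/green transitions and that the pipeline starts the period green; timestamps are hypothetical):

```python
from datetime import datetime

def red_time_pct(events, period_start, period_end):
    """events: chronological (timestamp, status) transitions,
    status 'red' or 'green'. Returns the % of the period the
    pipeline spent red (assumes it starts green)."""
    red_seconds = 0.0
    red_since = None
    for ts, status in events:
        if status == "red" and red_since is None:
            red_since = ts
        elif status == "green" and red_since is not None:
            red_seconds += (ts - red_since).total_seconds()
            red_since = None
    if red_since is not None:  # still red at the end of the period
        red_seconds += (period_end - red_since).total_seconds()
    return 100.0 * red_seconds / (period_end - period_start).total_seconds()

t = lambda hour: datetime(2024, 1, 1, hour)
# Pipeline went red at 02:00 and recovered at 04:00, within a 10-hour window
print(red_time_pct([(t(2), "red"), (t(4), "green")], t(0), t(10)))  # 20.0
```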

Average Pipeline Lead/Cycle Time

Timing of the stages of a CI/CD pipeline; helps to understand the overall 'lead time in pipeline' and to spot bottlenecks. The lower the better.

Pipeline Mean Time to Recovery

Shows the average time a pipeline takes to recover from a failure.
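A sketch of the MTTR calculation over failure/recovery pairs (the timestamps are hypothetical):

```python
from datetime import datetime

def pipeline_mttr_minutes(failures):
    """failures: (failed_at, recovered_at) datetime pairs,
    one per pipeline failure."""
    if not failures:
        return 0.0
    minutes = [(rec - fail).total_seconds() / 60 for fail, rec in failures]
    return sum(minutes) / len(minutes)

# Two failures: one took 30 minutes to recover, the other 90 minutes
failures = [(datetime(2024, 1, 1, 9, 0), datetime(2024, 1, 1, 9, 30)),
            (datetime(2024, 1, 2, 14, 0), datetime(2024, 1, 2, 15, 30))]
print(pipeline_mttr_minutes(failures))  # 60.0
```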

Pipeline Success Rate

Shows the percentage of pipelines successfully integrated, by week.

 

Source code management

List of supported source code management systems:

  • GitHub

  • GitLab

  • Atlassian Bitbucket

  • Atlassian Stash

  • Azure DevOps Repos

Here is the guideline on how to set up the above integrations in PERF - read how to configure source code management tools in PERF.

Summary

Purpose

Version Control Summary (roll up report over multiple code repositories)

Bird's-eye view of key metrics from the source code repository, with the ability to drill down.

Most Frequently Modified Files

Helps find out the most fragile points in the code base / architecture by showing a modification frequency for files in a project code base.

Biggest Commits

Shows the biggest commits in a project code base (last 7 / 30 / 90 days) to identify the biggest pain points for code review; the smaller, the better.

Code base Change Trend

Shows an amount of code lines changed (added, modified, deleted) over last 6 months.

Commit Size per day (per-person view)

Shows the size of individual commits in a project code base per selected GIT repositories (branches) and selected team member(s) for the last 180 days.

Commits Number per day (per-person view)

Shows a number of individual commits in a project code base per selected GIT repositories (branches) and selected team member(s) for the last 180 days.

Merge Request Average Lifetime

Shows the average time for new changes to be incorporated into the master branch, including delays from code review. The lower the better: less overhead for the dev process/team to handle merges.



DevOps industry standard metrics

This is a set of measures and metrics described in the "Accelerate" book and summarized in this article

"Accelerate" DORA metric

Meaning

Relevant metrics in PERF

PERF data sources

Deployment frequency

By “deployment” we mean a software deployment to production or to an app store. The frequency of production deployments matters because it tells you how often you’re delivering something of value to end users and/or getting feedback from them.

Deployment Frequency 

Jenkins, GitLab CI

Deployment Frequency (Custom Metric v2)

JIRA, Rally

Releases by Month

JIRA, Rally









Lead Time for changes

The time it takes to go from code committed to code successfully running in production.

Average Pipeline Lead/Cycle Time

Jenkins, GitLab CI

Time between Done and Released (TBD)

JIRA, Rally

Lead Time for Changes (Custom Metrics v2)

JIRA, Rally

Resolution Time for Production Defects

JIRA, TFS, Rally

Lead and Cycle Time

JIRA, TFS, Rally

Scrum Cycle Time

JIRA, TFS, Rally









Time to restore service

The time to restore service, or mean time to recover (MTTR): the average time it takes to restore service after a failure.

Pipeline Mean Time to Recovery



Jenkins, GitLab CI

Mean Time to Recovery (Custom Metric v2)

Jira, Rally









Change Failure Rate

A measure of how often deployment failures occur in production that require immediate remedy (in particular, rollbacks).


Red Pipeline Time

Jenkins, GitLab CI

Pipeline Success Rate

Jenkins, GitLab CI

Change Failure Rate (Custom Metrics v2)

JIRA, Rally

Production Defect Density per Release

JIRA, Rally



Per-person metrics

Disclaimer! Please always remember that

1) A metric is just an indicator. As a manager, you always see a bigger context in which you should interpret the numbers.

2) Metrics depend heavily on how well you have set up your data sources. Remember: if you set up the rules of interpretation that are right for your project, you will get the right metrics.

PERF is focused mostly on the team-level view; however, a few per-person metrics/widgets are available:

 

Related pages