Metrics Catalog
Recommended Metrics
For a quicker start, if you are overwhelmed by the number of things to measure, below is a recommended smaller set of measures to start with, based on the nature of your project and/or the aspect of measurement.
SCRUM | KANBAN / SUPPORT | FIXED SCOPE / PRICE |
---|---|---|
Ensure predictable, incremental/iterative delivery of the most valuable scope on a sprint-by-sprint cadence, with a quality level that allows shipping to production every iteration | Monitor the input and output flows of your service, and drill down into your pipeline retrospectively to ensure there are no bottlenecks, SLAs are met, and “first-time-right” quality is acceptable | Keep the forecasted delivery date “On Track” to meet deadlines, with strong change management to control scope creep and a high level of quality to ensure a safe rollout |
QUALITY | ENGINEERING / TECHNOLOGY | DEVOPS INDUSTRY METRICS |
---|---|---|
View trends on defects to sign off releases with greater confidence, analyze overall testing efficiency, and ensure a proper balance between manual and automated testing | Make sure your dev teams are fast and reliable (volume and frequency of code commits), helping each other (peer reviews), and keeping quality gates over code and CI/CD pipelines green | Boost the performance of your teams via faster lead time for changes to production, increased deployment frequency to production, faster time to restore service to production, and a reduced change failure rate to production. |
Metrics by Data Source Types
Release and task tracking
PERF supports these release and task tracking tools:
Atlassian JIRA - learn how to configure Atlassian Jira in PERF
Microsoft TFS and Azure DevOps Boards - learn how to configure these tools in PERF
Rally Software - learn how to configure Rally in PERF
Summary | Purpose |
---|---|
Area: Requirements | |
A deviation of the sprint scope from the initially planned one (addition and reduction of the iteration scope). The more mature the sprint planning process, the closer this metric is to 0. | |
The number of items added to / removed from the scope of a given iteration. The more mature the sprint planning process, the closer this metric is to 0. | |
Shows how much of the initially planned sprint scope managed to stay in the sprint by the end of the sprint. The target is 100%, which means commitments turn into reliable delivery. | |
Shows the amount of "ready for development" scope at each sprint start. | |
The amount of scope in the "Ready for Development" state divided by the average team velocity; this shows how many sprints or weeks of productive load the team has ahead. Recommended level: 2+ sprints/weeks ahead. | |
The number of planning collisions due to wrong dependencies between work items in a backlog. | |
Area: Progress | |
Burn Up with a forecasted completion date for a Project, Release or Sprint | Shows the current project performance and overall tendency, and predicts future performance. The forecast for scope and completion date (for a whole project, a specific release or a specific sprint) is based on the team's delivery speed over past iterations. |
Shows, week by week, the total amount of the “Remaining Estimate” in the selected scope of work. | |
Area: Productivity | |
Shows the amount of work planned and completed per iteration. This helps you determine your team's velocity and estimate the work your team can realistically achieve in future iterations. Works by Sprints and Releases. | |
Shows completion ratio by sprints or releases in work items, story points, or man hours of remaining or original estimate. | |
To find out how much time, on average, a team spends on completing a task. | |
Flow Efficiency, % | The ratio between Cycle Time (time in active work) and total Lead Time. The closer to 100% the better: it means a task spends most of its time being actively processed rather than waiting (see the sketch after this table). |
To find out how quickly, on average, a team takes a task into work. | |
The percentage of work (e.g. a scope of support tickets) which met the SLA (if defined) vs. did not. | |
To know the number of completed tasks vs. created tasks per iteration (week, month). | |
To see how much time, on average, a work item spends at each stage of its workflow. | |
A cumulative trend of created vs. resolved work items over the last 7 / 30 / 90 days. | |
Visualization of the overall team effort, a classic view on the execution of Kanban projects. Available for the whole project, a selected release, and a selected sprint, in several variations. | |
To know how much work is pending implementation in the backlog, and for how long. | |
Shows how issues that were in a "Not Done" status at the end of an iteration changed their status during the iteration. Helps to understand what the initial status of those issues was. Works for a Sprint and a Release. | |
To show how often a team gets blocked in its work and how quickly stoppages are resolved. Work stoppages - and their efficient resolution - are one of the key factors enabling high team velocity on a project. | |
To show the average lifetime of stoppages on a project, counted when they get Closed. The closer to 0 the better: it means a quick turnaround on resolving the things which block the team from moving forward. | |
To see the overall velocity trend, by months and by weeks. Applicable to Kanban as well. | |
To compare the productivity of a team over a long-term versus a short-term interval, to spot a performance improvement or degradation. | |
Area: Quality | |
Shows the number of fixed defects vs. the number of logged defects, by days/weeks/months or by iterations. | |
Quality Debt (in estimated man-hours) | To quickly (approximately) answer the question "How much time does my team need to fix all / critical defects?" |
Open Bugs Over Time (by priorities and in total) | To see a trend of open defects over time (within last 180 days) on a project. |
% of defects submitted by the internal team vs. all logged defects (which usually include defects from Production too) - to assess the overall efficiency of the QA team and QA processes. | |
To see the % of defects that have been re-opened at least once (i.e. not accepted by Testers and thus returned to Developers for rework). | |
% of rejected defects, i.e. defects which cannot be reproduced, are duplicates, or were rejected for any other reason. | |
To check if high priority defects are fixed quickly enough. | |
The ratio of items, month over month, accepted the first time, i.e. with no returns to earlier stages of the pipeline. | |
The percentage of defects not detected by the testing team. | |
The ratio of defects to software size, estimated in story points or in items. | |
Area: Estimation | |
Overall Estimation Accuracy of a team, in %, for a selected time frame. | To identify team estimation mistakes in a timely manner via the amber and red segments - and retrospect on them with the team in order to improve estimation accuracy going forward (a worked sketch follows this table). NOTE: this only works if effort estimation in hours is used on the project and the team reports their spent time. |
The % of work items closed within a selected time frame, broken down by estimation accuracy category. | To see the contribution of each category to the overall estimation accuracy % on a project. |
Un-estimated work done, % | To see how much work is accomplished without being estimated. |
Effort Variance in %, month over month | To show the deviation, in %, between the initial estimates of tasks/stories/etc. on a project and the actual effort logged to those items. |
Area: Team Workload | |
Workload Summary for a Sprint or Release | To monitor the quality of sprint planning and assignments to team members. Most valuable for the active sprint or during sprint planning. NOTE: this only works if effort estimation in hours is used on the project. |
Per-person Capacity vs. Actual time reported (all in hours) | To see a per-person utilization over 2-week time intervals. |
Reported Hours by weeks | Week by week total hours reported by a team. |
Area: Best practices compliance - Tracking & Reporting Hygiene | |
Gives the number of issues which are in the active sprint(s) but still not marked as "ready for development". | |
Shows the number of items without estimates but with effort logged. Counted for the whole project duration; calculation is based on issues created within the last 90 days. This health indicator is expected to be zero. | |
Number of issues in the Backlog (outside of Sprints) but already completed | Shows the number of issues which are already completed but sit in the backlog without being assigned to any sprint. Only valid items are counted, i.e. ones resolved as "Duplicate", "Rejected", "Cannot reproduce" or similar are not taken into consideration. This indicator can point out contribution lost from sprint velocity. It is expected to be zero. |
Shows the number of items which are already closed but still have some remaining time in the tracking system. If the remaining time on completed tasks is not zero, burn-down charts will show an incorrect status even though the planned effort within the iteration is completed. Counted for the whole project duration; calculation is based on issues created within the last 90 days. This health indicator is expected to be zero. | |
Shows the percentage of work items closed within the last 90 days that have any effort logged in the tracking system. | |
Gives the percentage of estimated items on the sprint start day relative to all issues in the active sprint on that day. | |
This indicator shows the percentage of work items (e.g. tasks, sub-tasks and whatever is applicable in your project) which are estimated in hours. Checked over the last 3 months. | |
Completed work items estimated in Story Points (last 3 months) | This indicator shows the percentage of work items (e.g. User Stories, Epics, Improvements and whatever is applicable in your project) which are estimated in Story Points. Checked over the last 3 months. |
Gives the percentage of issues marked as "ready for development" on the first day of the active sprint(s). | |
Shows the number of issues which are included in an active sprint but whose story points are not estimated. This health indicator is expected to be zero. | |
Stories which are in Open/Ready state but with all sub-tasks already Closed | Shows the number of stories (and other issues) which are still in Open/Ready/In Progress status but have all their sub-tasks closed. This health indicator is expected to be zero. Calculation is based on issues created within the last 90 days. |
Stories which are Closed but with some sub-tasks still incomplete | Shows the number of stories (and other issues) which are already closed but have all or some of their sub-tasks still incomplete. This health indicator is expected to be zero. Calculation is based on issues created within the last 90 days. |
Shows the number of issues for which some effort was logged after those issues had been closed. This health indicator is expected to be zero. Calculation is based on issues created within the last 90 days. | |
Shows the number of bugs which are not attached to any "Affected Version", so it is not clear in which version/release those bugs were found. This health indicator is expected to be zero. | |
Shows the number of bugs without a 'Fix Version', i.e. it is not clear in which version/release they are fixed (or planned to be fixed). This health indicator is expected to be zero. Calculation is based on issues completed within the last 90 days and excludes invalid bugs. | |
Shows the number of issues whose story point estimates were changed during the last month. Changing story point estimates means estimate creep and is a bad practice. This metric does not cover setting initial estimates; it is only about correcting estimates. This health indicator is expected to be zero. | |
Shows the number of issues whose Original Estimates (in hours) were changed during the last month. Changing Original Estimates means estimate creep and is a bad practice; the good practice is to adjust the Remaining Estimate to reflect the effort required to complete a task. This health indicator is expected to be zero. | |
Shows the number of in-progress issues with no changes during the last 2 weeks. Updates are expected over such a long period, e.g. people log their time, post updates, or ask questions via comments. | |
This indicator shows the amount of logged work (effort) submitted with a future date (most likely by mistake), i.e. where the date of a work log entry is later than 'Today'. Such entries require correction to avoid impacting other metrics. | |
Area: Best practices compliance - Functional Quality | |
Shows the number of defects of Blocker and Critical priority which are not completed. This health indicator is expected to be zero. | |
Shows the number of defects which are not completed at the moment. The suggested acceptable value is fewer than 10 open defects. | |
Shows the ratio of top-priority vs. all defects which are open at the moment. Top priority usually means defects of "Blocker" and "Critical" priority; the exact rule for which defects are considered top priority should be specified in the project configuration (Data Sources - JIRA - Quality Management). | |
Shows the number of defects (Blockers and Criticals) which have not been resolved for more than a month. This health indicator is expected to be zero. Calculation is based on issues created within the last 90 days. | |
Shows the number of defects (all priorities) which have not been resolved for more than a month. The suggested acceptable value is <20. Calculation is based on issues created within the last 90 days. | |
Shows the percentage of time spent on fixing bugs out of the total effort spent on a project. The suggested acceptable value is less than 20%. Calculation takes all work logs submitted within the last 90 days. | |
Shows the average lifetime (in days) of top-priority defects. Calculation is based on issues in the "Open" and "In Progress" states, closed within the last 90 days. | |
Shows the average lifetime (in days) of defects on a project, all priorities. Calculation is based on defects closed within the last 90 days. | |
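For illustration, here is a minimal sketch of how the Flow Efficiency metric could be computed from status-transition timestamps. The record layout, the set of "active" statuses, and the sample data are assumptions to adapt to your own workflow:

```python
from datetime import datetime

# "Active" statuses are an assumption - adjust to your own workflow.
ACTIVE_STATUSES = {"In Progress", "In Review", "In Testing"}

# Hypothetical status-transition log for one work item: (status, entered_at).
transitions = [
    ("Open",        datetime(2024, 5, 1, 10, 0)),
    ("In Progress", datetime(2024, 5, 2, 10, 0)),
    ("Blocked",     datetime(2024, 5, 3, 10, 0)),
    ("In Progress", datetime(2024, 5, 4, 10, 0)),
    ("Done",        datetime(2024, 5, 5, 10, 0)),
]

def flow_efficiency(transitions):
    """Share of the total lead time the item spent in 'active' statuses."""
    active = total = 0.0
    for (status, start), (_, end) in zip(transitions, transitions[1:]):
        hours = (end - start).total_seconds() / 3600
        total += hours
        if status in ACTIVE_STATUSES:
            active += hours
    return active / total if total else 0.0

print(f"Flow efficiency: {flow_efficiency(transitions):.0%}")  # -> 50%
```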
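Likewise, a minimal sketch of an estimation-accuracy roll-up, assuming each closed item carries an original estimate and logged effort in hours; the field names and the green/amber/red thresholds are illustrative only:

```python
# Hypothetical closed items with original estimates and logged effort (hours);
# field names and the amber/red thresholds below are illustrative only.
items = [
    {"key": "PRJ-1", "estimate_h": 8,  "spent_h": 9},
    {"key": "PRJ-2", "estimate_h": 5,  "spent_h": 12},
    {"key": "PRJ-3", "estimate_h": 13, "spent_h": 11},
]

def deviation(item):
    """Relative deviation of the actual effort from the original estimate."""
    return abs(item["spent_h"] - item["estimate_h"]) / item["estimate_h"]

def bucket(dev, amber=0.25, red=0.5):
    """Classify a deviation into a traffic-light segment."""
    return "green" if dev <= amber else ("amber" if dev <= red else "red")

for item in items:
    dev = deviation(item)
    print(f"{item['key']}: {dev:.0%} off the estimate -> {bucket(dev)}")
```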
Didn't find a metric you need?
If you need something else which is not mentioned above, check out the Custom Metrics feature and its ability to Configure advanced custom metrics over PERF data.
Code quality
Supported code quality tools:
SonarQube and SonarCloud - learn how to configure Sonar tools in PERF
CAST Application Intelligence Platform (AIP) - learn how to configure CAST
Summary | Purpose |
---|---|
Code Quality Summary (roll-up report over multiple Sonar projects) | Bird's-eye view of the key code quality metrics from Sonar, with the ability to drill down (see the API sketch after this table). |
| A view on the unit testing perspective of the project |
| A view on the code quality metrics and ratings of a project |
| A measure of how reliable the code is, based on the security vulnerabilities detected in it |
| A view on the quality of the source code documentation |
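For context on where these numbers come from, the sketch below pulls a few standard measures from the SonarQube Web API (the `/api/measures/component` endpoint); the server URL, project key, and token are placeholders:

```python
import requests

# Placeholders - point these at your own SonarQube server and project.
SONAR_URL = "https://sonar.example.com"
PROJECT_KEY = "my-project"
TOKEN = "squ_..."  # a user token with 'Browse' permission on the project

response = requests.get(
    f"{SONAR_URL}/api/measures/component",
    params={
        "component": PROJECT_KEY,
        "metricKeys": "coverage,bugs,vulnerabilities,code_smells,duplicated_lines_density",
    },
    auth=(TOKEN, ""),  # the token is passed as the username, password left empty
    timeout=30,
)
response.raise_for_status()

# The endpoint returns one entry per requested metric key.
for measure in response.json()["component"]["measures"]:
    print(f"{measure['metric']}: {measure['value']}")
```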
Build pipeline and CI/CD
Supported CI/CD tools:
Jenkins - learn how to set up Jenkins in PERF
GitLab CI - learn how to configure GitLab CI in PERF
Summary | Purpose |
---|---|
CI/CD Summary (roll-up report over multiple Jobs/Pipelines) | Bird's-eye view of the key CI/CD metrics, with the ability to drill down. |
The number of deployments per calendar month/week - to assess the average pace. According to Agile principles, a rule of thumb is to deploy smaller increments more frequently (see the sketch after this table). | |
The ratio of commits that led to broken builds within a day, compared to the total number of commits for that day. Allows you to check how well the quality gates control the code before it is committed. The greater this figure, the worse. | |
The average time, in minutes, of the build process together with all automated tests verifying the build, per pipeline. The lower the better. | |
Shows how fragile the code base of a project is; extremely helpful during the stabilization phase of a project/release, when no active development is expected, only bug fixing. | |
Shows, on a daily basis, the percentage of successful builds over the last 7 days - to give confidence that the build pipeline is stable because no failures happen, meaning pre-commit validation of code changes is performed well enough by developers. | |
Overall health of the CI/CD build chain, expressed as time wasted due to build failures. The lower the better. | |
Timing of the stages in a CI/CD pipeline; helps to understand the overall 'Lead time in pipeline' and to spot bottlenecks. The lower the better. | |
Shows the average time a pipeline takes to recover from a failure. | |
Shows the percentage of pipelines successfully integrated, by week. | |
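As an illustration of the deployment-frequency and broken-build calculations above, here is a minimal sketch over a hypothetical list of build records (in practice these would come from the Jenkins or GitLab CI APIs):

```python
from collections import Counter
from datetime import date

# Hypothetical build/deployment records; field names are illustrative only.
builds = [
    {"day": date(2024, 5, 1), "deployed": True,  "status": "success"},
    {"day": date(2024, 5, 1), "deployed": False, "status": "failed"},
    {"day": date(2024, 5, 2), "deployed": True,  "status": "success"},
    {"day": date(2024, 5, 8), "deployed": True,  "status": "success"},
]

# Deployment frequency: deployments per ISO calendar week.
per_week = Counter(b["day"].isocalendar()[:2] for b in builds if b["deployed"])
for (year, week), count in sorted(per_week.items()):
    print(f"{year}-W{week:02d}: {count} deployment(s)")

# Broken-build ratio: failed builds vs. all builds.
failed = sum(b["status"] == "failed" for b in builds)
print(f"Broken-build ratio: {failed / len(builds):.0%}")
```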
Source code management
List of supported source code management systems:
GitHub
GitLab
Atlassian Bitbucket
Atlassian Stash
Azure DevOps Repos
Here is the guideline on how to set up the above integrations in PERF - read how to configure source code management tools in PERF
Summary | Purpose |
---|---|
Version Control Summary (roll-up report over multiple code repositories) | Bird's-eye view of the key metrics from the source code repository, with the ability to drill down. |
Helps find the most fragile points in the code base / architecture by showing the modification frequency of files in a project code base (see the sketch after this table). | |
Shows the biggest commits in a project code base (last 7 / 30 / 90 days) to identify the biggest pain points for code review; the smaller the better. | |
Shows the number of code lines changed (added, modified, deleted) over the last 6 months. | |
Shows the size of individual commits in a project code base, per selected Git repositories (branches) and selected team member(s), for the last 180 days. | |
Shows the number of individual commits in a project code base, per selected Git repositories (branches) and selected team member(s), for the last 180 days. | |
Shows the average speed at which new changes are incorporated into the master code, given delays on code reviews. The lower the better - it means less overhead for the dev process/team to handle merges. | |
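As a rough illustration of the modification-frequency ("hotspot") idea, the sketch below counts per-file changes straight from Git history; it is an approximation of the metric above, not how PERF itself computes it. Run it inside a repository working copy:

```python
import subprocess
from collections import Counter

# List every file touched by commits in the last 6 months; an empty
# --pretty format leaves only the file names in the output.
log = subprocess.run(
    ["git", "log", "--since=6 months ago", "--name-only", "--pretty=format:"],
    capture_output=True, text=True, check=True,
).stdout

# Count how often each file appears, i.e. how often it was modified.
changes = Counter(line for line in log.splitlines() if line.strip())
for path, count in changes.most_common(10):
    print(f"{count:4d}  {path}")
```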
DevOps industry standard metrics
This is a set of measures and metrics described in the "Accelerate" book and summarized in this article. A worked sketch of two of these metrics follows the table below.
"Accelerate" DORA metric | Meaning | Relevant metrics in PERF | PERF data sources |
---|---|---|---|
Deployment frequency | By “deployment” we mean a software deployment to production or to an app store. The frequency of production deployments matters because it tells you how often you are delivering something of value to end users and/or getting feedback from them. | | Jenkins, GitLab CI |
 | | Deployment Frequency (Custom Metric v2) | JIRA, Rally |
 | | | JIRA, Rally |
Lead Time for changes | The time it takes to go from code committed to code successfully running in production. | | Jenkins, GitLab CI |
 | | Time between Done and Released (TBD) | JIRA, Rally |
 | | Lead Time for Changes (Custom Metrics v2) | JIRA, Rally |
 | | | JIRA, TFS, Rally |
 | | | JIRA, TFS, Rally |
 | | | JIRA, TFS, Rally |
Time to restore service | The time to restore service, or mean time to recover (MTTR), is the average time it takes to restore service. | Pipeline Mean Time to Recovery | Jenkins, GitLab CI |
 | | Mean Time to Recovery (Custom Metric v2) | JIRA, Rally |
Change Failure Rate | A measure of how often deployment failures occur in production that require an immediate remedy (particularly, rollbacks). | | Jenkins, GitLab CI |
 | | | Jenkins, GitLab CI |
 | | Change Failure Rate (Custom Metrics v2) | JIRA, Rally |
 | | | JIRA, Rally |
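As a worked illustration of two of the four DORA metrics, here is a minimal sketch computing lead time for changes and change failure rate from hypothetical deployment records; the record fields are assumptions, not PERF's actual data model:

```python
from datetime import datetime
from statistics import mean

# Hypothetical deployment records: when the change was committed, when it
# reached production, and whether the deployment failed.
deployments = [
    {"commit_at": datetime(2024, 5, 1, 9),
     "deployed_at": datetime(2024, 5, 1, 17), "failed": False},
    {"commit_at": datetime(2024, 5, 2, 11),
     "deployed_at": datetime(2024, 5, 3, 10), "failed": True},
    {"commit_at": datetime(2024, 5, 3, 14),
     "deployed_at": datetime(2024, 5, 3, 18), "failed": False},
]

# Lead time for changes: commit-to-production time, averaged.
lead_times_h = [
    (d["deployed_at"] - d["commit_at"]).total_seconds() / 3600
    for d in deployments
]
print(f"Lead time for changes (avg): {mean(lead_times_h):.1f} h")

# Change failure rate: share of deployments that required an immediate remedy.
failure_rate = sum(d["failed"] for d in deployments) / len(deployments)
print(f"Change failure rate: {failure_rate:.0%}")
```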
Per-person metrics
Disclaimer! Please always remember that:
1) A metric is just an indicator. As a manager, you always see the bigger context in which you should interpret the numbers.
2) Metrics depend heavily on how well you have set up your data sources. Remember: if you set up interpretation rules that are right for your project, you will get the right metrics.
PERF is focused more on a team-level view; however, there are a few per-person metrics/widgets available:
Per-person Workload
Per-person Capacity and Spent time
Per-person Estimation Accuracy
Per-repository Biggest commits
Per-person Commit number per day
Per-person Commit size per day
You may also use Custom Metrics and, for example, slice by the Assignee field.