Bug Growth

Purpose

In a well-managed development process, the shorter the lifetime of a defect (i.e. the time span between its creation and its fix), the better, because the fix is cheaper.

Bug Growth shows the number of Fixed defects vs. the number of Logged defects for a time frame (month, sprint, and so on). The view shows whether the trend of submitted defects exceeds the trend of fixed defects, which helps to understand whether the team can manage its quality debt in a timely manner.

How metric helps

Bug Growth helps to reveal a situation where the number of fixed defects trends downward while the number of logged defects steadily grows. This is the point at which to conduct a root cause analysis and find out why the bug fixing process is inefficient; for example, there might be a lack of resources allocated to bug fixing. It is also crucial not to let the number of open bugs reach a critical level, because that may eventually lead to system instability and low customer satisfaction.

Metric:

  • shows whether a team can manage the quality debt in a timely manner

  • shows whether the quality debt is growing or shrinking over time

  • shows the quality of the product

  • shows whether the test strategy is effective

  • shows how many issues are taken from the backlog to be fixed within a sprint, i.e. in order to burn down the quality debt

How metric works

Chart overview

The chart shows bug growth in items on Axis Y. Possible views for Axis X:

  • Day, Week, Month

  • Sprint, Version (Release)

1 Releases must have a start and an end date to be reflected in the chart.

2 Items logged or fixed between a release/sprint start and end date are taken into account; the item's assignment to a release or sprint is ignored (see the sketch below).
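As an illustration of these two rules, here is a minimal Python sketch; the Defect and Period records and the bug_growth helper are hypothetical, not the product's actual implementation:

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Defect:
    key: str
    created: date                  # when the defect was logged
    done_date: Optional[date]      # None while the defect is still open

@dataclass
class Period:
    # a point on Axis X: day, week, month, sprint or release
    name: str
    start: date
    finish: date

def bug_growth(periods, defects):
    """Count defects logged and fixed in each period by date only; sprint/release assignment is ignored."""
    counts = {}
    for p in periods:
        logged = sum(1 for d in defects if p.start <= d.created <= p.finish)
        fixed = sum(1 for d in defects
                    if d.done_date is not None and p.start <= d.done_date <= p.finish)
        counts[p.name] = {"Logged defects": logged, "Fixed defects": fixed}
    return counts

Each defect is counted as logged in the period that contains its creation date and as fixed in the period that contains its resolution date.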

On hovering over a column, a hint appears with the following info:

  • Sprint name - the sprint name as it appears in the tracking system;

  • Sprint time frame - the sprint start and end dates;

  • Logged defects - the number of defects, both internal and external, submitted in the considered period;

  • Fixed defects - the number of defects, both internal and external, resolved in the considered period.

Chart legend shows the following:

  • the last calculated number of logged defects;

  • the last calculated number of fixed defects.

Clicking on a column opens a pop-up with the following information taken from the defect tracking system (the hint and pop-up payloads are sketched after this list):

  • Defect ID

  • Type

  • Priority

  • Summary
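A rough sketch of the data behind these two UI elements, assuming simple Python records; the field names are illustrative and do not reflect the product's internal model:

from dataclasses import dataclass
from datetime import date

@dataclass
class ColumnHint:
    """Shown on hover over a column."""
    sprint_name: str           # as named in the tracking system
    sprint_start: date
    sprint_end: date
    logged_defects: int        # internal + external defects submitted in the period
    fixed_defects: int         # internal + external defects resolved in the period

@dataclass
class PopupRow:
    """One row of the pop-up opened by clicking a column."""
    defect_id: str
    type: str
    priority: str
    summary: str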

Top problems metric identifies 

  1. [in case logged issues > fixed issues] Not enough time allocated to issue resolution

  2. [in case logged issues > fixed issues] Quality debt is growing

  3. [in case logged issues > fixed issues] Team imbalance (skills ratio)

  4. [in case logged issues > fixed issues] Testing starts too late

  5. [in case logged issues > fixed issues] As issues are not estimated in advance, it is impossible to commit to resolving them on time, i.e. by the end of the sprint

  6. [in case logged issues > fixed issues] The bug reporting process is poor, i.e. the QA team skips thorough investigation and reports only the consequences instead of the root cause in the ticket

  7. [in case logged issues > fixed issues] Issues cannot be troubleshot, or the investigation flow is too complex for issues to be fixed (e.g. monitoring and logging are not in place)

  8. [in case logged issues > fixed issues] Everyone can contribute to the issue backlog

  9. [in case logged issues = fixed issues] If there is quality debt, i.e. issues in the backlog, then the quality debt is not growing, but it is not shrinking either

Calculation 

Logged bugs

Logged bugs = Nlog_end - Nlog_start
where

Nlog_end - the number of logged bugs at the iteration end
Nlog_start - the number of logged bugs at the iteration start

RAG thresholds: n/a.

Fixed bugs

Fixed bugs = Nfix_end - Nfix_start
where

Nfix_end - the number of fixed bugs at the iteration end
Nfix_start - the number of fixed bugs at the iteration start

RAG thresholds: n/a.
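For example, with purely illustrative numbers: if Nlog_start = 150 and Nlog_end = 180, then Logged bugs = 180 - 150 = 30; if Nfix_start = 140 and Nfix_end = 160, then Fixed bugs = 160 - 140 = 20, i.e. the quality debt grew by 10 defects during the iteration.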

 

Calculation notes

1 A fixed defect is a defect that received a "Done" status within the considered period. A "Done" status is any status from the "Done" bucket in Project Configuration > Data Sources > Task Tracking System > Workflows (see the sketch after these notes).

2 If the metric is used on a task tracking data source with more than one project, items are distributed between the sprints of all projects by their creation/fix date. As a result, an item from one project may be reflected within a sprint of another project because it matches that sprint by date. This happens only if the item is not assigned to a sprint explicitly (the sprint value is empty in the task tracking system).
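As a rough illustration of note 1, here is a small Python sketch that assumes a per-defect status history and a hypothetical DONE_BUCKET set; the real bucket is whatever is configured in Project Configuration > Data Sources > Task Tracking System > Workflows:

DONE_BUCKET = {"Done", "Closed", "Fixed"}   # illustrative statuses mapped to the "Done" bucket

def fixed_in_period(status_history, period_start, period_end):
    """status_history is a list of (timestamp, new_status) transitions for one defect.
    The defect counts as fixed if it entered a "Done"-bucket status within the period."""
    return any(
        status in DONE_BUCKET and period_start <= ts <= period_end
        for ts, status in status_history
    )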

PerfQL

with sprints as (
-- the last six started sprints; future sprints are excluded
select
name,
start_date as start,
coalesce(complete_date, finish_date) as finish
from sprint
where state != 'FUTURE'
order by start desc
limit 6
),

bugs as (
-- all defect tickets with their creation and resolution dates
select
key,
type,
summary,
priority,
status,
created,
done_date,
url
from ticket
where is_defect(ticket)
)

-- count defects logged and fixed within each sprint's date range
select
s.name as "Sprint",
count(distinct logged.key) as "Logged Defects",
count(distinct fixed.key) as "Fixed Defects"
from sprints s
left join bugs logged
on logged.created between s.start and s.finish
left join bugs fixed
on fixed.done_date between s.start and s.finish
group by "Sprint", s.finish
order by s.finish asc

Data Source

Data for the metric can be collected from a task tracking system (Jira, TFS, Rally, etc.).