Lead Time for Changes

Context

Lead Time for Changes is the interval between a code change (the first commit) and its release to end users.

“Shorter product delivery lead times are better since they enable faster feedback on what we are building and allow us to course-correct more rapidly. Short lead times are also important when there is a defect or outage, and we need to deliver a fix rapidly and with high confidence.”

Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations

Description

  1. This metric shows the time between a ticket's transition to a particular Jira/Azure Boards/Rally status and the release date of the fix version the ticket was assigned to.

  2. The calculation logic selects all releases completed within the interval and computes the lead time for changes for all tickets assigned to those releases.

  3. This metric is based on the Task Tracking System's (Jira, Azure Boards, or Rally) data. The main prerequisites are:

    1. fix versions exist and have correct release dates;

    2. tickets being released are assigned to a fix version;

    3. tickets are transitioned to the appropriate statuses in the TTS in a timely manner.

  4. You can configure the status from which Lead Time for Changes starts counting:

    1. since there is usually no status for 'first commit' in a TTS, the default is one of the usual statuses marking the start of development ('In Progress', 'In Development', 'In Dev');

    2. the default can be changed to a Code Review status to focus on and optimize the later stages, such as Code Review, Testing, and Deployment.

  5. The median is used when calculating the value for an interval (illustrated in the sketch after this list).

  6. If a ticket is assigned to multiple releases, the last release's date is used when calculating its lead time for changes.

  7. If a ticket entered the starting status several times (e.g., it was reopened), the first transition to that status is used.

  8. The current interval starts at today minus the interval duration (1 month/quarter). This makes for a smooth transition from the previous interval to the current one (especially when a month/quarter has just started). Example: today is 5 Jan; if the release interval is a month, the Jan bar will contain releases from 5 Dec up till now.
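
To illustrate items 5 and 8, here is a minimal self-contained PostgreSQL sketch with made-up numbers, showing how the rolling current interval is formed and how the median is taken:

--Item 8: if today is 5 Jan and the interval type is month, the current
--interval covers 5 Dec .. 5 Jan, i.e. now() - interval '1 month' .. now().
select now() - interval '1 month' as current_interval_start,
       now() as current_interval_finish;

--Item 5: the interval's value is the median of per-ticket lead times.
--For made-up lead times of 2, 5 and 30 days the median is 5 days,
--so one long-running ticket does not skew the whole interval.
select percentile_cont(0.5) within group (order by lead_time_days) as median_days
from (values (2.0), (5.0), (30.0)) as t(lead_time_days);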

Configuration

We have 4 variables in the code of this metric (an example of adjusting them follows the list):

  • Number of intervals to show. Basically, how many bars do you want to see on the chart? Default: 9.

  • Interval type. If you don't release very often, you probably don't want to see gaps for some months: if there was no release during a given month, you will see no lead time for changes for that month. In this case, select quarterly intervals and you will get the median of all lead times for changes during each quarter. Default: month.

  • Dev Start Status. This is the status in the Task Tracking System from which you want to count your Lead Time for Changes. By default, it's set to the usual statuses for development start ('In Progress', 'In Development', 'In Dev'). The default can be changed to a Code Review status to focus on and optimize the later stages, such as Code Review, Testing, and Deployment.

  • Tickets. This is the scope of our measurement. For example, it doesn't make sense to count Lead Time for Changes for Epics, so we can exclude them here.
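
For example, to show 12 quarterly bars, start counting from a Code Review status, and exclude Sub-tasks in addition to Epics, the four configuration CTEs at the top of the code could look like this (a sketch: the 'Code Review' and 'Sub-task' names are assumptions that depend on your TTS workflow):

with number_of_intervals_to_show as
(
    select 12 as show --12 bars on the chart
),
interval_type as
(
    select 2 as i_type --1=month, 2=quarters
),
dev_start_status as --count lead time from code review onwards
(
    select 'Code Review'
),
tickets as --exclude Sub-tasks in addition to Epics
(
    select *
    from ticket
    where type not in ('Epic', 'Sub-task')
)
--...the remaining CTEs and the final select stay exactly as in the metric's code below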

Code

Here is the code of the metric:

with number_of_intervals_to_show as
(
    select 9 as show --how many time intervals to show on the chart
),
interval_type as
(
    select 1 as i_type --1=month, 2=quarters
),
dev_start_status as --status(es) lead time for changes is starting from
(
    select 'In progress'
    union
    select 'In Development'
    union
    select 'In Dev'
),
tickets as --scope of measurement
(
    select *
    from ticket
    where type not in ('Epic')
),
interval as --preparing intervals - months or quarters
(
    select case when (select i_type from interval_type)=1
                then to_char(i_finish, 'YYYY MM')
                else to_char(i_finish, 'YYYY "Q"Q')
           end as timeline,
           i_start,
           i_finish
    from
    (
        select now() as i_finish,
               now()-interval '3 month' as i_start,
               2 as type
        union
        select (date_trunc('quarter', now()) - interval '3' month*generate_series(0,(select show from number_of_intervals_to_show)-2))-interval '1 second' as i_finish,
               (date_trunc('quarter', now()) - interval '3' month*generate_series(0,(select show from number_of_intervals_to_show)-2))-interval '3 month' as i_start,
               2 as type
        union
        select now() as i_finish,
               now()-interval '1 month' as i_start,
               1 as type
        union
        select (date_trunc('month', now()) - interval '1' month*generate_series(0,(select show from number_of_intervals_to_show)-2))-interval '1 second' as i_finish,
               (date_trunc('month', now()) - interval '1' month*generate_series(0,(select show from number_of_intervals_to_show)-2))-interval '1 month' as i_start,
               1 as type
    ) as temp
    where type=(select i_type from interval_type)
),
releases as --selecting releases completed within intervals
(
    select *
    from release
    where finish_date>=(select min(i_start) from interval)
),
tickets_releases as --collecting tickets released, counting their lead time for changes
(
    select b.key, b.type, b.created, b.priority, b.environment, b.story_points, b.summary,
           max(a.finish_date) as release_date, --ticket assigned to multiple releases - picking the last one
           min(c.start) as in_progress_date --ticket reopening - picking the first time it was in the status
    from releases a
    join tickets b on a.id=any(b.fix_releases)
    join tickethistory c on b.id=c.workitem_id
                        and c.field=0
                        and c.status ILIKE ANY (select * from dev_start_status)
                        and c.start<a.finish_date
    group by b.key, b.type, b.created, b.priority, b.environment, b.story_points, b.summary
)
select timeline::varchar as timeline,
       (percentile_cont(0.5) WITHIN GROUP (ORDER BY DATE_PART('day', release_date-in_progress_date)+DATE_PART('hour', release_date-in_progress_date)/24.0)) as lead_time_for_changes
from interval
left join tickets_releases on release_date>=i_start and release_date<=i_finish
group by timeline
order by timeline asc

RAG Thresholds

The thresholds for this metric are:

  • RED: >30 days 

  • AMBER: 7-30 days

  • GREEN: <7 days

 

These thresholds correspond to Google's 2022 State of DevOps Report: https://services.google.com/fh/files/misc/2022_state_of_devops_report.pdf
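
If you want the RAG status computed next to each interval's value, a small wrapper over the metric's result can classify it. A minimal sketch, assuming the metric's result is available as a dataset named DORA_Lead_Time_for_Changes (the same name used in the Aggregation section below):

select timeline,
       lead_time_for_changes,
       case
           when lead_time_for_changes is null then null --no releases in the interval
           when lead_time_for_changes < 7 then 'GREEN'
           when lead_time_for_changes <= 30 then 'AMBER'
           else 'RED'
       end as rag_status
from DORA_Lead_Time_for_Changes
order by timeline asc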

Drill-down

Drill-down query to show Key, Summary, Type & Lead Time for Changes for tickets released within the clicked interval:

select key as "Key",
       summary as "Summary",
       type as "Type",
       round((DATE_PART('day', release_date-in_progress_date)+DATE_PART('hour', release_date-in_progress_date)/24.0)::numeric, 2) as "Lead Time for Changes, days"
from interval
join tickets_releases a on a.release_date>=i_start and a.release_date<=i_finish
where timeline=clicked_x_value
order by "Lead Time for Changes, days" desc

Aggregation

If you have several teams and you want to see the Lead Time for Changes across all of them, here is an example of how to do that.

First, create an aggregation dataset that combines this metric's results from all teams (referenced below as DORA_Lead_Time_for_Changes).

Aggregation metric's code:

select timeline,
       avg(lead_time_for_changes) as "Average Lead Time for Changes (days)"
from DORA_Lead_Time_for_Changes
group by timeline
order by timeline asc

As a result, you will see the average lead time for changes for all your teams. 

In the drill-down you will see the lead time for changes values for each team for the clicked interval.
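
The drill-down code is not reproduced here; a minimal sketch, assuming the aggregation dataset exposes a team column alongside timeline and lead_time_for_changes, could look like this:

--A sketch, not the original drill-down code: per-team values for the clicked interval.
--The team column is an assumption about the aggregation dataset's shape.
select team as "Team",
       round(lead_time_for_changes::numeric, 2) as "Lead Time for Changes, days"
from DORA_Lead_Time_for_Changes
where timeline=clicked_x_value
order by "Lead Time for Changes, days" desc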

How to improve Lead Time for Changes

If you're not in the GREEN zone for this metric, please consider the following ways to decrease your lead time for changes:

  • Optimize Coding Time

    • Reduce rework in the coding cycles

    • Work with small batches

    • Avoid multitasking and context switching

    • Write clear requirements

  • Optimize Code Review Time

    • Keep it small. If reviewers have to wade through a large change, the PR will take far too long to review.

    • Code faster. Break large features into small pieces and keep the coding time short for each pull request. This surfaces problems earlier.

    • Review faster. Don't let code changes go stale; prompt reviews prevent potential merge conflicts.

    • Create a Code Review Checklist

    • Automate what can be automated in the code review process. Automation is the first gatekeeper of code reviews.

    • Be ready to be reviewed. Review your own changes before submitting. Give the PR a clear, descriptive title and write a thorough description.

  • Optimize Testing (Waiting For Deploy) Time

    • Automate your CI/CD process.

    • Increase your test automation coverage.

    • Eliminate manual approval processes.

    • Reduce pipeline queue time. Improve your CI/CD tooling performance and capability.

  • Optimize Deployment Time

    • Identify friction in the CI/CD process.

    • Reduce CI/CD pipeline duration.

    • Cache build dependencies.

    • Optimize container image size.

    • Check the network latency.

    • Optimize resources.

    • Review pipeline architecture.