[ADOP] Deployment Frequency to High-Level environments

[IN PROGRESS]

Purpose

The 'Deployment Frequency to High-Level environments' metric is intended for the Azure DevOps Pipelines (ADOP) data source and measures how frequently code changes are deployed to high-level environments, counting both successful and unsuccessful pipeline stage runs. As per Agile principles, it is good practice to deploy more frequently, in smaller chunks.

Primary Dimension: Productivity
Secondary Dimensions: -

How metric helps

The metric measures how often code is deployed to critical environments (e.g., staging, production). It reflects the speed, efficiency, and reliability of your software delivery process, giving teams insights into their ability to deliver value and adapt to change.

Why it is valuable:

  • Tracks Delivery Speed - indicates how fast updates and features are delivered.

  • Identifies Blockages - highlights inefficiencies slowing deployments.

  • Encourages Best Practices - promotes CI/CD and automation adoption.

  • Enhances Stability - frequent, smaller releases often mean reliable, tested code.

  • Improves Agility - enables quicker response to customer needs and market demands.

Condition: The metric is too high

Potential risks:

  • Rushing deployments may risk insufficient testing or quality issues.

  • Teams may experience burnout if deployments feel too frequent.

What to do:

  • Reevaluate testing coverage to ensure quality isn’t sacrificed.

  • Automate and enhance post-deployment monitoring.

  • Confirm that frequent releases are aligned with product priorities.

Condition: The metric is too low

Potential risks:

  • Indicates bottlenecks in your pipeline, slowing down value delivery.

  • Teams may lose agility and struggle to respond to feedback or bugs quickly.

What to do:

  • Audit and streamline your CI/CD pipeline for inefficiencies.

  • Automate testing, builds, or deployment processes.

  • Address manual processes and dependencies that delay releases.

This metric acts as a compass for improving your software delivery process while balancing speed, quality, and stability.

When tracking and setting targets for this metric, categorize deployment frequency into larger buckets like the ones defined by DORA, rather than focusing on the exact number of deployments. Choose a goal like “multiple times a week” as opposed to exactly three times a week: when a measure becomes a target, it ceases to be a good measure. The aim is not to hit specific numbers but to create a deployment system that is fast, reliable, and flexible.
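The bucketing idea above can be sketched in a few lines. This is an illustrative helper, not part of the product: the function name and the bucket boundaries are assumptions loosely modeled on the DORA performance levels.

```python
# Hypothetical helper: classify a deployment count over a period into
# DORA-style frequency buckets. The boundaries below are illustrative
# assumptions, not values prescribed by this metric.
def dora_bucket(deployments: int, period_days: int) -> str:
    per_day = deployments / period_days
    if per_day >= 1:
        return "Elite (on-demand, multiple deploys per day)"
    if per_day >= 1 / 7:
        return "High (between once per day and once per week)"
    if per_day >= 1 / 30:
        return "Medium (between once per week and once per month)"
    return "Low (less than once per month)"

# Example: 12 deployments over a 30-day month falls into the "High" bucket
print(dora_bucket(12, 30))
```

A target expressed as a bucket ("stay High or better") survives normal week-to-week variation, whereas a target of exactly N deployments invites gaming.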

How metric works

Chart overview

image-20250321-131655.png

The vertical bar chart shows the number of deployments to high-level environments (Y axis) on a timeline (X axis) selected via the metric parameters (start date, end date) and split into intervals (by week/month/quarter).

The metric is calculated based on the following default values selected in parameters:

  • StartDate - 01.09.2024

  • EndDate - 28.02.2025 (exclusive)

  • SplitBy - Month

Date format: DD.MM.YYYY
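With the default parameters, the timeline is split into monthly buckets from StartDate up to the exclusive EndDate. A minimal sketch of that bucketing, assuming SplitBy = Month (the helper itself is illustrative, not the product's implementation):

```python
from datetime import date

def month_segments(start: date, end: date) -> list:
    """Return the first day of each month from start up to end (exclusive)."""
    segments = []
    current = date(start.year, start.month, 1)
    while current < end:
        segments.append(current)
        # advance to the first day of the next month
        if current.month == 12:
            current = date(current.year + 1, 1, 1)
        else:
            current = date(current.year, current.month + 1, 1)
    return segments

# Default parameters: 01.09.2024 .. 28.02.2025 → six monthly buckets,
# Sep 2024 through Feb 2025
print(month_segments(date(2024, 9, 1), date(2025, 2, 28)))
```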

The parameters can be changed via the Configure Parameters option

image-20250325-080513.png

Hovering over a bar shows a hint containing:

  • Timeline (StartDate + EndDate)

  • Deployment frequency

  • Number of deployments

Metric thresholds:

  • Red: -

  • Amber: -

  • Green: -

Clicking a bar opens the drill-down, containing:

  • Run ID

  • Stage ID

  • Number of times

  • Date

Calculation 

Deployment Frequency to High-Level environments is the number of successful and unsuccessful pipeline stage runs into high-level environments. The stages are selected in the Azure DevOps Pipelines data source configuration, in the 'Deployment to high-level environment' field (see also PERF Data Source - Azure DevOps Pipelines).
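The calculation above amounts to grouping tagged stage runs by time bucket and counting them. A minimal sketch, assuming hypothetical in-memory stage-run records (the field names are illustrative, not the actual data-source schema):

```python
from collections import Counter
from datetime import date

# Illustrative stage-run records: every high-level-environment stage run
# counts, whether it succeeded or failed.
stage_runs = [
    {"run_id": 101, "stage_id": "deploy_prod", "start": date(2024, 9, 3)},
    {"run_id": 102, "stage_id": "deploy_stage", "start": date(2024, 9, 17)},
    {"run_id": 103, "stage_id": "deploy_prod", "start": date(2024, 10, 5)},
]

# Truncate each start date to the first of its month, then count per bucket
per_month = Counter(date(r["start"].year, r["start"].month, 1) for r in stage_runs)
print(per_month[date(2024, 9, 1)])   # 2 deployments in September 2024
print(per_month[date(2024, 10, 1)])  # 1 deployment in October 2024
```

The PerfQL below does the same thing against the pipeline_stage table, with the bucket size driven by the SplitBy parameter.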

PerfQL

WITH deduped_pipeline_tag AS (
    SELECT DISTINCT entity_id, entity_type, name
    FROM pipeline_tag
    WHERE entity_type = 'stage'
      AND name = 'DEPLOYMENT_STAGES_HIGH_LEVEL_ENV'
),
timeline AS (
    SELECT gs AS time_segment
    FROM generate_series(
        DATE_TRUNC(lower($SplitBy$), $StartDate$::timestamp),
        DATE_TRUNC(lower($SplitBy$), $EndDate$::timestamp),
        CASE
            WHEN lower($SplitBy$) = 'week' THEN '1 week'::interval
            WHEN lower($SplitBy$) = 'month' THEN '1 month'::interval
            WHEN lower($SplitBy$) = 'quarter' THEN '3 months'::interval
        END
    ) gs
),
aggregated AS (
    SELECT
        DATE_TRUNC(lower($SplitBy$), ps.start) AS time_segment,
        COUNT(*) AS deploy_count
    FROM pipeline_stage ps
    INNER JOIN deduped_pipeline_tag dt ON dt.entity_id = ps.stage_id
    WHERE ps.start >= $StartDate$
      AND ps.finish <= $EndDate$
    GROUP BY DATE_TRUNC(lower($SplitBy$), ps.start)
)
SELECT
    CASE
        WHEN lower($SplitBy$) = 'week' THEN TO_CHAR(t.time_segment, 'DD Mon') || ' - ' || TO_CHAR(t.time_segment + interval '6 days', 'DD Mon, YYYY')
        WHEN lower($SplitBy$) = 'month' THEN TO_CHAR(t.time_segment, 'YYYY Mon')
        WHEN lower($SplitBy$) = 'quarter' THEN TO_CHAR(t.time_segment, 'YYYY') || ' Q' || TO_CHAR(t.time_segment, 'Q')
    END AS time_segment_label,
    COALESCE(a.deploy_count, 0) AS "Number of deployments"
FROM timeline t
LEFT JOIN aggregated a ON t.time_segment = a.time_segment
ORDER BY t.time_segment;

Drill-down query:

WITH deduped_pipeline_tag AS (
    SELECT DISTINCT entity_id, entity_type, name
    FROM pipeline_tag
    WHERE entity_type = 'stage'
      AND name = 'DEPLOYMENT_STAGES_HIGH_LEVEL_ENV'
),
relevant_stages AS (
    SELECT
        ps.run_id,
        ps.stage_id,
        ps.start,
        ps.finish,
        DATE_TRUNC(lower($SplitBy$), ps.start) AS time_segment
    FROM pipeline_stage ps
    INNER JOIN deduped_pipeline_tag dt ON dt.entity_id = ps.stage_id
    WHERE ps.start >= $StartDate$
      AND ps.finish <= $EndDate$
),
drill_down AS (
    SELECT
        rs.*,
        CASE
            WHEN lower($SplitBy$) = 'week' THEN TO_CHAR(rs.time_segment, 'DD Mon') || ' - ' || TO_CHAR(rs.time_segment + interval '6 days', 'DD Mon, YYYY')
            WHEN lower($SplitBy$) = 'month' THEN TO_CHAR(rs.time_segment, 'YYYY Mon')
            WHEN lower($SplitBy$) = 'quarter' THEN TO_CHAR(rs.time_segment, 'YYYY') || ' Q' || TO_CHAR(rs.time_segment, 'Q')
        END AS time_segment_label
    FROM relevant_stages rs
)
SELECT
    run_id AS "Run ID",
    stage_id AS "Stage ID",
    TO_CHAR(start, 'DD.MM.YYYY') AS "Date"
FROM drill_down
WHERE time_segment_label = clicked_x_value
ORDER BY start DESC;

Data Source

Data for the metric can be collected from the Azure DevOps Pipelines data source (Import API type).

Related content