This page describes how teams can leverage TelescopeAI functionality to support SAFe. Teams can get metrics at the team level as well as aggregated metrics at the ART, Solution, or even Portfolio level.
Teams hierarchy and structure
In order to reflect the SAFe structure, one needs to create a full or partial SAFe hierarchy (Portfolio → Solution → ART → Team) in TelescopeAI.
There are two approaches available.
Existing unit types
First, one could use existing unit types as SAFe entities, mapped as follows:
Existing unit in TelescopeAI → SAFe unit type
Account → Portfolio
Program → Solution
Project → ART
Stream → Team
Create SAFe unit types
Another approach is to create the necessary SAFe unit types. This approach is strongly recommended at the early stage of TelescopeAI adoption.
Visual hierarchy in Planner
The units hierarchy can be seen in the Planner module, and it is possible to add milestones or dependencies between units for each Solution, ART, or Team.
Data structure assumptions and recommendations
PI (Planning Increment) for metrics calculation
To track metric data per PI, the Planning Increment must exist in TelescopeAI. Several approaches are possible for adding PIs for metrics calculation.
Jira fix version
If a team does not use fixVersion in Jira for its direct purpose, fix versions can be used to define PIs. Just be careful to align the start and end dates with the sprints of the PI.
CSV table
One can also define PIs via a CSV table with PIname, PIstartDate, and PIendDate columns. In that case, the table must be uploaded into every team where this information is required to calculate metrics.
Define in the metrics script
In some cases the PI name is included in sprint names using a strict naming convention. In that case you can derive the dates of each PI from its first and last sprints. Alternatively, you can simply list the PI name, start date, and end date directly in the script.
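For the last option, PIs can be hard-coded as a CTE at the top of the metric script. A minimal sketch; the PI names and dates below are placeholders, not real data:

```sql
-- PIs listed directly in the script; names and dates are illustrative only
with pis (pi_name, pi_start, pi_end) as (
    values
        ('PI-1', date '2023-01-04', date '2023-03-28'),
        ('PI-2', date '2023-03-29', date '2023-06-20')
)
select pi_name,
       count(t.id) as resolved_items
from pis
left join Ticket t on t.resolved between pi_start and pi_end
group by pi_name, pi_start
order by pi_start
```

Any PI-level metric can then join its data to this CTE instead of relying on fix versions or uploaded CSV tables.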
SAFe metrics
Here is a list of metrics complementary to the standard metrics listed in the Metrics Catalog, i.e. Scrum metrics and Quality metrics.
Metric title
Description
Code / Link to metric
Example
Flow Distribution
A chart visualizing the amount of effort spent (items, hours, SP) on each issue type (Features, Bugs, Support, Tech debt, etc.).
SAFe description
What does it measure? Flow distribution measures the amount of each type of work in the system over time. This could include the balance of new business Features (or Stories, Capabilities, or Epics) relative to Enabler work, as well as the work to resolve defects and mitigate risks.
How is this measured? One simple comparison is just to count the number of each type of work item at any point in time or take the size of each work item into consideration by considering the number of story points. Agile Teams may choose to measure flow distribution per iteration, but commonly PI boundaries are used to make this calculation at the ART level and above, as shown in "Flow Distribution" metrics.
Why is this important? To balance both current and future velocity, it is important to be able to track the amount of work of each type that is moving through the system. Too much focus on new business features will leave little capacity for architecture/infrastructure work that addresses various forms of technical debt and enables future value. Alternatively, too much investment in technical debt could leave insufficient capacity for delivering new and current value to the customers. Target capacity allocations for each work type can then be determined to help balance these concerns.
with types as (
    select date_trunc('Month', resolved) as Month,
           type,
           case when type = 'Story' then 1 else 0 end as Stories,
           case when type = 'Bug' then 1 else 0 end as Bugs,
           case when type = 'Incident' then 1 else 0 end as Incidents
    from Ticket
    where resolved > '2022-06-30'
)
select Month,
       sum(Stories) as Stories,
       sum(Bugs) as Bugs,
       sum(Incidents) as Incidents
from Types
group by Month
order by Month asc
Flow Predictability
The chart reflects committed and completed items/SP per iteration, or the percentage of completed items/SP out of the committed number.
What does it measure? Flow predictability measures how well teams, ARTs and Solution Trains are able to plan and meet their PI objectives.
How is it measured? Flow Predictability is measured via the SAFe Program Predictability Measure (PPM). The PPM calculates the ratio of actual business value achieved to planned business value committed in a PI. For more information on calculating this important metric, see the Inspect and Adapt article.
Why is this important? Low or erratic predictability makes delivery commitments unrealistic and often highlights underlying problems in technology, planning, or organization performance that need addressing. Reliable trains should operate in the 80 – 100 percent range; this allows the business and its stakeholders to plan effectively.
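No script is provided for this metric; as a simplified proxy for the PPM (counting items instead of business value), one could compare committed vs. completed tickets per sprint. A sketch, assuming a ticket is "committed" to a sprint when the sprint id appears in its sprints array, and that closed sprints carry the state value 'CLOSED':

```sql
-- Completed vs. committed items per closed sprint (item count as a BV proxy)
with sprint_tickets as (
    select s.name as sprint_name,
           s.finish_date,
           case when t.resolved is not null
                 and t.resolved <= s.finish_date then 1 else 0 end as done
    from sprint s
    join Ticket t on s.id = any(t.sprints)
    where s.State = 'CLOSED'  -- assumed state value for closed sprints
)
select sprint_name,
       count(*) as committed,
       sum(done) as completed,
       round(100.0 * sum(done) / count(*), 1) as predictability_percent
from sprint_tickets
group by sprint_name, finish_date
order by finish_date
```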
Flow Velocity
Flow velocity measures the number of backlog items (stories, features, capabilities, epics) completed in a given timeframe (sprint, month, quarter/PI); this is also known as the system's throughput. For an ART, the usual granularity is throughput per quarter/PI.
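A minimal throughput sketch in the same style as the Flow Distribution script, counting completed items per quarter:

```sql
-- Items completed per quarter, split by type (throughput)
select date_trunc('quarter', resolved) as quarter,
       type,
       count(*) as completed_items
from Ticket
where resolved is not null
group by 1, 2
order by 1, 2
```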
Flow Time
The chart shows the amount of time each issue type spends from creation till completion (lead time) or the duration of the active development phase (cycle time). One can also measure the duration of separate workflow steps to discover possible bottlenecks.
Flow time measures the total time elapsed for all the steps in a workflow and is, therefore, a measure of the efficiency of the entire system. Flow Time is typically measured from ideation to production, but it can also be useful to measure Flow Time for specific parts of a workflow, such as code commit to deploy, in order to identify opportunities for improvement.
How is this measured? Flow time is typically measured by considering the average length of time it takes to complete a particular type of work item (stories, features, capabilities, epics). A histogram is a useful visualization of flow time (diagram above), since it helps to identify outliers that may need attention alongside supporting the goal of reducing the overall average flow time.
Why is this important? Flow time ensures that organizations and teams focus on what is important – delivering value to the business and customer in the shortest possible time. The shorter the flow time, the less time our customers spend waiting for new features and the lower the cost of delay incurred by the organization.
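A starting-point sketch for lead time, measured from ticket creation to resolution per issue type (measuring individual workflow steps would additionally require the tickethistory table):

```sql
-- Average lead time in days per issue type
select type,
       round(avg(extract(epoch from (resolved - created)) / 86400.0), 1) as avg_lead_time_days,
       count(*) as items
from Ticket
where resolved is not null
group by type
order by avg_lead_time_days desc
```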
Flow Efficiency
The metric shows the ratio of time spent on the work (excluding delays, waiting time, and other waste) to the total task time in the pipeline.
Setting up the flow efficiency measurement requires a preliminary, unified agreement on how to report "waiting time", e.g. using a dedicated workflow state that clearly distinguishes whether an item is under development or blocked (i.e. in a waiting state). One simple approach is to divide the total logged time by the lead time.
Flow efficiency measures how much of the overall flow time is spent in value-added work activities vs. waiting between steps.
How is it measured? To correctly measure flow efficiency, the teams, trains, and value streams must clearly understand what the flow is in their case and what steps it passes through. This understanding is achieved with the help of Value Stream Mapping – a process of identifying workflow steps and delays in a system, as shown in Figure 6. (For more on Value Stream Mapping, see the Continuous Delivery Pipeline article and Ref [2]. In addition, the SAFe DevOps course provides comprehensive guidance on performing Value Stream Mapping.) Once the steps have been mapped, flow efficiency is calculated by dividing the total active time by the flow time and is expressed as a percentage, as shown in the diagram above.
Why is this important? In a typical system, that has not yet been optimized, flow efficiency can be extremely low, often in single digits. A low flow efficiency highlights a lot of waste in the system along with bottlenecks and delays that should be addressed. Conversely, the higher the flow efficiency the better the system is able to deliver value quickly.
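The simple logged-time-over-lead-time approach can be sketched as follows; time_spent (logged work in seconds) is a hypothetical column name and must be adjusted to the actual field exposed in your data model:

```sql
-- Flow efficiency proxy: logged time / lead time per ticket
-- NOTE: time_spent is an assumed column (logged seconds); adjust to your schema
select key,
       round(100.0 * time_spent
             / nullif(extract(epoch from (resolved - created)), 0), 1) as flow_efficiency_percent
from Ticket
where resolved is not null
  and time_spent is not null
order by flow_efficiency_percent asc
```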
Flow Load
Flow load indicates how many items are currently in the system.
Flow load indicates how many items are currently in the system. Keeping a healthy, limited number of active items (limiting work in process) is critical to enabling a fast flow of items through the system (SAFe Principle #6).
How is it measured? A Cumulative Flow Diagram (CFD) is one common tool that is used to effectively visualize flow load over time (Figure 8). The CFD shows the quantity of work in a given state, the rate at which items are accepted into the work queue (arrival curve), and the rate at which they are completed (departure curve). At a given point in time, the flow load is the vertical distance between the curves at that point.
Why is this important? Increasing flow load is a leading indicator of excess work in the process. The likely result will be an increase in future flow times as queues start to build up in the system. For this reason, measuring and reducing flow load is of critical importance. Furthermore, it is easy to see how more frequent delivery lowers flow load while improving flow time and flow velocity.
The metric is needed to track WIP disparity: it shows what percentage of tasks are in progress at each point in time.
Team level: shows what percentage of user stories within the iteration are in progress; RAG limits are 30% and 70%. ART level: shows what percentage of Features are in progress each PI; the RAG limit is 50%.
with
target_sprints as (
    select *, id as latest_sprint_id
    from sprint
    where State <> 'FUTURE'
    order by finish_date asc
    offset 20 limit 1
),
TargetDates as (
    -- one row per calendar day of the target sprint
    select generate_series(ts.start_date, ts.finish_date, '1 day')::timestamp as eachday
    from target_sprints ts
),
TargetTickets as (
    select t.id, key, resolved, created
    from target_sprints ts
    left join Ticket t on ts.id = any(t.sprints)
),
StatusChangesHistory as (
    -- field = 0 marks a status change in the ticket history
    select tt.*, start as statusAssigned, status
    from TargetTickets tt
    join tickethistory th on th.workitem_id = tt.id
    where field = 0
      and start != created
      and (status is not null and status != '')
),
NextStatuses as (
    select *,
           lead(status) over (partition by id order by statusAssigned) as nextStatus,
           lead(statusAssigned) over (partition by id order by statusAssigned) as nextStatusAssigned
    from StatusChangesHistory
),
targetTicketsPerSprint as (
    select NS.*, eachday,
           case when status in ('In Development', 'Code Review', 'Ready for Development') then 1.0 else 0.0 end as InProgress
    from TargetDates td
    join NextStatuses NS on td.eachday between statusAssigned and nextStatusAssigned
),
results as (
    select count(id) as total,
           sum(InProgress) as WIP,  -- number of in-progress tickets on that day
           eachday
    from targetTicketsPerSprint
    group by eachday
)
select 100 * WIP::numeric / total::numeric as WIP_percent, eachday
from results
with
target_releases as (
    -- first 5 releases by finish date
    select *, id as latest_release
    from Release
    -- where Closed = false
    order by finish_date asc
    limit 5
),
TargetDates as (
    select generate_series(tr.start_date, tr.finish_date, '1 day')::timestamp as eachday
    from target_releases tr
),
TargetTickets as (
    select t.id, key, resolved, created
    from target_releases tr
    left join Ticket t on tr.id = any(t.fix_releases)
    where type = 'Story'
),
StatusChangesHistory as (
    select tt.*, start as statusAssigned, status
    from TargetTickets tt
    join tickethistory th on th.workitem_id = tt.id
    where field = 0
      and start != created
      and (status is not null and status != '')
),
NextStatuses as (
    select *,
           lead(status) over (partition by id order by statusAssigned) as nextStatus,
           lead(statusAssigned) over (partition by id order by statusAssigned) as nextStatusAssigned
    from StatusChangesHistory
),
targetTicketsPerSprint as (
    select tr.name as release_name, NS.*, eachday,
           case when status in ('In Development', 'Code Review', 'Ready for Development') then 1.0 else 0.0 end as InProgress
    from target_releases tr
    left join TargetDates td on td.eachday >= tr.start_date and td.eachday <= tr.finish_date
    join NextStatuses NS on td.eachday between statusAssigned and nextStatusAssigned
),
results as (
    select count(id) as total,
           sum(InProgress) as WIP,  -- in-progress ticket-days per release
           release_name
    from targetTicketsPerSprint
    group by release_name
)
select 100 * WIP::numeric / total::numeric as WIP_percent, release_name
from results
Bottlenecks (status)
The metric allows discovering a growing time trend for a particular ticket status.
Team level: time in each status for user stories each iteration, out of the last 6/12 iterations. ART level: time in each status for Features each iteration, out of the last 6/12 iterations.
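A sketch of time-in-status using the same tickethistory conventions as the WIP scripts above (field = 0 marks a status change); averaging per status surfaces the slowest steps:

```sql
-- Average days spent in each status, derived from status-change history
with status_intervals as (
    select th.workitem_id,
           th.status,
           th.start as entered,
           lead(th.start) over (partition by th.workitem_id order by th.start) as left_at
    from tickethistory th
    where th.field = 0
      and th.status is not null and th.status <> ''
)
select status,
       round(avg(extract(epoch from (left_at - entered)) / 86400.0), 2) as avg_days_in_status,
       count(*) as transitions
from status_intervals
where left_at is not null
group by status
order by avg_days_in_status desc
```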
Discover team members or teams that usually take on many more tickets than others. Team level: % of user stories per team member each iteration, out of the last 6/12 iterations. ART level: % of user stories per team each iteration across 3 PIs.
WITH target_releases AS (
    SELECT *, id AS latest_release
    FROM release
    ORDER BY finish_date ASC
    LIMIT 5
),
targetdates AS (
    SELECT generate_series(tr.start_date, tr.finish_date, '1 day')::timestamp AS eachday
    FROM target_releases AS tr
),
targettickets AS (
    SELECT t.id, key, assigned_to, resolved, created
    FROM target_releases AS tr
    LEFT OUTER JOIN ticket AS t ON tr.id = ANY (t.fix_releases)
    WHERE type = 'Story'
),
statuschangeshistory AS (
    SELECT tt.*, start AS statusassigned, status
    FROM targettickets AS tt
    INNER JOIN tickethistory AS th ON th.workitem_id = tt.id
    WHERE field = 0
      AND start <> created
      AND (status IS NOT NULL AND status <> '')
),
nextstatuses AS (
    SELECT *,
           lead(status) OVER (PARTITION BY id ORDER BY statusassigned) AS nextstatus,
           lead(statusassigned) OVER (PARTITION BY id ORDER BY statusassigned) AS nextstatusassigned
    FROM statuschangeshistory
),
targetticketspersprint AS (
    SELECT tr.name AS release_name, ns.*, eachday,
           CASE WHEN status IN ('In Development', 'Code Review', 'Ready for Development') THEN 1.0 ELSE 0.0 END AS inprogress
    FROM target_releases AS tr
    LEFT OUTER JOIN targetdates AS td ON td.eachday >= tr.start_date AND td.eachday <= tr.finish_date
    INNER JOIN nextstatuses AS ns ON td.eachday BETWEEN statusassigned AND nextstatusassigned
),
results AS (
    SELECT count(id) AS total,
           sum(inprogress) AS wip,  -- in-progress ticket-days per assignee
           assigned_to
    FROM targetticketspersprint
    GROUP BY assigned_to
)
SELECT assigned_to,
       round(100 * wip::numeric / total::numeric, 1) AS wip_percent
FROM results
Handoffs
(Requires a separate status for each handoff type)
This metric identifies possible issues in transitioning tickets between different assignees or process stages.
Team level metric: Average waiting time for each Handoff status per iteration.
ART level metric: Average waiting time for each Handoff status per PI.
Time in Status per week or per month can be used out of the box (OotB) to track handoffs via status changes. A process-specific custom metric would be required to track handoffs via assignee changes.
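Assuming dedicated handoff statuses exist (the status names below are hypothetical examples), average waiting time per handoff status per month can be sketched as:

```sql
-- Average waiting hours in handoff statuses per month
with status_intervals as (
    select th.status,
           th.start as entered,
           lead(th.start) over (partition by th.workitem_id order by th.start) as left_at
    from tickethistory th
    where th.field = 0
)
select date_trunc('month', entered) as month,
       status,
       round(avg(extract(epoch from (left_at - entered)) / 3600.0), 1) as avg_hours_waiting
from status_intervals
where left_at is not null
  and status in ('Ready for Review', 'Ready for QA')  -- hypothetical handoff statuses
group by 1, 2
order by 1, 2
```

At the ART level, grouping by PI instead of month (via one of the PI definition approaches above) gives the per-PI view.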