Being SAFe

Main goal 

This page describes how teams can leverage TelescopeAI functionality to support SAFe. Teams can get metrics at the team level, as well as aggregated metrics at the ART, Solution, or even Portfolio level.

Team hierarchy and structure

To reflect the SAFe structure, one needs to create a full or partial SAFe hierarchy (Portfolio → Solution → ART → Team) in TelescopeAI.

There are two approaches available.

Existing unit types

First, one could use existing unit types as SAFe entities:

Existing units in TelescopeAI    SAFe unit types
Account                          Portfolio
Program                          Solution
Project                          ART
Stream                           Team

 

Create SAFe unit types

Another approach is to create the necessary SAFe unit types. This approach is strongly recommended at the early stage of TelescopeAI adoption.

Visual hierarchy in Planner

The unit hierarchy can be viewed in the Planner module, where one can also add milestones and dependencies for each Solution, ART, or Team.

Data structure assumptions and recommendations

PI (Program Increment) for metrics calculation

To track metrics per PI, the Program Increment must be defined in TelescopeAI. There are several possible approaches to adding PIs for metrics calculation.

Jira fix version

If a team does not use fixVersion in Jira for its direct purpose, fix versions can be used to define PIs. Just be careful to align the start and end dates with the sprints of the PI.

CSV table

One can also define PIs via a CSV table with PIname, PIstartDate, and PIendDate columns. In that case, the table needs to be uploaded into every team where this information is required to calculate metrics.
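For example, a minimal PI definition CSV could look like this (PI names and dates are illustrative):

PIname,PIstartDate,PIendDate
PI-2022.3,2022-07-01,2022-09-30
PI-2022.4,2022-10-01,2022-12-31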

Define in the metrics script 

In some cases, the PI name is included in sprint names using a strict naming convention. In that case you can derive the dates of each PI from its first and last sprints.
Alternatively, you could simply list the PI names, start dates, and end dates directly in the script.
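As a minimal sketch of the latter approach, PIs can be listed in a CTE directly in the script (PI names and dates are illustrative; the dialect follows the Flow Distribution example in the next section):

-- Define the PI calendar inline (illustrative names and dates)
with pi_calendar as (
  select 'PI-2022.3' as pi_name, date '2022-07-01' as pi_start, date '2022-09-30' as pi_end
  union all
  select 'PI-2022.4', date '2022-10-01', date '2022-12-31'
)
select * from pi_calendar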

SAFe metrics  

Below is a list of metrics complementary to the standard metrics listed in the Metrics Catalog, i.e. Scrum metrics and Quality metrics.

Metric title

Description

Code / Link to metric

Example


Flow Distribution

Chart visualizing the amount of effort spent (items, hours, SP) on each issue type (Features, Bugs, Support, Tech debt, etc.).

SAFe description

What does it measure?
Flow distribution measures the amount of each type of work in the system over time. This could include the balance of new business Features (or Stories, Capabilities, or Epics) relative to Enabler work, as well as the work to resolve defects and mitigate risks.

How is this measured?
One simple comparison is just to count the number of each type of work item at any point in time or take the size of each work item into consideration by considering the number of story points. Agile Teams may choose to measure flow distribution per iteration, but commonly PI boundaries are used to make this calculation at the ART level and above, as shown in "Flow Distribution" metrics.

Why is this important?
To balance both current and future velocity, it is important to be able to track the amount of work of each type that is moving through the system. Too much focus on new business features will leave little capacity for architecture/infrastructure work that addresses various forms of technical debt and enables future value. Alternatively, too much investment in technical debt could leave insufficient capacity for delivering new and current value to the customers. Target capacity allocations for each work type can then be determined to help balance these concerns.

Source: https://www.scaledagileframework.com/metrics/

-- Count resolved tickets per month, broken down by issue type.
-- Each CASE expression pivots one issue type into its own column,
-- so the outer query can sum them per month.
with types as (
  select
    date_trunc('Month', resolved) as Month,
    case when type = 'Story'    then 1 else 0 end as Stories,
    case when type = 'Bug'      then 1 else 0 end as Bugs,
    case when type = 'Incident' then 1 else 0 end as Incidents
  from Ticket
  where resolved > '2022-06-30'
)
select
  Month,
  sum(Stories) as Stories,
  sum(Bugs) as Bugs,
  sum(Incidents) as Incidents
from types
group by Month
order by Month asc
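The query pivots issue types into columns via CASE expressions and aggregates per month. To report per PI instead, the date_trunc granularity could be changed (e.g. to 'Quarter'), or the query could be joined to a PI calendar such as the one sketched in the previous section.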

 

Flow Predictability

Chart reflects committed vs. completed items/SP per iteration, or the ratio (%) of completed items/SP to the committed number.

What does it measure?
Flow predictability measures how well teams, ARTs and Solution Trains are able to plan and meet their PI objectives.

How is it measured?
Flow Predictability is measured via the SAFe Program Predictability Measure (PPM). The PPM calculates the ratio of actual business value achieved to planned business value in a PI. For more information on calculating this important metric, see the Inspect and Adapt article.

Why is this important?
Low or erratic predictability makes delivery commitments unrealistic and often highlights underlying problems in technology, planning, or organization performance that need addressing. Reliable trains should operate in the 80 – 100 percent range; this allows the business and its stakeholders to plan effectively.

Source: https://www.scaledagileframework.com/metrics/
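As an illustrative calculation (numbers are hypothetical): if an ART planned PI objectives worth 50 business value points and achieved 40 of them, its PPM for that PI is 40 / 50 = 80%, at the lower bound of the recommended 80–100 percent range.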

 

Flow Velocity

Flow velocity measures the number of backlog items (stories, features, capabilities, epics) completed in a given timeframe (sprint, month, quarter/PI); this is also known as the system’s throughput.
For an ART, the usual granularity is throughput per quarter/PI.

Flow Velocity 
Throughput
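A minimal throughput sketch, reusing the Ticket table from the Flow Distribution example above (completed items per quarter):

-- Completed items per quarter (system throughput)
select
  date_trunc('Quarter', resolved) as Quarter,
  count(*) as Throughput
from Ticket
where resolved is not null
group by Quarter
order by Quarter asc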

 

Flow Time 

Chart shows the amount of time each issue type spends from creation to completion (lead time), or the duration of the active development phase (cycle time).
One can also measure the duration of separate workflow steps to discover possible bottlenecks.

What does it measure?
Flow time measures the total time elapsed for all the steps in a workflow and is, therefore, a measure of the efficiency of the entire system. Flow time is typically measured from ideation to production, but it can also be useful to measure flow time for specific parts of a workflow, such as code commit to deploy, in order to identify opportunities for improvement.

How is this measured?
Flow time is typically measured by considering the average length of time it takes to complete a particular type of work item (stories, features, capabilities, epics). A histogram is a useful visualization of flow time, since it helps to identify outliers that may need attention, alongside supporting the goal of reducing the overall average flow time.

Why is this important?
Flow time ensures that organizations and teams focus on what is important – delivering value to the business and customer in the shortest possible time. The shorter the flow time, the less time our customers spend waiting for new features and the lower the cost of delay incurred by the organization.

Source: https://www.scaledagileframework.com/metrics/

Lead and Cycle Time
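A hypothetical sketch of average lead time per issue type; it assumes the Ticket table also exposes a created timestamp, which is not shown in the examples above:

-- 'created' is an assumed column, not a confirmed field of the Ticket table
select
  type,
  avg(resolved - created) as Avg_Lead_Time
from Ticket
where resolved is not null
group by type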

 

Flow Efficiency

The metric shows the ratio of time spent on actual work (excluding delays, waiting time, and other waste) to the total time a task spends in the pipeline.

Setting up flow efficiency measurement requires a preliminary, unified agreement on how to report waiting time, e.g. using a dedicated workflow state that clearly distinguishes whether an item is under development or blocked (i.e. in a waiting state).
One possible approach to the calculation is to divide all the logged time by the lead time.

What does it measure?
Flow efficiency measures how much of the overall flow time is spent in value-added work activities vs. waiting between steps.

How is it measured?
To correctly measure flow efficiency, the teams, trains, and value streams must clearly understand what the flow is in their case and what steps it passes through. This understanding is achieved with the help of Value Stream Mapping, a process of identifying the workflow steps and delays in a system. (For more on Value Stream Mapping, see the Continuous Delivery Pipeline article and Ref [2]. In addition, the SAFe DevOps course provides comprehensive guidance on performing Value Stream Mapping.) Once the steps have been mapped, flow efficiency is calculated by dividing the total active time by the flow time and is expressed as a percentage.

Why is this important?
In a typical system that has not yet been optimized, flow efficiency can be extremely low, often in the single digits. A low flow efficiency highlights a lot of waste in the system, along with bottlenecks and delays that should be addressed. Conversely, the higher the flow efficiency, the better the system is able to deliver value quickly.
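Source: https://www.scaledagileframework.com/metrics/

Following the "logged time divided by lead time" approach above, a hypothetical sketch ('time_spent' and 'created' are assumed columns, not confirmed fields of the Ticket table):

-- Flow efficiency as logged time / lead time; time_spent is assumed
-- to be stored in seconds so the units match the lead-time interval
select
  type,
  sum(time_spent) / nullif(sum(extract(epoch from (resolved - created))), 0) as Flow_Efficiency
from Ticket
where resolved is not null
group by type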

 

 

Flow Load

Flow load indicates how many items are currently in the system.
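A minimal sketch, reusing the Ticket table from the examples above: items currently in the system can be approximated as those not yet resolved.

-- Count items still in flight
select count(*) as Flow_Load
from Ticket
where resolved is null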

 

Work in Progress (Relative)

This metric tracks WIP disparity: it shows what percentage of tasks is in progress in each period of time.


Team level: shows what percentage of user stories (US) within an iteration is in progress. RAG limits are 30% and 70%.
ART level: shows what percentage of Features is in progress in each PI. RAG limit is 50%.
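A hypothetical team-level sketch; 'status' and 'sprint' are assumed columns, not confirmed fields of the Ticket table:

-- Percentage of user stories in progress per sprint
select
  sprint,
  100.0 * count(*) filter (where status = 'In Progress') / count(*) as WIP_Percent
from Ticket
where type = 'Story'
group by sprint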

 

Bottlenecks (status)

The metric helps discover a growing time trend for a particular ticket status.

Team level: time spent in each status by user stories, per iteration, over the last 6/12 iterations.
ART level: time spent in each status by Features, per iteration, over the last 6/12 iterations.
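A hypothetical sketch; it assumes a status-transition table (here called TicketStatusChange, with entered/exited timestamps per status) that is not a confirmed part of the TelescopeAI data model:

-- Average hours spent in each status, to spot growing trends
select
  status,
  avg(extract(epoch from (exited - entered))) / 3600 as Avg_Hours_In_Status
from TicketStatusChange
group by status
order by Avg_Hours_In_Status desc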

 

Bottlenecks (team)

Discover team members or teams that usually take many more tickets than others.
Team level: % of user stories per team member, per iteration, over the last 6/12 iterations.
ART level: % of user stories per team, per iteration, over 3 PIs.

 

Handoffs

(Requires a separate status for each handoff type)

This metric identifies possible issues in transitioning tickets between different assignees or process stages.

Team level metric: average waiting time for each handoff status per iteration.

ART level metric: average waiting time for each handoff status per PI.

Out-of-the-box (OotB) per-week or per-month metrics can be used to track handoffs via status changes.
A process-specific custom metric would be required to track handoffs via assignee changes.
