Being SAFe
Main goal
Describe how teams can leverage TelescopeAI functionality to support SAFe. Teams can get metrics at the team level as well as aggregated metrics at the ART, Solution, or even Portfolio level.
Teams hierarchy and structure
To reflect the SAFe structure, one needs to create a full or partial SAFe hierarchy (Portfolio → Solution → ART → Team) in TelescopeAI.
There are two approaches available.
Existing unit types
First, one can use the existing unit types as SAFe entities:
Existing unit in TelescopeAI | SAFe unit type |
---|---|
Account | Portfolio |
Program | Solution |
Project | ART |
Stream | Team |
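When scripting metrics across levels, this mapping could be kept as a simple lookup. A minimal sketch (the dictionary name is illustrative, not part of any TelescopeAI API):

```python
# Illustrative mapping of existing TelescopeAI unit types to SAFe levels.
UNIT_TYPE_TO_SAFE_LEVEL = {
    "Account": "Portfolio",
    "Program": "Solution",
    "Project": "ART",
    "Stream": "Team",
}
```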
Create SAFe unit types
Another approach is to create dedicated SAFe unit types (Portfolio, Solution, ART, Team). This approach is strongly recommended at an early stage of TelescopeAI adoption.
Visual hierarchy in Planner
The unit hierarchy can be viewed in the Planner module. It is also possible to add milestones or dependencies between units for each Solution, ART, or Team.
Data structure assumptions and recommendations
PI (Program Increment) for metrics calculation
To calculate metrics per PI, that PI needs to be defined in TelescopeAI. There are several possible approaches to adding PIs for metrics calculation.
Jira fix version
If the team does not use fixVersion in Jira for its direct purpose, fix versions can be used to define PIs. Just be careful to align the start and end dates with the sprints of the PI.
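As an illustration, a PI's start and end dates could be pulled from Jira fix versions via the standard Jira REST API. The snippet below is a minimal sketch: the instance URL, credentials, project key, and the "PI-" naming pattern are placeholders, not part of TelescopeAI.

```python
import requests

JIRA_URL = "https://your-company.atlassian.net"   # placeholder instance URL
AUTH = ("user@example.com", "api-token")          # placeholder credentials
PROJECT_KEY = "ABC"                               # placeholder project key

def fetch_pi_versions(project_key: str) -> list[dict]:
    """Return fix versions that represent PIs, with their start/end dates."""
    resp = requests.get(
        f"{JIRA_URL}/rest/api/2/project/{project_key}/versions",
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    pis = []
    for version in resp.json():
        # Assumption: PI fix versions are named like "PI-2024.1".
        if version["name"].startswith("PI-"):
            pis.append({
                "name": version["name"],
                "start": version.get("startDate"),   # may be absent if not set in Jira
                "end": version.get("releaseDate"),   # used here as the PI end date
            })
    return pis

if __name__ == "__main__":
    for pi in fetch_pi_versions(PROJECT_KEY):
        print(pi)
```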
CSV table
One can also define PIs via a CSV table with PIname, PIstartDate, and PIendDate columns. In that case, the file needs to be uploaded to every team where this information is required to calculate metrics.
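For illustration, such a CSV could look like this (the PI names and dates are made up):

```csv
PIname,PIstartDate,PIendDate
PI-2024.1,2024-01-08,2024-03-15
PI-2024.2,2024-03-18,2024-05-24
```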
Define in the metrics script
In some cases the PI name is included in the sprint names using a strict naming convention. In that case, the dates of each PI can be derived from its first and last sprints.
Alternatively, the PI names with their start and end dates can simply be listed directly in the script.
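A minimal sketch of both options, assuming sprint names follow a convention such as "PI-2024.1 Sprint 3". The data shapes, field names, and dates are illustrative only; the actual TelescopeAI metrics-script API may differ.

```python
from collections import defaultdict
from datetime import date

# Option 1: list the PIs directly in the script.
PIS = [
    {"name": "PI-2024.1", "start": date(2024, 1, 8), "end": date(2024, 3, 15)},
    {"name": "PI-2024.2", "start": date(2024, 3, 18), "end": date(2024, 5, 24)},
]

# Option 2: derive PI dates from sprints whose names embed the PI name,
# e.g. "PI-2024.1 Sprint 3" belongs to PI "PI-2024.1".
def pis_from_sprints(sprints: list[dict]) -> list[dict]:
    """Expects sprints like {"name": "PI-2024.1 Sprint 3", "startDate": ..., "endDate": ...}."""
    grouped = defaultdict(list)
    for sprint in sprints:
        pi_name = sprint["name"].split(" ")[0]   # assumes "<PI name> Sprint N"
        grouped[pi_name].append(sprint)
    return [
        {
            "name": pi_name,
            "start": min(s["startDate"] for s in pi_sprints),  # first sprint starts the PI
            "end": max(s["endDate"] for s in pi_sprints),      # last sprint ends the PI
        }
        for pi_name, pi_sprints in grouped.items()
    ]
```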
SAFe metrics
Here is a list of metrics complementary to the standard metrics listed in the Metrics Catalog, i.e. the Scrum and Quality metrics. Illustrative calculation sketches follow the table.
Metric title | Description | Code / Link to metric | Example |
---|---|---|---|
Flow Distribution | Chart visualizing the amount of effort spent (items, hours, story points) on each issue type (Features, Bugs, Support, Tech debt, etc.). | | |
Flow Predictability | Chart reflecting committed vs. completed items/story points per iteration. | | |
Flow Velocity | Flow velocity measures the number of backlog items (stories, features, capabilities, epics) completed in a given timeframe (sprint, month, quarter/PI); this is also known as the system's throughput. | | |
Flow Time | Chart showing the amount of time each issue type spends from creation to completion (lead time) or the duration of the active development phase (cycle time). Flow time measures the total time elapsed for all the steps in a workflow and is therefore a measure of the efficiency of the entire system. It is typically measured from ideation to production, but it can also be useful to measure flow time for specific parts of a workflow, such as code commit to deploy, in order to identify opportunities for improvement. | | |
Flow Efficiency | The ratio of time spent on actual work (excluding delays, waiting time, and other waste) to the total time a task spends in the pipeline. Flow efficiency measures how much of the overall flow time is spent in value-added work activities vs. waiting between steps. | | |
Flow Load | Flow load indicates how many items are currently in the system. | | |
Work in Progress (Relative) | Tracks WIP disparity: shows what percentage of tasks is in progress at each period of time. | | |
Bottlenecks (status) | Helps discover a growing time trend for a particular ticket status. | | |
Bottlenecks (team) | Helps discover team members or teams that usually take on significantly more tickets than others. | | |
Handoffs (requires a separate status for each handoff type) | Identifies possible issues in transitioning tickets between different assignees or process stages. ART-level metric: average waiting time in each handoff status per PI. | Time in Status per week or per month can be used out of the box to track handoffs via status changes. | |
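As a rough illustration of how some of the flow metrics above (velocity, distribution, time, efficiency, load) could be computed from exported issue data, here is a minimal sketch. The issue records and field names (type, created, done, active_days) are assumptions made for the example, not the actual TelescopeAI metric scripts.

```python
from collections import Counter
from datetime import datetime

# Assumed minimal issue records; field names and values are illustrative only.
issues = [
    {"type": "Feature", "created": "2024-01-10", "done": "2024-02-02", "active_days": 6},
    {"type": "Bug", "created": "2024-01-15", "done": "2024-01-20", "active_days": 2},
    {"type": "Feature", "created": "2024-01-20", "done": None, "active_days": 3},
]

def days_between(start: str, end: str) -> int:
    """Whole days between two ISO dates."""
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).days

completed = [i for i in issues if i["done"]]

# Flow Velocity: number of items completed in the timeframe covered by the data.
flow_velocity = len(completed)

# Flow Distribution: completed items per issue type.
flow_distribution = Counter(i["type"] for i in completed)

# Flow Time (lead time): creation to completion, per completed item.
flow_times = [days_between(i["created"], i["done"]) for i in completed]

# Flow Efficiency: active work time divided by total flow time.
flow_efficiency = sum(i["active_days"] for i in completed) / sum(flow_times)

# Flow Load: items currently in the system (not yet completed).
flow_load = sum(1 for i in issues if not i["done"])

print(flow_velocity, dict(flow_distribution), flow_times, round(flow_efficiency, 2), flow_load)
```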