PerfQL Examples
- 1 I want to know who submitted bugs and sub-bugs on my project and how many bugs were submitted by each person
- 2 I want to know who submitted bugs and sub-bugs for the last sprint (an example of a JOIN)
- 3 I want to know the number of issues per Label for a particular sprint
- 4 I'm interested in task distribution between QA engineers (based on a Custom Field)
- 5 I would like to know line changes in code per Story Point by month (join TTS with GIT)
- 6 I wonder how many issues are in every sprint after September 2021
- 7 I would like to track Velocity Trend in my project
- 8 Custom burnup chart (without forecast)
I want to know who submitted bugs and sub-bugs on my project and how many bugs were submitted by each person
select key, reported_by from Ticket
where type='Bug' or type='Sub-bug'
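The query above lists one row per bug with its reporter; if you also want the count per person directly in the result, a minimal aggregated variant (using only the same Ticket fields) could look like this:
select reported_by, count(*) as bugs_submitted
from Ticket
where type='Bug' or type='Sub-bug'
group by reported_by
order by bugs_submitted desc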
I want to know who submitted bugs and sub-bugs for the last sprint (an example of a JOIN)
select ticket.key, ticket.reported_by from Ticket join Sprint on sprint.id=any(ticket.sprints)
where (type='Bug' or type='Sub-bug') and sprint.name='Sprint R48'
PerfQL for JSON-type fields
To build a custom metric with data that is stored in JSON form, select the necessary data from the JSON in Delivery Central → Perf → Custom Metrics → step 2:
Open Developer Tools in your browser and switch to the Network tab;
Insert a PerfQL query with the JSON field, for example: select custom_fields from Ticket;
Click Run Preview;
Click on the sample-data request in the Network tab;
Open the Response tab.
The Response tab contains a list of all custom fields and their data.
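If you prefer to stay inside PerfQL, the available custom field names can also be listed with a query; a minimal sketch, assuming custom_fields is a jsonb array of objects with 'name' and 'value' keys (as in the 'Responsible QA' example below):
select distinct name
from (
  select jsonb_array_elements(custom_fields) ->> 'name' as name
  from Ticket
) as cf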
I want to know the number of issues per Label for a particular sprint
select key, unnest(labels) as label from Ticket left join Sprint on Sprint.id = any(Ticket.sprints)
where Sprint.name='Sprint R48';
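This returns one row per ticket/label pair; a hedged variant that aggregates it to the number of issues per Label in the same sprint:
select label, count(*) as issues
from (
  select unnest(Ticket.labels) as label
  from Ticket left join Sprint on Sprint.id = any(Ticket.sprints)
  where Sprint.name='Sprint R48'
) as labelled
group by label
order by issues desc;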
I'm interested in task distribution between QA engineers (based on a Custom Field)
with custom_fields as (
select
key,
jsonb_array_elements(custom_fields) ->> 'name' AS name,
jsonb_array_elements(custom_fields) ->> 'value' AS value
from Ticket
)
select key, value from custom_fields
where name = 'Responsible QA'
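To see the distribution itself (how many tickets each QA engineer is responsible for), the same CTE can be aggregated; a sketch reusing the 'Responsible QA' custom field from above:
with qa_fields as (
  select
    key,
    jsonb_array_elements(custom_fields) ->> 'name' as name,
    jsonb_array_elements(custom_fields) ->> 'value' as value
  from Ticket
)
select value as "QA Engineer", count(*) as "Tickets"
from qa_fields
where name = 'Responsible QA'
group by value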
I would like to know line changes in code per Story Point by month (join TTS with GIT)
with joinedTandC as (
select sum(Commit.lines_changed::int) as changes,
Ticket.key as key,
Ticket.resolved as resolved,
Ticket.story_points as points
from Ticket
inner join Commit on
Ticket.resolved > '2021-01-01'
and Ticket.story_points is not null
and Ticket.story_points > 0
and Commit.message is not null
and Commit.lines_changed > 0
and Commit.message like concat(Ticket.key, '%')
group by key, resolved, points
)
select sum(changes)::float/sum(points) as "Line Changes Per Story Point",
/* sum(points) as "Sorry Points",*/
/* sum(changes) as "Line Changes", */
date_trunc('month',resolved) as "Month"
from joinedTandC
group by "Month" order by "Month"
I wonder how many issues are in every sprint after September 2021
select ticket.key, sprint.name from Ticket join Sprint on sprint.id=any(ticket.sprints)
where sprint.start_date > '2021-09-01'
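This lists every issue together with its sprint; if you want the counts directly, a hedged variant grouped by sprint:
select sprint.name as "Sprint", count(ticket.key) as "Issues"
from Ticket join Sprint on sprint.id=any(ticket.sprints)
where sprint.start_date > '2021-09-01'
group by sprint.name, sprint.start_date
order by sprint.start_date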
I would like to track Velocity Trend in my project
with sprints_results as ( -- velocity by sprints
select
s.name,
s.start_date,
(select sum(story_points) from Ticket t where s.id = any(t.sprints)) as story_points
from Sprint s
order by s.start_date
), rolling_avg_sprints as ( -- rolling average velocity by sprints (3 sprints)
select
name,
avg(story_points) over (ORDER BY start_date ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) as rolling_average,
start_date
from sprints_results)
select -- current rolling average velocity as a percentage of the previous sprint's rolling average
name as "Sprint Name",
rolling_average * 100 / (first_value(rolling_average) over (order by start_date ROWS between 1 preceding AND 1 preceding)) as "Velocity Trend, %"
from rolling_avg_sprints
Custom burnup chart (without forecast)
-- Generate scope
with scope as (
select key, created from ticket
--, release r
-- WHERE r.id = ANY(ticket.fix_releases) and r.name = 'version 1.15.2'
),
-- Generate ticket snapshots
snapshots as (
select * from ticket_snapshots(ARRAY(select key from scope), 'week')
),
-- Generate days
days as(
select generate_series(
(select date_trunc('day', min(created)) from scope),
date_trunc('day', now()),
'1 week') as start
),
-- Get snapshot actual by each date
snapshot_on_day as(
select snapshots.key as key, max(snapshots.snapshot_created) as created, days.start as day from snapshots, days
where snapshots.snapshot_created < days.start
group by snapshots.key, days.start
),
-- Generate done count by date
done as(
select count(distinct snapshot_on_day.key) as done_count, snapshot_on_day.day as start from snapshots sn
inner join snapshot_on_day
on sn.key = snapshot_on_day.key
and sn.snapshot_created = snapshot_on_day.created
where is_done(sn)
group by snapshot_on_day.day
),
-- Generate created count by date
created as(
select count(distinct key) as created_count, days.start from scope, days
where created < days.start
group by days.start )
select created.created_count, done.done_count, created.start as startdate
from created left outer join done on done.start = created.start
order by startdate