Agile metrics can be used to the advantage or the detriment of teams and an organisation's Agile success. This session looks at several of the core Agile metrics used to measure success, to help you understand what success looks like, why each metric is desirable, and what the metrics can tell us.
Understanding why we want these metrics is critical to capturing something of value, rather than measuring just 'because'. What will leaders and decision makers do with these metrics? What value do they add?
Steve will also dive into the negative impacts of some of the Agile metrics we are sometimes forced to capture, such as how chasing velocity leads to gaming the system. He'll look at bad metrics, including the seven deadly sins of Agile measurement, and how to avoid them in your enterprise.
4. "Establish an Economic Framework"
Our overall goal is to influence economic outcomes.
Our most important decisions involve tradeoffs between multiple measures of performance.
"When done correctly, an economic framework will shine a bright light into all the dark corners of product development."
Don Reinertsen, The Principles of Product Development Flow
5. Why do we want to capture metrics?
What is the purpose?
How will they be used?
Who are they for?
6. Why Metrics?
To support decision making processes
Measure Value (Product) or Process
To affirm and reinforce Lean and Agile principles
To measure outcomes
To follow trends, not numbers
Reveal, rather than conceal, context and significant variables
Provide fuel for meaningful conversation
7. At the current churn rate, 75% of the S&P 500
will be replaced by 2027
"If a measurement happens at all, it is because it must have some conceivable effect on decision and behavior. If we can't identify what decisions could be affected by a proposed measurement and how that measurement could change them, then the measurement simply has no value."
"How to Measure Anything" by Douglas W. Hubbard
9. What For?
Support organisational objectives
Optimise learning
Should be simple, big & visible
Well-understood and easily adopted
Guide actions and decisions
“Escape Velocity” By Geoffrey Moore
10. Bad Metrics
Collected because they always have been
Encourage gaming of the system
Result in bad behavior
Ignore the system
Don’t align with the Why!!
@goriansteve
11. The Seven Deadly Sins of
Agile Measurement
1. Using metrics as levers
2. Using a convenient metric (rather than one that provides
critical insight)
3. Bad analysis
4. Motivating people to hide information
5. Too costly measures
6. Too many measures (information overload)
7. Too few measures (unbalanced)
Larry Maccherone, Rally Software
12. Types of Metrics
Internal vs. External
Qualitative vs. Quantitative
System Efficiency vs. Local Efficiency
Hypotheses, Experiments, and Little Bets
16. Productivity Metrics
Value Points Delivered per Time Period
Cost per Value Point
Concept To Cash
Revenue per Employee
Lead Time per Story
Mean Time to Ticket Resolution
SLA Achievement Metrics
Velocity
19. Stable Teams
Teams that are 95% or more dedicated show an almost 2:1 throughput advantage over teams that are less than 50% dedicated
AND
60% better productivity
40% better predictability
60% better responsiveness
20. Predictability
Say / Do Ratio
Velocity Variance
Cycle Time per Story Point
Feature Comparison
Epic Comparison
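Two of the predictability metrics above can be sketched in a few lines. This is an illustrative sketch only: the sample iteration numbers and the choice of coefficient of variation for "velocity variance" are assumptions, not from the talk.

```python
# Illustrative sketch: Say/Do ratio and velocity variance from per-iteration
# committed vs completed story points (sample numbers are made up).
from statistics import mean, pstdev

committed = [50, 55, 60, 50]   # story points the team said it would do
completed = [45, 55, 48, 50]   # story points actually completed

# Say/Do: how much of what was committed actually got done.
say_do = sum(completed) / sum(committed)

# Velocity variance as a coefficient of variation: lower = more predictable.
velocity_cv = pstdev(completed) / mean(completed)

print(f"Say/Do ratio: {say_do:.2f}")   # ~0.92
print(f"Velocity CV:  {velocity_cv:.2f}")
```

A trend in these two numbers over several iterations says far more than any single data point.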
22. Quality
Maintenance Complexity Trending
Defect Density
Issue Re-introduction Rate
Defect Arrival / Kill Rate
Unit Test Coverage
Auto Functional Test Coverage
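A couple of the quality metrics above can be computed directly; the defect counts and code size below are made-up numbers for illustration.

```python
# Illustrative sketch: defect density and defect arrival/kill rate.

def defect_density(defects_found, ksloc):
    """Defects per thousand lines of code."""
    return defects_found / ksloc

arrivals = [12, 9, 7, 5]   # new defects reported per iteration
kills    = [ 6, 8, 9, 7]   # defects fixed per iteration

# Running open-defect backlog: arrivals minus kills, accumulated.
open_defects = []
backlog = 0
for a, k in zip(arrivals, kills):
    backlog += a - k
    open_defects.append(backlog)

print(defect_density(30, 24))  # 1.25 defects per KLOC
print(open_defects)            # [6, 7, 5, 3] - kill rate overtaking arrivals
```

The trend in the open-defect backlog, not its absolute value, is what tells you whether quality is improving.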
24. Responsiveness
Cycle Time per Story
Lead Time per Story
Queue / Batch Size
Average Impediment Lifetime
Mean Time to Release
Mean Time to Fix
27. How Velocity Works
This iteration we completed 90 story points and next
iteration we will do 160
Whoa - What does your velocity look like over
the last 3 iterations?
Well over the last 3 iterations we completed 45,
80 and 70 story points.
Ok, so what has changed in your team or your
work that makes you think you can achieve 160
story points?
Nothing, but to satisfy our customer we have to.
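The dialogue above can be turned into a simple sanity check. This is a hypothetical sketch: the function names and the 20% tolerance are illustrative assumptions, not an established rule.

```python
# Hypothetical sketch: check a velocity forecast against the recent trend.

def rolling_average(velocities):
    """Average completed story points over recent iterations."""
    return sum(velocities) / len(velocities)

def forecast_is_plausible(history, forecast, tolerance=0.2):
    """A forecast is plausible if it is within `tolerance` of the trend,
    absent any concrete change in the team or the work."""
    return forecast <= rolling_average(history) * (1 + tolerance)

history = [45, 80, 70]                      # last three iterations, as in the dialogue
print(rolling_average(history))             # 65.0
print(forecast_is_plausible(history, 160))  # False: nothing has changed
```

Wanting 160 points "to satisfy the customer" does not change the trend; it only invites gaming of the estimates.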
31. Examples of Badness
I want the ‘Blue’ team to work on my projects
because their velocity is higher
I want to compare the output of people in the
Team
We committed to 120 points, completed them all... and carried forward 30?
HUH
As a manager I have to constantly drive my
teams to ensure they meet the goals ‘we’ set
35. The Lean Canvas (30-Jun-2014) – Proposed Solution, Iteration #1
Canvas boxes (flattened here from the slide's grid layout):
Problem / Conceptual Solution
Metrics / Outcomes
Top Features / Expected Outcome
Unique Value Proposition – a single, clear, compelling message that states why the solution will be different and worth experimentation
Impact Mapping / Communications Plan
Notes
PRODUCT: Learnings – Deploy / Pivot / RIP
Lean Canvas is adapted from The Business Model Canvas (http://www.businessmodelgeneration.com) and is licensed under the Creative Commons Attribution-Share Alike 3.0 Unported License.
37. WSJF Prioritising
WSJF = (user-business value + time criticality + risk reduction | opportunity enablement) / job size
Images used with permission of Scaled Agile, Inc. See ScaledAgileFramework.com for more information.
38. My Example
Feature                  | User-Business Value | Time Criticality | RR/OE | Cost of Delay | Job Size | WSJF
Presentation             | 5                   | 8                | 6     | 19            | 10       | 1.90
Sales Proposal           | 10                  | 10               | 9     | 29            | 7        | 4.14
Remote Training feedback | 3                   | 3                | 3     | 9             | 4        | 2.25
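The arithmetic in the example above can be reproduced with a short sketch (column names shortened for readability):

```python
# WSJF: cost of delay is the sum of the three value components,
# then divided by job size.

def wsjf(user_business_value, time_criticality, rr_oe, job_size):
    cost_of_delay = user_business_value + time_criticality + rr_oe
    return cost_of_delay, round(cost_of_delay / job_size, 2)

print(wsjf(5, 8, 6, 10))   # (19, 1.9)   Presentation
print(wsjf(10, 10, 9, 7))  # (29, 4.14)  Sales Proposal
print(wsjf(3, 3, 3, 4))    # (9, 2.25)   Remote Training feedback
```

Sales Proposal wins despite having the largest cost of delay and a mid-sized job: WSJF favours high value delivered in small batches.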
39. Value
"In our experience no single sensitivity is more eye-opening than cost of delay."
Don Reinertsen, The Principles of Product Development Flow
[Chart: cost of delay over time]
40. AGILE EVM
Measures Schedule and
Performance – not Value
Forecasts in financial units
Expects everything to be
defined up front
No Assertion of Quality
PV, EV, CPI, SPI, ETC, EAC
42. Waiting Times more than
double as utilisation moves
from 80% to 90% and
double again as it moves
from 90% to 95%
Don Reinertsen, The Principles of Product Development Flow
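Reinertsen's utilisation claim can be checked against the standard single-server (M/M/1) queueing formula. This is a minimal sketch: the service time is just a unit of scale, and real development queues are messier than M/M/1.

```python
# M/M/1 queueing sketch: expected time waiting in queue is
# Wq = rho / (1 - rho) * service_time, where rho is utilisation.

def queue_wait(utilisation, service_time=1.0):
    """Expected queue wait for a single server at the given utilisation."""
    return utilisation / (1 - utilisation) * service_time

for rho in (0.80, 0.90, 0.95):
    print(f"{rho:.0%} utilised -> average wait {queue_wait(rho):.1f}x service time")
# 80% -> 4.0x, 90% -> 9.0x (more than double), 95% -> 19.0x (double again)
```

The non-linearity is the whole point: the last few percent of utilisation are bought at an enormous cost in waiting time.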
44. Control Queues Not Capacity Utilisation
Don Reinertsen, The Principles of Product Development Flow
Sources of queues:
Waiting on SMEs
Waiting on Sign Offs
Management Reviews
Big Upfront Analysis
Waiting on Releases
Effects of queues:
Longer Cycle Times
Increased Risk
More Variability
More Overhead
Lower Quality
More Administration
Less Motivation
Flow-on Effects
47. Why do we want to capture metrics?
What decisions or behaviours do you wish to impact?
How will they be used?
As a lever, or to facilitate feedback?
Who will use the metrics?
Are the metrics fit for purpose?
48. Why do we want to capture metrics?
What is the purpose?
How will they be used?
Who are they for?
Agile is a team game, but for metrics to be successful we have to look at the entire system – not just the individual components.
Talk about objective and outcomes – then what you are going to do to gather data to change the outcomes
Lifespans of top companies are shrinking, according to a study of the S&P 500 Index:
• In 1958, companies on the S&P 500 had an average tenure of 61 years
• This narrowed to 25 years in 1980
• And to 18 years now
• A warning to executives: at the current churn rate, 75% of the S&P 500 will be replaced by 2027
I wonder how this would compare on the ASX200, NZSE50, SGX200
To survive and thrive, leaders must "create opportunities, operate efficiently, trade off maturing or non-productive product lines and build new divisions at the pace commensurate with the market without losing control of the company."
Thinking in those terms, it is obvious that portfolio management is critical for long-term profitability.
There are multiple classifications of metrics that I hear people concerned with and that I see people tracking.
Relative units of sizing
Applied at the story level
Showcase the impact of context switching – usually a 2:1 difference in throughput between working on a single project at a time vs three at a time
Refer to the "Insights" metrics from Rally Insights. This is what the data from nearly 13,000 different teams shows about stable teams:
Discuss cost of replacing experienced engineers
Al Gore up the cherry picker showcasing global warming
Accepting that trends are an important factor, let's use velocity as a tool to show how important trends are
A principal metric is the Cumulative Flow Diagram (CFD), which shows us how much work is in progress over time, broken out by the work states that items progress through.
On a CFD, any vertical line tells us how much WIP there was on that day.
It shows how smoothly work is flowing and where the bottlenecks are. This is valuable input for helping decide where to focus our improvement efforts.
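Reading WIP off a CFD can be sketched in a few lines. The cumulative counts and state names below are made-up assumptions for illustration.

```python
# Hypothetical CFD data: cumulative counts of items that have reached each
# state, per day. WIP on a day is the vertical gap between the line of items
# that have started and the line of items that are done.

cfd = {
    "To Do":       [10, 14, 18, 22],
    "In Progress": [ 4,  7, 11, 13],
    "Done":        [ 1,  3,  6, 10],
}

def wip_on_day(cfd, day):
    """Items started but not yet finished on the given day."""
    return cfd["In Progress"][day] - cfd["Done"][day]

print([wip_on_day(cfd, d) for d in range(4)])  # [3, 4, 5, 3]
```

A band that keeps widening over time is the visual signature of a bottleneck in that state.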
Batch Times – lower costs, etc
http://www.rallydev.com/sites/default/files/Measuring__Integrated_Progress_on_Agile_Projects_using_Rally_0.pdf
Agile methods do not define how to manage and track costs to evaluate expected Return on Investment. The iteration burndown and burnup charts (as used in Scrum) therefore do not provide at-a-glance project cost information. Agile metrics provide neither an estimate of cost at completion of the release nor cost metrics to support the business when it considers decisions like changing the requirements in a release. AgileEVM does provide this information, and is therefore an excellent extension to the information provided by burndown charts.
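The core AgileEVM calculations can be sketched in a few lines. The release numbers below are illustrative assumptions; AgileEVM as published derives planned value from the release plan and earned value from story points completed.

```python
# Sketch of AgileEVM's core indices (PV, EV, CPI, SPI, EAC) for a release.
# All figures are made-up example values.

budget_at_completion = 120_000.0   # planned release budget (BAC)
total_iterations     = 6
iterations_done      = 3
total_points         = 300         # story points planned for the release
points_completed     = 135
actual_cost          = 70_000.0    # spend to date (AC)

pv  = budget_at_completion * iterations_done / total_iterations  # planned value
ev  = budget_at_completion * points_completed / total_points     # earned value
cpi = ev / actual_cost             # cost performance index (>1 is under budget)
spi = ev / pv                      # schedule performance index (>1 is ahead)
eac = budget_at_completion / cpi   # estimate of cost at completion

print(f"PV={pv:.0f} EV={ev:.0f} CPI={cpi:.2f} SPI={spi:.2f} EAC={eac:.0f}")
```

Here the release is slightly behind schedule (SPI 0.90) and over cost (CPI below 1), so the forecast cost at completion exceeds the original budget. Note none of this asserts anything about value or quality.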