Workplace Learning & Development professionals have a problem -- too often they don't get enough (or any) feedback on the efficacy of their designs. What can we do to fix that?
8. Issues with Typical Measures
• Levels 1 & 2 (Kirkpatrick's reaction and learning) are not meaningful
• Levels 3 & 4 (behavior and results) are difficult and costly
o Require access to the full target audience
o Measuring behaviors requires extensive and costly observation
o Difficult to implement without pre-existing organizational performance metrics in place
o Difficult to attribute due to confounding variables
9. The Evaluation Venn Diagram
[Venn diagram: three circles labeled "Enough budget or resources to measure," "Enough control over the environment," and "Good methods to evaluate." All too often, they don't overlap at all.]
11. We are measuring what we can control
• Seat Time
• # of learning objects
• # of people trained
• Completion Status
• Pre/post scores
"Why don't we just weigh them?"
- The Inestimable Gloria Gery
14. So, what can we do about this?
[Venn diagram: three circles labeled "visible," "desirable," and "feasible." What is in this intersection?]
15. Guerilla Evaluation
• A quicker and less expensive method to ensure a feedback loop that can be used to assess and improve the training intervention
• Not intended to be a full measure of efficacy
• Qualitative measures of:
o Retention of information
o Attitude
o Anecdotal or observable behavior change for a small sample size
16. Based on Nielsen's Guerrilla HCI
In 1994, Jakob Nielsen wrote a highly influential article called "Guerrilla HCI: Using Discount Usability Engineering to Penetrate the Intimidation Barrier." The article addressed the reasons software development teams rarely did usability research to improve the design of software interfaces.
Studies showed that qualitative feedback quickly became repetitive after 5-6 users, and that working with a small sample could provide meaningful design feedback.
http://www.nngroup.com/articles/guerrilla-hci/
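The diminishing returns behind that 5-6 user observation come from Nielsen and Landauer's problem-discovery model: the share of usability problems found by n users is 1 - (1 - L)^n, where L is the chance a single user surfaces a given problem (about 0.31 averaged across Nielsen's studies). A minimal sketch, with illustrative function names not taken from the deck:

```python
# Nielsen & Landauer's problem-discovery model: L is the probability
# that a single test user uncovers any given usability problem
# (L ~ 0.31 averaged across Nielsen's published studies).
def share_of_problems_found(n_users: int, discovery_rate: float = 0.31) -> float:
    return 1 - (1 - discovery_rate) ** n_users

# With five users, roughly 84% of problems have already surfaced,
# which is why qualitative feedback turns repetitive after 5-6 people.
for n in (1, 3, 5, 6, 10):
    print(f"{n:2d} users -> {share_of_problems_found(n):.0%}")
```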
17. Right vs. Better
https://twitter.com/karlfast/status/223825451079057408
18. It’s like Traditional PM vs Agile
[Image: a traditional project plan vs. a kanban board with To Do / Doing / Done columns]
"By putting the most serious planning at the beginning, with subsequent work derived from the plan, the waterfall method amounts to a pledge by all parties not to learn anything while doing the actual work."
- Clay Shirky
19. Keep the cycles short
Why feedback is like weather prediction
20. Formative - User Testing
Standard Usability Testing
The first part of the evaluation process is standard usability testing that involves watching end users interact with the software, followed by a short interview. Typical evaluation measures such as a pre/post test could be incorporated here.
21. Summative - Follow up Interview
• Can be used in conjunction with other evaluation measures
• 30-45 minute follow-up interviews that occur 4-6 weeks after the training intervention
• Small sample group (~6 users per audience)
• Structured interview questions
22. Structured Interview Format
Structured interview questions relating to:
● Learner impressions/feedback
● Most memorable elements
● Small number of retention questions related to key learning objectives
● Anecdotal usage of the material (How have they applied the ideas from the training?)
23. Brinkerhoff Success Case
"Performance results can't be achieved by training alone; therefore training should not be the object of evaluation."
• Part 1: Survey to determine who was successful and who was not
• Part 2: In-depth interviews with a selection of successful and unsuccessful users
Find Out Quickly What's Working and What's Not
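Brinkerhoff's two-part flow, survey everyone and then interview only the extremes, can be sketched in a few lines. The survey data, score scale, and function name below are hypothetical placeholders, not part of the Success Case Method itself:

```python
# Sketch of Brinkerhoff's Success Case Method, assuming a hypothetical
# survey that scored each trainee's self-reported success from 1-10.
# Part 1: survey the whole group; Part 2: interview only the extremes.
def pick_interviewees(survey_scores: dict, n: int = 2) -> dict:
    """Return the n least and n most successful respondents by score."""
    ranked = sorted(survey_scores, key=survey_scores.get)
    return {
        "least_successful": ranked[:n],   # lowest-scoring trainees
        "most_successful": ranked[-n:],   # highest-scoring trainees
    }

# Hypothetical Part 1 survey results.
survey = {"ana": 9, "ben": 2, "cho": 7, "dee": 4, "eli": 8, "fay": 1, "gus": 5}
groups = pick_interviewees(survey, n=2)
```

Interviewing both tails, rather than a random sample, is what lets the method find out quickly what's working and what's not.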
27. What do you think?
• With 2-3 people around you, make a list of quick and dirty evaluation options.
• As soon as you think of one, come up with another one as quickly as possible.
28. Questions?
• Thanks for coming
• Contact:
o Julie Dirksen
o julie@usablelearning.com
o http://usablelearning.com
o Twitter: usablelearning