This post is in response to a question posed on the POINT-of-WORK Performance Support Solutions networking group on LinkedIn by Rebecca Everett. Feel free to jump into the exchange by joining the group.
Question: Hi all, and thanks Gary for the opening question. I’ve got two wishes; first is measuring impact, second is scalability. Both seem costly and the thread connecting a performance improvement initiative to ROI is still perceived as tenuous, so evidence is sorely needed when making the business case. Thanks!
**********************************************************************************
Refer to the DRIVER tab on my website Living in Learning for additional details. The “E” in the DRIVER discipline stands for “Evidence,” and, if you’re using Kirkpatrick, there are four levels, as we know. The first two are easy to obtain but do not provide much more than validation of satisfaction with the learning event and proof that knowledge was transferred…but only for as long as it can be remembered…offering up a promise of potential. No impact can be measured until we get to Levels 3 & 4…downstream, post-training, at Point-of-Work.
If you’re using the 5 Moments of Need, the third through fifth moments (APPLY, SOLVE, and CHANGE) are where measurable results can be obtained. Why? Because all three manifest at Point-of-Work, where real work is accomplished and either generates value…or compromises it through mistakes and errors…or loses it outright. All three impact scenarios represent measurable consequences…though that evidence is very difficult to acquire.
The secret sauce is identifying discrete, measurable impacts before any training or performance support asset design, development, and delivery begins. In the DRIVER discipline, the “D” stands for Discovery, which I created in self-defense for those situations where a training request, though well-intended, was not a viable solution…and I had had enough of my team taking it in the shorts for poor training when training was never going to deliver the mail.
I developed a Point-of-Work Assessment (PWA) where part of the discovery objective includes “evidence” that there is indeed a performance issue…how big that issue really is…and how the stakeholder knows it’s an issue. The stakeholder’s knowledge of the deficient performance is based on recognizing that outcomes/impact are lacking, revealing that they are measuring something besides seat-of-the-pants assumptions…or not. Either way, you need to confirm.
Check out the Point-of-Work Assessment tab on the website, because the “D” looks at six Performance Attribute clusters, one of which is Impact/Analytics. This is where the self-defense motivation gets covered by establishing:
- What is currently tracked to frame current-state performance impact and serve as a benchmark?
- What is the desired future-state performance impact?
- What are the current-state KPIs?
- Are those KPIs the right ones? Is something being measured that is irrelevant?
- What should future-state KPIs include? (nothing wrong with the status quo if they’re already right on)
- Should something else be measured that better highlights evidence at future state?
- What are the data sources for KPI analytics? (potential for LRS data using xAPI; see the sketch after this list)
- How are those analytics captured, analyzed, and reported in the current state?
- Could a performance dashboard be a useful future-state reporting platform?
- …and likely some additional follow-up questions…
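For those wondering what that LRS/xAPI data source might look like in practice, here is a minimal sketch of capturing one unit of Point-of-Work evidence as an xAPI statement and posting it to an LRS. The endpoint URL, credentials, activity ID, and the rework-flag extension are hypothetical placeholders; the statement structure, the ADL “completed” verb, and the X-Experience-API-Version header are standard xAPI.

```python
# Minimal sketch: capture one unit of Point-of-Work evidence as an xAPI
# statement and post it to an LRS. Endpoint, credentials, activity ID, and
# the rework extension are hypothetical placeholders.
import requests

LRS_ENDPOINT = "https://lrs.example.com/xapi"  # hypothetical LRS base URL
LRS_AUTH = ("lrs_user", "lrs_password")        # hypothetical Basic-auth credentials

statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Pat Performer",
        "mbox": "mailto:pat.performer@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",  # standard ADL verb
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "https://example.com/activities/order-entry-task",  # hypothetical activity
        "definition": {"name": {"en-US": "Order entry task at Point-of-Work"}},
    },
    "result": {
        "success": True,
        # Hypothetical extension tying the statement to a business KPI:
        "extensions": {"https://example.com/xapi/ext/rework-required": False},
    },
}

response = requests.post(
    f"{LRS_ENDPOINT}/statements",
    json=statement,
    auth=LRS_AUTH,
    headers={"X-Experience-API-Version": "1.0.3"},
)
response.raise_for_status()  # LRS returns the stored statement ID(s) on success
print(response.json())
```

The point is not the plumbing; it’s that every statement like this lands in the LRS as time-stamped evidence tied to real work, which is exactly the raw material a performance dashboard would report against.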
It is important to nail these attributes down, because without establishing a current-state benchmark right up front, you will have great difficulty proving impact after the fact.
As for ROI? Yikes…the ROI on obtaining ROI is not good. If all we are seeking to justify is our training investment, why bother? There is no ROI on potential.
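To make that concrete, here is a back-of-the-napkin calculation with purely hypothetical numbers. Notice the math only works because a current-state benchmark (the 6% error rate) was captured during Discovery; without it there is nothing to subtract, and “potential” has no numerator.

```python
# Back-of-the-napkin ROI math with purely hypothetical numbers, assuming a
# current-state benchmark was captured during Discovery.
transactions_per_month = 10_000
error_rate_benchmark = 0.06   # current-state error rate (the benchmark)
error_rate_future = 0.04      # measured future-state error rate
cost_per_error = 25.00        # hypothetical rework cost per error

monthly_gain = transactions_per_month * (error_rate_benchmark - error_rate_future) * cost_per_error
annual_gain = monthly_gain * 12
initiative_cost = 20_000.00   # hypothetical one-time cost of the initiative

roi = (annual_gain - initiative_cost) / initiative_cost
print(f"Annual gain: ${annual_gain:,.0f}; ROI: {roi:.0%}")
# Annual gain: $60,000; ROI: 200%
# Without the 6% benchmark, neither the gain nor the ROI can be computed.
```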
We’re missing a huge opportunity to justify ourselves to the business as a viable partner. I coined a phrase in another post a few years ago, EOSC – Evidence of Sustained Capability – where the evidence is based upon actual business outcomes (or impact, if you will) that are more than simple transactions validating training…especially when the PWA reveals that the request for training was bogus to begin with…and that deficient performance is not going to be impacted by training.
We should be chasing sustained workforce capability at Point-of-Work. Certainly, training plays a role, though a small one, but no learner shifts into a performer role until they are at Point-of-Work. And that is where our discovery should start and where impact/evidence is benchmarked and targeted for improvement.
That would be my $.02
What thoughts can anyone else add to the discussion? Jump into the comments section on the group site and let fly!
G.
Gary G. Wise
Workforce Performance Advocate, Coach, Speaker
gdogwise@gmail.com
(317) 437-2555
Web: Living In Learning
LinkedIn
Great insights, Gary. I became a certified KPI professional to push the agenda of KPI-based performance measurement. But the very company that sponsored my training decided not to listen… people who call themselves managers just don’t seem to understand, or want to understand, what a benchmark or standard is when it comes to learning outcomes. That’s why training is such a lucrative ‘waste’ of financial resources. In a sense, it’s a bogus scam.
BTW, Kirkpatrick did later mention “ROE” – return on expectations – which applies chiefly to “soft” areas like EI or leadership. If there is some consistent, observable behavioral change, at least we can attest to and measure the improvement outcome.
As Einstein said, “Not everything that counts can be counted, and not everything that can be counted counts.”
Wise words, Yuvarajah! I appreciate you reading and sharing your comments. Take good care!