One never knows how a good conversation with a friend and respected colleague like Mark Britz can uncork a brilliant idea for a blog post I had not planned to write. We were discussing the painfully slow adoption by L&D organizations of the elusive performance paradigm. We both blog and are always searching for the right words, and that’s when Mark dropped a statement that flung a craving on me to write this blog:
“L&D is chasing the right outcomes with the wrong outputs!”
In truth this statement may not turn anybody else’s crank, but right after he said it, I never heard another word he said…sorry Mark!
Despite my being the author of this post, I have to give props to Mark for pushing me over the edge by dropping the right words into my head.
Seriously, check this out…first, consider “outcomes” as representing measurable performance results…a.k.a. business outcomes. Then…consider “outputs” as the product L&D renders in order to train competency into the workforce…a.k.a. training solutions…the outputs produced by L&D.
Why are training solutions the wrong output? Honestly, they are NOT wrong – but the expectations of what those outputs produce may be…especially when training alone is not enough…and only contributes to potential. Last I checked, potential cannot be measured or considered as a tangible outcome.
Maybe we should examine those expectations a little more closely by asking, “What were you expecting from training?” Quite possibly, if our stakeholders’ embedded expectations…as well as our own [L&D’s]…were that training would drive performance, it may be easier to see why training solutions by themselves may well be the wrong outputs.
Digging deeper, it’s probably safe to say those expectations are all over the map, and rightly so; each learning and performance ecosystem is unique…each need to improve performance is unique as well. And yet, within all that diversity of need, there really does exist a common thread…a common end game…regardless of industry: sustained business outcomes. Maybe it’s just me, but don’t business outcomes sound like a lowest common denominator regardless of the unavoidable diversity in our environments?
If that resonates with you as it does with me, it seems L&D’s target outputs should be driven by the expectation of business outcomes produced by the workforce…at the Point-of-Work…not by producing solutions limited to a transfer of knowledge that may not be retained long enough to apply.
The logic in my brain says the starting point for addressing any business challenge should be a complete understanding of the business outcomes NOT being delivered at the Point-of-Work.
Also in my brain, as skewed as it might be with a rampant performance consulting bias, training outputs have not even been suggested as part of a solution at this point; however, long-held traditional approaches promote training as a default. True, training may be necessary – my position is simple…don’t go there first.
We cannot begin the effort by looking through a “training solution lens”. I can always build training IF NECESSARY. BUT…if emphasis is not first laser-focused on what solution output closes the performance gap(s) at Point-of-Work, the traditional default training solution may well be the wrong output.
That said, at least for me, where and what we start investigating matters – business outcomes @ Point-of-Work. The focus of our discovery – deficient business outcomes and the root cause(s) behind them – clearly shines a light on whatever the best solution blend should be.
Discovery really needs to infiltrate Point-of-Work and include:
- People – who does the work…who supports, mentors & reinforces the work…who is impacted by the work
- Process – clear definition of the work at task-level and the root cause(s) of breakdowns
- Content – assets used to accomplish the work…applied in the process of accomplishing the work
- Technology – tools, applications, and end-user devices used to accomplish the work
- Measures – KPIs that track the work…KPIs that SHOULD track the work…current KPI benchmarks
…and there is still more in each category, but in the final analysis, if discovery of this type – PERFORMANCE ASSESSMENT – is not accomplished on the front end by someone on the L&D team with performance consulting skills, the chances are better than good that the outputs produced will be the wrong ones, and the desired outcomes will come up short.
So…am I busting L&D for being wrong?
Nope, not even, but I am saying that well-intended training solutions will not likely carry forward to the Point-of-Work and deliver the mail as expected.
What then is the solution?
Blow up L&D and start over?
Bag the core concepts of ADDIE and all of its sexy new derivatives?
Hardly, but what needs to happen requires a close examination of your chosen methodology and what you do first. I could not give a rip whether you adopt 70:20:10, PDNA, Five Moments of Need, or anything else. None of those will work either if you do not follow what’s at their respective cores – assessing performance. They all require assessment, and that assessment must happen at Point-of-Work and examine the five areas I called out above.
Why?
- Point-of-Work is common to all methodologies
- Point-of-Work is where performance creates business value…AND where deficient performance creates loss
- Point-of-Work is where business outcomes manifest…not during training
- Point-of-Work is downstream and out of scope for most training outputs
- Point-of-Work is where Levels Three, Four, and Five are measured as evidence of impact
Regardless of the models, methodologies, or frameworks you end up pursuing, I cannot emphasize enough the critical need to either A) bring performance consulting skills in-house, or B) build/grow them internally to support effective performance assessment at Point-of-Work.
This whole Point-of-Work obsession of mine is truly NOT rocket science. Trust me, Homey don’t do rocket science. But Homey does know how to defend himself, because he and his teams have been BUSTED too many times for producing excellent training outputs that did not deliver and sustain the desired business outcomes. Homey never got out in front of the training paradigm to crush the myth that training was going to drive the performance everybody wanted.
Don’t be a Homey…chase the right outcomes with the right outputs!
Oh…one more thing…thanks to Mark Britz for lighting the fuse.
Gary G. Wise
Workforce Performance Advocate, Coach, Speaker
gdogwise@gmail.com
(317) 437-2555
Web: Living In Learning
LinkedIn
I believe that changing the perspective on proper performance assessment is the key to truly measuring “point of work performance”. In most organizations where I have worked, performance assessment is non-existent or “easy” (we can’t make it too hard!). My thought is that employers are scared to find out what their true level of workforce performance actually is.
True, Manuel, and for that reason, when discovery is being accomplished, the manager is NEVER in the same room with the individual contributor or in focus groups. We must avoid bias and situations where the manager may be the source of the dysfunction.
Gary,
I could not agree more. If the delivery does not tie directly to organizational goals and strategic aims, why do it? For too long the onus has been on the L&D professional to deliver a body of work that addresses an issue that is not a training issue. It may just be a performance issue, meaning perhaps old Joe down on the shop floor doesn’t give 2 squirts of pee whether a task is done correctly and completely. That’s a performance issue; no amount of training will help old Joe.
Thanks for reading and sharing a thought, Albert!
Mostly agree with your post, Gary, and Mark’s comment. L&D is too often chasing the right outcomes – or Pursuing Performance – via problem resolution with its solution-set, when the real root cause may be in the design of the Process itself (not lean enough, or with some process steps’ variability not tight enough (SixSigma or otherwise)) – or with an out-of-whack Consequence System – or any number of other Process/Performance variables – totally out of the wheelhouse of (I would guess) a majority of ID/ISDers.
I believe our Value Add could be/can be/is, for some, that we can help our clients determine when awareness/knowledge/skills are not at the root of the performance issue (problem or opportunity) – and what expertise is likely needed to really diagnose and solve the root cause or causes – and which, from a Pareto (20ish/80ish) perspective, might be the key levers for improvement – and which other experts might be able to help avoid/stop the probable backsliding that the C in DMAIC is intended to address. And we can help further in supporting resolution implementation when that requires new awareness, knowledge, or skills learned well enough that the human variables are then equipped to carry on.
We in ID/ISD don’t own Performance Improvement exclusively – but can be a valuable asset to the Enterprise in the upfront discovery of gaps and causes – and then in providing our expertise and the right Performance Support and/or Learning/Training solutions when and where warranted.
Cheers!
Thanks, Guy! I wonder sometimes whether it would be easier to ramp up an ID to handle the performance assessment or to introduce a Performance Lead role to liaise between the business stakeholder and the ID function. I suggest this because in my last few roles, the ID population was not equipped with the skill set to be fully effective at the Point-of-Work. Could they be effective? Certainly, with the right development and coaching…and in smaller shops the ID likely wears a half dozen hats anyway. In my gut I’m feeling that the organization might be better positioned to sustain performance if there were a visible separation in the person of a “performance lead” – or whatever label is attached – that is not “inside” L&D in a traditional L&D role. Personally, I do not see Performance Improvement as an HR function. That “performance lead” role would liaise with the ID/DEV Factory, the stakeholder, and the analytics function…not necessarily as a project manager, but as strategic oversight and continuity from performance gap to measuring evidence of impact. Thoughts?