In this final snippet of the Point-of-Work Assessment methodology, we focus on assessing the attributes of Impact & Analytics. Traditionally, we (L&D) do an excellent job of evaluating at Level 1, and often at Level 2, to document successful learning interventions. In reality, those evaluations represent only POTENTIAL, since none of our participants has yet delivered any tangible business value or impact at Levels 3 & 4, which manifest only at Point-of-Work. Too often, Levels 3 & 4 are defined in general, non-specific terms, which becomes a real challenge when the business asks L&D for verifiable evidence of tangible impact. The secret sauce requires us to define a foundation of both current-state and future-state measures in advance of building any solution. Building that foundation demands upfront assessment of the analytics available, their accuracy, and their relevance to tangible impact visible only at Point-of-Work.
Current State Benchmarks
Figure 1 below illustrates five key areas of assessment, beginning with "where we are today" to establish Current State Benchmarks. As with any journey, even a GPS is useless without a point of origin. We know where we need to go…just as our stakeholders have an idea of what they want from our efforts…but we must establish current-state performance measures as a "before" benchmark if we hope to provide evidence of improvement in future-state outcomes in our "after" benchmarks at Levels 3 & 4. Neglecting this level of assessment dramatically limits our ability to show evidence that L&D contributed meaningfully to the bottom line…which, by the way, is not something typically expected of a cost center.
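To make the "before" and "after" benchmark idea concrete, here is a minimal sketch in Python. The KPI names and numbers are entirely invented for illustration; the point is simply that without a recorded current-state value, there is nothing to compute an improvement against.

```python
# Hypothetical current-state ("before") and future-state ("after") KPI
# benchmarks. KPI names and values are invented for illustration only.
current_state = {"avg_handle_time_min": 12.5, "first_call_resolution_pct": 68.0}
future_state = {"avg_handle_time_min": 9.8, "first_call_resolution_pct": 74.5}

def improvement(before: dict, after: dict) -> dict:
    """Percent change for each KPI present in both benchmarks."""
    return {
        kpi: round((after[kpi] - before[kpi]) / before[kpi] * 100, 1)
        for kpi in before
        if kpi in after
    }

deltas = improvement(current_state, future_state)
for kpi, pct in deltas.items():
    print(f"{kpi}: {pct:+.1f}%")
```

Note that a KPI with no "before" value simply drops out of the comparison, which is exactly the evidence gap this section warns about.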
Part of establishing current state is making sure the existing key performance indicators (KPIs) are appropriate measures and relevant to the performance targeted for improvement.
- Are there KPIs that are misaligned or missing?
- Are there KPIs that are irrelevant?
- Will new performance requirements have new KPIs for monitoring and tracking?
These questions must be answered when we craft future state measures to show evidence of our success.
In addition to identifying relevant measures, we must consider several things:
- Where in the ecosystem will these KPIs surface?
- Which systems are sources of data?
- How do we…can we…access them?
- Are there protocols governing who may access the data?
- When do we measure…how long…how often?
These are all elements of a Measurement Plan provided in the PWA methodology.
Utilization & Performance
Obviously, we want evidence of positive business outcomes from sustainable performance. Do not overlook the "sustainable" aspect of that statement. Measuring impact as a "snapshot" and calling it good when we see improvement does not confirm we have established a trend or pattern of sustainability over time. The Measurement Plan defines a beginning and an end (or an ongoing cadence) for measuring data: how often, by whom, and so on.
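One way to guard against the "snapshot" trap is to require the improvement to hold across several consecutive measurement periods before declaring a trend. A rough sketch, with thresholds and readings invented purely for illustration:

```python
# Hypothetical monthly KPI readings after an intervention. A single good
# month is only a snapshot; here we require the improvement to hold for
# N consecutive periods before calling it sustained.
baseline = 68.0                               # current-state benchmark
monthly_readings = [71.0, 73.5, 72.8, 74.1]   # invented post-intervention data

def is_sustained(readings: list[float], benchmark: float, periods: int = 3) -> bool:
    """True if the most recent `periods` readings all beat the benchmark."""
    recent = readings[-periods:]
    return len(recent) >= periods and all(r > benchmark for r in recent)

print(is_sustained(monthly_readings, baseline))      # → True (a trend)
print(is_sustained(monthly_readings[:1], baseline))  # → False (a snapshot)
```

The "periods" parameter is exactly the sort of decision (how long, how often) the Measurement Plan should settle up front.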
Also, from an L&D perspective, it is very helpful to know "who is using what" among the resources and performance support assets we provide. First, we learn quickly how often a particular asset is being used…but we do not actually know where in the workflow or why. The point, for L&D's benefit, is knowing what and who…and this enables follow-up investigation to find out where and why. Second, we need to establish a communication loop so knowledge workers can give content owners feedback on asset usability, relevance, and accuracy, and offer suggestions and ideas for improvement.
Utilization of systems and resource assets also serves to show levels of engagement, and engagement is essential to the longer-term goal of reaching full adoption and sustainability. The IT team has a primary interest in systems utilization, and IT also holds one of the richest sources of knowledge worker performance data: the call logs in the hands of the IT help desk. Those logs are usually categorized by "reason codes" or "tagged" in some way, which means we need to collaborate with IT during our PWA efforts. What better information could we have upfront than knowing the reasons behind calls to the help desk?
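Help desk logs tagged with reason codes can be summarized very simply to surface where knowledge workers are struggling. A sketch, with invented reason codes standing in for whatever taxonomy a real ITSM system uses:

```python
from collections import Counter

# Invented help desk call log entries. In practice these would come from
# the ITSM system's export, tagged with whatever reason codes IT uses.
call_log = [
    "password_reset", "crm_navigation", "crm_navigation",
    "report_export", "crm_navigation", "password_reset",
]

# Count calls per reason code; the top codes point at Point-of-Work
# friction worth investigating during the PWA.
top_reasons = Counter(call_log).most_common(2)
print(top_reasons)  # → [('crm_navigation', 3), ('password_reset', 2)]
```

Even this trivial tally tells us where to look first…before we have designed anything.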
Analysis & Reporting
The use of data, or "big data," is rapidly being integrated into workflows to drive decisions and actions all across the enterprise. I often find there is no shortage of analytic data; the challenges are more along the lines of:
- Does a measurement plan exist, and is it compatible with future state?
- What data is available, where is it, and should I utilize it?
- Do I have access rights to acquire the data?
- Can I do the analysis efficiently and effectively enough to enable informed decisions?
- How are results reported…in what format…how and to whom are the results sent?
- Do analytics feed a performance dashboard? Are performance dashboards desirable?
- Does the right technology exist to support multi-platform data capture?
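For the reporting questions above, even a plain CSV export can serve as a dashboard feed while fancier tooling is evaluated. A minimal sketch, with column names and values invented for illustration:

```python
import csv
import io

# Invented analysis results; in practice these would come out of the
# Measurement Plan's "before"/"after" comparison.
results = [
    {"kpi": "first_call_resolution_pct", "before": 68.0, "after": 74.5},
    {"kpi": "avg_handle_time_min", "before": 12.5, "after": 9.8},
]

# Write a CSV that a dashboard tool, or a stakeholder's spreadsheet,
# can consume directly.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["kpi", "before", "after"])
writer.writeheader()
writer.writerows(results)
report = buf.getvalue()
print(report)
```

The format matters less than deciding, in advance, who receives it, how often, and in what form.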
Certainly, there are more considerations in a Measurement Plan than what I've included in this post, but hopefully the message is clear…we must plan for post-training impact measurement BEFORE training and support solution design and development begin. Who knows, the solution may not be training at all, opening the door to making tangible performance impacts with non-training solutions applied and tracked at Point-of-Work. Either way, L&D needs to own, and be equipped to work effectively in, both venues.
As I implied earlier, we do not do a very good job of defining analytics and impact metrics for the performance attributes at Level 3 (observable behavior) or Level 4 (financial impact) at Point-of-Work. Why? Usually because we have not defined the measures related to performance at Point-of-Work. What do we do then? Cook the books, make assumptions, and maybe rely on a little voodoo…been there, done that! Never again.
In a recent LinkedIn thread, someone lamented how outdated Kirkpatrick's four levels of evaluation had become. Truthfully, I see only relying on the first half (Levels 1 & 2) as outdated. Achieving Potential is a good thing, but it does NOT pay the rent. Our business stakeholders are asking for improved and accelerated performance and, more importantly, verifiable evidence of outcomes that hit the bottom line. L&D is in the best position to provide this evidence, but only if we prepare by assessing the attributes at Point-of-Work that are relevant, accessible, and reportable as stakeholder-accepted evidence of impact.
Thanks again for reading. As always, I welcome thoughts, comments, ideas, and/or push-back as you are so moved.
Take good care!