I just had breakfast with a good friend and colleague, and the usual thing happened: the conversation shifted to performance challenges around business systems implementations. He is a process improvement engineer blessed with the curse of a training background followed by deep business process improvement experience. While that sounds like a contradiction in terms, it is a blessing/curse we both share. We both know and love the profession of training, but because of that background, we both know where training can never be effective – system implementation and adoption.
One of the most-read posts I’ve made in the last five years of blogging, “Deployment vs. Implementation: Is There a Difference?”, may provide a deeper dive for you, as I will not rehash everything discussed there in this post. Suffice it to say, the target objective for Training…and IT…is often the same…survive GoLive [deployment]. The process of Implementation is all too often defaulted to the Help Desk function. Adoption is another process often overlooked, and when it is not fully realized, it is…well…seen as the result of poor training of the end-user population. Too often, precious resource dollars are then invested to attack the symptom of poor adoption with additional training. Trust me…been there…done that.
Multiple ERP deployments endured in my past validate that this scenario replays in many organizations whenever new enterprise business applications are introduced into operational workflows. From the title, one could infer that end-user precision in the workflow begets exemplary performance. Granted, exemplary performance is what we’re all chasing, but performance…exemplary or less than…is symptomatic of something other than training. Training often takes the hit, but even training…excellent or less than…is symptomatic of something else too – poor workflow documentation.
Now this is where I would typically launch into my familiar rant on why embedded performance support should be integrated into the introduction of any enterprise system. Read this earlier post to go deeper on this topic: “Embedded Performance Support & Scaling to Successful Implementation”.
I hate to use language that smacks of “absolutes”, but experience motivates me to say this…fail to nail down workflows and document them precisely, and everything downstream from that suffers. That includes training development and delivery, usually dependent upon those painful and useless simulations. I’m not going to go off on simulations in this post, as I’m attempting to stay on the rails and offer something rant-free…for a change.
Who does the documentation of workflow in your organization? Often, it is someone within the IT project team who is mapping a path through workflow transactions. It is tedious work, and nobody looks forward to drawing the short straw. The process is complicated by the instability of the system as countless customizations are applied to the “out-of-the-box” software application. Meanwhile, the Training team is tucked away in their silo waiting for screens to stabilize [a.k.a. come over the wall from IT] so they can attack the “final” system interface with Captivate to begin training development.
Training works from IT’s output…a stable system…and then, working with system SMEs, creates training to align with the workflow. Trust me once again…the system is never stable. Consider these scenarios posed as questions:
- How many times does the workflow change before GoLive once it has been deemed “final”?
- How many times after GoLive is the system workflow tweaked to optimize the flow?
- How often is the system updated due to changes to the workflow?
- How often do patches or version updates to the application cause workflows to change?
When you consider the answers to those questions, you can see why I said the “system is never stable” for very long. And when workflows change, the simulations used for training need to be revised and updated. Have you ever updated a Captivate simulation when something in the middle of a 10-minute narration changes? It’s ugly, it hurts, and it takes a lot of time. And when you get right down to it, we’re talking redundant effort, primarily because the workflow documentation is not based upon single-source documents. Too many versions create redundancy and inconsistency…and that trickles down to mistakes by end-users…and calls to the Help Desk…and entry error investigations and corrections…and the list goes on.
What is needed is an embedded performance support [EPS] workflow documentation methodology that authors a single-source document that serves as:
- A master workflow record for IT system validation efforts [created once by IT SMEs]
- Source content for contextual performance support [PS] asset development
- Narration uncoupled from PS content so it can be edited on the fly [changes, insertions, etc.]
- Reusable source content for experiential training scenarios [simulations…koff]
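To make the single-source idea concrete, here is a minimal sketch in Python of a workflow document whose steps carry the action, the context, and uncoupled narration, and which renders each of the outputs above from one master record. All of the names and the structure are my own illustration, not any vendor’s actual schema.

```python
# Hypothetical single-source workflow document model; illustrative only.
from dataclasses import dataclass, field

@dataclass
class WorkflowStep:
    step_id: str
    action: str              # what the user does on screen
    narration: str = ""      # uncoupled narration; editable without touching the step
    context: str = ""        # why the step matters / where the data comes from

@dataclass
class WorkflowDoc:
    name: str
    steps: list = field(default_factory=list)

    def master_record(self):
        """Render for IT system validation: every step, action only."""
        return [(s.step_id, s.action) for s in self.steps]

    def ps_asset(self, step_id):
        """Render one step as a contextual performance support asset."""
        s = next(s for s in self.steps if s.step_id == step_id)
        return f"{s.action}\nWhy: {s.context}"

    def training_script(self):
        """Re-use the same steps as a narrated training scenario."""
        return "\n".join(f"{s.action} -- {s.narration}" for s in self.steps)

doc = WorkflowDoc("Time & Billing Entry")
doc.steps.append(WorkflowStep("TB-01", "Open the billing module",
                              narration="Start from the home dashboard.",
                              context="Billing entries post nightly."))
print(doc.master_record())
```

Because every rendering draws from the same steps, a workflow change is made once and flows to the validation record, the PS assets, and the training script together.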
So…what is my point? Two views come to mind:
One – If you are battling performance issues with a business application that has already been deployed, do some digging and determine the root cause(s) before throwing good money after bad on additional training.
Two – If you are preparing to deploy a new business application, invest the resources on the front end to produce single-source documentation output. Finding the right vendor’s EPS authoring tool can reduce your workflow documentation costs by 70% or more…not to mention the time no longer required by Training to rebuild those ridiculous simulations as a separate set of documents.
By now you’ve probably concluded that I despise simulations. To a point, I will admit, they do serve a purpose, but as for post-training retention value…pfffft! Maybe it’s me, but sitting back to “watch it run” or “try it” in a “click this, then this, then that” exercise…with no contextual support as to why…or where I find the data I need to plug in…or what happens if…I don’t think simulations give us the return we need for the hours spent developing them.
Instead, give me the option to click an icon from WITHIN the business application workflow at MY MOMENT OF NEED, requiring no more than two clicks and ten seconds of my time to access contextual performance support specific to my role and the task at hand.
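The “two clicks at the moment of need” idea can be sketched as nothing more than a lookup keyed by role and task. The function and asset names below are hypothetical stand-ins for the single in-app help click, not any real EPS product’s API.

```python
# Hypothetical contextual performance support lookup, keyed by (role, task).
PS_ASSETS = {
    ("billing_clerk", "enter_invoice"):
        "Pull the PO number from the order header before saving.",
    ("billing_clerk", "apply_credit"):
        "Credits over the threshold route to a supervisor for approval.",
}

def help_at_moment_of_need(role, task):
    """Stand-in for the single in-app help click: return guidance specific
    to the user's role and current task, or fall back to the master record."""
    return PS_ASSETS.get((role, task), "See the master workflow record.")

print(help_at_moment_of_need("billing_clerk", "enter_invoice"))
```

The point of the sketch is the key: support is resolved by who I am and what I am doing right now, inside the application, rather than by searching a course catalog somewhere else.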
Additionally, in the post-training environment where we find ourselves in actual live application workflows, we do NOT have the time to leave the system and log into the LMS in an attempt to find a contextual simulation object in order to overcome a moment of need. I’ve been in too many business applications that will time-out on you if you delay too long…or drop/lose the data you’ve entered if you leave the application to query a simulation resident someplace else. This is why I say that simulations are better oriented toward training than actual workflow applications.
Addressing moments of need within the workflow is what EPS delivers, but that does not happen by building training simulations…it happens through intentional design that uses single-source workflow documentation. Too many times during discovery on a training request, I find that workflows are corrupted and/or not aligned with the separate documents used as training content. We talk about wanting to adopt agile methods, and then we perpetuate redundant effort by neglecting the huge benefits of single-source documented workflows.
Gary G. Wise
Workforce Performance Advocate, Coach, Speaker
gdogwise@gmail.com
(317) 437-2555
Web: Living In Learning
LinkedIn
Gary, you have hit the nail on the head – almost! The system implementation you discuss in your excellent blog piece is most likely only one of many systems that the users need to use in order to do their jobs. The support they need for doing their jobs should not be embedded in one of those applications. It needs to sit outside of all the applications involved in the workflow and be instantly accessible whenever needed, regardless of which application might have been in use at the time (that is, regardless of where the user was up to in the end-to-end workflow when they realized they needed guidance). At Panviva we call this ‘system-agnostic’, end-to-end process support ‘Business Process Guidance’.
Regards,
Ted.
Hey Ted! Thanks for reading and leaving your comments. I agree totally with the “agnostic” approach where the EPSS is not embedded within a specific system application; rather, it is embedded within an organization’s discipline and ultimately covers multiple systems. I think we are in the same book, just on different pages. You are spot on with the end-to-end mindset that may span multiple systems; you’re on a page deeper in the same book. I’m at the beginning of that book, where EPS and the technology are not as far along the maturity scale in terms of adoption. I would also go so far as to say that an end-to-end process may not only use multiple systems; there may be process components that have nothing to do with any system whatsoever. At the fifth level of the maturity scale, EPS would be as you describe it. I’m finding folks are somewhere between levels one and two, and the call to action is “start small and scale”. A first proof of concept may be limited to a time and billing module in a much larger application. As adoption spreads, we approach what you’ve described and what Panviva does so well with BPG. Thanks again for reading and sharing, Ted!