It’s Sunday morning, and I’m in Orlando at the front end of the Learning 2013 conference once again. Elliott Masie and his Consortium have a knack for putting on a good conference every November for learning and development professionals, and I anticipate this year’s conference being no exception. As with earlier conferences that I’ve spoken at this year, there is once again a lot of buzz around the concept of AGILE. AGILE what? That is a key question to ask because from what I’m seeing, not all AGILE instructional design approaches are scoped equally when it comes to learning and performance solutions.
With that setup, I could easily walk a thin line of promoting one approach over another. At the same time, I recognize that each AGILE methodology has its own unique sweet spot. The real question…does that sweet spot match up to your reasons, or better yet, your needs, for pursuing AGILE?
My own experience with AGILE as a methodology comes from software development projects and countless IT efforts involving software development and/or customizations related to deployments of enterprise systems. Not being a software developer, my view of AGILE is skewed by the perspective of a Training team member on the business application side of software development. Through participating in multiple Scrums and Sprints, my view of AGILE reflects the idea of building and testing small, stand-alone chunks of development code that were uploaded and stressed by our attempts to either validate that each chunk worked as advertised…or to break it. When something broke or did not function appropriately, we would follow an iterative “rinse and repeat” process until that particular chunk was nailed down tight.
But what about those times when we can be AGILE and there is no software being developed? What about the need to bring performers on a particular business application up to a state of readiness? Methinks I will run across those scenarios much more often. That does not diminish the importance of being linked with project development teams during software development; I’m just saying that whatever AGILE method you pursue, make sure it fits where you will find yourself most often when trying to support BOTH learners and performers.
AGILE for training design and development follows that same principle where the “chunks” are training components that can be rapidly designed and developed for SME or end-user pilot review to gather feedback and then wade through iteration cycles until it’s nailed down tight. Each “chunk” or object could be an entire module or even a smaller component of a module, like a knowledge check, an interaction, or any other piece in the training solution that a SME or a small pilot group could review, test functionality, and provide feedback. Let the iterations begin. SMEs will choose a chunk-level review over an entire storyboard any day.
The team I was on in a previous life followed this training development process before the label of AGILE became the popular rage. We did it because there was a need for rapid delivery of learning, and we went so far as to prioritize chunks to meet the most important learning objectives first. And it worked. But here’s the “BUT”…it was still training. It was only training, and that focus is what prompted the title of this post. My advice is this: choose your AGILE approach based not only on “what” needs to be agile, but also on “who” needs to be agile, “where” they need to be agile, and “when” they need to be agile, and I guarantee you will find yourself in the downstream, post-training work context. Not only will you be in the work context, you will be faced with driving performance with an acute need for it to be executed flawlessly. This downstream work context defines just how agile your AGILE methodology really is…or whether it is even scoped to address the work context.
WHAT or WHO?
AGILE can support rapid development of training content, and there’s nothing wrong with that if getting training assets in hand quickly is your primary objective. To me, though, that’s a “what”, not a “who”. I suppose the “who” in that case could be the Learner. Again, nothing wrong with getting learning into the right hands quickly, but that represents only one step on a longer journey to competency. My position is that if we are only supporting the Learner with learning content, we are only supporting a fraction of the learning ecosystem. The ecosystem concept implies a larger, more holistic continuum of learning and performance that we need to support. I’ve written about that as the Learner-to-Performer Continuum, and that continuum tracks with the edge-to-edge domain of a dynamic learning and performance ecosystem.
According to research Josh Bersin shared a few years ago, Learners spend roughly 5% of their work-year in some form of formal learning [Training…in any blend you can think of…]. My question comes from that very statistic: “What about the other 95%?” What about the 95% of the ecosystem where Learners morph into Performers and are at the point of work, performing without a net, with all the risk, business liability, and potential for tangible loss riding on their ability to perform flawlessly in their respective work contexts? Yeah…so what about that 95%? Training was never designed, intended, nor scoped to go downstream into the work context. An AGILE methodology that is limited to training design will not address the most business-critical part of the ecosystem.
That is precisely why I strongly recommend adopting an AGILE methodology that is grounded in the Performer in the downstream, post-training work context. To me, this is the “Who” [the Performer @ the point of work] that matters even more than the “What” of rapid training program design and development.
Dr. Conrad Gottfredson developed “Five Moments of Need” that I feel do the best job of covering the learning and performance ecosystem from edge-to-edge. The moments include:
- Learning something new or for the first time [Training]
- Learning more of something [Training]
- Trying to remember or APPLY [Performer Support]
- When things change [Performer Support]
- When things break or fail [Performer Support]
These five moments cover the ecosystem edge-to-edge: the Learner is supported through the first two moments by Training, while the Performer is supported in the last three, when Performers are downstream in their post-training work context and the solutions that fit best are Performer Support assets…not Training assets.
When you weigh the tangible business value [or potential loss] during the last three moments of need, I submit that THIS is where AGILITY needs to manifest. That said, I would only want to adopt an AGILE methodology that starts at Moment #3 with a few analysis tools that enable critical discovery focused on performance over knowledge transfer. Again, nothing wrong with transferring knowledge [Training], but if we address what is required by a Performer at Moment #3, the moment of APPLY, then the actual Training I may need to develop suddenly becomes smaller, secondary, or in some cases, unnecessary.
This is especially true with enterprise system deployments. The focus shifts away from learning the new system from A to Z and then forgetting how to execute on B through Z three weeks later when the thing goes live [been there, done that]. The key emphasis with the right brand of AGILE now becomes role-specific, task-centric Performer Support [PS] on the front end, as opposed to the traditional application of performance support job aids as a post-training value-add. An AGILE methodology needs to start with APPLY by integrating discovery gained through three essential analyses:
- Rapid Task Analysis [RTA] – defines workflow by role, task, step, process, and concept
- Critical Skills Analysis – assigns priority across the RTA results to parse out PS from Training
- Audience Analysis – identifies who has the responsibility to do what, by role
The beauty of this AGILE approach is that it is not additive to your design and development efforts…IF…we can get beyond the existing training development paradigm that we’ve all grown up internalizing. The AGILE result renders a Learning Experience and Performance [LEaP] plan that intentionally maps the Performer Support requirements before considering what formal learning needs to be addressed. Instead of using a limited-scope AGILE methodology to accommodate rapid training development, we first map workflow performance requirements that enable design and development decisions specific to creating PS assets, then iterate on and evaluate feedback from the point of work. In reality, those feedback results define for us what we do NOT need to train; hence, this version of AGILE is, as I described, not an additive proposition. In many cases, Training becomes smaller…if it is needed at all. Emphasis is truly placed on the “other 95%”.
So…the whole point of this post is to highlight the fact that not all AGILE methodologies are scoped to address an edge-to-edge learning and performance ecosystem. Choosing to adopt an AGILE method requires considering the scope of the environment you are tasked to support. Don’t let rapid development of training become the single driving directive behind your choice. This decision is not so much “if” you pursue AGILE as it is “when”. We have little choice about becoming more AGILE in what we do, but if we stop at rapid development of training assets, we are only perpetuating the limited scope of the 5% slice of the ecosystem…faster than before, perhaps, but still just training. Take a look at the “other 95%” and consider the potential for impacting flawless performance @ the point of work as the cornerstone of your strategy. From what I’ve experienced, sustained capability trumps knowledge transfer every time.