March 13, 2019 by Gary G. Wise
I was on my third cup of coffee as I slogged through the emails that had landed overnight. Three of them had attachments from a colleague tasked with curating content for my project team. Eight other team members were on the distribution list for two of the emails; the third went to the entire department of thirty-two colleagues and peers. The attachments on the first two emails were relevant articles aligned with our project, each representing a 15-20 minute read; the third, sent to the whole department, was also good information but worthy of my “read later” file. I printed the “read later” document (knowing that if I simply saved it to my hard drive, I’d likely never find it again) and popped it onto the top of the “read later” stack I had previously neglected to…read later. Sound familiar?
How much curated content never reaches the point (or the right person) at which it delivers the knowledge, wisdom, and insights essential for critical thinking and informed decision-making? How much productivity is diverted to non-productive activity by work that was supposed to accelerate productivity? Content curation is a necessary evil that can quickly deliver a tsunami of non-productive time if it is not optimized early. To “read later” is not a sustainable practice, no matter how well-intended.
When you consider that roughly 2% of the bulk knowledge embedded within curated content (according to Pandexio) is actually extractable as wisdom, supportive of critical thinking, and essential for establishing actionable insights, it becomes clear that the act of curation alone is only part of the cost of getting to that 2%. What if the curated content were itself curated for the relevant insights…the embedded, actionable 2% of knowledge and wisdom…before being forwarded?
How much productive time would be protected if everyone were no longer tasked with rereading the original curations to extract the “right insights”? Do the math…just two documents…times eight team members…times a 20-minute read each…and that counts only the project-relevant curated content. That is 2 × 8 × 20 = 320 minutes, more than five hours of productive time spent reading what had already been curated. Plus…what guarantee do we have that the eight readers will extract the same 2%…the same “right insights”? What if they find other relevant insights that their teammates miss? How do they capture those new insights and share them with the rest of the team without triggering yet another reading? How many times will those same curated documents get re-forwarded, perpetuating a “rinse & repeat” cycle that distracts another knowledge worker from productive time? Sure, they’re gaining positive knowledge and wisdom and forming their own insights, but at what cost?
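The back-of-the-envelope math above generalizes to any team size or reading load. A minimal sketch, using the scenario’s numbers (two documents, eight readers, twenty minutes each; these are illustrative figures from the anecdote, not measured data):

```python
def reread_cost_minutes(docs: int, readers: int, minutes_per_read: int) -> int:
    """Total productive minutes a team spends rereading already-curated content."""
    return docs * readers * minutes_per_read

# The scenario above: 2 project-relevant documents, 8 readers, 20 minutes each.
total = reread_cost_minutes(docs=2, readers=8, minutes_per_read=20)
print(total, "minutes, about", round(total / 60, 1), "hours")  # 320 minutes, about 5.3 hours
```

Scale the same formula to the department-wide email (one document, thirty-two recipients) and the cost of “read later” piles becomes even harder to ignore.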
This scenario is one I’ve lived over and over in previous corporate gigs. Keep in mind this example, though fictitious, is not unlike our day-to-day workflows as knowledge workers. If accelerating productivity is something we really seek, I’m convinced part of the solution includes eliminating, or at least minimizing, the non-productive cycles we spend in pursuit of generating actual productivity.
One such solution is increasing Speed-To-Insight using a cloud-based Insight Curation Engine (ICE) technology like Pandexio to accomplish something I’ve referred to as Curation 2.0. It goes something like this:
1. Extract the relevant 2% from bulk knowledge
2. Define the actionable insight(s) in 140 characters
3. Clarify the 140-character insight with a free-text abstract note
4. Tag the actionable insight with multiple keywords
5. Group the insights by Topic (topics are also tagged with multiple keywords)
6. Save the insight in a searchable Digital Brain accessible to a curator-defined recipient, a group distribution list, or the whole enterprise
7. Attach the original source document to the insight as an optional read, rather than making rereading the document the primary task
8. Enable recipients to capture their own insights and share them by repeating steps 1 through 7
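The workflow above implies a simple data model: a short insight, a clarifying note, keyword tags, a topic, and an optional link back to the source, all saved into a searchable store. A minimal sketch of that model follows; Pandexio’s actual API is not shown here, so the names `Insight` and `DigitalBrain` and every field are illustrative assumptions, not the product’s real interface:

```python
from __future__ import annotations
from dataclasses import dataclass, field

MAX_SUMMARY_CHARS = 140  # the 140-character insight limit from the workflow

@dataclass
class Insight:
    summary: str                    # step 2: the 140-character actionable insight
    abstract: str                   # step 3: free-text note clarifying the summary
    tags: list[str]                 # step 4: keywords for search
    topic: str                      # step 5: topic grouping
    source_doc: str | None = None   # step 7: optional link to the original document

    def __post_init__(self) -> None:
        if len(self.summary) > MAX_SUMMARY_CHARS:
            raise ValueError(f"summary exceeds {MAX_SUMMARY_CHARS} characters")

class DigitalBrain:
    """Step 6: a minimal keyword-searchable store of shared insights."""

    def __init__(self) -> None:
        self.insights: list[Insight] = []

    def save(self, insight: Insight) -> None:
        self.insights.append(insight)

    def search(self, keyword: str) -> list[Insight]:
        # Match the keyword against an insight's tags or its topic.
        kw = keyword.lower()
        return [i for i in self.insights
                if kw in (t.lower() for t in i.tags) or kw == i.topic.lower()]

# Usage: a curator saves one insight; a recipient searches by tag instead of rereading.
brain = DigitalBrain()
brain.save(Insight(
    summary="Curate the embedded 2% before forwarding bulk content.",
    abstract="Extract actionable insights first so recipients are not tasked with rereading.",
    tags=["curation", "productivity"],
    topic="Curation 2.0",
    source_doc="project-article.pdf",
))
print(len(brain.search("curation")))  # 1
```

The point of the sketch is the shape of the workflow, not the implementation: the source document becomes an optional attachment on the insight, so search hits surface the 2% directly.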
There’s little doubt that enabling user-generated knowledge is a rapidly growing necessity in sustaining a dynamic Learning Performance Ecosystem. That said, I’m not suggesting ICE technology is exclusive to the L&D function, though I wish I had access to these capabilities while researching bulk knowledge sources in support of course content and performance support solution design, development and delivery in previous lives.
But there’s more…there may be a larger audience scattered across the ecosystem…boomers with hard drives, heads, and hearts stuffed full of knowledge, hard-earned wisdom, and actionable insights, poised to abandon ship. Yes, those same souls are about to retire and walk right out the door with all that knowledge, wisdom, and insight to go fishing forever. Would it be more cost-effective to curate that walking knowledge archive and capture their wisdom and insights now, rather than return to Go and attempt to reacquire what was once in-house?
Hmmm…so maybe it’s not all about curating new content…maybe it’s also about capturing knowledge as brain-based intellectual property while it’s still property. Yikes, a bear!
For the sake of transparency, I do not work for Pandexio; however, I’ve been equipped with their ICE product to construct a sample Hyperpoint for my personal obsession and coaching business called POINT-of-WORK SOLUTION DISCIPLINE. I plan to formally announce free access by invitation only at Learning Solutions 2019 in my breakout session #809 – Performance Support: Enabling Productivity Acceleration at Point-of-Work on March 27th at 4:00PM ET.
Seeing a Hyperpoint live and in captivity for a specific application is a preferred way for me to discern what it can do. If that approach works for you as well, send me an email addy, and I’ll add you to the list to receive an access link for the Hyperpoint once the site goes live. I also welcome deeper discussions if the need surfaces.
Thanks for reading and take good care!