Have You Targeted Your Training Investment Where It Matters Most? (A Crowdsourced Performance Assessment, Part I)

It can be a bittersweet thing when you get The Call. “Hey, we have realized that our new project managers are not managing engagement profitability as successfully as we would like. Can you help?” It might be time to make an investment in training.

Sounds great! We are all here to help our companies identify, target, and execute smart investments that drive performance.

The bittersweet part is what often comes next: “Oh, we really would like to roll this out at the end of next month. How’s that work for you?”

We All Know the Value of Performance Assessment … But

We all know we should conduct a performance assessment before we launch new development so we can accurately target a training investment. But all too often, we feel pressured to skip it. This project seems so straightforward, could it really be worth it? That project is so urgent, do we really want to spare the time? This content is so clear and well thought out, do we really need to assess the performance need it supports? This quarter, can we really spare the staff?

Think about your experience. Can you point to a project from the past year in which your company either spent too much on a new training program or got too little in results because the business and Learning & Development did not collaborate to align on the target business outcomes, the performance issues, their relative importance, and their root causes?

A Time-Intensive Approach

For years, I have been lucky. From time to time, when clients have run into important business problems with particularly knotty performance issues, they have asked my team to help do a performance analysis. What great opportunities to work on some really interesting issues!

We use an approach to performance analysis that’s grounded in Blanchard, Robinson, and Robinson’s Zap the Gaps process, seasoned with ideas from Wallace’s Lean ISD, and extended with some of our own refinements to quantify the analysis. As you would expect, we begin with the business goals: the current state, the desired state, and the gap between them. We then consider performance issues: again, current state, desired state, and gap. Finally, we consider the solution.
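
To make that three-step framing concrete, here is a minimal sketch of the current-state/desired-state/gap structure at the business and performance levels (the metric names and numbers are invented for illustration, not drawn from any engagement):

```python
from dataclasses import dataclass

@dataclass
class Gap:
    """Current state vs. desired state at one level of the analysis."""
    metric: str
    current: float
    desired: float

    @property
    def size(self) -> float:
        return self.desired - self.current

# Business-goal level (hypothetical numbers):
business = Gap("engagement margin (%)", current=12.0, desired=18.0)

# A performance level that supports that goal:
performance = Gap("on-time project re-forecasts (%)", current=55.0, desired=95.0)

print(business.size, performance.size)  # 6.0 40.0
```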

The heart of the process is the performance analysis. We capture performance issues in a “performance model.” This starts as a simple list of the tasks to be performed. Then we augment that list. For each task, we identify:

  • The performance gap between typical and peak performance,
  • The business impact of closing that gap, and
  • The ability to improve performance via a learning solution.

The resulting performance model lays out the case for investing in each task. When the performance gap, business impact, and ability to improve are all high, that’s a high-leverage task.
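
As a rough sketch of what the augmented model can look like (a hypothetical illustration; the task names and the high/medium/low scale are our assumptions, not part of the formal process):

```python
# A toy performance model: each task carries the three ratings above.
# The high/medium/low scale is an assumption for illustration.
performance_model = [
    {"task": "Scope the engagement", "gap": "high",   "impact": "high", "improvable": "high"},
    {"task": "Re-forecast monthly",  "gap": "medium", "impact": "high", "improvable": "high"},
    {"task": "File expense reports", "gap": "high",   "impact": "low",  "improvable": "high"},
]

# A high-leverage task rates high on all three dimensions.
high_leverage = [t["task"] for t in performance_model
                 if t["gap"] == t["impact"] == t["improvable"] == "high"]
print(high_leverage)  # ['Scope the engagement']
```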

Over the years, this has been some of my favorite work. It always creates value (including the occasional “Gee, maybe we don’t really need so much training after all!”). And it’s just fun to do. It’s always fascinating to take a deep dive into a new part of a business.

The problem is that it’s not cheap. The key reason is how we collect the performance analysis data: we interview or shadow a representative sample of peak and typical performers, plus the people who manage or coach them. Running such an engagement requires getting out into the field to spend quality time with many people. It can take 8 weeks or more, require travel to multiple locations, and so comes with a corresponding budget. When the audience is segmented (e.g., this business unit has different needs than that one, or this region has a different target population than that one), the costs climb even higher. Like the good china, this approach only gets dusted off for the fanciest occasions.

A Faster, Cheaper Model

Recently, we had the chance to take a fresh look at how to do this kind of work. Instead of gathering in-depth information from a representative sample of people, what if we gathered only a little information, but from a much broader group? In short, what if we crowdsourced the data gathering?

Here was our situation. We were helping a client scope out a new training investment to enable their sales force to make better decisions during the sales process. The sales force contained roughly 1,000 people, who typically took the role as a first job out of college, and it executed a complex business-to-business sale that ran in four-month cycles.

If we were to launch the training in time for the big cycle of the year, we simply did not have time for a traditional approach to performance assessment. Sound familiar? Well, necessity can be the mother of invention. We found that by crowdsourcing the analysis, we could shine a clear light on what mattered in a week and a half of calendar time and less than two days of staff time.

Their sales process consisted of six overarching phases. Across the phases, the core team identified 36 distinct decisions. The figure below shows an example of one decision (note: the data has been sanitized from the actual client engagement).

[Figure: an example decision with its performance gap, business impact, and ability-to-improve ratings (sanitized)]

For each decision, we wanted to gauge our ability to impact business results through a learning solution. We did this by capturing the three items listed above: performance gap, business impact, and ability to improve. Combining these gave us a total “ability to impact” measure, which we used to label each decision “high”, “medium”, or “low” priority.
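
One simple way to fold the three ratings into a single measure is sketched below (the multiplicative scoring and the cut points are our assumptions; the engagement’s actual weighting may have differed):

```python
SCORE = {"low": 1, "medium": 2, "high": 3}

def ability_to_impact(gap: str, impact: str, improvable: str) -> int:
    """Composite measure: the product of the three ratings (1-27)."""
    return SCORE[gap] * SCORE[impact] * SCORE[improvable]

def priority(score: int) -> str:
    # Cut points are assumptions for illustration.
    if score >= 18:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

score = ability_to_impact(gap="high", impact="high", improvable="medium")
print(score, priority(score))  # 18 high
```

Multiplying rather than averaging means a “low” on any one dimension sharply discounts the total, which matches the intuition that all three conditions must hold before training pays off.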

[Chart: decision priorities by phase of the sales process (sanitized)]

The chart above illustrates the data (again with the specifics sanitized). For example, the chart shows that there were 8 decisions in “Territory Planning”. Of those, 7 were low priority. On the other hand, “Pilot Testing” had 5 decisions. Of those, 4 were high priority. You can begin to see how the data drove the targeting of the learning solution.
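
A chart like that is just a tally of priority labels per phase. As a small illustration (the phases and labels below are invented, not the engagement’s data):

```python
from collections import Counter

# One (phase, priority) pair per decision; values invented for illustration.
decisions = [
    ("Territory Planning", "low"),  ("Territory Planning", "low"),
    ("Territory Planning", "low"),  ("Territory Planning", "medium"),
    ("Pilot Testing", "high"),      ("Pilot Testing", "high"),
    ("Pilot Testing", "high"),      ("Pilot Testing", "medium"),
]

tally = Counter(decisions)  # counts each (phase, priority) pair
for (phase, pri), n in sorted(tally.items()):
    print(f"{phase:18} {pri:7} {n}")
```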

In fact, we were able to go beyond this overall assessment of business priority. To determine more specifically what kind of learning solution would best improve performance, we dug into the critical decisions to identify the specific mistakes that detracted from performance. To gather the content we would need to remediate those mistakes, we captured each one in a consistent structure. Here is an example from daily planning.
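
In code form, one such record might look like the following sketch; the field names and the daily-planning content are our assumptions about such a structure, not the client’s actual template:

```python
from dataclasses import dataclass

@dataclass
class Mistake:
    """One observed mistake, tied to a specific decision."""
    decision: str         # which of the 36 decisions it belongs to
    description: str      # what typical performers get wrong
    consequence: str      # why it hurts results
    better_practice: str  # what peak performers do instead

example = Mistake(
    decision="Plan the day's call sequence",
    description="Routes the day by geography alone",
    consequence="High-value accounts get squeezed out of the schedule",
    better_practice="Sequence by account value first, then by geography",
)
```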

From our crowdsourced analysis, we identified over 200 such mistakes. What a wealth of detail to inform the learning solution!

With this information in hand, we were able to clearly target and detail the learning solution. When we stepped back, we found that the overall size of the learning solution remained about the same as originally planned. However, where we targeted that investment changed dramatically: from an original plan of “let’s give an hour to each phase,” we moved to “let’s put the most time into the high-value decisions.”

That’s what we achieved. So how did this crowdsourced approach actually work? Tune in to Part II for a step-by-step description of the process.

Blanchard, Kenneth H., Robinson, Dana Gaines, and Robinson, James C. Zap the Gaps! Target Higher Performance and Achieve It! William Morrow, 2008.
Wallace, Guy. Lean-ISD: Instructional Systems Design That Makes a Difference. CADDI, 1999.

Download Kineo's NEW Blended Learning Guides. 


You may also be interested in...

Leave us your comments...