Have You Targeted Your Training Investment Where It Matters Most? (Part II)

This is Part II of a two-part description of a rapid, crowdsourced approach to conducting performance assessment. Part I described the problem we faced and what we achieved. Here, we describe how we executed the approach.

Where We Left Off

In Part I, we described how we wanted to conduct a performance assessment to help size and target a large training investment. The business goal of the investment was to enable a large sales team to accelerate time to proficiency and grow revenue. The performance gap, broadly stated, was that reps needed to make better decisions throughout a complex six-phase sales process. But which decisions mattered most? What issues caused underperformance? We wanted to find out so we could focus training where it would deliver the largest bang for the buck.

A traditional approach to performance analysis would require us to get into the field and interview or shadow a broad sample of sales staff and their coaches and managers. However, we had neither the calendar time nor the budget required for such an approach. So, instead of interviewing a small number of people in depth, we surveyed a large number, asking each for only a little information. In short, we crowdsourced our performance analysis.

A Bird’s Eye View

We ran the process in six steps:

  1. Generated a task model for the sales process. We organized the model in two layers. A top layer identified the six phases. Then within each phase, we identified a set of specific decisions to be made.
  2. Identified field staff to survey who had insight into sales rep performance. To limit the time we asked of them, we allocated each person to answer questions about only one of the six phases.
  3. Constructed surveys, one for each phase. The structure of the survey was critical. It allowed us to size the “business impact” and the “performance gap” for each decision. It also identified common mistakes made for each decision, which allowed us to later estimate the “ability to improve” for each decision. (See Part I for a discussion of these metrics.)
  4. Ran the surveys and collected the responses.
  5. Consolidated and summarized the results by creating an overall “relative expected benefit” metric for training each decision (simply the product of business impact × performance gap).
  6. Shared the results with the core team to give them the opportunity to comment and, if desired, override the data based on their judgment. (They chose not to.)

Let’s go over each step.

Step 1. Generate a Task Model

We started with a list of the six phases in the sales process. Our next step was to break that down into a task model. What specific decisions did reps face in each phase?

We completed that work within our “core team.” The team included two respected subject matter experts as well as a client project manager. The client project manager facilitated a working session in which she walked the subject matter experts through each phase, asking them to lay out the decisions sales reps faced in that phase, sometimes using example sales situations to probe. The workshop generated a total of 36 decisions across the 6 phases.

Step 2. Identify Field Staff

We next identified field staff who could provide trusted insight into performance and impact. Here’s what we knew:

  - The client sales organization had approximately 1,000 sales reps and 150 sales managers.
  - Experienced sales managers would be capable of answering the survey.
  - We wished to get at least 3 responses to our survey for each phase.
  - We wished to limit each respondent’s time by asking them to respond to only one phase.
  - We anticipated a high response rate: at least 50%.

Running the math, that meant we needed to send the survey to a total of 36 respondents: 3 responses for each of 6 phases is 18 responses, and at a 50% return rate we needed to send twice that many surveys. Given the size of the sales team, no problem there. We also had an ace in the hole: the sales team contained roughly a dozen sales coaches who were even better able to respond and were virtually certain to respond. In short, we faced no shortage of candidates.
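For readers who want to adapt the arithmetic, here is a minimal sketch of the sample-size calculation. The numbers mirror those above; adjust them for your own survey.

```python
# A minimal sketch of the sample-size arithmetic described above.
phases = 6
responses_needed_per_phase = 3
expected_response_rate = 0.5  # we anticipated at least 50%

responses_needed = phases * responses_needed_per_phase            # 18
surveys_to_send = int(responses_needed / expected_response_rate)  # gross up for non-response

print(surveys_to_send)  # -> 36
```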

Again, the work fell to the client project manager to identify specific individuals to survey for each phase. She created a roster that, within each phase, included a mix of regions and a mix of sales managers and sales coaches.

Step 3. Construct the Survey

We wanted to ask for no more than 15 minutes of each respondent’s time. Unless we asked the right questions, we would not get the level of concrete detail we wanted. But each phase contained somewhere between four and eight decisions. How could we get useful detail in limited time?

We broke the survey into three parts.

A summary rating of the impact of each decision on business results

(Estimated 3 minutes to complete)

Our sponsor had identified upfront that our business goals were to grow revenue for all reps and accelerate time to proficiency for new reps. To keep our analysis focused, we considered only the goal of revenue growth. (Our reasoning for this simplifying assumption: we felt we would also accelerate time to proficiency if (a) our training focused on the key decisions that drive revenue, and (b) we provided a clear explanation of each sales decision that described what the decision was, why it mattered, and laid out a simple process for making it. We assumed the explanations would take little training time.)

We asked respondents to rate the impact of each decision on revenue. The key to how we phrased the question was to make it concrete: we asked participants to think of specific people with whom they had experience and to rate the impact of those reps’ performance gaps.

We also gave respondents the opportunity to suggest new decisions that we may have overlooked. While we received a few suggestions, the core team deemed them to be either included in the existing decisions or not important enough to incorporate.

A summary rating of the performance gap for each decision

(Estimated 3 minutes to complete)

We then asked respondents to tell us the size of the performance gap. Our perspective was that we were unlikely to train peak performers to do better; rather, our goal was to reduce the gap between peak and average performers.


A listing for each decision of common and critical mistakes

(Estimated 10 minutes to complete)

This final section of the survey consumed the most time. We asked respondents to consider each decision separately. For each decision, what are the common and critical mistakes that reps make?

We asked respondents to identify one or two such mistakes per decision. To ensure quality, we provided a format.

Step 4. Run the Survey

We ran the survey. The process was straightforward:

  - Our sponsor sent an email to all participants explaining our goals and requesting their participation.
  - We then followed up with the survey itself.
  - Each day, we sent a tickler to those who had not yet responded (a sketch of that logic appears below).
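The daily tickler is easy to automate. Here is a hypothetical sketch; the addresses and the send_reminder helper are illustrative stand-ins, not the actual tooling we used.

```python
# A hypothetical sketch of the daily "tickler" logic. The addresses
# and send_reminder are illustrative stand-ins, not our actual tooling.

invited = {"alice@example.com", "bob@example.com", "carol@example.com"}
responded = {"alice@example.com"}

def send_reminder(email: str) -> None:
    print(f"Reminder sent to {email}")  # stand-in for the real email step

# Run once per day while the 3-work-day survey window is open.
for person in sorted(invited - responded):
    send_reminder(person)
```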

We limited the time to respond to only 3 work days. We felt that participants would either respond right away or not at all. Even so, well over half the people we solicited responded. We attributed that to the good reputation of the training group, the direct request from the sponsor, and the fact that we asked for only 15 minutes of time.

Because of the short time we left the survey open, we could run the entire performance analysis in just a week and a half.

Step 5. Summarize Results Using “Relative Expected Benefit”

At this point, we had lots of data. But we did not really have a simple summary of it that we could use to make decisions. We wanted to boil it down.

To do that, we returned to the idea that through our performance analysis, we were creating a business case for investing in training at the level of each individual decision. How strong was that case?

The expected benefit of providing training on a particular decision could be calculated this way:

Expected benefit = Business Impact × Performance Gap × Ability to Improve

To see why, let’s walk through a fictitious example. Consider the decision of which accounts to categorize as “high priority” (meaning those for which a rep would invest in developing a specific account strategy). Imagine we had captured the following estimates about that decision:

  - Reps have, on average, 50 accounts.
  - Each time a rep makes a bad decision, it costs, on average, $10k in lost sales.
  - Peak performers make bad decisions 10% of the time.
  - Average performers make bad decisions 25% of the time.
  - Given the types of mistakes that cause bad decisions, we could provide training that would reduce the gap between peak and average performers by a third.

Now, of course, we did not have all of this detail. But if we had, we could have calculated the expected benefit of providing training on that decision:

Expected benefit = $25k in lost sales recovered per rep, where:

Business Impact: $500k total lost sales at stake per rep (50 accounts × $10k lost sales)

× Performance Gap: 15% gap between average and peak performers (25% − 10%)

× Ability to Improve: able to eliminate 33% of the gap
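In code, the same arithmetic looks like this: a minimal sketch using the fictitious numbers above.

```python
# A minimal sketch of the expected-benefit formula, using the
# fictitious numbers from the example above.

accounts_per_rep = 50
cost_per_bad_decision = 10_000   # $10k in lost sales per bad decision
peak_error_rate = 0.10           # peak performers: bad decision 10% of the time
average_error_rate = 0.25        # average performers: 25% of the time
ability_to_improve = 1 / 3       # training closes a third of the gap

business_impact = accounts_per_rep * cost_per_bad_decision  # $500k at stake per rep
performance_gap = average_error_rate - peak_error_rate      # 15% gap
expected_benefit = business_impact * performance_gap * ability_to_improve

print(f"${expected_benefit:,.0f} per rep")  # -> $25,000 per rep
```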

So, that’s how the formula works. The problem: our rapid analysis did not actually provide such nice, concrete numbers. What we did know was the relative impact of the various decisions. Each respondent rated the business impact of each decision relative to the other decisions, and did the same for the performance gap. So, while we did not have actual dollars or percentages, we had relative rankings, and we could use the same math to derive a “relative expected benefit” of providing training for each decision.

Finally, we did not know the relative “ability to improve.” Here, we made the simplifying assumption that we could close the gap equally well for every decision. That allowed us to calculate a “relative expected benefit” for each decision by multiplying the size of the business impact by the size of the performance gap.

Once we calculated the relative expected benefit, we categorized the results as “High,” “Medium,” and “Low” benefit. This was relatively easy to do: the data showed some clear break points. That gave us a clear overview of the decisions and, hence, of where to invest training time.
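To make the rollup concrete, here is a hedged sketch of how the consolidation might look in code. The decision names, ratings, and break points are invented for illustration; assume each rating is the mean of respondents’ 1–5 ratings for that decision.

```python
# A sketch of the "relative expected benefit" rollup.
# Ratings are made-up means of 1-5 survey responses, not real data.

decisions = {
    # decision: (business_impact_rating, performance_gap_rating)
    "Prioritize accounts": (4.7, 3.9),
    "Qualify opportunity": (3.1, 2.5),
    "Select proof points": (2.0, 1.8),
}

# Relative expected benefit = business impact x performance gap.
scored = {name: impact * gap for name, (impact, gap) in decisions.items()}

# Bucket using break points chosen by eyeballing the distribution,
# as the text describes. These thresholds are assumptions.
def bucket(benefit: float) -> str:
    if benefit >= 12:
        return "High"
    if benefit >= 6:
        return "Medium"
    return "Low"

for name, benefit in sorted(scored.items(), key=lambda kv: -kv[1]):
    print(f"{name:22s} {benefit:5.1f}  {bucket(benefit)}")
```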

[Figure: bar chart summarizing the relative expected benefit (High, Medium, Low) of training each decision]

Step 6. Share the Results

We provided the results back to the full core team. This gave them a layered understanding of the performance issues. They could start with the high-level bar chart shown above, then dig into the decisions in each phase and see where the numbers came from. They could also see the shape of the various decisions: some had high performance gaps but low business impact; others were really important but generally made well. If the team wanted to further understand and challenge the data, they could dig into the specific mistakes identified for each decision.

Since this was our first time using a rapid, crowdsourced approach to deriving the data, we gave the core team the opportunity to editorialize and adjust it. Interestingly, once they had a chance to examine the data, they felt confident in it and chose not to adjust it.

Looking Backward at the Experience

Our whole goal with this effort was to create a “right sized” training intervention that tightly targeted the specific performance issues that would result in the largest business improvement.

At the start of our performance assessment, the only data we had available to help size the training was that the sales process was complex and consisted of six phases. Of course, some individual team members had much deeper knowledge, but that knowledge was not summarized, shared, or validated for use in scoping and targeting the learning solution. At the end of the performance assessment, we knew quite a lot more. We knew we faced 36 specific decisions. For each one, we knew how much benefit we could expect from providing training relative to each other decision. We also knew the key mistakes reps made in each decision. In short, we had a dramatically clearer picture, one that allowed us to proceed confidently into design and sizing.

All that in 10 calendar days and less than 2 person-days of project team time. Using our typical performance assessment process, getting such a result would have taken 8+ weeks and perhaps 5x the person-days.  

It’s interesting to see where the acceleration came from. Looking back, we spent our usual amount of time upfront setting up the analysis, and our usual amount of time at the end analyzing and sharing the data. The key to making the process affordable was that we acquired the performance assessment data in perhaps 10% of the time we would normally take.

We and our client were both quite pleased with this first attempt at crowdsourcing performance analysis. Certainly, we missed the color and background that come with actually getting out into the field. At the same time, the tradeoff seemed well worth it. After all, we really viewed it as a choice between the crowdsourced approach and nothing at all.

Looking Forward: Reconsidering “Ability to Improve”

As we look forward to using this approach again, there is a refinement we anticipate making.

One of the key factors in whether to provide training is whether training can actually resolve the problem to be solved. As Robert Mager put it: if someone held a gun to your head, could you do this? If so, it’s not a problem training can solve. And even among genuine training problems, some are just harder than others.

In this analysis, we made the simplifying assumption that it would be just as easy for us to remediate problems in one decision as in another. That seems risky. Is it as easy to train someone to choose, say, whether to take the bus or the train to work as it is to train someone to choose a spouse?

We anticipate adding an explicit assumption about “ability to improve” for each task analyzed. And we plan to provide that assumption ourselves. After all, it embodies our real promise to the business: if we provide training of the type we have proposed on these tasks, we will eliminate the portion of the performance gap we have promised.

To make that judgment, we will use the critical mistakes. Which of those mistakes represent knowledge or skills issues? Which stem from contrary incentives or poor systems? Which remedies would require the rep to do more work, and which would save the rep time? From experience, we know that when we can save time as well as improve results, our success rate is high. But when we require performers to take an extra step beyond what they do now, our success rate is lower.


This post has described an early attempt at a faster, cheaper approach to performance analysis.

We do not expect this approach to be suitable for all cases. When a company sets out to analyze a truly critical business issue, we would not suggest it throw away the traditional approach of getting out into the field and use this instead.

However, we wonder whether this faster, cheaper approach, refined through more experience, could expand where and when companies actually conduct performance analysis. As Clayton Christensen points out in The Innovator’s Dilemma, faster and cheaper approaches can, over time, disrupt markets by moving from the bottom of the market up. Could such an approach allow companies to conduct performance assessment for many simpler, day-to-day projects in which teams, perhaps feeling a bit guilty, skip it today to save time or cost?

We hope you find the ideas useful. As we have said, this was a first attempt. We’d love to hear from you. How would you push this approach forward?

Clayton M. Christensen. The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail. Harvard Business Review Press, 1997.

Would you like a summary of how to conduct this performance assessment in your organization? Download the Crowdsourcing Performance Assessment Checklist