All the research points to one thing: spend more time evaluating your courses. But is this really the answer to getting those all-important insights and proving business value? Chris from Boost Evaluation pulls the plug on the hype and tells us why we need to get our heads out of the sand. To make evaluation work for everyone, forget the course.
The problem with the evaluation hype
Research by CIPD, Towards Maturity, and even ourselves at Boost Evaluation (in partnership with Training Journal) keeps reiterating the same message: spend more and more time evaluating your courses. The books and theories by Kirkpatrick, Bersin, Phillips and others all reinforce this with models and processes.
Evaluating courses more thoroughly and more often is the right thing though, isn’t it? You do need to inform continuous improvement and prove value for money. Yes, but…
The fault with this statement, and with the sentiment in all this research, isn’t the call to 'do more evaluation' – it’s the word 'courses'.
We’re under pressure to bury our heads in the wrong thing.
People need to stop thinking of learning evaluation as a course-by-course exercise. It’s time to look up and follow some simpler steps to finding real measurement gold.
Two reasons why the rhetoric is wrong
A key finding from the research is that the more important a level of evaluation is (from reaction through learning to impact on behaviour and impact on business/society), the less often it is measured.
But can you really measure every intervention so thoroughly? No, for two reasons:
- It’s impossible
On a course-by-course basis, this is completely impossible. Evaluation at that depth would be far too onerous, draining both your time and your budget. No wonder it scares L&D teams off.
- Learning isn’t a course
Yes, people do go on courses or complete elearning courses, and as research suggests, this represents about 10-20% of learning. But what about the ad-hoc coaching, social networks, the informal learning from colleagues, the books, YouTube videos, articles, resource-based elearning and more?
‘Courses’ represent such a small part of the picture. If you want to really understand what makes an impact on performance and business KPIs, you need to evaluate at a much higher, broader level.
Don’t be an ostrich
The worst scenario is that we spend time evaluating courses - burying our heads in the sand - whilst all the other learning takes place above ground. And doing this to the extent the reports suggest we should would actually take time away from making better learning. Kind of ironic, right?
Is there another way to measure effectiveness and prove L&D value to the business?
Yes. All you need to do is change your mindset from evaluating courses, to evaluating learning. And get smart about it.
Three practical tips to help you evaluate real learning
Rather than putting all your evaluation eggs in one or two baskets – investing a disproportionate time into evaluating courses or the expensive bits of a strategy – pull up for air and try these instead:
- Scan the landscape
Find out how workplace learning is actually taking place. How are your audiences learning every day? What part do managers, coaches and buddies play? Which social media networks get used, and how? What are people’s first ports of call when they need help? How and when is formal learning used? Capture how learning and performance support really happens, whether it’s your content or not. Then you can evaluate performance impact in the context of the big picture.
- Do a little bit of evaluation for a lot of learning interventions
This is a great starting point, and a very efficient approach, especially if you set up some reusable templates. Be careful, though: most approaches like this rely on pointless 'happy sheet' data, with questions so vague that they give little insight. There is a way to raise this from 'reaction' to 'impact' comparisons and make each evaluation more specific to its intervention, so aim high even if this is all you do.
- Go for the long game
Evaluate your whole learning strategy, rather than working course by course. Look to set up dashboards that pull in relevant data to tell you what’s hot, what’s popular, and the related KPI stats. The upside is that you’ll have a fantastic bird’s-eye view of what is and isn’t working. The downside is that if you need data to drive small improvements in specific interventions, you might need to follow up with more detailed evaluations. But you can add extra detail and depth to your evaluation over time.
Since learning is continuous and iterative, shift your thinking to make your evaluation strategy the same.
Give yourself a break by adopting a simpler, wider evaluation strategy that gets you in sync with your learning audiences and measures more holistically against performance indicators.
So get your head out of the sand, and instead scan the learning landscape. You’ll find that by doing so, evaluation becomes a friend, not foe, of L&D.