Measuring Trainers, not just Training?
One measurement to rule them all...

Am I as good as my peers at what I do?

As a classroom facilitator I have often wondered what it is I actually do, and whether it is easy to define. Sure, the perception of a "trainer" is a little like the rock star of any organisation: turn up, set up, stand up, pack up, leave. But this isn't the case (any more), and in lean times, how can we demonstrate what it is we do beyond the laminated veneer of Kirkpatrick?

In my time within L&D, I have seen and worked alongside some amazing training professionals, and some who, with all the will in the world, find themselves ineffective. That said, I have never come across a "trainer" who has been managed out of a business for performance. When assessing the capability of a trainer, I am hard pressed to define clear, individually owned deliverables that focus on capability.

"How can we measure trainer capability when we are focused on the learner?"

In this 70:20:10-focused world, the role of the "classroom" practitioner is but a small part of what we all do, which is ultimately enabling and upskilling people to follow their own path. My experience of measuring across learning functions and departments often leads back to the ubiquitous Kirkpatrick model.

This article is not intended to be dismissive or defensive of KirkyP; on a personal level, I like its simplicity. But as a delivery-focused L&D chap, what does it mean to me? The results that flow directly from what I do are quite low level, assessing happiness and immediate knowledge transfer. If these are the key metrics attributed to my profession, is it fair to say my capability is proven by a "Would you recommend?" question or a score on a test?

In recent times, we have seen a greater focus on encouraging learning outside the classroom, supporting the 20% of 70:20:10, which has been a wonderful challenge and rewarding to see. That said, how much of the impact came from my coaching and time, and how much was down to line managers, peers and personal reflection?

"Well what's wrong with that? They are learning, we have statistics!!"

I am happy that we are starting to develop evaluation methodologies that show our worth as an industry; I see pioneers, and a desire to work out what it is we do versus the results we achieve.

That said, a great many of us are still a little unsure of our own personal contribution. There is wide variation in the quality of learning professionals, and with the focus on 70:20:10, and the measurement of learning and business results happening later, it is difficult to say what impact a specific training professional has had. If that is the case, how can we make sure that learning professionals are effectively performance managed against a set of defined KPIs?

I believe much of this stems from the diversity of the trainer's role and what it is we do. There is a lot of background work before the classroom, and of course beyond it, that can be assessed and explored to give the ultimate experience to delegates and learners.

Breaking down the learning experience to measurable stages

In order to work out what we do, I did some research and stumbled across this model (Bailey, S. 2012), and its simplicity resonated with me.

How can we apply it to the day to day life in learning?

Trainers must focus on what they want learners to do at each stage of a learning cycle: engage in the content, participate in the learning, and then activate new behaviors. (Dr Sebastian Bailey, 2012)

Dr Bailey's suggestion that engagement should begin before the learning itself, by actively interacting with participants, is very interesting to me. As many of my connections are within the world of Sales & Marketing, this presents an opportunity. If we engage before training, building excitement and brand awareness even within our own organisation, is there scope to measure a trainer as you would a marketing professional?

Measuring Engagement

Rather than just going out and delivering the sessions, should it not be the responsibility of the learning professional to build the environment ahead of their course or intervention? I am not an expert in Marketing KPIs or running a campaign, but I would love to work collaboratively to understand how trailers, comms, posters, pre-work, articles or meetings could build the learning experience and be measured ahead of training. How could this be measured? Likes, "upvotes" and other social-media-style interactions all come to mind.
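To make the marketing parallel concrete, here is a minimal sketch of the kind of metric a campaign might be scored on: an engagement rate, i.e. interactions as a share of the audience reached. The function name, inputs and the example numbers are all my own illustrative assumptions, not an established L&D standard.

```python
def engagement_rate(interactions, reach):
    """Interactions (likes, comments, replies) per person reached, as a percentage.

    A hypothetical pre-course comms metric: 'reach' is the number of people
    who saw the trailer/poster/article, 'interactions' the responses it drew.
    """
    if reach <= 0:
        raise ValueError("reach must be positive")
    return round(100 * interactions / reach, 1)

# e.g. a course trailer seen by 120 staff, drawing 18 likes/comments/replies
print(engagement_rate(18, 120))  # 15.0
```

Tracked over successive courses, a figure like this could show whether a trainer's pre-work is actually building the environment, rather than just assuming it is.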

Participation? You mean NPS right?

Look, I get it: trainers often have a degree of vanity and love a metric in the moment. NPS might not be the most effective measure of a trainer's ability, but hey, who doesn't like a happy sheet? Knowledge transfer and quiz time also probably cover off in-the-moment feedback.
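For anyone who hasn't had to compute it, NPS is straightforward: respondents rate "Would you recommend?" on a 0–10 scale, promoters (9–10) and detractors (0–6) are counted, and the score is the percentage of promoters minus the percentage of detractors. A quick sketch of the standard calculation (the example scores are invented):

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).

    Ranges from -100 (all detractors) to +100 (all promoters).
    """
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# A hypothetical post-course happy sheet: seven responses
print(nps([10, 9, 9, 8, 7, 6, 3]))  # 14
```

Which is exactly the point: a small class can swing the score wildly, so a single course's NPS says very little about the trainer behind it.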

3...2...1... Activate

Activation of learning is the behemoth right now; beyond Kirkpatrick, we throw about the numbers of 70:20:10. It is the peanut-munching, grey, long-nosed shadow in the corner of every L&D department.

Measuring numbers and performance is essential and will be collated from many departments, but how do we measure the impact of a trainer coaching and supporting beyond the classroom?

If done at scale, this should be evident, but training professionals often have a degree of autonomy and quite busy diaries. In that case, how can a business effectively measure the impact a single trainer, working with a small number of learners, has had on the business?

What about ROI?

So... I expect many of you are measuring training performance, using a model not too dissimilar to Kirkpatrick to assess results. The challenge is: does this model go far enough to measure the trainer's contribution to the business results? Can you identify a top-performing L&D practitioner from your numbers? If you cannot, is it possible to demonstrate which "trainers" need extra support?

Training professionals are often the last to get development; the old adage that "the cobbler's children have no shoes" has never been more true in modern, lean times.

Let's change how we view the role of the trainer in the classroom to make sure we have the best L&D professionals possible.

Whose responsibility is it to measure the results of a trainer?

Increasingly, we measure our learners on a grand scale but push the onus of learning onto them, so it becomes their responsibility. The concluding question I ask is: should we push responsibility onto the training professional to capture their own impact, and measure them on their ability to spin? Or is there a better way to measure how well we activate learning for our delegates?

Thoughts & Musings of Ben Hurrell

Reference: Bailey, S. (2012) — https://trainingmag.com/trgmag-article/creating-learning-transfers
