Building a Cycle of Learning Evaluation

While corporations have focused on measuring learning and development for some time, most methods still revolve around the basic four levels of evaluation published by Donald Kirkpatrick in 1959: reaction, learning, application, and impact. (Jack Phillips defined a fifth level, ROI, which monetizes costs and benefits to calculate the return on investment using the formula ROI% = (Net Program Benefits ÷ Program Costs) × 100.) While these levels are useful, they have some fundamental weaknesses:
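The Phillips ROI formula above is simple arithmetic; a minimal sketch, with illustrative dollar figures that are not from the article:

```python
def roi_percent(total_benefits: float, program_costs: float) -> float:
    """Phillips Level 5 ROI: net program benefits over program costs, as a percentage."""
    net_benefits = total_benefits - program_costs
    return net_benefits / program_costs * 100

# Example: a program costing $50,000 that yields $80,000 in measured benefits.
print(roi_percent(80_000, 50_000))  # → 60.0 (a 60% return on investment)
```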

  • Measurement begins at the end of a learning event. If fundamental issues are discovered, those who have already completed the course, webinar, or other L&D product require re-training or other remediation.
  • Most L&D data are isolated from related HR, employee, or business data. While evaluation of a specific course or program can be achieved accurately, data isolation limits the ability of advanced analytics to reveal how the impact of that single course or program coheres with the impact of other factors.
  • Measurement ends when the evaluation of an individual product or program ends. These levels do not inherently feed forward to inform new products or programs.

These weaknesses can be mitigated by following the steps described below, which expand the popular levels of measurement into a cycle of learning evaluation.

The Learning Measurement Cycle

Begin to measure before Kirkpatrick Level 1.

Two measures can be applied to products or programs to predict success before data are collected on learner reactions: enablement indicators and engagement indicators. 

Enablement indicators assess how well a particular product or program is designed for accessibility by its target audience. For example, an audience of first-line managers may share a need for specific content, but that content may require customization for language, local business processes or cultural norms. In addition, if that audience is situated across the globe, a face-to-face learning event will be less accessible than an online learning event.

Engagement indicators document how rapidly or evenly the target audience consumes the learning product. While attendance numbers are commonly collected on a monthly or quarterly basis, more frequent monitoring of attendance patterns at the launch of a new product or program may reveal why particular segments of the audience are slow to consume learning.
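An engagement indicator of this kind can be as simple as a per-segment consumption rate tracked weekly after launch. A minimal sketch with hypothetical attendance data, segment names, and a 25% uptake threshold (all assumptions, not figures from the article):

```python
# Hypothetical weekly attendance counts per audience segment,
# collected during the first four weeks after launch.
launch_attendance = {
    "EMEA": [40, 35, 30, 28],
    "APAC": [5, 6, 4, 7],
    "Americas": [50, 45, 40, 38],
}
segment_sizes = {"EMEA": 200, "APAC": 180, "Americas": 250}

def consumption_rate(segment: str) -> float:
    """Fraction of the segment that has consumed the product so far."""
    return sum(launch_attendance[segment]) / segment_sizes[segment]

# Flag segments whose uptake lags an assumed 25% threshold; these are the
# groups worth investigating early, before higher-level evaluation begins.
slow_segments = [s for s in launch_attendance if consumption_rate(s) < 0.25]
print(slow_segments)  # → ['APAC'] (22 of 180 employees, ~12% uptake)
```

Weekly monitoring like this surfaces a lagging segment weeks before a quarterly attendance report would.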

When portions of an audience do not attend a learning event, the effect on higher levels of evaluation is the same: no learning can occur and no impact is gained, regardless of whether attendance suppression is due to poor product design or other factors.

Use analytics to assess the impact of L&D solutions in the context of other environmental factors.

The workplace is complex, and many factors can influence whether new content is learned and, more important, whether it is applied to benefit the business. Using analytics to evaluate findings at all levels of measurement can identify environmental factors that enhance or inhibit learning or application of new knowledge or tools. For example, lower results in one segment of the business may indicate a business process variance that makes the new learning irrelevant or that a supervisor in that area is discouraging her employees from using what they have learned.
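One simple analytic along these lines is to compare post-training results across business segments and flag outliers for investigation. A minimal sketch with invented scores, segment names, and an assumed 10-point threshold:

```python
from statistics import mean

# Hypothetical post-training performance scores by business segment.
scores = {
    "North": [78, 82, 80, 85],
    "South": [55, 60, 58, 52],  # lagging segment worth investigating
    "West":  [79, 81, 77, 84],
}

overall = mean(s for seg_scores in scores.values() for s in seg_scores)

# Flag segments well below the overall mean (assumed 10-point threshold).
# A flagged segment may signal an environmental factor -- a business
# process variance, or a supervisor discouraging use of the new skills.
outliers = [seg for seg, vals in scores.items() if mean(vals) < overall - 10]
print(outliers)  # → ['South']
```

The analytic only identifies where to look; determining *why* a segment lags still requires follow-up with that part of the business.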

Apply evaluation results from past learning events to future events.

It is easy to examine measures of one L&D solution to improve only that solution without considering how those measures may apply to other solutions. However, findings from earlier assessments should be studied to improve the design of future products and programs.

Use meta-analysis to identify learning patterns or preferences that persist across business or geographical boundaries, content areas, or employee groups.

As large datasets are created that span content areas, business groups, geographical boundaries, and other divisions, the tools of big data can be applied to determine whether unique knowledge-seeking behaviors exist for particular groups of employees or for a company as a whole. This knowledge can become a competitive advantage as L&D solutions are created to leverage those behaviors, thus accelerating the acquisition and use of knowledge.

By extending the traditional levels of learning measurement, the work of Donald Kirkpatrick, Jack Phillips, and others can be enhanced. By beginning measurement before looking at learners’ reactions and by continuing to measure after calculating ROI, a cycle of learning can be built to reveal powerful insights into the preferences, behaviors, and limitations of workplace learners.