Evaluating engagement in e-learning
Beyond the buzzword
Synergy. Think outside the box. Move the needle. No matter the industry, buzzwords and catchphrases abound. The line between corporate jargon and “edtech trends” (Gallagher, 2018) continues to blur, solidifying a shared vocabulary. Terminology rises to prominence, achieves viral status, and quickly becomes cliché, to the point that its original meaning is diluted even as the term saturates the common consciousness of a field.
Engagement provides a strong example of this phenomenon. The term has become so notorious that it was identified as one of the most pervasive and egregious buzzwords of 2017 (Henry, 2017). Engagement pertains to multiple categories—straddling the borders between business, politics, lifestyle, and education. It is also a word that tends to be overused, “but points to something important happening in society” (Thorne, 2017). Learner engagement is particularly important in e-learning, as learners do not necessarily experience the support of a face-to-face cohort, and learning may depend largely on the learner’s ability to self-direct (Dixson, 2015). The level of learner engagement informs a number of processes, including content creation and prioritization of tasks (Denny, 2016). It can also corroborate the effectiveness of a particular opportunity for training and development. Data-driven demonstration of engagement affects further funding and opportunities for expansion.
Critical to establishing evidence of engagement is the ability to objectively evaluate the now-nebulous concept. To evaluate effectively, we must first reclaim the term from its wayward meaning and then define it within the specific, desired context of e-learning. A number of factors unique to e-learning will be important to consider, including barriers that may impede engagement. Primary and secondary factors include “psychological motivation, peer collaboration, cognitive problem solving, interaction with instructors, community support, and learning management” (Lee et al., 2019).
Current measures
Current measures for evaluation of engagement in e-learning are largely grounded in the collective works of Clark and Mayer (2011) and Kirkpatrick and Kirkpatrick (2016), among others. The Model developed by the latter, when viewed through the lens of the Three Approaches of the former, creates a powerful framework for objective and measurable evaluation. This combined framework considers the four levels of training evaluation—Reaction, Learning, Behavior, and Results—from three “evidence-based” perspectives: What works? When does it work? and How does it work? (Clark and Mayer, 2011, pp. 51-52).
Evidence-based practice takes into account objective design decisions that operate independently of momentary trends. What is right for the learner within the context in which that learner operates? Interestingly enough, Kirkpatrick and Kirkpatrick revised their Model in 2016 to include Engagement as a marker of the Reaction level, defining it as the extent to which the learner interacts with content. As such, evaluation of engagement is traditionally characterized by a learner’s number of clicks, completion rate, frequency of logins, self-led learning, and propensity to ask questions (Denny, 2016), particularly in the case of a blended learning opportunity that involves a face-to-face component. Of course, clicks and completions are not always reliable measures of meaningful engagement. As anyone who has been tricked by a clickbait article understands, a click in and of itself does not tell stakeholders whether the user navigated away immediately, stayed on the page to learn more, or continued clicking frantically to move forward.
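The traditional markers just listed reduce to simple aggregation over tracking data. A minimal sketch, assuming a hypothetical, simplified event log (the learner names, event labels, and log format below are illustrative, not any real LMS export schema):

```python
from collections import Counter

# Hypothetical event log: (learner_id, event_type) pairs standing in
# for LMS tracking data. Names and event labels are assumptions for
# illustration only.
events = [
    ("alice", "login"), ("alice", "click"), ("alice", "complete"),
    ("bob", "login"), ("bob", "click"),
    ("carol", "login"), ("carol", "login"), ("carol", "complete"),
]

learners = {learner for learner, _ in events}
completions = {learner for learner, event in events if event == "complete"}
logins = Counter(learner for learner, event in events if event == "login")

# Completion rate: share of learners in the log who finished the course.
completion_rate = len(completions) / len(learners)

# Average logins per learner: a crude proxy for sustained engagement.
avg_logins = sum(logins.values()) / len(learners)

print(completion_rate)  # 2 of 3 learners completed
print(avg_logins)       # 4 logins across 3 learners
```

As the paragraph above cautions, neither figure distinguishes a learner who studied the material from one who merely clicked through it.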
High-tech and low-or-no-tech solutions are currently available for tracking these evidence-based components of evaluation. In the field of e-learning, technology perpetuates technology; emerging technologies are employed to evaluate meaningful engagement with existing technology and vice versa. Many stakeholders are now familiar with Sharable Content Object Reference Model (SCORM) capabilities and are moving toward more versatile solutions, including the Experience API (xAPI), that go beyond the basic capabilities of a Learning Management System (Foreman, 2013). xAPI allows stakeholders to understand more nuanced and complex user-system interactions and can even help them understand where learners “get stuck” or give up in a given course. Such technology allows program administrators to analyze data in a way that offers insight into a learner’s thought processes and behaviors, as opposed to mere proof of their ability to memorize or “cheat” an e-learning course.
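The statements xAPI records follow an actor-verb-object grammar defined by the xAPI specification, which a minimal sketch can illustrate. The learner, email address, and course URL below are hypothetical; the verb IRI is one of the standard ADL verbs:

```python
import json

# A minimal xAPI statement: who (actor) did what (verb) to what (object).
statement = {
    "actor": {
        "mbox": "mailto:learner@example.com",  # hypothetical learner
        "name": "Example Learner",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",  # standard ADL verb IRI
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/courses/engagement-101",  # hypothetical course
        "definition": {"name": {"en-US": "Engagement 101"}},
    },
}

# In practice this JSON would be sent to a Learning Record Store (LRS);
# here it is only serialized to show the shape of the tracked data.
print(json.dumps(statement, indent=2))
```

Because the verb can vary per interaction (e.g., “attempted” or “experienced” rather than “completed”), an LRS can record where learners stall or give up, not merely whether they finished.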
Moving forward
Emerging theory tends to question the limitations of traditional e-learning measures that include clicks and screen time. Such markers of engagement are often considered too narrow or even antiquated in a field that has moved on to explore new frontiers—virtual reality and augmented reality being the foremost (Designing Digitally, 2018).
The Learning-Transfer Evaluation Model adopts a more multidimensional and holistic approach. Building on basics established by the likes of Kirkpatrick-Katzell and their Four-Level Model, Thalheimer (2019) expands the scope to consider three unique interaction types: knowledge, decision-making, and task competence. His model seeks “to target more meaningful learning outcomes” across eight tiers. These stratified layers range from basic attendance—simply showing up—to the more advanced, which culminate in work being done and tasks being accomplished. By this standard, the quality of engagement and e-learning interactions can be measured most effectively by evidence that substantiates desired outcomes.
References
Clark, R. C., & Mayer, R. E. (2011). E-Learning and the science of instruction: Proven guidelines for consumers and designers of multimedia learning. Hoboken, NJ: Wiley.
Denny, J. (2016, December 20). 5 elements of measuring engagement in training evaluation. Retrieved from https://elearningindustry.com/
Designing Digitally. (2018, May 15). How does augmented reality work in Elearning? Retrieved from https://www.designingdigitally.com
Dixson, M. D. (2015). Measuring student engagement in the online course: The online student engagement scale (OSE). Online Learning, 19(4), 1-15. doi:10.24059/olj.v19i4.561
Foreman, S. (2013, October 14). The xAPI and the LMS: What does the future hold? Retrieved from https://learningsolutionsmag.com/
Gallagher, K. (2018, December 27). Rigor, grit, collaboration: Teachers share why buzzwords don't always inspire. Retrieved from https://www.edsurge.com/
Henry, Z. (2017, November 21). 25 buzzwords that you really need to stop using right now. Retrieved from https://www.inc.com/
Ituma, A. (2011, April). An evaluation of students' perceptions and engagement with e-learning components in a campus based university. Retrieved from https://journals.sagepub.com/doi/abs/10.1177/1469787410387722
Kirkpatrick, J. D., & Kirkpatrick, W. K. (2016). Kirkpatrick's four levels of training evaluation. Alexandria, VA: ATD Press.
Lee, J., Song, H., & Hong, A. (2019). Exploring factors, and indicators for measuring students’ sustainable engagement in e-learning. Sustainability, 11(4), 1-12. doi:10.3390/su11040985
Roffe, I. (2002). E-learning: Engagement, enhancement and execution. Quality Assurance in Education, 10(1), 40-50. doi:10.1108/09684880210416102
Thalheimer, W. (2019). LTEM: The Learning-Transfer Evaluation Model. Retrieved from https://www.worklearning.com/ltem/
Thorne, T. (2017, November 21). BAD BUZZWORDS!? Retrieved from https://language-and-innovation.com/2017/11/21/bad-buzzwords/