Two small changes to boost your e-learning
Let’s first set the scene for how we use e-learning. The education team at boost.ai is responsible for all product-related training for our partners and clients. A couple of years ago, we began gradually moving from a user manual to a full e-learning and knowledge base approach. A couple of iterations later, we have moved from “feature descriptions” to entertaining storytelling about how to use our product.
All our external courses are AI trainer certifications. Each of them addresses one of the stages of using the boost.ai platform to build a virtual agent (VA). All of them share the same introduction videos - an overview of the elements and ecosystem of our solution - making sure we all speak the same lingo and know the bigger picture before starting one of the certification courses.
The courses are divided into chapters. A chapter explains the different areas of our solution used for each step in building a virtual agent. Every chapter starts with a video explaining the concept behind that step or feature. This gets our learners on board with platform-specific terms, conveys the value of features, and describes functionality. This video is followed by one or more videos showing learners how to engage with the features in our software. Finally, they get step-by-step instructions on how to do this themselves (they get access to the full version of the software). Sometimes we use a quiz between videos as a memory jogger and a self-measurement tool for learning. It’s a fairly standard approach to teaching, but we have spent a lot of time balancing entertaining storytelling with the necessary how-tos. And we are proud of how we have achieved this! (If you want to see our work, go to boost.ai/ai-trainer-certification.)
Once our transition to e-learning was complete, the next step was to find out how learners responded to the changes. To do this, we use a quick and easy survey at the end of every course. One of the questions is “Are the topics well explained?”. Learners rate every chapter in the course on a Likert scale that runs from “No, not at all” to “Yes, very much so”. We intentionally avoided the neutral answer option.
From face-to-face teaching (remember that?), we know that two areas - thus two chapters - are a little more abstract than the others. This makes them seem more complex, and they are therefore sometimes harder to understand. For these two chapters, we started seeing the following results:
We weren't satisfied with the 26% who answered “No” for the chapter “Improve the model”. And we really wanted a higher percentage for “Yes, very much so”. By comparison, all other chapters scored higher than 50% for that option.
We searched the open comments section to see if anyone articulated why the chapters were not explained well. Many of our learners are professional communicators, so the feedback was plentiful and well articulated. We grouped it into two main issues:
- There is too much to grasp
- I don't know where to start or how to approach this
When we designed and wrote the courses, we knew there was a lot of unfamiliar knowledge to absorb in a short amount of time. We spent a lot of time fragmenting a relatively large and complex field of work. We also used the storytelling approach to help our learners build their knowledge step by step - making new knowledge easier to absorb by building it on top of existing knowledge.
So what could we change?
Change 1A: Throughout our e-learning, we used one kind of example in the explainer videos and then other kinds of examples in the practical assignments that followed. This was intentional; our notion was that if you understand a concept, you can apply it to more than one instance. But by switching examples without knowing whether the learner had understood the concept, were we adding complexity instead of helping them understand and gain confidence? Was this causing the “there is too much” feeling? Were we really building on existing knowledge?
In the explainer videos we talked about writing training data using examples for a specific topic: making the VA understand when someone wants to report that their phone should be blocked. In the practical assignment afterwards, the learners would write training data themselves - but there, the aim was to make the VA understand when someone wants to book, move or cancel a doctor’s appointment. This way, the practical assignment was primarily a review or test of knowledge. That wasn’t our intention; we very much wanted the practical work to be part of the learning experience. That’s why we decided to make new explainer videos, this time using the same examples as in the practical assignments. We did not change anything else about how we explain the subject - the only change in the explainer videos was the examples we used.
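To make this concrete, here is a minimal sketch of what the example switch looked like from the learner’s point of view. The data structure and intent names below are hypothetical - boost.ai has its own format for training data - and only the two example topics come from the course:

```python
# Hypothetical illustration of intent training data.
# The structure and intent names are ours, not boost.ai's actual schema;
# only the two example topics come from the course.

# Topic used in the original explainer videos:
explainer_examples = {
    "block_phone": [
        "My phone was stolen, please block it",
        "I lost my phone and need to block my SIM",
        "How do I block my phone?",
    ],
}

# Topic used in the practical assignments (and, after Change 1A,
# in the new explainer videos as well):
practical_examples = {
    "book_appointment": [
        "I'd like to book an appointment with a doctor",
        "Can I see a GP on Friday?",
    ],
    "move_appointment": [
        "I need to reschedule my appointment",
        "Can we move my appointment to next week?",
    ],
    "cancel_appointment": [
        "Please cancel my doctor's appointment",
        "I can't make it to my appointment tomorrow",
    ],
}
```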
Change 2A: We also moved away from the strict “videos first, then practicals” structure and placed practical assignments between the videos of the course.
We believed this fragmentation - although it increases the quantitative labour load on a learner - would be experienced as less work. The fragmented, chronological approach lets the learner absorb theory in small, bite-sized pieces and then immediately apply it in practice - as opposed to piling theory and explanations on top of each other before any of it is properly internalized. We also believed this would let learners make and discover their mistakes on their own - as opposed to us telling them about all the mistakes one can make when writing training data. Hopefully, they would now watch a video, write training data, then watch a new video with more in-depth theory on training data and think: “Yes, I did make that mistake!”
The next chapter - “Improve the model” - is about correcting errors in your prediction model. Even though the boost.ai platform is built so that non-technical people can improve prediction models, it is still somewhat complex work, and experience helps a great deal. It’s hard to teach experience, so this was our approach when we designed the e-learning for this chapter:
Errors in a prediction model can, to a certain degree, be categorized by the cause of the error. For instance, bad training data is the most common cause, although there are many ways training data can be bad. We used real examples of errors to make the videos for this chapter, explaining how to fix each one. The material on errors and their fixes was split across two videos. We also made a checklist resource on how to troubleshoot errors systematically for our learners to use, and added a practical assignment where learners troubleshoot prediction errors on a server. These errors come from the work the learners did in the earlier practicals, where they wrote training data.
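As a rough sketch of what “categorizing errors by cause” can look like in practice - the category names, fields, and sample data here are ours, purely for illustration, and not boost.ai’s internal model:

```python
# Hypothetical sketch of grouping prediction errors by their likely cause.
# Categories, fields, and sample data are illustrative only.
from collections import defaultdict

errors = [
    {"utterance": "blok my phne plz", "predicted": "fallback",
     "cause": "too_little_training_data"},
    {"utterance": "cancel my appointment", "predicted": "book_appointment",
     "cause": "overlapping_training_data"},
    {"utterance": "move it to friday", "predicted": "fallback",
     "cause": "too_little_training_data"},
]

by_cause = defaultdict(list)
for error in errors:
    by_cause[error["cause"]].append(error)

# Work through the causes, most common first - mirroring the
# systematic, checklist-driven troubleshooting described above.
for cause, cases in sorted(by_cause.items(), key=lambda kv: len(kv[1]), reverse=True):
    print(f"{cause}: {len(cases)} error(s)")
```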
What makes this complex is the unique intent hierarchy structure of boost.ai: there can be thousands of intents. There aren’t just intents for a phone that needs to be blocked; there are intents for everything a telecom provider would like the VA to answer for its customers. This makes the troubleshooting scope large - and that can be hard to deal with when you have no experience and are facing it for the first time.
However, scoping it down is difficult. We want the errors to be real - not made-up, constructed ones. They need to be similar to those we see from live VAs facing users every day. This links our educational material directly to the problems an AI trainer will face daily once they start working on their company’s VA. But how could we scope this down? None of our hundreds of live VAs has a scope so narrow that it contains only a couple of intents.
Change 1B: Well, it turns out we actually do have some servers with a very narrow scope: the ones our learners use while doing the courses. The courses have been running for a long time, so we had a lot of good data from live users (learners). We took errors from these servers - errors learners make during all the practicals in the courses - and used them as examples in the videos where we show how to troubleshoot!
We believed this would create a stronger link between the instructional videos and the troubleshooting assignment. It also scoped the errors down to those related to booking, moving and canceling a doctor’s appointment - while still covering all the most common error types, regardless of topic. And this way, the examples were relevant and believable.
Change 2B: In addition to this, we restructured the chapter (videos first, then practical) the same way we did the other chapter:
The idea is that a learner watches an explainer video about a specific category of errors and how to fix them, and then analyzes the model they have been working on for that same kind of error. Again, this fragmentation increases the labour load on a learner, but we believed it would be experienced as less work because of the bite-sized approach.
The entire improvement process took about two weeks to complete. By comparison, we usually spend at least two weeks making a single animated explainer video. This was faster because we had everything we needed - it was just a matter of putting the pieces together in the right order and switching out the examples. This is what happened to our user feedback:
There was a significant improvement in learners’ satisfaction with how well we explain the topics. We strongly believe this improvement is mainly due to the two small changes we made:
Concretization: Relate instructions and explanations directly to the practical assignment - making sure the practical is a learning experience and not an evaluation.
Downscoping and focus: video - practical - video - practical - video - practical, and so on.
Boost.ai specializes in conversational artificial intelligence (AI). Inventor of the world’s most complete software for building, implementing and operating virtual agents powered by conversational AI technology, boost.ai helps banks, financial institutions, and other enterprise companies focus on their business objectives and customer relationships while creating new lines of revenue and new customer experiences. With unlimited scalability, enterprise-level security and best-in-class privacy features, boost.ai’s technology is used by hundreds of global companies and organizations. Boost.ai is a privately held Norwegian software company founded in 2016 with North American headquarters in Santa Monica, Calif.