How to Use Analytics to Improve the Design of Your eLearning Course


Analytics gives eLearning designers the potential to significantly improve the quality and effectiveness of their courses. Metrics on learner behaviour and interactions help improve the delivery and structure of eLearning courses. In this blog, we'll cover the key eLearning metrics to think about when designing a course, and how they can influence the way you create and optimise the content in your eLearning modules.


A good recommendation for learning designers is to prioritise measurement and optimisation from the outset of the course design process. Think about how you're going to measure your learning before you start designing. By asking fundamental questions like "What do we want the learner to know?" or "How will we know whether the program succeeded?", you're setting the foundations for an engaging and effective course that places the learner's best interests centre stage.

Asking these questions at the start of designing the eLearning module helps you create beautiful eLearning experiences that seamlessly integrate analytics into the course. Not only does this clarify the learning goals, it also lets you pull metrics that align with those objectives.

There are a number of metrics you can use to understand how to improve the design of your eLearning course. For example, a metric on the most incorrectly answered questions in a module can reveal patterns in answers, and we can then dig deeper to understand why those patterns have emerged. If there's a question that everyone gets wrong, it may be that we didn't provide enough information, or that the question was poorly worded and people misunderstood it. This kind of data improves the design of your eLearning course by helping you ensure questions are clearly worded and that people are being fairly tested on that knowledge.
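As a rough illustration, flagging the most incorrectly answered questions can be done with a few lines of analysis over exported quiz responses. The data shape here is hypothetical, as are the question IDs and the 50% threshold; adapt them to whatever your LMS actually exports.

```python
# Sketch: flag questions with unusually high error rates from exported
# quiz responses. (question_id, was_correct) records are illustrative --
# your platform's export format will differ.
from collections import defaultdict

responses = [
    ("q1", True), ("q1", False), ("q1", False), ("q1", False),
    ("q2", True), ("q2", True), ("q2", False), ("q2", True),
]

totals = defaultdict(int)
wrong = defaultdict(int)
for qid, correct in responses:
    totals[qid] += 1
    if not correct:
        wrong[qid] += 1

# Flag any question that more than half of learners answered incorrectly --
# a candidate for rewording or extra supporting content.
flagged = {q: wrong[q] / totals[q] for q in totals if wrong[q] / totals[q] > 0.5}
print(flagged)  # {'q1': 0.75}
```

From there, each flagged question is a prompt for a design conversation: was the content insufficient, or was the wording the problem?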

On the other hand, we can design questions to gauge a learner's existing level of knowledge. If they get those questions wrong, we can redirect them to a module where they can complete additional learning to refresh their knowledge. Sometimes we include questions that personalise the learning experience: there might be a question on a particular topic to gauge how much the learner already knows, and getting it wrong redirects them toward more content on that topic. In this situation, having a lot of people get that initial question wrong is not necessarily a bad thing; rather, it highlights that they need more support in that topic area.

However, if it was a scenario-based question asked after we've delivered the content, then it shows that we needed to cover certain aspects of the learning in a different way. As mentioned earlier, sometimes it can be as basic as a question being worded in a confusing way, or not being as clear as it could have been. This data is a great way to see how learners perform before and after the learning experience, which indicates how successful your course is and where it could be optimised.

Another metric that can influence the design of an eLearning course is completion tracking. Completion tracking looks at average screen time and total time spent on the eLearning module. You can also track how learners are progressing at a screen-by-screen level. If someone isn't getting past a certain screen, or is spending a lot of time on one screen, that could suggest the content is challenging to learn or that there is a lot of information to absorb on that screen. This kind of data can influence how you structure content within the program, and help you design a course length that caters to learners' focus levels.
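A minimal sketch of that screen-level analysis might look like the following. The event records, screen names, and thresholds (120 seconds average, 25% drop-off) are all assumptions for illustration; real tracking data would come from your authoring tool or xAPI statements.

```python
# Sketch: spot screens where learners stall or drop off, from hypothetical
# per-screen events of (screen_id, seconds_spent, reached_next_screen).
from statistics import mean

screen_events = [
    ("intro", 30, True), ("intro", 25, True),
    ("scenario", 240, True), ("scenario", 300, False),
    ("summary", 40, True),
]

by_screen = {}
for screen, secs, progressed in screen_events:
    by_screen.setdefault(screen, []).append((secs, progressed))

stalled = []
for screen, rows in by_screen.items():
    avg_time = mean(s for s, _ in rows)
    drop_off = sum(1 for _, p in rows if not p) / len(rows)
    # High average time or a high drop-off share both suggest the screen
    # may be overloaded or confusing.
    if avg_time > 120 or drop_off > 0.25:
        stalled.append(screen)
print(stalled)  # ['scenario']
```

Screens that surface here are candidates for splitting into smaller chunks or rewriting, rather than proof of a problem on their own.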

If learners are completing the module in a shorter time than expected, it might mean people are skipping through the content. What does that mean? Do we still care if they skip through the content but still answer questions correctly? Asking these questions while designing eLearning can help you figure out whether the content developed for the program is too long, potentially irrelevant, or not meeting learners' interests, leading them to skip through the course.

Capability indicators are metrics that assess the areas of knowledge you want learners to achieve, based on a set of questions. For example, a company that sells products could set up indicators such as customer empathy, product knowledge, or store processes. Some questions relate to only one of those indicators, some relate to a couple, but they all then link up to the net promoter score (NPS). When used correctly, NPS is a great way to highlight areas of poor performance while pinpointing opportunities to create a better learner experience.
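The rollup from questions to capability areas can be sketched as a simple mapping exercise. Everything here — the question-to-capability mapping, the question IDs, and the correctness rates — is a hypothetical example you would define per course.

```python
# Sketch: roll per-question results up into capability-area scores.
# The mapping and the results data are illustrative assumptions.
capability_map = {
    "q1": ["customer empathy"],
    "q2": ["product knowledge", "store processes"],  # one question, two areas
    "q3": ["product knowledge"],
}
results = {"q1": 0.9, "q2": 0.6, "q3": 0.8}  # share of learners answering correctly

totals = {}
counts = {}
for qid, areas in capability_map.items():
    for area in areas:
        totals[area] = totals.get(area, 0) + results[qid]
        counts[area] = counts.get(area, 0) + 1

# Average the contributing questions for each capability area.
capability_scores = {a: totals[a] / counts[a] for a in totals}
print(capability_scores)
```

A low score for one area then points you at the specific screens and questions feeding that indicator, rather than at the course as a whole.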

Finally, designers sometimes use a confidence rating to see whether a learner feels confident about a particular topic. Learners could submit a rating at the start of the module, asking "How confident are you about this topic?", and again at the end, asking "Now, how confident are you about this topic?" You can then draw conclusions about the performance of the learning, and whether the module covers the information in a way that equips the learner to apply that knowledge confidently in a real-world scenario.
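The pre/post comparison behind a confidence rating is a simple delta per learner. The 1–5 scale, learner names, and ratings below are made-up sample data.

```python
# Sketch: compare self-reported confidence (1-5 scale) before and after
# a module. Names and ratings are hypothetical sample data.
pre = {"alice": 2, "ben": 3, "cara": 1}
post = {"alice": 4, "ben": 4, "cara": 2}

shifts = {name: post[name] - pre[name] for name in pre}
avg_shift = sum(shifts.values()) / len(shifts)
print(f"average confidence shift: {avg_shift:+.2f}")  # +1.33

# Learners whose confidence did not move may need the topic covered
# differently, or a follow-up conversation.
unmoved = [name for name, delta in shifts.items() if delta <= 0]
```

A positive average shift suggests the module is building confidence; a flat or negative one is a prompt to revisit how the topic is taught.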

Overall, it is vital to shift our attention towards more meaningful analytics that go beyond simply gauging user satisfaction with the course. The metrics listed above will hopefully get you thinking about the type of analytics you'd want to track to improve the design of your eLearning course. eLearning design is more than visually pleasing learning; it's about leveraging analytics to improve instructional design, enhance learner engagement, and ensure the overall effectiveness of the course.
