E-learning is a funny creature. It's created by a mixture of art and technology, always a heady if slightly risky mix. It's a spontaneous activity stifled by process, great communication strangled by noise. All too often, the magic gets lost in the detail. What could have been a great user experience becomes merely an exercise in navigation and button-clicking, answering multiple-choice questions and wondering if you've got time to get a coffee before the video finishes.
Usability and utility are fundamental questions in the design and build of e-learning. What is it going to deliver, and how easy will it be to use? These are questions of enlightened self-interest, part of the contract we draw up with learners: the reason we ask them to give us some of their time, in return for which we provide them with something worthwhile.
But they are questions that are, all too often, lost in the murk of learning objectives, project plans and production deadlines. The baby goes out with the bathwater.
Usability testing is like tuning an engine: the engine will run without it, but only at a fraction of its potential performance. Too often, usability testing consists of showing the programme to someone, or piloting it with a group, and usually only at the end of the process. It's rare that this happens early on, or that there is budget for the rework that is really needed. Good usability testing happens at the component level and is embedded throughout the design and production process. It needs to cover elements as wide-ranging as the graphic design, the navigation model, the writing style, the help and support functionality, and how you log on. It's a holistic process, and it needs to be owned by someone.
In practical terms, it's a combination of applying previous learning and knowledge, and testing new elements effectively. Institutional knowledge is often overlooked, ignored or held in silos. Using a wiki, recording lessons in the Project Initiation Document, or simply running a 'lessons learned' meeting at the start of the project can all help, but you also need to consider how essential that meeting is seen to be. Wikis are all well and good, but if nobody looks at the content, it goes stale and is of little value.
Too often, valuable lessons are learnt, then lost, then learnt again, then lost again. We need to think actively about how to embed and maintain that learning.
Testing itself matters too. It can range from simple observation and narrative feedback to more sophisticated techniques such as keystroke logging. The point to remember is that there are experts in these fields, and we can get real value from using them. For larger programmes, it's worth factoring this into the budget.
Often the focus falls on utility: what the programme is about, what the learning objectives are, how people will perform differently at the end. But none of this will be achieved to maximum effect without taking account of usability. Sure, the programme may work, but it won't be fine-tuned.