There is a network of people around the globe who interact and contribute to our community here, but one of my US friends in particular always keeps me on the straight and narrow by asking for case studies and evidence to support new ideas. So, in the interest of Anglo-American relations, I thought I'd share a case study in mobile learning.
The purpose of this project was to deliver compliance training in a regulated industry, in this case to iPads and iPhones. We work in a number of regulated industries, ones where people legally have to be accredited in certain areas in order to trade. The typical pattern with regulated training is that it's generally quite boring and quite unpopular! Most regulated training has to be completed and re-certified every year, so the pain is ongoing.
The purpose of going to mobile was to enable people to complete the training whilst on the move, in other words to align the requirement to complete the training more closely with their everyday reality. Asking people who are busy, who move around a lot and whose time tends to be chunked into small bites to complete a long course is never popular, so it's better to structure the training in small sections and find a delivery medium that fits their lifestyle: hence mobile learning, in small chunks.
So, we redeveloped an existing course to run on the iPad. Users were invited to complete the training within six weeks.
The first, and by far the most noticeable, aspect of the results was that we had a huge uptake within the first three days. In fact, 90% of users completed the course within this time. Now, for compliance training, this is highly unusual: you would normally expect 90% of people to need chasing in week five. But the result may be misleading. As this is the first time a project has delivered mobile training in the organisation, we are working within a very generous honeymoon period. I suspect that whatever we had put out would have had a high uptake on sheer novelty value.
The other factor that may have skewed the results significantly is that we asked for volunteers for the pilot, in other words, people with an active interest in technology and mobile learning. So I guess that all we've proved so far is that interested people respond fast.
The part of the population that I am actually most interested in is the ten percent of users who have not completed the course, even after downloading it.
Whilst nobody had trouble with the actual process of download and installation, we did find that this small group had trouble, primarily with navigation on the iPad. Now, this was interesting, because the course was built with 'native' functionality for navigation: page swipes, and closing the app by pressing the physical 'home' button. Our working assumption was that it was better to tune our design to the expected behaviours of the iPad, but in reality this caused confusion for a small percentage of the population. Many of their comments reflected that they were expecting navigation and usability dynamics more typical of traditional e-learning than of mobile. For example, users asked where the 'next' button was, and often spoke of not knowing 'where to click' next.
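For anyone curious about the mechanics, the sketch below shows roughly what this kind of native, swipe-based navigation looks like in code. It is a minimal illustration in Swift using UIKit's UIPageViewController; the class name and placeholder pages are hypothetical, not taken from our actual course player:

```swift
import UIKit

// Minimal sketch of swipe-based course navigation, assuming a UIKit-based
// player where each course page is its own view controller. The class name
// and placeholder pages are hypothetical.
final class CoursePageViewController: UIPageViewController, UIPageViewControllerDataSource {

    // Placeholder content; a real course would build pages from its module data.
    private let pages: [UIViewController] = (1...5).map { index in
        let page = UIViewController()
        page.view.backgroundColor = .systemBackground
        let label = UILabel()
        label.text = "Course page \(index)"
        label.translatesAutoresizingMaskIntoConstraints = false
        page.view.addSubview(label)
        NSLayoutConstraint.activate([
            label.centerXAnchor.constraint(equalTo: page.view.centerXAnchor),
            label.centerYAnchor.constraint(equalTo: page.view.centerYAnchor)
        ])
        return page
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        dataSource = self
        // Start on the first page; swiping left or right moves through the course.
        setViewControllers([pages[0]], direction: .forward, animated: false, completion: nil)
    }

    // Page before the current one (nil at the start, so the swipe simply stops).
    func pageViewController(_ pageViewController: UIPageViewController,
                            viewControllerBefore viewController: UIViewController) -> UIViewController? {
        guard let index = pages.firstIndex(of: viewController), index > 0 else { return nil }
        return pages[index - 1]
    }

    // Page after the current one (nil at the end).
    func pageViewController(_ pageViewController: UIPageViewController,
                            viewControllerAfter viewController: UIViewController) -> UIViewController? {
        guard let index = pages.firstIndex(of: viewController),
              index < pages.count - 1 else { return nil }
        return pages[index + 1]
    }
}
```

Created with the scroll transition (for example `CoursePageViewController(transitionStyle: .scroll, navigationOrientation: .horizontal, options: nil)`), this gives the page-swipe behaviour with no 'next' button anywhere on screen, which is precisely the dynamic that confused some of our users.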
On reflection, I suspect that the percentage who experienced problems with navigation would have been much higher had we not been working with a motivated volunteer population. At the start of many e-learning modules we would include a 'how to' section, showing how the navigation works, providing a glossary and so forth. We did not do this for the mobile solution, swayed largely by the assumption that the 'page turning' dynamic of the interface was intuitive, forgetting that, when it comes to interfaces, 'intuitive' often just means 'previously learnt'.
Similarly, the other feedback we received was in line with what you'd expect from a motivated population: people liked the fact that the learning was available at a time to suit them, they liked the iPad interface and they actually stated that they enjoyed the learning. I would have expected a higher percentage to favour the media content (the video, audio and so on), but in reality no significant preference was shown for this over text.
We did limit the assessment questionnaire at the end to six questions, asking about the experience of using the mobile learning, so it may be that this could be beefed up a bit as well. The questionnaire was structured around three questions with a 1-5 rating and three free-text ones.
The average length of the free-text answers was eight words, which is shorter than I would expect on a PC-based questionnaire. This may reflect a wider trend of people being less keen to type longer responses on mobile (which may seem obvious, but would need to be tested), and could in itself be significant for mobile assessment design.
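To make that concrete, here is a minimal sketch, again in Swift and with hypothetical type and property names rather than anything from the actual pilot, of how such a six-question response might be modelled and how the average free-text word count could be worked out:

```swift
import Foundation

// Hypothetical model of one completed questionnaire: three 1-5 ratings
// and three free-text answers, as described above.
struct AssessmentResponse {
    let ratings: [Int]            // three answers on a 1-5 scale
    let freeTextAnswers: [String] // three open comments
}

// Average word count across all free-text answers (the eight-word figure above
// is this metric, calculated over the pilot responses).
func averageFreeTextWordCount(of responses: [AssessmentResponse]) -> Double {
    let wordCounts = responses
        .flatMap { $0.freeTextAnswers }
        .map { answer in answer.split(whereSeparator: { $0.isWhitespace }).count }
    guard !wordCounts.isEmpty else { return 0 }
    return Double(wordCounts.reduce(0, +)) / Double(wordCounts.count)
}
```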
So there you have it: a short case study of a mobile learning project. It's certainly provided me with some food for thought, not least the experience of building in the post-course assessment, which is always valuable but sometimes neglected.
If you have any case studies of your own, please do feel free to share them.