This week, in the UK, we’re completing the census. Every ten years, we record the details of everyone living in every household in the UK, a giant count of who is here and what they are up to. In other words, the ultimate piece of management information.
We love to count things, to track them, to measure them, to measure the changes over time and to look for trends in the things that we are measuring. We are seemingly obsessed with knowing how things are changing.
It’s easy for measurement to become an activity in its own right, a self-fulfilling prophecy. Why do we measure things? Because we’ve always measured them! In learning and development, the measurement is usually about scores and attendance, progression and failure. We look to see who has attempted a piece of e-learning, who has passed it, who has failed it, how long they took to do it and how they fit against the behaviours and results of the wider population.
There are, though, challenges with the use of this type of data. It’s easy for the analysis to be incomplete, to use the data to reinforce a preconceived hypothesis rather than to allow the analysis to speak for itself. It’s easy to end up measuring things specifically to prove a point, rather than to see what conclusions the data supports. It’s easy to rely overly on quantitative data, rather than taking on the harder analytical challenge that qualitative inputs present.
In learning terms, it’s easy to focus on testing to generate a score, a pass or fail, rather than to measure true changes in skills or attitudes. One reason for this is that good assessment design is actually quite a challenging process. Often assessment is the last thing to be written, and it’s written as a series of fact-checking questions, a last-minute grasp at straws. It’s easier to think about how to measure knowledge than to consider how to measure attitudes or skills, or even behaviours and decision-making capability.
In practical terms, we need to ensure that assessment is considered from the earliest stages of learning design. You almost need to start by considering what it is that you want to measure, then ensure that the metrics to measure it are in place when you write the materials.
Designing assessment methodologies whereby we can see users demonstrate capability and understanding is something that is inherently possible with e-learning, and we need to work to ensure that we capitalise on this native strength. It’s easy to fall into the trap of just sticking a multiple-choice set of questions at the end, without ever considering the alternatives or wondering whether it’s really going to deliver a useful measure against the metrics that count for success.
Instead of measuring performance at the end, we can capture data as users work through pieces, designing analytical methods whereby their behaviour and activities within learning scenarios count towards the assessment results themselves. For example, in a branching assessment, where decisions in one section can impact on choices further down the line, we can tie assessment scores into the branching decision points. There is still a place for simple knowledge checks, but we need to be aware that that is all that they are: checks of knowledge.
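To make the branching idea concrete, here is a minimal sketch of how scoring might be tied to decision points. Everything in it is hypothetical: the scenario graph, the choice names and the score values are invented for illustration, not drawn from any real assessment tool. Each decision carries a score contribution and determines which situation the learner faces next, so the path itself becomes part of the assessment.

```python
# Hypothetical branching assessment: each node offers choices, each choice
# carries a score contribution and leads to a follow-on node, so earlier
# decisions determine which later situations the learner even encounters.
SCENARIO = {
    "start": {
        "prompt": "A customer reports a data breach. What do you do first?",
        "choices": {
            "escalate": {"score": 10, "next": "containment"},
            "ignore": {"score": 0, "next": "fallout"},
        },
    },
    "containment": {
        "prompt": "The incident team is engaged. What is your next step?",
        "choices": {
            "isolate_systems": {"score": 10, "next": None},
            "email_all_staff": {"score": 3, "next": None},
        },
    },
    "fallout": {
        "prompt": "The breach has spread unchecked. Now what?",
        "choices": {
            "escalate_late": {"score": 4, "next": None},
        },
    },
}

def run_assessment(decisions):
    """Walk the scenario graph, summing scores along the chosen path."""
    node, score, path = "start", 0, []
    for choice in decisions:
        options = SCENARIO[node]["choices"]
        if choice not in options:
            raise ValueError(f"'{choice}' is not available at '{node}'")
        score += options[choice]["score"]
        path.append(choice)
        node = options[choice]["next"]
        if node is None:  # reached an end point of the scenario
            break
    return score, path
```

A learner who escalates early and isolates systems ends up with a higher score than one who ignores the breach, not just because of the individual answers but because the poor first decision closed off the better paths entirely. That is the difference between scoring a sequence of decisions and scoring a set of detached questions.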
In many cases, it’s worth asking why we are measuring something at all. If you’re not going to actively use the assessment data for anything, should you be measuring it? It’s easy to just stick an assessment at the end because that’s what we think should be done. Some things can’t be measured as a success or a failure until long after the training has been completed and the learner has incorporated the activities back into their everyday world.
If we want to assess skills, attitudes, adaptability, flexibility, decision making and so forth, we need to adapt our assessment methods to match.