There’s been some discussion this week about creating parameters for measuring the effectiveness of social learning. It’s a good call to arms, because it’s all too easy to be busy creating solutions without stopping to think about essentials like this. ‘Social’ is a layer that can surround and enhance our learning solutions, but ensuring it’s effective requires us to think about measurement techniques, and also about what we mean by ‘effective’!
There are a range of tools that measure social media presence and ‘impact’, such as Klout and PeerIndex, but they are not really measures of effectiveness, at least not in a format that we would want. They are geared more towards characterising your social presence, with a view to how influential you are. Both are quite refined models and, whilst not quite fit for our purpose, can still inform our thinking.
Klout deals with quantitative measures of engagement: how many sites you are active on, how many connections you have on those sites (and how many connect back to you), how often you ‘broadcast’, and what happens to that information, e.g. retweets of your content. It also introduces a qualitative element in the form of ‘K’s that you can award to people who have inspired you. Together, this gives us a combination of the subjective and the objective, although both measures are open to criticism. Purely measuring the volume of interaction is not, in itself, a measure of learning; it’s a measure of busyness. There is no value judgement on the quality of those interactions.
Similarly, awarding ‘K’s to people is a good way of introducing a subjective element, encouraging community responsiveness and rewarding activity, but it’s a bit ‘one size fits all’.
There are probably two things we could do to enhance this type of system to suit our needs. One would be to introduce topic-specific ‘K’-style points, and the other would be to address ‘knowledge and application’.
Instead of just having generic ‘points’, we could give community members the ability to recognise and reward each other on dimensions such as ‘fostering community’, ‘supporting users’ and ‘providing challenge’. Indeed, these awards could be weighted, so that providing challenge earns more points than support, or vice versa, to see what impact this might have on behaviours. Bearing in mind that the quality of interaction is important in a learning environment or community of practice, we may be happier with a lower volume of higher-quality interactions.
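To make the weighting idea concrete, here is a minimal sketch of how topic-specific, weighted awards might tally up. The category names and weight values are purely hypothetical illustrations, not taken from Klout or any real platform:

```python
# Sketch: topic-specific peer awards with adjustable weights.
# Categories and weights are hypothetical, for illustration only.
from collections import defaultdict

# 'Providing challenge' is weighted above 'supporting users' here,
# but these values could be tuned to steer community behaviour.
AWARD_WEIGHTS = {
    "fostering community": 2,
    "supporting users": 1,
    "providing challenge": 3,
}

def award(scores, member, category):
    """Record a peer award for a member in a given category."""
    if category not in AWARD_WEIGHTS:
        raise ValueError(f"Unknown award category: {category}")
    scores[member] += AWARD_WEIGHTS[category]

scores = defaultdict(int)
award(scores, "alice", "providing challenge")
award(scores, "alice", "supporting users")
award(scores, "bob", "fostering community")

print(scores["alice"])  # 3 + 1 = 4
print(scores["bob"])    # 2
```

Adjusting the weights table is the experiment: does raising the value of ‘providing challenge’ actually shift what members do, or just what they label their contributions?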
There is, naturally, a counter-argument: that this is a form of social engineering, and that whatever conversation takes place in a social space is, by definition, good quality. Whilst this may be true of Facebook and Twitter, I think it’s fair to assume that in a more formal learning space, it’s acceptable to define and aim for ‘quality’.
‘Knowledge and application’ is where we would seek to measure retention and, more importantly, application of knowledge. The ability to synthesise data to inform and enhance the quality of practical day-to-day activities is one of the core elements of our learning methodology. What can you do differently tomorrow from what you did today? It’s not enough to learn in the abstract; we are looking for learners to be able to do something with that knowledge. This is not a ‘cause and effect’ approach to learning: we do not want them to parrot what we say. Rather, it’s a space for individuals to take what they have learnt and generate their own vocabulary around it. When they can tell the story themselves, questioning and challenging what we say, that’s a successful outcome.
This ability to use social spaces in learning to support not just knowledge transfer but knowledge application is essential. If we can create spaces for discussion and challenge, a safe environment where new vocabulary can be tested and rehearsed, then we are adding something fundamentally new to the learning equation.
So to create a truly comprehensive set of metrics for the effectiveness of social learning, we will need ways to measure both knowledge transfer and knowledge application. That will come down to more formal assessment techniques, observation and scoring of dialogue by a moderator, or possibly scenario-based or role-play areas. Maybe even more of a gaming approach, whereby you can rehearse against experienced ‘experts’?
The challenge thrown down is a good one: how do we measure the effectiveness of social learning? These ideas are ill-formed at the moment, but I aim to find a case study client to try them out. I think this is a time for experimentation: see what is working commercially, take the best of it, enhance it for our needs and try to come up with a framework.