How do you measure the return on investment (ROI) for social learning?

Just because something’s hard doesn’t mean you shouldn’t do it. Today I want to address the question of how we can measure the return on investment (ROI) in social learning. Let’s imagine that we’ve delivered a project into the business that includes a social learning dimension and we are writing up the results.

In simple terms, we are getting a return on investment if we are delivering change into the business: if there is a quantifiable difference in skills, knowledge or attitudes that can be attributed directly to our efforts. This is all very well, but at a more granular level, what is delivering that change and, by adopting a social learning mindset, are we effecting greater change, or just adding a costly channel to what we already do? What will be more effective: another day of workshops or a semi-formal layer of social activity?

Using social learning around formal learning may let you do things that you can’t do in the formal space, or it may simply enhance the quality of what you do already. For example, social spaces are great for allowing communities to construct meaning, by analysing case studies or by providing constructive challenge to the materials or stories that we tell them. This leads to a more critical appraisal of the subject, which is arguably better than blind trust that what we are saying is true. But how do we quantify this? In terms of the learning methodology, social allows us to explore and reflect: this leads to greater internalisation of messages and a greater ability to apply the learning to everyday situations.

There are some traditional models for evaluating learning, of which the Kirkpatrick model is the most widely used. I don’t want to get into a detailed analysis of it here, particularly as you can find a great explanation on the Mindtools site. I’m more interested in practical applications specific to social learning and in airing some new ideas.

Learning Pilgrims identify two dimensions for measurement: “(A) Costs associated with implementing, managing, and maintaining the infrastructure needed to deliver social learning [and] (B) Measuring the benefits (the tricky part).” [Learning Pilgrims, 2012] This second point is a good place to start.

To measure the impact on individual learners, we can ask them for feedback on the experience, we can test them to check knowledge and skills and we can quantify their interactions. Direct interviewing can give us qualitative results: maybe a good practical way to do this is to identify two people who have engaged heavily in activities and two people who haven’t. Interview them (in person, over the phone, by email) and ask for their key reasons for engaging or their key reasons for not engaging. There we go: you should now have four qualitative soundbites that you can use in your report. It’s a start.

I’m very keen on narrating our learning, both personally and as a way of the organisation building a legacy from learning activities. Indeed, the narrative may form part of our assessment of how great a return we have achieved. A project that delivers a coherent, well-told narrative around learning may well be worth its weight in gold. And learning narratives don’t need to be complex: they really are like an article or a newspaper column describing what we explored, what we decided and what we did. Gathering some learning narratives from individuals and teams has the double benefit of adding a layer of reflection for learners and providing more data you can draw upon for your reports!

Next, look at the total population and measure how many people have engaged and how many are passive. In fact, you will probably have three groups: people who have failed to register or enter the space, people who enter and consume content without participating, and people who enter the space and actively produce content. This will give you some quantitative data, which you could express as dormant/passive/active with a score, e.g. 25/125/200. By quantifying the data, we can make comparisons between different applications and between different courses. We can also make a leap of faith and quantify engagement in workshops by the same metric: the number of people who register and don’t turn up, the number who actively participate, the number who are passive. We may have to rely on a qualitative judgement from the facilitator for this.
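
As a practical illustration, the split is easy to compute if your platform lets you export per-user activity. The sketch below is minimal, and the field names (logins, posts) are illustrative assumptions, not tied to any particular tool:

```python
# A minimal sketch of the dormant/passive/active split described above.
# Assumes you can export per-user activity from your platform; the field
# names ("logins", "posts") are illustrative, not from any specific tool.

def engagement_split(users):
    """Classify each registered user as dormant, passive or active."""
    counts = {"dormant": 0, "passive": 0, "active": 0}
    for user in users:
        if user["logins"] == 0:
            counts["dormant"] += 1   # registered but never entered the space
        elif user["posts"] == 0:
            counts["passive"] += 1   # entered and consumed, produced nothing
        else:
            counts["active"] += 1    # entered and actively produced content
    return counts

course = [
    {"name": "amy", "logins": 5, "posts": 3},
    {"name": "ben", "logins": 2, "posts": 0},
    {"name": "cho", "logins": 0, "posts": 0},
]
split = engagement_split(course)
print("{dormant}/{passive}/{active}".format(**split))  # prints 1/1/1
```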

If we view social learning as a semi-formal layer that surrounds the formal, then we can split computer time into formal aspects (e-learning) and social (informal) aspects, such as taking part in forums, writing blogs or producing podcasts. Measuring e-learning is generally done through assessment, which is easy, but the social aspect is harder. We can measure the number of words people write in forums, but what does that really tell us? Shorter may actually be better. Perhaps a peer-review process would work: asking people to award points or stars for the value of content gives us crowd-sourced quantitative data. Maybe as part of the narrative at the end, as we ask people to produce learning stories, they can nominate their top participants.
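
Aggregating those crowd-sourced points or stars is similarly simple. A minimal sketch, assuming peers award one to five stars per contribution (the data here is invented):

```python
# A sketch of crowd-sourced quantitative data: peers award 1-5 stars to
# each contribution and we average the ratings per contributor. The data
# structure is an assumption, not tied to any particular platform.
from collections import defaultdict

ratings = [          # (contributor, stars awarded by one peer)
    ("amy", 5), ("amy", 4), ("ben", 3),
    ("ben", 4), ("amy", 5),
]

stars_by_contributor = defaultdict(list)
for contributor, stars in ratings:
    stars_by_contributor[contributor].append(stars)

for contributor, stars in sorted(stars_by_contributor.items()):
    average = sum(stars) / len(stars)
    print(f"{contributor}: {average:.1f} stars over {len(stars)} reviews")
```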

But this is still really very soft data. Perhaps we should flip things around and start by asking what change we want to achieve as a whole, then measuring the population before and afterwards to see if we have achieved it. For example, let’s say we are running a leadership course, a subject that is traditionally hard to assess! We could start by defining our metrics: maybe we want learners to be able to identify strategies for leadership (not a woolly metric like ‘recognise great leaders in action’, but a practical one, such as being able to make a decision about which of two paths to take).

Having defined our metrics, we could assess the group pre-course and post-experience. If we are looking at decision making, we can assess whether individuals make different decisions after the course and, crucially, whether they can explain their reasoning.
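
As a rough sketch of what that pre/post comparison might look like, assuming each learner is scored against the decision-making criteria before and after the course (the scores here are invented):

```python
# A sketch of a pre/post comparison against a defined metric. The scores
# are invented: say each learner is rated 0-10 on a decision-making
# scenario before and after the course.
pre  = {"amy": 4, "ben": 6, "cho": 3}
post = {"amy": 7, "ben": 7, "cho": 6}

changes = {name: post[name] - pre[name] for name in pre}
for name, delta in sorted(changes.items()):
    print(f"{name}: {pre[name]} -> {post[name]} ({delta:+d})")
print(f"average change: {sum(changes.values()) / len(changes):+.1f}")
```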

Ultimately, whilst I hate to say it, return on investment in social learning is inevitably going to rely on more qualitative, more subjective data, because, by its nature, social learning is only semi-formal. Unless we start assessing people, we are relying on observation, reporting and interview. If we add formal assessment, we probably change it from being social learning to being part of the formal, core experience.

“Measuring benefits is not just about statistics on the number of posts or contributions. While this is important, such data should be measured in conjunction with the outcome” [Learning Pilgrims, 2012]

So, in summary, here are four ideas for generating qualitative and quantitative data for return on investment.

1. We can interview to provide soundbites around engagement. This is qualitative data.

2. We can produce amalgamated attendance/participation scores to compare between courses. This would give us some quantitative results.

3. We can write crowd-sourced learning narratives that capture both the shape of the discussion and the practical outcomes: these would be qualitative, but you can analyse them to make them quantitative, e.g. by identifying common learning phrases and counting which narratives use them (see the sketch after this list).

4. We can measure pre- and post-course decision making through formal activity- or scenario-based assessment. This could be either qualitative, through scenario-based discussion, or, more likely, quantitative, through formal assessment against criteria.
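
For idea 3, that phrase counting is straightforward to automate. A minimal sketch, where the narratives and the phrase list are invented for illustration (in practice the phrases might come from your learning objectives):

```python
# A sketch of making narratives quantitative: count which narratives use
# common learning phrases. The narratives and the phrase list are invented;
# in practice the phrases might come from your learning objectives.
narratives = {
    "team_a": "We explored the case study and decided to delegate decisions.",
    "team_b": "The discussion helped us challenge assumptions about leadership.",
}
phrases = ["case study", "challenge assumptions", "delegate"]

for phrase in phrases:
    users = [name for name, text in narratives.items() if phrase in text.lower()]
    print(f"'{phrase}': appears in {len(users)} of {len(narratives)} narratives {users}")
```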

Learning Pilgrims identify some other metrics that can be measured, including “reduction in the number of calls to the help-desk”, “reduction in the number of meetings or formal training sessions” and “does it generate new ideas?” [Learning Pilgrims, 2012] All good data.

But this data is only half the story: how do we calculate a financial cost or benefit?

If we take too reductionist an approach, we may come up with a figure, but whether the figure has meaning is another matter. In social media marketing we may arrive at ‘getting a customer through to the point of purchase via Facebook costs $136‘, but we are not in a transactional encounter like this. We are not selling something: we are changing people; we are learning.
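
The reductionist arithmetic itself is trivial, which is rather the point: the calculation is easy, the meaning is not. A sketch, with invented numbers:

```python
# The reductionist calculation is trivial; these numbers are invented.
total_cost = 12000.0   # platform, facilitation and content, for example
active_learners = 200  # the "active" group from the engagement split
print(f"cost per active learner: ${total_cost / active_learners:.2f}")  # $60.00
```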

I guess that one answer to the question of ‘was it worth it?‘ is to ask, ‘based on the evidence, did it feel worth it?‘ In other words, if we produce a report illustrating the quantitative and qualitative feedback identified above, especially once we are able to compare results between courses, did it feel worth the effort? It’s like any retail experience: do we feel we got our money’s worth?

The aim of this article was to take an open-minded approach to measuring return on investment for social learning. We already have formal, established models like Kirkpatrick, but, in a true action research mindset, we should try new things and keep the ones that work. Also, in a truly collaborative and social learning mindset, I should share my thoughts and invite you to share yours too.

About julianstodd

Author, Artist, Researcher, and Founder of Sea Salt Learning. My work explores the context of the Social Age and the intersection of formal and social systems.

Responses to How do you measure the return on investment (ROI) for social learning?

  5. Thoughtful and relevant considerations. Qualitative assessments are more cumbersome and less satisfying in a metrics focused environment, but you highlight how they are essential to these types of questions. I’m curious about your focus on participation in 1 & 2 rather than learning. For example, in your quick qual interviews of 4 people you just want to know why or why they did not participate, not whether or not engaging impacted learning in some way. Is this because we are in earlier stages of integrating social components and still need to know how to attract participants?

    • julianstodd says:

      I think it’s more an oversight than planned… it probably reflects my usual focus on generating engagement, so you did well to spot it! I like the idea of measuring the impact of the learning, but that always gets messy, trying to separate cause from effect and attribute it to one particular element. In part I think I was also rebelling against the first thing people ask for, which is ‘how many words have they written’… thanks for sharing your feedback 🙂
