Assessment

Today i am sharing a simple view of assessment: a pragmatic take on a complex subject. Whilst it’s possible to measure anything, when we look at assessment within programme design, the harder question is how we should understand what we measure, and even whether we should measure anything at all. This piece illustrates how you might tackle assessment at a practical level by creating two documents: an ‘Assessment Intent’ document, and an ‘Assessment Methodology’.

An Assessment Intent document will allow you to address three questions, to gain clarity at least in your own head and that of your team: [1] what are you trying to measure, [2] what is the context of measurement, and [3] why are you measuring it.

[1] To ask ‘what’ we are measuring helps us to think about the relationship between what we can conceive of measuring (typically interactions, observation of behaviour in the real world, inference from what we see, etc.) and what we hope to achieve (leadership, excellence, fairness, change, etc.), and to ask whether there is a causal, deterministic, or even identifiable relationship between the two.

[2] To ask about the context of measurement is to ask about confounding factors, otherwise known as the imperfect world: when we cannot control variables, we are often confounded in knowing whether we are measuring anything at all, or at least whether what we measure has any validity. I addressed something similar to this last week in the writing on ‘The Experimental Organisation’. This is typically the most challenging of the three parts, and the risk is that we fall into the trap of simply measuring what is easy to measure, or easy to conceive of measuring.

[3] To ask ‘why’ you are measuring it is more pragmatic: if you are not going to do anything with the assessment, you may as well not assess it. Comfort for comfort’s sake is the curse of Constrained Orgs. In my book on Learning Methodology i wrote a whole chapter on ‘Assessment’, much of which felt like imploring people not to simply tick the box, but rather to take the bold decision not to assess at all unless we are confident about validity.

So this ties into the overall design: what will we measure, what will we do with it, and is that ‘do’ held within the programme, or outside it? For example, does a ‘result’ generate developmental action, reflective action, and so on?
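
To make this concrete, here is a minimal sketch of how one entry in an Assessment Intent document might be recorded. The structure and field names are my own illustration of the three questions, not a formal schema, and the example entry is entirely hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AssessmentIntent:
    """One measure within an Assessment Intent document (illustrative only)."""
    what: str            # [1] what we are trying to measure, e.g. an observable behaviour
    hoped_outcome: str   # ...and what we hope it relates to (leadership, change, etc.)
    context: str         # [2] confounding factors: the variables we cannot control
    why: str             # [3] what we will actually do with the result
    action_in_programme: bool  # is the resulting action held within the programme, or outside it?

# A hypothetical example entry:
intent = AssessmentIntent(
    what="depth of peer feedback in group sessions",
    hoped_outcome="growth in collaborative leadership",
    context="group composition and workload vary between cohorts",
    why="a result triggers a reflective conversation, not a grade",
    action_in_programme=False,
)
```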

Once you have this in place, you can look at producing a ‘Methodology Document’ – how you measure.

Again we can take a very simple view, which may upset purists: what can be [A] ‘self reported’ (this ties into personal narratives – my story of learning and change over time), what can be [B] ‘produced’ (assets of learning – group stories, co-created narratives, etc. – but also some formal tests or assessments), and what can be [C] inferred (which may include observation, and the new and emergent ‘sense making’, including more recent AI-powered analytic tools like SenseMaker).

[A] The easiest ways to tackle the ‘self reported’ are through journaling or storytelling approaches, but we can also use lightly structured free-text survey tools to shape the responses (i find this balances freedom of expression with ease of analysis, and i suspect it is easier for individuals to complete).
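
As a sketch of what ‘lightly structured’ might look like in practice, the prompts below pair free text with simple tags, so expression stays open whilst responses remain easy to group for analysis. The prompts and field names are hypothetical, not prescribed:

```python
# Hypothetical journaling prompts: free-text answers, lightly tagged for later analysis.
PROMPTS = [
    ("story", "Tell the story of one moment this week where you acted differently."),
    ("reflection", "What did that moment teach you about how you learn?"),
    ("application", "Where will you try to apply this in the next fortnight?"),
]

def collect_entry(answers: dict, week: int) -> dict:
    """Bundle one person's free-text answers into a structured journal record."""
    return {
        "week": week,
        "responses": {tag: answers.get(tag, "") for tag, _prompt in PROMPTS},
    }
```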

[B] There are innumerable ways for learners to ‘create’ assets – again, co-created stories are useful, but so are the outputs from e.g. constructive games, even simulations. The trick is to differentiate between ‘assembly’ and ‘creation’: assembly is where people simply manipulate the things we give them, whilst to ‘create’ they either build upon them, or add context around them. In general, we are seeking creativity.

[C] Inference can come through observation (i see you behaving differently, so i assume you have learned), but also through automated analysis, e.g. the sentiment of stories, the frequency of word use, or deeper linguistic analysis of free-text responses or stories. These approaches tend not to ‘give’ us an answer about learning (they do not ‘prove’ we have learned), but they may provide evolving pictures from which we can infer it, especially when we make the assessment longitudinal, running through the course of a programme, so we can see the language evolve. This, again, is harder.
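
As an illustration of the longitudinal idea, the sketch below compares word frequencies between early and late responses in a programme. It assumes you already hold the free-text entries, and it only surfaces shifts in language: any learning must still be inferred from the evolving picture it gives.

```python
from collections import Counter
import re

def word_frequencies(texts: list) -> Counter:
    """Count lower-cased words across a set of free-text responses."""
    counts = Counter()
    for text in texts:
        counts.update(re.findall(r"[a-z']+", text.lower()))
    return counts

def language_shift(early: list, late: list, top: int = 10) -> list:
    """Words whose use grew most between early and late responses:
    an evolving picture to infer from, not proof of learning."""
    before, after = word_frequencies(early), word_frequencies(late)
    growth = {word: after[word] - before.get(word, 0) for word in after}
    return sorted(growth.items(), key=lambda kv: kv[1], reverse=True)[:top]

# Usage with hypothetical entries:
# language_shift(week_one_entries, week_twelve_entries)
```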

Assessment is more an art than a science, but on the positive side, you do not need to do much to add real value. Again, at risk of upsetting purists, i would say the following: generating one or two really interesting and robust data points about reflection and application is of far more value than an abstract assessment of interactivity of knowledge. And ‘qualitative’ does not mean ‘vague’: it means ‘real’. So whilst it may need more work to shape, you can defend yourself, because it’s the real voice of real people – in other words, it carries an authentic value.
