Today I am sharing part of a new section of the Learning Science book that I’m working on with Sae Schatz and Geoff Stead: this piece considers the ‘webs of limitations’, or why it’s so hard to change a Learning Organisation. Over the last several weeks, we’ve discussed core Learning Science as well as various enabling technologies, which form part of the Learning Ecosystem.
There are quite a few compelling reasons for Organisations to embrace this work: for example, the growing demand for lifelong learning, and the need to keep workers up to date with changing technologies and contexts of work.
There are also positive relationships between Organisational Learning (including training programs) and Organisational performance.
A true Learning Organisation is exciting. It has a Culture that evolves fluidly to cope with unexpected industry change, and both Learning and Development sit at its heart. The knowledge and capabilities exist for every organisation to be an amazing Learning Organisation, without unnecessarily burdensome investments or unbearable risks.
It’s not necessarily about ‘doing more’, or spending more money, but rather about changing what we do, and how we think about it, as well as stopping doing certain things.
Despite this clear truth, Learning and Development (L&D) professionals are often stymied by Organisational inertia.
Sometimes it feels like there’s a web of limitations that holds innovation back. But L&D can be the hero of this story: all you need are a few tools to cut through the pesky web.
Web #1: Intra-Organisational Boundaries
Organisational change is challenging, and Organisational Culture may unintentionally draw us back into repeating, familiar patterns. So if, in the past, L&D was only seen as a deliverer of training courses, it can take significant work to rewrite expectations with your peers, to be seen as the go-to team to shape a new Learning Culture (beyond traditional courses). It’s almost as though we need to claim a new space, against the expectations of others.
But improving human performance requires more than this: it may require changes to equipment, infrastructure, processes, and incentives (as we explored in the post on a Learning Engineering approach). It may challenge legacy structures of power, ownership and control.
It’s not simply about new knowledge or skills.
The transformation required is systemic, whilst projects, authority, and budgets tend to be domain-based.
Or to put it another way: we are trying to solve a systemic challenge within vertically segmented structures. We are within the problem we are trying to solve.
Web #2: Old-School Procurement Systems
The ‘old’ tends to be ‘sticky’.
Our systems are intentionally designed for stability: essentially they are structured to be resistant to change, in that we do not wish our Organisational structure to ‘accidentally’ change.
We want that change to be both directed and purposeful.
This ‘bias for stability’ manifests as established processes, tribal expectations, and limitations on deviations from the norm.
In practice these constraints may begin exerting influence at the earliest inklings of Organisational Learning – at the pre-concept phase.
Firstly, consider most Organisational procurement systems: these are designed to purchase tangible ‘things’ – ideally in neatly defined packages (e.g., a piece of software, a set of licences, or a number of days in a dedicated seminar).
Organisations typically buy what they know how to buy.
And commercial vendors typically sell what they know people will buy.
That means, in many cases, the ‘organisational development’ offered by a packaged program or learning activity is driven by what the market is offering, instead of what is actually needed by the Organisation, or where the Organisation’s most impactful opportunity lies.
Note that this is not an aberration of the system: this is the system.
But it raises the question of what capability you wish to hold internally, and what you will ‘buy in’ – what markets offer is not necessarily calibrated to success, beyond being a measure of what markets will buy. The system may be self-fulfilling, but again, not out of malice or ignorance – rather through self-reinforcement and a lack of holistic measurement or calibration.
Secondly, procurement of Learning Technologies tends to be a long-term venture, and hence people perform, deliver a project, and move on. Successful implementation within the established Organisational models of employment will tend to reward the associated leader and team with new opportunities, promotion, or a change of context.
A new leader subsequently replaces the original proponent, which can lead to a loss of Organisational knowledge or context as a procurement program advances.
In this framework, there is little incentive to change your mind – and yet as technology changes, that is exactly what we should do.
Anyone who completes a learning technology implementation and then turns around the next week to say ‘we should drop this and try something new’ would be brave indeed. And yet this is what the evidence may tell us we need to do.
Yesterday’s decisions do not automatically carry legacy value.
The stickiness of systems lies partly in their monolithic nature, largely a result of market forces and an outdated notion of what ‘risk’ may mean.
We often associate IT risk with stability and data security, and so desire uniformity; but risk may equally mean being left behind or out-competed. It’s not that we want unstable or insecure systems, but a safe system in a failed Organisation is of little use.
Julian has written previously about the need for Diverse Ecosystems of Learning Technologies: core infrastructure systems, alongside a landscape of much lighter systems which may be divergent, small-scale, local, even crowd-sourced.
And crucially, are rapidly disposable.
These are test beds, and we need to build appropriate sandbox environments to test these new components, as well as appropriate crowd-sourcing approaches to find them in the first place. An IT team may not be the only source of insight into new technologies: an opportunity may lie in a parallel or divergent space, from which it can be repurposed into an L&D context.
The challenge of learning technology is a classic one: evolved technology may deliver benefits, but it is often the underlying conception of need, and understanding of the Learning Science, which drives a true ‘redefinition’.
Or to put it another way: technology may facilitate learning, but does not fundamentally change the cognitive foundations of learning.
Technologists can ‘build’ technology. Learning Scientists can build knowledge of how we learn. But it’s a dance that is complicated by cultural and commercial factors.
And then on top of that, people become politically and powerfully invested in the systems of procurement and ownership. So the systems we are trying to modify or redefine are not neutral, but are owned, and change may erode power and control.
So, much as technology can remain anchored in older models (as we discussed in our recent SAMR post), merely substituting and adapting new technology into old systems, so too can Organisational structures. That is, organisations may attempt change and innovation but find themselves stymied by old existing paradigms, power structures, and procurement methods. Hence we see new and emergent concepts and technologies adapted and constrained to do what we already know how to do, as opposed to upending the model and re-conceiving what’s possible, from a true learning perspective.
Web #3: Hard Maths (Difficulty Quantifying Outcomes Directly)
Organisations may struggle to understand and explain the relationship between L&D investments and meaningful organisational outcomes.
They may resort to counting easy – but relatively meaningless – things, like how satisfied employees were with a training program (so-called ‘smile’ or ‘happy’ sheets) or how many hours employees have spent learning. (Per LinkedIn’s annual survey for 2023, the top five metrics Organisations use to evaluate their L&D are still these sorts of ‘vanity metrics’.)
These sorts of metrics provide minimal insights, and the outcomes collected are usually divorced from the things business leaders care about.
We should note that this failure is well intentioned but constrained: not only are these the things that have ‘always been’ measured, they are often the things that are easiest to measure – and they have a tempting appearance of ‘usefulness’ or ‘truth’.
Surely if people stay longer/watch more/click next, then they have learned?
When a real data-driven assessment is conducted, Organisations tend to use a ‘one-and-done’ approach. For instance, an organisation might measure the debut iteration of a new L&D program. (And then simply use smile-sheets for ‘data’ thereafter.)
Learning Engineering (and other general best practices) would tell us to measure each iteration over time, and to continuously use those data to improve the program long term (but per our earlier point, this often challenges resource allocation and budget paradigms). Similarly, another best practice is to use distal measures, for instance, examining performance impacts six months following an intervention.
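To make the ‘measure each iteration’ idea concrete, here is a minimal sketch in Python (standard library only). The scores and programme versions are entirely hypothetical; the point is that a standardised mean difference (Cohen’s d) against the same distal outcome measure puts successive iterations on a common, comparable scale:

```python
from statistics import mean, stdev

def cohens_d(treatment: list[float], control: list[float]) -> float:
    """Standardised mean difference (Cohen's d) using a pooled SD."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical distal scores (e.g., a performance measure taken six
# months after the intervention), for a pre-programme baseline group
# and two successive iterations of the programme:
baseline = [61, 64, 58, 66, 60, 63]
iterations = {"v1": [64, 68, 63, 70, 65, 67], "v2": [67, 71, 66, 72, 68, 70]}

for name, scores in iterations.items():
    print(f"{name}: d = {cohens_d(scores, baseline):.2f}")
```

Tracking d across iterations gives an outcome-anchored trend line, rather than a satisfaction score – though in practice sample sizes, confounds, and measurement validity all need careful, expert handling.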
The L&D programs that invest in psychometricians and data scientists typically produce much better analysis, but understanding what has been measured and observed is itself a complex, specific skill.
We cannot assume that ‘data literacy’ is universal: clearly there are emergent skills within our evolving Organisations that we may need to train people in.
And we can over-simplify: for instance, we’ve seen complex data sets for very large-scale Organisations reduced to dial indicators on a summary dashboard – stripping away any information about data reliability and validity, or the assumptions made in the analysis. Similarly, we’ve seen the converse: where (nerdy) data scientists explain each analysis used, along with a whole alphabet of Greek letters outlining the mathematical details to anyone who can read them.
There is a way to do this well!
There are viable ways to measure the impact of an L&D program or specific learning activities on relevant outcomes. (Sae insists we link to her favourite business book, How to Measure Anything, here.)
The problem isn’t that L&D is ephemeral or unmeasurable. Similarly, as Edward Tufte popularised, there are excellent ways to accurately portray complex information.
It’s just hard to do.
And so we often don’t see the explicit connections between L&D and Organisational outcomes monitored objectively and effectively; as a result, Organisational leaders have difficulty placing a monetary value on these programs – making them easy to ignore (or cut completely).
This speaks to the emergent roles of L&D: the Institutional Storyteller may be a valid role. Or the Data Interpreter!
This is a meta-analysis: its purpose is to examine the correlations between the Dimensions of Learning Organisation Questionnaire (DLOQ) and frequently examined outcomes, including organisational performance and employee attitudes.
Positive relationships were found between the DLOQ and organisational performance (e.g., financial, knowledge, and innovative performance) and employee attitudes (e.g., organisational commitment and job satisfaction) and the sub-dimensions (e.g., affective, continuance, and normative commitment), with a notable exception of a negative relationship between the DLOQ and turnover.
The constituent questions of the DLOQ scale make up seven dimensions that measure the positive impact and the cultural features of a supportive Learning Organisation. These dimensions are the following: i) continuous learning; ii) inquiry and dialogue; iii) team learning; iv) embedded systems; v) empowerment; vi) system connection; and vii) strategic leadership.
This is a meta-analysis: specifically, a one standard deviation increase in training was associated with a 0.25 standard deviation increase in Organisational performance.
Ju et al. (2022) (that first meta-analysis above) offers a research-based definition of a Learning Organisation: the learning Organisation can be defined as an Organisation in which (a) people continuously learn, (b) learning creates collective meaning and values, and (c) members’ behaviour reflects new knowledge and insights.
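For intuition about that 0.25 standard deviation effect, here is a minimal sketch (our own illustration, not taken from the meta-analysis) that converts a standardised effect into a percentile shift, under the simplifying assumption that Organisational performance is normally distributed:

```python
from statistics import NormalDist

def percentile_after_shift(effect_size_sd: float,
                           start_percentile: float = 0.50) -> float:
    """Where a performer starting at `start_percentile` would land after
    a shift of `effect_size_sd` standard deviations, assuming normality."""
    nd = NormalDist()                      # standard normal distribution
    start_z = nd.inv_cdf(start_percentile) # z-score of the starting point
    return nd.cdf(start_z + effect_size_sd)

# A median (50th-percentile) Organisation, applying the meta-analytic
# estimate of a 0.25 SD performance gain per SD of training increase:
print(f"{percentile_after_shift(0.25):.1%}")  # roughly the 60th percentile
```

That is, under these assumptions, a one-SD increase in training would move a typical Organisation from the middle of the pack to roughly the 60th percentile of performance – a modest-sounding shift that can represent substantial value at scale.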