Generative AI: Creativity – Comfort – Truth

Tomorrow i am stepping out of my comfort zone and delivering a short session to a group of Head Teachers and School Principals on Generative AI. It’s been a while since i was summoned to the head teacher’s office… but this time i’m hoping to avoid detention.

I thought that i’d base it around three ideas: ‘Creativity’, ‘Comfort’ and ‘Truth’, which offer the chance to consider what Generative AI may give us and cost us.

There are some common narratives flowing through our education systems: that Generative AI is ‘cheating’, that it’s ‘biased’ and ‘unfair’, that it does not hold ‘truth’, that it brings ‘uncertainty’, but also some exceptionalism, that it will never ‘replace teachers’, will never ‘really teach’.

I hope to deliver a more positive assessment (with the caveat that i suspect Generative AI will decimate our legacy conceptions of formal education – and the commercial structures that have sprung up around them).

Much of what i will discuss relates to the dialogic nature of Generative AI: it’s not a search engine, but rather a story engine, and one that we participate in.

Generative AI supports creativity in that it allows articulation of ideas, and rapid prototyping of them. It can create successions of reflective surfaces that we can use for various things – cheating being one of them, but imagine if we actually use them to learn…

In many creative arts, one first learns the tools and conventions, and then may find power by deviating from them: music, poetry, even prose, all have structures that we can choose to follow, or act in opposition towards. Both approaches may give power to our voice.

Being in dialogue with Generative AI flips the model, so the learner drives (or sketches) the framework of enquiry, as opposed to this being predetermined by the teacher or instructional designer. Naturally this leads to divergence, but arguably all social, collaborative approaches to learning flip the model and lead to divergence. And i would argue that in a very real sense Generative AI acts as part of a social context, not a technological one.

Generative AI abstracts the step of mastering the medium. I no longer need to learn to read music to compose, to work a camera to film or direct, or to own a paintbrush to create brushstrokes. So storytelling is democratised, liberated, although we also lose a connection to the worked surface of the art: for example, painting in oils is a visceral experience, but a slow and costly one. Perhaps it’s better to view Generative AI as broadening the space of experience, certainly not replacing it.

Our ‘Comfort’ is both a foundation and a boundary: the things that we know and understand. Being able to articulate our comfort, and to separate it from notions of ‘good’ and ‘truth’ may help us more effectively explore the potential that new technologies give us. If we remain ‘in comfort’, we remain in the past. Change is inherently an act of violence, against our comfort and certainty. And indeed our comfort may lie in the systems and structures of the present, of our codified certainty, so change also fractures those systems, and hence imposes discomfort.

Interestingly, in recent research we’ve been using Adobe Firefly to summarise and visualise narratives of ambiguity, complexity, and discomfort, and the Art Engine always, without exception, visualises a human in the images of discomfort. Because whilst stasis is a physical phenomenon, comfort is a very human one. And hence discomfort is pain.

Finally, the question of ‘truth’ is a contentious one. Does Generative AI hallucinate, is it biased, is it even intelligent? Can it give us a truth? To some extent, we can duck this question, by thinking about ducks.

If we treat it as an engine of truth, then it creates truth, because in certain contexts, what we believe is what we act upon as the truth. And i recognise how dangerous that approach is, but it’s nonetheless a very human one.

We have assorted mechanisms for discerning truth, from scientific enquiry to spiritual meditation. Some provide quantifiable and replicable versions of a truth, whilst others create or provide ‘your truth’ or ‘mine’. Do i trust you? If i say ‘yes’, then my trust is ‘true’, even though you can only quantify my actions, not my underlying thought. In this sense, ‘belief’ is a truth, even though it is only a belief.

It’s easy to get hung up on questions of truth in learning, but we should also remember that our existing systems are not perfect arbiters of truth. Probably what we need is discernment, validation, dialogue, and a healthy dose of scepticism. Perhaps to understand what ‘true enough for now’ means. Which all sounds a bit grubby, but so long as we are in motion, perhaps true enough is good enough.

I’m pulling this together as a first draft of structure and will see how it lands!

#WorkingOutLoud on Generative AI

If you enjoy my work here on the blog, you may enjoy ‘Social Leadership Daily’, my Substack community that explores Social Leadership in action, for sixty seconds a day. You can find that work here.

About julianstodd

Author, Artist, Researcher, and Founder of Sea Salt Learning. My work explores the context of the Social Age and the intersection of formal and social systems.

