Guide to the Social Age 2019: Algorithmic Wars

This is the fourth in a series of articles exploring ‘The Social Age 2019’. I redraw the map every year, so the work is cumulative over time. Today, Algorithmic War: how our evolving relationship with knowledge is shifting everything.

[Image: Social Age Map 2019: Algorithmic War]

Humans are pattern recognition machines, so it’s ironic that we are facing such a struggle conceptualising, and coming to terms with, machines that can determine the patterns of humans. And yet that is exactly the foundation of the Algorithmic Wars, the new battleground of sense making and power in the Social Age. It’s partly a battleground of ignorance and misunderstanding, partly a battle about power, and partly an exploration of what it means to be a self-determining, free ‘human’, and how free we really are.

When i was twelve, and busy trying to fail at maths, i got as far as ‘equations’. And ‘algebra’. Teachers would occasionally set challenges, typically involving cars, distance, destinations, and efficiency, and i would struggle to discern meaning from chaos. At no point did they ask me to mathematically predict human behaviour, or wrest control of policing, or policy, from human hands. Maths was largely the purgatory between ‘History’, and ‘Home’, best sat next to the window, which facilitated daydreaming.

Today, more than at any point in human history, ‘maths’, characterised as ‘algorithms’, rules our lives. It keeps planes in the air, minimises the amount of time it takes an ambulance to reach you, determines the price of your wheat and gas, and directly impacts on the words that your politicians speak to you when campaigning. Not specifically because the Organisations behind these things have recruited people who were brilliant at algebra, but because computing power, and the conceptual frameworks of programming and analysis that it enables, have evolved. The tools are now more powerful than the hands that wield them.

When we hear conversations about ‘algorithms’, we are typically not simply hearing about hard problems that are solved faster by computers: we are hearing about radically large and hard datasets that are being ‘understood’ by computers. We are seeing pattern recognition at scale, predictive power unleashed, and a level of understanding that would be beyond us as humans, no matter how engaged i might have been in that class.

‘Algorithms’, in the contemporary context of debate, are radically complex predictive, and analytic, systems, which enable us to make sense of large scale data at useful speed.
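To ground that definition, here is a minimal sketch of such a system at toy scale: an off-the-shelf learner is handed historical examples, finds the pattern, and scores a new case in milliseconds. The scenario, the features, and the use of the scikit-learn library are all assumptions made for illustration, not a description of any real system.

```python
# A toy sketch of 'an algorithm' in the contemporary sense: a system
# that learns a pattern from historical data, then applies it to new
# cases at speed. All data below is invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [hour_of_day, articles_read_this_week] for one reader.
# Each label: 1 if that reader clicked the story, 0 if not.
history = [[8, 2], [9, 5], [20, 1], [22, 9], [7, 3], [23, 8]]
clicked = [0, 1, 0, 1, 0, 1]

model = LogisticRegression()
model.fit(history, clicked)       # 'learn' the pattern from past behaviour

# Predict for a new reader: will they click at 21:00, 7 articles deep?
print(model.predict([[21, 7]]))   # e.g. [1]: serve them the story
```

Trivial at this scale, but the same mechanics, applied to billions of rows and thousands of features, sits behind the feeds, prices, and recommendations described below.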

I would venture that, if we had to characterise the foundations of Algorithmic War, it’s not usually the specific outcomes in isolation that are the issue (although in some cases they most certainly are), but rather the broader context of those outcomes, and the ways that those outcomes become inescapable, as we feel the imposition of new systems of organisation, sense making, and power, at great scale, and speed. It’s the way that algorithms give rise to new types of power, and how that power impacts back into our wider society.

Take Facebook (an easy target, i realise, but when ‘sense making’, it’s ok to start at ‘easy’). Contemporary criticism of Facebook hinges on how its hidden algorithms give us something unexpected, undesired, or somehow deceptive: by ‘choosing’ one news item over another, by ordering and regulating my ‘feed’, by filtering future stories based on my profile and interaction with current stories, we find ourselves, individually and collectively, in a new space. We like to think that we make sense of the world by looking around us, gathering news, evidence, opinion, and fact, and making a judgement. We react badly if we feel those inputs are being deliberately skewed.

And yet, of course, we have never been the objective problem solvers that we would like to think we were: every way we look at the world is through a filter, and the context of stories is personal in every case. But despite these failings, we have at least felt an element of control: i can ‘choose’ to read the Guardian, or the Daily Mail, i can watch Fox or the BBC. I can choose who is in my community, and who is outside of it.

Those people who take issue with Facebook may be divided into two camps: those who feel that the ‘well intentioned’ algorithms are driving undesirable outcomes (echo chambers, inappropriate juxtaposition of content, filtering out of alternative views, etc), and those who feel that the fundamental technology is bad, and possibly being used in deliberately deceptive ways (fake news, interference in democracy, creation of artificial social movements, and pseudo viral effects). Or to put it another way: in one view, we are progressing in broadly positive ways, but with highly undesirable side effects and consequences; in the other, we are progressing in fundamentally flawed ways, deceived by technocrats who are unaccountable to anyone.

This is reflected in the responses of the wider system: governments seek to regulate, to curb harmful effects, whilst concurrently seeking to automate, to maximise beneficial ones. Individuals seek to disengage and tune out, to minimise concerns over privacy and deliberate bias, whilst seeking to maximise individual gain (through optimising utility and value), and amplifying those messages that mean the most to us.

Predictably, we are in a conflicted time, hence that term ‘Algorithmic War’, because it’s not an outright acceptance, or rejection, of the technology that is at stake: it’s more about how we can evolve our structures of understanding, and effect, in considered ways. Because one thing is certain: if we do nothing, then technology will take us into places that we, as society, are entirely unprepared to go. And we are well down that path.

In popular media, in Organisational adoption, and in the initial narratives of success, or failure, we often act in ways that are unconscionably naive, or unhelpfully vague. For example, we understand that ‘bias’ can be an issue, but that leads to populist narratives around inherent bias that simply do not stand up to scrutiny, for two reasons: firstly, there is nothing inherently biased about machine learning systems, only bias in the data we feed them; and secondly, such narratives fail to realise that a core feature of machine learning systems is that they learn.
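A toy illustration of that first point: the learner below is identical in both runs; only the training history differs, and so does the outcome. The hiring scenario, the features, and the datasets are invented for this sketch (which assumes scikit-learn); it is not a claim about any real system.

```python
# The same learning algorithm, fed two different histories: any 'bias'
# in the output traces back to the data, not to the maths itself.
from sklearn.neighbors import KNeighborsClassifier

def train_and_score(history, outcomes, candidate):
    """Fit an identical model on the given history, then score one candidate."""
    model = KNeighborsClassifier(n_neighbors=3)
    model.fit(history, outcomes)
    return model.predict([candidate])[0]

# Invented features: [postcode_group, years_experience].
candidate = [1, 5]
applicants = [[0, 1], [0, 6], [1, 2], [1, 7]]   # the same four past applicants

fair_outcomes   = [0, 1, 0, 1]   # history A: hired on experience alone
skewed_outcomes = [0, 1, 0, 0]   # history B: postcode group 1 always rejected

print(train_and_score(applicants, fair_outcomes, candidate))    # 1: hired
print(train_and_score(applicants, skewed_outcomes, candidate))  # 0: rejected
```

The second point follows directly: because these systems learn, a skew that is identified in the data can, in principle, be corrected for in the next iteration.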

This was my conversation with a taxi driver in London last week: he accepted the arrival of self driving cars, but described how they would not know how to react to a pigeon in the road. He said that ‘professional’ drivers knew to just keep driving, because pigeons always took off at the last minute. So he could accept that self driving cars could learn to drive, but could not accept that they could equally model the pigeon avoidance mechanisms of professional drivers, and learn to do that too.

Some issues are more clearly emergent ethical conversations: should we serve people up images of self harm and suicide (some evidence shows that the ability to explore these topics can lead to better outcomes), or is it simply exacerbating the issue? Should we have adverts for McDonalds showing up next to those posts as well, or should they be held more ‘respectfully’? As a society, i have no doubt that we will figure these things out, although not without some failures along the way.

These emergent and ethical conversations (about privacy, about decency, about protection, safeguarding, and harm) are of vital importance. But they are not the whole foundation of the Algorithmic Wars.

The enhanced ability of computers to predict behaviour goes far deeper than serving up adverts, or suggesting news stories. Conversations on social channels hold predictive power for future action, e.g. protest turning into violence. Organisations can scan social channels to predict which of their staff are most likely to be stealing, or rousing dissent. Our ability to ‘listen in’, to ‘predict’, to sense the location of tipping points: all of this is evolving.

Unless we smash the looms, unless we choose to reject the many benefits of these new technologies, we are just at the start of a long, and evolutionary process. There will be many mistakes along the way, and some people will make a very great deal of money, or achieve significant influence and power, by exploiting the new dynamics faster than we can regulate, or even notice what it is that is happening. But none of this makes it all bad or wrong.

As ever, our challenge is this, and it’s a challenge we must face up to in the middle of this war: technology will take us into places that we are ill equipped to deal with. But our ability to deal with it cannot be framed in the old understanding of knowledge, decision making, and power. It’s a new type of challenge, faced in a new kind of space. And it will require new types of thinking to ensure that, on balance, the change takes us into a new type of space that we can comfortably inhabit. Primary interpretations of the current swathes of change according to known, and well understood, frameworks may be dangerous: it may comfort us to think of small groups of elite enemy agents undermining our democracy, but this is but one facet of change.

The real outcome of the Algorithmic Wars may be decided through schism and conquest, but most likely will be an outcome of optimisation and greed: the ways we engage with knowledge, the ways we shop, connect, think, act, all influenced by myriad underlying algorithms. An unknowably complex series of filters and moderators of individual action: a radically complex set of predictive engines, and all continuing to learn, to evolve, in a tumbling wheel of change.

Guide to the Social Age 2019

Perhaps our greatest challenge is to find ways to narrate, and understand, the sheer scope of the change, and to articulate what it means for us as individuals, for our Organisations (which have so much to gain, and so much to lose), and for wider society as a whole.

What you need to know:

  • ‘Algorithms’ are impacting almost every aspect of our lives, and the things that the media worry about may only be a small part of the challenge.
  • Some people, and some Organisations, will become extremely rich and powerful by riding this wave: but others will be left behind.
  • There is almost certainly no evil cabal with a master plan: our greatest enemy is our own ignorance, and the naive interpretations we are spoon fed by politicians and media.

What you need to do:

  • Find your space to learn: be a learner, not a passenger.
  • Consider the broader social change in both positive and negative terms: the Algorithmic Wars are about far more than Facebook and Russia.
  • Plan for action: Organisations that do not adapt will fail. Possibly fast.

About julianstodd

Author, Artist, Researcher, and Founder of Sea Salt Learning. My work explores the context of the Social Age and the intersection of formal and social systems.

9 Responses to Guide to the Social Age 2019: Algorithmic Wars

  1. Francesca Lacey says:

    when you say ‘ that there is nothing inherently biased about machine learning systems’ I don’t think you are taking into account that these systems are largely constructed by white men- hence inherent bias…

    • julianstodd says:

      Hi Francesca, thanks for sharing your reflection. I think i stand by my original statement: there is nothing inherent in machine learning that is biased, but i do agree that bias may be introduced through design, or more likely through selection of the datasets that we feed them on. So in that context, the ‘white men’ may build a machine learning algorithm that is identical to that built by any other group of men. Or women. But each of those groups would get different outcomes if they selected different data to train the system on.

There has certainly been great interest in this bias in, for example, the data used to feed the systems that explore, e.g., likelihood of crime: if the data fed in is based on police reports, and if, as in New York, there is plenty of data to suggest that young black men are targeted by racial profiling, and hence much more likely to be marked as criminals, then the system will take that tainted input and produce tainted outputs. In this context, areas with historically high rates of recorded crime may be flagged as more likely to experience crime, and the concurrently higher levels of police presence in those areas may actually spark further recorded crime.
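To sketch that feedback loop in toy form (the numbers below are invented purely to show the mechanism, and are not drawn from any real policing data): patrols follow the predictions, records follow the patrols, and the recorded gap between two identical areas widens every cycle.

```python
# A toy simulation of the feedback loop: two areas with the same
# underlying crime rate, but one starts with more historic reports.
patrols = {"area_a": 10, "area_b": 10}   # officers per area, equal to start
true_crime_rate = 0.5                     # identical underlying rate everywhere
records = {"area_a": 6, "area_b": 4}      # historic reports: area_a over-policed

for year in range(5):
    total = sum(records.values())
    for area in patrols:
        # Predictive system: allocate patrols in proportion to past records.
        patrols[area] = round(20 * records[area] / total)
        # More patrols observe more incidents, regardless of the true rate.
        records[area] += round(patrols[area] * true_crime_rate)

print(records)   # the recorded gap between the areas widens every cycle
```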

      Anyway, thanks for stopping by and sharing your reflection: as i have said at the start of these essays, i encourage everyone to sketch their own map, and if yours is different from mine, or, better, if we have some similarities, and some different views, then that itself is valuable for us to explore. Any one sketch map is imperfect. With best wishes, Julian

