Last week, I shared the first three elements from a session I’m running this week on future technology, and the innovation, and impact, we see around the organisation. Today, I’m sharing the next piece, looking at ‘sense maker’ technology.
Some new technology is transformative, breaking the models of the past and, at a single leap, moving us into a new space. Other technology is more subtle, and sense making probably falls into this space: we are seeing a range of applications and technologies that help us make sense of the world around us, of our multitudinous inputs and many communities, and that help us to perform better.
There are the technologies that simply enable us to be engaged within communities, the Social Collaborative Technologies, from Facebook, to Twitter, to LinkedIn, or, within the organisation, from Yammer, to Jive, to any number of bespoke technologies. Communities are central to Social Learning, and to the development of Social Leadership. They are the entities within which we see social filtering take place, filtering that can help us separate the signal from the noise.
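To make the idea of social filtering concrete, here is a minimal sketch, purely illustrative, of one simple mechanism: items that more distinct community members endorse rise to the top, and everything below a threshold is treated as noise. The data shapes and names here are assumptions for the example, not any particular platform’s API.

```python
from collections import Counter

def social_filter(endorsements, threshold=2):
    """Rank shared items by how many distinct community members endorsed them.

    endorsements: list of (member, item) pairs, each recording one member
    sharing or upvoting one item. Items endorsed by at least `threshold`
    distinct members count as 'signal'; the rest is filtered out as noise.
    """
    votes = Counter()
    seen = set()
    for member, item in endorsements:
        if (member, item) not in seen:   # one vote per member per item
            seen.add((member, item))
            votes[item] += 1
    return [item for item, n in votes.most_common() if n >= threshold]

feed = [("ana", "report"), ("ben", "report"), ("cal", "report"),
        ("ana", "meme"), ("ben", "podcast"), ("cal", "podcast")]
print(social_filter(feed))  # -> ['report', 'podcast']
```

Real community filtering layers on much more, recency, reputation, diversity of endorsers, but the core move is the same: the community’s aggregate attention does the filtering.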
There are sense maker technologies that link us to sensory arrays: at the most elementary level, I can take a look at the webcam before walking down to the beach, but I can also see whether my trains are running on time, whether there are delays on the drive to the airport, and even which carriage of the train is going to be busiest. Some of these sensory arrays comprise other elements of technology, remote physical sensors, whilst others connect us to other people, who themselves form a sensory array. We’ll touch on this further when we look at AI, but by contributing micro-amounts of engagement to learning systems we ourselves act as part of the system, the sensory array.
Sense making technology will increasingly ask, ‘what if?’, as it moves from a passive ability to provide us with data and resources, through an active sense making ability, where it carries out simple filtering, prediction, and analysis, to a space where it makes proactive suggestions about what we may wish to do next. Currently, this simply means suggesting I leave early to catch a train, but in the future it may suggest when my next change of career will take place, when I should plan to replace my washing machine, or how I may make a more effective sale next time around. Sense making rapidly tips into performance planning.
True video indexing, where video can be transcribed and indexed, partitioned and made available, is already within our reach. Coupled with artificial intelligence, we will see multiple video sources stitched together with contextual sense making narratives: the point at which video transforms from a passive, hard-to-index resource into a directly applied one. When we can scan, and skim through, video as easily as we can text, we will start to see significant changes in how we engage with the medium.
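The core of that shift is simple once a transcript exists: map words to timestamps, and video becomes searchable like text. A minimal sketch, assuming a timed transcript as a list of (start_seconds, text) pairs, the sort of output a speech-to-text tool produces, though the exact format here is an assumption:

```python
def index_transcript(segments):
    """Build a word -> [timestamps] index from a timed transcript.

    segments: list of (start_seconds, text) pairs. Looking a word up in
    the returned index gives every moment in the video where it is spoken,
    so a player could jump straight to it.
    """
    index = {}
    for start, text in segments:
        for word in text.lower().split():
            index.setdefault(word.strip(".,?!"), []).append(start)
    return index

transcript = [(0, "Welcome to the session"),
              (42, "Let us talk about sense making"),
              (90, "Sense making tips into performance")]
idx = index_transcript(transcript)
print(idx["sense"])  # -> [42, 90]
```

Production systems add fuzzy matching, speaker labels, and scene detection on top, but even this toy index is enough to skim to the passage you want rather than scrubbing through the footage.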
Image recognition is already advanced: facial recognition is where we feel early impacts, as well as the ability, for example, to identify trees from a photograph of a leaf. But as deep learning bites, we will gain an ability for context recognition: this is one of the most challenging things for computers to do. We know what a chair or a table looks like, but neither has a fixed form. Often the definition is contextual: if I sit on the table, and put my dinner on a bookshelf, the meaning is transformed by the context. Contextual recognition will transform the ability of learning systems to interact with the environment around them, to spot trends, and identify aspects of performance that can be moved into learning.
Finally, sense maker technology will likely include synchronous validation: as sources of information come before us, we will have some real-time metric of their validity, enabling us to make better decisions based upon them. Imagine an election in the future, where alongside the video of the debate, we see the temperature of truth.
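One way to picture that ‘temperature of truth’: combine several validity signals, fact-check matches, source reputation, crowd reaction, into a single live score. This is a purely illustrative weighted average under assumed inputs, not a real fact-checking service or API:

```python
def truth_temperature(signals):
    """Combine validity signals into a single 0-100 'temperature'.

    signals: list of (score, weight) pairs, where score is a 0.0-1.0
    validity estimate from one checker and weight is how much we trust
    that checker. Returns None when there is no evidence either way.
    """
    total = sum(w for _, w in signals)
    if total == 0:
        return None
    return round(100 * sum(s * w for s, w in signals) / total)

claim_signals = [(0.9, 3.0),   # matches an official statistics source
                 (0.5, 1.0),   # mixed crowd reaction
                 (0.2, 0.5)]   # one low-reputation outlet disputes it
print(truth_temperature(claim_signals))  # -> 73
```

The hard problems, of course, are in producing the signals themselves and agreeing who sets the weights; the aggregation is the easy part.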