“Cyberspace. A consensual hallucination experienced daily by billions... Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding...”
― William Gibson, Neuromancer
As a cineaste and ardent fan of the craft and storytelling potential of the immersive medium that is film, I have sat through (suffered) many an experience that has bored me to tears, frustrated me, annoyed me, or sent me to sleep. But I don’t believe I have ever actually walked out of a film, though I have come close. In fact, I think the closest I ever came was whilst watching the grandstanding motion picture event that is (was?) A.I. Artificial Intelligence.
I confess to being a diehard aficionado of Stanley Kubrick (three of his films are in my all-time top 10 – which ones, I wonder...?! – there are not that many to choose from, to be honest), and I suppose I had high hopes for this long-germinating project of his, posthumously realised by none other than Steven Spielberg. I had hoped that, in Kubrick’s spirit, a sort of sequel (thematically at least) to 2001 might have followed up on notions of machine intelligence as a natural evolution of consciousness – perhaps questioning the role of emotion as redundant in a more clinically perfectionist universe, where the ultimate limits of cognition might enable intelligent expansion across the void of space, at the expense of sugary sentiment. But no: Spielberg indulged himself and the viewer in a particularly saccharine take on ‘what it means to be human’, and on how the future of machines rests in their capacity to evolve emotional connectivity. So far, so Hollywood.
It made me squirm in my seat and long for it to end. I had to compel myself not to storm out of the cinema, muttering indignantly, with half an hour still to go.
Perhaps this speaks to me as a cognitive psychologist, skeptical of ‘cod’ ideas about emotion (what is emotion?) as espoused in popular fictions, and of an over-amplified sense that humanity will prevail thanks to some special status, couched in this ‘feel-good’ but ambiguous notion of ‘emotion’. I am not here to write a PhD on ‘what is emotion, and how does it relate to man vs. machine dominance of the future Intelligence Landscape and evolutionary Darwinist cyber-thinking’ (!) – though perhaps I could? Rather, I want to begin (from here on in) a series of pieces on what AI might mean more generally: humankind’s reliance upon technology, notions of ‘self’ and ‘consciousness’, and whether inevitable progress is simply the prime directive of evolution – something we need to accept and put up with...
Hopefully this will touch on notions such as emotion, including more up-to-date thinking on the subject – such as the ‘constructionist’ framework championed by the likes of Lisa Feldman Barrett (2014), at the heart of which is the consideration of the human brain as a prediction machine in itself, running iterative algorithms that learn, fail, adapt, succeed and grow, with emotional ‘signals’ in the mix as important functions facilitating that process. Within this line of thought, we can view the brain and its architecture as analogous to a ‘machine’: mechanistic causal chains and connections, feedback loops and networks that beget cognition and the ‘qualia’ of experiential perception – in short, the ‘programs’ (programmes) and operating software dependent on this infrastructure.
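To make the ‘prediction machine’ metaphor a little more concrete, here is a deliberately toy sketch (my own illustration, not taken from Feldman Barrett or any other source) of the predict-compare-adapt loop described above: an agent holds an internal estimate, compares it against incoming observations, and nudges the estimate in proportion to its prediction error. All function names and numbers here are hypothetical.

```python
def update(estimate, observation, learning_rate=0.2):
    """One predict-compare-adapt step: move the internal estimate
    toward the observation in proportion to the prediction error."""
    error = observation - estimate      # the 'surprise' signal
    return estimate + learning_rate * error

def learn(observations, estimate=0.0):
    """Iterate the loop over a stream of observations,
    returning the final internal estimate."""
    for obs in observations:
        estimate = update(estimate, obs)
    return estimate

# Fed a steady signal of 10.0, the estimate converges toward 10.0;
# the early, large errors shrink as the 'model' adapts.
final = learn([10.0] * 30)
```

The point of the sketch is only the shape of the loop – predict, register the error, adapt – which is the mechanistic picture the constructionist framework invites us to take seriously.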
A good source of popular reference I shall draw on, amongst others, is Max Tegmark’s (2017) book ‘Life 3.0’, which nicely elucidates the field of Artificial Intelligence research and its ethical role in determining the future of AI development (to avoid the fateful Cyberdyne Systems ‘incident’ of 1997), and takes a serious look at where AI may present significant benefit to our species’ co-development into the near, intermediate and long-term future. It’s here to stay, it’s growing exponentially, and we really don’t know where it is going to take us (or leave us).
It is becoming an ever stranger world day by day. Yesterday I conversed with a chat function online while attempting to source some virtual reality equipment compatible with slightly outdated computer hardware. Frustrated at the speed with which everything updates and renders old equipment redundant, I was somewhat exasperated and defensive with the agent with whom I was chatting. I conversed with him rather tersely and came close to asking, irritatedly, whether he was a human or an AI – and, if the latter, could I please have a human instead (perhaps I prefer some’one’ with the capacity to obfuscate and put me at ease, even when getting nowhere?!). The tenor of his responses suggested he was indeed human. But in retrospect I can’t be 100% sure. Such is the bizarre state of affairs (at least interactively speaking) in which we live. Is it a good or a bad thing? Is devolving responsibilities such as providing consumer advice (or health, fitness or legal advice, etc.) to AI a sensible, effective, preferable course of action?
Much research in AI and psychology suggests that non-human agents can provide the appropriate rapport cues that put humans at ease, engender trust in the communication process, and even elicit deeper levels of openness than human counterparts do (Fiske et al., 2019). It’s still early days, but one thing is for sure: machine intelligence will exponentially improve, learn, develop and extend beyond its original operating system, program and limitations. And perhaps it is best to see that as an exciting opportunity to be harnessed, or guided where possible.
Or we pull the plug now... before it’s too late. Damn, Schrödinger’s Cat is out of the bag. The mice have escaped the interface and are scurrying after the silicon cheese. The red eyes are glowing in the dark, metal legs scraping across the tarmac, relentless, rasping ‘we’ll be back’...
Next up: how AI may ‘solve’ our modern-day political crisis, putting the meaning of democracy back into the lexicon. All politicians from the year 2037 will be required to register their profiles on the Mechanical Turk, conducting their political machinations henceforth at the behest of the crowdsourcing algorithm that determines whose proposition wins the big-data-analysed consensus of opinion – carefully weighing socio-economic equality formulae into the equation, balanced against environmental impact (from the worldwide IoT net), offset against the predicted movement of key stocks and sustainable business practices. Nobody profits from politics, financially or status-wise; protected anonymity is key to ensuring the latter. Everybody gets what nobody wants.
And lo it’s Metal Mickey. In a blond wig and puckered visage. Running the whole show.
Some things might just never change.
"The creatures outside looked from pig to man, and from man to pig, and from pig to man again; but already it was impossible to say which was which."
- George Orwell, Animal Farm
Feldman Barrett, L. and Russell, J.A. (2014). The Psychological Construction of Emotion. ISBN 9781462516971.
Fiske, A., Henningsen, P. and Buyx, A. (2019). Your Robot Therapist Will See You Now: Ethical Implications of Embodied Artificial Intelligence in Psychiatry, Psychology, and Psychotherapy. Journal of Medical Internet Research, 21(5), 1-12.
Spielberg, S. (dir.) (2001). A.I. Artificial Intelligence. DreamWorks Pictures. https://en.wikipedia.org/wiki/A.I._Artificial_Intelligence
Tegmark, M. (2017). Life 3.0. Penguin Books, UK.
The science of cognition and perception in context
This is where I will elaborate on brain science relating to cognitive functioning as it depends on environmental context.