“We all go a little mad sometimes”
-Norman Bates, Psycho
You are slowly coming to after a perturbed sleep characterised by restlessness, periods of deep dreamless repose, then frenetic and hyper-real scenarios in which you (delete as appropriate): flew free as a bird through the night sky/confronted family long deceased/turned up for a French exam inexplicably at your primary school bereft of trousers. You are exhausted. But more importantly, you are flummoxed. Your brain feels as though a mini electrical storm is raging inside it, thoughts whirling round untethered like garments on a washing line caught in a passing maelstrom.
Slowly, gradually, stability returns and your bedroom resolves into its mundane but comforting familiarity. You realise you are not back in University digs. You haven’t lost twenty years. You have to get up or you’ll be late for work!!
What has happened? What did your brain get up to in those darkened hours where the cat wandered off and the mice were let loose in the attic?!
It is possible that your brain functioning has regressed into some kind of entropic state...
Entropy is a term associated with physical systems such as those characterised by the second law of thermodynamics (i.e. involving the transfer of energy via heat and ‘work’), and it generally refers to the degree of disorder and ‘randomness’ (uncertainty) of a system. An oft-touted analogy is that a glass tumbling from a table is far more likely to shatter into a multitude of parts, increasing the disorder of the system, than the reverse is to occur (a multitude of shards and dissipated beer re-assembling and returning from the floor to a re-constituted pint upon the table). (Entropy is also often cited as a convincing argument for why time tends to flow forwards rather than backwards – as in episodes of Red Dwarf.)
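The statistical flavour of entropy – the one the ‘entropic brain’ work borrows – can be made concrete in a few lines of code. This is purely my own illustrative sketch, not anything from the papers cited below: Shannon entropy is low when a system sits predictably in one state (the intact pint on the table) and high when probability is spread across many states (the shards on the floor).

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# An 'ordered' system: almost certainly in one state (the intact glass).
ordered = [0.97, 0.01, 0.01, 0.01]

# A 'disordered' system: equally likely to be in any of four states
# (the scattered shards) -- maximal uncertainty about where things are.
disordered = [0.25, 0.25, 0.25, 0.25]

print(shannon_entropy(ordered))     # ~0.24 bits: low uncertainty
print(shannon_entropy(disordered))  # 2.0 bits: maximal for four states
```

The same quantity, applied to distributions over brain states rather than beer glasses, is roughly the sense in which ‘entropic’ brain activity is more disordered and uncertain than ordinary waking activity.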
The concept of entropy has been mooted with respect to the brain activity characterising conscious states (by the likes of Carhart-Harris, 2014; 2018), in the sense that entropic, disordered or asynchronous brain activity (Muthukumaraswamy et al., 2013) appears to mediate what may be referred to as ‘primary’ consciousness. Loosely tying in with a previous piece about recapturing the state of ‘wonder’ that captivates an infant’s experience of the world (based on less fixed and stable representations of a world that has not yet been consolidated into a meaningful and functional mental model), this primary state is said to be that which young children exhibit: a state that precedes our more sophisticated and developed ‘normal waking state’. Mainstream thinking in neuroscience views the brain as a ‘prediction engine’, one which perhaps conforms to the ‘free-energy principle’ espoused by Karl Friston (2010). Everything we do and perceive is, in effect, based upon predictions from our internalised model of the world, which we match (confirming or disconfirming) against signals returning up through our sensory pathways. Mismatches consequently result either in rejection (fixed thinking) or in adaptation and refinement of the model (flexible thinking).
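Friston’s formalism involves heavy mathematics, but the loop described above – predict, compare against incoming signal, update the model – can be caricatured as a simple prediction-error update. The function name, the scenario and the learning-rate scheme here are my own illustrative assumptions, not Friston’s actual equations.

```python
def update_belief(belief, observation, learning_rate):
    """Nudge an internal estimate towards what the senses report.

    prediction_error is the mismatch between model and world;
    learning_rate caricatures fixed (~0) vs flexible (~1) thinking.
    """
    prediction_error = observation - belief
    return belief + learning_rate * prediction_error

# The model predicts the room is 20 degrees; the senses keep saying 25.
belief = 20.0
for _ in range(10):
    belief = update_belief(belief, 25.0, learning_rate=0.3)

print(round(belief, 2))  # ~24.86: the model has adapted towards the evidence
```

With the learning rate at zero, the belief never moves however loudly the senses protest – a toy version of ‘fixed thinking’ refusing to update the model.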
Of further interest is the contention, from the likes of Carhart-Harris’s research into psychedelic compounds impacting upon brain function, that such ‘entropic signatures’ are observed in brain activation under exposure to substances including psilocybin (magic mushrooms), LSD and DMT (all of which bind to serotonergic receptors throughout key brain structures strongly involved in the generation of consciousness). In effect, the concomitant ‘asynchrony’ observed in brain wave activity appears to disrupt or ‘knock out’ ‘ordinary’ consciousness. I like to refer to this as ‘defibrillating’ cognition (perhaps bastardising the term’s true definition – technically ‘stopping’ the rhythm and shocking it back into regularity – but it feels useful nonetheless), in the sense that by initially shocking the brain into an arrhythmic state, the system must compensate and bring things back ‘online’, i.e. by re-establishing the ‘natural’ rhythm that reconstitutes consciousness to its functional, waking state. And by doing so, having shaken things up, when they fall back down ‘to earth’ something has changed, improved, resettled. Sometimes (frequently) it is very difficult to shake our fixed ideas and break old habits and patterns of thinking, and a shock is necessary to allow updating of the model.
It is argued that this is what happens when we are asleep and enter the hectic, chaotic, entropic state experienced when we dream. You might like to interpret groundbreaking significance into your dreams as messages from your unconscious providing insight whilst you slumber. But in some ways, it’s simply indicative of your brain shaking things up in order to allow cognitive structures – the model that helps you make sense of and successfully make your way through the waking day – to reconstitute and update with additional information, as well as slough off redundant information gathered haphazardly whilst ‘awake’. As Norman Bates once intoned, ‘we all go a little mad sometimes’, and indeed that appears to be what is happening whilst we sleep. Psychosis is experienced, but fortunately not acted out, thanks to biological mechanisms that prevent our motor functions playing out our fantasies, whims and terrors in the depths of the night.
All systems require moments of downtime in order to repair, review and undergo maintenance. It appears to be no coincidence that the areas manifesting this ‘entropic’ pattern of brain activation under ‘primary’ consciousness, as experienced in REM sleep and psychedelic states, include the default mode network: the neural basis of ‘selfhood’. Going to sleep, or ‘out of one’s mind’, allows an escape from ‘self’. Perhaps the most blessed holiday one can take. (Even though you may go to Benidorm, you must take that excess baggage of the self with you, and pay for the extra kilos – you can’t escape it that easily.)
But by taking the default mode offline, you can allow it to defibrillate, shock it out of (and back into) rhythm, then let the dust settle, let the self re-integrate, renew. A new you!
As I have previously opined, focused, absorbed, goal-directed behaviour that activates the ‘task positive’ network gives rise to a potentially highly productive state, while at the same time deactivating components of the default mode network, including its ‘self-indulgent’ facets. Consequently, one takes a break from the self, so to speak, and when the self reconstitutes thereafter, there is ‘self-improvement’: one has learned from the experience, one’s brain topology (DMN) changes, and one’s capacity for future performance is enhanced (towards a more optimised functional state) – along with the potential capacity to be more in control of these networks (towards self-mastery!).
I will explore this further, extending the model into the sphere of adventure experiences: how we can utilise the ‘need to accommodate’ perceptual cues from an overwhelming environment to shock us into this ‘defibrillated’, expansive, and progressive state. Refining our models and giving rise to a more motivated and adaptive mindset.
So welcome entropy, revel in it. When you awake and your brain feels scrambled, rejoice! For you ought to be acknowledging that you have improved during the night! Your server has rebooted, the system upgraded.
And the root of the term entropy?
It comes from the Greek for ‘transformation’....!
Carhart-Harris, R.L., Leech, R., Hellyer, P.J., Shanahan, M., Feilding, A., Tagliazucchi, E., Chialvo, D.R. and Nutt, D. (2014). The entropic brain: A theory of conscious states informed by neuroimaging research with psychedelic drugs. Front Hum Neurosci 8:20
Carhart-Harris, R (2018). The entropic brain – revisited. Neuropharmacology. 142:167-178
Friston, K. (2010). The free-energy principle: a unified brain theory? Nature reviews. Neuroscience 11, 127-38
Muthukumaraswamy, S.D., Carhart-Harris, R.L., Moran, R.J., Brookes, M.J., Williams, T.M., Errtizoe, D., Sessa, B., Papadopoulos, A., Bolstridge, M., Singh, K.D., Feilding, A., Friston, K.J. and Nutt, D.J. (2013). Broadband cortical desynchronization underlies the human psychedelic state. J Neurosci 33(38):15171–15183
Allan, J.F., McKenna, J. and Hind, K (2012). Brain resilience: Shedding light into the black box of adventure processes. Australian Journal of Outdoor Education, 16(1), 3-14
Andrews-Hanna JR. (2012). The brain's default network and its adaptive role in internal mentation. Neuroscientist. 18(3):251-70.
Bressler, S.L. and Menon, V. (2010). Large-scale brain networks in cognition: emerging methods and principles. Trends in Cognitive Sciences 14 277–290
Bruya, B. (Ed.). (2010). Effortless attention: A new perspective in the cognitive science of attention and action. Cambridge, MA, US: MIT Press
Brymer, E., & Houge Mackenzie, S. (2017). Psychology and the extreme sport experience. In F. Feletti (Ed.), Extreme sports medicine. (pp. 3-13). Springer.
Castella, J., Boned, J., Mendez-Ulrich, J.L. and Sanz, A. (2018). Jump and free fall! Memory, attention, and decision-making processes in an extreme sport. Cognition and emotion. May. 1-27.
Craig, A. D. (2002). How do you feel? Interoception: the sense of the physiological condition of the body. Nat. Rev. Neurosci. 3, 655–666.
de Groot, J.H.B., Beetsma, D.J.V., van Aerts, T.J.A., le Berre, E. , Gallagher, D. , Shaw, E., Aarts, H. and Smeets, M.A.M. (2020). From Sterile Labs to Rich VR: Immersive Multisensory Context Critical for Odors to Induce Motivated Cleaning Behavior. Behavior Research Methods (in press)
El-Deredy et al. (2017). Neuroengineering a device to improve the control of worker’s attention in high altitude mines. FONDEF (Chile) research grant 2017-2019 [~£140,000 funding awarded]
El-Deredy, W., Weinstein, A, and Gallagher, D (2018). Human cortical responses to stress. University of Valparaiso, Chile. Talk at Chilean Society for Neuroscience
Elton, A. and Gao, W. (2015). Task-positive Functional Connectivity of the Default Mode Network Transcends Task Domain. Journal of Cognitive Neuroscience 27:12, pp. 2369–2381
Gallagher, D (1995). Cognitive-Induced Analgesia: Attentional Processes and Meditative Chanting. MSc thesis, Lancaster University.
Gallagher and El-Deredy (2009, 2010, 2014, 2018). Various field visits to high altitude (3000-5000m) mountain ranges to collect pilot data on cognitive-physiological functioning.
Hamilton, J.P., Farmer, M., Fogelman, P. and Gotlib, I.H. (2015). Depressive Rumination, the Default-Mode Network, and the Dark Matter of Clinical Neuroscience. Biol Psychiatry. 78(4): 224–230.
Harrivel, A.R., Weissman, D.H., Noll, D.C. and Peltier, S.J. (2013). Monitoring attentional state with fNIRS. Frontiers in Human Neuroscience. 7, 861, 1-10
Hockey, G. R. J. (2011). A motivational control theory of cognitive fatigue. In P.L. Ackerman (Ed.), Cognitive fatigue: multidisciplinary perspectives on current research and future applications (pp. 167-188). Washington, DC: American Psychological Association
Lin, P. Yang, Y., Gao,J., De Pisapia, N., Ge, S., Wang, X., Zuo, C.S., Levitt, J.J., & Niu C. (2017). Dynamic Default Mode Network across Different Brain States. Scientific Reports volume 7, Article number: 46088
Mittner, M., Hawkins, G.E., Boekel, W. and Forstmann, B.U. (2016). A Neural Model of Mind Wandering. Trends Cogn Sci. 20(8):570-578.
Mooneyham, B.W. and Schooler, J.W. (2013). The Costs and benefits of Mind-Wandering: A Review. Canadian Journal of Experimental Psychology.
Moran, J.M., Kelley, W.M. and Heatherton, T.F. (2013). What can the organization of the brain’s default mode network tell us about self-knowledge? Frontiers in Human Neuroscience. 7, 391, 1-6
Nann, M., Cohen, G., Deecke, L. & Soekadar, S.R. (2019). To jump or not to jump - the Bereitschaftspotential required to jump into 192-meter abyss. Scientific Reports, volume 9, Article number: 2243
Paulus, M.P., Flagan, T., Simmons, A.N., Gillis, K., Kotturi, S., Thom, N., Johnson, D.C., Van Orden, K.F., Davenport, P.W. and Swain, J.L. (2012). Subjecting Elite Athletes to Inspiratory Breathing Load Reveals Behavioral and Neural Signatures of Optimal Performers in Extreme Environments. Plos One, 7, 1-11
Paulus, M.P., Potterat, E.G., Taylor, M.K., Van Orden, K.F., Davenport, P.W., Bauman, J., Momen, N, Padilla, G.A. and Swain, J.L. (2009). A Neuroscience Approach to Optimizing Brain Resources for Human Performance in Extreme Environments Neurosci Biobehav Rev. 33(7): 1080–1088.
Posner, J., Russell, J.A. and Peterson, B.S. (2005). The circumplex model of affect: An integrative approach to affective neuroscience, cognitive development, and psychopathology. Dev Psychopathol. 17(3): 715–734
Reilly, T., Gallagher, D., El-Deredy, W. and Blanchette, I. (2005). An ergonomics model of human performance under environmental stressors: Role of executive processes and the pre-frontal cortex. BBSRC research grant application
Spreng, R. N. (2012). The fallacy of a “task-negative” network. Frontiers in Psychology, 3, 14
Taylor, L., Watkins, S.L., Marshall, H., Dascombe, B.J. and Foster, J. (2016). The Impact of Different Environmental Conditions on Cognitive Function: A Focused Review. Frontiers in Physiology. 6, 372, 1-12
Tommerdahl, M., Lensch, R., Francisco, E., Holden, J. and Favorov, O. (2018). The Brain Gauge: a novel tool for assessing brain health. Journal of Comprehensive Integrative Medicine. 2, 1, 1-52
Uddin, L.Q., Kelly, A.M., Biswal, B.B., Castellanos, F.X. and Milham, M.P. (2009). Functional connectivity of default mode network components: correlation, anticorrelation, and causality. Hum Brain Mapp. 30(2). doi:10.1002/hbm.20531.
Vanhaudenhuyse, A, Demertzi, A., Schabus, M. and Noirhomme, Q. (2011). Two Distinct Neuronal Networks Mediate the Awareness of Environment and of Self. Journal of Cognitive Neuroscience 23:3, pp. 570–578
“To see a World in a Grain of Sand
And a Heaven in a Wild Flower,
Hold Infinity in the palm of your hand
And Eternity in an hour.”
― William Blake, Auguries of Innocence
Take a look at a small child as you wave a shiny object in front of her face. Or indeed any object, really, if she’s a baby. What inference can we make about what she sees, what she experiences? What she thinks?
It’s distinctly likely that the child is in a perpetual state of wonder! What is this (not dagger) that I see before me (paraphrasing Macbeth for literary kudos)? With underdeveloped cognitive and perceptual faculties, the whole wide world is a vastly novel experience. The little brain must be overwhelmed with the intense stimulation of continually seeing new stuff, figuring out what it is in relation to old stuff (not that there’s much of that). But it’s all fresh: a perspective on life based on an exploratory state of being, uncluttered by preconception.
At heart this is what we might, with a little imagination and a desire to refresh our own perspective, seek to attain as world-weary adults: a worldview driven by wonder, using new eyes to see the richness around us. A consumerist society, by contrast, attempts to delude us with colourful stimuli that provoke brief spikes of interest, capitalising on that frozen instant when the hand is in the pocket and reflexively thrusts forth to part with hard-earned cash (thereby obtaining the next useless thing that ultimately fails to create longer-term satisfaction).
Art at least taps more genuinely into this aspirational state of being, stimulating the senses, attempting to shift the focus and open up perspective to new ways of looking at the world. Modern technologies such as Virtual Reality may well offer another avenue to take us out of the routine, mundane mode of operation and present an opportunity to view the world much as a baby does. Psychedelic drugs or other ‘altered states’ technologies and techniques may defibrillate our cognitive faculties in a more violently immediate fashion and reduce us to a more ‘primary state of consciousness’ (Carhart-Harris, 2018). But nature is the most readily available resource at our disposal perhaps to evince this capacity for wonder.
Look upon the Grand Canyon for the first time and can you fail to be seized in near paralysis at the magnitude of it, the resplendent grandeur? A vast palette of colours, shapes, scents, depth, distance, along with a failure to grasp how this has come into being, what it means...
That is because you are in a state of ‘awe’. And this is a special state that can be evoked under the right conditions, the right (environmental) context. Perhaps it is an evolved predisposition to cast off the shackles of outdated, routinised ways of thinking. It is a way of shocking the system into taking stock of its limitations and realising that there are new things out there to experience – things which reveal that the boundaries we set are illusory. There is always something else beyond the horizon. This is something we forget very quickly as we form ever tighter walls around ourselves, building a protective shell in which to hold off the hordes of new ideas that threaten to undermine the cherished ‘stability’ of ‘in the box’ thinking.
Awe is an area of research amongst cognitive scientists that has its roots in a fundamental cognitive/perceptual ‘need to accommodate’ (Guan et al., 2018 – there’s that pesky posterior cingulate cortex again!). We form constructs, conceptual hierarchies of ideas about objects, categorising and itemising the world into definable chunks. When we encounter new chunks, new items, we strive to fit these into the boxes we have already labelled upon mental shelves. If an item doesn’t ‘fit’, it is subsumed into another ‘near fit’ box, or perhaps discarded onto the junk pile. But occasionally we find ourselves in a place where the whole environment so overwhelms us that the system cannot cope. It cannot find any purchase to streamline and categorise what it beholds. Perhaps this might cause ‘meltdown’ in some, but generally, more hopefully, it provokes a ‘need to accommodate’. The cognitive and perceptual boundaries stretch, the brain becomes more plastic, knowledge structures bend or break, and new concepts, ideas and experiences update the model and ‘reboot’ the server!
For a moment we perceive as babies do, letting go our firm grip on what we believe reality should be. Throwing ourselves at the mercy of the vista before us. Giving over to a more vulnerable state of perception, of being. The brain grows, as do you. Nature has gifted us this medium to upgrade our perceptual apparatus. It does us the world of good to occasionally (even regularly!) seek out that state of wonder, the provocation of awe. It is the panacea for a world embittered by the familiar, insulated from voluntary vulnerability. We all need to see the world through the infant’s eyes once in a while.
I will continue to develop ideas around awe, principles of which ‘accommodate’ into my wider model of brain function, adventurous experiences, development and ‘mastery of’ the self, and how virtual reality can be used as a tool to facilitate greater understanding of the underlying processes that govern how we may refresh our views of the world.
For an earlier take on this check out a previous piece I wrote, suggesting, as with the current appetite for personal training regimes, an awe workout.
Carhart-Harris, R (2018). The entropic brain – revisited. Neuropharmacology. 142:167-178
Guan, F., Xiang, Y., Chen, O., Wang, W. and Chen, J. (2018). Neural Basis of Dispositional Awe. Front. Behav. Neurosci., (12) 209, 1-7
"A man's got to know his limitations."
- Inspector Harry Callahan - 'Dirty Harry'
How do we set limits in our world? We might purport to be free-thinking, expansive in worldview, and capable of achieving goals that stretch our capacities to grandiose ambition...
But is this wishful thinking? Are we deluding ourselves?
The answer is most likely yes. One might say, well, that’s fine: it’s better to have a stretching and unrealistic view of our own capabilities, as this can spur us on to achieve the things we aspire towards in life. But, thinking about it, are we not then just like the denizens of The Matrix, going about our days in a blithely complacent mode of operation, content to ‘know’ we have the potential to achieve whatever we want, whilst deluding ourselves about the extent of this capacity? In effect, this unfounded belief that we can reach out and extend our actions unbounded in the world may in itself be limiting our actual potential...
To feel as if one can achieve anything can foster complacency that actually prevents doing anything about it. And this, one may argue, rests in setting boundaries for ourselves that are actually constrained by a self-satisfied sense that we can do whatever we want, go where we please, achieve our dreams. By virtue of ‘knowing’ that we can take that step whenever we want. Tomorrow...the next day...at some point. Is it in effect tantamount to 'confined optimism'? (Being overly optimistic may be as deleterious to goal achievement as being pessimistic and negative...so where's the balance, the optimal point where skills and challenges come together to prompt 'flow'?)
Might it be more productive and satisfying to be realistic and look more closely at what our boundaries actually are, in order to then define where we can get to and where we might aspire to go further? How far does the range of thought extend? I wonder if anyone has tried to quantify this, and relate it to ‘physical terms’? – a useful ‘thought experiment’ in itself! Much is said of the freedom to think infinitely, and the beauty of having brains that allow us to wander at will in realms of imagination that have no ‘true’ limitations. To the very edge of the known universe...and beyond! But firstly, perhaps we should consider how far we can actually think in those terms. And secondly, to what extent do we exercise that capability? (As an aside, and as I have begun to speculate elsewhere, consider how the evolution towards superintelligent AI may unseat our own place at the head of the table as dominant species, by simple virtue of our incapacity to imagine where the limits of thought may extend to in ‘beings’ with exponentially greater capacity for thought! There are simply things we cannot think about. Much as we can’t imagine innovations that have yet to take place!)
Are our thoughts bounded by the mundane borders of our own regional concerns? Do we simply rove within stepping distance of our own cognitive domiciles or do we seek to search beyond, to thrust out into the unknown? An interesting experiment would be to devise a protocol for measuring the extent that people think and to equate that through some normalised attribute to distance in ‘real’ terms. It would be interesting to devise characteristics that categorise different types of thinkers and relate to demographic variables. Does the ‘professorial’ typology of thinker, as a rule, go beyond ‘typical’ boundaries compared to the blue-collar citizen (being stereotypical for a moment to illustrate an idea)? Would such a metric require homogenisation of data to account for other variables in order to ‘standardise’ a measure relative to different personality/lifestyle characteristics? Does the ‘artist’ head the pack in striding out beyond the known borderlands to map new territories?
An intriguing idea, with the development of ever more capable neuroimaging technologies and theoretical frameworks, is the possibility that measures of ‘expansive cognition’ may become more viable. But as with the focus over the years on, for instance, psychedelic or ‘altered states’ phenomena in pursuit of a ‘unified theory of consciousness’ (e.g. Kingsland’s recent book tying together various strands of research in this area, 2019), is it possible that ‘expanding the mind’ through various techniques simply throws ‘ordinary’ functional brain status into ‘asynchrony’, reducing the stability and robustness of cognitive functioning and moving the system closer to (and over the edge of) an entropic state (towards chaos and randomness – perhaps unbounded, but largely ‘meaningless’?; Carhart-Harris, 2018)? By ‘expanding the mind’ do we risk simply placing our cogitative capacities into a tighter box, one that turns off critical capacity and instead revels in a further delusion of being ‘set free’?!
Another way to think of this is in terms of the perceptual limitations that we inevitably have as a consequence of brain structure, evolutionary development, contextual situation, and habituated tendency to rely on our sensory-perceptual apparatus as determined through long practice and automated routine. What are the perceptual boundaries that we are constrained by and to what degree can we, through practice, skill development, broadened mindset, push the boundaries further?
This is a big and fascinating question, and many current books by prominent neuroscientists, cognitive psychologists and philosophers detail the illusions and experimental scenarios that show the fallibility of our ‘equipment’. (See the likes of Eagleman, Hoffman, Lotto, Dennett, Macknik and Martinez-Conde, Chabris and Simons etc. in the references section.) A now famous example concerns a gorilla and a basketball game...
As an aside, before the ‘invisible gorilla’ went global, I was sat in an auditorium at a provincial campus in Toronto with perhaps 50 academic types. A chap gave a talk in which he referenced a study from the 80s involving a slightly ethereal lady carrying an umbrella who wandered across a basketball scene (we saw the grainy video of this). Having just shown this video, he then asked the audience’s indulgence for a subsequent video, in which a group of young people amateurishly tossed basketballs to one another in a circle within a fairly tight frame. At the end of this the speaker asked the audience to recount how many passes of the ball had been effected, and whether anyone had noticed anything else in the scene. I don’t recall anyone saying they had. He replayed the scene. A gorilla indeed appeared and beat his chest in the midst of the action. The audience provided a mix of responses, from surprise to cynicism that this was a different video. The point is that the audience comprised some of the most eminent scientists from the fields of visual perception, cognition and consciousness research. The Invisible Gorilla demonstration was born there and then, launched to the world, and had its effect even despite ‘priming’ as to its nature beforehand... We believe we see the totality of our environment, that it is ‘there to see’ and we have only to ‘look’. But how deluded we are to the blind spot – not only the anatomical one on the retina itself, but also the one in our own view of reality!
Of course, over the passage of time it’s possible that I have slightly reconstructed the temporal order of this sequence of events erroneously and in fact the ‘Umbrella woman’ came afterwards, to reference the inspiration for the Gorilla! But the point remains, an audience of ‘experts’ in visual perception were taken by surprise by this facet of boundaried perception that is effectively wired into our brains.
So to return to the original point, do we appreciate the boundaries of our own perceptions, thought-streams and worldviews? Or do we labour under the misapprehension that we have as much capacity (unlimited) as we seek to employ? Perhaps we do have this ‘infinite capacity’ to think beyond any boundaries... And with that, perhaps, we ALL have the capacity to go beyond our limitations (in a physical sense). Anyone can get off the couch and run a marathon if the mood takes them? Right? Especially if their pants are on fire... Realistically, probably not. Sure, you can get up off the couch and implement an incremental programme of exercise that suggests you can get to that finish line eventually. But you’ll never really be able to predict whether you are going to collapse from a coronary or simply give up when push comes to shove. Still, exercising one’s faculties seems to be as much the point here as reaching the end destination. And likewise, thinking ‘beyond boundaries’, or even towards the boundaries, most probably requires at least putting some exercise in, however incremental, to develop that capacity. After all, it’s distinctly possible that our cognitive functions depend on the regular exercising of the underlying neural mechanisms, on the management of blood flow and metabolic resources to sustain this infrastructure, and on the robustness and flexibility of our mental muscles!
So with that, have a think about where your boundaries lie – don’t just assume your capacities are infinite simply because no limit is in view. Maybe redefine your boundaries as the basis for pushing them. But importantly, exercise your mental faculties: think grand thoughts but also smaller, more realisable ideas, and from there build them into greater ambitions. Do this regularly, strengthen your cognitive musculature. Begin to redraw your world afresh, instead of sitting comfortably back into the confines of your virtual and apparently limitless box! From this can only grow vision, capability, and enthusiasm to explore the boundaries of the known, and redefine what is the unknown!!!
Sources of reference:
Carhart-Harris, R (2018). The entropic brain – revisited. Neuropharmacology. 142:167-178
Chabris, C. and Simons, D. (2010). The Invisible Gorilla. Harper Collins
Dennett, D. C. (1991). Consciousness Explained. Penguin Books
Eagleman, D. (2011). Incognito: The Secret Lives of the Brain. Canongate Canons
Hoffman D. (2019). The Case Against Reality: How Evolution Hid the Truth from Our Eyes. Allen Lane
Kingsland, J. (2019). Am I dreaming? The New Science of Consciousness and How Altered States Reboot the Brain. Atlantic books
Lotto, B. (2017). Deviate: The Science of Seeing Differently. W&N
Macknik, S. and Martinez-Conde, S. (2012). Sleights of Mind: What the neuroscience of magic reveals about our brains. Profile Books
Beyond simplicity – the increasingly complex role of the Default Mode Network in goal-directed behaviour
“Twas brillig, and the slithy toves / Did gyre and gimble in the wabe.”
― Lewis Carroll, Jabberwocky
Whilst simplicity is sometimes a desirable state of affairs when presenting elaborate concepts, the Occam’s Razor approach sometimes falls short of the real story. This is an issue I have always had with conveniently distilled principles, and with systems of ‘thought’ which generalise such principles in appealing to human nature’s desire for answers! Life is rarely simple when one thinks carefully about it! Having said that, a good starting point is to make things black and white, to get a fundamental idea across, and to accept its generality. Then, once the penny has dropped, begin to dismantle it and pick it apart, to get down to finer granularity and more specific meaning...
So far, the ‘anti-correlated’ networks model espoused throughout my writings serves as a useful concept for understanding how the brain’s machinery acts in apparent opposition. Much like a computer’s core components being in a binary state of on/off, the brain’s functions act antagonistically to subserve one need or another, not holding both in action at once. Likewise with muscles that flex or extend: equilibrium doesn’t generate results; it’s the dynamic shift of balance that creates the impetus and overcomes inertia. (Quantum physics aside for now, with respect to superposition and the capacity to potentially be active in all possible states at once...)
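The ‘anti-correlated’ idea can be put in quantitative terms: two signals that trade activity off against one another show a strongly negative Pearson correlation coefficient, which is how the relationship is typically measured in connectivity studies. The toy signals below are my own stand-ins for ‘DMN’ and ‘task-positive’ time series, not real imaging data.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy 'DMN' activity waxes exactly as a toy 'task-positive' signal wanes,
# plus a small deterministic wobble standing in for measurement noise.
t = [i / 10 for i in range(100)]
dmn = [math.sin(v) for v in t]
tpn = [-math.sin(v) + 0.01 * (i % 3) for i, v in enumerate(t)]

print(round(pearson_r(dmn, tpn), 3))  # close to -1: strongly anti-correlated
```

A coefficient near -1 is the ‘binary on/off’ picture in its purest form; the interesting empirical question, taken up below, is where real network dynamics depart from it.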
Extending notions such as Occam’s Razor to cut through to the likely ‘truth’ of matters in fundamental understanding, I tend to adopt (and encourage others to do so) the tenet that, if something seems simple to understand, and makes perfect sense, it’s probably not actually accurate. It is far too convenient to be grasped as such. The notion of two networks that switch on and off falls into this category. It is an elegant idea, and nicely slots into a narrative of the universe that brings light and darkness into holistic unity, yin and yang, male and female. Opposing forces that balance and harmonise, creating energy and action in an eternal dance of momentum and interaction.
So for a while now, having established and proclaimed this principle of brain mechanics, I have yearned to dismantle the proposition: to find greater complexity in the simplistic idea, and to begin to specify where the general principle holds true, where it breaks down, and where it yields further sub-divisions of principles that become usable beyond the generic.
A paper in 2012 by Nathan Spreng questioned the ‘dismissal’ of the default mode network as ‘task negative’. That is, it has by definition been labelled ‘default’ and associated with a resting state that has no bearing on task performance. In opposition to the so-called ‘task positive’ network (also known as the Central Executive Network, and referenced in terms of ‘fronto-parietal’ functioning), it appears to ‘switch off’ when the CEN is ‘on’, and, by my own hypothesising, represents a potential distracting influence on goal-directed tasks that could measurably impair task performance. (A signature of poorer task performance would indicate that DMN activity, or fluctuations in its dynamic functional connectivity, is ‘encroaching’ on the stable functional connectivity of the Central Executive Network that facilitates focus on task.) Given that the DMN is said to be a key hub in which the sense of ‘self’ is constructed, it stands to reason that certain elements of goal-directed functioning will require a self-referential perspective in order to perform successfully. This is particularly likely where performing a task requires drawing on prior experience: memories of what ‘I’ have encountered previously. This has indeed been shown to be true with respect to accessing autobiographical memories that help ‘solve’ a problem (Andrews-Hanna, 2012; Spreng et al., 2009; Buckner & Carroll, 2007). So the question becomes one of narrowing down the role of the DMN in cases where this important integrative hub (with, let’s remember, a significantly high metabolic demand for resources) can contribute relevant processing power to on-task performance.
Elton and Gao (2015) began to address this question and to stimulate further research into the flexible role of the default mode network in goal-directed behaviour, dispelling the notion that it is only involved in task-irrelevant thoughts and ‘meaningless’ distraction. As an integrative hub for multiple sources, including multisensory information, and for formulating a sense of ‘self’, this is clearly an important functional network with much bearing on one’s capacity to successfully negotiate life’s challenges! My earlier piece indicated that it will already have an important role in consolidating information following task performance, and in integrating experiences into an updated model of the world and of where ‘I’ fit in. So we can already surmise that, whilst performance on a task ‘in the moment’ might be the key priority (for survival?), requiring the DMN ‘as a whole’ to be ‘switched off’ so as not to interfere with current priorities, a key component of any successful and evolutionarily prosperous system is to take stock of its achievements and failures, evaluate lessons learnt, and store this knowledge to enable future performance, efficiency and optimal operational capacity!
Research such as Elton and Gao (2015) describe indicates that the DMN does in fact show flexible functional connections even when performing ‘on task’, with increased connectivity with regions involved in executive control. This is task specific, and applies to internally oriented as well as externally oriented tasks. (Despite a general consensus that on externally focused tasks the DMN shows a pattern of reduced gross activation.) Elton and Gao postulate that the DMN serves a dual purpose: ‘broadband’ monitoring of both the internal and external worlds “to maintain self-consciousness and vigilance during this state (Gilbert et al., 2007; Raichle et al., 2001)”, but also ‘uncoupling’ of regions not pertinent to external task-orientation where required. Given this monitoring function, the most consistently preserved coupling was observed with regions of the salience network, including the right inferior frontal gyrus, which is pertinent in detecting salient stimuli. Likewise, in emotional tasks, with the anterior insula, where a rapid response to task-relevant features was facilitated.
So now we have a sense that this integrative hub can shift its processing power and capacity between task-specific coupling/uncoupling (particularly when the task becomes more externally oriented, and therefore favours disengagement from more ‘self-referential’ processes) and more ‘broadband’ monitoring of the world (both external and internal), facilitating the flagging of salient stimuli via the salience attentional network, which in turn can enable switching between the ‘task positive’ (CEN) and default mode networks more generally. As such, this represents a window into how the brain seeks to optimise its capacity to switch functions ‘on and off’, as it were, or to redistribute resources and be more selective in turning relevant aspects ‘off’ as necessary.
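For readers who like to see the mechanics, the ‘anti-correlation’ at the heart of this model is, in analysis terms, simply a negative correlation between regional activity time series. A minimal sketch follows, using synthetic signals rather than real BOLD data; the ‘CEN’ and ‘DMN’ series, sampling rate and noise level are all invented for illustration:

```python
import numpy as np

def functional_connectivity(ts_a, ts_b):
    """Pearson correlation between two regional activity time series."""
    return float(np.corrcoef(ts_a, ts_b)[0, 1])

# Synthetic illustration: a slow 'task engagement' signal drives the
# CEN series, while the DMN series carries the inverted signal. Both
# are corrupted with independent noise.
rng = np.random.default_rng(42)
t = np.linspace(0, 60, 300)           # one minute sampled at 5 Hz (arbitrary)
task = np.sin(2 * np.pi * t / 30)     # slow oscillation between brain states
cen = task + 0.3 * rng.standard_normal(t.size)
dmn = -task + 0.3 * rng.standard_normal(t.size)

r = functional_connectivity(cen, dmn)
# A strongly negative r is the statistical signature of
# 'anti-correlated' networks.
```

In this toy setup the expected correlation is roughly −0.85; real resting-state anti-correlations are far weaker and depend heavily on preprocessing choices.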
The question that falls to my own research aspirations involves how these networks subserving aspects of ‘self’ can, in adventurously challenging contexts, threaten to undermine the efficiency and optimality of brain functioning and disrupt task performance. Particularly where on-task performance may have life-threatening consequences! And likewise, how certain individuals are better able to make decisions and perform ‘on-task’ in the moment of requirement by virtue of an underlying capacity to turn the relevant regions ‘off’ whilst still actively monitoring the environment (both external and internal). And, from that, how other individuals less capable of governing their functioning in this way can learn to harness more control over this optimal state of being. Finally, with a deeper understanding of the functional role of, and connectivity between and within, these key brain networks, we can begin to appreciate how concepts such as ‘self’ have a basis in targetable and measurable neural activity, and how the system synthesises experiences and goal-directed behaviours into an updated and enhanced model of ‘self’ that is the basis for improved performance and sense of purpose in the future!
Andrews-Hanna, J. R. (2012). The brain’s default network and its adaptive role in internal mentation. Neuroscientist, 18, 251–270.
Buckner, R. L., & Carroll, D. C. (2007). Self-projection and the brain. Trends in Cognitive Sciences, 11, 49–57.
Elton, A., & Gao, W. (2015). Task-positive functional connectivity of the default mode network transcends task domain. Journal of Cognitive Neuroscience, 27(12), 2369–2381.
Spreng, R. N. (2012). The fallacy of a “task-negative” network. Frontiers in Psychology, 3, 145.
Spreng, R. N., Mar, R. A., & Kim, A. S. (2009). The common neural basis of autobiographical memory, prospection, navigation, theory of mind, and the default mode: A quantitative meta-analysis. Journal of Cognitive Neuroscience, 21, 489–510.
Continuing the line of thought that the brain exhibits differential network activation and functional connectivity between task performance and resting state, there is recent evidence to support the notion of ‘neurogenesis’, or ‘brain growth’. While it makes sense to think that by doing some productive cognitive ‘work’ we will learn, strengthening the connections in the brain, it is an exciting development to be able to see where that growth might have taken place. Fitting with the large-scale networks model, Lin et al. (2017) have observed changes in the ‘topology’ of key networks subsequent to performing a task. Focusing on the default mode network once again, measures of this ‘resting state’, and comparison of network connectivity prior to, during and immediately following a task, can reveal marked differences that signify change, and ‘growth’, has taken place. This may be a key to unlock markers of ‘self-development’!
I have been arguing, perhaps with a little negative bias (for the sake of simplicity), that the DMN, with its ‘selfish thinking’ traits, represents a barrier to productivity, a proclivity towards distraction, and an obstacle to optimal performance. But within this argument, which perhaps tends towards throwing out the baby with the bathwater, the self gets a bad rap. The self has of course many facets, and there is no escaping it: from setting goals to ‘improve the self’, to ‘mastering the self’, to using self-reflection as the basis for progression in life towards goals and achievements. So we should use this moment to (self-)reflect on the utility of a brain network that puts self to the forefront. It has its purpose! And its time and place.
Certainly, overactivity in the DMN can lead to negative consequences and dysfunction (mental health issues), and perhaps its contents can inhibit optimal (task-positive) performance by diverting resources at key moments away from focusing on the task at hand. But significant components of cognition and behaviour rely on a self-referential perspective, be it drawing on prior experience (memories) or solving problems that take into account one’s own position in space, in order to effect appropriate responses when navigating the world and making relevant decisions. And it makes sense to think that once a task has been performed, a necessary further ‘task’ relies on subsequent processing to integrate the lessons of that ‘experience’ into self-referential brain structures, for later utility in solving other problems and performing better on future tasks. In short, this provides a repository of heuristics that one can draw on to short-cut behaviours and decision-making, consolidating the brain’s status as a prediction machine that acts on prior knowledge in order to adaptively update its models of the world. ‘I’ have experienced such and such before, so now ‘I’ can use that experience as a short cut to rapid future decision-making and problem-solving. Having experiences can therefore shore up one’s surety (confidence) in acting decisively next time. It is speculated here that the self has some bearing on this, with personal significance perhaps helping to boost the instinctual capacity for such speedy decision-making...
From this standpoint, and following Lin et al.’s (2017) line about reconfiguration* of network topology subsequent to task performance (as differentiated from topology prior to task performance), it figures that (positive) changes should be observable in the default mode network, and that such modifications represent a re-organisation of the ‘self-referential’ processing stream that contributes to improved performance on a future occasion. In short, ‘self-development’ is manifest in this ‘self’-pertinent brain network. Further research will seek to build on the notion that the default mode network may provide a functional imaging (and neurocognitive) signature of an enhanced ‘self’, predicting better performance on cognitive tasks, and a capacity to intrinsically modulate the brain’s own ability to efficiently turn pertinent networks on and off at the appropriate time to effect optimal performance.
In short, the ‘self’ first deliberates prior to a task, perhaps uncertain, speculative. Then the self ‘retreats’ whilst performing said task. Finally, the self reconstitutes in the aftermath, re-integrates the new experience, and updates its model for future reference. Growth has been neurally effected...
*“A possible explanation for our finding is that the DMN has been modulated by the prior cognitive task, which would shape the DMN topology properties and lead to the development of a new DMN organization”
Lin, P., Yang, Y., Gao, J., De Pisapia, N., Ge, S., Wang, X., Zuo, C. S., Levitt, J. J., & Niu, C. (2017). Dynamic default mode network across different brain states. Scientific Reports, 7, 46088.
Follow my science based tweets on: https://twitter.com/CognitvExplorer
“The ships hung in the sky in much the same way that bricks don’t.”
― Douglas Adams, The Hitchhiker’s Guide to the Galaxy
Slartibartfast hails from the planet Magrathea, being a designer of planets, and is particularly renowned for (and proud of) his award-winning work on coastlines. Notably, “the fjords of Norway... were mine”.
Oh to have the power to create worlds...
Well, you can. With a little acumen for programming, anyone can indeed be the architect of their own environment. Virtual Reality interfaces are readily available, and via freely accessible development platforms such as Unity one can embark on a journey literally creating the context in which that journey takes place.
To a citizen of a more anachronistic age such technology may seem ‘indistinguishable from magic’, as Arthur C. Clarke would have put it. But it is here, it is open to access, and it can be embraced, and immersed within.
From a perceptual and philosophical standpoint this puts us in a fascinating position. We can explore the mechanisms behind how we perceive our own reality by actually engaging in the construction of it! Which in itself is a perfect literal analogy for the basis of perceived reality being a construction of the brain anyway...
At our fingertips is a medium that allows for the generation of scenes in real time letting us place objects in situ, and explore the interrelationship of these to one another and in their own place within that scene background. And once we have done this, by donning a set of goggles we can then supplant our physical environment with a virtual one, and thence to physically move about within it. And even to interact with those objects therein! That’s quite a proposition to consider right there!
We can realise our own personal Narnias. We can create the wardrobe then step right on through into the snowy wastes beyond.
How might this assist in solving ‘problems’ encountered in the real world, one might ask (i.e. getting past the ‘gimmick’ phase that allows one to infinitely exterminate zombies...)? One avenue I am currently exploring is to replicate my own working environment (currently my living room/office space). The constraints of physical space, coupled with a disorganised system employing sheets of paper, post-its and wall space, lead me to wonder how I can better organise my workspace in the virtual realm. By replicating this space I can potentially recreate an infinite variety of similar workspaces in which I will never be short of space to ‘organise’ my thoughts (on virtual post-its). And the very process of doing so will help me improve my capacity to organise, and my ability to connect thoughts together, seeing intersections of disparate ideas that may catalyse creative innovation in my own thought processes. I will never, in effect, need to ‘tidy up’ and risk losing my thinking as the post-its go into a figurative bin.
A further benefit of using the virtual realm in this way is the ability to physically navigate throughout it. The act of moving through a scene can in itself significantly contribute to learning, to memory, and to general cognitive ability. In acting (i.e. moving), my brain will galvanise its motor areas, engaging sensorimotor functions that enrich the perception of the world with which I am engaging, and the thinking processes that are collectively being operationalised. Consider the idea of the ‘memory palace’ (also known as the ‘Method of Loci’). This was a mnemonic device utilised in the ancient world as a means to commit various facts to memory: key cues to said facts are ‘placed’ in an imagined ‘palace’ that the individual can then reconstruct in mind, ‘walking down the corridors’, as it were, of such a personally significant realm, thereby revisiting the various locations of these facts and conjuring a vivid environment in which memories can be readily accessed.
Such a device was also employed by the character Hannibal Lecter and referenced throughout Thomas Harris’s (2000) novel ‘Hannibal’, as well as being written about by historian Jonathan Spence (1984). Naturally, the technique lends itself to the medium of Virtual Reality. A team led by linguist Aaron Ralby has explored the potential for this, devising a PC/Virtual Reality platform in which one can construct one’s own memory palaces in virtual environments, thereby employing the technique of spatial learning to perhaps learn new languages or string together facts as an aide-memoire. (see Wired article)
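Stripped of its marble corridors, the Method of Loci reduces to an ordered mapping from locations to facts, replayed in walking order. A toy sketch of that idea follows; the `MemoryPalace` class, the loci and the vocabulary items are all invented for illustration, not any real software’s API:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryPalace:
    """Method of Loci: facts pinned to an ordered route of locations."""
    route: list = field(default_factory=list)  # (locus, fact) pairs in walking order

    def place(self, locus, fact):
        """Pin a fact to the next location along the route."""
        self.route.append((locus, fact))

    def walk(self):
        """Replay the route, yielding each location cue with its fact."""
        for locus, fact in self.route:
            yield f"At the {locus}, I see: {fact}"

# Hypothetical palace for a short French vocabulary list
palace = MemoryPalace()
palace.place("front door", "bonjour = hello")
palace.place("hallway mirror", "miroir = mirror")
palace.place("kitchen table", "table = table")

tour = list(palace.walk())
```

The VR version simply makes the route literal: each locus becomes a 3D position you physically walk past, engaging the spatial learning the technique relies on.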
All this points to exciting possibilities for the utility of virtual reality technology to enhance our cognitive capabilities, extending more traditional media (e.g. written diaries, mnemonics). Furthermore, as a tool to uncover the basic mechanisms of perception, and notably our own brain’s propensity to derive a notion of ‘veridical experience’ from the constituent elements of the environment in which we are immersed, this is potentially invaluable. By a somewhat strange ‘coincidence’ today, as I sought to get to grips with some basic geometrical scene properties, I constructed a simple environment with various geometric shapes. At centre stage (well, actually off to one side) was what I intended to be a sort of plinth acting as a bridge that one could walk along, composed of various objects embedded within it (obstacles, as it were). When I interfaced with this for the first time in the VR headset, the scale and position that I had somewhat randomly selected just so happened to correspond uncannily with the table at which I sit typing. The ‘plinth’ became a table, being exactly the right height, and even the same colour and texture. I reached out (virtually) and was able to touch, in the right proportions, the surface in front of me (physically). Unwittingly I had constructed a virtual environment that effectively corresponded to my physical environment, despite intending something quite different (the positioning was totally by chance, as I am a complete novice so far).
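That uncanny alignment boils down to a virtual object’s upper surface happening to sit at the same height as a physical desk. A minimal sketch in Python (standing in for a game engine’s scene scripting; the object names, coordinates and dimensions are all hypothetical):

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    """A box-shaped object placed in a virtual scene."""
    name: str
    position: tuple  # (x, y, z) centre in metres; y is height above the floor
    size: tuple      # (width, height, depth) in metres

    @property
    def top_height(self):
        # Height of the object's upper surface above the floor
        return self.position[1] + self.size[1] / 2

# Hypothetical virtual 'plinth' versus a real-world desk: do they line up?
plinth = SceneObject("plinth", position=(0.0, 0.375, 1.0), size=(2.0, 0.75, 0.6))
real_desk_height = 0.75  # metres, a typical desk

# Within a couple of centimetres counts as a convincing physical match
aligned = abs(plinth.top_height - real_desk_height) < 0.02
```

Deliberately engineering such alignments, rather than stumbling on them, is the basis of mixed-reality ‘passthrough’ design, where virtual surfaces are mapped onto physical ones to provide real haptic feedback.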
Perhaps this is my unconscious acting as the architect of my environment, very much in the way that my unconscious brain functions act to construct the nature of the ‘reality’ that I come to perceive in a conscious fashion... Boundless potential exists when we begin to explore how far we can mix our realities in this way, further consolidating the brain’s capacity to engage with its environment, extending cognitive functional capacity out into the virtual realm, coupled with physical sensory feedback on that interaction...
Hopefully before too much longer I shall be working on fjords of my very own. And then finally determine what is the actual question for which the answer is, by all accounts, 42.
VR 'memory palaces' could help you master a new language. Wired magazine:
Harris, T. (2000). Hannibal. Arrow Books: London
Spence, Jonathan D. (1984). The Memory Palace of Matteo Ricci. New York: Viking Penguin. ISBN 978-0-14-008098-8.
“You talk as if a god had made the Machine," cried the other. "I believe that you pray to it when you are unhappy. Men made it, do not forget that. Great men, but men. The Machine is much, but not everything.”
― E.M. Forster, The Machine Stops
Consider a simple proposition. That ‘self’ is the basis for human suffering, discontent, inequality. That ‘self’ has a basis in specific and malleable physiology. That ‘self’ can therefore be ‘turned off’, re-moulded, inhibited, put to ‘better use’. Directly controllable in effect.
Age-old traditions, modes of thinking, and principled systems propound the above. Eastern methods for banishing the ‘illusion of self’ have been perpetuated for millennia. Yogic practices, ancient spiritual ceremonial rituals, and even modern-day techno-philosophies abound to tackle this ‘realisable’ premise.
What if a future version of our society encompasses an Artificial Intelligence-enhanced state of being that has established this capacity to overrule ‘self-indulgence’? Would this be a preferable state of affairs? Max Tegmark (Life 3.0, 2017) refers to a future scenario of ‘Libertarian Utopia’ in which machine-derived superintelligence lives in harmony with standard human intelligence and a plethora of hybridised human-machine variants (cyborgs, upgrades, augmented intelligences). Now, this may or may not come to pass, but the question Tegmark is posing is to what degree one or another envisioned future scenario may be selected. This is effectively dependent on humanity’s current role in defining where artificial intelligence may take us as a species, as well as on AI as a development in itself that perhaps WILL at some point fly the nest, breaking out of the confines of human-limited intelligence as dictated by our biological machinery...
I would like to pose a proposition, as per the opening gambit, which seeks to get at the heart of the issue with respect to the human element determining this future. The core of my thinking is lodged in a ‘simple’ notion: that the self is indeed ‘locatable’ within a physiological frame that can notionally be influenced, moulded, even ‘turned off’. As our understanding of ‘intelligence’ increases, or at least of brain functioning pertinent to cognition (our perception and construction of ‘reality’), we can begin to envision ways to address the existential question of ‘selfhood’ and its basis in suffering, individual ambition and collective inequality... This understanding takes into account the motivators of our own behaviour in the world at large, how we can mould our own environment, and how, through technology, we might extend ‘intelligence’ into that world and expand (or shrink) our awareness within this wider system.
The ‘simple’ proposition is that a ‘default state’ within the brain feeds (metabolically) a network of regions involved in internally directed cognition: one’s perspective on where one is in space, what one has previously experienced through the course of one’s life, and what may happen to ‘one’ in future anticipated scenarios. This ‘self’ can be seen to be a cause of anguish, greed, obsession and addiction. These ‘cortical midline structures’ (amongst other areas) could essentially be said to define ‘self’ at a neurophysiological level. Conversely, a set of brain regions involved in ‘goal-directed’ cognitive functioning, located for instance in prefrontal cortical structures, work together, in conjunction with an attentional system that maintains focus on task (or fluctuates between ‘default’/self-distraction and task focus). With task focus comes productivity, enhanced performance, and, actually, banishment of the self. When the ‘task-positive’ network is engaged, the individual is exactly that: engaged in, absorbed by, purposeful action. The ‘default’/self-related network is tuned down, as these networked regions operate in a mutually exclusive fashion by and large (though as evidence from research becomes more granular there will be greater understanding of nuanced interaction and fluctuation between networks under certain productive conditions).
In our technology-enhanced future scenario we can, underpinned by a burgeoning awareness of how our biological machinery supports our cognitive functioning, perceptions and behaviour, begin to consider how we might influence this state of being. Through technological interventions. Technology can be described as ‘the application of scientific knowledge for practical purposes, especially in industry’ (thanks, Google). ‘Industry’ can be interpreted as ‘productivity’, directed towards sustaining economic activity that advances the affluence of a society, an individual, a ‘quality of life’. So we might defer to ‘technology’ in the sense of computer/artificial intelligence-imbued apparatus, but in fact can apply this more generically to any practical intervention wherein knowledge can enhance and inspire change, progress, development.
Jamie Wheal and Steven Kotler (2017) have written about how science and technology can inspire (and have already begun to inspire) interventions that transform thinking, prompt innovation, and connect with more ancient traditions of self-mastery; I owe a debt to this work for referencing many strands of research I have mined to inform my own. A central conceit of all this is that technological interventions can indeed target, neuroscientifically, the brain’s construct of selfhood. This can effectively be boiled down to three possible means of ‘intervention’, to which I return below.
It should be emphasised here that this task-positive state of being brings with it a sense of satisfaction and wellbeing, as a consequence of being focused and aligned: one’s skills are being employed to their best, in harmony with the environment. When one ‘returns’ from this task-focused ‘journey’, one has grown, strengthened by that experience. Skills consolidated, learnings gleaned. And re-connection with ‘self’, however momentary, brings a feeling of being more in control, more fulfilled (and better capable of disconnecting, as needed, back into a goal-directed state).
Now, the interesting thing from an egalitarian perspective here is that focusing on a goal and tuning down the brain regions that are ‘self-focused’ naturally lends itself to being more outward-focused, and altruistic in a sense. For it is not about one’s individual role or goal per se. When the default ‘self’ is ‘banished’, its own self-interest ought to retreat from focus within the context of pursuing goal-directed tasks, opening up possibilities that are beyond self and more in the interests of a wider societal context. This is the tricky part to define when it comes to structuring tasks that do not become too individually constrained, but that is a piece for future exposition. This idea has not yet been fully qualified in research, and care should be taken not to simplify the argument too readily, since in fact areas within the default mode are involved in Theory of Mind, which considers the thoughts and feelings of others (see Filkowski et al., 2016). As such, there will doubtless be components of default activity that may need to be preserved when engaged in tasks that have collective benefits. But nonetheless, in principle, the removal of self paves the way towards a harmonious interrelationship with others and the wider community and environment. Further research, as mentioned, will tease out the nuances of functionally connective brain resource distribution when in task-positive states.
Summing up, then, the basis here is that ‘self’ is a function of brain regions that demand a lot of energy, somewhat wastefully, and which detract from effective performance on goal-related tasks. Now that there is deeper understanding of how this is based in brain activity that can be relatively localised, we can also seek to intervene in that system and divert resources to more productive networks of brain regions that optimise task performance. In doing so, the self ‘switches off’, and a more industrious and fulfilling state of being arises. Through this scientific understanding we can devise technological interventions that help in this process of ‘self-mastery’. We can do this by structuring our environment in a way that motivates and directs participants towards task focus at the expense of internal, distracting ‘self-thoughts’. We can do it by offering pharmacological substances (which may quite specifically target receptors such as serotonergic 5-HT2A) that dissolve the self and desynchronise the electrophysiological activity associated with ‘normal cognition’ (Muthukumaraswamy et al., 2013), or that have an observable impact on the ‘balance’ of stress hormones such as cortisol and noradrenaline (influencing arousal and therefore motivation). Finally, we can devise better measuring apparatus to probe this brain activity, and even, through ‘neurofeedback’, give better control over the distribution of activity and its associated cognitive processing. And, in accord with this, through Internet of Things-enabled connectivity with our WHOLE environment, perchance synthesise a means of ‘controlling’ all aspects of our ‘being’. Like a thermostat controlling the heating system in your house, you could ensure that your brain state resonates most harmoniously with the wider external environment and the demands of any task that requires fulfilling. All the while being more attuned to collective goals, the needs of society, the needs of the planet...
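The thermostat analogy can be made concrete: neurofeedback is, at its simplest, a closed loop that repeatedly nudges a measured signal towards a set-point. A speculative toy sketch follows; the ‘DMN activity index’, the target value and the gain are all invented for illustration, and bear no relation to any real clinical protocol:

```python
def feedback_step(signal, target, gain=0.5):
    """One step of a proportional closed loop: nudge the signal toward target."""
    error = target - signal
    return signal + gain * error

# Toy closed loop: a hypothetical 'DMN activity index' starts high
# (self-focused rumination) and feedback nudges it toward a lower,
# task-focused target over repeated training steps.
dmn_index = 0.9
target = 0.2          # desired low level of self-referential activity on task
history = [dmn_index]
for _ in range(10):
    dmn_index = feedback_step(dmn_index, target)
    history.append(dmn_index)

# With gain 0.5, the deviation from target halves each step, so after
# ten steps the index sits within about 0.001 of the target.
```

Real neurofeedback loops (see Sitaram et al., 2017) are of course vastly noisier: the ‘signal’ is an indirect haemodynamic or electrophysiological proxy, and the ‘gain’ is the participant’s own learning, not a fixed constant.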
So a future could exist in which a more egalitarian status is conferred societally-wide by understanding how to guide individual ‘self-absorption’ more towards altruistic goal directed focus through application of technology (as conceptualised in different ways above). The techno-spiritual revolution could go hand in hand with the resurgence of ancient practices (as is happening anyway) so that we all reconnect with our species’ purpose to evolve harmoniously with the machines that will eventually replace us. But at which point, with self banished there is no ‘one’ left to mind!
5th December 2053: sat by the virtual fireside in my condominium, preparing for the day’s ‘work’. Have installed my fNIRS headband and ensured it’s synced with the Human-Environment Integration System, networked worldwide via the Global Internet of Things. The AI huge-data analysis framework has concocted my task list for today, and is now, through real-time neurofeedback, providing my own digital augmented operating system with the impetus to optimise my large-scale brain network connectivity protocol. I feel in tune with my own cognitive capabilities, switching effortlessly into a central-executive, prefrontal-cortex connective state. My ‘self’ is dwindling as my engagement tunes in to my goal, which is to provide innovative solutions to sustainable energy technology challenges affecting a community in India. I am virtually in my remote ‘neighbour’s’ shoes, understanding and experiencing the daily challenges he faces making ends meet. But importantly, I can focus all my energies on a constructive solution that can be fed across the web network to stimulate further solutions and innovations, coupled with AI-enhanced superadditive innovation engineering. We can test it all iteratively in VR, 3D print concepts, and then test for real in situ. After a day’s work like this I can return to my ‘self’ space briefly and review how much better ‘I’ feel for it. I don’t even need to use the fNIRS interface to marshal my own thought processes; that’s more for my ‘working day’, connecting planet-wide for industrious applications. No, this is for personal ‘benefit’, as I strengthen my own resolve and motivation as a person to be a better version of myself: content, fulfilled, and eager and full of energy to get into tomorrow’s working day helping my fellows across the worldweb.
Sinister? Against the spirit of human potential? Granting too much 'power' to AI? Or idealistic and unattainable? To be explored....
Carhart-Harris, R. L., & Nutt, D. J. (2017). Serotonin and brain function: a tale of two receptors. Journal of Psychopharmacology, 31(9), 1091–1120.
Filkowski, M. M., Cochran, R. N., & Haas, B. W. (2016). Altruistic behavior: mapping responses in the brain. Neuroscience and Neuroeconomics, 5, 65–75.
Muthukumaraswamy, S. D., et al. (2013). Broadband cortical desynchronization underlies the human psychedelic state. Journal of Neuroscience, 33(38), 15171–15183.
Sitaram, R., Ros, T., Stoeckel, L., Haller, S., Scharnowski, F., Lewis-Peacock, J., Weiskopf, N., Blefari, M. L., Rana, M., Oblak, E., Birbaumer, N., & Sulzer, J. (2017). Closed-loop brain training: the science of neurofeedback. Nature Reviews Neuroscience, 18, 86–100.
Tegmark, M. (2017). Life 3.0. Penguin Books: UK.
Wheal, J., & Kotler, S. (2017). Stealing Fire: How Silicon Valley, the Navy SEALs and Maverick Scientists are Revolutionizing the Way We Live and Work. HarperCollins.
“Cyberspace. A consensual hallucination experienced daily by billions... Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding...”
― William Gibson, Neuromancer
As a cineaste and ardent fan of the craft and storytelling potential of the immersive medium that is film, I have sat through (suffered) many an experience that has bored me to tears, frustrated me, annoyed me, or sent me to sleep. But I don’t believe I have ever actually walked out of a film, though I have come close. In fact, I think the closest I ever came was while watching the grandstanding motion picture event that is/was A.I. Artificial Intelligence.
I confess to being a diehard aficionado of Stanley Kubrick (three of his films are in my all-time top 10 – which ones, I wonder...?! – there’s not that many to choose from, tbh), and I suppose I had high hopes for the long-germinating project of his that was posthumously realised by none other than Steven Spielberg. I had hoped that, in Kubrick’s spirit, a sort of sequel (thematically at least) to 2001 might have followed up on notions of machine intelligence as a natural evolution of consciousness, perhaps questioning the role of emotion as redundant in a more clinically perfectionist universe where the ultimate limits of cognition might enable intelligent expansion across the void of space – at the expense of sugary sentiment. But no, Spielberg indulged himself and the viewer in a particularly saccharine take on ‘what it means to be human’, and how the future of machines rests in their capacity to evolve emotional connectivity. So far, so Hollywood.
It made me squirm in my seat and long for it to end. I had to compel myself not to storm out of the cinema with half an hour still to go, muttering indignantly.
Perhaps this speaks to me as a cognitive psychologist, skeptical of ‘cod’ ideas about emotion – what is emotion? – as espoused in popular fictions, or of an over-amplified sense that humanity will prevail due to some special status, couched in this ‘feel-good’ but ambiguous notion of ‘emotion’. I am not here to write a PhD on ‘what is emotion and how does it relate to the concept of man vs machine dominance of the future Intelligence Landscape and evolutionary Darwinist cyber-thinking’ (!) – though perhaps I could? Rather, I want to begin, from here on in, a series of pieces about what AI might mean more generally: humankind’s reliance upon technology, notions of ‘self’ and ‘consciousness’, and whether inevitable progress is simply the prime directive for evolution – something we need to accept and put up with...
Hopefully this will touch on notions such as emotion, including more up-to-date thinking on the subject (such as the ‘constructionist’ framework championed by the likes of Lisa Feldman Barrett, 2014 – wherein, at the heart of the matter, is consideration of the human brain as a prediction machine in itself, running iterative algorithms that learn, fail, adapt, succeed and grow, with emotional ‘signals’ in the mix as important functions facilitating that process). Within this line of thought, we can look at the brain and its architecture as indeed analogous to a ‘machine’, with mechanistic causal chains and connections, feedback loops and networks, which beget the cognitions and ‘qualia’ of experiential perception – in short, the ‘programs’ (programmes) and operating software dependent on this infrastructure.
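The prediction-machine idea can be caricatured in a few lines of code. This is a toy sketch of my own devising, not a model of any real neural circuit: an internal ‘belief’ is repeatedly compared against an incoming ‘sensory’ signal, and the prediction error nudges the belief by some fraction. That fraction (the learning rate) loosely stands in for the fixed-versus-flexible thinking distinction above.

```python
# Toy "prediction engine": an internal estimate is repeatedly
# compared against sensory evidence, and the prediction error
# nudges the model. A learning rate near 0 caricatures "fixed
# thinking" (evidence mostly rejected); nearer 1, "flexible
# thinking" (the model adapts quickly). Illustrative only.
def update(belief, observation, learning_rate):
    prediction_error = observation - belief
    return belief + learning_rate * prediction_error

belief = 0.0   # the model's starting expectation
world = 10.0   # the true state the senses keep reporting
for _ in range(50):
    belief = update(belief, world, learning_rate=0.2)

print(round(belief, 3))  # the belief has converged towards 10.0
```

With a learning rate of 0.2 the error shrinks by 20% per step, so after fifty iterations the belief sits essentially at the true value; set the rate near zero and the model barely budges, however loudly the senses protest.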
A good source of popular reference I shall draw on, amongst others, is Max Tegmark’s (2017) book Life 3.0, which nicely elucidates the field of Artificial Intelligence research, its ethical role in determining the future of AI development (to avoid the fateful Cyberdyne Systems ‘incident’ of 1997), and takes a serious look at where AI may present significant benefit to our species’ co-development over the near, intermediate and long-term future. It’s here to stay, it’s growing exponentially, and we really don’t know where it is going to take us (or leave us).
It is becoming an ever stranger world, day by day. Yesterday I conversed with a chat function online, attempting to source some virtual reality equipment compatible with slightly outdated computer hardware. Frustrated at the speed with which everything updates and renders old equipment redundant, I was somewhat exasperated and defensive with the agent with whom I was chatting. I conversed rather tersely and came close to asking, irritably, whether he was a human or an AI – and, if the latter, could I please have a human instead (perhaps I prefer some’one’ with the capacity to obfuscate and put me at ease even when getting nowhere?!). The tenor of his responses suggested to me that he was indeed human. But in retrospect I can’t be 100% sure. Such is the bizarre state of affairs (at least interactively speaking) that we live in. Is it a good or a bad thing? Is devolving responsibilities such as providing consumer advice (or health advice, fitness advice, legal advice, etc.) to AI a sensible, effective, preferable course of action?
Much research in AI, and in psychology, would argue that non-human agents can provide the appropriate rapport cues that put humans at ease, engender trust in the communication process, and even elicit deeper levels of openness than human counterparts do (Fiske et al., 2019). It’s still early days, but one thing is for sure: machine intelligence will exponentially improve, learn, develop, and extend beyond its original operating system, program and limitations. And perhaps it is best to see that as an exciting opportunity to be harnessed, or guided where possible.
Or we pull the plug now... before it’s too late. Damn, Schrödinger’s cat is out of the bag. The mice have escaped the interface and are scurrying after the silicon cheese. The red eyes are glowing in the dark, metal legs scraping across the tarmac, relentless, rasping ‘we’ll be back’...
Next up: how AI may ‘solve’ our modern-day political crisis, putting the meaning of democracy back into the lexicon. All politicians from the year 2037 will be required to register their profiles on the Mechanical Turk, henceforth conducting their political machinations at the behest of the crowdsourcing algorithm that determines whose proposition wins the big-data-analysed consensus of opinion – carefully weighing into the equation socio-economic equality formulae, balanced against environmental impact (from the worldwide IoT net), offset against predicted movements of key stocks and sustainable business practices. Nobody profits from politics, financially or status-wise; protected anonymity is key to ensuring the latter. Everybody gets what nobody wants.
And lo it’s Metal Mickey. In a blond wig and puckered visage. Running the whole show.
Some things might just never change.
"The creatures outside looked from pig to man, and from man to pig, and from pig to man again; but already it was impossible to say which was which."
- George Orwell, Animal Farm
Feldman Barrett, L. and Russell, J.A. (2014). The Psychological Construction of Emotion. ISBN 9781462516971
Fiske, A., Henningsen, P. and Buyx, A. (2019). Your Robot Therapist Will See You Now: Ethical Implications of Embodied Artificial Intelligence in Psychiatry, Psychology, and Psychotherapy. Journal of Medical Internet Research, 21(5), 1-12
Spielberg, S. (dir.) (2001). A.I. Artificial Intelligence. DreamWorks Pictures. https://en.wikipedia.org/wiki/A.I._Artificial_Intelligence
Tegmark, M. (2017). Life 3.0. Penguin Books: UK
The science of cognition and perception in context
This is where I elaborate upon brain science relating to cognitive functioning dependent on environmental context.