“To see a World in a Grain of Sand
And a Heaven in a Wild Flower,
Hold Infinity in the palm of your hand
And Eternity in an hour.”
― William Blake, Auguries of Innocence
Take a look at a small child as you wave a shiny object in front of her face. Or indeed any object really, if she’s a baby. What inferences can we make about what she sees, what she experiences, what she thinks?
It’s distinctly likely that the child is in a perpetual state of wonder! What is this (not dagger) that I see before me (paraphrasing Macbeth for literary kudos)? With underdeveloped cognitive and perceptual faculties, the whole wide world is a vastly novel experience. The little brain must be overwhelmed with the intense stimulation of continually seeing new stuff and figuring out what it is in relation to old stuff (not that there’s much of that). But it’s all fresh: a perspective on life based on an exploratory state of being, uncluttered by preconception.
At heart this is what we might, with a little imagination and a desire to refresh our own perspective, seek to attain as world-weary adults: a worldview driven by wonder, using new eyes to see the richness around us. A consumerist society, by contrast, tries to delude us with colourful stimuli that provoke brief spikes of interest, capitalising on the frozen instant when the hand in the pocket reflexively thrusts forth to part with hard-earned cash (and thereby obtain the next useless thing that ultimately fails to create longer-term satisfaction).
Art at least taps more genuinely into this aspirational state of being, stimulating the senses, attempting to shift the focus and open up perspective to new ways of looking at the world. Modern technologies such as Virtual Reality may well offer another avenue to take us out of the routine, mundane mode of operation and present an opportunity to view the world much as a baby does. Psychedelic drugs or other ‘altered states’ technologies and techniques may defibrillate our cognitive faculties in a more violently immediate fashion and reduce us to a more ‘primary state of consciousness’ (Carhart-Harris, 2018). But nature is perhaps the most readily available resource at our disposal for evoking this capacity for wonder.
Look upon the Grand Canyon for the first time and can you fail to be seized in near paralysis at the magnitude of it, the resplendent grandeur? A vast palette of colours, shapes, scents, depth, distance, along with a failure to grasp how this has come into being, what it means...
That is because you are in a state of ‘awe’. And this is a special state that can be evoked under the right conditions, the right (environmental) context. Perhaps it is an evolved predisposition to cast off the shackles of outdated, routinised ways of thinking: a way of shocking the system into taking stock of its limitations and realising that there are new things out there to experience, things which reveal that the boundaries we set are illusory. There is always something else beyond the horizon. This is something we forget very quickly as we form ever tighter walls around ourselves, building a protective shell in which to hold off the hordes of new ideas that threaten to undermine the cherished ‘stability’ of ‘in the box’ thinking.
Awe is an area of research amongst cognitive scientists that has its roots in a fundamental cognitive/perceptual ‘need to accommodate' (Guan et al., 2018 - there's that pesky posterior cingulate cortex again!). We form constructs, conceptual hierarchies of ideas about objects, categorising and itemising the world into definable chunks. When we encounter new chunks, new items, we strive to fit them into the boxes we have already labelled on our mental shelves. If an item doesn’t ‘fit’, it is subsumed into another ‘near fit’ box, or perhaps discarded onto the junk pile. But occasionally we find ourselves in a place where the whole environment so overwhelms us that the system cannot cope. It cannot find any impetus to streamline and categorise what it beholds. Perhaps this might cause ‘meltdown’ in some but, generally and more hopefully, it provokes a ‘need to accommodate’. The cognitive and perceptual boundaries stretch, the brain becomes more plastic, knowledge structures bend or break, and new concepts, ideas and experiences update the model and ‘reboot’ the server!
For a moment we perceive as babies do, letting go our firm grip on what we believe reality should be. Throwing ourselves at the mercy of the vista before us. Giving over to a more vulnerable state of perception, of being. The brain grows, as do you. Nature has gifted us this medium to upgrade our perceptual apparatus. It does us the world of good to occasionally (even regularly!) seek out that state of wonder, the provocation of awe. It is the panacea for a world embittered by the familiar, insulated from voluntary vulnerability. We all need to see the world through the infant’s eyes once in a while.
I will continue to develop ideas around awe, principles of which ‘accommodate’ into my wider model of brain function, adventurous experiences, development and ‘mastery of’ the self, and how virtual reality can be used as a tool to facilitate greater understanding of the underlying processes that govern how we may refresh our views of the world.
For an earlier take on this check out a previous piece I wrote, suggesting, as with the current appetite for personal training regimes, an awe workout.
Carhart-Harris, R. (2018). The entropic brain – revisited. Neuropharmacology, 142, 167–178
Guan, F., Xiang, Y., Chen, O., Wang, W. and Chen, J. (2018). Neural Basis of Dispositional Awe. Front. Behav. Neurosci., 12(209), 1–7
"A man's got to know his limitations."
- Inspector Harry Callahan - 'Dirty Harry'
How do we set limits in our world? We might purport to be free-thinking, expansive in worldview, and capable of achieving goals that stretch our capacities to grandiose ambition...
But is this wishful thinking? Are we deluding ourselves?
The answer is most likely yes. One might say, well, that’s fine: it’s better to have a stretching and unrealistic view of our own capabilities, as this can spur us on to achieve things we aspire towards in life. But are we not then just like the denizens of The Matrix, going about our days in a blithely complacent mode of operation, content to ‘know’ we have the potential to achieve whatever we want, while deluding ourselves about the extent of this capacity? In effect, this unfounded belief that we can reach out and extend our actions unbounded in the world may itself be limiting our actual potential...
To feel as if one can achieve anything can foster complacency that actually prevents doing anything about it. And this, one may argue, rests in setting boundaries for ourselves that are actually constrained by a self-satisfied sense that we can do whatever we want, go where we please, achieve our dreams. By virtue of ‘knowing’ that we can take that step whenever we want. Tomorrow...the next day...at some point. Is it in effect tantamount to 'confined optimism'? (Being overly optimistic may be as deleterious to goal achievement as being pessimistic and negative...so where's the balance, the optimal point where skills and challenges come together to prompt 'flow'?)
Might it be more productive and satisfying to be realistic and look more closely at what our boundaries actually are, in order to define where we can get to and where we might aspire to go further? How far does the range of thought extend? I wonder if anyone has tried to quantify this and relate it to ‘physical terms’? A useful ‘thought experiment’ in itself! Much is said of the freedom to think infinitely, and of the beauty of having brains that allow us to wander at will in realms of imagination with no ‘true’ limitations. To the very edge of the known universe... and beyond! But firstly, perhaps we should consider how far we can actually think in those terms. And secondly, to what extent do we exercise that capability? (As an aside, and as I have begun to speculate elsewhere, consider how the evolution towards superintelligent AI may unseat our own place at the head of the table as dominant species, by simple virtue of our incapacity to imagine where the limits of thought may extend in 'beings' with exponentially greater capacity for thought! There are simply things we cannot think about. Much as we can't imagine innovations that have yet to take place!)
Are our thoughts bounded by the mundane borders of our own regional concerns? Do we simply rove within stepping distance of our own cognitive domiciles or do we seek to search beyond, to thrust out into the unknown? An interesting experiment would be to devise a protocol for measuring the extent that people think and to equate that through some normalised attribute to distance in ‘real’ terms. It would be interesting to devise characteristics that categorise different types of thinkers and relate to demographic variables. Does the ‘professorial’ typology of thinker, as a rule, go beyond ‘typical’ boundaries compared to the blue-collar citizen (being stereotypical for a moment to illustrate an idea)? Would such a metric require homogenisation of data to account for other variables in order to ‘standardise’ a measure relative to different personality/lifestyle characteristics? Does the ‘artist’ head the pack in striding out beyond the known borderlands to map new territories?
An intriguing idea, with the development of ever more capable neuroimaging technologies and theoretical frameworks, is the possibility that measures of ‘expansive cognition’ may become more viable. But as with the focus over the years on, for instance, psychedelic or ‘altered states’ phenomena in pursuit of a ‘unified theory of consciousness’ (e.g. Kingsland’s recent book tying together various strands of research in this area, 2019), is it possible that ‘expanding the mind’ through various techniques simply throws ‘ordinary’ functional brain status into ‘asynchrony’, reducing the stability and robustness of cognitive functioning and moving the system closer to (and over the edge of) an entropic state (towards chaos and randomness – perhaps unbounded, but largely ‘meaningless’?; Carhart-Harris, 2018)? By ‘expanding the mind’ do we risk simply placing our cogitative capacities into a tighter box that turns off critical capacity and instead revels in a further delusion of being ‘set free’?!
Another way to think of this is in terms of the perceptual limitations that we inevitably have as a consequence of brain structure, evolutionary development, contextual situation, and habituated tendency to rely on our sensory-perceptual apparatus as determined through long practice and automated routine. What are the perceptual boundaries that we are constrained by and to what degree can we, through practice, skill development, broadened mindset, push the boundaries further?
This is a big and fascinating topic, and many current books by prominent neuroscientists, cognitive psychologists and philosophers detail the illusions and experimental scenarios that show the fallibility of our ‘equipment’. (See the likes of Eagleman, Hoffman, Lotto, Dennett, Macknik and Martinez-Conde, Chabris and Simons etc. in the references section.) A now famous example concerns a gorilla and a basketball game...

As an aside, before the ‘invisible gorilla’ went global, I was sitting in an auditorium at a provincial campus in Toronto with perhaps 50 academic types. A chap gave a talk in which he referenced a study from the 80s involving a slightly ethereal lady carrying an umbrella and wandering across a basketball scene (we saw the grainy video of this). Having just shown this video, he then asked the audience’s indulgence with a subsequent video in which a group of young people amateurishly tossed basketballs to one another in a circle, within a fairly tight frame. At the end, the speaker asked the audience to recount how many passes of the ball had been effected, and whether anyone had noticed anything else in the scene. I don’t recall anyone saying they had. He replayed the scene. A gorilla indeed appeared and beat his chest in the midst of the action. The audience provided a mix of responses, from surprise to cynicism that this was a different video. The point is that the audience comprised some of the most eminent scientists in the fields of visual perception, cognition and consciousness research. The Invisible Gorilla demonstration was born there and then, launched to the world, and had its effect even despite ‘priming’ as to its nature beforehand... We believe we see the totality of our environment, that it is ‘there to see’ and we have only to ‘look’. But how deluded we are to the blind spot, not only at the optic disc itself, but also in our own view of reality!
Of course, over the passage of time it’s possible that I have slightly reconstructed the temporal order of this sequence of events erroneously and in fact the ‘Umbrella woman’ came afterwards, to reference the inspiration for the Gorilla! But the point remains, an audience of ‘experts’ in visual perception were taken by surprise by this facet of boundaried perception that is effectively wired into our brains.
So to return to the original point: do we appreciate the boundaries of our own perceptions, thought-streams and worldviews? Or do we labour under the misapprehension that we have as much (unlimited) capacity as we care to employ? Perhaps we do have this ‘infinite capacity’ to think beyond any boundaries... And with that, perhaps, we ALL have the capacity to go beyond our limitations (in a physical sense). Anyone can get off the couch and run a marathon if the mood takes them? Right? Especially if their pants are on fire... Realistically, probably not. Sure, you can get up off the couch and implement an incremental programme of exercise that suggests you can get to that finish line eventually. But you’ll never really be able to predict whether you will collapse from a coronary or simply give up when push comes to shove. Still, exercising one’s faculties seems to be as much the point here as reaching the end destination. And likewise, to think ‘beyond boundaries’, or even towards the boundaries, most probably requires at least putting some exercise in, however incremental, to develop that capacity. After all, it’s distinctly possible that our cognitive functions depend on the regular exercising of the underlying neural mechanisms, on the management of blood flow and metabolic resources to sustain this infrastructure, and on the robustness and flexibility of our mental muscles!
So with that, have a think about where your boundaries lie – don't just assume they are infinitely capable just because they are ‘there’. Maybe redefine your boundaries as the basis for pushing them. But importantly, exercise your mental faculties: think grand thoughts but also smaller, more realisable ideas, and from there build them into greater ambitions. Do this regularly; strengthen your cognitive musculature. Begin to redraw your world afresh, instead of sitting comfortably back in the confines of your virtual and apparently limitless box! From this can only grow vision, capability, and the enthusiasm to explore the boundaries of the known, and redefine the unknown!
Sources of reference:
Carhart-Harris, R. (2018). The entropic brain – revisited. Neuropharmacology, 142, 167–178
Chabris, C. and Simons, D. (2010). The Invisible Gorilla. Harper Collins
Dennett, D. C. (1991). Consciousness Explained. Penguin Books
Eagleman, D. (2011). Incognito: The Secret Lives of the Brain. Canongate Canons
Hoffman D. (2019). The Case Against Reality: How Evolution Hid the Truth from Our Eyes. Allen Lane
Kingsland, J. (2019). Am I Dreaming? The New Science of Consciousness and How Altered States Reboot the Brain. Atlantic Books
Lotto, B. (2017). Deviate: The Science of Seeing Differently. W&N
Macknik, S. and Martinez-Conde, S. (2012). Sleights of Mind: What the Neuroscience of Magic Reveals About Our Brains. Profile Books
Beyond simplicity – the increasingly complex role of the Default Mode Network in goal-directed behaviour
“’Twas brillig, and the slithy toves / Did gyre and gimble in the wabe.”
― Lewis Carroll, Jabberwocky
Whilst simplicity is sometimes a desirable state of affairs when presenting elaborate concepts, the Occam’s Razor approach sometimes falls short of the real story. This is an issue I have always had with conveniently distilled principles, and with systems of ‘thought’ that generalise such principles to appeal to human nature’s desire for answers! Life is rarely simple when one thinks carefully about it! Having said that, a good starting point is to make things black and white, to get a fundamental idea across, and to accept its generality. Then, once the penny has dropped, begin to dismantle it and pick it apart, getting down to finer granularity and more specific meaning...
So far, the ‘anti-correlated’ networks model, as espoused throughout my writings, serves as a useful concept for understanding how the brain’s machinery acts in apparent opposition. Much as a computer’s core components sit in a binary state of on/off, so the brain’s networks act antagonistically to subserve one need or another, but do not hold both in action at once. Likewise with muscles that flex or extend: equilibrium doesn’t generate results; it’s the dynamic shift of balance that creates impetus and overcomes inertia. (Quantum physics aside for now, with respect to superposition, or the capacity to potentially be active in all possible states at once...)
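For readers who like a concrete picture, the ‘binary’ anti-correlation idea can be caricatured with two synthetic signals in antiphase: when one is ‘on’ the other is ‘off’, and their correlation comes out strongly negative. This is purely an illustrative sketch on made-up data; the `dmn`/`cen` names are just labels, not real imaging signals.

```python
import numpy as np

# Toy illustration only: two synthetic 'network' signals in antiphase,
# standing in for the anti-correlated DMN/CEN idea (not real imaging data).
rng = np.random.default_rng(0)
t = np.linspace(0, 60, 600)           # 60 s of simulated time course
swing = np.sin(2 * np.pi * 0.05 * t)  # slow on/off oscillation

dmn = swing + 0.3 * rng.standard_normal(t.size)   # 'default mode' label
cen = -swing + 0.3 * rng.standard_normal(t.size)  # 'central executive', in antiphase

# Pearson correlation between the two signals: strongly negative
r = np.corrcoef(dmn, cen)[0, 1]
print(f"correlation between the two signals: {r:.2f}")
```

The point of the sketch is simply that ‘one on, one off’ shows up statistically as a correlation near -1 rather than as both signals being literally silent.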
Extending notions such as Occam’s Razor to cut through to the likely ‘truth’ of matters in fundamental understanding, I tend to adopt (and encourage others to do so) the tenet that, if something seems simple to understand, and makes perfect sense, it’s probably not actually accurate. It is far too convenient to be grasped as such. The notion of two networks that switch on and off falls into this category. It is an elegant idea, and nicely slots into a narrative of the universe that brings light and darkness into holistic unity, yin and yang, male and female. Opposing forces that balance and harmonise, creating energy and action in an eternal dance of momentum and interaction.
So for a while now, having established, and proclaimed this principle of brain mechanics, I yearn to dismantle the proposition, to find greater complexity in the simplistic idea, to begin to specify where the general principle holds true, and where it breaks down. Or provides further sub-divisions of principles that begin to be usable beyond the generic.
A paper in 2012 by Nathan Spreng questioned the ‘dismissal’ of the default mode network as ‘task negative’. That is, it has by definition been labelled as ‘default’ and associated with a resting state that has no bearing on task performance. In opposition to the so-called ‘task positive’ network (also known as the Central Executive, and referenced in terms of ‘fronto-parietal’ functioning), it appears to ‘switch off’ when the CEN is ‘on’ and, by my own hypothesising, represents a potential distracting influence on goal-directed tasks that could be demonstrated measurably as negatively impacting task performance. (The signature of poorer task performance being an indication that DMN activity/dynamic functional connectivity fluctuations are ‘encroaching’ on the stable functional connectivity of Central Executive network activity that facilitates focus on-task.) Given that the DMN is said to be a key hub in which the sense of ‘self’ is constructed, it stands to reason that certain elements of goal-directed functioning will require a self-referential perspective in order to perform successfully. This is particularly likely where performing a task requires drawing on prior experience: memories of what ‘I’ have encountered previously. This has indeed been shown to be true with respect to accessing autobiographical memories that help ‘solve’ a problem (Andrews-Hanna, 2012; Spreng et al., 2009; Buckner & Carroll, 2007). So the question becomes one of narrowing down the role of the DMN in cases where this important integrative hub (with, let’s remember, a significantly high metabolic demand for resources) can contribute relevant processing power to on-task performance.
Elton and Gao (2015) began to address this question and to stimulate further research impetus to unpack the flexible role of the default mode network in goal-directed behaviour, and to dispel the notion that it is only involved in task-irrelevant thoughts and ‘meaningless’ distraction. As an integrative hub for multiple sources, including multisensory information, and for formulating a sense of ‘self’, this is clearly an important functional network with much bearing on one’s capacity to successfully negotiate life’s challenges! My earlier piece indicated that it already has an important role in consolidating information following task performance, and in integrating experiences into an updated model of the world and where ‘I’ fit in. So we can already surmise that, whilst performance on a task ‘in the moment’ might be the key priority (for survival?), requiring the DMN ‘as a whole’ to be ‘switched off’ so as not to interfere with current priorities, a key component of any successful and evolutionarily prosperous system is to take stock of its achievements and failures, evaluate lessons learnt, and store this knowledge to enable future performance, efficiency and optimal operational capacity!
Research such as Elton and Gao (2015) describe indicates that the DMN does in fact show flexible functional connections even when performing ‘on task’, with increased connectivity with regions involved in executive control. This is task specific, and occurs in both internally oriented and externally oriented tasks. (Despite the general consensus that on externally focused tasks the DMN shows a pattern of reduced gross activation.) Elton and Gao postulate that the DMN serves a dual purpose: ‘broadband’ monitoring of both the internal and external worlds “to maintain self-consciousness and vigilance during this state (Gilbert et al., 2007; Raichle et al., 2001)”, but also ‘uncoupling’ regions not pertinent to external task-orientation where required. Given this monitoring function, the most consistently preserved coupling was observed to be with regions of the salience network, including the right inferior frontal gyrus, pertinent in detecting salient stimuli. Likewise, in emotional tasks, with the anterior insula, where rapid response to task-relevant features was facilitated.
So now we have a sense that this integrative hub can shift its processing power between task-specific coupling/uncoupling (particularly as a task becomes more externally oriented, and therefore favours disengagement from more ‘self-referential’ processes) and more ‘broadband’ monitoring of the world (both external and internal), flagging salient stimuli via the salience attentional network and enabling switching between the ‘task positive’ (CEN) and default mode networks more generally. As such, this represents a window into how the brain seeks to optimise its capacity to switch functions ‘on and off’, as it were, or to redistribute resources and be more selective in turning relevant aspects ‘off’ as necessary.
The question that falls to my own research aspirations involves how these networks subserving aspects of ‘self’ can, in adventurously challenging contexts, threaten to undermine the efficiency/optimality of brain functioning and disrupt task performance. Particularly where performance on task may have life-threatening consequences! And likewise, how certain individuals are better able to make decisions and perform ‘on-task’ in the moment of requirement, by virtue of an underlying capacity to turn the relevant regions ‘off’ whilst still actively monitoring the environment (both external and internal). And, from that, how other individuals less capable of governing their functioning in this way can learn to harness more control over this optimal state of being. Finally, with a deeper understanding of the functional role and connectivity between and within these key brain networks, we can begin to appreciate how concepts such as ‘self’ have a basis in targetable/measurable neural activity, and how the system synthesises experiences and goal-directed behaviours into an updated and enhanced model of ‘self’ that is the basis for improved performance and sense of purpose in the future!
Andrews-Hanna, J. R. (2012). The brain’s default network and its adaptive role in internal mentation. Neuroscientist, 18, 251–270
Buckner, R. L., & Carroll, D. C. (2007). Self-projection and the brain. Trends in Cognitive Sciences, 11, 49–57.
Elton, A. and Gao, W. (2015). Task-positive Functional Connectivity of the Default Mode Network Transcends Task Domain. Journal of Cognitive Neuroscience 27:12, pp. 2369–2381
Spreng, R. N. (2012). The fallacy of a “task-negative” network. Frontiers in Psychology, 3, 145.
Spreng, R. N., Mar, R. A., & Kim, A. S. (2009). The common neural basis of autobiographical memory, prospection, navigation, theory of mind, and the default mode: A quantitative meta-analysis. Journal of Cognitive Neuroscience, 21, 489–510.
Continuing the line of thought that the brain exhibits differential network activation/functional connectivity between task performance and resting state, there is recent evidence bearing on the notion of ‘neurogenesis’, or ‘brain growth’. While it makes sense to think that by doing some productive cognitive ‘work’ we will learn, strengthening connections in the brain, it is an exciting development to be able to see where that growth might have taken place. Fitting with the large-scale networks model, Lin et al. (2017) observed changes in the ‘topology’ of key networks subsequent to performing a task. Focusing on the default mode network once again, measures of this ‘resting state’, and comparison of network connectivity prior to, during and immediately following a task, can reveal marked differences that signify change, and ‘growth’, has taken place. This may be a key to unlocking markers of ‘self-development’!
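The kind of before/after comparison described here can be caricatured in a few lines: simulate ‘pre-task’ and ‘post-task’ resting time series for a handful of regions, compute a functional connectivity matrix for each (pairwise correlations), and summarise each matrix with a single coupling number. This is a toy sketch on synthetic data with an invented summary measure; it is not Lin et al.’s actual pipeline or their graph-theoretic topology metrics.

```python
import numpy as np

# Toy sketch of a pre-/post-task connectivity comparison, in the spirit of
# (but NOT reproducing) Lin et al. (2017). All data here are synthetic.
rng = np.random.default_rng(1)
n_regions, n_vols = 5, 200
shared = rng.standard_normal(n_vols)  # a common driving signal

# 'Pre-task' rest: regions are mostly independent noise
pre_ts = rng.standard_normal((n_regions, n_vols)) + 0.2 * shared
# 'Post-task' rest: a stronger shared component, i.e. tighter coupling
post_ts = rng.standard_normal((n_regions, n_vols)) + 0.8 * shared

def fc_matrix(ts):
    """Functional connectivity as pairwise Pearson correlations of region time series."""
    return np.corrcoef(ts)

def mean_coupling(fc):
    """Average off-diagonal connectivity: a deliberately crude 'topology' summary."""
    off_diagonal = fc[~np.eye(fc.shape[0], dtype=bool)]
    return off_diagonal.mean()

pre_c = mean_coupling(fc_matrix(pre_ts))
post_c = mean_coupling(fc_matrix(post_ts))
print(f"mean coupling pre: {pre_c:.2f}, post: {post_c:.2f}")
```

In real studies the summary would be a proper network metric (efficiency, modularity, hub degree) on thresholded matrices, but the logic is the same: one number per state, compared across states.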
I have been arguing, perhaps with a little negative bias (for the sake of simplicity), that the DMN with its ‘selfish thinking’ traits represents a barrier to productivity, a proclivity towards distraction, and an obstacle to optimal performance. But within this argument, which maybe tends towards throwing the baby out with the bathwater, the self gets a hard rap. The self has of course many facets, and there is no escaping it, be it in setting goals to ‘improve the self’, to ‘master the self’, or in using self-reflection as the basis for progression in life towards goals and achievements. So we should use this moment to (self) reflect on the utility of a brain network that puts self to the forefront. It has its purpose! And its time and place.
Certainly, an overactive DMN can lead to negative consequences and dysfunction (mental health issues), and perhaps its contents can inhibit optimal (task positive) performance by diverting resources away from the task at hand at key moments. But significant components of cognition and behaviour rely on a self-referential perspective, be it drawing on prior experience (memories) or solving problems that take into account one’s own position in space, in order to effect appropriate responses when navigating the world and making relevant decisions. And it makes sense to think that once a task has been performed, a necessary further ‘task’ relies on subsequent processing to integrate the lessons of that ‘experience’ into self-referential brain structures for later utility in solving other problems and performing better on future tasks. In short, providing a repository of heuristics that one can draw on to short-cut behaviours and decision-making, and consolidating the brain’s status as a prediction machine that acts on prior knowledge in order to update its models of the world adaptively. ‘I’ have experienced such and such before, so now ‘I’ can use that experience as a short cut to future rapid decision-making and problem solving. Therefore having experiences can shore up one’s surety (confidence) in acting decisively next time. It is speculated here that the self has some bearing in this, with personal significance perhaps helping to boost the instinctual capacity for such speedy decision-making...
From this standpoint, and following Lin et al.’s (2017) line about the reconfiguration* of network topology subsequent to task performance (as differentiated from topology prior to task performance), it figures that (positive) changes should be observable in the default mode network, and that such modifications represent a re-organisation of the ‘self-referential’ processing stream that contributes to improved performance on a future occasion. In short, ‘self-development’ made manifest in this ‘self’-pertinent brain network. Further research will seek to build on the notion that the default mode network may provide a functional imaging (and neurocognitive) signature of an enhanced ‘self’, predicting better performance on cognitive tasks and a capacity to intrinsically modulate the brain’s ability to efficiently turn networks on and off at the appropriate time to effect optimal performance.
In short, the ‘self’ first deliberates prior to a task, perhaps uncertain, speculative. Then the self ‘retreats’ whilst performing said task. Finally, the self reconstitutes in the aftermath, re-integrates the new experience, and updates its model for future reference. Growth has been neurally effected...
* “A possible explanation for our finding is that the DMN has been modulated by the prior cognitive task, which would shape the DMN topology properties and lead to the development of a new DMN organization” (Lin et al., 2017)
Lin, P., Yang, Y., Gao, J., De Pisapia, N., Ge, S., Wang, X., Zuo, C.S., Levitt, J.J. and Niu, C. (2017). Dynamic Default Mode Network across Different Brain States. Scientific Reports, 7, 46088
Upgrade your brain by plugging into nature
“The ships hung in the sky in much the same way that bricks don't.”
― Douglas Adams, The Hitchhiker's Guide to the Galaxy
Slartibartfast hails from the planet Magrathea, being a designer of planets, and is particularly renowned for (and proud of) his award-winning work on coastlines. Notably, “the fjords of Norway... were mine”.
Oh to have the power to create worlds...
Well, you can. With a little acumen for programming, anyone can indeed be the architect of their own environment. Virtual Reality interfaces are readily available and, via freely accessible engines such as Unity, one can embark on a journey while literally creating the context in which that journey takes place.
To a citizen of a more anachronistic age such technology might seem ‘indistinguishable from magic’, as Arthur C. Clarke would have put it. But it is here, it is open to access, and it can be embraced and immersed within.
From a perceptual and philosophical standpoint this puts us in a fascinating position. We can explore the mechanisms behind how we perceive our own reality by actually engaging in the construction of it! Which in itself is a perfect literal analogy for the basis of perceived reality being a construction of the brain anyway...
At our fingertips is a medium that allows for the generation of scenes in real time letting us place objects in situ, and explore the interrelationship of these to one another and in their own place within that scene background. And once we have done this, by donning a set of goggles we can then supplant our physical environment with a virtual one, and thence to physically move about within it. And even to interact with those objects therein! That’s quite a proposition to consider right there!
We can realise our own personal Narnias. We can create the wardrobe then step right on through into the snowy wastes beyond.
How might this assist in solving ‘problems’ encountered in the real world, one might ask (i.e. how do we get past the ‘gimmick’ phase of infinitely exterminating zombies...)? One avenue I am currently exploring is to replicate my own working environment (currently my living room/office space). The constraints of physical space, coupled with a disorganised system employing sheets of paper, post-its and wall space, led me to wonder how I can better organise my workspace in the virtual realm. By replicating this space I can potentially recreate an infinite variety of similar workspaces, in which I will never be short of space to ‘organise’ my thoughts (on virtual post-its). And the very process of doing so will help me improve my capacity to organise and my ability to connect thoughts together, seeing intersections of disparate ideas that may catalyse creative innovation in my own thought processes. I will never, in effect, need to ‘tidy up’ and risk losing my thinking as the post-its go into a figurative bin.
A further benefit of using the virtual realm in this way is the ability to physically navigate throughout it. The act of moving through a scene can in itself significantly contribute to learning, to memory, and to general cognitive ability. In acting (i.e. moving), my brain will galvanise its motor areas, engaging sensorimotor functions that enrich both the perception of the world with which I am engaging and the thinking processes that are collectively being operationalised. Consider the idea of the ‘memory palace’ (also known as the ‘Method of Loci’). This was a mnemonic device used in the ancient world to commit facts to memory: key cues to those facts were ‘placed’ in an imagined ‘palace’ that the individual could later reconstruct in the mind, ‘walking the corridors’, as it were, of this personally significant realm, thereby revisiting the locations of the facts and conjuring a vivid environment in which memories could be readily accessed.
Such a device was also employed by the character Hannibal Lecter and referenced throughout Thomas Harris’s (2000) novel ‘Hannibal’, as well as being written about by historian Jonathan Spence (1984). Naturally, the technique lends itself to the medium of Virtual Reality. A team led by linguist Aaron Ralby has explored the potential for this, devising a PC/Virtual Reality platform in which one can construct one’s own memory palaces in virtual environments, thereby employing the technique of spatial learning to perhaps learn new languages or string together facts as an aide-memoire. (see Wired article)
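The core of the Method of Loci can be sketched as a tiny data structure: facts pinned to an ordered route of locations, recalled by ‘walking’ the route. This is only an illustrative sketch (the class and method names are my own invention, not any existing platform's API):

```python
from dataclasses import dataclass, field

@dataclass
class MemoryPalace:
    """Toy Method-of-Loci store: facts 'pinned' to an ordered route of loci."""
    route: list = field(default_factory=list)   # loci in walking order
    pins: dict = field(default_factory=dict)    # locus -> fact

    def place(self, locus: str, fact: str) -> None:
        """Pin a fact to a location; a new locus extends the walking route."""
        if locus not in self.pins:
            self.route.append(locus)
        self.pins[locus] = fact

    def walk(self) -> list:
        """'Walk the corridors': recall (locus, fact) pairs in spatial order."""
        return [(locus, self.pins[locus]) for locus in self.route]

palace = MemoryPalace()
palace.place("front door", "first point of the speech")
palace.place("hallway mirror", "second point")
palace.place("staircase", "closing point")
```

The spatial ordering is the whole trick: recall is driven by the route, not by searching the facts themselves, which is exactly what a VR memory palace makes literal.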
All this points to exciting possibilities for the utility of virtual reality technology to enhance our cognitive capabilities, extending more traditional media (e.g. written diaries, mnemonics). Furthermore, as a tool to uncover the basic mechanisms of perception (notably our own brain’s propensity to derive a notion of ‘veridical experience’ from the constituent elements of the environment in which we are immersed), this is potentially invaluable. As a somewhat strange ‘coincidence’ today, as I sought to get to grips with some basic geometrical scene properties, I constructed a simple environment with various geometric shapes. At centre stage (well, actually off to one side) was what I intended to be a sort of plinth acting as a bridge that one could walk along, composed of various objects embedded within it (obstacles, as it were). When I interfaced with this for the first time in the VR headset, the scale and position that I had somewhat randomly selected just so happened to correspond uncannily with the table at which I am sat typing. The ‘plinth’ became a table, being exactly the right height, and even the same colour and texture. I reached out (virtually) and was able to touch, in the right proportions, the surface in front of me (physically). Unwittingly I had constructed a virtual environment that effectively corresponded to my physical environment, despite intending something quite different (the positioning was totally by chance, as I am a complete novice so far).
Perhaps this is my unconscious acting as the architect of my environment, very much in the way that my unconscious brain functions likewise act to construct the nature of the ‘reality’ that I come to perceive in a conscious fashion... Boundless potential exists when we begin to explore how far we can mix our realities in this way, further consolidating the brain's capacity to engage with its environment and extending cognitive function out into the virtual realm, coupled with physical sensory feedback on that interaction...
Hopefully before too much longer I shall be working on fjords of my very own. And then finally determine what is the actual question for which the answer is, by all accounts, 42.
VR 'memory palaces' could help you master a new language. Wired magazine:
Harris, T. (2000). Hannibal. Arrow Books: London
Spence, Jonathan D. (1984). The Memory Palace of Matteo Ricci. New York: Viking Penguin. ISBN 978-0-14-008098-8.
“You talk as if a god had made the Machine," cried the other. "I believe that you pray to it when you are unhappy. Men made it, do not forget that. Great men, but men. The Machine is much, but not everything.”
― E.M. Forster, The Machine Stops
Consider a simple proposition. That ‘self’ is the basis for human suffering, discontent, inequality. That ‘self’ has a basis in specific and malleable physiology. That ‘self’ can therefore be ‘turned off’, re-moulded, inhibited, put to ‘better use’. Directly controllable in effect.
Age-old traditions, modes of thinking and principled systems purport the above. Eastern methods for banishing the ‘illusion of self’ have perpetuated for millennia. Yogic practices, ancient spiritual ceremonial rituals, and even more modern-day techno-philosophies abound to tackle this ‘realisable’ premise.
What if a future version of our society encompasses an artificial-intelligence-enhanced state of being that has established this capacity to overrule ‘self-indulgence’? Would this be a preferable state of affairs? Max Tegmark (Life 3.0, 2017) refers to a future scenario of ‘Libertarian Utopia’ in which machine-derived superintelligence lives in harmony with human (standard) intelligence and a plethora of hybridised human-machine variants (cyborgs, upgrades, augmented intelligences). Now this may or may not come to pass, but the question Tegmark poses is to what degree one or another envisioned future scenario may be selected. This is effectively dependent on humanity’s current role in defining where artificial intelligence may take us as a species, as well as on a development that perhaps WILL at some point fly the nest, breaking out of the confines of human-limited intelligence as dictated by our biological machinery...
I would like to pose a proposition, as per the opening gambit, which seeks to get at the heart of the issue with respect to the human element determining this future. The core of my thinking is lodged in a ‘simple’ notion: that the self is indeed ‘locatable’ within a physiological frame that can be notionally influenced, moulded, even ‘turned off’. As our understanding of ‘intelligence’ increases, or at least of the brain functioning pertinent to cognition (our perception and construction of ‘reality’), we can begin to envision ways to address the existential question of ‘selfhood’ and its basis in suffering, individual ambition and collective inequality. This understanding takes into account the motivators of our own behaviour in the world at large, how we can mould our own environment, and how, through technology, we might extend ‘intelligence’ into that world and expand (or shrink) our awareness within this wider system.
The ‘simple’ proposition is that a ‘default state’ within the brain feeds (metabolically) a network of regions involved in internally directed cognition: one’s perspective on where one is in space, what one has previously experienced through the course of one’s life, and what may happen to ‘one’ in future anticipated scenarios. This ‘self’ can be seen to be a cause of anguish, greed, obsession and addiction. These ‘cortical midline structures’ (amongst other areas) could essentially be said to define ‘self’ at a neurophysiological level. Conversely, a set of brain regions involved in ‘goal-directed’ cognitive functioning, located for instance in prefrontal cortical structures, work together in conjunction with an attentional system that maintains focus on task (or fluctuates between ‘default’/self-distraction and task-focus). With task focus comes productivity, enhanced performance, and, actually, banishment of the self. When the ‘task-positive’ network is engaged, the individual is exactly that – engaged in, and absorbed by, purposeful action. The ‘default’/self-related network is tuned down, as these networked regions operate by and large in a mutually exclusive fashion (though as research evidence becomes more granular there will be greater understanding of nuanced interaction and fluctuation between networks under certain productive conditions).
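The push-pull between these two networks can be caricatured in a few lines of code. This is purely illustrative, not a neuroscientific model: two mutually inhibiting units standing in for the ‘default’ and ‘task-positive’ networks, with arbitrary inhibition and rate values chosen only to show the see-saw dynamic:

```python
# Toy caricature of the default-mode ('self') vs task-positive trade-off:
# two mutually inhibiting activity levels, each in [0, 1].
def step(default, task, task_demand, inhibition=0.5, rate=0.2):
    new_default = default + rate * ((1 - task_demand) - inhibition * task - default)
    new_task = task + rate * (task_demand - inhibition * default - task)
    # Keep activity levels within a plausible 0..1 range.
    return (max(0.0, min(1.0, new_default)),
            max(0.0, min(1.0, new_task)))

default, task = 0.8, 0.2            # start mind-wandering: 'self' dominant
for _ in range(100):
    default, task = step(default, task, task_demand=1.0)  # engage a task
# With sustained task demand, 'self' activity is tuned down while
# task-positive activity comes to dominate.
```

The point of the sketch is simply the mutual exclusivity described above: raising the demand signal drives one unit up and, via inhibition, the other down.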
In our technology-enhanced future scenario we can, underpinned by a burgeoning awareness of how our biological machinery supports our cognitive functioning, perceptions and behaviour, begin to consider how we might influence this state of being. Through technological interventions. Technology can be described as ‘the application of scientific knowledge for practical purposes, especially in industry’ (thanks, Google). ‘Industry’ can be interpreted as ‘productivity’, directed towards sustaining economic activity that advances the affluence of a society, an individual, a ‘quality of life’. So we might defer to ‘technology’ in the sense of computer/artificial-intelligence-imbued apparatus, but in fact we can apply the term more generically to any practical intervention wherein knowledge can enhance and inspire change, progress and development.
Jamie Wheal and Steven Kotler (2017) have written about how science and technology can (and have already begun to) inspire interventions that transform thinking, prompt innovation, and connect with more ancient traditions of self-mastery, so I owe a debt to their work for referencing many strands of research I have mined to inform my own. A central conceit of all this is that technological interventions can indeed target, neuroscientifically, the brain's construct of selfhood. This can effectively be boiled down to three possible means of ‘intervention’ – structuring the environment, pharmacology, and measurement/feedback apparatus – each of which I return to below.
It should be emphasised here that this task-positive state of being brings with it a sense of satisfaction and wellbeing, as a consequence of being focused and aligned: one’s skills are being employed to their best, in harmony with the environment. When one ‘returns’ from this task-focused ‘journey’, one has grown, strengthened by that experience. Skills consolidated, learnings gleaned. And re-connection with ‘self’, however momentary, brings a feeling of being more in control, more fulfilled (and better capable of disconnecting, as needed, back into a goal-directed state).
Now, the interesting thing from an egalitarian perspective is that focusing on a goal and tuning down the brain regions that are ‘self-focused’ naturally lends itself to being more outward-focused, and altruistic in a sense. For it is not about one’s individual role or goal per se. When the default ‘self’ is ‘banished’, its self-interest ought to retreat from focus within the context of pursuing goal-directed tasks, opening up possibilities that are beyond self and more in the interests of a wider societal context. The tricky part is structuring tasks that do not become too individually constrained, but that is a piece for future exposition. This idea has not yet been fully qualified in research, and care should be taken not to simplify the argument too readily: areas within the default mode are in fact involved in Theory of Mind, which considers the thoughts and feelings of others (see Filkowski et al., 2016). As such, there will doubtless be components of default activity that may need to be preserved when engaged in tasks that have collective benefits. But nonetheless, in principle, the removal of self paves the way towards harmonious interrelationship with others and the wider community and environment. Further research, as mentioned, will tease out the nuances of how functionally connected brain resources are distributed in task-positive states.
Summing up, then, the basis here is that ‘self’ is a function of brain regions that demand a lot of energy, somewhat wastefully, and which detract from effective performance on goal-related tasks. Now that there is deeper understanding of how this is based in brain activity that can be relatively localised, we can seek to intervene in that system and divert resources to more productive networks of brain regions that optimise task performance. In doing so the self ‘switches off’, and a more industrious and fulfilling state of being arises. Through this scientific understanding we can devise technological interventions that help in this process of ‘self-mastery’. We can do this by structuring our environment in a way that motivates and directs participants towards task focus at the expense of internal, distracting ‘self-thoughts’. We can do it by offering pharmacological substances (which may quite specifically target receptors such as serotonergic 5-HT2A) that dissolve self and desynchronise the electrophysiological activity associated with ‘normal cognition’ (Muthukumaraswamy et al., 2013), or that create observable impact on the ‘balance’ of stress hormones such as cortisol and noradrenaline (influencing arousal and therefore motivation). Finally, we can devise better measuring apparatus to probe this brain activity, and even, through ‘neurofeedback’ (Sitaram et al., 2017), give better control over the distribution of activity and its associated cognitive processing. And in accord with this, through Internet of Things-enabled connectivity with our WHOLE environment, perchance synthesise a means for ‘controlling’ all aspects of our ‘being’. Like a thermostat controlling the heating system in your house, you could ensure that your brain state resonates most harmoniously with the wider external environment and the demands of any task that requires fulfilling. All the while being more attuned to collective goals, the needs of society, the needs of the planet...
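The thermostat comparison is really a closed-loop controller: measure a signal, compare it with a target ‘set point’, feed back a proportional corrective nudge, repeat. A minimal sketch of that loop, with an entirely invented ‘task-focus’ index and an arbitrary gain (no real neurofeedback system is this simple):

```python
# Thermostat-style closed loop: the essence of the feedback idea.
def closed_loop(measure, actuate, target, steps=50, gain=0.3):
    for _ in range(steps):
        error = target - measure()   # how far is the state from the set point?
        actuate(gain * error)        # nudge the system towards the set point

state = {"focus": 0.2}               # hypothetical task-focus index, 0..1

closed_loop(
    measure=lambda: state["focus"],
    actuate=lambda nudge: state.update(focus=state["focus"] + nudge),
    target=0.9,
)
# After repeated measure-and-nudge cycles, 'focus' settles at the target.
```

Whether the signal is room temperature or a neurofeedback readout, the loop is the same; all the hard science lies in building a `measure` that actually tracks the brain state in question.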
So a future could exist in which a more egalitarian status is conferred society-wide, by understanding how to guide individual ‘self-absorption’ towards altruistic, goal-directed focus through the application of technology (as conceptualised in the different ways above). The techno-spiritual revolution could go hand in hand with the resurgence of ancient practices (as is happening anyway), so that we all reconnect with our species’ purpose: to evolve harmoniously with the machines that will eventually replace us. At which point, with self banished, there is no ‘one’ left to mind!
5th December 2053: sat by the virtual fireside in my condominium, preparing for the day’s ‘work’. I have installed my fNIRS headband and ensured it’s synced with the Human-Environment integration system, networked worldwide via the Global Internet of Things. The AI big-data analysis framework has concocted my task list for today and is now, through real-time neurofeedback, providing my own digital augmented operating system with the impetus to optimise my large-scale brain network connectivity protocol. I feel in tune with my own cognitive capabilities, switching effortlessly into a central-executive prefrontal cortex connective state. My ‘self’ is dwindling as my engagement tunes in to my goal, which is to provide innovative solutions to sustainable energy technology challenges affecting a community in India. I am virtually in my remote ‘neighbour’s’ shoes, understanding and experiencing the daily challenges he faces making ends meet. But importantly, I can focus all my energies on a constructive solution that can be fed across the web network to stimulate further solutions and innovations, coupled with AI-enhanced superadditive innovation engineering. We can test it all iteratively in VR, 3D-print concepts and then test for real in situ. After a day’s work like this I can return to my ‘self’ space briefly and review how much better ‘I’ feel for it. I don’t even need the fNIRS interface to marshal my own thought processes; that’s more for my ‘working day’, connecting planet-wide for industrious applications. No, this is for personal ‘benefit’, as I strengthen my own resolve and motivation as a person to be a better version of myself: content, fulfilled, and eager and full of energy to get into tomorrow’s working day helping my fellows across the worldweb.
Sinister? Against the spirit of human potential? Granting too much 'power' to AI? Or idealistic and unattainable? To be explored....
Carhart-Harris, R. and Nutt, D.J. (2017). Serotonin and brain function: a tale of two receptors. J Psychopharmacol. 31(9): 1091–1120
Filkowski, M.M., Cochran, R.N., and Haas, B.W. (2016). Altruistic behavior: mapping responses in the brain. Neurosci Neuroecon, 5, 65–75.
Muthukumaraswamy, S.D., et al. (2013). Broadband cortical desynchronization underlies the human psychedelic state. J Neurosci, 33(38), 15171–15183.
Sitaram, R., Ros, T., Stoeckel, L., Haller, S., Scharnowski, F., Lewis-Peacock, J., Weiskopf, N., Blefari, M.L., Rana, M., Oblak, E., Birbaumer, N., and Sulzer, J. (2017). Closed-loop brain training: the science of neurofeedback. Nature Reviews Neuroscience, 18, 86–100.
Tegmark, M. (2017). Life 3.0. Penguin Books: UK
Wheal, J. and Kotler, S. (2017). Stealing Fire: How Silicon Valley, the Navy SEALs and Maverick Scientists are Revolutionizing the Way We Live and Work. HarperCollins
“Cyberspace. A consensual hallucination experienced daily by billions... Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding...”
― William Gibson, Neuromancer
As a cineaste and ardent fan of the craft and storytelling potential of the immersive medium that is film, I have sat through (suffered) many an experience that has bored me to tears, frustrated me, annoyed me, or sent me to sleep. But I don’t believe I have ever actually walked out of a film, though I have come close. In fact, I think the closest I ever came was whilst watching the grandstanding motion-picture event that is/was A.I. Artificial Intelligence.
I confess to being a diehard aficionado of Stanley Kubrick (three of his films are in my all-time top 10 – which ones, I wonder...?! – there aren’t that many to choose from, tbh), and I suppose I had high hopes for his long-germinating project that was posthumously realised by none other than Steven Spielberg. I had hoped that, in his spirit, a sort of sequel (thematically at least) to 2001 might have followed up on notions of machine intelligence as a natural evolution of consciousness, perhaps questioning the role of emotion as redundant in a more clinically perfectionist universe where the ultimate limits of cognition might enable intelligent expansion across the void of space. At the expense of sugary sentiment. But no, Spielberg indulged himself and the viewer in a particularly saccharine take on ‘what it means to be human’ and how the future of machines rests in their capacity to evolve emotional connectivity. So far, so Hollywood.
It made me squirm in my seat, long for it to end. I had to compel myself to not storm out of the cinema with half an hour more to go, muttering indignantly.
Perhaps this speaks to me as a cognitive psychologist, skeptical of ‘cod’ ideas about emotion – what is emotion? – as espoused in popular fictions, or of an over-amplified sense that humanity will prevail due to some special status, couched in this ‘feel-good’ but ambiguous notion of ‘emotion’. I am not here to write a PhD on ‘what is emotion and how does it relate to the concept of man vs machine dominance of the future Intelligence Landscape and evolutionary Darwinist cyber-thinking’ (!) – though perhaps I could? Rather, I want to begin (from here on in) a series of pieces touching on what AI might mean more generally: humankind’s reliance upon technology, notions of ‘self’ and ‘consciousness’, and whether inevitable progress is simply the prime directive for evolution, something we need to accept and put up with...
Hopefully this will touch on some notions such as emotion, including more up-to-date thinking on the subject (such as the ‘constructionist’ framework championed by the likes of Lisa Feldman Barrett, 2014, wherein at the heart of the matter is consideration of the human brain as a prediction machine in itself, running iterative algorithms that learn, fail, adapt, succeed and grow, with emotional ‘signals’ in the mix as important functions facilitating that process). Within this line of thought, we can look at the brain and its architecture as indeed analogous to a ‘machine’, with mechanistic causal chains and connections, feedback loops and networks, which beget the cognitions and ‘qualia’ of experiential perception: in short, the ‘programs’ (programmes) and operating software dependent on this infrastructure.
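The ‘prediction machine’ idea has a textbook minimal form: a delta-rule learner that reduces its prediction error over repeated exposures, with the error term playing the role the constructionist account gives to such internal ‘signals’. A toy sketch (the learning rate and the constant ‘world’ are arbitrary choices for illustration):

```python
# Minimal 'prediction machine': learn by shrinking prediction error.
def learn(observations, rate=0.3):
    prediction, errors = 0.0, []
    for obs in observations:
        error = obs - prediction       # prediction error: the 'surprise' signal
        prediction += rate * error     # adapt the prediction towards the world
        errors.append(abs(error))
    return prediction, errors

# A world that keeps presenting the same value: surprise decays as the
# machine's prediction converges on it.
prediction, errors = learn([1.0] * 20)
```

Fail, adapt, succeed: each pass the error shrinks, which is the loop the paragraph above describes in prose.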
A good source of popular reference I shall draw on, amongst others, is Max Tegmark’s (2017) book ‘Life 3.0’, which nicely elucidates the field of Artificial Intelligence research and its ethical role in determining the future of AI development (to avoid the fateful Cyberdyne Systems ‘incident’ of 1997), and takes a serious look at where AI may present significant benefit to our species’ co-development into the near, intermediate and long-term future. It’s here to stay, it’s growing exponentially, and we really don’t know where it is truly going to take us (or leave us).
It is becoming an ever stranger world, day by day. Yesterday I conversed with a chat function online, attempting to source some virtual reality equipment compatible with slightly outdated computer hardware. Frustrated at the speed with which everything updates and renders old equipment redundant, I was somewhat exasperated and defensive with the agent with whom I was chatting. I conversed rather tersely and came close to asking, irritatedly, whether he was a human or an AI, and if the latter could I please have a human instead (perhaps I prefer some‘one’ with the capacity to obfuscate more and put me at ease, even when getting nowhere?!). The tenor of his responses suggested to me he was indeed human. But in retrospect I can’t be 100% sure. Such is the bizarre state of affairs (at least interactively speaking) in which we live. Is it a good or a bad thing? Is devolving responsibilities such as providing consumer advice (or health advice, fitness advice, legal advice, etc.) to AI a sensible, effective, preferable course of action?
Much research in AI, and in psychology, would argue that non-human agents can provide the appropriate rapport cues that put humans at ease, engender trust in the communication process, and even elicit deeper levels of openness than human counterparts might (Fiske et al., 2019). It's still early days, but one thing is for sure: machine intelligence will exponentially improve, learn, develop, and extend beyond its original operating system, program and limitations. And perhaps it is best to see that as an exciting opportunity to be harnessed, or guided where possible.
Or we pull the plug now... before it’s too late. Damn, Schrödinger’s cat is out of the bag. The mice have escaped the interface and are scurrying after the silicon cheese. The red eyes are glowing in the dark, metal legs scraping across the tarmac, relentless, rasping ‘we’ll be back’...
Next up: how AI may ‘solve’ our modern-day political crisis, putting the meaning of democracy back into the lexicon. All politicians from the year 2037 will be required to register their profiles on the Mechanical Turk, henceforth conducting their political machinations at the behest of the crowdsourcing algorithm that determines whose proposition wins the big-data-analysed consensus of opinion, carefully weighing into the equation socio-economic equality formulae, balanced against environmental impact (from the worldwide IoT net), offset against the predicted movement of key stocks and sustainable business practices. Nobody profits from politics, financially or status-wise. Protected anonymity is key to ensure the latter. Everybody gets what nobody wants.
And lo it’s Metal Mickey. In a blond wig and puckered visage. Running the whole show.
Some things might just never change.
"The creatures outside looked from pig to man, and from man to pig, and from pig to man again; but already it was impossible to say which was which."
- George Orwell, Animal Farm
Feldman Barrett, L. and Russell, J.A. (2014). The Psychological Construction of Emotion. ISBN 9781462516971
Fiske, A., Henningsen, P., and Buyx, A. (2019). Your Robot Therapist Will See You Now: Ethical Implications of Embodied Artificial Intelligence in Psychiatry, Psychology, and Psychotherapy. Journal of Medical Internet Research, 21(5), 1–12.
Spielberg, S. (dir.) (2001). A.I. Artificial Intelligence. DreamWorks Pictures. https://en.wikipedia.org/wiki/A.I._Artificial_Intelligence
Tegmark, M. (2017). Life 3.0. Penguin Books: UK
"I'm free, to do what I want, any old how" K. Richards and M. Jagger
Would you jump off a mountain in good faith that a parachute will spring open and convey you gently back down to the ground?
What would it take for you to make that step out into the void, eyes screwed shut, and your fate thrust forth into the lap of the gods...?!
Just how much of a decision would you be taking consciously to do this? You have free will, you are in control, the decision is yours and yours alone...right??
You might be a little surprised if I told you that the decision you would be making had actually already been decided upon BEFORE you decided to do it! Confused? You should be.
The scenario of jumping from a clifftop is an extreme example but serves also as a metaphor for taking risks, making commitments, pushing boundaries in life. So substitute a more relevant contextual challenge that fits with your lifestyle and read on with that in mind...
The question of free will is a contentious one, debated by philosophers, neuroscientists, psychologists and their ilk. But delve down into the human brain and you will find some intriguing propositions about how it works, how the thoughts that appear to be our own, and the actions that we choose to take, may rest upon the foundation of an illusion.
A position that is increasingly gaining traction in the scientific community concerns the ambiguous nature of human agency, with the veridicality of 'reality' itself called into question. By this I mean that what we think of as true or real (be it cause-and-effect, our capacity to determine our fate through our actions and decisions, or the control we exhibit over an environment we can seemingly define clearly all around us) is in fact just the tip of a shimmering iceberg whose constitution is far more complex. The nitty-gritty is hidden below the surface of a realm we take for granted as being 'how things are'.
The brain, in its infinite complexity, computes and processes a vast amount of information that we cannot hope to comprehend or keep track of from a conscious standpoint. Motorically speaking, the information continuously flooding in from the environment and impinging on our senses, demanding moment-to-moment adjustment of our bodily position in space, is just mind-bogglingly incomprehensible (to paraphrase an observation from Douglas Adams on the scale of the Universe, which could equally be directed at the brain itself). It is said that the brain has evolved to the size and density that it has in order to accommodate the computing power necessary for ambulatory functioning (i.e. moving about, including fiddling with implements called 'tools'). We might surmise, proudly and smugly, that we sit atop the food chain with our weighty grey (and white) matter on account of our superior intellectual capacities and amazing perceptual abilities, including the propensity to make grand art, send our kind to the Moon, create huge shopping malls and invent reality TV. But actually that most likely betrays a false sense of what our cerebral machinery is all about. Why is it that we struggle to make robots that can perform the fluid movements we take for granted, and that exponential advances in computing (following Moore's law) are only now, decades later, yielding some developments in 'lifelike' motor coordination? It's 2019 and I don't see Replicants standing on every street corner, so Ridley got that one wrong. But computers have been able to beat people at chess for some time now...
Ironically, the most metabolically demanding component of the brain seems to be lodged in the very structures that give us this grandiose sense of our own importance, and which can in fact get in the way of producing the great achievements mentioned above. When movement is effected in a most efficient way, or when the boundaries of human endeavour associated with refined movement skills are pushed (e.g. leaping from great heights, whizzing about at high speed, and so on in athletic pursuits), it is believed that these areas of self-indulgence are in fact turned 'off' (tuned down). Arne Dietrich (2006) talks about this, referring to it as the transient hypofrontality theory. So in that sense, whilst the components of movement are computed and implemented 'effortlessly', i.e. efficiently, by a complex infrastructure, the 'self' and its luxuriant ramblings put a strain on resources and give rise to feelings of fatigue and perplexity at the hard work of it all... go figure.
Returning to the questions posed at the start, let us bring to bear on this argument a point about free will, or its potential absence from the proceedings. The brain's mechanisms below the surface of conscious awareness manifest algorithms and heuristics that rely on past experiences encoded in memory and motor cortices, predict outcomes probabilistically, and feed forward courses of action (also 'decisions') to the higher centres of awareness that then inform what 'I' will do (or say) next.
A marvel of discovery in the neuroscientific canon, around 1964, was the so-called Bereitschaftspotential (Kornhuber and Deecke, 1964). This measure concerns activity in the motor cortex and supplementary motor area which precedes voluntary activity in muscles. That it can be identified in experimental situations prior to an apparently conscious act being made raises a near-metaphysical challenge to the notion of conscious decision-making based in volitional agency. Instead we may postulate a decision as representing the sum total of unconscious processes from which an outcome is determined and made accessible to the conscious agent ('me'). Benjamin Libet (Libet et al., 1983) found that this response would occur about 0.35 seconds before an experimental subject reported awareness of a desire to make a motor action.
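The timing claim is easy to lay out on a single line. The classic Libet figures (seconds relative to movement onset at t = 0) put readiness potential onset around -0.55 s and the reported 'urge to move' around -0.20 s, so the potential leads conscious awareness by roughly the 0.35 s quoted above:

```python
# The Libet timeline, in seconds relative to movement onset (t = 0).
# Values are the commonly cited round figures, for illustration.
events = {
    "readiness_potential_onset": -0.55,  # brain activity begins to build
    "reported_urge": -0.20,              # subject reports 'wanting to move'
    "movement_onset": 0.0,               # the finger actually moves
}

# How long the readiness potential precedes reported awareness:
lead = events["reported_urge"] - events["readiness_potential_onset"]
```

Nothing deep computationally; it simply makes the ordering explicit: brain activity, then reported awareness, then movement.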
I have talked at length elsewhere about the brain networks that turn on and off (so to speak) depending on whether attention is focused outwardly, on a task to be performed, or inwardly, on internal mentation, rumination, mind-wandering, or notions about one's 'self' and its indulgent concerns. And about how, by focusing outwards and being immersed in 'doing' rather than 'thinking' per se, one in fact loses the sense of self, the awareness of being an agent in that sense. From this we can at least propose that 'self', 'awareness', 'agency' and associated concepts are in some respect dependent upon the state of the brain, and on how its resources are managed within this bogglingly convoluted connective system. We could also make further assertions about the availability of resources for 'cognition' (in the higher/abstract sense of the term) being very much down to prioritisation within that system, based on demands for sustenance of the body (and its brain) and its biological imperatives. A luxury, perhaps?
So from this basis, the sense that we are entitled to free agency, a volitional 'will' to do with as we please, to make decisions as we see fit, and to adopt a vainglorious demeanour about how marvellous it is that we can do what we want, does perhaps rest on a fallacy.
A final point to make with respect to the Bereitschaftspotential refers to a recent paper in which researchers in Germany and Austria conducted an experiment studying the BPs/brains of bungee jumpers (Nann et al., 2019). Jumping from a 192 m bridge (I know it well, having done likewise a couple of years back), the BPs were recorded and conclusions drawn. This study sought to ascertain just 'when' that go/no-go decision occurs vis-a-vis one's propensity to leap forth as if against the mores of sanity and survival. The answer would seem to be that, lo and behold, the potential registers as one would expect, in advance of the decision 'to go'. Interestingly, as this represents an 'extreme' or 'life-threatening' type of scenario (despite the safety strictures in place), participants still exhibit fear and reticence and must overcome the tendency to abort the decision 'to go'. But at some level, whilst this conflict is raging internally, that potential/tendency has already been set. What has yet to be investigated further (and this is where my own research seeks to bridge gaps, pun unintended) is how the fronto-parietal attentional networks, and the shifting activation in functional connectivity between 'task-focused' and 'task-irrelevant' (or 'self'-indulgent) processing, come into play in individuals taking part in extreme activities and harnessing their 'willpower' to jump into the abyss... 'Free won't', to coin a phrase, perhaps being the order of the day when it comes to a capacity to inhibit a predetermined action. Watch this space as this line of reasoning develops further!
At the end of the day 'you' will come to a decision as to whether 'you' are going to do the act posed in the earlier question. But that is not to say that you have full agency over that 'doing'. For within your own makeup there will be a brain network-driven tendency to mitigate impulse, whether that is to do a bold act such as jumping from a cliff, or to not do that act in a month of Sundays! Likewise, if we take this metaphor and apply it to your impetus to make a committing decision about a life event, or even to make changes that go against the impulse to stay the same, the fact remains that the motivation rests inside your brain, and to an extent outwith your 'control'.
So rather than sweat it out in the middle of the night wondering what 'you' would do, take some solace in the thought that all 'you' can really do is follow whatever that inner voice tells you, as it has by and large been determined in advance by a much better-informed committee in the recesses of your brain's parliamentary chambers!
Footnote - updated position on the above
Actually, the validity of the Bereitschaftspotential, in terms of claims made with reference to its status in the debate on free will, has been revisited in recent years (Schurger et al., 2012), with the inflation of its significance attributed to the interpretation of signal-within-noise in brain activity fluctuations relative to decision making. If anything, evidence suggests that the brain weighs up 'evidence' based on sensory information in order to make a decision, and the activity corresponding to 'evaluation' of that information accumulates towards a 'tipping point' at which a decision can be assertively made. [Could it in fact be that our conscious thoughts are 'echoes' of this underlying activity? Or shadows on the Platonic cave wall? Continuing this analogy, even a projection, an echo, can have physical ramifications and influence the course of events - an Alpine avalanche, for instance, caused by sound waves bouncing round the mountain amphitheatre, or a shadow demarcating an area of shade to escape the sun's rays (stretching the point somewhat!). Perhaps consciousness, and by association 'will', is indeed an echoic manifestation of underlying automatic processing, yet one whose reverberations as a by-product (waste?) can actually alter the course of actions that have been set in motion. This is an intriguing proposition, as if evolution has bestowed a circular economy upon the neural system such that the 'waste' it generates propels the organism forwards via emergent 'agency', like a bat echolocating to navigate, using that waste to reshape the path ahead...]
The BP in itself may be an artefact of the analysis and not, as has been asserted, a pre-emptive signature of a foregone conclusion, as it were, of a decision having been determined prior to conscious awareness of it. The matter is open for debate. Likewise the question of free will is also unresolved. The fact remains, though, that the brain processes a huge amount of data, and as a 'subconscious' committee of information gatherers, the likely direction of a decision may already have been weighed up prior to the conscious selection of that decision. At which point the conscious agent exhibits so-called free will to decide X vs Y. Up until the moment of, say, jumping off a cliff, one has the capacity to 'change one's mind'. But somewhere deep down there is an impetus to do it or to not do it. And effort may need to be expended to counteract that impetus, to overcome an inertia that has metabolic roots in the availability of resources and the direction in which to employ them (literally, in this case, fight or flight - i.e. step into the abyss and 'fly', or 'fight' the urge to do so/not do so, depending on the personality and motivational state of being - turning the actual concept on its head for a moment!). An interesting take on this to pursue further concerns a biological/physiological impetus to act upon the energy stores available to direct in the service of an intention, a goal, a drive to do something when one is engaged and motivated towards an activity (inspired!). Namely, how the underlying neurophysiological/neurocognitive mechanisms organise operationally in order to make that act/final decision to 'go for it' happen. Do we really 'know' that we will do X when in situation Y? We can surmise a probability (perhaps approaching, but never with absolute certainty, 100%) based on past experience, or perceived motivation to do so. But do we really 'know' that we WILL make that commitment in the moment (as opposed to 'choking' at the last minute)?
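The artefact argument can be made concrete with a toy simulation, loosely in the spirit of Schurger and colleagues' leaky stochastic accumulator (this is my own sketch; every parameter value is an illustrative assumption, not fitted to data). Spontaneous activity fluctuates around a sub-threshold mean, a 'movement' is declared whenever it happens to cross a threshold, and the activity is then back-averaged time-locked to those crossings, exactly as EEG is back-averaged to movement onset:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy leaky stochastic accumulator: dx = (drift - leak*x)dt + noise.
# All parameter values are assumptions chosen for illustration.
dt, leak, drift, noise_sd, threshold = 0.001, 0.5, 0.05, 0.1, 0.25
n = 400_000  # one long run of spontaneous 'ongoing activity' (400 s)

noise = noise_sd * np.sqrt(dt) * rng.normal(size=n)
x = np.empty(n)
x[0] = drift / leak  # start at the sub-threshold equilibrium mean (0.1)
for i in range(1, n):
    x[i] = x[i - 1] + dt * (drift - leak * x[i - 1]) + noise[i]

# A 'self-initiated movement' = any upward threshold crossing.
crossings = np.flatnonzero((x[:-1] < threshold) & (x[1:] >= threshold)) + 1

# Back-average the 2 s of activity preceding each crossing, keeping
# crossings at least one window apart (a crude refractory period).
pre = 2000
onsets, last = [], -pre
for i in crossings:
    if i >= pre and i - last >= pre:
        onsets.append(i)
        last = i
avg = np.mean([x[i - pre:i] for i in onsets], axis=0)

# The average rises smoothly towards threshold: a BP-like ramp produced
# purely by averaging noise time-locked to its own chance crossings.
print(f"{len(onsets)} 'movements'; window start {avg[0]:.3f} -> end {avg[-1]:.3f}")
```

Nothing in this system ramps up by design, yet the back-average shows a smooth build-up before every 'movement', which is precisely why a BP-shaped average cannot, on its own, settle the question of when a decision was made.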
If so, then it may be true to say that the decision will become apparent when the systems subserving it reach their conclusive judgement and provide the 'conscious agent' with the answer that allows 'it' to decide at the last moment.
Free will, if we must use such a vague term in this sense, perhaps represents a culmination and a finite time point in an act that has heritage in the build-up (to which the conscious agent is not fully granted access) and a 'tipping point' of opportunity to exhibit itself. Perhaps then that moment of time is not entirely 'available' until this build-up of sufficient momentum is achieved, and the window of 'free will' is somewhat granted at the behest of the unconscious committee actually providing the energy and drive to empower the agent to finally 'decide'. Finally I might contend, hypothesise, and seek to confirm further scientifically, that a moment of action perhaps coincides with a reconfiguration of brain networks that have a conscious and 'volitional' element yet represent a transitional phase between 'self-referent' processing and task-focused (and possibly motor-orientated) processing. And as such it manifests as a sort of 'meta-conscious' state of being (transitional, enactive, beyond conscious). By this I hypothesise that, in the given example of leaping forth into the void, the moment of commitment and 'decision' coincides with a quietening down of self-reflective processing, thus de-amplifying the 'voice' of conscious awareness in the sense of an introspective dialogue concerning choice and decision-making. In its place, a motivated, action-based network of processing puts into practice strongly focused schemas that are directed at performing the task, removing potential interference from those reflective structures which might seek to undermine the smooth, fluid and automatic patterns of behaviour and compromise performance. And at this transition (and immersion within the task-focused state) the 'conscious agent' is somewhat removed from the equation, giving over to the effortless automatic system that allows exemplary performance to take place.
Dietrich, A. (2006). Transient hypofrontality as a mechanism for the psychological effects of exercise. Psychiatry Research 145(1), 79-83.
Kornhuber, H. H. & Deecke, L. (1964). Hirnpotentialänderungen beim Menschen vor und nach Willkürbewegungen, dargestellt mit Magnetbandspeicherung und Rückwärtsanalyse [Changes in brain potentials in humans before and after voluntary movements, demonstrated with magnetic tape storage and backward analysis]. Pflügers Archiv 281, 52.
Libet, B., Gleason, C. A., Wright, E. W. & Pearl, D. K. (1983). Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential): the unconscious initiation of a freely voluntary act. Brain 106, 623-642.
Nann, M., Cohen, G., Deecke, L. & Soekadar, S. R. (2019). To jump or not to jump - the Bereitschaftspotential required to jump into a 192-meter abyss. Scientific Reports 9, 2243.
Schurger, A., Sitt, J. D. & Dehaene, S. (2012). An accumulator model for spontaneous neural activity prior to self-initiated movement. PNAS 109(42), E2904-E2913.
The Shawshank Redemption (‘Hope springs eternal’) has been a very poignant influence on my attitude to life. It is a beautiful story that melds key elements of Stephen King’s ability to craft mesmerising prose (in the novella form) with a heartfelt and inspiring message behind it. It was brought to the screen in near perfect translation and cinematic elaboration by Frank Darabont in 1994 (nominated for several accolades, including Best Picture, although it did not win). At the story’s heart is a character who bides his time, takes what life throws (unpleasantly) at him, and holds true to a core maxim predicated on optimism and singular belief that the future will eventually yield reward. Through perseverance.
The modern propensity towards 'positive thinking' rests upon a façade: ‘if you believe, it will happen’; ‘hard work pays off’. The millennial prerogative that whatever one wants, one shall have. A well-known Stanford psychology study (Mischel et al., 1972) revealed the ‘marshmallow effect’, which has some bearing on the notion of what one might expect is due (what is it with Stanford - though perhaps their Prison Experiment has some unconscious bearing on this current blog piece with respect to compliance - or ‘not giving in’ - but I digress).
In said experiment, children were given the option of immediate or delayed gratification by means of a marshmallow or cookie: two rewards if willing to wait. The gist being the revelation that personality traits concomitant with immediate gratification imply impulsivity and lack of self-control, with perhaps a later detrimental bearing on life satisfaction and success. Conversely, those children displaying patience and the capacity to wait awhile for the reward may display traits later in life of greater competency and self-control. And perhaps greater success in chosen endeavours? (somewhat extrapolating here). Brain imaging has linked areas of prefrontal cortex with the control of impulse and the capacity to delay gratification. This fits with themes espoused in other blogs with respect to the capacity to ‘re-route’ brain networks towards successful accomplishment of tasks, away from those involved in self-absorption or distraction (and the consequent reduced capacity for performance on cognitive tasks and goal-directed behaviour).
Returning to Shawshank Penitentiary: the protagonist, Andy Dufresne, exhibits a pronounced calm and withdrawn exterior throughout the tribulations he experiences, to the marvel of other inmates and friends. The ‘shock’ twist (spoiler warning) is that after literally decades of incarceration, in which other institutionalised associates have ‘accepted their lot’, one day Andy simply disappears. He has been tunnelling out for 20+ years. The surprise rests in how he could possibly have done it. But the key is that he began with small increments of activity, tempered with some good fortune (discovering the wall mortar to be somewhat less robust than one might expect). He began to scrape away at the concrete. He increased the scale of his efforts, and his tools, again incrementally, unnoticed by others as the scale magnified.
He endured his time inside, maintaining composure and resolve against adversity, keeping his cards close to his chest. Even his closest friends did not suspect. He had bumps in the road in which his resolve, like the prison walls themselves, threatened to crumble. But this adversity tested him and reinforced that resolve, now tempered with experience and the burgeoning skill and confidence it imbued. And eventually, when the time was nigh, his grand plan was put into execution. With an attitude of now or never, no turning back, he undertook the riskiest of actions and set his escape plan in motion. Literally crawling through tunnels of excrement in his bid for freedom.
The point here, if not obvious, is that one’s goal in life is not something one is bequeathed as a birthright and which one is pre-destined to achieve at the drop of a hat; nor will it magically come to pass simply by believing it is so. Rather, one has to fixate on that goal as a possibly distant, but realisable, thing. And then almost give over any sense of the timeline on which it ought to be achieved. For it is the path and the work that require focus as one takes steps towards that ‘endpoint’. Perhaps it is like giving one’s ‘unconscious’ mind a remit that it should work towards this defined goal, and leaving it to its devices to find its way round the obstacles that inevitably, and productively, occur. (For those obstacles provide the impetus for building the resilience and the adaptive skills needed to achieve the goal.)
So when the prison walls close in at the fearful time of lights out, and sounds of torment echo and rattle round the bars, this is the time to lie back, compose oneself and consolidate one’s mindset towards the small but significant tasks required to pave the way towards escape and freedom. The torment is without, not within. Inside is focus, is imagination, is the capacity to solve problems and find motivation. Then pick up your rock hammer, slip out from under the sheet, check the coast is clear, and start to scrape quietly away at the widening crack. Smiling to yourself and inwardly whistling the theme to that Steve McQueen film…
[Just make sure you are prepared for (and relish the opportunity) to crawl through a tunnel of shit to get to the light at the end of the tunnel!!!]
Darabont, F. (Dir.) (1994). The Shawshank Redemption. Castle Rock Entertainment. (Movie) https://en.wikipedia.org/wiki/The_Shawshank_Redemption
King, S (1982). Rita Hayworth and the Shawshank Redemption. In Different Seasons. Viking Press
Mischel, W., Ebbesen, E. B., & Raskoff Zeiss, A. (1972). Cognitive and attentional mechanisms in delay of gratification. Journal of Personality and Social Psychology, 21(2), 204-218.
The science of cognition and perception in context
This is where I elaborate upon brain science relating to cognitive functioning dependent on environmental context.