
24 July 2022

‘Philosophical Zombies’: A Thought Experiment

Zombies are essentially machines that appear human.

By Keith Tidman
 

Some philosophers have used the notion of ‘philosophical zombies’ in a bid to make a point about the source and nature of human consciousness. Have they been on the right track?

 

One thought experiment begins by hypothesising the existence of zombies who are indistinguishable in appearance and behaviour from ordinary people. These zombies match our comportment, seeming to think, know, understand, believe, and communicate just as we do. Or, at least, they appear to. You and a zombie could not tell each other apart. 

 

Except, there is one important difference: philosophical zombies lack conscious experience. Which means that if, for example, a zombie were to drop an anvil on its foot, it might give itself away by not reacting at all or, perhaps, by reacting very differently than it normally would. It would not have the inward, natural, individualised experience of actual pain the way the rest of us would. On the other hand, a smarter kind of zombie might know what humans would do in such situations and pretend to recoil and curse as if in extreme pain. 

 

Accordingly, philosophical zombies lead us to what’s called the ‘hard problem of consciousness’: whether or not each human has individually unique feelings while experiencing things, each person producing his or her own reactions to stimuli, unlike everyone else’s – such as the taste of a tart orange, the chilliness of snow, the discomfort of grit in the eye, the awe in gazing at ancient relics, the warmth of holding a squirming puppy, and so on.

 

Likewise, they lead us to wonder whether or not there are experiences (reactions, if you will) that humans subjectively feel in authentic ways that are the product of physical processes, such as neuronal and synaptic activity as regions of the brain fire up. Experiences beyond those that zombies only copycat, or are conditioned or programmed to feign, the way automatons might, lacking true self-awareness. If there are, then there remains a commonsense difference between ‘philosophical zombies’ and us.

 

Zombie thought experiments have been used by some to argue against the notion called ‘physicalism’, whereby human consciousness and subjective experience are considered to be based in the material activity of the brain. That is, an understanding of reality, revealed by philosophers of mind and neuroscientists who are jointly peeling back how the brain works as it experiences, imagines, ponders, assesses, and decides.

 

The key objection to such ‘physicalism’ is the contention that mind and body are separable properties, the venerable philosophical theory also known as dualism, and that, by extrapolation, the brain is not (cannot be) the source of conscious experience. Instead, some argue that conscious experience — like the pain from the dropped anvil or joy in response to the bright yellow of fields of sunflowers — is separate from brain function, even though the laws of nature strongly suggest such brain function is the root of everyone's subjective experience.

 

But does the ‘philosophical zombie’ argument against brain function being the seed of conscious experience hold up?

 

After all, the argument that philosophical zombies, whose clever posing makes us assume there are no differences between them and us, might really exist seems problematic. Surely, there is insufficient evidence that the brain does not give rise to consciousness and individual experience. Yet, many people who argue against a material basis for experience, residing in brain function, rest their case on the notion that philosophical zombies are at least conceivable.

 

They argue that ‘conceivability’ is enough to make zombies possible. However, such arguments neglect that being conceivable is really just another expression for something ‘being imaginable’. Isn’t that the reason young children look under their beds at night? But is being imaginable actually enough to establish something’s real-world existence? How many children actually come face to face with monsters in their closets? There are innumerable other examples, as we’ll get to momentarily, illustrating that all sorts of irrational, unreal things are imaginable, in the same sense that they’re conceivable, yet surely with no sound basis in reality.

 

Proponents of conceivability might be said to stumble into a dilemma: that of logical incoherence. Why so? Because, on the same supposedly logical framework, it is logically imaginable that garden gnomes come to life at night, or that fire-breathing dragons live on an as-yet-undiscovered island, or that the channels scoured on the surface of Mars are signs of an intelligent alien civilisation!

 

Such extraordinary notions are imaginable, but at the same time implausible, even nonsensical. Imagining something doesn’t make it so. These ‘netherworld notions’ simply don’t hold up. Philosophical zombies arguably fall into this group. 

 

Moreover, zombies wouldn’t (couldn’t) have free will; that is, free will and zombiism conflict with one another. Yes, zombies might fabricate self-awareness and free will convincingly enough to trick a casual, uncritical observer — but this would be a sham, insufficient to satisfy the conditions for true free will.

 

The fact remains that the authentic experience of, for example, peacefully listening to gentle waves splashing ashore cannot happen if the complex functionality of the brain were not to exist. A blob that only looks like a brain (as in the case for philosophical zombies) would not be the equivalent of a human brain if, critically, those functions were missing.


It’s those brain functions, contrary to theories like dualism that assert the separation of mind from body, that make consciousness and individualised sentience possible. The emergence of mind from brain activity is the likeliest explanation of experienced reality. Contemporary philosophers of mind and neuroscientists would agree on this, even as they continue to work jointly on figuring out the details of how all that happens.


The idea of philosophical zombies existing among us thus collapses. Yet, very similar questions of mind, consciousness, sentience, experience, and personhood could easily pop up again. Likely not as recycled philosophical zombies, but instead, as new issues arising longer term as developments in artificial intelligence begin to match and perhaps eventually exceed the vast array of abilities of human intelligence.



 

28 November 2021

Whose Reality Is It Anyway?

Thomas Nagel wondered if the world a bat perceives is fundamentally different to our own

By Keith Tidman

Do we experience the world as it objectively is, or only as an approximation shaped by the effects of information passing through our mind’s interpretative sieve? Does our individual reality align with anyone else’s, or is it exclusively ours, dwelling like a single point amid other people’s experienced realities?

 

We are swayed by our senses, whether through direct sensory observation of the world around us, or indirectly as we use apparatuses to observe, record, measure, and decipher. Either way, our minds filter the information absorbed, funneling and fashioning it into experiences that form a reality which in turn is affected by sundry factors. These influences include our life experiences and interpretations, our mental models of the world, how we sort and assimilate ideas, our unconscious predilections, our imaginings and intuitions untethered to particular facts, and our expectations of outcomes drawn from encounters with the world.

 

We believe that what serves as the lifeline in this modeling of personal reality is the presence of agency and ‘free will’. The tendency is to regard free will as orthodoxy. We assume we can freely reconsider and alter that reality, to account for new experiences and information that we mold through reason. To a point, that’s right; but to one degree or another we grapple with biases, some of which are hard-wired or at least deeply entrenched, that predispose us to particular choices and behaviours. So, how freely we can actually surmount those preconceptions and predispositions is problematic, in turn bearing on the limits of how we perceive the world.


The situation is complicated further by the vigorous debate over free will versus how much of what happens does so deterministically, where life's course is set by forces beyond our control. Altering the models of reality to which we clutch is hard; resistance to change is tempting. We shun hints of doubt in upholding our individual (subjective) representations of reality. The obscurity and inaccessibility of any single, universally accepted objective world exacerbates the circumstances. We realise, though, that subjective reality is not an illusion to be casually dismissed at our convenience, but is lastingly tangible.


In 1974, the American philosopher Thomas Nagel developed a classic metaphor to address these issues of conscious experience. He proposed that some knowledge is limited to what we acquire through our subjective experiences, differentiating those from underlying objective facts. To show how, Nagel turned to a bat’s conscious use of echoed sounds, the equivalent of our vision, in perceiving its surroundings for navigation. He argued that although we might be able to imagine some aspects of what it’s like to be a bat, like hanging upside down or flying, we cannot truly know what a bat experiences as physical reality. The bat’s experiences are its alone, and for the same reasons of filtering and interpretation, are likewise distinguishable from objective reality.

 

Sensory experience, however, does more than just filter objective reality. The very act of human observation (in particular, measurement) can also create reality. What do I mean? Repeated studies have shown that a potential object remains in what’s called ‘superposition’, or a state of suspension. What stays in superposition is an abstract mathematical description, called a ‘wavefunction’, of all the possible ways an object can become real. On this view, there is no distinction between the wavefunction and the physical thing it describes.


While in superposition, the object can be in any number of places until measurement causes the wavefunction to ‘collapse’, resulting in the object being in a single location. Observation thus has implications for the nature of reality and the role of consciousness in bringing that about. According to the quantum physicist John Wheeler, ‘No ... property is a property until it is observed’, a notion presaged three centuries earlier by the philosopher George Berkeley, who declared ‘Esse est percipi’ – to be, is to be perceived.
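
By way of a toy illustration only (the amplitudes and locations below are assumed for the sketch, not taken from any experiment), the ‘collapse’ just described can be mimicked in a few lines of Python: a make-believe wavefunction assigns amplitudes to a handful of candidate locations, and a simulated measurement selects a single definite location with probability proportional to the squared amplitude (the Born rule).

import random

# Toy, unnormalised amplitudes for five candidate locations (illustrative values only).
locations = ["A", "B", "C", "D", "E"]
amplitudes = [0.1, 0.5, 0.7, 0.4, 0.3]

# Born rule: the probability of finding the object at a location is
# proportional to the squared amplitude.
weights = [a * a for a in amplitudes]
total = sum(weights)
probabilities = [w / total for w in weights]

# Before 'measurement': the object is described only by the whole distribution.
for loc, p in zip(locations, probabilities):
    print(f"location {loc}: probability {p:.2f}")

# 'Measurement': the superposition collapses to one definite location.
outcome = random.choices(locations, weights=probabilities, k=1)[0]
print("measured location:", outcome)

Run repeatedly, the sketch lands on different locations from one run to the next, while the distribution governing those outcomes never changes, a crude stand-in for the distinction between the wavefunction and any single observed result.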


Evidence, furthermore, that experienced reality results from a subjective filtering of objective reality comes from how our minds react to externalities. For example, two friends are out for a stroll and look up at the summer sky. Do their individual perceptions of the sky’s ‘blueness’ precisely match each other’s or anyone else’s, or do they experience blueness differently? If those companions then wade into a lake, do their perceptions of ‘chilliness’ exactly match? How about their experiences of ‘roughness’ upon rubbing their hand on the craggy bark of a tree? These are interpretations of objective reality by the senses and the mind.


Despite the physiology of the friends’ brains and physical senses being alike, their filtered experiences nonetheless differ in both small and big ways. All this, even though the objective physical attributes of the sky, the lake, and the tree bark, independent of the mind, are the same for both companions. (Such as in the case of the wavelength of visible light that accounted for the blueness being interpretatively, subjectively perceived by the senses and mind.) Notwithstanding the deceptive simplicity of these examples, they are telling of how our minds are attuned to processing sensory input, thereby creating subjective realities that might resemble yet not match other people’s, and importantly don’t directly merge with underlying objective reality.

  

In this paradigm of experience, there are untold parsed and sieved realities: our own and everyone else’s. That’s not to say objective reality, independent of our mental parsing, is myth. It exists, at least as backdrop. That is, both objective and subjective reality are credible in their respective ways, as sides of the whole. It’s just that our minds’ unavoidable filtering leads to an altered picture of objective reality. Objective reality thus stays out of reach. The result is our being left with the personal reality our minds are capable of, a reality nonetheless easily but mistakenly conflated with objective reality.

 

That’s why our models of the underlying objective reality remain approximations, in states of flux. Because when it comes to understanding the holy grail of objective reality, our search is inspired by the belief that close is never close enough. We want more. Humankind’s curiosity strives to inch closer and closer to objective reality, however unending that tireless pursuit will likely prove.

 

20 September 2020

‘What Are We?’ Self-reflective Consciousness, Cooperation, and the Agents of Our Future Evolution

Cueva de las Manos, Río Pinturas

Posted by John Hands 

‘What are we?’ This is arguably the fundamental philosophical question. Indeed, ‘What are we?’ along with ‘Where do we come from?’ and ‘Why do we exist?’ are questions that humans have been asking for at least 25,000 years. During all of this time we have sought answers from the supernatural. About 3,000 years ago, however, we began to seek answers through philosophical reasoning and insight. Then, around 150 years ago, we began to seek answers through science: through systematic, preferably measurable, observation or experiment. 

As a science graduate and former tutor in physics for Britain's ‘Open University’*, I wanted to find out what answers science currently gives. But I couldn’t find any book that did so. There are two reasons for this.

  • First, the exponential increase in empirical data generated by rapid developments in technology had resulted in the branching of science into increasingly narrow, specialized fields. I wanted to step back from the focus of one leaf on one branch and see what the whole evolutionary tree shows us. 
  • Second, most science books advocate a particular theory, and often present it as fact. But scientific explanations change as new data is obtained and new thinking develops. 

And so I decided to write ‘the book that hadn’t been written’: an impartial evaluation of the current theories that explain how we evolved, not just from the first life on Earth, but where that came from, right back to the primordial matter and energy at the beginning of the universe of which we ultimately consist. I called it COSMOSAPIENS Human Evolution from the Origin of the Universe* and in the event it took more than 10 years to research and write. What’s more, the conclusions I reached surprised me. I had assumed that the Big Bang was well-established science. But the more I investigated the more I discovered that the Big Bang Theory had been contradicted by observational evidence stretching back 60 years. Cosmologists had continually changed this theory as more sophisticated observations and experiments produced ever more contradictions with the theory.

The latest theory is called the Concordance Model. It might more accurately be described as ‘The Inflationary-before-or-after-the-Hot Big Bang-unknown-27% Dark Matter-unknown-68% Dark Energy model’. Its central axiom, that the universe inflated at a trillion trillion trillion times the speed of light in a trillion trillion trillionth of a second, is untestable. Hence it is not scientific.

The problem arises because these cosmological theories are mathematical models. They are simplified solutions of Einstein’s field equations of general relativity applied to the universe. They are based on assumptions that the latest observations show to be invalid. That’s one surprising conclusion I found. 

Another surprise came when I examined the orthodox theory for the last 65 years in the UK and the USA of how and why life on Earth evolved into so many different species. It is known as NeoDarwinism, and was popularised by Richard Dawkins in his bestselling book, The Selfish Gene, which says that biological evolution is caused by genes selfishly competing with each other to survive and replicate. 

NeoDarwinism is based on the fallacy of ascribing intention to an acid, deoxyribonucleic acid, of which genes are composed. Dawkins admits that this language is sloppy and says he could express it in scientific terms. But I’ve read the book twice and he never does manage to do this. Moreover, the theory is contradicted by substantial behavioural, genetic, and genomic evidence. When confronted with such evidence, instead of modifying the theory to take account of it, as a scientist should do, Dawkins lamely says “genes must have misfired”. 

The fact is, he couldn’t modify the theory because the evidence shows that Darwinian competition causes not the evolution of species but the destruction of species. It is cooperation, not competition, that has caused the evolution of successively more complex species.

Today, most biologists assert that we differ only in degree from other animals. I think that this too is wrong. What marked our emergence as a distinct species some 25,000 years ago wasn’t the size or shape of our skulls, or that we walked upright, or that we lacked bodily hair, or the genes we possess. These are differences in degree from other animals. What made us unique was reflective consciousness.

Consciousness is a characteristic of a living thing as distinct from an inanimate thing like a rock. It is possessed in rudimentary form by the simplest species like bacteria. In the evolutionary lineage leading to humans, consciousness increased with increasing neural complexity and centration in the brain until, with humans, it became conscious of itself. We are the only species that not only knows but also knows that it knows. We reflect on ourselves and our place in the cosmos. We ask questions like: What are we? Where did we come from? Why do we exist? 

This self-reflective consciousness has transformed existing abilities and generated new ones. It has transformed comprehension, learning, invention, and communication, which all other animals have in varying degrees. It has generated new abilities, like imagination, insight, abstraction, written language, belief, and morality that no other animal has. Its possession marks a difference in kind, not merely degree, from other animals, just as there is a difference in kind between inanimate matter, like a rock, and living things, like bacteria and animals. 

Moreover, Homo sapiens is the only known species that is still evolving. Our evolution is not morphological—physical characteristics—or genetic, but noetic, meaning ‘relating to mental activity’. It is an evolution of the mind, and has been occurring in three overlapping phases: primeval, philosophical, and scientific. 

Primeval thinking was dominated by the foreknowledge of death and the need to survive. Accordingly, imagination gave rise to superstition, which is a belief that usually arises from a lack of understanding of natural phenomena or fear of the unknown. 

It is evidenced by legends and myths, ranging from the beliefs in animism, totemism, and ancestor worship of hunter-gatherers, to polytheism in city-states in which the pantheon of gods reflected the social hierarchy of their societies, and finally to a monotheism in which other gods were demoted to angels or subsumed into one God, reflecting the absolute power of king or emperor. 

The instinct for competition and aggression, which had been ingrained over millions of years of prehuman ancestry, remained a powerful characteristic of humans, interacting with, and dominating, reflective consciousness. 

The second phase of reflective consciousness, philosophical thinking, emerged roughly 1500 to 500 BCE. It was characterised by humans going beyond superstition to use reasoning and insight, often after disciplined meditation, to answer questions. In all cultures it produced the ethical view that we should treat all others, including our enemies, as ourselves. This ran counter to the predominant instinct of aggression and competition. 

The third phase, scientific thinking, gradually emerged from natural philosophy around 1600 CE. It branched into the physical sciences, the life sciences, and medical sciences. 

Physics, the fundamental science, then started to converge, rapidly so over the last 65 years, towards a single theory that describes all the interactions between all forms of matter. According to this view, all physical phenomena are lower energy manifestations of a single energy at the beginning of the universe. This is similar in very many respects to the insight of philosophers of all cultures that there is an underlying energy in the cosmos that gives rise to all matter and energy. 

During this period, reflective consciousness has produced an increasing convergence of humankind. The development of technology has led to globalisation, both physically and electronically, in trade, science, education, politics (United Nations), and altruistic activities such as UNICEF and Médecins Sans Frontières. It has also produced a ‘complexification’ of human societies, a reduction in aggression, an increase in cooperation, and the ability to determine humankind’s future. 

This whole process of human evolution has been accelerating. Primeval thinking emerges roughly 25,000 years ago, philosophical thinking emerges about 3,000 years ago, scientific thinking emerges some 400 years ago, while convergent thinking begins barely 65 years ago. 

I think that when we examine the evidence of our evolution from primordial matter and energy at the beginning of the universe, we see a consistent pattern. This shows that we humans are the unfinished product of an accelerating cosmic evolutionary process characterised by cooperation, increasing complexity and convergence, and that – uniquely as far we know – we are the self-reflective agents of our future evolution. 


 

*For further details and reviews of John’s new book, see https://johnhands.com 

Editor's note. The UK’s ‘Open University’ differs from other universities through its policy of open admissions and its emphasis on distance and online learning programs.

28 June 2020

The Afterlife: What Do We Imagine?

Posted by Keith Tidman


‘The real question of life after death isn’t whether 
or not it exists, but even if it does, what 
problem this really solves’

— Wittgenstein, Tractatus Logico-Philosophicus, 1921

Our mortality, and how we might transcend it, has been one of humanity’s central preoccupations since prehistory. One much-pondered possibility is that of an afterlife. This would potentially serve a variety of purposes: to buttress fraught quests for life’s meaning and purpose; to dull unpleasant visions of what happens to us physically upon death; to switch out fear of the void of nothingness with hope and expectation; and, to the point here, to claim continuity of existence through a mysterious hereafter thought to defy and supplant corporeal mortality.

And so, the afterlife, in one form or another, has continued to garner considerable support to the present. An Ipsos/Reuters poll in 2011 of the populations of twenty-three countries found that a little over half believe in an afterlife, with a wide range of outcomes correlated with how faith-based or secular a country is considered. The Pew Center’s Religious Landscape Study polling found, in 2014, that almost three-fourths of people seem to believe in heaven and more than half said that they believed in hell. The findings cut across most religions. Separately, research has found that some one-third of atheists and agnostics believe in an afterlife — one imagined to include ‘some sort of conscious existence’, as the survey put it. (This was the Austin Institute for the Study of Family and Culture, 2014.) 

Other research has corroborated these survey results. Researchers based at Britain's Oxford University in 2011 examined forty related studies conducted over the course of three years by a range of social-science and other specialists (including anthropologists, psychologists, philosophers, and theologians) in twenty countries and different cultures. The studies revealed an instinctive predisposition among people to an afterlife — whether of a soul or a spirit or just an aspect of the mind that continues after bodily death.

My aim here is not to exhaustively review all possible variants of an afterlife subscribed to around the world, like reincarnation — an impracticality for the essay. However, many beliefs in a spiritual afterlife, or continuation of consciousness, point to the concept of dualism, entailing a separation of mind and body. As René Descartes explained back in the 17th century:
‘There is a great difference between the mind and the body, inasmuch as the body is by its very nature always divisible, whereas the mind is clearly indivisible. For when I consider the mind, or myself insofar as I am only a thinking thing, I cannot distinguish any parts within myself. . . . By contrast, there is no corporeal or extended thing that I can think of which in my thought I cannot easily divide into parts. . . . This one argument would be enough to show me that the mind is completely different than the body’ (Sixth Meditation, 1641).
However, in the context of modern research, I believe that one may reasonably ask the following: Are the mind and body really two completely different things? Or are the mind and the body indistinct — the mind reducible to the brain, where the brain and mind are integral, inseparable, and necessitating each other? Mounting evidence points to consciousness and the mind as the product of neurophysiological activity. As to what’s going on when people think and experience, many neuroscientists favour the notion that the mind — consciousness and thought — is entirely reducible to brain activity, a concept sometimes variously referred to as physicalism, materialism, or monism. But the idea is that, in short, for every ‘mind state’ there is a corresponding ‘brain state’, a theory for which evidence is growing apace.

The mind and brain are today often considered, therefore, not separate substances. They are viewed as functionally indistinguishable parts of the whole. There seems, consequently, not to be broad conviction in mind-body dualism. Contrary to Cartesian dualism, the brain, from which thought comes, is physically divisible according to hemispheres, regions, and lobes — the brain’s architecture; by extension, the mind is likewise divisible — the mind’s architecture. What happens to the brain physically (from medical or other tangible influences) affects the mind. Consciousness arises from the entirety of the brain. A brain — a consciousness — that remarkably is conscious of itself, demonstrably curious and driven to contemplate its origins, its future, its purpose, and its place in the universe.

The contemporary American neuroscientist, Michael Gazzaniga, has described the dynamics of such consciousness in this manner:
‘It is as if our mind is a bubbling pot of water. . . . The top bubble ultimately bursts into an idea, only to be replaced by more bubbles. The surface is forever energized with activity, endless activity, until the bubbles go to sleep. The arrow of time stitches it all together as each bubble comes up for its moment. Consider that maybe consciousness can be understood only as the brain’s bubbles, each with its own hardware to close the gap, getting its moment’. (The Consciousness Instinct, 2018)
Moreover, an immaterial mind and a material world (such as the brain in the body), as dualism typically frames reality, would be incapable of acting upon each other: what’s been dubbed the ‘interaction problem’. Therefore the physicalist model — strengthened by research in fields like neurophysiology, which quicken to acquire ever-deeper learning — has, arguably, superseded the dualist model.

People’s understanding that, of course, they will die one day has spurred the search for a spiritual continuation of earthbound life. Apprehension motivates. The yearning for purpose motivates. People have thus sought evidence, empirical or faith-based or other, to underprop their hope for otherworldly survival. However, the modern account of the material, naturalistic basis of the mind may prove an injurious blow to notions of an out-of-body afterlife. After all, if we are our bodies and our bodies are us, death must end hope for survival of the mind. As David Hume graphically described our circumstances in Of the Immortality of the Soul (1755), we face our ‘common dissolution in death’. That some people are nonetheless prone to evoke dualistic spectral spirits — stretching from disembodied consciousness to immortal souls — as a pretext for desirously thwarting the interruption of life doesn’t change the finality of existence. 

And so, my conclusion is that perhaps we’d be better served to find ingredients for an ‘afterlife’ in what we leave by way of influences, however ordinary and humble, upon others’ welfare. That is, a legacy recollected by those who live on beyond us, in its ideal a benevolent stamp upon the present and the future. This earthbound, palpable notion of what survives us goes to answer Wittgenstein’s challenge we started with, regarding ‘what problem’ an afterlife ‘solves’, for in this sense it solves the riddle of what, realistically, anyone might hope for.

09 February 2020

What Is It to Be Human?

Hello, world!
Posted by Keith Tidman

Consciousness is the mental anchor to which we attach our larger sense of reality.

We are conscious of ourselves — our minds pondering themselves in a curiously human manner — as well as being intimately conscious of other people, other species, and everything around us, near and remote.

We’re also aware that in reflecting upon ourselves and upon our surroundings, we process experiences absorbed through our senses — even if filtered and imagined imperfectly. This intrinsically empirical nature of our being is core, nourishing our experience of being human. It is our cue: to think about thinking. To ponder the past, present, and future. To deliberate upon reality. And to wonder — leaving no stone unturned: from the littlest (subatomic particles) to the cosmic whole. To inspire and be inspired. To intuit. To poke into the possible beginning, middle, and end of the cosmos. To reflect on whether we behave freely or predeterminedly. To conceptualise and pick from alternative futures. To learn from being wrong as well as from being right. To contemplate our mortality. And to tease out the possibility of purpose from it all.

Perception, memory, interpretation, imagination, emotion, logic, and reason are among our many tools for extracting order out of disorder, to quell chaos. These and other properties, collectively essential to distinguishing humanity, enable us to model reality, as best we can.

There is perhaps no more fundamental investigation than this into consciousness touching upon what it means to be human.

To translate the world in which we’re thoroughly immersed. To use our rational minds as the gateway to that understanding — to grasp the dimensions of reality. For humans, the transmission of thought, through the representational symbols of language, gestures, and expressions — representative cognition — provides a tool for chiseling out our place in the world. In the twentieth century, Ludwig Wittgenstein laconically but pointedly framed the germaneness of these ideas:
‘The limits of my language mean the limits of my world’.
Crucially, Wittgenstein grounds language as a tool for communication in shared experiences. 

Language provides not only an opening through which to peer into human nature but also combines  with other cognitive attributes, fueling and informing what we believe and know. Well, at least what we believe we know. The power of language — paradoxically both revered and feared, yet imperative to our success — stems from its channeling human instincts: fundamentally, what we think we need and want.

Language, developed and learned by humankind to an extraordinary, singular level of complexity as a manifestation of human thought, emanates from a form of social learning. That is, we experiment with language in utilitarian fashion, for best effect; use it to construct and contemplate what-ifs, venturing into the concrete and abstract to unspool reality; and observe, interact with, and learn from each other in associative manner. Accumulative adaptation and innovation. It’s how humanity has progressed — sometimes incrementally, sometimes by great bounds; sometimes as individuals, sometimes as elaborate networks. Calibrating and recalibrating along the way. Accomplished, deceptively simply, by humans emitting sounds and scribbling streams of symbols to drive progress — in a manner that makes us unique.

Language — sophisticated, nuanced, and elastic — enables us to meaningfully absorb what our brains take in. Language helps us to decode and make sense of the world, and recode the information for imaginatively different purposes and gain. To interpret and reinterpret the assembly of information in order to shape the mind’s new perspectives on what’s real — well, at least the glowing embers of what’s real — in ways that may be shared to benefit humankind on a global, community, and individual level. Synaptic-like, social connections of which we are an integral part.

Fittingly, we see ourselves simultaneously as points connected to others, while also as distinct identities for which language proves essential in tangibly describing how we self-identify. Human nature is such that we have individual and communal stakes. The larger scaffolding is the singularly different cultures where we dwell, find our place, and seek meaning — a dynamically frothing environment, where we both react to and shape culture, with its assortment of both durably lasting and other times shifting norms.

12 November 2017

Hearts and Minds: The Mystery of Consciousness

By Mary Monro

Despite the best efforts of scientists and philosophers over the centuries, no mechanism has been discovered that indicates how consciousness emerges in the brain. Descartes famously thought the soul resided in the pineal gland - but that was mainly because he couldn't think of any other purpose for it (it actually produces melatonin and guides sleeping patterns). But, 400 years on, perhaps we still need to think again about where consciousness might reside.
In recent years there has been a surge of interest in the gut brain, with its hundred million neurons and its freight of microbes, which influences every aspect of our being including mood and memory. If the gut might now be considered a possible source of consciousness, what about other candidates?

After all, “Primary consciousness arises when cognitive processes are accompanied by perceptual, sensory and emotional experience” as Fritjof Capra and Pier Luigi Luisi put it in their book The Systems View of Life: A Unifying Vision (2014).  Reflective or higher-order consciousness includes self-awareness and anticipation.

There is another intelligent, organising, feeling, planning, responsive, communicating organ inside us – a body-wide-web lining our blood vessels. Vascular endothelium cells (VE for short) line every vessel from the heart to the smallest capillary, reaching into every part of the body. Vascular endothelium is the interface between the blood and the tissues, deciding what goes where through a combination of electrical, kinetic, mechanical and chemical signalling.

Laid out, the VE in a human body would be the size of a rugby pitch yet it weighs only one kilogram. Far from being simply wallpaper, recent research has shown it to be a lead actor in the management of the body, including the brain. It is believed that each of the sixty trillion cells of the VE is unique, each one exquisitely adapted to meet the needs of its immediate environment, whether that is in the deeply oxygen deprived depths of the kidney or the highly oxygenated gas exchange surface of the lung. William Aird, in a scholarly paper in 2007, describes vascular endothelium as 'a powerful organising principle in health and disease'.

The blood-brain barrier (usually abbreviated to BBB) protects the brain from molecules and cells in the blood that might damage neural tissue. The vascular endothelium forms the interface but it was previously thought to be a passive sieve, controlled by neurons. The BBB has now been renamed the ‘neuro-vascular unit’ as it has become clear that neural cells, pericytes (that back the endothelial cells) and the vascular endothelial cells all actively take part in managing this critical barrier. It is not known which of them is in charge.

Other researchers have sought to apply the Turing Test to the VE in the brain – the Turing Test being an evaluation of whether an information processing system is capable of intelligent, autonomous thought. Christopher Moore and Rosa Cao argue that blood is drawn to particular areas of the brain by the VE, in advance of metabolic demand, where it stimulates and modulates neuronal function. So the brain is responder rather than activator. Who is doing the thinking? Is the body-wide-web (including the heart and its assistant the blood) gathering information from the body and the external environment to tell the brain what to do? How does it make decisions? What does this imply for consciousness?

In fact, long ago, Aristotle asserted that the vascular architecture in the embryo functions as a frame or model that shapes the body structure of the growing organism. Recent research bears this out, with the VE instructing and regulating organ differentiation and tissue remodelling, from the embryo to post-natal life.  The VE cells form before there is a heart and it is fluid flow that drives endothelial stem cells to trigger the development of the heart tube, vessels and blood cells. There is no brain, only a neural tube, at this stage.

Recent research has shown that blood vessels can direct the development of nerves or vice versa or they can each develop independently. So, embryologically, there is a case for saying that the VE is a decision making executive.

All this recalls a founding principle of osteopathy – which is that ‘the rule of the artery is supreme’.  This is a poetic, 19th century way of saying that disturbance to blood flow is at the root of disease. In his autobiography, published in 1908, Andrew Still remarks: ‘in the year 1874 I proclaimed that a disturbed artery marked the beginning to an hour and a minute when disease began to sow its seeds of destruction in the human body’.

Now, almost a century and a half on, we find that ‘endothelial activity is crucial to many if not all disease processes’, as K. S. Ramcharan put it in a recent paper entitled ‘The Endotheliome: A New Concept in Vascular Biology’ (published in Thrombosis Research in 2011). All this illustrates the importance of this seemingly humble tissue, upon whose health our mental and physical wellbeing depends. And if this structure acts consciously, then perhaps we should consider the possibility that all living cells act consciously.



*Mary Monro Bsc (Hons) Ost, MSc Paed Ost, FSCCO is an Osteopath, based in Bath, United Kingdom.

10 September 2017

Chaos Theory: And Why It Matters

Posted by Keith Tidman

Computer-generated image demonstrating that the behaviour of dynamical systems is highly sensitive to initial conditions

Future events in a complex, dynamical, nonlinear system are determined by their initial conditions. In such cases, the dependence of events on initial conditions is highly sensitive. That exquisite sensitivity is capable of resulting in dramatically large differences in future outcomes and behaviours, depending on the actual initial conditions and their trajectory over time — how follow-on events nonlinearly cascade and unpredictably branch out along potentially myriad paths. The idea is at the heart of so-called ‘Chaos Theory’.

The effect may show up in a wide range of disciplines, including the natural, environmental, social, medical, and computer sciences (including artificial intelligence), mathematics and modeling, engineering — and philosophy — among others. The implication of sensitivity to initial conditions is that eventual, longer-term outcomes or events are largely unpredictable; however, that is not to say they are random — there’s an important difference. Chaos is not randomness; nor is it disorder*. There is no contradiction or inconsistency between chaos and determinism. Rather, there remains a cause-and-effect — that is, deterministic — relationship between those initial conditions and later events, even after the widening passage of time during which large nonlinear instabilities and disturbances expand exponentially. Effect becomes cause, cause becomes effect, which becomes cause . . . ad infinitum. As Chrysippus, a third-century BC Stoic philosopher, presciently remarked:
‘Everything that happens is followed by something else which depends on it by causal necessity. Likewise, everything that happens is preceded by something with which it is causally connected’.
Accordingly, the dynamical, nonlinear system’s future behaviour is completely determined by its initial conditions, even though the paths of the relationship — which quickly get massively complex via factors such as divergence, repetition, and feedback — may not be traceable. A corollary is that not just the future is unpredictable, but the past — history — also defies complete understanding and reconstruction, given the mind-boggling branching of events occurring over decades, centuries, and millennia. Our lives routinely demonstrate these principles: the long-term effects of initial conditions on complex, dynamical social, economic, ecologic, and pedagogic systems, to cite just a few examples, are likewise subject to chaos and unpredictability.
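
To make the point concrete, here is a minimal sketch in Python (purely illustrative, not drawn from any particular study): the logistic map, a textbook example of a simple deterministic rule, is iterated from two starting values that differ by only one part in a million, and the two trajectories soon bear no resemblance to one another.

# Sensitive dependence on initial conditions in the logistic map x -> r*x*(1 - x).
# The parameter r = 3.9 lies in the map's chaotic regime; all values are illustrative.

def logistic_trajectory(x0, r=3.9, steps=50):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)   # 'true' initial condition
b = logistic_trajectory(0.200001)   # perturbed by one part in a million

for step in (0, 10, 20, 30, 40, 50):
    print(f"step {step:2d}: |difference| = {abs(a[step] - b[step]):.6f}")

Every step of both runs is fixed by the same simple rule; only the starting points differ, yet within a few dozen iterations the gap between them grows as large as the values themselves.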

Chaos theory thus describes the behaviour of systems that are impossible to predict or control. These processes and phenomena have been described by the unique qualities of fractal patterns like the one above — graphically demonstrated, for example, by nerve pathways, sea shells, ferns, crystals, trees, stalagmites, rivers, snow flakes, canyons, lightning, peacocks, clouds, shorelines, and myriad other natural things. Fractal patterns, through their branching and recursive shape (repeated over and over), offer us a graphical, geometric image of chaos. They capture the infinite complexity of not just nature but of complex, nonlinear systems in general — including manmade ones, such as expanding cities and traffic patterns. Even tiny errors in measuring the state of a complex system get mega-amplified, making prediction unreliable, even impossible, in the longer term. In the words of the 20th-century physicist Richard Feynman:
‘Trying to understand the way nature works involves . . . beautiful tightropes of logic on which one has to walk in order not to make a mistake in predicting what will happen’.
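The recursive, self-similar structure that fractals make visible can itself be sketched in a few lines. In the toy ‘chaos game’ below (an illustration assumed here, not an example from the text), a point repeatedly jumps halfway toward a randomly chosen corner of a triangle; despite the randomness of each individual jump, the visited points settle onto the Sierpinski triangle, a simple fractal.

import random

# Corners of a triangle and an arbitrary starting point (illustrative values).
corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
x, y = 0.3, 0.3

points = []
for _ in range(20000):
    cx, cy = random.choice(corners)        # pick a corner at random
    x, y = (x + cx) / 2, (y + cy) / 2      # move halfway toward it
    points.append((x, y))

# Render a coarse text picture of the resulting fractal.
grid = [[" "] * 60 for _ in range(30)]
for px, py in points:
    grid[29 - int(py * 28)][int(px * 59)] = "*"
print("\n".join("".join(row) for row in grid))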
The exquisite sensitivity to initial conditions is metaphorically described as the ‘butterfly effect’. The term was made famous by the mathematician and meteorologist Edward Lorenz in a 1972 paper in which he questioned whether the flapping of a butterfly’s wings in Brazil — an ostensibly minuscule change in initial conditions in space-time — might trigger a tornado in Texas — a massive consequential result stemming from the complexly intervening (unpredictable) sequence of events. As Aristotle foreshadowed, ‘The least initial deviation . . . is multiplied later a thousandfold’.

Lorenz’s work that accidentally led to this understanding and demonstration of chaos theory dated back to the preceding decade. In 1961 (in an era of limited computer power) he was performing a study of weather prediction, employing a computer model for his simulations. In wanting to run his simulation again, he rounded the variables from six to three digits, assuming that such an ever-so-tiny change couldn’t matter to the results — a commonsense expectation at the time. However, to the astonishment of Lorenz, the computer model resulted in weather predictions that radically differed from the first run — all the more so the longer the model ran using the slightly truncated initial conditions. This serendipitous event, though initially garnering little attention among Lorenz's academic peers, eventually ended up setting the stage for chaos theory.
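
A rough re-enactment of that episode, assuming the standard Lorenz equations with textbook parameter values rather than Lorenz's actual 1961 weather model, might look like the sketch below: the same system is integrated twice, once with an initial coordinate specified to six decimal places and once with that coordinate rounded to three, and the two runs eventually part company.

# Illustrative sketch of Lorenz's rounding episode, using the classic Lorenz system.
# A crude Euler integrator is enough to show the divergence; all numbers are assumed.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz equations by one (crude) Euler step."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

run_a = (1.0, 1.0, 1.506127)   # initial condition to six decimal places
run_b = (1.0, 1.0, 1.506)      # the same condition, rounded to three

for step in range(3001):
    if step % 1000 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(run_a, run_b)) ** 0.5
        print(f"t = {step * 0.01:5.2f}   separation = {gap:.6f}")
    run_a = lorenz_step(run_a)
    run_b = lorenz_step(run_b)

At first the two runs track each other closely; by the later time steps they differ by amounts comparable to the size of the attractor itself, much as the radically diverging forecasts described above.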

Lorenz’s contributions came to qualify the classical laws of Nature represented by Isaac Newton, whose Mathematical Principles of Natural Philosophy three hundred-plus years earlier famously laid out a well-ordered, mechanical system — epically reducing the universe to ‘clockwork’ precision and predictability. It provided us, and still does, with a sufficiently workable approximation of the world we live in.

No allowance, in the preceding descriptions, for indeterminacy and unpredictability. That said, an important exception to determinism would require venturing beyond the macroscopic systems of the classical world into the microscopic systems of the quantum mechanical world — where indeterminism (probability) prevails. Today, some people construe the classical string of causes and effects and clockwork-like precision as perhaps pointing to an original cause in the form of some ultimate designer of the universe, or more simply a god — predetermining how the universe’s history is to unfold.

It is not the case, as has been thought too ambitiously by some, that all that humankind needs to do is get cleverer at acquiring deeper understanding, and dismiss any notion of limitations, in order to render everything predictable. Conforming to this reasoning, the 17th century Dutch thinker, Baruch Spinoza, asserted,
‘Nothing in Nature is random. . . . A thing appears random only through the incompleteness of our knowledge’.


*Another example of chaos is brain activity, where a thought and the originating firing of neurons — among the staggering ninety billion neurons, one hundred trillion synapses, and unimaginable alternative pathways — results in the unpredictable, near-infinite sequence of electromechanical transmissions. Such exquisite goings-on may well have implications for consciousness and free will. Since consciousness is the root of self-identity — our own identity, and that of others — it matters that consciousness is simultaneously the product of, and subject to, the nonlinear complexity and unpredictability associated with chaos. The connections are embedded in realism. The saving grace is that cause-and-effect and determinism are, however, still in play in all possible permutations of how individual consciousness and the universe subtly connect.

23 July 2017

Identity: From Theseus's Paradox to the Singularity

Posted by Keith Tidman

A "replica" of an ancient Greek merchant ship based on the remains of a ship that wrecked about 2,500 years ago.  With acknowledgements to Donald Hart Keith.
As the legend goes, Theseus was an imposing Greek hero, who consolidated power and became the mythical king of Athens. Along the way, he awed everyone by leading victorious military campaigns. The Athenians honoured Theseus by displaying his ship in the Athenian harbour. As the decades rolled by, parts of the ship rotted. To preserve the memorial, each time a plank decayed, the Athenians replaced it with a new plank of the same kind of wood. First one plank, then several, then many, then all.

As parts of the ship were replaced, at what point was it no longer the ‘ship of Theseus’? Or did the ship retain its unique (undiminished) identity the entire time, no matter how many planks were replaced? Do the answers to those two questions change if the old planks, which had been warehoused rather than disposed of, were later reassembled into the ship? Which, then, is the legendary ‘ship of Theseus’, deserving of reverence — the ship whose planks had been replaced over the years, or the ship reassembled from the stored rotten planks, or neither? The Greek biographer and philosopher Plutarch elaborated on the paradox in the first century in 'Life of Theseus'.

At the core of these questions about a mythical ship is the matter of ‘identity’. Such as how to define ‘an object’; whether an object is limited to the sum of people’s experience of it; whether an object can in some manner stay the same, regardless of the (macro or micro) changes it undergoes; whether the same rules regarding identity apply to all objects, or if there are exceptions; whether gradual and emergent, rather than immediate, change makes a difference in identity; and so forth.

The seventeenth-century English political philosopher, Thomas Hobbes, weighed in on the conundrum, asking, ‘Which of the two existing ships is numerically one and the same ship as Theseus’s original ship?’ He went on to offer this take on the matter:
‘If some part of the first material has been removed or another part has been added, that ship will be another being, or another body. For, there cannot be a body “the same in number” whose parts are not all the same, because all a body’s parts, taken collectively, are the same as the whole.’
The discussion is not, of course, confined to Theseus’s ship. All physical objects are subject to change over time: suns (stars), trees, houses, cats, rugs, hammers, engines, DNA, the Andromeda galaxy, monuments, icebergs, oceans. As do differently categorised entities, such as societies, institutions, and organizations. And people’s bodies, which change with age of course — but more particularly, whose cells get replaced, in their entirety, roughly every seven years throughout one’s life. Yet, we observe that amidst such change — even radical or wholesale change — the names of things typically don’t change; we don’t start calling them something else. (Hobbes is still Hobbes seven years later, despite cellular replacement.)

The examples abound, as do the issues of identity. It was what led the ancient Greek philosopher Heraclitus to famously question whether, in light of continuous change, one can ‘step into the same river twice’—answering that it’s ‘not the same river and he’s not the same man’. And it’s what led Hobbes, in the case of the human body, to conveniently switch from the ‘same parts’ principle he had applied to Theseus’s ship, saying regarding people, ‘because of the unbroken nature of the flux by which matter decays and is replaced, he is always the same man’. (Or woman. Or child.) By extension of this principle, objects like the sun, though changing — emitting energy through nuclear fusion and undergoing cycles — have what might be called a core ‘persistence’, even as aspects of their form change.
‘If the same substance which thinks be changed,
it can be the same person, or remaining
the same, it can be a different person?’ — John Locke
But people, especially, are self-evidently more than just bodies. They’re also identified by their minds — knowledge, memories, creative instincts, intentions, wants, likes and dislikes, sense of self, sense of others, sense of time, dreams, curiosity, perceptions, imagination, spirituality, hopes, acquisitiveness, relationships, values, and all the rest. This aspect to ‘personal identity’, which John Locke encapsulates under the label ‘consciousness’ (self) and which undergoes continuous change, underpins the identity of a person, even over time — what has been referred to as ‘diachronic’ personal identity. In contrast, the body and mind, at any single moment in time, has been referred to as ‘synchronic’ personal identity. We remain aware of both states — continuous change and single moments — in turns (that is, the mind rapidly switching back and forth, analogous to what happens while supposedly 'multitasking'), depending on the circumstance.

The philosophical context surrounding personal identity — what’s essential and sufficient for personhood and identity — relates to today’s several variants of the so-called ‘singularity’, spurring modern-day paradoxes and thought experiments. For example, the intervention of humans to spur biological evolution — through neuroscience and artificial intelligence — beyond current physical and cognitive limitations is one way to express the ‘singularity’. One might choose to replace organs and other parts of the body — the way the planks of Theseus’s ship were replaced — with non-biological components and to install brain enhancements that make heightened intelligence (even what’s been dubbed ultraintelligence) possible. This unfolding may be continuous, undergoing a so-called phase transition.

The futurologist, Ray Kurzweil, has observed, ‘We're going to become increasingly non-biological’ — attaining a tipping point ‘where the non-biological part dominates and the biological part is not important any more’. The process entails the (re)engineering of descendants, where each milestone of change stretches the natural features of human biology. It’s where the identity conundrum is revisited, with an affirmative nod to the belief that mind and body lend themselves to major enhancement. Since such a process would occur gradually and continuously, rather than just in one fell swoop (momentary), it would fall under the rubric of ‘diachronic’ change. There’s persistence, according to which personhood — the same person — remains despite the incremental change.

In that same manner, some blend of neuroscience, artificial intelligence, heuristics, the biological sciences, and transformative, leading-edge technology, with influences from disciplines like philosophy and the social sciences, may allow a future generation to ‘upload the mind’ — scanning and mapping the mind’s salient features — from a person to another substrate. That other substrate may be biological or a many-orders-of-magnitude-more-powerful (such as quantum) computer. The uploaded mind — ‘whole-brain emulation’ — may preserve, indistinguishably, the consciousness and personal identity of the person from whom the mind came. ‘Captured’, in this term’s most benign sense, from the activities of the brain’s tens of billions of neurons and trillions of synapses.

‘Even in a different body, you’d still be you
if you had the same beliefs, the same worldview,
and the same memories.’ — Daniel Dennett
If the process can happen once, it can happen multiple times, for the same person. In that case, reflecting back on Theseus’s ship and notions of personal identity, which intuitively is the real person? Just the original? Just the first upload? The original and the first upload? The original and all the uploads? None of the uploads? How would ‘obsolescence’ fit in, or not fit in? The terms ‘person’ and ‘identity’ will certainly need to be revised, beyond the definitions already raised by philosophers through history, to reflect the new realities presented to us by rapid invention and reinvention.

Concomitantly, many issues will bubble to the surface regarding social, ethical, regulatory, legal, spiritual, and other considerations in a world of emulated (duplicated) personhood. Such as: what might be the new ethical universe that society must make sense of, and what may be the (ever-shifting) constraints; whether the original person and emulated person could claim equal rights; whether any one person (the original or emulation) could choose to die at some point; what changes society might confront, such as inequities in opportunity and shifting centers of power; what institutions might be necessary to settle the questions and manage the process in order to minimise disruption; and so forth, all the while venturing increasingly into a curiously untested zone.

The possibilities are thorny, as well as hard to anticipate in their entirety; many broad contours are apparent, with specificity to emerge at its own pace. The possibilities will become increasingly apparent as new capabilities arise (building on one another) and as society is therefore obliged, by the press of circumstances, to weigh the what and how-to — as well as the ‘ought’, of course. That qualified level of predictive certainty is not unexpected, after all: given sluggish change in the Medieval Period, our twelfth-century forebears, for example, had no problem anticipating what thirteenth-century life might offer. At that time in history, social change was more in line with the slow, plank-by-plank changes to Theseus’s ship. Today, the new dynamic of what one might call precocious change — combined with increasingly successful, productive, leveraged alliances among the various disciplines — makes gazing into the twenty-second century an unprecedentedly challenging briar patch.

New paradoxes surrounding humanity in the context of change, and thus of identity (who and what I am and will become), must certainly arise. At the very least, amidst startling, transformative self-reinvention, the question of what is the bedrock of personal identity will be paramount.

12 February 2017

The Decline of Materialism

Posted by Thomas Scarborough
Materialism is the theory that matter alone exists – however, this is too simple. Let us assume, rather, that materialism is the arranging of our world in our minds – and since we are speaking of materialism, we do this on the basis of what we see, hear, smell, taste, and touch.
That is, in speaking of materialism, we are speaking of all that we learn about a material world through our senses – either directly, or through the instruments which we use. And so defined, materialism may seem to promise us a complete understanding of our world. We have certainly made enormous strides. We are able to tease apart the sub-atomic world, see billions of years back in time, and map and manipulate the complex genetic code – among many other things. However, there are at least four limiting and complicating factors to a materialistic outlook, each of which vastly reduces its scope and its power:
• It is one thing to discover the laws of nature, yet quite another to predict their outcomes. We see an analogy in the game of chess. While the rules of the game are simple – a pawn advances like this, and a king like that – the outcome of these rules is another matter altogether. A chess board, which is simplicity itself in the scheme of things – a mere sixty-four squares and thirty-two pieces – taxes the human mind to the very limits of its powers. It is the easy part, one might say, to design a supercomputer, or to plot a trajectory to Pluto. The impossible part is to predict the ripples on a pond, or to anticipate the path of a snail on a wall. Worse than this, we too often fail to foresee the negative outcomes of laws we imagined we had mastered. (A short computational sketch after this list illustrates just how quickly chess’s outcomes multiply.)

• If materialism is the arranging of a material world in our minds on the basis of what we see, hear, smell, taste, and touch, consider then that others, too, arrange the world in their minds – and these others enter my world and my considerations. It is not I alone now who seek to arrange the world in my mind. As soon as I factor another human being into my thinking – let alone a few, even hundreds, not to speak of a million more – the complexity of knowing my world becomes unthinkable. It is beyond imagination on the graph of intrinsic complexity. We therefore separate out such situations from the ‘natural sciences’, and call them ‘human sciences’. It happens wherever others enter the picture.

• The natural sciences are, in a sense, an open book. Yet in order to understand the human sciences, we need to understand how others arrange their worlds in their minds. In order to accomplish this, we now find that we need to understand how they communicate this – and we must infer it from semiotic codes.  A plethora of views, an ocean of feelings, vast beyond our comprehension, is expressed with facial expressions, nuances of speech, gestures, postures, behavioural codes, ideological codes, and so much more – all of them full of variation and caprice.  This takes us another quantum leap away from that materialism which advances through the senses.

• But in the way that we use these semiotic codes, noted Jacques Derrida, we are continually deferring meaning. Francis Bacon put it like this: words beget words (which beget words). It is much like having money in a bank, which has its money in another bank, which has its money in another bank, and so on. It is easy to see that one will never access one's money. Which is to say that, while the things of sense seem concrete, our words merely hover over the surface of reality. If mind and matter were to correspond in a one-to-one relationship, we would have to be mere ‘machines’. Yet suppose now that all living forms have such ‘hovering’ minds. We may in fact be living in a vast, teeming world which is wakeful in every part.
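To make the chess point above concrete, here is a rough sketch, offered only as an illustration: it assumes the third-party python-chess library, and the helper name count_positions is our own. It simply counts how many distinct lines of play the handful of simple rules permits within a few half-moves.

# A rough sketch only, assuming the third-party python-chess library
# (pip install python-chess); the helper name count_positions is ours.
import chess

def count_positions(board: chess.Board, depth: int) -> int:
    """Count the distinct sequences of legal moves that are `depth` half-moves long."""
    if depth == 0:
        return 1
    total = 0
    for move in board.legal_moves:   # every move the simple rules allow here
        board.push(move)             # play the move...
        total += count_positions(board, depth - 1)
        board.pop()                  # ...then take it back
    return total

if __name__ == "__main__":
    board = chess.Board()            # the standard starting position
    for plies in range(1, 5):
        print(plies, count_positions(board, plies))
    # Expected: 20, 400, 8902, 197281 - four half-moves in, the rules already
    # allow nearly 200,000 lines of play, and estimates of the full game tree
    # run to roughly 10^120.

Even this toy count makes the point: rules that fit in a paragraph generate outcomes that outrun any mind – which is precisely the gap between discovering laws and predicting what they will do.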
Materialism, we said, is the arranging of our world in our minds, on the basis of what we see, hear, smell, taste, and touch. On the surface of it, this promises us a complete understanding of our world.  Yet then we come up against the problem of outcomes. Further, we come up against the problem of others – through which we separate out the human sciences. Then we discover that we need to engage with complex and subtle semiotic codes. And finally, we might need to account for a world which is populated not merely with seven billion human beings, but with living agents beyond number or knowing. One by one, each of these four steps, in quantum leaps, diminishes the usefulness of materialism. By and large, our advancing understanding of the world would seem to be taking us further and further away from the materialism the philosophers once knew.

09 January 2017

Is Consciousness Bound Inextricably by the Brain?

From Qualia to Comprehension

Posted by Keith Tidman
According to the contemporary American philosopher, Daniel Dennett, consciousness is the ‘last surviving mystery’ humankind faces.
Well, that may be overstating human achievements, but at the very least, consciousness ranks among the most consequential mysteries. With its importance acknowledged, does the genesis of conscious experience rest solely in the brain? That is, should investigations of consciousness adhere to the simplest, most direct explanation, where neurophysiological activity accounts for this core feature of our being?

Consciousness is a fundamental property of life—an empirical connection to the phenomenal. Conscious states entail a wide range of (mechanistic) experiences, such as wakefulness, cognition, awareness of self and others, sentience, imagination, presence in time and space, perception, emotions, focused attention, information processing, vision of what can be, self-optimisation, memories, opinions—and much more. An element of consciousness is its ability to orchestrate how these intrinsic states express themselves.

None of these states, however, requires the presence of a mysterious dynamic—a ‘mind’ operating dualistically separate from the neuronal, synaptic activity of the brain. In that vein, ‘Consciousness is real and irreducible’, as Dennett’s contemporary, John Searle, observed in pointing out that the seat of consciousness is the brain; ‘you can’t get rid of it’. Accordingly, Cartesian dualism—the mind-body distinction—has long since been displaced by today’s neuroscience, physics, mathematical descriptions, and philosophy.

Of significance here is that the list of conscious experiences rooted in the neurophysiology of the brain includes colour awareness (the ‘blueness’ of eyes), pain from illness, happiness in children’s company, the sight of northern lights, pleasure in another’s touch, hunger before a meal, the smell of a petunia, the sound of a violin concerto, the taste of a macaroon, and myriad others. These sensations fall into a category dubbed qualia: the subjective, qualitative, ‘introspective’ properties of experience.

Qualia might well constitute, in the words of the Australian cognitive scientist, David Chalmers, the ‘hard problem’ in understanding consciousness; but, I would suggest, they’re not in any manner the ‘insoluble problem’. Qualia indeed pose an enigma for consciousness, but a tractable one. The reality of these experiences—what’s going on, where and how—has not yet yielded to research; however, it’s early. Qualia are likely—with time, new technologies, fresh methodologies, innovative paradigms—to also be traced back to brain activity.

In other words, these experiences are not just correlated to the neurophysiology of the brain serving as a substrate for conscious processes, they are inextricably linked to and caused by brain activity. Or, put another way, neurophysiological activity doesn’t merely represent consciousness, it is consciousness—both necessary and sufficient.

Consciousness is not unique to humans, of course. There’s a hierarchy to consciousness, tagged approximately to the biological sophistication of a species. Depending on how aware, sentient, deliberative, coherent, and complexly arranged any one species might be, consciousness varies in degree, down to the simplest organisms. The cutoff point of consciousness, if any, is debatable. Also, if aliens of radically different intelligences and physiologies, including different brain substrates, are going about their lives in solar systems scattered throughout the universe, they likewise share properties of consciousness.

This universal presence of consciousness is different from the ‘strong’ version of panpsychism, which assigns consciousness (‘mind’) to everything—from stars to rocks to atoms. Although some philosophers through history have subscribed to this notion, there is nothing empirical (measurable) to support it—future investigation notwithstanding, of course. A takeaway from the broader discussion is that the distributed presence of conscious experience precludes any one species, human or alien, from staking its claim to ‘exceptionalism’.

Consciousness, while universal, isn’t unbounded. That said, consciousness might prove roughly analogous to physics’ dark matter, dark energy, force fields, and fundamental particles. It’s possible that the consciousness of intelligent species (with higher-order cognition) is ‘entangled’—that is, one person’s consciousness instantaneously influences that of others across space without regard to distance and time. In that sense, one person’s conscious state may not end where someone else’s begins; instead, consciousness is an integrated, universal grid.

All that said, the universe doesn’t seem to pulse as a single conscious entity or ‘living organism’. At least, it doesn't to modern physicists. On a fundamental and necessary level, however, the presence of consciousness gives the universe meaning—it provides reasons for an extraordinarily complex universe like ours to exist, allowing for what ‘awareness’ brings to the presence of intelligent, sentient, reflective species... like humans.

Yet might not hyper-capable machines too eventually attain consciousness? Powerful artificial intelligence might endow machines with the analog of ‘whole-brain’ capabilities, and thus consciousness. With time and breakthroughs, such machines might enter reality—though not posing the ‘existential threat’ some philosophers and scientists have publicly articulated. Such machines might well achieve supreme complexity—in awareness, cognition, ideation, sentience, imagination, critical thinking, volition, self-optimisation, for example—translatable to proximate ‘personhood’, exhibiting proximate consciousness.

Among what remains of the deep mysteries is this task of achieving a better grasp of the relationship between brain properties and phenomenal properties. The promise is that in the process of developing a better understanding of consciousness, humanity will be provided with a vital key for unlocking what makes us us.

04 September 2016

Picture Post #16: Life Behind the Pile of Petrol Cans


'Because things don’t appear to be the known thing; they aren’t what they seemed to be neither will they become what they might appear to become.'

Posted by Tessa den Uyl and Martin Cohen

Azad Nanakeli 2011, Arbil, Kurdistan-Iraq
A tailor shop that is situated behind a pile of petrol cans. An image that offers a certain brutality about human life – yet in this harshness, but also in its lightness, man survives. In such ‘idiosyncratic sympathies’ is hidden our intimacy – and hence, similarity. How violent is it to earn one's daily bread out of sight of the street, and behind a symbol of capitalism and war and power?

Virtue will always raise its flags of dependence upon what it believes. It reduces intimacy to something impersonal in cultural terms, yet personal in providing a subjective state within which a distinct worldview is created. The subtlety between intimacy and brutality can then pass by unnoticed, or be easily exchanged, one with the other.

Yet human beings are blessed with something called imagination. And without imagination, intimacy cannot exist. Strangely, the most common scenes reflect our trouble with imagination. As if the common holds very little value in our regard. We let comparisons decree our personal preferences – and in so doing, not only do we refuse to imagine ourselves, but we refuse to imagine others. We refuse intimacy with the world.

Imagination evokes thinking, even though most thinking occurs within the already imagined. Imagination reveals a problem as to how we make the world intelligible. In this way, daily life offers us a myriad stream of common, unanticipated images like this, scenes in which a host of uncommon things can be traced.



30 November 2015

How to help the French living under Terror and their own Terreur

Posted by Perig Gouanvic


"Inside a Revolutionary Committee under Terreur (1793-1794)"
Finger-pointing and the cleansing of public discourse are not new in France.
In France, there are very old beliefs, reminiscent of the Terreur era, about religion and minorities that should never be questioned. Multiculturalism is considered a danger. Let's consider, for instance, the fact that the terrorists of the Paris attacks, who were born and raised in France or Belgium, have more in common with the skinheads of the 1980s than with the fundamentalists we see on TV. They drink alcohol, smoke pot, play murder-rampage video games, and really have the "no future" belief system of other teenagers of 20 years ago. Several observers have noted that religion would actually be a pacifying, structuring influence for these young people. In other words, supporting the strength of religious communities, not just ceasing to humiliate them, might actually prevent terrorism. At the present moment, the orthodoxy says that we should not limit "free speech" - especially the Charlie* kind - and even that we should celebrate the humiliation of religion as the most exquisite mark of French Freedom and Rationality.




A French anti-terrorism judge also lamented that, as the laïcité laws became more rigid, the whole Muslim community felt so alienated that it stopped cooperating with the police in reporting potential terrorism suspects. But, again, don't try to convince the authorities and intellectuals that supporting communities, especially the Muslim community, as such (not as a community with socioeconomic problems, but as a community whose customs and religion are positive contributions to France) is a positive step towards the elimination of terrorism. Supporting ethnic, religious or cultural minorities, in France, is called "communautarism". Another word for multiculturalism? Yes, except that it must be said with a grin of disgust. The French feminist sociologist Christine Delphy, who has been widely vilified for her opposition to the French scarf-banning laws, offers the rest of us a definition:
The French definition of communautarism is the fact that people who are discriminated against, to whom prejudices are assigned, to whom equal chances are denied, etc. - these people, who have often been parked in the same neighborhoods, hang out and talk to each other. This is communautarism; it's bad; it means that they want to part from the rest of society and that, instead of looking for well-regarded people, people who have privileges - for example, instead of Blacks and Arabs reaching out to Whites and begging them to come and talk with them - they talk to each other. That would be communautarism.
Yet the fact remains that cultivating friendly and respectful relationships with communities, acknowledging their contribution to civil society, is one of the time-tested ways to prevent ostracism and extremism. In France, however, individual members of minorities talking to each other are too often considered potential enemies of the state. Just imagine how dangerous it would be for the French State if it decided to approach these communities and recognize them as such!

I don't think most people are aware of the mental straitjacket in which the French have placed themselves for the last 30 years. It encompasses more than the issue of ethno-religious groups. Some probably know that the French have some very strange philosophers, such as Finkielkraut and BHL**, and some very despicable intellectuals, such as Michel Houellebecq, who recently wrote a book describing France becoming an Islamic republic, which became a national obsession in the wake of the Charlie attacks. These public figures pretend to be victims of political correctness, although they occupy most of the media. One thing that might not be as well known is that there also exists, in parallel, a whole swamp of dissident intellectuals who are actively kept at the margins of French discourse. In France, they are called the "confusionnists", "cryptofascists", and so on and so forth.

The slippery slope argument and guilt by association have become commonplace in France. In this mixed bag, you will find genuine far-right figures, but also anarchists, radical critics of NATO and of Israel, and so on. As an example, France has been able to outlaw the boycott campaign against Israel, which makes it more repressive of boycott calls than Israel itself. Don't try to protest: if you are not called an anti-Semite, you will be called an objective supporter of anti-Semites. There is no way out. These kinds of large-scale paranoid delusions are reminiscent of the arbitrary denunciations of the French Revolution's Terreur, depicted in the 1797 illustration above.


Finally, journalism too is constrained within this straitjacket. There are only a few journalists left who analyze the terror events in depth. They have pointed out in the past the same things that were pointed out about the Bush administration: foreknowledge, and the presence of elements in the intelligence services who would rather have let terrorist attacks happen, for instance to impose mass surveillance (of course these theses are not mainstream, but they are still more audible in the anglophone world than in France). But they are marginalized, and quickly become part of this "cryptofascist", "conspiracy-theorizing" swamp I was talking about. The result is that compelling elements of inquiry are missed, not only in France but abroad. For example, Hicham Hamza, a French journalist, has investigated the local ramifications of a Times of Israel article covering a warning given by "officials" to France's "Jewish community" on the morning of the attacks. His resources are thin, his site is regularly under cyberattack, and of course most would not touch him with a barge pole, because of the usual name-calling.

The same could be said of the Charlie Hebdo attacks - and, further in the past, of Rwanda, about which the BBC aired a documentary that would swiftly have been thrown into the Holocaust-denying, cryptofascist swamp in France. These elements do circulate in the French blogosphere. But guess what: France now has the right to shut down any website it judges problematic. What might be judged problematic is quite broad:

[It’s] a heterogeneous movement, heavily entangled with the Holocaust denial movement, which brings together admirers of Hugo Chavez and fans of Vladimir Putin. An underworld that consists of former left-wing activists or extreme leftists, former "malcontents", sovereignists, revolutionary nationalists, ultra-nationalists, nostalgists of the Third Reich, anti-vaccination activists, supporters of drawing lots (sortition), September 11th revisionists, anti-Zionists, Afrocentrists, survivalists, followers of "alternative medicine", agents of influence of the Iranian regime, Bacharists, and Catholic or Islamic fundamentalists. - « Conspirationnisme : un état des lieux », by Rudy Reichstadt, Observatoire des radicalités politiques, Fondation Jean-Jaurès, Parti socialiste, 24 February 2015.

(Welcome to the conspiracy theorist movement.)

Even so, of course, it cannot prevent us from thinking and inquiring. The French population really needs a breath of fresh air right now. They need fresh insights, serious journalism, and the freedom to discuss outside of their mentally and legally censored world. I don't have specific suggestions to solve those issues. I just think that the rest of the world should be aware that the French prison of ideas is not always self-imposed and that there are many people who just wish they could escape. Many do: I can see that in Quebec.


*The Charlie Hebdo satirical magazine whose cartoonists were murdered in January.
** Bernard Henri Levy, a self-styled philosophe.