25 September 2022

Where Do Ideas Come From?


By Keith Tidman

 

Just as cosmic clouds of dust and gas, spanning many light-years, serve as ‘nurseries’ of new stars, could it be that the human mind similarly serves as a nursery, where untold thought fragments coalesce into full-fledged ideas?

 

At its best, this metaphor for how creative ideas come to be would provide us with a different way of looking at some of the most remarkable human achievements in the course of history.

 

These are things like Michelangelo’s inspired painting, sculpting, architecture, and engineering. The paradigm-shifting science of Niels Bohr and Max Planck developing quantum theory. The remarkable compositions of Mozart. The eternal triumvirate of Socrates, Plato, and Aristotle — whose intellectual hold remains to this day. The piercing insights into human nature memorably expressed by Shakespeare. The democratic spread of knowledge achieved through Gutenberg’s printing press. And so many more, of course.

 

To borrow from Newton (with his nod to the generations of luminaries who set the stage for his own influences upon science and mathematics), might humbler souls, too, learn to ‘stand on the shoulders of such giants’, even if in less remarkable ways? Yet still to reach beyond the rote? And, if so, how might that work?

 

I would say that, for a start, it is essential for the mind to be unconstrained by conformity and orthodox groupthink in viewing and reconceiving the world: a quest for patterns. The creative process must not be sapped by concern over not getting endeavours right the first or second or third time. Doubting ideas, putting them to the test through decomposition and recomposition, adds to the rigour of those that ultimately survive exploration and scrutiny.


To find solutions that move significantly beyond the prevailing norms requires the mind to be undaunted, undistracted, and unflagging. Sometimes, how the creative process starts out — the initial conditions, as well as the increasing numbers of branching paths along which those conditions travel — greatly shapes eventual outcomes; other times, not. All part of the interlacing of analysis and serendipitous discovery. I think that tracing the genealogy of how ideas coalesce informs that process.

 

To begin with, there’s a materialistic aspect to innovative thought, in which the mind is demystified, no longer some unmeasurable, ethereal other. That is, ideas are the product of neuronal activity in the fine-grained circuitry of the brain, where hundreds of trillions of synapses, acting like switches and routers and storage devices, sort out and connect thoughts and deliver clever solutions. Vastly more synapses, one might note, than there are stars in our Milky Way galaxy!

 

The whispering unconscious mind, present in reposed moments such as twilight or midnight or simply gazing into the distance, associated with ‘alpha brain waves’, is often where creative, innovative insights dwell, being readied to emerge. It’s where the critical mass of creative insights is housed, rising to challenge rigid intellectual canon. This activity finds a force magnifier in the ‘parallel processing’ of others’ minds during the frothy back and forth of collaborative dialogue.

 

The panoply of surrounding influences helps the mind set up stencils for transitioning inspiration into mature ideas. These influences may germinate from individuals in one’s own creative orbit, or as inspiration derived from the culture and community of which one is a part. Moreover, synthesising creative ideas across fields, as when multidisciplinary teams are formed of members who complement one another, works effectively to kindle fresh insights and solutions.

 

Thoughts may be collaboratively exchanged within and among teams, pushing boundaries and inciting vision and understanding. It’s incremental, with ideas stepwise building on ideas in the manner famously acknowledged by Newton. Ultimately, at its best the process leads to the diffusion of ideas, across communities, as grist for others engaged in reflection and the generation of new takes on things. Chance happenings and spontaneous hunches matter, too, with blanks cooperatively filled in with others’ intuitions.

 

As an example, consider that, in a 1959 talk, the Nobel Prize-winning physicist Richard Feynman challenged the world to shrink text to such an extent that the entire twenty-four-volume Encyclopedia Britannica could fit onto the head of a pin. (A challenge perhaps reminiscent of the whimsical question about ‘the number of angels fitting on the head of a pin’, at the time intended to mock medieval scholasticism.) Feynman believed there was no reason technology couldn’t be developed to accomplish the task. The challenge was met, through the scaling of nanotechnology, two and a half decades later. Never say never, when it comes to laying down novel intellectual markers.

 

I suggest that the most fundamental dimension to the origination of such mind-stretching ideas as Feynman’s is curiosity — to wonder at the world as it has been, as it is now, and crucially as it might become. To doggedly stay on the trail of discovery through such measures as what-if deconstruction, reimagination, and reassembly. To ferret out what stands apart from the banal. And to create ways to ensure the right-fitting application of such reinvention.

 

Related is a knack for spotting otherwise secreted links between outwardly dissimilar and disconnected things and circumstances. Such links become apparent as a result of combining attentiveness, openness, resourcefulness, and imagination. A sense that there might be more to what’s locked in one’s gaze than what immediately springs to mind. Where, frankly, the trite expression ‘thinking outside-the-box’ is itself an ironic example of ‘thinking inside-the-box’.

 

Forging creative results from the junction of farsightedness and ingenuity is hard — to get from the ordinary to the extraordinary is a difficult, craggy path. Expertise and extensive knowledge are the metaphorical cosmic dust from which imaginatively original ideas coalesce.

 

A case in point is the technically grounded Edison, blessed with vision and critical-thinking competencies, who experienced a prolific string of inventive, life-changing eureka moments. Another example is Darwin, prepared to arrive at his long-marinating epiphany into the brave world of ‘natural selection’. Such incubation of ideas, venturing into uncharted waters, has proven immensely fruitful.

 

Thus, the ‘nurseries’ of thought fragments, coalescing into complex ideas, can provide insight into reality — and grist for future visionaries.

 

18 September 2022

Neo-Medievalism and the New Latin

By Emile Wolfaardt

Medieval Latin (or Ecclesiastical Latin, as it is sometimes called), was the primary language of the church in Europe during the Dark Ages. The Bible and its laws and commands were all in Latin, as were the punishments to be meted out for those who breached its dictates. This left interpretation and application up to the proclivities of the clergy. Because the populace could not understand Latin, there was no accountability for those who wielded the Latin sword.

We may have outgrown the too-simplistic ideas of infanticidal nuns and the horror stories of medieval torture devices (for the most part, anyway). Yet the tragedy of the self-serving ecclesiastical economies, the gorgonising abuse of spiritual authority, the opprobrious intrusion upon privacy, and the disenfranchisement of the masses still cast a dark shadow of systemic exploitation and widespread corruption over that period. The few who were born into the ranks of the bourgeoisie ruled with deleterious absolutism and no accountability. The middle class was all but absent, and the subjugated masses lived in abject poverty without regard or recourse. There was no pathway by which to improve their station in life. It was effectively a two-class social stratification system that enslaved by keeping people economically disenfranchised and functionally dependent. Their beliefs were defined, their behaviour was regulated, and their liberties were determined by those whose best interest was to keep them stationed where they were.

It is the position of this writer that there are some alarming perspectives and dangerous parallels to that abuse in our day and age that we need to be aware of.

There has been a gargantuan shift in the techno-world that is obfuscatory and ubiquitous. With the ushering in of the digital age, marketers realised that the more information they could glean from our choices and conduct, the better they could influence our thinking. They started analysing our purchasing history, listening to our conversations, tracking key words, identifying our interests. They learned that people who say or text the word ‘camping’ may be in the market for a tent, and that people who buy rifles, are part of a shooting club, and live in a particular area are more likely to affiliate with a certain party. They learned that there was no such thing as excess data – that all data is useful and could be manipulated for financial gain.

Where we find ourselves today is that the marketing world has ushered in a new economic model that sees human experiences as free raw material to be taken, manipulated, and traded at will, with or without the consent of the individual. Google's vision statement for 2022 is ‘to provide access to the world's information in one click’. Everything is garnering your data: your heart rate read by your watch, your texts surveyed by your phone’s software, your words recorded by the myriad listening devices around you, your location identified by twenty apps on your phone, your GPS, your doorbell, and the security cameras around your home. And we even pay for these things. It is easier to find a route using a GPS than a map, and the convenience of smart technology seems, at first glance anyway, like a reasonable exchange.

Our data is being harvested systematically, and sold for profit without our consent or remuneration. Our search history, buying practices, biometric data, contacts, location, sleeping habits, exercise routine, self-discipline, the articles we pause our scrolling to peruse, even whether we use exclamation marks in our texts – the list continues almost endlessly – and a trillion other bits of data are recorded each day. All of it is then analysed for behavioural patterns, organised to manipulate our choices, and sold to help advertisers prise the hard-earned dollars out of our hands. It is written in a language very few people can understand, imposed upon us without our understanding, and used for financial gain by those who do not have our best interest at heart. Our personal and private data is thus traded for profit without our knowledge, consent, or benefit.

A new form of economic oppression has emerged, ruthlessly designed and implemented by the digital bourgeoisie, and built exclusively on harvesting our personal and private data – and we gladly exchanged it for the conveniences it offered. As a society, we have been gaslighted into accepting this new norm. We are fed the information they choose to feed us, are subject to their manipulation, and we are simply fodder for their profit machine. We are indeed in the oppressive age of Neo-Medievalism, and computer code is the new Latin.

It seems to have happened so quickly, and permeated our lives so completely, all without our knowledge or consent.

But it is not hopeless. As oppressive as the Dark Ages were, that period came to an end. Why? Because there were people who saw what was happening, vocalised and organised themselves around a healthier social model, and educated themselves around human rights, oppression, and accountable leadership. After all – look at us now. We were birthed out of that period by those who ushered in the Enlightenment and ultimately Modernity.

Reformation starts with being aware, with educating oneself, with speaking up, and with joining our voices with others. There is huge value to this digital age we have wholeheartedly embraced. However, instead of allowing it to oppress us, we must take back control of our data where we can. We must do what we need to, to maximise the opportunities it provides, join with those who see it for what it is, help others to retain their freedom, and be a part of the wave of people and organisations looking for integrity, openness, and redefinition in the process. The digital age with its AI potential is here to stay. This is good. Let’s be a part of building a system that serves the needs of the many, that benefits humanity as a whole, and that lifts us all to a better place.

11 September 2022

The Uncaused Multiverse: And What It Signifies


By Keith Tidman

Here’s an argument that seems like common sense: everything that exists has a cause; the universe exists; and so, therefore, the universe has a cause. A related argument goes on to say that the events that led to the universe must themselves ultimately originate from an uncaused event, bringing the regress of causes to a halt.

 

But is such a model of cosmic creation right?


Cosmologists assert that our universe was created by the Big Bang, an origin story developed by the Belgian physicist and Catholic priest Georges Lemaitre in 1931. However, we ought not to confuse the so-called singularity — a tiny point of infinite density — and the follow-on Big Bang event with creation or causation per se, as if those events preceded the universe. Rather, they were early components of a universe that by then already existed, though in its infancy.


It’s often considered problematic to ask what came ‘before’ the Big Bang, given the event is said to have led to the creation of space and time (I address ‘time’ in some detail below). By extension, the notion of nothingness prior to the Big Bang is equally problematic, because, correctly defined, nothingness is the total, absolute absence of everything — even energy and space. Although cosmologists claim that quantum fluctuations, or short bursts of energy in space, allowed the Big Bang to happen, we are surely then obliged to ask what allowed those fluctuations to happen.


Yet, it’s generally agreed you can’t get something from nothing. Which makes it all the more meaningful that by nothingness, we are not talking about space that happens to be empty, but rather the absence of space itself.

 

I therefore propose, instead, that there has always been something, an infinity where something is the default condition, corresponding to the impossibility of nothingness. Further, nothingness is inconceivable, in that we are incapable of visualising nothingness. As soon as we attempt to imagine nothingness, our minds — the very act of thinking about it — cause the abstraction of ‘nothingness’ to turn into the concreteness of ‘something’: a thing with features. We can’t resist that outcome, for we have no basis in reality and in experience that we can match up with this absolute absence of everything, including space, no matter how hard we try to picture it in our mind’s eye.

 

The notion of infinity in this model of being excludes not just a ‘first universe’, but likewise excludes a ‘first cause’ or ‘prime mover’. By its very definition, infinity has no starting point: no point of origin; no uncaused cause. That’s key; nothing and no one turned on some metaphorical switch, to get the ball rolling.


What I wish to convey is a model of multiple universes existing, each living and dying, within an infinitely bigger whole, where infinity excludes a first cause or first universe.


In this scenario, where something has always prevailed over nothingness, the topic of time inevitably raises its head, needing to be addressed. We cannot ignore it. But, I suggest, time appears problematic only because it's misconceived. Rather, time is not something that suddenly lurches out of the starting gate upon the occurrence of a Big Bang, in the manner that cosmologists and philosophers have typically described. Instead, when properly understood, time is best reflected in the unfolding of change.

 

The so-called ‘arrow of time’ traditionally appears to us in the three-way guise of the past leading to (causing) the present leading to the future. Allegorically, like a river. However, I propose that past and future are artificial constructs of the mind that simply give us a handy mechanism by which to live with the consequences of what we customarily call time: by that, meaning the consequences of change, and thus of causation. Accordingly, it is change through which time (temporal duration) is made visible to us; that is, the neurophysiological perception of change in human consciousness.

 

As such, only the present — a single, seamless ‘now’ — exists in context of our experience. To be sure, future and past give us a practical mental framework for modeling a world in ways that conveniently help us to make sense of it on an everyday level. Such as for hypothesising about what might be ahead and chronicling events for possible retrieval in the ‘now’. However, future and past are figments, of which we have to make the best. ‘Time reflected as change’ fits the cosmological model described here.


A process called entropy lets us look at this time-as-change model on a cosmic scale. How? Well, entropy is the irresistible increase in net disorder — that is, evolving change — in a single universe. Despite spotty semblances of increased order in a universe, from the formation of new stars and galaxies to someone baking an apple pie, such localised instances of increased order are more than offset by the governing physical laws of thermodynamics.


These physical laws result in increasing net disorder, randomness, and uncertainty during the life cycle of a universe. That is, the arrow of change playing out as universes live and peter out because of heat death — or as a result of universes reversing their expansion and unwinding, erasing everything, only to rebound. Entropy, then, is really super-charged change running its course within each universe, giving us the impression of something we dub time.  

 

I propose that in this cosmological model, the universe we inhabit is no more unique and alone than our solar system or beyond it our spiral galaxy, the Milky Way. The multiplicity of such things that we observe and readily accept within our universe arguably mirrors a similar multiplicity beyond our universe. These multiple universes may be regarded as occurring both in succession and in parallel, entailing variants of Big Bangs and entropy-driven ‘heat deaths’, within an infinitely larger whole of which they are a part.


In this multiverse reality of cosmic roiling, the likelihood of dissimilar natural laws from one universe to another, across the infinite many, matters as to each world’s developmental direction. For example, in both the science and philosophy of cosmology, the so-called ‘fine-tuning principle’ — known, too, as the anthropic principle — argues that with enough different universes, there’s a high probability some worlds will have natural laws and physical constants allowing for the kick-start and evolution of complex intelligent forms of life.


There’s one last consequence of the infinite, uncaused multiverse described here. Which is the absence of intent, and thus absence of intelligent design, when it comes to the physical laws and materialisation of sophisticated, conscious species pondering their home worlds. I propose that the fine-tuning of constants within these worlds does not undo the incidental nature of such reality.


The special appeal of this kind of multiverse is that it alone allows for the entirety of what can exist.


04 September 2022

Picture Post #78: Human Loss



'Because things don’t appear to be the known thing; they aren’t what they seemed to be
neither will they become what they might appear to become.'

 

Posted by Jeremy Dyer *


Prague, Czech Republic. Monument to the Victims of Communism

I have viewed this powerful, symbolic artwork in Prague, which also makes an arresting image. If asked to interpret the artwork, we might imagine it depicts the misery of loss in some form—perhaps Alzheimer’s, loss of identity, or personal catastrophe.

Today it might represent alienation from society, as aspects of our literal and ideological worlds are constantly being buffeted around us. What are you busy losing? What parts of you have faded away, and how do you grieve for that? What things are gone forever and what might still be resurrected in your life? How do you mourn that which has been forgotten by you? Does it speak to your life? 

Officially, though, the installation represents the personal human cost brought about by the historical evil of Communism. And today, passers-by ignore it as they go about their daily business, even as a steady trickle of tourists take selfies there.

------------------------------------------

* Jeremy Dyer is a psychologist and artist.

28 August 2022

Replacing Nature

by Thomas Scarborough

Koeberg Nuclear Power Station, Cape Town

The 2017 film Blade Runner 2049 was ‘visually amazing’, receiving eight nominations and two awards at the 71st British Academy Film Awards, among other important accolades. But beyond the visuals, there was some serious philosophy. Blade Runner 2049 portrays a world which, according to Laura Holt of the Centre for the Study of Existential Risk, cuts the ‘umbilical cord’ which connects human survival with the biosphere.

Today, this cutting of the umbilical cord would seem to be a slow but relentless process. The more organised we become, the more there is to go wrong. The more there is to go wrong, the more we need to insure life against it. The ‘progress’ of the Enlightenment has become the progress of human domination. This has come at the cost, according to the World Wildlife Fund, of the massive retreat of nature: an average 68% decline in monitored wildlife populations since 1970.

The theologian Dietrich Bonhoeffer, before his execution by the Nazi regime in 1945, wrote a synopsis of an envisaged book. His notes were published posthumously in English in 1953, in Prisoner of God. Since these were abbreviated, I paraphrase here (the original translation appears below):

‘The Coming of Age of Humanity.

‘Humanity will seek to insure life against accident and ill-fortune. If the elimination of danger proves to be impossible, they will seek at least to minimise it. Insurance, while it thrives upon accidents, seeks also to mitigate their effects. This is a Western phenomenon. The goal is ultimately to be independent of nature. Our immediate environment is destined, not to be nature as before, but organisation. Yet this immunity from nature will produce a new crop of dangers, which is the very organisation.’

This was a prescient observation by a man who wrote nearly eighty years ago. At a glance, one might suppose that he was speaking of totalitarianism. It is, however, not the totalitarianism of the state, but what we now call ‘the science of scarcity’. How to provide more, and more, with less, across all lands and seas, for a global population.

Apart from being a relentless process, this becomes more and more dangerous to human stability. Close to my home in Cape Town, there is a nuclear power station. On some days, its twin domes rise hauntingly above the mists on the shore. In 2006, apparently, a single bolt broke loose in a generator, so disabling half the nuclear plant. It went into a controlled shutdown, and could not be raised to life for months. The reason for this was that replacement parts needed to be imported from France.

The incident showed how perilously close human organisation may sometimes be to disintegration. Fuel distribution, desalination plants, food production, transportation, communications, and any number of things besides, may be laid lame through fairly small and localised problems. The war in Ukraine, while not small, has revealed how a localised catastrophe can now destabilise the whole world. Too often, where we engineer things to create a more predictable and dependable world than nature provides, we come another step closer to the edge.

While we have applied much attention to the problems, we seem to find no reason to stop the latent and relentless process of separating ourselves from nature. And those who perhaps see clearly, do not have the power to prevent it. It is not ‘as before’, wrote Bonhoeffer. ‘Before’ (in his continuing notes), humankind had the spiritual vitality to defeat ‘the blasphemies of hybris’. He wrote, ‘Man is once more faced with the problem of himself. He can cope with every danger except the danger of human nature itself.’

Prominent thinkers have said no, wait, stop. Let go of the steering wheel. We are headed for, as it were, Blade Runner 2049. The late biologist and naturalist Edward O. Wilson proposed that half the earth should be rewilded. Most recently, Laura Holt called for ‘relinquished areas’ of nature. I myself have proposed that large areas of the planet be prohibitos autem terra: under a ban. I propose that we are not capable of stopping ourselves in any other way.

-----------------------

* Original translation: ‘The coming of age of humanity (along the lines already suggested). The insuring of life against accident, ill-fortune. If elimination of danger impossible, at least its minimisation. Insurance (which although it thrives upon accidents, seeks to mitigate their effects) a western phenomenon. The goal, to be independent of nature. Our immediate environment not nature, as formerly, but organization. But this immunity produces a new crop of dangers, i.e. the very organisation.’

21 August 2022

Thence We Will Create Superhumans

by Corinne Othenin-Girard *


IMAGINE A WORLD IN WHICH parents have the option to go to a geneticist to discuss the ‘genetic fix’ choices of their unborn child.

If you should think that this is a fantasy from dystopian fiction, you would be mistaken. Not only is the above, to a point, technologically possible today, but the parents' option could be made possible, too, in the not-too-distant future.

Human Genome Editing is a kind of genetic engineering, where DNA is deleted and inserted, modified and replaced. 

The main argument in support of this technology is that it would be used to prevent the transmission of genetic diseases from one generation to the other. 

There seems now to be an instrumentalisation of individuals with disability: concepts become instruments which serve as a guide to action. The proponents of (Germline) Genome Editing are using ‘the prevention of disability’ as a concept that coincides with how people with disabilities are usually portrayed and viewed by the broad public.

There are two kinds of such editing—Somatic Genome Editing, and Germline Genome Editing—and there are, broadly, three possible applications. These applications include the following: 

1. Somatic Genome Editing is performed in the non-reproductive cells, and may contribute to treating diseases in existing individuals. It is said that it has the potential to revolutionise healthcare. A stunning success of this method was shown recently in the (possibly permanent) cure of haemophilia. And by now nearly 300 experimental gene-based therapies are in clinical testing. Changes made by somatic genome therapy are not passed down to future generations.

2. Germline Genome Editing is performed in the early-stage embryo (before ‘it’ is even called an embryo), or in germ cells (sperm and egg cells). These modifications affect all cells of the potential future child, and will also be passed on to future generations. This technology would be used to prevent the transmission of diseases from one generation to the next. In other words, Genome editing would be used for fixing genetic ‘defects’ or ‘variations’ which cause rare diseases. Germline Genome Editing does not treat, cure, or prevent disease in any living individual. It is used to create embryos with altered genomes. 

3. From there on, the technology of Germline Genome Editing will inevitably expand into the area of generating ‘new’ or ‘improved’ abilities. Any gene can change, based on the ability-development it promises. ‘Treating disease’ or ‘preventing disability’ would therefore merge with ‘enhancement’. If genome editing should be deemed to be ‘sufficiently safe’, it could be applied to all kinds of gene variations—and that which is seen as ‘normal’ might be up for debate. The proponents of enhancement by genome editing mean to improve the human body and mind to its maximum potential. They conceive the natural human body as limited, defective, and in need of improvement, and support functioning beyond species-typical boundaries. 

Assuming that so-called ‘glitches’ of gene editing would be overcome, is it ethically acceptable to use this technology in order to ‘design’ future babies? It has already been done, in fact, and this issue has already come up, through the so-called CRISPR-Baby Scandal. In 2018, the Chinese researcher He Jiankui created the first CRISPR-edited babies, twin girls called Lulu and Nana. Many researchers condemned his action. The actual editing wasn’t executed well. 

At the moment, public opinion is thought to carry a lot of weight. Therefore, various polls have been conducted to assess it. For example, people are asked whether gene editing for (unborn) babies is acceptable when a parent has a severe heritable muscle disease and the editing would greatly reduce the child’s risk of serious diseases or conditions, assuming, again, that the technology is safe and effective. 

But for the technology to be declared as safe, don’t individuals with changed DNA need to be monitored throughout their life? 

The emerging field of enhancement medicine is due to push the boundaries through genetic manipulation, and will shift what counts as the human norm. 

Would using genome editing technology to create the 'perfect' or 'ideal' human risk making us become less tolerant of 'imperfections'? A person who couldn't embrace the norm of perfection would be perceived as 'disabled' and not as a person with a difference that needs to be sustained.

A genuinely inclusive and pro-equality society has no preferences between all possible future persons. Instead all existing and future individuals are perceived as having equal worth and value.

-------------------------------------

* Corinne Othenin-Girard is a PhD student in sociology in Basle, Switzerland. She is currently working on a participatory project on the topic of Human Germline Genome Editing. Corinne invites readers of Pi to join a Zoom Conference, 9 September 2022 on Human Germline Gene editing (HGGE), more specifically on how it could change the future of humanity.

15 August 2022

The Tangled Web We Weave


By Keith Tidman
 

Kant believed, as a universal ethical principle, that lying was always morally wrong. But was he right? And how might we decide that?

 

The eighteenth-century German philosopher asserted that everyone had ‘intrinsic worth’: that people are characteristically rational and free to make their own choices. Lying, he believed, degrades that aspect of moral worth, withdrawing others’ ability to exercise autonomy and make logical decisions, as we presume they might in possessing truth. 

 

Kant’s ground-level belief in these regards was that we should value others strictly ‘as ends’, and never see people ‘as merely means to ends’. A maxim that’s valued and commonly espoused in human affairs today, too, even if people sometimes come up short.

 

The belief that judgements of morality should be based on universal principles, or ‘directives’, without reference to the practical outcomes, is termed deontology. For example, according to this approach, all lies are immoral and condemnable. There are no attempts to parse right and wrong, to dig into nuance. It’s blanket censure.

 

But it’s easy to think of innumerable drawbacks to the inviolable rule of wholesale condemnation. Consider how you might respond to a terrorist demanding the place and time of a meeting to be held by the intended target. Deontologists like Kant would consider such a lie immoral.

 

Virtue ethics, to this extent compatible with Kant’s beliefs, also says that lying is morally wrong. Their reasoning, though, is that it violates a core virtue: honesty. Virtue ethicists are concerned to protect people’s character, where ‘virtues’ — like fairness, generosity, compassion, courage, fidelity, integrity, prudence, and kindness — lead people to behave in ways others will judge morally laudable. 

 

Other philosophers argue that, instead of turning to the rules-based beliefs of Kant and of virtue ethicists, we ought to weigh the (supposed) benefits and harms of a lie’s outcomes. This principle is called consequentialist ethics, mirroring the utilitarianism of eighteenth/nineteenth-century philosophers Jeremy Bentham and John Stuart Mill, emphasising greatest happiness. 

 

Advocates of consequentialism claim that actions, including lying, are morally acceptable when the results of behaviour maximise benefits and minimise harms. A tall order! A lie is not always immoral, as long as outcomes on net balance favour the stakeholders.

 

Take the case of your saving a toddler from a burning house. Perhaps, however, you believe in not taking credit for altruism, concerned about being perceived as conceitedly self-serving. You thus tell the emergency responders a different story about how the child came to safety, a lie that harms no one. Per Bentham’s utilitarianism, the ‘deception’ in this instance is not immoral.

 

Kant’s dyed-in-the-wool unforgiveness of lies invites examples that challenge the concept’s wisdom. Take the historical case of a Jewish woman concealed, from Nazi military occupiers, under the floorboards of a farmer’s cottage. The situation seems clear-cut, perhaps.

 

If grilled by enemy soldiers as to the woman’s whereabouts, the farmer lies rather than dooming her to being shot or sent to a concentration camp. The farmer chooses good over bad, echoing consequentialism and virtue ethics. His choice answers the question of whether the lie elicits a better outcome than the truth would. It would have been immoral not to lie.

 

Of course, the consequences of lying, even for an honourable person, may sometimes be hard to get right, differing in significant ways from reality or from one’s subjective sense of the greater good. One may overvalue or undervalue benefits — nontrivial possibilities.

 

But maybe what matters most in gauging consequences are motive and goal. As long as the purpose is to benefit, not to beguile or harm, then trust remains intact — of great benefit in itself.

 

Consider two more cases as examples. In the first, a doctor knowingly gives a cancer-ridden patient and family false (inflated) hope for recovery from treatment. In the second, a politician knowingly gives constituents false (inflated) expectations of benefits from legislation he sponsored and pushed through.

 

The doctor and politician both engage in ‘deceptions’, but critically with very different intent: Rightly or wrongly, the doctor believes, on personal principle, that he is being kind by uplifting the patient’s despondency. And the politician, rightly or wrongly, believes that his hold on his legislative seat will be bolstered, convinced that’s to his constituents’ benefit.

 

From a deontological — rules-focused — standpoint, both lies are immoral. Both parties know that they mislead — that what they say is false. (Though both might prefer to say something like they ‘bent the truth’, as if more palatable.) But how about from the standpoint of either consequentialism or virtue ethics? 

 

The Roman orator Quintilian is supposed to have advised, ‘A liar should have a good memory’. Handy practical advice, for those who ‘weave tangled webs’, benign or malign, and attempt to evade being called out for duplicity.

 

And damning all lies seems like a crude, blunt tool, with no real value, being wholly unworkable outside Kant’s absolutist disposition toward the matter; no one could unswervingly meet that rigorous standard. Indeed, a study by psychologist Robert Feldman claimed that people lie two to three times, in trivial and major ways, for every ten minutes of conversation! 

 

However, consequentialism and virtue ethics have their own shortcomings. They leave us with the problematic task of figuring out which consequences and virtues matter most in a given situation, and tailoring our decisions and actions accordingly. No small feat.

 

So, in parsing which lies on balance are ‘beneficial’ or ‘harmful’, and how to arrive at those assessments, ethicists still haven’t ventured close to crafting an airtight model: one that dots all the i’s and crosses all the t’s of the ethics of lying. 


At the very least, we can say that, no, Kant got it wrong in overbearingly rebuffing all lies as immoral. Not seeking reasonable exceptions may have been obvious folly. Yet, that may be cold comfort for some people, as lapses into excessive risk — weaving ever more tangled webs — court danger for unwary souls.


Meantime, while some more than others may feel they have been cut some slack, they might be advised to keep Quintilian’s advice close.




* ’O what a tangled web we weave / When first we practise to deceive’, Sir Walter Scott, poem, ‘Marmion: A Tale of Flodden Field’.

 

07 August 2022

A Linguistic Theory of Creation

by Thomas Scarborough

Creation of the Earth, by Wenceslas Hollar (1607-1677)

Perhaps it has been obscured through familiarity. There is an obvious curiosity in the opening chapters of Genesis (the creation of the world). Step by step, God creates the world, then names the world—repeatedly both coupling and separating his* creating and his naming.

Would it not be more natural simply to describe God’s creative acts without embellishment? Would not a description of his creative acts alone suffice? Unless God's naming has some special significance in the narrative, it may seem quite superfluous.

Under any circumstances, the opening chapters of Genesis are supremely difficult to interpret. Bearing this very much in mind, the purpose here is to present an alternative view—unfinished, unrefined—as a new possibility.

Existing interpretations of Genesis include the following:

  • Heaven and earth were created in six days
  • The six days were six (longer) periods of time
  • The earth’s great age was ‘created into’ a six-day sequence
  • Genesis represents the re-creation of the world
  • Genesis stitches various creation stories together
  • Its purpose is to glorify God, not first to be factual
  • It is a synopsis, which may not be sequential
  • It is a myth
  • It is a spiritual allegory
  • It describes a dream of Moses

Here, then, is a new alternative—presented merely as a possibility—for greater minds to examine the rough edges and (possibly) inadmissible ideas on an exceedingly complex text.

We begin with a simple linguistic fact. Names, in the Bible, were often commemorative. The ATS Bible Dictionary sums it up well: ‘Names were assumed afterwards to commemorate some striking occurrence in one’s history.’ Therefore, an event took place—then it, or the place of its happening, was named: Babel, Israel, the Passover, and so on. In fact, often with a pause.

If we assume that the creation account in Genesis includes, similarly, a commemorative naming, then the account may separate a stage-by-stage creation of the world from a stage-by-stage naming of it. With this in mind, there would then be four stages to each act of creation in Genesis. For example, in the NASB translation of the Bible (abridged):

  • ‘Then God said, Let there be light.’
  • ‘And there was light.’
  • ‘And God called the light day.’
  • ‘And there was evening and there was morning, one day.’

One may reduce this to two stages:

  • God created.
  • Then God named it.
 
And with some nuance, we may possibly say:

  • God created, within unspecified periods of time.
  • God named his creation during equal pauses (days), as commemorative acts.


In this case, Genesis could be viewed as a series of linguistic events. Its opening verses could set the tone, as a linguistic announcement: ‘And the earth was formless and void’—reminiscent of the linguist Ferdinand de Saussure, ‘In itself, thought is like a swirling cloud, where no shape is intrinsically determinate. No ideas are established in advance, and nothing is distinct, before the introduction of linguistic structure.’ 

Further, one may see a major linguistic shift in Genesis 3:7: ‘Then the eyes of both of them were opened …’ We have, from this point, the language of ‘ought’, as the first rational creatures ostensibly discern right from wrong. Then, needless to say, Babel represents a major linguistic shift in Genesis chapter 11, as languages (plural) appear.

From this, two major issues arise.

Firstly, is God's creating, in each stage of creation, coincident with his naming of it? In other words, did God name things on the same day that he created them, or did he name them afterwards? 

If it was on the same day that he created them, then the theory suggested here would presumably unravel. But arguably, in its favour, each naming is preceded by the word ‘And … ,’ which in the creation account is mostly used to indicate sequences in time. ‘And God called ...’ may represent separate periods of time in which namings occurred, after acts of creation.

A possible problem lies in Genesis 5:2, ‘God named them … in the day they were created.’ However, the word ‘day’ may here encompass every day, as we find in Genesis 2:4. ‘In the day’ may not refer to the separate stages of creation of Genesis chapter 1.

A second issue arises: God's naming does not seem to appear in the text consistently. ‘God called …’ appears only three times in Genesis 1, in connection with the first three days of creation. 

However, Genesis, in general, liberally makes use of related words. Take the key words ‘God created ...’ Alternatives that we find in the text are ‘made’, ‘formed’, ‘brought forth’, and so on. The same is true of the key words ‘God called …’ Alternatives are ‘saw’, ‘blessed’, ‘sanctified’. An act of commemoration may be implied in all of these words.

In short, the time periods which are described in Genesis may be attached, not first to the creation of the world, but to God’s naming of it—and, incidentally, to man's naming of it. On the sixth day, ‘the man gave names …’

Such a theory would potentially remove major problems of other creation theories. In particular, it could possibly move beyond both literal and liberal readings of Genesis, without colliding with them.

----------------------------------

* I follow Rabbi Aryeh Kaplan: “We refer to G-d using masculine terms simply for convenience’s sake.”

Also by Thomas Scarborough: Hell: A Thought Experiment.

31 July 2022

Picture Post #77: The Picnic



'Because things don’t appear to be the known thing; they aren’t what they seemed to be
neither will they become what they might appear to become.'

 

Posted by Martin Cohen



 
Another image from another war. The 1999–2000 battle of Grozny saw the siege and assault of the Chechen capital by Russian forces, and left the city devastated. In 2003, the United Nations called Grozny the most destroyed city on Earth.

But pause to look at this image. There’s a bizarre juxtaposition of suburban normalcy and wartime horror here. For a start, the table set with four chairs. Who else will be coming to dinner? Notice that the two soldiers at the table are a man and a woman, again echoing many a more homely, family scene.

Of course, as with most picnics, it is the setting that makes the moment, but here it is a nightmare scene of blasted apartment blocks and grey, smoking ruins. Not the family car, but the ‘family tank’ is parked nearby.

On the table, the actual food is rather meagre, which may explain why both figures at the table look, frankly, rather miserable.

24 July 2022

‘Philosophical Zombies’: A Thought Experiment

Zombies are essentially machines that appear human.

By Keith Tidman
 

Some philosophers have used the notion of ‘philosophical zombies’ in a bid to make a point about the source and nature of human consciousness. Have they been on the right track?

 

One thought experiment begins by hypothesising the existence of zombies who are indistinguishable in appearance and behaviour from ordinary people. These zombies match our comportment, seeming to think, know, understand, believe, and communicate just as we do. Or, at least, they appear to. You and a zombie could not tell each other apart. 

 

Except, there is one important difference: philosophical zombies lack conscious experience. Which means that if, for example, a zombie were to drop an anvil on its foot, it might give itself away by not reacting at all or, perhaps, by reacting very differently than a person normally would. It would not have the inward, natural, individualised experience of actual pain the way the rest of us would. On the other hand, a smarter kind of zombie might know what humans would do in such situations and pretend to recoil and curse as if in extreme pain. 

 

Accordingly, philosophical zombies lead us to what’s called the ‘hard problem of consciousness’, which is whether or not each human has individually unique feelings while experiencing things – whereby each person produces his or her own reactions to stimuli, unlike everyone else’s. Such as the taste of a tart orange, the chilliness of snow, the discomfort of grit in the eye, the awe in gazing at ancient relics, the warmth of holding a squirming puppy, and so on.

 

Likewise, they lead us to wonder whether or not there are experiences (reactions, if you will) that humans subjectively feel in authentic ways that are the product of physical processes, such as neuronal and synaptic activity as regions of the brain fire up. Experiences beyond those that zombies only copycat, or are conditioned or programmed to feign, the way automatons might, lacking true self-awareness. If there are, then there remains a commonsense difference between ‘philosophical zombies’ and us.

 

Zombie thought experiments have been used by some to argue against the notion called ‘physicalism’, whereby human consciousness and subjective experience are considered to be based in the material activity of the brain. That is, an understanding of reality, revealed by philosophers of mind and neuroscientists who are jointly peeling back how the brain works as it experiences, imagines, ponders, assesses, and decides.

 

The key objection to such ‘physicalism’ is the contention that mind and body are separable properties, the venerable philosophical theory also known as dualism. And that by extrapolation, the brain is not (cannot be) the source of conscious experience. Instead, it is argued by some that conscious experience — like the pain from the dropped anvil or joy in response to the bright yellow of fields of sunflowers — is separate from brain function, even though natural law strongly tells us such brain function is the root of everyone's subjective experience.

 

But does the ‘philosophical zombie’ argument against brain function being the seed of conscious experience hold up?

 

After all, the argument that philosophical zombies could pose so cleverly that we would assume there are no differences between them and us seems problematic. Surely, there is insufficient evidence of the brain not giving rise to consciousness and individual experience. Yet, many people who argue against a material basis to experience, residing in brain function, rest their case on the notion that philosophical zombies are at least conceivable.

 

They argue that ‘conceivability’ is enough to make zombies possible. However, such arguments neglect that being conceivable is really just another expression for something ‘being imaginable’. Isn’t that the reason young children look under their beds at night? But, is being imaginable actually enough to conclude something’s real-world existence? How many children actually come face to face with monsters in their closets? There are innumerable other examples, as we’ll get to momentarily, illustrating that all sorts of irrational, unreal things are imaginable, in the same sense that they’re conceivable, yet surely with no sound basis in reality.

 

Proponents of conceivability might be said to stumble into a dilemma: that of logical incoherence. Why so? Because, on the same supposedly logical framework, it is logically imaginable that garden gnomes come to life at night, or that fire-breathing dragons live on an as-yet-undiscovered island, or that the channels scoured on the surface of Mars are signs of an intelligent alien civilisation!

 

Such extraordinary notions are imaginable, but at the same time implausible, even nonsensical. Imagining something doesn’t make it so. These ‘netherworld notions’ simply don’t hold up. Philosophical zombies arguably fall into this group. 

 

Moreover, zombies wouldn’t (couldn’t) have free will; that is, free will and zombiism conflict with one another. Yes, zombies might fabricate self-awareness and free will convincingly enough to trick a casual, uncritical observer — but this would be a sham, insufficient to satisfy the conditions for true free will.

 

The fact remains that the authentic experience of, for example, peacefully listening to gentle waves splashing ashore cannot happen if the complex functionality of the brain were not to exist. A blob that only looks like a brain (as in the case for philosophical zombies) would not be the equivalent of a human brain if, critically, those functions were missing.


It’s those brain functions that make consciousness and individualised sentience possible, contrary to theories like dualism, which assert the separation of mind from body. The emergence of mind from brain activity is the likeliest explanation of experienced reality. Contemporary philosophers of mind and neuroscientists would agree on this, even as they continue to work jointly on figuring out the details of how all that happens.


The idea of philosophical zombies existing among us thus collapses. Yet, very similar questions of mind, consciousness, sentience, experience, and personhood could easily pop up again. Likely not as recycled philosophical zombies, but instead, as new issues arising longer term as developments in artificial intelligence begin to match and perhaps eventually exceed the vast array of abilities of human intelligence.