17 November 2019

Getting the Ethics Right: Life and Death Decisions by Self-Driving Cars

Yes, the ethics of driverless cars are complicated.
Image credit: Iyad Rahwan
Posted by Keith Tidman

In 1967, the British philosopher Philippa Foot, daughter of a British Army major and sometime flatmate of the novelist Iris Murdoch, published an iconic thought experiment illustrating what forever after would be known as ‘the trolley problem’. These are problems that probe our intuitions about whether it is permissible to kill one person to save many.

The issue has intrigued ethicists, sociologists, psychologists, neuroscientists, legal experts, anthropologists, and technologists alike, with recent discussions highlighting its potential relevance to future robots, drones, and self-driving cars, among other ‘smart’, increasingly autonomous technologies.

The classic version of the thought experiment goes along these lines: The driver of a runaway trolley (tram) sees that five people are ahead, working on the main track. He knows that the trolley, if left to continue straight ahead, will kill the five workers. However, the driver spots a side track, where he can choose to redirect the trolley. The catch is that a single worker is toiling on that side track, who will be killed if the driver redirects the trolley. The ethical conundrum is whether the driver should allow the trolley to stay the course and kill the five workers, or alternatively redirect the trolley and kill the single worker.

Many twists on the thought experiment have been explored. One, introduced by the American philosopher Judith Thomson a decade after Foot, involves an observer, aware of the runaway trolley, who sees a person on a bridge above the track. The observer knows that if he pushes the person onto the track, the person’s body will stop the trolley, though killing him. The ethical conundrum is whether the observer should do nothing, allowing the trolley to kill the five workers, or push the person from the bridge, killing him alone. (Might a person choose, instead, to sacrifice himself for the greater good by leaping from the bridge onto the track?)

The ‘utilitarian’ choice, where consequences matter, is to redirect the trolley and kill the lone worker — or in the second scenario, to push the person from the bridge onto the track. This ‘consequentialist’ calculation, as it’s also known, results in the fewest deaths. On the other hand, the ‘deontological’ choice, where the morality of the act itself matters most, obliges the driver not to redirect the trolley because the act would be immoral — despite the larger number of resulting deaths. The same calculus applies to not pushing the person from the bridge — again, despite the resulting multiple deaths. Where, then, does one’s higher moral obligation lie: in acting, or in not acting?
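To make the contrast concrete, here is a minimal sketch in Python, purely illustrative: the scenario encoding, the field names, and the assumption that redirecting the trolley counts as an act wrong in itself (as the deontological reading above frames it) are my own, not drawn from the philosophical literature.

```python
# A toy comparison of the two decision rules discussed above.
# Everything here (names, fields, the encoding of 'redirect' as an act
# wrong in itself) is an illustrative assumption, not established doctrine.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Choice:
    name: str
    deaths: int               # lives lost if this choice is made
    act_itself_immoral: bool  # is the act judged wrong in itself, per the essay's framing?

def utilitarian_choice(choices: List[Choice]) -> Choice:
    """Consequentialist rule: pick the choice with the fewest resulting deaths."""
    return min(choices, key=lambda c: c.deaths)

def deontological_choice(choices: List[Choice]) -> Optional[Choice]:
    """Toy deontological rule: exclude any act judged wrong in itself,
    preferring inaction among what remains."""
    permissible = [c for c in choices if not c.act_itself_immoral]
    for c in permissible:
        if c.name == "do nothing":
            return c
    return permissible[0] if permissible else None

trolley = [
    Choice("do nothing", deaths=5, act_itself_immoral=False),
    Choice("redirect trolley", deaths=1, act_itself_immoral=True),
]

print(utilitarian_choice(trolley).name)    # -> redirect trolley
print(deontological_choice(trolley).name)  # -> do nothing
```

The point of the sketch is only that the two rules, applied to identical facts, select different actions; it says nothing about which rule is right.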

The ‘doctrine of double effect’ might prove germane here. The principle, introduced by Thomas Aquinas in the thirteenth century, says that an act that causes harm, such as injuring or killing someone as a side effect (the ‘double effect’), may still be moral as long as it promotes some good end (say, saving five lives rather than just one).

Empirical research has shown that people find it easier to choose redirecting the runaway trolley toward the one worker (the utilitarian basis), whereas they report strong visceral unease at pushing a person off the bridge (the deontological basis). Although both acts involve intentionality — each results in killing one rather than five — it seems less morally offensive to pull a lever impersonally to redirect the trolley than to lay hands on a person and push him off the bridge, sacrificing him for the good of the many.

In a similar practical spirit, neuroscience has connected these reactions to regions of the brain, suggesting neuronal bases, by scanning subjects in a functional magnetic resonance imaging (fMRI) machine as they thought about trolley-type scenarios. Choosing, through deliberation, to steer the trolley onto the side track, reducing loss of life, resulted in more activity in the prefrontal cortex. Thinking about pushing the person from the bridge onto the track, with its attendant imagery and emotions, resulted in greater activity in the amygdala. Follow-on studies have shown similar responses.

So, let’s now fast-forward to the 21st century, to look at just one way this thought experiment might, intriguingly, become pertinent to modern technology: self-driving cars. The aim is to marry function with increasingly smart, deep-learning technology. The longer-range goal is for driverless cars to consistently outperform humans along various critical dimensions, above all by eliminating human error (estimated to account for some ninety percent of accidents), while also easing congestion, improving fuel mileage, and polluting less.

As developers step toward what’s called ‘strong’ artificial intelligence — where AI (machine learning and big data) becomes increasingly capable of human-like functionality — automakers might find it prudent to fold ethics into their thinking. That is, to consider the risks on the road posed to self, passengers, drivers of other vehicles, pedestrians, and property. With the trolley problem in mind, ought, for example, the car’s ‘brain’ favour saving the driver over a pedestrian? A pedestrian over the driver? The young over the old? Women over men? Children over adults? Groups over an individual? And so forth — teasing apart the myriad conceivable circumstances. Societies, drawing from their own cultural norms, might call upon the ethicists and other experts mentioned in the opening paragraph to help get these moral choices ‘right’, in collaboration with policymakers, regulators, and manufacturers.
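By way of a purely hypothetical sketch of what ‘folding ethics into their thinking’ could mean at the software level (the categories, weights, and functions below are invented for illustration and do not describe any real vehicle’s decision logic), one can imagine such priorities being written down as explicit, auditable data rather than left implicit in a learned model:

```python
# Hypothetical illustration only: an explicit, auditable collision-ethics policy.
# The categories, weights, and tie-breaking behaviour are invented for this sketch;
# they are not drawn from any automaker's actual system.

from typing import Dict, List

# Weights a society or regulator might, hypothetically, assign to affected parties.
PRIORITY_WEIGHTS: Dict[str, float] = {
    "pedestrian": 1.0,
    "passenger": 1.0,      # equal weighting: no built-in preference for the car's occupants
    "other_driver": 1.0,
    "property": 0.1,
}

def expected_harm(affected: List[str]) -> float:
    """Sum the weighted harm across everyone put at risk by one candidate manoeuvre."""
    return sum(PRIORITY_WEIGHTS.get(party, 1.0) for party in affected)

def choose_manoeuvre(candidates: Dict[str, List[str]]) -> str:
    """Pick the manoeuvre whose predicted outcome carries the least weighted harm."""
    return min(candidates, key=lambda name: expected_harm(candidates[name]))

# Example: braking in lane risks two passengers; swerving risks one pedestrian.
options = {
    "brake_in_lane": ["passenger", "passenger"],
    "swerve_left": ["pedestrian"],
}
print(choose_manoeuvre(options))  # -> swerve_left, under these made-up weights
```

Whether any such weights should ever be made explicit, and who gets to set them, is of course precisely the moral question that societies, ethicists, regulators, and manufacturers would need to settle.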

Thought experiments like this have gained new traction in our techno-centric world, including the forward-leaning development of ‘strong’ AI, big data, and powerful machine-learning algorithms for driverless cars: vital tools needed to address conflicting moral priorities as we venture into the longer-range future.

10 November 2019

God: a New Argument from Design

The game of our universe does not reveal sameness

Posted by Thomas Scarborough


The venerable ‘argument from design’ proposes that the creation reveals a Creator. More than this, that the creation reveals the power and glory of God. Isaac Newton was one among many who believed it—stating in an appendix to his Principia of 1687, the Mathematical Principles of Natural Philosophy:
‘This most elegant system of the sun, planets, and comets could not have arisen without the design and dominion of an intelligent and powerful being.’
The trouble is, there are alternative explanations for design—in fact complete, coherent explanations. To put it in a nutshell, there are other ways that order and design can come about. So, today, the argument is often said to be inconclusive. The evolutionary biologist Richard Dawkins writes that it is ‘unanswerable’—which is not to say, however, that it is disproven.

Yet suppose that we push the whole argument back—back beyond all talk of power and glory—back beyond the simplest conceptions of design, to a core, a point of ‘ground zero’. Here we find the first and most basic characteristic of design: it is more than chaos or, alternatively, it is more than featurelessness.

On the surface of it, our universe ought to be only one or the other. Our universe is governed by laws which ought not to produce any more than chaos on the one hand, or featurelessness on the other. We might use the analogy of a chess game, although the analogy only goes so far.* A careful observer of a chess match reports that the entire game is governed by rules, and there is no departure from such rules.

Yet there is clearly, at the same time, something happening in the game at a different level. Games get won, and games get lost, and games play out in different ways each time. There is something beyond the laws. We may even infer that there is intelligence behind each game – but let us not rush to go that far.

However, without seeing the players, one could assume that they must exist—or something which resembles them. To put it as basically as we can: the game lacks sameness from game to game—whether this be the sameness of chaos or the sameness of featurelessness. Something else is happening there. Now apply this to our universe. We ought to see complete chaos, or we ought to see complete featurelessness. We ought not to see asymmetry or diversity, or anything of that sort—let alone anything which could resemble design.

The problem is familiar to science. The physicist Stephen Hawking wrote:
‘Why is it (the universe) not in a state of complete disorder at all times? After all, this might seem more probable.’
That is, there is no good explanation for it. Given the laws of nature, we cannot derive from them a universe which is as complex as the one we see. On the other hand, biologist Stuart Kauffman writes,
‘We have no adequate theory for why our universe is complex.’
This is the opposite view. We ought not to see any complexity emerging. No matter what degree of complexity we find today, whether it be Newton's system of the universe, or the basic fact that complexity exists, it should not happen. It is as if there is more than the rules—because the game of our universe does not reveal sameness.

This idea of ‘more’—of different levels of reality—has been seriously entertained by various scientists. The science writer Natalie Wolchover says, ‘Space-time may be a translation of some other description of reality,’ and while she does not propose the existence of the supernatural, the idea of some other description of reality could open the door to this.

Call this the ‘ground zero’, the epicentre of the argument from design. There is something going on, at a level we do not see, which we may never discover by examining the rules. In the analogy of the chess game, where we observe something beyond the rules, we may not be able to tell what that something is—yet it is clear that it is.

This argument differs from the familiar version of the theological argument from design, which generally assumes that God created the rules which the design displays. Instead, this argument proposes that God may exist beyond the rules, through the very fact that we see order.



* A problem with the analogy is that a chess game manifests complexity to begin with. The important point is, however, that the game reveals more than it should.

03 November 2019



'Because things don’t appear to be the known thing; they aren’t what they seemed to be neither will they become what they might appear to become.' 

Posted by Jeremy Dyer *

This is a detail from a great work of art. Which one? Whose? We are expected to admire it, to marvel and to learn. 

What if I told you that it was a detail from one of Pollock's works? Would you then try to 'see' the elusive essence of it? On the other hand, what if I told you it was merely a photo from above the urinal in a late-night restaurant? Does that make it any more or less 'art'? 

If everything is art—the sacred mantra—then the reverse corollary must also be true. Nothing is art.


* Jeremy Dyer is an acclaimed Cape Town artist.

27 October 2019

The Politics of the Bridge


Posted by Martin Cohen

Bridges are the stuff of superlatives and parlour games. Which is the longest bridge in the world? The tallest? The most expensive? And then there's also a prize which few seem to compete for - the prize for being the most political. The British Prime Minister Boris Johnson's surprise proposal in September for a feasibility study for a bridge to Ireland threatens to scoop the pot.

But then, what is it about bridges and Mr. Johnson? Fresh from the disaster, at least in public relations terms, of his ‘Garden Bridge’ (pictured above) over the river Thames - the one that Joanna Lumley said would be a “floating paradise”, the “tiara on the head of our fabulous city”, and which was forecast to cost £200 million before the plug was pulled on it (leaving Londoners with bills of £48 million for nothing) - he announced a new bridge, this time connecting Northern Ireland across seas a thousand feet deep to Stranraer in Scotland. This one would cost a bit too - albeit Johnson suggested it would be value for money at no more than £15 billion.

If Londoners choked on a minuscule fraction of that sum for their bridge, it is hard to see how exactly this new one could be afforded, particularly as large-scale public works don't exactly have a good reputation for coming in on budget.
The 55-kilometre bridge–tunnel system of the Hong Kong-Zhuhai-Macau Bridge that opened last year was constructed only after delays, corruption and accidents had put its cost up to 48 billion yuan (about £5.4 billion).

When wear and tear to the eastern span of the iconic San Francisco Bay Bridge became too bad to keep patching, an entirely new span was built to replace it, at a final price tag of $6.5 billion (about £5.2 billion), a remarkable sum in its own right but all the more indigestible because it represented a 2,500% cost overrun on the original estimate of $250 million.
Grand public works are always political. For a start, there is the money to be made on the contract, but there is also the money to be made from interest on the loans obtained. Money borrowed from governments at a low rate can be lent on at a higher rate. Even when they are run scrupulously, bridges are, like so many large construction projects, money-go-rounds.

And yet, bridges have a good image, certainly compared to walls. They are said to unite, where barriers divide. "Praise the bridge that carried you safe over" says Lady Duberly at breakfast, in George Colman's play The Heir at Law. But surface appearances can be deceptive. Bridges, as recent history has shown, have a special power to divide.

That Hong Kong bridge is also a way of projecting mainland Chinese power onto its fractious new family member. President Putin's $3.7 billion Kerch Strait Bridge joining Crimea to Russia was hardly likely, as he put it, to bring “all of us closer together”. Ukrainians and the wider international community considered the bridge to be reinforcing Russia's annexation of the peninsula. And if bridges are often favourably contrasted with walls, this one, it soon emerged, functioned as both: no sooner was the bridge completed than shipping trying to sail under it began to be obstructed. No wonder that Ukraine believes there was an entirely negative and carefully secret political rationale for the bridge: to impose an economic stranglehold on Ukraine and cripple its commercial shipping industry in the Azov Sea.

In this sense, a bridge to Northern Ireland seems anything but a friendly gesture by the British, rather it smacks of old-style colonialism.

But perhaps the saddest bridge of them all was the sixteenth century Old Bridge at Mostar, commissioned by Suleiman the Magnificent in 1557 and connecting the two sides of the old city. Upon its completion it was the widest man-made arch in the world, towering forty meters (130 feet) over the river. Yet it was constructed and bound not with cement but with egg whites. No wonder, according to legend, the builder, Mimar Hayruddin, whose conditions of employment apparently included his being hanged if the bridge collapsed, carefully prepared for his own funeral on the day the scaffolding was finally removed from the completed structure.

In fact, the bridge was a fantastic piece of engineering and stood proud - until, that is, 1993, when Croatian nationalists, intent on dividing the communities on either side of the river, brought it down in a barrage of artillery shells. Thus the bridge once compared with a ‘rainbow rising up to the Milky Way’ became instead a tragic monument to hatred.

20 October 2019

Humanism: Intersections of Morality and the Human Condition

Kant urged that we ‘treat people as ends in themselves, never as means to an end’
Posted by Keith Tidman

At its foundation, humanism’s aim is to empower people through conviction in the philosophical bedrock of self-determination and people’s capacity to flourish — to arrive at an understanding of truth and to shape their own lives through reason, empiricism, vision, reflection, observation, and human-centric values. Humanism casts a wide net philosophically — ethically, metaphysically, sociologically, politically, and otherwise — for the purpose of doing what’s upright in the context of individual and community dignity and worth.

Humanism provides social mores, guiding moral behaviour. The umbrella aspiration is unconditional: to improve the human condition in the present, while endowing future generations with progressively better conditions. The prominence of the word ‘flourishing’ is more than just rhetoric. In placing people at the heart of affairs, humanism stresses the importance of the individual living both free and accountable — to hand off a better world. In this endeavour, the ideal is to live unbound by undemocratic doctrine, instead prospering collaboratively with fellow citizens and communities. Immanuel Kant underscored this humanistic respect for fellow citizens, urging quite simply, in Groundwork of the Metaphysics of Morals, that we ‘treat people as ends in themselves, never as means to an end’.

The history of humanistic thinking is not attributed to any single proto-humanist. Nor has it been confined to any single place or time. Rather, humanist beliefs trace a path through the ages, being reshaped along the way. Among the instrumental contributors were Gautama Buddha in ancient India; Lao Tzu and Confucius in ancient China; Thales, Epicurus, Pericles, Democritus, and Thucydides in ancient Greece; Lucretius and Cicero in ancient Rome; Francesco Petrarch, Sir Thomas More, Michel de Montaigne, and François Rabelais during the Renaissance; and Daniel Dennett, John Dewey, A.J. Ayer, A.C. Grayling, and Bertrand Russell among the modern humanist-leaning philosophers. (Dewey contributed, in the early 1930s, to drafting the original Humanist Manifesto.) The point is that the story of humanism is one of ubiquity and variety; if you’re a humanist, you’re in good company. The English philosopher A.J. Ayer, in The Humanist Outlook, aptly captured the philosophy’s human-centric perspective:

‘The only possible basis for a sound morality is mutual tolerance and respect; tolerance of one another’s customs and opinions; respect for one another’s rights and feelings; awareness of one another’s needs’.

For humanists, moral decisions and deeds do not require a supernatural, transcendent being. To the contrary: the almost-universal tendency to anthropomorphise God, to attribute human characteristics to God, is an expedient to help make God relatable and familiar that can, at the same time, prove disquieting to some people. Rather, humanists’ belief is generally that any god, no matter how intense one’s faith, can only ever be an unknowable abstraction. To that point, the opinion of the eighteenth-century Scottish philosopher David Hume — ‘A wise man proportions his belief to the evidence’ — goes to the heart of humanists’ rationalist philosophy regarding faith. Yet, theism and humanism can coexist; they do not necessarily cancel each other out. Adherents of humanism have been religious, agnostic, and atheist — though it’s true that secular humanism, as a subspecies of humanism, rejects a religious basis for human morality.

For humanists there is typically no expectation of after-life rewards and punishments, mysteries associated with metaphorical teachings, or inspirational exhortations by evangelising trailblazers. There need be no ‘ghost in the machine’, to borrow an expression from British philosopher Gilbert Ryle: no invisible hand guiding the laws of nature, or making exceptions to nature’s axioms simply to make ‘miracles’ possible, or swaying human choices, or leaning on so-called revelations and mysticism, or bending the arc of human history. Rather, rationality, naturalism, and empiricism serve as the drivers of moral behaviour, individually and societally. The pre-Socratic philosopher Protagoras summed up these ideas about the challenges of knowing the supernatural:

‘About the gods, I’m unable to know whether they exist or do not exist, nor what they are like in form: for there are things that hinder sure knowledge — the obscurity of the subject and the shortness of human life’.

The critical thinking that’s fundamental to pro-social humanism thus moves the needle from an abstraction to the concreteness of natural and social science. And the handwringing over issues of theodicy no longer matters; evil simply happens naturally and unavoidably, in the course of everyday events. In that light, human nature is recognised not to be perfectible, but nonetheless can be burnished by the influences of culture, such as education, thoughtful policymaking, and exemplification of right behaviour. This model assumes a benign form of human centrism. ‘Benign’ because the model rejects doctrinaire ideology, instead acknowledging that while there may be some universal goods cutting across societies, moral decision-making takes account of the often-unique values of diverse cultures.

A quality that distinguishes humanity is its persistence in bettering the lot of people. Enabling people to live more fully — from the material to the cultural and spiritual — is the manner in which secular humanism embraces its moral obligation: obligation of the individual to family, community, nation, and globe. These interested parties must operate with a like-minded philosophical belief in the fundamental value of all life. In turn, reason and observable evidence may lead to shared moral goods, as well as progress on the material and immaterial sides of life’s ledger.

Humanism acknowledges the sanctification of life, instilling moral worthiness. That sanctification propels human behaviour and endeavour: from progressiveness to altruism, a global outlook, critical thinking, and inclusiveness. Humanism aspires to the greater good of humanity through the dovetailing of various goods: ranging across governance, institutions, justice, philosophical tenets, science, cultural traditions, mores, and teachings. Collectively, these make social order, from small communities to nations, possible. The naturalist Charles Darwin addressed an overarching point about this social order:

‘As man advances in civilisation, and small tribes are united into larger communities, the simplest reason would tell each individual that he ought to extend his social instincts and sympathies to all the members of the same nation, though personally unknown to him’.

Within humanism, systemic challenges regarding morality present themselves: what people can know about definitions of morality; how language bears on that discussion; the value of benefits derived from decisions, policies, and deeds; and, thornily, deciding what actually benefits humanity. There is no taxonomy of all possible goods, for handy reference; we’re left to figure it out. There is no single, unconditional moral code, good for everyone, in every circumstance, for all time. There is only a limited ability to measure the benefits of alternative actions. And there are degrees of confidence and uncertainty in the ‘truth-value’ of moral propositions.

Humanism empowers people not only to help avoid bad results, but to strive for the greatest amount of good for the greatest number of people — a utilitarian metric, based on the consequences of actions, famously espoused by the eighteenth-century philosopher Jeremy Bentham and nineteenth-century philosopher John Stuart Mill, among others. It empowers society to tame conflicting self-interests. It systematises the development of right and wrong in the light of intent, all the while imagining the ideal human condition, albeit absent the intrusion of dogma.

Agency in promoting the ‘flourishing’ of humankind, within this humanist backdrop, is shared. People’s search for truth through natural means, to advance everyone’s best interest, is preeminent. Self-realisation is the central tenet. Faith and myth are insufficient. As modern humanism proclaims, this is less a doctrine than a ‘life stance’. Social order, forged on the anvil of humanism and its core belief in being wholly responsible for our own choices and lives, through rational measures, is the product of that shared agency.

13 October 2019

A New African Pragmatism

Natalia Goncharova, Exhilarating Cyclist, 1913.
By Sifiso Mkhonto *

Allister Marran, addressing himself to older people in these pages, wrote: 'Your time is over.'  Far from representing ageism, his attitude represents a new pragmatism in Africa. 

For the past few years, a question has lingered in my mind: are African political and business leaders concerned about the future of this continent, or are they concerned about their turn to eat, and how those in their lineage may benefit from the feast that is dished out in the back kitchen? Judging by the obvious evidence before us, we can only conclude that they are far too often unconcerned. 

We shall not delve into each problem, because history teaches us that we have a tendency to spend our resources and energy on discussing and unpacking problems, rather than executing the solution. In business, leaders do not appreciate you knocking at their door with a problem. They prefer a mere brief of the problem, and a detailed plan of the solution. This philosophy can and should be adopted in our approach to the social issues that we face as a continent.

In my understanding, we should pragmatically ask at least four ‘whys’. These should be good enough to assist us in thinking of an amicable solution to major issues, among them the following:
• unemployment
• crime (including femicide, xenophobia, and gang violence)
• poverty, and
• lack of quality education
Here is a basic example of applying the first of these four points:
Why do we have such a high level of unemployment amongst the youth?
• Because there are no jobs.
Why are there no jobs?
• Because policy is not business-friendly, start-up businesses fail to create jobs, there’s too much red tape, and young people study in fields where jobs are scarce.
Why, and why. All answers derived should lead us to basic solutions. We do not need ideology and political identity as a continent. These preoccupations set us ten steps back each time a pragmatic, sustainable solution is brought forth. It is the youth, today, which is determined, against all odds, to change the narrative of corrupt States, high crime levels, the stigma of stereotypical prejudices, and many other issues.

Against all the red tape, they still start businesses with no funding, and they still pursue education at great sacrifice, to escape the reality of poverty. However, because of those who enjoy the buffet that is prepared and dished out in the back kitchen, many young lions and lionesses are doomed.

The solution is simple. Give young people the space they deserve – they think differently, and they are determined – to advance this continent into one of the most prosperous in the world. 'Grant an idea or belief to be true,' wrote William James, 'what concrete difference will its being true make in anyone's actual life?' Ideology and political identity have failed us. We need a new African pragmatism.



* Sifiso Mkhonto is a logistician and former student leader in South Africa.

06 October 2019

Picture Post #49: Vision in a Suitcase



'Because things don’t appear to be the known thing; they aren’t what they seemed to be neither will they become what they might appear to become.' 


Posted by Tessa Den Uyl

Florence, 2019


The Venus by Botticelli, the David by Michelangelo, the Thinker by Rodin: names which resonate, and which celebrate moments in our history that are now in the lap of technology. With new materials and with lasers, these images, and thus the names, are copied and cast into gadgets which we can grasp quickly and transport (even) in hand luggage.

These persons had a vision. In this light it just seems odd to exploit, for commerce, ready-mades that are not urinals (thinking of Duchamp’s ‘Fountain’, which placed a non-art object in an art space). What happens in this shop window might be thought of as the reverse. The art (and its creator) are objects available to everyone. But nothing within these statues reminds us of a vision. They are vision-less, though apparently they remind us of something else.

Does this mean that, when we have merely heard about something, scraps of such something are enough to live through the original, with all its implications and compulsiveness, in which and for which the creation came into being?

29 September 2019

What Place for Privacy in a Digital World?

C. S. Lewis, serene at his desk...

Posted by Keith Tidman

When Albert Camus offered this soothing advice in the first half of the twentieth century, ‘Opt for privacy. . . . You need to breathe. And you need to be’, life was still uncomplicated by digital technology. Since then, we have become just so many cogwheels in the global machinery that makes up the ‘Internet of things’ — the multifarious devices that simultaneously empower us and make us vulnerable.

We are alternately thrilled with the power that these devices shower on us — providing an interactive window onto the world, and giving us voice — even as we are dismayed to see our personal information scooped up, stowed, scrutinised for nuggets, reassembled, duplicated, and given up to others. The fact that we may not even see this, that our lives are shared without our being aware, without our freely choosing, and without our being able to prevent their commodification and monetisation, only makes it much worse.

Can a human right to privacy, assumed by Camus, still fit within this digitised reality?

Louis Brandeis, a former justice on the U.S. Supreme Court, defined the ‘right to be left alone’ as the ‘most comprehensive of rights, and the right most prized by civilised people’. But that was proffered some ninety years ago. If individuals and societies still value that principle, then today they are challenged to figure out how to balance the intrusively ubiquitous connectivity of digital technology, and the sanctity of personal information implicit in the ‘right to be left alone’. That is, the fundamental human right articulated by the UN’s 1948 Universal Declaration of Human Rights:
‘No one shall be subjected to arbitrary interference with his privacy, family, home, or correspondence’.
It’s safe to assume that we’re not about to scrap our digital devices and nostalgically return to analog lives. To the contrary, inevitable shifts in society will require more dependence on increasingly sophisticated digital technology for a widening range of purposes. Participation in civic life will call for more and different devices, and greater vacuuming and moving around of information. Whether the latter will translate into further loss of the human right to privacy, as is risked, or that society manages change in order to preserve or even recover lost personal privacy, the draft of that narrative is still being written.

However, it’s important to acknowledge that intervention — by policymakers, regulators, technologists, sociologists, cultural anthropologists, and ethicists, among others — may coalesce to avoid the erosion of personal privacy taking a straight upward trajectory. Urgency, and a commitment to avoid and even reverse further erosion, will be key.

Some contemporary philosophers have argued that claims to a human right to privacy are redundant, for various reasons. An example is when privacy is presumed embedded in other human rights, such as personal property — distinguished from property held in common — and protection of our personal being. But this seems dubious; in fact, one might flip the argument on its head — that is, found other rights on the right to privacy, the latter being more fundamentally rooted in human dignity and moral values. It’s a more nuanced, ethics-based position that makes the one-dimensional assertion that ‘If you don’t have anything to hide, you have nothing to fear’ all the more specious.

Furthermore, without a right to privacy being carved out in concrete terms, such as codified in law and constitutions, it may simply get ignored, rendering it non-defendable. For all that, we value privacy, and with it the power to prevent other people’s intrusion and meddling in our lives. We cling to the notion of what has been dubbed the ‘inviolate personality’ — the quintessence of being a person. In endorsing this belief in individual interests, one is subscribing to Noam Chomsky’s caution that ‘It’s dangerous when people are willing to give up their privacy’. To Chomsky’s point, the informed, ‘willing’ acceptance of social media’s mining and monetising of our personal data provides a contrast.

One parallel factor is the push-pull between what may become normalised governmental access to our personal information and individuals’ assertion of confidentiality and the ‘reasonable expectation’ of privacy. The style of government — from liberal democracies to authoritarianism — matters to government access to personal information: whether for benign use or malign abuse. ‘In good conscience’ is a reasonable guiding principle in establishing the what, when, and how of government access. And in turn, it matters to a fundamental human right to privacy. Meantime, governments may see a need for tools to combat crime and terrorism, allowing surveillance and intelligence gathering through wiretaps and Internet monitoring.

Two and a half centuries ago, Benjamin Franklin foreshadowed this tension between the liberty implied in personal privacy and the safety implied in government’s interest in self-protection. He cautioned: 
‘Those who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety’. 
Yet, however amorphous these contrary claims to rights might be, as a practical matter society has to resolve the risk-benefit equation and choose how to play its hand. What we conclude is the best solution will likely keep shifting, based on norms and emerging technology.

And notions of a human right to privacy differ as markedly among cultures as they do among individuals; the definition of privacy and its value may vary both among and within cultures. It would perhaps prove unsurprising if cultures in Asia, Africa, Europe, and South or Central America were to frame personal privacy rights differently, but only insofar as both the burgeoning of digital technology and the nature of government influence the privacy-rights landscape.

The reflex may be to anticipate that privacy and human rights will take a straight, if thorny, path. The relentless and quickening emergence of digital technologies drives this impulse. The British writer and philosopher C. S. Lewis provides social context for this impulse, saying:
‘We live … in a world starved for solitude, silence, and privacy.’
Despite the invasion of people’s privacy, by white-hatted parties (with benign intent) and black-hatted parties (with malign intent), I believe our record thus far represents only an embryonic, inelegant attempt to explore — with perfunctory legal, regulatory, or principled restraint — the rich utility of digital technology.

Nonetheless, if we are to steer clear of the potentially unbridled erosion of privacy rights — to uphold the human right to privacy, however measured — then it will require repeatedly revisiting what one might call the ‘digital social contract’ the community adopts: and resolving the contradiction behind being both ‘citizen-creators’ and ‘citizen-users’ of digital technologies.

22 September 2019

The Impossibility of Determinism


Posted by Thomas Scarborough

Indeed, free will and determinism. It is a classic problem of metaphysics. No matter what we may think about it, we know that we have a problem. We know that things are physically determined. I line up dominoes in a row, and topple the first of them with my finger. It is certain that the whole row of dominoes will fall.

Are people then subject to the same kind of determinism? Are we just so many powerless humanoid shapes waiting to be knocked down by circumstances? Or perhaps, to what extent are we subject to such determinism? Is it possible for us to escape our own inner person? Our own history? Our own future? Are we even free to choose our own thoughts—much less our actions? Are we even free to believe? Each of these questions would seem to present us with a range of mightily confusing answers.

I suggest that it may be helpful to try to view the question from a broader perspective—the particular one that comes from consideration of the phenomenon of cause and effect. If I am controlled by indomitable causes, then I am not free. Yet if I am (freely) the cause of my own thoughts and actions, then I am free. Which then is it? Once we understand the dynamics of cause and effect, we should be in a better position to understand free will and determinism.

What is cause and effect?

In our everyday descriptions of our world, we say that, to paraphrase Simon Blackburn, causation is the relation between two events. It holds when, given that one event occurs, ‘it produces, or brings forth, or necessitates the second’. The burrowing aardvark caused the dam to burst; the lightning strike caused the thatch to burn; the medicine caused the patient to rally, and so on. Yet we notice in this something that is immediately problematic—which is that in order to say that there is causality, we need to have carefully defined events before and after.

But such definition is a problem. The philosopher-statesman Francis Bacon wrote of the ‘evil’ we find in defining natural and material things. ‘The definitions themselves consist of words, and those words beget others.’ Aristotle wrote that words consist of features (say, the features of a house), and those features must stand in a certain relation to one another (rubble, say, is not a house). Therefore, not only do we have words within words, but features and relations, too.

Where does it all end? It all ends nowhere. It is an endless regress. Bacon’s ‘evil’ means that our definitions dissipate into the universe. It seems much like having money in a bank, which has its money in another bank, which has its money in another bank, and so on. It is not hard to see that one will never find the money. Full definitions ultimately reach into the void.

If we want to be consistent about it, there are no events. In order to obtain events, we need to set artificial limits to our words—and artificial limits to reality itself, by excluding unwanted influences on our various constructions. But that is not the way the world really is in its totality. More than this, these unwanted influences always seem to enter the picture again somewhere along the line. This is a big part of the problem in our world today.

Of course, cause and effect quite simply work: he lit the fire; I broke the urn; they split the atom. This is good as far as it goes—yet again, such explanations work because we define before and after—and that very definition strips away a lot of what is really going on.

Where does this leave us? It leaves us without a reason to believe in cause and effect—even if we are naturally disposed to thinking that way. There is no rational framework to support it.

Someone might object. Even if we have no befores and afters, we still have a reality which is bound by the laws of the universe. There is therefore some kind of something which is not free. Yet every scientific law is about events before and after. Whatever is out there, it has nothing in common—that we can know of anyway—with such a scheme.

This may be a new way of putting it, but it is not a new idea. Albert Einstein, as an example, said that determinism is a feature of theories, rather than any aspect of the world directly. While, at the end of this post, we cannot prove free will, we can state that notions of determinism are out of the question, in the world as we know it. The world is something else, which we have not yet understood.