15 December 2019

Redefining Race: It’s Collaboration That Counts

Posted by Sifiso Mkhonto
Historically, White colonisers of Africa used the category of race as an instrument of greed and exploitation, enslaving the continent's Black races for innumerable purposes: labour, land, resources, pleasure, and more.
While it is true that Arab races exploited Black races, and Black exploited Arab – and Black exploited Black, and so on – even a cursory look at the colonial map of Africa reveals that most Black races were dominated, and as a result exploited, by White colonisers. There were (arguably) just two exceptions: Ethiopia and Liberia.

While the colonial era now lies behind us in Africa – at least in its overt forms – racial prejudice continues to be a major issue. As we approach the 2020s, we come to realise that racial superiority, if not domination, has continued in the form of individualism.

I propose that collaboration is the true opposite of racism, while a failure to collaborate is its chief characteristic.

Race, like all the grounds of prejudice, is merely a classificatory term – a social construct rather than a genuine biological category. It indicates a group characterised by closeness of common descent and some shared physical distinctiveness, such as colour of skin – but can this still be relevant when one speaks in terms of collaboration? Collaboration is concrete. It advances beyond the theoretical constructs of race, and gives us a measurable and meaningful term. It is a concept we can work with.

Presently, in Africa, including my own nation South Africa, the category of race is used as a tool for redress. Yet we fail to measure its success. This raises the question: is the concept of race effective, or is it a hindrance to progress? If there is one thing about race, it is that individual fortunes can be turned in the direction one wishes – and those who know how to bargain with race use it to lay a solid foundation for their fortunes.

It is good to admire the beauty of your own ethnicity, but to do so at the expense of diminishing another's uncovers insecurities about your own successes and failures. The agenda which puts Black races on their own should be torn to shreds, because the truth is that everyone is on their own. The only difference between ethnicities is collaboration. This is the level at which true non-racialism is measured.

It seems as if real interracial collaboration faded with the struggle for independence and self-determination. The chances of forging a genuine partnership for empowerment, or of fighting a system of oppression alongside a person of another race, were higher during those tough times than they are at present. Times are still tough economically, politically, and socially, but behind the curtain of some delusory interracial collaborations we find terms and conditions that do not move us forward.

In his book of 1725, Logic: The Right Use of Reason in the Inquiry After Truth, Isaac Watts says,
‘Do not always imagine that there are ideas wheresoever there are names; for though mankind hath so many millions of ideas more than they have of names, yet so foolish and lavish are we, that too often we use some words in mere waste, and have no ideas for them; or at least our ideas are so exceedingly scattered and confused, broken and blended, various and unsettled, that they can signify nothing toward the improvement of the understanding.’
Race is an issue – along with other forms of prejudice – where concepts are used ‘in mere waste’. We attach a lot of ideas to mere words. Some of these words have no real definition of their own. What then are the concepts which really matter? In the case of ‘racism’, it is about collaboration, above all.

This is how we should define the issue going forward. This is the true opposite of racial prejudice. In everything we do, from day to day, we should keep this first and foremost.

08 December 2019

Is Torture Morally Defensible?


Posted by Keith Tidman

Far from being universally treated as unconscionable, torture has in effect been normalised: according to Amnesty International, some 140 countries resort to it, whether through domestic police, intelligence agencies, military forces, or other institutions. Incongruously, many of these countries are signatories to the United Nations Convention Against Torture, which forbids the practice, whether carried out domestically or outsourced to countries where torture is legal (by so-called renditions).

Philosophers too are ambivalent, conjuring up difficult scenarios in which torture seems somehow the only reasonable response:
An anarchist knows the whereabouts of a powerful bomb set to kill scores of civilians.
A kidnapper has hidden a four-year-old in a makeshift underground box, holding out for a ransom.
An authoritarian government, feeling threatened, has identified the ringleader of swelling political street opposition, and wants to know his accomplices’ names.
Soldiers have a high-ranking captive who knows details of the enemy’s plans to launch a counteroffensive.
A kingpin drug supplier, with his metastasised network of street traffickers, routinely distributes highly contaminated drugs, resulting in a rash of deaths...

Do any of these hypothetical and real-world events, where information needs to be extracted for urgent purposes, rise to the level of justifying torture? Are there other cases in which society ought morally to consent to torture? If so, for what purposes? Or is torture never morally justified?

One common opinion is that if the outcome of torture is information that saves innocent lives, the practice is morally justified. I would argue that there are at least three aspects to this claim:
  • the multiple lives that will be saved (traded off against the fewer), sometimes referred to as ‘instrumental harm’; 
  • the collective innocence, in contrast to any aspect of culpability, of those people saved from harm; and
  • the overall benefit to society, as best can credibly be predicted with information at hand.
The 18th-century philosopher Jeremy Bentham’s famous phrase that ‘It is the greatest good for the greatest number of people which is the measure of right and wrong’ seems to apply here. Historically, many people have found, rightly or not, that this principle of the ‘greatest good for the greater number’ rises to the level of common sense, as well as proving simpler to apply in establishing one’s own life doctrine than competing standards — such as discounting outcomes for chosen behaviours.
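
To make the shape of this calculus explicit, here is a minimal sketch in Python. Everything in it is an assumption of the toy model, not of Bentham: the numbers, the idea that lives and harms can be scored on a single scale, and the probability that coerced information is true (anticipating Aristotle's objection, quoted later) are all invented for illustration.

```python
# A toy consequentialist tally for the 'ticking bomb' scenario above.
# All weights and counts are invented for illustration; reducing the
# moral question to this arithmetic is precisely what critics of
# utilitarianism object to.

def expected_net_benefit(lives_saved: int,
                         p_information_true: float,
                         harm_inflicted: float) -> float:
    """Crude 'greatest good' score: expected lives saved minus harm done."""
    return lives_saved * p_information_true - harm_inflicted

# Option 1: torture the suspect. Perhaps fifty lives saved, but the
# information is unreliable, and grave harm is inflicted on one person.
torture = expected_net_benefit(lives_saved=50,
                               p_information_true=0.3,
                               harm_inflicted=1.0)

# Option 2: abstain. No harm inflicted, no information gained.
abstain = expected_net_benefit(lives_saved=0,
                               p_information_true=0.0,
                               harm_inflicted=0.0)

print(f"torture: {torture:+.1f}, abstain: {abstain:+.1f}")
# The tally favours torture whenever lives_saved * p exceeds the harm,
# which is why deontologists reject framing the question as a tally.
```

Note how the entire moral controversy hides inside the choice of scale for harm_inflicted: the code decides nothing that the modeller has not already decided.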

Other thinkers, such as Joseph Priestley (18th century) and John Stuart Mill (19th century), expressed similar utilitarian arguments, though using the word ‘happiness’ rather than ‘benefit’. (Both terms might, however, strike one as equally cryptic.) Here, the standard of morality is not a rulebook rooted in solemnised creed, but a standard based in everyday principles of usefulness to the many. Torture, too, may be looked at in that light, speaking to factors like human rights and dignity — or whether individuals, by virtue of the perceived threat, forfeit those rights.

Utilitarianism has been criticised, however, for its obtuse ‘the ends justify the means’ mentality — an approach complicated by the difficulty of predicting consequences. Similarly, some ‘bills of rights’ have attempted to provide pushback against the simple calculus of benefiting the greatest number. Instead, they advance legal positions aimed at protecting the welfare of the few (the minority) against the possible tyranny of the many (the majority). ‘Natural rights’ — the right to life and liberty — inform these protective constitutional provisions.

If torture is approved in some situations — ‘extreme cases’ or ‘emergencies’, as society might tell itself — the bar might be lowered over time. As a possible fast track to remedying a threat — maybe an extra-judicial fast track — torture is tempting, especially when used ‘for defence’. The unease is that torture may turn into an obligation — if shrouded in an alleged moral imperative, perhaps exploiting a permissive legal system. This dynamic may prove alluring if society finds it expeditious to shoehorn more cases into the hard-to-parse category of ‘existential risk’.

What remains key is whether society can be trusted to make such grim moral choices — such as those requiring the resort to torture. This blurriness has propelled some toward an ‘absolutist’ stance, censuring torture in all circumstances. The French poet Charles Baudelaire felt that ‘Torture, as the art of discovering truth, is barbaric nonsense’. Paradoxically, however, an absolute ban on torture might itself be regarded as immoral, if the result is the death of a kidnapped child or of scores of civilians. That said, there’s no escaping the reality that torture inflicts pain (physical and/or mental), shreds human dignity, and curbs personal sovereignty. To some, many even, it thus must be viewed as reprehensible and irredeemable — decoupled from outcomes.

This is especially apparent if torture is administered to inflict pain, terrorise, humiliate, or dehumanise for purposes of deterrence or punishment. But even if torture is used to extract information — information perhaps vital, as per the scenarios listed at the beginning — there is a problem: the information acquired is suspect, tales invented just to stop pain. Long ago, Aristotle stressed this point, saying plainly: ‘Evidence from torture may be considered utterly untrustworthy’. Even absolutists, however, cannot avoid involvement in defining what rises to the threshold of clearer-cut torture and what perhaps falls just below it: grist for considerable, contentious debate.

The question remains: can torture ever be justified? And, linked to this, which moral principles might society want to normalise? Is it true, as the French philosopher Jean-Paul Sartre noted, that ‘Torture is senseless violence, born in fear’? As societies grapple with these questions, they reduce the alternatives to two: blanket condemnation of torture (and acceptance of possible dire, even existential consequences of inaction); or instead acceptance of the utility of torture in certain situations, coupled with controversial claims about the correct definitions of the practice.


I would argue one might morally come down on the side of the defensible utility of the practice, albeit in agreed-upon circumstances (like some of those listed above), where human rights are robustly aired side by side with the exigent dangers, potential aftermaths of inertia, and hard choices societies face.

01 December 2019

Picture Post #51: Nobody Excluded



'Because things don’t appear to be the known thing; they aren’t what they seemed to be neither will they become what they might appear to become.' 

Paris, October 2019.
Picture credit: Olivia Galisson

Posted by Tessa den Uyl

Activists draw attention to global ecological devastation in front of the fountain of the Place du Châtelet. This monument was ordered by Napoleon in 1806, with sculpture by Louis-Simon Boizot. It pays tribute to victories achieved in battle, and reminds us of Napoleon’s decision to provide free drinking water to all Parisians.

Victories bring statues, which serve historical commemoration -- though foremost, symbolically, they are built upon the idea of a future. A future that, seen from that once-upon-a-time perspective, might scarcely have been imaginable as it would actually turn out.

The beginning of the world, like its end, is not new to our imagination. But things have changed. We have interfered too much in the flux of ecology, for profit. We might think we are smart, but how smart we truly are remains to be proven. For neither rage nor love might provide a statue to remember.

This planet does not care about our extinction. Though we are this planet -- for without it, we simply wouldn’t be. This is not new to our imagination. More recent, instead, is the question whether our extinction is truly a problem, or whether we make it a problem because we have created a mess. This time, what is foreseen is that nobody is excluded.

24 November 2019

Prosthetics of the Brain


Posted by Emile Wolfaardt

Some creatures are able to regrow lost limbs (like crayfish, salamanders, starfish and some spiders). As humans, we are not as advanced in that department. But we can create such limbs – conventional prosthetics – artificial limbs or organs designed to provide (some) return of function. Some replacements, like glass eyes, don’t even provide that – they don’t see better, they simply look better. But a new wave of smart prosthetics is busy changing all that.

Bionic eyes are surgically implanted and connect with retinal neurons, recreating the transduction of light information to the brain – so the brain can once again ‘see’. Bionic lenses provide augmented abilities, enabling eyes to see three times better than ‘perfect vision’. Bionic eyes will have all the abilities of modern visual technology – night vision, heat sensing, telescopic, infra-red and x-ray vision – and other augmented abilities. Likewise, other prosthetics will become smart, enhancing the human experience with enhanced reality.

The latest innovation in prosthetics is the revolutionary addition of machine learning and AI. Here, the wave of change will be of tsunamic proportions. Bioengineers are pushing impressively into this frontier, merging the human experience with superhuman abilities. The new field of development is the power of ‘smart brains’ – or neuro-mechanical algorithmic collaboration – where artificial intelligence, machine learning, and the human brain interface to create a brand-new human experience.

Neuro-mechanical algorithmic collaboration may sound like a huge tongue twister – but you already know what it means. Let’s parse it: neuro- (of the brain), mechanical (of machines), algorithmic (all information, human or machine, is processed by way of algorithms), collaboration (working together). These BMIs (Brain-Machine Interfaces) will become the norm of our future. What does that look like? The end result is the human brain having access to any and all information instantly, being able to share it with others seamlessly, and to interpolate it into the situation appropriately.

For instance, a doctor in the middle of a surgery observes an unexpected bleed, instantly pulls up in his brain the last twenty occurrences of that bleed in similar situations, and is able to identify the likely cause and select the best solution. Or you and I could have this conversation brain to brain, without telephones or other devices – simply using brain-to-brain communication. While that seems like a huge concept, in one sense it is not very different from what we do all the time. We use technology – the cell-phone – to communicate thoughts from one brain to another. Imagine using technology to remove the need for the cell-phone. That is brain-to-brain communication.

There is a rat in a cage at Duke University in the USA. In front of him are two glass doors that cannot open. He has a probe in his brain that links to a computer. In Brazil, there is another rat with a similar probe in his brain. In front of him are two wooden doors that he cannot see through. Place a treat behind one of the glass doors in front of the rat in the USA, and his brain tells the rat in Brazil which door to open. That is brain-to-brain communication. Remove the probe (go wireless) and we have innate brain-to-brain communication.
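
To make the information flow concrete, here is a toy sketch of the relay logic in Python. It is purely illustrative: the classes, the five-‘neuron’ encoding, and the noise model are all invented for this sketch, and real brain-machine interfaces decode noisy neural population activity rather than clean bits.

```python
# Toy model of the Duke two-rat experiment: an 'encoder' rat's brain
# activity is read out, sent over a channel, and replayed as stimulation
# in a 'decoder' rat's brain. All classes here are invented for illustration.

import random

class EncoderRat:
    """Sees which door hides the treat; its 'neurons' encode the choice."""
    def sense_treat(self, left_door_has_treat: bool) -> list[int]:
        # Pretend a tiny population of neurons fires for 'left' vs 'right'.
        base = [1, 1, 1, 0, 0] if left_door_has_treat else [0, 0, 1, 1, 1]
        # Real recordings are noisy; occasionally flip a bit.
        return [b ^ (random.random() < 0.05) for b in base]

class DecoderRat:
    """Receives stimulation and picks a door it cannot see through."""
    def choose_door(self, stimulation: list[int]) -> str:
        # Majority vote over the stimulated 'neurons'.
        return "left" if sum(stimulation[:2]) >= sum(stimulation[-2:]) else "right"

encoder, decoder = EncoderRat(), DecoderRat()
spikes = encoder.sense_treat(left_door_has_treat=True)   # probe reads one brain
choice = decoder.choose_door(spikes)                     # probe writes the other
print(f"decoder rat opens the {choice} door")
```

Replace the in-memory hand-off with a network socket and the sketch becomes the intercontinental arrangement the paragraph describes; the hard part, as the next paragraph notes, is the neural reading and writing, not the channel.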

There are many, many challenges before this can become a functional reality – but it is within sight. Amongst the biggest are mapping the human brain sufficiently that we know which neurons to fire, and creating a wireless connection with enough bandwidth to relay the enormous amount of information required to transmit even a single thought. We are making progress. Elon Musk is one of the innovators in this field: he currently suggests he can make changes to the brain to address Parkinson’s, Alzheimer’s, autism, and other brain disorders.

Scientists can control the movement of a rat with a PlayStation-type remote control: have it climb a ladder, jump off a ledge higher than it would comfortably jump from, then inject endorphins into the rat’s brain to make the jump feel good.

Who knows – perhaps the opportunity lies ahead to correct socially disruptive behaviour, or criminal thinking? Would that be more effective than incarceration? Who knows – perhaps couples will be able to release endorphins into each other’s brains to establish a sense of bliss? Who knows – perhaps we will be able to enhance our brains so that our knowledge is infinite, our character impeccable, and our reality phenomenal? If so, we shall be able to create our own reality, a world in which we and others live in peace and happiness. We can have the life we want in the world we choose.

Who would not want that? Or would they?



Further reading:

https://waitbutwhy.com/2017/04/neuralink.html



17 November 2019

Getting the Ethics Right: Life and Death Decisions by Self-Driving Cars

Yes, the ethics of driverless cars are complicated.
Image credit: Iyad Rahwan
Posted by Keith Tidman

In 1967, the British philosopher Philippa Foot, daughter of a British Army major and sometime flatmate of the novelist Iris Murdoch, published an iconic thought experiment illustrating what forever after would be known as ‘the trolley problem’. These are problems that probe our intuitions about whether it is permissible to kill one person to save many.

The issue has intrigued ethicists, sociologists, psychologists, neuroscientists, legal experts, anthropologists, and technologists alike, with recent discussions highlighting its potential relevance to future robots, drones, and self-driving cars, among other ‘smart’, increasingly autonomous technologies.

The classic version of the thought experiment goes along these lines: The driver of a runaway trolley (tram) sees that five people are ahead, working on the main track. He knows that the trolley, if left to continue straight ahead, will kill the five workers. However, the driver spots a side track, where he can choose to redirect the trolley. The catch is that a single worker is toiling on that side track, who will be killed if the driver redirects the trolley. The ethical conundrum is whether the driver should allow the trolley to stay the course and kill the five workers, or alternatively redirect the trolley and kill the single worker.

Many twists on the thought experiment have been explored. One, introduced by the American philosopher Judith Thomson a decade after Foot, involves an observer, aware of the runaway trolley, who sees a person on a bridge above the track. The observer knows that if he pushes the person onto the track, the person’s body will stop the trolley, though killing him. The ethical conundrum is whether the observer should do nothing, allowing the trolley to kill the five workers. Or push the person from the bridge, killing him alone. (Might a person choose, instead, to sacrifice himself for the greater good by leaping from the bridge onto the track?)

The ‘utilitarian’ choice, where consequences matter, is to redirect the trolley and kill the lone worker — or in the second scenario, to push the person from the bridge onto the track. This ‘consequentialist’ calculation, as it’s also known, results in the fewest deaths. On the other hand, the ‘deontological’ choice, where the morality of the act itself matters most, obliges the driver not to redirect the trolley because the act would be immoral — despite the larger number of resulting deaths. The same calculus applies to not pushing the person from the bridge — again, despite the resulting multiple deaths. Where, then, does one’s higher moral obligation lie; is it in acting, or in not acting?

The ‘doctrine of double effect’ might prove germane here. The principle, introduced by Thomas Aquinas in the thirteenth century, says that an act that causes harm, such as injuring or killing someone as a side effect (‘double effect’), may still be moral as long as it promotes some good end (as, let’s say, saving five lives rather than just the one).

Empirical research has shown that most people find it easier to redirect the runaway trolley toward the one worker — a utilitarian basis — whereas the visceral unease at pushing a person off the bridge is overwhelming — a deontological basis. Although both acts involve intentionality — resulting in killing one rather than five — it is seemingly less morally offensive to impersonally pull a lever to redirect the trolley than to place hands on a person and push him off the bridge, sacrificing him for the good of the many.

In similar practical spirit, neuroscience has interestingly connected these reactions to regions of the brain, to show neuronal bases, by viewing subjects in a functional magnetic resonance imaging (fMRI) machine as they thought about trolley-type scenarios. Choosing, through deliberation, to steer the trolley onto the side track, reducing loss of life, resulted in more activity in the prefrontal cortex. Thinking about pushing the person from the bridge onto the track, with the attendant imagery and emotions, resulted in the amygdala showing greater activity. Follow-on studies have shown similar responses.

So, let’s now fast forward to the 21st century, to look at just one way this thought experiment might, intriguingly, become pertinent to modern technology: self-driving cars. The aim is to marry function and increasingly smart, deep-learning technology. The longer-range goal is for driverless cars to consistently outperform humans along various critical dimensions, especially human error (the latter estimated to account for some ninety percent of accidents) — while nontrivially easing congestion, improving fuel mileage, and polluting less.

As developers step toward what’s called ‘strong’ artificial intelligence — where AI (machine learning and big data) becomes increasingly capable of human-like functionality — automakers might find it prudent to fold ethics into their thinking. That is, to consider the risks on the road posed to self, passengers, drivers of other vehicles, pedestrians, and property. With the trolley problem in mind, ought, for example, the car’s ‘brain’ favour saving the driver over a pedestrian? A pedestrian over the driver? The young over the old? Women over men? Children over adults? Groups over an individual? And so forth — teasing apart the myriad conceivable circumstances. Societies, drawing from their own cultural norms, might call upon the ethicists and other experts mentioned in the opening paragraph to help get these moral choices ‘right’, in collaboration with policymakers, regulators, and manufacturers.
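
One way to picture what folding ethics into the car’s ‘brain’ might mean computationally is a scoring function over candidate manoeuvres, with the contested moral weights exposed as parameters. The sketch below is hypothetical from end to end: the outcome categories, numbers, and weight profiles are assumptions for illustration, and no manufacturer’s actual system is being described.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """Predicted result of one candidate manoeuvre (illustrative fields)."""
    harm_to_passengers: float    # all on an invented 0..1 scale
    harm_to_pedestrians: float
    harm_to_other_drivers: float
    property_damage: float

def score(o: Outcome, w: dict[str, float]) -> float:
    """Lower is better. The weights ARE the ethics, and the controversy."""
    return (w["passengers"] * o.harm_to_passengers
            + w["pedestrians"] * o.harm_to_pedestrians
            + w["other_drivers"] * o.harm_to_other_drivers
            + w["property"] * o.property_damage)

# Two hypothetical manoeuvres in an unavoidable-collision scenario.
swerve = Outcome(0.6, 0.0, 0.1, 0.8)   # risks the passenger, spares the pedestrian
brake  = Outcome(0.1, 0.7, 0.0, 0.2)   # protects the passenger, risks the pedestrian

# A society that weighs pedestrians equally with passengers chooses
# differently from one that favours the buyer of the car.
egalitarian     = {"passengers": 1.0, "pedestrians": 1.0, "other_drivers": 1.0, "property": 0.1}
self_protective = {"passengers": 2.0, "pedestrians": 0.5, "other_drivers": 0.5, "property": 0.1}

for name, w in [("egalitarian", egalitarian), ("self-protective", self_protective)]:
    best = min([swerve, brake], key=lambda o: score(o, w))
    print(name, "->", "swerve" if best is swerve else "brake")
```

The point of the sketch is that the minimisation itself is trivial; choosing the weights is where the trolley problem re-enters, and that choice belongs to societies and their ethicists rather than to the code.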

Thought experiments like this have gained new traction in our techno-centric world, including the forward-leaning development of ‘strong’ AI, big data, and powerful machine-learning algorithms for driverless cars: vital tools needed to address conflicting moral priorities as we venture into the longer-range future.

10 November 2019

God: a New Argument from Design

The game of our universe does not reveal sameness

Posted by Thomas Scarborough


The venerable ‘argument from design’ proposes that the creation reveals a Creator. More than this, that the creation reveals the power and glory of God. Isaac Newton was one among many who believed it—stating in an appendix to his Principia, the Mathematical Principles of Natural Philosophy of 1687:
‘This most elegant system of the sun, planets, and comets could not have arisen without the design and dominion of an intelligent and powerful being.’
The trouble is, there are alternative explanations for design—in fact complete, coherent explanations. To put it in a nutshell, there are other ways that order and design can come about. So, today, the argument is often said to be inconclusive. The evolutionary biologist, Richard Dawkins, writes that it is ‘unanswerable’—which is not to say, however, that it is disproven.

Yet suppose that we push the whole argument back—back beyond all talk of power and glory—back beyond the simplest conceptions of design, to a core, a point of ‘ground zero’. Here we find the first and most basic characteristic of design: it is more than chaos or, alternatively, it is more than featurelessness.

On the surface of it, our universe ought to be only one or the other. Our universe is governed by laws which ought not to produce any more than chaos on the one hand, or featurelessness on the other. We might use the analogy of a chess game, although the analogy only goes so far.* A careful observer of a chess match reports that the entire game is governed by rules, and there is no departure from such rules.

Yet there is clearly, at the same time, something happening in the game at a different level. Games get won, and games get lost, and games play out in different ways each time. There is something beyond the laws. We may even infer that there is intelligence behind each game – but let us not rush to go that far.

However, without seeing the players, one could assume that they must exist—or something which resembles them. To put it as basically as we can: the game lacks sameness from game to game—whether this be the sameness of chaos or the sameness of featurelessness. Something else is happening there. Now apply this to our universe. We ought to see complete chaos, or we ought to see complete featurelessness. We ought not to see asymmetry or diversity, or anything of that sort—let alone anything which could resemble design.

The problem is familiar to science. The physicist, Stephen Hawking, wrote:
‘Why is it (the universe) not in a state of complete disorder at all times? After all, this might seem more probable.’
That is, there is no good explanation for it. Given the laws of nature, we cannot derive from them a universe which is as complex as the one we see. On the other hand, biologist Stuart Kauffman writes,
‘We have no adequate theory for why our universe is complex.’
This is the opposite view. We ought not to see any complexity emerging. No matter what degree of complexity we find today, whether it be Newton's system of the universe, or the basic fact that complexity exists, it should not happen. It is as if there is more than the rules—because the game of our universe does not reveal sameness.
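
The intuition that fixed rules ‘ought’ to yield either sameness or noise can be tested against a standard computational toy: an elementary cellular automaton. The sketch below (offered only as an illustration, not as part of the original argument) applies one simple, uniform rule, Rule 110, and prints a history that is neither featureless nor pure chaos.

```python
# Rule 110: a one-dimensional cellular automaton with a single fixed rule.
# The rule is about as simple as a 'law' can get, yet the history it
# generates shows persistent structures: neither uniform nor random.

RULE = 110
WIDTH, STEPS = 64, 24

def step(cells: list[int]) -> list[int]:
    """Apply the rule to every cell, using wrap-around neighbours."""
    out = []
    for i in range(len(cells)):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % len(cells)]
        neighbourhood = (left << 2) | (centre << 1) | right
        out.append((RULE >> neighbourhood) & 1)  # look up the rule's bit
    return out

cells = [0] * WIDTH
cells[WIDTH // 2] = 1          # a single 'seed' cell
for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Of course, the analogy inherits the chess caveat in the footnote: the rule itself is already special, which is exactly where the argument relocates the question.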

This idea of ‘more’—of different levels of reality—has been seriously entertained by various scientists. The science writer Natalie Wolchover says, ‘Space-time may be a translation of some other description of reality,’ and while she does not propose the existence of the supernatural, the idea of some other description of reality could open the door to this.

Call this the ‘ground zero’, the epicentre of the argument from design. There is something going on, at a level we do not see, which we may never discover by examining the rules. In the analogy of the chess game, where we observe something beyond the rules, we may not be able to tell what that something is—yet it is clear that it is.

This argument differs from the familiar version of the theological argument from design, which generally assumes that God created the rules which the design displays. On the contrary, this argument proposes that God may exist beyond the rules, through the very fact that we see order.



* A problem with the analogy is that a chess game manifests complexity to begin with. The important point is, however, that the game reveals more than it should.

03 November 2019



'Because things don’t appear to be the known thing; they aren’t what they seemed to be neither will they become what they might appear to become.' 

Posted by Jeremy Dyer *

This is a detail from a great work of art. Which one? Whose? We are expected to admire it, to marvel and to learn. 

What if I told you that it was a detail from one of Pollock's works? Would you then try to 'see' the elusive essence of it? On the other hand, what if I told you it was merely a photo from above the urinal in a late-night restaurant? Does that make it any more or less 'art'? 

If everything is art—the sacred mantra—then the reverse corollary must also be true. Nothing is art.


* Jeremy Dyer is an acclaimed Cape Town artist.

27 October 2019

The Politics of the Bridge


Posted by Martin Cohen

Bridges are the stuff of superlatives and parlour games. Which is the longest bridge in the world? The tallest? The most expensive? And then there's also a prize which few seem to compete for - the prize for being the most political. The British Prime Minister Boris Johnson's surprise proposal in September for a feasibility study for a bridge to Ireland threatens to scoop the pot.

But then, what is it about bridges and Mr. Johnson? Fresh from the disaster, at least in public relations terms, of his ‘Garden bridge’ (pictured above) over the river Thames, the one that Joanna Lumley said would be a “floating paradise”, the “tiara on the head of our fabulous city” and was forecast to cost £200 million before the plug was pulled on it (leaving Londoners with bills of £48 million for nothing), he announces a new bridge - this time connecting Northern Ireland across seas a thousand feet deep to Stranraer in Scotland. This one would cost a bit too - albeit Johnson suggests it would be value for money at no more than £15 billion.

If Londoners choked on a minuscule fraction of that for their new bridge, it is hard to see how this new one could be afforded. Particularly as large-scale public works don't exactly have a good reputation for coming in on budget.
The 55-kilometre bridge–tunnel system of the Hong Kong-Zhuhai-Macau bridge that opened last year was constructed only after delays, corruption and accidents had put its cost up to 48 billion yuan (about £5.4 billion).

When wear and tear to the eastern span of the iconic San Francisco Bay bridge became too bad to keep patching, an entirely new bridge was built to replace it, at a final price tag of $6.5 billion (about £5.2 billion), a remarkable sum in its own right but all more indigestible because it represented a 2,500% cost overrun from the original estimate of $250 million.
Grand public works are always political. For a start, there is the money to be made on the contract, but there is also the money to be made from interest on the loans obtained. Money borrowed at a low rate from governments can be re-lent at a higher rate. Even when they are run scrupulously, bridges are, like so many large construction projects, money-go-rounds.

And yet, bridges have a good image, certainly compared to walls. They are said to unite, where barriers divide. "Praise the bridge that carried you safe over" says Lady Duberly at breakfast, in George Colman's play The Heir at Law. But surface appearances can be deceptive. Bridges, as recent history has shown, have a special power to divide.

That Hong Kong bridge is also a way of projecting mainland Chinese power onto its fractious new family member. President Putin's $3.7 billion Kerch Strait Bridge joining Crimea to Russia was hardly likely, as he put it, to bring “all of us closer together”. Ukrainians and the wider international community considered the bridge to be reinforcing Russia's annexation of the peninsula. And if bridges are often favourably contrasted with walls, this one, it soon emerged, functioned as both: no sooner was the bridge completed than shipping trying to sail under it began to be obstructed. No wonder Ukraine believes that there was an entirely negative and carefully concealed political rationale for the bridge: to impose an economic stranglehold on Ukraine and cripple its commercial shipping industry in the Azov Sea.

In this sense, a bridge to Northern Ireland seems anything but a friendly gesture by the British; rather, it smacks of old-style colonialism.

But perhaps the saddest bridge of them all was the sixteenth century Old Bridge at Mostar, commissioned by Suleiman the Magnificent in 1557 and connecting the two sides of the old city. Upon its completion it was the widest man-made arch in the world, towering forty meters (130 feet) over the river. Yet it was constructed and bound not with cement but with egg whites. No wonder, according to legend, the builder, Mimar Hayruddin, whose conditions of employment apparently included his being hanged if the bridge collapsed, carefully prepared for his own funeral on the day the scaffolding was finally removed from the completed structure.

In fact, the bridge was a fantastic piece of engineering and stood proud - until, that is, 1993, when Croatian nationalists, intent on dividing the communities on either side of the river, collapsed it in a barrage of artillery shells. Thus the bridge once compared to a ‘rainbow rising up to the Milky Way’ became instead a tragic monument to hatred.

20 October 2019

Humanism: Intersections of Morality and the Human Condition

Kant urged that we ‘treat people as ends in themselves, never as means to an end’
Posted by Keith Tidman

At its foundation, humanism’s aim is to empower people through conviction in the philosophical bedrock of self-determination and people’s capacity to flourish — to arrive at an understanding of truth and to shape their own lives through reason, empiricism, vision, reflection, observation, and human-centric values. Humanism casts a wide net philosophically — ethically, metaphysically, sociologically, politically, and otherwise — for the purpose of doing what’s upright in the context of individual and community dignity and worth.

Humanism provides social mores, guiding moral behaviour. The umbrella aspiration is unconditional: to improve the human condition in the present, while endowing future generations with progressively better conditions. The prominence of the word ‘flourishing’ is more than just rhetoric. In placing people at the heart of affairs, humanism stresses the importance of the individual living both free and accountable — to hand off a better world. In this endeavour, the ideal is to live unbound by undemocratic doctrine, instead prospering collaboratively with fellow citizens and communities. Immanuel Kant underscored this humanistic respect for fellow citizens, urging quite simply, in the Groundwork of the Metaphysics of Morals, that we ‘treat people as ends in themselves, never as means to an end’.

The history of humanistic thinking is not attributed to any single proto-humanist. Nor has it been confined to any single place or time. Rather, humanist beliefs trace a path through the ages, being reshaped along the way. Among the instrumental contributors were Gautama Buddha in ancient India; Lao Tzu and Confucius in ancient China; Thales, Epicurus, Pericles, Democritus, and Thucydides in ancient Greece; Lucretius and Cicero in ancient Rome; Francesco Petrarch, Sir Thomas More, Michel de Montaigne, and François Rabelais during the Renaissance; and Daniel Dennett, John Dewey, A.J. Ayer, A.C. Grayling, and Bertrand Russell among the modern humanist-leaning philosophers. (Dewey contributed, in the early 1930s, to drafting the original Humanist Manifesto.) The point is that the story of humanism is one of ubiquity and variety; if you’re a humanist, you’re in good company. The English philosopher A.J. Ayer, in The Humanist Outlook, aptly captured the philosophy’s human-centric perspective:

‘The only possible basis for a sound morality is mutual tolerance and respect; tolerance of one another’s customs and opinions; respect for one another’s rights and feelings; awareness of one another’s needs’.

For humanists, moral decisions and deeds do not require a supernatural, transcendent being. To the contrary: the almost-universal tendency to anthropomorphise God, to attribute human characteristics to God, is an expedient to help make God relatable and familiar that can, at the same time, prove disquieting to some people. Rather, humanists’ belief is generally that any god, no matter how intense one’s faith, can only ever be an unknowable abstraction. To that point, the opinion of the eighteenth-century Scottish philosopher David Hume — ‘A wise man proportions his belief to the evidence’ — goes to the heart of humanists’ rationalist philosophy regarding faith. Yet, theism and humanism can coexist; they do not necessarily cancel each other out. Adherents of humanism have been religious, agnostic, and atheist — though it’s true that secular humanism, as a subspecies of humanism, rejects a religious basis for human morality.

For humanists there is typically no expectation of after-life rewards and punishments, mysteries associated with metaphorical teachings, or inspirational exhortations by evangelising trailblazers. There need be no ‘ghost in the machine’, to borrow an expression from British philosopher Gilbert Ryle: no invisible hand guiding the laws of nature, or making exceptions to nature’s axioms simply to make ‘miracles’ possible, or swaying human choices, or leaning on so-called revelations and mysticism, or bending the arc of human history. Rather, rationality, naturalism, and empiricism serve as the drivers of moral behaviour, individually and societally. The pre-Socratic philosopher Protagoras summed up these ideas about the challenges of knowing the supernatural:

‘About the gods, I’m unable to know whether they exist or do not exist, nor what they are like in form: for there are things that hinder sure knowledge — the obscurity of the subject and the shortness of human life’.

The critical thinking that’s fundamental to pro-social humanism thus moves the needle from an abstraction to the concreteness of natural and social science. And the handwringing over issues of theodicy no longer matters; evil simply happens naturally and unavoidably, in the course of everyday events. In that light, human nature is recognised not to be perfectible, but nonetheless can be burnished by the influences of culture, such as education, thoughtful policymaking, and exemplification of right behaviour. This model assumes a benign form of human centrism. ‘Benign’ because the model rejects doctrinaire ideology, instead acknowledging that while there may be some universal goods cutting across societies, moral decision-making takes account of the often-unique values of diverse cultures.

A quality that distinguishes humanity is its persistence in bettering the lot of people. Enabling people to live more fully — from the material to the cultural and spiritual — is the manner in which secular humanism embraces its moral obligation: obligation of the individual to family, community, nation, and globe. These interested parties must operate with a like-minded philosophical belief in the fundamental value of all life. In turn, reason and observable evidence may lead to shared moral goods, as well as progress on the material and immaterial sides of life’s ledger.

Humanism acknowledges the sanctification of life, instilling moral worthiness. That sanctification propels human behaviour and endeavour: from progressiveness to altruism, a global outlook, critical thinking, and inclusiveness. Humanism aspires to the greater good of humanity through the dovetailing of various goods: ranging across governance, institutions, justice, philosophical tenets, science, cultural traditions, mores, and teachings. Collectively, these make social order, from small communities to nations, possible. The naturalist Charles Darwin addressed an overarching point about this social order:

‘As man advances in civilisation, and small tribes are united into larger communities, the simplest reason would tell each individual that he ought to extend his social instincts and sympathies to all the members of the same nation, though personally unknown to him’.

Within humanism, systemic challenges regarding morality present themselves: what people can know about definitions of morality; how language bears on that discussion; the value of benefits derived from decisions, policies, and deeds; and, thornily, deciding what actually benefits humanity. There is no taxonomy of all possible goods, for handy reference; we’re left to figure it out. There is no single, unconditional moral code, good for everyone, in every circumstance, for all time. There is only a limited ability to measure the benefits of alternative actions. And there are degrees of confidence and uncertainty in the ‘truth-value’ of moral propositions.

Humanism empowers people not only to help avoid bad results, but to strive for the greatest amount of good for the greatest number of people — a utilitarian metric, based on the consequences of actions, famously espoused by the eighteenth-century philosopher Jeremy Bentham and nineteenth-century philosopher John Stuart Mill, among others. It empowers society to tame conflicting self-interests. It systematises the development of right and wrong in the light of intent, all the while imagining the ideal human condition, albeit absent the intrusion of dogma.

Agency in promoting the ‘flourishing’ of humankind, within this humanist backdrop, is shared. People’s search for truth through natural means, to advance everyone’s best interest, is preeminent. Self-realisation is the central tenet. Faith and myth are insufficient. As modern humanism proclaims, this is less a doctrine than a ‘life stance’. Social order, forged on the anvil of humanism and its core belief in being wholly responsible for our own choices and lives, through rational measures, is the product of that shared agency.
