16 July 2017

Pity the Fundamentalists

Posted by Mirjam Scarborough* with Thomas Scarborough

Ferdinand Hodler - Die Lebensmüden (Tired of Life) 1892

What is it that sustains the fundamentalist?  I should say, the religious fundamentalist – in particular, the fundamentalist who is willing to give up everything for God.  Of this description, there are fundamentalists of many kinds: missionaries, militants, medics, volunteers – priests, nuns, imams, rabbis – anyone for whom God means the world, and who proves it by his or her sacrifice.

I had the privilege of researching religious fundamentalism through a ten-year study of women missionaries in Central Africa.  These represented the most committed members of fundamentalist faith**.  In a sense, they were the foot soldiers of the avant-garde.  They were the leading edge – ready to give up everything in the name of God: friends, comforts, health, security, even freedom, children, life itself.

The reason why they did it, not unexpectedly, was that fundamentalists see themselves as being under orders – and these are orders from God himself.  The orders may go by various names: God's summons, commission, commandment, burden, among other terms.  In the case of the women missionaries, it was a 'call'. 

This call from God is not 'empty', so to speak.  It is rich in content.  Yet one thing characterises it above all.  God’s demands are high.  His paragons are perfect.  One gives much and expects little.  The orders which the religious fundamentalist receives make the highest demands – indeed they represent, generally speaking, the hardest tasks that anyone may aspire to. 

The question of my research was simple: 'What is it that sustains such a call?'

My intuitive answer was: God himself.  It is sufficient to know that one is called by God, to see one through any challenge and hardship on earth – and then, in many cases, to add to it love.  In fact, this proved to be true, and was borne out by the research.  Orders from God – or the perception of orders from God – encouraged the religious fundamentalist to great commitment and endurance, heroism and sacrifice.

However, it didn’t last.  It couldn’t last.  In the long term – which was about four years – the heroes crumbled.  There were intense stresses.  Their expectations were deeply challenged.  They suffered severe emotional trauma and exhaustion.  In fact, it was accepted as the norm that one would 'break down' in year four.  Most, if not all, of the missionaries I interviewed needed medical interventions to stabilise their condition.

Even then, statistically, 50% of them were lost to the mission every thirteen years. Worse than that, anecdotal evidence showed that their spouses and children may have suffered the deepest trauma.

The call of God sustained them at first. It inspired them to extraordinary commitment and endurance. Up to a point, my assumptions were on target. Those who feel that they are called by God – perhaps ordered, summoned, commanded, commissioned by him – are sustained by the call. But as months grow into years, they nearly all crumble. They are utterly depleted.

However, there was a surprise.  Some of fundamentalism's foot soldiers – far from all – rebounded.  After their first leave of absence, broken and beaten, they returned to the mission field repaired, if not refreshed – to the very same circumstances – never to experience such crisis again.

What changed?  It was not their fundamentalism, really.  They did not lose the sense of being under the call and commandment of God, nor did they feel in any way that his high demands had slipped.  But they let go of personal effort, and they trusted God to do it – in spite of them.

It all turned, therefore, on their understanding of their trust in God. No longer did they trust God to give them super-human powers for the task, or an indomitable will.  Rather, in brokenness, they trusted him to bless their great weakness.  Some called it the purification of the call.  Some called it repentance.

It all hinged on this one thing: God is great – but he does not impart his greatness to us.  It does not rub off on mere mortals.

The religious fundamentalist – the avant-garde – missionaries, militants, medics, volunteers – priests, nuns, imams, rabbis – anyone for whom God means the world, and who proves it by their sacrifice – all are of only fleeting usefulness to the cause, if any usefulness at all, until their call is purified.  In fact, until then, if the anecdotal evidence is to be believed, they do great damage not only to themselves but to all those close to them.

It is tragedy and ruin – until they find a realistic sense of themselves, and a realistic sense of the God they serve.  Pity the religious fundamentalists, and all those near to them – at least, those whose call has not yet been 'cleansed'.



* Rev. Dr. Mirjam Scarborough (1957-2011) was a doctor of philosophy and a missiologist.
** My study included some who are more aptly referred to as 'revivalists'.

09 July 2017

Poetry: A Notice Offering Amnesty

Posted by Chengde Chen*



A Notice Offering Amnesty
Written after the Grenfell Tower fire

By Chengde Chen

To determine the numbers of dead,
The police appeal to the survivors:

‘Please let us know your situation
And that of others you may know of.
Don’t worry about your immigration status –
We will not report it to the Home Office,
Nor will the Home Office pursue it.
So, please contact us!’

I seem to be touched by this,
But don’t really know what for.
For humanity in the law?
Or because we’re guilty of so lacking in it,
That we have to sacrifice the law
To compensate?



* Chengde Chen is the author of Five Themes of Today, Open Gate Press, London. chengde@sipgroup.com

02 July 2017

Picture Post #26. Life-Matters



'Because things don’t appear to be the known thing; they aren’t what they seemed to be neither will they become what they might appear to become.'

Posted by Tessa den Uyl and Martin Cohen

Guatemala, 1968. Picture credit: Jill Gibson
A woman with a newborn passes by the word ‘Muerte’ written on the wall. Nothing could be more natural; birth and death simply belong to each other.

Which raises two questions: what happens when death becomes a symbol to reclaim something belonging to the past? And what happens when a distinction is made about who should, and who should not, live? Because then the right to live is not the same concept for all of us.

Suppose that birth is a concept about being, and death a concept about non-being; then whatever touches upon these concepts touches upon a principle. The problem is not birth, nor yet death itself. The problem is in the claims being made. To respect life means to respect death. Herein lies something universal.

A note by the photographer, Jill Gibson:

During the years 1966–1968, I was photo-documenting the work and progress of doctors who were examining the medical problems of children living in the pure Mayan village of Santa Maria Cauqué, located in the hills 30 minutes outside of Guatemala City. Some days I travelled with a doctor in a four-wheel-drive vehicle, up riverbed roads, for five to six hours just to reach remote villages. The doctor educated me about the United Fruit Company and its influence over the Guatemalan government, and the ramifications of U.S. involvement in the country. So, I believe the word Muerte, graffitied on the wall, has something to do with the resistance at the time.

There was in fact a lot of death going on then, as the country was immersed in military violence from 1965 through 1995. We saw it again first-hand in 1984. During these years, the Mayans were being annihilated.

25 June 2017

The Death Penalty: An Argument for Global Abolition


Posted by Keith Tidman

In 1957, Albert Camus wrote an essay called Reflections on the Guillotine. As well as arguing against the death penalty on grounds of principle, he also spoke of the ineffectiveness of the punishment:
‘According to one magistrate, the overwhelming majority of the murderers he had tried did not know, when they shaved themselves that morning, that they were going to kill someone that night. In short, capital punishment cannot intimidate the man who throws himself upon crime as one throws oneself into misery.’
To me, too, the death penalty is an archaic practice, a vestige with no place in a 21st-century world. In the arena of constitutional law, the death penalty amounts to ‘cruel and unusual’ (inhumane) punishment. In the arena of ethics, the death penalty is an immoral assault on human rights, dignity, and life’s preeminence.

Through the millennia, social norms habitually tethered criminal punishment to ‘retribution’ — which minus the rhetorical dressing distils to ‘revenge’. ‘Due process of law’ and ‘equal protection under the law’ were random, rare, and capricious. In exercising retribution, societies shunted aside the rule of authentic proportionality, with execution the go-to punishment for a far-ranging set of offenses, both big and small — murder only one among them. In some societies, matters like corruption, treason, terrorism, antigovernment agitation, and even select ‘antisocial’ behaviours likewise qualified for execution — and other extreme recourses — shades of which linger today.

Resort through the ages to state-sanctioned, ceremonial killing (and other severe corporal punishment) reflected the prevailing norms of societies, with little stock placed in the deep-rooted, inviolable value of human life. The aim was variously to control, coerce, impose suffering, and ultimately dehumanise — very much as enemies in war find it easier to kill if they create ‘subhuman’ caricatures of the enemy. Despite the death penalty’s barbarity, some present-day societies retain this remnant from humanity’s darker past: according to Amnesty International, twenty-three countries — scattered among the Asia-Pacific, Africa, the United States in the Americas, and Belarus in Europe — carried out executions in 2016, while fifty-five countries sentenced people to death that year.

But condemnation of the death penalty does not, of course, preclude imposing harsh punishment for criminal activity. Even the most progressive, liberally democratic countries, abiding by enlightened notions of justice, appropriately accommodate strict punishment — though well short of society’s premeditatedly killing its citizens through application of the death penalty. The aims of severe punishment may be several and, for sure, reasonable: to preserve social orderliness, disincentivise criminal behaviour, mollify victims, reinforce legal canon, express moral indignation, cement a vision of fairness, and reprimand those found culpable. Largely fair objectives, if exercised dispassionately through due process of law. These principles are fundamental and immutable to civil, working — and rules-based — societies. Nowhere, however, does the death penalty fit in there; and nowhere is it obvious that death is a proportionate (and just) response to murder.
________________________________________

‘One ought not return injustice
for injustice’ — Socrates
________________________________________

Let’s take a moment, then, to look at punishment. Sentencing may be couched as ‘consequentialist’, in which case punishment’s purpose is utilitarian and forward looking. That is, punishment for wrongdoing anticipates future outcomes for society, such as eliminating (or more realistically, curtailing) criminal behaviour. The general interest and welfare of society — decidedly abstract notions, subject to various definitions — serve as the desired and sufficient end state.

Alternatively, punishment may be couched as ‘deontological’. In that event, the deed of punishment is itself considered a moral good, apart from consequences. Deontology entails rules-based ethics — living under the rule of law, as a norm within either liberal or conservative societies and systems of governance — while still attaining retributive objectives. Or, commonly, punishment may be understood as an alliance of both consequentialism and deontology. Regardless of choice — whether emphasis is on consequentialism or deontology or a hybrid of the two — the risk of punishing the innocent, especially given the irreversibility of the death penalty in the case of discovered mistakes, looms large. As such, the choice among consequentialism, deontology, or a hybrid matters little to any attempt to support a case for capital punishment.

Furthermore, the meting out of justice works only if knowledge is reliable and certain. That is, knowledge of individuals’ culpability, the competence of defense and prosecutorial lawyers, unbiased evidence (both exculpatory and inculpatory), the randomness of convictions across demographics, the sense of just deserts, the fairness of particular punishments (proportionality), and the prospective benefits to society of specific punitive measures. Broadly speaking: what do we know, how do we know it, and what weight does each consideration carry — epistemological issues that are bound up with the ethical ones. In many instances, racial, ethnic, gender, educational, or socioeconomic prejudices (toward defendants and victims alike) skew considerations of guilt and, in particular, the discretionary imposition of the death penalty. In some countries, politics and ideology — even what’s perceived to threaten a regime’s legitimacy — may damn the accused. To those sociological extents, ‘equal protection of the law’ becomes largely moot.

Yet at the core, neither consequentialism — purported gains to society from punishment’s outcomes — nor deontology — purported intrinsic, self-evident morality of particular sentences — rises to the level of sufficiently undergirding the ethical case for resorting to the death penalty. Nor does retribution (revenge) or proportionality (‘eye for an eye, tooth for a tooth’). After all, whether death is the proportionate response to murder remains highly suspect. Indeed, no qualitative or quantitative logic, no matter how elegantly crafted, successfully supports society’s recourse to premeditatedly and ceremoniously executing citizens as part of its penal code.
_____________________________________________

‘Capital punishment is the most
premeditated of murders’ — Albert Camus
_____________________________________________

There is no public-safety angle, furthermore, that could not be served equally well by lifetime incarceration — without, if so adjudged, consideration of rehabilitation and redemption, and thus without the possibility of parole. Indeed, evidence does not point to the death penalty improving public safety. For example, the death penalty has no deterrent value — that is, perpetrators don’t first contemplate the possibility of execution in calculating whether or not to commit murder or other violent crime. The starting position therefore ought to be that human life is sacrosanct — life’s natural origins, its natural course, and its natural end. Society ought not deviate from that principle in normalising particular punishments for criminal — even heinously criminal — behaviour. The guiding moral principle is singular: that it’s ethically unprincipled for a government to premeditatedly take its citizens’ lives in order to punish, a measure that morally sullies the society condoning it.

Society’s applying the death penalty as an institutional sentence for a crime is a cruel vestige of a time when life was less sacred and society (the elite, that is) was less inclined to censure its own behaviour: intentionally executing in order, with glaring irony, to model how killing is wrong. Society cannot compartmentalise this lethal deed, purporting that the sanctioned death penalty is an exception to the ethical rule not to kill premeditatedly. Indeed, as Salil Shetty, secretary-general of Amnesty International, laconically observed, ‘the death penalty is a symptom of a culture of violence, not a solution to it’.

Although individuals, like victim family members, may instinctively and viscerally want society to lash out in revenge on their behalf — with which many people may equally instinctively and understandably sympathise — it’s incumbent upon society to administer justice rationally, impartially, and, yes, even dispassionately. With no carveout for excepted crimes, no matter how odious, the death penalty is a corrosive practice that flagrantly mocks the basis of humanity and civilisation — that is, it scorns the very notion of a ‘civil’ society.

The death penalty is a historical legacy that should thus be consigned to the dustbin. States across the globe have no more sober moral task than to strike the death penalty from their legal codes and practices. With enough time, it will happen; the future augurs a world without state-sanctioned execution, that misdirected exercise in the absolute power of government.

18 June 2017

Language: Two Himalayan Mistakes

Seated Woman by Richard Diebenkorn
Posted by Thomas Scarborough
We take a lot on trust. Too much of it, mistakenly. We even have a name for it: argumentum ad verecundiam.  With this in mind, there are two things at the heart of our language which we have mistakenly taken on trust. The first is how to circumscribe the meaning of a word; the second is how to qualify that meaning. These are not merely issues of semantics. They have profound implications for our understanding of the world.
There was a time, not too long ago, when we had no dictionaries. In fact, it was not too long ago that we had no printing presses on which to print them. Then, when dictionaries arrived, we decided that words had definitions, and that, where applicable, each of these definitions held the fewest possible semantic features. A woman, for instance, was an ‘adult human female’, no less, and certainly no more – three features in all. While this may be too simple a description of the matter, the meaning will be clear.

We may never know who first gave us permission to do this, or on whose authority it was decided. It may go back to Aristotle. But at some time in our history, two options lay before us. One was to reduce the meaning of a word to the fewest possible semantic features. The other was to include in it every possible semantic feature. We know now what the decision was. We chose artificially and arbitrarily to radically reduce what words are.

We canvassed the literature. We canvassed the people. All had their own vast ideas and experiences about a word. Then we sought the word's pure essence, its abstract core – like the definition of the woman, an ‘adult human female’. This, however, introduced one of the biggest problems of semantics. We needed now to separate semantic features which mattered from those which did not. The artificiality and uncertainty of this dividing line – that is, between denotation and connotation – has filled many books.

Worse than this: it is easy to show that we took the wrong option at the start.  We are in a position to demonstrate that, when we refer to a word, we refer to its maximal semantic content, not its minimal.  Some simple experiments prove the point.  Take the sentences, ‘I entered the house. The karma was bad,’ or, ‘The car hit a ditch. The axle broke.’  What now does ‘the karma’ or ‘the axle’ refer to?  It refers to the maximal content of a word.  This is how, intuitively, innately, we deal with words.

Our second big mistake, which follows on from the first, was the notion of subject and predicate. We call these the ‘principal syntactic elements’ of language. They were at the forefront of Kant's philosophy. Today, the universally accepted view is that the predicate completes an idea about the subject. Take as examples the sentences ‘The woman (subject) dances (predicate)’ and ‘The penny (subject) drops (predicate).’

Again, ‘the woman’ is taken as the bare-bones concept, ‘adult human female’. Add to this the predicate – the fact that the woman dances – and we expand on the concept of a woman. We already know that a woman dances, of course. We know, too, that she laughs, sleeps, eats, and a great deal more. Similarly, we define ‘the penny’ as a ‘British bronze coin’. Add to this that it can drop, and we have expanded on the concept of a penny. Of course, we know well that it clinks, shines, even melts, and much more besides.

Yet, what if the predicate serves not to expand upon the subject, but to narrow it down? In fact, if words contain every possible semantic feature, so too must subjects. A predicate takes a ‘maximal’ subject, then – the near infinite possibilities contained in ‘the woman’, or ‘the penny’ – and channels them, so to speak. ‘The woman (who can be anything) dances.’ ‘The penny (which offers a multitude of possibilities) drops.’  Predicates, then, are ‘clarifiers’, as it were. They take a thing, and narrow it down and sharpen its contours.

The application to philosophy is simple.  We discard a word’s many possibilities – those of a woman, a penny, a house, a car – in the interests of the arbitrary notion that they represent minimal meanings – so reducing them to the smallest number of semantic features people use, and throwing the rest away.

Day after day, we do this, through force of centuries of habit. With this, we instantly discard (almost) all the possibilities of a word. We meet situations without being open to their possibilities, but cobble a few predicates to bare-bones subjects, and so lose our good sense. Nuclear power is the generation of electricity, a ship is something that floats, a Führer is someone who governs. The words, being stripped of their maximal meanings, do not contain – perhaps most importantly – the possibility of evil. This greatly assists prejudice, bigotry, partiality, and discrimination.

When words are reduced to their minimal features – when we base their meaning on their denotative core – we ‘crop’ them, truncate them, reduce them, and above all, cut away from them a great many meanings which they hold, and so reduce our awareness of the world, and cosmos.  Due in no small part to the way we imagine our language to be – minimal words and minimal subjects – we have entered habits of thinking which are simplistic, reductionistic, technical – and dangerous.

But to understand words in terms of maximal meanings is to reject the reductionism of our present time, and to think expansively, creatively, intuitively, holistically. 

11 June 2017

Seeking Reformers

Torture by Kevin (DJ) Ahzee
Posted by Sifiso Mkhonto
Unity has a mixed reputation in South Africa.  It was under apartheid that the motto ‘unity is strength’ became the tool of exclusion.  Yet even under our new constitutional democracy, with the motto ‘Working together we can do more’, unity became an illusion.
Today we find ourselves with different kinds of unity: political party unity, religious unity, and cultural unity. Yet rather than uniting us, these ‘unities’ exist in menacing tension, and instead of being united, we seem isolated. Furthermore, as the contours of these ‘unities’ have become more apparent, they have revealed parallel power structures in our society:
• Political party unity has promoted ‘party first’, and cadre deployment.
• Religious unity has served religious leaders, who have consorted with political power, and
• Cultural unity has divided society through tribalism.
Over time, each of these unities has polarised us into captives and captors, and united us in bondage. Our ‘unities’ have become what I shall call ‘civilised oppression’. The dynamic is simple, on the surface of it. The major tool which is used to secure our captivity is patronage. Patronage is the support, encouragement, privilege, or financial aid that an organisation or individual bestows on another. It indicates the power to grant favours – but also, importantly, the need to seek them.

Underlying this dynamic, at both extremes, is the cancer we call greed. This greed then becomes institutionalised, and oppression, in the words of Iris Marion Young, becomes ‘embedded in unquestioned norms, habits, and symbols, in the assumptions underlying institutions and rules, and the collective consequences of following those rules’. Chains and prison cells are a mere shadow of the chains and prison cells of mental oppression such as this. ‘The most potent weapon in the hands of the oppressor,’ wrote the South African anti-apartheid activist Steve Biko, ‘is the mind of the oppressed.’

Since greed is embedded in each of us, and this greed has become institutionalised, we cannot eliminate the attendant oppression by getting rid of rulers or by promulgating new laws – because oppressions are systematically reproduced in the major economic, political and cultural institutions. To make matters worse, in the words of the American social psychologist Morton Deutsch, ‘while specific privileged groups are the beneficiaries of the oppression of other groups, and thus have an interest in the continuation of the status quo, they do not typically understand themselves to be agents of oppression’. They, and we, are blind.

Contrast this now with the fact that we do live in a constitutional democracy, with a bill of rights and the rule of law. People have lost the will and the desire to insist on the law, because they are cowed by the dynamics of patronage. Despondency has increased – or rather, our leaders have increased our despondency – as the dynamics of greed have gained the upper hand. Iris Marion Young describes our oppression ‘as a consequence of often unconscious assumptions and reactions of well-meaning people in ordinary interactions that are supported by the media and cultural stereotypes as well as by the structural features of bureaucratic hierarchies and market mechanisms’.

The cancer is within most of us. Now where do we look for restoration? Our greatest need is for Reformers who will press for merit systems, insist that lawmakers respect the law and find strategies to eliminate patronage. They will seek unity under one constitution, one bill of rights, one law. However, this cannot be done by Reformers who do not have the cure for the cancer. I believe that freedom will flourish, citizens will emancipate themselves from mental oppression, and patronage won’t be the big elephant in the room, as soon as we implement this cure.

It is time to turn our house into a home. Find a cure for the cancer. Seek knowledgeable and principled Reformers who won’t give society the cold shoulder when symptoms of the cancer are identified even in them. Now is the time to act on the diagnosis of the cancer, and take the medication that will cure us. Those who are controlled by the disease need to repent and find their way, instead of being sidetracked by patronage.



Also by Sifiso Mkhonto: Breaking the Myth of Equality.

04 June 2017

Picture Post #25. The Machine Age


'Because things don’t appear to be the known thing; they aren’t what they seemed to be neither will they become what they might appear to become.'

Posted by Tessa den Uyl and Martin Cohen



1950s advertising image for a new-fangled vending machine

You can just imagine the conversation...  ‘Hi Betty, can I ask you a dumb question?’  ‘Better than anyone I know, Bill!’

Okay, maybe that's not what the image brought to your mind, but it is what the copywriters for the original magazine advertisement came up with – under a heading ‘Sweet ’n’ Snarky’. Don’t ask what ‘snarky’ means exactly, as no one seems to agree, but here the image gives a particular sense to the term: ‘smart, stylish, a little bit roguish’.

Nearly 70 years on, the machine no longer looks snarky; indeed, it looks pretty unstylish and dumb. The green fascia and the plain Helvetica font, shouting out the word ‘COFFEE’ in red, scarcely impress, as surely at the time they would have done. That’s not even to start on the drab characters in this little play: Bill, the office flirt, and Betty, the attractive secretary.

In those days, the set-up might have seemed attractive, offering new technological developments combined with social engagement. Just like the characters in a popular TV soap series, the image created by others seeks to tell you who you are. Advertising media in particular have long been keen to exploit this role-play, and their success poses a fascinating additional question: just why do people like to be reduced to their function, to a stereotype?
  
Of course, the advertisers were not really interested in what an actual Bill might have to talk about to an actual Betty. Real characters are multifaceted. Why, this Bill and Betty might even have both been academics chatting during a break between lectures!
‘Hi Betty, do you think these coffee machines will increase our happiness in life?’
‘Hmmm. Good question, Bill. And my answer would be “Yes and No”.  Soon we’ll find ourselves oppressed by new technologies, but first let us celebrate the reflection of change this one represents.’
Welcome to the deep world of everyday expression, not the frothy one of advertisers’ expresso.

28 May 2017

Why Absolute Moral Relativism Should Be Off The Table

Posted by Christian Sötemann
In the case of moral statements there can be many degrees between absolute certainty and absolute uncertainty. 
Even empirical truths, which are thoroughly supported by conclusive evidence, cannot, by their empirical nature, have the same degree of certainty as self-evident truths. There may always be an empirical case which escapes us. And so it may be questioned whether a viable moral principle really has to be either one or the other: absolutely certain or absolutely uncertain, valuable or valueless – or whether it is good enough for it to serve as an orientation, a rule of thumb, or something useful in certain types of cases.

With this in mind, given any moral principle in front of us, it could be helpful to distinguish whether:
• the principle merely fails to be universally applicable in an orthodox way

or

• there is an overt denial of any generalisability (even for a limited type of cases) of moral values and principles.
In the first case, we may try to reconcile a concrete situation with an abstract moral rule, without rejecting the possibility of some degree of generalisation – yet in the second case, we have what we previously discussed: generalising that we would not be able to make any kind of general statement. In the second case, we have an undifferentiated position that renders all attempts at gauging arguments about ethics futile, thus condoning an equivalence of moral stances that is hardly tenable.

This liberates the moral philosopher at least in one way: absolute moral relativism can be taken off the table, while all moral standpoints may still be subjected to critical scrutiny. If I have not found any moral philosophy that I can wholeheartedly embrace, I do not automatically have to resort to absolute moral relativism. If I have not found it yet, it does not mean that it does not exist at all. The enquiring mind need not lose all of its beacons.

To put moral relativism in its most pointed form, the doctrine insists that there are moral standpoints, yet that none of them may be considered any more valid than others. This does not oblige the moral relativist to say that everything is relative, or that there are no facts at all, such as scientific findings, or logical statements. It confines the relativism to the sphere of morality.

We need to make a further distinction. The English moral philosopher Bernard Williams pointed out that there may be a 'logically unhappy attachment' between a morality of toleration, which need not be relative, and moral relativism. Yet here we find a contradiction. If toleration is the result of moral relativism – if I should not contest anyone’s moral stance, because I judge that all such stances are similarly legitimate – I am making a general moral statement, namely: 'Accept everybody’s moral preferences.' However, such generalisation is something the moral relativist claims to avoid.

A potential argument that, superficially, seems to speak for moral relativism is that it can be one of many philosophical devices that help us to come up with counterarguments to moral positions. Frequently, this will reveal that moral principles which were thought to be universal fail to be fully applicable — or applicable at all — in the particular case. However, this can lead to a false dilemma, suggesting only polar alternatives (either this or that, with no further options in between) when others can be found. The fact that there is a moral counterargument does not have to mean that we are only left with the conclusion that all moral viewpoints are now invalid.

Moral propositions may not have the same degree of certainty as self-evident statements, which cannot be doubted successfully – such as these:
a) 'Something is.'

b) 'I am currently having a conscious experience.'
These propositions present themselves as immediately true to me, since a) is something in itself, as would be any contestation of the statement, and b) even doubting or denying my conscious experience happens to be just that: a conscious experience.

Rarely do we really find a philosopher who endorses complete moral relativism, maintaining that any moral position is as valid as any other. However, occasionally such relativism slips in by default – when one shrugs off the search for a moral orientation, or deems moral judgements to be mere personal or cultural preferences.

Now and again, then, we might encounter variants of absolute moral relativism, and what we could do is this: acknowledge their value for critical discussion, then take them off the table.

21 May 2017

Healthcare ... A Universal Moral Right

A Barber-surgeon practising blood-letting
Posted by Keith Tidman

Is healthcare a universal moral right — an irrefutably fundamental ‘good’ within society — that all nations ought to provide as faithfully and practically as they can? Is it a right that all human beings, worldwide, are entitled to share in, as a matter of justice, fairness, dignity, and goodness?

To be clear, no one can claim a right to health as such. As a practical matter, it is an unachievable goal — but there is a perceived right to healthcare. Where health and healthcare intersect — that is, where both are foundational to society — is in the realisation that people have a need for both. Among the distinctions, ‘health’ is a result of sundry determinants, access to adequate healthcare being just one. Other determinants comprise behaviours (such as smoking, drug use, and alcohol abuse), access to nutritious and sufficient food and potable water, absence or prevalence of violence or oppression, and rates of criminal activity, among others. And to be sure, people will continue to suffer from health disorders, despite all the best of intentions by science and medicine. ‘Healthcare’, on the other hand, is something society can and does make choices about, largely as a matter of policymaking and access to resources.

The United Nations, in Article 25 of its ‘Universal Declaration of Human Rights’, provides a framework for theories of healthcare’s essential nature:
“Everyone has the right to a standard of living adequate for the health and well-being of himself and of his family, including . . . medical care and necessary social services, and the right to security in the event of . . . sickness . . . in circumstances beyond his [or her] control.”
The challenge is whether and how nations live up to that well-intentioned declaration, in the spirit of protecting the vulnerable.

At a fundamental level, healthcare ethics comprises values — judgments as to what’s right and wrong, including obligations toward the welfare of other human beings. Rights and obligations are routinely woven into the deliberations of policymakers around the world. In practice, a key challenge in ensuring just practices — and figuring out how to divvy up finite (sometimes sorely constrained) material resources and economic benefits — is how society weighs the relative value of competing demands. Those jostling demands are many and familiar: education, industrial advancement, economic growth, agricultural development, security, equality of prosperity, housing, civil peace, environmental conditions — and all the rest of the demands on resources that societies grapple with in order to prioritise spending.

These competing needs are where similar constraints and inequalities of access persist across socioeconomic demographics and groups, within and across nations. Some of these needs, besides being important in their own right, also contribute — even if sometimes only obliquely — to health and healthcare. Their interconnectedness and interdependence are folded into what one might label ‘entitlements’, aimed at the wellbeing of individuals and whole populations alike. They are eminently relatable, as well as part and parcel of the overarching issue of social fairness and justice.

The current vexed debate over healthcare provision within the United States among policymakers, academics, pundits, the news media, other stakeholders (such as business executives), and the public at large is just one example of how those competing needs collide. It is also evidence of how the nuts and bolts of healthcare policy rapidly become entangled in the frenzy of opposing dogmas.

On the level of ideology, the healthcare debate is a well-trodden one: how much of the solution to the availability and funding of healthcare services should rest with the public sector, including government programming, mandating, regulation, and spending; and how much (with a nod to the laissez-faire philosophy of Adam Smith in support of free markets) should rest with the private sector, including businesses such as insurance companies, hospitals, and doctors? Yet often missing in all this urgency and the decisions about how to ration healthcare is that the money being spent has not resulted in the best health outcomes, based on comparison of certain health metrics with select other countries.

Sparring over public-sector versus private-sector solutions to social issues — as well as over states’ rights versus federalism among the constitutionally enumerated powers — has marked American politics for generations. Healthcare has been no exception. And even in a wealthy nation like the United States, challenges in cobbling together healthcare policy have drilled down into a series of consequential factors. They include whether to exclude specified ailments from coverage, whether preexisting conditions get carved out of (affordable) insured coverage, whether to impose annual or lifetime limits on protections, how much of the nation's gross domestic product to consign to healthcare, and how many tens of millions of people might remain without healthcare or be ominously underinsured, among others — all precariously resting on arbitrary decisions. True reform might require starting with a blank slate, then cherry-picking from among other countries’ models of healthcare policy, based on their lessons learned as to what did and did not work over many years. Ideas as to America’s national healthcare are still on the anvil, being hammered by Congress and others into final policy.

Amid all this policy ‘sausage making’, there’s the political sleight-of-hand rhetoric that misdirects by acts of either commission or omission within debates. Yet, do the uninsured still have a moral right to affordable healthcare? Do the underinsured still have a moral right to healthcare? Do people with preexisting conditions still have a moral right to healthcare? Do people who are older, but who do not yet qualify for age-related Medicare protections, have a moral right to healthcare? Absolutely, on all counts. The moral right to healthcare — within society’s financial means — is universal, irreducible, non-dilutable; that is, no authority may discount or deny the moral right of people to at least basic healthcare provision. Within that philosophical context of morally rightful access to healthcare, the bucket of healthcare services provided will understandably vary wildly, from one country to another, pragmatically contingent on how wealthy or poor a country is.

Of course, the needs, perceptions, priorities — and solutions — surrounding the matter of healthcare differ quite dramatically among countries. And to be clear, there’s no imperative that the provision of effective, efficient, fair healthcare services hinge on liberally democratic, Enlightenment-inspired forms of government. Apart from these or other styles of governance, there’s more fundamentally no alternative to local sovereignty in shaping policy. Consider another example of healthcare policy: the distinctly different countries of sub-Saharan Africa pose an interesting case. The value of available and robust healthcare systems is as readily recognized in this part of the world as elsewhere. However, there has been a broadly articulated belief that the healthcare provided is of poor quality. Also, healthcare is considered less important among competing national priorities — such as jobs, agriculture, poverty, corruption, and conflict, among others. Yet, surely the right to healthcare is no less essential to these many populations.

Everything is finite, of course, and healthcare resources are no exception. The provision of healthcare is subject to zero-sum budgeting: the availability of funds for healthcare must compete with the tug of providing other services — from education to defence, from housing to environmental protections, from commerce to energy, from agriculture to transportation. This reality complicates the role of government in its trying to be socially fair and responsive. Yet, it remains incumbent on governments to forge the best healthcare system that circumstances allow. Accordingly, limited resources compel nations to take a fair, rational, nondiscriminatory approach to prioritising who gets what by way of healthcare services, which medical disorders to target at the time of allocation, and how society should reasonably be expected to shoulder the burden of service delivery and costs.

As long ago as the 17th century, René Descartes declared that:
‘... the conservation of health . . . is without doubt the primary good and the foundation of all other goods of this life’. 
However, how much societies spend, and how they decide who gets what share of the available healthcare capital, are questions that continue to divide. The endgame may be summed up, to follow in the spirit of the 18th-century English philosopher Jeremy Bentham, as ‘the greatest happiness for the greatest number [of people]’ for the greatest return on investment of public and private funds dedicated to healthcare. How successfully public and private institutions — in their thinking about resources, distribution, priorities, and obligations — mobilise and agitate for greater commitment comes with implied decisions, moral and practical, about good health to be maintained or restored, lives to be saved, and general wellbeing to be sustained.

Policymakers, in channeling their nations’ integrity and conscience, are pulled in different directions by competing social imperatives. At a macro level, depending on the country, these may include different mixes of crises of the moment, political and social disorder, the shifting sands of declared ideological purity, challenges to social orthodoxy, or attention to simply satiating raw urges for influence (chasing power). In that brew of prioritisation and conflict, policymakers may struggle in coming to grips with what’s ‘too many’ or ‘too few’ resources to devote to healthcare rather than other services and perceived commitments. Decisions must take into account that healthcare is multidimensional: a social, political, economic, humanistic, and ethical matter holistically rolled into one. Therefore, some models for providing healthcare turn out to be more responsible, responsive, and accountable than others. These concerns make it all the more vital for governments, institutions, philanthropic organizations, and businesses to collaborate in policymaking, public outreach, program implementation, gauging of outcomes, and decisions about change going forward.

A line is thus often drawn between healthcare needs and other national needs — with the tensions of altruism and self-interest opposed. The distinctions between decisions and actions deemed altruistic and those deemed self-interested are blurred, since they must hinge on motives, which are not always transparent. In some cases, actions taken to provide healthcare nationally serve both purposes — for example, what might improve healthcare, and in turn health, on one front (continent, nation, local community) may well keep certain health disorders away from another front.

The ground-level aspiration is to maintain people’s health, treat the ill, and crucially, not financially burden families, because what’s not affordable to families in effect doesn’t really exist. That nobly said, there will always be tiered access to healthcare — steered by the emptiness or fullness of coffers, political clout, effectiveness of advocacy, sense of urgency, disease burden, and beneficiaries. Tiered access prompts questions about justice, standards, and equity in healthcare’s administration — as well as about government discretion and compassion. Matters of fairness and equity are more abstract, speculative metrics than are actual healthcare outcomes with respect to a population’s wellbeing, yet the two are inseparable.

Some three centuries after Descartes’ proclamation in favour of health as ‘the primary good’, the United Nations issued to the world the ‘International Covenant on Economic, Social, and Cultural Rights’, thereby placing its imprimatur on ‘the right of everyone to the enjoyment of the highest attainable standard of physical and mental health’. The world has made headway: many nations have instituted intricate, encompassing healthcare systems for their own populations, while also collaborating with the governments and local communities of financially stressed nations to undergird treatments through financial aid, program design and implementation, resource distribution, teaching of indigenous populations (and local service providers), setting up of healthcare facilities, provision of preventions and cures, follow-up as to program efficacy, and accountability of responsible parties.

In short, the overarching aim is to convert ethical axioms into practical, implementable social policies and programs.

14 May 2017

The Philosophy of Jokes

I say, I say, I say...
Posted by Martin Cohen
Ludwig Wittgenstein, that splendidly dour 20th-century philosopher, usually admired for trying to make language more logical, once remarked, in his earnest Eastern European way, that a very serious work – or ‘zery serieuse verk’ – in philosophy could consist entirely of jokes.
Now, Wittgenstein probably meant to shock his audience, which consisted of his American friend Norman Malcolm (whom he also once advised to avoid an academic career and to work instead on a farm), but he was also in deadly earnest. For humour is, as he is also on record as saying, ‘not a mood, but a way of looking at the world’. Understanding jokes, just like understanding the world, hinges on having first adopted the right kind of perspective.

So here's one to test his idea out on.
‘A traveler is staying at a monastery, where the Order keeps a vow of silence and the monks may speak only at the evening meal. On his first night, as they are eating, one of the monks stands up and shouts ‘Twenty-two!’. Immediately the rest of the monks break out into raucous laughter. Then they return to silence. A little while later, another shouts out ‘One hundred and ten!’, to even more uproarious mirth. This goes on for two more nights with no real conversation, just different numbers being shouted out, followed by ribald laughing and much downing of ale. At last, no longer able to contain his curiosity, the traveler asks the Abbot what it is all about. The Abbot explains that the monastery has only one non-religious book in it, which consists of a series of jokes, each headed with its own number. Since all the monks know them by heart, instead of telling the jokes they just call out the number.
Hearing this, the traveler decides to have a look at the book for himself. He goes to the library and carefully makes a note of the numbers of the funniest jokes. Then, that evening, he stands up and calls out the number of his favourite joke – ‘Seventy-six!’. But nobody laughs; instead there is an embarrassed silence. The next night he tries again. ‘One hundred and thirteen!’, he exclaims loudly into the silence – but still no response.
After the meal he asks the Abbot whether the jokes he picked were not considered funny by the monks. ‘Ooh no’, says the Abbot. ‘The jokes are funny – it’s just that some people don't know how to tell them!’
I like that one! And incredibly, it is one of the oldest jokes around. This, we might say, is a joke with a pedigree. A version of it appears in the Philogelos, or Laughter Lover, a collection of some 265 jokes, written in Greek and compiled some 1,600-odd years ago. So it’s old. Yet despite its antiquity, the style of this and at least some of the other jokes is very familiar.

Clearly, humour is something that transcends communities and periods in history. It seems to draw on something common to all peoples. Yet jokes are also clearly things rooted in their times and places. At the time of this joke, monks and secret books were serious business. But the first philosophical observation to make, and principle to note, is that jokes like this one involve those ‘ah-ha!’ moments.

Humour often involves a sudden, unexpected shift in perspective, forcing a rapid reassessment of assumptions. Philosophy, at its best, does much the same thing.