
15 August 2022

The Tangled Web We Weave


By Keith Tidman
 

Kant believed, as a universal ethical principle, that lying was always morally wrong. But was he right? And how might we decide that?

 

The eighteenth-century German philosopher asserted that everyone has ‘intrinsic worth’: that people are characteristically rational and free to make their own choices. Lying, he believed, degrades that moral worth, undermining others’ ability to exercise autonomy and make reasoned decisions, as we presume they would if they possessed the truth.

 

Kant’s ground-level belief in these regards was that we should value others strictly ‘as ends’, and never treat people ‘merely as means to ends’. It is a maxim still valued and commonly espoused in human affairs today, even if people sometimes come up short.

 

The belief that judgements of morality should be based on universal principles, or ‘directives’, without reference to the practical outcomes, is termed deontology. For example, according to this approach, all lies are immoral and condemnable. There are no attempts to parse right and wrong, to dig into nuance. It’s blanket censure.

 

But it’s easy to think of innumerable drawbacks to this inviolable rule of wholesale condemnation. Consider how you might respond to a terrorist demanding to know the place and time of a meeting to be held by his intended target. Deontologists like Kant would consider such a lie immoral.

 

Virtue ethics, to this extent compatible with Kant’s beliefs, also holds that lying is morally wrong. Its reasoning, though, is that lying violates a core virtue: honesty. Virtue ethicists are concerned to protect people’s character, where ‘virtues’ — like fairness, generosity, compassion, courage, fidelity, integrity, prudence, and kindness — lead people to behave in ways others will judge morally laudable.

 

Other philosophers argue that, instead of turning to the rules-based beliefs of Kant and the virtue ethicists, we ought to weigh the (supposed) benefits and harms of a lie’s outcomes. This principle is called consequentialist ethics, mirroring the utilitarianism of the eighteenth/nineteenth-century philosophers Jeremy Bentham and John Stuart Mill, with its emphasis on the greatest happiness.

 

Advocates of consequentialism claim that actions, including lying, are morally acceptable when the results of behaviour maximise benefits and minimise harms. A tall order! A lie is not always immoral, on this view, as long as outcomes on balance favour the stakeholders.
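To make ‘on balance’ concrete, here is a schematic formalisation (an illustrative sketch of my own, not anything found in Bentham or Mill). Index the stakeholders by $i$, and let $b_i(a)$ and $h_i(a)$ denote the benefits and harms an act $a$ brings to stakeholder $i$. A lie $\ell$ then passes the consequentialist test against truth-telling $t$ when

$$\sum_i \big(b_i(\ell) - h_i(\ell)\big) \;>\; \sum_i \big(b_i(t) - h_i(t)\big).$$

The notation makes the later worry plain: every term must be estimated, and estimated in advance.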

 

Take the case of your saving a toddler from a burning house. Perhaps, however, you believe in not taking credit for altruism, concerned about being perceived as conceitedly self-serving. You thus tell the emergency responders a different story about how the child came to safety, a lie that harms no one. Per Bentham’s utilitarianism, the ‘deception’ in this instance is not immoral.

 

Kant’s dyed-in-the-wool condemnation of lies invites examples that challenge its wisdom. Take the historical case of a Jewish woman concealed, from Nazi military occupiers, under the floorboards of a farmer’s cottage. The situation seems clear-cut, perhaps.

 

If grilled by enemy soldiers as to the woman’s whereabouts, the farmer lies rather than doom her to being shot or sent to a concentration camp. The farmer chooses good over bad, echoing consequentialism and virtue ethics. His choice answers the question of whether the lie yields a better outcome than the truth would. It would have been immoral not to lie.

 

Of course, the consequences of lying, even for an honourable person, may sometimes be hard to get right: outcomes may differ in significant ways from what was expected, or from what subjectively serves the greater good. One may overvalue or undervalue benefits and harms — nontrivial possibilities.

 

But maybe what matters most in gauging consequences are motive and goal. As long as the purpose is to benefit, not to beguile or harm, then trust remains intact — of great benefit in itself.

 

Consider two more cases as examples. In the first, a doctor knowingly gives a cancer-ridden patient and family false (inflated) hope for recovery from treatment. In the second, a politician knowingly gives constituents false (inflated) expectations of benefits from legislation he sponsored and pushed through.

 

The doctor and politician both engage in ‘deceptions’, but critically with very different intent: Rightly or wrongly, the doctor believes, on personal principle, that he is being kind by lifting the patient out of despondency. And the politician, rightly or wrongly, believes that his hold on his legislative seat will be bolstered, convinced that’s to his constituents’ benefit.

 

From a deontological — rules-focused — standpoint, both lies are immoral. Both parties know that they mislead — that what they say is false. (Though both might prefer to say something like they ‘bent the truth’, as if more palatable.) But how about from the standpoint of either consequentialism or virtue ethics? 

 

The Roman orator Quintilian is supposed to have advised, ‘A liar should have a good memory’. Handy practical advice for those who ‘weave tangled webs’, benign or malign, and attempt to evade being called out for duplicity.

 

And damning all lies seems a crude, blunt tool: wholly unworkable outside Kant’s absolutist disposition, since no one could unswervingly meet so rigorous a standard. Indeed, a study by psychologist Robert Feldman claimed that people lie two to three times, in trivial and major ways, for every ten minutes of conversation!

 

However, consequentialism and virtue ethics have their own shortcomings. They leave us with the problematic task of figuring out which consequences and virtues matter most in a given situation, and of tailoring our decisions and actions accordingly. No small feat.

 

So, in parsing which lies on balance are ‘beneficial’ or ‘harmful’, and how to arrive at those assessments, ethicists still haven’t ventured close to crafting an airtight model: one that dots all the i’s and crosses all the t’s of the ethics of lying. 


At the very least, we can say that, no, Kant got it wrong in overbearingly rebuffing all lies as immoral; refusing to allow reasonable exceptions was folly. Yet that may be cold comfort for some people, as lapses into excessive risk — weaving ever-more tangled webs — court danger for unwary souls.


Meantime, some, more than others, may feel they have been cut some slack; they would be advised to keep Quintilian’s counsel close.




* ‘O what a tangled web we weave / When first we practice to deceive’, Sir Walter Scott, ‘Marmion: A Tale of Flodden Field’.

 

08 December 2019

Is Torture Morally Defensible?


Posted by Keith Tidman

Far from being treated as unconscionable, torture has been all but universalised: according to Amnesty International, some 140 countries resort to it, whether through domestic police, intelligence agencies, military forces, or other institutions. Incongruously, many of these countries are signatories to the United Nations Convention Against Torture, which forbids the practice, whether conducted domestically or outsourced to countries where torture is legal (by so-called renditions).

Philosophers too are ambivalent, conjuring up difficult scenarios in which torture seems somehow the only reasonable response:
  • An anarchist knows the whereabouts of a powerful bomb set to kill scores of civilians.
  • A kidnapper has hidden a four-year-old in a makeshift underground box, holding out for a ransom.
  • An authoritarian government, feeling threatened, has identified the ringleader of swelling political street opposition, and wants to know his accomplices’ names.
  • Soldiers have a high-ranking captive, who knows details of the enemy’s plans to launch a counteroffensive.
  • A kingpin drug supplier, with his metastasised network of street traffickers, routinely distributes highly contaminated drugs, resulting in a rash of deaths...

Do any of these hypothetical and real-world events, where information needs to be extracted for urgent purposes, justify resorting to torture? Are there other cases in which society ought morally to consent to torture? If so, for what purposes? Or is torture never morally justified?

One common opinion is that if the outcome of torture is information that saves innocent lives, the practice is morally justified. I would argue that there are at least three aspects to this claim:
  • the multiple lives that will be saved (traded off against the fewer), sometimes referred to as ‘instrumental harm’; 
  • the collective innocence, in contrast to any aspect of culpability, of those people saved from harm; and
  • the overall benefit to society, as best can credibly be predicted with information at hand.
The 18th-century philosopher Jeremy Bentham’s famous phrase that ‘It is the greatest good for the greatest number of people which is the measure of right and wrong’ seems to apply here. Historically, many people have found, rightly or not, that this principle of the ‘greatest good for the greatest number’ rises to the level of common sense, as well as proving simpler to apply in establishing one’s own life doctrine than competing standards — such as those that discount the outcomes of chosen behaviours.

Other thinkers, such as Joseph Priestley (18th century) and John Stuart Mill (19th century), expressed similar utilitarian arguments, though using the word ‘happiness’ rather than ‘benefit’. (Both terms might, however, strike one as equally cryptic.) Here, the standard of morality is not a rulebook rooted in solemnised creed, but a standard based in everyday principles of usefulness to the many. Torture, too, may be looked at in that light, speaking to factors like human rights and dignity — or whether individuals, by virtue of the perceived threat they pose, forfeit those rights.

Utilitarianism has been criticised, however, for its obtuse ‘the ends justify the means’ mentality — an approach complicated by the difficulty of predicting consequences. Similarly, some ‘bills of rights’ have attempted to provide pushback against the simple calculus of benefiting the greatest number. Instead, they advance legal positions aimed at protecting the welfare of the few (the minority) against the possible tyranny of the many (the majority). ‘Natural rights’ — the right to life and liberty — inform these protective constitutional provisions.

If torture is approved of in some situations — ‘extreme cases’ or ‘emergencies’, as society might tell itself — the bar for its use might lower. As a possible fast track in remedying a threat, maybe an extra-judicial fast track, torture is tempting, especially when used ‘for defence’. The uneasiness, however, lies in torture turning into an obligation, shrouded in an alleged moral imperative, perhaps exploiting a permissive legal system. This dynamic may prove alluring if society finds it expeditious to shoehorn ever more cases into the hard-to-parse category of ‘existential risk’.

What remains key is whether society can be trusted to make such grim moral choices — such as those requiring the resort to torture. This blurriness has propelled some toward an ‘absolutist’ stance, censuring torture in all circumstances. The French poet Charles Baudelaire felt that ‘Torture, as the art of discovering truth, is barbaric nonsense’. Paradoxically, however, absolutism in the total ban on torture might itself be regarded as immoral, if the result is death of a kidnapped child or of scores of civilians. That said, there’s no escaping the reality that torture inflicts pain (physical and/or mental), shreds human dignity, and curbs personal sovereignty. To some, many even, it thus must be viewed as reprehensible and irredeemable — decoupled from outcomes.

This is especially apparent if torture is administered to inflict pain, terrorise, humiliate, or dehumanise for purposes of deterrence or punishment. But even if torture is used to extract information — information perhaps vital, as per the scenarios listed at the beginning — there is a problem: the information acquired is suspect, tales invented just to stop pain. Long ago, Aristotle stressed this point, saying plainly: ‘Evidence from torture may be considered utterly untrustworthy’. Even absolutists, however, cannot skip being involved in defining what rises to the threshold of clearer-cut torture and what perhaps falls just below: grist for considerable contentious debate.

The question remains: can torture ever be justified? And, linked to this, which moral principles might society want to normalise? Is it true, as the French philosopher Jean-Paul Sartre noted, that ‘Torture is senseless violence, born in fear’? As societies grapple with these questions, they reduce the alternatives to two: blanket condemnation of torture (and acceptance of possible dire, even existential consequences of inaction); or instead acceptance of the utility of torture in certain situations, coupled with controversial claims about the correct definitions of the practice.


I would argue one might morally come down on the side of the defensible utility of the practice, albeit in agreed-upon circumstances (like some of those listed above), where human rights are robustly aired side by side with the exigent dangers, potential aftermaths of inertia, and hard choices societies face.

20 October 2019

Humanism: Intersections of Morality and the Human Condition

Kant urged that we ‘treat people as ends in themselves, never as means to an end’
Posted by Keith Tidman

At its foundation, humanism’s aim is to empower people through conviction in the philosophical bedrock of self-determination and people’s capacity to flourish — to arrive at an understanding of truth and to shape their own lives through reason, empiricism, vision, reflection, observation, and human-centric values. Humanism casts a wide net philosophically — ethically, metaphysically, sociologically, politically, and otherwise — for the purpose of doing what’s upright in the context of individual and community dignity and worth.

Humanism provides social mores, guiding moral behaviour. The umbrella aspiration is unconditional: to improve the human condition in the present, while endowing future generations with progressively better conditions. The prominence of the word ‘flourishing’ is more than just rhetoric. In placing people at the heart of affairs, humanism stresses the importance of the individual living both free and accountable — to hand off a better world. In this endeavour, the ideal is to live unbound by undemocratic doctrine, instead prospering collaboratively with fellow citizens and communities. Immanuel Kant underscored this humanistic respect for fellow citizens, urging quite simply, in Groundwork of the Metaphysics of Morals, that we ‘treat people as ends in themselves, never as means to an end’.

The history of humanistic thinking is not attributed to any single proto-humanist. Nor has it been confined to any single place or time. Rather, humanist beliefs trace a path through the ages, being reshaped along the way. Among the instrumental contributors were Gautama Buddha in ancient India; Lao Tzu and Confucius in ancient China; Thales, Epicurus, Pericles, Democritus, and Thucydides in ancient Greece; Lucretius and Cicero in ancient Rome; Francesco Petrarch, Sir Thomas More, Michel de Montaigne, and François Rabelais during the Renaissance; and Daniel Dennett, John Dewey, A.J. Ayer, A.C. Grayling, and Bertrand Russell among the modern humanist-leaning philosophers. (Dewey contributed, in the early 1930s, to drafting the original Humanist Manifesto.) The point is that the story of humanism is one of ubiquity and variety; if you’re a humanist, you’re in good company. The English philosopher A.J. Ayer, in The Humanist Outlook, aptly captured the philosophy’s human-centric perspective:

‘The only possible basis for a sound morality is mutual tolerance and respect; tolerance of one another’s customs and opinions; respect for one another’s rights and feelings; awareness of one another’s needs’.

For humanists, moral decisions and deeds do not require a supernatural, transcendent being. To the contrary: the almost-universal tendency to anthropomorphise God, to attribute human characteristics to God, is an expedient to help make God relatable and familiar that can, at the same time, prove disquieting to some people. Rather, humanists’ belief is generally that any god, no matter how intense one’s faith, can only ever be an unknowable abstraction. To that point, the opinion of the eighteenth-century Scottish philosopher David Hume — ‘A wise man proportions his belief to the evidence’ — goes to the heart of humanists’ rationalist philosophy regarding faith. Yet, theism and humanism can coexist; they do not necessarily cancel each other out. Adherents of humanism have been religious, agnostic, and atheist — though it’s true that secular humanism, as a subspecies of humanism, rejects a religious basis for human morality.

For humanists there is typically no expectation of after-life rewards and punishments, mysteries associated with metaphorical teachings, or inspirational exhortations by evangelising trailblazers. There need be no ‘ghost in the machine’, to borrow an expression from British philosopher Gilbert Ryle: no invisible hand guiding the laws of nature, or making exceptions to nature’s axioms simply to make ‘miracles’ possible, or swaying human choices, or leaning on so-called revelations and mysticism, or bending the arc of human history. Rather, rationality, naturalism, and empiricism serve as the drivers of moral behaviour, individually and societally. The pre-Socratic philosopher Protagoras summed up these ideas about the challenges of knowing the supernatural:

‘About the gods, I’m unable to know whether they exist or do not exist, nor what they are like in form: for there are things that hinder sure knowledge — the obscurity of the subject and the shortness of human life’.

The critical thinking that’s fundamental to pro-social humanism thus moves the needle from an abstraction to the concreteness of natural and social science. And the handwringing over issues of theodicy no longer matters; evil simply happens naturally and unavoidably, in the course of everyday events. In that light, human nature is recognised not to be perfectible, but nonetheless can be burnished by the influences of culture, such as education, thoughtful policymaking, and exemplification of right behaviour. This model assumes a benign form of human centrism. ‘Benign’ because the model rejects doctrinaire ideology, instead acknowledging that while there may be some universal goods cutting across societies, moral decision-making takes account of the often-unique values of diverse cultures.

A quality that distinguishes humanity is its persistence in bettering the lot of people. Enabling people to live more fully — from the material to the cultural and spiritual — is the manner in which secular humanism embraces its moral obligation: obligation of the individual to family, community, nation, and globe. These interested parties must operate with a like-minded philosophical belief in the fundamental value of all life. In turn, reason and observable evidence may lead to shared moral goods, as well as progress on the material and immaterial sides of life’s ledger.

Humanism acknowledges the sanctification of life, instilling moral worthiness. That sanctification propels human behaviour and endeavour: from progressiveness to altruism, a global outlook, critical thinking, and inclusiveness. Humanism aspires to the greater good of humanity through the dovetailing of various goods: ranging across governance, institutions, justice, philosophical tenets, science, cultural traditions, mores, and teachings. Collectively, these make social order, from small communities to nations, possible. The naturalist Charles Darwin addressed an overarching point about this social order:

‘As man advances in civilisation, and small tribes are united into larger communities, the simplest reason would tell each individual that he ought to extend his social instincts and sympathies to all the members of the same nation, though personally unknown to him’.

Within humanism, systemic challenges regarding morality present themselves: what people can know about definitions of morality; how language bears on that discussion; the value of benefits derived from decisions, policies, and deeds; and, thornily, deciding what actually benefits humanity. There is no taxonomy of all possible goods, for handy reference; we’re left to figure it out. There is no single, unconditional moral code, good for everyone, in every circumstance, for all time. There is only a limited ability to measure the benefits of alternative actions. And there are degrees of confidence and uncertainty in the ‘truth-value’ of moral propositions.

Humanism empowers people not only to help avoid bad results, but to strive for the greatest amount of good for the greatest number of people — a utilitarian metric, based on the consequences of actions, famously espoused by the eighteenth-century philosopher Jeremy Bentham and nineteenth-century philosopher John Stuart Mill, among others. It empowers society to tame conflicting self-interests. It systematises the development of right and wrong in the light of intent, all the while imagining the ideal human condition, albeit absent the intrusion of dogma.

Agency in promoting the ‘flourishing’ of humankind, within this humanist backdrop, is shared. People’s search for truth through natural means, to advance everyone’s best interest, is preeminent. Self-realisation is the central tenet. Faith and myth are insufficient. As modern humanism proclaims, this is less a doctrine than a ‘life stance’. Social order, forged on the anvil of humanism and its core belief in being wholly responsible for our own choices and lives, through rational measures, is the product of that shared agency.


09 June 2019

On the Influences Upon ‘Happiness’



According to Sonja Lyubomirsky, a person’s happiness level combines ‘genetic set-point’, ‘intentional activities’ (choice of daily activities), and life circumstances (The How of Happiness).

Posted by Keith Tidman

Is ‘Happiness’ in large measure subjective? Are people happy, or unhappy, just if they perceive themselves as such? Surely, there’s a transient nature to spikes in happiness, whether up or down. That is, no matter how events may make us feel at any moment in time — ecstatic (think higher-than-expected pay increase) or gloomy (think passed over for an anticipated major promotion) — eventually we return to our original level of happiness, or ‘baseline’. This implies that happiness does not change much, or long-lastingly, for an individual over a lifetime. There’s always the pull back to our happiness predisposition or mean, a process that philosophers sometimes refer to as ‘hedonic adaptation’. So, what factors influence happiness?

The feeling of happiness may be boosted when we’re fully occupied by activities that we deem especially important to us: those pursuits that represent our most-cherished values, inspire us, require concerted deliberation, prompt creative self-expression, achieve our potential, confirm our competence, reflect purposes beyond ourselves, foster meaningful goals, and promote relatedness. Ties to family, friends, colleagues, and the larger community — socialisation and connectedness — enhance this feeling of wellbeing. We benefit from these pursuits in proportion to how clearly we envision them, how committed we are to attaining them, and the amount of effort we invest.

The role of money in the subjective perception of happiness extends only to its helping to meet such salient necessities as a place to live, sufficient nourishment, adequate clothing, sleep, and security. That is, the barest requirements, but which importantly help lessen one’s anxiety over physical sustenance. After meeting such basic living conditions, the ability of larger sums of money to influence happiness trails off. People eventually adapt to the perks that a surge in wealth initially brings. Happiness reverts to its original baseline. (Even lottery winners, temporarily ecstatic as they believe the windfall is the key to life-long happiness, typically return to their baseline level of happiness. Their happiness level may ultimately even fall below their baseline, as new wealth might bring unanticipated pressure and anxiety of its own, such as being badgered for handouts.) That’s the individual level. But there’s a similar tendency at the national scale, too: the declining effect of growing wealth on the wellbeing of populations.

For instance, middle-income and wealthier citizens may find themselves unendingly aspiring for more and fancier material possessions — each leading, eventually, to adaptation to new norms and perpetually rising expectations to fulfill desires. This dynamic has been referred to as the ‘hedonic treadmill’. Happiness appears illusory and transient; there’s instability. Adaptation leads to fewer emotional rewards, and along the way possibly squeezes out less-tangible goals that might bear more significantly on quality of life. A sense of entitlement settles in. Whole sets of new wants materialize. As the 19th-century British philosopher John Stuart Mill counseled, ‘I have learned to seek my happiness by limiting my desires, rather than in attempting to satisfy them’.

A powerful influence on happiness, which underscores the nature of wellbeing, is what people fundamentally value — their ideal, conditioned by cultural factors. For example, in pursuing happiness, one nationality may predominantly prefer situations and experiences that thrill, exhilarate, and energise, with satisfaction of the individual at the core. Another nationality may be more predisposed to situations and experiences that promote tranquility, comfort, and composure, with satisfaction of the group at the core. Both of these culturally based models, in their respective ways, allow for citizens to fulfill expectations regarding how to live out life.

Meanwhile, evidence suggests yet another dimension to all this: people tend to recall their personal reactions, such as joy, to activities inaccurately. In reflecting back, there’s greater clarity of what happened toward the end of the activity and diminishing clarity of what happened at earlier stages. As American-Israeli psychologist Daniel Kahneman succinctly expressed it, ‘Remembered happiness is different from experienced happiness.’ Holes or poorly recalled stages of activities get filled in by the mind, based more on what people believe should have happened, reshaping memories and misrepresenting to a degree how they really felt in the moment. The remembered experience — ‘remembered happiness’ — may thus have an unreal quality to it.

Some people believe that free choice, rather than submission to the vagaries of chance, is essential to this experienced happiness. But reality is a mixed bag. Countries that are relatively wealthy and enjoy the social perks of liberal democratic governance tend to feel confident and unthreatened enough to grant their citizens true choice (as a social and political good), which gets manifested in generally higher levels of happiness. Whatever conditions might prompt sharp increases or decreases in happiness, hedonic adaptation will prevail. The key to maintaining at least baseline happiness is to have jurisdiction over how our choices actually play out, not merely to be presented with more choices.

In fact, an abundance of choices can confound and freeze up personal decision-making, as people hesitate to choose when overwhelmed by a multitude of nuanced possibilities. Anxiety over the prospect of less than the best outcomes and the unintended consequences of choice only makes matters worse. This reflects how people exhibit different approaches to evaluating happiness. Yet, paradoxically, citizens who have known no other social scheme may in fact prefer contending with fewer choices. Such is the case, for instance, with autocratic systems of governance, modeled on prescriptive social contracts, which take a characteristically more patriarchic-leaning approach to decisions. Citizens become acclimatized to those conditions, where their level of happiness may change little from the baseline.

Tracking the influences on happiness tells us something important about context and efficacy. That is, the challenge to happiness — and especially efforts to control how these influences bear on the amount of happiness people experience from moment to moment — seems tied to the formidable reversion to one’s happiness baseline. The evidence is that hedonic adaptation is a commanding force. By extension, therefore, attempts to appreciably elevate an individual’s happiness quotient, lastingly and not just transiently, by manipulating these influences might have only modest effect. That limited effect may particularly be the case in the context of how Sonja Lyubomirsky, among others, apportions the influences (‘determinants’) of happiness among the three sweeping categories noted in the caption above.

15 April 2018

'Evil': A Brief Search for Understanding

In medieval times, evil was often personified in not-quite-human forms

Posted by Keith Tidman

Plato may have been right in asserting that “There must always be something antagonistic to good.” Yet pause a moment, and wonder exactly why. And what is it about ‘evil’ that means it can be understood and defined equally from both religious and secular viewpoints? I would argue that fundamental to an exploration of both these questions is the notion that for something to be evil, there must be an essential component: moral agency. And on this critical point, it might help to begin with a case where moral agency and evil arguably have converged.

The case in question is repeated uses of chemical weapons in Syria, made all too real recently. Graphic images of gassed children, women, and men, gasping for air and writhing in pain, have circulated globally and shocked people’s sense of humanity. The efficacy of chemical weapons against populations lies not only in the weapons’ lethality but — just as distressingly and perhaps more to the weapons’ purpose — in the resulting terror, shock, and panic, among civilians and combatants alike. Such use of chemical weapons does not take place, however, without someone, indeed many people, making a deliberate, freely made decision to engage in the practice. Here lies the intentionality of deed that infuses human moral agency and, in turn, gives rise to a shared perception that such behaviour aligns with ‘evil’.

One wonders what the calculus was among the instigators (who they are need not concern us, much as it matters from the political standpoint) to begin and sustain the indiscriminate use of chemical weapons. And what were the considerations as to whom to ‘sacrifice’ (the question of presumed human dispensability) in the name of an ideology or a quest for simple self-survival? Were the choices viewed and the decisions made on ‘utilitarian’ grounds? That is, was the intent to maim and kill in such shocking ways to demoralise and dissuade the insurgency’s continuation (short-term consequences), perhaps in the expectation that the conflict would end sooner (longer-term consequences)? Was it part of some larger geopolitical messaging between Russia and the United States? (Some even claim the attacks were orchestrated by the latter to discredit the former...)

Whatever the political scenario, it seems that the ‘deontological’ judgement of the act — the use of chemical weapons — has been lost. This, after all, can only make the use utterly immoral irrespective of consequences. Meanwhile, world hesitancy or confusion fails to stop another atrocity against humanity, and the hesitancy itself has pernicious effects of its own. The 19th-century British philosopher John Stuart Mill underscored this point, observing that:
“A person may cause evil to others not only by his actions but by his inaction, and in either case he is justly accountable to them for the injury.”
Keeping the preceding scenario in Syria in mind, let’s further explore the dimensions of rational moral agency and evil. Although the label ‘evil’ is most familiar when used to qualify the affairs of human beings, it can be used more widely, for example in relation to natural phenomena. Yet I focus here on people because, although predatory animals can and do cause serious harm, even death, I would argue that the behaviour of animals more fittingly falls under the rubric of ‘natural phenomena’ and that only humans are truly capable of evil.

As one distinction, people can readily anticipate — project and understand — the potential for harm, on an existential level; other species probably cannot (research continues). As for differentiating between, say, wrongdoing and full-on evil, context is critical. Another instantiation of evil is history’s many impositions of colonial rule, as practised in all parts of the world. Colonialism not uncommonly oppressed its victims in all manner of scarring ways: sowing fear and injustice, stripping away human rights, inflicting physical and emotional pain, and destroying indigenous traditions.

This tipping point from wrongdoing, say someone under-reporting taxable income or skipping out on paying a restaurant bill, into full-on evil is made evident in further examples. These are deeds that run the gamut: serial murder that preys on communities, terrorist attacks on subway trains, genocide aimed at helpless minority groups, massacres, enslavement of people, torture, abuses of civilians during conflicts, summary executions, and mutilation, as well as child abuse, rape, racism, and environmental destruction. Such atrocities happen because people arrive at freely made choices: deliberateness, leading to causation.

These incidents, and their perpetrators (society condemns both doer and deed), aren’t just ‘wrong’, ‘bad’, or even ‘contemptible’; they’re evil. Even though context matters and can add valuable explanation — circumstances that mitigate or aggravate deeds, including instigators’ motives — rendering judgements about evil is still possible, even if occasionally tenuously. So, for example, mitigation might include being unaware of the harmful consequences of one’s actions, well-meaning intent that unpredictably goes awry, the pernicious effects of a corrupting childhood, or the lack of empathy of a psychopath. Under these conditions, blame and culpability hardly seem appropriate. Aggravation, on the other hand, might be the deliberate, cruel infliction of pain and the pleasure derived from it, such as might occur during the venal kidnapping of a woman or child.

As for a religious dimension to moral agency, such agency might be viewed as applying to a god, in its capacity as creator of the universe. In this model of creation, such a god is seen as serving as the moral agent behind what may be called ‘natural evil’ — from hurricanes, earthquakes, volcanic eruptions, tsunamis, and droughts to illnesses, famine, pain, and grief. These of course often have destructive, even deadly, consequences. Importantly, that such evil occurs in the realm of nature doesn’t award it exceptional status. This, despite occasional claims to the contrary, such as the overly reductionist, but commonplace, assertion of the ancient Roman emperor-philosopher Marcus Aurelius:
 “Nothing is evil which is according to nature.”
In the case of natural events, evil may be seen as stemming less from intentions than from the consequences of such phenomena — starvation, precarious subsistence, homelessness, broken-up families, desolation, widespread chronic disease, rampant infant mortality, breakdown of social systems, malaise, mass exoduses of desperate migrants escaping violence, and gnawing hopelessness.

Such things have prompted faith-based debates over evil in the world. Specifically, if, as commonly assumed by religious adherents, there is a god that’s all-powerful, all-knowing, and all-benevolent, then why is there evil, including our examples above of natural evil? In one familiar take on theodicy, the 4th-century philosopher Saint Augustine offered a partial explanation, averring that:
 “God judged it better to bring good out of evil than to suffer no evil to exist.” 
Other philosophers have asserted that the absence of evil, whereby people could act only for the good (together with a god’s supposed foreknowledge of people’s choices), would render free will unnecessary and, of note, choices predetermined.

Yet, the Gordian knot remains untied: our preceding definition of a god that is all-powerful and all-benevolent would rationally include being able to, as well as wanting to, eliminate evil and the suffering stemming from it. Especially, and surely, in the framework of that god’s own moral agency and unfettered free will. Since, however, evil and suffering are present — ubiquitously and incessantly — a reasonable inquiry is whether a god therefore exists. If one were to conclude that a god does exist, then recurring natural evil might suggest that the god did not create the universe expressly, or at least not entirely, for the benefit of humankind. That is, that humankind isn’t, perhaps, central or exceptional, but rather is incidental, to the universe’s existence. Accordingly, one might presuppose an ontological demotion.

Human moral agency remains core even when it is institutions — for example, governments and organisations of various kinds — that formalise actions. Here, again, the pitiless use of chemical weapons in Syria presents us with a case in point for better understanding institutional behaviour. Importantly, however, even at the institutional level, human beings inescapably remain fundamental and essential to decisions and deeds, while institutions serve as tools to leverage those decisions and deeds. National governments around the world routinely suppress and brutalise minority populations, often with little or no provocation. Put another way, it is the people, as they course through the corridors of institutions, who serve as the central actors. They make, and bear responsibility for, policies.

It is through institutions that people’s decisions and deeds become externalised — ideas instantiated in the form of policies, plans, regulations, acts, and programs. In this model of individual and collective human behaviour, institutions have the capacity for evil, even in cases when bad outcomes are unintended. This affirms, one might note in addressing institutional behaviour, that the 20th-century French novelist and philosopher Albert Camus was perhaps right in observing:
“Good intentions may do as much harm as malevolence if they lack understanding.”
So, to the point: an institution’s ostensibly well-intended policy — for example, freeing up corporate enterprise to create jobs and boost national productivity — may nonetheless unintentionally cause suffering — for example, increased toxins in the soil, water, and air, affecting the health of communities. Here, again, is a way in which effects, and not only intentions, produce bad outcomes.

But at other times, the moral agency behind decisions and deeds perpetrated by institutions’ human occupants may intentionally aim toward evil. Cases range the breadth of actions: launching wars overtly with plunder or hegemonism in mind; instigating pogroms or killing fields; materially disadvantaging people based on identities like race, ethnicity, religion, or national origin (harsh treatment of migrants being a recent example); ignoring the dehumanising and stunting effects of child labour; showing policy disregard as society’s poorest elderly live in squalor; allowing industries to seep toxins into the environment for monetary gain — there are myriad examples. Institutions aren’t, therefore, simply bricks and mortar. They have a pulse, comprising the vision, philosophy, and mission of the people who design and implement their policies, benign or malign.

Evil, then, involves more than what Saint Augustine saw as the ‘privation’ of good — privation of virtuousness, equality, empathy, responsible social stewardship, health, compassion, peace, and so forth. In reality, evil is far less passive than Saint Augustine’s vision suggests. Rather, evil arises from the deliberate, free making of life’s decisions and one’s choice to act on them, in clear contravention of humanity’s wellbeing. Evil is distinguished from the mere absence of good, and is much more than Plato’s insight that there must always be something ‘antagonistic’ to good. In many instances, evil is flagrant, as in our example of the use of chemical weapons in Syria; in other instances, evil is more insidious and sometimes veiled, as in the corruption of government plutocrats invidiously dipping into national coffers at the expense of the populace’s quality of life. In either case, it is evident that evil, whether in its manmade or its natural variant, exists in its own right and thus can be parsed and understood from both the religious and the secular vantage points.

19 November 2017

Freedom of Speech in the Public Square

Posted by Keith Tidman

Free to read the New York Times forever, in Times Square
What should be the policy of a free society toward the public expression of opinion? The First Amendment of the U.S. Constitution required few words to make its point:
‘Congress shall make no law . . . abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the government for a redress of grievances.’
It reveals much about the republic, and the philosophical primacy of freedom of speech, that this was the first of the ten constitutional amendments collectively referred to as the Bill of Rights.

As much as we like to convince ourselves, however, that the public square in the United States is always a bastion of unbridled free speech, lamentably sometimes it’s not. Although we (rightly) find solace in our free-speech rights, at times and in every forum we are too eager to restrict someone else’s privilege, particularly where monopolistic and individualistic thinking may collide. Hot-button issues have flared time and again to test forbearance and deny common ground.

And it is not only liberal ideas but also conservative ones that have come under assault in recent years. When it comes to an absence of tolerance of opinion, there’s ample responsibility to share, across the ideological continuum. Our reaction to an opinion is often swayed by whose philosophical ox is being gored rather than by the rigor of argument. The Enlightenment thinker Voltaire purportedly pushed back against this parochial attitude, in the famous declaration attributed to him:

‘I don’t agree with what you have to say, but I’ll defend to the death your right to say it.’
Yet still, the avalanche of majority opinion, and overwrought claims to ‘unique wisdom’, poses a hazard to the fundamental protection of minority and individual points of view — including beliefs that others might find specious, or even disagreeable.

To be clear, these observations about intolerance in the public square are not intended to advance moral relativism or equivalency. There may indeed be, for want of a better term, ‘absolute truths’ that stand above others, even in the everyday affairs of political, academic, and social policymaking. This reality should not fall prey to pressure from the more clamorous claims of free speech: that the loudest, angriest voices are somehow the truest, as if decibel count and snarling expressions mattered to the urgency and legitimacy of one’s ideas.

Thomas Jefferson like-mindedly referred to ‘the safety with which error of opinion may be tolerated where reason is left free to combat it’. The key is not to fear others’ ideas, as blinkered censorship concedes defeat: that one’s own facts, logic, and ideas are not up to the task of effectively putting others’ opinions to the test, without resort to vitriol or violence.

The risk to society of capriciously shutting down the free flow of ideas was powerfully warned against some one hundred fifty years ago by that Father of Liberalism, the English philosopher John Stuart Mill:
‘Strange it is that men should admit the validity of the arguments for free speech but object to their being “pushed to an extreme”, not seeing that unless the reasons are good for an extreme case, they are not good for any case.’
Mill’s observation is still germane to today’s society: from the halls of government to university campuses to self-appointed bully pulpits to city streets, and venues in-between.

Indeed, as recently as the summer of 2017, the U.S. Supreme Court underscored Mill’s point, setting a high bar in affirming bedrock constitutional protections of even offensive speech. Justice Anthony Kennedy, considered a moderate, wrote:
‘A law that can be directed against speech found offensive to some portion of the public can be turned against minority and dissenting views to the detriment of all. . . . The First Amendment does not entrust that power to the government’s benevolence. Instead, our reliance must be on the substantial safeguards of free and open discussion in a democratic society.’
It is worth noting that the high court opinion was unanimous: both liberal and conservative justices concurred. The long and short of it is that even the shards of hate speech are protected.

As to this issue of forbearance, the 20th-century philosopher Karl Popper introduced his paradox of tolerance: ‘Unlimited tolerance must lead to the disappearance of tolerance’. Popper goes on to assert, with some ambiguity,
‘I do not imply . . . that we should always suppress the utterance of intolerant philosophies; as long as we can counter them by rational argument and keep them in check by public opinion, suppression would certainly be unwise. But we should claim the right to suppress them if necessary even by force’.
The philosopher John Rawls agreed, asserting that a just society must tolerate the intolerant, to avoid itself becoming guilty of intolerance and appearing unjust. However, Rawls evoked reasonable limits ‘when the tolerant sincerely and with reason believe that their own security and that of the institutions of liberty are in danger’. Precisely where that line would be drawn is unclear — left to Supreme Court justices to dissect and delineate, case by case.

Open-mindedness — honoring ideas of all vintages — is a cornerstone of an enlightened society. It allows for the intellectual challenge of contrarian thinking. Contrarians might at times represent a large cohort of society; at other times they simply remain minority (yet influential) iconoclasts. Either way, the power of contrarians’ nonconformance lies in serving as a catalyst for transformational thinking in deciding society’s path into the future.

That’s intellectually healthier than the sides of debates getting caught up in their respective bubbles, with tired ideas ricocheting around without discernible purpose or direction.

Rather than cynicism and finger pointing across the philosophical divide, the unfettered churn of diverse ideas enriches citizens’ minds, informs dialogue, nourishes curiosity, and makes democracy more enlightened and sustainable. In the face of simplistic patriarchal, authoritarian alternatives, free speech releases and channels the flow of ideas. Hyperbole that shuts off the spigot of ideas dampens inventiveness; no one’s ideas are infallible, so no one should have a hand at the ready to close that spigot. As Benjamin Franklin, one of America’s Founding Fathers, prophetically and plainly pronounced in the Pennsylvania Gazette, 17 November 1737:
‘Freedom of speech is a principal pillar of a free government.’
He added that ‘... when this support is taken away, the constitution of a free society is dissolved, and tyranny is erected on its ruins’. Franklin’s point is that the erosion or denial of unfettered speech threatens the foundation of a constitutional, free nation that holds government accountable.

With determination, the unencumbered flow of ideas, leavened by tolerance, can again prevail as the standard of every public square — unshackling discourse, allowing dissent, sowing enlightenment, and delivering a foundational example and legacy of what’s possible by way of public discourse.