
15 August 2022

The Tangled Web We Weave


By Keith Tidman
 

Kant believed, as a universal ethical principle, that lying was always morally wrong. But was he right? And how might we decide that?

 

The eighteenth-century German philosopher asserted that everyone had ‘intrinsic worth’: that people are characteristically rational and free to make their own choices. Lying, he believed, degrades that moral worth, undermining others’ ability to exercise autonomy and make rational decisions, as we presume they would if they possessed the truth.

 

Kant’s ground-level belief in this regard was that we should value others strictly ‘as ends’, and never see people ‘merely as means to ends’ — a maxim still valued and commonly espoused in human affairs today, even if people sometimes come up short.

 

The belief that judgements of morality should be based on universal principles, or ‘directives’, without reference to the practical outcomes, is termed deontology. For example, according to this approach, all lies are immoral and condemnable. There are no attempts to parse right and wrong, to dig into nuance. It’s blanket censure.

 

But it’s easy to think of innumerable drawbacks to this inviolable rule of wholesale condemnation. Consider how you might respond to a terrorist demanding the place and time of a meeting to be held by his intended target. Lying seems the obvious recourse, yet deontologists like Kant would consider such a lie immoral.

 

Virtue ethics, to this extent compatible with Kant’s beliefs, also says that lying is morally wrong. Its reasoning, though, is that lying violates a core virtue: honesty. Virtue ethicists are concerned to protect people’s character, where ‘virtues’ — like fairness, generosity, compassion, courage, fidelity, integrity, prudence, and kindness — lead people to behave in ways others will judge morally laudable.

 

Other philosophers argue that, instead of turning to the rules-based beliefs of Kant and of virtue ethicists, we ought to weigh the (supposed) benefits and harms of a lie’s outcomes. This approach is called consequentialist ethics, mirroring the utilitarianism of the eighteenth- and nineteenth-century philosophers Jeremy Bentham and John Stuart Mill, which emphasised the greatest happiness.

 

Advocates of consequentialism claim that actions, including lying, are morally acceptable when the results of behaviour maximise benefits and minimise harms. A tall order! A lie is not always immoral, so long as outcomes on balance favour the stakeholders.

 

Take the case of your saving a toddler from a burning house. Perhaps, however, you believe in not taking credit for altruism, concerned about being perceived as conceited or self-serving. You thus tell the emergency responders a different story about how the child came to safety — a lie that harms no one. Per Bentham’s utilitarianism, the ‘deception’ in this instance is not immoral.

 

Kant’s dyed-in-the-wool refusal to forgive lies invites examples that challenge the position’s wisdom. Take the historical case of a Jewish woman concealed, from Nazi military occupiers, under the floorboards of a farmer’s cottage. The situation seems clear-cut, perhaps.

 

If grilled by enemy soldiers as to the woman’s whereabouts, the farmer lies rather than doom her to being shot or sent to a concentration camp. The farmer chooses good over bad, echoing consequentialism and virtue ethics. His choice answers the question of whether the lie produces a better outcome than the truth would. It would have been immoral not to lie.

 

Of course, the consequences of lying, even for an honourable person, may sometimes be hard to predict, differing in significant ways from expectations or from the greater good. One may overvalue or undervalue benefits and harms — nontrivial possibilities.

 

But maybe what matters most in gauging consequences is the liar’s motive and goal. As long as the purpose is to benefit, not to beguile or harm, trust remains intact — of great benefit in itself.

 

Consider two more cases as examples. In the first, a doctor knowingly gives a cancer-ridden patient and family false (inflated) hope for recovery from treatment. In the second, a politician knowingly gives constituents false (inflated) expectations of benefits from legislation he sponsored and pushed through.

 

The doctor and politician both engage in ‘deceptions’, but critically with very different intent. Rightly or wrongly, the doctor believes, on personal principle, that he is being kind by lifting the patient’s despondency. And the politician, rightly or wrongly, believes that his hold on his legislative seat will be bolstered, convinced that this is to his constituents’ benefit.

 

From a deontological — rules-focused — standpoint, both lies are immoral. Both parties know that they mislead — that what they say is false. (Though both might prefer to say something like they ‘bent the truth’, as if that were more palatable.) But how about from the standpoint of either consequentialism or virtue ethics?

 

The Roman orator Quintilian is supposed to have advised, ‘A liar should have a good memory’. Handy practical advice, for those who ‘weave tangled webs’, benign or malign, and attempt to evade being called out for duplicity.

 

And damning all lies seems like a crude, blunt tool, of little real value because it is wholly unworkable outside Kant’s absolutist disposition toward the matter; no one could unswervingly meet so rigorous a standard. Indeed, a study by psychologist Robert Feldman claimed that people lie two to three times, in trivial and major ways, for every ten minutes of conversation!

 

However, consequentialism and virtue ethics have their own shortcomings. They leave us with the problematic task of figuring out which consequences and virtues matter most in a given situation, and of tailoring our decisions and actions accordingly. No small feat.

 

So, in parsing which lies on balance are ‘beneficial’ or ‘harmful’, and how to arrive at those assessments, ethicists still haven’t ventured close to crafting an airtight model: one that dots all the i’s and crosses all the t’s of the ethics of lying. 


At the very least, we can say that, no, Kant got it wrong in overbearingly rebuffing all lies as immoral; refusing to seek reasonable exceptions was obvious folly. Yet that may be cold comfort for some, as lapses into excessive risk — weaving ever more tangled webs — court danger for unwary souls.


Meanwhile, those who feel they have been cut more slack than others might be advised to keep Quintilian’s advice close.




* ‘O what a tangled web we weave / When first we practise to deceive’ — Sir Walter Scott, ‘Marmion: A Tale of Flodden Field’.

 

17 November 2019

Getting the Ethics Right: Life and Death Decisions by Self-Driving Cars

Yes, the ethics of driverless cars are complicated.
Image credit: Iyad Rahwan
Posted by Keith Tidman

In 1967, the British philosopher Philippa Foot, daughter of a British Army major and sometime flatmate of the novelist Iris Murdoch,  published an iconic thought experiment illustrating what forever after would be known as ‘the trolley problem’. These are problems that probe our intuitions about whether it is permissible to kill one person to save many.

The issue has intrigued ethicists, sociologists, psychologists, neuroscientists, legal experts, anthropologists, and technologists alike, with recent discussions highlighting its potential relevance to future robots, drones, and self-driving cars, among other ‘smart’, increasingly autonomous technologies.

The classic version of the thought experiment goes along these lines: The driver of a runaway trolley (tram) sees that five people are ahead, working on the main track. He knows that the trolley, if left to continue straight ahead, will kill the five workers. However, the driver spots a side track, where he can choose to redirect the trolley. The catch is that a single worker is toiling on that side track, who will be killed if the driver redirects the trolley. The ethical conundrum is whether the driver should allow the trolley to stay the course and kill the five workers, or alternatively redirect the trolley and kill the single worker.

Many twists on the thought experiment have been explored. One, introduced by the American philosopher Judith Thomson a decade after Foot, involves an observer, aware of the runaway trolley, who sees a person on a bridge above the track. The observer knows that if he pushes the person onto the track, the person’s body will stop the trolley, though killing him. The ethical conundrum is whether the observer should do nothing, allowing the trolley to kill the five workers. Or push the person from the bridge, killing him alone. (Might a person choose, instead, to sacrifice himself for the greater good by leaping from the bridge onto the track?)

The ‘utilitarian’ choice, where consequences matter, is to redirect the trolley and kill the lone worker — or in the second scenario, to push the person from the bridge onto the track. This ‘consequentialist’ calculation, as it’s also known, results in the fewest deaths. On the other hand, the ‘deontological’ choice, where the morality of the act itself matters most, obliges the driver not to redirect the trolley because the act would be immoral — despite the larger number of resulting deaths. The same calculus applies to not pushing the person from the bridge — again, despite the resulting multiple deaths. Where, then, does one’s higher moral obligation lie; is it in acting, or in not acting?

The ‘doctrine of double effect’ might prove germane here. The principle, introduced by Thomas Aquinas in the thirteenth century, says that an act that causes harm, such as injuring or killing someone as a side effect (‘double effect’), may still be moral as long as it promotes some good end (as, let’s say, saving five lives rather than just the one).

Empirical research has shown that redirecting the runaway trolley toward the one worker is considered the easier choice — a utilitarian basis — whereas visceral unease about pushing a person off the bridge is overwhelmingly strong — a deontological basis. Although both acts involve intentionality — resulting in killing one rather than five — it’s seemingly less morally offensive to impersonally pull a lever to redirect the trolley than to place hands on a person and push him off the bridge, sacrificing him for the good of the many.

In similar practical spirit, neuroscience has interestingly connected these reactions to regions of the brain, to show neuronal bases, by viewing subjects in a functional magnetic resonance imaging (fMRI) machine as they thought about trolley-type scenarios. Choosing, through deliberation, to steer the trolley onto the side track, reducing loss of life, resulted in more activity in the prefrontal cortex. Thinking about pushing the person from the bridge onto the track, with the attendant imagery and emotions, resulted in the amygdala showing greater activity. Follow-on studies have shown similar responses.

So, let’s now fast-forward to the 21st century, to look at just one way this thought experiment might, intriguingly, become pertinent to modern technology: self-driving cars. The aim is to marry function with increasingly smart, deep-learning technology. The longer-range goal is for driverless cars to consistently outperform humans along various critical dimensions, especially by reducing human error (estimated to account for some ninety percent of accidents) — while nontrivially easing congestion, improving fuel mileage, and polluting less.

As developers step toward what’s called ‘strong’ artificial intelligence — where AI (machine learning and big data) becomes increasingly capable of human-like functionality — automakers might find it prudent to fold ethics into their thinking. That is, to consider the risks on the road posed to self, passengers, drivers of other vehicles, pedestrians, and property. With the trolley problem in mind, ought, for example, the car’s ‘brain’ favour saving the driver over a pedestrian? A pedestrian over the driver? The young over the old? Women over men? Children over adults? Groups over an individual? And so forth — teasing apart the myriad conceivable circumstances. Societies, drawing from their own cultural norms, might call upon the ethicists and other experts mentioned in the opening paragraph to help get these moral choices ‘right’, in collaboration with policymakers, regulators, and manufacturers.
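To make the conundrum concrete, here is a minimal, purely illustrative sketch (in Python) of how a consequentialist scoring rule might rank a car’s candidate manoeuvres by weighted expected harm. Everything in it — the harm weights, the parties, and names such as HARM_WEIGHTS, Maneuver, and consequentialist_score — is a hypothetical assumption for discussion, not a description of any actual automaker’s system; a deontological alternative would instead impose hard constraints, forbidding certain actions outright regardless of any score.

```python
from dataclasses import dataclass, field
from typing import Dict

# Hypothetical harm weights, standing in for the kind of explicit choices
# societies, regulators, and manufacturers would have to make.
HARM_WEIGHTS: Dict[str, float] = {
    "occupant": 1.0,
    "pedestrian": 1.0,
    "property": 0.1,
}

@dataclass
class Maneuver:
    name: str
    # party -> probability-weighted severity of harm, on a 0..1 scale
    expected_harm: Dict[str, float] = field(default_factory=dict)

def consequentialist_score(m: Maneuver) -> float:
    """Lower is better: the sum of weighted expected harms to all affected parties."""
    return sum(HARM_WEIGHTS.get(party, 1.0) * severity
               for party, severity in m.expected_harm.items())

candidates = [
    Maneuver("stay_in_lane", {"pedestrian": 0.9, "occupant": 0.1}),
    Maneuver("swerve_into_barrier", {"occupant": 0.4, "property": 1.0}),
]

best = min(candidates, key=consequentialist_score)
print(best.name)  # the manoeuvre minimising weighted expected harm
```

The point of the sketch is not the arithmetic but the design choice it exposes: someone must decide what the weights are and who counts as a ‘party’ — precisely the moral questions the trolley problem raises.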

Thought experiments like this have gained new traction in our techno-centric world, including the forward-leaning development of ‘strong’ AI, big data, and powerful machine-learning algorithms for driverless cars: vital tools needed to address conflicting moral priorities as we venture into the longer-range future.

15 April 2018

'Evil': A Brief Search for Understanding

In medieval times, evil was often personified in not-quite-human forms

Posted by Keith Tidman

Plato may have been right in asserting that “There must always be something antagonistic to good.” Yet pause a moment, and wonder exactly why? And also what is it about ‘evil’ that means it can be understood and defined equally from both religious and secularist viewpoints? I would argue that fundamental to an exploration of both these questions is the notion that for something to be evil, there must be an essential component: moral agency. And as to this critical point, it might help to begin with a case where moral agency and evil arguably have converged.

The case in question is the repeated use of chemical weapons in Syria, made all too real recently. Graphic images of gassed children, women, and men, gasping for air and writhing in pain, have circulated globally and shocked people’s sense of humanity. The efficacy of chemical weapons against populations lies not only in the weapons’ lethality but — just as distressingly and perhaps more to the weapons’ purpose — in the resulting terror, shock, and panic, among civilians and combatants alike. Such use of chemical weapons does not take place, however, without someone, indeed many people, making a deliberate, freely made decision to engage in the practice. Here is the intentionality of deed that infuses human moral agency and, in turn, gives rise to a shared perception that such behaviour aligns with ‘evil’.

One wonders what the calculus was among the instigators (who they are need not concern us, much as it matters from the political standpoint) to begin and sustain the indiscriminate use of chemical weapons. And what were the considerations as to whom to 'sacrifice' (the question of presumed human dispensability) in the name of an ideology or a quest for simple self-survival? Were the choices viewed and the decisions made on ‘utilitarian’ grounds? That is, was the intent to maim and kill in such shocking ways to demoralise and dissuade the insurgency’s continuation (short-term consequences), perhaps in the expectation that the conflict would end quicker (longer-term consequences)? Was it part of some larger geopolitical messaging between Russia and the United States? (Some even claim the attacks were orchestrated by the latter to discredit the former...)

Whatever the political scenario, it seems that the ‘deontological’ judgement of the act — the use of chemical weapons — has been lost. That judgement, after all, can only hold the use utterly immoral irrespective of consequences. Meanwhile, world hesitancy or confusion fails to stop another atrocity against humanity, and the hesitancy itself has its own pernicious effects. The 19th-century British philosopher John Stuart Mill underscored this point, observing that:
“A person may cause evil to others not only by his actions but by his inaction, and in either case he is justly accountable to them for the injury.”
Keeping the preceding scenario in Syria in mind, let’s further explore the dimensions of rational moral agency and evil. Although the label ‘evil’ is most familiar when used to qualify the affairs of human beings, it can be used more widely, for example in relation to natural phenomena. Yet I focus here on people because, although predatory animals, for example, can and do cause serious harm, even death, I would argue that the behaviour of animals more fittingly falls under the rubric of ‘natural phenomena’ and that only humans are truly capable of evil.

As one distinction, people can readily anticipate — project and understand — the potential for harm on an existential level; other species probably cannot (with research continuing). As for differentiating between, say, wrongdoing and full-on evil, context is critical. Another instantiation of evil is history’s many impositions of colonial rule, as practised in all parts of the world. It not uncommonly oppressed its victims, in all manner of scarring ways, by sowing fear and injustice, stripping away human rights, inflicting physical and emotional pain, and destroying indigenous traditions.

The tipping point from wrongdoing — say, someone under-reporting taxable income or skipping out on a restaurant bill — into full-on evil is made evident in these additional examples. These are deeds that run the gamut: serial murder that preys on communities, terrorist attacks on subway trains, genocide aimed at helpless minority groups, massacres, enslavement of people, torture, abuses of civilians during conflicts, summary executions, and mutilation, as well as child abuse, rape, racism, and environmental destruction. Such atrocities happen because people arrive at freely made choices: deliberateness, leading to causation.

These incidents, and their perpetrators (society condemns both doer and deed), aren’t just ‘wrong’, or ‘bad’, or even ‘contemptible’; they’re evil. Even though context matters and can add valuable explanation — circumstances that mitigate or aggravate deeds, including instigators’ motives — rendering judgements about evil is still possible, even if occasionally tenuously. So, for example, mitigation might include being unaware of the harmful consequences of one's actions, well-meaning intent that unpredictably goes awry, the pernicious effects of a corrupting childhood, or the lack of empathy of a psychopath. Under these conditions, blame and culpability hardly seem appropriate. Aggravation, on the other hand, might involve the deliberate, cruel infliction of pain and the pleasure derived from it, such as might occur during the venal kidnapping of a woman or child.

As for a religious dimension to moral agency, such agency might be viewed as applying to a god, in the capacity of creator of the universe. In this model of creation, such a god is seen as the moral agent behind what may be called ‘natural evil’ — from hurricanes, earthquakes, volcanic eruptions, tsunamis, and droughts to illnesses, famine, pain, and grief. These of course often have destructive, even deadly, consequences. Importantly, that such evil occurs in the realm of nature doesn’t award it exceptional status. This, despite occasional claims to the contrary, such as the overly reductionist, but commonplace, assertion of the ancient Roman emperor-philosopher Marcus Aurelius:
 “Nothing is evil which is according to nature.”
In the case of natural events, evil may be seen as stemming less from intentions and only from the consequences of such phenomena — starvation, precarious subsistence, homelessness, broken-up families, desolation, widespread chronic diseases, rampant infant mortality, breakdown of social systems, malaise, mass exoduses of desperate migrants escaping violence, and gnawing hopelessness.

Such things have prompted faith-based debates over evil in the world. Specifically, if, as commonly assumed by religious adherents, there is a god that’s all-powerful, all-knowing, and all-benevolent, then why is there evil, including our examples above of natural evil? In one familiar take on theodicy, the 4th-century philosopher Saint Augustine offered a partial explanation, averring that:
 “God judged it better to bring good out of evil than to suffer no evil to exist.” 
Other philosophers have asserted that the absence of evil, where people could only act for the good (along with a god’s supposed foreknowledge of people’s choices), would a priori render free will unnecessary and, of note, would mean that choices are predetermined.

Yet, the Gordian knot remains untied: our preceding definition of a god that is all-powerful and all-benevolent would rationally include being able to, as well as wanting to, eliminate evil and the suffering stemming from it. Especially, and surely, in the framework of that god’s own moral agency and unfettered free will. Since, however, evil and suffering are present — ubiquitously and incessantly — a reasonable inquiry is whether a god therefore exists. If one were to conclude that a god does exist, then recurring natural evil might suggest that the god did not create the universe expressly, or at least not entirely, for the benefit of humankind. That is, that humankind isn’t, perhaps, central or exceptional, but rather is incidental, to the universe’s existence. Accordingly, one might presuppose an ontological demotion.

Human moral agency remains core even when it is institutions — for example, governments and organisations of various kinds — that formalise actions. Here, again, the pitiless use of chemical weapons in Syria presents us with a case in point to better understand institutional behaviour. Importantly, however, even at the institutional level, human beings inescapably remain fundamental and essential to decisions and deeds, while institutions serve as tools to leverage those decisions and deeds. National governments around the world routinely suppress and brutalise minority populations, often with little or no provocation. Put another way, it is the people, as they course through the corridors of institutions, who serve as the central actors. They make, and bear responsibility for, policies.

It is through institutions that people’s decisions and deeds become externalised — ideas instantiated in the form of policies, plans, regulations, acts, and programs. In this model of individual and collective human behaviour, institutions have the capacity for evil, even in cases where bad outcomes are unintended. This affirms, one might note in addressing institutional behaviour, that the 20th-century French novelist and philosopher Albert Camus was perhaps right in observing:
“Good intentions may do as much harm as malevolence if they lack understanding.”
So, to the point: an institution’s ostensibly well-intended policy — for example, freeing up corporate enterprise to create jobs and boost national productivity — may nonetheless unintentionally cause suffering — for example, increased toxins in the soil, water, and air, affecting the health of communities. Here, again, is a way in which effects, not only intentions, produce bad outcomes.

But at other times, the moral agency behind decisions and deeds perpetrated by institutions’ human occupants may intentionally aim toward evil. Cases span the breadth of actions: launching wars overtly with plunder or hegemonism in mind; instigating pogroms or killing fields; materially disadvantaging people based on identities like race, ethnicity, religion, or national origin (harsh treatment of migrants being a recent example); ignoring the dehumanising and stunting effects of child labour; showing policy disregard as society’s poorest elderly live in squalor; allowing industries to seep toxins into the environment for monetary gain — there are myriad examples. Institutions aren’t, therefore, simply bricks and mortar. They have a pulse, comprising the vision, philosophy, and mission of the people who design and implement their policies, benign or malign.

Evil, then, involves more than what Saint Augustine saw as the ‘privation’ of good — the privation of virtuousness, equality, empathy, responsible social stewardship, health, compassion, peace, and so forth. In reality, evil is far less passive than Saint Augustine’s vision. Rather, evil arises from the deliberate, free making of decisions and the choice to act on them, in clear contravention of humanity’s well-being. Evil is distinguished from the mere absence of good, and is much more than Plato’s insight that there must always be something ‘antagonistic’ to good. In many instances, evil is flagrant, such as in our example of the use of chemical weapons in Syria; in other instances, it is more insidious and sometimes veiled, such as in the corruption of government plutocrats invidiously dipping into national coffers at the expense of the populace's quality of life. In either case, it is evident that evil, whether in its manmade or its natural variant, exists in its own right and thus can be parsed and understood from both the religious and the secular vantage point.

25 June 2017

The Death Penalty: An Argument for Global Abolition


Posted by Keith Tidman

In 1957, Albert Camus wrote an essay called Reflections on the Guillotine. As well as arguing against capital punishment on grounds of principle, he also spoke of the ineffectiveness of the punishment:
‘According to one magistrate, the overwhelming majority of the murderers he had tried did not know, when they shaved themselves that morning, that they were going to kill someone that night. In short, capital punishment cannot intimidate the man who throws himself upon crime as one throws oneself into misery.’
For myself, too, the death penalty is an archaic practice, a vestige with no place in a 21st-century world. In the arena of constitutional law, the death penalty amounts to ‘cruel and unusual’ (inhumane) punishment. In the arena of ethics, the death penalty is an immoral assault on human rights, dignity, and life’s preeminence.

Through the millennia, social norms habitually tethered criminal punishment to ‘retribution’ — which, minus the rhetorical dressing, distils to ‘revenge’. ‘Due process of law’ and ‘equal protection under the law’ were random, rare, and capricious. In exercising retribution, societies shunted aside the rule of authentic proportionality, with execution the go-to punishment for a far-ranging set of offences, both big and small — murder only one among them. In some societies, matters like corruption, treason, terrorism, antigovernment agitation, and even select ‘antisocial’ behaviours likewise qualified for execution — and other extreme recourses — shades of which linger today.

Resort through the ages to state-sanctioned, ceremonial killing (and other severe corporal punishment) reflected the prevailing norms of societies, with little stock placed on the deep-rooted, inviolable value of human life. The aim was variously to control, coerce, impose suffering, and ultimately dehumanise — very much as enemies in war find it easier to kill if they create ‘subhuman’ caricatures of the enemy. Despite the death penalty’s barbarity, some present-day societies retain this remnant from humanity’s darker past: According to Amnesty International, twenty-three countries — scattered among the Asia-Pacific, Africa, the United States in the Americas, and Belarus in Europe — carried out executions in 2016; while fifty-five countries sentenced people to death that year.

But condemnation of the death penalty does not, of course, preclude imposing harsh punishment for criminal activity. Even the most progressive, liberally democratic countries, abiding by enlightened notions of justice, appropriately accommodate strict punishment — though well short of society’s premeditatedly killing its citizens through application of the death penalty. The aims of severe punishment may be several and, for sure, reasonable: to preserve social orderliness, disincentivise criminal behaviour, mollify victims, reinforce legal canon, express moral indignation, cement a vision of fairness, and reprimand those found culpable. Largely fair objectives, if exercised dispassionately through due process of law. These principles are fundamental and immutable to civil, working — and rules-based — societies. Nowhere, however, does the death penalty fit in there; and nowhere is it obvious that death is a proportionate (and just) response to murder.
________________________________________

‘One ought not return injustice
for injustice’ — Socrates
________________________________________

Let’s take a moment, then, to look at punishment. Sentencing may be couched as ‘consequentialist’, in which case punishment’s purpose is utilitarian and forward looking. That is, punishment for wrongdoing anticipates future outcomes for society, such as eliminating (or more realistically, curtailing) criminal behaviour. The general interest and welfare of society — decidedly abstract notions, subject to various definitions — serve as the desired and sufficient end state.

Alternatively, punishment may be couched as ‘deontological’. In that event, the deed of punishment is itself considered a moral good, apart from consequences. Deontology entails rules-based ethics — living under the rule of law, as a norm within either liberal or conservative societies and systems of governance — while still attaining retributive objectives. Or, commonly, punishment may be understood as an alliance of both consequentialism and deontology. Regardless of choice — whether emphasis is on consequentialism or deontology or a hybrid of the two — the risk of punishing the innocent, especially given the irreversibility of the death penalty in the case of discovered mistakes, looms large. As such, the choice among consequentialism, deontology, or a hybrid matters little to any attempt to support a case for capital punishment.

Furthermore, the meting out of justice works only if knowledge is reliable and certain: knowledge of individuals’ culpability, the competence of defence and prosecutorial lawyers, unbiased evidence (both exculpatory and inculpatory), the randomness of convictions across demographics, the sense of just deserts, the fairness of particular punishments (proportionality), and the prospective benefits to society of specific punitive measures. Broadly speaking: what do we know, how do we know it, and what weight does each consideration carry — epistemological issues bound up with the ethical ones. In many instances, racial, ethnic, gender, educational, or socioeconomic prejudices (toward defendants and victims alike) skew considerations of guilt and, in particular, the discretionary imposition of the death penalty. In some countries, politics and ideology — even what’s perceived to threaten a regime’s legitimacy — may damn the accused. To those sociological extents, ‘equal protection of the law’ becomes largely moot.

Yet at the core, neither consequentialism — purported gains to society from punishment’s outcomes — nor deontology — purported intrinsic, self-evident morality of particular sentences — rises to the level of sufficiently undergirding the ethical case for resorting to the death penalty. Nor does retribution (revenge) or proportionality (‘eye for an eye, tooth for a tooth’). After all, whether death is the proportionate response to murder remains highly suspect. Indeed, no qualitative or quantitative logic, no matter how elegantly crafted, successfully supports society’s recourse to premeditatedly and ceremoniously executing citizens as part of its penal code.
_____________________________________________

‘Capital punishment is the most
premeditated of murders’ — Albert Camus
_____________________________________________

There is no public-safety angle, furthermore, that could not be served equally well by lifetime incarceration — without, if so adjudged, consideration of rehabilitation and redemption, and thus without the possibility of parole. Indeed, evidence does not point to the death penalty improving public safety. For example, the death penalty has no deterrent value — that is, perpetrators don’t first contemplate the possibility of execution in calculating whether or not to commit murder or other violent crime. The starting position therefore ought to be that human life is sacrosanct — life’s natural origins, its natural course, and its natural end. Society ought not deviate from that principle in normalising particular punishments for criminal — even heinously criminal — behaviour. The guiding moral principle is singular: that it’s ethically unprincipled for a government to premeditatedly take its citizens’ lives in order to punish, a measure that morally sullies the society condoning it.

Society’s applying the death penalty as an institutional sentence for a crime is a cruel vestige of a time when life was less sacred and society (the elite, that is) was less inclined to censure its own behaviour: executing intentionally in order, with glaring irony, to demonstrate that killing is wrong. Society cannot compartmentalise this lethal deed, purporting that the sanctioned death penalty is the exception to the ethical rule not to kill premeditatedly. Indeed, as Salil Shetty, secretary-general of Amnesty International, laconically observed, ‘the death penalty is a symptom of a culture of violence, not a solution to it’.

Although individuals, like victims’ family members, may instinctively and viscerally want society to lash out in revenge on their behalf — a wish with which many people may equally instinctively and understandably sympathise — it’s incumbent upon society to administer justice rationally, impartially, and, yes, even dispassionately. With no carve-out for excepted crimes, no matter how odious, the death penalty is a corrosive practice that flagrantly mocks the basis of humanity and civilisation — that is, it scorns the very notion of a ‘civil’ society.

The death penalty is a historical legacy that should thus be consigned to the dustbin. States across the globe have no higher, more sober moral stake than to strike the death penalty from their legal codes and practices. With enough time, it will happen; the future augurs a world without state-sanctioned execution, that misdirected exercise in the absolute power of government.