
12 June 2022

The Diamond–Water Paradox


All that glitters is not gold! Or at least, is not worth as much as gold. Here, richly interwoven cubic crystals of light metallic golden pyrite – also known as fool’s gold – are striking, but nowhere near as valuable as gold. Why’s that?

By Keith Tidman


One of the notable contributions of the Enlightenment philosopher, Adam Smith, to the development of modern economics concerned the so-called ‘paradox of value’.

That is, why is one of the most critical items in people’s lives, water, typically valued far less than, say, a diamond, which may be a nice decorative bauble to flaunt but is considerably less essential to life? As Smith couched the issue in his magnum opus, An Inquiry Into the Nature and Causes of the Wealth of Nations (1776):
‘Nothing is more useful than water: but it will purchase scarcely anything; scarcely anything can be had in exchange for it. A diamond, on the contrary, has scarcely any use-value; but a very great quantity of other goods may frequently be had in exchange for it’.
It turns out that the question has deep roots, dating back more than two millennia, explored by Plato and Aristotle, as well as later luminaries, like the seventeenth-century philosopher John Locke and eighteenth-century economist John Law.

For Aristotle, the solution to the paradox involved distinguishing between two kinds of ‘value’: the value of a product in its use, such as water in slaking thirst, and its value in exchange, epitomised by a precious metal conveying the power to buy, or barter for, another good or service.

But, in the minds of later thinkers on the topic, that explanation seemed not to suffice. So, Smith came at the paradox differently, through the theory of the ‘cost of production’ — the expenditure of capital and labour. In many regions of the world, where rain is plentiful, water is easy to find and retrieve in abundance, perhaps by digging a well, or walking to a river or lake, or simply turning on a kitchen faucet. However, diamonds are everywhere harder to find, retrieve, and prepare.

Of course, that balance in value might dramatically tip in water’s favour in largely barren regions, where droughts may be commonplace — with consequences for food security, infant survival, and disease prevalence — with local inhabitants therefore rightly and necessarily regarding water as precious in and of itself. So context matters.

Clearly, however, for someone lost in the desert, parched and staggering around under a blistering sun, the use-value of water exceeds that of a diamond. ‘Utility’ in this instance is how well something gratifies a person’s wants or needs, a subjective measure. Accordingly, John Locke, too, pinned a commodity’s value to its utility — the satisfaction that a good or service gives someone.

For such a person dying of thirst in the desert, ‘opportunity cost’, or what they could obtain in exchange for a diamond at a later time (what’s lost in giving up the other choice), wouldn’t matter — especially if they otherwise couldn’t be assured of making it safely out of the broiling sand alive and healthy.

But what if, instead, that same choice between water and a diamond is reliably offered to the person every fifteen minutes rather than as a one-off? It now makes sense, let’s say, to opt for a diamond three times out of the four offers made each hour, and to choose water once an hour, since access to one additional unit (bottle) of water each hour will suffice for survival and health, securing the individual’s safe exit from the desert. This scenario captures the so-called ‘marginal utility’ explanation of value.

However, as with many things in life, the more water an individual acquires in even this harsh desert setting, with basic needs met, the less useful or gratifying the water becomes, referred to as the ‘law of diminishing marginal utility’. An extra unit of water gives very little or even no extra satisfaction.

According to ‘marginal utility’, then, a person will use a commodity to meet a need or want, based on a perceived hierarchy of priorities. In the nineteenth century, the Austrian economic theorist Eugen Ritter von Böhm-Bawerk provided an illustration of this concept, exemplified by a farmer owning five sacks of grain:
  • The farmer sets aside the first sack to make bread, for the basics of survival. 
  • He uses the second sack of grain to make yet more bread so that he’s fit enough to perform strenuous work around the farm. 
  • He devotes the third sack to feed his farm animals. 
  • The fourth he uses in distilling alcohol. 
  • And the last sack of grain the farmer uses to feed birds.
If one of those sacks is inexplicably lost, the farmer will not then reduce each of the remaining activities by one-fifth, as that would thoughtlessly cut into higher-priority needs. Instead, he will stop feeding the birds, deemed the least-valuable activity, leaving intact the grain for the four more-valuable activities in order to meet what he deems greater needs.

Accordingly, the next least-productive (least-valuable) sack is the fourth, set aside to make alcohol, which would be sacrificed if another sack is lost. And so on, working backwards, until, in a worst-case situation, the farmer is left with the first sack — that is, the grain essential for feeding him so that he stays alive. This situation of the farmer and his five sacks of grain illustrates how the ‘marginal utility’ of a good is driven by personal judgement of least and highest importance, always within a context.
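To make the farmer’s reasoning concrete, here is a minimal sketch in Python (the ranking and labels are illustrative, adapted from the example above rather than taken from Böhm-Bawerk’s text): uses are ranked by importance, each sack goes to the most important unmet use, and every sack lost is therefore taken from the least-valued remaining use.

```python
# Illustrative sketch of Böhm-Bawerk's five sacks: each sack is assigned to the
# most important unmet use, so a lost sack always costs the least-valued use.

# Uses ranked from highest to lowest priority (labels are illustrative).
USES_BY_PRIORITY = [
    "bread for survival",
    "bread to stay fit for farm work",
    "feed for the farm animals",
    "distilling alcohol",
    "feeding the birds",
]

def allocate(sacks: int) -> list[str]:
    """Assign the available sacks to uses in descending order of importance."""
    return USES_BY_PRIORITY[:sacks]

for sacks in range(5, 0, -1):
    kept = allocate(sacks)
    forgone = USES_BY_PRIORITY[len(kept):]
    print(f"{sacks} sack(s): keep {kept}; forgo {forgone}")
```

As the number of sacks falls, the sketch shows that the ‘marginal’ sack is always the one serving the lowest-ranked remaining need, which is why its loss is felt least.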

Life today provides contemporary instances of this paradox of value.

Consider, for example, how society pays individual megastars in entertainment and sports vastly more than, say, school teachers. This is so, even though citizens insist they highly value teachers, entrusting them with educating the next generation for society’s future competitive economic development. Megastar entertainers and athletes are of course rare, while teachers are plentiful. According to diminishing marginal utility, acquiring one more teacher is easier and cheaper than acquiring one more top entertainer or athlete.

Consider, too, collectables like historical stamps and ancient coins. Removed from their original purpose, these commodities no longer have use-value. Yet ‘a very great quantity of other goods may frequently be had in exchange’ for them, to evoke Smith’s diamond analogue. Factors like scarcity, condition, provenance, and subjective constructs of worth in the minds of the collector community fuel value, when swapping, selling, buying — or exchanging for other goods and services.

Of course, the dynamics of value can prove brittle. History has taught us that many times. Recall, for example, the exuberant valuing of tulips in seventeenth-century Holland. Speculation in tulips skyrocketed — with some varieties worth more than houses in Amsterdam — in what was surely one of the most-curious bubbles ever. Eventually, tulipmania came to a sudden end; however, whether the valuing of, say, today’s cryptocurrencies, which are digital, intangible, and volatile, will follow suit and falter, or compete indefinitely with dollars, euros, pounds, and renminbi, remains an unclosed chapter in the paradox of value.

Ultimately, value is demonstrably an emergent construct of the mind, whereby knowledge, as perhaps the most-ubiquitous commodity, poses a special paradoxical case. Knowledge has value simultaneously and equally in its use and in its exchange. In the former, that is in its use, knowledge is applied to acquire one’s own needs and wants; in the latter, that is in its exchange, knowledge becomes of benefit to others in acquiring their needs and wants. Is there perhaps a solution to Smith’s paradox here?

12 September 2021

The Play of Old and New

by Andrew Porter
In trying to figure out what's valuable in the old and the new, what should we keep or discard? Should change be invited or checked?
We know there is a relationship between the old and the new. It's both complex and fascinating. What is established may stay in place, or it may be replaced and perish.

If we want to help change society, or government, or ourselves for the better, how much of the old should we keep, and how much discard? Is modest reform in order, or a revolution? Should the depletion of, say, rain forests be allowed or prevented?

Aristotle delineated 'potential' as material, and 'actual' as form. We gather, therefore, that what exists is often on its way to completion, whereas the goal is the actual. This contrasts with the view that what exists is the 'actual', while future possibilities are 'potential'. Added to this is the fact that the old was once new, and the new will become old.

It might help us clarify the relationship if we can articulate the flow of old to new in real time.

Should we see it as a flow, or as a fixed contrast? What does a dynamic tension mean in this case? Is the new a rejection of the ossification of the old, or is it in harmony with it? How do old and new relate to the metaphysical principles of Order and Freedom? Are the old and the new in a dance with each other, the new emerging from the potentiality which already exists? Does novelty merely help advance and develop what has been?

Something that goes on throughout nature may sort much of this out. We regenerate skin and bone and muscle tissue, while certain sets of brain cells endure past these changes. It is all us. We are old and new.

Take a reformer, in politics or elsewhere, who wants to enact significant change. They have to deal both with the old and with the new. Existing patterns to overcome, new ideas to consider and implement. How will society change? How much hold ought it to have? The old is a mixed bag. How justified is the new? Potential beckons, but is it in that which exists, or in the ends at which a process aims?

Old and new act as permeable membranes to each other, each in flux in relation to the other. Novelty is in the potential of current things. A reformer usually tries to jettison a large chunk of the old, but, like their own body, must keep a substantial part of it. Imagine if both current existents and new emergences followed a reason. Would it be different in nature than in human life?

I'll skip away now with the questions. I blithely leave the answers to you.


Photo credit GharPedia

18 July 2021

The ‘Common Good’ and Equality of Opportunity

Adam Smith, the 18th-century Scottish philosopher, warned against both monopoly interests and government intervention in private economic arrangements.

Posted by Keith Tidman
 

Every nation grapples with balancing things that benefit the community as a whole — the common good — and those that benefit individuals — the private good. Untangling which things fall under each of the two rubrics is just one of the challenges. Decisions hinge on a nation’s history, political philosophy, approach to governance, and the general will of its citizenry.

 

At the core is recognition that community, civic relationships, and interdependencies matter in building a just society, as what is ‘just’ is a shared enterprise based on liberal Enlightenment principles around rights and ethics. Acting on this recognition drives whether a nation’s social system allows for every individual to benefit impartially from its bounty.

 

Although capitalism has proven to be the most-dynamic engine of nations’ wealth in terms of gross domestic product, it also commonly fosters gaping inequality between the multibillionaires and the many tens of millions of people left destitute. There are those left without homes, without food, without medical care — and without hope. As philosopher and political economist Adam Smith observed: 


‘Wherever there is great property there is great inequality. For one very rich man there must be at least five hundred poor, and the affluence of the few supposes the indigence of the many’.


Today, this gap between the two extreme poles in wealth inequality is widening and becoming uglier in both material and moral terms. Among the worst injustices, however, is inequality not only of income or of wealth — the two traditional standards of inequality — but (underlying them both) inequality of opportunity. Opportunity as in access to education or training, meaningful work, a home in which to raise a family, leisure activity, the chance to excel unhampered by caste or discrimination. Such benefits ultimately stem from opportunity, without which there is little by way of quality of life.

 

I would argue that the presence or absence of opportunity in life is the root of whether society is fair and just and moral. The notion of the common good, as a civically moral imperative, reaches back to the ancient world, adjusting in accordance with the passage and rhythm of history and the gyrations of social composition. Aristotle stated in the Politics that ‘governments, which have a regard to the common interest, are constituted in accordance with strict principles of justice’.

 

The cornerstone of the common good is shared conditions, facilities, and establishments that redound to every citizen’s benefit. A foundation where freedom, autonomy, agency, and self-governance are realised through collective participation. Not as atomised citizens, with narrow self-interests. And not where society myopically hails populist individual rights and liberties. But rather through communal action in the spirit of liberalised markets and liberalised constitutional government institutions.

 

Common examples include law courts and an impartial system of justice, accessible public healthcare, civic-minded policing and order, affordable and sufficient food, a thriving economic system, national defence to safeguard peace, well-maintained infrastructure, a responsive system of governance, accessible public education, libraries and museums, protection of the environment, and public transportation.

 

The cornerstone of the private good is individual rights, with which the common good must be seeded and counterweighted. These rights, or civic liberties, commonly include those of free speech, conscience, public assembly, and religion. As well as rights to life, personal property, petition of the government, privacy, fair trial (due process), movement, and safety. That is, natural, inalienable human rights that governments ought not attempt to take away but rather ought always to protect.

 

One challenge is how to manage the potential pluralism of a society, where there are dissimilar interest groups (constituencies) whose objectives might conflict. In modern societies, these dissimilar groups are many, divided along lines of race, ethnicity, gender, country of origin, religion, and socioeconomic rank. Establishing a common good from such a mix is something society may find difficult.

 

A second challenge is how to settle the predictable differences of opinion over the relative worth of those values that align with the common good and the private good. When it comes to ‘best’ government and social policy, there must be caution not to allow the shrillest voices, whether among the majority or minority of society, to crowd out others’ opinions. The risk is in opportunity undeservedly accruing to one group in society.

 

Just as the common good requires that everyone has access to it, it requires that all of us help to sustain it. The common good commands effort, including a sharing of burdens and occasional sacrifice. When people benefit from the common good but choose not to help sustain it (perhaps like a manufacturer’s operators ignoring their civic obligation and polluting air and water, even as they expect access themselves to clean resources), they freeload.

 

Merit will always matter, of course, but as only one variable in the calculus of opportunity. And so, to mitigate inequality of opportunity, the common good may call for a ‘distributive’ element. Distributive justice emphasises the allocation of shared outcomes and benefits so as to uplift the least-advantaged members of society, based on access, participation, proportionality, need, and impartiality.

 

Government policy and social conscience are both pivotal in ensuring that merit doesn’t recklessly eclipse or cancel equality of opportunity. Solutions for access to improved education, work, healthcare, legal justice, and myriad other necessities to establish a floor to quality of life are as much political as social. It is through such measures that we see how sincere society’s concerns really are — for the common good.

07 February 2021

Will Democracy Survive?

Image via https://www.ancient-origins.net/history-famous-people/cleisthenes-father-democracy-invented-form-government-has-endured-over-021247

Cleisthenes, the Father of Democracy, Invented a Form of Government That Has Endured for 2,500 Years


Posted by Keith Tidman

How well is democracy faring? Will democracy emerge from despots’ modern-day assaults unscathed?

Some 2,500 years ago there was a bold experiment: Democracy was born in Athens. The name of this daring form of governance sprang from two Greek words (demos and kratos), meaning ‘rule by the people’. Democracy offered the public a voice. The political reformer Cleisthenes is the acknowledged ‘father of democracy’, setting up one of ancient Greece’s most-lasting contributions to the modern world.

 

In Athens, the brand was direct democracy, where citizens composed an assembly as the governing body, writing laws on which citizens had the right to vote. The assembly also decided matters of war and foreign policy. A council of representatives, chosen by lot from the ten Athenian tribes, was responsible for everyday governance. And the courts, in which citizens brought cases before jurors selected from the populace by a lottery, formed the third branch. Aristotle believed the courts ‘contributed most to the strength of democracy’.

 

As the ancient Greek historian, Herodotus, put it, in this democratic experiment ‘there is, first, that most splendid of virtues, equality before the law’. Yet, there was a major proviso to this ‘equality’: Only ‘citizens’ were qualified to take part, who were limited to free males — less than half of Athens’s population — excluding women, immigrants, and slaves.

 

Nor did every Greek philosopher or historian in the ancient world share Herodotus’s enthusiasm for democracy’s ‘splendid virtues’. Some found various ways to express the idea that one unsavoury product of democracy was mob rule. Socrates, as Plato recalls in the Republic, referred unsparingly to the ‘foolish leaders of democracy . . . full of disorder, and dispensing a sort of equality to equals and unequals alike’.

 

Others, like the historian Thucydides, Aristotle, the playwright Aristophanes, the historian and philosopher Xenophon, and the anonymous writer dubbed the Old Oligarch, expanded on this thinking. They critiqued democracy for dragging with it the citizens’ perceived faults, including ignorance, lack of virtue, corruptibility, shortsightedness, tyranny of the collective, selfishness, and deceptive sway by the specious rhetoric of orators. No matter, Athens’s democracy endured 200 years, before ceding ground to aristocratic-styled rule: what Herodotus labeled ‘the one man, the best’.

 

Many of the deprecations that ancient Greece’s philosophers heaped upon democratic governance and the ‘masses’ are redolent of the problems that democracy, in its representative form, would face again.


Such internal contradictions recently resulted in the United States, the longest-standing democratic republic in the modern world, having its Congress assailed by a mob, in an abortive attempt to stymie the legislators’ certification of the results of the presidential election. However, order was restored that same day (and congressional certification of the democratic will completed). The inauguration of the new president took place without incident, on the date constitutionally laid out. Democracy working.

 

Yet, around the world, in increasing numbers of countries, people doubt democracy’s ability to advance citizens’ interests. Disillusion and cynicism have settled in. Autocrats and firebrands have gladly filled that vacuum of faith. They scoff at democracy. The rule of law has declined, as reported by the World Justice Project. Its index has documented sharp falloffs in the robustness of proscriptions on government abuse and extravagant power. Freedom House has similarly reported on the tenuousness of government accountability, human rights, and civil liberties. ‘Rulers for life’ dot the global landscape.

 

That democracy and freedoms have absorbed body blows around the world has been underscored by attacks from populist leaders who rebuff pluralism and hijack power to nurture their own ambitions and those of closely orbiting supporters. A triumphalism achieved at the public’s expense. In parts of Eastern Europe, Asia Pacific, sub-Saharan Africa, the Middle East and North Africa, South and Central America, and elsewhere. The result has been to weaken free speech and press, free religious expression, free assembly, independence of judiciaries, petition of the government, checks on corruption, and other rights, norms, and expectations in more and more countries.


Examples of national leaders turning back democracy in favour of authoritarian rule stretch worldwide. Central Europe's populist overreach, of concern to the European Union, has been displayed in abruptly curtailing freedoms, abolishing democratic checks and balances, self-servingly politicising systems of justice, and brazen leaders acquiring unlimited power indefinitely.


Some Latin American countries, too, have experienced waning democracy, accompanied by turns to populist governments and illiberal policies. Destabilised counterbalances to government authority, acute socioeconomic inequalities, attacks on human rights and civic engagement, emphasis on law and order, leanings toward surveillance states, and power-ravenous leaders have symbolised the backsliding.

 

Such cases notwithstanding, people do have agency to dissent and intervene in their destiny, which is, after all, the crux of democracy. Citizens are not confined to abetting or turning a blind eye toward strongmen’s grab for control of the levers of power or ultranationalistic penchants. In particular, there might be reforms, inspired by ancient Athens’s novel experiment, to bolster democracy’s appeal, shifting power from the acquisitive hands of elites and restoring citizens’ faith. 

 

One systemic course correction might be to return to the variant of direct democracy of Aristotle’s Athens, or at least a hybrid of it, where policymaking becomes a far more populous activity. Decisions and policy are moulded by what the citizens decide and decree. A counterweight to wholly representative democracy: the latter emboldens politicians, encouraging the conceit of self-styled philosopher-kings who mistakenly presume their judgment surpasses that of citizens.

 

It might behoove democracies to have fewer of these professional politicians, serving as ‘administrators’ clearing roadblocks to the will of the people, while crafting the legal wording of legislation embodying majority public pronouncements on policy. The nomenclature of such a body — assembly, council, congress, parliament, or other — matters little, of course, compared with function: party-less technocrats in direct support of the citizenry.

 

The greatest foe to democracies’ longevity, purity, and salience is often the heavy-handed overreach of elected executives, not insurrectionist armies from within the city gates. Reforms might therefore bear on severe restriction or even elimination of an executive-level figurehead, who otherwise might find the giddy allure of trying to accrete more power irresistible and unquenchable. Other reforms might include:

 

• A return to popular votes and referenda to agree on or reject national and local policies; 

• Normalising of constitutional amendments, to ensure congruence with major social change;

• Fewer terms served in office, to avoid ‘professionalising’ political positions; 

• Limits on campaign length, to motivate focused appeals to electors and voter attentiveness.


Still other reforms might be the public funding of campaigns, to constrain expenditures and, especially, avoid bought candidates. Curtailing of special-interest supplicants, who serve deep-pocketed elites. Ethical and financial reviews to safeguard against corruption, with express accountability. Mandatory voting, on specially designated paid holidays, to solicit all voices for inclusivity. Civic service, based on communal convictions and norms-based standards. And reinvention of public institutions, to amplify pertinence, efficacy, and efficiency.

 

Many more ways to refit democracy’s architecture exist, of course. The starting point, however, is that people must believe democracy works and are prepared to foster it. In the arc of history, democracy is most vulnerable if resignedly allowed to be.

 

Testaments to democracy should be ideas, not majestic buildings or monuments. Despots will not cheerfully yield ground; the swag is too great. Yet ideas, which flourish in liberal democracy, are greater.

 

Above all, an alert, restive citizenry is democracy’s best sentinel: determined to triumph rather than capitulate, despite democracy’s turbulence two and a half millennia after ancient Athens’s audacious experiment. 

13 December 2020

Persuasion v. Manipulation in the Pandemic


Posted by Keith Tidman

Persuasion and manipulation to steer public behaviour are more than just special cases of each other. Manipulation, in particular, risks short-circuiting rational deliberation and free agency. So, where is the line drawn between these two ways of appealing to the public to act in a certain way, to ‘adopt the right behaviour’, especially during the current coronavirus pandemic? And where does the ‘common good’ fit into choices?

 

Consider two related aspects of the current pandemic: mask-wearing and being vaccinated. Based on research, such as that reported on in Nature (‘Face masks: what the data say’, Oct. 2020), mask-wearing is shown to diminish the spread of virus-loaded airborne particles to others, as well as to diminish one’s own exposure to others’ exhaled viruses. 


Many governments, scientists, medical professionals, and public-policy specialists argue that people therefore ought to wear masks, to help mitigate the contagion. A manifestly utilitarian policy position, but one rooted in controversy nonetheless. In the following, I explain why.

 

In some locales, mask-wearing is mandated and backed by sanctions; in other cases, officials seek willing compliance, in the spirit of communitarianism. Implicit in all this is the ethics-based notion of the ‘common good’. That we owe fellow citizens something, in a sense of community-mindedness. And of course, many philosophers have discussed this ‘common good’; indeed, the subject has proven a major thread through Western political and ethical philosophy, dating to ancient thinkers like Plato and Aristotle.


In The Republic, Plato records Socrates as saying that the greatest social good is the ‘cohesion and unity’ that stems from shared feelings of pleasure and pain that result when all members of a society are glad or sorry for the same successes and failures. And Aristotle argues in The Politics, for example, that the concept of community represented by the city-state of his time was ‘established for the sake of some good’, which overarches all other goods.


Two thousand years later, Jean-Jacques Rousseau asserted that citizens’ voluntary, collective commitment — that is, the ‘general will’ or common good of the community — was superior to each person’s ‘private will’. And prominent among recent thinkers to have explored the ‘common good’ is the political philosopher John Rawls, who defined the common good as ‘certain general conditions that are . . . equally to everyone’s advantage’ (A Theory of Justice, 1971).

 

In line with seeking the ‘common good’, many people conclude that being urged to wear a mask falls under the heading of civic-minded persuasion that’s commonsensical. Other people see an overly heavy hand in such measures, which they argue deprives individuals of the right — constitutional, civil, or otherwise — to freely make decisions and take action, or choose not to act. Free agency itself also being a common good, an intrinsic good. For some concerned citizens, compelled mask-wearing smacks of a dictate, falling under the heading of manipulation. Seen, by them, as the loss of agency and autonomous choice.

 

The readying of coronavirus vaccines, including early rollout, has led to its own controversies around choice. Health officials advising the public to roll up their sleeves for the vaccine has run into its own buzzsaw from some quarters. Pragmatic concerns persist: how fast the vaccines were developed and tested, their longer-term efficacy and safety, prioritisation of recipients, assessment of risk across diverse demographics and communities, cloudy public-messaging narratives, cracks in the supply chain, and the perceived politicising of regulatory oversight.


As a result of these concerns, nontrivial numbers of people remain leery, distrusting authority and harbouring qualms. As recent Pew, Gallup, and other polling on these matters unsurprisingly shows, some people might assiduously refuse ever to be vaccinated, or at least resist until greater clarity is shed on what they view as confusing noise or until early results roll in that might reassure. The trend lines will be watched.

 

All the while, officials point to vaccines as key to reaching a high enough level of population immunity to reduce the virus’s threat. Resulting in less contagion and fewer deaths, while allowing besieged economies to reopen with the business, social, and health benefits that entails. For all sorts of reasons — cultural, political, personal — some citizens see officials’ urgings regarding vaccinations as benign, well-intentioned persuasion, while others see it as guileful manipulation. One might consider where the Rawlsian common good fits in, and how the concept sways local, national, and international policy decision-making bearing on vaccine uptake.

 

People are surely entitled to persuade, even intensely. Perhaps on the basis of ethics or social norms or simple honesty: matters of integrity. But they may not be entitled to resort to deception or coercion, even to correct purportedly ‘wrongful’ decisions and behaviours. The worry being that whereas persuasion innocuously induces human behaviour broadly for the common good, coercive manipulation invalidates consent, corrupting the baseline morality of the very process itself. To that point, corrupt means taint ends.

 

Influence and persuasion do not themselves rise to the moral censure of coercive or deceptive manipulation. The word ‘manipulation’, which took on pejorative baggage in the eighteen hundreds, has special usages. Often unscrupulous in purpose, such as to gain unjust advantage. Meantime, persuasion may allow for abridged assumptions, facts, and intentions, to align with community expectations and with hoped-for behavioural outcomes to uphold the common good. A calculation that considers the veracity, sufficiency, and integrity of narratives designed to influence public choices, informed by the behavioural science behind effective public health communications. A subtler way, perhaps, to look at the two-dimensional axes of persuasion versus manipulation.

 

The seedbed of these issues is that people live in social relationships, not as fragmented, isolated, socially disinterested individuals. They live in the completeness of what it means to be citizens. They live within relationships that define the Rawlsian common good. A concept that helps us parse persuasion and manipulation in the framework of inducing societal behaviour: like the real-world cases of mask-wearing and vaccinations, as the global community counterattacks this lethal pandemic.

 

08 November 2020

The Certainty of Uncertainty


Posted by Keith Tidman
 

We favour certainty over uncertainty. That’s understandable. Our subscribing to certainty reassures us that perhaps we do indeed live in a world of absolute truths, and that all we have to do is stay the course in our quest to stitch the pieces of objective reality together.

 

We imagine the pursuit of truths as comprising a lengthening string of eureka moments, as we put a check mark next to each section in our tapestry of reality. But might that reassurance about absolute truths prove illusory? Might it be, instead, ‘uncertainty’ that wins the tussle?

 

Uncertainty taunts us. The pursuit of certainty, on the other hand, gets us closer and closer to reality, that is, closer to believing that there’s actually an external world. But absolute reality remains tantalisingly just beyond our fingertips, perhaps forever.

 

And yet it is uncertainty, not certainty, that incites us to continue conducting the intellectual searches that inform us and our behaviours, even if imperfectly, as we seek a fuller understanding of the world. Even if the reality we think we have glimpsed is one characterised by enough ambiguity to keep surprising and sobering us.

 

The real danger lies in an overly hasty, blinkered turn to certainty. This trust stems from a cognitive bias — the one that causes us to overvalue our knowledge and aptitudes. Psychologists call it the Dunning-Kruger effect.

 

What’s that about then? Well, this effect precludes us from spotting the fallacies in what we think we know, and discerning problems with the conclusions, decisions, predictions, and policies growing out of these presumptions. We fail to recognise our limitations in deconstructing and judging the truth of the narratives we have created, limits that additional research and critical scrutiny so often unmask. 

 

The Achilles’ heel of certainty is our habitual resort to inductive reasoning. Induction occurs when we conclude from many observations that something is universally true: that the past will predict the future. Or, as the Scottish philosopher, David Hume, put it in the eighteenth century, our inferring ‘that instances of which we have had no experience resemble those of which we have had experience’. 

 

A much-cited example of such reasoning consists of someone concluding that, because they have only ever observed white swans, all swans are therefore white — shifting from the specific to the general. Indeed, Aristotle uses the white swan as an example of a logically necessary relationship. Yet, someone spotting just one black swan disproves the generalisation. 

 

Bertrand Russell once set out the issue in this colourful way:

 

‘Domestic animals expect food when they see the person who usually feeds them. We know that all these rather crude expectations of uniformity are liable to be misleading. The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to uniformity of nature would have been useful to the chicken’.

 

The person’s theory that all swans are white — or the chicken’s theory that the man will continue to feed it — can be falsified, which sits at the core of the ‘falsification’ principle developed by the philosopher of science Karl Popper. The heart of this principle is that in science a hypothesis or theory or proposition must be falsifiable, that is, capable of being shown wrong. In other words, it must be testable through evidence. For Popper, a claim that is untestable is simply not scientific.

 

However, a testable hypothesis that is proven through experience to be wrong (falsified) can be revised, or perhaps discarded and replaced by a wholly new proposition or paradigm. This happens in science all the time, of course. But here’s the rub: humanity can’t let uncertainty paralyse progress. As Russell also said: 

 

‘One ought to be able to act vigorously in spite of the doubt. . . . One has in practical life to act upon probabilities’.

 

So, in practice, whether implicitly or explicitly, we accept uncertainty as a condition in all fields — throughout the humanities, social sciences, formal sciences, and natural sciences — especially if we judge the prevailing uncertainty to be tiny enough to live with. Here’s a concrete example, from science.

 

In the 1960s, the British theoretical physicist, Peter Higgs, mathematically predicted the existence of a specific subatomic particle. The last missing piece in the Standard Model of particle physics. But no one had yet seen it, so the elusive particle remained a hypothesis. Only several decades later, in 2012, did CERN’s Large Hadron Collider reveal the particle, whose field is claimed to have the effect of giving all other particles their mass. (Earning Higgs, and the Belgian theorist François Englert, the Nobel Prize in Physics.)

 

The CERN scientists’ announcement said that their confirmation bore ‘five-sigma’ certainty. That is, there was only 1 chance in 3.5 million that what was sighted was a fluke, or something other than the then-named Higgs boson. A level of certainty (or of uncertainty, if you will) that physicists could very comfortably live with. Though as Kyle Cranmer, one of the scientists on the team that discovered the particle, appropriately stresses, there remains an element of uncertainty: 

 

“People want to hear declarative statements, like ‘The probability that there’s a Higgs is 99.9 percent,’ but the real statement has an ‘if’ in there. There’s a conditional. There’s no way to remove the conditional.”
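For readers curious how ‘five sigma’ translates into the ‘1 chance in 3.5 million’ figure, here is a minimal sketch in Python (an illustration of the standard statistical conversion, not CERN’s own calculation; the function name is mine), computing the one-sided tail probability of a normal distribution beyond five standard deviations:

```python
import math

def one_sided_tail_probability(n_sigma: float) -> float:
    """Probability that a standard normal variable exceeds n_sigma."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2.0))

p = one_sided_tail_probability(5.0)
print(f"P beyond 5 sigma ~ {p:.2e}")        # prints about 2.87e-07
print(f"roughly 1 chance in {1 / p:,.0f}")  # roughly 1 in 3.5 million
```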

 

Of course, in everyday life we rarely have to calculate the probability of reality. But we might, through either reasoning or subconscious means, come to conclusions about the likelihood of what we choose to act on as being right, or safely right enough. The stakes of being wrong matter — sometimes a little, other times consequentially. Peter Higgs got it right; Bertrand Russell’s chicken got it wrong.

  

The takeaway from all this is that we cannot know things with absolute epistemic certainty. Theories are provisional. Scepticism is essential. Even wrong theories kindle progress. The so-called ‘theory of everything’ will remain evasively slippery. Yet, we’re aware we know some things with greater certainty than other things. We use that awareness to advantage, informing theory, understanding, and policy, ranging from the esoteric to the everyday.

 

19 January 2020

Environmental Ethics and Climate Change

Posted by Keith Tidman

The signals of a degrading environment are many and on an existential scale, imperilling the world’s ecosystems. Rising surface temperature. Warming oceans. Shrinking Greenland and Antarctic ice sheets. Glacial retreat. Decreased snow cover. Sea-level rise. Declining Arctic sea ice. Increased atmospheric water vapour. Permafrost thawing. Ocean acidification. And not least, supercharged weather events (more often, longer lasting, more intense).

Proxy (indirect) measurements — ice cores, tree rings, corals, ocean sediment — of carbon dioxide, a heat-trapping gas that plays an important role in creating the greenhouse effect on Earth, have spiked dramatically since the beginning of the Industrial Revolution. The measurements underscore that the recent increase far exceeds the natural ups and downs of the previous several hundred thousand years. Human activity — use of fossil fuels to generate energy and run industry, deforestation, cement production, land use changes, modes of travel, and much more — continues to be the accelerant.

The reports of the United Nations’ Intergovernmental Panel on Climate Change, contributed to by some 1,300 independent scientists and other researchers from more than 190 countries worldwide, state that concentrations of carbon dioxide, methane, and nitrous oxides ‘have increased to levels unprecedented in at least 800,000 years’. The level of certainty that human activity is the leading cause, referred to as anthropogenic cause, has been placed at more than 95 percent.

That probability figure has legs, in terms of scientific method. Early logical positivists like A.J. Ayer had asserted that for validity, a scientific proposition must be capable of proof — that is, ‘verification’. Later, however, Karl Popper, in his The Logic of Scientific Discovery, argued that in the case of verification, no number of observations can be conclusive. As Popper said, no matter how many instances of white swans we may have observed, this does not justify the conclusion that all swans are white. (Lo and behold, a black swan shows up.) Instead, Popper said, the scientific test must be whether in principle the proposition can be disproved — referred to as ‘falsification’. Perhaps, then, the appropriate test is not the ability to prove that mankind has affected the Earth’s climate; rather, it is incumbent upon challengers to disprove (falsify) such claims — something that hasn’t happened and likely never will.

As for the ethics of human intervention into the environment, utilitarianism is the usual measure. That is to say, the consequences of human activity upon the environment govern the ethical judgments one makes of behavioural outcomes to nature. However, we must be cautious not to translate consequences solely in terms of benefits or disadvantages to humankind’s welfare; our welfare appropriately matters, of course, but not to the exclusion of all else in our environment. A bias to which we have repeatedly succumbed.

The danger of such skewed calculations may be in sliding into what the philosopher Peter Singer calls ‘speciesism’. This is where, hierarchically, we place the worth of humans above all else in nature, as if the latter is solely at our beck and call. This anthropocentric favouring of ourselves is, I suggest, arbitrary and too narrow. The bias is also arguably misguided, especially if it disregards other species — depriving them of autonomy and inherent rights — irrespective of the sophistication of their consciousness. To this point, the 18th/19th-century utilitarian Jeremy Bentham asserted, ‘Can [animals] feel? If they can, then they deserve moral consideration’.

Assuredly, human beings are endowed with cognition that’s in many ways vastly more sophisticated than that of other species. Yet, without lapsing into speciesism, there seem to be distinct limits to the comparison, to avoid committing what’s referred to as a ‘category mistake’ — in this instance, assigning qualities to species (from orangutans and porpoises to snails and amoebas) that belong only to humans. In other words, an overwrought egalitarianism. Importantly, however, that’s not the be-all of the issue. Our planet is teeming not just with life, but with other features — from mountains to oceans to rainforests — that are arguably more than mere accoutrements for simply enriching our existence. Such features have ‘intrinsic’ or inherent value — that is, they have independent value, apart from the utilitarianism of satisfying our needs and wants.

For perspective, perhaps it would be better to regard humans as nodes in what we consider a complex ‘bionet’. We are integral to nature; nature is integral to us; in their entirety, the two are indissoluble. Hence, while skirting implications of panpsychism — where everything material is thought to have at least an element of consciousness — there should be prima facie respect for all creation: from animate to inanimate. These elements have more than just the ‘instrumental’ value of satisfying the purposes of humans; all of nature is itself intrinsically the ends, not merely the means. Considerations of aesthetics, culture, and science, though important and necessary, aren’t sufficient.

As such, there is an intrinsic moral imperative not only to preserve Earth, but for it and us jointly to flourish — per Aristotle’s notion of ‘virtue’, with respect and care, including for the natural world. It’s a holistic view that concedes, on both the utilitarian and intrinsic sides of the moral equation, mutually serving roles. This position accordingly pushes back against the hubristic idea that human-centricism makes sense if the rest of nature collectively amounts only to a backstage for our purposes. That is, a backstage that provides us with a handy venue where we act out our roles, whose circumstances we try to manage (sometimes ham-fistedly) for self-satisfying purposes, where we tinker ostensibly to improve, and whose worth (virtue) we believe we’re in a position to judge rationally and bias-free.

It’s worth reflecting on a thought experiment, dubbed ‘the last man’, that the Australian philosopher Richard Routley introduced in the 1970s. He envisioned a single person surviving ‘the collapse of the world system’, choosing to go about eliminating ‘every living thing, animal and plant’, knowing that there’s no other person alive to be affected. Routley concluded that ‘one does not have to be committed to esoteric values to regard Mr. Last Man as behaving badly’. Whether Last Man was, or wasn’t, behaving unethically goes to the heart of intrinsic versus utilitarian values regarding nature —and presumptions about human supremacy in that larger calculus.

Groups like the UN Intergovernmental Panel on Climate Change have laid down markers as to tipping points beyond which extreme weather events might lead to disastrously runaway effects on the environment and humanity. Instincts related to the ‘tragedy of the commons’ — where people rapaciously consume natural resources and pollute, disregarding the good of humanity at large — have not yet been surmounted. That some other person, or other community, or other country will shoulder accountability for turning back the wave of environmental destruction and the upward-spiking curve of climate extremes has hampered the adequacy of attempted progress. Nature has thrown down the gauntlet. Will humanity pick it up in time?

08 December 2019

Is Torture Morally Defensible?


Posted by Keith Tidman

Far from being treated as unconscionable, torture has been all but universalised: according to Amnesty International, some 140 countries resort to it, whether for use by domestic police, intelligence agencies, military forces, or other institutions. Incongruously, many of these countries are signatories to the United Nations Convention Against Torture, which forbids the practice, whether carried out domestically or outsourced to countries where torture is legal (by so-called renditions).

Philosophers too are ambivalent, conjuring up difficult scenarios in which torture seems somehow the only reasonable response:
  • An anarchist knows the whereabouts of a powerful bomb set to kill scores of civilians. 
  • A kidnapper has hidden a four-year-old in a makeshift underground box, holding out for a ransom. 
  • An authoritarian government, feeling threatened, has identified the ringleader of swelling political street opposition, and wants to know his accomplices’ names. 
  • Soldiers have a high-ranking captive, who knows details of the enemy’s plans to launch a counteroffensive. 
  • A kingpin drug supplier, and his metastasised network of street traffickers, routinely distributes highly contaminated drugs, resulting in a rash of deaths...

Do any of these hypothetical and real-world events, where information needs to be extracted for urgent purposes, rise to the level of justifying torture? Are there other situations in which society ought morally to consent to torture? If so, for what purposes? Or is torture never morally justified?

One common opinion is that if the outcome of torture is information that saves innocent lives, the practice is morally justified. I would argue that there are at least three aspects to this claim:
  • the multiple lives that will be saved (traded off against the fewer), sometimes referred to as ‘instrumental harm’; 
  • the collective innocence, in contrast to any aspect of culpability, of those people saved from harm; and
  • the overall benefit to society, as best can credibly be predicted with information at hand.
The 18th-century philosopher Jeremy Bentham’s famous phrase that ‘It is the greatest good for the greatest number of people which is the measure of right and wrong’ seems to apply here. Historically, many people have found, rightly or not, that this principle of the ‘greatest good for the greatest number’ rises to the level of common sense, as well as proving simpler to apply in establishing one’s own life doctrine than competing standards — such as discounting outcomes for chosen behaviours.

Other thinkers, such as Joseph Priestley (18th century) and John Stuart Mill (19th century), expressed similar utilitarian arguments, though using the word ‘happiness’ rather than ‘benefit’. (Both terms might, however, strike one as equally cryptic.) Here, the standard of morality is not a rulebook rooted in solemnised creed, but a standard based in everyday principles of usefulness to the many. Torture, too, may be looked at in those lights, speaking to factors like human rights and dignity — or whether individuals, by virtue of the perceived threat, forfeit those rights.

Utilitarianism has been criticised, however, for its obtuse ‘the ends justify the means’ mentality — an approach complicated by the difficulty of predicting consequences. Similarly, some ‘bills of rights’ have attempted to provide pushback against the simple calculus of benefiting the greatest number. Instead, they advance legal positions aimed at protecting the welfare of the few (the minority) against the possible tyranny of the many (the majority). ‘Natural rights’ — the right to life and liberty — inform these protective constitutional provisions.

If torture is approved of in some situations — ‘extreme cases’ or ‘emergencies’, as society might tell itself — the bar might then be lowered in other cases. As a possible fast track in remedying a threat — maybe an extra-judicial fast track — torture is tempting, especially when used ‘for defence’. However, the uneasiness is in torture turning into an obligation — if shrouded in an alleged moral imperative, perhaps to exploit a permissive legal system. This dynamic may prove alluring if society finds it expeditious to shoehorn more cases into the hard-to-parse category of ‘existential risk’.

What remains key is whether society can be trusted to make such grim moral choices — such as those requiring the resort to torture. This blurriness has propelled some toward an ‘absolutist’ stance, censuring torture in all circumstances. The French poet Charles Baudelaire felt that ‘Torture, as the art of discovering truth, is barbaric nonsense’. Paradoxically, however, absolutism in the total ban on torture might itself be regarded as immoral, if the result is death of a kidnapped child or of scores of civilians. That said, there’s no escaping the reality that torture inflicts pain (physical and/or mental), shreds human dignity, and curbs personal sovereignty. To some, many even, it thus must be viewed as reprehensible and irredeemable — decoupled from outcomes.

This is especially apparent if torture is administered to inflict pain, terrorise, humiliate, or dehumanise for purposes of deterrence or punishment. But even if torture is used to extract information — information perhaps vital, as per the scenarios listed at the beginning — there is a problem: the information acquired is suspect, tales invented just to stop pain. Long ago, Aristotle stressed this point, saying plainly: ‘Evidence from torture may be considered utterly untrustworthy’. Even absolutists, however, cannot skip being involved in defining what rises to the threshold of clearer-cut torture and what perhaps falls just below that threshold: grist for considerable contentious debate.

The question remains: can torture ever be justified? And, linked to this, which moral principles might society want to normalise? Is it true, as the French philosopher Jean-Paul Sartre noted, that ‘Torture is senseless violence, born in fear’? As societies grapple with these questions, they reduce the alternatives to two: blanket condemnation of torture (and acceptance of possible dire, even existential consequences of inaction); or instead acceptance of the utility of torture in certain situations, coupled with controversial claims about the correct definitions of the practice.


I would argue one might morally come down on the side of the defensible utility of the practice, albeit in agreed-upon circumstances (like some of those listed above), where human rights are robustly aired side by side with the exigent dangers, potential aftermaths of inertia, and hard choices societies face.

30 September 2018

Picture Post #38 What Happened Next to the White Rabbit



'Because things don’t appear to be the known thing; they aren’t what they seemed to be neither will they become what they might appear to become.'

Posted by Tessa den Uyl and Martin Cohen

A shop window in Paris, captured en passant by Tessa den Uyl
      
This cosy scene, reminiscent of Lewis Carroll’s imaginary ‘wonderland’, is in fact something rather more grim.

It would be no surprise to discover a boar’s head hanging on the wall of a hunter’s lodge. But today, to encounter embalmed animals in non-rural houses more often reminds us of gestures of excess that echo as non-virtuous.

This shop window in the centre of Paris offers a sitting room full of real dead animals. Yet perhaps it is not the embalmed animals that particularly draw the attention here, but rather the way that they are displayed with more or less anthropomorphic features.

The White Rabbit, in Lewis Carroll's famous story, Alice in Wonderland, occupies a particular role: he appears at the very beginning of the book, in chapter one, wearing a waistcoat, carrying a pocket watch, and in a great hurry muttering ‘Oh dear! Oh dear! I shall be too late!’ And Alice encounters him again at a stressful moment in the adventure when she finds herself trapped in his house after growing too large.

Most emblematic of all though, the Rabbit reappears as a servant of the King and Queen of Hearts in the closing chapters of the book, reading out bizarre verses as ‘evidence’ against Alice. In this scene, the stuffed white rabbit, too, seems to have a prosecutorial air, rather as though the animal is a judge surrounded by courtroom flunkeys.

In Alice’s case, the White Rabbit’s case for the prosecution is so convincing that the Queen of Hearts immediately announces ‘Off with her head!’ at which point, mercifully, Alice wakes up. In this real-life shop, too, a similar return to earth is marked by a neatly framed message held by the only fake animal in the shop. It notifies the observer that all the animals have died naturally in zoos or zoological parks. Potential clients can presumably put their consciences at ease.

Aristotle mentioned that art is a representation of life, of character, of emotion and actions; and in contemporary art, animals in formaldehyde are exhibited in famous museums and sold for staggering prices. So why not admire a similar thing by looking into this shop window? Yet there is a repulsion.

Is it a reduction of the animal - or is it rather the excess - the building up of animals into fine decor for homes? Or is the display less commercial than in itself an artistic exploration? Or is it more a philosophical challenge, something to do with Aristotle’s notion that we seek to discover the universal hidden in a world of the everyday and particular?


23 September 2018

Why Is There Something Rather Than Nothing?

For scientists, space is not empty but full of quantum energy
Posted by Keith Tidman

Gottfried Wilhelm Leibniz introduced this inquiry more than three hundred years ago, saying, ‘The first question that should rightly be asked is, “Why is there something rather than nothing?”’ Since then, many philosophers and scientists have likewise pondered this question. Perhaps the most famous restatement of it came in 1929 when the German philosopher, Martin Heidegger, placed it at the heart of his book What Is Metaphysics?: ‘Why are there beings at all, and why not rather nothing?’

Of course, many people around the world turn to a god as a sufficient reason (explanation) for the universe’s existence. Aristotle believed, as did his forerunner Heraclitus, that the world was mutable — everything undergoing perpetual change — which he characterised as movement. He argued that there was a sequence of predecessor causes that led back deep into the past, until reaching an unmoved mover, or Prime Mover (God). An eternal, immaterial, unchanging god exists necessarily, Aristotle believed, itself independent of cause and change.

In the 13th century Saint Thomas Aquinas, a Christian friar, advanced this so-called cosmological view of universal beginnings, likewise perceiving God as the First Cause. Leibniz, in fact, was only proposing something similar, with his Contingency Argument, in the 17th century:

‘The sufficient reason [for the existence of the universe] which needs not further reason must be outside of this series of contingent things and is found in a substance which . . . is a necessary being bearing the reason for its existence within itself. . . .  This final reason for things is called God’ — Leibniz, The Principles of Nature and Grace

However, evoking God as the prime mover or first cause or noncontingent being — arbitrarily, on a priori rather than empirical grounds — does not inescapably make it so. Far from it. The common counterargument maintains that positing a god correspondingly raises the question: if a god exists — has a presence — what was its cause? Assuming, that is, that any thing — ‘nothing’ being the sole exception — must have a cause. So we are still left with the question, famously posed by the theoretical physicist Stephen Hawking, ‘What is it that breathes fire into the equations and makes a universe for them to describe?’ To posit the existence of a god does not, as such, get around the ‘hard problem’: why there is a universe at all, not just why our universe is the way it is.





 
Science has not fared much better in this challenge. The British mathematician and philosopher Bertrand Russell ended up merely declaring in 1948, ‘I should say that the universe is just there, and that’s all’. A ‘brute fact’, as some have called it. Many scientists have embraced similar sentiments: concluding that ‘something’ was inevitable, and that ‘nothingness’ would be impossible. Some go so far as to say that nothingness is unstable, hence again impossible. But these are difficult positions to support unreservedly, given that, as with many scientific and philosophical predecessors and contemporaries, they do not adequately explain why and how. This was, for example, the outlook of Baruch Spinoza, the 17th-century Dutch philosopher who maintained that the universe (with its innumerable initial conditions and subsequent properties) had to exist. Leaping forward to the 20th century, Albert Einstein, himself an admirer of Spinoza’s philosophy, seemed to concur.

Quantum mechanics poses an interesting illustration of the science debate, informing us that empty space is not really empty — not in any absolute sense, anyway. Even what we might consider the most perfect vacuum is actually filled by churning virtual particles — quantum fluctuations — that almost instantaneously flit in and out of existence. Some theoretical physicists have suggested that this so-called ‘quantum vacuum’ is as close to nothingness as we might get. But quantum fluctuations do not equate to nothingness; they are not some modern-day-science equivalent of the non-contingent Prime Mover discussed above. Rather, however flitting and insubstantial, virtual quantum particles are still something.

It is therefore reasonable to inquire into the necessary origins of these quantum fluctuations — an inquiry that requires us to return to an Aristotelian-like chain of causes upon causes, traceable back in time. The notion of a supposed quantum vacuum still doesn’t get us to what might have garnered something from nothing. Hence, the hypothesis that there has always been something — that the quantum vacuum was the universe’s nursery — peels away as an unsupportable claim. Meanwhile, other scientific hypotheses, such as string theory, bid to take the place of Prime Mover. At the heart of the theory is the hypothesis that the fundamental particles of physics are not really ‘points’ as such but rather differently vibrating energy ‘strings’ existing in many more than the familiar dimensions of space-time. Yet these strings, too, do not get us over the hump of something in place of nothing; strings are still ‘something’, whose origins (causes) would beg to be explained.

In addressing these questions, we are not talking about something emerging from nothing, as nothingness by definition would preclude the initial conditions required for the emergence of a universe. Also, ‘nothingness’ is not the mere absence (or opposite) of something; rather, it is possible to regard ‘nothingness’ as theoretically having been just as possible as ‘something’. In light of such modern-day challenges in both science and philosophy, Ludwig Wittgenstein was at least partially right in saying, early in the 20th century (Tractatus Logico-Philosophicus, section 6.4 on what he calls ‘the mystical’), that the real mystery was, ‘Not how the world is . . . but that it is’.