03 November 2019



'Because things don’t appear to be the known thing; they aren’t what they seemed to be neither will they become what they might appear to become.' 

Posted by Jeremy Dyer *

This is a detail from a great work of art. Which one? Whose? We are expected to admire it, to marvel and to learn. 

What if I told you that it was a detail from one of Pollock's works? Would you then try to 'see' the elusive essence of it? On the other hand, what if I told you it was merely a photo from above the urinal in a late-night restaurant? Does that make it any more or less 'art'? 

If everything is art—the sacred mantra—then the reverse corollary must also be true. Nothing is art.


* Jeremy Dyer is an acclaimed Cape Town artist.

27 October 2019

The Politics of the Bridge


Posted by Martin Cohen

Bridges are the stuff of superlatives and parlour games. Which is the longest bridge in the world? The tallest? The most expensive? And then there's also a prize which few seem to compete for - the prize for being the most political. The British Prime Minister Boris Johnson’s surprise proposal in September for a feasibility study for a bridge to Ireland threatens to scoop the pot.

But then, what is it about bridges and Mr. Johnson? He comes fresh from the disaster, at least in public relations terms, of his ‘Garden bridge’ (pictured above) over the river Thames - the one that Joanna Lumley said would be a “floating paradise”, the “tiara on the head of our fabulous city”, and which was forecast to cost £200 million before the plug was pulled on it (leaving Londoners with bills of £48 million for nothing). Now he announces a new bridge - this time connecting Northern Ireland, across seas a thousand feet deep, to Stranraer in Scotland. This one would cost a bit too - albeit Johnson suggests it would be value for money at no more than £15 billion.

If Londoners choked on a minuscule fraction of that for their new bridge, it is hard to see how exactly this new one could be afforded - particularly as large-scale public works don't exactly have a good reputation for coming in on budget. The 55-kilometre Hong Kong-Zhuhai-Macau bridge and tunnel system that opened last year, for instance, was completed only after delays, corruption and accidents had pushed its cost up to 48 billion yuan (about £5.4 billion).

When wear and tear to the eastern span of the iconic San Francisco Bay Bridge became too bad to keep patching, an entirely new bridge was built to replace it, at a final price tag of $6.5 billion (about £5.2 billion) - a remarkable sum in its own right, but all the more indigestible because it represented a 2,500% cost overrun from the original estimate of $250 million.
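
As a quick back-of-envelope check of that percentage, using only the figures just quoted (my arithmetic, not the project's accounting):

\[
\frac{\$6.5\ \text{billion} - \$0.25\ \text{billion}}{\$0.25\ \text{billion}} \times 100\% = 2{,}500\%
\]

In other words, the final bill came to twenty-six times the original estimate.
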
Grand public works are always political. For a start, there is the money to be made on the contract, but there is also the money to be made from interest on the loans obtained. Money borrowed from governments at a low rate can be re-lent at a higher rate. Even when they are run scrupulously, bridges are, like so many large construction projects, money-go-rounds.

And yet, bridges have a good image, certainly compared to walls. They are said to unite, where barriers divide. "Praise the bridge that carried you safe over," says Lady Duberly at breakfast in George Colman's play The Heir at Law. But surface appearances can be deceptive. Bridges, as recent history has shown, have a special power to divide.

That Hong Kong bridge is also a way of projecting mainland Chinese power onto its fractious new family member. President Putin's $3.7 billion Kerch Strait Bridge joining Crimea to Russia was hardly likely, whatever he said, to bring “all of us closer together”. Ukrainians and the wider international community considered the bridge to reinforce Russia's annexation of the peninsula. And if bridges are often favourably contrasted with walls, this one, it soon emerged, functioned as both: no sooner was the bridge completed than shipping trying to sail under it began to be obstructed. No wonder that Ukraine believes there was an entirely negative and carefully concealed political rationale for the bridge: to impose an economic stranglehold on Ukraine and cripple its commercial shipping industry in the Azov Sea.

In this sense, a bridge to Northern Ireland seems anything but a friendly gesture by the British; rather, it smacks of old-style colonialism.

But perhaps the saddest bridge of them all was the sixteenth-century Old Bridge at Mostar, commissioned by Suleiman the Magnificent in 1557 and connecting the two sides of the old city. Upon its completion it was the widest man-made arch in the world, towering forty metres (130 feet) over the river. Yet it was constructed and bound not with cement but with egg whites. No wonder that, according to legend, the builder, Mimar Hayruddin - whose conditions of employment apparently included his being hanged if the bridge collapsed - carefully prepared for his own funeral on the day the scaffolding was finally removed from the completed structure.

In fact, the bridge was a fantastic piece of engineering and stood proud - until, that is, 1993, when Croatian nationalists, intent on dividing the communities on either side of the river, brought it down in a barrage of artillery shells. Thus the bridge once compared to a ‘rainbow rising up to the Milky Way’ became instead a tragic monument to hatred.

20 October 2019

Humanism: Intersections of Morality and the Human Condition

Kant urged that we ‘treat people as ends in themselves, never as means to an end’
Posted by Keith Tidman

At its foundation, humanism’s aim is to empower people through conviction in the philosophical bedrock of self-determination and people’s capacity to flourish — to arrive at an understanding of truth and to shape their own lives through reason, empiricism, vision, reflection, observation, and human-centric values. Humanism casts a wide net philosophically — ethically, metaphysically, sociologically, politically, and otherwise — for the purpose of doing what’s upright in the context of individual and community dignity and worth.

Humanism provides social mores, guiding moral behaviour. The umbrella aspiration is unconditional: to improve the human condition in the present, while endowing future generations with progressively better conditions. The prominence of the word ‘flourishing’ is more than just rhetoric. In placing people at the heart of affairs, humanism stresses the importance of the individual living both free and accountable — to hand off a better world. In this endeavour, the ideal is to live unbound by undemocratic doctrine, instead prospering collaboratively with fellow citizens and communities. Immanuel Kant underscored this humanistic respect for fellow citizens, urging quite simply, in Groundwork of the Metaphysics of Morals, that we ‘treat people as ends in themselves, never as means to an end’.

The history of humanistic thinking is not attributed to any single proto-humanist. Nor has it been confined to any single place or time. Rather, humanist beliefs trace a path through the ages, being reshaped along the way. Among the instrumental contributors were Gautama Buddha in ancient India; Lao Tzu and Confucius in ancient China; Thales, Epicurus, Pericles, Democritus, and Thucydides in ancient Greece; Lucretius and Cicero in ancient Rome; Francesco Petrarch, Sir Thomas More, Michel de Montaigne, and François Rabelais during the Renaissance; and Daniel Dennett, A.J. Ayer, A.C. Grayling, Bertrand Russell, and John Dewey among the modern humanist-leaning philosophers. (Dewey contributed, in the early 1930s, to drafting the original Humanist Manifesto.) The point is that the story of humanism is one of ubiquity and variety; if you’re a humanist, you’re in good company. The English philosopher A.J. Ayer, in The Humanist Outlook, aptly captured the philosophy’s human-centric perspective:

‘The only possible basis for a sound morality is mutual tolerance and respect; tolerance of one another’s customs and opinions; respect for one another’s rights and feelings; awareness of one another’s needs’.

For humanists, moral decisions and deeds do not require a supernatural, transcendent being. To the contrary: the almost-universal tendency to anthropomorphise God, to attribute human characteristics to God, is an expedient that helps make God relatable and familiar, yet one that can at the same time prove disquieting to some people. Humanists’ belief is generally that any god, no matter how intense one’s faith, can only ever be an unknowable abstraction. To that point, the opinion of the eighteenth-century Scottish philosopher David Hume — ‘A wise man proportions his belief to the evidence’ — goes to the heart of humanists’ rationalist philosophy regarding faith. Yet theism and humanism can coexist; they do not necessarily cancel each other out. Adherents of humanism have been religious, agnostic, and atheist — though it’s true that secular humanism, as a subspecies of humanism, rejects a religious basis for human morality.

For humanists there is typically no expectation of after-life rewards and punishments, mysteries associated with metaphorical teachings, or inspirational exhortations by evangelising trailblazers. There need be no ‘ghost in the machine’, to borrow an expression from British philosopher Gilbert Ryle: no invisible hand guiding the laws of nature, or making exceptions to nature’s axioms simply to make ‘miracles’ possible, or swaying human choices, or leaning on so-called revelations and mysticism, or bending the arc of human history. Rather, rationality, naturalism, and empiricism serve as the drivers of moral behaviour, individually and societally. The pre-Socratic philosopher Protagoras summed up these ideas about the challenges of knowing the supernatural:

‘About the gods, I’m unable to know whether they exist or do not exist, nor what they are like in form: for there are things that hinder sure knowledge — the obscurity of the subject and the shortness of human life’.

The critical thinking that’s fundamental to pro-social humanism thus moves the needle from an abstraction to the concreteness of natural and social science. And the handwringing over issues of theodicy no longer matters; evil simply happens naturally and unavoidably, in the course of everyday events. In that light, human nature is recognised not to be perfectible, but nonetheless can be burnished by the influences of culture, such as education, thoughtful policymaking, and exemplification of right behaviour. This model assumes a benign form of human centrism. ‘Benign’ because the model rejects doctrinaire ideology, instead acknowledging that while there may be some universal goods cutting across societies, moral decision-making takes account of the often-unique values of diverse cultures.

A quality that distinguishes humanity is its persistence in bettering the lot of people. Enabling people to live more fully — from the material to the cultural and spiritual — is the manner in which secular humanism embraces its moral obligation: the obligation of the individual to family, community, nation, and globe. These interested parties must operate with a like-minded philosophical belief in the fundamental value of all life. In turn, reason and observable evidence may lead to shared moral goods, as well as progress on the material and immaterial sides of life’s ledger.

Humanism acknowledges the sanctification of life, instilling moral worthiness. That sanctification propels human behaviour and endeavour: from progressiveness to altruism, a global outlook, critical thinking, and inclusiveness. Humanism aspires to the greater good of humanity through the dovetailing of various goods: ranging across governance, institutions, justice, philosophical tenets, science, cultural traditions, mores, and teachings. Collectively, these make social order, from small communities to nations, possible. The naturalist Charles Darwin addressed an overarching point about this social order:

‘As man advances in civilisation, and small tribes are united into larger communities, the simplest reason would tell each individual that he ought to extend his social instincts and sympathies to all the members of the same nation, though personally unknown to him’.

Within humanism, systemic challenges regarding morality present themselves: what people can know about definitions of morality; how language bears on that discussion; the value of benefits derived from decisions, policies, and deeds; and, thornily, deciding what actually benefits humanity. There is no taxonomy of all possible goods, for handy reference; we’re left to figure it out. There is no single, unconditional moral code, good for everyone, in every circumstance, for all time. There is only a limited ability to measure the benefits of alternative actions. And there are degrees of confidence and uncertainty in the ‘truth-value’ of moral propositions.

Humanism empowers people not only to help avoid bad results, but to strive for the greatest amount of good for the greatest number of people — a utilitarian metric, based on the consequences of actions, famously espoused by the eighteenth-century philosopher Jeremy Bentham and nineteenth-century philosopher John Stuart Mill, among others. It empowers society to tame conflicting self-interests. It systematises the development of right and wrong in the light of intent, all the while imagining the ideal human condition, albeit absent the intrusion of dogma.

Agency in promoting the ‘flourishing’ of humankind, within this humanist backdrop, is shared. People’s search for truth through natural means, to advance everyone’s best interest, is preeminent. Self-realisation is the central tenet. Faith and myth are insufficient. As modern humanism proclaims, this is less a doctrine than a ‘life stance’. Social order, forged on the anvil of humanism and its core belief in being wholly responsible for our own choices and lives, through rational measures, is the product of that shared agency.


13 October 2019

A New African Pragmatism

Natalia Goncharova, Exhilarating Cyclist, 1913.
By Sifiso Mkhonto *

Allister Marran, addressing himself to older people in these pages, wrote: 'Your time is over.' Far from being ageist, his attitude represents a new pragmatism in Africa. 

For the past few years, a question has lingered in my mind: are African political and business leaders concerned about the future of this continent, or are they concerned about their turn to eat, and how those in their lineage may benefit from the feast that is dished out in the back kitchen? Judging by the obvious evidence before us, we can only conclude that they are far too often unconcerned. 

We shall not delve into each problem, because history teaches us that we have a tendency to spend our resources and energy on discussing and unpacking problems, rather than executing solutions. In business, leaders do not appreciate you knocking at their door with a problem. They prefer a brief outline of the problem, and a detailed plan of the solution. This philosophy can and should be adapted to the social issues that we face as a continent.

In my understanding, we should pragmatically ask at least four ‘whys’. These should be good enough to assist us in thinking towards a workable solution to major issues, among them the following:
• unemployment
• crime (including femicide, xenophobia, and gang violence)
• poverty, and
• lack of quality education
Here is a basic example of applying the first of these four points:
Why do we have such a high level of unemployment amongst the youth?
• Because there are no jobs.
Why are there no jobs?
• Because policy is not business-friendly, start-up businesses fail to create jobs, there is too much red tape, and young people study in fields where jobs are scarce.
Why, and why again. All the answers derived should lead us to basic solutions. We do not need ideology and political identity as a continent. These preoccupations set us ten steps back each time a pragmatic, sustainable solution is brought forth. It is the youth, today, who are determined, against all odds, to change the narrative of corrupt states, high crime levels, the stigma of stereotypical prejudice, and many other issues.

Against all the red tape, they still start businesses with no funding; they still pursue education at great sacrifice, to escape the reality of poverty. However, because of those who enjoy the buffet that is prepared and dished out in the back kitchen, many young lions and lionesses are doomed.

The solution is simple. Give young people the space they deserve – they think differently, and they are determined – to advance this continent into one of the most prosperous in the world. 'Grant an idea or belief to be true,' wrote William James, 'what concrete difference will its being true make in anyone's actual life?' Ideology and political identity have failed us. We need a new African pragmatism.



* Sifiso Mkhonto is a logistician and former student leader in South Africa.

06 October 2019

Picture Post #49: Vision in a Suitcase



'Because things don’t appear to be the known thing; they aren’t what they seemed to be neither will they become what they might appear to become.' 


Posted by Tessa Den Uyl

Florence, 2019


The Venus by Botticelli, the David by Michelangelo, the Thinker by Rodin: names which resonate, and celebrate moments in our history which are now in the lap of technology. With new materials and with lasers, these images, and thus the names, are copied and cast into gadgets which we can grasp quickly and transport (even) in hand luggage.

These artists had a vision. In this light, it seems odd to exploit for commerce ready-mades that are not urinals (thinking of Duchamp’s ‘Fountain’, which placed a non-art object in an art space). What happens in this shop window might be thought of as the reverse. The art (and its creator) are objects available to everyone. But nothing within these statues reminds us of a vision. They are vision-less, though apparently they remind us of something else.

Does this mean that, when we have merely heard about something, scraps of it are enough to live through the original, with all its implications and compulsiveness, in which and for which the creation came into being?

29 September 2019

What Place for Privacy in a Digital World?

C. S. Lewis, serene at his desk...

Posted by Keith Tidman

When Albert Camus offered this soothing advice in the first half of the twentieth century, ‘Opt for privacy. . . . You need to breathe. And you need to be’, life was still uncomplicated by digital technology. Since then, we have become just so many cogwheels in the global machinery that makes up the ‘Internet of things’ — the multifarious devices that simultaneously empower us and make us vulnerable.

We are alternately thrilled with the power that these devices shower on us — providing an interactive window onto the world, and giving us voice — even as we are dismayed to see our personal information scooped up, stowed, scrutinised for nuggets, reassembled, duplicated, and given up to others. That our lives may be shared in this way without our being aware, without our freely choosing, and without our being able to prevent their commodification and monetisation only makes it worse.

Can a human right to privacy, assumed by Camus, still fit within this digitised reality?

Louis Brandeis, a former justice on the U.S. Supreme Court, defined the ‘right to be left alone’ as the ‘most comprehensive of rights, and the right most prized by civilised people’. But that was proffered some ninety years ago. If individuals and societies still value that principle, then today they are challenged to figure out how to balance the intrusively ubiquitous connectivity of digital technology against the sanctity of personal information implicit in the ‘right to be left alone’. That is, the fundamental human right articulated by the UN’s 1948 Universal Declaration of Human Rights:
‘No one shall be subjected to arbitrary interference with his privacy, family, home, or correspondence’.
It’s safe to assume that we’re not about to scrap our digital devices and nostalgically return to analogue lives. To the contrary, inevitable shifts in society will require more dependence on increasingly sophisticated digital technology for a widening range of purposes. Participation in civic life will call for more and different devices, and a greater vacuuming up and moving around of information. Whether this will translate into further loss of the human right to privacy, as is the risk, or whether society will manage change in order to preserve or even recover lost personal privacy, remains to be seen: the draft of that narrative is still being written.

However, it’s important to acknowledge that intervention — by policymakers, regulators, technologists, sociologists, cultural anthropologists, and ethicists, among others — may yet prevent the erosion of personal privacy from following a straight upward trajectory. Urgency, and a commitment to avoid and even reverse further erosion, will be key.

Some contemporary philosophers have argued that claims to a human right to privacy are redundant, for various reasons. An example is when privacy is presumed embedded in other human rights, such as personal property — distinguished from property held in common — and protection of our personal being. But this seems dubious; in fact, one might flip the argument on its head: that is, found our other rights on the right to privacy, the latter being more fundamentally based in human dignity and moral values. It’s this more nuanced, ethics-based position that makes the one-dimensional assertion that ‘If you don’t have anything to hide, you have nothing to fear’ all the more specious.

Furthermore, without a right to privacy being carved out in concrete terms, such as codified in law and constitutions, it may simply get ignored, rendering it indefensible. For all that, we value privacy, and with it the ability to prevent other people’s intrusion and meddling in our lives. We cling to the notion of what has been dubbed the ‘inviolate personality’ — the quintessence of being a person. In endorsing this belief in individual interests, one is subscribing to Noam Chomsky’s caution that ‘It’s dangerous when people are willing to give up their privacy’. To Chomsky’s point, the informed, ‘willing’ acceptance of social media’s mining and monetising of our personal data provides a contrast.

One parallel factor is the push-pull between what may become normalised governmental access to our personal information and individuals’ assertion of confidentiality and the ‘reasonable expectation’ of privacy. The style of government — from liberal democracy to authoritarianism — matters to how governments access personal information: whether for benign use or malign abuse. ‘In good conscience’ is a reasonable guiding principle in establishing the what, when, and how of government access. And in turn, all this matters to the fundamental human right to privacy. Meantime, governments may see a need for tools to combat crime and terrorism, allowing surveillance and intelligence gathering through wiretaps and Internet monitoring.

Two and a half centuries ago, Benjamin Franklin foreshadowed this tension between the liberty implied in personal privacy and the safety implied in government’s interest in self-protection. He cautioned: 
‘Those who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety’. 
Yet, however amorphous these contrary claims to rights might be, as a practical matter society has to resolve the risk-benefit equation and choose how to play its hand. What we conclude is the best solution will likely keep shifting, based on norms and emerging technology.

And notions of a human right to privacy differ as markedly among cultures as they do among individuals. The definition of privacy and its value may differ both among and within cultures. It would perhaps prove unsurprising if cultures situated in Asia, Africa, Europe, and South or Central America were each to frame personal privacy rights differently — not least insofar as both the burgeoning of digital technology and the nature of government influence the privacy-rights landscape.

The reflex may be to anticipate that privacy and human rights will take a straight, if thorny, path. The relentless and quickening emergence of digital technologies drives this impulse. The British writer and philosopher C. S. Lewis provides social context for this impulse, saying:
‘We live … in a world starved for solitude, silence, and privacy.’
Despite the invasion of people’s privacy, by white-hatted parties (with benign intent) and black-hatted parties (with malign intent), I believe our record thus far represents only an embryonic, inelegant attempt to explore — with perfunctory legal, regulatory, or principled restraint — the rich utility of digital technology.

Nonetheless, if we are to steer clear of the potentially unbridled erosion of privacy rights — to uphold the human right to privacy, however measured — then it will require repeatedly revisiting what one might call the ‘digital social contract’ the community adopts, and resolving the contradiction behind being both ‘citizen-creators’ and ‘citizen-users’ of digital technologies.

22 September 2019

The Impossibility of Determinism


Posted by Thomas Scarborough

Free will and determinism: it is a classic problem of metaphysics. No matter what we may think about it, we know that we have a problem. We know that things are physically determined. I line up dominoes in a row, and topple the first of them with my finger. It is certain that the whole row of dominoes will fall.

Are people then subject to the same kind of determinism? Are we just so many powerless humanoid shapes waiting to be knocked down by circumstances? Or perhaps, to what extent are we subject to such determinism? Is it possible for us to escape our own inner person? Our own history? Our own future? Are we even free to choose our own thoughts—much less our actions? Are we even free to believe? Each of these questions would seem to present us with a range of mightily confusing answers.

I suggest that it may be helpful to try to view the question from a broader perspective—the particular one that comes from consideration of the phenomenon of cause and effect. If I am controlled by indomitable causes, then I am not free. Yet if I am (freely) the cause of my own thoughts and actions, then I am free. Which then is it? Once we understand the dynamics of cause and effect, we should be in a better position to understand free will and determinism.

What is cause and effect?

In our everyday descriptions of our world, we say that, to paraphrase Simon Blackburn, causation is the relation between two events. It holds when, given that one event occurs, ‘it produces, or brings forth, or necessitates the second’. The burrowing aardvark caused the dam to burst; the lightning strike caused the thatch to burn; the medicine caused the patient to rally, and so on. Yet we notice in this something that is immediately problematic—which is that in order to say that there is causality, we need to have carefully defined events before and after.

But such definition is a problem. The philosopher-statesman Francis Bacon wrote of the ‘evil’ we find in defining natural and material things. ‘The definitions themselves consist of words, and those words beget others.’ Aristotle wrote that words consist of features (say, the features of a house), and those features must stand in a certain relation to one another (rubble, say, is not a house). Therefore, not only do we have words within words, but features and relations, too.

Where does it all end? It all ends nowhere. It is an endless regress. Bacon’s ‘evil’ means that our definitions dissipate into the universe. It seems much like having money in a bank, which has its money in another bank, which has its money in another bank, and so on. It is not hard to see that one will never find the money. Full definitions ultimately reach into the void.

If we want to be consistent about it, there are no events. In order to obtain events, we need to set artificial limits to our words—and artificial limits to reality itself, by excluding unwanted influences on our various constructions. But that is not the way the world really is in its totality. More than this, these unwanted influences always seem to enter the picture again somewhere along the line. This is a big part of the problem in our world today.

Of course, cause and effect quite simply work: he lit the fire; I broke the urn; they split the atom. This is good as far as it goes—yet again, such explanations work because we define before and after—and that very definition strips away a lot of what is really going on.

Where does this leave us? It leaves us without a reason to believe in cause and effect—even if we are naturally disposed to thinking that way. There is no rational framework to support it.

Someone might object. Even if we have no befores and afters, we still have a reality which is bound by the laws of the universe. There is therefore some kind of something which is not free. Yet every scientific law is about events before and after. Whatever is out there, it has nothing in common—that we can know of anyway—with such a scheme.

This may be a new way of putting it, but it is not a new idea. Albert Einstein, for example, said that determinism is a feature of theories, rather than any aspect of the world directly. While we cannot, at the end of this post, prove free will, we can state that notions of determinism are out of the question in the world as we know it. The world is something else, which we have not yet understood.

15 September 2019

Extinction Crisis? The solution may be privatisation

Endangered species can often be protected with comparatively tiny amounts 
of resources. Pictured, the critically endangered Black-flanked rock wallaby whose 
protection needs are measured in thousands of dollars - Image via WWF Australia

Posted by Martin Cohen

Looking around the world, there are so many problems that seem so intractable, and the solutions so far off, that it can seem as if it is better to, well, not look around the world. Take ‘climate change’, for example, where it has been estimated by the Danish statistician and reformed ‘skeptic’ Bjorn Lomborg that the cost of reducing the world's temperature by the end of the century by a ‘grand total of three tenths of one degree’ is ... $100 trillion. That's not small beans. In terms of charitable donations, you'd need to find 100 million people ready to chip in a million each.
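
As a quick back-of-envelope check of that last figure (my arithmetic, not Lomborg's):

\[
\frac{\$100\ \text{trillion}}{100\ \text{million donors}} = \frac{\$10^{14}}{10^{8}} = \$1\ \text{million each}
\]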

For any number of reasons, that cash ain't gonna be raised and those abatement measures - however worthy - are not going to be made.

Yet in fact there is a whole range of environmental problems which do have relatively straightforward solutions - and require only tiny investments. These small but vital programmes are often starved of resources.

Take extinctions in Australia, for example - a topic I asked Friends of the Earth (UK) to campaign on back in the 1990s, mainly to highlight UK business links to forest clearance. To run a campaign might have cost a few thousand pounds, but after discussions with the then Head of FoE, and after meeting senior staff including the biodiversity campaigner for a roundtable on the issues, I was told there were no resources for it. They offered to run a press-release campaign if I wrote it instead. And then reneged on that too.

The point is not that I dislike Friends of the Earth - in fact, I think they do a lot of good work (they helped me lead a campaign that saved the Yorkshire Moors from a four-lane motorway, probably the only time the organisation actually reversed a road scheme that had been formally approved) - but that relying on environmentalists to save the world is a mistake. The economics points to a problem and a paradox: environmental pressure groups exist and make money out of environmental horror stories - they have no financial interest in saving anything. A campaign like climate change, in which a bottomless pit of money must be raised, suits certain people very well, even though it can never achieve its ends.

Meanwhile time is running out! Talk about an ‘extinction crisis’ ... It is there all right. But the solutions don't require grandiose schemes to control the world’s climate - they require small concrete actions to preserve habitat.

Half of all the species lost in modern times have been lost in Australia. In the last 150 years, one in eight of Australia's mammal species - species which live(d) nowhere else on earth - has been driven out of existence, as the Australians literally bulldozed their forests into desert in pursuit of grazing for sheep and cows. At the same time, the land value stolen from the defenceless animals and plundered from Australia's native people is actually tiny.

The Bramble Cay Melomys, which lived only on a tiny island in the Torres Strait, could have been saved if the island had simply been bought and made into a sanctuary. Instead, the fate of the little rodent was determined by red tape and political indifference.

Land clearing, invasive farming, extermination programmes, lack of monitoring - all these are essentially money-driven failings with economic responses possible. To save the Spotted-tailed Quoll, for example, requires only preserving a chunk of land from the insatiable thirst of Australia's farmers for land clearance. Likewise, the Black-flanked Rock-wallaby needs only a small reserve declared to cover its now much-diminished range. Such things essentially can be investments - yet the world's billionaire philanthropists - I'm looking at you, Mr Gates, Mr Buffett! - have so far directed their wealthy and otherwise worthy Foundations only to talk about human needs: medicine, education, even governance. Yet biodiversity and species preservation is surely just as much a vital part of our shared human inheritance as any other aspect of human life.

At the moment, attention is rightly focussed on the land clearance in the Amazon rainforest, land clearance often financed directly or indirectly by Western banks and institutions. Yet here's an idea for those with resources: buy up sections of the Amazon and hold them on behalf of their indigenous peoples as ecological parks, scientific resources and sustainably farmed forests. Such privately owned 'ecofarms' would be able to resist predation by those set on both genocide and ecocide. They only need investors!

It has already been done successfully - for example, in the conservation-driven Kruger Private Reserves in Africa. There, the connecting of habitats alone serves to improve the survival chances of many species in the region.

08 September 2019

‘Just War’ Theory: Its Endurance Through the Ages


The Illustrious Hugo Grotius of the Law of Warre and Peace: 
With Annotations, III Parts, and Memorials of the Author’s Life and Death.
Book with title page engraving, printed in London, England, by T. Warren for William Lee in 1654.

Posted by Keith Tidman

To some people, the term ‘just war’ may have the distinct ring of an oxymoron, the more so to advocates of pacifism. After all, as the contention goes, how can the lethal violence and destruction unleashed in war ever be just? Yet not all of the world’s contentiousness, historically or today, lends itself to nonmilitary remedies. So, coming to grips with the realpolitik of humankind inevitably waging successive wars over several millennia, philosophers dating back to ancient Greece and Rome — like Plato, Aristotle, and Cicero — have thought about when and how war might be justified.

Building on such early luminary thinkers, the thirteenth-century philosopher and theologian Saint Thomas Aquinas, in his influential text, Summa Theologica, advanced the principles of ‘just war’ to a whole other level. Aquinas’s foundational work led to the tradition of just-war principles, broken down into jus ad bellum (the right to resort to war to begin with) and jus in bello (the right way to fight once war is underway). Centuries later came a new doctrinal category, jus post bellum (the right way to act after war has ended).

The rules that govern going to war, jus ad bellum, include the following:
• just authority, meaning that only legitimate national rulers may declare war;

• just cause, meaning that a nation may wage war only for such purposes as self-defence, defence of other nations, and intervention against the gravest inhumanity;

• right intentions, meaning the warring state stays focused on the just cause and doesn’t veer toward illegitimate causes, such as material and economic gain, hegemonic expansionism, regime change, ideological-cultural-religious dissimilarities, or unbridled militarism;

• proportionality, meaning that as best can be determined, the anticipated goods outweigh the anticipated evil that war will cause;

• a high probability of success, meaning that the war’s aim is seen as highly achievable; 
and...

• last resort, meaning that viable, peaceful, diplomatic solutions have been explored — not just between potentially warring parties, but also with the intercession of supranational institutions, as fit — leaving no alternative to war in order to achieve the just cause.

The rules that govern the actual fighting of war, jus in bello, include the following: 
• discrimination, meaning to target only combatants and military objectives, and not civilians or fighters who have surrendered, been captured, or are injured; 

• proportionality, meaning that injury to lives and property must be in line with the military advantage to be gained; 

• responsibility, meaning that all participants in war are accountable for their behaviour; 
and... 
• necessity, meaning that the least-harmful military means, in the choice of weapons, tactics, and amount of force applied, must be employed.

The rules that govern behaviour following war’s end, jus post bellum, typically include the following: 
• proportionality, meaning the terms to end war and transition to peace should be reasonable and even-handed; 

• discrimination, meaning that the victor should treat the defeated party fairly and not unduly punitively; 

• restoration, meaning promoting stability, mapping infrastructural redevelopment, and guiding institutional, social, security, and legal order; 

and... 
• accountability, meaning that determination of culpability and retribution for wrongful actions (including atrocities) during hostilities are reasonable and measured.

Since the time of early philosophers like Augustine of Hippo and Thomas Aquinas, and the ascribed ‘father of international law’ Hugo Grotius (The Law of War and Peace, frontispiece above), the principles tied to ‘just war’, and their basis in moral reciprocity, have shifted. One change has entailed the increasing secularisation of ‘just war’ from its largely religious roots.

Meanwhile, the failure of the seventeenth-century Peace of Westphalia — which ended Europe’s devastating Thirty Years’ War and Eighty Years’ War, declaring that states would henceforth honour other nations’ sovereignty — has been particularly dreadful. As well intentioned as the treaty was, it failed to head off repeated, bloody military incursions into others’ territory over the last three and a half centuries. Furthermore, the modern means of war have necessitated revisiting the principles of just wars — despite the theoretical rectitude of wars’ aims.

One factor is the extraordinary versatility, furtiveness, and lethality of the modern means of war — and their remarkably accelerating transformation. None of these ‘modern means’ were, of course, even imaginable as just-war doctrine was being developed over the centuries. The bristling technology is familiar: from precision (‘smart’) munitions to nuclear weapons, drones, cyber weapons, long-range missiles, stealthy designs, space-based systems, biological and chemical munitions, global power projection by sea and air, hypervelocity munitions, and increasingly sophisticated, lethal, and hard-to-defeat AI and autonomous weapons (which increasingly take human controllers out of the picture). In their respective ways, these devices are intended to exacerbate the ‘friction and fog’ and lethality of war for the opponent, as well as to lessen the exposure of one’s own combatants to threats.

Weapons of a different ilk, like economic sanctions, are meant to coerce opponents into complying with demands and adopting certain behaviours, even if civilians are among the more direly affected. Tactics, too, range widely: from proxies to asymmetric conflicts, special-forces operations, terrorism (intrinsically episodic), psychological operations, targeted killings of individuals, and mercenary insertion.

So, what does this inventory of weapons and tactics portend regarding just-war principles? The answer hinges on the warring parties: who’s using which weapons in which conflict and with which tactics and objectives. The idea behind precision munitions, for example, is to pinpoint combatant targets while minimising harm to civilians and civilian property.

Intentions aren’t foolproof, however, as demonstrated in any number of current wars. Yet one might argue that, on balance, the results are ‘better’ than in earlier conflicts in which, for example, blankets of inaccurate gravity (‘dumb’) bombs were dropped, and where indifference among combatants as to the effects on innocents — impinging on noncombatant immunity — had become the rule rather than the exception.

There are current ‘hot’ conflicts to which one might readily apply just-war theory. Yemen, Somalia, Libya, Syria, Ukraine, India/Pakistan, Iraq, and Afghanistan, among sundry others, come to mind (as well as brinkmanship, such as with Iran, North Korea, and Venezuela). The nature of these conflicts ranges from international to civil to terrorist to hybrid. Their adherence to jus ad bellum and jus in bello narratives and prescriptions differs radically from one to another. These conflicts’ jus post bellum narratives — the right way to act after war has ended — have still to reveal their final chapter in concrete treaties, as for example in the current negotiations between the Taliban and the United States in Afghanistan, almost two decades into that wearyingly ongoing war.

The reality is that the breach left by these sundry wars, whether they end abruptly or simply peter out in exhaustion, will be filled by others. As long as the realpolitik inevitability of war continues to haunt us, humanity needs Aquinas’s guidance.

Just-war doctrine, though developed in another age and necessarily having undergone evolutionary adaptation to parallel wars’ changes, remains enduringly relevant — not to anaesthetise the populace, let alone to entirely cleanse war ethically, but as a practical way to embed some measure of order in the otherwise unbridled messiness of war.