04 December 2016

Picture Post #19 The Pillars of Creation


'Because things don’t appear to be the known thing; they aren’t what they seemed to be neither will they become what they might appear to become.'

Posted by Keith Tidman

Picture Credit: Hubble Space Telescope (NASA)

A dynamically ‘living universe’, with its own DNA, captured by the Hubble Space Telescope.
Among the iconic images of space captured by the Hubble Space Telescope is this view of the Eagle Nebula’s ‘Pillars of Creation’—revealing the majesty and immensity of space. The image opens a window onto the cosmos, for us to wistfully wonder about the what, how, and (especially) why of reality.


The image shows the pillars’ cosmic dust clouds, referred to as ‘elephant trunks’—revealing a universe that, like our species, undergoes evolution. One thought that intrudes is whether such an immense universe is shared by other ‘gifted’ species, scattered throughout. By extension, Hubble’s images make one wonder whether our universe is unique, or one of many—undergoing the ‘creative destruction’ of these pillars.

Does the image evoke a sense of relative peace—like our own speck in our galaxy’s outer spirals? Or a universe more typically characterised by the distantly familiar roiling, boiling violence—expressing itself in the paradoxical simultaneity of creation and destruction?

The ‘Pillars of Creation’ are—were—some 7,000 light-years away! They may no longer even exist, given the time their light takes to reach Hubble: an ironic twist of fate, given the name. The ‘shape’ of the universe’s contents is thus transitory – like our own bodies, as time elapses and we react to the environment.

For some, the ‘Pillars of Creation’—their church-like spires—inspire thoughts of divine creation. Alternatively, the evidence suggests a universe that rests in science, where ‘nothingness’ isn’t possible and ‘something’—a universe—is the default.

03 December 2016

God: An Existential Proof

Posted by Thomas Scarborough
Ernest Hemingway has one of his characters say, 'The world breaks everyone.' In crafting this now famous line, did he hand us a new proof for the existence of God?
It all rests on the way we are motivated, and the changes our motivations undergo in the course of a lifetime.

What is it that motivates me to plant a garden (and to plant it thus), to embark on a career, or to go to war? Today there is little disagreement that, basically, I am motivated when I hold up the world in my head to the world itself. Where then I find a difference between the two, I am motivated to act. It is, writes neuropsychologist Richard Gregory, the encounter with the 'unexpected' that motivates me.

Now consider that, in one’s early years, one's motivations are fresh and new. The world in one’s head seems to offer one high hopes, pleasant dreams, a good view of humanity, and enthusiasm to spare. Yet as one progresses through life, 'the world breaks everyone'. It breaks them, not so much through the hardships it brings to bear on the body—if this should matter at all—but because of the way in which it assails the mind and emotions.

Disillusionment sets in. And this, presumably, means coming to see things for the way they are. As we grow and mature, we come to see that the world is a place where hopes wither, dreams die, good turns to bad, and our energies are sapped. We become jaded, tired, and uninterested. 'My hopes were all dead,' Charlotte Brontë has one of her characters say. 'I looked on my cherished wishes, yesterday so blooming and glowing. They lay stark, chill, livid corpses that could never revive.'

With no world now to hold up to the world, because we have finally seen the world for what it is, we lose our motivation—ultimately all motivation—because motivation is the 'unexpected'.

And so we lose the ability to live. Ernest Hemingway had no motivation to go on. He famously shot himself with a double-barrelled shotgun. It is 'the very good,' he wrote, 'and the very gentle and the very brave' who go first. As for the rest—they, too, shall be found.

What then to do, when we are broken? How may a person restore any motivation at all, when they have come to see the world as it is?

It needs to be something beyond this world. And though we here make an 'appeal to consequences'—arguing that it must be so because we need it to be so—indeed it must be so. We cannot go on with a view of this world which is born of the world itself. Small wonder, then, that it is central to religious thinking that 'whether we live, we live unto the Lord, and whether we die, we die unto the Lord'. We continue to strive—but we strive for something which is other-worldly.

There may be another, logical possibility. If not something beyond this world, then we need an interventionist God who, through his being there, changes our expectations—a God who reaches down into our reality—a God who acts in this world. The world is not, therefore, all that I expect it to be. This, too, is a dominant religious theme: 'For by you I have run through a troop,' writes David. 'By my God have I leaped over a wall.' He could turn the tables, through his God.

What then is that motivation which lies beyond this world? What then are the interventions of God? This would seem to lie beyond the bounds of philosophy, and in the realm of theology.

Paradoxically, if we accept the 'God option' as the basis of all true motivation, then this would seem to be the option of deepest disillusionment—at the very same time as it offers us the greatest hope. One has no need for a new and fundamentally different motivation, in God, unless the world in one’s head is no longer found to be worth holding up to the world.

27 November 2016

The Silence of God

Posted by Eugene Alper
Perhaps God is so silent with us for a reason.  If He were to answer, if He were to respond to even one question or one plea, this would spell the end of our free will.
For once we knew His preferences for us, once we could sense His approval or disapproval, we would no longer exercise our own preferences, we would not choose our actions.  We would be like children again, led by His hand.  Perhaps He did not want this.  Perhaps He did not create us to be perpetual children.  Perhaps He designed the world so we could think about it and choose our actions freely.

But mentioning free will and God's design in the same sentence presents a predicament—these two ideas need to be somehow reconciled.  For if we believe that God designed the world in a certain way, and the world includes us and our free will, its design has to be flexible enough for us to exercise our free will within it.  We should be able to choose to participate in the design or not, and if so, to which degree.  Should we choose to do something with our life—however small our contribution may be—maybe to improve the design itself, or at least to try to tinker with it, we should be able to do so.  Should we choose to stay away from participating and become hermits, for example, we should be able to do so too.  Or should we choose to participate only partially, every third Tuesday of the month, we should be free to do so as well.

This thinking smacks of childishness. We want God's design to be there and not to be there at the same time. We want God to be a loving father who is not overly strict. This is how we created His image in the Old Testament: God is occasionally stern—to the point of destroying almost the whole of humankind—but loving and caring the rest of the time. This is how we created His image in the New Testament, too: God so loved the world that He sent His own Son to redeem it. Maybe all we really want is a father again; whatever beings we imagine as our gods, we want the familiar features of our parents. Maybe we are perpetual children after all. We want to play in our sandbox—freely and without supervision—and build whatever we want out of sand, yet we want our father nearby for comfort and protection.

There is no need to reconcile anything. This is how it works. Our free will fits within God's design so well because it is free only to a degree. Time and space are our bounds. We have only so much time until we are gone, and we have only so much energy until it runs out. Gravity will ensure that we can jump, but not too high; that we can fly, but not too far. We cannot cause too much damage. Sitting in the sand, we can fight with other players, we can even kick them out, we can build our own castles or destroy theirs, but we cannot destroy the sandbox itself. Maybe this is the secret of the design.

20 November 2016

Individualism vs. Personhood in Kiribati

By Berenike Neneia
The French philosophes thought of the individual as being 'prior to' the group. This has been a point of strenuous debate ever since. But whatever the case, individualism is characteristic, in some way, of the whole of our Western society today.
I myself am privileged to belong to a society which would seem to have been stranded in time – and while individualism now influences us profoundly, the cultural patterns of the past are still near. This short post serves as an introduction to a concept which is central to my culture in Kiribati: te oi n aomata.

Te oi n aomata literally means 'a real or true person'. It includes all people, whether men or women, young or old. This is not merely a living person who has concrete existence, but one who is seen by the community which surrounds him or her to have certain features, whether ascribed or acquired. Therefore it is by these features that a community's recognition of a person is 'weighed': as to whether they are an oi n aomata, 'a real or true person', or not.

Since Kiribati society is patriarchal, there is a distinction between how a man (oi ni mwane) and a woman (oi n aine) are seen as oi n aomata. Men will be considered oi n aomata through their material possessions, while women will be known as oi n aomata by their conduct – which is meant in the sense that a woman will be well mannered, respectful, obedient, and so forth. It is rare for a woman to possess or inherit the family’s vital assets such as land, house, taro pit, and canoe. The only exception is a woman who is an only child.

Prior to the coming of Europeans to the shores of Kiribati, a man who was regarded as an oi n aomata or oi ni mwane (a real or true man) was 'renowned' as one who came from a good family (that is, a family disciplined in cultural norms), in which he had a good reputation. He would be the first-born or only child, he would have many lands, and he would have a 'house' of his own: not of European design, but a cluster of structures used for meeting, cooking, sleeping, and relaxing. These belongings were very valuable, as they indicated that a man was 'in the community'.

In relation to such possessions, a man would further have the skills and the knowledge of how to fish and how to cut toddy, which were vital to the sustenance of his family. He would also know how to build his 'house', and to maintain it. As a man, he was the one who would protect his family from all harm.

These were some of the important skills which characterised an oi ni mwane or 'real or true man'. He was very highly regarded in communities.

Similarly, to be an oi n aomata or oi n aine (a real or true woman), a woman had to come from a good family (again, a family disciplined in cultural norms). She would be well nurtured and well taught, and she herself would behave according to Kiribati cultural norms. She would know how to cook and to look after her family well. This means that everyone in her household would be served first, while she would be served last.

She would know how to weave mats, so that her family would have something to lie on. She would know respect and not talk back, especially to her husband, her in-laws, and elders. Crucially, a woman would remain a virgin until she was married, since this involved the pride of her family. Therefore, she would give no appearance of indiscreet or suspect behaviour.

A woman had to maintain her place within the home, and look after her family well. As such she was considered an oi n aine or 'real and true woman', since she was the backbone of her family.

Today, when one speaks about people, there is a saying, 'Ai tiaki te aomata raom anne,' which refers to those who are 'no longer an (ordinary) person'. Rather, they have acquired, inherited, and come to possess the things that matter in the context of our culture, which make life much more enjoyable, much easier, and much better for all (with fewer complications, and less suffering).

However, with globalisation now at the shores of Kiribati, the definition of an oi n aomata, 'a real or true person', is evolving in relation to the changing patterns, norms, and life-styles of the Kiribati people. We now see the effects of these changing patterns – from a communal life to a more individualistic life-style. While this has brought various benefits to society, in many ways it has not been for the better.

13 November 2016

Pseudo Ethics

Posted by Thomas Scarborough
Jean-François Lyotard proposed that efficiency, above all, provides us with legitimation for human action today. If we can only do something more efficiently – or more profitably – then we have found a reason to do it. In fact society in its entirety, Lyotard considered, has become a system which must aim for efficient functioning, to the exclusion of its less efficient elements.
This is the way in which – subtly, as if by stealth – we have come to fill a great value vacuum in our world with pseudo values, borrowed from the realm of fact. Philosophically, this cannot be done – yet it is done – and it happens like this:

The human sphere is exceedingly complex – and inscrutable. It is one thing for us to trace relations in our world, as by nature we all do – quite another to know how others trace relations in this world.  While our physical world is more or less open to view, this is not the case with worlds which exist inside other people's minds – people who further hide behind semiotic codes: the raising of an eyebrow, for instance, or a laugh, or an utterance.

A million examples could not speak as loudly as the fact that we have a problem in principle. Like the chess novice who randomly inserts a move into the grand master's game, as soon as we introduce others into the picture, there is a quantum leap in complexity.  Small wonder that we find it easier to speak about our world in 'factual' terms than in human terms.

Further, in the human sphere we experience frequent reversals and uncertainties – war, famine, and disease, among many other things – while through the natural sciences we are presented with continual novelty and advance. In comparison with the 'factual' sphere, the human sphere is a quagmire. This leads to a spontaneous privileging of the natural sciences.

We come to see the natural sciences as indicating values, where strictly they do not – and cannot. That is, we consider that they give us direction as to how we should behave. And so, economic indicators determine our responses to the economy, clinical indicators determine our responses to a 'clinical situation' (that is, to a patient), environmental indicators determine our responses to the state of our environment, and so on.

Yet philosophers know that we are unable, through facts, to arrive at any values. We call it the fact-value distinction, and it leaves us with only two logical extremes: logical positivism on the one hand, or ethical intuitionism on the other. That is, either we cannot speak about values at all, or we must speak about them in the face of our severance from the facts. 

We automatically, impulsively, instinctively react to graphs, charts, statistics, imagining that they give us reason to act. Yet this is illusory. While the natural sciences might seem to point us somewhere, in terms of value, strictly they do not, and cannot. It is fact seeking to show us value.

Thus we calculate, tabulate, and assess things, writes sociologist James Aho, on the basis of 'accounting calculations', the value of which has no true basis. Under the banner of efficiency, such calculations have come to colonise virtually every institutional realm of modern society – yet this is, and has to be, a philosophical mistake.

Of course, efficiency has positive aspects. We receive efficient service, we design an efficient machine, or we have an efficient economy. This alone raises the status of efficiency in our thinking. However, in the context of this discussion, where efficiency represents legitimation for human action, it has no proper place.

The idea of such efficiency has introduced us to a life which many of us would not have imagined as children: we are both processed and we process others, on the basis of data sets – while organic fields of interest such as farming, building, nursing, even sports, have been reduced to something increasingly resembling paint-by-numbers. It is called 'increased objectification'.

With the advance of efficiency as a motive for action, we have come to experience, too, widespread alienation today: feelings of powerlessness, normlessness, meaninglessness, and social isolation, which did not exist in former times. Karl Marx considered that we have been overtaken by commodity fetishism, where the devaluation of the human sphere is proportional to the over-valuation of things.

Theologian Samuel Henry Goodwin summed it up: 'We are just a number.' Through pseudo values, borrowed from the realm of fact, we are dehumanised. In fact, this must be the case as long as we take numerate approaches to human affairs on the basis that they are 'indicated' by facts. Cold fact encroaches on the complex and subtle relations which are represented by the human sciences – in fact, by life as it is lived.

06 November 2016

Picture Post #18 A Somersault for the Suspension of Civilisation



'Because things don’t appear to be the known thing; they aren’t what they seemed to be neither will they become what they might appear to become.'


Posted by Tessa den Uyl and Martin Cohen

Photo credit: students of A Mundzuku Ka Hina communications workshop.

A life conditioned by the dictates of competition and consumption cannot but bring great social differences along in its train. When we ascribe symbolic values to a life of consumption, ideas will conform to ideals in which our moral duties are the rights of others over us.

The subtle way in which social disproportions are perceived as if a causa sui – something wherein the cause lies within itself – creates a world of facts based upon competitive abstractions, endlessly rehearsed on a Procrustean bed.

With his salto (flying somersault), the boy – who depends for his survival on a rubbish-dump – breaks the conditioned life. What he breaks is the obligation to function, which means to think, according to a certain ‘life-design’. His action shows the incompleteness of our relationships in an abstract world.

His jump is a jump into a space of non-facts.

In the suspension of the movement is the liberating being of lightness.

30 October 2016

Nothing: A Hungarian Etymology

'Landing', 2013. Grateful acknowledgement to Sadradeen Ameen
Posted by Király V. István
In its primary and abstract appearance, nothing is precisely 'that' 'which' it is not. However, the word is still there, in the words of all the languages we know. Here we explore its primary meaning in Hungarian.
The Hungarian word for nothing – 'semmi' – is a compound of 'sem' (nor) and 'mi' (we). The negative 'sem' expresses: 'nor here' (sem itt), 'nor there' (sem ott), 'nor then' (sem akkor), 'nor me' (sem én), 'nor him, nor her' (sem ő). That is to say, I or we have searched everywhere, yet have found nothing, nowhere, never.

However much we think about it, the not of 'sem' is not the negating 'not', nor the depriving 'not' which Heidegger revealed in his analysis of 'das Nichts'. The not in the 'sem' is a searching not! It says, in fact, that searching we have not found. By this, it says that the way that we meet, face, and confront the not is actually a search. Thus the 'sem' places the negation in the mode of search, and the search into the mode of not (that is, negation).

What does all this mean in its essence?

Firstly, it means that, although the 'sem' is indeed a kind of search, which 'flows into' the not, still it always distinguishes itself from the nots it faces and encounters. For searching is not simply the repetition of a question, but a question carried around. Therefore the 'sem' is always about more than the tension between the question and its negative answer, for the negation itself – the not – is placed into the mode of search! And conversely.

Therefore the 'sem' never negates the searching itself – it only places and fixes it in its deficient modes. This way, the 'sem' emphasises, outlines, and suffuses the not, yet stimulates the search, until the exhaustion of its final emptiness. The contextually experienced not – that is, the 'sem' – is actually nothing but an endless deficiency of an emptied, exhausted, yet not suspended search.

This ensures on the one hand, the stability of the 'sem', which is inclined to hermetically close up within itself – while it ensures on the other hand, an inner impulse for the search which, emanating from it, continues to push it to its emptiness.

It is in the horizon of this impulse, then, that the 'sem' merges with the 'mi'. The 'mi' in Hungarian is at the same time an interrogative pronoun and a personal pronoun. Whether or not this linguistic identity is a 'coincidence', it conceals important speculative possibilities, for the 'mi' pronoun, with the 'sem' negative, always says that it is 'we' (mi) who questioningly search, but find 'nothing' (semmi).

Merged in their common space, the 'sem' and the 'mi' signify that the questioners – in the plurality of their searching questions – only arrived at, and ran into, the not, the negation. Therefore the Hungarian word for the nothing offers a deeper and more articulated consideration of what this word 'expresses', fixing not only the search and its deficient modes, but also the fact that it is always we who search and question, even if we cannot find ourselves in 'that' – in the nothing.

That is to say, the nothing – in this, which is one of its meanings – is precisely the strangeness, foreignness, and unusualness that belongs to our own self – and therefore all our attempts to eliminate it from our existence will always be superfluous.



Király V. István is an Associate Professor in the Hungarian Department of Philosophy of the Babes-Bolyai University, Cluj, Romania. This post is an extract selected by the Editors, and adjusted for Pi, from his bilingual Hungarian-English Philosophy of The Names of the Nothing.

23 October 2016

Shapeshifters, Socks, and Personal Identity

Posted by Martin Cohen
Perhaps the proudest achievement of philosophy in the past thousand years is the discovery that each of us really does know that we exist. Descartes sort of proved that with his famous saying:

"I think therefore I am."
Just unfortunate then, that there is a big question mark hanging over the word ‘I’ here – over the notion of what philosophers call ‘personal identity’. The practical reality is that neither you nor I are in fact one person, but rather a stream of ever so slightly different people. Think back ten years – what did you have in common with that creature who borrowed your name back then? Not quite the same physical cells, certainly – many of those last only a few months. The same ideas and beliefs? But how many of us are stuck with the same ideas and beliefs over the long run? Thank goodness these too can change and shift.

In reality, we look, feel and most importantly think very differently at various points in our lives.

Such preoccupations go back a long, long way. In folk tales, for example, like those told by the Brothers Grimm, frogs become princes – or princesses! A noble daughter becomes an elegant white deer, and a warrior hero becomes a kind of snake. In all such cases, the character of the original person is simply placed in the body of the animal, as though it were all as simple as a quick change of clothes.

Many philosophers, such as John Locke, who lived way back in the seventeenth century, have been fascinated by the idea of such ‘shapeshifting’, which they see as raising profound and subtle questions about personal identity. Locke himself tried to imagine what would happen if a prince woke up one morning to find himself in the body of a pauper – the kind of poor person he wouldn’t even notice if he rode past them in the street in his royal carriage!

As I explained in a book called Philosophy for Dummies – confusing many readers – Locke discusses the nature of identity. He uses some thought experiments as part of this, but not, by the way (per multiple queries!), the sock example. He didn't literally wonder how many repairs he could make to one of his socks before it somehow ceased to be the original sock. He talks, though, about a prince and a cobbler, and asks which ‘bit’ of a person defines them as that person.

In a chapter called ‘Of Identity and Diversity’ in the second edition of the Essay Concerning Human Understanding, he distinguishes between collections of atoms that are unique, and something made up of the same atoms in different arrangements.

Living things, like people, for example, are given their particular identity not by their atoms (because each person's atoms change regularly, as we know) but rather by the particular way that they are organised. The point argued for in his famous Prince and the Cobbler example is that if the spirit of the Prince can be imagined to be transferred to the body of the Cobbler, then the resulting person is ‘really’ the Prince.

Locke’s famous definition of what it means to be a ‘Person’ is:
‘A thinking intelligent being, that has reason, and can consider it self as it self, the same thinking thing, in different times and places; which it does only by that consciousness, which is inseparable from thinking’
More recently, a university philosopher, Derek Parfit, pondered a more modern-sounding story, all about doctors physically putting his brain into someone else's body, in such a way that all his memories, beliefs and personal habits were transferred intact. Indeed today, rather grisly proposals are being made for ‘transplants’ like this. But our interest is philosophy, and Derek’s fiendish touch is to ask what would happen if it turned out that only half a brain was enough to do this kind of ‘personality transfer’.

Why is that a fiendish question to ask? Because if that were possible, potentially we could make two new Dereks out of the first one! Then how would anyone know who was the ‘real’ one?

Okay, that's all very unlikely anyway. And yet there are real questions and plenty of grey areas surrounding personal identity. Today, people are undergoing operations to change their gender – transgender John becomes Jane – or do they? Chronically overweight people struggle to ‘rediscover’ themselves as thin people – or are they fat people whose digestion is artificially constrained? Obesity and gender dysphoria alike raise profound philosophical, not merely medical, questions.

On the larger scale, too, nations struggle to decide their identity – some insisting that it involves restricting certain ethnic groups, others that it rests on enforcing certain cultural practices. Yet the reality, as in the individual human body, is slow and continuous change. The perception of a fixed identity is misleading.

“You think you are, what you are not.” 



* The book is intended to introduce children to some of the big philosophical ideas. Copies can be obtained online here: https://www.createspace.com/6299050

16 October 2016

Does History Shape Future Wars?

Posted by Keith Tidman
To be sure, lessons can be gleaned from the study of past wars—as Thucydides gleaned them—answering some of the ‘who’, ‘what’, ‘how’, ‘why’, and ‘so-what’ questions. These putative takeaways may be constructively exploited—albeit within distinct limits.
Exploited, as the military historian Trevor Dupuy said, to “determine patterns of conduct [and] performance . . . that will provide basic insights into the nature of armed conflict.” The stuff of grand strategies and humble tactics. But here’s the rub: What’s unlikely is that those historical takeaways will lead to higher-probability outcomes in future war.

The reason for this conclusion is that the inherent instability of war makes it impossible to pave the way to victory with assurance, regardless of lessons gleaned from history. There are too many variables, which rapidly pile up like grains of sand and get jostled around as events advance and recede. Some philosophers of history, such as Arthur Danto, have shed light on the whys and wherefores of all this. That is, history captures not just isolated events but rather intersections and segues between events—like synapses. These intersections result in large changes in events, making it numbingly hard to figure out what will emerge at the other end of all that bewildering change. It’s even more complicated to sort out how history’s lessons from past wars might translate to reliable prescriptions for managing future wars.

But the grounds for flawed historical prescription go beyond the fact that war’s recipe mixes both ‘art’ and ‘science’. Even in the context of blended art and science, a little historical information is not always better than none; in the case of war, a tipping point must be reached before information is good enough and plentiful enough to matter. The fact is that war is both nonlinear and dynamic. Reliable predictions—and thus prescriptions—are elusive. Certainly, war obeys physical laws; the problem is just that we can’t always get a handle on how and why events unfold, in the face of all the rapidly moving, morphing parts. Hence, in the eyes of those caught up in war’s mangle, events often appear to play out as if random, at times lapsing into a level of chaos that planners cannot compensate for.

This randomness is more familiarly known as the ‘fog of war’. The fog stems from the perception of confusion in the mind’s eye. Absent a full understanding of prevailing initial conditions and their intersections, this perception drives decisions and actions during war. But it does so unreliably. Complexity thus ensures that orderliness eludes the grasp of historians, policymakers, military leaders, and pundits alike. Hindsight doesn’t always help. Unforeseeable incidents, which Carl von Clausewitz dubbed friction, govern every aspect of war. This friction appears as unmanageable ‘noise’, magnified manifold when war’s tempo quickly picks up or acute danger is at hand.

The sheer multiplicity of, and interactions among, initial conditions make it impossible to predict every possible outcome or to calculate their probabilities. Such unpredictability in war provides a stark challenge to C.G. Hempel’s comfortable expectations:
“Historical explanation . . . [being] aimed at showing that some event in question was not a ‘matter of chance’, but was rather to be expected in view of certain antecedent or simultaneous conditions.” 
To the contrary, it is the very unpredictability of war that makes it impossible to avoid, or at least to contain. The pioneering of chaos theory, by Henri Poincaré, Edward Lorenz, and others, has shown that events associated with dynamic, nonlinear systems—war among them—are extraordinarily sensitive to their initial conditions. And as Aristotle observed, “the least deviation . . . is multiplied later a thousandfold.”

Wars evolve as events—branching out in fern-like patterns—play out their consequences. The thread linking the lessons from history to future wars is thin and tenuous. ‘Wisdom’ gleaned from the past inevitably bumps up against the realities of wars’ disorder. We might learn much from past wars, including descriptive reconstructions of causes, circumstances, and happenings, but our ability to take prescriptive lessons forward is strictly limited.

In describing the events of the Peloponnesian War, Thucydides wrote:

“If [my history] be judged by those inquirers who desire an exact knowledge of the past as an aid to the interpretation of the future . . . I shall be content.”

Yet is our knowledge of history really so exact? The answer is surely 'no' – whatever the comfortable assurances of Thucydides.

Can History Shape Future War?


Posted by Keith Tidman
In describing the events of the Peloponnesian War, Thucydides wrote, “If [my history] be judged by those inquirers who desire an exact knowledge of the past as an aid to the interpretation of the future ... I shall be content.”
Yet is our knowledge of history really that ‘exact’? And can we apply what is learned, to shape wars still to be fought? Is there a prescriptive use of military history? That is, does the historical study of past wars increase the probability of victory in the next?

In spite of the optimism of Thucydides, the answer has to be no. And for an overarching reason: the complexity, indeterminacy, and dynamical nature of war. Conditions unfold in multiple directions; high-stakes choices are made to try to push back against the specter of chaos; and overly idealised visions are applied to war’s unfolding—where ‘victory’ is writ large, to win both in battle and in the arena of political will. Of course, lessons of past wars may be useful within limits. Yet, in the words of military historian Trevor Dupuy, only to provide “basic insights”—tracing the contours of conduct and performance.

Variables pile up like grains of sand and are jostled as events advance and recede—unforeseeable incidents that the military theorist Carl von Clausewitz dubbed ‘friction’, which become magnified when war’s tempo spikes or acute danger looms. The instability of war makes it impossible to have confidence in victory, regardless of historical lessons. If the ultimate metric of war is wins, consider a few of America’s post-World War II crucibles: Korea, a stalemate; Vietnam, a loss; Iraq and Afghanistan (fifteen years later!) teetering precariously—with constabulary skirmishes in Panama, Haiti, Somalia, Grenada, and Kosovo too minor to count.

Counterinsurgency is one example of failure. The last century has seen many efforts go awry. The history includes France in Algeria and Indochina, the Netherlands in Aceh, Britain in Malaya, the Soviet Union in Afghanistan, and the United States in Vietnam, Iraq, and Afghanistan. These were asymmetric conflicts—often fought, by insurgents’ intent, away from sweeping battlefields, and where insurgents at least instinctively understood military strategist Sun Tzu’s observation that “all warfare is based on deception”. Field manuals have provided military, political, informational, intelligence, and psychological tools by way of a counter—yet sustainable victory has often proven elusive.

Some philosophers of history, such as Arthur Danto, have shed light on the whys and wherefores of this disconnect. History does not merely deal with isolated events, but with great intersections—and how they play off one another. These intersections result in major changes, making it numbingly hard to figure out what will emerge. It is even more complicated with war, where one seeks to translate the intersections that have played out in past wars into reliable prescriptions for managing future ones.

Further, a blizzard of events does not yield dependable means of assessing what went on, or of converting conclusions into sound, high-probability prescriptions for the next time. Even with hi-tech battlegrounds and mathematical simulations, a little historical information is not always better than none. A tipping point must be reached before information is good enough and plentiful enough. The reason is war’s nonlinear and dynamic nature. To this point, Arnold Toynbee was right to assert that history progresses in nonlinear fashion. In the eyes of those caught up in war’s mangle, therefore, events often play out as chaos, which military planners cannot compensate for. It has been called the ‘fog of war’.

Minor events, too, may lead to major events. Chaos theory has shown that events associated with dynamic, nonlinear systems—war among them—are extraordinarily sensitive to initial conditions. The sheer multiplicity of, and interactions among, the initial conditions make it impossible to predict most outcomes. Efforts by decision-makers run into headwinds as conditions degrade. Errors cascade. The many variables are just the starting point of war, subject to dramatic change as war persists. Hence the ‘butterfly effect’, as dubbed by Edward Lorenz, where the metaphorical flapping of a butterfly’s wings (the initial conditions) can cause extreme weather far away.
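To make that sensitivity concrete, here is a minimal numerical sketch – an editorial illustration, not anything from Lorenz or the original post – using the logistic map, a textbook chaotic system. Two trajectories that begin a millionth apart soon bear no relation to one another; the parameter value r = 4.0 is the standard fully chaotic choice, assumed here purely for demonstration.

```python
# A toy demonstration of sensitive dependence on initial conditions,
# using the logistic map x_{n+1} = r * x_n * (1 - x_n) with r = 4.0 (chaotic).
# Two starting points differing by one part in a million soon diverge utterly.

r = 4.0                      # standard fully chaotic parameter (illustrative assumption)
x, y = 0.400000, 0.400001    # initial conditions a millionth apart

for step in range(1, 41):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.6f}, y = {y:.6f}, gap = {abs(x - y):.6f}")

# Within a few dozen steps the gap is of order one: the 'least deviation',
# as Aristotle put it, has been 'multiplied a thousandfold' and more.
```

Nothing in this toy model is specific to war, of course; it simply shows why, in any nonlinear dynamic system, initial conditions can dominate outcomes beyond the reach of prediction.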

Too many to list here, the initial conditions of war include the prospect of third-party intervention, risk of battle fatigue, unexpected coupling of variables, cost-benefit bargains, resilience to setbacks, flexibility of tactics, match between force mix and mission, weaker party’s offsets to the other’s strengths, inspirational leadership, weight placed on presumed relative importance of factors—and numerous others. And as Aristotle observed, “the least deviation . . . is multiplied later a thousandfold.”

The thread linking the outcome of future wars to lessons from history is thin, and errors have come with high costs—in blood, treasure, and ethical norms. ‘Wisdom’ gleaned from the past bumps up against wars’ capacity to create disequilibrium. Much might be learned descriptively from past wars, but the prescriptive value of those lessons is tenuous.