20 November 2016

Individualism vs. Personhood in Kiribati

By Berenike Neneia
The French philosophes thought of the individual as being 'prior to' the group. This has been a point of strenuous debate ever since. But whatever the case, individualism is characteristic, in some way, of the whole of our Western society today.
I myself am privileged to belong to a society which would seem to have been stranded in time – and while individualism now influences us profoundly, the cultural patterns of the past are still near. This short post serves as an introduction to a concept which is central to my culture in Kiribati: te oi n aomata.

Te oi n aomata literally means 'a real or true person'. It includes all people, whether men or women, young or old. This is not merely a living person who has concrete existence, but one who is seen by the community which surrounds him or her to have certain features, whether ascribed or acquired. Therefore it is by these features that a community's recognition of a person is 'weighed': as to whether they are an oi n aomata, 'a real or true person', or not.

Since Kiribati society is patriarchal, there is a distinction between how a man (oi ni mwane) and a woman (oi n aine) are seen as oi n aomata. Men will be considered oi n aomata through their material possessions, while women will be known as oi n aomata by their conduct – which is meant in the sense that a woman will be well mannered, respectful, obedient, and so forth. It is rare for a woman to possess or inherit the family’s vital assets such as land, house, taro pit, and canoe. The only exception is a woman who is an only child.

Prior to the coming of Europeans to the shores of Kiribati, a man who was regarded as an oi n aomata or oi ni mwane (a real or true man) was 'renowned' as one who came from a good family (that is, a family disciplined in cultural norms), in which he had a good reputation. He would be the first-born or only child, he would have many lands, and he would have a 'house' of his own: not of European design, but a cluster of structures used for meeting, cooking, sleeping, and relaxing. These belongings were very valuable, as they indicated that a man was 'in the community'.

In relation to such possessions, a man would further have the skills and the knowledge of how to fish and how to cut toddy, which were vital to the sustenance of his family. He would also know how to build his 'house', and to maintain it. As a man, he was the one who would protect his family from all harm.

These were some of the important skills which characterised an oi ni mwane or 'real or true man'. Such a man was very highly regarded in his community.

Similarly, to be an oi n aomata or oi n aine (a real or true woman), a woman had to come from a good family (again, a family disciplined in cultural norms). She would be well nurtured and well taught, and she herself would behave according to Kiribati cultural norms. She would know how to cook and to look after her family well. This means that everyone in her household would be served first, while she would be served last.

She would know how to weave mats, so that her family would have something to lie on. She would know respect and not talk back, especially to her husband, her in-laws, and elders. Crucially, a woman would remain a virgin until she was married, since this involved the pride of her family. Therefore, she would give no appearance of indiscreet or suspect behaviour.

A woman had to maintain her place within the home, and look after her family well. As such she was considered an oi n aine or 'real and true woman', since she was the backbone of her family.

Today when one speaks about people, there is a saying, 'Ai tiaki te aomata raom anne,' which refers to those who are 'no longer an (ordinary) person'. Rather, they have acquired, inherited, and come to possess things which are important in the context of our culture, and which make life much more enjoyable, much easier, and much better for all (with fewer complications, and less suffering).

However, now that globalisation has reached the shores of Kiribati, the definition of an oi n aomata, 'a real or true person', is evolving in relation to the changing patterns, norms, and life-styles of the Kiribati people. We now see the effects of these changing patterns – from a communal life to a more individualistic life-style. While this has brought various benefits to society, in many ways it has not been for the better.

13 November 2016

Pseudo Ethics

Posted by Thomas Scarbrough
Jean-François Lyotard proposed that efficiency, above all, provides us with legitimation for human action today. If we can only do something more efficiently – or more profitably – then we have found a reason to do it. In fact society in its entirety, Lyotard considered, has become a system which must aim for efficient functioning, to the exclusion of its less efficient elements.
This is the way in which, subtly, as if by stealth, we have come to fill a great value vacuum in our world with pseudo values, borrowed from the realm of fact. Philosophically, this cannot be done – yet it is done – and it happens like this:

The human sphere is exceedingly complex – and inscrutable. It is one thing for us to trace relations in our world, as by nature we all do – quite another to know how others trace relations in this world.  While our physical world is more or less open to view, this is not the case with worlds which exist inside other people's minds – people who further hide behind semiotic codes: the raising of an eyebrow, for instance, or a laugh, or an utterance.

A million examples could not speak as loudly as the fact that we have a problem in principle. Like the chess novice who randomly inserts a move into the grand master's game, as soon as we introduce others into the picture, there is a quantum leap in complexity.  Small wonder that we find it easier to speak about our world in 'factual' terms than in human terms.

Further, in the human sphere we experience frequent reversals and uncertainties – war, famine, and disease, among many other things – while through the natural sciences we are presented with continual novelty and advance. In comparison with the 'factual' sphere, the human sphere is a quagmire. This leads to a spontaneous privileging of the natural sciences.

We come to see the natural sciences as indicating values, where strictly they do not – and cannot. That is, we consider that they give us direction as to how we should behave. And so, economic indicators determine our responses to the economy, clinical indicators determine our responses to a 'clinical situation' (that is, to a patient), environmental indicators determine our responses to the state of our environment, and so on.

Yet philosophers know that we are unable, through facts, to arrive at any values. We call it the fact-value distinction, and it leaves us with only two logical extremes: logical positivism on the one hand, or ethical intuitionism on the other. That is, either we cannot speak about values at all, or we must speak about them in the face of our severance from the facts. 

We automatically, impulsively, instinctively react to graphs, charts, statistics, imagining that they give us reason to act. Yet this is illusory. While the natural sciences might seem to point us somewhere, in terms of value, strictly they do not, and cannot. It is fact seeking to show us value.

Thus we calculate, tabulate, and assess things, writes sociologist James Aho, on the basis of 'accounting calculations', the value of which has no true basis. Under the banner of efficiency, such calculations have come to colonise virtually every institutional realm of modern society – even though this is, and must remain, a philosophical mistake.

Of course, efficiency has positive aspects. We receive efficient service, we design an efficient machine, or we have an efficient economy. This alone raises the status of efficiency in our thinking. However, in the context of this discussion, where efficiency represents legitimation for human action, it has no proper place.

The idea of such efficiency has introduced us to a life which many of us would not have imagined as children: we are both processed and we process others, on the basis of data sets – while organic fields of interest such as farming, building, nursing, even sports, have been reduced to something increasingly resembling paint-by-numbers. It is called 'increased objectification'.

With the advance of efficiency as a motive for action, we have come to experience, too, widespread alienation today: feelings of powerlessness, normlessness, meaninglessness, and social isolation, which did not exist in former times. Karl Marx considered that we have been overtaken by commodity fetishism, where the devaluation of the human sphere is proportional to the over-valuation of things.

Theologian Samuel Henry Goodwin summed it up: 'We are just a number.' Through pseudo values, borrowed from the realm of fact, we are dehumanised. In fact, this must be the case as long as we take numerate approaches to human affairs on the basis that they are 'indicated' by facts. Cold fact encroaches on the complex and subtle relations which are represented by the human sciences – in fact, by life as it is lived.

06 November 2016

Picture Post #18 A Somersault for the Suspension of Civilisation



'Because things don’t appear to be the known thing; they aren’t what they seemed to be neither will they become what they might appear to become.'


Posted by Tessa den Uyl and Martin Cohen

Photo credit: students of  A Mundzuku Ka Hina, communications workshop. 

A life conditioned by the dictates of competition and consumption cannot but bring great social differences along in its train. When we ascribe symbolic values to a consumptive life, ideas will conform to ideals in which our moral duties are the rights of others on us.

The subtle way in which social disproportions are perceived as if a causa sui – something wherein the cause lies within itself – creates a world of facts based upon competitive abstractions that endlessly rehearse on a Procrustean bed.

With this gesture, the salto (flying somersault) performed by the boy, who depends for his survival on a rubbish-dump, breaks the conditioned life. What he breaks is the need to function – which means to think – according to a certain 'life-design'. His action shows the incompleteness of our relationships in an abstract world.

His jump is a jump into a space of non-facts.

In the suspension of the movement is the liberating being of lightness.

30 October 2016

Nothing: A Hungarian Etymology

'Landing', 2013. Grateful acknowledgement to Sadradeen Ameen
Posted by Király V. István
In its primary and abstract appearance, nothing is precisely 'that' 'which' it is not. However, the word is still there, in the words of all the languages we know. Here we explore its primary meaning in Hungarian.
The Hungarian word for nothing – 'semmi' – is a compound of 'sem' (nor) and 'mi' (we). The negative 'sem' expresses: 'nor here' (sem itt), 'nor there' (sem ott), 'nor then' (sem akkor), 'nor me' (sem én), 'nor him, nor her' (sem ő). That is to say, I or we have searched everywhere, yet have found nothing, nowhere, never.

However much we think about it, the not of 'sem' is not the negating 'not', nor the depriving 'not' which Heidegger revealed in his analysis of 'das Nichts'. The not in the 'sem' is a searching not! It says, in fact, that searching we have not found. By this, it says that the way that we meet, face, and confront the not is actually a search. Thus the 'sem' places the negation in the mode of search, and the search into the mode of not (that is, negation).

What does all this mean in its essence?

Firstly, it means that, although the 'sem' is indeed a kind of search, which 'flows into' the not, still it always distinguishes itself from the nots it faces and encounters. For searching is not simply the repetition of a question, but a question carried around. Therefore the 'sem' is always about more than the tension between the question and its negative answer, for the negation itself – the not – is placed into the mode of search! And conversely.

Therefore the 'sem' never negates the searching itself – it only places and fixes it in its deficient modes. This way, the 'sem' emphasises, outlines, and suffuses the not, yet stimulates the search, until the exhaustion of its final emptiness. The contextually experienced not – that is, the 'sem' – is actually nothing but an endless deficiency of an emptied, exhausted, yet not suspended search.

This ensures on the one hand, the stability of the 'sem', which is inclined to hermetically close up within itself – while it ensures on the other hand, an inner impulse for the search which, emanating from it, continues to push it to its emptiness.

It is in the horizon of this impulse, then, that the 'sem' merges with the 'mi'. The 'mi' in Hungarian is at the same time an interrogative pronoun and a personal pronoun. Whether or not this linguistic identity is a 'coincidence', it conceals important speculative possibilities, for the 'mi' pronoun, with the 'sem' negative, always says that it is 'we' (mi) who questioningly search, but find 'nothing' (semmi).

Merged in their common space, the 'sem' and the 'mi' signify that the questioners – in the plurality of their searching questions – only arrived at, and ran into, the not, the negation. Therefore the Hungarian word for the nothing offers a deeper and more articulated consideration of what this word 'expresses', fixing not only the search and its deficient modes, but also the fact that it is always we who search and question, even if we cannot find ourselves in 'that' – in the nothing.

That is to say, the nothing – in this, which is one of its meanings – is precisely the strangeness, foreignness, and unusualness that belongs to our own self – and therefore all our attempts to eliminate it from our existence will always be superfluous.



Király V. István is an Associate Professor in the Hungarian Department of Philosophy of the Babes-Bolyai University, Cluj, Romania. This post is an extract selected by the Editors, and adjusted for Pi, from his bilingual Hungarian-English Philosophy of The Names of the Nothing.

23 October 2016

Shapeshifters, Socks, and Personal Identity

Posted by Martin Cohen
Perhaps the proudest achievement of philosophy in the past thousand years is the discovery that each of us really does know that we exist. Descartes sort-of proved that with his famous saying:

"I think therefore I am."
Just unfortunate, then, that there is a big question mark hanging over the word ‘I’ here – over the notion of what philosophers call ‘personal identity’. The practical reality is that neither you nor I are in fact one person but rather a stream of ever so slightly different people. Think back ten years – what did you have in common with that creature who borrowed your name back then? Not the same physical cells, certainly – most of those have long since been replaced. The same ideas and beliefs? But how many of us are stuck with the same ideas and beliefs over the long run? Thank goodness these too can change and shift.

In reality, we look, feel and most importantly think very differently at various points in our lives.

Such preoccupations go back a long, long way. In folk tales, for example, like those told by the Brothers Grimm, frogs become princes – or princesses! – a noble daughter becomes an elegant white deer, and a warrior hero becomes a kind of snake. In all such cases, the character of the original person is simply placed in the body of the animal, as though it were all as simple as a quick change of clothes.

Many philosophers, such as John Locke, who lived way back in the seventeenth century, have been fascinated by the idea of such ‘shapeshifting’, which they see as raising profound and subtle questions about personal identity. Locke himself tried to imagine what would happen if a prince woke up one morning to find himself in the body of a pauper – the kind of poor person he wouldn’t even notice if he rode past them in the street in his royal carriage!

As I explained in a book called Philosophy for Dummies – confusing many readers – Locke discusses the nature of identity. He uses some thought experiments too as part of this, but not, by the way (in answer to multiple queries!), the sock example. He didn't literally wonder about how many repairs he could make to one of his socks before it somehow ceased to be the original sock. He talks, though, about a prince and a cobbler, and asks which ‘bit’ of a person defines them as that person.

In a chapter called ‘Of Identity and Diversity’ in the second edition of the Essay Concerning Human Understanding, he distinguishes between collections of atoms that are unique, and something made up of the same atoms in different arrangements.

Living things, like people, for example, are given their particular identity not by their atoms (because each person's atoms change regularly, as we know) but rather are defined by the particular way that they are organised. The point argued for in his famous Prince and the Cobbler example is that if the spirit of the Prince can be imagined to be transferred to the body of the Cobbler, then the resulting person is ‘really’ the Prince.

Locke’s famous definition of what it means to be a ‘Person’ is:
‘A thinking intelligent being, that has reason, and can consider it self as it self, the same thinking thing, in different times and places; which it does only by that consciousness, which is inseparable from thinking’
More recently, a university philosopher, Derek Parfit, has pondered a more modern-sounding story, all about doctors physically putting his brain into someone else's body, in such a way that all his memories, beliefs and personal habits were transferred intact. Indeed today, rather grisly proposals are being made for ‘transplants’ like this. But our interest is philosophy, and Derek’s fiendish touch is to ask what would happen if it turned out that only half a brain was enough to do this kind of ‘personality transfer’?

Why is that a fiendish question to ask? Because if that were possible, potentially we could make two new Dereks out of the first one! Then how would anyone know which was the ‘real’ one?!

Okay, that's all very unlikely anyway. And yet there are real questions and plenty of grey areas surrounding personal identity. Today, people are undergoing operations to change their gender – transgender John becomes Jane – or do they? Chronically overweight people are struggling to ‘rediscover’ themselves as thin people – or are they a fat person whose digestion is artificially constrained? Obesity and gender dysphoria alike raise profound philosophical, not merely medical, questions.

On the larger scale, too, nations struggle to decide their identity - some insisting that it involves restricting certain ethnic groups, others that it rests on enforcing certain cultural practices. Yet the reality, as in the individual human body, is slow and continuous change. The perception of a fixed identity is misleading.

“You think you are, what you are not.” 



* The book is intended for introducing children to some of the big philosophical ideas. Copies can be obtained online here: https://www.createspace.com/6299050 

16 October 2016

Does History Shape Future Wars?

Posted by Keith Tidman
To be sure, lessons can be gleaned from the study of past wars, as Thucydides did, answering some of the ‘who’, ‘what’, ‘how’, ‘why’, and ‘so-what’ questions. These putative takeaways may be constructively exploited—albeit within distinct limits.
Exploited, as the military historian Trevor Dupuy said, to “determine patterns of conduct [and] performance . . . that will provide basic insights into the nature of armed conflict.” The stuff of grand strategies and humble tactics. But here’s the rub: What’s unlikely is that those historical takeaways will lead to higher-probability outcomes in future war.

The reason for this conclusion is that the inherent instability of war makes it impossible to pave the way to victory with assurance, regardless of lessons gleaned from history. There are too many variables, which rapidly pile up like grains of sand and get jostled around as events advance and recede. Some philosophers of history, such as Arthur Danto, have shed light on the whys and wherefores of all this. That is, history captures not just isolated events but rather intersections and segues between events—like synapses. These intersections result in large changes in events, making it numbingly hard to figure out what will emerge at the other end of all that bewildering change. It’s even more complicated to sort out how history’s lessons from past wars might translate to reliable prescriptions for managing future wars.

But the grounds for flawed historical prescription go beyond the fact that war’s recipe mixes both ‘art’ and ‘science’. Even in the context of blended art and science, a little historical information is not always better than none; in the case of war, a tipping point must be reached before information is good enough and plentiful enough to matter. The fact is that war is both nonlinear and dynamic. Reliable predictions—and thus prescriptions—are elusive. Certainly, war obeys physical laws; the problem is just that we can’t always get a handle on the how and why that happens, in face of all the rapidly moving, morphing parts. Hence in the eyes of those caught up in war’s mangle, events often appear to play out as if random, at times lapsing into a level of chaos that planners cannot compensate for.

This randomness is more familiarly known as the ‘fog of war’. The fog stems from the perception of confusion in the mind’s eye. Absent a full understanding of prevailing initial conditions and their intersections, this perception drives decisions and actions during war. But it does so unreliably. Complexity thus ensures that orderliness eludes the grasp of historians, policymakers, military leaders, and pundits alike. Hindsight doesn’t always help. Unforeseeable incidents, which Carl von Clausewitz dubbed friction, govern every aspect of war. This friction appears as unmanageable ‘noise’, magnified manifold when war’s tempo quickly picks up or acute danger is at hand.

The sheer multiplicity of, and interactions among, initial conditions make it impossible to predict every possible outcome or to calculate their probabilities. Such unpredictability in war provides a stark challenge to C.G. Hempel’s comfortable expectations:
“Historical explanation . . . [being] aimed at showing that some event in question was not a ‘matter of chance’, but was rather to be expected in view of certain antecedent or simultaneous conditions.” 
To the contrary, it is the very unpredictability of war that makes it impossible to avoid or at least contain.

The pioneering of chaos theory, by Henri Poincaré, Edward Lorenz, and others, has shown that events associated with dynamic, nonlinear systems—war among them—are extraordinarily sensitive to their initial conditions. And as Aristotle observed, “the least deviation . . . is multiplied later a thousandfold.”

Wars evolve as events—branching out in fern-like patterns—play out their consequences.

The thread linking the lessons from history to future wars is thin and tenuous. ‘Wisdom’ gleaned from the past inevitably bumps up against the realities of wars’ disorder. We might learn much from past wars, including descriptive reconstructions of causes, circumstances, and happenings, but our ability to take prescriptive lessons forward is strictly limited.

In describing the events of the Peloponnesian War, Thucydides wrote:

“If [my history] be judged by those inquirers who desire an exact knowledge of the past as an aid to the interpretation of the future . . . I shall be content.”

Yet is our knowledge of history really so exact? The answer is surely 'no' – whatever the comfortable assurances of Thucydides.





Can History Shape Future War?


Posted by Keith Tidman
In describing the events of the Peloponnesian War, Thucydides wrote, “If [my history] be judged by those inquirers who desire an exact knowledge of the past as an aid to the interpretation of the future ... I shall be content.”
Yet is our knowledge of history really that ‘exact’? And can we apply what is learned, to shape wars still to be fought? Is there a prescriptive use of military history? That is, does the historical study of past wars increase the probability of victory in the next?

In spite of the optimism of Thucydides, the answer has to be no. And for an overarching reason: The complexity, indeterminacy, and dynamical nature of war. Conditions unfold in multiple directions; high-stakes choices are made to try pushing back against the specter of chaos; and overly idealised visions are applied to war’s unfolding—where ‘victory’ is writ large, to win both in battle and in the arena of political will. Of course, lessons of past wars may be useful within limits. Yet, in the words of military historian Trevor Dupuy, only to provide “basic insights”—tracing the contours of conduct and performance.

Variables pile up like grains of sand and are jostled as events advance and recede—unforeseeable incidents that the military theorist Carl von Clausewitz dubbed ‘friction’, which become magnified when war’s tempo spikes or acute danger looms. The instability of war makes it impossible to have confidence in victory, regardless of historical lessons. If the ultimate metric of war is wins, consider a few of America’s post-World War II crucibles: Korea, a stalemate; Vietnam, a loss; Iraq and Afghanistan (fifteen years later!) teetering precariously—constabulary skirmishes in Panama, Haiti, Somalia, Grenada, and Kosovo too minor to regard.

One example of failure has been counterinsurgency. The last century has seen many such efforts go awry: France in Algeria and Indochina, the Netherlands in Aceh, Britain in Malaya, the Soviet Union in Afghanistan, and the United States in Vietnam, Iraq, and Afghanistan. These were asymmetric conflicts—often fought, by insurgents’ intent, away from sweeping battlefields, and where insurgents at least instinctively understood military strategist Sun Tzu’s observation that “all warfare is based on deception”. Field manuals have provided military, political, informational, intelligence, and psychological tools by way of a counter—yet sustainable victory has often proven elusive.

Some philosophers of history, such as Arthur Danto, have shed light on the whys and wherefores for this disconnect. History does not merely deal with isolated events, but with great intersections—and how they play off one another. These intersections result in major changes, making it numbingly hard to figure out what will emerge. It is even more complicated with war, where one seeks to translate intersections that have played into past wars into reliable prescriptions for managing future wars.

Further, a blizzard of events does not yield dependable means to assess information about what was going on and to convert conclusions into sound, high-probability prescriptions for the next time. Even with hi-tech battlegrounds and mathematical simulations, a little historical information is not always better than none. A tipping point must be reached before information is good enough and plentiful enough. The reason is war’s nonlinear and dynamic nature. To this point, Arnold Toynbee was right to assert that history progresses in nonlinear fashion. In the eyes of those caught up in war’s mangle, therefore, events often play out as chaos, which military planners cannot compensate for. It has been called the ‘fog of war’.

Minor events, too, may lead to major events. Chaos theory has shown that events associated with dynamic, nonlinear systems—war among them—are extraordinarily sensitive to initial conditions. The sheer multiplicity of, and interactions among, the initial conditions make it impossible to predict most outcomes. Efforts by decision-makers run into head winds as conditions degrade. Errors cascade. The many variables are just the starting point of war, and they change dramatically as war persists. This is the ‘butterfly effect’, as dubbed by Edward Lorenz, where the metaphorical flapping of a butterfly’s wings (the initial conditions) can cause extreme weather far away.
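To make the idea of sensitivity to initial conditions concrete, here is a minimal illustrative sketch in Python (not from Tidman's post; the logistic map and the specific numbers are simply a standard textbook stand-in for any chaotic system). Two trajectories whose starting values differ by one part in twenty million quickly cease to resemble one another:

    # Illustrative only: the logistic map is a toy chaotic system, used here
    # as a stand-in for 'sensitive dependence on initial conditions'.
    def logistic_map(x, r=4.0):
        # One step of x -> r * x * (1 - x); r = 4.0 lies in the chaotic regime.
        return r * x * (1.0 - x)

    def trajectory(x0, steps):
        # Iterate the map 'steps' times and return the whole path.
        path = [x0]
        for _ in range(steps):
            path.append(logistic_map(path[-1]))
        return path

    a = trajectory(0.20000000, 40)
    b = trajectory(0.20000001, 40)  # differs by one part in twenty million

    for step in (0, 10, 20, 30, 40):
        gap = abs(a[step] - b[step])
        print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f} (gap {gap:.6f})")

After a few dozen steps the two runs are effectively unrelated – the formal analogue of Aristotle's 'least deviation . . . multiplied later a thousandfold'.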

Too many to list here, the initial conditions of war include the prospect of third-party intervention, risk of battle fatigue, unexpected coupling of variables, cost-benefit bargains, resilience to setbacks, flexibility of tactics, match between force mix and mission, weaker party’s offsets to the other’s strengths, inspirational leadership, weight placed on presumed relative importance of factors—and numerous others. And as Aristotle observed, “the least deviation . . . is multiplied later a thousandfold.”

The thread linking the outcome of future wars to lessons from history is thin, and errors have come with high costs—in blood, treasure, and ethical norms. ‘Wisdom’ gleaned from the past bumps up against wars’ capacity to create disequilibrium. Much might be learned descriptively from past wars, but the prescriptive value of those lessons is tenuous.

10 October 2016

Do We Need Perpetual Peace?

By Bohdana Kurylo
Immanuel Kant viewed war as an attribute of the state of nature, in which ‘the freedom of folly’ has not yet been replaced by ‘the freedom of reason’. His philosophy has influenced the ways in which contemporary philosophers conceive of political violence, and seek to eliminate it from global politics: through international law, collective security, and human rights. Yet is perpetual peace an intrinsically desirable destination for us today?
For Kant, peace was a question of knowledge – insofar as knowledge teaches us about human nature and the experience of all centuries. It was a matter of scrutinising all claims to knowledge about human potential that stem from feelings, instincts, memories, and other results of lived experience. On the basis of such knowledge, he thought, war could be eliminated.

Kant realised, however, that not all human knowledge is true. In particular, the ever-present possibility of war serves as evidence of the inadequacy of existing knowledge to conceive the means and principles by which perpetual peace may be established. Kant’s doctrine of transcendental idealism explained this inadequacy by claiming that humans experience only appearances (phenomena) and not things-in-themselves (noumena). What we think we know is only appearance – our interpretation of the world. Beyond this lies a real world of things-in-themselves, the comprehension of which is simply unattainable for the human mind.

While realists, on this basis, insist on the inevitability of anarchy and war, Kant conceived that the noumenal realm could emancipate our reason from the limitations of empiricism, so enabling us to achieve perpetual peace. He sought to show that we have a categorical duty to act morally, even though the empirical world seems resistant to it. And since there is no scientific evidence that perpetual peace is impossible, he held that it ought to remain a possibility. Moreover, since moral practical reason claims that war is absolutely evil, humans have a moral duty to discipline their worst instincts to bring about perpetual peace.

Claiming to be guided by universal reason, Kant proposed three institutional principles which could become the platform for a transnational civil society, superseding potential sources of conflict:
• The road to peace starts with the transition from the natural condition to an ‘original contract, upon which all rightful legislation of a people must be founded’, which needs to be republican.
• In order to overcome the natural condition internationally, external lawlessness between states should be solved by creating a ‘Federation of Free States’.
• Finally, a peaceful membership in a global republic would not be possible without ‘the right of a stranger not to be treated with hostility […] on someone else’s territory’ – the cosmopolitan right to universal hospitality.
Yet Kant, in spite of wanting to emancipate humans from natural determination and past experience, seems to have fallen under the same phenomenal influence as the realists. His pessimistic view of human instincts, which needed to be suppressed to avoid war, strongly reflected an internalisation of the social perceptions of human nature in his time. Humans, he thought, by choosing to overcome their instincts, ought to move from the tutelage of human nature to a state of freedom. The problem is that this ‘freedom’ was already socially defined. Therefore, viewing war as a purely negative phenomenon that hinders human progress, Kant never subjected his reasoning to the total scrutiny which he himself advocated.

Consequently Kant offered a rather deterministic solution, which merely aimed at social ‘tranquillisation’ through feeding people the ready-made values of global peace. Hence one observes his rather excessive emphasis on obedience to authority: ‘all resistance against the supreme legislative power […] is the greatest and most punishable crime’. Kant’s individual requires a master who will ‘break his self-will and force him to obey’. In turn, the master needs to be kept under the control of his own master. Crucially, this would destroy the liberty to conceive for oneself whether war is necessarily such a negative phenomenon.

Even such pacification, through obedience to authority, is unlikely to bring perpetual peace, for it refuses to understand the underlying factors that lead humans into war with each other. Perhaps more effective would be to try to find the cause of war, prior to searching for its cure.

Kant missed the idea that war may be the consequence of the current value system, which suppresses the true human will. Thus Friedrich Nietzsche argued for the need to revaluate values. Being unafraid of war, he recognised its creative potential to bring about a new culture of politics. Where Kant’s peace would merely be a temporary pacification, a complete revaluation of values could potentially create a society that would be beyond the issues of war and peace.

02 October 2016

Picture Post #17 The Mask



'Because things don’t appear to be the known thing; they aren’t what they seemed to be neither will they become what they might appear to become.'

Posted by Tessa den Uyl and Martin Cohen

The headquarters of Mussolini's Italian Fascist Party, 1934 via the Rare Historical Photos website
The curious thing about this image is that it looks so much like an over-the-top film set. The dictator looks down on the hurrying-past public from the facade of the Party HQ – which in this case is imaginatively, yet also somehow absurdly, covered in graffiti, in the original sense of writing or drawings that have been scribbled, scratched, or painted. The 'Si, si, si' is of course Italian for 'Yes', which is actually not so sinister. The occasion was the 1934 elections, in which Italians were called to vote either For or Against the Fascist representatives on the electoral list. Indeed, the facade was not always covered up like that.

In 1934, Mussolini had already ruled Italy for 12 years, and the election had certain fascistic features: there was only one party - the fascist one - and the ballot slip for 'Yes' was patriotically printed in the colours of the Italian flag (plus some fascist symbols), while 'No' was, in a fine philosophical sense, a vote for nothing, and its ballot sheet was plain white.

The setting of the picture is the Palazzo Braschi in Rome, and the building was the headquarters of the Fascist Party Federation - which was the local one, not the national, Party headquarters.

According to the Fascist government that supervised the vote, anyway, the eventual vote was a massive endorsement of Il Duce with the Fascist list being approved not merely by 99% of voters but by 99.84% of voters!

But back to the building. Part of Mussolini’s and his philosopher guru, Giovanni Gentile's, grand scheme was to transform the cities into theatrical stages proclaiming Fascist values. Italian fascism is little understood, and was not identical to the later Nazi ideology - but one thing it did share was the belief in totalitarian power. As George Orwell would later portray in his dystopia, 1984, in this new world 2+2 really would equal five if the government said so. Si!


28 September 2016

A Look at Philosophy for Children





The Philosophical Society of England has long championed 'philosophy in schools', and over the years has published several articles on the topic. In 1952, Bernard Youngman's strategy for a philosophical education was copious amounts of Bible study, whereas nowadays Philosophy in Schools is portrayed as a kind of antidote to religion, a position both explored and advocated at book length by Stephen Law in The War for Children's Minds (also reviewed on the Philosopher website).

But at least Law would agree with Youngman that the educator's task involves leading 'the young untutored mind towards love of wisdom and knowledge'. And both follow the philosophical principle that the teacher (in Youngman's words) "must value freedom of thought and revere independence of mind; he must at all times be, as Plato so succinctly put it, midwife to his pupils' thoughts".



In the dark days of Madame Thatcher, and the UK of the 1980s, when everyone 'knew the price of everything and the value of nothing', philosophy was out of fashion at all levels of education. But these days Philosophy is undergoing something of a resurgence, particularly in UK schools. Not for nothing did that cynical marketing phenomenon, Harry Potter, designate his first task as 'the search for the Philosopher's Stone'. Because philosophy, however interpreted, has something about it that appeals to children, intellectuals and hard-nosed accountants alike.

And even if it is sometimes not quite clear which group is driving it, certainly there are now schools dotted all around Britain dipping at least a small toe into philosophy, from small rural Primaries to large urban Grammars. There are a lot of deep philosophy-of-education and teaching-methodology issues raised by these projects. In particular, philosophy like this (unlike the elitist and stultifying French 'Philosophy Bac') is part of the shift away from learning content towards 'thinking skills', and indeed listening and communication skills. Philosophy for children in this sense is just part of a "creative curriculum", made up of Music, Dance, and the Arts.

One school that has made a particular campaign out of the approach is a small London primary school called Gallions (in E6) which claims that philosophy has cultivated and encouraged creativity, empathy and a sense of self-worth throughout the school. After reinventing itself in this philosophy-inspired way, it claims to have made enormous progress.

"Regular practice in thinking and reasoning together has had an extraordinary impact on learning, on relationships, and on mutual respect within and beyond the school gates", Gallions says in one of its innovative (read expensive) publicity materials.

Gallions claims to have found the holy grail of education – active, creative, democratic – somewhere in the by now fairly well-worn methods of Matthew Lipman, a professor of philosophy in the United States, branded as 'P4C', along with more recent help from SAPERE, a sort of British off-shoot busy selling courses in its methods (including 'masters' degrees at Oxford Brookes University).

Made into a commercial brand by Lipman, 'P4C' is an educational approach which promises to turn children and young people into effective, critical and creative thinkers and help them to take responsibility for their own learning 'in a creative and collaborative environment'.

The method invariably involves a warm-up or 'thinking game', featuring what is rather grandly called 'the introduction of the stimulus' but might more prosaically be described as the presentation of the lesson's theme. The group is invited to discuss the theme and eventually decide exactly which questions related to it they want to pursue.

For younger children, 'thinking games' like these are advocated.
 
• Bring an object in (use a prop): an everyday object is placed in the middle of a circle of children, and everybody in turn is given an opportunity to ask the object a question.
• Random words: again working in a circle, each child says a word which must be unconnected to the last word spoken. If someone thinks there is a connection between the words, they call out 'challenge!' and must explain the link.

(If you think these are not much in the way of 'games', you should try the ordinary lessons.)

Naturally, schools being schools, philosophy starts off with 'rules'. There are rules about speaking - not talking when someone else is talking, not laughing at other people's ideas - and rules about listening - letting people finish, taking 'thinking time' to consider other people's ideas before speaking, and connecting new contributions to points previously made. All this is both virtuous and very practical. If children learn little else at school, they learn ways of interacting with others. Too often, the school environment discourages discussion and collaboration in favour of rigid distinctions between 'right and wrong' points of view and hierarchies of knowledge.

In all this, the teacher is there not to teach, but only as a source of information or occasionally a 'referee', ensuring fairness and perhaps encouraging (assuming, which is a big assumption, that they are able to distinguish them) the most interesting lines of discussion from the rest. In short, they act as 'facilitators' for the group's learning. 'In a community of enquiry all the knowledge to be absorbed comes from the children', preach the P4C guidance notes. The aim is that the children teach each other. Their choices are said to dominate the entire process: they construct the questions they consider interesting about the stimulus, they choose the question they wish to debate, and they decide in what way they want to contribute to the discussion.

As a consequence of all this freedom to decide what to talk about, the adherents of philosophy for children claim that the classes learn how to think and express their thoughts in new and often dramatic ways.

Secondly, as children listen to and learn from each other, they are said to practise and develop communication skills such as empathy, patience and generosity. Both individually and as a group they become attentive and supportive of each other.

Regular doses of 'P4C' are claimed to enable even the shyest children to develop such speaking and listening skills. 'They learn that, in order for them to be heard when they have something to say, they have to listen, and listen carefully, to what others say. The cognitive challenge represented by the stimulus, the facilitation that constantly challenges understanding and pushes for greater depth and clarity, and the thrill of being listened to with interest, causes them to develop their vocabulary and grammar. Children are generally seen to increase the length and complexity of their contributions over time.' (From a P4C Handout for in-service teacher training edited by Lisa Naylor.)
They also acquire a vocabulary for expressing unhappiness, discomfort or frustration, which leads to negotiating with other children instead of expressing their feelings through physical means. Playground interactions change.

All this leads to children spending more time reflecting on ideas, and becoming generally more thoughtful and articulate. The ability to reflect on one's own thinking is, after all, we are told, a feature of very able people. And so, the children's improved self-esteem translates into increased academic achievement.

Mind you, if the approach was really as successful and as transformative as its advocates claim, it would seem a pity to restrict it to the drip-feed of accredited trainers and special conferences. After all, the ideas are as old as the slopes of Mount Olympus, and materials abound promising ways to implement them.

Yet the would-be philosophy teachers are encouraged to think that introducing creativity, let alone philosophy, into the classroom requires additional training - more expertise, not less! Since the UK government privatised education, the Internet is full of sites (such as Independent Thinking Limited) offering 'experts' in education, usually seen as a kind of branch of business management. In places such as this, the experts, fresh from successful careers as insurance salesmen or racing car drivers, boast of their outstanding qualities under pictures of themselves on yachts.
Attitude, creativity, taking responsibility, genius, goal setting and much more – all the stuff that he had never been told before but was beginning to wish he had. More to the point, he began to formulate the idea of working with young people to take these ideas into schools around the country.
- As one teacher trainer puts it. No wonder that all too often the claims made for P4C turn out to be inflated, and that the children describing how they have benefited seem to be repeating new educational mantras rather than finding their own authentic voices.

This all goes against the original 'Socratic' principle that the teacher stays in the background, only occasionally asserting themselves if they feel that the discussion has left the philosophical arena completely - or alternatively, to encourage further consideration of an aspect that may have been raised but is not being followed up by the others. They act as 'chairmen' of a debate, not as sources of new information or adjudicators, both roles which rapidly lead to the class becoming passive in the process.

But if school teachers find it difficult to stay in the background and give up their role as final adjudicator (few teachers indeed have this knack), it is equally a problem for philosophy in Higher Education. How to democratise and stimulate philosophical debate is very much the theme of recent work on Philosophy for 'big children' in universities and colleges these days – particularly those taking philosophy as a foundation course for a more specialised degree. In the 1990s I was myself involved in research of this kind, under George MacDonald Ross of Leeds University (nowadays part of the so-called 'Higher Education Academy'). Here, tactics such as 'proctorials' – discussion groups structured in a very similar way to the P4C groups – reign supreme.

Because philosophy with children and Philosophy 'with a capital P' in seminar groups share many of the same characteristics. There is, first of all, a wish to stimulate the group into active discussion of an issue, and that requires 'the stimulus'. Puzzling problems and paradoxes are often attractive to students, whereas children may prefer more 'concrete' examples.

Whatever method is used, the important thing is to recognise that the problems are triggers, not material in themselves, just as philosophy should be a process, not a body of material to be passively learned.

One secondary school teacher, Michael Brett, who introduced philosophy to his classes (with children between 10 and 13) using books of short philosophical problems, found that the ones which worked best with school students were those that went where 'pure philosophers' would refuse to go, such as the economic problems in (my own book) 101 Philosophy Problems, featuring eminently 'concrete' issues such as the price of stamps, potatoes and so on, alongside more traditional philosophy problems presented, however, as secular puzzles – such as the barber who couldn't cut his own hair (Russell's paradox).

Another book Michael Brett tried with his classes, published in America and called The Book of Questions, had, as one would hope, lots of questions – most of which were ethical and which he thought they'd like. But his experience was that children basically preferred the 'barber' and the 'stamps and potatoes'...

As he later summed it up, this seemed to be because: 
children like to see a point to what they are thinking about: understanding economics, money etc., or the interest of puzzles. They aren't that bothered about questions that bear little on their lives - as they see them.
When Lisa Naylor says (see article at The Philosopher) that philosophy encourages children to become producers of surprisingly abstract thought, it has to be remembered that these abstract issues have first of all been made 'concrete' by being brought into the classroom as tape recordings or even simple objects.

This fact has to be borne in mind before imagining that philosophy for children is an opportunity to introduce questions with no particular answer, or debates with no particular purpose. Philosophy is not just 'anything goes'... Of course, it might be too much to ask that this should also be borne in mind by philosophers at all levels.
Martin Cohen