04 June 2015
31 May 2015
African Philosophy: A Personal Perspective
Oils on canvas, 1.5m², courtesy of Ann Moore
Great movements may be experienced in microcosm. The dynamics of the national economy may be experienced in the price of a loaf of bread. Global weather patterns may be reflected in a bird which visits my garden. So, too, may the philosophy of a continent be understood through the simple habits of the common people. This is a personal story, through which I began to discern the features of the philosophy of a continent.

“Articulation”, in the common usage, has been understood to be verbal articulation. This meaning was expanded, in philosophy at least, by Michael Polanyi, who (re)defined articulation as formulated knowledge. Thus articulation came to include written words, maps, and mathematical formulae, among other things. In fact, the philosophical meaning of the word has changed again since – yet more of this in a moment.
There are two ways in which those of European origin are taught to articulate. On the one hand, we have been taught to articulate our thoughts – on the other hand, our feelings. In fact, it is more or less expected of all of us to express our thoughts accurately, and our feelings precisely. Not so in the African culture I have come to know through living and working in Africa – and more than anything, through marriage into an African family.
My Swiss wife and I, who were both settled and well established in life, were faced with the shock of her being diagnosed with end-stage bone marrow cancer at a comparatively young age. Out of care for my well-being, she reverted to an ancient tradition. She instructed me to marry Ester Sizani, a woman from the hills, of largely Xhosa descent. This came to be of crucial importance for me, to a deeper understanding of African philosophy.
While I knew Ester, I had only communicated with her functionally and in passing. This meant that, when we began a personal relationship together, under instruction, we had not needed to know whether we could communicate. We could understand each other's words, to be sure. I spoke her second language, English, and she spoke my third language, Afrikaans, and we both could adequately express ourselves in these languages. Nonetheless, we soon came to realise that there was a great gulf between us when it came to articulation. This was not a personal gulf. It was a cultural and historical gulf.
Ester and I persevered with an arranged relationship, which gradually grew in warmth. In time, we travelled together to her childhood home. After a long journey by car, we reached a plateau. We drove through a farmyard, and pulled to a halt. A wiry, bearded man came down a hillside. Ester kissed him on the lips. He briefly took my hand, then dropped it. He didn't speak to me. He didn't look at me.
Ester wiped away tears. She said, “Where are the potatoes?” The man said, “There are two sacks of potatoes in the shed. But one of them is rotten.” They exchanged a few more words about potatoes, then the man walked back up the hillside. “Who was that?” I asked. “It was my father,” said Ester.
Her father? Then why didn't he speak to me? Why didn't he look at me? And what happened to a daughter's customary endearments? “Good to see you, Dad. Love you, Dad.” The talk was entirely about potatoes.
This event stands out for me above all in my growing relationship with Ester. It epitomises one of the fundamental characteristics of Africa, which at first distressed me, then gradually began to open up a new world for me. It was the problem – to me, at least – of a lack of verbal articulation.
Imagine a world, loosely speaking, without articulation: without endearments, without analyses, without strategies – often enough, without arguing or theorising or philosophical views. Ester, one day, seemed to put it in a nutshell when she said to me, with apparent surprise: “Your people fight over words! We don't have that.” This by no means indicates a lack of sophistication in African thought. I have discovered brilliance of intellect, and great emotional sensitivity. However, it was far from what I had ever known.
Being habituated in my European ways, at first I could see no remedy for the relative absence of thought and emotion, as I had ever known it. Yet the answer revealed itself to me slowly. I realised that Ester spoke volumes with her face and with her bodily movements. It seemed clear to me that if I could decipher this, I would know a new language – but then, I despaired of ever learning the code. It would surely take me forever.
I found, however, that I was able to learn it faster than I had thought possible. And as I learnt to interpret Ester, I discovered that I was able to interpret her clan, and her people. Everywhere I went, a new world seemed to open up to me: on the streets, in the shops, and in homes.
It is only through centuries of practice, by very small degrees, that rational and emotional articulation has become widespread in European culture. The thinking which existed before this is referred to as “pre-philosophical” – where “pre” need not refer to a prior moment in time, but to a human condition.
We forget where we have come from, in the European tradition. The premium we now place on articulation did not always exist. The pre-philosophical mindset broadly retreated only with the advent of the so-called Age of Reason.
This having been said, we may now be coming full circle – passing beyond the more narrow kind of articulation which Polanyi described. Articulation, today, may often be understood to include action. One now speaks of articulation, writes Yu Zhenhua, as “ability, capacity, competence and faculty in knowing and action”.
This raises the question as to whether the “articulate” person in the common usage, who relies on the mere formulation of thought (feeling aside), might thereby impoverish their thinking – if not their being. In fact it is formulated knowledge which makes it possible for us to dispose of face-to-face communications and social convocations, so disembodying our human interactions.
I finally came to see that Ester's thinking had everything to do with the thinking of a continent – speaking very broadly indeed. African philosophy, rather than treating philosophy as formulated knowledge, tends to think of it in terms of a body of thought, emotion, and action, all mysteriously and holistically intertwined.
Dances, prayers, and feasting, maxims and storytelling, music and rhythm, signs and symbols, and so much more – the silences, too – all combine to form what Africa calls, in its mature form, sagacity. It is controversially called ethnophilosophy: in short, a philosophy which cannot be articulated in terms familiar to the European tradition.
“Knowledge and language are woven together in an indissoluble bond. The requirement that knowledge should have a linguistic articulation becomes an unconditional demand. The possibility of possessing knowledge that cannot be wholly articulated by linguistic means emerges, against such a background, as completely unintelligible” – Kjell S. Johannessen.
Elias, M. Teaching Emotional Literacy. Edutopia.
Imbo, S.O. An Introduction to African Philosophy. Rowman & Littlefield.
Johannessen, K.S. Rule Following, Intransitive Understanding, and Tacit Knowledge. Norwegian University Press.
Pettit, P. Practical Belief and Philosophical Theory. Australian National University.
Polanyi, M. The Study of Man. University of Chicago Press.
Zhenhua, Y. Tacit Knowledge/Knowing and the Problem of Articulation. Polanyi Society.
Mirjam Rahel Scarborough (1957-2011) was a Swiss "farm girl", born in Canton Zug. She was a doctor of philosophy, a co-director of the World Evangelical Alliance's International Institute for Religious Freedom, executive editor of the International Journal for Religious Freedom, and an ordained minister.
26 May 2015
How Google and the NSA are creating a Jealous God
Posted by Pierre-Alain (Perig) Gouanvic
Before PRISM was ever dreamed of, under orders from the Bush White House the NSA was already aiming to “collect it all, sniff it all, know it all, process it all, exploit it all.” During the same period, Google—whose publicly declared corporate mission is to collect and “organize the world’s information and make it universally accessible and useful”—was accepting NSA money to the tune of $2 million to provide the agency with search tools for its rapidly accreting hoard of stolen knowledge.
-- Julian Assange, Google Is Not What It Seems
Who is going to process the unthinkable amount of data that's being collected by the NSA and its allies? For now, it seems that the volume of stored data is so enormous that it borders on the absurd.
We know that if someone in the NSA puts a person on notice, his or her record will be retrieved and future actions will be closely monitored (CITIZENFOUR). But who is going to decide who is on notice?
And persons are only significant "threats" if they are related to other persons, to groups, to ideas.
Google, which has enjoyed close proximity to power over the last decade, has now decided to differentiate Good ideas from Bad. Or, in the terms of the New Scientist, truthful content and garbage.
The internet is stuffed with garbage. Anti-vaccination websites make the front page of Google, and fact-free "news" stories spread like wildfire. Google has devised a fix – rank websites according to their truthfulness.
Of course, it is not because vaccine manufacturers are shielded from liability by the US vaccine court that they are necessarily doing the things that anti-vaccine fanatics say. Italian courts don't judge vaccines the same way as US courts do, but well, that's why we need a more truthful Google, isn't it?

Google's search engine currently uses the number of incoming links to a web page as a proxy for quality, determining where it appears in search results. So pages that many other sites link to are ranked higher. This system has brought us the search engine as we know it today, but the downside is that websites full of misinformation can rise up the rankings, if enough people link to them.
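The link-counting idea described above can be sketched as a toy PageRank-style computation. This is a simplified illustration of the general technique, not Google's actual algorithm; the link graph, page names, and parameter values are all hypothetical:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy link-based ranking. links maps each page to the pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform rank
    for _ in range(iterations):
        # every page keeps a small base rank, plus shares from its in-links
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, targets in links.items():
            if targets:
                share = damping * rank[page] / len(targets)
                for t in targets:
                    new_rank[t] += share
        rank = new_rank
    return rank

# A page that many others link to rises to the top,
# regardless of whether its content is accurate.
graph = {
    "misinformation-hub": [],
    "blog-a": ["misinformation-hub"],
    "blog-b": ["misinformation-hub"],
    "blog-c": ["misinformation-hub", "blog-a"],
}
ranks = pagerank(graph)
```

Running this, "misinformation-hub" ends up with the highest rank of the four pages purely because three other pages point at it, which is exactly the weakness the Knowledge-Based Trust proposal claims to address.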
Google will determine what's true using the Knowledge-Based Trust, which in turn will rely on sites "such as Snopes, PolitiFact and FactCheck.org, [...] websites [who] exist and profit directly from debunking anything and everything [and] have been previously exposed as highly partisan."
Wikipedia will also be part of the adventure.
What is needed by the intelligence community is an understanding of the constellation of threats to power, and those threats might not be the very useful terrorists of 9/11. What is more problematic is those who can lead masses of people to doubt that 19 novice pilots, alone and undisturbed, could fly planes into the World Trade Center on 9/11, or influential people like Robert F. Kennedy, who liken the USA's vaccine program to mass child abuse.
These ideas, and so many other 'garbage' ideas, are the soil on which organized resistance grows. This aggregate of ideas constitutes a powerful, coherent, attractive frame of reference for large, ever expanding, sections of society.
And this is why Google is such an asset to the NSA (and conversely). Google is in charge of arming the NSA with Truth, which, conjoined with power, will create an all-knowing, all-seeing computer-being. Adding private communications to public webpages, Google will identify what's more crucial to 'debunk'. Adding public webpages to private communications, the NSA will be able to connect the personal to the collective.
And this, obviously, will only be possible through artificial intelligence.
Hassabis and his team [of Google's artificial intelligence program, DeepMind] are creating opportunities to apply AI to Google services. The AI firm is about teaching computers to think like humans, and improved AI could help forge breakthroughs in loads of Google's services [such as truth delivery?]. It could enhance YouTube recommendations for users, for example [...].
But it's not just Google product updates that DeepMind's cofounders are thinking about. Worryingly, cofounder Shane Legg thinks the team's advances could be what finishes off the human race. He told the LessWrong blog in an interview: 'Eventually, I think human extinction will probably occur, and technology will likely play a part in this.' He adds that he thinks artificial intelligence is the 'No. 1 risk for this century'. It's ominous stuff.
May it help us.
18 May 2015
A new role for wikis? How collaborative spaces could revive the wiki ethos.
Why social media, after contributing to the decline of Wikipedia, might need new forms of wikis
By Pierre-Alain (Perig) Gouanvic
A little commented upon, but very significant, milestone was passed somewhere in the middle of the noughties (2000-2009), when the number of new accounts being born at Wikipedia fell below the number of those marked as "deceased" — that is, accounts of Wikipedians who had grown angry, bored, or otherwise uninterested in the project. This invisible trend coincides with the rise of social networks and blogs. All other attempts to recreate online collaborative encyclopedias have failed; Citizendium is a legendary failure, to be remembered in the textbooks.
Blogs, as well as, later and to a greater extent, social networks, made possible the encounter of more-or-less like-minded people, from low-life bullies to high ranking academics (both qualifications not being mutually exclusive). What happens on a social network, and increasingly in all digital versions of newspapers and many websites, is that commenting and arguing have found a new home, after it has become obvious that the number one search engine result for most things, Wikipedia, has become policed and sterilized by rules and oligarchies. Most websites have their comments sections, often in communication with one of the two major social networks.
In the process, the Web has matured into its predicted second phase, Web 2.0, wherein spectators have become actors.
Not to say that they are autonomous, free, and emancipated. Rather, readers are put to work creating content that they will then consume and comment on: everybody contributes naturally (and without pay) through web-visit metrics, content sharing, and "citizen journalism" (proletarian journalism, in fact). The most popular public intellectuals are constantly fed news and insights by their readers, the better to sense what will most please their audience.
Now on to Web 3.0 – or, to avoid pretentious terminology, simply what's next. The next stage is perhaps closer than we think. Perhaps it's already there and, far from being this "semantic web" so many technicians dream about, perhaps it is, after the transformation of people by the Web and the transformation of the Web by people, the webbing of people. Of people who have been exposed to the Web and have lived the pain of becoming a drop in the furious ocean of information. What Theodore Sturgeon called "bleshing".
Jean-Jacques Rousseau warned the Encyclopedists of the precise problems that later rampaged freely through Wikipedia (and Citizendium, and other knowledge oligarchies): specialization and a thirst for power, in a place like an Encyclopedia, would make "Men" even more evil than if they were ignorant. Diderot's answer, back then, was that what would prevent his Encyclopedia from such a catastrophe was friendship.
(Of course, he was not speaking of the alienated, private and exclusive version of friendship that has become the norm nowadays. Perhaps "solidarity" is a closer modern equivalent. But this word is mainly used to describe friendships or alliances in the context of repression or other hardships. It is a close parent of (another buzzword) "resistance".)
Diderot's view of friendship was dignified: exalting knowledge and passion, art and ethics, universality and practicality. One can witness instances of this friendship in social networks, where there is no requirement to create a well-regulated and policed collaborative encyclopedic page.
But what is lost, in those spontaneous spaces of shared understanding (of each other and of some stuff) and shared ignorance, is permanence. Wikis offer permanence and, at the same time, the type of flexibility that is required for a number of different persons to coexist and blesh.
However, such powerful structures have one drawback that stems precisely from their strength: these micro-Webs evolve in relative autonomy from the larger Web, and especially from the more labile social networks.
Perhaps collaborative spaces with an ethos like Diderot's are needed by "socially networked" people for them to really achieve what seems to be a constantly disappointed hope. Perhaps PI is one of them.
17 May 2015
10 May 2015
What is a philosophical problem? The irrefutable metahypothesis
By Matthew Blakeway
If we ban speculation about metahypotheses, does philosophical debate simply evaporate?
Karl Popper explained how scientific knowledge grows in his book Conjectures and Refutations. A conjecture is a guess as to an explanation of a phenomenon. And an experiment is an attempt to refute a conjecture. Experiments can never prove a conjecture correct, but if successive experiments fail to refute it, then gradually it becomes accepted by scientists that the conjecture is the best available explanation. It is then a scientific theory. Scientists don’t like the word “conjecture” because it implies that it is merely a guess. They prefer the word “hypothesis”. Popper’s rule is that, for a hypothesis to be considered scientific, it must be empirically falsifiable.
When scientists consider a phenomenon that is truly mystifying, it seems reasonable to ask “what might a hypothesis for this look like?” At this point, scientists are hypothesising about hypotheses. Metahypothetical thinking is the first step in any scientific journey. When this produces no results, frustration gets the upper hand and they pursue the following line of reasoning: “the phenomenon is an effect, and must have a cause. But since we don’t know what that cause is, let’s give it a name ‘X’ and then speculate about its properties.” A metahypothesis is now presumed to be 'A Thing', rather than merely an idea about an idea.
The problem is the irrefutability of its existence.
X is a metahypothetical idea, and until we have a hypothesis, we don't actually know what we are supposed to be refuting. Popper would say that it isn't scientific, yet it sprang from a scientific speculation. There is a false impression of truth that actually derives from the misrepresentation of an axiom. “X is a thing” actually means “'X' is a name we have given to an idea when we don't even know what the idea represents”, and the confusion between idea and thing is born. A false logical conclusion arises, not from truth, but because incoherent statements are irrefutable by their nature.
We can trace this through the history of philosophy. Most of it can be reduced to the following two questions:
• “What is X?” and
• “Does X exist?”
- where “X” is a metahypothetical idea that sprang from a scientist speculating about a cause of an unexplained phenomenon. The “X” could represent: God, evil, freewill, the soul, knowledge, etc. Each of these is a metahypothesis that originated with a scientist seeking to explain respectively: the existence of the universe, destructive actions by humans, seemingly random actions by humans, human actions that no one else can understand, human understanding.
The question “what is knowledge?” led to thousands of years of debate that ended when everybody lost interest in it. And I'm sure that the questions “what is freewill?” and “do humans have it?” are currently going through their death throes – again after a thousand years of debate. Or take the statement: “Evil people perform evil actions because they are evil.” If you are reading this blog, you will recognise that as so incoherent that it is barely a sentence, yet the individual components of it frequently pass as explanation for human actions that we don’t like. The idea of “evil” being some sort of thing is irrefutable despite being meaningless. What is there here to refute?
The sheer persistence of any proposition concerning a metahypothesis represented as 'A Thing' is illustrated by a real debate recently. The British actor, Stephen Fry, gave an interview with Irish television in which he argued that if God exists, then he is a maniacal bastard. [To paraphrase!]
Giles Fraser, a Christian, responded with an article “I don’t believe in the God that Stephen Fry doesn’t believe in either.”
I expect that you are positively itching to take a side in this debate. But resist the urge! Instead imagine that you are a Martian gazing down at the tragic poverty of the debates of Earth people. Fry is taking a literal interpretation of God and thereby is converting a metahypothesis into a hypothesis, but he is doing this purely with the intention of refuting it. Deliberately establishing a false hypothesis is a good debating tactic, but a dishonest one.
Fraser responds by taking the literal interpretation and passing it back into the metahypothetical – an equally dishonest tactic of making a debate unwinnable by undefining its terms. It’s like stopping the other team winning at football by hiding the ball. The effect of debates like this is to create an equilibrium stasis where the word “God” is suspended between meaning and incoherence. If it is given a robust definition, it becomes a hypothesis and is empirically refutable. And since its origins were in our inability to explain phenomena (the origin of the universe, life, etc.) for which we now have decent scientific explanations then it is pretty certain that it will indeed be refuted. But if the idea is completely incoherent, then it isn’t possible to talk about it at all. So the word exists – fluidly semi-defined – in the mid-zone between these two states. The concept “God” is an idea about an idea about a cause of unexplained phenomena. It is therefore itself unexplainable.
We can examine the birth of a metahypothesis in real time. Richard Dawkins asked in The Selfish Gene what caused cultural elements to replicate. He speculated that it needed a replicator like a gene:
An effect needs a cause. And since we don’t know what that cause is, let us give it a name and then speculate as to what its properties must be. It is beyond funny that the world’s most famous atheist is here caught employing the same method of reasoning that gave birth to the idea of “God”. We will now debate for a thousand years whether memes exist or not. However, the idea is incoherent despite sounding convincingly sciencey. The idea of the “soul” sounded pretty sciencey in Aristotle’s day. Dawkins speculates that the idea of God is a meme, but he fails to notice that the idea of a meme is a meme, and therefore he is trying to lift himself off the floor by his bootstraps.
So... if we ban speculation about metahypotheses, does philosophical debate simply evaporate? Maybe! But it would probably also stop scientific progress in its tracks. If you are in the mood for a brain spin, you might consider whether the idea of a “metahypothesis” is itself a metahypothesis.
Taking this further, if we cannot hypothesise about hypotheses, then does science evaporate too?
If we ban speculation about metahypotheses, does philosophical debate simply evaporate?
Karl Popper explained how scientific knowledge grows in his book Conjectures and Refutations. A conjecture is a guess as to an explanation of a phenomenon. And an experiment is an attempt to refute a conjecture. Experiments can never prove a conjecture correct, but if successive experiments fail to refute it, then gradually it becomes accepted by scientists that the conjecture is the best available explanation. It is then a scientific theory. Scientists don’t like the word “conjecture” because it implies that it is merely a guess. They prefer the word “hypothesis”. Popper’s rule is that, for a hypothesis to be considered scientific, it must be empirically falsifiable.
When scientists consider a phenomenon that is truly mystifying, it seems reasonable to ask “what might a hypothesis for this look like?” At this point, scientists are hypothesising about hypotheses. Metahypothetical thinking is the first step in any scientific journey. When this produces no results, frustration gets the upper hand and they pursue the following line of reasoning: “the phenomenon is an effect, and must have a cause. But since we don’t know what that cause is, let’s give it a name ‘X’ and then speculate about its properties.” A metahypothesis is now presumed to be 'A Thing', rather than merely an idea about an idea.
The problem is the irrefutability of its existence.
X is a metahypothetical idea, and until we have a hypothesis, we don’t actually know what we are supposed to be refuting. Popper would say that it wasn’t scientific, yet it sprang from a scientific speculation. There is a false impression of truth that actually derives from a misrepresentation of axiom. “X is a thing” actually means “’X’ is a name we have given to an idea where we don’t even know what the idea represents” and the confusion between idea and thing is born. A false logical conclusion arises, not from truth, but because incoherent statements are irrefutable by their nature.
We can trace this through the history of philosophy. Most of it can be reduced to the following two questions:
• “What is X?” and
• “Does X exist?”
- where “X” is a metahypothetical idea that sprang from a scientist speculating about a cause of an unexplained phenomenon. The “X” could represent: God, evil, freewill, the soul, knowledge, etc. Each of these is a metahypothesis that originated with a scientist seeking to explain respectively: the existence of the universe, destructive actions by humans, seemingly random actions by humans, human actions that no one else can understand, human understanding.
The question “what is knowledge?” led to thousands of years of debate that ended when everybody lost interest in it. And I'm sure that the questions “what is freewill?” and “do humans have it?” are currently going through their death throes – again after a thousand years of debate. Or take the statement: “Evil people perform evil actions because they are evil.” If you are reading this blog, you will recognise that as so incoherent that it is barely a sentence, yet the individual components of it frequently pass as explanation for human actions that we don’t like. The idea of “evil” being some sort of thing is irrefutable despite being meaningless. What is there here to refute?
The sheer persistence of any proposition concerning a metahypothesis represented as 'A Thing' is illustrated by a recent real-world debate. The British actor Stephen Fry gave an interview to Irish television in which he argued that if God exists, then he is a maniacal bastard. [To paraphrase!]
Yes, the world is very splendid but it also has in it insects whose whole lifecycle is to burrow into the eyes of children and make them blind. They eat outwards from the eyes. Why? Why did you do that to us? You could easily have made a creation in which that didn’t exist.
Giles Fraser, a Christian, responded with an article “I don’t believe in the God that Stephen Fry doesn’t believe in either.”
If we are imagining a God whose only power, indeed whose only existence, is love itself – and yes, this means we will have to think metaphorically about a lot of the Bible – then God cannot stand accused as the cause of humanity’s suffering.
I expect that you are positively itching to take a side in this debate. But resist the urge! Instead imagine that you are a Martian gazing down at the tragic poverty of the debates of Earth people. Fry takes a literal interpretation of God and thereby converts a metahypothesis into a hypothesis, but he does this purely with the intention of refuting it. Deliberately establishing a hypothesis in order to knock it down is a good debating tactic, but a dishonest one.
Fraser responds by taking the literal interpretation and passing it back into the metahypothetical – an equally dishonest tactic of making a debate unwinnable by undefining its terms. It’s like stopping the other team from winning at football by hiding the ball. The effect of debates like this is to create a stasis in which the word “God” is suspended between meaning and incoherence. If it is given a robust definition, it becomes a hypothesis and is empirically refutable. And since its origins lie in our inability to explain phenomena (the origin of the universe, life, etc.) for which we now have decent scientific explanations, it is pretty certain that it would indeed be refuted. But if the idea is completely incoherent, then it isn’t possible to talk about it at all. So the word exists – fluidly semi-defined – in the mid-zone between these two states. The concept “God” is an idea about an idea about a cause of unexplained phenomena. It is therefore itself unexplainable.
We can examine the birth of a metahypothesis in real time. Richard Dawkins asked in The Selfish Gene what caused cultural elements to replicate. He speculated that it needed a replicator like a gene:
But do we have to go to distant worlds to find other kinds of replicator and other, consequent, kinds of evolution? I think that a new kind of replicator has recently emerged on this very planet. It is staring us in the face. It is still in its infancy, still drifting clumsily about in its primeval soup, but already it is achieving evolutionary change at a rate that leaves the old gene panting far behind.
The new soup is the soup of human culture. We need a name for the new replicator, a noun that conveys the idea of a unit of cultural transmission, or a unit of imitation. ‘Mimeme’ comes from a suitable Greek root, but I want a monosyllable that sounds a bit like ‘gene’. I hope my classicist friends will forgive me if I abbreviate mimeme to meme.
An effect needs a cause. And since we don’t know what that cause is, let us give it a name and then speculate as to what its properties must be. It is beyond funny that the world’s most famous atheist is here caught employing the same method of reasoning that gave birth to the idea of “God”. We will now debate for a thousand years whether memes exist or not. However, the idea is incoherent despite sounding convincingly sciencey. The idea of the “soul” sounded pretty sciencey in Aristotle’s day. Dawkins speculates that the idea of God is a meme, but he fails to notice that the idea of a meme is a meme, and therefore he is trying to lift himself off the floor by his bootstraps.
So... if we ban speculation about metahypotheses, does philosophical debate simply evaporate? Maybe! But it would probably also stop scientific progress in its tracks. If you are in the mood for a brain spin, you might consider whether the idea of a “metahypothesis” is itself a metahypothesis.
Taking this further, if we cannot hypothesise about hypotheses, then does science evaporate too?
04 May 2015
Poetry to Refute Dawkinsism
A special poem by Chengde Chen to launch the new blog
How to Refute Dawkins’ Atheism
Dear Professor Dawkins,
Yes, your bestseller, The God Delusion, is bought by millions;
more so your TV debates taking on archbishops, hotly YouTubed.
“No belief without evidence”, your atheist crusade is convincing,
like sounding the new death-knell of religion, with web power.
When the believers defend faith with Scripture,
you dare them to “walk on water” or “turn water into wine”.
When they count the moral good religion brings,
you attribute enough wars and scandals to the Church.
When they’re lost for words, or deeds, and God is laughed at,
you harvest applause, like the invincible spokesman of reason.
However, let me ask you a hypothetical question:
“If you knew it was the case that, without the fear of God,
human society would collapse, would you still reject religion?”
If you say “yes”, surely you would see how irrational you were –
worse than cutting off a man’s head to treat his headache.
A rational person, as you firmly claim to be, has to say “no” –
doesn’t this mean faith could be justified without evidence?
Reason has two functions: seeking truth and weighing expediency;
if we can’t tell if it’ll rain, we’ll carry an umbrella as a precaution.
Since “God’s existence” can neither be proved, nor disproved,
it’s reasonable for man to discipline himself with the imagination,
which wasn’t a “delusion” that happened to occur in all cultures,
but a spiritual organ driven by the evolutionary need to coexist.
Without the simple idea of the-Almighty-for-good-and-against-evil,
what could have turned a race of jungle animal into a moral being?
True or not, the great invention of man’s “second heart”
deserves Nobel returning to history to award his best prize!
Yours sincerely,
An agnostic-who-explains-religion-with-evolution
Readers can find out more about Chengde and his poems here
27 April 2015
Flat Earthers - exploring human nature
By Tessa den Uyl
Can drawings help to answer a philosophical question? I believe that they can. It was as a result of working on a project to create an animated film about the processes of the imagination that I came to the idea behind these images. And so the drawings here (part of a longer series) are a kind of path that I followed in a bid 'to solve' a particular philosophical question.
'Flat Earth' was conceived as a kind of platform to display aspects of imagination, modesty, and alertness envisioned within a character who inquires into himself about how language games determine his ways of thinking.
This central character tries to understand the kind of landscape in which he sees his habits. Whatever he produces materially within that created world is not merely the reflected image of the creation that he imagines; instead, what he perceives is a privileged space, where an image becomes an epiphany, and it is in that space that he can develop his imagination.
Imagination is an activity, it is never passive, it is never negative. Instead, it is active within the limitations that the thinker - and the central character in my imaginary world - assigns to it. That is why the character reveals himself, in the images here, as he really is: defined in relation to the biases of his own worldview, his own philosophy of knowledge.
Imagination is reaching out towards him and he cannot help but grow inside of it. This is the temptation of imagination; he cannot refuse to grow up and enter into a deeper relationship with the world.
On the other hand, even if the character is willing to “grow up” it doesn’t necessarily mean that he is capable of doing so. Instead, what he wants to see, what he has learned to see, excludes what he can actually see. His knowledge doesn’t describe the world, but only tends to ascribe to things its own relations.
So the human being on Flat Earth recognises that he has nothing but relations; that imagination is about making relations between things, and this means that he will always have to deal with language and context. The Flat Earth is that space in which the character tries to “un-culture” himself. In the process, he has to face how he perceives, for it is too easy to be carried along paths of semantic distortion and to give something a false value inadvertently, in the process of trying to transform the values we have created into ultimate truths. The character in my imaginary world does not want to postulate a world, to impose a particular view, but tries instead to enhance the possibility of many different ones.