Showing posts with label Karl Popper. Show all posts

08 November 2020

The Certainty of Uncertainty


Posted by Keith Tidman
 

We favour certainty over uncertainty. That’s understandable. Our subscribing to certainty reassures us that perhaps we do indeed live in a world of absolute truths, and that all we have to do is stay the course in our quest to stitch the pieces of objective reality together.

 

We imagine the pursuit of truths as comprising a lengthening string of eureka moments, as we put a check mark next to each section in our tapestry of reality. But might that reassurance about absolute truths prove illusory? Might it be, instead, ‘uncertainty’ that wins the tussle?

 

Uncertainty taunts us. The pursuit of certainty, on the other hand, gets us closer and closer to reality, that is, closer to believing that there’s actually an external world. But absolute reality remains tantalisingly just beyond our fingertips, perhaps forever.

 

And yet it is uncertainty, not certainty, that incites us to continue conducting the intellectual searches that inform us and our behaviours, even if imperfectly, as we seek a fuller understanding of the world. Even if the reality we think we have glimpsed is one characterised by enough ambiguity to keep surprising and sobering us.

 

The real danger lies in an overly hasty, blinkered turn to certainty. This trust stems from a cognitive bias — the one that causes us to overvalue our knowledge and aptitudes. Psychologists call it the Dunning-Kruger effect.

 

What’s that about then? Well, this effect precludes us from spotting the fallacies in what we think we know, and discerning problems with the conclusions, decisions, predictions, and policies growing out of these presumptions. We fail to recognise our limitations in deconstructing and judging the truth of the narratives we have created, limits that additional research and critical scrutiny so often unmask. 

 

The Achilles’ heel of certainty is our habitual resort to inductive reasoning. Induction occurs when we conclude from many observations that something is universally true: that the past will predict the future. Or, as the Scottish philosopher, David Hume, put it in the eighteenth century, our inferring ‘that instances of which we have had no experience resemble those of which we have had experience’. 

 

A much-cited example of such reasoning consists of someone concluding that, because they have only ever observed white swans, all swans are therefore white — shifting from the specific to the general. Indeed, Aristotle used the white swan as an example of a logically necessary relationship. Yet, someone spotting just one black swan disproves the generalisation.

 

Bertrand Russell once set out the issue in this colourful way:

 

‘Domestic animals expect food when they see the person who usually feeds them. We know that all these rather crude expectations of uniformity are liable to be misleading. The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to uniformity of nature would have been useful to the chicken’.

 

The person’s theory that all swans are white — or the chicken’s theory that the man will continue to feed it — can be falsified, which sits at the core of the ‘falsification’ principle developed by the philosopher of science Karl Popper. The heart of this principle is that in science a hypothesis, theory, or proposition must be falsifiable, that is, capable in principle of being shown wrong; in other words, testable against evidence. For Popper, a claim that is untestable is not scientific.

 

However, a testable hypothesis that is proven through experience to be wrong (falsified) can be revised, or perhaps discarded and replaced by a wholly new proposition or paradigm. This happens in science all the time, of course. But here’s the rub: humanity can’t let uncertainty paralyse progress. As Russell also said: 

 

‘One ought to be able to act vigorously in spite of the doubt. . . . One has in practical life to act upon probabilities’.

 

So, in practice, whether implicitly or explicitly, we accept uncertainty as a condition in all fields — throughout the humanities, social sciences, formal sciences, and natural sciences — especially if we judge the prevailing uncertainty to be tiny enough to live with. Here’s a concrete example, from science.

 

In the 1960s, the British theoretical physicist Peter Higgs mathematically predicted the existence of a specific subatomic particle, the last missing piece in the Standard Model of particle physics. But no one had yet seen it, so the elusive particle remained a hypothesis. Only several decades later, in 2012, did CERN’s Large Hadron Collider reveal the particle, whose field is claimed to have the effect of giving all other particles their mass. (The discovery earned Higgs and François Englert the Nobel Prize in Physics.)

 

The CERN scientists’ announcement said that their confirmation bore ‘five-sigma’ certainty. That is, there was only 1 chance in 3.5 million that what was sighted was a fluke, or something other than the then-named Higgs boson. A level of certainty (or of uncertainty, if you will) that physicists could very comfortably live with. Though as Kyle Cranmer, one of the scientists on the team that discovered the particle, appropriately stresses, there remains an element of uncertainty: 

 

“People want to hear declarative statements, like ‘The probability that there’s a Higgs is 99.9 percent,’ but the real statement has an ‘if’ in there. There’s a conditional. There’s no way to remove the conditional.”
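For readers curious how ‘five-sigma’ maps to that one-in-3.5-million figure, it is simply the one-tailed probability of a fluctuation at least five standard deviations above the mean of a normal distribution. A minimal sketch in Python (the function and variable names here are mine, purely illustrative):

```python
import math

def sigma_to_probability(n_sigma: float) -> float:
    """One-tailed probability of a fluctuation of at least n_sigma
    standard deviations under a standard normal distribution."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

p = sigma_to_probability(5)
print(f"five-sigma fluke probability: {p:.3e}")  # roughly 2.87e-07
print(f"about 1 chance in {1 / p:,.0f}")         # roughly 1 in 3.5 million
```

The result, about 2.9 × 10⁻⁷, is where the ‘1 chance in 3.5 million’ comes from — and, as Cranmer notes below, even this number is conditional on the statistical model being right.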

 

Of course, in few instances in everyday life do we have to calculate the probability of reality. But we might, through either reasoning or subconscious means, come to conclusions about the likelihood that what we choose to act on is right, or safely right enough. The stakes of being wrong matter — sometimes a little, other times consequentially. Peter Higgs got it right; Bertrand Russell’s chicken got it wrong.

  

The takeaway from all this is that we cannot know things with absolute epistemic certainty. Theories are provisional. Scepticism is essential. Even wrong theories kindle progress. The so-called ‘theory of everything’ will remain elusive. Yet we’re aware that we know some things with greater certainty than others. We use that awareness to advantage, informing theory, understanding, and policy, ranging from the esoteric to the everyday.

 

19 January 2020

Environmental Ethics and Climate Change

Posted by Keith Tidman

The signals of a degrading environment are many and on an existential scale, imperilling the world’s ecosystems. Rising surface temperature. Warming oceans. Shrinking Greenland and Antarctic ice sheets. Glacial retreat. Decreased snow cover. Sea-level rise. Declining Arctic sea ice. Increased atmospheric water vapour. Permafrost thawing. Ocean acidification. And not least, supercharged weather events (more frequent, longer lasting, more intense).

Proxy (indirect) measurements — ice cores, tree rings, corals, ocean sediment — of carbon dioxide, a heat-trapping gas that plays an important role in creating the greenhouse effect on Earth, have spiked dramatically since the beginning of the Industrial Revolution. The measurements underscore that the recent increase far exceeds the natural ups and downs of the previous several hundred thousand years. Human activity — use of fossil fuels to generate energy and run industry, deforestation, cement production, land use changes, modes of travel, and much more — continues to be the accelerant.

The United Nations’ Intergovernmental Panel on Climate Change, whose reports draw on some 1,300 independent scientists and other researchers from more than 190 countries, has reported that concentrations of carbon dioxide, methane, and nitrous oxide ‘have increased to levels unprecedented in at least 800,000 years’. The level of certainty that human activity is the leading cause (referred to as anthropogenic causation) has been placed at more than 95 percent.

That probability figure has legs, in terms of scientific method. Early logical positivists like A.J. Ayer had asserted that for validity, a scientific proposition must be capable of proof — that is, ‘verification’. Later, however, Karl Popper, in his The Logic of Scientific Discovery, argued that in the case of verification, no number of observations can be conclusive. As Popper said, no matter how many instances of white swans we may have observed, this does not justify the conclusion that all swans are white. (Lo and behold, a black swan shows up.) Instead, Popper said, the scientific test must be whether in principle the proposition can be disproved — referred to as ‘falsification’. Perhaps, then, the appropriate test is not the ability to prove that mankind has affected the Earth’s climate; rather, it’s incumbent upon challengers to disprove (falsify) such claims. That hasn’t happened, and likely never will.

As for the ethics of human intervention into the environment, utilitarianism is the usual measure. That is to say, the consequences of human activity upon the environment govern the ethical judgments one makes of behavioural outcomes to nature. However, we must be cautious not to translate consequences solely in terms of benefits or disadvantages to humankind’s welfare; our welfare appropriately matters, of course, but not to the exclusion of all else in our environment. A bias to which we have repeatedly succumbed.

The danger of such skewed calculations may be in sliding into what the philosopher Peter Singer calls ‘speciesism’. This is where, hierarchically, we place the worth of humans above all else in nature, as if the latter is solely at our beck and call. This anthropocentric favouring of ourselves is, I suggest, arbitrary and too narrow. The bias is also arguably misguided, especially if it disregards other species — depriving them of autonomy and inherent rights — irrespective of the sophistication of their consciousness. To this point, the 18th/19th-century utilitarian Jeremy Bentham famously asked of animals not whether they can reason or talk, but ‘Can they suffer?’; if they can, they deserve moral consideration.

Assuredly, human beings are endowed with cognition that’s in many ways vastly more sophisticated than that of other species. Yet, without lapsing into speciesism, there seem to be distinct limits to the comparison, to avoid committing what’s referred to as a ‘category mistake’ — in this instance, assigning qualities to species (from orangutans and porpoises to snails and amoebas) that belong only to humans. In other words, an overwrought egalitarianism. Importantly, however, that’s not the be-all of the issue. Our planet is teeming not just with life, but with other features — from mountains to oceans to rainforest — that are arguably more than mere accoutrements for simply enriching our existence. Such features have ‘intrinsic’ or inherent value — that is, they have independent value, apart from the utilitarianism of satisfying our needs and wants.

For perspective, perhaps it would be better to regard humans as nodes in what we consider a complex ‘bionet’. We are integral to nature; nature is integral to us; in their entirety, the two are indissoluble. Hence, while skirting implications of panpsychism — where everything material is thought to have at least an element of consciousness — there should be prima facie respect for all creation: from animate to inanimate. These elements have more than just the ‘instrumental’ value of satisfying the purposes of humans; all of nature is itself intrinsically the ends, not merely the means. Considerations of aesthetics, culture, and science, though important and necessary, aren’t sufficient.

As such, there is an intrinsic moral imperative not only to preserve Earth, but for it and us jointly to flourish — per Aristotle’s notion of ‘virtue’, with respect and care, including for the natural world. It’s a holistic view that concedes, on both the utilitarian and intrinsic sides of the moral equation, mutually serving roles. This position accordingly pushes back against the hubristic idea that human-centricism makes sense if the rest of nature collectively amounts only to a backstage for our purposes. That is, a backstage that provides us with a handy venue where we act out our roles, whose circumstances we try to manage (sometimes ham-fistedly) for self-satisfying purposes, where we tinker ostensibly to improve, and whose worth (virtue) we believe we’re in a position to judge rationally and bias-free.

It’s worth reflecting on a thought experiment, dubbed ‘the last man’, that the Australian philosopher Richard Routley introduced in the 1970s. He envisioned a single person surviving ‘the collapse of the world system’, choosing to go about eliminating ‘every living thing, animal and plant’, knowing that there’s no other person alive to be affected. Routley concluded that ‘one does not have to be committed to esoteric values to regard Mr. Last Man as behaving badly’. Whether Last Man was, or wasn’t, behaving unethically goes to the heart of intrinsic versus utilitarian values regarding nature —and presumptions about human supremacy in that larger calculus.

Groups like the UN Intergovernmental Panel on Climate Change have laid down markers as to tipping points beyond which extreme weather events might lead to disastrously runaway effects on the environment and humanity. Instincts related to the ‘tragedy of the commons’ — where people rapaciously consume natural resources and pollute, disregarding the good of humanity at large — have not yet been surmounted. That some other person, or other community, or other country will shoulder accountability for turning back the wave of environmental destruction and the upward-spiking curve of climate extremes has hampered the adequacy of attempted progress. Nature has thrown down the gauntlet. Will humanity pick it up in time?

26 May 2019

Is Popper a ‘modest’ Leo?


Posted by Martin Cohen

A few years ago, the astrologer-aesthete Mark Shulgasser asked us this revealing question about one of the 20th century’s most under-rated philosophers. Popper, we should first recall, is admired for at least two big ideas: first, that science proceeds by testing hypotheses and disregarding those that fail the test (‘falsification’); and second, his critique of ‘historicism’ (the idea that history is marching towards a fine goal) and his linked defence of liberal values and what he calls ‘the open society’. His point is that too many philosophers, from Plato down, think that they are exceptional beings - ‘philosopher kings’.

And yet... Shulgasser throws the charge back at him!

Those (like Popper) born under the astrological sign of Leo think they are kings. Do Leo philosophers think like that too?

Shulgasser continues:
‘Popper himself, so Napoleonic, the overcompensating short man. Popper's philosophical ambitions are overweening. He conquers continents. No one talks about Popper the person without noting his autocratic behavior and intransigence in contrast to his ethic of openness. Here's the Leo dilemma — the autocratic, central I versus the right of every peripheral being to claim to be the same.’
Certainly, in later years, it seems that Professor Popper lived in a house ‘supremely large in area, and adorned with numerous books, works of art, and a Steinway concert grand piano’... But does that make him ‘Napoleonic’? Consider Bryan Magee (broadcaster, politician, author, and populariser of philosophy) on Popper, taken from Confessions of a Philosopher. Magee starts by accepting Popper as ‘the outstanding philosopher of the twentieth century’, indeed the ‘foremost philosopher of the age’!
‘My chief impression of him at our early meetings was of an intellectual aggressiveness such as I had never encountered before [Napoleonism]. Everything we argued about he pursued relentlessly, beyond the limits of acceptable aggression in conversation. As Ernst Gombrich—his closest friend, who loved him—once put it to me, he seemed unable to accept the continued existence of different points of view, but went on and on and on about them with a kind of unforgivingness until the dissenter, so to speak, put his signature to a confession that he was wrong and Popper was right. 
In practice this meant he was trying to subjugate people. And there was something angry about the energy and intensity with which he made the attempt. This unremittingly fierce, tight focus, like a flame, put me in mind of a blowtorch, and that image remained the dominant one I had of him for many years, until he mellowed with age. . . 
He behaved as if the proper thing to do was to think one’s way carefully to a solution by the light of rational criteria and then, having come as responsibly and critically as one can to a liberal-minded view of what is right, impose it by an unremitting exercise of will, and never let up until one gets one’s way. ‘The totalitarian liberal’ was one of his nicknames at the London School of Economics, and it was a perceptive one.’
Popper, it seems, ‘turned every discussion into the verbal equivalent of a fight, and appeared to become almost uncontrollable with rage, and would tremble with anger’.

Yet central to his philosophy is the claim that criticism does more than anything else to bring about growth and improvement of our knowledge and his political writings contain the best statement ever made of the case for freedom and tolerance in human affairs.

So who is the ‘real’ Karl Popper? Does it matter if he failed to live up to his own writings? There’s a revealing story told about Popper, in which he was invited to give a talk at Cambridge University’s Moral Sciences Club.

Who did wave the poker during the acrimonious debate? I understood the Popper version of the poker incident to put him in a meek and philosophical light and Wittgenstein in a boorish, intolerant one. Maybe I got this wrong - alas, I committed myself to this version in print, in my book Philosophical Tales.

Anyway, what is known is that Popper was there to present his paper entitled ‘Are There Philosophical Problems?’ at a meeting chaired by Wittgenstein. The two started arguing vehemently over whether there existed substantial problems in philosophy, or merely linguistic puzzles—the position taken by Wittgenstein. In Popper’s account, Wittgenstein gestured at him with a fireplace poker to emphasise his points. When challenged by Wittgenstein to state an example of a moral rule, Popper claims to have replied: ‘Not to threaten visiting lecturers with pokers’, after which (according to Popper) Wittgenstein threw down the poker and stormed out.

My guess is that Popper was indeed a little bit Napoleonic. Mind you, he faced a world in which he was passed over by others all the time, not least Wittgenstein, partly on some kind of unspoken notion of his not being ‘one of us’, not being quite posh enough. Popper was denied access to Oxbridge, and had to graze on the outskirts of academia as a ‘not-quite-great’ philosopher.

And elsewhere Magee himself makes it clear he believes Popper is colossally underrated. Why, it’s enough to give anyone a Napoleon complex!

19 November 2017

Freedom of Speech in the Public Square

Posted by Keith Tidman

Free to read the New York Times forever, in Times Square
What should be the policy of a free society toward the public expression of opinion? The First Amendment of the U.S. Constitution required few words to make its point:
‘Congress shall make no law . . . abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the government for a redress of grievances.’
It reveals much about the republic, and the philosophical primacy of freedom of speech, that this was the first of the ten constitutional amendments collectively referred to as the Bill of Rights.

As much as we like to convince ourselves, however, that the public square in the United States is always a bastion of unbridled free speech, lamentably sometimes it’s not. Although we (rightly) find solace in our free-speech rights, at times and in every forum we are too eager to restrict someone else’s privilege, particularly where monopolistic and individualistic thinking may collide. Hot-button issues have flared time and again to test forbearance and deny common ground.

And it is not only liberal ideas but also conservative ones that have come under assault in recent years. When it comes to an absence of tolerance of opinion, there’s ample responsibility to share, across the ideological continuum. Our reaction to an opinion often is swayed by whose philosophical ox is being gored rather than by the rigour of argument. The Enlightenment thinker Voltaire purportedly pushed back against this parochial attitude with this famous declaration (in fact penned by his biographer, Evelyn Beatrice Hall):

‘I don’t agree with what you have to say, but I’ll defend to the death your right to say it.’
Yet still, the avalanche of majority opinion, and overwrought claims to ‘unique wisdom’, poses a hazard to the fundamental protection of minority and individual points of view — including beliefs that others might find specious, or even disagreeable.

To be clear, these observations about intolerance in the public square are not intended to advance moral relativism or equivalency. There may indeed be, for want of a better term, ‘absolute truths’ that stand above others, even in the everyday affairs of political, academic, and social policymaking. This reality should not fall prey to pressure from the more clamorous claims of free speech: that the loudest, angriest voices are somehow the truest, as if decibel count and snarling expressions mattered to the urgency and legitimacy of one’s ideas.

Thomas Jefferson like-mindedly referred to ‘the safety with which error of opinion may be tolerated where reason is left free to combat it’. The key is not to fear others’ ideas, as blinkered censorship concedes defeat: that one’s own facts, logic, and ideas are not up to the task of effectively putting others’ opinions to the test, without resort to vitriol or violence.

The risk to society of capriciously shutting down the free flow of ideas was powerfully warned against some one hundred fifty years ago by that Father of Liberalism, the English philosopher John Stuart Mill:
‘Strange it is that men should admit the validity of the arguments for free speech but object to their being “pushed to an extreme”, not seeing that unless the reasons are good for an extreme case, they are not good for any case.’
Mill’s observation is still germane to today’s society: from the halls of government to university campuses to self-appointed bully pulpits to city streets, and venues in-between.

Indeed, as recently as the summer of 2017, the U.S. Supreme Court underscored Mill’s point, setting a high bar in affirming bedrock constitutional protections of even offensive speech. Justice Anthony Kennedy, considered a moderate, wrote:
‘A law that can be directed against speech found offensive to some portion of the public can be turned against minority and dissenting views to the detriment of all. . . . The First Amendment does not entrust that power to the government’s benevolence. Instead, our reliance must be on the substantial safeguards of free and open discussion in a democratic society.’
It is worth noting that the high court opinion was unanimous: both liberal and conservative justices concurred. The long and short of it is that even the shards of hate speech are protected.

As to this issue of forbearance, the 20th-century philosopher Karl Popper introduced his paradox of tolerance: ‘Unlimited tolerance must lead to the disappearance of tolerance’. Popper goes on to assert, with some ambiguity,
‘I do not imply . . . that we should always suppress the utterance of intolerant philosophies; as long as we can counter them by rational argument and keep them in check by public opinion, suppression would certainly be unwise. But we should claim the right to suppress them if necessary even by force’.
The philosopher John Rawls agreed, asserting that a just society must tolerate the intolerant, to avoid itself becoming guilty of intolerance and appearing unjust. However, Rawls evoked reasonable limits ‘when the tolerant sincerely and with reason believe that their own security and that of the institutions of liberty are in danger’. Precisely where that line would be drawn is unclear — left to Supreme Court justices to dissect and delineate, case by case.

Open-mindedness — honoring ideas of all vintages — is a cornerstone of an enlightened society. It allows for the intellectual challenge of contrarian thinking. Contrarians might at times represent a large cohort of society; at other times they simply remain minority (yet influential) iconoclasts. Either way, the power of contrarians’ nonconformance is in serving as a catalyst for transformational thinking in deciding society’s path leading into the future.

That’s intellectually healthier than the sides of debates getting caught up in their respective bubbles, with tired ideas ricocheting around without discernible purpose or direction.

Rather than cynicism and finger pointing across the philosophical divide, the unfettered churn of diverse ideas enriches citizens’ minds, informs dialogue, nourishes curiosity, and makes democracy more enlightened and sustainable. In the face of simplistic patriarchal, authoritarian alternatives, free speech releases and channels the flow of ideas. Hyperbole that shuts off the spigot of ideas dampens inventiveness; no one’s ideas are infallible, so no one should have a hand at the ready to close that spigot. As Benjamin Franklin, one of America’s Founding Fathers, prophetically and plainly pronounced in the Pennsylvania Gazette, 17 November 1737:
‘Freedom of speech is a principal pillar of a free government.’
Adding that ‘... when this support is taken away, the constitution of a free society is dissolved, and tyranny is erected on its ruins’. Franklin’s point is that the erosion or denial of unfettered speech threatens the foundation of a constitutional, free nation that holds government accountable.

With determination, the unencumbered flow of ideas, leavened by tolerance, can again prevail as the standard of every public square — unshackling discourse, allowing dissent, sowing enlightenment, and delivering a foundational example and legacy of what’s possible by way of public discourse.

10 May 2015

What is a philosophical problem? The irrefutable metahypothesis

By Matthew Blakeway

If we ban speculation about metahypotheses, does philosophical debate simply evaporate? 



Karl Popper explained how scientific knowledge grows in his book Conjectures and Refutations. A conjecture is a guess as to an explanation of a phenomenon. And an experiment is an attempt to refute a conjecture. Experiments can never prove a conjecture correct, but if successive experiments fail to refute it, then gradually it becomes accepted by scientists that the conjecture is the best available explanation. It is then a scientific theory. Scientists don’t like the word “conjecture” because it implies that it is merely a guess. They prefer the word “hypothesis”. Popper’s rule is that, for a hypothesis to be considered scientific, it must be empirically falsifiable.

When scientists consider a phenomenon that is truly mystifying, it seems reasonable to ask “what might a hypothesis for this look like?” At this point, scientists are hypothesising about hypotheses. Metahypothetical thinking is the first step in any scientific journey. When this produces no results, frustration gets the upper hand and they pursue the following line of reasoning: “the phenomenon is an effect, and must have a cause. But since we don’t know what that cause is, let’s give it a name ‘X’ and then speculate about its properties.” A metahypothesis is now presumed to be 'A Thing', rather than merely an idea about an idea.

The problem is the irrefutability of its existence.
X is a metahypothetical idea, and until we have a hypothesis, we don’t actually know what we are supposed to be refuting. Popper would say that it wasn’t scientific, yet it sprang from a scientific speculation. There is a false impression of truth that actually derives from a misrepresented axiom. ‘X is a thing’ actually means ‘X is a name we have given to an idea where we don’t even know what the idea represents’, and the confusion between idea and thing is born. A false logical conclusion arises, not from truth, but because incoherent statements are irrefutable by their nature.

We can trace this through the history of philosophy. Most of it can be reduced to the following two questions:

• “What is X?” and
• “Does X exist?”

- where “X” is a metahypothetical idea that sprang from a scientist speculating about a cause of an unexplained phenomenon. The “X” could represent: God, evil, freewill, the soul, knowledge, etc. Each of these is a metahypothesis that originated with a scientist seeking to explain respectively: the existence of the universe, destructive actions by humans, seemingly random actions by humans, human actions that no one else can understand, human understanding.

The question “what is knowledge?” led to thousands of years of debate that ended when everybody lost interest in it. And I'm sure that the questions “what is freewill?” and “do humans have it?” are currently going through their death throes – again after a thousand years of debate. Or take the statement: “Evil people perform evil actions because they are evil.” If you are reading this blog, you will recognise that as so incoherent that it is barely a sentence, yet the individual components of it frequently pass as explanation for human actions that we don’t like. The idea of “evil” being some sort of thing is irrefutable despite being meaningless. What is there here to refute?

The sheer persistence of any proposition concerning a metahypothesis represented as ‘A Thing’ is illustrated by a recent real debate. The British actor Stephen Fry gave an interview to Irish television in which he argued that if God exists, then he is a maniacal bastard. [To paraphrase!]

Yes, the world is very splendid but it also has in it insects whose whole lifecycle is to burrow into the eyes of children and make them blind. They eat outwards from the eyes. Why? Why did you do that to us? You could easily have made a creation in which that didn’t exist.

Giles Fraser, a Christian, responded with an article “I don’t believe in the God that Stephen Fry doesn’t believe in either.”

If we are imagining a God whose only power, indeed whose only existence, is love itself – and yes, this means we will have to think metaphorically about a lot of the Bible – then God cannot stand accused as the cause of humanity’s suffering.

I expect that you are positively itching to take a side in this debate. But resist the urge! Instead imagine that you are a Martian gazing down at the tragic poverty of the debates of Earth people. Fry is taking a literal interpretation of God and thereby is converting a metahypothesis into a hypothesis, but he is doing this purely with the intention of refuting it. Deliberately establishing a false hypothesis is a good debating tactic, but a dishonest one.

Fraser responds by taking the literal interpretation and passing it back into the metahypothetical – an equally dishonest tactic of making a debate unwinnable by undefining its terms. It’s like stopping the other team winning at football by hiding the ball. The effect of debates like this is to create an equilibrium stasis where the word “God” is suspended between meaning and incoherence. If it is given a robust definition, it becomes a hypothesis and is empirically refutable. And since its origins were in our inability to explain phenomena (the origin of the universe, life, etc.) for which we now have decent scientific explanations then it is pretty certain that it will indeed be refuted. But if the idea is completely incoherent, then it isn’t possible to talk about it at all. So the word exists – fluidly semi-defined – in the mid-zone between these two states. The concept “God” is an idea about an idea about a cause of unexplained phenomena. It is therefore itself unexplainable.

We can examine the birth of a metahypothesis in real time. Richard Dawkins asked in The Selfish Gene what caused cultural elements to replicate. He speculated that it needed a replicator like a gene:

But do we have to go to distant worlds to find other kinds of replicator and other, consequent, kinds of evolution? I think that a new kind of replicator has recently emerged on this very planet. It is staring us in the face. It is still in its infancy, still drifting clumsily about in its primeval soup, but already it is achieving evolutionary change at a rate that leaves the old gene panting far behind.

The new soup is the soup of human culture. We need a name for the new replicator, a noun that conveys the idea of a unit of cultural transmission, or a unit of imitation. ‘Mimeme’ comes from a suitable Greek root, but I want a monosyllable that sounds a bit like ‘gene’. I hope my classicist friends will forgive me if I abbreviate mimeme to meme.

An effect needs a cause. And since we don’t know what that cause is, let us give it a name and then speculate as to what its properties must be. It is beyond funny that the world’s most famous atheist is here caught employing the same method of reasoning that gave birth to the idea of “God”. We will now debate for a thousand years whether memes exist or not. However, the idea is incoherent despite sounding convincingly sciencey. The idea of the “soul” sounded pretty sciencey in Aristotle’s day. Dawkins speculates that the idea of God is a meme, but he fails to notice that the idea of a meme is a meme, and therefore he is trying to lift himself off the floor by his bootstraps.

So... if we ban speculation about metahypotheses, does philosophical debate simply evaporate? Maybe! But it would probably also stop scientific progress in its tracks. If you are in the mood for a brain spin, you might consider whether the idea of a “metahypothesis” is itself a metahypothesis.

Taking this further, if we cannot hypothesise about hypotheses, then does science evaporate too?