
08 November 2020

The Certainty of Uncertainty


Posted by Keith Tidman
 

We favour certainty over uncertainty. That’s understandable. Subscribing to certainty reassures us that perhaps we do indeed live in a world of absolute truths, and that all we have to do is stay the course in our quest to stitch the pieces of objective reality together.

 

We imagine the pursuit of truths as comprising a lengthening string of eureka moments, as we put a check mark next to each section in our tapestry of reality. But might that reassurance about absolute truths prove illusory? Might it be, instead, ‘uncertainty’ that wins the tussle?

 

Uncertainty taunts us. The pursuit of certainty, on the other hand, seems to get us ever closer to reality, or at least closer to confidence that there really is an external world to know. But absolute reality remains tantalisingly just beyond our fingertips, perhaps forever.

 

And yet it is uncertainty, not certainty, that incites us to continue conducting the intellectual searches that inform us and our behaviours, even if imperfectly, as we seek a fuller understanding of the world. Even if the reality we think we have glimpsed is one characterised by enough ambiguity to keep surprising and sobering us.

 

The real danger lies in an overly hasty, blinkered turn to certainty. This trust stems from a cognitive bias — the one that causes us to overvalue our knowledge and aptitudes. Psychologists call it the Dunning-Kruger effect.

 

What’s that about then? Well, this effect prevents us from spotting the fallacies in what we think we know, and from discerning problems with the conclusions, decisions, predictions, and policies that grow out of these presumptions. We fail to recognise our limitations in deconstructing and judging the truth of the narratives we have created, limits that additional research and critical scrutiny so often unmask.

 

The Achilles’ heel of certainty is our habitual resort to inductive reasoning. Induction occurs when we conclude from many observations that something is universally true: that the past will predict the future. Or, as the Scottish philosopher, David Hume, put it in the eighteenth century, our inferring ‘that instances of which we have had no experience resemble those of which we have had experience’. 

 

A much-cited example of such reasoning consists of someone concluding that, because they have only ever observed white swans, all swans are therefore white — shifting from the specific to the general. Indeed, Aristotle used the white swan as an example of a logically necessary relationship. Yet someone spotting just one black swan disproves the generalisation.

 

Bertrand Russell once set out the issue in this colourful way:

 

‘Domestic animals expect food when they see the person who usually feeds them. We know that all these rather crude expectations of uniformity are liable to be misleading. The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to uniformity of nature would have been useful to the chicken’.

 

The person’s theory that all swans are white — or the chicken’s theory that the man will continue to feed it — can be falsified, which sits at the core of the ‘falsification’ principle developed by the philosopher of science Karl Popper. The heart of this principle is that in science a hypothesis, theory, or proposition must be falsifiable, that is, capable in principle of being shown wrong. In other words, it must be testable against evidence. For Popper, a claim that is untestable is not scientific.

 

However, a testable hypothesis that is proven through experience to be wrong (falsified) can be revised, or perhaps discarded and replaced by a wholly new proposition or paradigm. This happens in science all the time, of course. But here’s the rub: humanity can’t let uncertainty paralyse progress. As Russell also said: 

 

‘One ought to be able to act vigorously in spite of the doubt. . . . One has in practical life to act upon probabilities’.

 

So, in practice, whether implicitly or explicitly, we accept uncertainty as a condition in all fields — throughout the humanities, social sciences, formal sciences, and natural sciences — especially if we judge the prevailing uncertainty to be small enough to live with. Here’s a concrete example from science.

 

In the 1960s, the British theoretical physicist Peter Higgs mathematically predicted the existence of a specific subatomic particle, the last missing piece in the Standard Model of particle physics. But no one had yet seen it, so the elusive particle remained a hypothesis. Only several decades later, in 2012, did CERN’s Large Hadron Collider reveal the particle, whose field is claimed to give other elementary particles their mass. (The discovery earned Higgs, and fellow theorist François Englert, the Nobel prize in physics.)

 

The CERN scientists’ announcement said that their confirmation bore ‘five-sigma’ certainty. That is, there was only about 1 chance in 3.5 million that a statistical fluke, rather than something real, would have produced so strong a signal of the then-named Higgs boson. A level of certainty (or of uncertainty, if you will) that physicists could very comfortably live with. Though as Kyle Cranmer, one of the scientists on the team that discovered the particle, appropriately stresses, there remains an element of uncertainty:

 

‘People want to hear declarative statements, like “The probability that there’s a Higgs is 99.9 percent,” but the real statement has an “if” in there. There’s a conditional. There’s no way to remove the conditional.’
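For readers curious where the ‘1 in 3.5 million’ figure comes from, here is a minimal sketch of the arithmetic, assuming the conventional one-sided Gaussian tail that particle physicists use to translate sigmas into a probability; the five-sigma threshold itself is the only input:

```python
# A minimal sketch, assuming the conventional one-sided Gaussian tail
# used in particle physics to convert 'sigmas' into a p-value.
import math

sigma = 5.0
# P(Z > 5) for a standard normal variable, via the complementary error function
p_value = 0.5 * math.erfc(sigma / math.sqrt(2))
print(f"p-value at {sigma:.0f} sigma: {p_value:.3e}")  # ~2.87e-07
print(f"roughly 1 in {1 / p_value:,.0f}")              # ~1 in 3.49 million
# Note Cranmer's conditional: this is the probability of so strong a signal
# IF only background noise were at work -- not the probability that the
# Higgs boson exists.
```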

 

Of course, in everyday life we rarely have to calculate the probability of reality. But we might, through either reasoning or subconscious means, come to conclusions about the likelihood that what we choose to act on is right, or safely right enough. The stakes of being wrong matter — sometimes a little, other times consequentially. Peter Higgs got it right; Bertrand Russell’s chicken got it wrong.

  

The takeaway from all this is that we cannot know things with absolute epistemic certainty. Theories are provisional. Scepticism is essential. Even wrong theories kindle progress. The so-called ‘theory of everything’ will remain elusive. Yet we’re aware that we know some things with greater certainty than others. We use that awareness to advantage, informing theory, understanding, and policy, ranging from the esoteric to the everyday.

 

18 October 2020

Is Technology ‘What Makes us Human’?


Posted by Keith Tidman

Technology and human behaviour have always been intertwined, defining us as the species we are. Today, technology’s ubiquity means that our lives’ ever-faster turn toward it, and its multiplicity of forms, have given it stealth-like properties. Increasingly, for many people, technology seems just to happen, and the human agency behind it appears veiled. Yet at the same time, perhaps counterintuitively, what happens ‘behind the curtain’ hints that technology is fundamentally rooted in human nature.


Certainly, there is a delicate affinity between science and technology: the former uncovers how the world happens to be, while the latter converts those discoveries into artefacts. As science changes, technologists see opportunities: through invention, design, engineering, and application. This restlessly visionary process is not just incidental, I suggest, but intrinsic to us.

 

Our species comprises enthusiastic toolmakers. The coupling of science and technology has led to humanity’s rich array of transformative products, from particle accelerators to world-spanning aircraft, to magnetic-resonance imaging devices, to the space-station laboratory and universe-imaging space telescopes. The alliance has brought us gene-editing technologies and bioengineering, robotics driven by artificial intelligence, energy-generating solar panels, and multifunctional ‘smart phones’.

 

There’s an ‘everywhereness’ to many such devices, reaching into our lives and increasingly creating a one-world community linked by interdependence on many fronts. The role of toolmaker-cum-technologist has become integrated, metaphorically speaking, into our species’ biological motherboard. In this way, technology has become the tipping point of globalisation’s irrepressibility.

 

René Descartes went so far as to profess that science would enable humankind to ‘become the masters and possessors of nature’. An overreach, perhaps — the despoiling of aspects of nature, such as the air, land, and ecosystems, at our over-eager hands convinces us of that — but the trend line today points in the direction Descartes declared, just as electric light frees swaths of the world’s population from dependence on daylight.

 

Technology was supercharged by the science of the Newtonian world, which saw the universe as a machine, and its subsequent vaulting into the world of digits has had obvious magnifying effects. These will be amplified further as machine learning takes centre stage. Yet human imagination and creativity have had a powerfully galvanising influence over the transformation.

 

Technology itself is morally impartial, and as such neither blameworthy nor praiseworthy. However ‘clever’ it becomes, for the foreseeable future technology has no agency, and no preference of any kind. On the horizon, admittedly, much cleverer, even self-optimising technology might start to exhibit moral partiality. But as to responsibility and accountability, it is how technology is employed by its users that gives rise to considerations of morality.

 

A car, for example, is a morally impartial technology. No nefarious intent can fairly be ascribed to either inventor or owner. However, as soon as someone chooses to exercise his agency and drive the car into a crowd with intent to hurt, he turns the vehicle from its original purpose as an empowering tool for transportation into a weapon of sorts. But no one wags a finger remonstratively at the car.

 

Technology influences our values and norms, prompting culture to morph — sometimes gradually, other times hurriedly. This, at least in large part, is what defines us as human beings. At the same time, the incorporation and acceptance of technology are decidedly seductive. Witness the new Digital Revolution. Technology’s sway is hard to discount, and even harder to rebuff, especially once it has established roots deep in culture’s rich subsurface soil. But this sway can also be overstated.

 

To that last point: despite technology’s ubiquity, it has not entirely pulled the rug from under other values, like those around community, spirituality, integrity, loyalty, respect, leadership, generosity, and accountability. Indeed, technology might be construed as a multiplier of opportunities for development and improvement, empowering individuals, communities, and institutions alike. The fifteenth-century printing press, which democratised access to knowledge, became a tool that spurred revolutions, and helped spark the Enlightenment, is one instance of this influential effect.


Today, rockets satisfy our impulse to explore space; the anticipated advent of quantum computers promises dramatic advances in machine learning, in the modelling of natural events and behaviours, in unbreakable encryption, and in the development of drugs; nanotechnology leads to the creation of revolutionary materials — and all the while the Internet connects the world in ways once beyond imagination.

 

In this manner, there are cascading events that work both ways: human needs and wants drive technology; and technology drives human needs and wants. Technological change is thus a Janus figure: one face looks toward the past, as we figure out what is important and which lessons to apply; the other looks toward the future, as we innovate. Accordingly, both traditional and new values become expressed, more than just obliquely, by the technology we invent, in a cycle of generation and regeneration.

 

Despite technology’s occasional failures, few people are really prepared to live unconditionally with nature, strictly on nature’s terms. To do so remains a romanticised vision, worthy of the likes of the American idealist Henry David Thoreau. Rather, whether rightly or wrongly, we have more often seen it as serving our higher interests to make life a bit easier, a bit more palatable.

 

The philosopher Martin Heidegger declared, rather dismally, that we ‘remain unfree and chained to technology’. But I think this is an unappreciative, undeservedly dismissive view of technology’s advantages across domains: agriculture, education, industry, medicine, business, sanitation, transportation, building, entertainment, materials, information, and communication, among others. These are domains where considerations like resource sustainability, ethics, and social justice have been key.

 

For me, in its reach, technology’s pulse has a sociocultural aspect, both shaping and drawing upon social, political, and cultural values. And to get the right balance among those values is a moral, not just a pragmatic, responsibility — one that requires being vigilant in making choices from among alternative priorities and goals. 

 

In innumerable ways, it is through technology, incubated in science, that civilisation has pushed back against the Hobbesian ‘nastiness and brutishness’ of human existence. That’s the record of history. In the meantime, we concede the paradox of complex technology championing a simplified, pleasanter life. And as such, our tool-making impulse toward technological solutions, despite occasional failures, will continue to animate what makes us deeply human.

 

20 September 2020

‘What Are We?’ Self-reflective Consciousness, Cooperation, and the Agents of Our Future Evolution

Cueva de las Manos, Río Pinturas

Posted by John Hands 

‘What are we?’ This is arguably the fundamental philosophical question. Indeed, ‘What are we?’ along with ‘Where do we come from?’ and ‘Why do we exist?’ are questions that humans have been asking for at least 25,000 years. During all of this time we have sought answers from the supernatural. About 3,000 years ago, however, we began to seek answers through philosophical reasoning and insight. Then, around 150 years ago, we began to seek answers through science: through systematic, preferably measurable, observation or experiment. 

As a science graduate and former tutor in physics for Britain’s ‘Open University’*, I wanted to find out what answers science currently gives. But I couldn’t find any book that did so. There are two reasons for this.

  • First, the exponential increase in empirical data, generated by rapid developments in technology, has resulted in the branching of science into increasingly narrow, specialised fields. I wanted to step back from the focus of one leaf on one branch and see what the whole evolutionary tree shows us. 
  • Second, most science books advocate a particular theory, and often present it as fact. But scientific explanations change as new data is obtained and new thinking develops. 

And so I decided to write ‘the book that hadn’t been written’: an impartial evaluation of the current theories that explain how we evolved, not just from the first life on Earth, but where that came from, right back to the primordial matter and energy at the beginning of the universe of which we ultimately consist. I called it COSMOSAPIENS: Human Evolution from the Origin of the Universe*, and in the event it took more than 10 years to research and write. What’s more, the conclusions I reached surprised me. I had assumed that the Big Bang was well-established science. But the more I investigated, the more I discovered that the Big Bang theory had been contradicted by observational evidence stretching back 60 years. Cosmologists had continually changed the theory as more sophisticated observations and experiments produced ever more contradictions with it.

The latest theory is called the Concordance Model. It might more accurately be described as ‘The Inflationary-before-or-after-the-Hot Big Bang-unknown-27% Dark Matter-unknown-68% Dark Energy model’. Its central axiom, that the universe inflated at a trillion trillion trillion times the speed of light in a trillion trillion trillionth of a second, is untestable. Hence it is not scientific.

The problem arises because these cosmological theories are mathematical models. They are simplified solutions of Einstein’s field equations of general relativity applied to the universe. They are based on assumptions that the latest observations show to be invalid. That’s one surprising conclusion I found. 

Another surprise came when I examined the theory, orthodox in the UK and the USA for the last 65 years, of how and why life on Earth evolved into so many different species. It is known as NeoDarwinism, and was popularised by Richard Dawkins in his bestselling book The Selfish Gene, which says that biological evolution is caused by genes selfishly competing with each other to survive and replicate.

NeoDarwinism is based on the fallacy of ascribing intention to an acid, deoxyribonucleic acid, of which genes are composed. Dawkins admits that this language is sloppy and says he could express it in scientific terms. But I’ve read the book twice, and he never does manage to do this. Moreover, the theory is contradicted by substantial behavioural, genetic, and genomic evidence. When confronted by such evidence, instead of modifying the theory to take account of it, as a scientist should do, Dawkins lamely says ‘genes must have misfired’.

The fact is, he couldn’t modify the theory because the evidence shows that Darwinian competition causes not the evolution of species but the destruction of species. It is cooperation, not competition, that has caused the evolution of successively more complex species.

Today, most biologists assert that we differ only in degree from other animals. I think that this too is wrong. What marked our emergence as a distinct species some 25,000 years ago wasn’t the size or shape of our skulls, or that we walked upright, or that we lacked bodily hair, or the genes we possess. These are differences in degree from other animals. What made us unique was reflective consciousness.

Consciousness is a characteristic of a living thing as distinct from an inanimate thing like a rock. It is possessed in rudimentary form by the simplest species like bacteria. In the evolutionary lineage leading to humans, consciousness increased with increasing neural complexity and centration in the brain until, with humans, it became conscious of itself. We are the only species that not only knows but also knows that it knows. We reflect on ourselves and our place in the cosmos. We ask questions like: What are we? Where did we come from? Why do we exist? 

This self-reflective consciousness has transformed existing abilities and generated new ones. It has transformed comprehension, learning, invention, and communication, which all other animals have in varying degrees. It has generated new abilities, like imagination, insight, abstraction, written language, belief, and morality that no other animal has. Its possession marks a difference in kind, not merely degree, from other animals, just as there is a difference in kind between inanimate matter, like a rock, and living things, like bacteria and animals. 

Moreover, Homo sapiens is the only known species that is still evolving. Our evolution is not morphological—physical characteristics—or genetic, but noetic, meaning ‘relating to mental activity’. It is an evolution of the mind, and has been occurring in three overlapping phases: primeval, philosophical, and scientific. 

Primeval thinking was dominated by the foreknowledge of death and the need to survive. Accordingly, imagination gave rise to superstition, which is a belief that usually arises from a lack of understanding of natural phenomena or fear of the unknown. 

It is evidenced by legends and myths: from the animism, totemism, and ancestor worship of hunter-gatherers, to the polytheism of city-states in which the pantheon of gods reflected the social hierarchy of their societies, and finally to a monotheism in which other gods were demoted to angels or subsumed into one God, reflecting the absolute power of king or emperor.

The instinct for competition and aggression, which had been ingrained over millions of years of prehuman ancestry, remained a powerful characteristic of humans, interacting with, and dominating, reflective consciousness. 

The second phase of reflective consciousness, philosophical thinking, emerged roughly between 1500 and 500 BCE. It was characterised by humans going beyond superstition and using reasoning and insight, often after disciplined meditation, to answer questions. In all cultures it produced the ethical view that we should treat all others, including our enemies, as ourselves. This ran counter to the predominant instinct of aggression and competition.

The third phase, scientific thinking, gradually emerged from natural philosophy around 1600 CE. It branched into the physical sciences, the life sciences, and medical sciences. 

Physics, the fundamental science, then started to converge, rapidly so over the last 65 years, towards a single theory that describes all the interactions between all forms of matter. According to this view, all physical phenomena are lower energy manifestations of a single energy at the beginning of the universe. This is similar in very many respects to the insight of philosophers of all cultures that there is an underlying energy in the cosmos that gives rise to all matter and energy. 

During this period, reflective consciousness has produced an increasing convergence of humankind. The development of technology has led to globalisation, both physical and electronic, in trade, science, education, politics (the United Nations), and altruistic endeavours such as UNICEF and Médecins Sans Frontières. It has also produced a ‘complexification’ of human societies, a reduction in aggression, an increase in cooperation, and the ability to determine humankind’s future.

This whole process of human evolution has been accelerating. Primeval thinking emerged roughly 25,000 years ago, philosophical thinking about 3,000 years ago, and scientific thinking some 400 years ago, while convergent thinking began barely 65 years ago.

I think that when we examine the evidence of our evolution from primordial matter and energy at the beginning of the universe, we see a consistent pattern. It shows that we humans are the unfinished product of an accelerating cosmic evolutionary process characterised by cooperation, increasing complexity, and convergence, and that, uniquely as far as we know, we are the self-reflective agents of our future evolution.


 

*For further details and reviews of John’s new book, see https://johnhands.com 

Editor’s note. The UK’s ‘Open University’ differs from other universities through its policy of open admissions and its emphasis on distance and online learning programmes.

02 February 2020

Picture Post #53 Buckled Rails


'Because things don’t appear to be the known thing; they aren’t what they seemed to be neither will they become what they might appear to become.'

Posted by Thomas Scarborough

Buckled railway line near Glasgow, 25 June 2018.

The thermal expansion of railway lines is governed most simply by the formula

ΔL ≈ αLΔT

where ΔL is the change in length, α the coefficient of linear expansion of the rail material, L the original length, and ΔT the change in temperature.

This formula failed near Glasgow on 25 June 2018, when railway lines buckled in the heat. In fact they buckled in heatwaves all across Europe in the 2010s. Why? The answer is simple. This formula, and versions of it, left out the environmental factors that mattered.
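To see the scale involved, here is a minimal sketch of the formula at work. The coefficient, rail length, and temperature rise are assumed textbook values for illustration, not measurements from the Glasgow incident:

```python
# A minimal sketch of the expansion formula, using assumed textbook values;
# none of these numbers are measurements from the Glasgow incident.
alpha = 1.2e-5    # coefficient of linear expansion for steel, per deg C (approx.)
L = 100.0         # length of a rail section in metres (assumed)
delta_T = 30.0    # temperature rise in degrees C (assumed)

delta_L = alpha * L * delta_T
print(f"Free expansion: {delta_L * 100:.1f} cm")  # ~3.6 cm over 100 m
# A welded, clamped rail cannot expand freely, so those centimetres become
# compressive stress in the steel. Nothing in the formula speaks to ballast
# condition, track curvature, or direct solar heating of the rail -- the
# environmental factors at issue here.
```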

It is not only railway lines which buckle. Oceans are polluted, glaciers retreat, bees are poisoned, toads go blind, groundwater is contaminated, people suffocate. In fact, thousands if not millions of things go wrong besides, all without their being included in the formulae.

Here is the problem. We take it at face value that physical laws are true of this world. It is the heresy of Plato. Ordinary things, held Plato, imitate forms. We hold up our forms, which are our formulae, to reality: ‘This is how it is!’ It is not. And so the world is continually bedevilled by negative consequences.