
18 October 2020

Is Technology ‘What Makes us Human’?


Posted by Keith Tidman

Technology and human behaviour have always been intertwined, defining us as the species we are. Today, technology’s ubiquity, the ever-faster turn of our lives toward it, and its multiplicity of forms have given it stealth-like properties. Increasingly, for many people, technology seems simply to happen, the human agency behind it veiled. Yet at the same time, perhaps counterintuitively, what happens ‘behind the curtain’ hints that technology is fundamentally rooted in human nature.


Certainly, there is a delicate affinity between science and technology: the former uncovers how the world happens to be, while the latter converts those discoveries into artefacts. As science changes, technologists see opportunities: through invention, design, engineering, and application. This restlessly visionary process is not just incidental, I suggest, but intrinsic to us.

 

Our species comprises enthusiastic toolmakers. The coupling of science and technology has led to humanity’s rich array of transformative products, from particle accelerators to world-spanning aircraft, to magnetic-resonance imaging devices, to the space-station laboratory and universe-imaging space telescopes. The alliance has brought us gene-editing technologies and bioengineering, robotics driven by artificial intelligence, energy-generating solar panels, and multifunctional ‘smart phones’.

 

There’s an ‘everywhereness’ to many such devices, reaching into our lives and increasingly creating a one-world community linked by mutual interdependence on many fronts. The role of toolmaker-cum-technologist has become integrated, metaphorically speaking, into our species’ biological motherboard. In this way, technology has become the tipping point of globalisation’s irrepressibility.

 

René Descartes went so far as to profess that science would enable humankind to ‘become the masters and possessors of nature’. An overreach, perhaps — the despoiling of aspects of nature, such as the air, land, and ecosystems, at our over-eager hands convinces us of that — but the trend line today points in the direction Descartes declared, just as electric light frees swaths of the world’s population from dependence on daylight.

 

Technology was supercharged by the science of the Newtonian world, which saw the universe as a machine, and its subsequent vaulting into the world of digits has had obvious magnifying effects. These effects will be amplified further as machine learning takes center stage. Yet human imagination and creativity have had a powerfully galvanizing influence over the transformation.

 

Technology itself is morally impartial, and as such neither blameworthy nor praiseworthy. However ‘clever’ it becomes, technology does not yet have agency — or preference of any kind. On the horizon, admittedly, much cleverer, even self-optimising technology might start to exhibit moral partiality. But as to responsibility and accountability, it is how technology is employed by its users that gives rise to considerations of morality.

 

A car, for example, is a morally impartial technology. No nefarious intent can fairly be ascribed to either inventor or owner. However, as soon as someone chooses to exercise his agency and drive the car into a crowd with the intent to hurt, he turns the vehicle from its original purpose as an empowering tool for transportation into a weapon of sorts. But no one wags a finger remonstratively at the car.

 

Technology influences our values and norms, prompting culture to morph — sometimes gradually, other times hurriedly. It is what defines us, at least in large part, as human beings. At the same time, the incorporation and acceptance of technology are decidedly seductive. Witness the new Digital Revolution. Technology’s sway is hard to discount, and even harder to rebuff, especially once it has established roots deep in culture’s rich subsurface soil. But this sway can also be overstated.

 

To that last point, despite technology’s ubiquity, it has not entirely pulled the rug from under other values: community, spirituality, integrity, loyalty, respect, leadership, generosity, and accountability, among others. Indeed, technology might be construed as a multiplier of opportunities for development and improvement, empowering individuals, communities, and institutions alike. The fifteenth-century printing press, which democratised access to knowledge, spurred revolutions, and helped spark the Enlightenment, is one instance of this influential effect.


Today, rockets satisfy our impulse to explore space; the anticipated advent of quantum computers promises dramatic advances in machine learning as well as the modeling of natural events and behaviours, unbreakable encryption, and the development of drugs; nanotechnology leads to the creation of revolutionary materials — and all the time the Internet increasingly connects the world in ways once beyond the imagination.

 

In this manner, cascading events work both ways: human needs and wants drive technology; and technology drives human needs and wants. Technological change is thus Janus-faced: one face looking toward the past, as we figure out what is important and which lessons to apply; the other looking toward the future, as we innovate. Accordingly, both traditional and new values become expressed, more than just obliquely, in the technology we invent, in a cycle of generation and regeneration.

 

Despite technology’s occasional failures, few people are really prepared to live unconditionally with nature, strictly on nature’s terms. To do so remains a romanticised vision, worthy of the likes of the American idealist Henry David Thoreau. Rather, whether rightly or wrongly, we have more often seen it in our higher interest to make life a bit easier, a bit more palatable.

 

The philosopher Martin Heidegger declared, rather dismally, that we are relegated to ‘remain unfree and chained to technology’. But I think this is an unappreciative, undeservedly dismissive view of technology’s advantages across domains: agriculture, education, industry, medicine, business, sanitation, transportation, building, entertainment, materials, information, and communication, among others. These are domains where considerations like resource sustainability, ethics, and social justice have been key.

 

For me, in its reach, technology’s pulse has a sociocultural aspect, both shaping and drawing upon social, political, and cultural values. And to get the right balance among those values is a moral, not just a pragmatic, responsibility — one that requires being vigilant in making choices from among alternative priorities and goals. 

 

In innumerable ways, it is through technology, incubated in science, that civilisation has pushed back against the Hobbesian ‘nastiness and brutishness’ of human existence. That is the record of history. In the meantime, we concede the paradox of complex technology championing a simplified, pleasanter life. And as such, our tool-making impulse toward technological solutions, despite occasional failures, will continue to animate what makes us deeply human.

 

23 September 2018

Why Is There Something Rather Than Nothing?

For scientists, space is not empty but full of quantum energy
Posted by Keith Tidman

Gottfried Wilhelm Leibniz introduced this inquiry more than three hundred years ago, saying, ‘The first question that should rightly be asked is, “Why is there something rather than nothing?”’ Since then, many philosophers and scientists have likewise pondered this question. Perhaps the most famous restatement of it came in 1929 when the German philosopher, Martin Heidegger, placed it at the heart of his book What Is Metaphysics?: ‘Why are there beings at all, and why not rather nothing?’

Of course, many people around the world turn to a god as a sufficient reason (explanation) for the universe’s existence. Aristotle believed, as did his forerunner Heraclitus, that the world was mutable — everything undergoing perpetual change — which he characterised as movement. He argued that there was a sequence of predecessor causes that led back deep into the past, until reaching an unmoved mover, or Prime Mover (God). An eternal, immaterial, unchanging god exists necessarily, Aristotle believed, itself independent of cause and change.

In the 13th century Saint Thomas Aquinas, a Christian friar, advanced this so-called cosmological view of universal beginnings, likewise perceiving God as the First Cause. Leibniz, in fact, was only proposing something similar, with his Contingency Argument, in the 17th century:

‘The sufficient reason [for the existence of the universe] which needs not further reason must be outside of this series of contingent things and is found in a substance which . . . is a necessary being bearing the reason for its existence within itself. . . .  This final reason for things is called God’ — Leibniz, The Principles of Nature and Grace

However, invoking God as the prime mover or first cause or noncontingent being — arbitrarily, on a priori rather than empirical grounds — does not inescapably make it so. Far from it. The common counterargument maintains that positing a god simply raises the further question: if a god exists — has a presence — what was its cause? Assuming, that is, that anything — ‘nothing’ being the sole exception — must have a cause. So we are still left with the question, famously posed by the theoretical physicist Stephen Hawking, ‘What is it that breathes fire into the equations and makes a universe for them to describe?’ To posit the existence of a god does not, as such, get around the ‘hard problem’: why there is a universe at all, not just why our universe is the way it is.





 
Science has not fared much better in this challenge. The British mathematician and philosopher Bertrand Russell ended up merely declaring, in 1948, ‘I should say that the universe is just there, and that’s all’. A ‘brute fact’, as some have called it. Many scientists have embraced similar sentiments, concluding that ‘something’ was inevitable and that ‘nothingness’ would be impossible. Some go so far as to say that nothingness is unstable, hence again impossible. But these are difficult positions to support unequivocally, given that, like many scientific and philosophical predecessors and contemporaries, they do not adequately explain why and how. This was, for example, the outlook of Baruch Spinoza, the 17th-century Dutch philosopher, who maintained that the universe (with its innumerable initial conditions and subsequent properties) had to exist. Leaping forward to the 20th century, Albert Einstein, himself an admirer of Spinoza’s philosophy, seemed to concur.

Quantum mechanics poses an interesting illustration of the science debate, informing us that empty space is not really empty — not in any absolute sense, anyway. Even what we might consider the most perfect vacuum is actually filled by churning virtual particles — quantum fluctuations — that almost instantaneously flit in and out of existence. Some theoretical physicists have suggested that this so-called ‘quantum vacuum’ is as close to nothingness as we might get. But quantum fluctuations do not equate to nothingness; they are not some modern-day scientific equivalent of the non-contingent Prime Mover discussed above. Rather, however flitting and insubstantial, virtual quantum particles are still something.

It is therefore reasonable to inquire into the necessary origins of these quantum fluctuations — an inquiry that returns us to an Aristotelian-like chain of causes upon causes, traceable back in time. The notion of a quantum vacuum still doesn’t get us to what might have garnered something from nothing. Hence the hypothesis that there has always been something — that the quantum vacuum was the universe’s nursery — peels away as an unsupportable claim. Meanwhile, other scientific hypotheses, such as string theory, bid to take the place of the Prime Mover. At the heart of that theory is the hypothesis that the fundamental particles of physics are not really ‘points’ as such but rather differently vibrating energy ‘strings’ existing in many more than the familiar dimensions of space-time. Yet these strings, too, do not get us over the hump of something in place of nothing; strings are still ‘something’, whose origins (causes) would still demand explanation.

In addressing these questions, we are not talking about something emerging from nothing, as nothingness by definition would preclude the initial conditions required for the emergence of a universe. Nor is ‘nothingness’ the mere absence (or opposite) of something; rather, it can be regarded as theoretically having been just as possible as ‘something’. In light of such modern-day challenges in both science and philosophy, Ludwig Wittgenstein was at least partially right in saying, early in the 20th century (Tractatus Logico-Philosophicus, proposition 6.44, on what he calls ‘the mystical’), that the real mystery was, ‘Not how the world is . . . but that it is’.



30 July 2018

The Anthropic Principle: Was the Universe Made for Us?

Diagram on the dimensionality of spacetime, by Max Tegmark
Posted by Keith Tidman
‘It is clear that the Earth does not move, and that it does not lie elsewhere than at the center [of the universe]’ 
— Aristotle (4th century BCE)

Almost two millennia after Aristotle, in the 16th century, Nicolaus Copernicus dared to differ from the revered ‘father of Western philosophy’. Copernicus rattled the world by arguing that the Earth is not at the center of the universe — a move that to many at the time seemed to knock humankind off its pedestal, reducing it from exceptionalism to mediocrity. The so-called ‘Copernican principle’ survived, of course, along with the profound disturbance it had evoked among the theologically minded.

Five centuries later, in the early 1970s, the astrophysicist Brandon Carter came up with a different model — the ‘anthropic principle’ — that has kept philosophers and scientists debating its cosmological and metaphysical significance. With some irony, Carter proposed the principle at a symposium marking Copernicus’s 500th birthday. The anthropic principle points to what has been referred to as the ‘fine-tuning’ of the universe: a list of cosmological qualities (physical constants) whose extraordinarily precise values were essential to making intelligent life possible.

Yet, as Thomas Nagel, the contemporary American philosopher, suggested, even the physical constants known to be required for our universe and for intelligent carbon-based life need to be properly understood, especially in the context of the larger-scale universe:
‘One doesn’t show that something doesn’t require explanation by pointing out that it is a condition of one’s existence.’
The anthropic principle — its adherence to simplicity, consistency, and elegance notwithstanding — did not of course place Earth back at the center of the universe. As Carter put it, ‘Although our situation is not necessarily central, it is inevitably privileged’. To widen the preceding idea, let’s pose two questions: Did the anthropic principle reestablish humankind’s special place? Was the universe made for us?

First, some definitions. There are several variants of the anthropic principle, as well as differences among definitions, with Carter originally proposing two: the ‘weak anthropic principle’ and the ‘strong anthropic principle’. Of the weak anthropic principle, Carter says:
‘… our location in the universe [he was referring to the age of the universe at which humankind entered the world stage, as well as to location within space] is necessarily privileged to the extent of being compatible with our existence as observers.’
Of the strong anthropic principle, he explained,
‘The universe (and hence the fundamental parameters on which it depends) must be such as to admit the creation of observers within it at some stage’.
Although Carter is credited with coining the term ‘anthropic principle’, others had turned to the subject earlier. One in particular was the 19th-century German philosopher Arthur Schopenhauer, who presented a model of the world intriguingly similar to the weak anthropic principle. He argued that the world’s existence depended on numerous variables, like temperature and atmosphere, remaining within a very narrow range — presaging Carter’s fuller explanation. Here’s a snapshot of Schopenhauer’s thinking on the matter:
‘If any one of the actually appearing perturbations of [the planets’ course], instead of being gradually balanced by others, continued to increase, the world would soon reach its end’.
That said, some philosophers and scientists have criticized the weak variant as a logical tautology; others discount that criticism and favor the weak variant. At the same time, the strong variant is considered problematic in its own way, being difficult to substantiate either philosophically or scientifically; it may be neither provable nor disprovable. At their core, however, both variants say that our universe is wired to permit an intelligent observer — whether carbon-based or of a different substrate — to appear.

So, what kinds of physical constants — also referred to as ‘cosmic coincidences’ or ‘initial conditions’ — does the anthropic principle point to as ‘fine-tuned’ for a universe like ours, and an intelligent species like ours, to exist? There are many; let’s first take just one, to demonstrate significance. If the force of gravitation were slightly weaker, then following the Big Bang matter would have been dispersed too fast for galaxies to form. If gravitation were slightly stronger — with the universe expanding even one part in a million slower — then the universe would have reached its maximum expansion and collapsed in a big crunch before intelligent life could have entered the scene.

Other examples of constants balanced on a razor’s edge have applied to the universe as a whole, to our galaxy, to our solar system, and to our planet. Examples of fine-tuning include the amount of dark matter and dark energy (minimally understood at this time) relative to all the observable lumpy things like galaxies; the ratio of matter and antimatter; mass density and space-energy density; speed of light; galaxy size and shape; our distance from the Milky Way’s center; the sun’s mass and metal content; atmospheric transparency . . . and so forth. These are measured, not just modeled, phenomena.

The theoretical physicist Freeman Dyson poignantly pondered these and the many other ‘coincidences’ and ‘initial conditions’, hinting at an omnipresent cosmic consciousness:
‘As we look out into the universe and identify the many accidents of physics and astronomy that have worked together to our benefit, it is almost as if the universe must in some sense have known we were coming.’
Perhaps as interestingly, humankind is indeed embedded in the universe, able to contemplate itself as an intelligent species; to reveal the features and evolution of the universe in which it resides as an observer; and to ponder its place and purpose in the universe, including its alternative futures.

The metaphysical implications of the anthropic principle are many. One points to agency and design by a supreme being. Some philosophers, like St. Thomas Aquinas (13th century) and later William Paley (18th century), have argued this case. However, some critics of this explanation have called it a ‘God of the gaps’ fallacy — pointing out what’s not yet explained and filling the holes in our knowledge with a supernatural being.

Alternatively, there is the hypothetical multiverse model, in which a multitude of universes each has its own unique initial conditions and physical laws. Even though not all universes within this model may be amenable to the evolution of advanced intelligent life, it’s assumed that a universe like ours had to be included among their vast number. This at least begins to speak to the German philosopher Martin Heidegger's question, ‘Why are there beings at all, instead of nothing?’