
25 September 2022

Where Do Ideas Come From?


By Keith Tidman

 

Just as cosmic clouds of dust and gas, spanning many light-years, serve as ‘nurseries’ of new stars, could it be that the human mind similarly serves as a nursery, where untold thought fragments coalesce into full-fledged ideas?

 

At its best, this metaphor for how creative ideas come into being offers a different way of looking at some of the most remarkable human achievements in the course of history.

 

These are things like Michelangelo’s inspired painting, sculpting, architecture, and engineering. The paradigm-shifting science of Niels Bohr and Max Planck developing quantum theory. The remarkable compositions of Mozart. The eternal triumvirate of Socrates, Plato, and Aristotle — whose intellectual hold remains to this day. The piercing insights into human nature memorably expressed by Shakespeare. The democratic spread of knowledge achieved through Gutenberg’s printing press. And so many more, of course.

 

To borrow from Newton (with his nod to the generations of luminaries who set the stage for his own contributions to science and mathematics), might humbler souls, too, learn to ‘stand on the shoulders of such giants’, even if in less remarkable ways? Yet still to reach beyond the rote? And, if so, how might that work?

 

I would say that, for a start, it is essential for the mind to be unconstrained by conformity and orthodox groupthink in viewing and reconceiving the world: a quest for patterns. The creative process must not be sapped by concern over not getting endeavours right the first or second or third time. Doubting ideas, putting them to the test through decomposition and recomposition, adds to the rigour of those that optimally survive exploration and scrutiny.


To find solutions that move significantly beyond the prevailing norms requires the mind to be undaunted, undistracted, and unflagging. Sometimes, how the creative process starts out — the initial conditions, as well as the increasing numbers of branching paths along which those conditions travel — greatly shapes eventual outcomes; other times, not. All part of the interlacing of analysis and serendipitous discovery. I think that tracing the genealogy of how ideas coalesce informs that process.

 

For a start, there’s a materialistic aspect to innovative thought, where the mind is demystified from some unmeasurable, ethereal other. That is, ideas are the product of neuronal activity in the fine-grained circuitry of the brain, where hundreds of trillions of synapses, acting like switches and routers and storage devices, sort out and connect thoughts and deliver clever solutions. Vastly more synapses, one might note, than there are stars in our Milky Way galaxy!

 

The whispering unconscious mind, present in reposed moments such as twilight or midnight or simply gazing into the distance, associated with ‘alpha brain waves’, is often where creative, innovative insights dwell, being readied to emerge. It’s where the critical mass of creative insights is housed, rising to challenge rigid intellectual canon. This activity finds a force magnifier in the ‘parallel processing’ of others’ minds during the frothy back and forth of collaborative dialogue.

 

The panoply of surrounding influences helps the mind set up stencils for transitioning inspiration into mature ideas. These influences may germinate from individuals in one’s own creative orbit, or as inspiration derived from the culture and community of which one is a part. Synthesising creative ideas across fields, as in multidisciplinary teams whose members complement one another, works effectively to kindle fresh insights and solutions.

 

Thoughts may be collaboratively exchanged within and among teams, pushing boundaries and inciting vision and understanding. It’s incremental, with ideas stepwise building on ideas in the manner famously acknowledged by Newton. Ultimately, at its best the process leads to the diffusion of ideas, across communities, as grist for others engaged in reflection and the generation of new takes on things. Chance happenings and spontaneous hunches matter, too, with blanks cooperatively filled in with others’ intuitions.

 

As an example, consider that, in a 1959 talk, the Nobel prize-winning physicist Richard Feynman challenged the world to shrink text to such an extent that the entire twenty-four-volume Encyclopedia Britannica could fit onto the head of a pin. (A challenge perhaps reminiscent of the whimsical question about ‘the number of angels fitting on the head of a pin’, at the time intended to mock medieval scholasticism.) Feynman himself believed there was no reason technology couldn’t be developed to accomplish the task. The challenge was met, through the scaling of nanotechnology, two and a half decades later. Never say never, when it comes to laying down novel intellectual markers.

 

I suggest that the most-fundamental dimension to the origination of such mind-stretching ideas as Feynman’s is curiosity — to wonder at the world as it has been, as it is now, and crucially as it might become. To doggedly stay on the trail of discovery through such measures as what-if deconstruction, reimagination, and reassembly. To ferret out what stands apart from the banal. And to create ways to ensure the right-fitting application of such reinvention.

 

Related is a knack for spotting otherwise secreted links between outwardly dissimilar and disconnected things and circumstances. Such links become apparent as a result of combining attentiveness, openness, resourcefulness, and imagination. A sense that there might be more to what’s locked in one’s gaze than what immediately springs to mind. Where, frankly, the trite expression ‘thinking outside-the-box’ is itself an ironic example of ‘thinking inside-the-box’.

 

Forging creative results from the junction of farsightedness and ingenuity is hard — to get from the ordinary to the extraordinary is a difficult, craggy path. Expertise and extensive knowledge are the metaphorical cosmic dust required to coalesce into the imaginatively original ideas sought.

 

A case in point is the technically grounded Edison, blessed with vision and critical-thinking competencies, experiencing a prolific string of inventive, life-changing eureka moments. Another example is Darwin, prepared to arrive at his long-marinating epiphany into the brave world of ‘natural selection’. Such incubation of ideas, venturing into uncharted waters, has proven immensely fruitful.

 

Thus, the ‘nurseries’ of thought fragments, coalescing into complex ideas, can provide insight into reality — and grist for future visionaries.

 

11 September 2022

The Uncaused Multiverse: And What It Signifies


By Keith Tidman

Here’s an argument that seems like commonsense: everything that exists has a cause; the universe exists; and so, therefore, the universe has a cause. A related argument goes on to say that the events that led to the universe must themselves ultimately originate from an uncaused event, bringing the regress of causes to a halt.

 

But is such a model of cosmic creation right?


Cosmologists assert that our universe was created by the Big Bang, an origin story developed by the Belgian physicist and Catholic priest Georges Lemaître in 1931. However, we ought not to confuse the so-called singularity — a tiny point of infinite density — and the follow-on Big Bang event with creation or causation per se, as if those events preceded the universe. Rather, they were early components of a universe that by then already existed, though in its infancy.


It’s often considered problematic to ask what came ‘before’ the Big Bang, given the event is said to have led to the creation of space and time (I address ‘time’ in some detail below). By extension, the notion of nothingness prior to the Big Bang is equally problematic, because, correctly defined, nothingness is the total, absolute absence of everything — even energy and space. Although cosmologists claim that quantum fluctuations, or short bursts of energy in space, allowed the Big Bang to happen, we are surely then obliged to ask what allowed those fluctuations to happen.


Yet, it’s generally agreed you can’t get something from nothing. Which makes it all the more meaningful that by nothingness, we are not talking about space that happens to be empty, but rather the absence of space itself.

 

I therefore propose, instead, that there has always been something, an infinity where something is the default condition, corresponding to the impossibility of nothingness. Further, nothingness is inconceivable, in that we are incapable of visualising nothingness. As soon as we attempt to imagine nothingness, our minds — the very act of thinking about it — turn the otherwise abstract ‘nothingness’ into the concreteness of ‘something’: a thing with features. We can’t resist that outcome, for we have no basis in reality and in experience that we can match up with this absolute absence of everything, including space, no matter how hard we try to picture it in our mind’s eye.

 

The notion of infinity in this model of being excludes not just a ‘first universe’, but likewise excludes a ‘first cause’ or ‘prime mover’. By its very definition, infinity has no starting point: no point of origin; no uncaused cause. That’s key; nothing and no one turned on some metaphorical switch, to get the ball rolling.


What I wish to convey is a model of multiple universes existing, each living and dying, within an infinitely bigger whole, where infinity excludes a first cause or first universe.


In this scenario, where something has always prevailed over nothingness, the topic of time inevitably raises its head, needing to be addressed. We cannot ignore it. But, I suggest, time appears problematic only because it’s misconceived. Time is not something that suddenly lurches out of the starting gate upon the occurrence of a Big Bang, in the manner cosmologists and philosophers have typically described it. Instead, when properly understood, time is best reflected in the unfolding of change.

 

The so-called ‘arrow of time’ traditionally appears to us in the three-way guise of the past leading to (causing) the present leading to the future. Allegorically, like a river. However, I propose that past and future are artificial constructs of the mind that simply give us a handy mechanism by which to live with the consequences of what we customarily call time: by that, meaning the consequences of change, and thus of causation. Accordingly, it is change through which time (temporal duration) is made visible to us; that is, the neurophysiological perception of change in human consciousness.

 

As such, only the present — a single, seamless ‘now’ — exists in the context of our experience. To be sure, future and past give us a practical mental framework for modeling a world in ways that conveniently help us to make sense of it on an everyday level, such as hypothesising about what might be ahead and chronicling events for possible retrieval in the ‘now’. However, future and past are figments, of which we have to make the best. ‘Time reflected as change’ fits the cosmological model described here.


A process called entropy lets us look at this time-as-change model on a cosmic scale. How? Well, entropy is the irresistible increase in net disorder — that is, evolving change — in a single universe. Despite spotty semblances of increased order in a universe, from the formation of new stars and galaxies to someone baking an apple pie, such localised instances of increased order are more than offset by the governing physical laws of thermodynamics.


These physical laws result in increasing net disorder, randomness, and uncertainty during the life cycle of a universe. That is, the arrow of change playing out as universes live and peter out because of heat death — or as a result of universes reversing their expansion and unwinding, erasing everything, only to rebound. Entropy, then, is really super-charged change running its course within each universe, giving us the impression of something we dub time.  
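
For readers who want the physics behind ‘net disorder’ spelled out, the textbook formulation runs roughly as follows, with Boltzmann’s entropy counting the number of microscopic arrangements, W, compatible with what we observe, and the second law applying to an isolated whole such as a universe:

S = k_B \ln W, \qquad \Delta S_{\text{total}} = \Delta S_{\text{local}} + \Delta S_{\text{elsewhere}} \geq 0

Local drops in entropy, whether a newly formed star or an apple pie, are repaid by larger increases elsewhere, which is the net disorder invoked above.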

 

I propose that in this cosmological model, the universe we inhabit is no more unique and alone than our solar system or beyond it our spiral galaxy, the Milky Way. The multiplicity of such things that we observe and readily accept within our universe arguably mirrors a similar multiplicity beyond our universe. These multiple universes may be regarded as occurring both in succession and in parallel, entailing variants of Big Bangs and entropy-driven ‘heat deaths’, within an infinitely larger whole of which they are a part.


In this multiverse reality of cosmic roiling, the likelihood of dissimilar natural laws from one universe to another, across the infinite many, matters as to each world’s developmental direction. For example, in both the science and philosophy of cosmology, the so-called ‘fine-tuning principle’ — known, too, as the anthropic principle — argues that with enough different universes, there’s a high probability some worlds will have natural laws and physical constants allowing for the kick-start and evolution of complex intelligent forms of life.


There’s one last consequence of the infinite, uncaused multiverse described here. Which is the absence of intent, and thus absence of intelligent design, when it comes to the physical laws and materialisation of sophisticated, conscious species pondering their home worlds. I propose that the fine-tuning of constants within these worlds does not undo the incidental nature of such reality.


The special appeal of this kind of multiverse is that it alone allows for the entirety of what can exist.


15 August 2022

The Tangled Web We Weave


By Keith Tidman
 

Kant believed, as a universal ethical principle, that lying was always morally wrong. But was he right? And how might we decide that?

 

The eighteenth-century German philosopher asserted that everyone had ‘intrinsic worth’: that people are characteristically rational and free to make their own choices. Lying, he believed, degrades that aspect of moral worth, withdrawing others’ ability to exercise autonomy and make logical decisions, as we presume they might in possessing truth. 

 

Kant’s ground-level belief in these regards was that we should value others strictly ‘as ends’, and never see people ‘as merely means to ends’. A maxim that’s valued and commonly espoused in human affairs today, too, even if people sometimes come up short.

 

The belief that judgements of morality should be based on universal principles, or ‘directives’, without reference to the practical outcomes, is termed deontology. For example, according to this approach, all lies are immoral and condemnable. There are no attempts to parse right and wrong, to dig into nuance. It’s blanket censure.

 

But it’s easy to think of innumerable drawbacks to the inviolable rule of wholesale condemnation. Consider how you might respond to a terrorist demanding the place and time of a meeting to be held by the intended target. Deontologists like Kant would consider even that protective lie immoral.

 

Virtue ethics, to this extent compatible with Kant’s beliefs, also says that lying is morally wrong. The reasoning, though, is that lying violates a core virtue: honesty. Virtue ethicists are concerned to protect people’s character, where ‘virtues’ — like fairness, generosity, compassion, courage, fidelity, integrity, prudence, and kindness — lead people to behave in ways others will judge morally laudable.

 

Other philosophers argue that, instead of turning to the rules-based beliefs of Kant and of virtue ethicists, we ought to weigh the (supposed) benefits and harms of a lie’s outcomes. This principle is called consequentialist ethics, mirroring the utilitarianism of the eighteenth/nineteenth-century philosophers Jeremy Bentham and John Stuart Mill, with its emphasis on the greatest happiness.

 

Advocates of consequentialism claim that actions, including lying, are morally acceptable when the results of behaviour maximise benefits and minimise harms. A tall order! A lie is not always immoral, as long as outcomes on net balance favour the stakeholders.

 

Take the case of your saving a toddler from a burning house. Perhaps, however, you believe in not taking credit for altruism, concerned about being perceived as conceitedly self-serving. You thus tell the emergency responders a different story about how the child came to safety, a lie that harms no one. Per Bentham’s utilitarianism, the ‘deception’ in this instance is not immoral.

 

Kant’s dyed-in-the-wool unforgiveness of lies invites examples that challenge the concept’s wisdom. Take the historical case of a Jewish woman concealed, from Nazi military occupiers, under the floorboards of a farmer’s cottage. The situation seems clear-cut, perhaps.

 

If grilled by enemy soldiers as to the woman’s whereabouts, the farmer lies rather than dooming her to being shot or sent to a concentration camp. The farmer chooses good over bad, echoing consequentialism and virtue ethics. His choice answers the question of whether the lie elicits a better outcome than the truth would. It would have been immoral not to lie.

 

Of course, the consequences of lying, even for an honourable person, may sometimes be hard to predict, differing in significant ways from what actually unfolds or from the greater good as subjectively judged. One may overvalue or undervalue benefits — nontrivial possibilities.

 

But maybe what matters most in gauging consequences are motive and goal. As long as the purpose is to benefit, not to beguile or harm, then trust remains intact — of great benefit in itself.

 

Consider two more cases as examples. In the first, a doctor knowingly gives a cancer-ridden patient and family false (inflated) hope for recovery from treatment. In the second, a politician knowingly gives constituents false (inflated) expectations of benefits from legislation he sponsored and pushed through.

 

The doctor and politician both engage in ‘deceptions’, but critically with very different intent: Rightly or wrongly, the doctor believes, on personal principle, that he is being kind by uplifting the patient’s despondency. And the politician, rightly or wrongly, believes that his hold on his legislative seat will be bolstered, convinced that’s to his constituents’ benefit.

 

From a deontological — rules-focused — standpoint, both lies are immoral. Both parties know that they mislead — that what they say is false. (Though both might prefer to say something like they ‘bent the truth’, as if more palatable.) But how about from the standpoint of either consequentialism or virtue ethics? 

 

The Roman orator Quintilian is supposed to have advised, ‘A liar should have a good memory’. Handy practical advice, for those who ‘weave tangled webs’, benign or malign, and attempt to evade being called out for duplicity.

 

And damning all lies seems like a crude, blunt tool, with no real value, being wholly unworkable outside Kant’s absolutist disposition toward the matter; no one could unswervingly meet that rigorous standard. Indeed, a study by the psychologist Robert Feldman claimed that people lie two to three times, in trivial and major ways, for every ten minutes of conversation!

 

However, consequentialism and virtue ethics have their own shortcomings. They leave us with the problematic task of figuring out which consequences and virtues matter most in a given situation, and tailoring our decisions and actions accordingly. No small feat.

 

So, in parsing which lies on balance are ‘beneficial’ or ‘harmful’, and how to arrive at those assessments, ethicists still haven’t ventured close to crafting an airtight model: one that dots all the i’s and crosses all the t’s of the ethics of lying. 


At the very least, we can say that, no, Kant got it wrong in overbearingly rebuffing all lies as immoral. Not allowing reasonable exceptions was folly. Yet, that may be cold comfort for some people, as lapses into excessive risk — weaving ever more tangled webs — court danger for unwary souls.


Meantime, while some more than others may feel they have been cut some slack, they might be advised to keep Quintilian’s advice close.




* ‘O what a tangled web we weave / When first we practice to deceive’, Sir Walter Scott, poem, ‘Marmion: A Tale of Flodden Field’.

 

24 July 2022

‘Philosophical Zombies’: A Thought Experiment

Zombies are essentially machines that appear human.

By Keith Tidman
 

Some philosophers have used the notion of ‘philosophical zombies’ in a bid to make a point about the source and nature of human consciousness. Have they been on the right track?

 

One thought experiment begins by hypothesising the existence of zombies who are indistinguishable in appearance and behaviour from ordinary people. These zombies match our comportment, seeming to think, know, understand, believe, and communicate just as we do. Or, at least, they appear to. You and a zombie could not tell each other apart. 

 

Except, there is one important difference: philosophical zombies lack conscious experience. Which means that if, for example, a zombie was to drop an anvil on its foot, it might give itself away by not reacting at all or, perhaps, very differently than normal. It would not have the inward, natural, individualised experience of actual pain the way the rest of us would. On the other hand, a smarter kind of zombie might know what humans would do in such situations and pretend to recoil and curse as if in extreme pain. 

 

Accordingly, philosophical zombies lead us to what’s called the ‘hard problem of consciousness’: how and why humans have subjective, felt experience at all – whereby each person produces his or her own inward reactions to stimuli, unlike everyone else’s. Such as the taste of a tart orange, the chilliness of snow, the discomfort of grit in the eye, the awe in gazing at ancient relics, the warmth of holding a squirming puppy, and so on.

 

Likewise, they lead us to wonder whether or not there are experiences (reactions, if you will) that humans subjectively feel in authentic ways that are the product of physical processes, such as neuronal and synaptic activity as regions of the brain fire up. Experiences beyond those that zombies only copycat, or are conditioned or programmed to feign, the way automatons might, lacking true self-awareness. If there are, then there remains a commonsense difference between ‘philosophical zombies’ and us.

 

Zombie thought experiments have been used by some to argue against the notion called ‘physicalism’, whereby human consciousness and subjective experience are considered to be based in the material activity of the brain. That is, an understanding of reality, revealed by philosophers of mind and neuroscientists who are jointly peeling back how the brain works as it experiences, imagines, ponders, assesses, and decides.

 

The key objection to such ‘physicalism’ is the contention that mind and body are separable properties, the venerable philosophical theory also known as dualism. And that by extrapolation, the brain is not (cannot be) the source of conscious experience. Instead, it is argued by some that conscious experience — like the pain from the dropped anvil or joy in response to the bright yellow of fields of sunflowers — is separate from brain function, even though natural law strongly tells us such brain function is the root of everyone's subjective experience.

 

But does the ‘philosophical zombie’ argument against brain function being the seed of conscious experience hold up?

 

After all, the argument that philosophical zombies’ clever posing would leave us unable to tell any difference between them and us seems problematic. Surely, there is insufficient evidence that the brain does not give rise to consciousness and individual experience. Yet, many people who argue against a material basis of experience, residing in brain function, rest their case on the notion that philosophical zombies are at least conceivable.

 

They argue that ‘conceivability’ is enough to make zombies possible. However, such arguments neglect that being conceivable is really just another expression for something ‘being imaginable’. Isn’t that the reason young children look under their beds at night? But is being imaginable actually enough to conclude something’s real-world existence? How many children actually come face to face with monsters in their closets? There are innumerable other examples, as we’ll get to momentarily, illustrating that all sorts of irrational, unreal things are imaginable, in the same sense that they’re conceivable, yet surely with no sound basis in reality.

 

Proponents of conceivability might be said to stumble into a dilemma: that of logical incoherence. Why so? Because, on the same supposedly logical framework, it is logically imaginable that garden gnomes come to life at night, or that fire-breathing dragons live on an as-yet-undiscovered island, or that the channels scoured on the surface of Mars are signs of an intelligent alien civilisation!

 

Such extraordinary notions are imaginable, but at the same time implausible, even nonsensical. Imagining something doesn’t make it so. These ‘netherworld notions’ simply don’t hold up. Philosophical zombies arguably fall into this group. 

 

Moreover, zombies wouldn’t (couldn’t) have free will; that is, free will and zombiism conflict with one another. Yes, zombies might fabricate self-awareness and free will convincingly enough to trick a casual, uncritical observer — but this would be a sham, insufficient to satisfy the conditions for true free will.

 

The fact remains that the authentic experience of, for example, peacefully listening to gentle waves splashing ashore cannot happen if the complex functionality of the brain were not to exist. A blob that only looks like a brain (as in the case for philosophical zombies) would not be the equivalent of a human brain if, critically, those functions were missing.


It is those brain functions that make consciousness and individualised sentience possible, contrary to theories like dualism, which assert the separation of mind from body. The emergence of mind from brain activity is the likeliest explanation of experienced reality. Contemporary philosophers of mind and neuroscientists would agree on this, even as they continue to work jointly on figuring out the details of how all that happens.


The idea of philosophical zombies existing among us thus collapses. Yet, very similar questions of mind, consciousness, sentience, experience, and personhood could easily pop up again. Likely not as recycled philosophical zombies, but instead, as new issues arising longer term as developments in artificial intelligence begin to match and perhaps eventually exceed the vast array of abilities of human intelligence.



 

10 July 2022

Religions as World History

Religious manuscripts in the fabulous library of Timbuktu. Such texts are a storehouse of ancient knowledge.
By Keith Tidman

Might it be desirable to add teaching about world religions to the history curriculum in schools?


Religions have been deeply instrumental in establishing the course of human civilisation, from the earliest stirrings of community and socialisation thousands of years ago. Yet, even teaching about the world’s religions has often been guardedly held at arm’s length, for concern instruction might lapse into proselytising. 


Or at least, for apprehension over instructors’ actions being seen as such.


The pantheon of religions subject to being taught spans the breadth: from Hinduism, Islam, Zoroastrianism, and Judaism to Buddhism, Christianity, and Sikhism, including indigenous faiths. The richness of their histories, the literary and sacred quality of their storytelling, the complexities and directive principles held among their adherents, and religions’ seminal influences upon the advancement of human civilisation are truly consequential.


This suggests that religions might be taught as a version of world history. Done so without exhortation, judgment, or stereotyping. And without violating religious institutions’ desire to be solely responsible for nurturing the pureness of their faith. School instruction ought to be straightforwardly scholarly and factual, that is, without presumption, spin, or bias. Most crucially, both subject-matter content and manner of presentation should avert transgressing the beliefs and faiths of students or their families and communities. And avoid challenging what theologians may consider axiomatic about the existence and nature of God, the word of authoritative figures, the hallowed nature of practices like petitionary prayer, normative canon, or related matters.

 

Accordingly, the aim of such an education would not be to evangelise or favour any one religion’s doctrine over another’s; after all, we might agree that choice in paving a child’s spiritual foundation is the province of families and religious leaders.


Rather, the vision I wish to offer here is a secularised, scholarly teaching of religious literacy in the context of broader world histories. Adding a philosophical, ideas-based, dialogue-based layer to the historical explanation of religions may ensure that content remains subject to the rationalism (critical reflection) seen in educational content generally: as, for example, in literature, art, political theory, music, civics, rhetoric, geography, classics, science and math, and critical thinking, among other fields of enquiry.

 

You see, there is, I propose, a kind of DNA inherent in religion. This is rooted in origin stories of course, but also revealed by their proclivity toward change achieved through a kind of natural selection not dissimilar to that of living organisms. An evolutionary change in which the faithful — individuals, whole peoples, and formal institutions — are the animating force. Where change is what’s constant. We have seen this dynamical process result in the shaping and reshaping of values, moral injunctions, institutions, creeds, epistemologies, language, organisation, orthodoxies, practices, symbols, and cultural trappings.

 

In light of this evolutionary change, a key supporting pillar of an intellectually robust, curious society is to learn — through the power of unencumbered ideas and intellectual exploration — what matters to the development and practice of the world’s many religions. The aim being to reveal how doctrine has been reinterpreted over time, as well as to help students shed blinkers to others’ faith, engage in free-ranging dialogue on the nature, mindset, and language of religion writ large, and assume greater respect and tolerance.


Democracies are one example of where teaching about religion as an academic exercise can take firmest hold. One goal would be to round out understanding, insights, skills, and even greater wisdom for effective, enlightened citizenship. Such a program’s aim would be to encompass all religions on a par with one another in importance and solemnity, including those spiritual belief systems practiced by maybe only a few — all such religious expression nonetheless enriched by the piloting of their scriptures, ideologies, philosophies, and primary texts.

 

The objective should be to teach religious tenets as neutral, academic concepts, rather than doctrinal matters of faith, the latter being something individuals, families, and communities can and should choose for themselves. Questions of whose moral code and doctrinal fundamentals one ought to adopt, and whose to shy away from, are thereby avoided — values-based issues regarded as improper for a public-education forum. Although history has shown that worship is a common human impulse around the world, promoting worship per se ought not be part of teaching about religions. That’s for another time and place.


Part and parcel, the instructional program should respect secularist philosophies, too. Like those individuals and families who philosophically regard faith and notions of transcendentalism as untenable, and see morality (good works) in humanistic terms. And like those who remain agnostically quizzical while grappling with what they suppose is the unknowability of such matters as higher-order creators, yet are at peace with their personal indecision about the existence of a deity. People come to these philosophical camps on equal footing, through deliberation and well-intentioned purposes — seekers of truth in their own right.

 

In paralleling how traditional world histories are presented, the keystone to teaching about religions should be intellectual honesty. Knowing what judiciously to put in and leave out, while understanding that tolerance and inclusion are core to a curious society and informed citizenry. Opening minds, factually and in a scholarly spirit, to a range of new perspectives as windows on reality.

 

As such, teaching’s focus should be rigorously academic and instructional, not devotional. In that regard, it’s imperative that schools sidestep breaching the exclusive prerogative of families, communities, and religious institutions to frame whose ‘reality’ — whose truth, morality, orthodoxy, ritual, holy ambit, and counsel — to live by.


These cautions notwithstanding, it seems to me that schools ought indeed seek to teach the many ways in which the world’s religions are a cornerstone to humanity’s cultural, anthropological, and civilisational ecology, and thus a core component of the millennia-long narratives of world history.

 

12 June 2022

The Diamond–Water Paradox


All that glitters is not gold! Or at least, is not worth as much as gold. Here, richly interwoven cubic crystals of light metallic golden pyrite – also known as fool’s gold – are rare but nowhere near as valuable. Why’s that?

By Keith Tidman


One of the notable contributions of the Enlightenment philosopher, Adam Smith, to the development of modern economics concerned the so-called ‘paradox of value’.

That is, the question of why one of the most-critical items in people’s lives, water, is typically valued far less than, say, a diamond, which may be a nice decorative bauble to flaunt but is considerably less essential to life? As Smith couched the issue in his magnum opus, titled An Inquiry Into the Nature and Causes of the Wealth of Nations (1776):
‘Nothing is more useful than water: but it will purchase scarcely anything; scarcely anything can be had in exchange for it. A diamond, on the contrary, has scarcely any use-value; but a very great quantity of other goods may frequently be had in exchange for it’.
It turns out that the question has deep roots, dating back more than two millennia, explored by Plato and Aristotle, as well as later luminaries, like the seventeenth-century philosopher John Locke and eighteenth-century economist John Law.

For Aristotle, the solution to the paradox involved distinguishing between two kinds of ‘value’: the value of a product in its use, such as water in slaking thirst, and its value in exchange, epitomised by a precious metal conveying the power to buy, or barter for, another good or service.

But, in the minds of later thinkers on the topic, that explanation seemed not to suffice. So, Smith came at the paradox differently, through the theory of the ‘cost of production’ — the expenditure of capital and labour. In many regions of the world, where rain is plentiful, water is easy to find and retrieve in abundance, perhaps by digging a well, or walking to a river or lake, or simply turning on a kitchen faucet. However, diamonds are everywhere harder to find, retrieve, and prepare.

Of course, that balance in value might dramatically tip in water’s favour in largely barren regions, where droughts may be commonplace — with consequences for food security, infant survival, and disease prevalence — with local inhabitants therefore rightly and necessarily regarding water as precious in and of itself. So context matters.

Clearly, however, for someone lost in the desert, parched and staggering around under a blistering sun, the use-value of water exceeds that of a diamond. ‘Utility’ in this instance is how well something gratifies a person’s wants or needs, a subjective measure. Accordingly, John Locke, too, pinned a commodity’s value to its utility — the satisfaction that a good or service gives someone.

For such a person dying of thirst in the desert, ‘opportunity cost’, or what they could obtain in exchange for a diamond at a later time (what’s lost in giving up the other choice), wouldn’t matter — especially if they otherwise couldn’t be assured of making it safely out of the broiling sand alive and healthy.

But what if, instead, that same choice between water and a diamond is reliably offered to the person every fifteen minutes rather than as a one-off? It now makes sense, let’s say, to opt for a diamond three times out of the four offers made each hour, and to choose water once an hour. Where access to an additional unit (bottle) of water each hour will suffice for survival and health, securing the individual’s safe exit from the desert. A scenario that captures the so-called ‘marginal utility’ explanation of value.

However, as with many things in life, the more water an individual acquires in even this harsh desert setting, with basic needs met, the less useful or gratifying the water becomes, referred to as the ‘law of diminishing marginal utility’. An extra unit of water gives very little or even no extra satisfaction.

According to ‘marginal utility’, then, a person will use a commodity to meet a need or want, based on a perceived hierarchy of priorities. In the nineteenth century, the Austrian economic theorist Eugen Ritter von Böhm-Bawerk provided an illustration of this concept, exemplified by a farmer owning five sacks of grain:
  • The farmer sets aside the first sack to make bread, for the basics of survival. 
  • He uses the second sack of grain to make yet more bread so that he’s fit enough to perform strenuous work around the farm. 
  • He devotes the third sack to feed his farm animals. 
  • The fourth he uses in distilling alcohol. 
  • And the last sack of grain the farmer uses to feed birds.
If one of those bags is inexplicably lost, the farmer will not then reduce each of the remaining activities by one-fifth, as that would thoughtlessly cut into higher-priority needs. Instead, he will stop feeding the birds, deemed the least-valuable activity, leaving intact the grain for the four more-valuable activities in order to meet what he deems greater needs.

Accordingly, the next least-productive (least-valuable) sack is the fourth, set aside to make alcohol, which would be sacrificed if another sack is lost. And so on, working backwards, until, in a worst-case situation, the farmer is left with the first sack — that is, the grain essential for feeding him so that he stays alive. This situation of the farmer and his five sacks of grain illustrates how the ‘marginal utility’ of a good is driven by personal judgement of least and highest importance, always within a context.
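
To make the farmer’s reasoning concrete, here is a minimal sketch in Python. The utility figures and their ranking are illustrative assumptions, not Böhm-Bawerk’s own numbers; the point is only that the lowest-valued use is always the first to be sacrificed:

# A toy model of the farmer’s choice: each sack goes to the highest-valued
# remaining use, so losing a sack always costs the least-valued activity first.
# The utility figures below are illustrative assumptions only.

uses = [
    ("bread for survival", 100),
    ("extra bread for strength to work", 80),
    ("feed for the farm animals", 60),
    ("distilling alcohol", 40),
    ("feeding the birds", 20),
]

def allocate(sacks):
    """Assign the available sacks to the highest-valued uses first."""
    ranked = sorted(uses, key=lambda u: u[1], reverse=True)
    return [name for name, _ in ranked[:sacks]]

for n in (5, 4, 3):
    print(n, "sacks ->", allocate(n))

# With four sacks, feeding the birds drops out first; with three, distilling
# alcohol goes next, mirroring the marginal-utility ordering described above.

The same structure captures the law of diminishing marginal utility: each additional sack is put to a less valuable use than the one before it.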

Life today provides contemporary instances of this paradox of value.

Consider, for example, how society pays individual megastars in entertainment and sports vastly more than, say, school teachers. This is so even though citizens insist they highly value teachers, entrusting them with educating the next generation for society’s future competitive economic development. Megastar entertainers and athletes are of course rare, while teachers are plentiful. According to diminishing marginal utility, acquiring one more teacher is easier and cheaper than acquiring one more top entertainer or athlete.

Consider, too, collectables like historical stamps and ancient coins. Far removed from their original purpose, these commodities no longer have use-value.
Yet, ‘a very great quantity of other goods may frequently be had in exchange’ for them, to evoke Smith’s diamond analogue. Factors like scarcity, condition, provenance, and subjective constructs of worth in the minds of the collector community fuel value, when swapping, selling, buying — or exchanging for other goods and services.

Of course, the dynamics of value can prove brittle. History has taught us that many times. Recall, for example, the exuberant valuing of tulips in seventeenth-century Holland. Speculation in tulips skyrocketed — with some varieties worth more than houses in Amsterdam — in what was surely one of the most-curious bubbles ever. Eventually, tulipmania came to a sudden end; however, whether the valuing of, say, today’s cryptocurrencies, which are digital, intangible, and volatile, will follow suit and falter, or compete indefinitely with dollars, euros, pounds, and renminbi, remains an unclosed chapter in the paradox of value.

Ultimately, value is demonstrably an emergent construct of the mind, whereby ‘knowledge’, as perhaps the most-ubiquitous commodity, poses a special paradoxical case. Knowledge has value simultaneously and equally in its use and in its exchange. In the former, that is in its use, knowledge is applied to acquire one’s own needs and wants; in the latter, that is in its exchange, knowledge becomes of benefit to others in acquiring their needs and wants. Is there perhaps a solution to Smith’s paradox here?

22 May 2022

Are There Limits to Human Knowledge?


By Keith Tidman

‘Any research that cannot be reduced to actual visual observation is excluded where the stars are concerned…. It is inconceivable that we should ever be able to study, by any means whatsoever, their chemical or mineralogical structure’.
A premature declaration of the end of knowledge, made by the French philosopher, Auguste Comte, in 1835.
People often take delight in saying dolphins are smart. Yet, does even the smartest dolphin in the ocean understand quantum theory? No. Will it ever understand the theory, no matter how hard it tries? Of course not. We have no difficulty accepting that dolphins have cognitive limitations, fixed by their brains’ biology. We do not anticipate dolphins even asking the right questions, let alone answering them.

Some people then conclude that for the same reason — built-in biological boundaries of our species’ brains — humans likewise have hard limits to knowledge. And that, therefore, although we acquired an understanding of quantum theory, which has eluded dolphins, we may not arrive at solutions to other riddles. Like the unification of quantum mechanics and the theory of relativity, both effective in their own dominions. Or a definitive understanding of how and from where within the brain that consciousness arises, and what a complete description of consciousness might look like.

The thinking isn’t that such unification of branches of physics is impossible or that consciousness doesn’t exist, but that supposedly we’ll never be able to fully explain either one, for want of natural cognitive capacity. It’s argued that because of our allegedly ill-equipped brains, some things will forever remain a mystery to us. Just as dolphins will never understand calculus or infinity or the dolphin genome, human brains are likewise closed off from categories of intractable concepts.

Or at least, as it has been said.

Some who hold this view have adopted the self-describing moniker ‘mysterians’. They assert that, as members of the animal kingdom, Homo sapiens are subject to the same kinds of insuperable cognitive walls. And that it is hubris, self-deception, and pretension to proclaim otherwise. But there’s a needless resignation in that stance.

After all, the fact that early hominids did not yet understand the natural order of the universe does not mean that they were ill-equipped to eventually acquire such understanding, or that they were suffering so-called ‘cognitive closure’. Early humans were not fixed solely on survival, subsistence, and reproduction, their existence defined by a daily grind over the millennia in a struggle to hold onto the status quo.

Instead, we were endowed from the start with a remarkable evolutionary path that got us to where we are today, and to where we will be in the future. With dexterously intelligent minds that enable us to wonder, discover, model, and refine our understanding of the world around us. To ponder our species’ position within the cosmic order. To contemplate our meaning, purpose, and destiny. And to continue this evolutionary path for however long our biological selves ensure our survival as opposed to extinction at our own hand or by external factors.

How is it, then, that we even come to know things? There are sundry methods, including (but not limited to) these: Logical, which entails the laws (rules) of formal logic, as exemplified by the iconic syllogism where a conclusion follows from premises. Semantic, which entails the denotative and connotative definitions and context-based meanings of words. Systemic, which entails the use of symbols, words, and operations/functions related to the universally agreed-upon rules of mathematics. And empirical, which entails evidence, information, and observation that come to us through our senses and through tools like those described below, used to confirm, fine-tune, or discard hypotheses.

Sometimes the resulting understanding is truly paradigm-shifting; other times it’s progressive, incremental, and cumulative — contributed to by multiple people assembling elements from previous theories, not infrequently stretching over generations. Either way, belief follows — that is, until the cycle of reflection and reinvention begins again. Even as one theory is substituted for another, we remain buoyed by belief in the commonsensical fundamentals of attempting to understand the natural order of things. Theories and methodologies might both change; nonetheless, we stay faithful to the task, embracing the search for knowledge. Knowledge acquisition is thus fluid, persistently fed by new and better ideas that inform our models of reality.

We are aided in this intellectual quest by five baskets of ‘implements’: Physical devices like quantum computers, space-based telescopes, DNA sequencers, and particle accelerators. Tools for smart simulation, like artificial intelligence, augmented reality, big data, and machine learning. Symbolic representations, like natural languages (spoken and written), imagery, and mathematical modeling. The multiplicative collaboration of human minds, functioning like a hive of powerful biological parallel processors. And, lastly, the nexus among these implements.

This nexus among implements continually expands, at a quickening pace; we are, after all, consummate crafters of tools and collaborators. We might fairly presume that the nexus will indeed lead to an understanding of the ‘brass ring’ of knowledge, human consciousness. The cause-and-effect dynamic is cyclic: theoretical knowledge driving empirical knowledge driving theoretical knowledge — and so on indefinitely, part of the conjectural froth in which we ask and answer the tough questions. Such explanations of reality must take account, in balance, of both the natural world and metaphysical world, in their respective multiplicity of forms.

My conclusion is that, uniquely, the human species has boundless cognitive access rather than bounded cognitive closure. Such that even the long-sought ‘theory of everything’ will actually be just another mile marker on our intellectual journey to the next theory of everything, and the next one — all transient placeholders, extending ad infinitum.

There will be no end to curiosity, questions, and reflection; there will be no end to the paradigm-shifting effects of imagination, creativity, rationalism, and what-ifs; and there will be no end to answers, as human knowledge incessantly accrues.

09 May 2022

Peering into the World's Biggest Search Engine


If you type “cat” into Google, some of the top results are for Caterpillar machinery


By Martin Cohen and Keith Tidman


How does Google work? The biggest online search engine has long become ubiquitous in everyday personal and professional life, accounting for an astounding 70 percent of searches globally. It’s a trillion-plus-dollar company with the power to influence, even disrupt, other industries. And yet exactly how it works, beyond broad strokes, remains somewhat shrouded.

So, let’s pull back the curtain a little, if we can, to try observing the cogs whirring behind that friendly webpage interface. At one level, Google’s approach is every bit as simple as imagined. An obvious instance being that a lot of factual queries often simply direct you to Wikipedia on the upper portion of the first displayed page.

Of course, every second, Google performs extraordinary feats, such as searching billions of pages in the blink of an eye. However, that near-instantaneity on the computing dimension is, these days, arguably the easiest to get a handle on — and something we have long since taken for granted. What’s more nuanced is how the search engine appears to evaluate and weigh information.

That’s where one begins to glimpse what motivates the rankings: possibly prioritizing commercial partners, and on occasion seeming to favor particular social and political messages. Or so it seems. Given the stakes in company revenue, those relationships are an understandable approach to running a business. Indeed, it has been reported that some 90% of earnings come from keyword-driven, targeted advertising.

It’s no wonder Google plays up the idea that its engineers are super-smart at what they do. What Google wants us to understand is that its algorithm is complex and constantly changing, for the better. We are allowed to know that when Google decides which search results are most important, pages are ranked by how many other sites link to them — with those sites in turn weighted in importance by their own links.
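
That link-weighted idea can be sketched in a few lines of code. Below is a toy, PageRank-style iteration in Python; the miniature link graph and damping factor are illustrative assumptions, and Google’s production ranking is of course far more elaborate and largely undisclosed:

# A minimal, illustrative link-based ranking in the spirit of PageRank,
# not Google’s actual algorithm. Each page repeatedly passes a share of its
# score along its outbound links, so links from well-linked pages count more.

links = {
    "A": ["B", "C"],   # page A links to pages B and C
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:          # ignore dangling pages in this sketch
                continue
            share = rank[page] / len(outlinks)   # split this page’s score
            for target in outlinks:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

print(pagerank(links))   # pages linked to by well-linked pages score higher

In this toy graph, page C ends up with the highest score because every other page links to it, and C’s own link in turn boosts A: that is the sense in which linking sites are themselves ‘weighted in importance by their own links’.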

It’s also obvious that Google performs common-sense concordance searches on the exact text of your query. If you straightforwardly ask, “What is the capital of France?” you will reliably and just as straightforwardly be led to a page saying something like “Paris is the capital of France.” All well and good, and unpretentious, as far as those sorts of one-off queries go.

But what might raise eyebrows among some Google users is the placing of commercial sites above or at least sprinkled amidst factual ones. If you ask, “What do cats eat?” you are led to a cat food manufacturer’s website close to the top of the page, with other informational links surrounding it as if to boost credibility. And if you type “cat” into Google, the links that we recently found near the top of the first page took us not to anything furry and feline – but to clunking, great Caterpillar machinery.

Meanwhile, take a subject that off and on over the last two-plus years has been highly polarizing and politicized — rousing ire, so-called conspiracy theories, and presumptuousness that cleave society across several fronts — like the topical query: “Do covid vaccines have side effects?” Let’s put aside for a moment what you might already be convinced is the answer, either way — whether a full-throated yea or nay.

As a general matter, people might want search engines to reflect the range of context and views — to let searchers ultimately do their own due diligence regarding conflicting opinions. Yet, the all-important first page at Google started, at the time of this particular search, with four sites identified as ads, followed by several other authoritative links, bunched under ‘More results’, pointing to the vaccine indeed being safe. So you’ll be reassured; but have you been fully informed, given enough background to make up your own mind?

When we put a similar query to Yahoo!, for comparison, the results were a bit more diverse. Sure, two links were from one of the same sources as Google’s, but a third link was quite a change of pace: a blog suggesting there might be some safety issues, including references to scholarly papers to make sense of the data and conclusions. Might one, in the spirit of avoiding prejudgment, conclude that diversity of information better honours searchers’ agency?

Some people suggest that the technology at Google is rooted in its procedural approach to the science behind it. As a result, it seems that user access to the best information may play second fiddle to mainstream opinion and commercialization, supported, as it has been said, by harvested user data. Yet, isn’t all that the adventurist economic and business model many countries embrace in the name of individual agency and national growth?

Google has been instrumental, of course, in globally democratising access to information in ways undreamt of by history’s cleverest minds. Impressively vast knowledge at the world’s fingertips. But as author Ken Auletta said, “Naïveté and passion make a potent mix; combine the two with power and you have an extraordinary force, one that can effect great change for good or for ill.” Caveat emptor, in other words, despite what one might conclude are good intentions.

Might the savvy technical and business-theoretical minds at Google therefore continue parsing company strategies and search outcomes, as they inventively reshape the search engine’s operational model? And will that continual reinvention help to validate users’ experiences in quests for information intended not only to provide definitive answers but to inform users’ own prioritization and decision-making?

Martin Cohen investigates ‘How Does Google Think’ in his new book, Rethinking Thinking: Problem Solving from Sun Tzu to Google, which was published by Imprint Academic last month.

17 April 2022

What Is Love? An Inquiry Reexamined


By Keith Tidman


Someone might say, I love my wife or husband. I love my children and grandchildren. I love my extended family. I love my friends.

All the while, that same someone might also avidly announce, I love…

Conversation. Mozart’s music. Cherry blossoms. Travel abroad. Ethnic cuisine. Democracy. Memories of parents. Sipping espresso. Paradoxes. Animal kingdom. Mysteries of quantum theory. Hiking trails. Absence of war. A baby’s eye contact. Language of mathematics. Theatre performances. History. African savanna. Freedom. Daydreaming on the beach. Loving love. And, yes, philosophy.

We’re free to fill in the blanks with endless personal possibilities: people, events, occasions, experiences, and things we care deeply about, which happen providentially to get elevated by their singular meaning to us on an individual level. The neurons that get triggered in each of us, as-yet unexplainably, make what you uniquely experience by way of love different from what everyone else feels — the subjectivism of sensation.

A hazard in applying the word ‘love’ across manifold dimensions like this is that we may start to cloud the concept, making it harder to distinguish love from competitor sentiments — such as simply ‘liking’, ‘fancying a lot’, or maybe ‘yearning’. Uncertainty may intrude as we bracket sentiments. The situation is that love itself comes in many different kinds, steeped in a historical, cultural, spiritual, scientific, rational, and emotional melting pot. Three of the best-known semantic variants for love, whose names originate from Greek, descend to us from early philosophers.

They are Eros (pictured above on his pedestal in London), which is intensely passionate, romantic, and sexual love (famously fêted by the arts). Intended also for species proliferation. Agape, which is a transcendent, reciprocated love for God and for all humanity, sometimes couched as a form of brotherly love. And philia, which is unconditional love for family and friends, and even one’s country. As well as ‘companionate’ love enjoyed, for example, by a couple later in life, when passion’s embers may have cooled. Philia evokes a mix of virtues, like integrity, fairness, parity, and acquaintance.

Those terms and definitions imply a rational tidiness that may not be deserved when it comes to the everyday, sometimes-fickle interpretation of love: when and how to appropriately apply the word. The reality is that people tend to parse ‘love’ along sundry lengths, widths, and heights, which can be subjective, even idiosyncratic, and often self-servingly changeable to suit the moment and the mood. Individual, family, and community values are influential here.

Love may even be outright ineffable: that is, beyond logical explanation and the search for the source of societal norms. Enough so, perhaps, to make the likes of Aristotle, St. Augustine, Friedrich Nietzsche, Arthur Schopenhauer, Bertrand Russell, and Simone de Beauvoir — among other romantics and misanthropes who thought about and critiqued the whimsicality of love — turn in their graves.

At the very least, we know that love, in its different kinds, can be heady, frenzied stuff, seemingly hard-wired, primal, and distractingly preoccupying. Of course, the category of love might shift — progressively, or abruptly — in accordance with evolving experiences, interactions, and relationships, as well as the sprouting of wholly novel circumstances. Arguably the biology, chemistry, and synapses of the brain, creating the complexities of mind, deterministically calling the shots.

Some contend that the love that others may claim to feel is not actually love, but something akin to it: either friendship, or impassioned obsession, or veneration, or lust, or appreciation of companionship, or esteem, or simply liking someone or something a whole lot. Distinctions between love and alternative sensations, as they wax and wane over time, are for the individual person to decide. We correctly accede to this element of individuality.

Love, as with all the other emotions just mentioned, has a flipside. Together, opposites make wholes, serving as the source of what’s possible. Along with love can come dispiriting negatives, like possessiveness, insecurity, distrust, noxiousness, suspicion, sexist hindrances, jealousy, and objectification.

There can be a tension between these latter shadowy forces and such affirmative forces as bright-spiritedness, cleverness, romanticism, enchantment, physical attractiveness, empathy, humour, companionability, magnetism, kindness, and generosity. Such a tension usually lessens with the passage of time, as the distinctions between the good and the bad become less hazy and easier to sort out.

There’s another form of tension, too: Individual values — acquired through personal reflection, and through family and community convictions, for example — may bump up against the stressors of love. Among love’s influences is sometimes having to rethink values. To refine norms in order to accommodate love. There may be justifiable reasons to believe we gain when we inspiringly and aspiringly love someone or something.

The gradations of moral and behavioural values challenge our autonomy — how we calculatedly manage life — as the effects of love invade our moment-to-moment decision-making. Choices become less intentional and less free, as we deferentially strive to preserve love. We might anxiously attempt to evade what we perceive, rightly or misguidedly, as the vulnerabilities of love.

When all is weighed, love appears wittingly compelling: not to cosset self-seeking indulgences, but rather to steer us toward a life affectionately moored to other people and experiences that serve as the fount of inspiration and authentic meaning. In this way, rationality and love become mutually inclusive.