19 June 2022

Modesty (a Poem)

by Carl-Theodor Olivet *

Henri Matisse, Branch of Flowers, 1906.

I love you and give myself to you,
Yet you must promise me one thing;
Do not betray too much of you to me,
It could break our happiness in pieces …

The way that you consider and joke and risk it,
All have their origins, I know,
Your clear look, the vague fear,
The cheeky mockermouth.

Your magic, which tenderly plays about you,
Just as it is, is entirely preserved for me,
The days, the nights, year after year,
It shall shape for me delightfully.

Do not forewarn me that I’ve gone too far
With my sweet daydreams,
I’m treating myself to a piece of eternity
And would not want to squander any part of you.

Do not betray yourself, lest you manoeuvre
into place to thwart me in some way, wickedly,
I do not analyse either
Whether you will always remain so unique.

Don’t speak in reply; that would be a mistake,
Permit me to paint the greatest bliss,
It will yet be, as mother says:
There comes a day to pay one’s wages.

If it’s over, then say it coldly, boldly.
That has broken many hearts, indeed …
And would tear me likewise from my dream.
Surely, I will get over it. 

 

* Theo Olivet is an author, artist, and retired judge in Schleswig-Holstein.



Bescheidenheit (ein Gedicht)

von Carl-Theodor Olivet

Henri Matisse, Blumenstrauch, 1906.


Ich liebe dich und geb mich dir,
doch eines musst du mir versprechen:
verrat mir nicht zuviel von dir,
das könnte unser Glück zerbrechen ...

Wie du bedenkst und scherzt und wagst,
das hat ja alles seinen Grund,
dein heller Blick, die vage Furcht,
dein kecker Spöttermund ...

Dein Zauber, der dich zart umweht,
nur so bleibt er mir ganz erhalten,
die Tage, Nächte, Jahr für Jahr,
soll er mir wonnevoll gestalten.

Mahne mich nicht, ich ging zu weit
mit meinen süßen Tagesträumen,
ich gönn mir ein Stück Ewigkeit
und möchte nichts von dir versäumen:

Verrat dich nicht, falls du taktierst
und bös mit was mich hintertreibst,
ich hinterfrage ja auch nicht,
ob immer du so einzig bleibst.

Sag nichts dazu, es wär verkehrt,
lass mich mir größtes Glück ausmalen,
es wird schon sein, wie Mutter sagt:
Man müsse einst für alles zahlen.

Wenn's nicht mehr geht, sag's kalt und dreist,
so etwas bricht zwar manche Herzen ...
wenn's mich auch aus dem Traume reißt,
ich werde es gewiss verschmerzen.


* Theo Olivet ist ein Autor, Künstler und pensionierter Richter in Schleswig-Holstein.

12 June 2022

The Diamond–Water Paradox


All that glitters is not gold! Or at least, is not worth as much as gold. Here, richly interwoven cubic crystals of light metallic golden pyrite – also known as fool’s gold – are rare but nowhere near as valuable. Why’s that?

By Keith Tidman


One of the notable contributions of the Enlightenment philosopher Adam Smith to the development of modern economics concerned the so-called ‘paradox of value’.

That is, the question of why one of the most-critical items in people’s lives, water, is typically valued far less than, say, a diamond, which may be a nice decorative bauble to flaunt but is considerably less essential to life. As Smith couched the issue in his magnum opus, An Inquiry Into the Nature and Causes of the Wealth of Nations (1776):
‘Nothing is more useful than water: but it will purchase scarcely anything; scarcely anything can be had in exchange for it. A diamond, on the contrary, has scarcely any use-value; but a very great quantity of other goods may frequently be had in exchange for it’.
It turns out that the question has deep roots, dating back more than two millennia, explored by Plato and Aristotle, as well as later luminaries, like the seventeenth-century philosopher John Locke and eighteenth-century economist John Law.

For Aristotle, the solution to the paradox involved distinguishing between two kinds of ‘value’: the value of a product in its use, such as water in slaking thirst, and its value in exchange, epitomised by a precious metal conveying the power to buy, or barter for, another good or service.

But, in the minds of later thinkers on the topic, that explanation seemed not to suffice. So, Smith came at the paradox differently, through the theory of the ‘cost of production’ — the expenditure of capital and labour. In many regions of the world, where rain is plentiful, water is easy to find and retrieve in abundance, perhaps by digging a well, or walking to a river or lake, or simply turning on a kitchen faucet. However, diamonds are everywhere harder to find, retrieve, and prepare.

Of course, that balance in value might dramatically tip in water’s favour in largely barren regions, where droughts may be commonplace — with consequences for food security, infant survival, and disease prevalence — with local inhabitants therefore rightly and necessarily regarding water as precious in and of itself. So context matters.

Clearly, however, for someone lost in the desert, parched and staggering around under a blistering sun, the use-value of water exceeds that of a diamond. ‘Utility’ in this instance is how well something gratifies a person’s wants or needs, a subjective measure. Accordingly, John Locke, too, pinned a commodity’s value to its utility — the satisfaction that a good or service gives someone.

For such a person dying of thirst in the desert, ‘opportunity cost’, or what they could obtain in exchange for a diamond at a later time (what’s lost in giving up the other choice), wouldn’t matter — especially if they otherwise couldn’t be assured of making it safely out of the broiling sand alive and healthy.

But what if, instead, that same choice between water and a diamond is reliably offered to the person every fifteen minutes rather than as a one-off? It now makes sense, let’s say, to opt for a diamond three times out of the four offers made each hour, and to choose water once an hour, where access to an additional unit (bottle) of water each hour will suffice for survival and health, securing the individual’s safe exit from the desert. This scenario captures the so-called ‘marginal utility’ explanation of value.

However, as with many things in life, the more water an individual acquires in even this harsh desert setting, with basic needs met, the less useful or gratifying the water becomes, referred to as the ‘law of diminishing marginal utility’. An extra unit of water gives very little or even no extra satisfaction.
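The desert scenario can be put into a small sketch. The utility numbers below are invented purely for illustration; the only assumption carried over from the text is that each extra bottle of water gratifies less than the last, while a diamond’s exchange value stays fixed:

```python
# Illustrative (made-up) marginal utility of successive bottles of water
# per hour for the stranded traveller: each extra bottle adds less.
marginal_utility_water = [100, 10, 2, 1]   # 1st, 2nd, 3rd, 4th bottle
utility_diamond = 50                        # fixed exchange value

choices = []
bottles = 0
for offer in range(4):                      # four offers per hour
    # Take water only while its *marginal* utility beats the diamond's.
    if bottles < len(marginal_utility_water) and \
       marginal_utility_water[bottles] > utility_diamond:
        choices.append("water")
        bottles += 1
    else:
        choices.append("diamond")

print(choices)  # -> ['water', 'diamond', 'diamond', 'diamond']
```

With these toy numbers, the traveller takes water once and diamonds three times each hour, exactly the split described above: once basic needs are met, the diminishing marginal utility of water makes the diamond the better pick.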

According to ‘marginal utility’, then, a person will use a commodity to meet a need or want, based on perceived hierarchy of priorities. In the nineteenth century, the Austrian economic theorist Eugen Ritter von Böhm-Bawerk provided an illustration of this concept, exemplified by a farmer owning five sacks of grain:
  • The farmer sets aside the first sack to make bread, for the basics of survival. 
  • He uses the second sack of grain to make yet more bread so that he’s fit enough to perform strenuous work around the farm. 
  • He devotes the third sack to feed his farm animals. 
  • The fourth he uses in distilling alcohol. 
  • And the last sack of grain the farmer uses to feed birds.
If one of those sacks is inexplicably lost, the farmer will not then reduce each of the remaining activities by one-fifth, as that would thoughtlessly cut into higher-priority needs. Instead, he will stop feeding the birds, deemed the least-valuable activity, leaving intact the grain for the four more-valuable activities in order to meet what he deems greater needs.

Accordingly, the next least-productive (least-valuable) sack is the fourth, set aside to make alcohol, which would be sacrificed if another sack is lost. And so on, working backwards, until, in a worst-case situation, the farmer is left with the first sack — that is, the grain essential for feeding him so that he stays alive. This situation of the farmer and his five sacks of grain illustrates how the ‘marginal utility’ of a good is driven by personal judgement of least and highest importance, always within a context.
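Böhm-Bawerk’s illustration reduces to a simple rule: rank the uses by priority, and a lost sack always comes out of the lowest-ranked use still funded. A minimal sketch (the function names are ours, not Böhm-Bawerk’s):

```python
# Böhm-Bawerk's five sacks: uses ranked from most to least essential.
# Losing sacks sacrifices the least-valued uses first.
USES_BY_PRIORITY = [
    "bread for survival",
    "bread for strength to work",
    "feed for farm animals",
    "distilling alcohol",
    "feeding the birds",
]

def allocate(sacks: int) -> list[str]:
    """Uses the farmer can still fund with a given number of sacks."""
    return USES_BY_PRIORITY[:sacks]

def marginal_use(sacks: int) -> str:
    """The use served by the last (marginal) sack - the only thing
    that one more or one fewer sack actually changes."""
    return USES_BY_PRIORITY[sacks - 1]

print(allocate(4))      # with four sacks, bird-feeding is dropped
print(marginal_use(4))  # -> 'distilling alcohol'
```

The point the code makes concrete is that a sack’s ‘value’ is not intrinsic: it is the value of the marginal use it funds, which shifts as the stock shrinks or grows.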

Life today provides contemporary instances of this paradox of value.

Consider, for example, how society pays individual megastars in entertainment and sports vastly more than, say, school teachers. This is so even though citizens insist they highly value teachers, entrusting them with educating the next generation for society’s future competitive economic development. Megastar entertainers and athletes are of course rare, while teachers are plentiful. According to diminishing marginal utility, acquiring one more teacher is easier and cheaper than acquiring one more top entertainer or athlete.

Consider, too, collectables like historical stamps and ancient coins. Far removed from their original purpose, these commodities no longer have use-value. Yet ‘a very great quantity of other goods may frequently be had in exchange’ for them, to evoke Smith’s diamond analogue. Factors like scarcity, condition, provenance, and subjective constructs of worth in the minds of the collector community fuel value when swapping, selling, buying — or exchanging for other goods and services.

Of course, the dynamics of value can prove brittle. History has taught us that many times. Recall, for example, the exuberant valuing of tulips in seventeenth-century Holland. Speculation in tulips skyrocketed — with some varieties worth more than houses in Amsterdam — in what was surely one of the most-curious bubbles ever. Eventually, tulipmania came to a sudden end; however, whether the valuing of, say, today’s cryptocurrencies, which are digital, intangible, and volatile, will follow suit and falter, or compete indefinitely with dollars, euros, pounds, and renminbi, remains an unclosed chapter in the paradox of value.

Ultimately, value is demonstrably an emergent construct of the mind, whereby knowledge, as perhaps the most-ubiquitous commodity, poses a special paradoxical case. Knowledge has value simultaneously and equally in its use and in its exchange. In the former, that is in its use, knowledge is applied to acquire one’s own needs and wants; in the latter, that is in its exchange, knowledge becomes of benefit to others in acquiring their needs and wants. Is there perhaps a solution to Smith’s paradox here?

05 June 2022

Picture Post #75: The Calm of the Library



'Because things don’t appear to be the known thing; they aren’t what they seemed to be
neither will they become what they might appear to become.'


Posted by Martin Cohen


What makes this image particularly striking to me is the quiet and earnest way the figures regard the books even as they stand amidst a scene of utter devastation. The man on the right, nonchalant, hands in pockets, browses the shelves, seemingly oblivious to the collapsed roof just behind him; while another visitor to the library (in the left background) is clearly lost in the pages of one of his finds…

So, what’s the back story? It is that on the evening of 27 September 1940, the Luftwaffe dropped 22 incendiary bombs on London's Holland House - a rambling Jacobean country house dating back to 1605 - destroying all of it with the exception of the east wing and, incredibly, almost all of the library.

The picture was originally used to make a propaganda point about the British shrugging off the Blitz, and that’s fine too, but today, stripped of its wartime context, I think it contains a more appealing message about how books and ideas can take us into a different world.

29 May 2022

Theological Self-Assembly

by Thomas Scarborough

The Swiss Reformed theologian Emil Brunner wrote, ‘World-views may be grouped in pairs.’ His original word for ‘pairs’ was Gegensatzpaare—pairs of opposites.

We see this in every dictionary of philosophy: realism vs. anti-realism, theism vs. atheism, altruism vs. egoism, and so on. Brunner himself provided these examples: materialism–idealism, pantheism–deism, rationalism–sensualism, dogmatism–scepticism, and monism–dualism (pluralism).

When Brunner made this observation, in 1937, he could on the face of it have meant one of two things. He could have meant that truth will always have its opposite—or that world-views are merely manifestations of our dualistic thinking. That is, they are mere phenomena, which have little if anything to do with the merits or demerits of the world-views themselves.

In fact Brunner meant the former—namely, that truth will always have its opposite—yet with an interesting twist. All of our world-views are untrue, he wrote, while faith is true: ‘If faith is lacking, a world-view is necessary.’ Further, world-views are theoretical, while faith is responsible. ‘We understand existence from the point of view of responsibility.’ Faith, therefore, belongs to a category all of its own, far from the realm of world-views.

This raises a thousand questions. Is this not intellectual suicide? Are world-views a true opposite of faith? Are world-views without responsibility? Do they not drive our actions in every case? And if faith is not about world-views, on what does one ground it? Of course, there is the question of definitions, too. How does one define faith, and world-views? *

More than this, however, Brunner apparently did not see that grouping world-views in pairs of opposites is especially theological. Theologians themselves are past masters at it. The language of personal salvation suppresses the language of social commitment; the language of community excludes the language of justification by faith; the language of religious values marginalises the language of the glory of God, and so on, and vice versa.

In theology, we find pairs of opposites such as liberalism–conservatism, immanence–transcendence, legalism–antinomianism, premillennialism–postmillennialism, and so on. These are not peripheral doctrines, but belong to theology’s core, and demonstrate that theological language has a natural, powerful tendency to exclude other theological concepts.

Brunner himself was aware that the separation of faith and world-views could be problematic. He wrote, ‘Can we, for instance, understand the spirit without ideas, norms, values, laws of thought, logos? There can never be any question of depreciating the reason, of hostility to reason, or of setting up a plea for irrationalism.’ Yet in that case, how may one interpret world-views as opposites of faith?

Theologians generally explain theological opposites in terms of the one-sidedness of their opponents—alternatively, they hold that their opponents are just plain wrong, or even apostate. Yet what if we simply have a natural tendency to generate pairs of opposites, regardless of truth?

It seems much like the honey-bee that remarks to another honey-bee that their colony has developed a most marvellous system: the exquisite selection of nectars and pollens, navigational skills second to none, with storage most wonderfully engineered. The other bee observes that the same is true, in fact, for all bees in every part of the world, over all of the known history of bees.

There is a powerful case in philosophy, not so much for our tendency to generate pairs of opposites, as our inability not to. The philosopher and logician Gottlob Frege described it as ‘the rule of words over the human mind.’ The literary critic and philosopher George Steiner wrote, ‘It is language that speaks, not, or not primordially, man.’ And the linguists Wilhelm Kamlah and Paul Lorenzen wrote, ‘We are thoroughly dominated by an unacknowledged metaphysics.’

That is, we are not free to think as we please. Whatever we turn out is the result of the dictatorship of ideas over the human mind—and over good sense, we might add. If this is the case in theology, then we have a theological crisis. Our cherished beliefs are merely the product of something more powerful than us, which holds us in its grip. How, then, to escape?

With our increasing awareness of various kinds of opposites—markedness, priority thoughts, term weighting, and otherness, among other things—it seems time that theologians should ask what is going on. For what reason are major concepts, both philosophical and theological, grouped in pairs of opposites? Is it, as is usually held, that some are true and others not? Is it that faith belongs to a category all of its own?

Or could it be that theological tenets of various kinds simply self-assemble?



* Emil Brunner is difficult to interpret, perhaps due to no fault of interpreters. A seminal statement of his: ‘Our nous therefore is the vessel but not the source of the Word of God. Where it receives the Word of God it is called: faith.’ Bearing in mind that Brunner is not so much the subject here as Gegensatzpaare.

22 May 2022

Are There Limits to Human Knowledge?


By Keith Tidman

‘Any research that cannot be reduced to actual visual observation is excluded where the stars are concerned…. It is inconceivable that we should ever be able to study, by any means whatsoever, their chemical or mineralogical structure’.
A premature declaration of the end of knowledge, made by the French philosopher Auguste Comte in 1835.

People often take delight in saying dolphins are smart. Yet, does even the smartest dolphin in the ocean understand quantum theory? No. Will it ever understand the theory, no matter how hard it tries? Of course not. We have no difficulty accepting that dolphins have cognitive limitations, fixed by their brains’ biology. We do not anticipate dolphins even asking the right questions, let alone answering them.

Some people then conclude that for the same reason — built-in biological boundaries of our species’ brains — humans likewise have hard limits to knowledge. And that, therefore, although we acquired an understanding of quantum theory, which has eluded dolphins, we may not arrive at solutions to other riddles. Like the unification of quantum mechanics and the theory of relativity, both effective in their own dominions. Or a definitive understanding of how and from where within the brain consciousness arises, and what a complete description of consciousness might look like.

The thinking isn’t that such unification of branches of physics is impossible or that consciousness doesn’t exist, but that supposedly we’ll never be able to fully explain either one, for want of natural cognitive capacity. It’s argued that because of our allegedly ill-equipped brains, some things will forever remain a mystery to us. Just as dolphins will never understand calculus or infinity or the dolphin genome, human brains are likewise closed off from categories of intractable concepts.

Or at least, so it has been said.

Some who hold this view have adopted the self-describing moniker ‘mysterians’. They assert that, as members of the animal kingdom, Homo sapiens are subject to the same kinds of insuperable cognitive walls, and that it is hubris, self-deception, and pretension to proclaim otherwise. There is a needless resignation in this view.

After all, the fact that early hominids did not yet understand the natural order of the universe does not mean that they were ill-equipped to eventually acquire such understanding, or that they were suffering so-called ‘cognitive closure’. Early humans were not fixed solely on survival, subsistence, and reproduction, where existence was defined solely by a daily grind over the millennia in a struggle to hold onto the status quo.

Instead, we were endowed from the start with a remarkable evolutionary path that got us to where we are today, and to where we will be in the future. With dexterously intelligent minds that enable us to wonder, discover, model, and refine our understanding of the world around us. To ponder our species’ position within the cosmic order. To contemplate our meaning, purpose, and destiny. And to continue this evolutionary path for however long our biological selves ensure our survival as opposed to extinction at our own hand or by external factors.

How is it, then, that we even come to know things? There are sundry methods, including (but not limited to) these: Logical, which entails the laws (rules) of formal logic, as exemplified by the iconic syllogism in which conclusions follow premises. Semantic, which entails the denotative and connotative definitions and context-based meanings of words. Systemic, which entails the use of symbols, words, and operations/functions related to the universally agreed-upon rules of mathematics. And empirical, which entails evidence, information, and observation that come to us through our senses, plus such tools as those described below, used to confirm, fine-tune, or discard hypotheses.

Sometimes the resulting understanding is truly paradigm-shifting; other times it’s progressive, incremental, and cumulative — contributed to by multiple people assembling elements from previous theories, not infrequently stretching over generations. Either way, belief follows — that is, until the cycle of reflection and reinvention begins again. Even as one theory is substituted for another, we remain buoyed by belief in the commonsensical fundamentals of attempting to understand the natural order of things. Theories and methodologies might both change; nonetheless, we stay faithful to the task, embracing the search for knowledge. Knowledge acquisition is thus fluid, persistently fed by new and better ideas that inform our models of reality.

We are aided in this intellectual quest by five baskets of ‘implements’: Physical devices like quantum computers, space-based telescopes, DNA sequencers, and particle accelerators. Tools for smart simulation, like artificial intelligence, augmented reality, big data, and machine learning. Symbolic representations, like natural languages (spoken and written), imagery, and mathematical modeling. The multiplicative collaboration of human minds, functioning like a hive of powerful biological parallel processors. And, lastly, the nexus among these implements.

This nexus among implements continually expands, at a quickening pace; we are, after all, consummate crafters of tools and collaborators. We might fairly presume that the nexus will indeed lead to an understanding of the ‘brass ring’ of knowledge, human consciousness. The cause-and-effect dynamic is cyclic: theoretical knowledge driving empirical knowledge driving theoretical knowledge — and so on indefinitely, part of the conjectural froth in which we ask and answer the tough questions. Such explanations of reality must take account, in balance, of both the natural world and metaphysical world, in their respective multiplicity of forms.

My conclusion is that, uniquely, the human species has boundless cognitive access rather than bounded cognitive closure. Such that even the long-sought ‘theory of everything’ will actually be just another mile marker on our intellectual journey to the next theory of everything, and the next one — all transient placeholders, extending ad infinitum.

There will be no end to curiosity, questions, and reflection; there will be no end to the paradigm-shifting effects of imagination, creativity, rationalism, and what-ifs; and there will be no end to answers, as human knowledge incessantly accrues.

15 May 2022

Nine Cool Ways to ‘Rethink Thinking’

Detail from the cover illustration for the book Rethinking Thinking


By Martin Cohen

I’ve been thinking a lot about thinking a lot! Here are some of my thoughts…

Rule Number One in thinking, via the classic text The Art of War, is don’t do things the clever way, nor even the smart way: do them the easy way. Because it doesn’t matter what you’re wondering about, or researching, or doing - someone else has probably solved the problem for you already. Flip to the back of the book, find the answer. In fact, the great thing about the ancient Chinese book The Art of War is that it is not a brainy book at all. It is really just unvarnished advice expressed in what was then the plainest language. We need more of that.

Thinking Skill Two is to avoid ‘black and white’ thinking, binary distinctions, ‘yes/no’ language and questions, and instead interact with people in a non-linear, less ‘directive’ manner. Take the tip from ‘design thinking’ that approaches rooted in notions of questions and answers themselves limit insight, because questions and answers are like a series of straight lines, when what is needed is shading, colours and, well, ‘pictures’. This is why it is sometimes better to go for narratives – which are conceptually more like shapes.

Tip Number Three, which, yes, is connected to the previous tips, and that’s a good thing too, is to look for the pattern in the data. (That’s what Google does, of course, and it hasn’t done them any harm.) More generally, as I say in chapter three, these days scientists and psychologists say that human beings are, fundamentally, pattern seekers. Every aspect of the world arrives via the senses as an undifferentiated mass of data, yet we are usually unaware of this, as it is presented to our minds in organized form following an automated and wholly unconscious process of pattern solving. However, a caution has to come with the advice to pattern-match, because as we become attentive to some characteristics, we start to latch onto confirmatory evidence, and may neglect - maybe indeed suppress - other information that doesn’t fit our preconceptions. Put another way, patterns are powerful tools for making sense of the world (or other people), but they aren’t actually the world – which may be more subtle and complex than we imagine.

Thinking Skill Number Four, coming in from quite a different direction, is ‘pump up the intuition’. Instead of thinking about ‘what is’, think about what might be. The ability to create imaginary worlds in our heads is perhaps our most extraordinary mental tool – yet so often neglected in the pursuit of mere observations and measurements. The ability, that is, as I explore in chapter four, to take any set of facts and play with them, to consider alternatives and hypotheticals. Being ‘playful’ is what the approach is all about, and something that probably the technique’s greatest exponent – Einstein – stressed too.

My Tip Number Five actually looks more like a caution than a counsel: put all your cozy assumptions to one side and instead be ready to rigorously test and challenge every aspect of them. This is part of the solution to the age-old problem of ‘thinking you know’ when, in fact, you don’t, which has been the task of philosophy ever since Socrates walked the streets of Ancient Athens challenging the arrogant young men to debate. ‘Thinking you know everything’ about an issue is invariably intermingled with the tendency to only see what you expect to see, likely because you have got caught up in a particular story. Exploring some of the stories that make up the amazing tale of the moon shows how this ‘engineering’ approach, an approach we wrongly dismiss as rather dull, actually has enormous creative power. Socrates understood that: demolishing old certainties is not an end in itself, but a precursor to allowing new ideas.

Problem Solving Strategy Number Six, my favourite tip, is to doodle! Doodles, it turns out, are another kind of ‘intuition pump’ (like thought experiments), and that gives you the clue as to what is valuable about doodling. It is emphatically NOT about the visuals – yes, some people can draw beautiful doodles, many people do curious ones, and some of us do downright awful ones – but the value of doodling lies in the freedom it gives your subconscious mind - with apologies to all those specialists in psychology who will insist on some technical definition of the term. Ideas, as the Japanese designer Oki Sato says, start small - and can easily wither at the first critical assessment.

Doodling can be metaphorically compared to watering a thousand seeds scattered into a plant tray. You don't know at the outset which of the tiny seeds will be important - and you don't care. Instead, you give them all a chance to grow a little. Doodling, we might say, is similarly non-judgmental, and, at the same time, actively in praise of the importance of small thoughts. Of course, we do this all the time here at Philosophical Investigations!

Strategy Number Seven is about a very different kind of thinking. It's all about finding the explanatory key to make sense of complexity - the kind of complexity that is all around us at every level. Here, the thinking tools you need are an eye for detail along with an ability to see broader relationships. Put that way, it seems to require rather exceptional abilities. But the thinking tool here is to stop trying to work out, and maybe even control, a complex system from above. That's the conventional approach – where you gather all the information and then organise it.

No, the tip here is let the complexity organise itself - and you just concentrate on spotting the tell-tale signs of order. Watch for the 'emergent properties' that arise as a system organises itself and devote yourself to preserving the conditions in which the best solutions evolve. Put another way, if the cat wants food, it will keep pestering you. Be aware of the patterns in the data!

Tip Number Eight is the computer programmer's cautionary motto: ‘Garbage In, Garbage Out’. To illustrate this idea in the book, I looked at the science governing responses to the coronavirus, which was indeed heavily driven not by real-world data, but by computer models. But forget that distinction for a moment; the point is really a very general one. In everything we do, be aware of the quality of the information you are acting on. It could be compared to decorating a room: before you apply the colorful paints, before you lay the expensive carpet, have you prepared the surfaces? Mended the floorboards? Because shortcuts in preparation carry disastrous costs down the line. It's the same thing with arguments and reasoning. That dodgy assumption you made in haste, or perhaps because checking seemed time-consuming (or just boring) to do, can easily undermine your entire strategy, causing it to fall apart at the first brush with reality. Just as an elegant carpet is no use if the floorboards are creaking.

Thinking Strategy Nine, my last one here and the one ever so slightly paradoxically closing my book, is to use ‘emergent thinking’, by which I mean adopting strategies in life that involve exploring all the possibilities and then generating new ideas, concepts and solutions out of combinations of those ideas which likely could not be found in them individually. Yes, the strategy sounds terribly simple, but that first stage - of exploring possibilities - can be terribly time-consuming and, quite literally, impractical. So there are a host of micro-strategies needed too, like learning how to find information quickly, and above all how to select just the useful stuff. This is why I call this strategy ‘thinking like a search engine’ in the book, but, as I explain there, search engines actually rely to a large extent on human judgements: about information relevance and quality.

You can't get away from it: sorting the garbage requires a little bit of skill along with the apparently trivial, routine procedures. Thinking requires a brain - whatever some researchers on mushrooms may tell you…



*Martin Cohen’s book Rethinking Thinking: Problem Solving from Sun Tzu to Google was published by Imprint on April 4th.

09 May 2022

Peering into the World's Biggest Search Engine


If you type “cat” into Google, some of the top results are for Caterpillar machinery


By Martin Cohen and Keith Tidman


How does Google work? The biggest online search engine has long become ubiquitous in everyday personal and professional life, accounting for an astounding 70 percent of searches globally. It’s a trillion-plus-dollar company with the power to influence, even disrupt, other industries. And yet exactly how it works, beyond broad strokes, remains somewhat shrouded.

So, let’s pull back the curtain a little, if we can, to try observing the cogs whirring behind that friendly webpage interface. At one level, Google’s approach is every bit as simple as imagined. An obvious instance is that many factual queries simply direct you to Wikipedia, in the upper portion of the first displayed page.

Of course, every second, Google performs extraordinary feats, such as searching billions of pages in the blink of an eye. However, that near-instantaneity on the computing dimension is, these days, arguably the easiest to get a handle on — and something we have long since taken for granted. What’s more nuanced is how the search engine appears to evaluate and weigh information.

That’s where questions arise about what motivates the rankings: possibly prioritizing commercial partners, and on occasion seeming to favor particular social and political messages. Or so it seems. Given the stakes in company revenue, those relationships are an understandable way of running a business. Indeed, it has been reported that some 90% of Google’s earnings come from keyword-driven, targeted advertising.

It’s no wonder Google plays up the idea that its engineers are super-smart at what they do. What Google wants us to understand is that its algorithm is complex and constantly changing, for the better. We are allowed to know that when Google decides which search results are most important, pages are ranked by how many other sites link to them — with those sites in turn weighted in importance by their own links.
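The link-counting idea described above - the published core of what became known as PageRank - can be sketched in a few lines. The following is a toy illustration only, not Google’s actual code: the page names, the damping value and the fixed iteration count are all illustrative assumptions.

```python
# A toy illustration of link-based ranking (simplified PageRank).
# The core published idea: a page's score is fed by the scores of
# the pages linking to it, so heavily linked-to pages rank higher.

def rank_pages(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    score = {p: 1 / len(pages) for p in pages}
    for _ in range(iterations):
        # Each page keeps a small base score, plus a share of the
        # score of every page that links to it.
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            for target in outgoing:
                new[target] += damping * score[page] / len(outgoing)
        score = new
    return score

web = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
}
ranks = rank_pages(web)
# C is linked to by both A and B, so it ends up with the top score.
```

Even this crude sketch shows the recursive twist in the text above: a page’s weight depends on the weights of its linkers, which is why the scores have to be computed iteratively rather than by a single count.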

It’s also obvious that Google performs common-sense concordance searches on the exact text of your query. If you straightforwardly ask, “What is the capital of France?” you will reliably and just as straightforwardly be led to a page saying something like “Paris is the capital of France.” All well and good, and unpretentious, as far as those sorts of one-off queries go.
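A “concordance” search of this kind can be sketched with an inverted index: words map to the pages containing them, and pages matching more query words rank higher. This is a toy model with made-up page texts - real engines layer stemming, phrase matching and ranking signals on top.

```python
# A toy concordance lookup: index pages by the words they contain,
# then return the pages matching the most query words.
from collections import defaultdict

pages = {
    "page1": "Paris is the capital of France",
    "page2": "Cats eat meat and commercial cat food",
    "page3": "Caterpillar builds heavy machinery",
}

# Build the inverted index: word -> set of pages containing it.
index = defaultdict(set)
for name, text in pages.items():
    for word in text.lower().split():
        index[word].add(name)

def search(query):
    """Rank pages by how many of the query's words they contain."""
    hits = defaultdict(int)
    for word in query.lower().split():
        for page in index.get(word, ()):
            hits[page] += 1
    return sorted(hits, key=hits.get, reverse=True)

search("capital of France")  # page1 matches all three words
```

Note that in this literal-matching model a query for “cat” would hit the cat-food page but not the Caterpillar one - which is one reason real engines fold in popularity and advertising signals of the kind discussed below, with the mixed results the article describes.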

But what might raise eyebrows among some Google users is the placing of commercial sites above, or at least sprinkled amidst, factual ones. If you ask, “What do cats eat?” you are led to a cat food manufacturer’s website close to the top of the page, with other informational links surrounding it as if to boost credibility. And if you type “cat” into Google, the links that we recently found near the top of the first page took us not to anything furry and feline – but to great, clunking Caterpillar machinery.

Meanwhile, take a subject that off and on over the last two-plus years has been highly polarizing and politicized — rousing ire, so-called conspiracy theories, and presumptions that cleave society across several fronts — like the topical query: “Do covid vaccines have side effects?” Let’s put aside for a moment what you might already be convinced is the answer, either way — whether a full-throated yea or nay.

As a general matter, people might want search engines to reflect the range of context and views — to let searchers ultimately do their own due diligence regarding conflicting opinions. Yet the all-important first page at Google started, at the time of this particular search, with four sites identified as ads, followed by several other authoritative links, bunched under ‘More results’, pointing to the vaccine indeed being safe. So, let’s say, you’ll be reassured — but have you been fully informed, so as to understand the background and accordingly make up your own mind?

When we put a similar query to Yahoo!, for comparison, the results were a bit more diverse. Sure, two links were from one of the same sources as Google’s, but a third link was quite a change of pace: a blog suggesting there might be some safety issues, including references to scholarly papers to make sense of the data and conclusions. Might one, in the spirit of avoiding prejudgment, conclude that diversity of information better honours searchers’ agency?

Some people suggest that the technology at Google is rooted in its procedural approach to the science behind it. As a result, it seems that user access to the best information may play second fiddle to mainstream opinion and commercialization, supported, as it has been said, by harvested user data. Yet, isn’t all that the adventurist economic and business model many countries embrace in the name of individual agency and national growth?

Google has been instrumental, of course, in globally democratising access to information in ways undreamt of by history’s cleverest minds. Impressively vast knowledge at the world’s fingertips. But as author Ken Auletta said, “Naïveté and passion make a potent mix; combine the two with power and you have an extraordinary force, one that can effect great change for good or for ill.” Caveat emptor, in other words, despite what one might conclude are good intentions.

Might the savvy technical and business-theoretical minds at Google therefore continue parsing company strategies and search outcomes, as they inventively reshape the search engine’s operational model? And will that continual reinvention help to validate users’ experiences in quests for information intended not only to provide definitive answers but to inform users’ own prioritization and decision-making?

Martin Cohen investigates ‘How Does Google Think’ in his new book, Rethinking Thinking: Problem Solving from Sun Tzu to Google, which was published by Imprint Academic last month.

01 May 2022

Picture Post #74 The Swimmers



'Because things don’t appear to be the known thing; they aren’t what they seemed to be
neither will they become what they might appear to become.'


Posted by Martin Cohen

La Grotte des Nageurs
OK, this is not exactly about the image, as much as the context. But then, that’s often what we end up talking most about here at Pi with our Picture Post series. ‘La Grotte des Nageurs’, or ancient cave of the swimmers, contains these unmistakable images of people swimming. It was discovered in Egypt, near the border with Libya, in 1933 and immediately caused much bafflement, as it was located in one of the world’s least swimmable areas. Could it be that, say ten thousand years earlier, the Sahara had been a bit more like the seaside?

Seriously, it is thought that at this time the area was indeed very different: a humid savanna replete with all sorts of wild animals, including gazelles, lions, giraffes and elephants!

But back to the humans, and what I like about this picture is the way it conveys that curious lightness of being that can only be obtained by plunging into water while maybe holding something like a float, or catching a current. It’s a simple painting, by any standard, yet a curiously precise and delicate one.

The Grotte was portrayed in the novel The English Patient by Michael Ondaatje, and in a film adaptation starring Ralph Fiennes and Kristin Scott Thomas – along with the two diminutive swimming figures.

24 April 2022

The Dark Future of Freedom

by Emile Wolfaardt

Is freedom really our best option as we build a future enhanced by digital prompts, limits, and controls?

We have already surrendered many of our personal freedoms for the sake of safety – and yet we are just on the brink of a general transition to a society totally governed by instrumentation. Stop! Please read that sentence again! 

Consider for example how vehicles unlock automatically as authorised owners approach them, warn drivers when their driving is erratic, alter the braking system for the sake of safety and resist switching lanes unless the indicator is on. We are rapidly moving to a place where vehicles will not start if the driver has more alcohol in their system than is allowed, or if the license has expired or the monthly payments fall into arrears.

There is a proposal in the European Union to equip all new cars with a system that will monitor where people drive, when and, above all, at what speed. The data will be transmitted in real time to the authorities.

Our surrender of freedoms, however, has advantages. Cell phones alert us when contagious people are nearby, and Artificial Intelligence (AI) and smart algorithms now land our aeroplanes and park our cars. When it comes to driving, AI has a far better track record than humans. In a recent study, Google claimed that its autonomous cars were ‘10x safer than the best drivers,’ and ‘40x safer than teenagers.’ AI promises, reasonably, to provide health protection and disease detection. Today, hospitals are using solutions based on Machine Learning and Artificial Intelligence to read scans. Researchers from Stanford developed an algorithm to assess chest X-rays for signs of disease. This algorithm can recognise up to fourteen types of medical condition – and was better at diagnosing pneumonia than several expert radiologists working together.

Not only that, but AI promises to both reduce human error and intervene in criminal behavior. PredPol is a US based company that uses Big Data and Machine Learning to predict the time and place of a potential offence. The software looks at existing data on past crimes and predicts when and where the next crime is most likely to happen – and has demonstrated a 7.4% reduction in crime across cities in the US and created a new avenue of study in Predictive Policing. It already knows the type of person who is likely to commit the crime and tracks their movement toward the place of anticipated criminal behavior.

Here is the challenge – this shift to AI, or ‘instrumentation’ as it is sometimes called, has been at once hidden from view and ubiquitous. And here are the two big questions about this colossal shift that nobody is talking about.

Firstly, the entire move to the instrumentation of society is predicated on the wholesale surrender of personal data. Phones, watches, GPS systems, voicemails, e-mails, texts, online tracking, transaction records, and countless other instruments capture data about us all the time. This data is used to analyse, predict, influence, and control our behaviour. In the absence of any governing laws or regulation, the Googles, Amazons, and Facebooks of the world have obscured the fact that they collect hundreds of billions of bits of personal data every minute – including where you go, when you sleep, what you look at on your watch or phone or other device, which neighbour you speak to across the fence, how your pulse increases when you listen to a particular song, how many exclamation marks you put in your texts, and so on – and they collect your data whether or not you want or allow them to.

Opting out is nothing more than donning the Emperor’s new clothes. Your personal data is collated and interpreted, and then sold on a massive scale to companies, without your permission or any remuneration. Not only are Google, Amazon and Facebook (etc.) marketing products to you but, based on their knowledge of you, they are altering your behaviour so that you purchase the products they want you to purchase. Perhaps they know a user has a particular love for animals, and that she bought a Labrador after seeing it in the window of a pet store. She has fond memories of sitting in her living room talking to her Lab while ‘How Much is that Doggy in the Window’ played in the background. She then lost her beautiful Labrador to cancer. And wouldn’t you know it – an ad ‘catches her attention’ on her phone or her Facebook feed with a Labrador just like hers, with a familiar voice singing a familiar song taking her back to her warm memories, and then the ad turns to collecting money for Canine Cancer. This is known as active priming.

According to Google, an elderly couple recently were caught in a life-threatening emergency and needed to get to the doctor urgently. They headed to the garage and climbed into their car – but because they were late on their payments, AI shut their car down – it would not start. We have moved from active priming into invasive control.

Secondly, data harvesting has become so essential to the business model that it is already past the point of reversal. It is ubiquitous. When challenged about this by the US House recently, Mark Zuckerberg offered that Facebook would be more conscientious about regulating itself. The fox offered to guard the henhouse. Because this transition was both hidden and wholesale, by the time lawmakers started to see the trend it was too late. And too many Zuckerbucks had been ingested by the political system. The collection of big data has become irreversible – and now practically defies regulation.

We have transitioned from the Industrial Age where products were developed to ease our lives, to the Age of Capitalism where marketing is focused on attracting our attention by appealing to our innate desire to avoid pain or attract pleasure. We are now in what is defined as the Age of Surveillance Capitalism. In this sinister market we are being surveilled and adjusted to buy what AI tells us to buy. While it used to be true that ‘if the service is free, you are the product,’ it is now more accurately said that ‘if the service is free, you are the carcass ravaged of all of your personal data and freedom to choose.’ You are no longer the product, your data is the product, and you are simply the nameless carrier that funnels the data.

And all of this is marketed under the reasonable promise of a more cohesive and confluent society where poverty, disease, crime and human error are minimised, and a Global Base Income is promised to everyone. We are told we are now safer than in a world where criminals have the freedom to act at will, dictators can obliterate their opponents, and human errors cost tens of millions of lives every year. Human behaviour is regulated and checked when necessary, disease is identified and cured before it ever proliferates, and resources are protected and maximised for the common betterment. We are now only free to act in conformity with the common good.

This is the dark future of freedom we are already committed to – albeit unknowingly. The only question remaining is this – whose common good are we free to act in conformity with? We have come a long way down this path of subtle and ubiquitous loss of freedoms, but it may not be too late to take back control. We need to educate ourselves, stand together, and push back against the wholesale surrender of our freedom without our awareness.