
07 August 2022

A Linguistic Theory of Creation

by Thomas Scarborough

Creation of the Earth, by Wenceslas Hollar (1607-1677)

There is an obvious curiosity in the opening chapters of Genesis (the creation of the world), though perhaps it has been obscured through familiarity. Step by step, God creates the world, then names the world—repeatedly both coupling and separating his* creating and his naming.
Would it not be more natural simply to describe God’s creative acts without embellishment? Would not a description of his creative acts alone suffice? Unless God's naming has some special significance in the narrative, it may seem quite superfluous.

Under any circumstances, the opening chapters of Genesis are supremely difficult to interpret. Bearing this very much in mind, the purpose here is to present an alternative view—unfinished, unrefined—as a new possibility.

Existing interpretations of Genesis include the following:

  • Heaven and earth were created in six days
  • The six days were six (longer) periods of time
  • The earth’s great age was ‘created into’ a six-day sequence
  • Genesis represents the re-creation of the world
  • Genesis stitches various creation stories together
  • Its purpose is to glorify God, not first to be factual
  • It is a synopsis, which may not be sequential
  • It is a myth
  • It is a spiritual allegory
  • It describes a dream of Moses

Here, then, is a new alternative—presented merely as a possibility—for greater minds to examine its rough edges and (possibly) inadmissible ideas about an exceedingly complex text.

We begin with a simple linguistic fact. Names, in the Bible, were often commemorative. The ATS Bible Dictionary sums it up well: ‘Names were assumed afterwards to commemorate some striking occurrence in one’s history.’ Thus an event took place—then it, or the place of its happening, was named: Babel, Israel, the Passover, and so on. Often, in fact, with a pause in between.

If we assume that the creation account in Genesis includes, similarly, a commemorative naming, then the account may separate a stage-by-stage creation of the world from a stage-by-stage naming of it. With this in mind, there would then be four stages to each act of creation in Genesis. For example, in the NASB translation of the Bible (abridged):

  • ‘Then God said, Let there be light.’
  • ‘And there was light.’
  • ‘And God called the light day.’
  • ‘And there was evening and there was morning, one day.’

One may reduce this to two stages:

  • God created.
  • Then God named it.
 
And with some nuance, we may possibly say:

  • God created, within unspecified periods of time.
  • God named his creation during equal pauses (days), as commemorative acts.


In this case, Genesis could be viewed as a series of linguistic events. Its opening verses could set the tone, as a linguistic announcement: ‘And the earth was formless and void’—reminiscent of the words of the linguist Ferdinand de Saussure: ‘In itself, thought is like a swirling cloud, where no shape is intrinsically determinate. No ideas are established in advance, and nothing is distinct, before the introduction of linguistic structure.’

Further, one may see a major linguistic shift in Genesis 3:7: ‘Then the eyes of both of them were opened …’ We have, from this point, the language of ‘ought’, as the first rational creatures ostensibly discern right from wrong. Then, needless to say, Babel represents a major linguistic shift in Genesis chapter 11, as languages (plural) appear.

From this, two major issues arise.

Firstly, is God's creating, in each stage of creation, coincident with his naming of it? In other words, did God name things on the same day that he created them, or did he name them afterwards? 

If it was on the same day that he created them, then the theory suggested here would presumably unravel. But arguably, in its favour, each naming is preceded by the word ‘And … ,’ which in the creation account is mostly used to indicate sequences in time. ‘And God called ...’ may represent separate periods of time in which namings occurred, after acts of creation.

A possible problem lies in Genesis 5:2, ‘God named them … in the day they were created.’ However, the word ‘day’ may here encompass every day, as we find in Genesis 2:4. ‘In the day’ may not refer to the separate stages of creation of Genesis chapter 1.

A second issue arises: God's naming does not seem to appear in the text consistently. ‘God called …’ appears only three times in Genesis 1, in connection with the first three days of creation. 

However, Genesis, in general, makes liberal use of related words. Take the key words ‘God created ...’ Alternatives that we find in the text are ‘made’, ‘formed’, ‘brought forth’, and so on. The same is true of the key words ‘God called …’ Alternatives are ‘saw’, ‘blessed’, ‘sanctified’. An act of commemoration may be implied in all of these words.

In short, the time periods which are described in Genesis may be attached, not first to the creation of the world, but to God’s naming of it—and, incidentally, to man's naming of it. On the sixth day, ‘the man gave names …’

Such a theory would potentially remove major problems of other creation theories. In particular, it could possibly move beyond both literal and liberal readings of Genesis, without colliding with them.

----------------------------------

* I follow Rabbi Aryeh Kaplan: “We refer to G-d using masculine terms simply for convenience’s sake.”

Also by Thomas Scarborough: Hell: A Thought Experiment.

14 March 2022

A Scientific Method of Holism

by Thomas O. Scarborough

Holistic thinking is much to be desired. It makes us more rounded, more balanced, and more skilled in every sphere, whether practical, structural, moral, intellectual, physical, emotional, or spiritual.

Yet how may we attain it?

Is holism something that we may merely hope for, merely aspire to, as we make our own best way forward—or is there a scientific method of pursuing it? Happily, yes, there is a scientific method of holism, although it is little known.

The video clip above, of 11 March 2022, gives us a classic example of the method—or rather, of one of its aspects. Here, CNN interviewer Alex Marquardt asks the (so-called) oligarch Mikhail Khodorkovsky, ‘Do they have any influence, these oligarchs ... any pressure, any sway, that they can put on President Putin?’

Khodorkovsky replies, ‘They cannot influence him. However, he can use them as a tool of influence, to influence the West.’

Notice, firstly, that the interviewer’s question is limited to the possibility of oligarchs influencing President Putin. It does not appear to cross his mind that influence could have another direction.

Khodorkovsky therefore brings a directional opposite into play, to reveal something that the interviewer does not see. In this way, he greatly expands our understanding of the situation. Khodorkovsky could have measured his answer to the question—‘Do they have any influence ... on President Putin?’—but he did not. Instantly, he thought more holistically.

In linguistics, a directional opposite is one of several types of opposite—sometimes called oppositions. Directional opposites represent opposite directions on an axis: I influence you, you influence me; this goes up, that goes down, and so on. 

A second, more familiar type of opposite is the antonym, which represents opposite extremes on a scale: that house is big, this house is small; we could seek war, we could seek peace. Then there are heteronyms,* which represent alternatives within a given domain: Monday comes before Tuesday, which comes before Wednesday; we could travel by car, by boat, or by plane.

How then may we apply these types of opposite? 

In any given situation, we may examine the words which we use to describe it. Then we may search for their directional opposites, antonyms, heteronyms**—to consider how these may complement or expand the thoughts which we have thought so far.
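
By way of illustration only, part of this search can even be mechanised. Below is a minimal sketch in Python, assuming NLTK with its WordNet corpus installed; WordNet records antonyms but not the finer types of opposition, so the directional opposites and heteronyms here come from a small hand-made table, an assumption made purely for the sketch.

```python
# A minimal sketch: look up opposites for the words we use to describe
# a situation. Requires NLTK with the WordNet corpus downloaded
# (pip install nltk; then nltk.download('wordnet')).
from nltk.corpus import wordnet as wn

# WordNet encodes antonyms only, so the finer oppositions discussed
# above are illustrated with a hand-made table -- hypothetical entries,
# not a standard resource.
DIRECTIONAL = {"influence": ["be influenced by"], "rise": ["fall"]}
HETERONYMS = {"car": ["boat", "plane"], "monday": ["tuesday", "wednesday"]}

def antonyms(word):
    """Collect WordNet antonyms across all senses of a word."""
    found = set()
    for synset in wn.synsets(word):
        for lemma in synset.lemmas():
            for ant in lemma.antonyms():
                found.add(ant.name().replace("_", " "))
    return sorted(found)

def opposites(description):
    """For each word in a description, list candidate opposites."""
    for word in description.lower().split():
        report = {
            "antonyms": antonyms(word),
            "directional": DIRECTIONAL.get(word, []),
            "heteronyms": HETERONYMS.get(word, []),
        }
        if any(report.values()):
            print(word, "->", report)

# The interviewer's question: 'influence' yields the directional
# opposite -- the reversal Khodorkovsky himself supplied.
opposites("do the oligarchs influence putin")
```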

As observed in the video clip above, this is not merely ‘semantics’. It genuinely opens up other possibilities to our thinking, and leads us into a greater holism. This applies in a multitude of fields, whether, for example, researching a subject, crafting an object, pursuing a goal, or solving a personal dilemma.



* Heteronyms may be variously defined. The linguist Sebastian Löbner defines them as 'members of a set'. This is how I define them here.
** One may add, in particular, complementaries and converses.

23 May 2021

A New Theory of Language

by Thomas Scarborough


The way that we use language does not fit with the way that we theorise about it.  Linguistics professor Michael Losonsky writes, ‘Language as human activity and language as system remain distinct focal points despite various attempts to develop a unified view.’

I have been shaping a manuscript, in which linguistic observations play a major role. Friends have encouraged me to describe a complete theory of language. Naturally, it can be done only briefly in 700 words.

Language, as we know it, is assembled from a range of basic elements: morphemes, words, phrases, and so on.  These we arrange according to certain rules: semantic, syntactic, morphological and more.  Language, therefore, is seen as a constructive enterprise.  

Take a simple example, ‘This city is green.’  

‘This city’ is the subject.
‘is green’ is the predicate, which completes an idea about the subject.
‘This’ is a determiner—which identifies this particular city. 
‘is’ is the verb—which, among other things, points back to the subject.

We assemble these pieces, then, to produce a meaningful communication with another language user, or users.  This is the standard view.

I propose that language is quite the opposite.  Rather than beginning with basic elements, with which we assemble the ideas we communicate, language begins with the whole world.  The function of language then is to begin with this whole, and reduce it. 

Again, the simple example, ‘This city is green.’ 

‘City’ greatly reduces the whole, now encircling only cities.
‘This’ narrows these cities to one particular city.
‘green’ narrows it to just one aspect of one city.
‘is’ reduces the time window to the present.
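
For readers who think in code, here is a toy sketch of the reduction view. Each word acts as a filter that narrows an invented ‘universe’ of situations, rather than as a building block to be assembled; the universe and its features are, of course, hypothetical.

```python
# A toy illustration of language-as-reduction: each word filters a
# universe of situations. The situations and features are invented.
universe = [
    {"kind": "city", "name": "this city", "colour": "green", "time": "present"},
    {"kind": "city", "name": "that city", "colour": "grey", "time": "present"},
    {"kind": "city", "name": "this city", "colour": "grey", "time": "past"},
    {"kind": "forest", "name": "a forest", "colour": "green", "time": "present"},
]

def narrow(world, **constraints):
    """Keep only the situations compatible with a word's constraint."""
    return [s for s in world if all(s.get(k) == v for k, v in constraints.items())]

w = narrow(universe, kind="city")  # 'city'  -- encircle only cities
w = narrow(w, name="this city")    # 'this'  -- one particular city
w = narrow(w, colour="green")      # 'green' -- one aspect of that city
w = narrow(w, time="present")      # 'is'    -- the present time window
print(w)  # the single situation the sentence picks out
```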

In fact, we may note that we do much the same with the scientific method.  The scientific method minimises unwanted influences on independent variables.  It begins with the whole world, then screens things out until only independent variables are left, undisturbed by outside influences. 

A holistic view of language should have various consequences, if it is true.  There are certain things we would expect to ensue.  Here are just a few: 

• Since language is a reduction of the whole, even as we reduce it, our words will retain some involvement in the whole. This, in fact, is the case. In the words of the philosopher Max Black, our words 'trail clouds of implication'.

• Since our language reduces the whole, we may expect to run into problems which one associates with partial views. Everything we put into words, because it is reduced, will overlook critical aspects of the world. The statistician George Box put it simply: ‘All models [which are reductions] are wrong, but some are useful.’

• Language originates in the whole; therefore no part of the whole can be focal. A holistic view of language will exclude origins or central ideas -- at least as a valid means of establishing truth. We shall avoid all such schemes as, in the words of Jacques Derrida, 'return to an origin'.

• Since language is a reduction of the whole, the rules of language -- semantics, syntax, inflections, and so on -- will represent a tool by which we efficiently reduce the whole. Since there are various methods of reduction, we would expect that there would be various grammars. This, too, is the case. In the words of Max Black, ‘Grammar has no essence.'

• Since both ordinary language and science represent a reduction of the world, we would expect them both to work in the same way. This should enable us to unite our ordinary language and science. In fact, the philosopher Stephen Toulmin notes that, both in the common affairs of life and in our scientific pursuits, 'we use similar patterns of thought'.

• The scientific method, being a reduction of the whole, would be tested not primarily by falsification within its own bounds, but by something I shall call ‘invalidation’ in the context of the whole. The success of science (or otherwise) would be assessed within the context of the whole.

• Different cultures have different physical and social worlds in their minds. As they reduce this whole through language, it seems impossible that they could say anything partial which would contradict the whole. Therefore even snippets of one's language will be a reflection of one's outlook on the world.

26 June 2016

The Misconstruction of Construction

Posted by Christian Sötemann

More than one philosophical theory has been suggested as a way to construe the world primarily as a construction accomplished by human mental faculties – rather than as mere passive depiction of the objective state of the world.

Such approaches (most overtly in what is called ‘constructivism’) suggest that what we seem to perceive as characteristics of the external world are essentially the results of a hidden process of internal construction. It seems to me that there are at least two possible misunderstandings of this particular mindset: firstly, that the mental construction process occurred out of thin air, and secondly, that in a constructed world, there are no criteria to distinguish fact from fiction.

To maintain that there can be only mental construction and nothing else would seem to imply human beings construct the experienced world from scratch. However, this quickly turns out to be a far from unassailable view. For a start, it appears to be impossible to construct a world of experience out of nothing at all. A putative building block devoid of any characteristics, of any potential or impact whatsoever is an empty conception and cannot lead to the emergence of something that exhibits certain qualities.

Elements of construction that are nothing are no elements of construction. If you combine nothing with nothing you will still end up with nothing.

There has to be something that can be processed and modified, some material that is used for the construction process. This, though, is not sufficient evidence for the existence of matter itself: matter cannot automatically be extrapolated from the necessity of some sort of material for the process of mental construction.

What is more, the process of construction is something in itself. An event has to occur in some way so that construction can take place. The something that provides the material for construction and the something that induces the construction process cannot emerge out of that very process they are supposed to enable in the first place. Therefore it is – by way of a placeholder – ‘a something’ that must be considered beyond construction.

Similarly, it always seems to be necessary to add ‘a somebody’ - some sort of person or centre of mental activity - to accomplish the construction, since without such a carrier, there could not be any cohesive mental process. If single acts of mental construction occurred incoherently here and there, it would merely mean occasional mental flickering and not have the connectedness that an experienced world evidently has, with its continuity in space and time. This does not, on the other hand, necessarily suggest the notion of a corporeal human being as carrier of mental construction: even our perceived body might dogmatically be regarded as a construct of experience and cognition itself.

Moving over to the second possible misunderstanding: just because the experienced world can be conceived as largely the result of construction processes of the mind, it does not mean that there is no difference between mere opinion and well-researched fact. Were I to claim that I could construct the world in any way I wanted it to be, I would run the risk of self-delusion.

So what do constructivist authors (such as the American professor Ernst von Glasersfeld) suggest as means of differentiation instead? Put bluntly: some things work, others do not. I experience obstacles that point out to me that certain attempts to construct and construe a reality do not work. Consider these simple examples from the world of concrete objects, like that evergreen case of the table, beloved of philosophers from Plato to Bertrand Russell:

Imagine a person from a culture that does not utilise tables at all. Exposed to a table standing in a garden, this person might conclude that this unknown object is a device to provide shelter from the rain. Is this viable? It surely is: I can sit down under the table in case of rain and hence be kept from getting wet. This may not be the original intention of our table-utilising culture, but it can be done that way. What cannot be done, for instance, is that I regard the table standing in the garden as some projected image that I can simply walk through if so inclined. I experience that this does not work. I will find that the table standing there hinders me from just walking through it.

Similarly, a plate could be used as a paperweight, a shield, or a percussive instrument, but not as a beverage or a pen: I cannot make it a liquid for me to drink or have it emit ink. So, from a mindset that emphasises the aspect of mental construction, several alternatives are found to be viable – even if possibly inconvenient and not the best of alternatives – but others are not viable at all. There is a limit to the alternative usages and interpretations available. I may not be able to know the outside world beyond my experience, but in that very experience I can find out what this outside world does not allow me to do. This acknowledgement of obstacles necessarily means that I have to relinquish the idea of living in a world I can equip in any way I want to.

There are plenty of utterly legitimate criticisms concerning philosophical stances emphasising construction (and not only constructivism itself), but the more useful step is to undertake a clarification of some of the typical misunderstandings. This can transform disagreement resting on disbelief and gut feelings into informed criticism.



Christian H. Sötemann has degrees in psychology and philosophy, and works in psychological counselling and as a lecturer in Berlin, Germany. He can be contacted via: chsoetemann@googlemail.com

09 January 2016

The Bridging Inference

A Pi Special Investigation into the workings of language

Posted by Thomas Scarborough
How is a definition defined? Much may depend on the answer to this simple question—including, arguably, the shape of our entire (post)modern society today. But more of this in a moment.

The way that one typically defines a word is with the most economical statement of its descriptive meaning. Therefore the Oxford English Dictionary defines a 'dog' as 'a domesticated, carnivorous mammal'.

However, not every linguist would agree that this is how one should go about definition. Wilhelm Kamlah and Paul Lorenzen took a suspicious view of this notion—considering that such definition is necessary but inadequate [1]. It is, they suggested, 'a mere abbreviation'. A true definition of a word would require so much more.

But so it is: in terms of classical linguistics, in order to define a word, one enumerates its 'necessary and sufficient features' [2]. Such definition may also be referred to as the denotative meaning of a word, or its 'hard core of meaning'—as opposed to its 'meanings around the edges', or its connotative meaning [3].

How is a definition defined?

In probing the answer to the question here, the linguistic feature known as the anaphora provides a useful starting point. The anaphora, in turn, is related to a lesser-known linguistic feature, the bridging inference. This promises to be more useful still.

But first, the anaphora.

The Anaphora

The anaphora, according to linguists Simon Botley and Tony McEnery, is particularly useful in telling us 'some things about how language is understood and processed' [4]. That is, it opens windows into the inner workings of our language, which would normally seem closed to us.

The anaphora is called a referring expression—for the reason that it refers to another linguistic element in a text. Typically, it refers back. An example: 'Aristotle owned a house. He lived in it.' Here, 'He' refers back to Aristotle, while 'it' refers back to his house. Both 'He' and 'it', therefore, are anaphoras.

A fact less emphasised is that the meaning of the anaphora must match the meaning of the linguistic element which it refers to—otherwise an anaphora is 'unresolved'. For example: 'Aristotle owned a house. It popped,' or: 'Aristotle owned a house. It chased rabbits.' In these two examples, the meaning of the anaphora and the meaning of the referent do not coincide—as they ought to.
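
This matching condition can be pictured as a small computation. Below is a hedged sketch in Python: a toy resolver accepts an anaphora only where the predicate applied to it is compatible with the semantic features of the referent. The feature and requirement tables are invented for the illustration, not drawn from any real lexicon.

```python
# Toy check: an anaphora 'resolves' only if the predicate applied to it
# is compatible with the referent's meaning. Hand-made feature tables.
FEATURES = {
    "house": {"building", "inhabitable"},
    "balloon": {"inflatable"},
    "dog": {"animate", "carnivore"},
}
REQUIRES = {  # what each predicate demands of its subject
    "lived in": {"inhabitable"},
    "popped": {"inflatable"},
    "chased rabbits": {"animate"},
}

def resolves(referent, predicate):
    """True if the referent's features satisfy the predicate's demands."""
    return REQUIRES[predicate] <= FEATURES[referent]

print(resolves("house", "lived in"))        # True  -- 'He lived in it.'
print(resolves("house", "popped"))          # False -- 'It popped.'
print(resolves("house", "chased rabbits"))  # False -- 'It chased rabbits.'
```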

This deserves special emphasis: the anaphora refers to a linguistic element which is well defined, on the surface of it reflecting its denotative meaning.

So a house is defined in the Oxford English Dictionary as 'a building used for human habitation,' or (Collins) 'a building used as a home,' or (Macmillan) 'a building for living in'. Thus the anaphora 'it', above, takes on the definition of a house—or so it would seem.

Thus far with the anaphora.

The Bridging Inference

Closely related to the anaphora is the lesser-known referring expression, the bridging inference. Like the anaphora, this typically refers back.

Here follows an example of a bridging inference: 'Aristotle owned a house. The plumbing was blocked.' At first glance, this might seem identical to the anaphora—yet it is quite different.

While no one should have a problem understanding these two sentences, the house is now no longer in explicit focus [5]. Or to put it another way: one typically recognises a bridging inference by the fact that one cannot replace it with a pronoun. One cannot say, for instance: 'Aristotle owned a house. It was blocked.'

In the above example, the inference is that a house contains plumbing. However, there is something apparently inexplicable that meets us here. No definition of a house includes plumbing. The bridging inference assumes that when one speaks about a house, one knows something that one should not know, or does not need to know.

In fact, we intuitively relate many things to a house: 'Aristotle owned a house. The karma was bad,' or: 'Aristotle owned a house. The ceilings were sagging,' or: 'Aristotle owned a house. The valuation was too low.' In all of these examples and more, a house is intuitively understood to have karma, ceilings, value, and so on. To put it simply, all of these sentences work—in spite of having nothing to do with the definition of a house, as one finds it in the dictionary.

This is important. If something has nothing to do with the definition of a house, yet is intuitively understood to be a part of what it is, then we have a problem with the common notion of a definition.

The ease with which one uses inferences is all the more appreciated when incompatible inferences are made: 'Aristotle owned a house. The crank shaft was broken,' or: 'Aristotle owned a house. The preservative was vinegar.' One sees here, all the more clearly, how inferences are dependent on the meaning of the referent.
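
One hedged way to picture the mechanism is to treat bridging as a part-whole lookup. The sketch below uses WordNet's part-meronyms via NLTK; WordNet's coverage is patchy, and whether it actually lists 'plumbing' among the parts of a 'house' is not guaranteed, so this gestures at the mechanism rather than offering a working bridging resolver.

```python
# Bridging as part-whole lookup, via WordNet part-meronyms.
# Requires NLTK with the WordNet corpus downloaded. Coverage is patchy:
# the printed results below are not guaranteed.
from nltk.corpus import wordnet as wn

def parts(word, depth=2):
    """Collect part-meronyms of a noun, a few levels deep."""
    found, frontier = set(), set(wn.synsets(word, pos=wn.NOUN))
    for _ in range(depth):
        nxt = set()
        for synset in frontier:
            for m in synset.part_meronyms():
                found.add(m)
                nxt.add(m)
        frontier = nxt
    return {l.name().replace("_", " ") for s in found for l in s.lemmas()}

def bridges(anchor, candidate):
    """Does 'candidate' plausibly name a part of 'anchor'?"""
    return candidate in parts(anchor)

print(bridges("house", "plumbing"))    # the inference in the example
print(bridges("house", "crankshaft"))  # expected to fail
```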

We return now to the anaphora.

The Anaphora Again

On the surface of it, the anaphora would seem to refer to the stock standard definition of a word—namely, its 'necessary and sufficient features'—while the bridging inference would seem to stray into 'meanings around the edges'. That is to say, on the surface of it the anaphora has more to do with the denotative meaning of a word, while the bridging inference has more to do with its connotative meaning.

Yet does this hold true?

If it does not, then there may be many more inferences in our language than we have supposed. Or to put it another way: the features of our definitions of words may not be as 'necessary' or 'sufficient' as they seem.

By way of experiment, consider what happens when one converts some of the bridging inferences above to anaphoras: 'Aristotle owned a house. It had bad karma,' or: 'Aristotle owned a house. It had sagging ceilings.'

At first glance, there may seem to be no inferences here: the anaphora 'It' would seem, in each case, to refer back to the house. However, it becomes clear that one is dealing with inferences as soon as one tries some false ones. For example: 'Aristotle owned a house. It had a broken crank shaft,' or: 'Aristotle owned a house. It was preserved with vinegar.'

What we see here is that the anaphora has to be compatible with various inferences which relate to a house. It is a precondition for the anaphora to work.

In fact we might go so far as to say that the English language depends on innumerable inferences. Both the bridging inference and the anaphora reveal that we make inferences which exceed the definition of a word—and with that, 'play old Harry' with the notion of the denotative meaning of the word.

'Every utterance, no matter how laboured,' said philosopher and linguist Max Black, 'trails clouds of implication' [6].

For what reason, then, might the bridging inference and the anaphora instantly be understood—where they have nothing to do with the definition of a thing?

Here follow some broad suggestions:

The Definition of a Definition

An answer to the puzzle may lie in what we have already seen, although it might seem alien to our analytical thinking today:

If there is any apparent relation between two things—between a house, say, and the plumbing—or between a house and its karma—then these will inevitably have something to do with each other's definition. If there is no apparent relation—between a house and a crank shaft, say, or a house and its preservative—then these will have nothing to do with each other's definition.

This has an important corollary.

It has to mean that the definitions of words are relational, not analytic: definitions are not first about features, they are about relations—and there may be a great many relations.

In fact it was Aristotle who first suggested that definitions are not features 'piled in a heap', but that they are 'disposed in a certain way' [7]. That is, their features stand in a certain relationship with one another—as many as these may be.
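
To render the relational view concretely, a small hedged sketch follows: a definition modelled as a graph of typed relations rather than a flat feature list. The relations and their labels are invented for the illustration.

```python
# A definition as a small graph of typed relations -- invented labels,
# not a real lexical resource.
DEFINITION = {  # 'house', relationally
    ("house", "used-for", "habitation"),
    ("house", "has-part", "plumbing"),
    ("house", "has-part", "ceiling"),
    ("house", "has-attribute", "valuation"),
}

def related(word, other):
    """Is 'other' linked to 'word' by some relation in the definition?"""
    return any(head == word and tail == other
               for head, _, tail in DEFINITION)

print(related("house", "plumbing"))    # True  -- a bridging inference works
print(related("house", "crankshaft"))  # False -- the inference fails
```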

Now if linguistics is a descriptive endeavour, not prescriptive—if it is about 'how people actually speak or write' [8]—then what shall we do with the customary definition of a definition?

If definitions are relational, not analytic—then it may be suggested, on the basis of the way that we use words today, that the (post)modern era has gone vastly astray. Is it not our dissection of reality—rather than our being able to see its relatedness—that has led to environmental degradation, social disintegration, and a host of other ills?

The analytical view of the world should be complemented by a relational one. This may begin with the way that we see language. Or to put it another way: the way that we see language today may shape the entire society in which we live.



Matters arising - and some notes




The Question

Let us pause, to pose the question(s):

  • What is a definition—in light of the bridging inference in particular?  
  • What is it that denotation denotes?
  • And if a word is to be seen in relational terms, then how does one define it?


Citation

This post was written by Thomas Scarborough for PI Alpha, February 2014.
  • [1] Wilhelm Kamlah and Paul Lorenzen, Logical Propaedeutic, 1984, p. 65.
  • [2] John Taylor, Linguistic Categorization, 1995, p. 23.
  • [3] James Hurford and Brendan Heasley, Semantics, 1990, p. 90.
  • [4] Simon Botley and Tony McEnery, Corpus-Based and Computational Approaches to Discourse Anaphora, 2000, p. 3.
  • [5] Alan Garnham, Psycholinguistics, 1985, p. 156.
  • [6] Max Black, The Labyrinth of Language, 1968, p. 137.
  • [7] Aristotle, The Metaphysics, Book VII, 11.
  • [8] David Crystal, Linguistics, 1999, p. 595.


07 November 2015

Picture Post No. 8: Apples COMMENT ADDED

This is definitely not a Picture Post, Thomas. I think you have to reformat it. It is a bit more of your theory of how language works, so I guess it should be 'potentially' a post. But even as that it does seem rather trivial. You would need, I think, to redynamise this one - more examples maybe?

Martin



'Because things don’t appear to be the known thing; they aren’t what they seemed to be, nor will they become what they might appear to become.'

NOTE:  I have put a preferred version of this post at the top, yet have left the previous versions intact (below), to give priority to the editorial eye. Thomas.

Posted by Thomas Scarborough


One sees, above, the results of two Google Image searches. First, I searched for 'apples'.  Then, I searched for 'pommes'.  Then I jumbled them up.  Pommes, of course, are apples in French.  Do not scroll down. 

The 'apples' (English) have an ideal form. Several even shift into abstraction or stylization. They only occur singly, and most of them sport only one leaf. They are red, and only red, and are polished to a perfect shine. One apple has been cut: not to eat, but to engrave a picture-perfect symbol on it. The 'pommes' (French) belong to a family of pommes, of various colours: red, green, even yellow. One may take a bite out of them to taste, or cut them through or slice them: to smell their fragrance, or to drop them into a pot. Pommes, too, are always real, unless one should draw one for a child.

Now separate out the apples from the pommes. Scroll down. You probably distinguished most apples from pommes. In so doing, you acknowledged – if just for a moment – that in some important way, apples are not pommes.


(While this example is flawed, try the same with more distant languages, and more complex words.)


Posted by Thomas Scarborough


One sees, above, the results of two Google Image searches. First, I searched for 'apples'.  Then, I searched for 'pommes'.  Then I jumbled them up.  Pommes, of course, are apples in French.  Do not scroll down. 

'Apples' have an ideal form. So much so, in fact, that they tend to shift into abstraction or stylization. Mostly (though not in every case), they sport only one leaf. Apples only occur singly. They are red, and only red, and they are polished to a perfect shine. One apple has been cut, though not to eat it – rather to engrave a picture-perfect symbol on it. 'Pommes', on the other hand, belong to a family of pommes, of various colours: red, green, yellow, even plum. And leaves: they may have one, or two, or none. One may take a bite out of them to taste. One may cut them through, or slice them: to smell their fragrance, or perhaps to drop them in a pot. And pommes are always real, unless one should draw one for a child.

Now separate out the apples from the pommes. Scroll down. You probably accomplished this with 80% accuracy. In so doing, you acknowledged – if just for a moment – that in some important way, apples are not pommes.


(Now try the same with more distant languages, and more complex words).


'Because things don’t appear to be the known thing; they aren’t what they seemed to be, nor will they become what they might appear to become.'

Posted by Thomas Scarborough

Two Google Image searches.  First, 'apples'.  Then, 'pommes'. (A pomme, of course, is an apple in French). 

The 'apples' (English) have an ideal form. Several even shift into abstraction or stylization. They sport one leaf (with two exceptions). They only occur singly. They are red, and only red, and are polished to a perfect shine. One apple has been cut: not to eat, but to engrave a picture-perfect symbol on it. The 'pommes' (French) belong to a family of pommes, of various colours: red, green, yellow, even plum. One may take a bite out of them to taste, or cut them through or slice them: to smell their fragrance, or to drop them into a pot. Pommes, too, are always real, unless one should draw a picture for a child.

Signifier points to signified, we are told, whether 'apple' or 'pomme'. But in English and in French, are the signifieds the same?