A word is vague if it has borderline cases. Yul Brynner (the lead in "The King and I") is definitely bald, I am (at the time of this writing) definitely not, and there are many people who seem to be neither. These people are in the “borderline region” of ‘bald’, and this phenomenon is central to vagueness.
Nearly every word in natural language is vague, from ‘person’ and ‘coercion’ in ethics, ‘object’ and ‘red’ in physical science, ‘dog’ and ‘male’ in biology, to ‘chair’ and ‘plaid’ in interior decorating.
Vagueness is the rule, not the exception. Pick any natural language word you like, and you will almost surely be able to concoct a case -- perhaps an imaginary case -- where it is unclear to you whether or not the word applies.
Take ‘book’, for example. "The Bible" is definitely a book, and a light bulb is definitely not. Is a pamphlet a book? If you dipped a book in acid and burned off all the ink, would it still be a book? If I write a book in tiny script on the back of a turtle, is the turtle’s back a book?
We have no idea how to answer such questions. The fact that such questions appear to have no determinate answer is roughly what we mean when we say that ‘book’ is vague.
And vagueness is intimately related to the ancient sorites paradox: from the seemingly true premises that (i) a thousand grains of sand make a heap, and (ii) if n+1 grains of sand make a heap, then so do n grains, one can derive the false conclusion that a single grain of sand makes a heap.
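To make the slide explicit, here is a toy sketch in Python of what repeatedly applying premise (ii) licenses (an illustration only; the numbers come straight from the premises):

```python
# Toy illustration of the sorites slide.
n = 1000            # premise (i): 1,000 grains of sand make a heap
is_heap = True
while n > 1:
    n -= 1          # premise (ii): if n+1 grains make a heap, so do n
    # is_heap never gets a chance to flip to False
print(n, is_heap)   # prints: 1 True  -- the absurd conclusion
```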
Is vagueness a problem with our language, or our brains?
Or, could it be that vagueness is in some way necessary…
When you or I judge whether or not a word applies to an object, we are (in some abstract sense) running a program in the head.
The job of each of these programs (one for each word) is to output YES when input with an object to which the word applies, and to output NO when input with an object to which the word does not apply.
That sounds simple enough! But why, then, do we have vagueness? With programs like this in our head, we’d always get a clear YES or NO answer.
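Such a program might look something like this rough sketch -- the function name and the crude hair-count threshold are hypothetical stand-ins, not a claim about how the brain actually draws the line:

```python
# Hypothetical sketch of an idealized "meaning program" for 'bald'.
# The threshold is a stand-in; the point is the shape of the program:
# object in, YES or NO out, every single time.
def is_bald(hair_count: int) -> str:
    if hair_count < 100:
        return "YES"
    return "NO"

print(is_bald(0))        # YES -- Yul Brynner territory
print(is_bald(120000))   # NO
```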
But it isn’t quite so simple.
Some of these “meaning” programs, when asked about some object, will refuse to respond. Instead of answering YES or NO, the program will just keep running on and on, until eventually you must give up on it and conclude that the object neither clearly fits nor clearly fails to fit.
Our programs in the head for telling us what words mean have “holes” in them. Our concepts have holes. And when a program for some word fails to respond with an answer -- when the hole is “hit” -- we see that the concept is actually vague.
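As a hypothetical sketch (the specific numbers are arbitrary stand-ins), a holed ‘heap’ program might behave like this:

```python
# Hypothetical sketch: a "meaning program" for 'heap' with a hole in it.
def is_heap(grains: int) -> str:
    if grains >= 1000:
        return "YES"      # clearly a heap
    if grains <= 3:
        return "NO"       # clearly not a heap
    # Borderline region: search for a decisive criterion that doesn't exist.
    n = grains
    while True:
        n += 1            # keeps running on and on -- the hole

print(is_heap(5000))      # YES
print(is_heap(2))         # NO
# is_heap(82) would never return; eventually you'd have to give up on it.
```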
Why, though, is it so difficult to have programs in the head that answer YES or NO when input with any object? Why should our programs have these holes?
Holes are inevitable for us because they are inevitable for any computing device -- and we are computing devices.
The problem is called the Always-Halting Problem. Some programs have inputs that lead them into infinite loops. One doesn’t want one’s program to do that; one wants it to halt, and to do so on every possible input. It would be nice to have a program that sucks in other programs and checks whether they have an infinite loop inside them. But there can be no such infinite-loop-checking program: checking that a program always halts is not, in general, possible.
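Here is a hedged sketch of the classic argument for why no such checker can exist. The checker always_halts is the hypothetical piece (it is exactly what cannot be built), and the usual details about feeding a program its own code are glossed over:

```python
# Suppose, for contradiction, someone handed us:
#   always_halts(prog) -> True iff prog halts on every possible input.
# Then the following program defeats it:
def spoiler(x):
    if always_halts(spoiler):
        while True:       # checker says "always halts" -- so loop forever
            pass
    else:
        return x          # checker says "has an infinite loop" -- so halt at once

# Whatever answer the checker gives about spoiler is wrong, so no such
# infinite-loop-checking program can exist.
```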
That’s why programs have holes in them -- because it’s computationally impossible to get rid of them all.
And that’s why our own programs in the head have holes in them. That’s why our concepts have holes, or borderline cases where the concept neither clearly applies nor clearly fails to apply.
Furthermore, notice a second feature of vagueness: not only is there no clear boundary between where the concept applies and where it does not, but there are no clear boundaries to the borderline region itself.
We do not find ourselves saying, “84 grains of sand is a heap, 83 grains is a heap, but 82 grains is neither heap nor non-heap.”
This facet of vagueness -- called “higher-order vagueness” -- is not only something we have to deal with; it is something any computational device must contend with.
If 82 grains is in the borderline region of ‘heap’, then it is not because the program-in-the-head said “Oh, that’s a borderline case.” Rather, it is a borderline case because the program failed to halt at all.
And when a program has not yet halted, you can’t be sure it never will. Perhaps it will halt eventually, if you wait just a bit longer.
The problem here is called the Halting Problem, a simpler problem than the Always-Halting Problem mentioned earlier. The issue now is simply whether a given program will halt on a given input (whereas the “Always” version concerned whether a given program will halt on every input).
And this problem, too, is not generally solvable by any computational device. When you get down to 82 grains from 83, your program in the head doesn’t respond at all, but you can’t know that it never will.
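The impossibility argument has the same diagonal flavor as before; again, halts is the hypothetical piece:

```python
# Suppose halts(prog, inp) returned True iff prog halts when run on inp.
def contrarian(prog):
    if halts(prog, prog):
        while True:       # prog would halt on itself, so do the opposite
            pass
    else:
        return            # prog would run forever on itself, so halt

# contrarian(contrarian) then halts exactly when halts() says it doesn't --
# a contradiction, so no general halting checker can exist.
```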
Your ability to see the boundary of the borderline region is itself fuzzy.
Our concepts not only have holes in them; the holes are unseeable, in the sense that exactly where their borders lie is itself unclear.
And these aren’t quirks of our brains, but necessary consequences of any computational creature -- man or machine -- having concepts.
***
This has been adapted from "The Brain from 25000 Feet" (Kluwer, 2003). It is an intentionally over-simplified presentation of my theory of vagueness in natural language. For the initiated, I recommend Chapter 4 of that book, which can be found here: http://www.changizi.com/ChangiziBrain25000Chapter4.pdf
Mark Changizi is Director of Human Cognition at 2AI Labs. He is the author of three books, including his recent "The Vision Revolution" (Benbella, 2009) and his upcoming "Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man" (Benbella, 2011).