
CUP BLOG – But If I Try to Explain It...

By Daniel Cloud, Ph.D.

“What, then, is time?” Saint Augustine asks in the Confessions. “If no one asks me, I know, but if I try to explain it, I don’t know.” It’s a keen observation, because we’ve all had this experience. Still, it’s a rather peculiar state of affairs that’s being described. Did Augustine know what time is, or didn’t he? If he did know, why couldn’t he say what it is? If he didn’t, how could he go around using the word?

But the oddest thing of all is that Augustine then does go on to produce a philosophical analysis of his own concept of time that’s incredibly revealing, one that has been very influential ever since. If he knew all that just by knowing the meaning of the word, why couldn’t he say it in the beginning? Why did he have to do so much work to discover what he’d meant by the word all along? How can this procedure, the careful analysis of our own culturally acquired notions about the meaning of some word in ordinary language, possibly produce knowledge about the real universe, about a physical thing like time?

And yet… the process of lifting ourselves by our own bootstraps had to start somewhere. Historically, it really does look as if the philosophical analysis of ordinary language and ordinary ideas about time and space and causation and chance and knowledge and logic and evidence has played a role. It really looks as if Empedocles and Plato and Aristotle made some sort of contribution to making the existence of Euclid and Ptolemy and Newton and Darwin possible, though it’s very unclear what that contribution was.

Scientists in the modern world still have to rely on ordinary language and ordinary concepts – concepts like “cause” and “evidence” – even in doing scientific reasoning. It’s often necessary to stop and try to puzzle out or clarify the meaning of some word. What do we mean, here, by saying that A is the cause of B? What would count as evidence for that? This kind of occasional philosophical reasoning, in the course of some other inquiry, is probably a lot more important than the much smaller amount of philosophical reasoning done by people who think of themselves as philosophers. Always, the presumption seems to be that there’s some logic in our way of using the word that just has to be teased out by reflection and critique. If we couldn’t rely on this tacit logic, it would be hard to get very far in doing science, because what counted as evidence against a hypothesis or the cause of an event would always be in doubt. We would be in the position of the Cartesian skeptic, reduced to worrying that we’re brains in vats. We constantly rely on this rather implausible source of insight in procuring the kind of knowledge we consider unimpeachable – scientific knowledge. So we can’t really proceed on the assumption that it doesn’t exist.

How can there be useful information waiting to be teased out of ordinary language and ordinary concepts? Well, where do these things actually come from, how did they get to us in their present form? Perhaps there’s something about the process by which they became what they are that explains why they might have things to tell us. Names come associated with necessary truths, things that must be true of anything bearing the name. But where do these nominal necessities come from, and why are they so unreasonably useful?

There have been numerous attempts to answer this question. Without going into too many details, the problem is that we seem to face an unhappy choice between some form of behaviorism, and some form of rationalism. Either our use of words is simply a set of habits produced by positive and negative reinforcement, and any logic it may have simply reflects the pattern of rewards, or it is the result of our conscious, concerted choice to use certain conventional names for certain actual kinds of thing, things like “gold” or “tigers” that have their own inner necessities, reflected accurately in our rational accounts of them, just by being what they are.

If meanings are just habits produced by reinforcement, it isn’t clear why there’s much to be learned from investigating ordinary language and ordinary concepts. If they’re rationally chosen conventions, on the other hand, it isn’t clear why there’s any need to learn anything by investigating them – we should all already know all there is to know about them. David Lewis’s notion of “tacit knowledge” helps a little, but it isn’t clear from his story in Convention where all this tacit knowledge is supposed to have come from, or why its philosophical analysis should reveal heretofore unknown truths.

Confronted with unexplained optimality, anyone who spends much time thinking about biology will look for an evolutionary explanation. But if words evolve, it’s hard to see why that evolution should leave them with a hidden treasure of useful, analyzable meaning waiting to be found by philosophers. It seems as if evolution will optimize words in whatever way is optimal for the words themselves. Why should our words be so optimal for us, for the uses we need to put them to? And why should they have anything to tell us, any more than a fly or a flu virus does?

It seems to me that the solution to this puzzle – the puzzle of why philosophical reflection ever produces useful knowledge, why there’s anything there in ordinary language and common-sense notions to analyze in the first place – can be found in the account Darwin gives, in Descent of Man, of the cultural evolution of personal ornamentation. There, he’s interested in the variety of odd forms of self-decoration in various human populations. Some people knock out some of their teeth, some file them, some people pierce their lips or ears or scar or tattoo their face, and in at least three different groups in widely scattered parts of the world, the custom of enlarging a lip by distending it with a large plate has become common. Darwin apparently found Victorian fashions equally odd. His theory of these things was that they must have evolved in the same way domesticated, ornamental plants and animals acquired their various exaggerated features. The fancier, he explained, always wants just a little more. In every generation, he will choose to breed from the individuals in whom the characteristic traits are just a little bit bigger or brighter or more grotesque than the average individual in the population. He certainly doesn’t want some crazy departure from the norm, but just a little bit more is always welcome. Repeat this bias in every single generation, and soon enough you’ll get a crazy departure from what was once the norm, arrived at by a series of slight exaggerations, the result of choices between naturally occurring variations. The petticoat got enormous, and the lip-plate got big, he suggested, by a similar process of slow, incremental exaggeration, as audiences repeatedly copied exemplary versions of these adornments, instead of the average version.
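To make Darwin’s mechanism concrete, here is a minimal simulation sketch. It is my own illustration, not anything from Darwin or the essay; the function name, population size, noise level, and percentile are all hypothetical. Each generation, variants arise around the current norm and an exemplar just above average is copied – never an outlier – yet the norm drifts into what would once have looked like a crazy departure.

```python
import random

# Toy model of incremental exaggeration (illustrative parameters only):
# each generation, naturally occurring variants appear around the current
# norm, and the "fancier" copies one near the top of the range -- just a
# little bit more, never a crazy departure.
def exaggerate(generations=50, pop_size=100, noise=0.05, percentile=0.9):
    norm = 1.0  # starting trait value, e.g. relative size of an ornament
    for _ in range(generations):
        variants = sorted(norm * (1 + random.gauss(0, noise))
                          for _ in range(pop_size))
        norm = variants[int(percentile * (pop_size - 1))]  # slightly above average
    return norm

random.seed(42)
print(f"trait size after 50 generations: {exaggerate():.1f}x the original norm")
```

In this toy model every individual choice is conservative – a few percent above the norm – but compounded over generations the trait ends up many times its original size. The extremity is a property of the whole history, not of any single choice.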

That’s his explanation of the effect of domestication on beauty, but Darwin also, in the first chapter of Origin of Species, uses it to account for utility, for the amazing usefulness of the various breeds of domesticated animals and plants. The dachshund and the ear of corn became what they are as a result of a long series of rational human choices between naturally occurring variants, evolving incrementally into tools that are optimal for human purposes in ways no one of their owners will ever fully comprehend. There are many rational choices to be reconstructed if we’re to understand the full logic of this one creature’s evolution. The limits and possibilities of maize have shaped the societies of its domesticators in dramatic ways.

Cheeses, and fermented foods in general, may be even more like words. Nothing but a long history of small incremental choices could have produced the complex variety of cheeses and cheese-producing molds and bacteria that exists today. The shapers of Roquefort certainly didn’t understand the biology of the mold that makes it blue. Cheeses are delicious in really interesting ways that make them go well with wine because the people who ate them, over the centuries, were capable of discriminating between and enjoying very complex flavors. Lots and lots of people participated in these choices, so there’s a standard-of-taste or collective wisdom effect. Generation after generation, people with good taste have picked the most delicious and interesting and distinctive variant out of a population of possible choices, and their collective taste is embedded in the cheese as it is today, there to be learned and savored by the discerning palate. Of course, people with bad taste have also made choices. But there are always more ways of being wrong than ways of being right. Each person with bad taste in cheese probably has their own weird preoccupation or insensitivity or form of squeamishness, while the people with good taste when it comes to some particular type of cheese are likely to agree. Over time the flavor of the cheese will come to reflect the consistent judgments of the real gourmets, and not the idiosyncratic opinions of the less competent judges.

It seems to me that ordinary words are like that. There are things to be learned from our ordinary notions of what a cause is or what evidence is because over many generations, discerning people have again and again picked the best version of the word’s meaning available from the naturally occurring variants they found around them, and by this long incremental process the meanings have attained a kind of optimality no one user ever completely understood. Each choice was made on the basis of some particular small feature, which became salient in some local context, but between them they’ve optimized many features, while preserving the harmony of the overall design. Like Roquefort cheese and dachshunds, our concept of evidence is the result of rational choice, of a very large number of locally and myopically rational decisions, but not of a rational plan, or none more serious than the various vague notions and imperfect explanations understood by each of the many users. But their errors have tended to be idiosyncratic and inconsistent, because a mistake can be anything, while their correct choices have been relatively uniform, because reliable evidence is what it is. The goodness of the inherited concept reflects the quality of judgment in the community it’s evolved in, but it ends up being wiser than any one of that community’s members could make it. (And is therefore a suitable topic for philosophical investigation…)
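The claim that idiosyncratic errors wash out while correct judgments agree can also be given a toy “wisdom of crowds” model. Again, this is my own sketch, with made-up quality numbers and noise levels: variants differ in true usefulness, each judge perceives that usefulness through private noise, and the plurality choice of many judges tracks the genuinely best variant far more reliably than any single judge does.

```python
import random

# Toy wisdom-of-crowds model (all numbers hypothetical): judges perceive
# each variant's true quality through idiosyncratic noise and vote for
# whichever looks best; errors scatter, correct judgments coincide.
def plurality_pick(qualities, judges, noise=1.0):
    votes = [0] * len(qualities)
    for _ in range(judges):
        perceived = [q + random.gauss(0, noise) for q in qualities]
        votes[perceived.index(max(perceived))] += 1
    return votes.index(max(votes))

random.seed(0)
qualities = [0.0, 0.2, 0.4, 1.0]  # variant 3 is genuinely best
trials = 1000
solo  = sum(plurality_pick(qualities, judges=1) == 3 for _ in range(trials)) / trials
crowd = sum(plurality_pick(qualities, judges=101) == 3 for _ in range(trials)) / trials
print(f"lone judge picks the best variant:      {solo:.0%}")
print(f"plurality of 101 judges picks the best: {crowd:.0%}")
```

In runs of this sketch the lone judge is right only somewhat more than half the time, while the plurality is right almost always – one way of seeing how an inherited meaning could end up wiser than any single member of the community that shaped it.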

Hilary Putnam pointed out, in “The Meaning of ‘Meaning’”, that meanings are always social, that some people – goldsmiths – are always responsible for knowing more about what is and isn’t gold than others, who rely on what the goldsmiths tell them. But I think the work of determining what is and isn’t gold is also shared across time, in a way that’s a little less intuitively obvious. The goldsmith in the present is the beneficiary of a whole cumulative history of slight refinements in our story about what gold is, and relies on the decisions of past goldsmiths to supply him with this knowledge of what counts as gold as much as we rely on his decisions about what is gold. The same thing was probably true of Augustine’s understanding of time. The people who bequeathed it to him made many different mistakes as individuals, but they ended up agreeing about a few things, so the ordinary meaning associated with the ordinary word came down to him with a kind of optimality that already needed a lot of unraveling. Common sense comes to us without much documentation, so we often need to do some reverse engineering to get a full appreciation of the design.


This article by Daniel Cloud, Ph.D. appeared on the Columbia University Press Blog on January 8, 2015.