Renewing Philosophy Summary
Scientism is the position that the only source of real knowledge about humans and the universe in which they live is the physical sciences. Hilary Putnam calls this doctrine “one of the most dangerous contemporary intellectual tendencies,” and he said in 1983 that the critique of it “is a duty for a philosopher who views his enterprise as more than a purely technical discipline.” In the first two-thirds of Renewing Philosophy Putnam discharges his self-imposed duty. He does so by focusing on recent attempts by philosophers to provide a wholly scientific account of what has come to be the central problem of the philosophy of mind and language: intentionality. One way to state the problem is to ask: How is it that words are able to refer to, or “hook onto,” the world? Another is to ask: How are humans able to use marks on a piece of paper or vibrations of sound waves to represent things in the world to which they bear no resemblance?
Some people would say that there is nothing at all mysterious about the notion of reference: Point to a cat, repeat the word “cat” several times, and soon a child learns what the word “cat” means. What’s the big deal? The problem is that this response begs the question. The idea of “pointing to” is itself loaded with intentionality. As the cliché says, “Be careful where you point because there are always three fingers pointing back!” It is only because children somehow understand the intention of a person who is pointing that the act of pointing can succeed.
Thus, the existence of intentionality has become an embarrassment for those who see the physical sciences as the only real source of knowledge. The three most common attempts to eliminate this embarrassment are artificial intelligence (AI), the attempt to give an evolutionary explanation of language, and what philosophers have termed the “causal theory of reference.” Putnam argues that all three projects are doomed to fail.
The central idea behind AI is that the mind is really nothing more than a kind of “reckoning machine.” This idea goes back to the seventeenth century and the beginning of the “scientific view” of the world. Yet it was only with the perfection of the digital computer in the second half of this century that anyone seriously believed that scientists might actually create a “reckoning machine” with the abilities of a human mind.
And exactly what are these abilities? In a famous paper published in 1950, Alan Turing specified a fairly simple test for determining whether a machine really could think. Place a computer and a person behind a wall and allow a second person to exchange typewritten messages with both. If the two people, working as a team, are not able to “unmask” the computer, then the computer must be credited with the ability to use language and, hence, to think as well as humans.
Putnam says that AI breaks down at the same place logical positivism broke down earlier in this century, namely, solving the problem of induction. He illustrates one aspect of this “huge problem” with the following example:
As far as I know, no one who has ever entered Emerson Hall in Harvard University has been able to speak Inuit (Eskimo). Thinking formalistically, this suggests the induction that if any person X enters Emerson Hall, then X does not speak Inuit. Let Ukik be an Eskimo in Alaska who speaks Inuit. Shall I predict that if Ukik enters Emerson Hall, then Ukik will no longer be able to speak Inuit? Obviously not, but what is wrong with this induction?
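The flaw in this reasoning can be made explicit by writing out the generalization the observations are supposed to support (a hypothetical formalization for illustration, not Putnam’s own notation):

```latex
% Every observed visitor to Emerson Hall has failed to speak Inuit,
% which "confirms" the generalization
\forall x \,\bigl(\mathrm{Enters}(x, \mathrm{EmersonHall}) \rightarrow \neg\,\mathrm{SpeaksInuit}(x)\bigr)
% Instantiating with Ukik, who does speak Inuit, yields the absurd prediction
\mathrm{Enters}(\mathrm{Ukik}, \mathrm{EmersonHall}) \rightarrow \neg\,\mathrm{SpeaksInuit}(\mathrm{Ukik})
```

Formally, the generalization is as well confirmed by the evidence as any ordinary induction; the trouble is that the correlation between entering Emerson Hall and not speaking Inuit is accidental, and no purely formal criterion tells us which regularities are the lawlike ones worth projecting onto new cases.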
Until this problem is solved it will always be possible to unmask the computer in a Turing test by asking it to make what for any five-year-old child is a simple induction.
Evolutionary and causal theories of reference both attempt to explain the way language refers to things in the world in terms of causal attachments. For example, the word “cat” refers to actual cats because, in the most basic cases, actual cats cause people to say or write “cats” more often than...