Since the 1960’s, the computer has been used increasingly as a metaphor for the human mind. For those acquainted with the history of science, this development is not surprising. It is not easy to describe the workings of our minds, and we are constantly tempted to use the latest technology as a model for trying to understand them. Sir Charles Sherrington, the British neuroscientist and Nobel Prize winner, thought that the brain worked like a telegraph system. Sigmund Freud often compared the brain to hydraulic and electromagnetic systems. Gottfried Leibniz compared it to a mill. Some of the ancient Greeks thought that the brain functioned like a catapult. So there should be no cause for surprise that a popular contemporary metaphor for the brain is the digital computer.
The computer is probably no better and no worse as a metaphor for the brain than earlier mechanical metaphors; one can learn as much about the brain by comparing it to a computer as one does by saying that it is a telephone switchboard or telegraph system. This propensity for comparisons is not only evident in popularized accounts. Among scientists, too, there is a persistent need for visualization. In the nineteenth century, Alessandro Volta and André-Marie Ampère represented electricity by the pressures and flows of fluids, and we still rely on their metaphor when we talk about electrical current. Not long ago, chemists were using a visual term, “hooks,” to describe the action of the chemical bond. The problem is acute in modern physics—especially in the description of particles and quanta—and in studies of the human brain. Such metaphors are frequently intended as helpful devices, not to be taken too seriously; in both the exact sciences and those called “soft,” such as the social sciences, the preferred word is “model.” In the search for an answer to a problem or an explanation of data, models are frequently used on an improvised, trial-and-error basis in the early stages. During this process, tentative models are rapidly tried out and discarded; in the search for one that works, dozens may be rejected along the way.
In another, less tentative fashion, however, metaphors are used not simply as vehicles to explain already accepted scientific discoveries but rather as engines of discovery. In such frankly speculative work, there is no way of knowing if the models proposed are true or false. When independent confirmation is impossible, one must remember that not all models are correct; one of the most common fallacies in the repertoire of imprecise thinking is what logicians call the “fallacy of false analogy.”
The above is a necessary prologue to considering Marvin Minsky’s The Society of Mind, because Minsky’s work is purely speculative. The entire field of artificial intelligence—what is called “AI”—is based on a metaphor. In practice, the term “artificial” means “computerized,” and “intelligence” refers to the accumulated technology—it is considerable—of computer science. AI has become a field of study at an increasing number of American universities. The most prominent of these centers are at the Massachusetts Institute of Technology (MIT), Carnegie-Mellon University, Yale University, and Stanford University. Several technical journals are already devoted exclusively to the field. Minsky is sometimes called one of the four “founding fathers” of artificial intelligence; the others are Allen Newell and Herbert Simon of Carnegie-Mellon and John McCarthy, at Stanford. Minsky’s background is primarily in mathematics and robotics. He received a B.A. in mathematics from Harvard University and a Ph.D. from Princeton University; he returned to Harvard as a Junior Fellow in the 1950’s and worked there for three years on problems of how people think, using computerized models. He moved later to MIT, where he is the Donner Professor of Science. A large number of the younger workers in the field of AI have been trained by Minsky as graduate students in his laboratory at MIT.
Is AI a legitimate field of study, a genuine discipline? No one doubts the importance of computers. Our material culture is increasingly computerized in almost every aspect; ever since the mathematician John von Neumann discovered after World War II that he would be unable to perform the calculations necessary to construct a hydrogen bomb without more powerful computing machines, scientists have come to rely increasingly on computers. What is contested is the basic notion of AI and the far-reaching claims made for it. The dispute has become a significant quarrel of the 1980’s. Two major partisans of AI are Minsky and Douglas Hofstadter, author of the book Gödel, Escher, Bach: An Eternal Golden Braid (1979), which has gained a cult following. Two prominent critics are John R. Searle, author of Minds, Brains, and Science (1985), and his colleague in the philosophy department of the University of California at Berkeley, Hubert L. Dreyfus, author of What Computers Can’t Do: The Limits of Artificial Intelligence (1979).
The debate is important, and neither of the two sides should be underestimated. The basic simile involved—that the human mind is like a computer—has run wild among some partisans of AI, who are creating a kind of science fiction, albeit with an academic patina. Clearly it is painful for some, especially for analytical philosophers with a thorough grounding in logic, such as Searle and Dreyfus, to read the work of mathematicians who do not define their terms with care. It would seem a matter of common sense to decide whether a machine should be called “intelligent” or not; all that is required is to list the things that it can do and those that it cannot. Such a list, however, is often impossible to draw up. The curiosity and speculations of AI advocates are fueled by two large unknowns: first, the nature of the human mind itself, and second, the future development of computer capabilities, given their extraordinarily rapid evolution.
AI advocates have attracted much criticism because of their excessive enthusiasm. Minsky is on record as saying that the next generation of computers will be so intelligent that we will “be lucky if they are willing to keep us around the house as household pets.” McCarthy, who invented the term “artificial intelligence,” has claimed that even machines as simple as thermostats can be said to have beliefs. Beliefs? When challenged, he replied that his thermostat has three beliefs: “It’s too hot in here, it’s too cold in here, and it’s just right in here.” A logician could easily make mincemeat of such an assertion, as Searle has done. “In an utterly trivial sense,” he writes, “the pen that is on the desk in front of me can be described as a digital computer. It just happens to have a very boring computer program. The program says: ‘Stay there.’... Anything whatever is a digital computer.”
It has been said that the field of artificial intelligence should be saved from its own partisans, and it is unquestionable that there is much that is worth saving. Minsky’s colleague at MIT, Joseph Weizenbaum, was cocreator of a computer program used for diagnosis in hospitals; he has also commented on the unfortunate eagerness with which ordinary people embrace the computer as metaphor for themselves or their fellowman. He maintains that the...