Form and Content

(Literary Essentials: Nonfiction Masterpieces)

Setting out to explain how the mind works and how intelligence is built out of smaller components, The Society of Mind posits that the mind is a society of small processes called “agents,” which work together to produce action, thought, commonsense reasoning, emotion, and memory. Marvin Minsky draws on examples from computer programs and artificial intelligence to demonstrate how intelligence, or mind, can be created out of small, repetitive steps.

The form of the book reflects this idea of mental societies: it is made up of 270 page-length essays, each presenting a single idea, theory, or demonstration that connects in different ways to those of the other essays. The form imposes no hierarchical order on the material; instead, numerous cross-references are incorporated in the text, the glossary, and the index, mirroring the way the mind itself cross-connects its agents. The book makes extensive use of diagrams and drawings to illustrate its concepts, and literary quotations are incorporated in the text to provide cultural cross-references. While the book does not rely heavily on technical psychological terminology, Minsky has coined many new terms to describe mental processes; definitions are given in the text and in a glossary. There is also an appendix which discusses the relationship of the mind to the brain.

To introduce the idea of agents, Minsky describes a computer program, “Builder,” which he and Seymour Papert developed in the late 1960’s. It combined a mechanical hand, a television eye, and a computer into a robot which could build a tower out of children’s blocks. The program had to use agents to “see” a block, “grasp” it, “place” it, and “release” it. In addition, “Builder” had to be programmed, or taught, such concepts as not using a block already in the tower and how to begin and end the tower. Each of these agents, taken individually, is simple and does nothing that would normally be considered intelligent. “Builder” itself merely activates each separate agent. To understand the system as a whole, one must know how each part works, how it interacts with those to which it is connected, and how the parts combine to accomplish a given function. Intelligence, or mind, operates the same way. An additional complexity is that the mind can perform a virtually unlimited number of procedures, so there must be an agent which decides which procedure will take precedence. A variety of agents such as noncompromise, hierarchies, and heterarchies can...
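To make the division of labor concrete, the following Python sketch is a minimal, hypothetical illustration of the arrangement described above, not Minsky and Papert’s actual program: the agent names, the toy “world” of free blocks and a growing tower, and the target height are all invented for the example. The point is only that the top-level agent produces a seemingly purposeful result while doing nothing but activating simpler agents, none of which is intelligent on its own.

    # A hypothetical toy model of the "society of agents" idea, not the original Builder.
    # Each agent is a trivially simple routine; "builder" merely activates them in turn.

    def see(world):
        """Find the next free block, if any."""
        return world["free_blocks"][0] if world["free_blocks"] else None

    def grasp(world, block):
        """Take the block out of the pile of free blocks."""
        world["free_blocks"].remove(block)
        world["held"] = block

    def place(world):
        """Put the held block on top of the tower."""
        world["tower"].append(world["held"])

    def release(world):
        """Let go of the block."""
        world["held"] = None

    def builder(world, target_height):
        """Activate the subordinate agents until the tower is tall enough.
        Knowing when to begin and end the tower is itself part of the program."""
        while len(world["tower"]) < target_height:
            block = see(world)
            if block is None:        # no free blocks left, so stop
                break
            grasp(world, block)
            place(world)
            release(world)
        return world["tower"]

    world = {"free_blocks": ["A", "B", "C", "D"], "tower": [], "held": None}
    print(builder(world, 3))         # ['A', 'B', 'C']

No single routine here does anything that looks like intelligence; whatever appears purposeful emerges from how the parts are connected, which is the claim the book generalizes to the mind.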


The Society of Mind

(Literary Masterpieces, Volume 8)

Since the 1960’s, the computer has been used increasingly as a metaphor for the human mind. For those acquainted with the history of science, this development is not surprising. It is not easy to describe the workings of our minds, and we are constantly tempted to use the latest technology as a model for trying to understand them. Sir Charles Sherrington, the British neuroscientist and Nobel Prize winner, thought that the brain worked like a telegraph system. Sigmund Freud often compared the brain to hydraulic and electromagnetic systems. Gottfried Leibniz compared it to a mill. Some of the ancient Greeks thought that the brain functioned like a catapult. So there should be no cause for surprise that a popular contemporary metaphor for the brain is the digital computer.

The computer is probably no better and no worse as a metaphor for the brain than earlier mechanical metaphors; one can learn as much about the brain by comparing it to a computer as by saying that it is a telephone switchboard or telegraph system. This propensity for comparison is not evident only in popularized accounts. Among scientists, too, there is a persistent need for visualization. In the nineteenth century, Alessandro Volta and André-Marie Ampère represented electricity by the pressures and flows of fluids, and we still rely on their metaphor when we talk about electrical current. Not long ago, chemists were using a visual term, “hooks,” to describe the action of the chemical bond. The problem is acute in modern physics—especially in the description of particles and quanta—and in studies of the human brain. Such metaphors are frequently intended as helpful devices, not to be taken too seriously; in both the exact sciences and those called “soft,” such as the social sciences, the preferred word is “model.” In the search for an answer to a problem or an explanation of data, models are frequently improvised on a trial-and-error basis in the early stages; tentative models are rapidly tried out and discarded, and dozens may be rejected before one that works is found.

In another, less tentative fashion, however, metaphors are used not simply as vehicles to explain already accepted scientific discoveries but rather as engines of discovery. In such frankly speculative work, there is no way of knowing if the models proposed are true or false. When independent confirmation is impossible, one must remember that not all models are correct; one of the most common fallacies in the repertoire of imprecise thinking is what logicians call the “fallacy of false analogy.”

The above is a necessary prologue to considering Marvin Minsky’s The Society of Mind, because Minsky’s work is purely speculative. The entire field of artificial intelligence—what is called “AI”—is based on a metaphor. In practice, the term “artificial” means “computerized,” and “intelligence” refers to the accumulated technology—it is considerable—of computer science. AI has become a field of study at an increasing number of American universities. The most prominent of these centers are at the Massachusetts Institute of Technology (MIT), Carnegie-Mellon University, Yale University, and Stanford University. Several technical journals are already devoted exclusively to the field. Minsky is sometimes called one of the four “founding fathers” of artificial intelligence; the others are Allen Newell and Herbert Simon of Carnegie-Mellon and John McCarthy of Stanford. Minsky’s background is primarily in mathematics and robotics. He received a B.A. in mathematics from Harvard University and a Ph.D. from Princeton University; he returned to Harvard as a Junior Fellow in the 1950’s and worked there for three years on problems of how people think, using computerized models. He later moved to MIT, where he is the Donner Professor of Science. Many of the younger workers in the field of AI were trained by Minsky as graduate students in his laboratory at MIT.

Is AI a legitimate field of study, a genuine discipline? No one doubts the importance of computers. Our material culture is increasingly computerized in almost every aspect; ever since the mathematician John von Neumann discovered after World War II that he would be unable to perform the calculations necessary to construct a hydrogen bomb without more powerful computing machines, scientists have come to rely increasingly on computers. What is contested is the basic notion of AI and the far-reaching claims made for it. The dispute has become a significant quarrel of the 1980’s. Two major partisans of AI are Minsky and Douglas Hofstadter, author of the book Gödel, Escher, Bach: An Eternal Golden Braid (1979), which has gained a cult following. Two prominent critics are John R. Searle, author of Minds, Brains, and Science (1985), and his colleague in the philosophy department of the University of California at Berkeley, Hubert L. Dreyfus, author of What Computers Can’t Do: The Limits of Artificial Intelligence (1979).

The debate is important, and neither side should be underestimated. The basic simile involved—that the human mind is like a computer—has run wild among some partisans of AI, who are creating a mode of science fiction, albeit with an academic patina. Clearly it is painful for some, especially for analytical philosophers with a thorough grounding in logic, such as Searle and Dreyfus, to read the work of mathematicians who do not define their terms with care. It would seem a matter of common sense to decide whether a machine should be called “intelligent” or not; all that is required is to list the things that it can do and those that it cannot. Such a list, however, is often impossible to draw up. The curiosity and speculations of AI advocates are fueled by two large unknowns: first, the nature of the human mind itself, and second, the future development of computer capabilities, given their extraordinarily rapid evolution.

AI advocates have attracted much criticism because of their excessive enthusiasm. Minsky is on record as saying that the next generation of computers will be so intelligent that we will “be lucky if they are willing to keep us around the house as household pets.” McCarthy, who invented the term “artificial intelligence,” has claimed that even machines as simple as thermostats can be said to have beliefs. Beliefs? When challenged, he replied that his thermostat has three beliefs: “It’s too hot in here, it’s too cold in here, and it’s just right in here.” A logician could easily make mincemeat of such an assertion, as Searle has done. “In an utterly trivial sense,” he writes, “the pen that is on the desk in front of me can be described as a digital computer. It just happens to have a very boring computer program. The program says: ‘Stay there.’... Anything whatever is a digital computer.”

It has been said that the field of artificial intelligence should be saved from its own partisans, and there is unquestionably much worth saving. Minsky’s colleague at MIT, Joseph Weizenbaum, was cocreator of a computer program used for diagnosis in hospitals; he has also commented on the unfortunate eagerness with which ordinary people embrace the computer as a metaphor for themselves or their fellow man. He maintains that the...


The Society of Mind Bibliography

(Literary Essentials: Nonfiction Masterpieces)

Bernstein, Jeremy. “Mind and Machine: Profile of Marvin Minsky,” in Science Observed: Essays Out of My Mind, 1982.

Johnson-Laird, P.N. “Minsky’s Mentality,” in Nature. CCCXXVIII (July 30, 1987), pp. 387-388.

McCorduck, Pamela. Machines Who Think, 1979.

Meer, Jeff. “Mind Models: How Far Have We Come?” in Psychology Today. XXI (May, 1987), pp. 102-103.

Winston, Patrick H., and Richard H. Brown, eds. Artificial Intelligence: An MIT Perspective, 1979 (3 volumes).
