Last Updated on May 5, 2015, by eNotes Editorial. Word Count: 1054
Defining how the mind works and how intelligence is built out of smaller components, The Society of Mind posits the idea that the mind is a society made up of small processes called “agents,” which work together to produce action, thought, commonsense reasoning, emotion, and memory. Marvin Minsky uses examples of computer programs and artificial intelligence to demonstrate how intelligence or mind can be created out of small, repetitive steps.
The form of the book reflects this idea of mental societies: it is made up of 270 page-length essays, each presenting a single idea, theory, or demonstration that is connected in different ways to those of the other essays. The form imposes no hierarchical order on the material; instead, numerous cross-references are incorporated in the text, the glossary, and the index, reflecting the way the mind itself cross-connects its agents. The book makes extensive use of diagrams and drawings to demonstrate its concepts, and literary quotations are incorporated in the text to provide examples of cultural cross-references. While the book does not use a great deal of technical psychological terminology, Minsky has created many new terms to describe mental processes; definitions are given in the text and in a glossary. There is also an appendix which discusses the relationship of the mind to the brain.
To introduce the idea of agents, Minsky describes a computer program, “Builder,” which he and Seymour Papert developed in the late 1960’s. It combined a mechanical hand, a television eye, and a computer into a robot which could build a tower out of children’s blocks. The program had to use agents to “see” the block, “grasp” it, “place” it, and “release” it. In addition, “Builder” had to be programmed or taught such concepts as not using a block already in the tower and how to begin and end the tower. Each of these agents, individually, is simple and performs no activity that would normally be considered intelligent. “Builder” itself merely activates each separate agent. To understand the system as a whole, one must know how each part works, how it interacts with those to which it is connected, and how the parts combine to accomplish a given function. Intelligence, or mind, operates the same way. An additional complexity is that the mind can perform a virtually unlimited number of procedures; there must therefore be an agent which decides which procedure will take precedence. A variety of agents, such as noncompromise, hierarchies, and heterarchies, can serve this function, and pain and pleasure are also agents which help the mind determine which procedure to give priority. From these simple agents, the mind builds the self, a sense of individuality, consciousness, and meaning.
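The decomposition Minsky describes can be suggested in a few lines of code. The following is a toy sketch, not the actual Builder program; the agent names (see, grasp, place, release) and the don't-reuse-a-block rule are taken from the description above, but everything else is invented for illustration:

```python
# A toy sketch of how simple, individually unintelligent agents can be
# sequenced into a competence. Not Minsky's Builder; names illustrative.

def make_agent(name, log):
    """Return a trivial agent: it only records that it acted on a block."""
    def agent(block):
        log.append(f"{name}({block})")
    return agent

def builder(blocks):
    """Activate each sub-agent in turn for every block in the tower."""
    log = []
    see, grasp, place, release = (make_agent(n, log)
                                  for n in ("see", "grasp", "place", "release"))
    tower = []
    for block in blocks:
        if block in tower:   # a taught rule: don't reuse a block already placed
            continue
        see(block)
        grasp(block)
        place(block)
        release(block)
        tower.append(block)
    return tower, log

tower, log = builder(["a", "b", "a", "c"])
print(tower)   # ['a', 'b', 'c'] -- the duplicate "a" is skipped
```

The point, as in the book, is that `builder` itself does nothing clever; it merely activates its agents, and the apparent competence lies in their arrangement.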
Intelligence is defined as the ability to solve “hard problems” fairly rapidly and individually. Minsky excludes from intelligence instinctive behavior. The ability to solve hard problems often relies on the use of memory. A theory of memory must be able to answer questions about knowledge such as how it is represented, stored, retrieved, and used. The theory proposed is that “we keep each thing we learn close to the agents that learn it in the first place.” The mind can activate an agent called a “knowledge-line” to do all these things. Knowledge-lines can attach to other knowledge-lines, which in turn create societies. These societies are organized into various “level-bands”; thus, any given mental process operates at any given moment only within a specified range of the structure of the agent. The idea of a level-band explains how it is possible for one process to concentrate on details while other processes are concerned with large-scale plans. From the concepts of agents, knowledge-lines, and level-bands, learning, reasoning, emotions, and language can develop.
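The knowledge-line theory quoted above ("we keep each thing we learn close to the agents that learn it in the first place") can be sketched mechanically: a knowledge-line records which agents were active at the moment of learning, and activating it later re-arouses those same agents. This is a minimal illustration, not Minsky's formulation; the agent names are invented:

```python
# A minimal sketch of a "knowledge-line": it attaches to the agents that
# were active when something was learned, and reactivating it restores
# that partial mental state. Agent names here are purely illustrative.

class KLine:
    def __init__(self, active_agents):
        # snapshot the agents active at learning time
        self.attached = frozenset(active_agents)

    def activate(self):
        # recall: re-arouse exactly the agents attached at learning time
        return set(self.attached)

# at the moment of learning about a kite, these agents happen to be active
kite_memory = KLine({"see-red", "grasp-string", "look-up"})

# later, activating the knowledge-line restores the same set of agents
print(kite_memory.activate())
```

Knowledge-lines attaching to other knowledge-lines would simply mean allowing `KLine` objects themselves to appear among the attached agents, building the societies the text describes.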
There are at least four different ways of learning, or “making useful changes in the workings of our minds”: “uniframing” combines several specific instances into a generalization; “accumulating” collects examples which violate the generalization; “reformulating” modifies the uniframe or accumulation; and “trans-framing” bridges structures, functions, and actions. These learning strategies, and problem solving generally, depend on short-term memory to modify strategies, remember what has just been done, and do something differently. There are many kinds of memory, some attached to time frames and others totally detached from time. These different kinds of memory allow mental processes to be interrupted and to be broken up into smaller units.
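The first two strategies lend themselves to a small sketch. What follows is illustrative only, with invented slot names; "uniframing" here keeps the properties shared by every example, and "accumulating" collects the examples that violate a generalization:

```python
# Illustrative sketches of "uniframing" (merge instances into a
# generalization) and "accumulating" (collect violating examples).
# The examples and slot names are invented for demonstration.

def uniframe(examples):
    """Keep only the property-value pairs shared by every example."""
    shared = dict(examples[0])
    for ex in examples[1:]:
        shared = {k: v for k, v in shared.items() if ex.get(k) == v}
    return shared

def accumulate_exceptions(generalization, examples):
    """Collect the examples that violate some slot of the generalization."""
    return [ex for ex in examples
            if any(ex.get(k) != v for k, v in generalization.items())]

birds = [{"flies": True,  "feathers": True},   # a robin
         {"flies": True,  "feathers": True},   # a sparrow
         {"flies": False, "feathers": True}]   # a penguin

general = uniframe(birds)                      # only "feathers" survives
exceptions = accumulate_exceptions({"flies": True}, birds)
print(general, len(exceptions))
```

"Reformulating" would then revise the uniframe in light of the accumulated exceptions, which is where short-term memory of what was just tried becomes essential.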
Reasoning is often divided into two different types: logical and common sense. Logical reasoning is often perceived as more difficult than commonsense reasoning, but actually the reverse is true. Logic follows rigid rules for creating chains of reasons. In fact, it is easier to program a computer to express logical reasoning than the commonsense variety. Commonsense reasoning makes chains through causes, similarities, and dependencies.
Although Western, scientific culture tends to emphasize that thought and emotion are very different, emotions are varieties or types of thoughts built up out of different brain agencies. Emotions may be necessary for certain kinds of learning to take place, especially for constructing coherent value systems or participating in a culture. The emotion of attachment may be requisite for developing a knowledge of language.
To explain how cultural concepts and language are processed, Minsky introduces the idea of “frames,” or a sort of skeletal outline with slots to be filled. Each slot can be connected to other structures and is connected to a “default assumption,” or a basic idea which can be modified or changed as more specific information is gained. For example, for most people the frame for “bird” is a feathered, winged creature that flies. When it is known that the particular bird under discussion is a penguin, that slot in that frame is modified, but the basic default assumptions about birds do not change. Frames such as grammatical structures help in the understanding of sentences. Frames such as cultural contexts and knowledge help in the understanding of language and stories. Most human communication is possible because frames of reference and meaning are constructed.
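The bird-and-penguin example above maps naturally onto a few lines of code. This is a hedged sketch, assuming nothing beyond the text: slots hold default assumptions, and specific knowledge overrides a slot without altering the underlying frame:

```python
# A sketch of a "frame" with default assumptions: slots hold defaults
# that are used until more specific information overrides them.
# The slot names are illustrative.

bird_frame = {"covering": "feathers", "wings": True, "flies": True}

def instantiate(frame, **specifics):
    """Fill a frame's slots, letting specific knowledge override defaults."""
    instance = dict(frame)      # start from the default assumptions
    instance.update(specifics)  # override only the slots known to differ
    return instance

penguin = instantiate(bird_frame, flies=False)
print(penguin["flies"])     # False: the slot was modified for this bird...
print(bird_frame["flies"])  # True: ...but the default assumption is unchanged
```

The same two-step structure, defaults plus overrides, is what lets a frame stand in for a typical situation while still accommodating exceptions.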
The power of intelligence comes from its diversity, because it can be made up of many different parts. Humans have many effective, although imperfect, means of achieving and expressing intelligence. The society of mind provides duplication and alternative perspectives which give intelligence versatility and durability.
Since the 1960’s, the computer has been used increasingly as a metaphor for the human mind. For those acquainted with the history of science, this development is not surprising. It is not easy to describe the workings of our minds, and we are constantly tempted to use the latest technology as a model for trying to understand them. Sir Charles Sherrington, the British neuroscientist and Nobel Prize winner, thought that the brain worked like a telegraph system. Sigmund Freud often compared the brain to hydraulic and electromagnetic systems. Gottfried Leibniz compared it to a mill. Some of the ancient Greeks thought that the brain functioned like a catapult. So there should be no cause for surprise that a popular contemporary metaphor for the brain is the digital computer.
The computer is probably no better and no worse as a metaphor for the brain than earlier mechanical metaphors; one can learn as much about the brain by comparing it to a computer as one does by saying that it is a telephone switchboard or telegraph system. This propensity for comparisons is not only evident in popularized accounts. Among scientists, too, there is a persistent need for visualization. In the nineteenth century, Alessandro Volta and André-Marie Ampère represented electricity by the pressures and flows of fluids, and we still rely on their metaphor when we talk about electrical current. Not long ago, chemists were using a visual term, “hooks,” to describe the action of the chemical bond. The problem is acute in modern physics—especially in the description of particles and quanta—and in studies of the human brain. Such metaphors are frequently intended to be helpful devices, not to be taken too seriously; in both the exact sciences and those called “soft,” such as the social sciences, the preferred word is “model.” In the search for an answer to a problem or an explanation of data, models are frequently used on an improvised, trial-and-error basis in the early stages. During the process, these tentative models are rapidly tried out and discarded. In the search for one that works, dozens may be rejected along the way.
In another, less tentative fashion, however, metaphors are used not simply as vehicles to explain already accepted scientific discoveries but rather as engines of discovery. In such frankly speculative work, there is no way of knowing if the models proposed are true or false. When independent confirmation is impossible, one must remember that not all models are correct; one of the most common fallacies in the repertoire of imprecise thinking is what logicians call the “fallacy of false analogy.”
The above is a necessary prologue to considering Marvin Minsky’s The Society of Mind, because Minsky’s work is purely speculative. The entire field of artificial intelligence—what is called “AI”—is based on a metaphor. In practice, the term “artificial” means “computerized,” and “intelligence” refers to the accumulated technology—it is considerable—of computer science. AI has become a field of study at an increasing number of American universities. The most prominent of these centers are at the Massachusetts Institute of Technology (MIT), Carnegie-Mellon University, Yale University, and Stanford University. Several technical journals are already devoted exclusively to the field. Minsky is sometimes called one of the four “founding fathers” of artificial intelligence; the others are Allen Newell and Herbert Simon of Carnegie-Mellon and John McCarthy, at Stanford. Minsky’s background is primarily in mathematics and robotics. He received a B.A. in mathematics from Harvard University and a Ph.D. from Princeton University; he returned to Harvard as a Junior Fellow in the 1950’s and worked there for three years on problems of how people think, using computerized models. He moved later to MIT, where he is the Donner Professor of Science. A large number of the younger workers in the field of AI have been trained by Minsky as graduate students in his laboratory at MIT.
Is AI a legitimate field of study, a genuine discipline? No one doubts the importance of computers. Our material culture is increasingly computerized in almost every aspect; ever since the mathematician John von Neumann discovered after World War II that he would be unable to perform the necessary calculations to construct a hydrogen bomb without more powerful computing machines, scientists have come to rely increasingly on computers. What is contested is the basic notion of AI and the far-reaching claims for it. The dispute has become a significant quarrel of the 1980’s. Two major partisans of AI are Minsky and Douglas Hofstadter, author of the book Gödel, Escher, Bach: An Eternal Golden Braid (1979), which has gained a cult following. Two prominent critics are John R. Searle, author of Minds, Brains, and Science (1985), and his colleague in the philosophy department of the University of California at Berkeley, Hubert L. Dreyfus, author of What Computers Can’t Do: The Limits of Artificial Intelligence (1979).
The debate is important, and neither of the two sides should be underestimated. The basic simile involved—that the human mind is like a computer—has run wild among some partisans of AI, who are creating a mode of science fiction, although with an academic patina. Clearly it is painful for some, especially for analytical philosophers with a thorough grounding in logic, such as Searle and Dreyfus, to read the work of mathematicians who do not define their terms with care. It would seem to be a matter of common sense whether a machine should be called “intelligent” or not; all that is required is to list the things that it can do and those that it cannot. Such a list, however, is often impossible. The curiosity and speculations of AI advocates are fueled by two large unknowns: first, the nature of the human mind itself, and second, the future development of computer capabilities, given their extraordinarily rapid evolution.
AI advocates have attracted much criticism because of their excessive enthusiasm. Minsky is on record as saying that the next generation of computers will be so intelligent that we will “be lucky if they are willing to keep us around the house as household pets.” McCarthy, who invented the term “artificial intelligence,” has claimed that even machines as simple as thermostats can be said to have beliefs. Beliefs? When challenged, he replied that his thermostat has three beliefs: “It’s too hot in here, it’s too cold in here, and it’s just right in here.” A logician could easily make mincemeat of such an assertion, as Searle has done. “In an utterly trivial sense,” he writes, “the pen that is on the desk in front of me can be described as a digital computer. It just happens to have a very boring computer program. The program says: ‘Stay there.’ . . . Anything whatever is a digital computer.”
It has been said that the field of artificial intelligence should be saved from its own partisans, and it is unquestionable that there is much that is worth saving. Minsky’s colleague at MIT, Joseph Weizenbaum, was cocreator of a computer program used for diagnosis in hospitals; he has also commented on the unfortunate eagerness with which ordinary people embrace the computer as a metaphor for themselves or their fellow man. He maintains that the computer will never be able to understand our biological constitution in any real sense. One of the fields closest to AI, robotics, is increasingly important in the industrial and manufacturing worlds. Robotics has become crucial to the ability of many American firms to compete in the international marketplace. The national security of the United States has also come to depend upon robotics—a dependency that became clear in June, 1987, when it was discovered that the Toshiba Corporation of Japan and Kongsberg Vapenfabrik of Norway had illicitly sold computer-controlled industrial robots to the Soviet Union, which permitted the Soviets to mill submarine propellers that make little noise. As a result, Soviet submarines are now more difficult for Americans to track by sonar. Minsky has made great contributions to robotics—he devised some of the pioneering programs for robots in the 1960’s and 1970’s, and in 1985 he edited an influential anthology entitled Robotics. There is no question about the present and future importance of robotics. A few writers use the term artificial intelligence as synonymous with robotics; if this narrower meaning of the term were more widely accepted, many misconceptions would be avoided.
The Society of Mind is an intriguing speculation about the way humans think. It does not make use of the normal language of psychology; it is the product of Minsky’s long experience of devising programs for robots and of trying to simulate with computers the different modes of our consciousness. The unprepared reader will immediately wonder why there are so few references to past and present work in psychology, psychoanalysis, neurology, and the many other disciplines bearing upon the human mind or brain. After all, some of the most stunning advances of modern science have been made in these fields and were popularized in books such as Carl Sagan’s The Dragons of Eden: Speculations on the Evolution of Human Intelligence (1977) or Arthur Koestler’s The Ghost in the Machine (1967). The bibliography to The Society of Mind is sketchy and refers to none of this work. Consequently, Minsky could be accused of lack of erudition—especially for a subject as broad as human thought or “thinking about thinking”—with some justification. The omission, however, is deliberate. For a long time, Minsky had a rule in his laboratory that no psychological data were allowed; he considered that not much could be learned by averaging a lot of people’s responses, for he wanted to get at something more basic. In his own words (unfortunately not quoted in The Society of Mind), “Like what Freud did. Tom Evans and I asked ourselves, in depth, what we did to solve problems . . . and that seemed to work pretty well.”
As a result, the book has a personal, even intimate quality that the reader might not expect in an ambitious study of how people think. The thought processes are Minsky’s own and, to a certain extent, those of children he has observed. To these processes he brings to bear his ample experience in computer programming. As in designing a program, he must start his description from scratch, clearly separating each identifiable step and taking nothing for granted. This exercise probably affords his project its greatest value. Does it justify the deliberate ignoring of so much accumulated knowledge about the human mind? Minsky tries to account for such knowledge anew, in his own terms, in a manner that could be theoretically programmed into a future machine. The book is imaginative from beginning to end. It does not represent science but is instead an extended meditation on how human features, human “consciousness,” could be duplicated in a machine. It is informed by thorough knowledge of computer technology of the period from 1960 to 1985, but few of Minsky’s proposed programs have been tried. The book concludes with a discussion of models, and it is no accident that a book as speculative as The Society of Mind ends on this note. Minsky writes: “Even if our models of the world cannot yield good answers about the world as a whole, and even though their other answers are frequently wrong, they can tell us something about ourselves. We can regard what we learn about our models of the world as constituting our models of our models of the world.”
The style is not graceful, but the point is important. Although at one point Minsky ridicules philosophical idealism and George Berkeley, he stresses the importance of our mental constructs when we think about the world and try to deal with it.
The book has an original format: Each page is a self-contained unit or chapter. This structure is intended to facilitate a process of accumulation, or building with small parts into a large whole. The resulting whole, reached at the conclusion, is to be a theory of how the mind works and represents the world. The organization does not permit extended argument. On the other hand, the pages are large and the format suits the speculative nature of the book. Each page provides a meditation, musing, query, or new idea. Although there is no unified theory resulting from the many chapters that proceed toward increasing generality, there are numerous insights. The chapters on memory are particularly good. Minsky distinguishes between short- and long-term memory, memory that is conscious, “shallow,” and unconscious. These descriptions shed light on many of our everyday states of mind. Minsky speculates that memory is not stored in a single portion of the brain but is distributed throughout the body, different types of memory located next to or adjoining different agents. He has read Marcel Proust closely and with profit. He has also paid close attention to the writings of Jean Piaget, one of the few scientists he mentions by name. Minsky’s concepts of the exploitation of one agent by another within the body are interesting, as are his theories on the maintenance of distinct levels of “management” requiring noninteraction, or insulation, of different levels—what he calls “level-bands,” each having different thresholds.
Charts and diagrams accompany almost every chapter, illustrating the processes that Minsky is describing. Many of the charts suggest wiring; sometimes they are mechanical, and sometimes they resemble flowcharts. A large proportion of the book is devoted to examining motor skills, for example, picking up a block or locating an apple; the final third of the book considers aspects of language. Minsky’s view of the mechanisms responsible for motor skills is complex. There is ambivalence and conflict between the various parts or mechanisms; each individual is an accumulation—a “society”—of myriad agents and subagents, of “frames” (that act as typical situations or stereotypes), “trans-frames” and “frame arrays,” “specialists” and “proto-specialists,” as well as exotic entities such as “polynemes,” “pronomes,” and “isonomes.” His “default assumptions” represent a very suggestive type of inference. Paradoxically, although Minsky uses machinelike concepts—or metaphors—to describe how humans think, the overall model could not be described as mechanistic. The human being, instead, is a republic of minute processes that add up in peculiar, improvised, intertwined ways to a whole that is constantly subject to actual and potential conflict, confronted by difficult choices depending on trial and error as well as on previous learning.
This whole, however, is strangely weak. The emphasis on motor skills prevents Minsky from broaching many of the more adult skills except in passing. Whether deliberately or not, the child is the “model” of the adult. The book does not enter the domain of human history or social conflict. Culture is considered as if it were a storehouse of memories that the community makes available to the child. Minsky’s view of the mind is relatively benign; instead of being potentially destructive, it is potentially weak and chaotic. There is nothing in Minsky’s model that corresponds to what has been called the R-complex and the limbic system. Piaget has clearly exerted a stronger influence on Minsky’s concepts than has Freud. Minsky’s “society” of diverse elements often seems to lack integration, or forces powerful enough to integrate it; those larger synthetic mechanisms for holding the parts of the “society” together and directing them outward at the world are tentative and have little force. Minsky contradicts or demystifies notions of the soul, of the self (contra Erik Erikson’s concept of identity), and of mental energy. His remarks about logic are consistently derogatory. Yet Minsky proposes nothing to take the place of these larger integrative forces. His chapters on adults, at the end of the book, are hurried. No doubt a description of the world of adults would have required a book twice as long. Also, the world of the adult, and of history, is less amenable to laboratory models than is the play of the child.
A peculiar feature of The Society of Mind—and one of the main arguments against it—is Minsky’s deliberate exclusion of biology. Surely the mind is part of the biological world, and as biologically based as growth, or digestion, or the secretion of bile. Brains are biological engines, and their biology matters. It is plausible that psychological terms should be divided into smaller and smaller components in order to understand them—but why should this process bypass the realm of biology altogether? The rejection of biology is strangely anachronistic. Perhaps, as Searle has suggested, it reverts to an earlier dualism in which the mind represents spirit in opposition to the body. In focusing on computer hardware, advocates of artificial intelligence arbitrarily exclude all “wetware,” Searle’s ironical term for those biochemical processes that are often simpler and more economical and use more ingenious structures and shapes than the formal operations of the computer.
This exclusion is especially counterproductive in view of great recent advances in biology, such as the discovery of the genetic code. Jacques Monod, the 1965 Nobel Prize winner and author of Le Hasard et la nécessité (1970; Chance and Necessity, 1971), has speculated that some of our major myths—of Cain and Abel, of a Messiah or Christ-figure, the utopian future of G. W. F. Hegel and Karl Marx—recur in too many different societies to be accidental and thus probably have a genetic basis. Though Monod’s suggestion is only a speculation, it might well lead to future developments in the fields of folklore and myth. There is no place in Minsky’s “society,” however, for such a mechanism.
A paradox of computers is that some of their greatest successes have been in carrying out highly specialized skills, but the most difficult problems arise in creating machines that can mimic the simplest elements in human behavior. Computer operations throw light on many of our mental operations, and Minsky demonstrates this time and time again in The Society of Mind. His theories, however, need expansion before they can be called a comprehensive view of the mind—expansion into the world of the adult and of history, and into other sciences.