(Critical Survey of Contemporary Fiction)

The Dartmouth Conference of 1956 was the first meeting organized around the topic of artificial intelligence, and it brought that term into broad use. The participants could not agree on the meaning of the topic or on the proper approaches to studying it; those questions have persisted in some form to the present. The main objective of artificial intelligence that emerged was getting computers to perform tasks that, if performed by humans, would require intelligence.

Soon after the Dartmouth Conference, researchers began studying how humans solve problems to see if they could emulate that behavior with computers. Other research, in a different vein, explored how to exploit the distinctive characteristics of computers. Programmers soon taught computers simple rules of logic and allowed the machines to “learn” in artificial worlds, figuring out how elements of those worlds related to each other. Joseph Weizenbaum developed the ELIZA program, which simulates a psychoanalyst and seemingly satisfies the goal of artificial intelligence in that people interacting with it believe that its responses are those of a psychoanalyst. That program, however, does nothing more than cleverly rearrange the “patient’s” previous statements and insert a few stock phrases.
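The trick behind ELIZA can be illustrated with a brief sketch. The patterns, responses, and pronoun swaps below are hypothetical stand-ins, far simpler than Weizenbaum's actual script, but they show the mechanism the review describes: rearrange the patient's own words and fall back on stock phrases.

```python
import re
import random

# Pronoun swaps used to "reflect" the patient's words back at them.
# This table is illustrative, not Weizenbaum's original.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Hypothetical rules: a pattern to match and a response template that
# reuses whatever the patient said after the matched phrase.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
]

# Stock phrases used when no pattern matches.
STOCK = ["Please go on.", "Tell me more.", "How does that make you feel?"]

def reflect(fragment):
    # Swap first-person words for second-person ones so the
    # statement can be echoed back as a question.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(statement):
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return random.choice(STOCK)  # no rule matched; use a stock phrase
```

A session then consists of nothing more than calling `respond` on each line the "patient" types; no understanding is involved, which is exactly the review's point.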

The development of expert systems brought corporate attention to artificial intelligence. Developers of these systems coded the knowledge of experts in various fields into rules that computers could apply. Many of the systems were able to give solutions to problems that were almost as good as those of the experts whose knowledge they used.
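The rule-application idea behind expert systems can be sketched as a small forward-chaining engine. The domain, the fact names, and the rules here are invented for illustration; the point is the structure, in which expert knowledge is coded as if-then rules that the machine applies until no new conclusions emerge.

```python
# Hypothetical diagnostic rules: (set of required facts, conclusion).
# In a real expert system these would encode a human expert's knowledge.
RULES = [
    ({"engine_silent"}, "check_battery"),
    ({"check_battery", "battery_ok"}, "check_starter"),
    ({"engine_cranks", "no_spark"}, "check_ignition"),
]

def infer(initial_facts):
    """Forward-chain: repeatedly fire any rule whose conditions hold,
    adding its conclusion, until the fact set stops growing."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts
```

Chaining lets the system reach conclusions the user never stated directly, which is why such systems could approach the performance of the experts whose rules they encoded.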

In later chapters, Crevier discusses philosophical issues of artificial intelligence, issues made difficult by the problem of defining the concept of intelligence. Questions arise concerning whether a computer could ever take the place of a human brain and whether it could truly have awareness or emotions. These questions prompt Crevier to speculate on how computers will affect societies of the future. He offers several scenarios and warns against allowing computers to have too much control before scientists understand exactly how they think and react once they have been programmed to learn.