Introduction (Psychology and Mental Health)
Ideas proposed in cybernetics, developments in psychology in terms of studying internal mental processes, and the development of the computer were important precursors for the area of artificial intelligence (AI). Cybernetics, a term coined by Norbert Wiener in 1948, is a field of study interested in the issue of feedback for artificial and natural systems. The main idea is that a system can modify its behavior based on feedback generated by the system or from the environment. Information, and in particular feedback, is necessary for a system to make intelligent decisions. During the 1940s and 1950s, the dominant school in American psychology was behaviorism. The focus of research was on topics in which the behaviors were observable and measurable. During this time, researchers such as George Miller were devising experiments that continued to study behavior but also provided some indication of internal mental processes. This cognitive revolution in the United States led to research programs interested in issues such as decision making, language development, consciousness, and memory, issues relevant to the development of an intelligent machine. The main tool for implementing AI, the computer, was an important development that came out of World War II.
The culmination of many of these events was a conference held at Dartmouth College in 1956, which explored the idea of developing computer programs that behaved in an...
Traditional AI Versus Computer Simulations (Psychology and Mental Health)
“Artificial intelligence” is a general term that includes a number of different approaches to developing intelligent machines. Two different philosophical approaches to the development of intelligent systems are traditional AI and computer simulations. The term can also refer to the development of hardware (equipment) or software (programs) for an AI project. The goal remains the same for traditional AI and computer simulations: the development of a system capable of performing a particular task that, if done by a human, would be considered intelligent.
The goal of traditional AI (sometimes called pure AI) is to develop systems to accomplish various tasks intelligently and efficiently. This approach makes no claims or assumptions about the manner in which humans process and perform a task, nor does it try to model human cognitive processes. A traditional AI project is unrestricted by the limitations of human information processing. One example of a traditional AI program would be earlier versions of Deep Blue, the chess program of International Business Machines (IBM). The ability of this program to successfully “play” chess depended on its ability to compute a large number of possible board positions based on the current position and then select the best move. This computational approach, while effective, lacks strategy and the ability to learn from previous games. A modified version of...
Theoretical Issues (Psychology and Mental Health)
A number of important theoretical issues influence the assumptions made in developing intelligent systems. Stan Franklin, in his book Artificial Minds (1995), presents these issues in what he labels the three debates for AI: Can computing machines be intelligent? Does the connectionist approach offer something that the symbolic approach does not? Are internal representations necessary?
Thinking Machines
The issue of whether computing machines can be intelligent is typically presented as “Can computers think in the sense that humans do?” There are two positions regarding this question: weak AI and strong AI. Weak AI suggests that the utility of artificial intelligence is to aid in exploring human cognition through the development of computer models. This approach aids in testing the feasibility and completeness of the theory from a computational standpoint. Weak AI is considered by many experts in the field as a viable approach. Strong AI takes the stance that it is possible to develop a machine that can manipulate symbols to accomplish many of the tasks that humans can accomplish. Some would ascribe thought or intelligence to such a machine because of its capacity for symbol manipulation. Alan Turing proposed a test, the imitation game, later called the Turing test, as a possible criterion for determining if strong AI has been accomplished. Strong AI also has opponents stating that it is not possible for...
Approaches to Modeling Intelligence (Psychology and Mental Health)
Intelligent tutoring systems (ITSs) are systems in which individual instruction can be tailored to the needs of a particular student. This is different from computer-aided instruction (CAI), in which everyone receives the same lessons. Key components typical of ITSs are the expert knowledge base (or teacher), the student model, instructional goals, and the interface. The student model contains the knowledge that the student has mastered as well as the areas in which he or she may have conceptual errors. Instruction can then be tailored to help elucidate the concepts with which the student is having difficulty.
An expert system attempts to capture an individual’s expertise, and the program should then perform like an expert in that particular area. An expert system consists of two components: a knowledge base and an inference engine. The inference engine is the program of the expert system. It relies on the knowledge base, which “captures the knowledge” of an expert. Developing this component of the expert system is often time-consuming. Typically, the knowledge from the expert is represented in if-then statements (also called condition-action rules). If a particular condition is met, this leads to execution of the action part of the statement. Testing of the system often leads to repeating the knowledge-acquisition phase and modification of the condition-action rules. An example of an expert system...
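The condition-action cycle described above can be sketched in a few lines of code. This is an illustrative toy, not any particular expert-system shell; the rules and facts are hypothetical examples invented for the sketch.

```python
# Minimal forward-chaining inference engine (illustrative sketch).
# The knowledge base holds condition-action rules; the engine fires any
# rule whose conditions are all established facts, adds the rule's
# conclusion as a new fact, and repeats until nothing changes.

def infer(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)   # the "action" part of the rule fires
                changed = True
    return facts

# Hypothetical toy rules, loosely in the style of a diagnostic system.
rules = [
    (("fever", "rash"), "suspect_measles"),
    (("suspect_measles",), "recommend_isolation"),
]
```

Note how the second rule chains off the conclusion of the first: given the facts `fever` and `rash`, the engine derives `suspect_measles` and then `recommend_isolation`, mirroring the repeated testing-and-refinement cycle described above.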
Sources for Further Study (Psychology and Mental Health)
Bechtel, William, and George Graham, eds. A Companion to Cognitive Science. Malden, Mass.: Blackwell, 1998. Provides a history for the area of cognitive science as well as an exploration of many of the issues of interest today.
Clark, Andy, and Josefa Toribio, eds. Cognitive Architectures in Artificial Intelligence. New York: Garland, 1998. A collection of papers dealing with the three types of architectures: physical symbol system, connectionist, and subsumption. Written by experts in the various areas.
Dennett, Daniel C. Brainchildren: Essays on Designing Minds. Cambridge, Mass.: MIT Press, 1998. Dennett, a philosopher, looks at issues such as what it means to be intelligent. He also takes a look at the appropriateness of the Turing test.
Franklin, Stan. Artificial Minds. Cambridge, Mass.: MIT Press, 1995. Franklin takes complicated subjects and presents them in a readable way.
Gardner, Howard. The Mind’s New Science: A History of the Cognitive Revolution. New York: Basic Books, 1998. A good overview of the issues leading up to the cognitive revolution as well as the main issues of study for this area.
Johnston, John. The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI. Cambridge, Mass.: MIT Press, 2008. An examination of cybernetics, artificial life, and artificial intelligence.
Von Foerster, Heinz....
Artificial Intelligence (Encyclopedia of Science)
Artificial intelligence (AI) is a subfield of computer science that focuses on creating computer software that imitates human learning and reasoning. Computers can outperform people when it comes to storing information, solving numerical problems, and doing repetitive tasks. Computer programmers originally designed software that accomplished these tasks by completing algorithms, or clearly defined sets of instructions. In contrast, programmers design AI software to give the computer only the problem, not the steps necessary to solve it.
Overview of artificial intelligence
All AI programs are built on two foundations: a knowledge base and an inferencing capability (inferencing means drawing a conclusion based on facts and prior knowledge). A knowledge base is made up of many different pieces of information: facts, concepts, theories, procedures, and relationships. Where conventional computer software must follow a strictly logical series of steps to reach a conclusion (an algorithm), AI software uses the techniques of search and pattern matching. The computer is given some initial information and then searches the knowledge base for specific conditions or patterns that fit the problem to be solved. This special ability of AI programs, reaching a solution based on facts rather than on a preset series of steps, is what most closely resembles the thinking function of the human...
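The search-and-pattern-matching idea can be illustrated with a toy matcher. The `?variable` convention, the function names, and the sample facts below are assumptions made for this sketch, not features of any specific AI system.

```python
# Illustrative knowledge-base search via pattern matching.
# Facts are tuples of strings; pattern elements starting with "?" are
# variables that bind to whatever they match.

def match(pattern, fact, bindings):
    """Try to match one pattern tuple against one fact tuple."""
    if len(pattern) != len(fact):
        return None
    bindings = dict(bindings)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):            # variable: bind, or check prior binding
            if bindings.get(p, f) != f:
                return None
            bindings[p] = f
        elif p != f:                     # constant: must match exactly
            return None
    return bindings

def query(pattern, knowledge_base):
    """Search the knowledge base for every fact fitting the pattern."""
    results = []
    for fact in knowledge_base:
        b = match(pattern, fact, {})
        if b is not None:
            results.append(b)
    return results

# Hypothetical facts: ("parent", X, Y) reads "a parent of X is Y".
kb = [("parent", "ada", "byron"),
      ("parent", "ada", "annabella"),
      ("parent", "byron", "ada")]
```

A query such as `query(("parent", "ada", "?who"), kb)` does no fixed sequence of arithmetic steps; it simply searches stored facts for those matching the pattern, which is the contrast with conventional algorithms drawn above.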
Artificial Intelligence (Encyclopedia of Science and Religion)
Artificial intelligence (AI) is the field within computer science that seeks to explain and to emulate, through mechanical or computational processes, some or all aspects of human intelligence. Included among these aspects of intelligence are the ability to interact with the environment through sensory means and the ability to make decisions in unforeseen circumstances without human intervention. Typical areas of research in AI include game playing, natural language understanding and synthesis, computer vision, problem solving, learning, and robotics.
The above is a general description of the field; there is no agreed upon definition of artificial intelligence, primarily because there is little agreement as to what constitutes intelligence. Interpretations of what it means to be intelligent vary, yet most can be categorized in one of three ways. Intelligence can be thought of as a quality, an individually held property that is separable from all other properties of the human person. Intelligence is also seen in the functions one performs, in actions or the ability to carry out certain tasks. Finally, some researchers see intelligence as a quality that can only be acquired and demonstrated through relationship with other intelligent beings. Each of these understandings of intelligence has been used as the basis of an approach to developing computer programs with intelligent characteristics.
First attempts: symbolic AI
The field of AI is considered to have its origin in the publication of British mathematician Alan Turing's (1912–1954) paper "Computing Machinery and Intelligence" (1950). The term itself was coined six years later by mathematician and computer scientist John McCarthy (b. 1927) at a summer conference at Dartmouth College in New Hampshire. The earliest approach to AI is called symbolic or classical AI and is predicated on the hypothesis that every process in which either a human being or a machine engages can be expressed by a string of symbols that is modifiable according to a limited set of rules that can be logically defined. Just as geometry can be built from a finite set of axioms and primitive objects such as points and lines, so symbolicists, following rationalist philosophers such as Ludwig Wittgenstein (1889–1951) and Alfred North Whitehead (1861–1947), posited that human thought is represented in the mind by concepts that can be broken down into basic rules and primitive objects. Simple concepts or objects are directly expressed by a single symbol, while more complex ideas are the product of many symbols combined by certain rules. For a symbolicist, any patternable kind of matter can thus represent intelligent thought.
Symbolic AI met with immediate success in areas in which problems could be easily described using a limited domain of objects that operate in a highly rule-based manner, such as games. The game of chess takes place in a world where the only objects are thirty-two pieces moving on a sixty-four square board according to a limited number of rules. The limited options this world provides give the computer the potential to look far ahead, examining all possible moves and countermoves, looking for a sequence that will leave its pieces in the most advantageous position. Other successes for symbolic AI occurred rapidly in similarly restricted domains such as medical diagnosis, mineral prospecting, chemical analysis, and mathematical theorem proving.
Symbolic AI faltered, however, not on difficult problems like passing a calculus exam, but on the easy things a two-year-old child can do, such as recognizing a face in various settings or understanding a simple story. McCarthy labels symbolic programs as brittle because they crack or break down at the edges; they cannot function outside or near the edges of their domain of expertise, since they lack knowledge outside of that domain, knowledge that most human "experts" possess in the form of what is known as common sense. Humans make use of general knowledge, the millions of things that are known and applied to a situation, both consciously and subconsciously. It is now clear to AI researchers that the set of primitive facts necessary for representing human knowledge, should such a set exist, is exceedingly large.
Another critique of symbolic AI, advanced by Terry Winograd and Fernando Flores in their 1986 book Understanding Computers and Cognition, is that human intelligence may not be a process of symbol manipulation; humans do not carry mental models around in their heads. Hubert Dreyfus makes a similar argument in Mind over Machine (1986); he suggests that human experts do not arrive at their solutions to problems through the application of rules or the manipulation of symbols, but rather use intuition, acquired through multiple experiences in the real world. He describes symbolic AI as a "degenerating research project," by which he means that, while promising at first, it has produced fewer results as time has progressed and is likely to be abandoned should other alternatives become available. This prediction has proven fairly accurate. By 2000 the once dominant symbolic approach had been all but abandoned in AI, with only one major ongoing project, Douglas Lenat's Cyc (pronounced "psych"). Lenat hopes to overcome the general knowledge problem by providing an extremely large base of primitive facts. Lenat plans to combine this large database with the ability to communicate in a natural language, hoping that once enough information is entered into Cyc, the computer will be able to continue the learning process on its own, through conversation, reading, and applying logical rules to detect patterns or inconsistencies in the data Cyc is given. Initially conceived in 1984 as a ten-year initiative, Cyc has not yet shown convincing evidence of extended independent learning.
Functional or weak AI
In 1980, John Searle, in the paper "Minds, Brains, and Programs," introduced a division of the field of AI into "strong" and "weak" AI. Strong AI denoted the attempt to develop a full human-like intelligence, while weak AI denoted the use of AI techniques to either better understand human reasoning or to solve more limited problems. Although there was little progress in developing a strong AI through symbolic programming methods, the attempt to program computers to carry out limited human functions has been quite successful. Much of what is currently labeled AI research follows a functional model, applying particular programming techniques, such as knowledge engineering, fuzzy logic, genetic algorithms, neural networking, heuristic searching, and machine learning via statistical methods, to practical problems. This view sees AI as advanced computing. It produces working programs that can take over certain human tasks. Such programs are used in manufacturing operations, transportation, education, financial markets, "smart" buildings, and even household appliances.
For a functional AI, there need be no quality labeled "intelligence" that is shared by humans and computers. All computers need do is perform a task that requires intelligence for a human to perform. It is also unnecessary, in functional AI, to model a program after the thought processes that humans use. If results are what matters, then it is possible to exploit the speed and storage capabilities of the digital computer while ignoring parts of human thought that are not understood or easily modeled, such as intuition. This is, in fact, what was done in designing the chess-playing program Deep Blue, which in 1997 beat the reigning world chess champion, Garry Kasparov. Deep Blue does not attempt to mimic the thought of a human chess player. Instead, it capitalizes on the strengths of the computer by examining an extremely large number of moves, more moves than any human player could possibly examine.
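Deep Blue's actual search was far more elaborate, but the underlying idea of looking ahead through possible moves and countermoves can be sketched as a plain minimax search. The callback names below (`moves`, `apply`, `evaluate`) are illustrative conventions, not taken from any real chess engine.

```python
# Minimax game-tree search (illustrative sketch). The machine scores
# every line of play a fixed number of moves deep, assuming the opponent
# always replies with the move worst for us, and picks the best move.

def minimax(state, depth, maximizing, moves, apply, evaluate):
    """Return (best score, best move) for the side to move."""
    options = moves(state, maximizing)
    if depth == 0 or not options:
        return evaluate(state), None      # leaf: score the position
    best_move = None
    if maximizing:
        best = float("-inf")
        for m in options:
            score, _ = minimax(apply(state, m), depth - 1, False,
                               moves, apply, evaluate)
            if score > best:
                best, best_move = score, m
        return best, best_move
    best = float("inf")
    for m in options:
        score, _ = minimax(apply(state, m), depth - 1, True,
                           moves, apply, evaluate)
        if score < best:
            best, best_move = score, m
    return best, best_move
```

In a real chess program, `moves` would generate legal moves, `apply` would update the board, and `evaluate` would score material and position; the brute-force character of the search, examining far more positions than any human could, is exactly the strength described above.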
There are two problems with functional AI. The first is the difficulty of determining what falls into the category of AI and what is simply a normal computer application. A definition of AI that includes any program that accomplishes some function normally done by a human being would encompass virtually all computer programs. Nor is there agreement among computer scientists as to what sorts of programs should fall under the rubric of AI. Once an application is mastered, there is a tendency to no longer define that application as AI. For example, while game playing is one of the classical fields of AI, Deep Blue's design team emphatically states that Deep Blue is not artificial intelligence, since it uses standard programming and parallel processing techniques that are in no way designed to mimic human thought. The implication here is that merely programming a computer to complete a human task is not AI if the computer does not complete the task in the same way a human would.
For a functional approach to result in a full human-like intelligence it would be necessary not only to specify which functions make up intelligence, but also to make sure those functions are suitably congruent with one another. Functional AI programs are rarely designed to be compatible with other programs; each uses different techniques and methods, the sum of which is unlikely to capture the whole of human intelligence. Many in the AI community are also dissatisfied with a collection of task-oriented programs. The building of a general human-like intelligence, as difficult a goal as it may seem, remains the vision.
A relational approach
A third approach is to consider intelligence as acquired, held, and demonstrated only through relationships with other intelligent agents. In "Computing Machinery and Intelligence" (1997), Turing addresses the question of which functions are essential for intelligence with a proposal for what has come to be the generally accepted test for machine intelligence. A human interrogator is connected by terminal to two subjects, one a human and the other a machine. If the interrogator fails as often as he or she succeeds in determining which is the human and which the machine, the machine could be considered as having intelligence. The Turing Test is not based on the completion of tasks or the solution of problems by the machine, but on the machine's ability to relate to a human being in conversation. Discourse is unique among human activities in that it subsumes all other activities within itself. Turing predicted that by the year 2000, there would be computers that could fool an interrogator at least thirty percent of the time. This, like most predictions in AI, was overly optimistic. No computer has yet come close to passing the Turing Test.
The Turing Test uses relational discourse to demonstrate intelligence. However, Turing also notes the importance of being in relationship for the acquisition of knowledge or intelligence. He estimates that the programming of the background knowledge needed for a restricted form of the game would take at a minimum three hundred person-years to complete. This is assuming that the appropriate knowledge set could be identified at the outset. Turing suggests that rather than trying to imitate an adult mind, computer scientists should attempt to construct a mind that simulates that of a child. Such a mind, when given an appropriate education, would learn and develop into an adult mind. One AI researcher taking this approach is Rodney Brooks of the Massachusetts Institute of Technology, whose lab has constructed several robots, including Cog and Kismet, that represent a new direction in AI in which embodiedness is crucial to the robot's design. Their programming is distributed among the various physical parts; each joint has a small processor that controls movement of that joint. These processors are linked with faster processors that allow for interaction between joints and for movement of the robot as a whole. These robots are designed to learn tasks associated with human infants, such as eye-hand coordination, grasping an object, and face recognition through social interaction with a team of researchers. Although the robots have developed abilities such as tracking moving objects with the eyes or withdrawing an arm when touched, Brooks's project is too new to be assessed. It may be no more successful than Lenat's Cyc in producing a machine that could interact with humans on the level of the Turing Test. However, Brooks's work represents a movement toward Turing's opinion that intelligence is socially acquired and demonstrated.
The Turing Test makes no assumptions as to how the computer arrives at its answers; there need be no similarity in internal functioning between the computer and the human brain. However, an area of AI that shows some promise is that of neural networks, systems of circuitry that reproduce the patterns of neurons found in the brain. Current neural nets are limited, however. The human brain has billions of neurons and researchers have yet to understand both how these neurons are connected and how the various neurotransmitting chemicals in the brain function. Despite these limitations, neural nets have reproduced interesting behaviors in areas such as speech or image recognition, natural-language processing, and learning. Some researchers, including Hans Moravec and Raymond Kurzweil, see neural net research as a way to reverse engineer the brain. They hope that once scientists can design nets with a complexity equal to the human brain, the nets will have the same power as the brain and develop consciousness as an emergent property. Kurzweil posits that such mechanical brains, when programmed with a given person's memories and talents, could form a new path to immortality, while Moravec holds out hope that such machines might some day become our evolutionary children, capable of greater abilities than humans currently demonstrate.
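To give a flavor of how a single artificial neuron adjusts its connection strengths from feedback, here is a minimal perceptron sketch. The training task, learning rate, and epoch count are illustrative assumptions; real neural networks compose many such units and use richer learning rules.

```python
# A single perceptron (one artificial neuron) trained by error feedback.
# Each mistake nudges the connection weights toward the correct answer,
# the simplest form of the learning the neural-net approach relies on.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn two input weights plus a bias via the perceptron rule."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out            # feedback: how wrong were we?
            w[0] += lr * err * x1         # strengthen or weaken connections
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Toy task: logical AND, which outputs 1 only when both inputs are 1.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
```

Nothing here is explicitly programmed with the rule for AND; the weights settle into values that produce it, which is the sense in which such networks learn behavior rather than follow a preset series of steps.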
AI in science fiction
A truly intelligent computer remains in the realm of speculation. Though researchers have continually projected that intelligent computers are imminent, progress in AI has been limited. Computers with intentionality and self-consciousness, with fully human reasoning skills, or with the ability to be in relationship exist only in the realm of dreams and desires, a realm explored in fiction and fantasy.
The artificially intelligent computer in science fiction story and film is not a prop, but a character, one that has become a staple since the mid-1950s. These characters are embodied in a variety of physical forms, ranging from the wholly mechanical (computers and robots) to the partially mechanical (cyborgs) and the completely biological (androids). A general trend from the 1950s to the 1990s has been to depict intelligent computers in an increasingly anthropomorphic way. The robots and computers of early films, such as Maria in Fritz Lang's Metropolis (1926), Robby in Fred Wilcox's Forbidden Planet (1956), Hal in Stanley Kubrick's 2001: A Space Odyssey (1968), or R2D2 and C3PO in George Lucas's Star Wars (1977), were clearly constructs of metal. On the other hand, early science fiction stories, such as Isaac Asimov's I, Robot (1950), explored the question of how one might distinguish between robots that looked human and actual human beings. Films and stories from the 1980s through the early 2000s, including Ridley Scott's Blade Runner (1982) and Steven Spielberg's A.I. (2001), pick up this question, depicting machines with both mechanical and biological parts that are far less easily distinguished from human beings.
Fiction that features AI can be classified in two general categories: cautionary tales (A.I., 2001) or tales of wish fulfillment (Star Wars; I, Robot). These present two differing visions of the artificially intelligent being, as a rival to be feared or as a friendly and helpful companion.
Philosophical and theological questions
What rights would an intelligent robot have? Will artificially intelligent computers eventually replace human beings? Should scientists discontinue research in fields such as artificial intelligence or nanotechnology in order to safeguard future lives? When a computer malfunctions, who is responsible? These are only some of the ethical and theological questions that arise when one considers the possibility of success in the development of an artificial intelligence. The prospect of an artificially intelligent computer also raises questions about the nature of human beings. Are humans simply machines themselves? At what point would replacing some or all human biological parts with mechanical components violate one's integrity as a human being? Is a human being's relationship to God at all contingent on human biological nature? If humans are not the end point of evolution, what does this say about human nature? What is the relationship of the soul to consciousness or intelligence? While most of these questions are speculative in nature, regarding a future that may or may not come to be, they remain relevant, for the way people live and the ways in which they view their lives stand to be critically altered by technology. The quest for artificial intelligence reveals much about how people view themselves as human beings and the spiritual values they hold.
See also ALGORITHM; ARTIFICIAL LIFE; CYBERNETICS; CYBORG; IMAGO DEI; THINKING MACHINES; TURING TEST
Asimov, Isaac. I, Robot. New York: Doubleday, 1950.
Brooks, Rodney. "Intelligence without Representation." In Mind Design II: Philosophy, Psychology, Artificial Intelligence, rev. edition, ed. John Haugeland. Cambridge, Mass.: MIT Press, 1997.
Crevier, Daniel. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: Basic Books, 1993.
Dreyfus, Hubert. Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer. New York: Free Press, 1986.
Kurzweil, Raymond. The Age of Spiritual Machines. New York: Viking, 1999.
Lenat, Douglas. "CYC: A Large-Scale Investment in Knowledge Infrastructure." Communications of the ACM 38 (1995): 33–38.
Minsky, Marvin. The Society of Mind. New York: Simon and Schuster, 1986.
Moravec, Hans. Mind Children: The Future of Robot and Human Intelligence. Cambridge, Mass.: Harvard University Press, 1988.
Searle, John. "Minds, Brains, and Programs." The Behavioral and Brain Sciences 3 (1980): 417–424.
Stork, David, ed. HAL's Legacy: 2001's Computer as Dream and Reality. Cambridge, Mass.: MIT Press, 1997.
Turing, Alan. "Computing Machinery and Intelligence." In Mind Design II: Philosophy, Psychology, Artificial Intelligence, rev. edition, ed. John Haugeland. Cambridge, Mass.: MIT Press, 1997.
Telotte, J. P. Replications: A Robotic History of the Science Fiction Film. Urbana: University of Illinois Press, 1995.
Turkle, Sherry. The Second Self: Computers and the Human Spirit. New York: Simon and Schuster, 1984.
Warrick, Patricia. The Cybernetic Imagination in Science Fiction. Cambridge, Mass.: MIT Press, 1980.
Winograd, Terry, and Fernando Flores. Understanding Computers and Cognition: A New Foundation for Design. Norwood, N.J.: Ablex, 1986. Reprint, Reading, Mass.: Addison-Wesley, 1991.
2001: A Space Odyssey. Directed by Stanley Kubrick. Metro-Goldwyn-Mayer; Polaris, 1968.
A.I. Directed by Steven Spielberg. Amblin Entertainment; DreamWorks SKG; Stanley Kubrick Productions; Warner Bros., 2001.
Blade Runner. Directed by Ridley Scott. Blade Runner Partnership; The Ladd Company, 1982.
Forbidden Planet. Directed by Fred Wilcox. Metro-Goldwyn-Mayer, 1956.
Metropolis. Directed by Fritz Lang. Universum Film, A.G., 1926.
Star Wars. Directed by George Lucas. Lucasfilm Ltd., 1977.
NOREEN L. HERZFELD