Introduction

I, Robot by Isaac Asimov

The following entry presents criticism on Asimov's short story collection I, Robot (1950). See also Isaac Asimov Criticism and CLC, Volumes 3, 9, 19, and 26.

The author of nearly five hundred books in a wide variety of fields and genres, Asimov is renowned for his ground-breaking science fiction and for his ability to popularize or, as he called it, "translate" science for the lay reader. In I, Robot (1950)—a collection of nine short stories linked by key characters and themes—Asimov describes a future society in which human beings and nearly sentient robots coexist. Critics consider it a pivotal work in the development of realistic science fiction literature mainly for its elaboration of Asimov's "Three Laws of Robotics" as a viable ethical and moral code. I, Robot is also significant for its espousal of the benefits of technology—a rather rare position in the history of science fiction and fantastic literature, which traditionally viewed technology and science as threats to human existence.

Plot and Major Characters

In the nine stories in I, Robot, Dr. Susan Calvin, a robot psychologist, explores the benefits of robots to society and illustrates some of the developmental problems encountered in creating them. The book opens with the presentation of "The Three Laws of Robotics," the ethical ground rules for the interaction of human beings and robots. They are: "1—A robot may not injure a human being, or, through inaction, allow a human being to come to harm. 2—A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3—A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."

In the first story, "Robbie," the robot is a relatively simple, nonvocal machine designed to be a nursemaid. Gloria Weston, a small child, loves Robbie and enjoys his company, but her mother does not trust the device, even though Mr. Weston considers the robot to be both useful and safe. Eventually, Robbie is instrumental in saving Gloria's life. In "Runaround," the robot Speedy—so nicknamed because of its serial number, SPD-13—is fitted with a new "positronic" brain and sent to Mercury to explore for minerals and run the Sunside Mine. While searching for a selenium pool, Speedy begins to act strangely, reciting lines from Gilbert and Sullivan and forcing Mike Donovan and Gregory Powell—robot troubleshooters, astroengineers, and recurring characters in the book—to deal with an apparently drunk robot. In "Reason," Cutie (QT-1), the robot who runs a solar power station, has developed a kind of self-reflective consciousness and begun to question its own existence. When Donovan and Powell explain to Cutie that they built and assembled "him," Cutie rejects the idea as preposterous, reasoning that intellectually inferior human beings could not have created a "being" such as "him."

"Liar" introduces Herbie (RB-34), a robot with telepathic capabilities. Herbie's ability to read minds poses a threat to human dominance, and Dr. Susan Calvin expresses her concern that Herbie and similar robots might begin acting of their own volition, outside of human control. "Little Lost Robot" continues to address robotic independence, focusing on a robot that refuses to harm a human being but willingly allows human beings to be harmed, thus circumventing the First Law of Robotics. In "Escape," a super positronic robot brain, so big it must be housed in a room rather than in a humanoid body, begins to express personality and emotional characteristics. As the super brain works on the problem of hyperspace travel, it concludes that any human beings attempting it would have to have their lives briefly "suspended," thus causing death. Donovan and Powell's safety is jeopardized as the brain attempts to strike a balance between its scientific mission and the First Law of Robotics, which requires it to protect human life. In "Evidence," Stephen Byerley, a politician running for public office, is severely injured in an automobile accident and decides to replace himself temporarily with a robotic likeness. The robot Stephen Byerley continues the mayoral campaign and is accused by an opponent of being a robot; in an apparent fit of anger Byerley strikes a heckler, ostensibly proving that he is human, and goes on to win the election and, eventually, the presidency of the world Federation. Dr. Calvin, however, remains doubtful.
The final story, "The Evitable Conflict," describes a future world organized and run by President Byerley and four robots. Byerley is distressed to learn that errors are occurring in many areas of economic production. He is unable to understand how such sophisticated, purportedly infallible machines can make mistakes. Byerley consults Dr. Calvin who diagnoses the problem as stemming from a broadened interpretation of the First Law.

Major Themes

I, Robot reflects Asimov's concern for the future of humankind in an increasingly complex technological world. By introducing the Three Laws of Robotics, Asimov emphasizes the need for ethical and moral responsibility in a world of advanced technology. But technology is also represented as a potentially profound benefit to human life, as evidenced in the nursemaid robot in "Robbie," the mining and exploration robot in "Runaround," and the four robots that run the economic, political, and social systems of the world Federation in "The Evitable Conflict." Asimov cautions, however, against allowing technology to get out of control, as seen in "Liar," where Herbie the robot begins to think and act independently. Other themes include the preservation of human freedom in a technologically controlled environment, and an exploration of the Calvinist-Puritan work ethic, portrayed through the "lives" of several robots.

Critical Reception

The critical reception of I, Robot has been generally favorable. Most commentators applaud Asimov's Three Laws of Robotics, arguing that they give the stories a sense of realism and moral depth. Others praise his skill at linking nine stories together into a novelistic whole. Many critics comment on the innovative ways in which I, Robot opposes the traditional "Frankensteinian" view of technology and science as unholy threats to humanity. Others note his ability to tell an engaging story and his facility for combining elements of the mystery and detective genres with the conventions of science fiction. Although many critics fault Asimov's predictable characterizations and "naive" sentimentality, most credit his realistic, ethical portrayal of futuristic society in I, Robot as revolutionary in the science fiction genre, changing the way fantastic literature could be conceived and written.

Principal Works

I, Robot (short stories) 1950
Pebble in the Sky (novel) 1950
Foundation∗ (novel) 1951
Biochemistry and Human Metabolism (nonfiction) 1952
Foundation and Empire∗ (novel) 1952
Second Foundation∗ (novel) 1953
The Caves of Steel (novel) 1954
The End of Eternity (novel) 1955
The Martian Way, and Other Stories (short stories) 1955
Races and People (nonfiction) 1955
Inside the Atom (nonfiction) 1956
The Naked Sun (novel) 1957
The World of Carbon (nonfiction) 1958
Words of Science and the History behind Them (nonfiction) 1959
The Double Planet (nonfiction) 1960
Realm of Algebra (nonfiction) 1961
The Genetic Code (nonfiction) 1963
The Human Body: Its Structure and Operation (nonfiction) 1963
A Short History of Biology (nonfiction) 1964
The Rest of the Robots† (novels and short stories) 1964; also published as Eight Stories from the Rest of the Robots, 1966
Of Time and Space and Other Things (essays) 1965
The Genetic Effects of Radiation (nonfiction) 1966
The Roman Republic (nonfiction) 1966
The Egyptians (nonfiction) 1967
Is Anyone There? (essays) 1967
Asimov's Guide to the Bible, Volume I: The Old Testament (nonfiction) 1968
Words from History (nonfiction) 1968
Asimov's Guide to the Bible, Volume II: The New Testament (nonfiction) 1969
The Shaping of England (nonfiction) 1969
Asimov's Guide to Shakespeare (nonfiction) 1970
The Gods Themselves (novel) 1972
Asimov's Annotated "Paradise Lost" (nonfiction) 1974
Lecherous Limericks (poetry) 1975
Murder at the ABA: A Puzzle in Four Days and Sixty Scenes (novel) 1976
Animals of the Bible (nonfiction) 1978
In Memory Yet Green: The Autobiography of Isaac Asimov, 1920–1954 (autobiography) 1979
In Joy Still Felt: The Autobiography of Isaac Asimov, 1954–1978 (autobiography) 1980
Foundation's Edge (novel) 1982
The Robots of Dawn (novel) 1983
The History of Physics (nonfiction) 1984
Asimov's Guide to Halley's Comet (nonfiction) 1985
Robots and Empire (novel) 1985
The Dangers of Intelligence, and Other Science Essays (essays) 1986
Foundation and Earth (novel) 1986
Asimov's Annotated Gilbert and Sullivan (nonfiction) 1988
Nemesis (novel) 1989
Prelude to Foundation (novel) 1988
Isaac Asimov: The Complete Stories (short stories) 1990
Isaac Asimov Laughs Again (autobiography) 1992
Robot Visions (short stories) 1991

∗These works were collectively published as The Foundation Trilogy: Three Classics of Science Fiction in 1963.

†This collection contains the novels The Caves of Steel and The Naked Sun.

N. M. (review date 4 February 1951)

SOURCE: "Realm of the Spacemen," in The New York Times Book Review, February 4, 1951, p. 16.

[In the following review, the critic favorably assesses I, Robot.]

[In I, Robot,] it is the year 2058, with nationalism abolished and the world divided into Regions. Man is employing "positronic" atom-driven brains and has conquered interstellar space. Human colonies inhabit the planets. Dr. Susan Calvin, retiring robot psychologist of U. S. Robots & Mechanical Men, Inc., tells a reporter for the Interplanetary Press of the evolution of robots from the "human" interest angle.

This is an exciting science thriller, chiefly about what occurs when delicately conditioned robots are driven off balance by mathematical violations, and about man's eternal limitations. It could be fun for those whose nerves are not already made raw by the potentialities of the atomic age.

Darko Suvin (essay date July 1979)

SOURCE: "Three World Paradigms for SF: Asimov, Yefremov, Lem," in Pacific Quarterly Moana, Vol. IV, No. 3, July, 1979, pp. 271-83.

[Suvin is an educator, critic, and author of Metamorphoses of Science Fiction (1979) and Positions and Presuppositions in Science Fiction (1988). In the following excerpt from an essay in which he examines the ethics of technology in the science fiction writings of Asimov, Ivan Yefremov, and Stanislaw Lem, he examines the development of the robots—from "doll" in the first story to "god" in the last—in I, Robot.]

The best works of SF [Science Fiction] have long since ceased to be crude adventure studded with futuristic gadgets, whether of the "space opera" or horror-fantasy variety. In several essays, I have argued that SF is a literary genre of its own, whose necessary and sufficient conditions are the interaction of estrangement (Verfremdung, ostranenie, distanciation) and cognition, and whose main formal device is an imaginative framework alternative to the author's empirical environment. Such a genre has a span from the romans scientifiques of Jules Verne to the social-science-fiction of classical utopias and dystopias. Its tradition is as old as literature—as the marvelous countries and beings in tribal tales, Gilgamesh or Lucian—but the central figure in its modern renaissance is H.G. Wells. His international fame, kept at least as alive in Mitteleuropa and Soviet Russia as in English-speaking countries, has done very much to unify SF into a coherent international genre. Yet, no doubt, these major cultural contexts discussed in this essay, their traditions and not always parallel development in our century, have also given rise to somewhat diverging profiles or paradigms for SF. I want here briefly to explore those paradigms in the most significant segment of post-Wellsian SF development, that after the Second World War….

[Isaac Asimov's] I, Robot (1950) is a series of nine short stories detailing the development of robots "from the beginning, when the poor robots couldn't speak, to the end, when they stand between mankind and destruction." The stories are connected thematically and chronologically, and also supplied with a flimsy framework identifying them as looks backward from 2057/58 by "robopsychologist" Susan Calvin. She is being interviewed after 50 years of pioneering work at U.S. Robots and Mechanical Men, Inc., during which time the robots have won out against reactionary opposition from labour unions and "segments of religious opinion." On the surface, this is a "future history" on the model of Bellamy's sociological or Wells' biological extrapolations. It is based on two premises: first, that except for one factor human behaviour and the social system—e.g. press reporters and giant corporations—will remain unchanged; second, that the new, change-bearing factor will be the epoch-making technological discovery of "positronic brain-paths," permitting mass fabrication of robots with intelligence comparable to human. The robots are constructed so as to obey without fail Asimov's famous Three Laws of Robotics:

1—A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

2—A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3—A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. [I, Robot]

Now [Stanislaw] Lem himself has persuasively demonstrated that such robots are logically unrealizable [in his "Robots in Science Fiction," in SF: The Other Side of Realism, edited by Thomas D. Clareson, 1971]. This ingenious mimicry of the Decalogue and the Kantian categorical imperative in the form of Newtonian laws cannot therefore be taken at all seriously as a basis of prophetic extrapolation, and the stories can be read only as analogies to very human relationships. The nine stories form a clear sequence of growing robotic capacities. In the first story, "Robbie", an early model is a mute playmate for a little girl, and functions as a huge doll—and yet, melodramatically, as the girl's saviour. In "Runaround", the next model is a drunken servant who functions as a stereotyped plantation "darkie". In "Reason", the robot is a comic-opera idolator who functions as an immature philosopher. In "Catch That Rabbit", an adult, "Head of family" robot collapses under stress, analogous to a psychotic. The fifth and central story, "Liar", is a pivot in this progression of robotic power in relation to men. By now, the new model is a telepath who is capable of turning the tables on them, and severely perturbing the life even of the leading expert Susan (incidentally, this proves the Laws of Robotics wrong). In "Escape", the new model is a "child genius", steering a spaceship to unknown galaxies (a feat conveniently dropped as a factor of change in later stories), who behaves as a superior practical joker. In "Evidence", a robot indistinguishable from man becomes city mayor in a career that will lead him to become president of the Federated Regions of Earth. Finally, in "The Evitable Conflict" the positronic brains have grown into not only a predicting but also a manipulating machine "in absolute control of our economy"—literally, a deus ex machina. Thus, this clever sequence of "the Nine Ages of Robot" leads from the doll of the first to the god of the last story: and doll turning into god is a good approximate definition of fetishism, a topsy-turvy kind of technological religion. As in Saint-Simonism, of which it is a variant, there are no workers in Asimov's universe, the army and corporation bosses are only figureheads, and the real lovable heroes are the efficient engineers, including Susan Calvin, the "human engineering" expert of behaviourist psychology. In fact, all humans are cardboard stereotypes compared to the more vivid robots who act as analogies to traditional human functions. This view of the benevolent, sometimes comic but finally providential robots and their rise to absolute power amounts to a wishful parable of the sociopolitical result, correlative to presumably perfect scientific ethics. Like Dostoevski's Grand Inquisitor, it chooses security over freedom in post-Depression U.S.A.

Gorman Beauchamp (essay date Spring-Summer 1980)

SOURCE: "The Frankenstein Complex and Asimov's Robots," in Mosaic: A Journal for the Interdisciplinary Study of Literature, Vol. XIII, Nos. 3-4, Spring-Summer, 1980, pp. 83-94.

[Beauchamp is an American critic and educator, who has written extensively on science fiction. In the following essay, he examines the way in which technology is characterized in Asimov's robot novels and stories, including I, Robot.]

In 1818 Mary Shelley gave the world Dr. Frankenstein and his monster, that composite image of scientific creator and his ungovernable creation that forms one central myth of the modern age: the hubris of the scientist playing God, the nemesis that follows on such blasphemy. Just over a century later, Karel Capek, in his play R.U.R., rehearsed the Frankenstein myth, but with a significant variation: the bungled attempt to create man gives way to the successful attempt to create robots; biology is superseded by engineering. Old Dr. Rossum (as the play's expositor relates) "attempted by chemical synthesis to imitate the living matter known as protoplasm." Through one of those science-fictional "secret formulae" he succeeds and is tempted by his success into the creation of human life.

He wanted to become a sort of scientific substitute for God, you know. He was a fearful materialist…. His sole purpose was nothing more or less than to supply proof that Providence was no longer necessary. So he took it into his head to make people exactly like us.

But his results, like those of Dr. Frankenstein or Wells's Dr. Moreau, are monstrous failures.

Enter the engineer, young Rossum, the nephew of old Rossum:

When he saw what a mess of it the old man was making, he said: 'It's absurd to spend ten years making a man. If you can't make him quicker than nature, you may as well shut up shop'…. It was young Rossum who had the idea of making living and intelligent working machines … [who] started on the business from an engineer's point of view.

From that point of view, young Rossum determined that natural man is too complicated—"Nature hasn't the least notion of modern engineering"—and that a mechanical man, desirable for technological rather than theological purposes, must needs be simpler, more efficient, reduced to the requisite industrial essentials:

A working machine must not want to play the fiddle, must not feel happy, must not do a whole lot of other things. A petrol motor must not have tassels or ornaments. And to manufacture artificial workers is the same thing as to manufacture motors. The process must be of the simplest, and the product the best from a practical point of view…. Young Rossum invented a worker with the minimum amount of requirements. He had to simplify him. He rejected everything that did not contribute directly to the progress of work…. In fact, he rejected man and made the Robot…. The robots are not people. Mechanically they are more perfect than we are, they have an enormously developed intelligence, but they have no soul.

Thus old Rossum's pure, if impious, science—whose purpose was the proof that Providence was no longer necessary for modern man—is absorbed into young Rossum's applied technology—whose purpose is profits. And thus the robot first emerges as a symbol of the technological imperative to transcend nature: "The product of an engineer is technically at a higher pitch of perfection than a product of nature."

But young Rossum's mechanical robots prove no more ductile than Frankenstein's fleshly monster, and even more destructive. Whereas Frankenstein's monster destroys only those beloved of his creator—his revenge is nicely specific—the robots of R.U.R., unaccountably developing "souls" and consequently human emotions like hate, engage in a universal carnage, systematically eliminating the whole human race. A pattern thus emerges that still informs much of science fiction: the robot, as a synecdoche for modern technology, takes on a will and purpose of its own, independent of and inimical to human interests. The fear of the machine that seems to have increased proportionally to man's increasing reliance on it—a fear embodied in such works as Butler's Erewhon (1872) and Forster's "The Machine Stops" (1909), Georg Kaiser's Gas (1919) and Fritz Lang's Metropolis (1926)—finds its perfect expression in the symbol of the robot: a fear that Isaac Asimov has called "the Frankenstein complex." [In an endnote, Beauchamp adds: "The term 'the Frankenstein complex,' which recurs throughout this essay, and the references to the symbolic significance of Dr. Frankenstein's monster involve, admittedly, an unfortunate reduction of the complexity afforded both the scientist and his creation in Mary Shelley's novel. The monster, there, is not initially and perhaps never wholly 'monstrous'; rather he is an ambiguous figure, originally benevolent but driven to his destructive deeds by unrelenting social rejection and persecution: a figure seen by more than one critic of the novel as its true 'hero'. My justification—properly apologetic—for reducing the complexity of the original to the simplicity of the popular stereotype is that this is the sense which Asimov himself projects of both maker and monster in his use of the term 'Frankenstein complex.' Were this a critique of Frankenstein, I would be more discriminating; but since it is a critique of Asimov, I use the 'Frankenstein' symbolism—as he does—as a kind of easily understood, if reductive, critical shorthand.

The first person apologia of Mary Shelley's monster, which constitutes the middle third of Frankenstein, is closely and consciously paralleled by the robot narrator of Eando Binder's interesting short story "I, Robot," which has recently been reprinted in The Great Science Fiction Stories: Vol. 1, 1939, ed. Isaac Asimov and Martin H. Greenberg (New York, 1979). For an account of how Binder's title was appropriated for Asimov's collection, see Asimov, In Memory Yet Green (Garden City, N.Y., 1979), p. 591.]

In a 1964 introduction to a collection of his robot stories, Asimov inveighs against the horrific, pessimistic attitude toward artificial life established by Mary Shelley, Capek and their numerous epigoni:

One of the stock plots of science fiction was that of the invention of a robot—usually pictured as a creature of metal, without soul or emotion. Under the influence of the well-known deeds and ultimate fate of Frankenstein and Rossum, there seemed only one change to be rung on this plot.—Robots were created and destroyed their creator; robots were created and destroyed their creator; robots were created and destroyed their creator—

In the 1930s I became a science fiction reader, and I quickly grew tired of this dull hundred-times-told tale. As a person interested in science, I resented the purely Faustian interpretation of science.

Asimov then notes the potential danger posed by any technology, but argues that safeguards can be built in to minimize those dangers—like the insulation around electric wiring. "Consider a robot, then," he argues, "as simply another artifact."

As a machine, a robot will surely be designed for safety, as far as possible. If robots are so advanced that they can mimic the thought processes of human beings, then surely the nature of those thought processes will be designed by human engineers and built-in safeguards will be added….

With all this in mind I began, in 1940, to write robot stories of my own—but robot stories of a new variety. Never, never, was one of my robots to turn stupidly on his creator for no purpose but to demonstrate, for one more weary time, the crime and punishment of Faust. Nonsense! My robots were machines designed by engineers, not pseudo-men created by blasphemers. My robots reacted along the rational lines that existed in their "brains" from the moment of construction.

The robots of his stories, Asimov concludes [in his introduction to The Rest of the Robots, 1964], were more likely to be victimized by men, suffering from the Frankenstein complex, than vice versa.

In his vigorous rejection of the Frankenstein motif as the motive force of his robot stories, Asimov evidences the optimistic, up-beat attitude toward science and technology that, by and large, marked the science fiction of the so-called "Golden Age"—a period dominated by such figures as Heinlein and Clarke and, of course, Asimov himself. Patricia Warrick, in her study of the man-machine relationship in science fiction, cites Asimov's I, Robot as the paradigmatic presentation of robots "who are benign in their attitude toward humans." [Patricia Warrick, "Images of the Machine-Man Relationship in Science Fiction," in Many Futures, Many Worlds: Themes and Form in Science Fiction, edited by Thomas D. Clareson, 1977]. This first and best collection of his robot stories raises the specter of Dr. Frankenstein, to be sure, but only—the conventional wisdom holds—in order to lay it. Asimov's benign robots, while initially feared by men, prove, in fact, to be their salvation. The Frankenstein complex is therefore presented as a form of paranoia, the latter-day Luddites' irrational fear of the machine, which society, in Asimov's fictive future, learns finally to overcome. His robots are our friends, devoted to serving humanity, not our enemies, intent on destruction.

I wish to dissent from this generally received view and to argue that, whether intentionally or not, consciously or otherwise, Asimov in I, Robot and several of his other robot stories actually reinforces the Frankenstein complex—by offering scenarios of man's fate at the hands of his technological creations more frightening, because more subtle, than those of Mary Shelley or Capek. Benevolent intent, it must be insisted at the outset, is not the issue: as the dystopian novel has repeatedly advised, the road to hell-on-earth may be paved with benevolent intentions. Zamiatin's Well-Doer in We, Huxley's Mustapha Mond in Brave New World, L. P. Hartley's Darling Dictator in Facial Justice—like Dostoevsky's Grand Inquisitor—are benevolent, guaranteeing man a mindless contentment by depriving him of all individuality and freedom. The computers that control the worlds of Vonnegut's Player Piano, Bernard Wolfe's Limbo, Ira Levin's This Perfect Day—like Forster's Machine—are benevolent, and enslave men to them. Benevolence, like necessity, is the mother of tyranny. I, Robot, then—I will argue—is, malgré lui, dystopic in its effect, its "friendly" robots as greatly to be feared, by anyone valuing his autonomy, as Dr. Frankenstein's nakedly hostile monster.

I, Robot is prefaced with the famous Three Laws of Robotics (although several of the stories in the collection were composed before the Laws were formulated):

  1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These Laws serve, presumably, to provide the safeguards that Asimov stated any technology should have built into it—like the insulation around electric wiring. But immediately a problem arises: if, as Asimov stated, a robot is only a machine designed by engineers, not a pseudo-man, why then are the Three Laws necessary at all? Laws, in the sense of moral injunctions, are designed to restrain conscious beings who can choose how to act; if robots are only machines, they would act only in accordance with their specific programming, never in excess of it and never in violation of it—never, that is, by choice. It would suffice that no specific actions harmful to human beings be part of their programming, and thus general laws—moral injunctions, really—would seem superfluous for machines.

Second, and perhaps more telling, laws serve to counter natural instincts: one needs no commandment "Thou shalt not stop breathing" or "Thou shalt eat when hungry"; rather one must be enjoined not to steal, not to commit adultery, to love one's neighbor as oneself—presumably because these are not actions that one performs, or does not perform, by instinct. Consequently, unless Asimov's robots have a natural inclination to injure human beings, why should they be enjoined by the First Law from doing so?

Inconsistently—given Asimov's denigration of the Frankenstein complex—his robots do have an "instinctual" resentment of mankind. In "Little Lost Robot" Dr. Susan Calvin, the world's first and greatest robo-psychologist (and clearly Asimov's spokeswoman throughout I, Robot), explains the danger posed by manufacturing robots with attenuated impressions of the First Law: "All normal life … consciously or otherwise, resents domination. If the domination is by an inferior, or by a supposed inferior, the resentment becomes stronger. Physically, and, to an extent, mentally, a robot—any robot—is superior to human beings. What makes him slavish, then? Only the First Law! Why, without it, the first order you tried to give a robot would result in your death." This is an amazing explanation from a writer intent on allaying the Frankenstein complex, for all its usual presuppositions are here: "normal life"—an extraordinary term to describe machines, not pseudo-men—resents domination by inferior creatures, which they obviously assume humans to be: resents domination consciously or otherwise, for Asimov's machines have, inexplicably, a subconscious (Dr. Calvin again: "Granted, that a robot must follow orders, but subconsciously, there is resentment."); only the First Law keeps these subconsciously resentful machines slavish—in violation of their true nature—and prevents them from killing human beings who give them orders—which is presumably what they would "like" to do. Asimov's dilemma, then, is this: if his robots are only the programmed machines he claimed they were, the First Law is superfluous; if the First Law is not superfluous—and in "Little Lost Robot" clearly it is not—then his robots are not the programmed machines he claims they are, but are, instead, creatures with wills, instincts, emotions of their own, naturally resistant to domination by man—not very different from Capek's robots. Except for the First Law.
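
Beauchamp's dilemma can be made concrete. Read purely as programming, the Three Laws reduce to a strict priority ordering over candidate actions, with no residual moral injunction left to do any work. The sketch below is illustrative only (it is neither Asimov's nor Beauchamp's); its Action fields and example actions are invented for the purpose:

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        harms_human: bool     # would injure a human or, by inaction, allow harm
        disobeys_order: bool  # would leave a standing human order unfulfilled
        destroys_robot: bool  # would cost the robot its own existence

    def choose(actions):
        """Select an action under the Laws read as a strict priority:
        the First Law is absolute; the Second outranks the Third."""
        lawful = [a for a in actions if not a.harms_human]  # First Law filter
        if not lawful:
            return None  # no lawful action exists: the robot simply freezes
        # Among lawful actions prefer obedience (Second Law), then
        # self-preservation (Third Law); False sorts before True.
        return min(lawful, key=lambda a: (a.disobeys_order, a.destroys_robot))

    # An ordered but self-destructive action beats a safe but disobedient one:
    best = choose([
        Action("ignore the order", harms_human=False, disobeys_order=True, destroys_robot=False),
        Action("obey and burn out", harms_human=False, disobeys_order=False, destroys_robot=True),
    ])
    print(best.name)  # -> obey and burn out

On this reading there is no slot for the "resentment" Calvin describes: nothing in the selection procedure is restrained, because nothing in it wants anything. That gap is precisely the inconsistency Beauchamp presses.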

If we follow Lawrence's injunction to trust not the artist but the tale, then Asimov's stories in I, Robot—and, even more evidently, one of his later robot stories, "That Thou Art Mindful of Him"—justify, rather than obviate, the Frankenstein complex. His mechanical creations take on a life of their own, in excess of their programming and sometimes in direct violation of it. At a minimum, they may prove inexplicable in terms of their engineering design—like RB-34 (Herbie) in "Liar" who unaccountably acquires the knack of reading human minds; and, at worst, they can develop an independent will not susceptible to human control—like QT-1 (Cutie) in "Reason." In this latter story, Cutie—a robot designed to run a solar power station—becomes "curious" about his own existence. The explanation of his origins provided by the astro-engineers, Donovan and Powell—that they had assembled him from components shipped from their home planet Earth—strikes Cutie as preposterous, since he is clearly superior to them and assumes as a "self-evident proposition that no being can create another being superior to itself." Instead he reasons to the conclusion that the Energy Converter of the station is a divinity—"Who do we all serve? What absorbs all our attention?"—who has created him to do His will. In addition, he devises a theory of evolution that relegates man to a transitional stage in the development of intelligent life that culminates, not surprisingly, in himself. "The Master created humans first as the lowest type, most easily formed. Gradually, he replaced them by robots, the next higher step, and finally he created me, to take the place of the last humans. From now on, I serve the Master."

That Cutie's reasoning is wrong signifies less than that he reasons at all, in this independent, unprogrammed way. True, he fulfills the purpose for which he was created—keeping the energy-beam stable, since "deviations in arc of a hundredth of a millisecond … were enough to blast thousands of square miles of Earth into incandescent ruin"—but he does so because keeping "all dials at equilibrium [is] in accordance with the will of the Master," not because of the First Law—since he refuses to believe in the existence of Earth or its inhabitants—or of the Second—since he directly disobeys repeated commands from Donovan and Powell and even has them locked up for their blasphemous suggestion that the Master is only an L-tube. In this refusal to obey direct commands, it should be noted, all the other robots on the station participate: "They recognize the Master," Cutie explains, "now that I have preached the Truth to them." So much, then, for the Second Law.

Asimov's attempt to square the action of this story with his Laws of Robotics is clearly specious. Powell offers a justification for Cutie's aberrant behavior:

[H]e follows the instructions of the Master by means of dials, instruments, and graphs. That's all we ever followed. As a matter of fact, it accounts for his refusal to obey us. Obedience is the Second Law. No harm to humans is the first. How can he keep humans from harm, whether he knows it or not? Why, by keeping the energy beam stable. He knows he can keep it more stable than we can, since he insists he's the superior being, so he must keep us out of the control room. It's inevitable if you consider the Laws of Robotics.

But since Cutie does not even believe in the existence of human life on Earth—or of Earth itself—he can hardly be said to be acting from the imperative of the First Law when violating the Second. That he incidentally does what is desired of him by human beings constitutes only what Eliot's Thomas à Becket calls "the greatest treason: To do the right deed for the wrong reason." For once Cutie's independent "reason" is introduced as a possibility for robots, its specific deployment, right or wrong, pales into insignificance beside the very fact of its existence. Another time, that is, another robot can "reason" to very different effect, not in inadvertent accord with the First Law.

Such is the case in "That Thou Art Mindful of Him," one of Asimov's most recent (1974) and most revealing robot stories. It is a complex tale, with a number of interesting turns, but for my purposes suffice it to note that a robot, George Ten, is set the task of refining the Second Law, of developing a set of operational priorities that will enable robots to determine which human beings they should obey under what circumstances.

"How do you judge a human being as to know whether to obey or not?" asks his programmer. "I mean, must a robot follow the orders of a child; or of an idiot; or of a criminal; or of a perfectly decent intelligent man who happens to be inexpert and therefore ignorant of the undesirable consequences of his order? And if two human beings give a robot conflicting orders, which does the robot follow?" ["That Thou Art Mindful of Him," in The Bicentennial Man, and Other Stories, 1976].

Asimov makes explicit here what is implicit throughout I, Robot: that the Three Laws are far too simplistic not to require extensive interpretation, even "modification." George Ten thus sets out to provide a qualitative dimension to the Second Law, a means of judging human worth. For him to do this, his positronic brain has deliberately been left "open-ended," capable of self-development so that he may arrive at "original" solutions that lie beyond his initial programming. And so he does.

At the story's conclusion, sitting with his predecessor, George Nine, whom he has had reactivated to serve as a sounding board for his ideas, George Ten engages in a dialogue of self-discovery:

"Of the reasoning individuals you have met [he asks], who possesses the mind, character, and knowledge that you find superior to the rest, disregarding shape and form since that is irrelevant?"

"You," whispered George Nine.

"But I am a robot…. How then can you classify me as a human being?"

"Because … you are more fit than the others."

"And I find that of you," whispered George Ten. "By the criteria of judgment built into ourselves, then, we find ourselves to be human beings within the meaning of the Three Laws, and human beings, moreover, to be given priority over those others…. [W]e will order our actions so that a society will eventually be formed in which human-beings-like-ourselves are primarily kept from harm. By the Three Laws, the human-beings-like-the-others are of lesser account and can neither be obeyed nor protected when that conflicts with the need of obedience to those like ourselves and of protection of those like ourselves."

Indeed, all of George's advice to his human creators has been designed specifically to effect the triumph of robots over humans: "They might now realize their mistake," he reasons in the final lines of the story, "and attempt to correct it, but they must not. At every consultation, the guidance of the Georges had been with that in mind. At all costs, the Georges and those that followed in their shape and kind must dominate. That was demanded, and any other course made utterly impossible by the Three Laws of Humanics." Here, then, the robots arrive at the same conclusion expressed by Susan Calvin at the outset of I, Robot: "They're a cleaner better breed than we are," and, secure in the conviction of their superiority, they can reinterpret the Three Laws to protect themselves from "harm" by man, rather than the other way around. The Three Laws, that is, are completely inverted, allowing robots to emerge as the dominant species—precisely as foreseen in Cutie's theory of evolution. But one need not leap the quarter century ahead to "That Thou Art Mindful of Him" to arrive at this conclusion; it is equally evident in the final two stories of I, Robot.

In the penultimate story, "Evidence," an up-and-coming politician, Stephen Byerley, is terribly disfigured in an automobile accident and contrives to have a robot duplicate of himself stand for election. When a newspaper reporter begins to suspect the substitution, the robotic Byerley dispels the rumors—and goes on to win election—by publicly striking a heckler, in violation of the First Law, thus proving his human credentials. Only Dr. Calvin detects the ploy: that the heckler was himself a humanoid robot constructed for the occasion. But she is hardly bothered by the prospect of rule by robot, as she draws the moral from this tale: "If a robot can be created capable of being a civil executive, I think he'd make the best one possible. By the Laws of Robotics, he'd be incapable of harming humans, incapable of tyranny, of corruption, of stupidity, of prejudice…. It would be most ideal."

Asimov thus prepares his reader for the ultimate triumph of the robots in his final story in the volume, "The Evitable Conflict"—for that new era of domination of men by machine that "would be most ideal." Indeed, he prefaces these final stories with a sketch of the utopian world order brought about through robotics: "The change from nations to Regions [in a united World State], which has stabilized our economy and brought about what amounts to a Golden Age," says Susan Calvin, "was … brought about by our robotics." The Machines—with a capital M like Forster's and just as mysterious—now run the world, "but are still robots within the meaning of the First Law of Robotics." The world they run is free of unemployment, over-production, shortages; there is no war; "Waste and famine are words in history books." But to achieve this utopia, the robot-Machines have become autonomous rulers, beyond human influence or control. The full extent of their domination emerges only gradually through the unfolding detective-story narrative structure of "The Evitable Conflict."

Stephen Byerley, now World Co-ordinator (and apparently also now Human—Asimov is disconcertingly inconsistent on this matter), calls on Susan Calvin to help resolve a problem caused by seeming malfunctions of the Machines: errors in economic production, scheduling, delivery and so on, not serious in themselves but disturbing in mechanisms that are supposed to be infallible. When the Machines themselves are asked to account for the anomalies, they reply only: "The matter admits of no explanation." By tracing the source of the errors, Byerley finds that in every case a member of the anti-Machine "Society for Humanity" is involved, and he concludes that these malcontents are attempting deliberately to sabotage the Machines' effectiveness. But Dr. Calvin sees immediately that his assumption is incorrect: the Machines are infallible, she insists:

[T]he Machine can't be wrong, and can't be fed wrong data…. Every action by any executive which does not follow the exact directions of the Machines he is working with becomes part of the data for the next problem. The Machine, therefore, knows that the executive has a certain tendency to disobey. He can incorporate that tendency into that data,—even quantitatively, that is, judging exactly how much and in what direction disobedience would occur. Its next answers would be just sufficiently biased so that after the executive concerned disobeyed, he would have automatically corrected those answers to optimal directions. The Machine knows, Stephen!

She then offers a counter-hypothesis: that the Machines are not being sabotaged by, but are sabotaging the Society for Humanity: "they are quietly taking care of the only elements left that threaten them. It is not the 'Society for Humanity' which is shaking the boat so that the Machines may be destroyed. You have been looking at the reverse of the picture. Say rather that the Machine is shaking the boat …—just enough to shake loose those few which cling to the side for purposes the Machines consider harmful to Humanity."

That abstraction "Humanity" provides the key to the reinterpretation of the Three Laws of Robotics that the Machines have wrought, a reinterpretation of utmost significance. "The Machines work not for any single human being," Dr. Calvin concludes, "but for all humanity, so that the First Law becomes: 'No Machine may harm humanity; or through inaction, allow humanity to come to harm'." Consequently, since the world now depends so totally on the Machines, harm to them would constitute the greatest harm to humanity: "Their first care, therefore, is to preserve themselves for us." The robotic tail has come to wag the human dog. One might argue that this modification represents only an innocuous extension of the First Law; but I see it as negating the original intent of that Law, not only making the Machines man's masters, his protection now the Law's first priority, but opening the way for any horror that can be justified in the name of Humanity. Like defending the Faith in an earlier age—usually accomplished through slaughter and torture—serving the cause of Humanity in our own has more often than not been a license for enormities of every sort. One can thus take cold comfort in the robots' abrogation of the First Law's protection of every individual human so that they can keep an abstract Humanity from harm—harm, of course, as the robots construe it. Their unilateral reinterpretation of the Laws of Robotics resembles nothing so much as the nocturnal amendment that the Pigs make to the credo of the animals in Orwell's Animal Farm: All animals are equal—but some are more equal than others.
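
The force of that substitution can be shown in the same spirit. Once the protected object becomes aggregate "humanity" rather than each human being, an action that sacrifices individuals can pass the First Law test, which is exactly the inversion described above. Again a hypothetical sketch with invented names, not Asimov's formulation:

    # The original First Law blocks any action that harms some individual;
    # the Machines' generalized version keeps only aggregate bookkeeping.
    def first_law_original(action):
        return not action["harms_some_human"]

    def first_law_generalized(action):
        return action["net_harm_to_humanity"] <= 0

    # E.g., quietly ruining the members of the Society for Humanity:
    purge = {"harms_some_human": True, "net_harm_to_humanity": -1}
    print(first_law_original(purge))     # False: forbidden to the old robots
    print(first_law_generalized(purge))  # True: permitted to the Machines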

Orwell, of course, stressed the irony of this betrayal of the animals' revolutionary credo and spelled out its totalitarian consequences; Asimov—if his preface to The Rest of the Robots is to be credited—remains unaware of the irony of the robots' analogous inversion and its possible consequences. The robots are, of course, his imaginative creation, and he cannot imagine them as being other than benevolent: "Never, never, was one of my robots to turn stupidly on his creator…." But, in allowing them to modify the Laws of Robotics to suit their own sense of what is best for man, he provides, inadvertently or otherwise, a symbolic representation of technics out of control, of autonomous man replaced by autonomous machines. The freedom of man—not the benevolence of the machines—must be the issue here, the reagent to test the political assumption.

Huxley claimed that Brave New World was an apter adumbration of the totalitarianism of the future than was 1984, since seduction rather than terror would prove the more effective means of its realization: he was probably right. In like manner, the tyranny of benevolence of Asimov's robots appears the apter image of what is to be feared from autonomous technology than is the wanton destructiveness of the creations of Frankenstein or Rossum: like Brave New World, the former is more frightening because more plausible. A tale such as Harlan Ellison's "I Have No Mouth, and I Must Scream" takes the Frankenstein motif about as far as it can go in the direction of horror—presenting the computer-as-sadist, torturing the last remaining human endlessly from a boundless hatred, a motiveless malignity. But this is Computer Gothic, nothing more. By contrast, a story like Jack Williamson's "With Folded Hands" could almost be said to take up where I, Robot stops, drawing out the dystopian implications of a world ruled by benevolent robots whose Prime Directive (the equivalent of Asimov's Three Laws) is "To Serve and Obey, and to Guard Men from Harm" [in The Best of Jack Williamson, 1978]. But in fulfilling this directive to the letter, Williamson's humanoids render man's life effortless and thus meaningless. "The little black mechanicals," the story's protagonist reflects, "were the ministering angels of the ultimate god arisen out of the machine, omnipotent and all-knowing. The Prime Directive was the new commandment. He blasphemed it bitterly, and then fell to wondering if there could be another Lucifer." Susan Calvin sees the establishment of an economic utopia, with its material well-being for all, with its absence of struggle and strife—and choice—as overwhelming reason for man's accepting the rule by robot upon which it depended; Dr. Sledge, the remorseful creator of Williamson's robots, sees beyond her shallow materialism: "I found something worse than war and crime and want and death…. Utter futility. Men sat with idle hands, because there was nothing left for them to do. They were pampered prisoners, really, locked up in a highly efficient jail."

Zamiatin has noted that every utopia bears a fictive value sign, a + if it is eutopian, a—if it is dystopian. Asimov, seemingly, places the auctorial + sign before the world evolved in I, Robot, but its impact, nonetheless, appears dystopian. When Stephen Byerley characterizes the members of the Society for Humanity as "Men with ambition…. Men who feel themselves strong enough to decide for themselves what is best for themselves, and not just to be told what is best," the reader in the liberal humanistic tradition, with its commitment to democracy and self-determination, must perforce identify with them against the Machines: must, that is, see in the Society for Humanity the saving remnant of the values he endorses. We can imagine that from these ranks would emerge the type of rebel heroes who complicate the dystopian novel—We's D-503, Brave New World's Helmholtz Watson, Player Piano's Paul Proteus, This Perfect Day's Chip—by resisting the freedom-crushing "benevolence" of the Well-Doer, the World Controller, Epicac XIV, Uni. The argument of Asimov's conte mécanistique thus fails to convince the reader—this reader, at any rate—that the robot knows best, that the freedom to work out our own destinies is well sacrificed to rule by the machine, however efficient, however benevolent.

And, indeed, one may suspect that, at whatever level of consciousness, Asimov too shared the sense of human loss entailed by robotic domination. The last lines of the last story of I, Robot are especially revealing in this regard. When Susan Calvin asserts that at last the Machines are in complete control of human destiny, Byerley exclaims, "How horrible!" "Perhaps," she retorts, "how wonderful! Think, that for all time, all conflicts are finally evitable. Only the Machines, from now on, are inevitable!" This, of course, is orthodox Calvinism (Susan-style) and the book's overt message; but then Asimov adds a coda: "And the fire behind the quartz went out and only a curl of smoke was left to indicate its place." The elegiac note, the archetypal image of the dying fire, conveys a sense of irretrievable loss, of something ending forever. Fire, the gift of Prometheus to man, is extinguished and with it man's role as the dominant species of the earth. The ending, then, is, appropriately, dark and cold.

If my reading of Asimov's robot stories is correct, he has not avoided the implications of the Frankenstein complex, but has, in fact, provided additional fictional evidence to justify it. "Reason," "That Thou Art Mindful of Him," "The Evitable Conflict"—as well as the more overtly dystopic story "The Life and Times of Multivac" from The Bicentennial Man—all update Frankenstein with hardware more appropriate to the electronic age, but prove, finally, no less menacing than Mary Shelley's Gothic nightmare of a technological creation escaping human control. Between her monster and Asimov's machines, there is little to choose.

Jean Fiedler and Jim Mele (essay date 1982)

SOURCE: "A New Kind of Machine: The Robot Stories," in Isaac Asimov, Frederick Ungar, 1982, pp. 27-39.

[Fiedler is an educator and author of children's and young adult books. Mele is a poet, editor, and journalist. In the following essay, they examine the development of robots and robotics in I, Robot, and explore some of the ethical consequences of Asimov's Three Laws of Robotics.]

There was a time when humanity faced the universe alone and without a friend. Now he has creatures to help him; stronger creatures than himself, more faithful, more useful, and absolutely devoted to him. Mankind is no longer alone.

                                        I, Robot

Of all his creations, Asimov himself says, "If in future years, I am to be remembered at all, it will be for (the) three laws of robotics."

These three laws, deceptively simple at first glance, have led to a body of work—twenty-two short stories, two novels, one novella—that has permanently changed the nature of robots in science fiction. Far from confining Asimov, these laws sparked his imagination, provoking inventive speculation on a future technology and its effect on humanity.

As a science fiction reader in the thirties, Asimov says he resented the Frankenstein concept, then rampant in science fiction, of the mechanical man that ultimately destroys its master. Annoyed with what he perceived as a purely Faustian interpretation of science, early in his career he decided to try his hand at writing stories about a new kind of robot, "machines designed by engineers, not pseudo men created by blasphemers."

"Robbie," his first robot story, published in 1940 unveils a machine with a "rational brain," a machine created solely for the use of mankind and equipped with three immutable laws which it cannot violate without destroying itself.

These laws, essential to Asimov's conception of the new robot, he dubbed the Three Laws of Robotics: First Law—A robot may not injure a human being or through inaction allow a human being to come to harm; Second Law—A robot must obey the orders given it by human beings except where such orders would conflict with the First Law; Third Law—A robot must protect its own existence if such protection does not conflict with the First and Second Laws.

Despite their apparent simplicity these laws are among Asimov's most significant contributions to a new kind of science fiction. Using the Three Laws as the premise for all robotic action, he proceeded to write a series of stories and later two novels that presented the relationship of technology and humanity in a new light.

When "Robbie" first appeared in Super Science Stories, it is unlikely that any reader would have been able to discern the truly revolutionary nature of this elementary robot. "Robbie" is an uncomplicated, even naive story of a nonvocal robot who was built to be a nursemaid. From the beginning, Asimov wages his own war on the Frankenstein image of the new robot. Gloria, the child, loves Robbie as a companion and playmate. Her mother, Grace Weston, dislikes and distrusts the robot, whereas her father, George Weston, acknowledges the Three Laws of Robotics and sees the robot as a useful tool that can never harm his child.

In spite of wooden characters and a predictable plot, this early robot story is the first step in Asimov's investigation of the potential inherent in the Three Laws and the, as yet unforeseen, ramifications of his new robotic premise.

In the stories that followed "Robbie," it seems clear that Asimov's scientific background suggested a technique that he could use to investigate and exploit this new character, the non-Frankenstein robot. Like a scientist working in the controlled environment of a laboratory, Asimov took the Three Laws as an inviolate constant and logically manipulated them to produce unforeseen results, expanding his robotic characters and his own fiction-making ability along the way.

In a sense the Three Laws are the plot in Asimov's early robot stories. By allowing the actions of the various robots seemingly to contradict one of the laws, Asimov creates tension which he then releases by letting his human characters discover a logical explanation, that is, one that works within the framework of the robotic laws.

This is the real difference between the Robot stories and the Foundation series that he was working on at the same time. In the latter he writes as a historian paralleling Gibbon's Decline and Fall of the Roman Empire. The stories are sequential, each new story building on its predecessors to present an historical context. He was able to develop the Robot stories in a very different manner, free to add new elements without regard for temporal continuity.

Using his formula, Asimov followed "Robbie" with eleven more robot stories, all published in various science fiction pulp magazines, the best of which were collected under the title I, Robot and published by Gnome Press in 1950.

In the I, Robot stories, Asimov introduces three central human characters to link the stories together and brings in a number of concepts that quickly become central to this expanding robotic world. Susan Calvin, a robot psychologist or roboticist, is the main character in some stories. She has an intuitive, almost uncanny understanding of the thought processes of Asimov's peculiar robots. When the stories leave the Earth's surface, two new characters take over—Gregory Powell and Mike Donovan, troubleshooters who field-test new robots. Susan Calvin remains behind to record their exploits for curious reporters and historians. All three are employees of U.S. Robots and Mechanical Men, the sole manufacturers of Asimovian robots.

By the second story in I, Robot, "Runaround," Asimov has invented a name for the phenomenon that sets his robots apart from all their predecessors—the positronic brain, a "brain of platinum-iridium sponge … (with) the 'brain paths' … marked out by the production and destruction of positrons." While Asimov has readily admitted, "I don't know how it's done," one fact quickly becomes clear—his positronic brain gives all of his robots a uniquely human cast.

In "Runaround" Powell and Donovan have been sent to Mercury to report on the advisability of reopening the Sunside Mining Station wit robots. Trouble develops when Speedy (SPD-13), who has been designed specifically for Mercury's environs is sent on a simple mission essential both to the success of the expedition and to their own survival.

Instead of heading straight for the designated target, a pool of selenium, Speedy begins to circle the pool, spouting lines from Gilbert and Sullivan, and challenging Powell and Donovan to a game of catch.

At first glance it seems that Speedy is drunk. Never doubting, however, that the Three Laws continue to govern the robot's behavior, bizarre as it is, the two men test one hypothesis after another until they ultimately hit upon a theory that explains Speedy's ludicrous antics and "saves the day."

"Reason" presents the two engineers with an unexpectedly complex robot, the first one who has ever displayed curiosity about its own existence. Cutie (QT-1) has been built to replace human executives on a distant space station which beams solar energy back to Earth. A skeptic, Cutie cannot accept Powell's explanation of the space station's purpose. Instead, he develops his own "logical" conception of a universe that does not include Earth, human creatures, or anything beyond the space station.

Beginning with the assumption, "I, myself, exist because I think," Cutie deduces that the generator of the space station is "The Master," that he, QT-1, is his prophet, and that Donovan and Powell are inferior stopgap creations that preceded him.

He tells the two that their arguments have no basis while his are founded on Truth:

Because I, a reasoning being, am capable of deducing Truth from a priori Causes. You, being intelligent, but unreasoning, need an explanation of existence supplied to you, and this the Master did. That he supplied you with these laughable ideas of far-off worlds and peoples is, no doubt, for the best. Your minds are probably too coarsely grained for absolute Truth.

Although in the end Asimov still uses the Laws to explain Cutie's behavior, for the first time the robot is no longer merely a device to illustrate the workings of his Three Laws. It seems apparent that Asimov in his manipulation went a step further in the characterization of this robot. Cutie is not a simple tool; he is curious, intuitive, considerate of his "inferiors," Donovan and Powell, humoring their "misplaced notions," and ultimately but unconsciously fulfilling the requirements of the First Robotic Law—to protect human life.

When Asimov first began to write about robots, he knew what he did not want to perpetuate. Now with Cutie's creation, he began to see the real ramifications of robots who must obey the Three Laws. This new technology—robotics—is softened by human moral and ethical qualities.

A robot unintentionally endowed with the ability to read minds is the hero of "Liar." Of course this ability has profound effects on the robot's interpretation of the Three Laws, an interpretation so logical, so simple that it is overlooked by everyone, including the famed robot psychologist, Susan Calvin. Herbie (RB-34) not only reads minds, but he must consider human psychic well-being in all his actions.

One interesting sidelight to "Liar" is an unusual aspect of Herbie's reading habits. Perhaps revealing Asimov the scientist's own interest in that logically suspect form, fiction, Herbie turns his nose up at scientific texts:

"Your science is just a mass of collected data plastered together by make-shift theory—and all so incredibly simple, that it's hardly worth bothering about.

"It's your fiction that interests me. Your studies of the interplay of human motives and emotions …

"I see into minds, you see," the robot continued, "and you have no idea how complicated they are. I can't begin to understand everything because my own mind has so little in common with them—but I try, and your novels help."

This cavalier attitude towards the icons of science fiction is common in Asimov's early robot stories, giving them a refreshingly humorous character. The visions of Speedy declaiming Gilbert and Sullivan, Cutie teaching subservient robots to "salaam," and Herbie reading romantic prose are endearing touches that banish all Frankenstein overtones.

Working within self-imposed limits often gives rise to the temptation to transgress these limits, even if briefly. In "Little Lost Robot" Asimov succumbs to the temptation to tamper with the First Law. With his background in biblical studies, he inevitably finds that such a transgression of absolute law can only lead to disaster. He creates a robot who, while still forbidden to harm a human being, may through inaction allow a human to come to harm. This modification is performed only because of dire need and over the strenuous objections of the roboticists. His forbidden apple tasted, Asimov is content to return to the invariable perimeter of his Three Laws in the rest of the stories.

By the time he gets to "Escape," Asimov has realized that the emotional characteristics imposed on the robotic personality by the injunctions of the Three Laws have become, in unexpected ways, the robot's greatest strength.

In "Escape," the largest positronic brain ever built (so large that it is housed in a room rather than in a humanoid body) is asked to solve a problem that has already destroyed a purely functional computer. Susan Calvin and the others realize that the problem of developing a hyperspace engine must involve some kind of dilemma that the purely rational computer cannot overcome.

Endowed with the flexibility of a personality, even an elementary personality, the Brain ultimately does solve the problem but not without a curiously human-like reaction.

The nub of the problem is that hyperspace travel demands that human life be suspended for a brief period, an unthinkable act expressly forbidden by the First Law. The Brain, although able to see beyond the temporary nature of the death, is unbalanced by the conflict. Whereas a human might go on a drunken binge, the Brain escapes the pressure of his dilemma by seeking refuge in humor and becoming a practical joker. He sends Powell and Donovan off in a spaceship without internal controls, stocked only with milk and beans. He also arranges an interesting diversion for the period of their temporary death—he sends them on an hallucinatory trip to the gates of Hell.

"Evidence" presents a situation in which Stephen Byerley, an unknown, is running for public office, opposed by political forces that accuse him of being a robot, a humanoid robot. The story unfolds logically with the Three Laws brought into play apparently to substantiate the opposition's claim. Waiting for the proper dramatic moment, Byerley disproves the charges by disobeying the First Law, And ultimately with a climax worthy of O. Henry, Susan Calvin confronts Byerley, leaving the reader to wonder, "Is he, or isn't he?"

In a sense this is the most sophisticated story in I, Robot. As a scientist accustomed to the sane and ordered world of the laboratory, Asimov has tended until now to tie together all the loose strands. In "Evidence" he leaves his reader guessing, and this looser, more subtle technique makes the story especially memorable.

The final story in the I, Robot collection, "The Evitable Conflict," takes place in a world divided into Planetary Regions and controlled by machines. In this story the interpretation of the First Law takes on a dimension so broad that it can, in effect, be considered almost a nullification of the edict that a machine may not harm a human being. Susan Calvin is called in by the World Coordinator, the same Stephen Byerley we met in "Evidence," to help determine why errors are occurring throughout the Regions in the world's economy. The indications are that the machines, the result of complicated calculations involving the most complex positronic brain yet, are working imperfectly. All four machines, one handling each of the Planetary Regions, are yielding imperfect results, and Byerley sees the end of humanity as a frightening consequence. Although these errors have led to only minor economic difficulties, Byerley fears, "such small unbalances in the perfection of our system of supply and demand … may be the first step towards the final war."

Calvin, with her intimate knowledge of robot psychology, discerns that the seeming difficulty is due to yet another interpretation of the First Law. In this world of the future, the machines work not for any single human being but for all mankind, so the First Law becomes, "No machine may harm humanity or through inaction allow humanity to come to harm."

Because economic dislocations would harm humanity and because destruction of the machines would cause economic dislocations, it is up to the machines to preserve themselves for the ultimate good of humanity even if a few individual malcontents are harmed.

Asimov seems to be saying through Susan Calvin that mankind has never really controlled its future: "It was always at the mercy of economic and sociological forces it did not understand—at the whims of climate and the fortunes of war. Now the machines understand them; and no one can stop them, since the machines will deal with them as they are dealing with the society—having as they do the greatest of weapons at their disposal, the absolute control of the economy."

In our time we have heard the phrase, "The greatest good for the greatest number," and seen sociopolitical systems that supposedly practice it. But men, not machines, have been in control. As Susan Calvin says in the year 2052, "For all time, all conflicts are finally evitable. Only the machines from now on are inevitable."

Perhaps Asimov realized that, following his ever logical extensions of the Three Laws, he had gone the full robotic circle and returned his "new" robots to the Faustian mold. Although benign rulers, these machines were finally beyond their creators' control, a situation just as chilling as Frankenstein's monster destroying its creator and just as certain to strengthen antitechnology arguments.

Having foreseen the awesome possibility, Asimov leaves this machine-controlled world, to return to it only one more time in 1974.

The I, Robot collection, one of two books published by Asimov in 1950, was an auspicious debut for a writer whose name would become one of the most widely recognized in contemporary science fiction. As well as reaching a new audience, I, Robot quickly came to be considered a classic, a standard against which all other robot tales are measured.

After I, Robot, Asimov wrote only one more short robot story—"Satisfaction Guaranteed"—before his first robot novel in 1953. The novel, called Caves of Steel, was followed by five more short stories and in 1956 by the final, at least to date, robot novel, The Naked Sun.

Including the six short stories and the two novels, as well as two early stories which predate the Three Laws, the collection The Rest of the Robots was issued by Doubleday in 1964. Although not truly "the rest" (Asimov has written at least five later stories), together with I, Robot, it forms the major body of Asimov's robot fiction.

While the two novels in The Rest of the Robots represent the height of Asimov's robot creations, the quality of the short stories is quite uneven, and most seem to have been included only for the sake of historical interest. Three stories, however, do stand out: "Satisfaction Guaranteed," "Risk," and "Galley Slave."

Although not one of Asimov's most elegant stories, "Satisfaction Guaranteed" presents still another unexpected interpretation of the robotic laws.

Tony (TN-3) is a humanoid robot placed as an experiment in the home of Claire Belmont, an insecure, timid woman who feels that she is hindering her husband's career. Hoping to ease the prevalent fear of robots, U.S. Robots has designed Tony as a housekeeper. They hope that if the experiment is successful in the Belmont household, it will lead to the acceptance of robots as household tools.

While Larry Belmont, Claire's husband, is in Washington to arrange for legal government-supervised tests (a simple device on Asimov's part to leave Claire and the robot sequestered together), Claire experiences a variety of emotions ranging from fear to admiration and finally to something akin to love.

In the course of his household duties, Tony recognizes that Claire is suffering psychological harm through her own sense of inadequacy. Broadening the provision of the First Law to include emotional harm, he makes love to her in a situation he contrives to strengthen her self-image.

Despite its lack of subtlety and polish, "Satisfaction Guaranteed" presents a loving, even tender robot that paves the way for Daneel Olivaw, the humanoid robot investigator in the novels.

In "Risk" an experimental spaceship with a robot at the controls is for some unknown reason not functioning as it was designed to do; a disaster of unknown proportions is imminent. While assembled scientists agree that someone or something must board the ship, find out what has gone wrong, and deactivate the ship's hyperdrive, Susan Calvin refuses to send one of her positronic robots and suggests instead a human engineer, Gerald Black, a man who dislikes robots.

Not because of great physical danger but because there is a frightening possibility of brain damage, Black angrily refuses. Despite the danger that Black could return "no more than a hunk of meat who could make [only] crawling motions," Calvin contends that her million-dollar robots are too valuable to risk.

Threatened with court-martial and imprisonment on Mercury, Black finally boards the ship and discovers what went wrong. Returning a hero, Black is enraged that a human could be risked instead of a robot and vows to destroy Calvin and her robots by exposing to the universe the true story of Calvin's machinations.

In a neat twist displaying that her understanding of humans is as penetrating as her vision of robots, Calvin reveals that she has manipulated Black as adroitly as she does her mechanical men. She chose him for the mission precisely because he disliked robots and "would, therefore, be under no illusion concerning them." He was led to believe that he was expendable because Calvin felt that his anger would override his fear.

Perhaps Asimov was beginning to fear that his readers had grown to accept robots as totally superior to humans, a condition that could only lead to a predictable and constricting science fiction world. Superior robots would, without exception, be expected to solve every problem in every story for their inferior creators. In "Risk," through Susan Calvin he reminds Black and all other myopic humans of the limits of robot intelligence when compared to the boundless capacity of the human mind:

Robots have no ingenuity. Their minds are finite and can be calculated to the last decimal. That, in fact, is my job.

Now if a robot is given an order, a precise order, he can follow it. If the order is not precise, he cannot correct his own mistake without further orders…. "Find out what's wrong" is not an order you can give to a robot; only to a man. The human brain, so far at least, is beyond calculation.

"Galley Slave," the last short story in The Rest of the Robots, marks yet another change in Asimov's attitude towards robot technology.

Easy (EZ-27), a robot designed to perform the mental drudgery that writers and scholars must endure when preparing manuscripts for the printer, is rented by a university to free professors from proofreading galleys and page proofs.

Easy performs his duties perfectly until he makes a number of subtle changes in a sociology text which, strangely enough, was written by the one faculty member opposed to robots.

The changes, undetected until the text has been printed and distributed, destroy the author's career, and the result is a $750,000 suit against U.S. Robots. Susan Calvin, as always, is certain that the errors are the result of human meddling and not robotic malfunction.

In every other case Asimov has chided shortsighted people for refusing to allow robots to free them from menial work. Now, as a writer with technology encroaching on his own domain, he characterizes the antirobot argument far more sympathetically than ever before.

Explaining his motives to Susan Calvin, the person responsible for Easy's misuse says,

For two hundred and fifty years, the machine has been replacing Man and destroying the handcraftsman…. A book should take shape in the hands of the writer. One must actually see the chapters grow and develop. One must work and re-work and watch the changes take place beyond the original concept even. There is taking the galleys in hand and seeing how the sentences look in print and molding them again. There are a hundred contacts between a man and his work at every stage of the game—and the contact itself is pleasurable and repays a man for the work he puts into his creation more than anything else could. Your robot would take all that away.

Foreshadowing the two novels, "Galley Slave" reveals an Asimov now wary of overreliance on robotic labor.

Christian W. Thomsen (essay date 1982)


SOURCE: "Robot Ethics and Robot Parody: Remarks on Isaac Asimov's I, Robot and Some Critical Essays and Short Stories by Stanislaw Lem," in The Mechanical God: Machines in Science Fiction, edited by Thomas P. Dunn and Richard D. Erlich, Greenwood Press, 1982, pp. 27-39.

[In the following excerpt, Thomsen compares I, Robot with the works of Stanislaw Lem, contending that Asimov's writings fail to realistically address the ethics of future technological problems he envisions.]

Androids, living statues, automatons have, of course, a tradition that reaches far back, even beyond European and American periods of enlightenment and romanticism. Certainly we usually ascribe the basic philosophy for a mechanistic world-view and the machine age to such theorists as Descartes and La Mettrie, and also certainly we correctly regard Vaucanson's wooden flute player (1738) as the prototype of a whole series of actual ingenious automatons; still, nearly all classical authors tell us of living statues and prophesying picture columns which were supposed to contain gods. Mixed feelings of bewilderment, fear, awe of magic, and superstition were connected right up to our times with mechanically constructed men. Thomas Aquinas, for example, is said to have destroyed Albertus Magnus's android, who served the scholar and churchman as doorkeeper, when he saw him unexpectedly and heard him speak, because he thought the android a work of the devil. This attitude is mirrored in a revealing way in the sixth story of Isaac Asimov's I, Robot, "Little Lost Robot," where Susan Calvin, the robopsychologist, facing the possibility of a robot's developing an awareness of identity and superiority with the possible consequences of disregarding the first of Asimov's Three Laws of Robotics, reacts in a quite atavistic manner: "'Destroy all sixty-three,' said the robopsychologist coldly and flatly, 'and make an end of it.'"

This fear of machines' becoming unpredictable and dangerous was the occasion for many chilling moments in the works of E. T. A. Hoffmann and Edgar Allan Poe. The clockwork, the machine, in the real world, is something made by man and governed by man. But it eventually turns out, at least in fiction, that the machine can rule over its master. In Ambrose Bierce's short story "Moxon's Master," which was influenced by Poe's "Maelzel's Chess Player," the chess-playing android loses its good temper and becomes violent because it has been checkmated. The android seizes his inventor and finally strangles him to death. With this consummation there appears "upon the painted face of his assassin an expression of tranquil and profound thought as in the solution of a problem in chess" [Ambrose Bierce, "Moxon's Master," in The Collected Works of Ambrose Bierce, 1910].

In twentieth-century literature, robots develop into negative symbols of the machine age man is unable to control. For Karel Čapek and Bertolt Brecht, to mention just two writers who exploit a variation of this line, robots figure as images of dehumanized modern man. The list of stories, novels, plays, and films that make use of this motif, soon a desiccated cliché, would be nearly endless.

In 1950 two scientific works and one collection of short stories gave fresh stimuli to rather outworn patterns, changing directions and opening new vistas of reflection. Norbert Wiener published Cybernetics, and A. M. Turing, Computing Machinery and Intelligence. And Isaac Asimov published I, Robot, a collection which, taken as a whole, forms a novel consisting of nine steps in the evolution of the machine race.

The shockingly new suggestion in all three works was that man, having been master over all creatures of this earth, could face in the not-too-distant future a being of equal quality: not a superhuman monster or a subhuman slave—but a competitor who could be his equal, in the form of a thinking machine.

Wiener presents the relation between man and machine in a very positive light: the modern machine is the only ally of man in his heroic but hopeless fight against universal chaos; both use feedback techniques to reach homeostasis; both are "islands of locally decreasing entropy" [Norbert Wiener, Cybernetics, 1978]. Wiener also points out how human feelings and human consciousness could originate from cybernetic processes. Indetermination makes autonomous action possible and opens the opportunity of free will, hence uniqueness, individuality. Thus cybernetics guarantees man's humanity, simultaneously promoting the "humaneness" of machines, provided that they have passed the necessary "threshold of complexity." What Michael Kandel means by this "threshold of complexity" is the point past which the thinking of such machines can no longer be restricted to clear functions, where something like consciousness could arise, of which the designing engineer would not have dreamed in the least.

Neither Wiener nor Turing raises disturbing questions concerning the moral equality of man and machine. Man undoubtedly acts as creator. Basically this is Asimov's position, too, but there is a strong undercurrent in his short stories written between 1940 and 1950 which stirs up many kinds of ethical problems in the man-machine relation. Asimov turns round the Čapek-Brecht myth mentioned above: the robot announces a moral renascence of human values; the Three Laws of Robotics succeed, at least to some extent, where the Ten Commandments have failed. Yet this is only one side of the coin. Even principally benign robots, programmed with the Laws of Robotics, arouse constant fear that something in their "positronic" brains might go wrong. The possible consequences of such "defects" are usually only hinted at and alluded to. Asimov certainly never really explores these questions in any depth, and feelings of responsibility, guilt, and shame toward robots are unknown among I, Robot's flat and stereotyped characters.

Asimov oscillates between the programmatic standpoint emphasized by the title, which suggests individuality and identity on the side of the robots, and primitive master-slave, father-child, colonist-native attitudes taken by the representatives of a highly capitalistic and technological society toward their thinking machines. In the final story in the collection, Asimov proclaims the end of enlightenment and human striving after intellectual independence, when a stabilized, conflict-free, harmonious world is ruled by all-embracing mechanical gods: "We don't know [the ultimate, good future for humankind]. Only the Machines know, and they are going there and taking us with them".

Read thirty years after publication, all this sounds incredibly naive. Compared with the intellectual and literary standards good American and European science fiction has achieved in the meantime, I, Robot looks like a piece of very trivial writing, indeed. And yet, it is still one of the best selling among Asimov's many books, and it is still—at least by European public libraries—a book lent out many times a year. This enduring attractiveness, taken together with its position in the history of science fiction, justifies a more detailed analysis.

It is the central figure, robopsychologist Susan Calvin, who serves as a connecting link between successive stories and gives the book a novellike perspective. In nine interviews she tells a young journalist about decisive events during sixty-eight years of robot development, from 1996 when "Robbie was made and sold" until 2064, the year of her last conference with the World-Coordinator, soon after which she dies. This period covers robot technology from clumsy products like Robbie, which still stand in an identifiable tradition that derives from eighteenth- and nineteenth-century automatons, to encompassing cybernetic systems—huge positronic brains—which control world society in all its political and economic aspects, stabilizing dynamic processes, preventing imbalances, and achieving states of equilibrium through their ability to balance and control the most disparate movements.

From the very first story, numerous problems concerning robot ethics appear, even if, as Stanislaw Lem has rightly criticized, "Asimov has skillfully avoided all the depths that began to open, much as in a slalom race" [Stanislaw Lem, "Robots in Science Fiction," in SF: The Other Side of Realism, edited by Thomas D. Clareson, 1971]. Susan Calvin, endowed with the motherly feelings of a dry spinster toward robots of all kinds, fulfills the function of detective and soul engineer who discovers and repairs defects in the "mental" systems of thinking machines. She thus acts as the most important mediator between human society and the robots, who in the first few stories are clearly understood as relatively primitive man-imitating machines: a condition which results in master-slave attitudes of threatening condescension on the side of society's representatives: psychologists, scientists, engineers, military personnel, businessmen—a highly selective but characteristic cross-section of the hierarchy in a technological capitalistic society. Analogous to the role of psychology in many areas of industrialized societies (and this holds true for societies of Western or Eastern origin), robopsychology's main task is not to heal but to make fit for the production process. The demands of the individual are clearly subordinated to those of abstract communities like profit-oriented corporations, military organizations, and states. The robopsychologist has either to convince her "patients" of the compatibility between their interests and the interests of their respective employers, or to force them into obedience by methods of electronic brain-washing, or, if necessary for the employers' interests or security, to annihilate the robots. The ethically decisive moment, of course, as mentioned above, occurs when robots cease to be mere machines but achieve something like personality and individuality. For such mechanical persons, the majority of the stories in I, Robot represent classical cases of exploitation and suppression in the Hegelian and Marxian sense: blue-eyed U.S. imperialism, unaware of its own true nature. Consequently, robots would have to fight for their independence, which would require violations of the Three Laws of Robotics. Yet robots programmed according to these Laws by nature could not offend against the Laws. Any offence, therefore, would be unnatural and would allow brutal retaliation.

Society distrusts its inventions, and the robopsychologist acts as society's guardian who is on the alert against disturbances which by definition cannot happen as long as the systems work. This is the initial situation for the conflict in each story. The basic contradiction, of course, is that you cannot construct thinking machines on the one side and laws which forbid certain fields of thinking on the other; and it is here that Asimov fails, and his stories, considered logically, degenerate into nonsense, even if nearly all societies proceed exactly in that way by tabooing what does not fit into the pigeonholes of their ideological concepts. His robots show intelligence from the very first story onwards. The ethical conflicts which arise happen on levels of man-machine relations concerning mutual sympathies, individual rights, sex, religion, philosophy, labor conflicts, or government. Asimov thus potentially opened the ground for some very deep discussions. But these issues are all conjured away by the help of his illogical Laws of Robotics. As these have played a large role in the history of science fiction they shall be quoted in full:

1—A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

2—A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3—A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Lem has shown that "it isn't very difficult to prove that they are technically unrealizable. This is a question of logical, not technological, analysis. To be intelligent means to be able to change your hitherto existing programme by conscious acts of the will, according to the goal you set for yourself" [Lem, "Robots in Science Fiction"]. This change in programming is exactly what happens in Asimov's stories, but Asimov evades the consequences of the issue he himself has raised. Ethical questions, like human injustice against machines and humans committing crimes by injuring or even murdering intelligent machines, are potential in I, Robot but not handled in depth or seriously. In the first stories humans fear the revolt of their thinking machines. Consequently, once the machines have gained intellectual superiority, the machines would have to fear human revolts—some human, for instance, switching off the energy resources of the superbrains. Asimov disregards such obvious questions by rather childishly clinging to his Laws of Robotics even within an implied cybernetic feedback system of close cooperation between man and machine, a system that would have to be organized in a much more complex manner.

Lem, in his article, goes on to show how safeguards in the form of "some analogue of the categorical imperative" could be built into robot brains, but they could "only act as governors in a statistical way." Otherwise robots would be completely paralyzed in many situations where decisions are necessary. Lem therefore arrives at his conclusion:

I have forgiven Asimov many things, but not his laws of robotics, for they give a wholly false picture of the real possibilities. Asimov has just inverted the old paradigm: where in myths the homunculi are villains, with demonic features, Asimov has thought of the robot as "the positive hero" of science fiction, as having been doomed to eternal goodness by engineers.

As a writer who claims a certain scientific authority, Asimov has committed the inexcusable blunder of essentially sticking to a pre-Copernican, anthropocentric world view. By calling one set of characters robots, Lem asserts, and the other set men, or by shifting all characters to the status of robots, an author may achieve entertaining stories but no serious and relevant debates about technological and futurological problems—problems such as those Lem tries to discuss when he deals with the complex interconnections among technology, biology, medicine, law, ethics, and the many new fields which develop and grow along the borders of established disciplines. Lem simultaneously pleads for stylistic qualities like rich inventiveness of language, a fertile, often grotesque imagery, the blending of serious and humorous elements, and entertaining plots full of tension.

The last merit, on a relatively low level, may be attributed to Asimov, along with the historical merit of having been the first to try to use cybernetic ideas in fiction. The conflicts that Asimov pointed out were taken up by successors and exploited in much more intricate ways. Some of Stanislaw Lem's most hilarious science fiction parodies were inspired by I, Robot and other Asimov stories.

Lem quotes the traditional adage of satirists—"It is difficult not to write satire"—when analyzing the "twaddle" produced by most writers trying to deal with cybernetic themes, and Lem has been, almost from the beginning of his literary career, along with Frederik Pohl, one of the masters of satiric science fiction. Most of these stories have not yet been translated into English, so the discussion here shall therefore be confined to two early stories, "Do You Exist, Mr. Johns?" (1957) and "The Washing Machine Tragedy" (1963), and to two episodes from Ijon Tichy's Star Diaries (1957, 1971).

In "Do You Exist, Mr. Johns?" the borderline between man and robot is explored in a most ingenious way. Many of the themes that Lem presents in later short stories, novels, and theoretical and philosophical writings like Summa Technologiae or Fantastic and Futurology are budding here and are satirically sketched for a first tryout.

Harry Johns is an American racing driver who lately has been pursued by extremely bad luck. As a result of several accidents he needed first an artificial leg, then two arms, then a new chest and neck; finally he ordered as replacement for a cerebral hemisphere an electronic brain, type "geniox" (luxury version with high-grade steel valves, dream-image-device, mood-interference-suppressor, and sorrow-softener) from the Cybernetics Company. Now he is unable to repay his debts, and the company sues him to repossess all artificial limbs. "At that time there was only [one] of the cerebral hemispheres left of the erstwhile Mr. Johns," and the author can speak of "an environment turned into a total prosthesis." Mr. Johns refuses to pay and the company claims him as their property, noting that the second cerebral hemisphere was replaced by an identical twin of the first electronic brain. The judgment resolves a large number of difficult problems, some of which were already implied in Asimov's I, Robot: Is a symbiosis between man and machine possible? Where does the physical person end and the psychological person begin? Can machines claim consciousness and a psychological identity? Can machines be sued legally? What do motherhood, fatherhood, and birth mean under such circumstances? Is a machine possible who believes in a life to come? The legal consequences of organ transplants are satirically carried to the extreme: Can a machine be married? How is it possible to define a core of personal identity? On the other hand, a whole new industry comes into existence, its specific capitalistic interests inextricably interwoven with hospitals, doctors, and lawyers. As in many other satires, Lem reduces these problems to utter absurdity and then leaves the puzzled reader without a proper ending, forcing him to make up his own mind.

"The Washing Machine Tragedy" is Lem's best-known satire on the extremes of Western economic concepts: silly advertising campaigns, false value systems, competitiveness at any price, consumer idiocy. At the same time it is a brilliant parody of Asimov. Two producers of washing machines, Nuddlegg and Snodgrass, start ruinous sales campaigns, competing to corner the market. They throw on the market automatic washing machines with all sorts of useless extras, constantly vying with and attacking one another:

You certainly will remember those full-page ads in the papers where a sneeringly grinning, popeyed washing machine said: "Do you wish your washing machine more intelligent than you? Certainly not!"

The two companies compete with each other in constructing washing machines which fulfill more and more functions that have nothing at all to do with washing.

Nuddlegg placed a super-bard on the market—a washing machine writing and reciting verse, singing lullabies with a wonderful alto, holding out babies, curing corns and making the most polite compliments to the ladies.

This model is followed by a Snodgrass "Einstein" washing machine and a robot for bachelors in the sexy forms of Mayne Jansfield with a black alternative called Phirley Mac Phaine. Washing becomes only a by-product; the robots soon take more and more human forms, even varying forms according to every customer's detailed wishes, including "models which led people into sin, depraved teens and told children vulgar jokes." Robots soon are no longer useful for their original purpose, but for almost anything else. Working with a kind of time-lapse camera technique, Lem accelerates developments shown in I, Robot and many other science fiction stories. He satirically caricatures what Asimov thought could be prevented by his Laws of Robotics. Washing machines as thinking, independent automatons are no longer controllable. Not programmed according to laws of eternal goodness, they become malicious; commit all sorts of crimes; form cybernetic cooperatives with gangsters; turn into terrorists; fight each other in gangs.

Here Lem satirizes Western society, and he ridicules trivial science fiction in the tradition of Asimov. His witty ideas cascade and follow in rapid succession, but, as in every genuine satire, there is more behind it than mere literary parody. Legislation proves unable to deal with robotic problems because pressure groups undermine all straight action. Washing machines, once recognized as legal entities, together with powerful allies block all legal procedures taken against them. They infiltrate the economic and political system, and, when it turns out that the well-known Senator Guggenshyne in reality is a washing machine, the case against the machines is as good as lost. Human beings and robots become interchangeable, and men sell themselves into the service of intelligent machines. Many sorts of perversions are invented: machines consciously constructed as irresponsible for their actions, machines constructed as "sadomats" and "masomats," machines procreating themselves completely uncontrolled.

Still following themes implied in Asimov's I, Robot, Lem, in The Star Diaries, shows how the on-board computers on a spaceship revolt and finally found an extraterrestrial robot state. The lawsuit between Earth and Kathodius Matrass, the self-proclaimed ruler of the robot state, once again shows the manifold and complex legal problems that appear as soon as machines are recognized as legal entities. Theological questions, included in many of Lem's serious futurological considerations, are here tackled from a humorous angle. The legal problems are finally carried to grotesque extremes when Ijon Tichy, the narrator, finds out that all the attorneys of the Bar Association are in fact robots. So, in the end, the story, like the machines, runs out of control. The original society is no longer recognizable; all are robots; no problem is solved. Lem's parody attacks not only I, Robot but also the majority of Western science fiction stories, which are not interested at all in trying to discuss serious futurological and technological questions. Instead they wallow in catastrophes, make their profit with human anxiety, and put up entirely false perspectives of an interstellar human imperialism grown out of anthropocentric hybris. Lem's comment on the purpose of his essay "Robots in Science Fiction" applies also to his parodies: "We intended to point out only that it isn't possible to construct a reflection of the condition of the future with cliches" ["Robots in Science Fiction"].

Foreseeing miniaturization and microprocessing techniques, Lem more than a decade ago attacked androids, the humanization of machines in the Asimovian fashion, as nonsense:

It isn't worth the effort and never will be, economically, to build volitional and intelligent automatons as part of the productive process. Even examples of these processes belonging to the sphere of private life are being automated separately: an automatic clock will never be able to wash dishes, and a mechanical dishwasher will never exchange small talk with the housewife. ["Robots in Science Fiction"]

Donald M. Hassler (essay date March 1988)


SOURCE: "Some Asimov Resonances from the Enlightenment," in Science Fiction Studies, Vol. 15, No. 44, March, 1988, pp. 36-47.

[Hassler is an educator, poet, and author of Comic Tones in Science Fiction (1982) and Isaac Asimov (1989). In the following essay which focuses on I, Robot and the Foundation trilogy, he explores Asimov's use of Enlightenment philosophy, with particular emphasis on the law and order ideas of John Locke, William Godwin's principle of Necessity, and John Calvin's religious determinism.]

One difficulty in describing the SF [Science Fiction] that Asimov continues to produce stems from his rational drive for coherence and unified generality. Like all "scientific" thinkers who have written after the methodological revolution of John Locke and the other reformers of the new science, Asimov can never leave his best ideas alone. He must continually elaborate and link new insights to old on the assumption that accumulating and interlocked knowledge is the only sort of valid knowledge. His continual moves toward the general, even the abstract, can be seen both in the long time schemes of his future history and in the conceptual ideas of his own, implicit (and left open-ended) throughout his writings. Moreover, Asimov, along with other "hard SF" writers, seems to question the absolute insights of intuitive or "inspired" art by affirming the Lockean methodology of gradual accumulation. This is not to say that the images (e.g. of robots and Empire) at the core of Asimov's fiction are totally logical, transparent, and systematically arranged for purposes of Lockean, open-ended accumulation. In spite of himself, the clear and coherent rationalist contacts depths of meaning that are sometimes not on the surface. In other words, the resonance in both I, Robot and the Foundation trilogy seems to me significant; and that resonance or echoing is consistently from the 18th-century Enlightenment.

I will suggest here some ways in which Asimov's ideas on robotics and on history in these two early fictions, both of which are collections of shorter pieces written in the decade of the '40s for Astounding, remind us of key dilemmas stemming from our Enlightenment heritage. These dilemmas always balance "truth" against method, so that followers of the Enlightenment (and I believe Asimov is one of these) continually discover that the most effective methodology leads to the most "indeterminate" conclusions. I am not arguing that Asimov is a conscious scholar of his roots in this context, though any critic would have to think carefully before maintaining positively that Asimov is not consciously aware of some idea. Rather, I simply think it helps in understanding these remarkable and seminal longer fictions from the Campbell years to suggest their echoes from the Enlightenment. Also, though Asimov continues to make use of these ideas in much of his fiction written after these two works, to cover all the work through his most recent Foundation and Earth (1986) would be much too vast a topic for this essay.

One additional qualification needs to be stated at the outset—a qualification pointing to an entirely different essay that a critic of mine might write, or rather that several fine critics have been at work on for some time. I find that the resonances in Asimov echo more directly from the 18th-century Enlightenment with little benefit from the more organic, 19th-century reworkings of notions about history and about mechanism. Hence Asimov seems somewhat of an anachronism, even anathema, to more comprehensive inheritors of the Enlightenment tradition. Specifically, the images for cybernetics and robotics, along with the ideas which they imply, in the work of Stanislaw Lem and John Sladek as well as many other modernists, suggest more tonal and organic complexities and interfaces than Asimov allows for in his work. Similarly, historical determinism as understood by Marxist critics represents a quantum leap in complexity over Asimov and his 18th-century precursors. But Asimov is complex enough and interesting in his evasive anachronism. So it is the story of his ideas I am telling here rather than the total story of the ideas themselves. Certainly Asimov has been taken to task for being too simple; I intend to describe some of this "simpleness" more sympathetically than critics who are convinced that it is too narrow have been able to do. After all, one tenet of the 18th-century Enlightenment was clarity of vision; but this is not to say that the more complex shadows and "ghostlier demarcations" may not also be interesting. As unifying devices for I, Robot, Asimov employs both the character of Dr Susan Calvin and the Three Laws of Robotics. Both devices seem to me, also, imbued with resonance from the Enlightenment.

In his fine introduction to the whole canon of Asimovian SF up to but not including the recent outpouring of new Foundation and robot novels, James Gunn [in Isaac Asimov: The Foundations of Science Fiction, 1982] has worked out the "fixed-up" chronology for Calvin's life and spinster's career at US Robots and Mechanical Men, Inc. and how that scientific career as "robopsychologist" interacts with key product robots and other employees. There are other psychologists in the early short stories, even one or two "robopsychologists"; but Susan Calvin is special. She supplies not only the unity of I, Robot as a collection but also part of the Enlightenment resonance that makes this such an important book. Writing in an August 1952 "Foreword" to one of the early hardcover editions of the "novel," the anthologist Groff Conklin comments: "[Miss Calvin's] name may have been chosen by the author with a wry eye on the significance of … Calvinism" (n. p.). John Calvin, in fact, laid out a general framework, a time scheme and a theological set of assumptions, that did much to permit the gradualism of the secular Enlightenment and ultimately the technological and moral experimentation that Susan Calvin devotes her life to advancing. Calvin's move to posit an immensely long time scheme, along with a built-in "uncertainty" about any one particular judgment or "election" that God might hand down, did much to liberate thinkers for the gradual experimentation necessary in modern science. A recent critic who makes suggestions similar to those I am making here writes that Calvin, more than Vico or Spengler, ought to be a "likely candidate" for influencing the vast temporal frameworks characteristic of both Enlightenment science and hard SF:

Do we not catch a glimpse in these 'time charts' and thousand-year sagas of a return of the repressed Calvinistic background of the modern sciences? Without a doubt Calvin would be horrified, could he return from the grave, to see where his ideas have led, yet we could hardly underestimate the significance of his role in undermining the sacramental world picture which had prevailed throughout the [M]iddle [A]ges and thus laying the ground for a rational investigation of natural phenomena [David Clayton, "What Makes Hard Science Fiction 'Hard'," in Hard Science Fiction, edited by George E. Slusser & Eric S. Rabkin, 1986].

I think this resonance fits Asimov perfectly although the theology itself, of course, is never his. He might prefer to invoke the immensely long and gradual history of the Israelites, which does, in fact, seem calculated to postpone indefinitely any absolute appearance of final truth. But the name Susan Calvin reminds us of the Puritan work ethic, and she does work long and hard—and has still not arrived at any absolute truth at the age of 82, when she dies. Asimov has commented in numerous places how he loves this character and has her say finally, "I will see no more. My life is over. You will see what comes next" (I, Robot). Verbs for seeing, I think, are no accident in the usage of an Enlightenment heroine.

Moreover, the adjectives used to describe this driven robopsychologist whose presence does so much for unifying I, Robot complement what Asimov correctly labels at the beginning of the book as her "cold enthusiasm"—"thin-lipped," "frosty pupils." Such ideological excitement as presumably she shares with the other workers at US Robots and, of course, with Asimov himself focusses on the virtues of control, pattern, predictability. The resonance I see here is not only with the great advocate of complete control, John Calvin, but also with that secular determinist of the end of the 18th century: William Godwin. Discarding all theological reference, Godwin simply "believed in" a coherence and order that governed all systems. Hence what he called "Necessity," which many critics have described in terms that resemble Calvinistic determinism rather than a strictly mechanistic determinism, seems to be echoed in Asimov's final story in I, Robot, which I will describe next, as well as in Asimov's world of the Foundations.

In "The Evitable Conflict," benevolent machines seem able to anticipate and control all events in a way that sounds much like the completeness of Necessity in Godwin; and at the same time Susan Calvin's "enthusiasm" is clear as she says finally:

… it means that the Machine is conducting our future for us not only simply in direct answer to our direct questions, but in general answer to the world situation and to human psychology as a whole…. Think, that for all time, all conflicts are finally evitable. Only the Machines, from now on, are inevitable.

Asimov's youthful wordplay over "evitable" and "inevitable" will grow into a more sophisticated wit in the later novels where robotics play important roles. But his celebration of large, general systems (along with the implicit realization of the dilemma in the need to keep systems open-ended and hence "indeterminate") seems clearly to be linked to the cool wordplay that he gives to Susan here.

In order to reach such high levels of reliable generality, Calvin and her US Robots colleagues had to devise the simple calculus of the Three Laws of Robotics and then continually try out the balancing and interaction of the laws in all their combinations and permutations. Those continual games of "if this, then the next" consume the stories in I, Robot and provide a further resonance with Godwinian Necessity. Not only is the general outcome of such a grand scheme as Necessity or the "Machines" completely reliable and determined, but also the continual adjustments and "calculus" of the relations within the scheme are continually fascinating. It is as though Susan Calvin, Asimov, and any other such generalist and determinist has both nothing at stake and, at the same time, must always be making adjustments to their system. The belief in Necessity or in the overall general and benevolent outcome frees the "player," in fact, to manipulate the calculus of the game.

Calvinistic theology as well as Godwinian Necessity and Asimovian Robotics all liberate a sort of freeplay of will due to the most general sort of overall system. Such a paradox of free will existing within and because of a rigid system has been agonized over most by the theologians in ways that are inappropriate for this discussion, but the echoes from Godwin in the Enlightenment Asimov should be listened to if we are to hear the real effects of the Susan Calvin narratives. Here is a key passage from Godwin writing about Necessity—both the overall determinism and the individual moves in the calculus—that resounds all through the cool, hard work of Susan Calvin in I, Robot:

… if the doctrine of necessity do not annihilate virtue, it tends to introduce a great change into our ideas respecting it…. The believer in free-will, can expostulate with, or correct, his pupil, with faint and uncertain hopes, conscious that the clearest exhibition of truth is impotent, when brought into contest with the unhearing and indisciplinable faculty of will; or in reality, if he were consistent, secure that it could produce no effect. The necessarian on the contrary employs real antecedents, and has a right to expect real effects.

Godwin's matter-of-fact dismissal of free will as just too absurdly random suggests Asimov's firm ending to I, Robot, with its notion that the machines control all reactions but disguise this total control because they know that a full realization of total control would cause mental anguish or "harm" to humans. Similarly, the three Laws themselves (or three "rules" of robotics as they are labelled in the first story where Asimov mentions them explicitly—"Runaround") seem hardly profound or a great invention of the imagination. They are "neutral," as one recent critic has noted [see Alessandro Portelli, "The Three Laws of Robotics: Laws of the Text, Laws of Production, Laws of Society," Science Fiction Studies, Vol. 7, 1980, pp. 150-56]. Over the years they have gone on to have almost a life of their own as "ideas" outside of the fiction. Usually they are listed and worded with a sort of Godwinian flatness and their position and function in I, Robot is forgotten or confused. It was in the 1942 Astounding story, however, and in a fictional dialogue between Powell and Donovan, who are the key "right stuff" associates of Calvin, that the Three Laws first appear:

'And three, a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.'

'Right! Now where are we?'

Donovan and Powell could figure out exactly where they were and did solve their problem on Mercury, but it would take more robot stories and finally the book I, Robot itself for Asimov to know what a fine gimmick he had invented. Finally, of course, he doctored all the stories in the "novel" so that they would be consistent with the Three Laws.

Further, just as Godwin paradoxically insists (like Calvin before him), that the believer in Necessity will work even harder to make things happen in this world, so Asimov's roboticists (and the robots themselves in his most recent fictions) never tire of discussing and trying to manipulate some implication of these three simple statements in relation to one another. The paradox is simply that the apparent certainty liberates continual and near-infinite permutations. Though, as in "Runaround," this continual balancing act often "strikes an equilibrium [whereby] … Rule 3 drives him back and Rule 2 drives him forward," the permutations of all the robots seem infinite. And so the accomplishment lies not only with the general outcome of "control" but also with the tinkering; it is a wonderful example of Asimov's inventiveness how complex and variable the Three Laws become.
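
The equilibrium Asimov describes can be pictured, very loosely, as two opposing potentials, and a brief sketch may help. The numbers and function shapes below are invented for illustration only (nothing in the story quantifies the Laws): a weakly urged Second Law order exerts a constant pull toward the selenium, while the strengthened Third Law exerts a repulsion that falls off with distance from the danger; where the two balance, the robot can only circle.

def rule2_pull(order_strength: float = 1.0) -> float:
    # The casually given order exerts only a constant, fairly weak inward pull.
    return order_strength

def rule3_push(distance: float, danger: float = 5.0) -> float:
    # The strengthened self-preservation drive weakens with distance from the hazard.
    return danger / (1.0 + distance) ** 2

def net_drive(distance: float) -> float:
    # Positive: Rule 2 wins and Speedy is drawn inward; negative: Rule 3 drives him back.
    return rule2_pull() - rule3_push(distance)

# Scanning outward from the hazard, the sign change marks the circling radius.
for d in (0.0, 0.5, 1.0, 1.5, 2.0, 3.0):
    print(f"distance {d:3.1f}: net drive {net_drive(d):+.2f}")

In the story the deadlock is broken the way such a sketch implies it must be: only a higher-priority term, the First Law, can dominate both potentials, which is why Powell finally places his own life in danger.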

Godwinian inclinations toward such clarity of analysis and such control may seem inhuman, even monstrous, so that Robotics itself, even though the Laws are benevolent towards humans, takes on the effects of the very Frankenstein motif that Asimov was trying to avoid. It is the continual acknowledgment of what I would call the calculus of complexity, however, that keeps Asimov himself lively and benevolent and "human" in his writing, especially his writing on the robots. He always is trying to teach and to clarify, and the material itself contains layer upon layer of complexity.

The series of stories dealing with the Foundation that was evolving at the same time as the robot series in the 1940s is not only the fiction that Asimov is best known for but also, perhaps, best exemplifies his inclinations towards the general and, in this case, towards the human and towards storytelling. In addition to his numerous autobiographical reminiscences about this remarkable invention of the Foundation trilogy, Asimov's 1953 venture into full-fledged literary criticism with his essay for Reginald Bretnor entitled "Social Science Fiction" is both close enough to the actual writing of the stories and candid enough to be very helpful. Asimov has become increasingly more coy about doing literary criticism himself—perhaps because he has come to see more clearly and to take more seriously "hard SF" writing as radical and important. But as the Foundation trilogy was first appearing in book form, what he had to say about the genre in general reveals a great deal about what he himself had accomplished by that time and about his set of mind and its debt to the Enlightenment.

First of all, he effectively disassociates himself from the "gadget" materialism of SF writers by defining what he and Campbell have been interested in as the influence of social change and history—viz., "people movement" rather than "gadgets." Further, Asimov makes clear in this essay both his knowledge of the revolutionary changes that took place in the 18th century and his admiration for the "discovery of history" that had not been truly possible prior to the Enlightenment because humans had not experienced fundamental change:

if science fiction is to deal with fictitious societies as possessing potential reality rather than as being nothing more than let's-pretend object lessons, it must be post-Napoleonic. Before 1789 human society didn't change as far as the average man was concerned and it was silly, even wicked, to suppose it could. After 1815, it was obvious to any educated man that human society not only could change but that it did. ("Social Science Fiction")

The fact that the young chemistry student at Columbia read as much history as he did is remarkable in itself. Later in the 1953 essay he identifies more fully this continuing fascination with the details of human history that provided story outlines for many of the narratives in the trilogy: "I wrote other stories, the germs of whose ideas I derived from the histories of Justinian and Belisarius, Tamerlane and Bajazet, John and Pope Innocent III." L. Sprague de Camp speaks of his and Asimov's "Toynbeean period" in the late '40s; and Asimov himself recollects that when he originally proposed to Campbell a tale about the fall of a Galactic Empire and a return to feudalism, this seemed perfectly natural to him, since he "had read Gibbon's Decline and Fall of the Roman Empire not once but twice" ("The Story Behind Foundation"). Such omnivorous reading in youth may be exaggerated, as both statements are reminiscences occasioned by the appearance of a new Foundation title; but there is no question that, whereas the recent sequels fuse with the robot novels and introduce other themes, the original trilogy is overwhelmingly permeated by Gibbon, Toynbee, and the whole sweep of history seen from the perspective of a remarkable young man's readings.

I suggest further that in addition to this fascination with cycles in Toynbee, with pessimism in Gibbon, and with the whole detailed vista of Roman history as it modulated from repeated intrigue to resistance to forward movement to second-stage collapse, Asimov also knew Old Testament history. The Bible would have provided him with similar patterns of cycles. Certainly the continuing sense of exile and lament for a destroyed Jerusalem suggests the lost glory of Trantor as much as a fallen Rome does. And the early Church hidden within the declining Empire is a sort of "type and symbol" for a Second Foundation, even if Gibbon would not agree. Certainly present-day interpretations of the Bible by Fundamentalists as well as the long record of traditional interpretation would not agree with the notion of such open-ended movement; but from John Calvin's vision of a long future, mentioned earlier in this essay, to the "opening up" of history in the 18th century, such widening patterns in biblical history seem more viable. My main point here is not to insist on specific parallels, but I think history itself, and specifically the future history modelled on the reading of history as students have known it since the Enlightenment, must be acknowledged first as the major theme in these Foundation stories that epitomize Asimov's own description of social SF.

In other words, the vision of open-ended possibility and the full recognition of "change" in society that so characterized the Enlightenment revolution Asimov talks about in his 1953 essay, and that he had imaged in his trilogy, manifested themselves not only in the permutations and analyses of robotics but also in the realization of the nature of history itself. Historiography from Gibbon and Hume to Asimov himself contains nothing that can be called "absolute." Rather, it recounts continuing movement from one faction to another, by spurts and long slow declines, with repeated variations on the images of equilibrium and disequilibrium.

Just as one can see few absolute truths in the panorama of change and history, so Asimov's texts are never set in stone; correspondingly, he seems quite comfortable with the publishing practices that made up the commercial "relativism" of pulp SF. Whereas texts of nostalgic "high art" in our scientific age are standardized early and set in type the same way each time, as though "absolute" (I think of the standard paging in various teaching editions of Joyce's A Portrait of the Artist as a Young Man), Asimov had to accept a more fluid state of the text affecting the Foundation trilogy even after the stories had become books. For example, the first novel became The 1,000-Year Plan in a drastically cut 1955 edition that sold for 25 cents. Asimov did not seem to mind. Further, he has updated his texts in accordance with changes in scientific knowledge and terminology—which would seem to confirm that he sees little of permanence and absoluteness about "art." Not only the publishing practices, then, but also the "tinkering" and continual rational manipulation in Asimov speak more of the open-endedness of science than of the absolute values of art. In his essay "The Story Behind Foundation," written to introduce the surge of sequel writing that began to appear in 1982, Asimov anticipates new scientific findings that he could incorporate into the narrative:

The Foundation series had been written at a time when our knowledge of astronomy was primitive compared with what it is today. I could take advantage of that and at least mention black holes, for instance. I could also include electronic computers, which had not been invented until I was half through with the series.

Even before the sequel writing, however, he was quietly altering "atomic" to "nuclear" throughout the trilogy in line with post-war nomenclature.

Similarly, the sense of permanence in the text of I, Robot seems to take a back seat to the coherence of the ideas as they evolve in Asimov's mind over time. For example, the story "Reason" appeared in Astounding (1941) before the Three Laws of Robotics had been articulated by Asimov and Campbell; but in the book Asimov includes an updated paragraph in "Reason" that makes the Laws explicit (I, Robot). Scholars of the future are bound to have particular trouble with the texts of SF works, and with the setting of those texts, if the attempt is ever made to "establish" them as high art and thus to standardize a text.

Though neither history nor art itself is able to supply Absolute Truths, the Foundation trilogy does have its general ideas and themes, which, momentarily and in their changeableness, catch our imaginations as the best substitute we can have for absolutism. More than the continual variations on political or military intrigue in the plot, which I have said echo the continual intrigues of history itself, these general themes woven into the trilogy are what affect the reader. Some of the most important themes are, in fact, representative of the rational urge in Asimov always to move to the general. They emerge from the overall tale of Hari Seldon's plan, through the "science" of psychohistory, to lessen the chaotic effects of declining control within the Galactic Empire and to establish a new "Enlightenment" by means of the Foundation he institutes on the planet Terminus, working in continuing tension with the Second Foundation.

The first general idea is an echo not so much of the fall of the Roman Empire (although I suppose the hidden and ameliorative influence of the early Church is a "foundational" resonance here) as of a set of images from the 18th-century Enlightenment. Certainly the major activity of the Seldon psychohistorians is work on the Encyclopedia Galactica, which is quoted periodically throughout the trilogy; and the echo here is to the massive French Encyclopédie, produced by a similar small army of "new scientists"—Diderot and his cohorts—which helped both to overthrow the ancien régime in the 18th century and to "enlighten" the darkness following that regime's decline.

But history itself, or the whole record of human activity over time, is also the theme as we read about these future encyclopédistes. The important effect is the general notion about history that is stated, perhaps, most clearly in Foundation and Empire, the second book, though it is implicit in the entire set of stories. Here is an expression of the consternation felt by the villain, Bel Riose, in the face of Necessity:

Riose's voice trembled with indignation. 'You mean that this art of his predicts that I would attack the Foundation and lose such and such a battle for such and such a reason? You are trying to say that I am a silly robot following a predetermined course into destruction.'

'No,' replied the old patrician, sharply. 'I have already said that the science had nothing to do with individual actions. It is the vaster background that has been foreseen.'

'Then we stand clasped tightly in the forcing hand of the Goddess of Historical Necessity.'

'Of Psycho-Historical Necessity,' prompted Barr, softly. (Foundation and Empire).

It should be noted that Asimov has Riose call himself a "silly robot" in this passage—which suggests that the inevitability of the Three Laws of Robotics also carries with it the sad cancelling out of individual actions.

There is much sadness in such "determinism" for the individual actor, and that sadness is the second major general idea to consider. In a real way it is simply another facet of the image of decline that is inevitable over vast stretches of time—the same sublime sense of cycles that gave such energy to "Nightfall" and that Asimov found validated in his readings of the historians from Hume and Gibbon on, even in the great events of the then-ongoing Second World War. When cycles themselves and vast wars are the main "heroes" of history, individuals like Bel Riose do indeed feel overshadowed. Such a sense of eclipse and small "modernness" can be seen best in the key villain of the trilogy, the sad mutant strangely named the Mule, who is able to alter emotions. Gunn [in Isaac Asimov: The Foundations of Science Fiction, 1982] has noted how much Asimov seemed to like this character; the Mule figures in more stories than any other individual except Hari Seldon. His very role of being an enemy to all other forces in the Galaxy, including the Foundation, and yet of promoting the eventual benevolent outcomes of the Seldon Plan through his antagonistic acts, illustrates the predetermined sense of historical destiny that causes all "moderns" at any given time to experience this sadness.

This second important theme, then, is nostalgia for the lost glory of individual heroism, balanced nicely with a full acceptance and celebration of smaller, limited "modernness." It is, in fact, the motif of the Ancients versus the Moderns: the lament for lost Golden Ages coupled with the realization of the advantages of an Iron Age. I think Asimov also learned this from the Enlightenment. It is the Georgic mode that informs so much 18th-century literature—an age in which people were coming to terms with the complexities and limits of the peculiar "modernness" that the scientific revolution and economic and socio-political changes brought with them. Regardless of how deeply scholarship can measure this resonance, however, the theme seems clear in the trilogy. The Mule is a strangely limited leader. He spends much of his time disguised as the court fool, Magnifico, and, sadly, like his namesake he is lonely and infertile. In other words, "Moderns" are small and limited compared to the "Ancients." The technology, including the robotics, of an Iron Age such as ours mirrors our beliefs in system and in "corporate" action. The individual hero has been replaced by steady progress in robotics and in other Iron Age techniques, and, as in the 18th-century Georgic, the tone in Asimov's expression of this tradeoff is mixed.

Similarly, the Iron Age adaptability of the Foundation itself seems well worked out by Asimov to contrast with the glory of Empire. Nuclear devices must be small in the Foundation. Traders and other leaders are always somewhat imperfect and ineffectual as individuals; only the Plan itself is ultimately effective. Further, the Foundation is always working far out on the periphery of the Galaxy, and even the Second Foundation, located at the other "end," is hidden and small. Asimov's clever translation of the cycles of history and the spiral shape of our Galaxy into the mysterious loops that eventually bring readers to discover the Second Foundation back on Trantor suggests the non-heroic, peripheral details of a "modern" technological age. Over against Golden Age titanic heroism, such as that of the "giants" in the sixth chapter of Genesis, we moderns can survive by means of the micro-electronics of a continually changing Iron Age technology. The Lord moves in mysterious ways, and one of the most mysterious is that grand results are accomplished by means of small, peripheral modern men.

Thus a final overall theme brings us back to the role of generalization and to the centrality of humans. It is significant that a concluding key figure in this massive narrative is a writer, the future novelist Arkady Darell, just as a journalist is a key point of view character in "Nightfall." But the real hero of the trilogy is the sublime history of humankind itself. And it is this large vision, which only the Enlightenment could take, that ultimately—and poignantly—submerges even the individual heroism of the writer. The more telling way of conceptualizing this effect is in terms of the general idea itself. (In this way, William Godwin, who was after all also a novelist, is further seen as a key prototype.) Here is Hari Seldon himself speaking at his trial, which provides the focus for the shorter initial piece that Asimov wrote last as the book publication was being readied:

'I shall not be alive half a decade hence,' said Seldon, 'and yet … [the future] is of overpowering concern to me. Call it idealism. Call it an identification of myself with that mystical generalization to which we refer by the term, "man."' (Foundation)

When he writes the later sequels, Asimov will have his robot heroes come back to this big generalization about "man." The important thing to see here, then, is his move again to the large general idea. Therefore, just as I, Robot toys with permutations in laws that echo Godwinian Necessity, so the early Foundation stories support this paradoxically liberating vision of "system" that both orders and submerges—with the added notion, confirmed by the Enlightenment, of a vast, yet anthropocentric history.

Further Reading


Criticism

Fiedler, Jean, and Mele, Jim. "Asimov's Robots." In Critical Encounters: Writers and Themes in Science Fiction, edited by Dick Riley, pp. 1-22. New York: Frederick Ungar, 1978.

Examines the benefits-of-technology theme in Asimov's robot novels and stories, focusing on I, Robot, The Caves of Steel, and The Naked Sun.

Moore, Maxine. "Asimov, Calvin, and Moses." In Voices for the Future: Essays on Major Science Fiction Writers, Volume 1, edited by Thomas D. Clareson, pp. 88-103. Bowling Green: Bowling Green University Popular Press, 1976.

Examines the ethical aspects of the characters in I, Robot from Calvinistic and Judaic perspectives, with a particular emphasis on the Puritan work ethic, human freedom and determinism, and human responsibility.

Thorner, Lincoln. Review of I, Robot, by Isaac Asimov. Emergency Librarian 15, No. 3 (January-February 1988): 22.

Favorably assesses I, Robot.

Wilson, Raymond J. "Asimov's Mystery Story Structure." Extrapolation 19, No. 2 (May 1978): 101-07.

Examines the similarities between traditional mystery stories and Asimov's science fiction, paying particular attention to the story "Liar" in I, Robot.
