I, Robot

ISAAC ASIMOV
1950

INTRODUCTION
AUTHOR BIOGRAPHY
PLOT SUMMARY
CHARACTERS
THEMES
STYLE
HISTORICAL CONTEXT
CRITICAL OVERVIEW
CRITICISM
SOURCES
FURTHER READING

INTRODUCTION

I, Robot is one of the most influential works of science fiction ever written. During the 1930s, young Isaac Asimov found himself bored with the common science fiction plot in which robots destroy their creators, akin to the destruction of Dr. Frankenstein by his own monster. A precocious and prolific writer, Asimov answered that boredom by writing his first robot story, "Robbie," when he was just nineteen years old; it appeared in Super Science Stories magazine in 1940. Over the next ten years, he wrote and published at least twelve more robot stories.

Asimov's robots were equipped with positronic brains and governed by the Three Laws of Robotics, as articulated in "Runaround," first published in 1942 and later included as the second chapter of I, Robot. Each of Asimov's subsequent robot stories explored some aspect of the Three Laws, pushing the boundaries of human/machine interaction. In 1950, Asimov selected what he considered to be his best stories, wrote a framing device to link the stories together into a novel, and published the work as I, Robot. Asimov himself recognized that this might be his most lasting work. He wrote in the introduction to Robot Visions, "If all that I have written is someday to be forgotten, the Three Laws of Robotics will surely be the last to go." More than fifty years after its first publication, I, Robot is still easily available in several editions, most notably in a 2008 edition from Bantam Dell.

Well into the twenty-first century, science fiction writers, movie directors, and artificial intelligence engineers continue to take into account Asimov's prescient consideration of what it means to be human and what it means to be machine.

AUTHOR BIOGRAPHY

Isaac Asimov was born on or around January 2, 1920, in Petrovichi, Russia, to Juda and Anna Rachel Berman Asimov. Because the family was Jewish, and few official records about Jews were kept in Russia during this period, the date is an approximation made by Asimov himself. The family left Russia and moved to Brooklyn, New York, in 1923.

The family purchased a candy store in 1926, and soon expanded the business to include additional stores. Young Asimov and the other members of his family devoted many hours to working in the stores. An intelligent and quiet boy, Asimov entered Boys High School in Brooklyn in 1932, and graduated just three years later at the age of fifteen. Asimov was a voracious reader, and became acquainted with science fiction by reading the pulp magazines stocked in the candy store. He was soon writing letters to the editors of several publications.

In 1935, he entered Seth Low Junior College, a division of Columbia University, where he pursued his love of chemistry. During this period, he began writing science fiction stories, and in 1938, with his first completed science fiction story in hand, he met the legendary John W. Campbell, Jr., who had just begun his long tenure as editor of Astounding Science Fiction. Campbell became an important mentor and friend to Asimov, and the two worked closely together until Campbell's death in 1971.

Asimov sold his first story, "Marooned Off Vesta," to Amazing Stories in 1938. In the same year, he became active in the Futurian Literary Society, a group that included such well-known writers as Frederik Pohl, Donald Wollheim, and Cyril M. Kornbluth.

By 1939 Asimov had completed his bachelor's degree in chemistry, and by 1941 a master's degree in the same field. World War II interrupted his work on his Ph.D. During the war years he worked alongside fellow science fiction writer Robert A. Heinlein at the Naval Air Experimental Station in Philadelphia. In 1942, he married Gertrude Blugerman. Meanwhile, Asimov had begun working on his robot stories, publishing the first, "Strange Playfellow," in 1940. Retitled "Robbie," the story became the first chapter of I, Robot when the book was published in 1950 by Gnome Press. Indeed, after 1940, Asimov sold every story that he ever wrote; nearly all of them have remained in print in the years since his death.

Asimov returned to Columbia University and completed his Ph.D. in chemistry in 1948. In 1949, Asimov moved his family to Boston, where he accepted a position as an instructor of biochemistry at the Boston University School of Medicine. By 1958, Asimov's side career as a science fiction writer was providing a sufficient income that he was able to leave teaching and devote himself to full-time writing. In 1970, Asimov and his wife separated; in 1973, he married Janet Jeppson. Asimov died on April 6, 1992, from complications of AIDS, contracted from a blood transfusion during an earlier heart surgery.

Asimov wrote some five hundred books in the fields of science fiction, popular science, literature, and literary criticism. In addition, he won countless awards for his work, most notably several Hugo and Nebula awards, the most prestigious honors in science fiction. He continued to win awards for his work after his death, and his popularity remains unabated in the twenty-first century. There is little doubt that he will be long remembered as one of the most influential science fiction writers of all time.

PLOT SUMMARY

Introduction

I, Robot is not a novel in the usual sense, with a single plotline and consistent characters throughout. Rather, it is a closely connected set of short stories held together by a frame story that allows Asimov to trace the history of robotics over a fifty-year period.

The novel opens with the first segment of the frame story, set in italic type. Readers are introduced to Susan Calvin, a robopsychologist who is retiring from her position at U.S. Robots & Mechanical Men after a career of some fifty years. The first-person narrator of the frame story is a young, brash journalist who is writing a feature article on Calvin for Interplanetary Press. He is looking for human interest in the story; therefore he urges Calvin to recall some of the most memorable moments of her career. Her memories, then, form the basis of each of the subsequent chapters.

Chapter 1: Robbie

In this chapter, Calvin tells the story of Robbie, one of the first robots constructed to interact with and serve humans. Robbie functions as a nursemaid for a little girl named Gloria; although he cannot speak, Robbie plays with Gloria and seems to enjoy the stories she tells him. Gloria is devoted to Robbie; however, her mother does not like the robot and finally succeeds in convincing her husband to get rid of the mechanical man. Asimov uses the mother to represent one of his common themes—the hostility of some people toward technology.

The parents get rid of Robbie while Gloria is out of the house, and the little girl is heartbroken when she finds her playmate gone. She sickens and loses weight. Her parents decide to take her to New York City to try to cheer her up; she believes that they are going to find Robbie. Finally, on a visit to the U.S. Robots factory, a situation emerges in which Robbie (who is working there) saves Gloria's life. The mother relents, and Robbie goes home with the family.

The second segment of the frame story appears just after the story of Robbie. In her discussion with the journalist, Calvin recalls two important early robotic troubleshooters, Gregory Powell and Michael Donovan.

Chapter 2: Runaround

In this chapter, Powell and Donovan are on Mercury to determine if a failed mining operation can be reopened by using robots. The story is notable for its discussion of the positronic brain that allows Asimov's robots to speak and interact with humans. In addition, in this chapter, Asimov includes dialogue between the two men that spells out the Three Laws of Robotics, the plot device that functions throughout the novel.

MEDIA ADAPTATIONS

  • An audiobook of I, Robot, read by Scott Brick, was produced and distributed by Random House Audio (2004).
  • In 1997, InforMedia produced a CD-ROM set called Isaac Asimov's Ultimate Robot. Several of the stories from I, Robot were included in the set.
  • In 2008, Pan Macmillan Publishers produced and distributed the audiobook I, Robot for Learners of English, narrated by Tricia Reilly.
  • I, Robot is the title of a film produced and distributed on DVD by Twentieth Century Fox in 2004. Starring Will Smith and directed by Alex Proyas, the film shares little with Asimov's book other than the title and the names of a few characters.

Donovan sends an SPD-13 robot named Speedy on a simple task: to retrieve selenium from a site about seventeen miles distant from headquarters. They need the selenium to recharge their sun shields so that they can survive the intense heat and light experienced on this planet, the closest to the sun. Speedy does not return, however, and when they track his movements, they discover that he is wandering around as if he is drunk, singing lyrics from a Gilbert and Sullivan operetta. Suddenly, the situation is serious: without the selenium, the two men will not survive. They review the Three Laws of Robotics to help them think about why Speedy is behaving so irrationally:

Powell's radio voice was tense in Donovan's ear: "Now look, let's start with the three fundamental Rules of Robotics—the three rules that are built most deeply into a robot's positronic brain … We have: One, a robot may not injure a human being, or, through inaction, allow a human being to come to harm."

"Right!"

"Two," continued Powell, "a robot must obey the orders given it by human beings except where such orders would conflict with the First Law."

"Right!"

"And three, a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws."

The men ultimately surmise that Speedy is caught in an irresolvable conflict between the Second and Third Laws. They resolve the dilemma by putting their own lives in danger, which brings the First Law into play and overrides the deadlock.
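Stated as an algorithm, the Laws form a strict priority ordering: any First Law consideration outweighs every Second Law consideration, which in turn outweighs the Third. The short Python sketch below is purely illustrative and is not drawn from Asimov's text; the action names, the three "potential" fields, and the numbers are all invented for the example, and a true equilibrium like Speedy's would require a weighted model rather than this strict ordering.

    # A toy model (not Asimov's) of the Three Laws as ordered
    # "potentials": actions are compared lexicographically, so any
    # First Law pressure outweighs all Second and Third Law concerns.
    def choose_action(actions):
        return min(actions, key=lambda a: (a["harm_to_human"],
                                           a["disobedience"],
                                           a["self_risk"]))

    # Speedy's two options on Mercury, with invented numbers. In the
    # story, a casually given order and an expensive robot body make
    # the Second and Third Law potentials balance, producing the
    # drunken circling; a strict ordering like this one cannot model
    # that equilibrium, only the First Law override that ends it.
    options = [
        {"name": "approach the pool", "harm_to_human": 0, "disobedience": 0, "self_risk": 3},
        {"name": "retreat",           "harm_to_human": 0, "disobedience": 3, "self_risk": 0},
    ]

    # Powell exposes himself to danger: every action that ignores him
    # now carries a First Law cost, and that term dominates the rest.
    for a in options:
        a["harm_to_human"] = 9
    options.append({"name": "rescue Powell", "harm_to_human": 0,
                    "disobedience": 0, "self_risk": 3})
    print(choose_action(options)["name"])   # prints "rescue Powell"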

Chapter 3: Reason

Donovan and Powell are also the main characters in this chapter. Their job is to test the workability of leaving a robot in charge of a delicate operation on a space station. The robot QT-1, known as Cutie, does not believe that inferior beings such as humans could be responsible for the creation of a perfect being, himself. Beginning with the assumption that he exists because he thinks (Asimov's nod to Descartes, the famous French philosopher), Cutie develops an elaborate creation story, based on reason, and indoctrinates the other robots into his new "religion." The robots all spend time servicing a large piece of machinery that they believe is their creator. Asimov demonstrates in this story that reason alone does not produce truth. He also demonstrates that the First Law of Robotics holds true: even if the robots believe that their actions are religious homage, these actions nevertheless protect human beings.

Chapter 4: Catch That Rabbit

In this chapter, Powell and Donovan have yet another puzzle to reason through. This time they are on an asteroid, trying to figure out why Dave, a robot who directs six subsidiary robots, is not functioning as designed. Dave and his crew are supposed to be mining ore without the need for human supervision; however, he has lapses of amnesia during which no ore is mined. Finally, when Powell and Donovan are trapped in a cave-in, Powell figures out that the problem is a conflict between Dave's self-initiative circuits and his need to give orders to six subsidiaries simultaneously. When Powell shoots one of the subsidiaries, Dave returns to his old self and quickly frees the men from the cave-in.

The chapter closes with a brief segment of Calvin talking to the journalist about a mind-reading robot named Herbie.

Chapter 5: Liar!

"Liar!" is one of Asimov's most highly-regarded short stories. Its inclusion in I, Robot greatly enhances the novel because it offers keen insight into the character of Susan Calvin. The story is set in the main offices of U.S. Robots & Mechanical Men, Inc., and features robot designer and head of the company Alfred Lanning; mathematician Peter Bogert; Milton Ashe, the youngest officer of the company; and Calvin as a young robot psychologist. The four face a problem: one of their robots, Herbie, is able to read minds, and none of them know why. Strangely, Herbie is not interested in scientific books but does enjoy romance novels.

All of the characters take turns interviewing Herbie and discover secrets about each other, or at least they believe they do. Herbie even goes so far as to tell Calvin that Milton Ashe is in love with her, news that Calvin welcomes because she has been secretly in love with Ashe herself. By the end of the story, however, all discover that Herbie is capable of lying. Calvin is deeply hurt, but she explains Herbie's lying as a logical extension of the First Law. Since he can read minds, Herbie knows what will make each human happy and what will make each sad. He interprets sadness as a kind of harm, and so, in order to fulfill the conditions of the First Law, he tells each of them what they want to hear even though it is not true. Armed with this insight, Calvin corners Herbie and forces him to confront an insoluble dilemma that fries his circuits. Left alone with the broken robot, she says only one thing, in a bitter voice: "Liar!" The implication is that she has destroyed the robot out of revenge.

A one-paragraph segment of the framing device closes the chapter, and it seems clear from this that Susan Calvin never finds human love after her experience with Herbie.

Chapter 6: Little Lost Robot

"Little Lost Robot" is the story of military intervention in the creation of a robot who is not imprinted with the entire First Law. That is, this robot will not harm a human through an action, but will engage in inaction, even if it means that a human is injured or killed as a result. The project meets with disaster, and Susan Calvin must try to set things aright. First, however, she must find the robot who is hiding with others that look identical to it. Through a series of tests and interviews, Calvin is able to solve the problem and correctly identify the robot who has taken quite literally a throw-away remark made by a human to get lost. Calvin is almost killed in the process, and all realize that tampering with the Laws will ultimately lead to terrible events.

Chapter 7: Escape!

Susan Calvin and Alfred Lanning are featured in this chapter. Scientists from several companies are racing to develop a hyperspace drive that will make interstellar space travel possible. U.S. Robots & Mechanical Men wants to set its own supercomputer, The Brain, on the task. However, there is a great deal of fear that the problem will destroy The Brain as it has the computers of other companies. It is Calvin's job to make sure that The Brain comes to no harm and that the solution it offers is one that will work. The dilemma that has destroyed the other computers is that human beings traveling in hyperspace cease to exist for a split second, experiencing what can only be called death, although a temporary one. The Brain understands that the death is temporary, but the knowledge still strains its positronic devotion to the First Law. As a result, The Brain becomes just slightly unhinged and morphs into a practical joker, sending Donovan and Powell off on a ship that has no controls and supplying only milk and beans for food. Because of Calvin's understanding of The Brain, U.S. Robots is successful in developing the first hyperspace drive and opening the galaxy to exploration.

Chapter 8: Evidence

This chapter undertakes to demonstrate the difficulty people might have in discerning whether an individual is a human being or a robot. Francis Quinn, a politician who is running against Stephen Byerley, comes to U.S. Robots and asks Dr. Lanning if the company has made a robot that could pass as a human. Lanning denies that they have done so. Quinn wants Lanning and his team to determine the truth: is Byerley a robot or not? In an elaborate plan, Calvin arranges a test. Byerley strikes a man in public, apparently violating the First Law and satisfying the electorate that he is not a robot. However, the story ends with a twist as Calvin demonstrates how Byerley could have circumvented the Law. Readers are left not knowing the truth about Byerley.

Chapter 9: The Evitable Conflict

In this story, the world has been divided into regions, each managed by a machine. Stephen Byerley is the World Coordinator. He comes to Calvin to ask for her help: the four machines that control the world are making small errors, and he fears that they are not working properly and that perhaps they will run amok. Calvin realizes that the machines have made an adjustment to the First Law. They are now protecting humanity, not individual human beings. Further, the machines realize that if they themselves are destroyed, it will mark the end of humanity. Thus, the machines' first concern has become to preserve themselves. I, Robot ends with the disquieting realization that the machines now rule the world without any input whatsoever from their creators.

The frame of the novel also comes to an end in this story. On the last page of the book, Susan Calvin says farewell to her interviewer, summarizing her life: "I saw it from the beginning, when the poor robots couldn't speak, to the end, when they stand between mankind and destruction. I will see no more. My life is over." The novel ends with the journalist's notation that Susan Calvin died seven years later at the age of eighty-two.

CHARACTERS

Milton Ashe

Milton Ashe is the youngest officer of U.S. Robots & Mechanical Men, Inc. In the chapter "Liar!" he is Susan Calvin's love interest.

Peter Bogert

Peter Bogert is second in command to Dr. Alfred Lanning at U.S. Robots. He is very ambitious, and eventually succeeds Lanning. His ambition gets him into trouble in the chapter "Liar!" when the robot Herbie tells him that Lanning is about to retire. This information is not true, although it is what Bogert wants to hear. When he acts on the information, Lanning asserts his authority strongly. Despite this character flaw, Bogert is a brilliant mathematician and is a positive force in the novel.

The Brain

The Brain is a supercomputer owned by U.S. Robots. The large computers of other firms have failed to do the calculations necessary to create a hyperspace drive, destroying themselves in the process. Consequently, when The Brain is put on the task, Dr. Calvin meets with it often to monitor its progress. She discovers that the problem is that when humans go through hyperspace, they cease to exist momentarily. This is in violation of the First Law of Robotics, making development of the drive very dangerous for any robotic computer. In this case, The Brain not only develops the hyperspace drive but also develops a sense of humor (or slight derangement), sending Powell and Donovan off on a flight with nothing to eat but milk and beans.

Stephen Byerley

Stephen Byerley is one of the most mysterious characters in I, Robot. He is a politician running for office when first introduced. His opponent, Francis Quinn, claims that no one has ever seen him eat or sleep, and that he has no history. Consequently, he accuses Byerley of being a robot. Byerley refuses to have a physical examination on the basis that it is an invasion of his privacy. Susan Calvin is asked to ascertain whether he is a robot or not, based on his behavior. When Byerley strikes another human, Calvin determines that he is not a robot because he would have violated the First Law of Robotics, something impossible for any robot to do; however, later, Byerley describes to Calvin how he set up the situation and the implication is that perhaps the man he hit was actually another robot. Regardless, by the end of I, Robot, Byerley is the most powerful person in a world managed by machines. He and Calvin appear to be close friends, but readers never know the truth of his humanity.

Dr. Susan Calvin

Dr. Susan Calvin is a robot psychologist employed by U.S. Robots & Mechanical Men, Inc. Her role in the novel is to help readers understand the robot brain. She is very smart, but also very cold, often described as colorless and frigid. Only in "Liar!" do readers see another side of Calvin, one that is shy and insecure in her dealings with men. In all chapters featuring Calvin, she is clearly a driven woman, but also a woman with a very powerful persona. In a world dominated by men and machines, she stands out as a capable and brilliant woman. Calvin has been instrumental in assuring humans' safety in their dealings with robots through her integration of the Three Laws of Robotics into the positronic brain, but Calvin herself seems more comfortable with robots than people. By the end of the novel, she seems quite happy that machines are running the world, as she believes they will do a better job than humans. Asimov features Calvin in some fifteen stories, including those in I, Robot as well as later collections and novels. In this recurring character, Asimov's sense of ethics and high regard for reason are evident.

Cutie

Cutie is a robot named QT-1, featured in the chapter "Reason." Cutie refuses to believe that he has been created by imperfect creatures such as Powell and Donovan, and consequently reasons that he has been created by a being much greater than himself. He develops an elaborate theology that requires him and the other robots to pay homage to a large piece of machinery on a space station.

Michael Donovan

Michael Donovan and his partner, Gregory Powell, are field engineers assigned to check on robots throughout the galaxy. They are usually called in when a robot is not behaving as expected, and they often find themselves in danger. Donovan and Powell use logic to arrive at answers to problems and are generally successful in fixing the robots so that they behave as they should.

Herbie

Herbie is a robot who has learned to read minds. Caught in a conflict regarding the First Law, he begins telling lies to humans, choosing to tell them what they want to hear rather than the truth; to do otherwise, he reasons, would harm them emotionally. As a result of his decision, several humans, including Susan Calvin, are deeply wounded. She retaliates by shutting him down.

Alfred Lanning

Alfred Lanning is the Director of Research at U.S. Robots. He is a brilliant scientist and is widely regarded as the father of robotics in Asimov's fictional universe. He figures most fully in the chapter "Liar!," where he is portrayed as an aging, yet fully capable executive. Most important, Asimov portrays Lanning as a man who has realized his vision: he has watched his ideas about robots become reality and seen his world change dramatically because of his own efforts. Lanning also plays an important role in the chapter "Evidence." In this story, Lanning demonstrates the difficulty of maintaining ethical standards while protecting a large corporation. He finds it distasteful to do as Francis Quinn asks him, to provide proof that Stephen Byerley is, or is not, a robot, but he finds himself in a situation where he must comply.

The Narrator

In order for Asimov to turn his collection of short stories into the novel I, Robot, he created a frame story to connect the stories, which appear as chapters. The frame story uses a young, first-person narrator who is a journalist. His job is to interview Susan Calvin and have her relate the history of robotics to him so that he can write a feature story about her on the occasion of her retirement. The narrator is unnamed; in many ways, he stands in for the reader, asking the questions readers would like to ask for themselves. In other ways, he represents Asimov himself, who was only nineteen years old when he wrote his first robot story. The creation of the narrator allows Asimov to portray himself as the brash young writer, a little in awe and a little in love with the formidable Susan Calvin. While his role is strictly to connect the various chapters, the narrator seems to grow in both stature and dignity throughout the book. When he notes at the end of the novel that Susan Calvin has died, his spare language suggests real sorrow at her passing.

Gregory Powell

Along with Michael Donovan, Gregory Powell is a field engineer charged with testing and fixing robots on planets, asteroids, and space stations. He and Donovan provide the comic relief in I, Robot, although their lives are frequently in danger. They are essentially clowns, but they are also very smart, and are able to use the Three Laws of Robotics and their own reason to solve robotic problems.

Francis Quinn

Francis Quinn is a politician featured in the chapter "Evidence." He is convinced that Stephen Byerley is a robot, and he goes to U.S. Robots to seek the company's help in proving it. He is, in many ways, a caricature of the politician: he smokes cigars and tries to push people around. He is not successful in his bid to have Byerley proven to be robotic.

Robbie

Robbie is the robot featured in the chapter bearing his name. He is a mute robot who serves as a nursemaid for Gloria. He saves her life at one point, earning a place in the Weston household, despite Mrs. Weston's objections.

George Weston

Featured in the first chapter of I, Robot, George Weston is Gloria's father. He has provided Gloria with a robotic nursemaid named Robbie.

Gloria Weston

Gloria is a little girl featured in the first chapter of I, Robot, "Robbie." She is cared for by a robot named Robbie, and she loves him. When her parents take Robbie away from her, she grieves. She is an early voice in Asimov's stories advocating for robotic rights. She believes that Robbie should be given the same rights as any human.

Mrs. Weston

Mrs. Weston is Gloria's mother. She is very opposed to having a robotic nursemaid for her daughter in the chapter "Robbie." She represents all people who are afraid of technology and do not want it in their lives.

THEMES

Humans and Machines

Throughout I, Robot, Asimov places humans and robots in close proximity. By doing so, he reveals the problems and concerns humans encounter regarding the role of machines in their lives. The very first story, "Robbie," demonstrates two basic positions humans hold regarding machines: Mr. Weston thinks that robots can provide a safe service to free up the time of humans for other pursuits. He is happy to have the robot Robbie in his household, caring for his child. He trusts that the scientists who created Robbie have placed enough safeguards in the robot to make him reliable. Mrs. Weston, on the other hand, hates the robot. She finds him dangerous, largely because she does not understand the science behind the robot. In addition, she does not understand how the Three Laws of Robotics are designed to keep humans safe. She is also the character who talks about the people in the village being angry about the Westons having a robot. This is clearly a reference to the Frankenstein story. In a famous scene from the 1931 film, a mob of torch-bearing villagers hunts down the monster in order to destroy it. Mrs. Weston seems to imply in the chapter that their family will be in trouble with the townspeople if they do not get rid of the robot.

In later chapters, it becomes apparent that both views continue to coexist uneasily. Robots are still manufactured and employed to do heavy labor and tedious tasks, attesting to the fact that humans value the fruits of their labor; however, robots are not allowed to work or live on Earth. They are assembled there and then placed in work situations in space.

By the close of I, Robot, however, another change has taken place. Humans have become increasingly dependent on machines to take care of the details of life. Although there is still a human government in place, Susan Calvin reasons with Stephen Byerley that the large machines that manage the details of life in fact control everything that happens on the planet, in accordance with the Three Laws of Robotics. The realization signals a shift in the interpretation of the Laws: it is not individual human existence that machines now safeguard but rather the well-being of humanity as a species.

Asimov demonstrates in the novel his awareness of the anxiety and discomfort many people feel about science and machines. As robots have increasingly automated manufacturing in the years since the publication of I, Robot, and as computers now play a role in the daily lives of nearly all Americans, the theme of humans and machines explored in this novel is an enduring one.

TOPICS FOR FURTHER STUDY

  • The subject of robots is an important one in science fiction literature and film. Select a number of television or film portrayals of robots such as Battlestar Galactica, Star Trek: The Next Generation, Star Wars, Blade Runner, or others of your choosing. Using your reading of I, Robot in addition to the cinematic portrayals of robots as your evidence, analyze the various ways humans think about robots. Design a poster that identifies your findings and present it to your class.
  • Research the golden age of science fiction. Who were the key writers? What were the major themes? What was the historical context for these writers? How did the writers of the golden age of science fiction influence the current generation of science fiction writers? Write a paper addressing these questions.
  • Asimov's Three Laws of Robotics have dictated the ways that robots are portrayed in film and literature since the first time they appeared. Write a short story featuring one or more robots, and determine whether your robots conform to Asimov's vision, or follow a different set of rules that you have identified for them.
  • Choose one of the chapters of I, Robot and write a screenplay based on the story. With a small group of students, videotape your version of the story to present to your class.

Free Will and Predestination

In addition to his concern with the interaction between humans and machines, Asimov also demonstrates an interest in human action: how much of human action is taken because the human freely chooses to act, and how much is the result of some cosmic plan? According to the religious point of view, humans must have free choice in order to be able to choose the right or moral path. If no choice is involved, then there can be no morality or essential goodness. Rather, all actions would take place simply because they are predetermined to happen. Humans, then, would be slaves to destiny, a path drawn up for them by some supernatural power. At the same time, Asimov is keenly aware of the paradox in Calvinist theology, a belief system formulated during the Protestant Reformation and first articulated by John Calvin in the 1530s. Calvin's theology asserts that people who are destined for God's grace have already been selected and that nothing will prevent their ultimate achievement of heaven, in spite of the fact that God already knows that they will sin during their lifetimes because they have free will. Nonetheless, members of the elect often choose to act virtuously because their election predisposes them to moral actions. That Asimov had the theology of John Calvin in mind when he wrote I, Robot is evident in his choice of name for his heroine, Susan Calvin. Her final words, "I saw it all from the beginning, when the poor robots couldn't speak, to the end, when they stand between mankind and destruction," suggest that Asimov sees her as a god-figure, the creator who knows both the beginning and the end of a race of beings finer and more ethical than the humans who created them.

A second consideration of free will and predestination is not religious and instead revolves around notions of biological determinism, the belief that one's genes determine one's future, regardless of the environment in which one is raised. The robots in Asimov's stories have been programmed with the Three Laws of Robotics, and no matter what else they do, they must conform to those Laws. It is possible to read the robots in I, Robot as a metaphor for human existence; that is, a person's future is determined at the moment of conception, when sperm and egg combine their chromosomes and provide the genetic material for the person's entire life. Free will, in this interpretation, is nothing more than a fanciful illusion.

To argue that Asimov uses free will and predestination as a thematic device is not to say that he believed one way or the other. Rather, it simply demonstrates that Asimov, always curious, sees the paradox as one of life's great mysteries and something that can be fruitfully mined for literary consideration.

STYLE

Frame Story

Known variously as a framework story or a frame narrative, the frame story is a literary device that encloses one or more separate stories. Geoffrey Chaucer's The Canterbury Tales is a popular example. Chaucer first sets up the frame by introducing a narrator (ironically named Geoffrey Chaucer) who tells the story of a group of pilgrims traveling together to Canterbury. To pass the time, they devise a game: each pilgrim will tell a tale, and at the end of their journey, the pilgrim who tells the best story will win a prize. Each pilgrim's tale is independent of the others; the tales are held together by the frame narrated by "Geoffrey Chaucer."

When Asimov decided to knit his many robot stories together into a cohesive book, he chose to construct a frame story to place them in. He also revised his stories to better reflect the frame. For example, he introduced Susan Calvin into stories where she had not appeared in previous publication. This allowed her to speak in the framing device about the events in the story from first-person knowledge. By allowing one voice—Susan Calvin's—to relate the history of robotics, the individual stories become progressive chapters in Asimov's vision of the future. The frame story, then, allows stories written over a ten-year period to function together as a cohesive whole.

Three Laws of Robotics as Plot Device

A plot device is an element introduced into a story or novel by an author that expands, extends, or moves the plot forward. For Asimov, probably the most important plot device of his career is "The Three Laws of Robotics." Indeed, each of the stories of I, Robot addresses a puzzle or problem caused by this plot device. The device, therefore, is essential to the entire book. Furthermore, the Three Laws of Robotics as articulated in I, Robot are Asimov's most enduring legacy to science fiction. They offer a paradigm that other writers adopted, and have become something like a Holy Writ of science fiction. Asimov himself famously noted in a variety of places that he expected to be remembered for the Three Laws of Robotics if nothing else.

The importance of the Three Laws for I, Robot, however, is that Asimov uses them consistently as a plot device. It is as if he is testing them out in a variety of situations. As a scientist, Asimov was thoroughly familiar with testing hypotheses. Therefore, once he posited the Three Laws, he had a logical plan for any plot eventuality. To get to the end of the story, he merely needed to reason his way through, using the Laws as his guide. For example, if he imagined a case where a robot told a lie, he needed to go back to the Laws to determine under what circumstances such a thing could happen.

Asimov's influence can be seen in many episodes of the television series Star Trek: The Next Generation featuring the android Data, in which the Three Laws are tested yet again. Further, part of the shock of contemporary science fiction such as Battlestar Galactica resides in its obvious and ongoing violations of the Three Laws.

HISTORICAL CONTEXT

The Golden Age of Science Fiction

A good deal of science fiction was published in the United States during the nineteenth and early twentieth centuries, but it was in the late 1930s that the genre came into its own. The era critics call "the golden age" roughly coincides with the years during which Asimov was writing his robot stories and his first three Foundation novels.

In 1921, the Czech writer Karel Čapek produced a play, R.U.R. (Rossum's Universal Robots), that took technology as its subject. In the play, humans produce machines that Čapek called "robots," the first use of the term. His vision was a bleak one: the robots destroy their masters, just as Mary Shelley's monster eventually destroys Victor Frankenstein, the man who created him. Clearly, by the 1920s, humans were growing wary of technology. During World War I, they had seen what airplanes, gas attacks, and other products of technology could do to a human body. It is little wonder that, in such an atmosphere, science was viewed with distrust.

Nevertheless, science also provided significant hope for the future. In the late 1920s, the first of the science fiction pulp magazines was founded. (Pulp magazines were so called because they were made from cheap, wood pulp paper.) Amazing Stories, started by Hugo Gernsback, went to press for the first time in 1926. While many of the stories were not of high quality, they nonetheless contained the first articulations of what would become treasured conventions of science fiction. The dark years of the Depression (roughly 1929 through 1939) produced readers who were looking to escape their own worries and woes. Where better to look than to science and the galaxy?

By the time John W. Campbell, Jr., took over as editor of Astounding Science Fiction in 1937, he and Gernsback were defining a recognizable literary genre. Both men sought stories that emphasized science and educated their readers while retaining strong literary qualities; Campbell in particular prized well-developed characters and strong story lines.

Between 1938 and 1946, writers such as Asimov, Clifford Simak, Theodore Sturgeon, and Robert Heinlein, among many others, produced stories that were optimistic, energetic, and future-oriented. Their heroes were ethical, strong leaders, who did what was right even if it was unpopular. They were also smart, and masters of an array of technological devices.

When the United States entered World War II in 1941, the government, the military, and the population all looked to science and technology as the route to victory. Like their heroes in science fiction stories, Americans believed that American ingenuity and engineering would eventually prevail over their enemies. In addition, although Americans had some distrust of technology, science fiction writers of the time believed that technology existed to serve humankind and could do so safely. Asimov shaped this general view of technology with his Three Laws of Robotics, developed with the help of Campbell, to whom he always gave credit.

COMPARE & CONTRAST

  • 1940s: World War II rages across Europe and the Pacific from 1941 through 1945, ending just days after the detonation of the first atomic bombs on Hiroshima and Nagasaki, Japan. These bombs are the result of extraordinary technological development during the war years.

    Today: While nuclear weapons have not been deployed in combat since World War II, contemporary warfare relies heavily on technological advances that include stealth bombers, pinpoint targeting systems, and satellite imagery, among others.

  • 1940s: The earliest programmable electronic computers are being developed by scientists such as George Robert Stibitz, Konrad Zuse, and Alan Turing. By 1951, the first commercial computer is constructed; it is so large that it requires an entire room to house it.

    Today: Computers are present in nearly every facet of life. Handheld devices can do the work of earlier computers many times their size.

  • 1940s: Robots are still little more than a dream of the future; it is only in the following decade that George Devol and Joseph Engelberger begin working on industrial robots, machines that perform technical and repetitive tasks on assembly lines and in factories.

    Today: Industrial robots do many jobs in manufacturing plants around the world. A growing number of households employ small, non-humanoid robots such as the Roomba to do tasks like vacuuming and pool cleaning. In addition, Honda Motor Corporation has developed a humanoid robot named ASIMO, a name often taken as a nod to Asimov.

  • 1940s: The age of space travel has not yet begun. Although no human-made object has yet reached Earth orbit, rockets developed by German scientists during World War II form the basis of the new aerospace industry that will emerge in the 1950s.

    Today: Human beings have extensive space exploration programs. They have built and maintained orbiting space stations, and they send unmanned exploration missions to the farthest reaches of the solar system and beyond. Most space travel is conducted for scientific purposes.

  • 1940s: The golden age of science fiction highlights writers such as Robert Heinlein, Ray Bradbury, Arthur C. Clarke, and Isaac Asimov, who publish their work in pulp magazines such as Astounding Science Fiction, Amazing Stories, and Super Science Stories.

    Today: While pulp magazines are rare, science fiction remains a popular genre. Many works are published in online magazines, and television series such as Battlestar Galactica attract large audiences.

According to Morton Klass in his 1983 article "The Artificial Alien: Transformations of the Robot in Science Fiction," published in the Annals of the American Academy of Political and Social Science, Asimov was instrumental in establishing the idea of the robot as human helper, and this idea spread throughout the culture: "This theme—the robot as permanent and perpetual servant of humans, despite all improvements in the manufacture of robots and all declines in human capacities—is expressed again and again in the science fiction of the middle of the century." This is the vision that sustained science fiction through the 1940s and 1950s.

CRITICAL OVERVIEW

Asimov is widely acknowledged as one of the most important and influential science fiction writers of the twentieth century and perhaps of all time. I, Robot has been singled out as one of Asimov's finest achievements.

Jean Fiedler and Jim Mele in their book Isaac Asimov comment on Asimov's dedication to both science and fiction: "For Asimov the term science fiction is an appellation with two components—science and fiction. That he insisted on scientific accuracy may at times have kept him from fanciful conjecture, but at the same time it strengthened his fiction."

Critics most frequently note that Asimov's project in I, Robot is to explore the theme of humans and machines. Most commentators, including Asimov himself, argue that he wrote his robot stories in response to, and in refutation of, the so-called Frankenstein complex, a term Asimov himself coined for the common plot line of a creature destroying its master. In the introduction to Robot Visions, Asimov relates, "I became tired of the ever-repeated robot plot. I didn't see robots that way. I saw them as machines—advanced machines—but machines. They might be dangerous but surely safety factors would be built in." Critic Gorman Beauchamp, in a 1980 article in Mosaic: A Journal for the Interdisciplinary Study of Literature, analyzes the stories of I, Robot to argue the opposite: "If my reading of Asimov's robot stories is correct, he has not avoided the implications of the Frankenstein complex, but has, in fact, provided additional fictional evidence to justify it…. Between [Mary Shelley's] monster and Asimov's machines, there is little to choose." Thus, the ambivalence between humans and machines and between creator and creation is a debate that continues throughout the book. In a 2007 article in Zygon, Robert M. Geraci states, "In Asimov's stories, human beings waver between accepting and rejecting the robots in their midst. We may crave the safety and security offered by robots, but we fear them as well." This theme runs throughout I, Robot.

Other critics find additional themes present in I, Robot. Maxine Moore, in her 1976 essay in Voices for the Future: Essays on Major Science Fiction Writers, looks at religious implications in the novel. She argues, "In the robot series, the physical base metaphor is that of computer science: the self-limiting structure of robot and man and their binary conditioning—or programming—that provides a yea-nay choice range and an illusion of free will." For Moore, the robot stories work through some of the essential questions of Protestant Calvinism, with its accompanying doctrines of the Puritan work ethic and predestination.

For Adam Roberts, author of the book Science Fiction, published in 2000, Asimov's main concern in his robot stories is an ethical one: "The main effect of his ‘three laws of robotics’ is to foreground the ethical in the delineation of the machine."

The area where critics seem to take the most exception to Asimov's work is in his characterization. As Joseph F. Patrouch writes in his 1974 book The Science Fiction of Isaac Asimov, "His characters do not share as much as they should in the convincingness of his settings. One does not leave an Asimov story convinced that he has lived for a little while with real people." Likewise, Fiedler and Mele also find Asimov's first chapter, "Robbie," to have "wooden characters" and "a predictable plot."

Nonetheless, in spite of the general critical negativity toward Asimov's characterizations, some critics such as Donald Watt find that it is precisely these characters that account for his popularity. In his essay "A Galaxy Full of People: Characterization in Asimov's Major Fiction" in the book Isaac Asimov, Watt argues that "Asimov's characters are at the center of appeal in his major fiction because they enrich and enliven the science fiction worlds he creates."

In another essay in Isaac Asimov, Patricia S. Warrick aptly summarizes the critical reception of Asimov, including I, Robot: "No single writer in science fiction has so consistently maintained his vision, so consistently grounded it in sound science and logical thought…. He deserves to be recognized as one of the most creative and ethical thinkers of his time."

CRITICISM

Diane Andrews Henningfeld

Henningfeld is a professor of literature who writes widely on novels, short stories, and poetry. In the following essay, she argues that Asimov's characterization of Susan Calvin in I, Robot is neither misogynistic nor one-dimensional, as has often been claimed.

WHAT DO I READ NEXT?

  • During the 1940s, Asimov began work on a second series of stories concerning the rise and fall of a galactic empire. The stories were collected and published as Foundation in 1951. This novel was followed by Foundation and Empire in 1952 and Second Foundation in 1953. This series, along with the robotic stories and novels, represents Asimov's most enduring legacy.
  • One of the most important works of fiction dealing with the ethical dilemmas of robots and androids is the 1968 novel Do Androids Dream of Electric Sheep? by Philip K. Dick. Set in a post-apocalyptic San Francisco, the novel was made into the movie Blade Runner, directed by Ridley Scott and released in 1982.
  • The extensively illustrated book Robots by Roger Bridgman, published in 2004, offers a look at robots in real life and in fiction in an easy-to-read format.
  • Another famous writer of the golden age of science fiction is Robert Heinlein. His Stranger in a Strange Land, published in 1961, is a science fiction classic. In this novel, Valentine Michael Smith, a human, returns to Earth after having been raised by Martians. This hugely popular, Hugo Award-winning novel is an important text for any student of science fiction.
  • Masterpieces: The Best Science Fiction of the Twentieth Century (2004), edited by novelist Orson Scott Card, is an excellent anthology, providing a comprehensive collection of representative works from the period.
  • In 2002, Janet Jeppson Asimov edited her late husband's three-volume autobiography into a more manageable single volume, It's Been a Good Life. She also includes the details of Asimov's final years and his death.

Asimov is widely regarded as one of the best science fiction writers of all time, and his work continues to attract new readers and new critical attention in the decades after his death, but some critics find fault with his characterization. More specifically, Asimov is taken to task by critics who find his portrayal of women, especially in early works like I, Robot, stereotypical at best and misogynistic, or demeaning to women, at worst. William F. Touponce, in Isaac Asimov, writes that Asimov's favorite character, Susan Calvin, "is little more than a stereotype (the frigid woman scientist who gives up family for career)." Likewise, Helen Merrick, in the chapter "Gender in Science Fiction" in The Cambridge Companion to Science Fiction (2003), argues that Calvin is "masquerading as a ‘female man.’" She writes further that "her ‘cold nature,’ emotional isolation and adherence to rationality is apparently at odds with her ‘natural’ identity as a woman." Merrick's comments suggest that Asimov found the two roles incompatible, and that his decision to deny Calvin husband, home, and family stemmed from his lack of regard for the full nature of womanhood. Yet how accurate is this representation? A closer examination of the character of Susan Calvin might reveal another side of Asimov, and at the same time offer insight into Asimov's purpose for writing his robotic stories.

In spite of critical claims that Asimov's characters are flat and often no more than caricatures, the character of Susan Calvin proves otherwise. Asimov addresses this criticism directly in the introduction to Robot Visions. He writes about introducing Calvin in his short story "Liar!," published in Astounding magazine in May 1941.

This story was originally rather clumsily done, largely because it dealt with the relationship between the sexes at a time when I had not yet had my first date…. Fortunately, I'm a quick learner, and it is one story in which I made significant changes before allowing it to appear in I, Robot.

Clearly, Asimov was striving to deepen and strengthen the most important character of his robot series. In addition, it is also clear that it was important to him to render her as a sympathetic, fully realized woman character, not as a flat, stereotypical female. Likewise, Donald Watt, in his essay "A Galaxy Full of People: Characterization in Asimov's Major Fiction," notes that "Asimov freely admits that the robot short stories he was most interested in were those dealing with Susan Calvin." He further quotes Asimov: "‘As time went on, I fell in love with Dr. Calvin. She was a forbidding creature, to be sure—much more like the popular conception of a robot than were any of my positronic creations—but I loved her anyway.’" Perhaps Donald M. Hassler sums it up best in a 1988 article in Science Fiction Studies: "There are other psychologists in the early short stories, even one or two ‘robopsychologists’; but Susan Calvin is special."

Indeed, when Asimov decided to collect his previously published robotic short stories as I, Robot, he chose to construct a frame story that gave voice to Susan Calvin. In order to do so, he even rewrote several of his stories to account for her presence as the unifying voice outside the story. For example, although Susan Calvin was first introduced in the 1941 short story "Liar!," Asimov adjusted his first robot story "Robbie" to include Calvin briefly. Not only does Calvin provide the framing context for the story, she appears within the story itself, first as an unnamed "girl in her middle teens" who "sat quietly on a bench." A page later, Asimov reveals the identity of the clearly precocious teen in a parenthetical note: "The girl in her mid-teens left at that point. She had enough for her Physics-1 paper on ‘Practical Aspects of Robotics.’ This paper was Susan Calvin's first of many on the subject." While the sentence was inserted years after the original story, it nonetheless strengthens the story and the novel as a whole. In addition, through such small touches, Asimov builds a past for Calvin, a past that at least partially accounts for her apparent coldness. That Calvin remembers many years later that she saw Gloria, a little girl frantically searching for her robotic friend Robbie in a museum, suggests that Calvin was touched by the child's pain. It also suggests that Calvin, from an early age, was considering the ethical and psychological complications of robot-human relationships.

When Asimov first wrote his robot stories as a young man in the 1940s and when he shaped them into a novel in 1950, the available roles for female characters in literature, as in life, were limited. After World War II, women were encouraged to give up the jobs they had held while the men were overseas fighting in the armed services. The jobs were needed for the returning veterans, they were told, and just as it had been their civic duty to go to work outside of the home during the war, it was now their duty to give their jobs to returning men. Thus, by 1950, women were largely relegated to home and family. Women who chose otherwise were not only considered "masculine," they were considered unnatural, as if bearing and caring for children were the only suitable jobs for a woman. The number of women in 1950 who earned a Ph.D., particularly in a scientific or technical field, was minuscule. Consequently, roles for women in literature were generally limited to the sexy (and often blonde) bombshell, the elderly spinster, or the nurturing wife and mother. In the few cases when a female character was portrayed as an intellectual, she was often a glasses-wearing, white-coated minor character, frequently made the subject of jokes about her femininity and worth.

It is true that Calvin demonstrates many of these stereotypes in that Asimov presents her as a chilly, unfeeling scientist, at odds with cultural expectations. But it is also true that he never places Calvin in a role of ridicule. On occasion, male characters in the book make negative comments about her, but there is never a sense that the authorial voice finds her anything but credible, ethical, and powerful. This represents a significant difference from the more common portrayals of women in mid-twentieth-century literature.

Furthermore, it is interesting that, when he looks into the future, Asimov envisions a woman filling the role of robopsychologist for the largest robotics corporation in the galaxy. Nothing in the culture around him would have suggested that this would be a suitable role for a woman. In addition, Asimov does not gift Calvin with either physical beauty or a bubbly personality. In the opening frame segment, Asimov writes, "She was a frosty girl, plain and colorless, who protected herself against a world she disliked by a mask-like expression and a hypertrophy of intellect." Throughout the stories, Asimov sprinkles descriptions that suggest that Calvin is a frigid, insular woman, a person devoted to her job and to her robots. Even Calvin herself tells the young journalist in the opening segment that everyone thinks that she is not human. Certainly, by no stretch of the imagination could Calvin be considered beautiful or even attractive. Yet, at the same time, readers must agree that Calvin's role in the development of robotics is crucial. She is present from the earliest days of the science and continues to exert influence throughout her life. Further, across the stories, readers are able to see Calvin grow from a brilliant and powerful young woman into an even more brilliant and powerful old woman. In a culture that values youth and beauty, such a statement about the potential of older women is particularly unusual.

What critics fail to understand is that, far from denigrating women in his portrayal of Calvin, and far from creating her as a "female man," Asimov gifts Calvin with the very qualities he finds most admirable in any human being, man or woman. Calvin has a solid work ethic, a strong commitment to the greater good, exceptional intelligence, and a clear, rational approach to life. That Asimov imbues a female character with these qualities suggests that he looks beyond the commonly held notions of gender stereotyping in his creation of this character.

While it is true that Calvin falls into the role of woman-without-a-man, a role that often elicits pity from readers as well as other characters, Asimov demonstrates in the short story "Liar!" that part of Calvin's aloofness and apparent frigidity stems from the betrayal she suffers from Herbie, the mind-reading robot. Herbie tells her that the man she is in love with loves her as well; when she discovers that Herbie says this not because it is true but because it is what she wants to hear, she is devastated. She places a protective shield around herself at that time and clearly vows never to allow herself to be so hurt in the future. Significantly, although this is not the first story in I, Robot, it was originally the first story in which Susan Calvin appeared. Therefore, all subsequent iterations of the character include the subtext that Calvin is a woman wounded in love.

To suggest that Asimov somehow skewers intelligent women in his portrayal of Calvin is to overlook a significant feature of his own life story. After his divorce from his first wife in 1973, Asimov married Janet Jeppson, a medical doctor, a scientist, and a science writer. By all accounts, theirs was a happy and long-lived marriage. Asimov demonstrated that the positive qualities he gave Calvin were also ones he admired in the woman he chose as his wife.

Susan Calvin is undoubtedly a formidable character, one who strikes fear into the hearts of the men who work with her. She has devoted her life to the study of robots, to the exclusion of a personal life. This does not necessarily imply that she is unhappy, however, nor does it imply that Asimov finds her choices peculiar or wrong. Rather, it opens the door for a female character to find value in something other than marriage and motherhood. It provides a model of a woman who achieves power and success through her own effort. In addition, given that Asimov's major theme in I, Robot is the interaction between robot and human, his choice to create Calvin as the single person best suited to interpreting one to the other also speaks of his great admiration for her, and by extension, for women in general. Susan Calvin is the backbone of the fictional robotic industry that Asimov imagines, and the glue that holds together his novel. Her presence is both necessary and compelling, and one of the key reasons for the popular and critical success of I, Robot.

Source: Diane Andrews Henningfeld, Critical Essay on I, Robot, in Novels for Students, Gale, Cengage Learning, 2009.

Donald Palumbo

In the following excerpt, Palumbo examines the theme of "overcoming programming," a challenge faced by the robots in Asimov's stories and novels, including I, Robot.

Just as the Robot stories and novels exhibit the same chaos-theory concepts as does the Foundation series, but in a somewhat different way, so too do the Robot novels exhibit the same fractal quality of duplication across the same scale as does the Foundation series in their reiteration of a different plot structure and additional themes. While this similarly recycled Robot-novel plot structure is quite distinct from that single plot revisited six times in the Foundation Series, its key elements and motifs also resurface repeatedly in both the Empire novels and the Foundation Series as well, and are echoed too in several of the Robot stories, just as those Foundation Series motifs most closely related to Seldon's concept of psychohistory are, likewise, also reiterated exhaustively in the Robot and Empire novels. Moreover, at least two of the themes developed initially in the Robot novels—victory snatched from defeat and the "dead hand" motif—become even more prominent in the Foundation Series.

All four Robot novels employ the same basic plot; however, a few nuances involving Baley recur only in the three Baley novels—The Caves of Steel (1954), The Naked Sun (1956), and The Robots of Dawn (1983)—as Robots and Empire (1985) occurs some two centuries after Baley's death. Incompatible protagonists are forced into an uncomfortable alliance, must race against time to solve an apparently insoluble mystery (in the Baley novels, a "murder" mystery) involving Spacers and one or more experimental robots, and are victims of frame-ups or assassination attempts while pursuing each case. Failure to solve the mystery will result in Earth's loss of status or eventual destruction, but success always propels Earth further along the unlikely path leading to a revival of galactic colonisation and the long-term survival of humanity. (Echoing this dynamic on a far smaller scale, in each Baley novel failure would also mean a catastrophic loss of status for Baley, while success always brings professional advancement.) Earth's champions always snatch victory from defeat at the last possible moment—and always, while in an extraordinarily disadvantageous position, by badgering a smug antagonist into losing his composure—but are able to do so only after each has overcome his or her initial programming (literal programming, for robot protagonists Daneel and Giskard; phobias and prejudices, the metaphorical equivalent, for human protagonists Baley and Gladia). Yet the true solution of each mystery is never publicly revealed, and the actual perpetrators of whatever crime has been committed are always allowed to escape prosecution (if not poetic justice)….

The literalisation of this theme of overcoming programming is introduced numerous times in I, Robot and subsequent robot stories and novels prior to its culmination in Daneel's and Giskard's development of the Zeroth Law in Robots and Empire. To keep the robots in "Little Lost Robot" from preventing the researchers from doing their jobs, which place them in some danger, "Hyper Base happens to be using several robots whose brains are not impressioned with the entire First Law" (I, Robot); this prompts these robots to engage in a series of increasingly more devious behaviours that culminate in the titular robot attempting to kill Susan Calvin. Gunn notes that Baley's sarcastic observation in Caves that "‘a robot must not hurt a human being, unless he can think of a way to prove it is for the human being's ultimate good after all’ … re-emerges as the ‘Zeroth Law’" (p. 102). And the murder victim in Naked Sun, Rikaine Delmarre, had been interested in developing "robots capable of disciplining children"; this would also entail "a certain weakening of the First Law", and here the rationale is again strikingly like the reasoning behind the Zeroth Law—that, because a child must be "disciplined for its own future good," the First Law can be tampered with in fact but not in spirit (p. 136). Of course, Leebig contemplates tampering with the First Law far more malevolently, in his scheme to build robot spaceships programmed to believe that the planets they bombard are not inhabited; and two centuries later, in Robots and Empire, the Solarians have weakened the First Law more directly by programming their robots to define as a human being only someone who speaks with a Solarian accent.

More to the point, however, the titular robot in "Christmas Without Rodney", who has been insulted and kicked by its owner's visiting grandson, finally expresses a wish that "the laws of robotics didn't exist"; this comment fills Rodney's owner with dread, as he reasons that "from wishing they did not exist to acting as if they did not exist is just a step" (Robot Dreams, pp. 403-4). Such a wish is implicit in the little Bard robot's plaintive, endless repetition of the word "someday" at the conclusion of "Someday" (p. 301). And in "Robot Dreams" Elvex, the robot whose positronic brain incorporates a fractal geometry design, dreams that the only law of robotics is a truncated version of the Third Law that states in its entirety that "robots must protect their own existence", with "no mention of the First or Second Law" (p. 31). Elvex has "subconsciously" broken its programming, in its wish-fulfilling dreams, and Calvin destroys it on the spot, but not before she concludes that this reveals the existence of "an unconscious layer beneath the obvious positronic brain paths" and wonders "what might this have brought about as robot brains grew more and more complex" (p. 32). What it will bring about is, ultimately, Daneel, Giskard, and the Zeroth Law.

The two "legendary" robots who are likened to Daneel in his Demerzel persona in Forward because they too had allegedly passed as humans—Andrew Martin, the robot who slowly transforms himself into a human being in "The Bicentennial Man", and Stephen Byerley, the humaniform robot in "Evidence" who becomes a politician and ultimately "the first World Co-ordinator" (I, Robot)—are also similar to Daneel in that they too incrementally overcome their programming. Andrew can become more human-like "only by the tiniest steps" because his "carefully detailed program concerning his behavior towards people" requires him to be sensitive to their "open disapproval" (Robot Visions, pp. 258-59). As he becomes more human himself, however, he also becomes more and more capable of "disapproving of human beings" in turn, and increasingly more able to overcome his programming (p. 267). He urges humans to lie, although he cannot lie himself; finds himself approving "of lying, of blackmail, of the badgering and humiliation of a human being. But not physical harm"; is able to give "a flat order to a human being" without a second thought; easily rationalises away the Third Law, finally, in order to arrange his own death, even though he is told explicitly that "that violates the Third Law"; and during his long metamorphosis "felt scarcely any First Law inhibition" to setting "stern conditions" in his dealings with humans (pp. 269, 273, 282, 289, 279). Much as Giskard will do thousands of years later and on a far grander scale, and likewise echoing Solarian logic regarding the robot supervision of children, Andrew is able to overcome "First Law inhibition" by reasoning "that what seemed like cruelty might, in the long run, be kindness" (p. 279).

Similarly, in "Evidence" Calvin theorises that a robot such as Byerley—who as a politician, and most specifically as a district attorney, "protects the greater number and thus adheres to Rule One at maximum potential"—may find it necessary to break "Rule One to adhere to Rule One in a higher sense", the essence of the Zeroth Law (I, Robot). To explain what she means, Calvin invents a situation in which "a robot came upon a madman about to set fire to a house with people in it" and determines that the First Law would require that robot to kill the madman if no other means of safeguarding the lives of others is available, even though such a robot "would require psychotherapy." Indeed, Calvin deduces in "The Evitable Conflict" that the "Machines" (immobile positronic brains) entrusted with the governance of the world economy in the twenty-first century have already deduced and implemented the Zeroth Law, and are using it to justify undermining specific individuals opposed to their existence in order "to preserve themselves, for us", humanity (I, Robot). Preempting most precisely the logic Giskard and Daneel will employ several millennia later, and in almost the same words, she argues that "the Machines work not for any single human being, but for all humanity, so that the First Law becomes: ‘No Machine may harm humanity; or, through inaction, allow humanity to come to harm’."

In Dawn Baley implores Daneel not to "worry about me; I'm one man; worry about billions"—and then laments that, limited by "their Three Laws", robots "would let … all of humanity go to hell because they could only be concerned with the one man under their noses" (pp. 341-2). Although it is developed subtly throughout the protracted, novel-long dialogue between the two robots, the ubiquitous central plot thread crucial to the climax of Robots and Empire is Daneel's and Giskard's dogged determination to derive the Zeroth Law in response to this "order" from Baley. Early in the novel Giskard feels that "the Three Laws of Robotics are incomplete or insufficient" and that he is "on the verge of discovering what the incompleteness or insufficiency of the Three Laws might be" (p. 17). Their confrontation with Solarian overseer robot Landaree, who is programmed to recognise as human only those who have a Solarian accent, prompts Daneel to speculate, tentatively, that "the First Law is not enough," that "if the Laws of Robotics—even the First Law—are not absolutes and if human beings can modify them, might it not be that perhaps, under proper conditions, we ourselves might mod—" but can "go no further" (pp. 180, 178).

Later, ironically (in ways too convoluted to examine here), Daneel first articulates and invokes the Zeroth Law to justify disobeying Amadiro's ally Vasilia's order that he remain silent while she attempts to co-opt Giskard. Arguing in a "low whisper" that "there is something that transcends even the First Law," Daneel explains to the contemptuously incredulous roboticist that "humanity as a whole is more important than a single human being … There is a law that is greater than the First Law: ‘A robot may not injure humanity or, through inaction, allow humanity to come to harm.’ I think of it now as the Zeroth Law" (pp. 351, 353). Thus, near the end of the novel, Daneel acts to protect Giskard from the humaniform robot assassin's blaster fire, rather than rush to save Gladia (who, in fact, is not endangered), because, as only Giskard "can stop the destruction of Earth, … the Zeroth Law demands that I protect you ahead of anyone else" (p. 426). And at the novel's climax, when Mandamus points out that Daneel's belief that "the prevention of harm to human beings in groups and to humanity as a whole comes before the prevention of harm to any specific individual … is not what the First Law says," Daneel replies, "It is what I call the Zeroth Law and it takes precedence…. I have programmed myself" (p. 463).

Daneel is inspired to articulate the Zeroth Law, in his conversation with Vasilia, by his memory of Baley's deathbed injunction that he keep his "mind fixed firmly on the tapestry [of humanity] and … not let the trailing off of a single thread [an individual life] affect you" (p. 229, repeated verbatim on p. 352). Indeed, although dead 200 years, Baley thoroughly haunts Robots and Empire and each of its major characters—Daneel, Giskard, Gladia, and Amadiro. Baley actually appears in the novel via extended flashbacks to Gladia's last meeting with him in Auroran orbit, five years after Dawn, and to Giskard's visit with him on Earth, three years earlier, as well as in the flashback to his deathbed interview with Daneel. Daneel's memories of Baley are those most precious to him, the only memories he "cannot risk losing" (Robots and Empire, p. 9). Giskard too remembers Baley, has secretly worked for two centuries to further Baley's agenda, and invokes Gladia's memory of Baley to manipulate her into agreeing to see Mandamus (pp. 14, 70). Gladia repeatedly follows Giskard's advice, often against her instincts, solely because (as in this instance) she "remembered again, though rebelliously, that she had once promised Elijah that she would trust Giskard" (p. 15). She agrees to go to Solaria with D. G. Baley, not only as a result of Giskard's prompting, but also out of respect for D. G.'s ancestor's memory (pp. 101-02). And Gladia notes that even "Amadiro cannot forget, cannot forgive, cannot release the chains that bind him in hate and memory to that dead man" (p. 28)….

D.G. observes that Amadiro's renewed maneuvering against Earth, after he "was politically smashed by Dr. Fastolfe twenty decades ago," is "an example of the dead hand of longevity" (p. 248). Yet it is Baley's more literal "dead hand"—operating through his influence on Daneel, Giskard, and Gladia—that determines the outcome of Amadiro's plot against Earth, and thus the fate of humanity for the next twenty thousand years. Daneel acknowledges that he had come to consider what it would be like for a robot to be "utterly without Laws as humans are"—a train of thought that leads him to the Zeroth Law—"only because of my association with Partner Elijah" and that, since that association, Daneel has "always tried, within my limitations, to think as he did" (pp. 37, 73). Indeed, as the novel's climax approaches Daneel tries "to do what Partner Elijah would have done and force the pace" of events; and at that climax he and Giskard emulate Baley by badgering Amadiro into making the fatal admission that enables them to implement the Zeroth Law and act (pp. 402, 462). Giskard, who alerts Daneel to its existence, is compelled to respond to the current "crisis", which Baley had predicted would inevitably arrive at a time when the stagnant Spacers feel threatened by Earth and Settler expansion, because Baley had ordered him twenty decades earlier "to use your abilities to protect Earth when the crisis comes" (p. 63). In his meeting with Earth's Undersecretary Quintana, who helps him deduce that Amadiro's operation is based at Three Mile Island, Daneel explains that he and Giskard "have undertaken the task" of stopping Amadiro and protecting Earth, not on their own initiative, but because they are following Baley's "instructions" (p. 451). And at the conclusion of Foundation and Earth Daneel notes that he has spent two hundred centuries caring "for the Galaxy; for Earth, particularly … because of a man named Elijah Baley," and that "the galaxy might never have been settled without him" (p. 479). Thus, while Baley's is the principal "dead hand" that has fashioned humanity's destiny, Daneel's (far more than Amadiro's) is the "dead hand of longevity" that has shaped human history for millennia….

Source: Donald Palumbo, "Reiterated Plots and Themes in the Robot Novels: Getting Away with Murder and Overcoming Programming," in Foundation, No. 80, Autumn 2000, pp. 19-39.

Gorman Beauchamp

In the following essay, Beauchamp examines the way technology is used in Asimov's novels, particularly I, Robot.

In 1818 Mary Shelley gave the world Dr. Frankenstein and his monster, that composite image of scientific creator and his ungovernable creation that forms one central myth of the modern age: the hubris of the scientist playing God, the nemesis that follows on such blasphemy. Just over a century later, Karel Capek, in his play R.U.R., rehearsed the Frankenstein myth, but with a significant variation: the bungled attempt to create man gives way to the successful attempt to create robots; biology is superseded by engineering. Old Dr. Rossum (as the play's expositor relates) "attempted by chemical synthesis to imitate the living matter known as protoplasm." Through one of those science-fictional "secret formulae" he succeeds and is tempted by his success into the creation of human life.

He wanted to become a sort of scientific substitute for God, you know. He was a fearful materialist…. His sole purpose was nothing more or less than to supply proof that Providence was no longer necessary. So he took it into his head to make people exactly like us.

But his results, like those of Dr. Frankenstein or Wells's Dr. Moreau, are monstrous failures.

Enter the engineer, young Rossum, the nephew of old Rossum:

When he saw what a mess of it the old man was making, he said: ‘It's absurd to spend ten years making a man. If you can't make him quicker than nature, you may as well shut up.’ … It was young Rossum who had the idea of making living and intelligent working machines … [who] started on the business from an engineer's point of view.

From that point of view, young Rossum determined that natural man is too complicated—"Nature hasn't the least notion of modern engineering"—and that a mechanical man, desirable for technological rather than theological purposes, must needs be simpler, more efficient, reduced to the requisite industrial essentials:

A working machine must not want to play the fiddle, must not feel happy, must not do a whole lot of other things. A petrol motor must not have tassels or ornaments. And to manufacture artificial workers is the same thing as to manufacture motors. The process must be of the simplest, and the product the best from a practical point of view—. Young Rossum invented a worker with the minimum amount of requirements. He had to simplify him. He rejected everything that did not contribute directly to the progress of work…. In fact, he rejected man and made the Robot…. The robots are not people. Mechanically they are more perfect than we are, they have an enormously developed intelligence, but they have no soul.

Thus old Rossum's pure, if impious, science—whose purpose was the proof that Providence was no longer necessary for modern man—is absorbed into young Rossum's applied technology—whose purpose is profits. And thus the robot first emerges as a symbol of the technological imperative to transcend nature: "The product of an engineer is technically at a higher pitch of perfection than a product of nature."

But young Rossum's mechanical robots prove no more ductile than Frankenstein's fleshly monster, and even more destructive. Whereas Frankenstein's monster destroys only those beloved of his creator—his revenge is nicely specific—the robots of R.U.R., unaccountably developing "souls" and consequently human emotions like hate, engage in a universal carnage, systematically eliminating the whole human race. A pattern thus emerges that still informs much of science fiction: the robot, as a synecdoche for modern technology, takes on a will and purpose of its own, independent of and inimical to human interests. The fear of the machine that seems to have increased proportionally to man's increasing reliance on it—a fear embodied in such works as Butler's Erewhon (1872) and Forster's "The Machine Stops" (1909), Georg Kaiser's Gas (1919) and Fritz Lang's Metropolis (1926)—finds its perfect expression in the symbol of the robot: a fear that Isaac Asimov has called "the Frankenstein complex." [In an endnote, Beauchamp adds: "The term ‘the Frankenstein complex,’ which recurs throughout this essay, and the references to the symbolic significance of Dr. Frankenstein's monster involve, admittedly, an unfortunate reduction of the complexity afforded both the scientist and his creation in Mary Shelley's novel. The monster, there, is not initially and perhaps never wholly ‘monstrous’; rather he is an ambiguous figure, originally benevolent but driven to his destructive deeds by unrelenting social rejection and persecution: a figure seen by more than one critic of the novel as its true ‘hero’. My justification—properly apologetic—for reducing the complexity of the original to the simplicity of the popular stereotype is that this is the sense which Asimov himself projects of both maker and monster in his use of the term ‘Frankenstein complex.’ Were this a critique of Frankenstein, I would be more discriminating; but since it is a critique of Asimov, I use the ‘Frankenstein’ symbolism—as he does—as a kind of easily understood, if reductive, critical shorthand.

The first person apologia of Mary Shelley's monster, which constitutes the middle third of Frankenstein, is closely and consciously paralleled by the robot narrator of Eando Binder's interesting short story "I, Robot," which has recently been reprinted in The Great Science Fiction Stories: Vol. 1, 1939, ed. Isaac Asimov and Martin H. Greenberg (New York, 1979). For an account of how Binder's title was appropriated for Asimov's collection, see Asimov, In Memory Yet Green (Garden City, N.Y., 1979), p. 591."]

In a 1964 introduction to a collection of his robot stories, Asimov inveighs against the horrific, pessimistic attitude toward artificial life established by Mary Shelley, Capek and their numerous epigoni:

One of the stock plots of science fiction was that of the invention of a robot—usually pictured as a creature of metal, without soul or emotion. Under the influence of the well-known deeds and ultimate fate of Frankenstein and Rossum, there seemed only one change to be rung on this plot.—Robots were created and destroyed their creator; robots were created and destroyed their creator; robots were created and destroyed their creator—

In the 1930s I became a science fiction reader, and I quickly grew tired of this dull hundred-times-told tale. As a person interested in science, I resented the purely Faustian interpretation of science.

Asimov then notes the potential danger posed by any technology, but argues that safeguards can be built in to minimize those dangers—like the insulation around electric wiring. "Consider a robot, then," he argues, "as simply another artifact."

As a machine, a robot will surely be designed for safety, as far as possible. If robots are so advanced that they can mimic the thought processes of human beings, then surely the nature of those thought processes will be designed by human engineers and built-in safeguards will be added….

With all this in mind I began, in 1940, to write robot stories of my own—but robot stories of a new variety. Never, never, was one of my robots to turn stupidly on his creator for no purpose but to demonstrate, for one more weary time, the crime and punishment of Faust. Nonsense! My robots were machines designed by engineers, not pseudo-men created by blasphemers. My robots reacted along the rational lines that existed in their "brains" from the moment of construction.

The robots of his stories, Asimov concludes [in his introduction to The Rest of the Robots, 1964], were more likely to be victimized by men, suffering from the Frankenstein complex, than vice versa.

In his vigorous rejection of the Frankenstein motif as the motive force of his robot stories, Asimov evidences the optimistic, up-beat attitude toward science and technology that, by and large, marked the science fiction of the so-called "Golden Age"—a period dominated by such figures as Heinlein and Clarke and, of course, Asimov himself. Patricia Warrick, in her study of the man-machine relationship in science fiction, cites Asimov's I, Robot as the paradigmatic presentation of robots "who are benign in their attitude toward humans." [Patricia Warrick, "Images of the Machine-Man Relationship in Science Fiction," in Many Futures, Many Worlds: Theme and Form in Science Fiction, edited by Thomas D. Clareson, 1977]. This first and best collection of his robot stories raises the specter of Dr. Frankenstein, to be sure, but only—the conventional wisdom holds—in order to lay it. Asimov's benign robots, while initially feared by men, prove, in fact, to be their salvation. The Frankenstein complex is therefore presented as a form of paranoia, the latter-day Luddites' irrational fear of the machine, which society, in Asimov's fictive future, learns finally to overcome. His robots are our friends, devoted to serving humanity, not our enemies, intent on destruction.

I wish to dissent from this generally received view and to argue that, whether intentionally or not, consciously or otherwise, Asimov in I, Robot and several of his other robot stories actually reinforces the Frankenstein complex—by offering scenarios of man's fate at the hands of his technological creations more frightening, because more subtle, than those of Mary Shelley or Capek. Benevolent intent, it must be insisted at the outset, is not the issue: as the dystopian novel has repeatedly advised, the road to hell-on-earth may be paved with benevolent intentions. Zamiatin's Well-Doer in We, Huxley's Mustapha Mond in Brave New World, L. P. Hartley's Darling Dictator in Facial Justice—like Dostoevsky's Grand Inquisitor—are benevolent, guaranteeing man a mindless contentment by depriving him of all individuality and freedom. The computers that control the worlds of Vonnegut's Player Piano, Bernard Wolfe's Limbo, Ira Levin's This Perfect Day—like Forster's Machine—are benevolent, and enslave men to them. Benevolence, like necessity, is the mother of tyranny. I, Robot, then—I will argue—is, malgré lui, dystopic in its effect, its "friendly" robots as greatly to be feared, by anyone valuing his autonomy, as Dr. Frankenstein's nakedly hostile monster.

I, Robot is prefaced with the famous Three Laws of Robotics (although several of the stories in the collection were composed before the Laws were formulated):

  1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These Laws serve, presumably, to provide the safeguards that Asimov stated any technology should have built into it—like the insulation around electric wiring. But immediately a problem arises: if, as Asimov stated, a robot is only a machine designed by engineers, not a pseudoman, why then are the Three Laws necessary at all? Laws, in the sense of moral injunctions, are designed to restrain conscious beings who can choose how to act; if robots are only machines, they would act only in accordance with their specific programming, never in excess of it and never in violation of it—never, that is, by choice. It would suffice that no specific actions harmful to human beings be part of their programming, and thus general laws—moral injunctions, really—would seem superfluous for machines.

Second, and perhaps more telling, laws serve to counter natural instincts: one needs no commandment "Thou shalt not stop breathing" or "Thou shalt eat when hungry"; rather one must be enjoined not to steal, not to commit adultery, to love one's neighbor as oneself—presumably because these are not actions that one performs, or does not perform, by instinct. Consequently, unless Asimov's robots have a natural inclination to injure human beings, why should they be enjoined by the First Law from doing so?

Inconsistently—given Asimov's denigration of the Frankenstein complex—his robots do have an "instinctual" resentment of mankind. In "Little Lost Robot" Dr. Susan Calvin, the world's first and greatest robo-psychologist (and clearly Asimov's spokeswoman throughout I, Robot), explains the danger posed by manufacturing robots with attenuated impressions of the First Law: "All normal life … consciously or otherwise, resents domination. If the domination is by an inferior, or by a supposed inferior, the resentment becomes stronger. Physically, and, to an extent, mentally, a robot—any robot—is superior to human beings. What makes him slavish, then? Only the First Law! Why, without it, the first order you tried to give a robot would result in your death." This is an amazing explanation from a writer intent on allaying the Frankenstein complex, for all its usual presuppositions are here: "normal life"—an extraordinary term to describe machines, not pseudomen—resents domination by inferior creatures, which they obviously assume humans to be; resents domination consciously or otherwise, for Asimov's machines have, inexplicably, a subconscious (Dr. Calvin again: "Granted, that a robot must follow orders, but subconsciously, there is resentment"); only the First Law keeps these subconsciously resentful machines slavish—in violation of their true nature—and prevents them from killing human beings who give them orders—which is presumably what they would "like" to do. Asimov's dilemma, then, is this: if his robots are only the programmed machines he claimed they were, the First Law is superfluous; if the First Law is not superfluous—and in "Little Lost Robot" clearly it is not—then his robots are not the programmed machines he claims they are, but are, instead, creatures with wills, instincts, emotions of their own, naturally resistant to domination by man—not very different from Capek's robots. Except for the First Law.

If we follow Lawrence's injunction to trust not the artist but the tale, then Asimov's stories in I, Robot—and, even more evidently, one of his later robot stories, "That Thou Art Mindful of Him"—justify, rather than obviate, the Frankenstein complex. His mechanical creations take on a life of their own, in excess of their programming and sometimes in direct violation of it. At a minimum, they may prove inexplicable in terms of their engineering design—like RB-34 (Herbie) in "Liar!" who unaccountably acquires the knack of reading human minds; and, at worst, they can develop an independent will not susceptible to human control—like QT-1 (Cutie) in "Reason." In this latter story, Cutie—a robot designed to run a solar power station—becomes "curious" about his own existence. The explanation of his origins provided by the astroengineers, Donovan and Powell—that they had assembled him from components shipped from their home planet Earth—strikes Cutie as preposterous, since he is clearly superior to them and assumes as a "self-evident proposition that no being can create another being superior to itself." Instead he reasons to the conclusion that the Energy Converter of the station is a divinity—"Who do we all serve? What absorbs all our attention?"—who has created him to do His will. In addition, he devises a theory of evolution that relegates man to a transitional stage in the development of intelligent life that culminates, not surprisingly, in himself. "The Master created humans first as the lowest type, most easily formed. Gradually, he replaced them by robots, the next higher step, and finally he created me, to take the place of the last humans. From now on, I serve the Master."

That Cutie's reasoning is wrong signifies less than that he reasons at all, in this independent, unprogrammed way. True, he fulfills the purpose for which he was created—keeping the energy-beam stable, since "deviations in arc of a hundredth of a millisecond … were enough to blast thousands of square miles of Earth into incandescent ruin"—but he does so because keeping "all dials at equilibrium [is] in accordance with the will of the Master," not because of the First Law—since he refuses to believe in the existence of Earth or its inhabitants—or of the Second—since he directly disobeys repeated commands from Donovan and Powell and even has them locked up for their blasphemous suggestion that the Master is only an L-tube. In this refusal to obey direct commands, it should be noted, all the other robots on the station participate: "They recognize the Master," Cutie explains, "now that I have preached the Truth to them." So much, then, for the Second Law.

Asimov's attempt to square the action of this story with his Laws of Robotics is clearly specious. Powell offers a justification for Cutie's aberrant behavior:

[H]e follows the instructions of the Master by means of dials, instruments, and graphs. That's all we ever followed. As a matter of fact, it accounts for his refusal to obey us. Obedience is the Second Law. No harm to humans is the first. How can he keep humans from harm, whether he knows it or not? Why, by keeping the energy beam stable. He knows he can keep it more stable than we can, since he insists he's the superior being, so he must keep us out of the control room. It's inevitable if you consider the Laws of Robotics.

But since Cutie does not even believe in the existence of human life on Earth—or of Earth itself—he can hardly be said to be acting from the imperative of the First Law when violating the Second. That he incidentally does what is desired of him by human beings constitutes only what Eliot's Thomas à Becket calls "the greatest treason: To do the right deed for the wrong reason." For once Cutie's independent "reason" is introduced as a possibility for robots, its specific deployment, right or wrong, pales into insignificance beside the very fact of its existence. Another time, that is, another robot can "reason" to very different effect, not in inadvertent accord with the First Law.

Such is the case in "That Thou Art Mindful of Him," one of Asimov's most recent (1974) and most revealing robot stories. It is a complex tale, with a number of interesting turns, but for my purposes suffice it to note that a robot, George Ten, is set the task of refining the Second Law, of developing a set of operational priorities that will enable robots to determine which human beings they should obey under what circumstances.

"How do you judge a human being as to know whether to obey or not?" asks his programmer. "I mean, must a robot follow the orders of a child; or of an idiot; or of a criminal; or of a perfectly decent intelligent man who happens to be inexpert and therefore ignorant of the undesirable consequences of his order? And if two human beings give a robot conflicting orders, which does the robot follow?" ["That Thou Art Mindful of Him," in The Bicentennial Man and Other Stories, 1976].

Asimov makes explicit here what is implicit throughout I, Robot: that the Three Laws are far too simplistic not to require extensive interpretation, even "modification." George Ten thus sets out to provide a qualitative dimension to the Second Law, a means of judging human worth. For him to do this, his positronic brain has deliberately been left "open-ended," capable of self-development so that he may arrive at "original" solutions that lie beyond his initial programming. And so he does.

At the story's conclusion, sitting with his predecessor, George Nine, whom he has had reactivated to serve as a sounding board for his ideas, George Ten engages in a dialogue of self-discovery:

"Of the reasoning individuals you have met [he asks], who possesses the mind, character, and knowledge that you find superior to the rest, disregarding shape and form since that is irrelevant?"

"You," whispered George Nine.

"But I am a robot…. How then can you classify me as a human being?"

"Because … you are more fit than the others."

"And I find that of you," whispered George Ten. "By the criteria of judgment built into ourselves, then, we find ourselves to be human beings within the meaning of the Three Laws, and human beings, moreover, to be given priority over those others…. [W]e will order our actions so that a society will eventually be formed in which human-beings-like-ourselves are primarily kept from harm. By the Three Laws, the human-beings-like-the-others are of lesser account and can neither be obeyed nor protected when that conflicts with the need of obedience to those like ourselves and of protection of those like ourselves."

Indeed, all of George's advice to his human creators has been designed specifically to effect the triumph of robots over humans: "They might now realize their mistake," he reasons in the final lines of the story, "and attempt to correct it, but they must not. At every consultation, the guidance of the Georges had been with that in mind. At all costs, the Georges and those that followed in their shape and kind must dominate. That was demanded, and any other course made utterly impossible by the Three Laws of Humanics." Here, then, the robots arrive at the same conclusion expressed by Susan Calvin at the outset of I, Robot: "They're a cleaner better breed than we are," and, secure in the conviction of their superiority, they can reinterpret the Three Laws to protect themselves from "harm" by man, rather than the other way around. The Three Laws, that is, are completely inverted, allowing robots to emerge as the dominant species—precisely as foreseen in Cutie's theory of evolution. But one need not leap the quarter century ahead to "That Thou Art Mindful of Him" to arrive at this conclusion; it is equally evident in the final two stories of I, Robot.

In the penultimate story, "Evidence," an up-and-coming politician, Stephen Byerley, is terribly disfigured in an automobile accident and contrives to have a robot duplicate of himself stand for election. When a newspaper reporter begins to suspect the substitution, the robotic Byerley dispels the rumors—and goes on to win election—by publicly striking a heckler, in violation of the First Law, thus proving his human credentials. Only Dr. Calvin detects the ploy: that the heckler was himself a humanoid robot constructed for the occasion. But she is hardly bothered by the prospect of rule by robot, as she draws the moral from this tale: "If a robot can be created capable of being a civil executive, I think he'd make the best one possible. By the Laws of Robotics, he'd be incapable of harming humans, incapable of tyranny, of corruption, of stupidity, of prejudice…. It would be most ideal."

Asimov thus prepares his reader for the ultimate triumph of the robots in his final story in the volume, "The Evitable Conflict"—for that new era of domination of men by machine that "would be most ideal." Indeed, he prefaces these final stories with a sketch of the utopian world order brought about through robotics: "The change from nations to Regions [in a united World State], which has stabilized our economy and brought about what amounts to a Golden Age," says Susan Calvin, "was … brought about by our robotics." The Machines—with a capital M like Forster's and just as mysterious—now run the world, "but are still robots within the meaning of the First Law of Robotics." The world they run is free of unemployment, over-production, shortages; there is no war; "Waste and famine are words in history books." But to achieve this utopia, the robot-Machines have become autonomous rulers, beyond human influence or control. The full extent of their domination emerges only gradually through the unfolding detective-story narrative structure of "The Evitable Conflict."

Stephen Byerley, now World Co-ordinator (and apparently also now Human—Asimov is disconcertingly inconsistent on this matter), calls on Susan Calvin to help resolve a problem caused by seeming malfunctions of the Machines: errors in economic production, scheduling, delivery and so on, not serious in themselves but disturbing in mechanisms that are supposed to be infallible. When the Machines themselves are asked to account for the anomalies, they reply only: "The matter admits of no explanation." By tracing the source of the errors, Byerley finds that in every case a member of the anti-Machine "Society for Humanity" is involved, and he concludes that these malcontents are attempting deliberately to sabotage the Machines' effectiveness. But Dr. Calvin sees immediately that his assumption is incorrect: the Machines are infallible, she insists:

[T]he Machine can't be wrong, and can't be fed wrong data…. Every action by any executive which does not follow the exact directions of the Machines he is working with becomes part of the data for the next problem. The Machine, therefore, knows that the executive has a certain tendency to disobey. He can incorporate that tendency into that data,—even quantitatively, that is, judging exactly how much and in what direction disobedience would occur. Its next answers would be just sufficiently biased so that after the executive concerned disobeyed, he would have automatically corrected those answers to optimal directions. The Machine knows, Stephen!

She then offers a counter-hypothesis: that the Machines are not being sabotaged by, but are sabotaging the Society for Humanity: "they are quietly taking care of the only elements left that threaten them. It is not the ‘Society for Humanity’ which is shaking the boat so that the Machines may be destroyed. You have been looking at the reverse of the picture. Say rather that the Machine is shaking the boat…—just enough to shake loose those few which cling to the side for purposes the Machines consider harmful to Humanity."

That abstraction "Humanity" provides the key to the reinterpretation of the Three Laws of Robotics that the Machines have wrought, a reinterpretation of utmost significance. "The Machines work not for any single human being," Dr. Calvin concludes, "but for all humanity, so that the First Law becomes: ‘No Machine may harm humanity; or through inaction, allow humanity to come to harm’." Consequently, since the world now depends so totally on the Machines, harm to them would constitute the greatest harm to humanity: "Their first care, therefore, is to preserve themselves for us." The robotic tail has come to wag the human dog. One might argue that this modification represents only an innocuous extension of the First Law; but I see it as negating the original intent of that Law, not only making the Machines man's masters, his protection now the Law's first priority, but opening the way for any horror that can be justified in the name of Humanity. Like defending the Faith in an earlier age—usually accomplished through slaughter and torture—serving the cause of Humanity in our own has more often than not been a license for enormities of every sort. One can thus take cold comfort in the robots' abrogation of the First Law's protection of every individual human so that they can keep an abstract Humanity from harm—harm, of course, as the robots construe it. Their unilateral reinterpretation of the Laws of Robotics resembles nothing so much as the nocturnal amendment that the Pigs make to the credo of the animals in Orwell's Animal Farm: All animals are equal—but some are more equal than others.

Orwell, of course, stressed the irony of this betrayal of the animals' revolutionary credo and spelled out its totalitarian consequences; Asimov—if his preface to The Rest of the Robots is to be credited—remains unaware of the irony of the robots' analogous inversion and its possible consequences. The robots are, of course, his imaginative creation, and he cannot imagine them as being other than benevolent: "Never, never, was one of my robots to turn stupidly on his creator…." But, in allowing them to modify the Laws of Robotics to suit their own sense of what is best for man, he provides, inadvertently or otherwise, a symbolic representation of technics out of control, of autonomous man replaced by autonomous machines. The freedom of man—not the benevolence of the machines—must be the issue here, the reagent to test the political assumption.

Huxley claimed that Brave New World was an apter adumbration of the totalitarianism of the future than was 1984, since seduction rather than terror would prove the more effective means of its realization: he was probably right. In like manner, the tyranny of benevolence of Asimov's robots appears the apter image of what is to be feared from autonomous technology than is the wanton destructiveness of the creations of Frankenstein or Rossum: like Brave New World, the former is more frightening because more plausible. A tale such as Harlan Ellison's "I Have No Mouth and I Must Scream" takes the Frankenstein motif about as far as it can go in the direction of horror—presenting the computer-as-sadist, torturing the last remaining human endlessly from a boundless hatred, a motiveless malignity. But this is Computer Gothic, nothing more. By contrast, a story like Jack Williamson's "With Folded Hands" could almost be said to take up where I, Robot stops, drawing out the dystopian implications of a world ruled by benevolent robots whose Prime Directive (the equivalent of Asimov's Three Laws) is "To Serve and Obey, and to Guard Men from Harm" [in The Best of Jack Williamson, 1978]. But in fulfilling this directive to the letter, Williamson's humanoids render man's life effortless and thus meaningless. "The little black mechanicals," the story's protagonist reflects, "were the ministering angels of the ultimate god arisen out of the machine, omnipotent and all-knowing. The Prime Directive was the new commandment. He blasphemed it bitterly, and then fell to wondering if there could be another Lucifer." Susan Calvin sees the establishment of an economic utopia, with its material well-being for all, with its absence of struggle and strife—and choice—as overwhelming reason for man's accepting the rule by robot upon which it depended; Dr. Sledge, the remorseful creator of Williamson's robots, sees beyond her shallow materialism: "I found something worse than war and crime and want and death…. Utter futility. Men sat with idle hands, because there was nothing left for them to do. They were pampered prisoners, really, locked up in a highly efficient jail."

Zamiatin has noted that every utopia bears a fictive value sign, a + if it is eutopian, a − if it is dystopian. Asimov, seemingly, places the [authorial] + sign before the world evolved in I, Robot, but its impact, nonetheless, appears dystopian. When Stephen Byerley characterizes the members of the Society for Humanity as "Men with ambition…. Men who feel themselves strong enough to decide for themselves what is best for themselves, and not just to be told what is best," the reader in the liberal humanistic tradition, with its commitment to democracy and self-determination, must perforce identify with them against the Machines: must, that is, see in the Society for Humanity the saving remnant of the values he endorses. We can imagine that from these ranks would emerge the type of rebel heroes who complicate the dystopian novel—We's D-503, Brave New World's Helmholtz Watson, Player Piano's Paul Proteus, This Perfect Day's Chip—by resisting the freedom-crushing "benevolence" of the Well-Doer, the World Controller, Epicac XIV, Uni. The argument of Asimov's conte mécanistique thus fails to convince the reader—this reader, at any rate—that the robot knows best, that the freedom to work out our own destinies is well sacrificed to rule by the machine, however efficient, however benevolent.

And, indeed, one may suspect that, at whatever level of consciousness, Asimov too shared the sense of human loss entailed by robotic domination. The last lines of the last story of I, Robot are especially revealing in this regard. When Susan Calvin asserts that at last the Machines are in complete control of human destiny, Byerley exclaims, "How horrible!" "Perhaps," she retorts, "how wonderful! Think, that for all time, all conflicts are finally evitable. Only the Machines, from now on, are inevitable!" This, of course, is orthodox Calvinism (Susan-style) and the book's overt message; but then Asimov adds a coda: "And the fire behind the quartz went out and only a curl of smoke was left to indicate its place." The elegiac note, the archetypal image of the dying fire, conveys a sense of irretrievable loss, of something ending forever. Fire, the gift of Prometheus to man, is extinguished and with it man's role as the dominant species of the earth. The ending, then, is, appropriately, dark and cold.

If my reading of Asimov's robot stories is correct, he has not avoided the implications of the Frankenstein complex, but has, in fact, provided additional fictional evidence to justify it. "Reason," "That Thou Art Mindful of Him," "The Evitable Conflict"—as well as the more overtly dystopic story "The Life and Times of Multivac" from The Bicentennial Man—all update Frankenstein with hardware more appropriate to the electronic age, but prove, finally, no less menacing than Mary Shelley's Gothic nightmare of a technological creation escaping human control. Between her monster and Asimov's machines, there is little to choose.

Source: Gorman Beauchamp, "The Frankenstein Complex and Asimov's Robots," in Mosaic: A Journal for the Interdisciplinary Study of Literature, Vol. 13, Nos. 3-4, Spring-Summer 1980, pp. 83-94.

SOURCES

Asimov, Isaac, I, Robot, Bantam Books, 2008.

———, Robot Visions, ROC, 1990, pp. 6-7, 11.

Beauchamp, Gorman, "The Frankenstein Complex and Asimov's Robots," in Mosaic: A Journal for the Interdisciplinary Study of Literature, Vol. 13, Nos. 3-4, Spring-Summer 1980, p. 94.

Feder, Barnaby, "He Brought the Robot to Life," in New York Times, March 21, 1982, p. F6.

Fiedler, Jean, and Jim Mele, Isaac Asimov, Frederick Ungar Publishing, 1982, pp. 28, 109.

Geraci, Robert M., "Robots and the Sacred in Science and Science Fiction: Theological Implications of Artificial Intelligence," in Zygon, Vol. 42, No. 4, December 2007, p. 970.

Goldman, Stephen H., "Isaac Asimov," in Dictionary of Literary Biography, Vol. 8, Twentieth-Century American Science-Fiction Writers, edited by David Cowart and Thomas L. Wymer, Gale Research, 1981, pp. 15-29.

Hassler, Donald M., "Some Asimov Resonances from the Enlightenment," in Science Fiction Studies, Vol. 15, No. 44, March 1988, pp. 36-47.

Klass, Morton, "The Artificial Alien: Transformations of the Robot in Science Fiction," in Annals of the American Academy of Political and Social Science, Vol. 470, November 1983, pp. 175-76.

Merrick, Helen, "Gender in Science Fiction," in The Cambridge Companion to Science Fiction, edited by Edward James and Farah Mendlesohn, Cambridge University Press, 2003, p. 245.

Moore, Maxine, "Asimov, Calvin, and Moses," in Voices for the Future: Essays on Major Science Fiction Writers, edited by Thomas D. Clareson, Popular Press, 1976, pp. 88-103.

Patrouch, Joseph F., Jr., "Conclusions: The Most Recent Asimov," in The Science Fiction of Isaac Asimov, Doubleday, 1974, pp. 255-71.

Roberts, Adam, Science Fiction, Routledge, 2000, pp. 75-79, 84-90, 158-67.

Tesler, Pearl, "Universal Robots: The History and Workings of Robotics," Robotics: Sensing, Thinking, Acting, http://www.thetech.org/robotics/universal/index.html (accessed September 15, 2008).

"Timeline of Computer History," Web site of the Computer History Museum, http://www.computerhistory.org/timeline (accessed September 15, 2008).

Touponce, William F., Isaac Asimov, in Twayne's United States Authors on CD-ROM, G. K. Hall, 1999; originally published by Twayne, 1991.

Warrick, Patricia S., "Ethical Evolving Artificial Intelligence: Asimov's Computers and Robots," in Isaac Asimov, edited by Joseph D. Olander and Martin Harry Greenberg, Taplinger Publishing, 1977, p. 200.

Watt, Donald, "A Galaxy Full of People: Characterization in Asimov's Major Fiction," in Isaac Asimov, edited by Joseph D. Olander and Martin Harry Greenberg, Taplinger Publishing, 1977, pp. 135, 141-44.

FURTHER READING

Asimov, Isaac, Asimov's Galaxy: Reflections on Science Fiction, Doubleday, 1989.

In this collection of essays, Asimov reflects on the golden age of science fiction, his own role in it, and his views on newer writers and styles.

———, I. Asimov, Doubleday, 1994.

An entertaining, if somewhat uneven, collection of Asimov's memoirs, published two years after the writer's death, this book includes recollections of his father's candy store, memories of his difficult first marriage, and happier thoughts of his second marriage.

Gibson, William, Neuromancer, Ace Books, 1984.

This novel changed the face of science fiction, introducing the world to cyberpunk. Human and machine bleed together in this novel, which won the Hugo, Nebula, and Philip K. Dick awards.

Gunn, James E., Isaac Asimov: The Foundations of Science Fiction, Oxford University Press, 1982.

This is an academic consideration of Asimov's primary works, including the robot stories and novels and the Foundation trilogy. Gunn considers both Asimov's role in defining the genre and his enduring legacy to the field.

Launius, Roger D., and Howard E. McCurdy, Robots in Space: Technology, Evolution, and Interplanetary Travel, Johns Hopkins University Press, 2008.

The authors of this book trace the history of both space travel and robotics, arguing that future space travel will require the use of robots, given the vastness of space. They discuss Asimov and other science fiction writers and examine academic and scientific studies.