Androids are mechanical, or otherwise artificial, creations in the shape of humans. They have long been a staple of science fiction. From the clockwork persons of myth to Isaac Asimov's humanoid robots, Star Wars's C-3PO, and Steven Spielberg's A.I. Artificial Intelligence, imagined mechanical persons have enabled people to reflect upon what it means to be human.
Real-world androids are substantially more mundane than their science-fiction counterparts. Although there exists a long history of clockwork automata and other mechanical imitations of persons, these have never been more than theatrical curiosities. The creation of more ambitious androids has had to await advances in robotics. Until the 1990s, the difficulty of building a robot that could walk on two legs prevented robots from taking humanoid form. Yet if robotics technology continues to improve, then it seems likely that robots shaped like, and perhaps even behaving like, human beings will be manufactured within the twenty-first century.
For the purpose of considering the ethical issues they may raise, androids can be divided into three classes: those that are merely clever imitations of human beings; hypothetical, fully fledged "artificial persons"; and, in between, intelligent artifacts whose capacities are insufficient to qualify them as moral persons.
Existing androids are at most clever imitations of people, incapable of thought or independent behavior, and consequently raise a limited range of ethical questions. The use of animatronics in educational and recreational contexts raises questions about the ethics of representation and communication akin to those treated in media ethics. A more interesting set of questions concerns the ethics of human/android relations. Even clever imitations of human beings may be capable of a sufficient range of responses for people to form relationships with them, which may then be subject to ethical evaluation. That is, people's behavior and attitudes towards such androids may say something important about those people themselves. Moreover, the replacement of genuine ethical relations with ersatz relations may be considered ethically problematic. This suggests that some uses of androids, for instance as substitute friends, caregivers, or lovers, are probably unethical.
Any discussion of the ethical issues surrounding "intelligent" androids is necessarily speculative, as the technology is so far from realization. Yet obvious issues would arise should androids come to possess any degree of sentience. The questions about the ethics of android/human relationships outlined above arise with renewed urgency, because the fact of intelligence on the part of the android widens the scope for these relationships. If androids are capable of suffering, then the question of the moral significance of their pain must be addressed. Once one admits that androids have internal states that are properly described as pain, then it would seem that one should accord this pain the same moral significance as one does the pain of other sentient creatures.
There is also a set of important questions concerning the design and manufacture of such entities. What capacities should they be designed with? What inhibitions should be placed on their behavior? What social and economic roles should they be allowed to play? If androids were to move out of the research laboratory, a set of legal issues would also need to be addressed. Who should be liable for damage caused by an android? What rights, if any, should be possessed by androids? What penalties should be imposed for cruelty to, or for "killing," an android? Ideally, these questions would need to be resolved before such entities are created.
However, the major ethical issue posed by sentient androids concerns the point at which they move from being intelligent artifacts to "artificial persons," that is, the point at which they become worthy of the same moral regard that individuals extend to other (human) people around them. If it is possible to manufacture self-conscious and intelligent androids, then presumably at some point it will be possible to make them as intelligent as, or indeed more intelligent than, humans. It would seem morally arbitrary to deny such entities the same legal and political rights granted to human beings.
Importantly, any claim that this point has been reached necessitates a particular set of answers to the questions outlined above. If androids become moral persons, then it is not only morally appropriate but required that humans respond to the death of an android with the same set of moral responses as they do to the death of a human person; for instance, with horror, grief, and remorse. This observation alone is enough to suggest that the creation of artificial persons is likely to be more difficult than is sometimes supposed.