Robots and Robotics

Robots are programmable machines capable of moving around in and interacting with their physical environment. The word robot was popularized by Karel Capek (1890–1938) in his play R.U.R., where he used it to refer to a race of manufactured humanoid slaves: machines that can do the work of humans. It is debatable whether merely remote-controlled devices should count as robots, although many devices popularly thought of as robots are of this nature. Similarly, computer programs such as virtual "autonomous agents" and web "bots" are not, strictly speaking, robots, because they lack the ability to manipulate the physical world.

The term robotics was coined by Isaac Asimov and refers to the study and use of robots. Research into robotics began in the 1940s, alongside research into cybernetics and computers. The first commercial robots were produced for industrial applications in manufacturing in the 1960s. As computing technology began to improve rapidly in the 1980s and 1990s, a number of writers such as Hans Moravec (1998) and Ray Kurzweil (1992) made arguably exaggerated claims on behalf of robots, suggesting that they would soon possess consciousness and intelligence. Major limitations remain on the tasks that robots can perform, especially in real-world environments, largely because of a lack of success in reproducing "intelligence" and in building robust systems for locomotion and sensing. The vast majority of existing robots are industrial robots, which perform a limited range of repetitive tasks in a controlled environment.

The ethical, political, and legal issues surrounding robots can be roughly grouped into two categories: those that are raised by existing technologies and a more speculative set that would arise if genuinely "intelligent" or conscious robots were to become a reality.

Existing technologies largely raise questions relating to their social impact (Wiener 1961). The main impact of robotics thus far has been to displace persons from jobs in manufacturing industries. It might be argued that by replacing workers in industries where jobs tended to be both highly paid and skilled, robots have had a negative impact on human happiness. Alternatively, it might be argued that robots have contributed to human happiness by eliminating the necessity of repetitive and occasionally dangerous work. The economies of scale and other increases in efficiency that robotics has made possible would also need to be taken into account in this calculation. Access to robots could conceivably become a source of inequality in a society where robots play a significant role.

Another area where it seems likely that robots will have dramatic social impacts is warfare. A number of remote-controlled and semi-autonomous devices are already deployed by militaries around the world, and it seems likely that fully autonomous robots will play a role in wars conducted by industrialized nations in the future.

The use of robots in military contexts raises many difficult ethical and legal issues. They promise to reduce casualties among friendly combatants, but in doing so may lower the threshold for going to war. "Smart weapons" may allow commanders to attack military targets with greater precision and thus lower the risk of civilian casualties in war. However, the possession of such weapons by one side only may increase the likelihood and extent of asymmetrical warfare, and consequently of civilian casualties. There are also ethical and legal questions surrounding the allocation of responsibility for deaths caused when such weapons go astray, resulting in attacks on targets that are not legitimate under the rules of war.

More prosaically, a number of quite advanced robots are now manufactured as entertainment devices and "robot pets." The development of robot toys suggests a need to scrutinize the educative and communicative functions of these robots. There are also questions surrounding the ethics of human-robot interactions. Are robots appropriate objects of emotional attitudes? If not, then designing robots to encourage such emotional investment may be wrong.

A much larger and more complex, though also more speculative, set of issues would arise if robots were to achieve some degree of consciousness or genuine intelligence. At what point would such creations deserve moral concern? What rights should they have? While these questions are regularly raised by writers in the area, little serious philosophical work has been done on them, perhaps reflecting a lack of faith that the technology will become a reality.

Yet much contemporary moral theory, which grounds moral status in the capacities of individuals, suggests that sentient robots would be deserving of the same moral regard as other sentient creatures. If robots can feel pain, then humans will have obligations to avoid causing them pain. If they become self-conscious, can reason, and have future-oriented desires, then they will be worthy of the same moral regard and respect as human persons. This suggests that it would be entirely appropriate to feel grief-stricken by the "death" of a robot, to feel remorse for killing a robot, and even sometimes to choose to save the life of a robot over that of a human being.

This last scenario might serve as a test of the moral status of robots. Humans will know that robots are moral persons when they feel that the choice between the survival of a robot and that of a person is a genuine moral dilemma. This might be called the "Turing Triage Test," after Alan Turing's famous test for when a machine can be said to think. If this test is valid, it suggests that what is required for robots to become persons may include the ability to express subtle and complex emotional states through their bodily appearance.

As well as the question of how people should treat robots, there is the question of how robots should be expected to treat people. What ethical precepts should they be designed to obey? Isaac Asimov's "three laws of robotics" are a famous attempt to answer some of these questions. Yet, as Asimov's stories demonstrate, much more will need to be done before humans can be confident that intelligent robots could safely take their place alongside humanity. These questions would become especially urgent if artificially intelligent robots became capable of reproducing themselves and thereby posed a threat to the human species. If robotics researchers are on the verge of creating entities that will be more intelligent than humans and that may compete with humanity for dominance over the planet, then this is a momentous decision, which should be made only after extensive public deliberation.

ROBERT SPARROW

SEE ALSO Androids; Artificial Intelligence; Artificial Morality; Asimov, Isaac; Robot Toys; Turing Tests.

BIBLIOGRAPHY

Asimov, Isaac. (1950). I, Robot. New York: Gnome Press. A collection of science fiction short stories that did much to popularize the idea of robots.

Brooks, Rodney Allen. (2003). Robot: The Future of Flesh and Machines. London: Allen Lane. A popular account of the history and probable future of robots and robotics by a leading robotics researcher.

Kurzweil, Ray. (1992). The Age of Intelligent Machines. Cambridge, MA: MIT Press. A history of research into "artificial intelligence" alongside an extremely speculative discussion of possible future developments in this area, including the ethical and social issues that they may raise.

Menzel, Peter, and Faith D'Aluisio. (2000). Robo Sapiens: Evolution of a New Species. Cambridge, MA: MIT Press. A survey of the state of robotics research at the beginning of the twenty-first century in the form of a photo-essay and a set of interviews with researchers from around the world.

Moravec, Hans. (1998). Robot: Mere Machine to Transcendent Mind. New York: Oxford University Press. An extremely speculative discussion of the possible future of robotics, which argues that machines will be more intelligent than humans by the year 2050.

Turing, Alan. (1950). "Computing Machinery and Intelligence." Mind 59(236): 433–460. An influential paper in which Turing sets out his famous "imitation game" as a means of determining when a machine can properly be said to think.

Wiener, Norbert. (1961). Cybernetics: Or Control and Communication in the Animal and the Machine, 2nd edition. Cambridge, MA: MIT Press. An important work that sets out the theoretical basis for the discipline of cybernetics—and therefore for robotics. It also includes discussion of the possible social impact of the development of robots.
