Airplanes to Arcades: The Development of Virtual Reality
Imagine an underground chamber like a cave, with a long entrance open to the daylight and as wide as the cave. In this chamber are men who have been prisoners since they were children, their legs and their necks being so fastened that they can only look straight ahead of them and cannot turn their heads. Some way off, behind and higher up, a fire is burning, and between the fire and the prisoners and above them runs a road, in front of which a curtain-wall has been built, like the screen at puppet shows between the operators and their audience. . . . Our prisoners could [not] see anything . . . except for the shadows thrown by the fire on the wall of the cave opposite them.3
The ancient Greek philosopher Plato wrote these words about twenty-four hundred years ago. He used the striking image of the prisoners in the cave to illustrate his belief that what human beings thought of as real objects were merely poor imitations of "ideal" forms that could only be imagined. (In his metaphor, humans were the prisoners, and the shadows on the cave wall were the false reality that they perceived.) But, except for the idea that viewers were held in their cave by force, he could just as well have been describing people experiencing virtual reality. To these people, too, a shadowy, make-believe world can seem real.
Showing the Third Dimension
The pictures on the walls of today's VR "caves" are created by computers, not by flickering flames. Attempts to make imagined visions seem real, though, are far older than computers. Indeed, they may have started in actual caves more than fifteen thousand years ago. Stone Age artists covered the walls of caves at such sites as Lascaux, France, and Altamira, Spain, with vividly colored drawings of bison, reindeer, and other animals. Some archaeologists believe that, perhaps as part of religious rituals, viewers were led into these caves in darkness, following winding paths deep within the earth. When candles or torches were lit, the animals on the cave walls would have seemed to leap out of the blackness. The viewers might well have believed that they had entered a spirit world.
As time went on, artists found many ways to imitate or re-create reality. Beginning in Greece around Plato's time, painters of theatrical backdrops suggested a third dimension of depth or distance by showing supposedly faraway objects as smaller than near ones and by making lines that in reality would be parallel (those marking the sides of a road, for instance) converge at a point. A few centuries later, artists in the Roman Empire used similar tricks when they decorated walls in the homes of wealthy citizens with views of sunny gardens and orchards.
In the nineteenth century, inventors found a different way to suggest the third dimension, by drawing on the fact that humans perceive depth by combining images received by their right and left eyes, which are several inches apart and therefore view a subject from slightly different angles. Perhaps the first invention to use this new technique was the stereoscope, which a British man named Charles Wheatstone created in 1833. It used two small drawings of the same scene that had been made from slightly different positions. The drawings were mounted on the sides of a wooden frame that the viewer held in front of his or her face. Mirrors in the center of the framework reflected the pictures into the viewer's eyes. When the frame was adjusted to the right distance, the viewer's brain combined the two images into a single one that looked three-dimensional. Later inventors improved the stereoscope, and it became very popular in homes of the late nineteenth and early twentieth centuries. By then, the paired images used in the device were photographs rather than drawings.
A German teacher named Wilhelm Rollmann used a somewhat different technique to create pairs of images that blended into one three-dimensional picture. Beginning around 1853, he made one drawing
of each pair in green ink and the other in red. The drawings were placed in a stereoscope-like device so that one eye saw the green image and the other the red one. Some filmmakers in the early 1950s adapted Rollmann's approach to create movies that provided at least some illusion of a third dimension. Viewers watched the movies through cardboard-framed glasses with transparent plastic lenses, one tinted red and the other green. When seen through these glasses, the movies' overlapping red and green images formed a single picture that seemed to pop out of the screen. This way of making films was expensive, though, and the illusion of three dimensions was not very convincing. The fad for 3-D movies therefore soon died out.
Morton Heilig, an American inventor, took the 3-D concept even further in an amusement park ride he created around 1960, which he called Sensorama. The rider sat in a movable bucket seat and looked into a periscope-like viewer with a small 3-D movie screen. The viewer also contained stereo speakers, fans, and a device that sprayed out liquids with different smells. When watching a Sensorama movie that showed a motorcycle trip through a city, riders felt the vibrations and turns of traveling over potholed pavements, heard traffic passing by, felt wind in their faces, and smelled food odors from nearby restaurants. Heilig's ride was a crude attempt to create the feeling of immersion, appealing to all the senses, that later virtual reality would also seek. Technology was not yet ready to support him, however. His Sensorama machines were unreliable, and few were sold.
"Flying" on the Ground
Meanwhile, other researchers and organizations were creating illusions for more serious purposes. Commercial flying had begun to develop in the late 1920s, producing a great need for pilots, but training pilots in the air risked both expensive machinery and the lives of the student pilots and their instructors. Around 1930, therefore, Edwin Link, once a maker of pipe organs and player pianos, invented a way for pilots to take some of their flight training on the ground. His "Link trainers" consisted of a mock-up of the controls in a plane, mounted on a platform that changed its angle when the student pilot on the platform moved the controls. Unlike Heilig's Sensorama rides, Link's trainers were interactive: their users' actions affected the display. Interactivity would prove to be as important a part of virtual reality as the feeling of immersion that Heilig tried to create.
The army and navy quickly adapted Link's trainers to teach military pilots. By the early 1940s, when thousands of pilots had to be trained quickly to fight in World War II, the military services had added projection of films taken in actual airplanes to the trainers to make the imitation, or simulation, of flight more realistic. Video replaced film in the late 1940s, about the same time a third essential element of virtual reality—computers—began to develop.
In the last year of World War II, the navy used the Harvard Mark I, an early electromechanical computer, to calculate the paths of artillery shells and missiles. The army hoped to do the same with ENIAC (Electronic Numerical Integrator and Computer), an electronic computer invented by John Mauchly and J. Presper Eckert of the University of Pennsylvania, but by the time ENIAC was finished in 1946, the war had ended. ENIAC contained thousands of vacuum tubes, something like light bulbs, attached to huge circuit boards. Patterns of vacuum tubes stood for numbers, and ENIAC calculated by turning tubes on and off to change the patterns.
Large businesses began to buy some of ENIAC's descendants a few years later. These early commercial computers filled whole rooms. They were extremely expensive, hard to use, and likely to break down, and they had far less computing power than the cheapest handheld devices today. Nonetheless, compared to humans or even the Mark I, they performed their calculations with amazing speed. After their vacuum tubes were replaced by more dependable devices called transistors in the early 1960s, computers slowly became smaller, cheaper, and more reliable. More businesses and universities began to use them.
Meanwhile, the military also continued to use computers. In the 1950s, the Lincoln Laboratory, a computer research center at the Massachusetts Institute of Technology (MIT), developed a military computer program called the Semi-Automatic Ground Environment, or SAGE. Intended to warn of a possible bomber or missile attack from the Soviet Union, SAGE fed data from numerous radar stations into computers. The computers analyzed the data and put it into a form that could guide interceptor planes to targets that the radar detected.
Unlike other computer programs of the time, which showed their results on punched cards or paper printouts, SAGE displayed its information on video screens. The information appeared in real time, almost as fast as the radar stations gathered the data on which it was based. Few other computers of the era could, or needed to, update their displays so quickly. A third advance was that SAGE operators selected targets for further attention by using a light pen, which interacted with chemicals called phosphors on the inside of the video screens to send electronic signals to the computer. All three of these features later became common to computers in general and were particularly important in virtual reality.
The Sword of Damocles
A graduate student named Ivan Sutherland joined the Lincoln Laboratory in 1960. Three years later he completed a program called Sketchpad, which, like SAGE, used a video screen and a light pen. A user of Sketchpad could draw lines by touching the screen with the pen and then modify them by typing instructions on a keyboard. The program also let users enlarge or reduce their drawings, save them, and reproduce them. Most of these features had never appeared in a computer program before. Sutherland's invention was the start of computer graphics, an essential part of virtual reality.
Sutherland's inventiveness by no means stopped with Sketchpad. In 1965 he wrote an essay called "The Ultimate Display," in which he described an ideal computer system that allowed users to manipulate "objects" made from data, changing their shape and position on a screen, just as people can move physical objects in the real world. He predicted that scientists would use such displays to test theories in ways that they could never do in reality.
Sutherland realized that the technology to create his ultimate display did not yet exist, but he thought he could make a simpler device that would have some of its features. In 1966, supported by the Department of Defense, which hoped that his invention could be used in improved flight trainers, he began building what he called the Sword of Damocles. He took the name from an ancient Greek legend that told of a king who forced a courtier to eat a meal while seated under a sword suspended above his head by a single hair. Like the mythical sword, Sutherland's creation, a bulky helmet that covered a wearer's entire head, was suspended from the ceiling, though a sturdy metal rod took the place of the hair. The helmet could be turned, but its wearer, like the courtier, had to remain in more or less the same spot.
A set of glass prisms in Sutherland's helmet reflected images from two small video monitors into the helmet wearer's eyes. A computer supplied the images to the monitors. As in the old stereoscopes, each monitor showed a slightly different view of the same object, which the wearer's brain combined into a single three-dimensional picture. Unlike the stereoscope images, however, the pictures in the monitors changed when the viewer moved his or her head to look in a different direction. Sensors in the rod and helmet detected the head movement and passed information about it to the computer, which altered the display. In other words, like Link's pilot trainers, Sutherland's helmet was interactive. The Sword of Damocles was the first head-mounted display (HMD), which became one of the two chief kinds of devices through which people experience virtual reality.
Gloves and Helmets
In the early 1970s, Frederick Brooks and others at the University of North Carolina (UNC), Chapel Hill, added the element of touch to HMDs. The Sword of Damocles had included a control wand that a wearer could use to "move" objects shown on the computer screens, but Brooks's team carried this idea further. They developed a handgrip called GROPE-II, which contained tiny motors that pushed back against, or resisted, users' hand movements, a process called force feedback. The resistance could be adjusted, and variations in it created the sensation that a person was handling actual objects. UNC chemists used GROPE-II and its 1980s successor, GROPE-III, to gain the sensation of moving molecules, discovering which ones could be fitted together to make new substances.
Scientists sponsored the development of GROPE-II, but the military and the commercial airlines paid for most of the research on simulation devices. Both were looking for better flight simulators to train pilots to use planes that grew more complicated every year. The cockpits of the newest aircraft included heads-up displays, which projected information about altitude, speed, and so on at eye level
in a form that pilots could see but also see through. These displays meant that during battle, for instance, military pilots did not have to take their eyes off enemy aircraft and look down to obtain this information.
Working at Wright-Patterson Air Force Base in Ohio in the late 1960s, Thomas W. Furness III began trying to incorporate a heads-up display into a pilot's helmet. He wanted to keep the display in front of the pilot's gaze no matter which way the pilot looked. He also hoped to make the display visible against both bright and dark backgrounds. By the early 1970s, Furness's team had created HMDs something like Sutherland's Sword of Damocles. The HMDs included sensors that tracked head position and used computer graphics, which had replaced video in trainer simulations by then.
Furness's group went on to make a helmet that blocked out the real world almost entirely, replacing it with three-dimensional computer graphics. Furness called this system a virtual cockpit, or, more formally, the Visually Coupled Airborne Systems Simulator (VCASS). Completed in 1981, the VCASS helmet was so big and bulky that it reminded people of the one worn by Star Wars archvillain Darth Vader. Nonetheless, fighter pilots praised it.
Furness's team went on to build the SuperCockpit, an improved version of VCASS that allowed pilots to see the real world and a virtual display at the same time. Visual and sound systems in the helmet gave the pilot the feeling of being in a three-dimensional environment. Computers generated the environment and could change it according to pilots' actions. Pilots wearing the SuperCockpit could select targets by gazing in particular directions and could fire weapons with voice commands. Force-feedback gloves gave them a sensation of touch when they pressed virtual buttons or triggers displayed in the air in front of them. These features gave the SuperCockpit, finished in 1986, all the elements of future virtual reality systems. The test model that Furness's group built, however, was the only one ever made. Even the military could not afford to put the SuperCockpit into regular use.
Cheaper and Lighter
The National Aeronautics and Space Administration (NASA) was a government agency, but it had nothing like the military's giant budget. Therefore, when Michael McGreevy, a scientist at NASA's Ames Research Center in Mountain View, California, decided to experiment with simulations in the early 1980s, he knew he had to use technology far less expensive than that in VCASS and the SuperCockpit. Instead of cathode-ray tubes (CRTs) like those used in television sets, he decided to use liquid crystal displays (LCDs), which had recently been developed. These small displays, sold at the time as mini-TV sets, did not produce pictures as sharp as those in VCASS's CRTs, and, unlike VCASS's displays, they showed no color—but they cost only a few hundred dollars each. McGreevy's team incorporated two of these displays into a mask, along with wide-angle lenses and position sensors, to create a device he called VIVED (Virtual Visual Environment Display) in 1984. The mask had the great advantages of being lightweight and costing a mere $2,000 instead of VCASS's $1 million. However, it also had a major drawback: it was not interactive.
By this time, computers had become so small, cheap, and reliable that most businesses had one. Ordinary people were even beginning to use them in their homes. Professionals in a number of fields, including art and entertainment as well as science and education, started experimenting with ways to make computer displays imitate reality. Among them were the developers of video games, which had come into existence about a decade earlier. Thomas Zimmerman and Jaron Lanier, working at first for the video game company Atari and later on their own, developed a glove that contained magnetic position trackers and optical fiber sensors that could tell a computer both where a wearer's hand moved and how the fingers bent. The computer altered its displays accordingly. They began selling their DataGlove, as they called it, about the time Furness's SuperCockpit was completed.
Scott Fisher, a programmer who had gone from Atari to McGreevy's team at NASA, decided to combine the two groups' technologies. He bought a DataGlove
and, with Zimmerman's help, adapted it to VIVED. He added an improved stereo system that imitated the way sound changes in three dimensions, as well as other technology that let the computer creating the display respond to voice commands. Fisher called the complete combination VIEW, or Virtual Interface Environment Workstation. Among other things, NASA used it to develop a "virtual wind tunnel" for testing parts of the space shuttle.
VIEW's graphics were primitive, its display blurry, and its tracking system too slow to keep up with a user's movements. Still, it was inexpensive and fairly comfortable to wear and use, and it freed users from direct attachment to the computer. Best of all, it created a feeling of what Fisher called telepresence—the impression that its user was actually inside the virtual environment. "Thus," writes virtual reality pioneer Mark Pesce, "was virtual reality born."4
Virtual Reality's Boom and Bust
The term virtual reality did not yet exist, however. Jaron Lanier coined it in the late 1980s to describe a new system he called RB2, or "Reality Built for Two," which would allow two users to share a computer-generated environment. Lanier stressed in a 2002 interview that he had been "interested in having more than one person at a time in [the] computer-generated world, so that those people could see each other and share the world as a means of communication. To me, the term 'world' refers to what's out there outside of you, but the term 'reality' refers to what you share with other people . . . [and] have to interact with."5 Although several other terms with more or less the same meaning as virtual reality, including artificial reality, virtual worlds, and immersive computing, were created and are sometimes still used, Lanier's term has remained most popular.
Media stories about virtual reality began appearing everywhere in the early 1990s. Video gamers and fans of fantasy games such as Dungeons and Dragons™ hoped that this new technology would make the games more thrilling than ever. Scientists and engineers looked forward to studying and manipulating virtual objects, ranging from houses and cars to molecules, in ways that had never been possible before. Businesspeople dreamed of instant wealth as they set up companies to manufacture virtual reality hardware and software. To all these groups, and an excited public as well, Ivan Sutherland's ultimate display seemed just around the corner.
Virtual reality devices, however, remained bulky, expensive, and unreliable, and the computers of the day lacked the speed and power to make VR illusions really convincing. When the true state of virtual reality technology became obvious, people lost interest in the field, and many companies and organizations devoted to it went bankrupt. As late as 2000, VR pioneer Frederick Brooks complained that the technology still "barely worked."6 In the first years of the new century, however, advances in computers and VR devices have begun to stir excitement about virtual reality once again.