Computer Science: Microchip Technology

Introduction

Microchips—also called silicon chips, integrated circuits, and several other terms—are small, thin, rectangular chips or tiles of a crystalline semiconductor, usually silicon, that have been layered with large numbers of microscopic transistors and other electronic devices. These devices are a part of the chip's crystal structure, that is, integral to it—hence the term “integrated circuit.” An integrated circuit may contain billions of individual devices but is one solid object.

The prefix “micro” refers not to the chip itself, although a typical microchip is quite small—a centimeter or less on a side—but to the microscopic components it contains. The microchip has made it possible to miniaturize computers, communications devices, controllers, and hundreds of other devices. Since 1971, whole computer CPUs (central processing units) have been placed on microchips. These affordable, highly complex devices—microprocessors—have been the basis of the computer revolution.

By 2008, at least 5 billion microchips were being manufactured every year in the United States alone, and many more were being manufactured globally. Microchips are now used in scientific instruments, military weapons, personal entertainment devices, communications equipment, vehicles, computers, and many other applications, and are an important part of the global economy. In 2007, the global semiconductor industry sold about $256 billion worth of microchips. The social effects of cheap computation have been profound, though not as overwhelming as computer enthusiasts have repeatedly predicted.

Historical Background and Scientific Foundations

Before the chip, electronics depended on the three-electrode vacuum tube, invented in 1907 by American inventor Lee De Forest (1873–1961). This device allowed the amplification of variations in a current (e.g., an audio signal) by using the signal to be amplified to control the flow of a more powerful current, somewhat as a small force applied to wiggle a faucet valve can impose a matching pattern of wiggles on a more forceful flow of water. Such tubes are still called “valves” in British English for this reason. The vacuum tube was the beginning of modern electronics and made possible the invention of sensitive two-way radio, television, and electronic computers. It was, however, fragile, bulky, power-hungry, expensive, and prone to breakdown. A smaller, less wasteful, more reliable, and cheaper alternative could, some scientists speculated in the 1930s, be made out of solid materials. Such a device would require no vacuum, no fragile glass bulb, and no glowing-hot filaments of wire. In 1947, scientists at Bell Laboratories in the United States built the first crude device of this kind. The new device, the transistor, did the same job as the vacuum tube but had none of its disadvantages.

For years, transistors were manufactured as separate (discrete) devices and wired together into circuits. Although a vast improvement over vacuum tubes, such circuits were still bulky and fragile. In 1958, the microchip was conceived independently, but at about the same time, by U.S. engineers Jack Kilby (1923–2005) and Robert Noyce (1927–1990). A microchip or integrated circuit has all the advantages of a discrete transistor circuit but is even smaller, more efficient, and more reliable. In 1962, microchips were used in the guidance computer of the U.S. Minuteman missile, a nuclear-tipped intercontinental ballistic missile intended to be launched from underground silos in the American Midwest. The U.S. government also funded early microchip mass-production facilities as part of its Apollo moon-rocket program, for which it required lightweight digital computers. The Apollo command and lunar modules each had microchip-based computers with 32-kilobyte memories, that is, memories capable of storing roughly 32,000 bytes. (A byte is eight binary digits, or bits, each a 0 or a 1.)

Microchip progress was rapid because profits were enormous: the industrialized world seemed to have an insatiable appetite for ever-more-complex electronic devices, as it still does, and these could be made affordable, portable, and reliable only with microchips. A little over a decade after the first integrated circuit was demonstrated, in 1971, Texas Instruments placed an entire calculator on a single chip and Intel introduced the first commercial microprocessor, the 4004. By the end of the decade, many manufacturers were making chips, and the number of transistors and other devices packed onto each chip was rising quickly.

By the early 2000s, a typical desktop computer contained around a million times more memory than the Apollo computers and performed calculations thousands of times faster. The contrast between an Apollo command-module computer costing millions of dollars in 1968 and a far-more-powerful desktop computer costing $2,000 or less in the early 2000s reflected the rapid changes in microchip technology in that interval, one of the most remarkable success stories in the history of technology.

IN CONTEXT: NANOTECHNOLOGY

Nanotechnology builds on advances in microelectronics made during the last decades of the twentieth century. The miniaturization of electrical components greatly increased the utility and portability of computers, imaging equipment, microphones, and other devices. Indeed, the production and wide use of now-commonplace devices such as personal computers and cell phones were absolutely dependent on advances in microtechnology.

Despite these fundamental advances, real physical constraints (e.g., microchip design limitations) still stand in the way of further miniaturization based upon conventional engineering principles. Nanotechnology aims to revolutionize components and manufacturing techniques in order to overcome these limitations. In addition, there are classes of biosensors and feedback-control devices that require nanotechnology because, despite advances in microtechnology, present components remain too large or too slow.

For about 40 years, the number of electronic components that could be put on an individual microchip at a given cost has doubled roughly every two years. The trend is known as Moore's Law, after U.S. engineer Gordon Moore (1929–), who first identified it in 1965. During those decades, engineers and physicists have continually striven to make electronic components smaller so that more could fit on each microchip. (Simply making chips larger to fit more components would not have worked, since the time needed for signals to travel across a sprawling chip would slow its operation.) Since the early 1990s, however, designers have been warning that miniaturization is becoming steadily more difficult as the dimensions of transistors and other integrated devices approach the atomic scale, where quantum uncertainty will inevitably render traditional electronic designs unreliable. In 2005, an industry review of semiconductor technology concluded that the limits of the silicon-based microchip may be reached by about 2020. The review's authors agreed that alternative technologies, such as quantum computing or biology-based approaches, all have drawbacks, and that there is not yet any clear successor ready to pick up where silicon leaves off.
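
As a rough illustration of this doubling (not taken from the article), the short Python sketch below projects transistor counts under an assumed two-year doubling period, starting from about 2,300 transistors in 1971; the starting point and period are illustrative assumptions, and real chips only loosely follow this idealized curve.

```python
# Minimal sketch of Moore's-Law-style growth: transistor counts doubling at a
# fixed interval. The two-year doubling period and the 1971 starting point of
# roughly 2,300 transistors are illustrative assumptions, not figures from
# this article.

def transistors_per_chip(year, base_year=1971, base_count=2300, doubling_years=2.0):
    """Project the transistor count per chip under idealized doubling."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

if __name__ == "__main__":
    for year in (1971, 1981, 1991, 2001, 2008):
        # Prints the year and the projected (idealized) transistor count.
        print(year, f"{transistors_per_chip(year):,.0f}")
```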

Manufacture of a microchip begins with the growth, in a factory, of a pure single crystal of silicon or another semiconducting element. A semiconductor is a substance whose resistance to electrical current lies between that of a conductive metal and that of an insulating material such as glass (silicon dioxide, SiO2). The large, cylindrical crystal is then sawed into disc-shaped wafers 4–12 inches (10–30 cm) across and only 0.01–0.024 inches (0.025–0.061 cm) thick. One side of each wafer is polished and then processed to produce dozens of identical microchips upon it. After processing, the chips are separated, placed in tiny protective boxes called packages, and connected electrically to the outside world by metal pins protruding from the packages. In the early 2000s, manufacturers began placing multiple processor cores (in effect, multiple microprocessors) on each chip so that they could work in parallel, speeding computation. In 2006, the first chips to contain over 1 billion transistors appeared; in 2008, the number jumped to 2 billion, roughly a million times the number of transistors in the earliest microprocessors of the early 1970s.
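
The way a single wafer yields many identical chips can be made concrete with a standard back-of-the-envelope estimate of gross dies per wafer: the wafer's area divided by the die area, minus a correction for partial dies lost around the curved edge. The formula is a common textbook approximation, not something stated in this article, and the example wafer and die sizes below are assumed values chosen only for illustration.

```python
import math

# Rough estimate of how many whole dies fit on a circular wafer: wafer area
# divided by die area, minus an edge-loss correction for partial dies at the
# rim. The 150 mm (6 in) wafer and 200 mm^2 die below are assumed example
# values, not figures from this article, and the count ignores defects and
# test yield.

def gross_dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

if __name__ == "__main__":
    # Roughly 64 candidate dies, i.e., several dozen chips per wafer.
    print(gross_dies_per_wafer(150, 200))
```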

To produce a microchip requires massive factories that cost billions of dollars and must be retooled every few years as technology advances. The basics of the fabrication process, however, have remained the same for decades. By bombarding the surface of the silicon wafer with atoms of various elements, manufacturers introduce impurities, termed dopants, into the wafer's crystalline structure. These atoms have different properties from the silicon atoms around them and so populate the crystal either with extra electrons or with “holes,” gaps in the crystal's electron structure that behave almost like positively charged electrons.

Microscopically precise patterns defining regions of p-type (positively doped, hole-rich) silicon and n-type (negatively doped, electron-rich) silicon are projected optically onto a light-sensitive chemical coating on the wafer (a photoresist). Other chemicals then etch away the parts of the photoresist that have not been exposed to the light, leaving a minutely patterned layer. The surface of the wafer is then bombarded with dopants, which enter the crystal only where it is not protected by the photoresist. Metal wires and new layers of doped silicon can be added by similar processes. Dozens of photoresist, etching, and deposition stages are used to build up the three-dimensional structure of a modern microchip. By crafting appropriately shaped p-type and n-type regions of crystal and covering them with multiple, interleaved layers of SiO2, polycrystalline silicon (silicon composed of small, jumbled crystals), and metal strips that conduct current from one place to another, a microchip can be endowed with millions or billions of interconnected, microscopic transistors.

Modern Cultural Connections

Since their appearance, microchips have transformed much of human society. They are now found in computers, guided missiles, “smart” bombs, satellites for communications or scientific exploration, hand-held communications devices, televisions, aircraft, spacecraft, and motor vehicles. Without microchips, such familiar devices as the personal computer, cell phone, personal digital assistant, calculator, Global Positioning System, and video game would not exist. As chip complexity increases and cost decreases thanks to improvements in manufacturing techniques, new applications for chips are constantly being found.

It would, then, be difficult to name a department of human activity that has not been affected by the microchip. However, its effects have not been as revolutionary as predicted or supposed by forecasters and futurologists. For example, efforts to replace the printed paper book with electronic texts (e-books) downloaded to computers or other chip-based viewing devices have repeatedly failed; most of the world's people still live in poverty and do not have access to sufficient food, clean water, or medical care, much less to a computer; most e-mail carried over the Internet is unwanted junk mail (spam); despite early predictions of a “paperless office,” per capita paper consumption has risen, not fallen, since the advent of the microchip; studies have found that persons who spend more than a short amount of each day surfing the Web are more likely to suffer depression, probably as a result of decreased time spent with family and friends; and by 2007, experts estimated that up to 4 million Americans were behaviorally addicted to Internet pornography, with another 35 million viewing it regularly. Nor, on the other hand, despite intensive use of computers by governments to spy on their citizens both at home and abroad, have computers yet made it possible to produce an all-knowing dictatorship as imagined in science fiction. The microchip has produced few entirely new pastimes or economic activities. It has tended to modify existing patterns of human activity—personal, political, military, and economic—but not to transform them out of all recognition or to eliminate them.

Technologically, the great challenge as of the early 2000s was the likely approaching end of Moore's Law, at least as regards the silicon microchip. Improvements to silicon technology, such as the breakthrough power-conserving technologies to reduce unwanted microchip heating announced by Intel and IBM in 2007, spurred hope of extending silicon's run, though not indefinitely. Quantum computing and other novel techniques were being intensively researched by governments and industries, but all still had to clear major technological hurdles before they could rival silicon's cheapness, speed, and device density.

See Also Computer Science: Artificial Intelligence; Computer Science: Information Science and the Rise of the Internet; Computer Science: The Computer.

IN CONTEXT: MICROCHIP TECHNOLOGIES

Microchip-based or microchip-enabled technologies stir both the imagination and controversy.

National identity cards are not creations of the twenty-first century. The Nazis used them, and, under apartheid, the South African government required blacks and “coloureds” to carry them at all times. (“Black” denoted only Black Africans, whereas “coloured” was a separate apartheid classification denoting people of mixed race, including Indians. The term remains in common South African use, including in self-description, and does not carry the derogatory connotations of the corresponding U.S. term.) Under Nazi rule and apartheid, the cards listed name, residence, and work information; bearers found in an area to which they were denied access were subject to arrest. Accordingly, national ID cards inspire distrust and fear among many. In an age of terrorism and identity fraud, however, some countries are considering microchip-enabled identity cards.

Many countries, including most European nations, currently use national identity cards, and as technology has progressed their functions have evolved as well. Taiwan has used national ID cards since 1947, continuing a practice begun under the Japanese colonial government. Taiwan's cards also serve as a police record, and the law mandates that they be carried at all times. Proposals to include fingerprint data stirred heated debate, as citizens feared the loss of personal privacy.

In the summer of 2005, shortly after the terrorist attacks on the London, England, subway system, the British Parliament reopened the debate on national identity cards. The United Kingdom had implemented a national ID card system during World War II but ended the program in 1952. Proponents believe that the cards would help thwart terrorism because every person entering, working in, or living in the country would be required to have one, increasing the chances of identifying terrorists before an attack could be carried out. Opponents argue that the cards cannot guarantee stopping terrorism and could be used to single out individuals based on family lineage, ethnic background, or country of origin. Critics do not want their personal data compiled into a database that could be accessed by hackers, nor do they want the government to hold large files of their personal information. Individuals also fear that the cards could make their movements and financial transactions too easy to track. Despite these concerns, in 2006 the British Parliament passed the Identity Cards Act. Registration will be mandatory when applying for documents such as a passport, but individuals will not have to carry the cards at all times. The cards will also be recognized travel documents in the European Union, and they will contain a microchip holding a set of fingerprints as well as facial and iris scans.

Bibliography

Books

Reid, T.R. The Chip: How Two Americans Invented the Microchip and Launched a Revolution. New York: Random House, 2001.

Yechuri, Sitaramarao S. Microchips: A Simple Introduction. Arlington, TX: Yechuri Software, 2004.

Periodicals

Macilwain, Colin. “Silicon: Down to the Wire.” Nature 436 (2005): 22–23.

Williams, Eric, et al. “The 1.7 Kilogram Microchip: Energy and Material Use in the Production of Semiconductor Devices.” Environmental Science and Technology 36 (2002): 5504–5510.

Larry Gilman
