

Computers and computer networks have changed the way in which people work, play, do business, run organizations and countries, and interact with one another on a personal level. The workplace of the early twentieth century was full of paper, pens, and typewriters. The office of the early twenty-first century is a place of glowing monitor screens, keyboards, mice, scanners, digital cameras, printers, and speech recognition equipment. The office is no longer isolated; it is linked by computer networks to others like it around the world. Computers have had such an effect that some say an information revolution is occurring. This revolution may be as important as the printing revolution of the fifteenth century, the industrial revolution of the nineteenth century, or the agricultural revolutions of the ancient and medieval worlds.

The computer was invented to perform mathematical calculations. It has become a tool for communication, for artistic expression, and for managing the store of human knowledge. Text, photographs, sounds, or moving pictures can all be recorded in the digital form used by computers, so print, photographic, and electronic media are becoming increasingly indistinguishable. As Tim Berners-Lee (1998), developer of the World Wide Web, put it, computers and their networks promise to become the primary medium in which people work and play and socialize, and hopefully, they will also help people understand their world and each other better.

During the last half of the twentieth century, electronic digital computers revolutionized business, learning, and recreation. Computers are now used in newspaper, magazine, and book publishing, and in radio, film, and television production. They guide and operate unmanned space probes, control the flow of telecommunications, and help people manage energy and other resources. They are used to categorize and preserve the store of human knowledge in libraries, archives, and museums. Computer chips called "embedded microprocessors" are found in the control systems of aircraft, automobiles, trains, telephones, medical diagnostic equipment, kitchen utensils, and farm equipment. The effect on society has been so great that digital information itself is now exchanged more rapidly and more extensively than the commodities or manufactured goods it was originally supposed to help manage. Information has become an essential commodity and, some would argue, a necessary social good.

The history of computing is several stories combined. One is a hardware story—a tale of inventions and technologies. Another is a software story—a tale of the operating systems that enabled specific computers to carry out their basic functions and the applications programs designed to deliver services to computer users. A third story tells how computers provide answers to the problems of society, and how they in turn create new possibilities for society.

Computers and the Media

The computer has transformed print journalism and magazine and book production, changing the ways in which stories are researched, written, transmitted to publishers, typeset, and printed. Through computing and telecommunications, a news story breaking in Asia can be sent within seconds to North America, along with digital pictures. Word-processing software and more sophisticated desktop publishing programs allow authors to create and revise documents easily and to check them for spelling, grammar, and readability.

Copies of digital documents can be printed on demand, and because computers check for transmission errors, all the copies will be identical. While the first word-processing programs offered little more than typewriter-style characters, the introduction of graphical user interfaces (GUIs) in the 1980s and 1990s opened new design possibilities. Writers could choose from a variety of type fonts, select different page layouts, and include photographs and charts. Some feared that this might eliminate jobs since tasks performed by authors, editors, typesetters, proofreaders, graphic designers, and layout artists could all be performed by one person with a computer.

Laptop or notebook computers gave writers even more flexibility. A reporter on location could compose a story and transmit it immediately to a newspaper (using a modem and a hotel room telephone) on the other side of the globe and, perhaps, to wire news services such as The Associated Press or the Reuters news agency. Satellite uplinks, cellular phones, and infrared "beaming" between machines provide even more possibilities. Moreover, digital photography eliminates the time taken to develop photographs, and digital pictures can be transmitted as easily as text.

Computers have revolutionized radio, television, and film production as well. Computerized camera switching and special-effects generators, electronic music synthesizers, photographic exposure control, and digital radio and television programming are all examples. Computer graphics can be used to superimpose sports statistics over a picture of a game in progress or allow a commentator to explain a key play by drawing a diagram over a television picture. Computers have made it possible to produce the entire programming lineup of a radio station without relying on tape recorders except for archival materials or for recordings made in the field.

Digital sound editing can eliminate noise, mix voice and music, and give producers second-by-second precision in the assembly of programs. Computerized film processing can provide better quality images or allow images to be converted from color to black-and-white and vice versa. While movie animation has traditionally involved photographing thousands of separately drawn pictures or "cells," computer animation can use fewer drawings and produce thousands of variations. Special effects are much more convincing when the computer handles the lighting, perspective, and movement within the movie scene.

Speech recognition and dictating software can convert voice recordings directly to word-processed text, and translation programs can then rewrite the word-processed text into another human language. Musicians can compose new works at a computer keyboard and create a printed score from the finished version.

Even when an organization's primary medium is print, radio, or television, it has become common to provide more in-depth coverage on an associated website. While some radio and television networks simultaneously broadcast and webcast their programming, perhaps the most powerful potential will be found in ever-growing digital archives. Using search engines and, increasingly, programs called "intelligent agents," users can retrieve items from the archives, print fresh copies, or compare different accounts of the same event.

Most young people probably first use a computer for entertainment. Individual- and multiple-player games, online "chat" rooms, newsgroups, electronic mailing lists, and websites provide computer-mediated education and leisure activities that were never possible before.

At first, computer programmers wrote games to amuse themselves. The classic "dungeons and dragons" game, Adventure, invented by Will Crowther and Don Woods, was a favorite. Players gave commands such as "go left" or "take lamp," and the computer printed replies such as "OK." There were no pictures. Simple games that used graphics, with names such as Pong and Pac-Man, became available during the 1970s. As personal computers and handheld games became practical to produce, an entire electronic games industry was born. Nintendo and Sega are two familiar games companies. Computerized video games and lottery ticket machines soon became such popular attractions in shopping malls and corner stores that critics began to warn that they might become addictive.

Research Databases

Computing has changed the way writers research and prepare scientific articles. During the early 1970s, a small number of databases containing "abstracts" (i.e., summaries of scholarly and popular articles) could be searched offline. Users submitted lists of subjects or phrases on coding forms. Keypunchers typed them onto computer cards, and operators processed them on mainframe computers. The answers would be available the next day. Library catalogs were printed on paper cards or computer output microform (COM). A microfiche is a transparent plastic slide, roughly the size of an ordinary index card, but it contains images of many pages of computer output.

The Library of Congress, and national libraries in other countries, had by this time converted most of the descriptions of the books they owned into machine-readable form. Toward the end of the 1970s, research databases and library catalogs were becoming widely available online. The Dialog database, and library services such as the Online Computer Library Center (OCLC), made it possible to search the contents of many journals or the holdings of many libraries at once. Standards such as the Machine-Readable Cataloging format (MARC) made it possible to exchange this information worldwide and to display it on many different types of computers. However, limits on computer disk space, telecommunications capacities, and computer processing power still made it impractical to store the full text of articles.

Because of the costs, researchers working for large institutions were the main users of these services. By the mid-1980s, when microcomputer workstations became widely available and compact disc read-only memory (CD-ROM) became a practical distribution method, much research could be conducted without connecting to large central databases. Companies such as EBSCO and InfoTrac began licensing CD-ROMs to their subscribers. With better magnetic "hard" disks and faster microcomputer chips, full-text storage and retrieval finally became workable.

By the end of the twentieth century, databases and catalogs could be accessed over the Internet, on CD-ROM, or through dial-up connections. Some of the special databases include ERIC (for educational issues), Medline and Grateful Med (for medical issues), and Inspec (for engineering issues). Legal research was simplified by services such as Lexis and Westlaw, which allowed identification and cross-referencing of U.S. and international statute and case law. In one of the more interesting applications of computing technology, the Institute for Scientific Information in Philadelphia, Pennsylvania, introduced its citation indexing services, which allow researchers to discover important authors and issues by revealing which authors quote one another. Some databases are free of charge, and some are available for a fee.

A researcher at a public library, in a television newsroom, or in a medical practice can perform searches against thousands of special databases and millions of sites on the World Wide Web. While this sort of research was possible with printed directories in the past, it was time consuming and labor intensive. However, searching for data electronically can have unexpected results. Because the computer does not really understand what the string of letters "Jim Smith" means, it will faithfully report any occurrence it finds, regardless of the context. Information retrieval theory and informetrics are two fields that study the implications.
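The "Jim Smith" problem can be seen in a few lines of code: a literal substring search reports every occurrence of the characters, relevant or not. A minimal sketch in Python (the sample documents are invented for illustration):

```python
# A literal search has no notion of meaning: it matches the
# characters "Jim Smith" wherever they occur, in any context.
documents = [
    "Councilman Jim Smith voted against the new budget.",
    "The Jim Smith Society holds an annual convention.",
    "Jim Smithers was not involved in either story.",
]

query = "Jim Smith"
hits = [doc for doc in documents if query in doc]

# All three documents match, including the unrelated "Jim Smithers",
# because the search sees only a string of letters, not a person.
for doc in hits:
    print(doc)
```

Distinguishing the councilman from the society, or from Jim Smithers, requires the contextual techniques that information retrieval theory studies.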

The Computer Industry

In the late 1960s, some writers scoffed at the potential of computers. The mainframe machines of the time occupied entire rooms, and only large institutions could afford them. No computer ever conceived, suggested one writer, had ever weighed less than a human being or been capable of performing as many tasks.

Without the transistor and the integrated circuit, computers would still fill large rooms. Without the laser and improved plastics, optical storage media such as CD-ROMs and digital versatile discs (DVDs) would not be possible. Magnetic tapes and disks have also improved greatly over the years and can now store much more information than they could in the past. It is difficult to buy an item in the supermarket or to borrow a book from a library without that item having a barcode label on it. Credit and debit cards with magnetic strips make it easier to access bank accounts and make retail purchases. Inventions such as these are part of the story of computing, although they are often overlooked.

For example, a minicomputer of the mid-1980s could cost about $500,000 and could contain 64 kilobytes (KB) of random access memory (RAM). By the end of the century, a magnetic floppy disk that held 1.4 megabytes (MB) sold for less than a dollar, a CD-ROM disk that held 650 MB cost less than two dollars, and desktop microcomputers with 64 MB of RAM were common household items.

As the industry grew, so did the legends of inventors who made fortunes or revolutionized the industry. William R. Hewlett and David Packard started their company in a garage. Graduate students David Filo and Jerry Yang developed the Yahoo! Internet directory in a dormitory room. Steve Jobs of Apple Computer, Bill Gates of Microsoft, and the heads of many other companies in California's Silicon Valley became known around the world.

Computer engineers and programmers have often exchanged their ideas openly, out of scientific duty. The Xerox Corporation hit on the idea of the graphical user interface (GUI), refined the mouse (invented earlier by Douglas Engelbart at the Stanford Research Institute), and then told everyone how to produce them. Linus Torvalds developed the Linux operating system as a personal project and then made it available for free. Universities also have a long history of developing software and computers and then sharing the knowledge.

The History of Computers

While digital computers are a relatively recent invention, analog devices have existed for thousands of years. The abacus, sometimes considered to be a computer, was used in medieval China and by the Aztecs of Central America, and earlier "counting boards" were found in ancient Babylon. Another analog device, the slide rule, continues to have a following because some engineers still prefer it to electronic calculators. Circular slide rules, called "dead-reckoning computers," were used by aircraft pilots well into the 1970s to perform navigational tasks.

During the Middle Ages, the Franciscan scholar Ramon Llull used circular disks that had letters and numbers (representing terms from philosophy) written on them. By turning the wheels, Llull could come up with new combinations of concepts. Llull's work continued to influence logicians. Gottfried Wilhelm von Leibniz made it the topic of a treatise, Dissertatio de arte combinatoria, in 1666.

During the industrial revolution, mass-production devices such as the Jacquard loom became common. Designs to be woven into cloth could be punched onto the cards that controlled the loom. Charles Babbage, working with Lady Ada Lovelace in the early nineteenth century, first thought of using punched cards to do mathematics. Their Analytical Engine was designed to weave numbers into tables the way the loom wove cloth from strands of thread. The modern Ada computer language commemorates their work. Toward the end of the nineteenth century, Herman Hollerith, whose Tabulating Machine Company later became part of International Business Machines (IBM), developed the punched cards used in early digital computers.

In a 1936 paper, "On Computable Numbers," the British mathematician Alan Turing first suggested the idea of a general-purpose computing machine. With electronic digital computers, Turing's idea became realizable. Turing and the Hungarian-American mathematician John von Neumann are two of the many pioneers of digital computing. During World War II, Turing helped design the Bombe, the machine used at Bletchley Park to break the "Enigma" cipher (a secret code used by Germany), while a later Bletchley Park machine, the Colossus, attacked another German cipher. He also proposed the famous "Turing test" for artificial intelligence. The Turing test suggests that if a person cannot tell the difference between responses from a computer and responses from a human, then the computer must be considered to be "intelligent."

The first generation of electronic computers, which included the Mark I, the ENIAC, and other machines built with relays and vacuum tubes, was huge, expensive, and apt to fail or "crash." Grace Hopper once repaired the U.S. Navy's Mark II computer by removing a moth from its circuitry. The term "debugging" is often associated with this incident.

The transistor made it possible to produce computers in quantity. However, mainframe computers such as the IBM 370 were still huge by modern standards, and only universities, government agencies, or large companies could afford them. By the 1970s, with integrated circuits, a new generation of minicomputers was born. Digital Equipment Corporation (later acquired by Compaq), Hewlett-Packard, and Data General were some of the key manufacturers. These machines were about the size of a refrigerator.

By the end of the 1970s, desktop microcomputers began appearing in smaller offices and in ordinary people's homes. Beginning with machines such as the Apple II, the Osborne, the Commodore 64, and the IBM PC, microcomputers and their software systems came to dominate the market. These machines used microprocessor chips (room-sized central processing units shrunk to less than the size of a penny). The Intel 8080 and the Motorola 6800 were among the very first such chips, appearing in the mid-1970s. Many programmers joked about these new "toys." During the next decade, microcomputers would grow into powerful workstations, powered by chips from Intel and Motorola and built by companies such as Sun Microsystems, IBM, Apple, Dell, Toshiba, Sony, and Gateway, to name just a few.

Digital Information

Computing involves three activities: input, process, and output. Data enters the computer through a keyboard or mouse, from a camera, or from a file previously recorded on a disk. A program or "process" manipulates the data and then outputs it to a screen, printer, disk, or communications line.
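The three activities can be sketched as three plain functions in Python (the data here merely stands in for a keyboard, file, or camera):

```python
# Input -> process -> output, the three activities of computing.
def read_input():
    return [3, 1, 2]          # input: data arrives from some device

def process(data):
    return sorted(data)       # process: a program manipulates the data

def write_output(result):
    print(result)             # output: screen, printer, disk, or network

write_output(process(read_input()))
```

Every larger system, from a word processor to a web server, repeats this same cycle at many scales.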

Over the years, many different input devices have been used, including punched paper tape, punched cards, keyboards, mice, microphones, touch-screens, and video cameras. Output devices have included paper printouts, teletypewriters, and video monitors. The part of the computer that does the processing is known as the central processing unit (CPU). Collectively, everything other than the CPU, including memory boards, disks, printers, keyboards, mice, and screens can be thought of as peripheral devices, or just "peripherals."

There are two sorts of computer software. Operating systems, such as Microsoft Windows, Macintosh, or UNIX, allow machines to perform their basic functions—accepting input, running programs, and sending output to users. Applications programs, such as word processors, Internet browsers, electronic mail programs, or database management programs, do the work required by computer users.

Digital computers use data that has been encoded as series of zeros and ones—binary digits or bits. Text, images, sounds, motion pictures, and other media can all be represented as strings of zeros and ones and processed by digital computers. Programs—the instructions on how to manipulate data—also are represented in binary form. The earliest digital computers were designed to store and manipulate the numbers and letters of the alphabet that were found on typewriter keyboards. The American Standard Code for Information Interchange (ASCII) uses 128 combinations of bits to represent the letters, numbers, and symbols on a typewriter keyboard. Plain text worked well when computers were used primarily for mathematics.
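The 7-bit ASCII codes described above can be inspected directly in most modern languages. A short Python illustration:

```python
# Each ASCII character is stored as a 7-bit binary number
# (2**7 = 128 possible combinations).
for ch in "Hi!":
    code = ord(ch)                # the character's ASCII code number
    bits = format(code, "07b")    # the same code as 7 binary digits
    print(ch, code, bits)

# 'H' is 72 (1001000), 'i' is 105 (1101001), '!' is 33 (0100001).
```

The computer stores and transmits only the bit patterns; the convention that 1001000 means "H" is what makes them text.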

Binary numbers can represent visual and audio information as well. By the end of the 1980s, designers had expanded the coding systems to store drawings, photographs, sounds, and moving pictures. Each dot on a screen is called a "picture element" (or "pixel"). To display graphics on the screen, computers use groups of binary numbers—ones and zeros—to represent the color, intensity of light, and position of each pixel.
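The pixel encoding described above can be sketched in a few lines. The 24-bit red/green/blue scheme used here is one common convention, not the only one:

```python
# One common encoding gives each pixel three 8-bit numbers:
# the intensities of red, green, and blue light (0-255 each).
red, green, blue = 255, 128, 0        # a bright orange pixel

# Pack the three intensities into a single 24-bit binary number.
pixel = (red << 16) | (green << 8) | blue
print(format(pixel, "024b"))          # the ones and zeros for this pixel
```

An image is simply a grid of such numbers, one per pixel position, and a moving picture is a sequence of such grids.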

Modern computers almost always use some type of GUI. Programmers use small graphics called "icons" to represent a program, a document, a movie, or a musical work. When a user selects an icon, the computer can open a file or program that is associated with it. This technique draws on ideas from object-oriented programming.

When the price of computers dropped, it became possible to distribute work among several machines on a network instead of using a large central computer. A piece of software called a "server" could now send information to smaller programs called "clients" located at the workstations. Shared files remain on large computers called "file servers," so several users can access them at once. Internet browsers, such as Netscape and Internet Explorer, are good examples of "client/server" design at work, where the browser is a client and an Internet site hosts the server software and the large files of information.
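A client/server exchange can be shown in miniature on a single machine. The sketch below, using Python's standard socket library, stands in for a browser talking to a website; the message contents are invented for illustration:

```python
# A miniature client/server exchange: the server holds the data,
# and the client connects and asks for it, as a browser asks a website.
import socket
import threading

def serve_once(server_sock):
    # The server side: wait for one client, read its request, reply.
    conn, _ = server_sock.accept()
    request = conn.recv(1024)
    conn.sendall(b"Hello, " + request)
    conn.close()

def client_request(message):
    # Set up a listening server on a free local port...
    server = socket.socket()
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    port = server.getsockname()[1]
    worker = threading.Thread(target=serve_once, args=(server,))
    worker.start()

    # ...then act as the client: connect, send a request, read the reply.
    client = socket.socket()
    client.connect(("127.0.0.1", port))
    client.sendall(message)
    reply = client.recv(1024)
    client.close()
    worker.join()
    server.close()
    return reply

print(client_request(b"client"))     # b'Hello, client'
```

In a real deployment the two halves run on different machines, and the server answers many clients at once rather than a single request.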

There are many programming languages, each better at addressing certain types of problems. The Formula Translation language (FORTRAN) was developed to handle scientific problems. The Beginner's All-purpose Symbolic Instruction Code (BASIC) and the Common Business-Oriented Language (COBOL) were better suited for office automation. The languages C, C++, Java, and Visual Basic use libraries of small, interchangeable programs that perform frequently required tasks, such as sorting items or displaying them on a screen. Programmers can combine these small programs into more complex systems, building new applications quickly. Other languages, such as Prolog and LISP, were invented for work in artificial intelligence, while Ada was designed to address military needs.
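The idea of assembling applications from small, reusable routines can be shown in miniature. The routines below are invented for illustration; real libraries supply hundreds of such building blocks:

```python
# Two small, general-purpose routines of the kind a library provides...
def sort_items(items):
    return sorted(items)

def display(items):
    for item in items:
        print("-", item)

# ...combined into a slightly larger "application".
def inventory_report(items):
    display(sort_items(items))

inventory_report(["paper", "ink", "cards"])
```

Because `sort_items` and `display` know nothing about inventories, the same pieces can be reused unchanged in entirely different programs.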

Once personal computers were available, the demand for special software packages or "applications" increased. Spreadsheets, such as the early SuperCalc and Excel, have simplified accounting and statistical processes, and they allow users to try out various financial scenarios. If the costs or quantities of items change, the results will appear immediately on the screen. A whole range of database management packages, including dBase, FoxPro, Oracle, and Access, help users do inventories, maintain customer profiles, and more. Because records in databases can be matched against ones in different files, say a customer demographic file with a warehouse inventory file, businesses can predict supply and demand trends and improve the delivery of goods and services. Geographic information systems, online census data, and telephone directories make it easier to market products in areas where there is demand. Some critics argue that using data for reasons other than those for which it was collected is an invasion of privacy. In many countries, freedom of information and privacy protection laws have been passed to address these issues.
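Record matching of the kind described, say a customer file against a warehouse file, can be sketched with two small tables keyed on a shared field. All names and numbers below are invented for illustration:

```python
# Two small "files" of records that share a product code.
customers = [
    {"name": "A. Reader", "product": "P-1"},
    {"name": "B. Writer", "product": "P-2"},
    {"name": "C. Viewer", "product": "P-1"},
]
warehouse = {"P-1": 5, "P-2": 0}      # product code -> units in stock

def fulfillment(orders, stock_table):
    # Match each customer record against the inventory record
    # with the same product code, as a database join would.
    report = []
    for order in orders:
        stock = stock_table[order["product"]]
        status = "ship now" if stock > 0 else "reorder"
        report.append((order["name"], status))
    return report

for name, status in fulfillment(customers, warehouse):
    print(name, status)
```

The shared product code is what lets two separately collected files be combined, which is also why such matching raises the privacy concerns noted above.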

Computing and Knowledge

Computers have changed the world in which people live and work, and they have provided new ways of thinking about, and making sense of, that world. At the beginning of the twenty-first century, computer science is a mature academic discipline, with almost every university or college offering computer courses.

As an academic subject, computer science may involve information theory, systems analysis, software engineering, electrical engineering, programming, and information studies that examine the use of digital information. The founders of information theory, Claude Shannon and Warren Weaver, published The Mathematical Theory of Communication in 1949. The mathematician Norbert Wiener, who coined the term "cybernetics," showed how computing theories could be applied to problems of communication and control in both animals and machines. Ludwig von Bertalanffy founded general system theory because he saw that large complex systems did not necessarily behave in the same way that their individual components did. He is considered one of the founders of systems analysis.

Professional associations have also played important roles in the development of computing theory, practice, and standards. The Association for Computing Machinery, the Institute of Electrical and Electronic Engineers, the International Standards Organization, and the W3 Consortium are all agencies concerned with computing methods and standards. Less widely known groups, such as the International Society for Systems Sciences and Computer Professionals for Social Responsibility, concern themselves with professional ethics and the social effect of computing. Computing has its own journals and magazines that are aimed at special groups of professionals and at consumers.

Modern computing researchers come from many backgrounds. In turn, scholars from other areas apply computing theory and systems analysis to their own disciplines—from philosophy to psychology to social work. Centers such as the Media Lab at the Massachusetts Institute of Technology or the Xerox Corporation's Palo Alto Research Center bring together experts from many fields to design "neural networks" that simulate the human brain, to build smaller and faster machines, or to find better ways of managing digital information. Nicholas Negroponte, Marvin Minsky, and their colleagues at the Media Lab are associated with developments in artificial intelligence and robotics.

Some people fear that while computers relieve humans of repetitive tasks, they may also "de-skill" workers who forget how to do such tasks by hand. Others suggest that having to cope with computers on the job adds extra stress, raises expectations of promptness, and requires ongoing retraining of workers. Because computing has made it possible to recombine and repackage stories, pictures, and sounds, some fear that the work of authors may one day be regarded as interchangeable, much like mechanical parts. In addition, as people depend more on computers, they become more vulnerable to system failure. If the world's computers should fail all at once, economic and social chaos might result. A series of Internet "worms" and "viruses" heightened concern over society's dependence on computers during 1999 and 2000. Governments, banks, companies, and individuals worried that the clocks in their computers might fail at the beginning of 2000, but the "Y2K" crisis they feared did not occur.

Computer designers and computer users think about computers in different terms, and they use different jargon. Hackers, who explore aspects of computers that designers could not have foreseen, have their own way of looking at and talking about computers. People who use computers for destructive purposes are more properly called "crackers." Finally, those people who do not have access to computers run the risk of economic and educational hardships.

The Internet and the Future

During the 1970s, the Defense Advanced Research Projects Agency (DARPA)—the central research and development organization for the U.S. Department of Defense—commissioned work on a standard design for its wide area networks, computer connections that could link entire countries or continents. In response, communications standards called the Transmission Control Protocol (TCP) and the Internet Protocol (IP) were published in 1981.

Many computer networks, with names such as Decnet, Usenet, and Bitnet, were already in operation, but within about a decade, the new standards were adopted around the world. At first, because there were no graphics, the Internet was used for electronic mail and discussions and for text-only directory services such as Gopher (from the University of Minnesota) and WAIS (wide area information server). Then Berners-Lee and his colleagues at CERN, the European nuclear research center in Switzerland, came up with a new set of protocols that could be used to mix pictures and sounds with text and let users locate any document on any network computer anywhere in the world. The result was the World Wide Web.

Briefly, this is how the web works. Every computer on the Internet has a numeric Internet Protocol (IP) address, which looks like four groups of numbers separated by periods, each group ranging from 0 to 255. Because humans would have trouble remembering addresses such as 192.0.2.1, websites also have "domain names," such as "example.com" or "example.org," which are easier to understand. Scattered around the world, domain name servers (DNSs) provide large telephone-directory style lists, which map the names to the numbers.
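The mapping from names to numbers can be sketched with an ordinary dictionary standing in for the worldwide directory. The names and addresses below are values reserved for documentation, not live sites:

```python
# A toy "domain name server": one table mapping human-friendly
# names to numeric IP addresses. Real DNS spreads this table
# across thousands of cooperating servers worldwide.
dns_table = {
    "example.com": "192.0.2.1",
    "example.org": "198.51.100.7",
}

def resolve(name):
    return dns_table[name]            # look up the name, return the number

def is_valid_ipv4(address):
    # Each of the four groups must be a number from 0 to 255.
    parts = address.split(".")
    return (len(parts) == 4 and
            all(p.isdigit() and 0 <= int(p) <= 255 for p in parts))

print(resolve("example.com"))         # 192.0.2.1
print(is_valid_ipv4("192.0.2.999"))   # False: 999 is out of range
```

A browser performs a lookup like `resolve` behind the scenes every time a user types a domain name.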

Every item on the web, whether a file of text, a picture, or a sound, can be found and retrieved by its uniform resource locator (URL). A URL contains the domain name of the computer on which the item is stored and, optionally, additional information about the file folders and file names on that computer. Documents on the web, called "pages," are written in the HyperText Markup Language (HTML) and exchanged using the HyperText Transfer Protocol (HTTP).
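The parts of a URL can be pulled apart with Python's standard library. The address below uses a domain reserved for examples, and the folder and file names are invented:

```python
from urllib.parse import urlparse

# A URL combines the protocol, the domain name, and the
# folder/file path on the computer that hosts the item.
url = "http://www.example.com/history/computers.html"
parts = urlparse(url)

print(parts.scheme)   # 'http'            - the protocol (HTTP)
print(parts.netloc)   # 'www.example.com' - the domain name
print(parts.path)     # '/history/computers.html' - folders and file
```

Reversing the process, joining a scheme, a domain, and a path back into one string, is how a browser builds the address it sends to a server.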

Berners-Lee (1998) believes that once most of human knowledge is made available over the Internet, and once the Internet becomes the primary way in which individuals communicate with one another, humans will have the wisdom to use computers to help analyze society and to improve it.

While the promise is bright, the Internet presents many challenges for information scientists. While URLs provide a way of locating individual documents anywhere on the network, the web is always in flux, and URLs are quite "volatile" or apt to change from day to day or even from minute to minute. In addition, because material on the web may look highly polished, it is sometimes hard for users to distinguish reliable information from unreliable information. Metadata—data about data—is one of the schemes proposed to reduce confusion. Metadata tags are similar to subject, author, and title entries in a library catalog, and can be written at the top of a web document.
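A metadata record of the kind described, with entries like a library catalog's, can be sketched as a simple table of named fields. The field names echo common cataloging practice, and the values are invented:

```python
# Metadata: data about data. These catalog-style fields describe
# a web document without being part of its visible content.
metadata = {
    "title":   "A Short History of Computing",
    "author":  "A. N. Author",
    "subject": "computers; history",
    "date":    "2000-01-15",
}

# A search engine can index these fields directly instead of
# guessing a page's subject from its full text.
for field, value in metadata.items():
    print(field + ":", value)
```

Embedding such a record at the top of a web page gives search engines and intelligent agents something far more reliable to index than raw text.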

Increasingly, the computer network is the medium through which scientists assemble and exchange knowledge from many sources and train future generations. The Human Genome Project and simulations to train surgeons or aircraft pilots are examples. Many scholars publish directly to the Internet by posting their discoveries to the World Wide Web, newsgroups, or mailing lists. This speeds the process of information exchange, but since such works are not examined by editors, it also increases the chances of error and makes it harder for readers to determine whether the information is reliable. The need to be able to index and describe web-pages has led to the development of metadata as a way of categorizing electronic documents. However, with millions of authors publishing to the web, the task of indexing and describing their work is staggering.

Computers continue to become smaller, less expensive, more powerful, and more essential to society. So far, dire predictions of de-skilled workers or massive unemployment due to an increased use of computers in the workplace have yet to materialize. In the future, computers will be still smaller and many times more powerful as engineers find ways to use nanotechnology to build microscopic machines. Some people predict that computers will eventually use individual molecules, or even subatomic particles, to store and manipulate the ones and zeros that make up digital information.

By building microprocessors into cars, aircraft, and even household devices such as microwave ovens, designers have produced a raft of "smart" devices. Steve Mann and his colleagues at MIT and the University of Toronto have even developed smart clothing, which can detect signs of sudden illness in the wearer. Increasingly, computers will be able to assist people with disabilities. Smart cars and smart houses have obvious social benefits. However, the same technologies can be used to produce smart weapons. Sensors in a smart office can prevent burglaries or announce guests. They can also monitor employees, minute by minute. Will ubiquitous computers have positive or negative effects on society? This is a question for which only the future can provide an answer.

See also:Artificial Intelligence; Computer Software; Computer Software, Educational; Databases, Electronic; Diffusion of Innovations and Communication; Digital Communication; Digital Media Systems; Geographic Information Systems; Internet and the World Wide Web; Knowledge Management; Libraries, Digital; Library Automation; Privacy and Encryption; Ratings for Video Games, Software, and the Internet; Retrieval of Information; Standards and Information; Technology, Adoption and Diffusion of; Webmasters.


Berners-Lee, Tim. (1998). "The World Wide Web: A Very Short Personal History." <>.

Bertalanffy, Ludwig von. (1976). General System Theory: Foundations, Development, Applications. New York: G. Braziller.

Biermann, Alan W. (1997). Great Ideas in Computer Science: A Gentle Introduction, 2nd edition. Cambridge, MA: MIT Press.

Brookshear, J. Glenn. (1999). Computer Science: An Overview. New York: Addison-Wesley.

Carlson, Tom. (2001). "The Obsolete Computer Museum." <>.

Gardner, Martin. (1982). Logic Machines and Diagrams. Chicago: University of Chicago Press.

Hiltz, Starr Roxanne, and Turoff, Murray. (1993). The Network Nation: Human Communication Via Computer, revised edition. Cambridge, MA: MIT Press.

Kidder, Tracy. (1997). The Soul of a New Machine. New York: Modern Library.

Negroponte, Nicholas. (1995). Being Digital. New York: Vintage Books.

Raymond, Eric S. (1998). The New Hacker's Dictionary. Cambridge, MA: MIT Press.

Shannon, Claude, and Weaver, Warren. (1949). The Mathematical Theory of Communication. Urbana: University of Illinois Press.

Sudkamp, Thomas A. (1996). Languages and Machines: An Introduction to Theory of Computer Science. New York: Addison-Wesley.

Turing, Alan M. (1936). "On Computable Numbers: With an Application to the Entscheidungsproblem." Proceedings of the London Mathematical Society (2nd series) 42:230-265.

Valovic, Thomas. (2000). Digital Mythologies: The Hidden Complexities of the Internet. New Brunswick, NJ: Rutgers University Press.

Wiener, Norbert. (1965). Cybernetics; Or, Control and Communication in the Animal and the Machine, 2nd edition. Cambridge, MA: MIT Press.

Christopher Brown-Syed

Terri L. Lyons



COMPUTING The use of an electronic device that accepts data, performs mathematical and logical operations at speed on those data, and displays the results. Computers, although initially developed as calculating devices and open to a range of uses, have become central to communicative technology, and relate to language in at least three ways: (1) They require their own artificial languages in order to function. (2) Their use has adapted natural language to new ends, such as the processing of texts by computer. (3) Their users have developed their own styles and registers for working with them and talking about them. Since the 1950s, these factors have developed explosively and are major influences on late 20c English, the language most closely involved in computing.


The present-day computer derives from British work during the Second World War on cryptographic machines and is the most recent in a line of calculating devices that includes the abacus, the Jacquard loom, Babbage's Analytical Engine, and Hollerith's tab-sorter. Its primary purpose has been to compute, not to compile or converse. There are two kinds of computer: analog and digital. Analog computers, which are related to the slide rule and tables of logarithms (and virtually obsolete), use the strengths of voltages to represent the size of numbers, whereas digital computers use electrical signals only in the on/off form. Currently, digital computers consist of four major parts: (1) A processor or central processing unit (CPU), which executes commands, performing arithmetical, logical, and manipulative operations on the data stored in the second part. (2) A memory, the information store. Most computers have at least two kinds of memory: primary and secondary. Primary memory is usually silicon chips, typically DRAM (dynamic random access memory) chips. ‘Random access’ means that any part may be obtained immediately, as with a book that can be opened to any page. The process is fast, usually less than one microsecond to obtain an item of information. Secondary memory is usually magnetic disk, made of one or more platters rotating under a reading head. It is not random access: a particular part of the disk cannot be read until it rotates under the reading head, which usually takes several milliseconds. Storage is measured in bytes, one byte containing eight bits, and representing storage for one character in European alphabets. See ASCII. (3) Input/output equipment, which enables the user to get information into and out of the machine. The information is entered most commonly through a keyboard but also through removable disks, tapes, and other devices. 
Output goes to display screens, to printers (which produce text etc., usually known as hard copy), and also to disks and tapes. (4) Communications equipment, which permits a computer to ‘talk’ to other machines and to people located at a distance from it. The equipment includes the modem (short for ‘modulator-demodulator’), which connects computers by telephone line, and networks that let machines talk at high speed to each other, as for example in using the INTERNET and the WORLD-WIDE WEB.
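
The relationship described above between bytes, bits, and characters can be illustrated in a few lines of Python (a modern sketch added for illustration; the sample word is arbitrary):

```python
# Each character of ASCII text occupies one byte, i.e. eight bits.
text = "Computer"
for ch in text:
    code = ord(ch)                # the numeric character code
    bits = format(code, "08b")    # the same value as eight binary digits
    print(ch, code, bits)

print(len(text.encode("ascii")), "bytes")  # one byte per character: 8 bytes
```

Each printed row shows one character, its code, and the eight on/off signals that store it.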

Computer programs

Since computers work very fast, they cannot be directed step by step. Instead, a script must first be written for the computer to follow. The script typically contains sequences to be repeated, so that the script is much shorter than the operation as executed. The computer responds to machine language, which is binary code (strings of 0s and 1s), in which the operations are very simple (such as elementary arithmetic or moving one piece of data from one place to another). Such scripts, usually written in higher-level languages, are called computer programs (BrE following AmE in this spelling, but AmE follows BrE in doubling the m in programming). A distinction is now universally made between the equipment as hardware and the programs as software, the latter now generally made available as commercial software packages.
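
The point that a script is much shorter than the operation as executed can be seen in a minimal program: three lines of Python direct a thousand repeated additions.

```python
# Three lines of script, a thousand executed operations:
total = 0
for i in range(1, 1001):
    total += i      # this one statement is executed 1,000 times
print(total)        # 500500, the sum 1 + 2 + ... + 1000
```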

Computer languages

Also programming languages, high-level languages. Digital computers can follow directions written in a great variety of artificial languages that provide precise specifications of operations to be done and the order in which they must be done. Although strings of letters are used to name commands in these languages, they are quite different from natural language. Among other things, they must be logical and unambiguous: unlike people, computers do not know that the and in I like bread and jam means ‘both together’, while the and in I like cats and dogs does not imply that both must be present at once (= ‘I like cats and I like dogs’). Compared with natural language, high-level computer languages normally have: (1) Very short words: most programmers save effort by giving variables names such as x, one or two letters long, and by using many abbreviations, such as del for delete. (2) Very short utterances: written English sentences might average 20 words in length, but statements in programming language are typically only six items long. (3) Little syntactic variety: the typical computer language at present has a grammar of about 100 rules, compared with thousands in a formal grammatical description of English.

Specific languages

The many programming languages are divided into business languages (verbose, emphasizing simple operations on complex data) and scientific languages (terse, emphasizing complex operations on simple data). They often have distinctive histories and functions, and names of etymological interest. ALGOL, a language suitable for expressing algorithms, is the computational equivalent of Esperanto, created in 1960 by an international committee. Its name, a reduction of Algorithm Language, is a homonym of the star Algol (Arabic, ‘the ghoul’). BASIC is short for Beginner's All-Purpose Symbolic Instruction Code, designed at Dartmouth College in New Hampshire in 1964 by J. Kemeny and T. Kurtz. It is often the first programming language learned and is similar to the Basic of BASIC ENGLISH, also an acronym. ADA was designed in a competition run by the US Department of Defense from 1974 to 1980, going through successive refinements with such names as Strawman, Woodenman, Tinman, Ironman. The French computer scientist Jean Ichbiah led the winning team. It was named after Lady Ada Lovelace, daughter of the poet Byron and a supporter of Charles Babbage, the inventor of the Analytical Engine, an early mechanical digital computer. She is often called the first programmer. For some years, the goal of ‘programming in English’ (that is, using a more or less unrestricted subset of the natural language) attracted attention, but it has so far proved unattainable.

Processing text

Computers, among other things, are extensions of writing and print systems, and have therefore been used with greater or less success to do such things as evaluate, index, parse, translate, correct, and ‘understand’ text. When a suitably programmed computer is fed English, it can process it at several levels, but with decreasing competence as the task becomes more complex. The following sequence is typical:

1. The character level.

Text can be entered into a computer by three means: keying it, typically into a word processor which will format the text (arranging the line lengths and character positions); scanning it, using a machine which transfers a paper version into an image followed by a program that seeks to recognize the characters in it; transferring it electronically, typically by diskette or telephone, from another compatible computer. Transfer is the fastest and most accurate method, but currently the least used. When a cleanly typed or printed original is available, without too many fonts or typographic complexities, scanning is faster and easier than rekeying. Once the text is entered, computers can print it in a wide variety of typefaces, sizes, and page formats, using either a printer or a desktop publishing system.

2. The word level.

A spelling checker can find some kinds of typing mistakes, usually by comparing words with a dictionary list and noting those that are not in that list. Programs can make word lists and concordances (lists of each word with some context before and after it). By noting the most frequent words in a document, and comparing the word frequencies in a particular text with the average word frequencies in English, a program can suggest words that might be used for indexing the document. The counting of relative word frequencies and comparison with word frequencies from a standard sample can also help in guessing the authorship of anonymous works or measuring the readability level of a text.
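
The word-level processing described above can be sketched in a few lines of Python: counting word frequencies and producing a simple keyword-in-context concordance. The sample text is invented for illustration, not drawn from a real corpus.

```python
import re
from collections import Counter

# An illustrative two-sentence sample text:
text = ("The computer was invented to perform calculations. "
        "The computer has become a tool for communication.")

words = re.findall(r"[a-z]+", text.lower())
freq = Counter(words)
print(freq.most_common(3))   # the most frequent words in the document

# A simple keyword-in-context concordance for one word,
# showing two words of context before and after each occurrence:
for i, w in enumerate(words):
    if w == "computer":
        print(" ".join(words[max(0, i - 2):i + 3]))
```

Comparing such counts against frequencies from a standard sample is the basis of the indexing, authorship, and readability applications mentioned above.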

3. The sentence level.

On the level of syntax, PARSING programs can try to define the structure of sentences and relationships among words. This is typically done by applying grammar rules of the form ‘a verb phrase may be a verb followed by an adverb’. Unfortunately many sentences are ambiguous. In the preceding sentence, a computer would not know whether Unfortunately modified the verb (implying that it is sad that ambiguous sentences occur) or the adjective many (suggesting disappointment that ambiguous sentences are so frequent). Adding a comma after Unfortunately could, however, serve as a means of disambiguation. Nevertheless, some kinds of grammatical and stylistic errors can be diagnosed, and grammar checkers and style checkers have become available to help in the writing of business letters and the propagation of PLAIN ENGLISH.
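
Rules of the form quoted above can be sketched as a toy bottom-up parser; the grammar, tags, and sentence here are invented for illustration and are far simpler than any real parsing system.

```python
# Toy grammar: each rule rewrites an adjacent pair of categories
# into one phrase category.
rules = {
    ("Det", "Noun"): "NP",
    ("Verb", "Adv"): "VP",   # "a verb phrase may be a verb followed by an adverb"
    ("Verb", "NP"): "VP",
    ("NP", "VP"): "S",
}

def reduce_once(tags):
    """Replace the first adjacent pair that matches a rule."""
    for i in range(len(tags) - 1):
        pair = (tags[i], tags[i + 1])
        if pair in rules:
            return tags[:i] + [rules[pair]] + tags[i + 2:]
    return tags

# "The program runs quickly", tagged by part of speech:
tags = ["Det", "Noun", "Verb", "Adv"]
while True:
    new = reduce_once(tags)
    if new == tags:
        break
    tags = new
print(tags)  # ['S'] -- the sequence reduces to a complete sentence
```

A genuinely ambiguous sentence would match rules in more than one way, which is exactly the difficulty described above.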

4. The message level.

At the level of word-and-sentence meaning, semantic analysis can map a sentence into a knowledge-representation language. Some research projects have been able to take such sentences as Which ships are in port? and answer them by looking at a table of ship locations, but such systems currently operate in strictly limited subject areas. Other applications of semantics include machine translation and direct generation of language by computers (that is, the computer produces text without human input).

The above levels of activity depend on computational linguists writing rules of analysis, accumulating a GRAMMAR of syntactic and/or semantic rules for such a language as English. An alternative strategy for processing written language, however, uses reference books: the use of a MACHINE-READABLE dictionary or thesaurus may help a computer make reasonable guesses about which sense of an ambiguous word is intended in a particular context. Another strategy relies on the statistical properties of large corpora to determine word relationships. Such methods have allowed parsing without writing a grammar in advance, a higher quality of error correction in spelling, and the automatic recognition of phrases. However, they handle uncommon constructions less well than grammar-based procedures do, and depend for their success on the fact that such constructions are uncommon. See COMPUTERESE, COMPUTER USAGE, CONCORDANCE, CORPUS, EMOTICON, ICON.




Applications relevant to elementary particle and high-energy physics (HEP) computing can be categorized as follows:

  1. Triggering and data acquisition
  2. Simulation
  3. Data handling and storage
  4. Commodity hardware and software
  5. Data analysis and visualization
  6. Control and monitoring systems
  7. Information systems and multimedia

Triggering and Data Acquisition

In addition to their specialized detection and measurement systems (for example, calorimeters, drift chambers, etc.), the detectors in high-energy physics experiments are, in fact, sophisticated computing systems. Without the reliable triggering and data acquisition (DAQ) computing systems present in these detectors, all other experimental computing is of little consequence. Triggering and DAQ systems ensure that the physics events occurring in the detector are observed, measured, and accurately transformed into analyzable data. In a typical experiment, the first-level trigger, implemented in hardware, initiates the data collection. Data from the front-end electronics are digitized and collected with electronic data modules. A readout computer reads experiment conditions from the control system, reads event fragments from the data modules over the local network, and builds events from fragments. These event data are then written to buffer storage and/or transmitted via a local network to archival storage for additional processing and eventual analysis.

The scale of triggering and DAQ systems can be seen in the design of the ALICE experiment at the Large Hadron Collider (LHC) at the European Laboratory for Particle Physics (CERN) in Geneva, Switzerland. The ALICE detector will measure up to 20,000 particles in a single interaction event, resulting in a data collection rate of approximately seventy-five million bytes per event. The event rate is limited by the bandwidth of the data storage system. Higher rates are possible by selecting interesting events and subevents or by efficient data compression.
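
Because each event occupies roughly 75 million bytes, the storage bandwidth directly caps the sustainable event rate. The back-of-envelope calculation can be sketched as follows; the bandwidth figure below is an assumed value chosen only to illustrate the arithmetic, not an ALICE specification.

```python
# Event-rate estimate: rate = bandwidth / event size.
bytes_per_event = 75e6            # ~75 million bytes per event (from the text)
storage_bandwidth = 1.25e9        # bytes per second (hypothetical value)
max_event_rate = storage_bandwidth / bytes_per_event
print(f"{max_event_rate:.1f} events per second")   # about 16.7
```

Doubling the effective bandwidth, whether by faster storage or by data compression, doubles the achievable rate, which is why event selection and compression matter.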


Simulation

A computer simulation (sometimes referred to as a Monte Carlo simulation) of particle interactions in the experimental configuration is essential to most HEP experiments. Computer software providing these simulations plays a fundamental role in the design of detectors and shielding components, in the investigations of the physics capabilities of the experiment, and in the evaluation of background (nonexperimental, for example, cosmic and/or terrestrial radiation) data. Simulation software must be complete and capable of generating simulated experimental data comparable in scope to genuine experimental data. The simulation software must support a description of the experimental detector from the point of view of the materials used and the geometry adopted, both for the structural and the active event-detecting components. The configurations adopted for the data output and the logic of the DAQ are also modeled in order to evaluate their impact on the quality of the physics results and on the overall performance of the detector. The simulation must be able to describe the properties and the physics processes of the particles involved both in the expected signal/output and in the background. Especially important is the capability to handle physics processes across a wide energy range, which in such experiments may span several orders of magnitude. An ideal simulation system is also flexible and open to evolution and to the integration of external tools. This is particularly important since a number of software tools are already in common use within the scientific community, and a particular experimental environment may require extensions of the simulation functionality, for instance, to deal with unusual physical processes. One of the most powerful and widely used simulation toolkits is GEANT4, developed at CERN.

Data Handling and Storage

Particle physics experiments generate enormous amounts of data. For example, the BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to accommodate 200 terabytes (200 million million bytes) of data per year. As of April 2002, the BaBar database contained over 500 terabytes of data in approximately 290,000 files. This database is the largest known in the world (with the possible exception of some with military/government content). Such data rates and database sizes push the limits of state-of-the-art data handling and database technologies.

In order to handle such large volumes of data, experiment data handling and storage systems/database must be designed to

  • provide reliable and robust storage of the raw detector data, simulation data, and other derived data;
  • keep up with production processing; be able to process raw data files within minutes of writing them to tape;
  • provide easy, rapid, and intuitive access to data on a variety of systems at a wide variety of locations where processing and data storage resources are available to physicists;
  • provide accurate detailed information on the processing steps that transformed event data, from the trigger through reconstruction and all the way to the creation of individual or group datasets;
  • provide mechanisms for policy-based allocation and use of disk, central processing unit (CPU), network, and tape drive resources.

Commodity Hardware and Software

Commodity hardware and software refers to the hardware and software architectures and configurations used to accomplish off-line batch and interactive data processing. In the past such processing was often accomplished by large mainframe computers. In recent years, large compute farms (200 or more inexpensive computers) have become a common replacement for these mainframe systems. These compute farms are fundamentally groups of networked desktop systems (without monitors and keyboards) that are housed in a single location and function as a single entity. A compute farm streamlines internal processes by distributing the workload between the individual components of the farm and expedites computing processes by harnessing the power of multiple CPUs. The farms rely on load-balancing software that accomplishes such tasks as tracking demand for processing power from different machines, prioritizing the tasks, and scheduling and rescheduling them depending on priority and on the demand that users put on the network. When one computer in the farm fails, another can step in as a backup. Combining servers and processing power into a single entity has been relatively common for many years in research and academic institutions. Compute farms provide an effective mechanism for handling the enormous computational tasks and services that HEP experiments require. Farms of Intel-based computers running the Linux operating system (OS) have become common at many HEP institutions.
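
The core idea behind the load-balancing software described above can be sketched as "always give the next task to the least-loaded node." The sketch below uses invented node names and task costs; real farm schedulers are, of course, far more elaborate.

```python
import heapq

# Toy load balancer: assign each task to the least-loaded node.
nodes = [(0.0, f"node{i}") for i in range(4)]    # (current load, name)
heapq.heapify(nodes)

tasks = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0]           # estimated task costs
assignment = {}
for cost in tasks:
    load, name = heapq.heappop(nodes)            # pop the least-loaded node
    assignment.setdefault(name, []).append(cost)
    heapq.heappush(nodes, (load + cost, name))   # push it back, heavier

for load, name in sorted(nodes):
    print(name, load, assignment.get(name, []))
```

Keeping the nodes in a heap means each scheduling decision is cheap even for a farm of hundreds of machines.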

The computing grid is the next generation of compute farms. A grid is a distributed system of computing resources (a cyberinfrastructure) in which computers, processor farms, disks, major databases, software, information, collaborative tools, and people are linked by a high-speed network. The term grid was coined as a result of the analogy with an electrical power distribution system. Grid resources are made available transparently to a distributed community of users through a set of new middleware that facilitates distributed collaborative working in new ways. The nine-institution Particle Physics Data Grid collaboration—consisting of Fermi National Laboratory, SLAC, Lawrence Berkeley Laboratory, Argonne National Laboratory, Brookhaven National Laboratory, Jefferson National Laboratory, CalTech, the University of Wisconsin, and the University of California at San Diego—will develop the distributed computing concept for particle physics experiments at the major U.S. high-energy physics research facilities.

Data Analysis and Visualization

Analysis systems are often at the core of an experiment's physics efforts, and the constraints imposed by those systems can heavily influence the physics event reconstruction and analysis framework. Conversely, an analysis system that lacks key features (or worse, implements them incorrectly) can be a serious handicap. Physicists are constantly searching for new and interesting ways to extract physical information through two-dimensional and three-dimensional computer visualization/modeling, animation, histogram plotting, etc. Also key is the development of techniques for data interactivity, that is, methods for interacting with a program or data. These techniques often include graphical user interfaces (GUIs) but also scripting, browsing, and other technologies. There have even been some attempts to utilize virtual reality techniques wherein a physicist becomes "immersed" in experimental data. Development of data analysis and visualization tools has been the subject of numerous international collaborations. The result has been the creation of specialized software libraries used, supported, and maintained by these collaborations but generally available to all physicists.

Control and Monitoring Systems

The infrastructure surrounding experiment detectors is highly complex. The hardware devices used in detectors and the systems of experiments consist of commercial devices used in industry, specific devices used in physics experiments, and custom devices designed for unique applications. The control and monitoring system must ensure that these devices interface correctly with one another by providing testing and error diagnostic functionality. The administrative component of a control and monitoring system provides access to the control of an experiment, often distributed between supervision and process-control functions.

Information Systems and Multimedia

The World Wide Web (WWW) is the best example of how the requirements of physics research and the need for experimental collaboration have led to developments in information systems and multimedia. The Web was developed to allow physicists in international collaborations to access data and information easily, quickly, and in a device-independent (i.e., computer and operating system) manner. There has been increasing use of collaborative environments supporting point-to-point and multipoint videoconferencing, document and application sharing across both local and wide area networks; video on demand (broadcast and playback); and interactive text facilities. Resources such as the HEP preprints database at SLAC and the Los Alamos National Laboratory electronic preprint server, officially known as the e-Print Archive, support physicist research and authoring. The first U.S. web server, at SLAC, was installed to provide access to the preprints database. Other information systems and multimedia applications include electronic logbooks used to improve and replace paper logbooks, and streaming media servers to provide widespread access to seminars and lectures.

See also:Detectors, Collider


Foster, I., and Kesselman, C., eds. The GRID: Blueprint for a New Computing Infrastructure (Morgan Kaufmann, San Francisco, 1999).

GEANT4. <>.

Particle Physics Data Grid. <>.

White, B. "The World Wide Web and High-Energy Physics." Physics Today 51(11), 30–36 (1998).

Bebo White



computer Device that processes data (information) by following a set of instructions called a program. All digital computers work by manipulating data represented as numbers. The tallying principle of the abacus was mechanized in calculating machines, such as those devised by Charles Babbage, in which complicated calculations were processed by means of geared wheels. By the mid-1940s mechanical machines were replaced by electronic versions. Some used groups of electromagnetic switches, called relays, to register binary numbers. At any instant, each switch could be either on or off, corresponding to the digits 1 or 0 in the binary system. Stages in the long-term development of electronic digital computers are termed computer generations. In 1946, engineers at the University of Pennsylvania developed the first generation computer. The 27-tonne machine called ENIAC (Electronic Numerical Integrator and Computer) used electronic valves instead of relays. Programming ENIAC to do a particular task was a lengthy process that consisted of changing wired connections. John von Neumann helped to develop techniques for storing programs in code to avoid this problem. In 1951, UNIVAC I became the first computer offered for general sale. Second generation computers, which appeared in the late 1950s, used transistors to perform the same role as valves. As a result, computers became smaller and more commonplace. In the 1960s, a third generation of computers appeared with the invention of integrated circuits, leading to a further reduction in size. Fourth generation computers, developed in the 1980s, are even smaller, utilizing powerful microprocessors. Microprocessors contain a complete central processing unit (CPU) which controls operations. The latest microprocessors contain more than a million transistors and other components, all in a package little bigger than a postage stamp. Read-Only Memory (ROM) and Random Access Memory (RAM) chips act as permanent and temporary electronic memories for storing data.
A typical desktop computer system consists of: a main unit, containing a central processor together with memory chips and storage devices (usually magnetic disks); a monitor, containing a cathode-ray tube; a keyboard; a mouse; and a printer. Computer programs are usually stored on disks and transferred to the machine's RAM when required. The keyboard and mouse are called input devices, since they allow the user to feed information into the computer. The keyboard enables the user to enter letters, numbers and other symbols. The mouse is a small pointing device moved by hand, which enables the user to control the computer through a graphical user interface (GUI) by positioning a pointer on the monitor screen to select functions from a list. Fifth generation computers using very large-scale integration (VLSI) chips will utilize the developments of artificial intelligence (AI) and may be controlled by spoken commands. A magnetic disk drive, such as a hard disk, acts as both an input and output device. It can supply programs and data to the computer, and store its output. Most computers have CD-ROM drives; these receive data from an optical storage disk. Many other peripherals are used, such as a scanner, which converts images into a digital signal so that they can be stored and displayed by the computer, and other hardware for storing and manipulating sounds. The modern computer market is dominated by PCs – the generic term used to refer to machines based on the original IBM personal computer produced in the early 1980s. All these machines use an operating system (such as DOS or Windows) produced by the giant software corporation, Microsoft. Other popular operating systems include the Apple Macintosh system and UNIX.



computer A device or system that is capable of carrying out a sequence of operations in a distinctly and explicitly defined manner. The operations are frequently numerical computations or data manipulations but also include input/output; the operations within the sequence may depend on particular data values. The definition of the sequence is called the program. A computer can have either a stored program or wired program. A stored program may exist in an alterable (read-write or RAM) memory or in a nonalterable (ROM) memory. See also digital computer, analog computer, von Neumann machine.



com·put·er / kəmˈpyoōtər/ • n. an electronic device for storing and processing data, typically in binary form, according to instructions given to it in a variable program. ∎  a person who makes calculations, esp. with a calculating machine.

