Computers are built of interconnected electronic components encased in sturdy containers. These components—electrical connections, circuit boards, disk drives, monitors, and printers—are referred to as hardware. On its own, computer hardware is functionally useless. For it to be of use, commands must be supplied to the central processing unit, the core device that actually processes information. Every task that a computer performs—from mathematical calculations, to the composition and manipulation of text, to modeling—requires a series of explicit instructions. These instructions are referred to as computer software.
Software tells a computer what to do and how to do it. Software instructions can come directly from a person using the computer (e.g., when using a spreadsheet or word processing program) or may run automatically in the background, without user intervention (e.g., virus protection programs, operating systems, device drivers).
Some special-purpose devices, such as hand-held calculators, wristwatches, many high-tech weapons, and automobile engines, contain built-in operating instructions that cannot be altered. Personal computers found in businesses, homes, schools, and scientific work, however, are general-purpose machines because they can be programmed to do many different types of jobs, from telling time to playing chess, editing video, or performing scientific calculations. They contain enough built-in programming (their operating system) to enable them to interpret and use commands provided by a wide range of external software packages that can be loaded into their memory.
Computer software objects—programs—are collections of specific commands written by computer programmers. Once a software program has been loaded into the memory of a computer, the instructions remain there until deleted, either deliberately or accidentally. The instructions do not have to be reentered by the user every time the computer is used.
English mathematician Charles Babbage conceived the ancestor of modern computers and computer software in the 1830s. Babbage dubbed his sophisticated calculating machine the “analytical engine.” While the analytical engine never became fully operational, Babbage’s design contained all the crucial parts of modern computers. It included an input device for entering information, a processing device (one that operated without electricity) for calculating answers, a system of memory for storing answers, and even an automated printer for reporting results.
The analytical engine also included a software program devised by Ada Augusta Byron, later Countess of Lovelace, the daughter of poet Lord Byron. Her programs were coordinated sets of steps designed to turn the gears and cranks of the machine so as to produce a particular desired result. The instructions were recorded as patterns of holes on punch cards—a system that had been used since the early eighteenth century by operators of weaving looms to produce woven cloth having specific and desired patterns. Depending on the pattern of holes in a card, and on the sequence of different cards, the computer would translate the instructions into physical movements of the machine’s mechanical calculating parts.
The design of this software utilized features that are still used by software programmers today. One example is the subroutine, a series of instructions that can be used repeatedly for different purposes. Subroutines simplify the writing of the software program, as one set of instructions can be applied to more than one task. Another example is the conditional jump, an instruction for the machine to jump to different cards if certain criteria are met. A final example is looping, the ability to repeat instructions as often as needed to satisfy the demands of a task.
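In modern notation, the three ideas look like this. The sketch below is a hypothetical illustration written in Python, not Lovelace's actual punch-card notation; the function names and data are invented for the example.

```python
# Three ideas from early programming, shown in modern form.

def average(values):          # a subroutine: one set of instructions,
    return sum(values) / len(values)   # reusable for any list of numbers

def classify(score):
    if score >= 50:           # a conditional jump: control branches
        return "pass"         # depending on whether a criterion is met
    return "fail"

results = []
for score in [30, 72, 55]:    # a loop: the same instructions repeat
    results.append(classify(score))

print(average([30, 72, 55]))  # approximately 52.33
print(results)                # ['fail', 'pass', 'pass']
```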
The first modern computers were developed by the British and U.S. governments during World War II. Their purpose was to calculate the paths of artillery shells and bombs and to break German secret codes. By present-day standards, these machines were primitive; they used electromechanical relays or the bulky vacuum tubes that were the predecessors of today’s microscopic transistors. As a result, the machines were massive, occupying large rooms. Additionally, their myriad settings were controlled by on-off switches, which had to be reset by hand for each operation. This was time-consuming and difficult for the programmers. Yet these devices were far from simple: they performed their essential tasks successfully and pioneered all the fundamental ideas used today in designing every digital computer.
John von Neumann, a U.S. mathematician, suggested that computers would be more efficient if they stored their programs in memory. This would eliminate the need to manually set every switch each time a problem was to be solved. This and other suggestions by von Neumann transformed the computer from a fancy adding machine into a machine capable of simulating many real-life, complex problems.
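The stored-program idea can be sketched in a few lines of Python. The instruction set below is invented for illustration; the point is that the program sits in memory as ordinary data, and the machine fetches and executes one instruction after another instead of being rewired by hand.

```python
# A toy stored-program machine: the program is data in memory.

program = [                 # stored in memory like any other data
    ("LOAD", 7),            # put 7 in the accumulator
    ("ADD", 5),             # add 5
    ("ADD", 3),             # add 3
    ("HALT", None),         # stop and report the result
]

def run(memory):
    acc, pc = 0, 0          # accumulator and program counter
    while True:
        op, arg = memory[pc]   # fetch the next instruction from memory
        pc += 1
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "HALT":
            return acc

print(run(program))         # 15
```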
The instructions computers receive from software are written in a computer language, a set of symbols that convey information. Like spoken languages used by humans, computer languages come in many different forms.
Computers use a very basic language to perform their jobs. That language ultimately can be reduced to patterns of “on-or-off” responses, called binary digital information or machine language. Computers work using nothing but electronic “switches” that are either on or off, represented by 1 and 0.
Because computers handle on-off signals at the physical level, they can ultimately only execute code supplied to their components as strings of 1s and 0s (instantiated as high and low voltages inside the machine). Yet human beings have trouble writing complex instructions in binary. A simple command to a computer might look like 00010010100101111001010101000110—which, besides taking a long time to write out, most of us would not find easy to distinguish at a glance from any similar string of 32 bits. Because code written in binary form is tedious and time-consuming to write, programmers invented assembly language, which allows them to assign a short, readable code to each machine language command. Another special program, called an assembler, translates those codes back into 1s and 0s for the computer.
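The translation from mnemonics to bit patterns can be sketched as follows. The opcode values and instruction format below are invented for illustration and do not correspond to any real chip's instruction set.

```python
# A simplified sketch of what an assembler does: turn readable
# mnemonics into binary machine words (format invented for illustration).

OPCODES = {"LOAD": 0b0001, "ADD": 0b0010, "STORE": 0b0011}

def assemble(line):
    """Translate one line like 'ADD 5' into an 8-bit word:
    4 opcode bits followed by a 4-bit operand."""
    mnemonic, operand = line.split()
    word = (OPCODES[mnemonic] << 4) | int(operand)
    return format(word, "08b")

source = ["LOAD 7", "ADD 5", "STORE 2"]
print([assemble(line) for line in source])
# ['00010111', '00100101', '00110010']
```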
Assembly language was problematic in that it only worked with computers that had the same type of “computer chip” or microprocessor (an electrical component that controls the main operating parts of the computer).
The development of what are referred to as high-level languages helped to make computers common objects in work places and homes. They allowed instructions to be written in languages that many people and their computers could recognize and interpret. Recognizable commands like READ, LIST, and PRINT could be used when writing instructions for computers. Each word may represent hundreds of instructions in the 1s and 0s language of the machine.
Because the electronic circuitry in a computer responds to commands expressed in terms of 1s and 0s, code written in a high-level language must be translated into machine language before it can be executed. The software products needed to translate high-level language back into machine language are called translators or (more commonly) compilers.
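Python makes this expansion easy to observe. Python compiles source code to bytecode rather than raw machine code, but the principle is the same: one readable high-level line becomes several lower-level instructions. The standard library's dis module shows that translation.

```python
# One high-level line expands into several low-level instructions.
import dis

def area(width, height):
    return width * height       # one readable line of high-level code

# List the low-level operations the interpreter actually runs.
ops = [ins.opname for ins in dis.get_instructions(area)]
print(ops)   # several steps: loads, a multiply, a return
```

The exact instruction names vary between Python versions, but the list always contains separate load, multiply, and return steps.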
Operating-system software is vital for the performance of a computer. Before a computer can use application software, such as a word processing or game-playing package, the computer must run the instructions through the operating system software. The operating system contains many built-in instructions, so that each piece of application software does not have to repeat simple operations (e.g., printing a document).
Disk Operating System, or DOS, was a popular operating system for many personal computers in use until the late 1990s. Microsoft DOS (MS-DOS, the basis of the early generations of the Windows operating system) was for many years the most popular computer operating system. So prevalent was Windows that it prompted charges that Microsoft was monopolizing the software industry. In April 2000, a United States district court ruled that Microsoft had violated antitrust laws and that the company was to be broken up in order to foster competition. However, upon appeal, the breakup order was reversed in 2002. Several less drastic judgments have been upheld in European courts in cases charging Microsoft with monopolistic practices.
An operating system that is gaining in popularity is called Linux. Linux was initially written by Linus Torvalds in 1991. Linux is open-source software, meaning that anyone can modify it. In contrast, the source code that would permit Windows to be modified is the property of Microsoft and is not made available to the consumer.
A Linux-based operating system called Lindows has been devised. Commercially available as of January 2003, Lindows allows a user to run Windows programs on a Linux-based operating system. This development allows users to run Microsoft programs without having to purchase the Microsoft operating system. Not surprisingly, this concept has faced legal challenges from Microsoft.
Starting in 2006, Macintosh computers have contained Intel chips that run not only Apple’s proprietary operating system, OS X, but the Windows operating system as well. The longstanding problem of incompatibility between the Mac and PC worlds had finally begun to dissolve.
Software provides the information and instructions that allow computers to create documents, solve simple or complex calculations, operate games, create images, maintain and sort files, and complete hundreds of other tasks.
Word-processing software, for example, makes writing, rewriting, editing, correcting, arranging, and rearranging words convenient.
Database software enables computer users to organize and retrieve lists, facts, and inventories, each of which may include thousands of items.
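Python's built-in sqlite3 module gives a minimal sketch of database software at work: store an inventory, then retrieve just the items that match a question. The table name and example data are invented for illustration.

```python
# Store an inventory in a database, then query part of it back.
import sqlite3

conn = sqlite3.connect(":memory:")      # a temporary, in-memory database
conn.execute("CREATE TABLE inventory (item TEXT, quantity INTEGER)")
conn.executemany(
    "INSERT INTO inventory VALUES (?, ?)",
    [("paper", 500), ("pens", 24), ("staplers", 3)],
)

# Retrieve only the items running low, sorted by name.
low = conn.execute(
    "SELECT item FROM inventory WHERE quantity < 100 ORDER BY item"
).fetchall()
print(low)   # [('pens',), ('staplers',)]
conn.close()
```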
Spreadsheet software allows users to keep complicated and related figures straight, to graph the results and to see how a change in one entry affects others. It is useful for financial and mathematical calculations. Each entry can be connected, if necessary, to other entries by a mathematical formula. If a spreadsheet keeps track of a computer user’s earnings and taxes, for example, when more earnings are added to the part of the spreadsheet that keeps track of them (called a “register”), the spreadsheet can be set up, or programmed, to automatically adjust the amount of taxes. After the spreadsheet is programmed, entering earnings will automatically cause the spreadsheet to recalculate taxes.
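The recalculation behavior described above can be sketched in a few lines of Python. The 20% tax rate is an invented example value, and the "cells" are simplified to a dictionary; the point is that the tax entry is defined by a formula, so changing earnings automatically changes the tax.

```python
# A sketch of spreadsheet recalculation: one cell is a formula
# over another, so updating earnings updates taxes automatically.

TAX_RATE = 0.20              # invented example rate

def recalculate(earnings):
    """Recompute dependent cells whenever an entry changes."""
    return {"earnings": earnings, "taxes": earnings * TAX_RATE}

sheet = recalculate(1000)
print(sheet["taxes"])        # 200.0

sheet = recalculate(1500)    # add more earnings...
print(sheet["taxes"])        # ...and the taxes adjust: 300.0
```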
Graphics software lets the user draw and create images. Desktop publishing software allows publishers to arrange photos, pictures, and words on a page before any printing is done. With desktop publishing and word processing software, there is no need for cutting and pasting layouts. Today, thanks to this type of computer software, anyone with a computer, a good quality printer, and the right software package can create professional-looking documents at home. Entire books can be written and formatted by the author. The printed copy, or even just a computer disk with the file, can be delivered to a traditional printer without the need to reenter all the words on a typesetting machine.

Computer hardware—The physical equipment used in a computer system.

Computer program—Another name for computer software; a series of commands or instructions that a computer can interpret and execute.
Sophisticated software called CAD, for computer-aided design, helps architects, engineers, and other professionals develop complex designs. The software uses high-speed calculations and high-resolution graphics to let designers try out different ideas for a project. Each change is translated into the overall plan, which is modified almost instantly. This system helps designers create structures and machines such as buildings, airplanes, scientific equipment, and even other computers.
Software for games can turn a computer into a space ship, a battlefield, or an ancient city. As computers get more powerful, computer games get more realistic and sophisticated.
Communications software allows people to send and receive computer files and faxes over phone lines. Education and reference software makes tasks such as learning spoken languages and finding information easier. Dictionaries, encyclopedias, and other reference books can all be searched quickly and easily with the correct software.
Utility programs keep computers running efficiently by helping users search for information and by inspecting computer disks for flaws.
A development of the Internet era has been the creation of file sharing, which allows computer files to be exchanged across the electronic network of the Internet. A computer equipped with the necessary software can download files from other computers over the Internet. A prominent example of this concept—now defunct because of copyright infringement implications—was Napster, which was used from 1999 to 2001 to download music files (MP3 files) to a personal computer. Via file sharing, a user can freely acquire—sometimes legally, sometimes not—many files that would otherwise have to be purchased. Napster was forced to shut down its free music-sharing system in 2001 by court order.
Dean Allen Haycock