In recent years there has occurred enormous technological development and widespread application of automatic information-handling techniques. These are relevant to three broad classes of tasks with which the scientist frequently finds himself faced: (a) data processing, the sorting and summarization of large files of information, usually representing the results of empirical observation; (b) mathematical computation, the derivation of numerical values associated with mathematically defined constructs (for example, matrix and differential equation systems); and (c) simulation, the generation of data supposed to represent the “behavior” of some temporal process (for example, a physical or economic system). Of course, a given task may have more than one of these components; the widely used technique of regression analysis, for example, may be thought of as consisting of a data-processing phase, for the selection and reduction of input data, followed by a computational phase, for the calculation of regression coefficients and related quantities.
Historically, various special-purpose mechanical aids have been used to facilitate operations in one or another of the areas cited. Most recently, the general-purpose automatic digital computer has been developed as an effective and flexible tool for handling all three types of application. Accordingly, after a brief survey of older methods, the main body of the present discussion will be concerned with this device.
Methods for data processing are strongly conditioned by the characteristics of the medium in which the data are recorded. A file in which each data entry is recorded on a physically separate carrier, such as a card, has obvious manipulatory advantages. Thus traditional techniques for data processing have centered around the concept of a unit record. Various schemes for aiding the manual manipulation of files of cards have been devised; a useful one depends on notching the edges of cards in coded patterns to permit mechanization of selection and sorting. The most significant line of development here, however, has been that of the punched card system, in which special-purpose machines are used to perform sorting, collating, reproducing, and tabulating operations on decks of cards carrying information entirely in the form of coded patterns of punched holes. Such systems, the basic principles of which were developed by Herman Hollerith in the closing years of the last century, are in widespread use today. Their efficient use depends on the fact that file manipulation procedures can often be specified as a sequence of elementary operations repetitively performed. Processing of any complexity, however, may require many runs of the data cards through the same or different machines, and this is a factor that limits the magnitude of the tasks that can be attempted by the use of these techniques.
Mathematical computation can be mechanized in two basic ways, digital and analog. In digital computation numbers are represented and manipulated as symbolic entities; the abacus, in which the configuration of beads on a rod is taken as a coded representation of a decimal digit, is an elementary example of a computational aid based on the digital principle. In analog computation, by contrast, numbers are represented by the measure of physical aspects of some system; thus the slide rule, on which numerical magnitudes are marked off as lengths on a series of scales, is a basic example of a device for computing by the analog method.
The desk calculator is a special-purpose digital device that serves to mechanize the most onerous part of pencil-and-paper computing; since it performs only the fundamental arithmetic operations on numbers manually entered into its keyboard, however, elaborate computations still require a great deal of routine human labor. This sort of process can be mechanized one step further by incorporating elementary arithmetic capability into punched card equipment, but the inability of such machines to do more than a repetitive sequence of quite simple operations on a given run is still a limitation.
Manual analog methods include graphic techniques, such as the plotting of lines representing equations in order to determine their solution as an intersection point, and the use of nomograms, which may depend on predrawn forms suited to particular problems. In addition, a number of mechanical devices more advanced than the slide rule have been developed, such as the mechanical integrator. The evolutionary end product of this development is the modern analog computer, in which numerical magnitudes are represented as electrical voltages and circuit units interconnected by means of plug wires correspond to mathematical systems with various parameters. Analog techniques are subject to fundamental limitations regarding accuracy, since they depend on the precision with which physical quantities can be measured, and are restricted as to the types of problems that can be handled, since they must be capable of being formulated in physical terms.
Analog techniques also can be applied successfully in the area of simulation; in fact, here the distinction between mathematical computation and simulation becomes blurred. For, if an analog computer is set up to “solve” the differential equation that governs the behavior of a physical system through time, it can be thought of in a very real sense as a simulator of that system. This has led to the exploitation of such techniques in experimental work, where there is to be interaction, say, between an actual physical system and a simulated one.
Simulation by analog techniques is often useful but is limited; by contrast, any system with a mathematical description can be simulated by digital techniques. In fact, however, before the advent of the general-purpose computer, digital simulation of systems of any complexity was so inefficient as to be of little practical utility.
Further reading on the material of this section may be found in Brooks and Iverson (1963, pp. 52–144), Calingaert (1965, pp. 87–126), and Borko (1962, pp. 22–57).
The modern automatic digital computer, at the center of a system incorporating appropriate input, output, and auxiliary information storage devices, can be applied to data processing, mathematical computation, and simulation tasks in an integrated and flexible manner. This versatility is partly due to the physical speed and capacity made possible by contemporary technology, and partly due to a feature of the computer that was largely missing in earlier mechanisms—the internally stored program. Once a program or schedule of operations has been prepared, it can be introduced into the computer and used to control a long complicated sequence of steps, which are thus executed automatically at an extremely rapid rate without human intervention. The possibility of preparing programs of a general nature, which can then be tailored to particular cases by the specification of parameter values, and the fact that programs, once prepared, may be run by different users on different computers make the digital computer a tool of such power as to permit consideration of projects that would not formerly have been contemplated at all; this in turn often leads to the development of more advanced models and hence acts to influence the substance of the discipline to which the computer is applied. Thus, for example, the availability of programs incorporating provision for preliminary data editing, extensive computation of residuals, and graphic display of results has markedly influenced the manner in which statistical problems may be attacked.
This potential, however, is limited by the capacity of people to plan and implement effective programs; furthermore, the nature of computer operation is such as to make it most desirable for the user to have some comprehension of its inner workings rather than to depend completely on a professional programmer to interpret his needs. The following sections therefore contain a compact introduction to the subject, which, it is hoped, will at least enable the reader to become aware of some of the more obvious pitfalls that the computer user must anticipate.
In order to avoid too narrow a conception of function, it is appropriate to think of the general task for which an automatic digital computer is designed as “symbol processing,” rather than “computation” in the strictest (numerical) sense. The organization and operation of a computer may be basically described in terms of five functional units, as depicted schematically in the block diagram of Figure 1.
The storage unit (usually picturesquely termed the “memory”) acts as a repository for symbolic information and also as an intermediary for all interaction among other units. Internal processing of information is implemented by the transmission of symbolic operands from storage to the operations unit, where they are subjected to arithmetic or other transformations, the results of which are then transmitted back to storage. In order to set this process up initially an input unit is provided to introduce information from the external world into the storage unit, and in order that final results may be provided for the user there must also be an output unit for the opposite function.
Thus far, the functions described are roughly analogous to those performed by a man doing a hand calculation: he maintains a worksheet (storage) from which he derives numbers to be entered into the keyboard of his desk calculator (operations) and on which he copies the results; before the calculation proper starts some initial values may have to be entered (input), and when it is finished a separate sheet containing only the relevant results is prepared (output). The word “automatic” applied to a computer, however, implies mechanization not only of these physical processes but of the directing function supplied by the man, who causes a sequence of actions to be performed on the basis (presumably) of a predetermined schedule. In the computer the actuating function is performed by the control unit, and what distinguishes the computer from earlier devices is that the sequence of actions called for by this control can be in accordance with a pattern of arbitrary complexity, as specified by a symbolic program recorded in storage. Before processing can begin, therefore, a program must be introduced (via the input unit) into storage; the fact that the computer can be switched from one task to another by merely introducing a different program is what gives it its “general purpose” designation. Note also that the program, because it consists of symbolic information recorded in storage, can be altered by internal operations; this gives rise to powerful and flexible procedural techniques in which programs systematically modify themselves in order to achieve maximum processing efficiency.
Technological considerations dictate the use of bistable, or two-state, devices as the basic components of digital systems; this leads to the conception of all symbolic information ultimately coded in terms of just two symbols, the bits 0, 1. Particular patterns of several bits may then be interpreted as coded characters (letters, digits, etc.), which play the role of the elementary units of symbol processing. This notion is familiar from consideration of punched card or teletype tape codes, in which specified patterns of no-punch, punch represent characters drawn from a given set. In a computer, as in these contexts, the basic character set may include in addition to numeric digits the letters of the alphabet and selected special symbols such as mathematical signs and punctuation marks.
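The coding of characters as bit patterns can be illustrated in a present-day programming language; the 6-bit patterns below are invented for the purpose and are not those of any historical character code.

```python
# A hypothetical 6-bit character code: each character of the basic set is
# represented by a fixed pattern of bits. The particular mapping here is
# illustrative only.
code = {'A': 0b010001, 'B': 0b010010, '5': 0b000101, '+': 0b100000}

def encode(text):
    """Translate a string into its list of 6-bit patterns."""
    return [format(code[ch], '06b') for ch in text]

print(encode('AB5'))  # ['010001', '010010', '000101']
```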
The unit of transmission of information within the computer may be the character, but on the more powerful computers efficiency dictates that characters be transmitted and operated upon in larger aggregates called for convenience words. Data-processing applications generally involve much manipulation of alphanumeric words, considered simply as arbitrary strings of characters; arithmetic operations, however, are meaningful only when applied to numeric words, which represent numbers in a more or less conventional manner. Finally, words formed according to a particular format are interpretable by the control unit as instruction words, and it is in terms of these that programs are specified for automatic execution.
Such special interpretations of computer words are applied only by the operations and control units; as far as storage goes, there is no distinction. Storage is organized into a series of subunits called locations, each of which is designated uniquely by an integer label, its address. The capacity of each addressed location is generally the same, being a character or a word of some standard length. Computer instructions are then typically designed to specify an operation to be performed and one or more storage addresses giving the locations at which operands are to be found or results stored.
Computers with storage capacities on the order of hundreds of thousands of characters, carrying out internal operations at rates of hundreds of thousands per second, have become available in the last few years, and in the more advanced systems both capacity and rate may effectively be in the millions. Although basic input/output operations (card or paper tape reading/punching, or printing) are generally much slower, delays can be reduced to a considerable extent by designing these units so that their operation can proceed concurrently with the main internal sequencing. Finally, the capabilities of the computer can be considerably enhanced by embedding it in a system containing additional bulk storage (perhaps millions of characters stored in magnetic form on drums or disks) and intermediate input/output devices (magnetic tape units, for example), all communicating via the internal storage unit.
Further discussion may be found in Gregory and Van Horn ([1960] 1963, pp. 58–125), Green (1963, pp. 12–29), and Borko (1962, pp. 60–91).
The popular conception of computer function seems to be that reflected in statements such as “Now they have a computer that can check everybody’s tax return” or “Now they’ve come up with a computer that can simulate world politics.” Although the area of intended application in the broad sense may influence design, as the previous discussion suggests, computers are not built to do specific tasks but are programmed to do them. This is important to the user because it shows the need for focusing attention not on computer availability but on program availability. And although the uninitiated but reasonably knowledgeable scientist may be aware of this fact, he often fails to anticipate the extent to which unqualified descriptions such as “regression analysis” fail to characterize a task definitively; details of matters such as input/output formats and computational methods are what determine whether or not a program is appropriate to the needs of a given user, and here information may be poorly communicated or not provided at all. Even worse, it may be erroneously communicated, partly due to lack of insight on the part of the program designer and partly because the tools for precise communication in the area of symbolic processing are as yet only imperfectly developed. For these reasons it is desirable that the user have at least a rudimentary notion of the nature of computer instructions and programming techniques. This section and the one following outline briefly some of the concepts involved.
The first problem to be faced when planning to use a computer is that of precise specification of the task to be carried out, in a form sufficiently perspicuous so that the procedure can be communicated in human terms. Some form of flow chart can generally fulfill this function on several levels, from over-all to detailed. The basic idea here will be illustrated by an example.
Imagine that it is required to design an automatic procedure for computing the sample mean and standard deviation,

x̄ = (1/n) Σ xi ,    s = √( Σ (xi − x̄)² / (n − 1) ),
for several sets of xi, not all of which necessarily contain the same number n of items (in all cases it must be that n ≥ 2, however). If these are to be processed in a single input file (perhaps a deck of punched cards) some means must be provided for distinguishing the data sets from each other, and one practical way to do this is to suppose each file entry (card) contains two values x, m, the first of which is normally data (an xi value) and the second of which is a “tag” to distinguish legitimate data items (m = 0) from “dummy” items used to signal the end of a data set (m = 1) or the end of the entire file (m = 2).
A complete flow chart for this task is given in Figure 2; it covers the repetitive reading of data sets and the computation and recording (for example, printing) of n, x̄, and s for each. The flow chart is a formal specification of a procedure that is described as follows:
Initially, set the sum r and the index i to 0, in anticipation of a data run. Then read a pair of values x, m and test (in two steps) whether m is 0 or 1. If m = 0, add x to the sum r, increment the index i, and enter x as xi in a data list; then return to read the next entry. If m = 1 the data set is complete; the current value of i is then n for the set, and x̄ is determined as r/n. Now compute a second sum p by adding the values (xi − x̄)² for i = 1, 2, …, n; following this, compute s as the square root of q = p/(n − 1) and record the values n, x̄, s. Repeat this entire process until the test gives m ≠ 0 and m ≠ 1 (so presumably m = 2); this indicates that there are no more data sets and the procedure is finished.
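The procedure just described can be transcribed into a present-day programming language (used here purely for illustration; no such tool existed when these techniques were developed). The input file is modeled as a list of (x, m) pairs under the tag convention m = 0 (data), m = 1 (end of a data set), m = 2 (end of the entire file).

```python
import math

def process(records):
    entries = iter(records)
    results = []
    while True:
        r, xs = 0.0, []              # r <- 0; the data list starts empty
        while True:
            x, m = next(entries)     # read a pair of values x, m
            if m != 0:
                break
            r += x                   # r <- r + x
            xs.append(x)             # enter x as x_i in the data list
        if m != 1:                   # presumably m = 2: end of the file
            return results
        n = len(xs)                  # the current value of i is n
        mean = r / n                 # x-bar is determined as r/n
        p = sum((xi - mean) ** 2 for xi in xs)
        s = math.sqrt(p / (n - 1))   # n >= 2 is required, as noted above
        results.append((n, mean, s)) # record n, x-bar, s

data = [(2.0, 0), (4.0, 0), (6.0, 0), (0.0, 1),
        (1.0, 0), (3.0, 0), (0.0, 1),
        (0.0, 2)]
print(process(data))  # [(3, 4.0, 2.0), (2, 2.0, 1.4142135623730951)]
```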
The stylized formalism of the flow chart employs mathematical notation of the conventional sort but also makes use of special symbols signifying procedural action. These are “←”, meaning “set” (to a value); “:”, meaning “compare” (for branching); and “—”, meaning “execute” (a standard subprocedure). Thus “r ← r + x” means “set r to its former value plus x”; “i : n” means “compare i and n” (which causes the procedure to repeat as indicated if i ≤ n and to go ahead if i > n); and “SQRT(s; q)” means “execute a subprocedure SQRT(s; q)” (which is designed to have the effect s = √q).
The flow chart shows the structure of the computation in terms of procedural loops and re-entry points, without concern for details of printing formats, etc., which of course must be supplied if the procedure is to be actually carried out by computer. It is important to realize that these flow chart conventions are independent of the characteristics (and the idiosyncrasies) of any particular computer but that they do reflect general aspects of computer operation; thus both the notion that indices such as i can be manipulated independently in dealing with sets of values xi and the notion that parametrized subroutines such as SQRT( ; ) can be called upon and executed by a main program correspond to important features of actual computer operation.
The flow chart symbolism given here, for which detailed rules can be stated, is suitable primarily for expressing procedures of mathematical computation. The same basic idea, however, can be used to specify data-processing procedures; the main additional concept needed is a more versatile way of representing external files of given format and organization. Also, simulation and other internal processing applications that may involve non-numeric elements can be handled by an extension of the flow charting technique.
Although no standardization of flow chart convention exists, and the choice depends to a certain extent on the application at hand, it nevertheless seems reasonable to ask that a flow chart represent an actual computer program accurately enough so that the filling in of details is routine and is unlikely to lead to serious discrepancies between what the user thought was to be done and what the programmer in fact specified was to be done.
For additional discussion of flow charting see Calingaert (1965, pp. 19–23).
To get some insight into the way in which a flow chart specification translates into a computer program, assume a computer with 1,000 storage locations 000, 001, …, 999, each of which holds a 12-character word interpretable as an alphanumeric datum, a number, or an instruction in the format “ωαβγ”, where ω is a 3-character alphabetic operation code and each of α, β, γ is a 3-digit storage address.
Then the instruction

ADD 500 501 500

might have the significance “Add the contents of storage locations 500 and 501, and record the result in storage location 500.” This would be appropriate only in case the 12-character words initially contained in storage locations 500 and 501 were in numeric format (say, 11 decimal digits preceded by algebraic sign), so that the addition operation has meaning. The execution of this instruction by the control unit results in the destruction of the original number in storage location 500, since it is replaced by the sum; the number in storage location 501, however, is not altered by the operation.
In order to be executed by the control unit, the ADD instruction itself must exist some place in storage, say at location 099; thus one must distinguish between the locations (here 500, 501) to which the instruction explicitly refers and the location (here 099) at which the instruction resides. Concern with the location of the instruction itself is necessary because of the sequential nature of computer operation; somehow there must be specified the order in which instructions are obtained and executed by the control unit, and the most straightforward way to do this is to design the computer so that instructions are executed in sequential order by resident address unless there is an explicit indication (in an instruction) to the contrary. To see the manner in which such an indication is typically set forth, consider a second hypothetical instruction
COM 500 550 099

which is to be given the interpretation “Compare the numbers in storage locations 500 and 550, and if the former is less than the latter, take the next instruction from storage location 099.” This effects a “conditional jump” in the instruction sequence; if the stated condition is met, the next instruction executed is that residing in location 099, but otherwise it is the instruction whose address follows in sequence. If the COM instruction itself resides in location 100, then the pair of instructions introduced forms a “program loop” schematized as follows:

099: ADD 500 501 500
100: COM 500 550 099
The effect of this piece of program is to add the contents of 501 repeatedly to the contents of 500, which therefore acts as an “accumulator” for the sum; this addition process is continued (by virtue of the “jump back” to 099) until the accumulated sum exceeds the number residing in location 550, at which time the program proceeds in the normal sequence with the instruction following that numbered 100.
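The fetch-and-execute behavior of this loop can be mimicked in a few lines of present-day code, assuming the hypothetical machine described above; instruction words are shown as tuples for readability rather than as 12-character strings.

```python
# A toy rendering of the two-instruction program loop. Location 500 acts
# as the accumulator for the sum, 501 holds the increment, and 550 the
# bound; the loop occupies locations 099 and 100.
storage = {500: 0,                      # the running sum, initially 0
           501: 1,                      # the constant increment
           550: 5,                      # the bound
           99:  ('ADD', 500, 501, 500), # (500) + (501) -> 500
           100: ('COM', 500, 550, 99)}  # if (500) < (550), jump to 099

pc = 99                                 # control starts at the ADD instruction
while pc in (99, 100):                  # the program occupies just these cells
    op, a, b, c = storage[pc]
    if op == 'ADD':
        storage[c] = storage[a] + storage[b]
        pc += 1                         # normal sequential advance
    elif op == 'COM':
        pc = c if storage[a] < storage[b] else pc + 1

print(storage[500])  # 5: the loop repeated until the sum reached the bound
```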
If one imagines that locations 500, 501, and 550 contain the values i, 1, and n, respectively, then the two-word loop is a computer implementation of the flow chart component shown in Figure 3 (which happens to perform a rather meaningless function in itself but with appropriate instructions inserted between the ADD and COM becomes one of the loops of Figure 2). This serves to suggest the manner in which flow charts may be translated into computer programs. Two important techniques in this connection, indexing of a set of values with a computed value (so as to be able to reference a general element xi) and linking of subprocedures into a program (so as to be able to execute a subroutine such as SQRT and jump back to the main program at the proper point when finished), may be accomplished in a computer by programmed modification of instruction addresses.
Although the example illustrates some facets of computer control that are typical, the details of coding, format, etc. vary widely in practice. For example, it is common to have several different coding systems employed in the same computer for various purposes, perhaps a character code used for general alphanumeric information, a different code for number representation (often based on the binary radix 2 instead of the decimal radix 10), and a special format, not interpretable in either alphanumeric or pure numeric terms, for instructions. Also, the basic “three address” pattern of instructions given here is not typical; usually instructions are “one address” and require explicit recognition of registers for retaining intermediate results in the operations and other units. Thus on most computers the addition operation of the example requires three instructions: one that clears an accumulator register in the operations unit and records the contents of location 500 in it, one that adds the contents of 501 to this and retains the sum in the accumulator register, and one that records the contents of the accumulator in location 500. One or more special index registers are also customarily provided to facilitate indexing and linking operations.
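The three-instruction one-address sequence just described can be sketched as follows; the single accumulator register and the mnemonics CLA (clear and add), ADD, and STO (store) are illustrative, not those of any particular machine.

```python
# One-address execution of the addition: each instruction names one
# storage address, and intermediate results live in the accumulator
# register of the operations unit.
storage = {500: 7, 501: 3}   # two numeric words
accumulator = 0              # register in the operations unit

def execute(op, addr):
    global accumulator
    if op == 'CLA':                   # clear accumulator, then add operand
        accumulator = storage[addr]
    elif op == 'ADD':                 # add operand into accumulator
        accumulator += storage[addr]
    elif op == 'STO':                 # record accumulator back into storage
        storage[addr] = accumulator

for op, addr in [('CLA', 500), ('ADD', 501), ('STO', 500)]:
    execute(op, addr)

print(storage[500])  # 10
```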
Besides instructions for internal operations such as those described, the computer must have instructions for controlling the operation of input/output and other auxiliary devices with which it communicates; these would of course be used in procedures such as the READ and RECORD subroutines of the earlier example. The repertoire of a large computer may number instructions in the hundreds, many of these representing minor variants of each other to permit processing efficiency.
Two characteristic features of computer programs may be perceived from the discussion: a procedure is formulated for execution in terms of minutely small steps, and the symbolic representation of these steps is not conducive to quick apprehension of their effect in the aggregate. Not only does the preparation of a program for a given task require a great amount of routine labor, but the bookkeeping necessitated by the numerical addressing scheme tends to produce errors. Since computers are generally capable of performing routine tasks more reliably than humans, the question naturally arises of whether some of the burden of program preparation cannot be passed along to the very devices intended to execute the programs. This leads to the concept of program-preparation programs, which translate programs written in some language more or less “procedure-oriented” into direct computer code.
A basic example is the assembly program, which permits the programmer to write instructions in a form such as
GO:   ADD, I, ONE, I
      COM, I, N, GO.
These instructions may be thought of as assembly language versions of those given earlier. GO is a label corresponding to the instruction address 099, and I, ONE, and N are labels corresponding to the data addresses 500, 501, and 550, respectively. The programmer writes in terms of such mnemonic labels, and the assembly program translates these symbolic instructions (made up of letters, commas, spaces, etc.) into corresponding direct computer instructions in appropriate format; generally, the programmer does not specify at all the actual numerical addresses that will be assigned to the various labels by the assembly program. There are also other features incorporated into assembly programs to simplify the specification of numeric constants, etc.
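The label-to-address substitution an assembly program performs can be sketched as a two-pass procedure. The addresses and mnemonics follow the running example; real assemblers of course do far more.

```python
# A minimal sketch of label resolution: pass 1 assigns an address to each
# labeled instruction, pass 2 substitutes numerical addresses for all
# symbolic operands.
source = [('GO', 'ADD', 'I', 'ONE', 'I'),   # (label, operation, operands)
          (None, 'COM', 'I', 'N', 'GO')]

def assemble(lines, data_table, origin=99):
    table = dict(data_table)
    for offset, (label, *_rest) in enumerate(lines):   # pass 1: labels
        if label:
            table[label] = origin + offset
    return [(op, table[a], table[b], table[c])         # pass 2: substitution
            for _label, op, a, b, c in lines]

print(assemble(source, {'I': 500, 'ONE': 501, 'N': 550}))
# [('ADD', 500, 501, 500), ('COM', 500, 550, 99)]
```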
Assembly language makes a program more perspicuous but does not compress its level of detail. Higher order program-preparation programs exist for this purpose; for example, an algebraic translator, which permits specification of the computer action already discussed in a form such as
GO:   I = I + 1
      IF I < N, GO.
This describes the procedure in terms much closer to a flow chart specification and also in general achieves an over-all symbolic compression. The program to translate such a language into computer code is correspondingly more complicated, however, and the resulting computer version may be considerably less efficient than a program prepared directly in assembly language. There is an added disadvantage, which can only be appreciated through experience: the programmer is a level removed from the actual computer process, and this raises problems in the area of precise specification of computer procedures.
Further discussion of computer programming is given in Borko (1962, pp. 92–133), Green (1963, pp. 30–64, 75–99), and McCracken and Dorn (1964, pp. 1–42).
From the previous section it may be seen that the user does not in general deal with the computer directly but through the mediation of some program system that provides for translation from procedure-oriented language into actual computer code. On the larger computers, where efficiency is at a premium, this function is expanded to include the batching of programs for sequential execution with a minimum of human intervention and the keeping of the computing log and accounting records. Such a system makes for highly efficient computer utilization but removes the user one level more from direct access to the computer; “debugging” under such circumstances, for example, requires a succession of runs, with the user progressively detecting and correcting the errors in his program.
This implies that in addition to a programming language the user must learn (or have access to someone who knows) the operating techniques associated with the particular computer installation with which he deals. The knowledge of what translators and diagnostic programs are available, and of details concerning the formats in which control information must be submitted with programs in order that they be usable, assumes a perhaps disproportionate but nevertheless critical importance. Although details vary widely from installation to installation, the type of support one may look for is indicated by the following categorization of “system routines”:
loader—an elementary “job initiation” program that controls the reading of a program to be executed, perhaps supplying address modification based on the locations the program is to occupy in storage, etc.
assembler—a program that translates a program in assembly language (which may consist of several sections, each programmed using symbolic addresses and mnemonic operation codes) into a single coherent computer code (in a form that can be processed by a loader).
translator—(often called a “compiler”) a program that translates a program in some algebraic or other problem-oriented language into some more elementary (perhaps assembly language) form.
monitor—a program that synchronizes translating, assembling, and loading functions and, further, has a certain batching, scheduling, and accounting capability permitting a whole set of jobs to be run with minimal human intervention.
multiprocessor—a supermonitor based on an ability to carry out several functions concurrently (for example, those of a central processor and several input/output devices) and dynamically schedule these for maximal efficiency.
The order of listing here corresponds to both historical evolution and increasing level of complexity; monitor and multiprocessing facilities, in particular, are appropriate only to the more powerful computer installations. At the time of this writing there is evolving an even more advanced program-system concept, growing out of the idea of multiprocessing: an executive system that integrates one or more central processors, several auxiliary input/output processors, and a large number of remote user-operated consoles in such a way as to make efficient use of the centralized facilities while at the same time permitting each user to interact directly, in an “on-line” fashion, with the computer complex. In order to achieve this, the system must be able to run programs piecemeal and respond, in what is to a human a short time, to a multiplicity of external demands. The requirements of this sort of “time-sharing” operation have led to radical new concepts in both the computer-design and programming areas.
Translators are generally identified by the problem-oriented language that they accept as input, such as Fortran, Algol (two general algebraic languages), Cobol (a standard file-processing language), Simscript (a simulation language), etc. A similar function is accomplished less efficiently but often more straightforwardly by an interpreter program, which both translates and executes simultaneously, thereby carrying out the whole computing task in one rather than several phases.
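The distinction can be made concrete with a toy sketch (hypothetical, not any historical system): an interpreter for a minimal postfix language decodes and executes each symbol as it is encountered, carrying out the whole task in one phase rather than translating first and executing afterward.

```python
# A minimal interpreter sketch: each token of a tiny postfix language
# is decoded and executed immediately, with no separate translation phase.
def interpret(program):
    stack = []
    for token in program.split():
        if token == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)          # execute the operation on decode
        elif token == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            stack.append(float(token))   # literal operand
    return stack.pop()

print(interpret("2 3 + 4 *"))  # (2 + 3) * 4 → 20.0
```

A translator would instead emit an equivalent lower-level program for later execution; the one-phase approach is simpler but repeats the decoding work every time the program runs, which is the inefficiency referred to above.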
These system programs, along with various diagnostic programs for analyzing both computer operation and program operation, may be categorized as service routines; they serve the user by facilitating the execution of the particular procedure whose results he desires. This function may be contrasted with that of standard routines and standard subroutines, generally available on a “library” basis, which actually carry out all or a part of the desired task. One may anticipate the availability of subroutines for at least some of the following:
radix conversion—if the computer uses a number base other than 10, which is often the case. (This function will usually be incorporated at the assembly level.)
special arithmetic—“floating-point” (flexible scale) and “double-precision” (extended length) numeric manipulation if not directly available on the computer instruction level; also complex arithmetic, etc.
function evaluation—calculation of square root, exponential and logarithm, sine and cosine, etc. for arbitrary arguments. (A call for standard functions may be incorporated into translator specifications.)
polynomial and matrix operations—addition, multiplication by scalar, multiplication; evaluation, calculation of zeros (for polynomials); transposition, inversion, diagonalization (for matrices).
input/output editing—formatting and sequencing of card and paper tape reading and punching; magnetic tape, drum, and disk file operations; printing and other forms of graphic display. (This function will sometimes be incorporated at the monitor level.)
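As an illustration of the first item in the list, the logic of a radix-conversion subroutine can be sketched as follows (a simplified version for non-negative integers; the function names are chosen here for illustration, and actual routines must also handle signs, fractions, and fixed word lengths):

```python
# Simplified radix-conversion sketch: non-negative integers only.
def to_base(n, base):
    """Convert a non-negative integer to a list of digits in the given base."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        n, r = divmod(n, base)   # peel off the least significant digit
        digits.append(r)
    return digits[::-1]          # most significant digit first

def from_base(digits, base):
    """Convert a digit list in the given base back to an integer."""
    n = 0
    for d in digits:
        n = n * base + d         # Horner's scheme
    return n

print(to_base(29, 2))            # → [1, 1, 1, 0, 1]
print(from_base([1, 1, 1, 0, 1], 2))  # → 29
```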
By definition, a subroutine is supposed to serve as part of a larger program; its usefulness therefore depends not only on its conception with regard to generality and comprehensiveness but on its being accompanied by an appropriate set of specifications characterizing its action and the directions for “connecting” it into the program that uses it. Standard routines, by contrast, are programs for tasks that can be regarded as independent in their own right; the appropriate form of these depends very much on the applications environment. Programs for regression, factor analysis, linear programming calculations, and solution of differential equations are generally in this category, as well as those for standard file-processing operations such as sorting and input/output media conversion (card-to-tape, tape-to-card, etc.).
The importance to successful computer usage of a good available set of service routines, standard routines, and subroutines can hardly be overestimated, and it is this that leads to “software” costs equaling or exceeding “hardware” costs for an installation, a fact often unanticipated by the administration that sets up the budget. Some alleviation of the difficulty is provided by various “user organizations,” generally associated with a particular class or type of computer, which maintain a pool of programs available to their members. Such programs are guaranteed only to meet minimum standards of documentation, however, and making effective use of work already done by others remains one of the most formidable and often exasperating problems facing the user who wishes to reap the benefits afforded by computers without investing the time to understand fully the device he is entrusting with a major role in his research or operations.
The topics in this section are further developed in Borko (1962, pp. 140–170), Green (1963, pp. 65–74), and Gregory and Van Horn ([1960] 1963, pp. 439–478).
Although computer applications in general are subject to the problem that imperfect understanding of procedures by the user may lead to unsatisfactory results, computation related to familiar mathematical concepts is especially dangerous in this regard. This is because the user often starts out with preconceptions based on his mathematical understanding and is thus likely to neglect or misjudge the effect of computational error, which may be considerable. It is customary to measure error in either absolute or relative units, depending on circumstances; thus if x is a computed value assumed to represent some “true” value x_T, then δ = x − x_T, or possibly its magnitude |δ|, is defined as the absolute error in x, and δ/x_T (which is approximated by δ/x) is defined as the relative error. Computational error may also be classified according to source as follows: inherent error, reflecting inaccuracies in initially given data; generated error, reflecting inaccuracies due to the necessity of rounding or otherwise truncating the numeric results of arithmetic operations; and analytic error, reflecting inaccuracies due to the use of a computing procedure that represents only an approximation to the theoretical result desired.
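In modern notation these definitions translate directly into code; a minimal sketch, with function names chosen here for illustration:

```python
def absolute_error(x, x_true):
    """delta = x - x_true; its magnitude |delta| is also called the absolute error."""
    return x - x_true

def relative_error(x, x_true):
    """delta / x_true, often approximated in practice by delta / x."""
    return (x - x_true) / x_true

x_true = 3.14159            # the "true" value
x = 3.14                    # a computed value standing in for it
print(absolute_error(x, x_true))
print(relative_error(x, x_true))
```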
In the example of the mean and standard deviation calculation given earlier, one might wish to analyze the effect of inherent error in the input x values on the computed x̄ and s values, that is, to analyze the manner in which errors λ_i attributed to the x_i induce errors δ_x̄ and δ_s in x̄ and s, respectively. In the case of simple functions the effect can be estimated using the traditional techniques of calculus; in a calculation of complicated structure, however, it may be difficult even to get a feeling for the manner in which errors in input values propagate. Of course, in this type of situation, inherent errors in the data may be just what the calculation itself is concerned with, and hence it might be more appropriate to consider the input values x as exact as far as the computation is concerned. Even then it would be important, however, to have some idea of the “sensitivity” of the computed results to changes in input; one might want to know, for example, that the closer x̄ is to zero, the greater the relative effect on it of given relative changes in the x values.
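Such sensitivity can also be explored empirically, by perturbing the inputs at random and observing the induced change in a computed result. The sketch below (illustrative only, not a rigorous error analysis; the function name is invented here) does this for the mean, and exhibits the effect described: when the mean is near zero, small relative changes in the inputs produce much larger relative changes in the result.

```python
import random

def mean(xs):
    return sum(xs) / len(xs)

def sensitivity_demo(xs, rel_perturbation=1e-4, trials=1000):
    """Estimate the largest relative change in the mean induced by
    random relative perturbations of the individual inputs."""
    m0 = mean(xs)
    worst = 0.0
    for _ in range(trials):
        ys = [x * (1 + random.uniform(-rel_perturbation, rel_perturbation))
              for x in xs]
        worst = max(worst, abs((mean(ys) - m0) / m0))
    return worst

random.seed(0)
# Mean well away from zero: relative changes in the mean stay small.
print(sensitivity_demo([10.0, 10.1, 9.9]))
# Mean near zero (about 0.033): the same input perturbations are
# greatly amplified in relative terms.
print(sensitivity_demo([10.0, -10.1, 0.2]))
```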
In any actual implementation of the procedure diagrammed in Figure 2 there will be generated error introduced at each arithmetic step, the exact nature of which depends on the rounding rules followed by the computer arithmetic unit, which are often obscure to one who communicates only via a procedure-oriented language. Furthermore, analytic error is necessarily introduced at the stage where the square root is called for, since functions such as this can be computed only approximately. The closeness of the approximation, and hence the level at which analytic error is introduced, depends on properties of the square root procedure, which also are frequently not very well documented for the user.
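The point about the square root can be made concrete. A square-root routine typically iterates an approximation until some tolerance is met, and that tolerance sets the level of analytic error introduced; a sketch using Newton's method (the stopping tolerance and starting guess here are illustrative choices, not those of any particular library routine):

```python
import math

def newton_sqrt(a, tol=1e-10):
    """Approximate sqrt(a) by Newton's iteration; the stopping tolerance
    bounds the analytic error introduced by truncating the iteration."""
    if a == 0:
        return 0.0
    x = a if a >= 1 else 1.0        # crude starting guess
    while abs(x * x - a) > tol * a:  # relative stopping criterion
        x = 0.5 * (x + a / x)        # Newton step for f(x) = x^2 - a
    return x

approx = newton_sqrt(2.0)
print(approx)
print(abs(approx - math.sqrt(2.0)))  # the analytic error actually incurred
```

Tightening the tolerance reduces the analytic error at the cost of more iterations, which is exactly the trade-off ("more refined approximations" versus effort) discussed below.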
When it is considered that error effects of all three types may be interwoven in a calculation, and may propagate through all the calculational steps in a complicated manner, it is seen that the problem of error assessment is nontrivial. Furthermore, it is not generally appreciated how catastrophic relative error propagation effects can be: the subtraction of two numbers, each with a relative error of order 10⁻⁸, which happen to have a relative difference of order 10⁻⁶ will yield a result with relative error of order 10⁻², and the arithmetic procedures used on most contemporary computers are such that this loss of significance is not brought to the user’s attention unless special monitoring techniques are programmed into the calculation. And while it is generally true that both generated and analytic error can be minimized by computational techniques (carrying more precision and using more refined approximations), an assessment is required to know when such measures should be resorted to. Poor computational technique can magnify the effect of generated and analytic error unnecessarily; a typical case is use of the formula Σx_i² − nx̄² instead of Σ(x_i − x̄)² for calculating the standard deviation, since the former depends on the subtraction of two large and nearly equal numbers. Finally, inherent error, as its name implies, depends only on the level of input error and the functional connection between input and output values, neither of which is subject to remedial action in the computational context. When small relative errors in input induce large relative errors in output, a process is said to be “ill-conditioned”; if this obtains, the most one can hope for is a means of detecting the situation.
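The instability of the one-pass formula Σx_i² − nx̄² can be demonstrated directly by comparing it against the two-pass form Σ(x_i − x̄)² on data having a large mean and a small spread; a sketch (the function names are chosen here for illustration):

```python
import math

def sd_one_pass(xs):
    """Unstable form: subtracts two large, nearly equal quantities."""
    n = len(xs)
    mean = sum(xs) / n
    var = (sum(x * x for x in xs) - n * mean * mean) / n
    return math.sqrt(max(var, 0.0))  # cancellation can even drive var negative

def sd_two_pass(xs):
    """Stable form: sums squared deviations from the mean."""
    n = len(xs)
    mean = sum(xs) / n
    return math.sqrt(sum((x - mean) ** 2 for x in xs) / n)

# Large mean, tiny spread: the one-pass result loses most of its digits.
xs = [1e8 + d for d in (0.0, 0.1, 0.2)]
print(sd_two_pass(xs))   # close to the true value, about 0.0816
print(sd_one_pass(xs))   # visibly corrupted by cancellation
```

In double-precision arithmetic the squares here are of order 10¹⁶, so their rounding errors alone are of order 1, completely swamping the true variance of order 10⁻²; the two-pass form avoids the problem because the deviations are formed before squaring.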
The subject of error analysis, in its several facets, has been extensively studied; although no comprehensive set of directives can be given, the user will be well repaid for becoming familiar with at least some of the work already done in this area, as indicated in McCracken and Dorn (1964, pp. 43–64) and Hamming (1962, pp. 24–40).
Robert L. Ashenhurst
Borko, Harold (editor) 1962 Computer Applications in the Behavioral Sciences. Englewood Cliffs, N.J.: Prentice-Hall.
Brooks, Frederick P., Jr.; and Iverson, Kenneth E. 1963 Automatic Data Processing. New York: Wiley.
Calingaert, Peter 1965 Principles of Computation. Reading, Mass.: Addison-Wesley.
Green, Bert F., Jr. 1963 Digital Computers in Research: An Introduction for Behavioral and Social Scientists. New York: McGraw-Hill.
Gregory, Robert H.; and Van Horn, Richard L. (1960) 1963 Automatic Data Processing Systems. 2d ed. Belmont, Calif.: Wadsworth.
Hamming, R. W. 1962 Numerical Methods for Scientists and Engineers. New York: McGraw-Hill.
McCracken, Daniel D.; and Dorn, William S. 1964 Numerical Methods and FORTRAN Programming. New York: Wiley.
Youden, W. W. 1965 Computer Literature Bibliography: 1946–1963. National Bureau of Standards, Miscellaneous Publication No. 266. Washington: U.S. National Bureau of Standards.
"Computation." International Encyclopedia of the Social Sciences. Encyclopedia.com. (February 21, 2018). http://www.encyclopedia.com/social-sciences/applied-and-social-sciences-magazines/computation