Programming


Programming

Linear programming

Applications and special techniques

Nonlinear programming

Discrete (integer) programming

Miscellaneous problems and methods

BIBLIOGRAPHY

“Programming”—more properly, “mathematical programming” to be distinguished from “computer programming”—is a term applied to certain mathematical techniques designed to solve the problem of finding the maximum or minimum of a function subject to several constraints that are usually expressed as inequalities. In itself, mathematical programming has no economic content, but mathematical programming problems arise frequently in the context of economics and operations research, where inequality restrictions are often encountered. A large variety of programming formulations exists, and mathematical programming has been applied in a considerable number of concrete cases. The existence of large electronic computers has made the actual computation of solutions to mathematical programming problems fairly routine.

Because of theoretical and computational differences among various programming formulations, it is convenient to classify programming problems according to whether they are linear or nonlinear and according to whether they are discrete or continuous. This article focuses mainly on linear continuous, nonlinear continuous, and linear discrete (integer) programming. Baumol (1958) provides an elementary introduction to these topics, and Dorfman, Samuelson, and Solow (1958) discuss them at an intermediate level.

Linear programming

Formulation

As an illustration of a linear programming problem, consider the case of an entrepreneur who may produce any or all of n commodities, the produced amounts of which are measured by x1, x2, … , xn. Assume that the profit per unit of the ith commodity is constant and is given by ci. Assume, further, that the entrepreneur has m resources (land, labor of various kinds, raw materials, machines) available to him, their amounts being measured by b1, b2, … , bm. Assume, finally, that the activity of producing one unit of the ith commodity requires (or “uses up”) a1i units of resource 1, a2i units of resource 2, and so on. The problem of choosing production levels for each commodity in such a manner that total profit is maximized can then be stated as follows: maximize

(1) z = c1x1 + c2x2 + … + cnxn,

subject to

(2) ai1x1 + ai2x2 + … + ainxn ≤ bi,    i = 1, … , m,

and

(3) xj ≥ 0,    j = 1, … , n.

Equation (1), referred to as the objective function, represents total profit in this example. Inequalities (2) express the requirement that for each of the m resources the total amount of a resource used up in the production of all commodities must not exceed the available amount. Inequalities (3) express the requirement that production levels be nonnegative. Equation (1), together with inequalities (2) and (3), is called a linear program. The term “linear” refers to the fact that both the objective function and the inequalities (2) are linear. Linear programs in which the xi measure the levels or intensities at which productive activities are operated are sometimes called activity analysis problems. In compact matrix notation the above linear program can be written as follows: maximize

cx;

subject to

Ax ≦ b;

x ≧ 0.

(Here the symbol “≦” is used to indicate that one matrix is less than or equal to the other, element by element, without any requirement that strict inequality hold for any component. The definition of “≧” is analogous.)

A set of x-values is called a feasible solution if it satisfies inequalities (2) and (3). If the set of all feasible solutions contains an element for which (1) attains a maximum, then that element is called the optimal solution—or simply the solution—of the linear program. The solution need not be unique.
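As a concrete illustration, the following minimal sketch solves a two-commodity, two-resource instance of the program (1)–(3). The profit and resource figures are hypothetical, and the use of the SciPy routine linprog (which minimizes, so the profits are negated) is an editorial assumption rather than part of the original formulation.

    # A hypothetical activity-analysis problem: two commodities, two resources.
    from scipy.optimize import linprog

    c = [-3.0, -5.0]          # per-unit profits c1 = 3, c2 = 5, negated for minimization
    A = [[1.0, 2.0],          # units of resource 1 used per unit of each commodity
         [3.0, 1.0]]          # units of resource 2 used per unit of each commodity
    b = [14.0, 12.0]          # available amounts b1, b2

    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
    print("optimal production levels:", res.x)   # [2. 6.]
    print("maximum profit:", -res.fun)           # 36.0

Both constraints bind at the optimum, so the solution is a vertex of the feasible polyhedron.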

The inequalities (2) may be rewritten as equalities by introducing m new nonnegative variables v1, … , vm and rewriting the constraints as

(2′) ai1x1 + ai2x2 + … + ainxn + vi = bi,    i = 1, … , m.

The new variables, called slack variables or disposal activities, are also required to be nonnegative. Their introduction changes neither the set of feasible solutions for x nor the optimal solution(s). In a linear programming problem with m constraints of type (2′), a basic feasible solution is one in which at most m of the n + m variables appear with nonzero values. A fundamental theorem of linear programming is that if a linear programming problem possesses a feasible solution with an associated profit level zf, it also possesses a basic feasible solution with an associated profit level zb such that zb ≥ zf.

The mathematical meaning of this theorem is that if the linear programming problem has any solution, then at least one solution will always occur at an extreme point of the convex polyhedron defined by (2). The economic meaning, in terms of the previous example, is that the number of commodities produced will never exceed the number of resource limitations.

Duality

One of the fundamental facts of linear programming is that with every linear program one can associate another program, known as the dual of the first, that stands in a particular relationship to it. The relationship between a (primal) linear program and its dual program leads to a number of important mathematical theorems and meaningful economic interpretations.

Given the linear program defined by (1), (2), and (3), the corresponding dual program is to minimize

t = b1u1 + b2u2 + … + bmum,

subject to

a1ju1 + a2ju2 + … + amjum ≥ cj,    j = 1, … , n,

u1, u2, … , um ≥ 0.

It may be noted that the relationship between a primal problem and its dual is as follows: (a) If the primal problem is a maximization problem, then the dual problem is a minimization problem, and conversely. (b) The coefficients on the right-hand side of the inequalities (2) in the primal problem become the coefficients in the objective function in the dual, and conversely. (c) The coefficient on the left-hand side of the constraints (2) in the primal that is associated with the ith constraint and jth variable becomes associated with the ith variable in the jth constraint in the dual. (d) A new set of variables is introduced in the dual, and each dual variable is associated with a primal constraint. If a primal (dual) constraint is an equality, the corresponding dual (primal) variable is unrestricted as to sign; otherwise it is required to be nonnegative. (e) The symmetry of these relations implies that the dual of a dual is the primal.

The economic interpretation of the dual variables and the dual program is as follows: Let ui be the imputed price of a unit of the ith resource. The dual program then minimizes the total imputed value of resources subject to the constraints that the imputed cost of producing a unit of any commodity not be less than the profit derived from producing a unit of that commodity. This interpretation becomes meaningful in the light of the following duality theorems: (1) If both the primal and the dual problems possess feasible solutions, then z ≤ t for any feasible solution; that is, the value of the (primal) objective function to be maximized does not exceed the value of the (dual) objective function to be minimized. (2) If both the primal and the dual problems possess feasible solutions, they both possess optimal solutions, and the maximum value of z is equal to the minimum value of t. (3) If in an optimal solution to the primal and dual problems a constraint in the primal (dual) problem is satisfied as a strict inequality, the corresponding dual (primal) variable has zero value; if a primal (dual) variable in the solution is positive, the corresponding dual (primal) constraint is satisfied as an equality. (4) In an optimal solution to the primal and dual problems, the value of the ith dual variable is equal to the improvement in the primal objective function resulting from an increase of one unit in the availability of the ith resource.

In terms of the earlier economic interpretation, the meaning of these theorems is as follows: If a resource is not fully used up (one of the primal constraints is satisfied with strict inequality), the corresponding imputed price of that resource is zero, from which we can also infer that an addition to the entrepreneur’s supply of that resource will not contribute to profit. If the jth constraint of the dual program is satisfied as an inequality, diverting resources to production of the jth commodity will reduce profits resulting from production of all other commodities by an amount greater than the profits resulting from production of the jth commodity; hence, that commodity will not be produced. These relationships, expressed in theorem (3), are known as complementary slackness.
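The duality theorems can be verified numerically on the small example given earlier. In this sketch (again an editorial illustration using SciPy) the dual is solved explicitly as a second linear program; at the optimum the two objective values coincide and the complementary-slackness products vanish.

    import numpy as np
    from scipy.optimize import linprog

    c = np.array([3.0, 5.0])
    A = np.array([[1.0, 2.0], [3.0, 1.0]])
    b = np.array([14.0, 12.0])

    primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
    # dual: minimize b'u subject to A'u >= c, written as -A'u <= -c, u >= 0
    dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 2)

    x, u = primal.x, dual.x
    print("max z =", -primal.fun, " min t =", dual.fun)   # equal: theorem (2)
    print("u * (b - Ax)  =", u * (b - A @ x))             # ~0: theorem (3)
    print("x * (A'u - c) =", x * (A.T @ u - c))           # ~0: theorem (3)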

The discovery and development of the theory of linear programming and of the notions of duality can be credited to John von Neumann (see Dantzig 1963, p. 24), D. Gale, H. W. Kuhn, and A. W. Tucker (1951), L. V. Kantorovich (1960), G. B. Dantzig (1951), and others. A notable application of linear programming to the theory of the firm is due to R. Dorfman (1951).

Computational methods

Linear programs are normally solved with the aid of the simplex method, which consists of two phases: phase I provides a feasible solution to the linear program, and phase II, taking over at the point at which the task of phase I is completed, leads to the optimal solution (if one exists). In many problems a feasible solution can be found by inspection, and phase I can be bypassed.

Phase II of the simplex method is an iterative procedure by which a basic feasible solution to the linear program is carried into another basic feasible solution in such a manner that if the objective function is to be maximized, the objective function is monotone nondecreasing–that is, the value of the objective function at the new basic feasible solution is greater than or equal to the value of the objective function at the preceding basic feasible solution. It is a finite method (except in the case of degeneracy; see below) in that it terminates in a finite number of steps either with the optimal solution or with the indication that no maximum exists–that is, that the objective function is unbounded.

In order to solve the problem given by (1), (2), and (3), we first rewrite it in the form of the condensed simplex tableau, in which each row is read as vi = bi − ai1x1 − … − ainxn and the last row as z = c1x1 + … + cnxn:

          x1    x2    …    xn
    v1    a11   a12   …    a1n    b1
    v2    a21   a22   …    a2n    b2
    ⋮      ⋮     ⋮          ⋮      ⋮
    vm    am1   am2   …    amn    bm
    z     c1    c2    …    cn     0

In this tableau the basic variables are v1, … , vm, and the nonbasic variables are x1, … , xn. The current solution is thus v1 = b1, … , vm = bm, x1 = 0, … , xn = 0. It is assumed in the current formulation that the bi are nonnegative coefficients. An iteration consists of a pivot operation by which the roles of some vi and some xj are exchanged. If the variable to enter the current solution is xk and the variable to leave it is vr, the exchange is effected by solving the constraint equation

vr = br – ar1x1 – … – arkxk – … – arnxn

for xk and substituting the solution into the other equations. This substitution yields a new tableau that can be interpreted in the same fashion as the old one. The substitution also yields the rules by which the elements of the new tableau can be calculated from the old one. Specifically, denoting the elements of the new tableau by we have

The element ark is called the pivot of the iteration. It is chosen in the following manner: (a) Its column index can refer to any column k that has ck > 0; this ensures that the objective function is nondecreasing. If there is no such ck, the maximum has been reached. (b) After such a column k is chosen, if all aik ≤ 0 in that column, the problem is unbounded. Otherwise we choose as pivot the ark > 0 for which br/ark is the smallest of the ratios bi/aik taken over rows with aik > 0. This ensures that the new basic solution is feasible.

It is standard procedure to choose for introduction into the solution that xk from among all eligible ones for which ck is largest. Extensive experimental evidence suggests that the number of iterations required to solve linear programs is highly sensitive to the particular pivot choice criterion employed and that many criteria exist which are more efficient than the standard one (Kuhn & Quandt 1963).
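The pivot rules above can be made concrete in a short phase II routine operating on the condensed tableau. The sketch below is an editorial illustration (it assumes b ≥ 0, uses the standard largest-ck column choice, and makes no provision for degeneracy); the data in the final line are hypothetical.

    import numpy as np

    def simplex_phase2(A, b, c, tol=1e-9):
        """Maximize c'x subject to Ax <= b, x >= 0, assuming b >= 0.
        Works on the condensed tableau v_i = b_i - sum_j a_ij x_j."""
        A = np.array(A, dtype=float)
        b = np.array(b, dtype=float)
        c = np.array(c, dtype=float)
        z = 0.0
        while True:
            k = int(np.argmax(c))                 # standard rule: largest c_k
            if c[k] <= tol:
                return z                          # maximum reached
            pos = A[:, k] > tol
            if not pos.any():
                raise ValueError("objective unbounded")
            ratios = np.full(len(b), np.inf)
            ratios[pos] = b[pos] / A[pos, k]
            r = int(np.argmin(ratios))            # smallest b_r/a_rk keeps feasibility
            p = A[r, k]                           # the pivot a_rk
            Ar, Ak, br, ck = A[r, :].copy(), A[:, k].copy(), b[r], c[k]
            # tableau update rules stated above
            A -= np.outer(Ak, Ar) / p
            A[r, :] = Ar / p
            A[:, k] = -Ak / p
            A[r, k] = 1.0 / p
            b -= Ak * br / p
            b[r] = br / p
            c -= ck * Ar / p
            c[k] = -ck / p
            z += ck * br / p

    print(simplex_phase2([[1, 2], [3, 1]], [14, 12], [3, 5]))   # 36.0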

A characteristic of this form of the simplex method is that it is a primal algorithm–that is, the current solution to the primal problem is feasible throughout the computation. The dual simplex method is completely analogous and differs only in that dual feasibility is preserved throughout the computations.

The column vectors in a simplex tableau can be expressed in terms of the inverse (if it exists) of the matrix of coefficients corresponding to the basic variables. This inverse matrix is altered when a basic variable is replaced by a nonbasic variable. The change in the basic inverse can be expressed by multiplying the old inverse by a certain elementary matrix. From the computational point of view we can simplify the solution of a linear program by keeping track of the successive elementary matrices necessary to effect the requisite transformations. This procedure is known as using the product form of the inverse.

If k denotes the pivot column and if, at any stage in the solution of a linear program, aik > 0 and bi = 0 for some row i, the program is said to be degenerate. Degenerate linear programs can cycle—that is, it is possible that a particular sequence of simplex tableaus will repeat indefinitely without any further improvement in the objective function, even though the maximum has not been reached. Although artificial examples exhibiting the property of cycling have been constructed (Hoffman 1953; Beale 1955), in practice, cycling does not seem to have occurred. Since the appearance of degeneracy in a problem is preceded by a nonunique choice of a pivot in a given column of a tableau, cycling could normally be avoided by making a random choice among potential pivots if more than one is available. Should this procedure fail to avoid degeneracy, there are various methods of perturbing the original problem that guarantee that cycling and degeneracy will be avoided. The development of the simplex method as a whole can be credited largely to G. B. Dantzig (Dantzig, Orden, & Wolfe 1954; Dantzig 1963).

Applications and special techniques

The diet problem

The diet problem is to find the cheapest diet that satisfies prescribed nutritional requirements. One of the first problems to be formulated as a linear programming problem (Stigler 1945), it is formally analogous to the activity analysis problem. Suppose there are n foods whose prices are p1, … , pn and m nutrients whose minimum requirements are b1, … , bm. Let x1, … , xn be the amounts of each food included in the diet, and let aij be the amount of the ith nutrient in the jth food. Then the diet problem is to minimize

p1x1 + p2x2 + … + pnxn,

subject to

ai1x1 + ai2x2 + … + ainxn ≥ bi,    i = 1, … , m,

xj ≥ 0,    j = 1, … , n.

Matrix games

Two-person zero-sum games can easily be formulated and solved as linear programs [see Game theory]. Let aij, i = 1, … , m, j = 1, … , n, be the payoff from player II to player I if player I employs his ith pure strategy and player II his jth. Irrespective of the (pure) strategy employed by player II, player I, utilizing a maximin criterion, would wish to use the strategy that will ensure him an amount at least equal to some number V, called the value of the game. He will play his various pure strategies with probabilities x1, … , xm. The requirement that his expected gain not be less than V (irrespective of the strategy employed by player II) can be expressed by

a1jx1 + a2jx2 + … + amjxm ≥ V,    j = 1, … , n,

x1 + x2 + … + xm = 1,    xi ≥ 0.

We transform this statement of the problem into a linear program by defining zi = xi/V, i = 1, … , m. Then, assuming that V > 0, the problem is to minimize

z1 + z2 + … + zm (= 1/V),

subject to

a1jz1 + a2jz2 + … + amjzm ≥ 1,    j = 1, … , n,    zi ≥ 0.

If V ≤ 0, we choose a λ large enough so that every payoff aij + λ is positive, which ensures that the value of the modified game, V + λ, is positive. This transformation leaves the optimal mixed strategy unchanged. For further material on the application of linear programming to matrix games, see Tucker (1960).
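The entire transformation can be sketched in a few lines. The example below is an editorial illustration using SciPy's linprog on a hypothetical "matching pennies" payoff matrix; the shift λ handles the case V ≤ 0 exactly as just described.

    import numpy as np
    from scipy.optimize import linprog

    def solve_matrix_game(payoff):
        """Maximin mixed strategy for player I of a two-person zero-sum game."""
        A = np.asarray(payoff, dtype=float)
        lam = max(0.0, 1.0 - A.min())     # shift so the game value is positive
        A = A + lam
        m, n = A.shape
        # minimize sum(z) subject to A'z >= 1, written as -A'z <= -1, z >= 0
        res = linprog(np.ones(m), A_ub=-A.T, b_ub=-np.ones(n),
                      bounds=[(0, None)] * m)
        V = 1.0 / res.x.sum()
        return res.x * V, V - lam         # mixed strategy x, value of original game

    x, V = solve_matrix_game([[1, -1], [-1, 1]])   # matching pennies
    print(x, V)                                    # [0.5 0.5] 0.0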

Transportation and assignment problems

Assume that a commodity is available at m sources in amounts ai, i = 1, … , m, and is required at n destinations in amounts bj, j = 1, … , n. Assuming that the total supply equals the total demand and that the unit cost of shipping from source i to destination j is the constant cij, i = 1, … , m, j = 1, … , n, how can demands be satisfied so that no source is required to ship more than is available at that source and so that the total cost of shipping is minimized? The formulation of this problem, known as the transportation problem, is as follows: define xij as the amount shipped from source i to destination j; then minimize

Σi Σj cijxij,

subject to

x1j + x2j + … + xmj = bj,    j = 1, … , n,

xi1 + xi2 + … + xin = ai,    i = 1, … , m,

xij ≥ 0.

The equations above express the requirements that all demands be satisfied and that no availabilities be exceeded.

One of the m + n equalities above will always be linearly dependent on the others; thus, in the absence of degeneracy, a solution will consist of m + n – 1 nonzero shipments. An important property of the transportation problem is that if the ai and bj are integers, the solution of the transportation problem will also consist of integers. Variants of the transportation problem are obtained (a) by assigning capacity limits to the various routes from source to destination, expressed by constraints of the type xij ≤ kij, and (b) by permitting a shipment from i to j to go via some intermediate destination(s), resulting in what is called the transshipment problem (Orden 1956).
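A sketch of the formulation (hypothetical data, solved with SciPy's general linprog routine rather than a specialized method) illustrates both the equality structure and the integrality property: the optimal basic solution below is integer because the ai and bj are.

    import numpy as np
    from scipy.optimize import linprog

    avail = [30, 25]                  # availabilities a_i at 2 sources
    req = [15, 20, 20]                # requirements b_j at 3 destinations
    cost = np.array([[4, 6, 8],
                     [5, 3, 7]], dtype=float)

    m, n = cost.shape
    A_eq, b_eq = [], []
    for i in range(m):                # each source ships exactly its availability
        row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1
        A_eq.append(row); b_eq.append(avail[i])
    for j in range(n):                # each destination receives its requirement
        row = np.zeros(m * n); row[j::n] = 1
        A_eq.append(row); b_eq.append(req[j])

    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (m * n))
    print(res.x.reshape(m, n))        # integer shipments, m + n - 1 of them nonzero
    print("total cost:", res.fun)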

The transportation problem is closely related to the assignment problem. Consider the problem of assigning n persons to n tasks in such a manner that exactly one person is assigned to each task. It is assumed that we can measure the effectiveness of each person in each task on some cardinal scale.

If the effectiveness of the ith person in the jth task is given by eij, we wish to maximize

Σi Σj eijxij,

subject to

x1j + x2j + … + xnj = 1,    j = 1, … , n,

xi1 + xi2 + … + xin = 1,    i = 1, … , n,

xij ≥ 0.

The solution to the above mathematical problem is the solution to the assignment problem, since its formal analogy with the transportation problem guarantees that the solution values for the xij will be integers. Hence, each xij will be either 0 or 1; xij = 0 means that the ith person is not assigned to the jth task, and xij= 1 means that he is. The equality constraints ensure that one and only one man will be assigned to each task.
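SciPy provides a direct solver for this problem in linear_sum_assignment, a modern relative of the Hungarian-type methods mentioned below; the effectiveness scores here are hypothetical.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    e = np.array([[7, 4, 3],          # e_ij: effectiveness of person i in task j
                  [6, 8, 5],
                  [9, 4, 4]])

    rows, cols = linear_sum_assignment(e, maximize=True)
    print(list(zip(rows, cols)))                          # person i -> task j
    print("total effectiveness:", e[rows, cols].sum())    # 20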

There are various methods for solving the transportation and the assignment problems. The most common are the simplex method—which is computationally easier in these cases than in a general linear programming problem—and variants of the Hungarian method due to H. W. Kuhn (1955). Some of the variants of the Hungarian method can also be used to solve the so-called network flow problem, the objective of which is to maximize the flow from a single source to a single destination (sink) through a network of intermediate nodes (Ford & Fulkerson 1962). The close relation between these network-oriented problems and the technique of graph theory has resulted in fruitful applications of graph theory to certain types of linear programming problems. The transportation problem was originated by F. L. Hitchcock (1941) and T. C. Koopmans (1951).

Decomposition in linear programming

Certain problems are characterized by the fact that they can be represented as two (or more) almost independent problems. A case in point is the production scheduling problem of a corporation that has two divisions. Assume for argument’s sake that each division faces production constraints that are independent of those faced by the other division. Thus, the amounts of labor, raw materials, and other material inputs used by one division in no way depend upon the amounts of these inputs used by the other division. There may, however, exist certain over-all corporate constraints, perhaps of a financial nature. In such circumstances the profit maximization problem of the corporation may be expressed as the problem of maximizing

c1X1 + c2X2,

subject to

A1X1 + A2X2 = b0,

B1X1 = b1,

B2X2 = b2,

X1 ≧ 0,    X2 ≧ 0.

Here X1 and X2 are vectors of activity levels; c1 and c2 are the corresponding vectors of unit profits; b0, b1, and b2 are vectors of resource availabilities; and A1, A2, B1, and B2 are matrices of input coefficients specifying the amount of the ith resource necessary to sustain one unit of the jth activity. The first set of equalities represents the over-all corporate constraints; the second and third sets represent the two divisions’ respective constraints.

Very large problems (involving tens of thousands of constraints) exhibiting this type of structure can be solved efficiently by the decomposition principle. The decomposition principle rests upon the following basic observations: (a) The solution of a linear program can always be expressed as a convex combination of the extreme points of the convex feasible set of solutions. This representation is called the executive or master program. (b) If the given problem is rewritten in terms of the extreme points, we obtain a new problem with substantially fewer constraints but with many more variables. (c) The solution method is not contingent on our finding all extreme points. On the contrary, columns of coefficients can be generated when needed (for introduction into the simplex tableau) by the solution of certain small subprograms, each consisting of the independent divisional constraints. (d) The solution process is an iterative one that consists of the following steps: (i) a feasible solution to the executive program is found; (ii) subprograms are solved to determine what new column is to be introduced into the executive program; (iii) the new (restricted) executive program is solved; (iv) the corresponding subprograms are solved again; and so on, until an optimal solution is reached in a finite number of steps.

The decomposition principle lends itself to applications of decentralized decision making. From the computational point of view, the decomposition principle extends the power of electronic computers (Dantzig 1963, chapter 23).
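The block structure that the decomposition principle exploits can be seen in a toy instance. The sketch below (hypothetical data, written with inequality constraints for simplicity) merely assembles the coupling corporate row on top of the two independent divisional blocks and solves the whole problem directly; decomposition becomes valuable when the divisional blocks are far too large for this.

    import numpy as np
    from scipy.optimize import linprog
    from scipy.linalg import block_diag

    c1, c2 = [-4.0, -3.0], [-5.0]          # negated unit profits, divisions 1 and 2
    A1, A2 = [[1.0, 1.0]], [[2.0]]         # corporate (coupling) coefficients
    B1, b1 = [[2.0, 1.0]], [10.0]          # division 1 constraints
    B2, b2 = [[1.0]], [6.0]                # division 2 constraints
    b0 = [12.0]                            # corporate resource availability

    A_ub = np.vstack([np.hstack([A1, A2]), block_diag(B1, B2)])
    b_ub = np.concatenate([b0, b1, b2])
    res = linprog(c1 + c2, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
    print(res.x, -res.fun)                 # [5. 0. 3.5] 37.5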

Nonlinear programming

Formulation

Cases of mathematical programming in which either the objective function or the constraints, or both, are nonlinear are referred to as instances of nonlinear programming. Examples of nonlinear programming arise frequently in economic contexts. The activity analysis example discussed in the section “Linear programming” contains two particularly severe restrictions: (a) the profit that can be obtained from producing an extra unit of a commodity is constant and is thus independent of the level of production, and (b) the amounts of additional inputs needed to produce an additional unit of a commodity are constant and do not depend on the level of production. Thus, the linear programming formulation accounts neither for the possibility of (eventually) declining average profit owing to negatively sloped demand functions nor for the possibility of increasing marginal amounts of inputs, which are required for the maintenance of constant unit additions to output because of the well-known phenomenon of diminishing returns. Nonlinear programming is well suited to dealing with the case of such realistic modifications of economic models. The general nonlinear programming problem can be formulated as follows: maximize

f(x1, x2, … , xn),

subject to

(5) gi(x1, x2, … , xn) ≥ 0,    i = 1, … , m,

and

(6) xj ≥ 0,    j = 1, … , n.

Duality and the Kuhn-Tucker conditions

The necessary and sufficient conditions for a solution to the inequalities (5) and (6) to be an optimal solution are called the Kuhn–Tucker conditions (Kuhn & Tucker 1951, pp. 481–492). These conditions represent a powerful generalization of the duality theorems of linear programming. They are based on the assumption that the objective function f and the constraints g1, …, gm are differentiable, concave functions. (The function h(x) is concave if, for any choice of two points x1 and x2 and any θ with 0 < θ < 1, the value of the function at intermediate points, h[θx1 + (1 – θ)x2], is not smaller than the corresponding value of the linear function joining h(x1) and h(x2), given by θh(x1) + (1 – θ)h(x2).) The maximization problem can then be reformulated in terms of the differential calculus using Lagrangian multipliers. The new Lagrangian objective function is

Φ(x, u) = f(x1, … , xn) + u1g1(x1, … , xn) + … + umgm(x1, … , xn).

The so-called saddle value problem is to find nonnegative vectors x0 and u0 such that

Φ(x, u0) ≤ Φ(x0, u0) ≤ Φ(x0, u)    for all x ≥ 0, u ≥ 0.

(Here Φ(x0, u0) is called the saddle value of Φ, and the point (x0, u0) is called the saddle point of Φ.) That is, the saddle value problem is to find x0 and u0 such that the Lagrangian function Φ has a maximum with respect to x at x0, a minimum with respect to u at u0, and thus a saddle point at (x0, u0). The following are the crucial theorems: (1) A necessary and sufficient condition for x0 to be a solution to the (primal) maximum problem is that there exist a u0 such that x0 and u0 are a solution to the saddle value problem. (2) Necessary and sufficient conditions for x0 and u0 to be a solution to the saddle value problem are

(7) ∂Φ/∂xj ≤ 0,    xj0(∂Φ/∂xj) = 0,    xj0 ≥ 0,    for all j,

and

(8) ∂Φ/∂ui ≥ 0,    ui0(∂Φ/∂ui) = 0,    ui0 ≥ 0,    for all i,

the partial derivatives being evaluated at the point (x0, u0).

If theorem (2) is applied to the linear case, it immediately reduces to the familiar duality theorems, with the middle conditions in (7) and (8) ensuring complementary slackness. The frequent case in which the constraints are linear but the objective function is not linear has an immediate economic interpretation: the first part of conditions (7) states, for example, that the imputed value of the resources necessary to produce a unit of the jth commodity must not be less than the marginal profit contribution of that commodity.
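A one-variable numerical check makes the conditions concrete. In this hypothetical instance the concave profit function f(x) = 10x − x² is maximized subject to the single resource constraint g(x) = 3 − x ≥ 0; the unconstrained maximum lies at x = 5, so the constraint binds, and x0 = 3 with u0 = 4 satisfies (7) and (8).

    # Phi(x, u) = 10x - x**2 + u(3 - x)
    def dPhi_dx(x, u): return 10 - 2 * x - u
    def dPhi_du(x, u): return 3 - x       # = g(x)

    x0, u0 = 3.0, 4.0
    print(dPhi_dx(x0, u0) <= 0, x0 * dPhi_dx(x0, u0) == 0, x0 >= 0)  # (7): all True
    print(dPhi_du(x0, u0) >= 0, u0 * dPhi_du(x0, u0) == 0, u0 >= 0)  # (8): all True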

Recently, the conditions under which the Kuhn-Tucker conditions hold have been somewhat relaxed by H. Uzawa (1958).

Computational methods

A number of methods have been developed for solving nonlinear programs of various types. The success achieved by these methods depends in general on the configuration of the particular problem. Several methods require that the objective function and the constraints be concave; strict concavity is necessary for some methods. If the objective function is not concave, the maximum is generally not unique; thus, solution methods that are capable of solving nonlinear programs of this type usually obtain local maxima but not global maxima. Solution methods fall broadly into three classes: (a) gradient methods, which, starting with some solution to the problem, evaluate the slope (gradient) of the nonlinear objective function and alter the current solution by moving in the direction of the steepest gradient; (b) the simplex method, applicable in the case of a quadratic objective function and linear constraints, which obtains its usefulness by virtue of the fact that in the case mentioned the Kuhn-Tucker conditions are expressible as linear equations; and (c) decomposition approaches, which arise from solving certain suitable linear programming problems based on evaluating the nonlinear objective function over a lattice of points in the space of feasible solutions (Wolfe 1963).
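Class (a) can be illustrated by the simplest member of the family, projected gradient ascent, in which each step moves along the gradient of the concave objective and is then projected back onto the feasible set. This sketch is an editorial illustration with hypothetical data, not one of the specific algorithms surveyed in the literature cited.

    import numpy as np

    def projected_gradient_ascent(grad, project, x, step=0.05, iters=2000):
        """Gradient method sketch for a concave objective over a convex set."""
        for _ in range(iters):
            x = project(x + step * grad(x))   # ascent step, then projection
        return x

    # maximize 10*x1 - x1**2 + 6*x2 - x2**2 subject to 0 <= x1 <= 3, 0 <= x2 <= 2
    grad = lambda x: np.array([10 - 2 * x[0], 6 - 2 * x[1]])
    project = lambda x: np.clip(x, [0, 0], [3, 2])
    print(projected_gradient_ascent(grad, project, np.zeros(2)))   # [3. 2.]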

Discrete (integer) programming

Formulation

In a large variety of linear programming problems, one has to dispense with the usual assumption that the space of solutions is finely divisible. Cases in which the set of solutions consists of discrete points at which the components of the solution vector have integer values are called discrete or integer programming problems.

Integer programming problems arise essentially for one of two reasons that are perhaps conceptually distinct but are in practice not always distinguishable. One reason is that the values of some or all variables in a linear programming problem may have to be restricted to integers because fractional solutions may not make sense from the economic point of view. Certain economic activities (the building of a bridge, the dispatching of an aircraft) are by their nature indivisible, and any solution involving such variables must be integer. The other reason is that certain otherwise intractable mathematical problems, often of a combinatorial nature, have natural formulations in terms of integer-valued variables. In such instances fractional-valued variables in the solution must be ruled out for mathematical and logical reasons.

Applications

The following examples illustrate the types of problems that can be formulated as integer programs.

The fixed-charges transportation problem. Consider a standard transportation problem with m sources, n destinations, unit shipping costs cij, requirements bj, and availabilities ai. In addition, assume that a certain fixed charge eij, independent of the amount shipped, is incurred if xij is positive–that is, if the ith source ships to the jth destination–and is not incurred if xij is zero. An integer programming formulation is to minimize

Σi Σj (cijxij + eijtij),

subject to the constraints of the transportation problem and, in addition,

(9) tij ≤ Mxij,

(10) xij ≤ Mtij,

where tij is an integer and M is any suitably large number, say M = max(ai, bj). Constraint (9) ensures that tij = 0 whenever xij = 0. Inequalities (10), together with the requirement that tij be an integer, ensure that tij = 1 if xij is positive. Hence, a cost eij will be incurred if and only if xij is positive.
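Such big-M formulations can be handed to a general mixed-integer solver. The sketch below is an editorial illustration (hypothetical data; a single destination with two competing routes, for brevity) using scipy.optimize.milp, available in SciPy 1.9 and later. Only the forcing constraint (10) is imposed explicitly: cost minimization itself keeps tij = 0 on unused routes, and the upper bound tij ≤ 1 is placed directly on the variables.

    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    d, M = 5.0, 5.0                          # demand and big-M constant
    c = np.array([2.0, 3.0, 10.0, 1.0])      # variables (x1, x2, t1, t2):
                                             # unit costs 2, 3; fixed charges 10, 1
    demand = LinearConstraint([[1, 1, 0, 0]], d, d)          # x1 + x2 = d
    link = LinearConstraint([[1, 0, -M, 0],                  # x_i - M t_i <= 0,
                             [0, 1, 0, -M]], -np.inf, 0)     # i.e. constraint (10)

    res = milp(c, constraints=[demand, link],
               integrality=np.array([0, 0, 1, 1]),           # t_i integer
               bounds=Bounds([0, 0, 0, 0], [np.inf, np.inf, 1, 1]))
    print(res.x, res.fun)   # route 2 alone: x = (0, 5), t = (0, 1), cost 16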

The dispatching problem. Let there be m customers whose orders must be delivered and n activities, each of which represents the act of delivering one or more customers’ orders by truck. These activities Aj can be represented by column vectors of zeros and ones; the ith element of Aj is 0 if the jth activity does not deliver the ith customer’s order and 1 if it does. These various activities arise from the fact that several orders can occasionally be combined in a single truck. If cj represents the cost of the jth activity (per unit), an optimal dispatching pattern is found by minimizing

c1x1 + c2x2 + … + cnxn,

subject to

A1x1 + A2x2 + … + Anxn ≧ J,

where xj is an integer and J is a column vector of ones. In the solution, xj = 1 or xj = 0, depending on whether the jth method of customer deliveries is or is not employed. This problem is closely related to the covering problem of graph theory.

The traveling salesman problem. Given n cities and the distance between each two, what is the shortest tour that begins and ends in a given city and passes through every other city exactly once? This problem can be represented as an integer program in several ways.

Computational methods

The computational methods employed in solving integer programs are due primarily to R. E. Gomory (1958). An outline of the working of these methods is as follows:

(a) The problem is solved by conventional methods as a linear program. If the solution consists of integers, that is the solution to the integer program. This is, in fact, always the case with the transportation and assignment problems.

(b) If the solution to the linear program is not integer, a new constraint is generated from the parameters of the previous solution with the property that the constraint excludes part of the feasible region (hence the name “Gomory-cut”) without, however, excluding any integer points in the feasible region. The introduction of such a constraint adds a new, fictitious variable to the problem and renders the primal problem infeasible; hence, a pivot step by the dual simplex method can be undertaken.

(c) After a finite number of Gomory-cuts and dual simplex iterations, the integer solution to the problem can be achieved.

There exists some choice about how to generate new constraints. Several methods appear to be more successful with some problems than with others. The relative success of various computer codes in solving integer programs seems to depend in some sense on the structure of the problem. The relative computational efficiency of the various methods is only imperfectly understood.

It may be noted that the dual values in integer programs do not have all the properties and interpretations that dual values ordinarily have (Gomory & Baumol 1960). These problems do not possess the property of continuity, and the Kuhn-Tucker conditions are therefore not applicable. (For an excellent recent treatment of various aspects of integer programming, see Balinski 1965.)

Miscellaneous problems and methods

This final section is devoted to a brief description of some additional problems in programming.

Parametric programming

Parametric programming deals with (linear) programming problems in which either the objective function or the constraining inequalities are parametrized as follows: maximize

(c1 + k1c′1)x1 + (c2 + k1c′2)x2 + … + (cn + k1c′n)xn,

subject to

ai1x1 + ai2x2 + … + ainxn ≤ bi + k2b′i,    i = 1, … , m,

xj ≥ 0,    j = 1, … , n,

where k1 and k2 are parameters. Maximizing with respect to the xj, as before, one can deduce from such formulations the sensitivity of the solution to variations in the coefficients of the objective function and to variations in the right-hand sides of the first set of inequality constraints. By employing variants and extensions of the simplex method, problems of this type can be solved, and the range of k-values for which a solution exists can be determined.
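A crude way to see the same sensitivity information is simply to re-solve the program over a grid of parameter values. The sketch below (an editorial illustration reusing the hypothetical data of the earlier example) sweeps k2 on the right-hand side; while the optimal basis stays unchanged, the maximum profit moves by the dual value of the perturbed resource per unit of k2.

    import numpy as np
    from scipy.optimize import linprog

    c = [-3.0, -5.0]
    A = [[1.0, 2.0], [3.0, 1.0]]
    b, b_prime = np.array([14.0, 12.0]), np.array([1.0, 0.0])

    for k2 in [-2, -1, 0, 1, 2]:
        res = linprog(c, A_ub=A, b_ub=b + k2 * b_prime, bounds=[(0, None)] * 2)
        print(k2, res.x, -res.fun)   # profit changes by 2.4 (= u1) per unit of k2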

Stochastic programming

If some of the coefficients in a programming problem are not known with certainty but can be regarded as depending upon the outcome of some chance event, the problem is one of stochastic programming. Such programming problems arise in a variety of situations in which it is desired to schedule production to meet an uncertain demand. Various formulations may differ in the following respects: (a) the coefficients in the objective function, in the left-hand sides of the inequality constraints, or in the righthand sides of the inequality constraints may be random variables; (b) the decision problem may be nonsequential or may be sequential in that decision variables fall into two or more groups and decisions must be made with respect to each group at a different time; (c) the objective may be to maximize expected profit or to minimize the variance of profit, subject to a constraint on expected profit. Stochastic programming problems are often nonlinear, and solution methods may differ from case to case, involving ordinary linear programming, separable programming, and other techniques (Hadley 1964).
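When the random elements take finitely many values, a common device is to maximize expected profit by way of a deterministic equivalent linear program over the scenarios. The sketch below is an editorial illustration with hypothetical data: production x is chosen before demand is known, and ys denotes the amount sold in scenario s.

    from scipy.optimize import linprog

    # unit cost 3, selling price 5; demand is 8 or 12 with probabilities 0.6, 0.4
    prob, demand, price, cost = [0.6, 0.4], [8.0, 12.0], 5.0, 3.0

    # variables (x, y1, y2); minimize cost*x - sum_s prob_s * price * y_s
    c = [cost, -prob[0] * price, -prob[1] * price]
    A = [[-1, 1, 0], [-1, 0, 1]]                          # y_s <= x
    b = [0, 0]
    bounds = [(0, None), (0, demand[0]), (0, demand[1])]  # y_s <= d_s

    res = linprog(c, A_ub=A, b_ub=b, bounds=bounds)
    print(res.x, "expected profit:", -res.fun)   # produce 8, expected profit 16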

Dynamic programming

Dynamic programming, which was developed primarily by R. Bellman (1957), employs a set of powerful methods that can be applied to problems in which decisions can be represented as occurring sequentially. Mathematically it rests upon Bellman’s principle of optimality, according to which an optimal set of n decisions is characterized by the fact that the last n – 1 decisions in the set are optimal with respect to the state resulting from the first decision, whatever that first decision may have been. Many problems in operations research are capable of being formulated as dynamic programs. These problems resemble the types of mathematical programming problems discussed above in that they characteristically involve maximization (or minimization) of some function subject to inequality constraints and to nonnegativity requirements. The computational difficulties involved in reaching solutions vary, depending on the problem.
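The principle of optimality is easiest to see in a small sequential example. In this hypothetical shortest-route problem, the best cost from any node equals the cost of one step plus the best cost from wherever that step leads, whatever the first step was; memoization keeps each subproblem from being solved twice.

    from functools import lru_cache

    edges = {                      # node -> {successor: cost}
        "A": {"B": 2, "C": 5},
        "B": {"D": 4, "E": 1},
        "C": {"E": 2},
        "D": {"F": 3},
        "E": {"F": 6},
    }

    @lru_cache(maxsize=None)
    def best(node):
        if node == "F":            # the destination costs nothing to reach
            return 0
        return min(cost + best(nxt) for nxt, cost in edges[node].items())

    print(best("A"))   # 9, e.g. A -> B -> E -> F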

Richard E. Quandt

[See also Operations research.]

BIBLIOGRAPHY

Balinski, M. L. 1965 Integer Programming: Methods, Uses, Computation. Management Science 12:253–313.

Baumol, William J. 1958 Activity Analysis in One Lesson. American Economic Review 48:837–873.

Beale, E. M. L. 1955 Cycling in the Dual Simplex Algorithm. Naval Research Logistics Quarterly 2:269–275.

Bellman, Richard 1957 Dynamic Programming. Princeton Univ. Press.

Dantzig, George B. 1951 The Programming of Interdependent Activities: Mathematical Model. Pages 19–32 in Cowles Commission for Research in Economics, Activity Analysis in Production and Allocation: Proceedings of a Conference. Edited by Tjalling C. Koopmans. New York: Wiley.

Dantzig, George B. 1963 Linear Programming and Extensions. Princeton Univ. Press.

Dantzig, George B.; Orden, Alex; and Wolfe, Philip 1954 Notes on Linear Programming. Part I: The Generalized Simplex Method for Minimizing a Linear Form Under Linear Inequality Restraints. RAND Corporation Memorandum RM-1264. Santa Monica, Calif.: The Corporation.

Dorfman, Robert 1951 Application of Linear Programming to the Theory of the Firm, Including an Analysis of Monopolistic Firms by Non-linear Programming. Berkeley: Univ. of California Press.

Dorfman, Robert; Samuelson, Paul A.; and Solow, Robert M. 1958 Linear Programming and Economic Analysis. New York: McGraw-Hill.

Ford, Lester R. Jr.; and Fulkerson, D. R. 1962 Flows in Networks. Princeton Univ. Press.

Gale, David; Kuhn, Harold W.; and Tucker, Albert W. 1951 Linear Programming and the Theory of Games. Pages 317–329 in Cowles Commission for Research in Economics, Activity Analysis in Production and Allocation: Proceedings of a Conference. Edited by Tjalling C. Koopmans. New York: Wiley.

Gomory, Ralph E. (1958) 1963 An Algorithm for Integer Solutions to Linear Programs. Pages 269–302 in Robert L. Graves and Philip Wolfe (editors), Recent Advances in Mathematical Programming. New York: McGraw-Hill. → First appeared as Princeton-IBM Mathematics Research Project, Technical Report No. 1.

Gomory, Ralph E.; and Baumol, William J. 1960 Integer Programming and Pricing. Econometrica 28:521–550.

Hadley, George 1964 Nonlinear and Dynamic Programming. Reading, Mass.: Addison-Wesley.

Hitchcock, Frank L. 1941 The Distribution of a Product From Several Sources to Numerous Localities. Journal of Mathematics and Physics 20:224–230.

Hoffman, A. J. 1953 Cycling in the Simplex Algorithm. U.S. National Bureau of Standards, Report No. 2974. Washington: Government Printing Office.

Kantorovich, L. V. 1960 Mathematical Methods of Organizing and Planning Production. Management Science 6:366–422.

Koopmans, Tjalling C. 1951 Optimum Utilization of the Transportation System. Volume 5, pages 136–145 in International Statistical Conference, Washington, D.C., 1947, Proceedings. Washington: The Conference.

Kuhn, Harold W. 1955 The Hungarian Method for the Assignment Problem. Naval Research Logistics Quarterly 2:83–97.

Kuhn, Harold W.; and Quandt, Richard E. 1963 An Experimental Study of the Simplex Method. Volume 15, pages 107–124 in Symposium in Applied Mathematics, Fifteenth, Chicago and Atlantic City, 1962, Experimental Arithmetic, High Speed Computing and Mathematics: Proceedings. Providence: American Mathematical Society.

Kuhn, Harold W.; and Tucker, A. W. 1951 Nonlinear Programming. Pages 481–492 in Berkeley Symposium on Mathematical Statistics and Probability, Second, Proceedings. Edited by Jerzy Neyman. Berkeley: Univ. of California Press.

Orden, Alex 1956 The Transshipment Problem. Management Science 2:276–285.

Stigler, George J. 1945 The Cost of Subsistence. Journal of Farm Economics 27:303–314.

Tucker, A. W. 1960 Solving a Matrix Game by Linear Programming. IBM Journal of Research and Development 4:507–517.

Uzawa, Hirofumi 1958 The Kuhn-Tucker Theorem in Concave Programming. Pages 32–37 in Kenneth J. Arrow, Leonid Hurwicz, and Hirofumi Uzawa, Studies in Linear and Non-linear Programming. Stanford Univ. Press.

Wolfe, Philip 1963 Methods of Nonlinear Programming. Pages 67–86 in Robert L. Graves and Philip Wolfe (editors), Recent Advances in Mathematical Programming. New York: McGraw-Hill.

Programming


PROGRAMMING

A computer can be an extremely efficient technological tool. The use of computers provides humans with the ability to perform a variety of tasks. The actual computer, however, does little on its own. Computer programs enable the computer, or the hardware, to perform a desired function or task. Computer programs are step-by-step instructions written specifically to instruct the computer on how to accomplish a task. The act of writing these computer programs is referred to as programming.

PROGRAMMING LANGUAGES

A programming language includes the rules defining how a computer program is written. Computer programs fall into two major types of programming languages: low-level languages and high-level languages. Low-level languages are more similar to machine language, which is the language that computers understand directly. High-level languages, however, are often more similar to English and easier for humans to understand.

Initially, programmers used machine language to write computer programs. The computer's "native language" consists of a series of binary digits. Binary digits, referred to as bits, are the basic units of memory and can store only two different values, 0 or 1. A group of eight bits makes up a byte and is the amount of memory used to store one character of information, such as a letter or number. Each central processing unit (CPU) for a computer has a unique machine language.

Figure 1. Code for computing net pay for an employee.

    Machine language:
        LOAD GROSSPAY
        SUBTRACT DEDUCTIONS
        STORE NETPAY

    High-level language:
        NetPay = GrossPay - Deductions

For example, the machine language for an IBM computer model is unique compared to that of a Macintosh computer model. Because machine-language programming is extremely time-consuming and cumbersome, programmers began using English-like abbreviations to code the programs. This method of coding resulted in the development of assembly languages. With these languages, assemblers convert or translate the English-like code into machine language, increasing the speed at which programs can be written.

Although assembly languages improved the efficiency of program development, these languages still required many instructions to perform simple tasks. High-level languages were developed to improve programming efficiency by requiring fewer coding statements to accomplish more tasks. These languages use compilers or interpreters that convert high-level code into machine code.

Figure 1 illustrates the distinction between machine and high-level languages.

TYPES OF PROGRAMMING LANGUAGES

The two primary methods of programming are procedural and object-oriented. Procedural programming involves coding a set of statements called procedures that are executed sequentially by the compiler. This method of programming was used considerably when users were interacting with text-based computers. Using this approach, the programmer determines the sequence of actions that occur within the program. Programming languages such as COBOL and FORTRAN are examples of procedural languages.

The object-oriented method of programming (OOP) evolved when operating systems migrated to a more visual environment such as the Microsoft Windows family. Windows-based applications include graphical user interfaces (GUI, pronounced "gooey") to make programs friendlier and easier to use. The elements such as buttons, menus, and windows included in a GUI are called objects. Programmers must provide code that handles the user's interactions with these objects. Because the user can select the objects in any order, the program must respond to the user. Thus, the programmer no longer determines the sequence of execution within the program.

The concept of reusability has increased the popularity of OOP languages as well. OOP languages enable programmers to design and code applications that permit interchangeable software components. These reusable components can be used in other programs. Popular OOP languages are Visual Basic .NET, Java, C++, and Python.

CATEGORIES OF COMPUTER PROGRAMS

Systems programs and application programs are the two main types of computer programs. Systems programs or systems software is typically written in low-level language whereas application software is coded using high-level language. Systems software enables the computer to control and maintain its hardware (mouse, monitor, CPU) and interact with the user. There are three major types of systems software: operating systems, utilities, and device drivers.

One of the most important types of systems software is the operating systems software. Operating systems software enables the computer application to communicate with the computer hardware. It also provides an interface or a link between the user and the computer. Utilities, another type of systems software, are specialized programs designed to make computing easier. A common example is antivirus software that protects the computer from harmful files and programs. The third type of system software is device drivers. They are specialized programs designed to allow input/output devices to communicate with the rest of the computer and its software. Device driver software is included with most hardware components. When a printer or scanner is purchased, for example, a device driver that is included must be installed on the computer before it can be used.

APPLICATIONS SOFTWARE

Applications software enables the end user to perform useful tasks or functions. If there is a standard task that needs to be accomplished, such as financial budgeting, application packages or software can be purchased from a software vendor at a retail store. Microsoft Excel, for example, is a popular spreadsheet application package that is used for budgeting. If there is a customized problem that is specific to the needs of an end user or company, an applications programmer can design a software package to solve the problem. Other examples of popular applications include word processors, database managers, spreadsheet programs, graphics applications, and Web browsers.

PROGRAM DEVELOPMENT CYCLE

Regardless of the programming language used, the process of developing a program is similar. The steps are as follows (a brief sketch following the list illustrates steps 2, 4, and 6):

  1. Analyzing and determining the program specifications is the first and most important step to program development. It is defining the problem and determining what the program should accomplish. Failing to complete this step will often result in ineffective and undesired output.
  2. Designing the program involves planning the solution to the problem by determining a logical sequence of steps. Called an algorithm, this sequence of steps should include precise details of how to solve the problem. Three methods commonly used to develop the algorithm are flowcharts, pseudocode, and hierarchy charts. Flowcharts provide a pictorial representation of logic using diagrams. Pseudocode is written with English-like expressions rather than diagrams. Hierarchy charts are used to show the relationships between sections in a program. Most programmers use pseudocode and hierarchy charts instead of flowcharts to depict their logic.
  3. Choosing the interface includes designing the GUI. The GUI is a user-friendly interface that allows the user to input data into the application and displays the output to the user. This step is needed only if a user interface is included in the program.
  4. Coding the program involves translating the algorithm into a programming language and then entering the code into a code editor. This involves using the appropriate programming software to enter the program instructions or code into the computer's memory.
  5. Compiling the program includes using a compiler or interpreter to convert the code into machine language.
  6. Testing and debugging the program involves locating and removing any errors in the program. Testing is the process of finding errors and debugging is the process of correcting the errors. During this step, the programmer also does a "walk through" of the program to ensure that the program is functioning properly and that it includes all of the program specifications.
  7. Documenting the program is a critical step that involves providing detailed descriptions of the procedures, the variables, and the data names used in the program. It also includes the purpose and objectives of the program. This information is intended to allow another person to understand the program. Internal documentation is found within the code of the program. External documentation is found separate from the program and may include instruction manuals or online help.
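As a hypothetical illustration of steps 2, 4, and 6, consider a program that averages a list of test scores; the pseudocode, its translation into a high-level language, and a simple hand-checked test might look like this.

    # Step 2 (design) -- pseudocode:
    #     set total to 0
    #     for each score: add the score to total
    #     divide total by the number of scores
    # Step 4 (coding) -- translation into a high-level language:
    def average(scores):
        total = 0
        for score in scores:
            total += score
        return total / len(scores)

    # Step 6 (testing) -- check against a result worked out by hand:
    assert average([80, 90, 100]) == 90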

POPULAR PROGRAMS

Because of the popularity of applications that provide user-friendly GUIs for end users, many programming packages today include a "visual" component. Some of the most commonly used languages by programmers are Visual Basic .NET, Visual C++, and C#. Java, JavaScript, and XML are used for interactive Web development.

see also Information Processing; Software

bibliography

Deitel, Harvey M., Deitel, Paul J., and Nieto, Tem R. (2003). Simply Visual Basic .NET: An application-driven tutorial approach. Upper Saddle River, NJ: Prentice Hall.

Gaddis, Tony, Irvine, Kip, and Denton, Bruce (2003). Starting out with Visual Basic .NET. Boston: Addison Wesley.

Schneider, David (2003). An introduction to programming using Visual Basic .NET (5th ed.). Upper Saddle River, NJ: Prentice Hall.

Stern, Nancy, Stern, Robert A., and Ley, James P. (2003). COBOL for the 21st century (10th ed.). New York: Wiley.

Venit, Stewart (2002). Extended prelude to programming: Concepts and design. El Granada, CA: Scott Jones.

Ronda B. Henderson

Programming


Programming

Award-winning computer designer and engineer W. Daniel Hillis captured the essence of programming when he said: "The magic of a computer lies in its ability to become almost anything you can imagine, as long as you can explain exactly what that is. The hitch is in explaining exactly what you want. With the right programming a computer can become a theater, a musical instrument, a reference book, a chess opponent. No other entity in the world except a human being has such an adaptable, universal nature."

Computer programming has many facets: It is like engineering because computer programs must be carefully designed to be reliable and inexpensive to maintain. It is an art because good programs require that the programmer use intuition and a personal sense of style. It is a literary effort because programs must be understood by computers, and this requires mastery of a programming language. That is not all: programs must be analyzed to understand how they work and usually must be modified periodically to accommodate changing requirements. Therefore, as programs are written, programmers should care about how elegant they are, and they should understand how they arrived at the solution of a problem.

Techniques

Telling a computer what to do is not as easy as it sounds. Every detail of the computer's desired operation must be precisely described, and plans must be made for all possible occurrences. For example, if a store has a billing program set up to send monthly bills to all customers, then the computer will send out a bill for $0 to those who owe nothing. If one tells a computer to send a threatening letter to customers who have not paid, then those who owe nothing will receive menacing letters until they send in payments of $0! Avoiding this kind of mix-up is one aspect of computer programming. The programmer's art is stating exactly what is desired. In this example, it means making a distinction between customers who have not sent any money because they do not owe anything, and those who actually still owe money.
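A few lines of code make the distinction concrete. This sketch (hypothetical data and field names) sends reminders only to customers who actually owe money, so that a zero balance produces neither a $0 bill nor a threatening letter.

    customers = [
        {"name": "Ada", "balance": 125.00},
        {"name": "Bob", "balance": 0.00},    # owes nothing
    ]

    for customer in customers:
        if customer["balance"] > 0:          # the distinction described above
            print(f"Reminder to {customer['name']}: "
                  f"${customer['balance']:.2f} is due")
        # zero balance: send nothing -- neither a $0 bill nor a threat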

A combination of thorough problem definition and straightforward programming techniques lead to precise and effective programs. Therefore, programmers should observe the following steps:

  • Define the problem exactly, because this constitutes about 80 percent of the difficulty of programming;
  • Design the program simply because simple programs are easier to develop and maintain, and they result in more reliable, secure, robust, and efficient code;
  • Execute the program with different sets of data. If possible, test the program by hand with just one input; this is a great way to find bugs, and is easy to use and understand.

The task of writing a computer program requires great attention to detail. Computers demand absolute completeness and precision in their instructions: they do only what they are told and their instructions cannot contain any ambiguity. This is true of all software. It applies equally to a simple program that makes a computer play a tune and to a huge program that monitors traffic at an airport.

In programming, nothing can be left to chance. Non-programmers tend to forget that actions they take for granted must be spelled out in great detail for the machine. Every action must be broken down into its most elementary parts to produce an algorithm. No detail, however self-evident to the human mind, can be omitted or taken for granted in a computer program. A good programmer must be capable of thinking about the big picture that generates useful algorithms, and paying attention to the details that convert those algorithms into unambiguous computer code.

In its simplest form, an algorithm is like a recipe, but a computer programmer must specify extra steps that a cook would usually skip over. For example, a recipe might call for two eggs, without specifying that the eggs must be fresh, uncooked, and added to the mixture without the shell. If such criteria were assumed and not detailed precisely in a computer program, the recipe would fail. When hundreds or thousands of instructions covering every contingency have to be spelled out for the computer, expense naturally rises and bugs creep in.

There are several ways programmers can approach the problem systematically. Two of them are flowcharts and pseudocode. A flowchart is a graphic representation of the algorithm using standard symbols that can then be translated into computer language instructions. Pseudocode involves refining the problem in several stages starting with English sentences, which are then restated in subsequent steps with more computer-like words and statements.

Structured Programming

Structured programming, championed in 1968 by Dutch computer scientist Edsger W. Dijkstra, has exerted a major influence on the development of software ranging from small personal computer programs to multimillion-dollar defense projects. Only three control structures are required to turn out useful structured programs:

  • Simple sequencing or instructing the computer to do one thing after another;
  • Looping, or telling the computer to perform the same set of instructions while a condition holds or until a condition is met;
  • Decision making, or enabling the computer's ability to branch in one of two directions depending on the outcome of a condition.

In addition, structured programs are typically divided into modules, which each perform one function, and do it well. The algorithms are easy to follow because there is only one logic entry into each module and one logic exit from it. Since the modules are small, usually not exceeding one page of code, they are easy to debug.
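All three control structures appear in the following small module (a hypothetical illustration), which averages the positive values in a list and has a single entry and a single exit.

    def average_positive(values):
        total, count = 0, 0            # simple sequencing: one step after another
        for v in values:               # looping: repeat while items remain
            if v > 0:                  # decision making: branch on a condition
                total += v
                count += 1
        return total / count if count else 0

    print(average_positive([4, -1, 6]))   # 5.0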

Programmers

A programmer's goal is not to solve a problem but to map out instructions that will let the computer find the solution. Performing rapid calculations is a job tailor-made for a computer, but designing the step-by-step algorithm that tells the computer exactly how to proceed with a given assignment is better left to human intelligence. To make the algorithm as clear as possible, a programmer always starts by assembling the known facts and setting out a clear statement of the problem. Then the programmer begins devising a logical progression of steps for the computer to follow en route to the solution. Often the programmer will use an extreme version of the problem just to see if the logic of the algorithm holds up. This test often discovers missing steps or inaccurate instructions that would cause the computer to flash error messages.

Finally, the programmer has to consider the types of data the computer will be handling and decide on the best method for storing and retrieving the data for processing. By making the right decision regarding language, logic, and programming techniques, programmers can harness the power of the computer with maximum effectiveness.

The computer programmer is the link between a problem and its computer solution. Good programmers write well-structured and clear programs that others can read and modify. Writing good programs requires creativity, but it must be tempered with great patience and intense discipline so that the resulting programs will be correct and efficient.

see also Algorithms; Compilers; Design Tools; Logo; Procedural Languages.

Ida M. Flynn

Bibliography

Bentley, Jon. Programming Pearls, 2nd ed. Reading, MA: Addison-Wesley, 2000.

Hillis, W. Daniel. The Pattern on the Stone. New York: Basic Books, 1998.

Time-Life Books. Understanding Computers: Software. Alexandria, VA: Time-Life Books, 1986.

ambiguity: the quality of doubtfulness or uncertainty; often subject to multiple interpretations

programming


programming Preparation of a computer so that it can perform a specific task. Before being given data, a computer must be given a set of instructions, or a program, telling it how to deal with the data. Each instruction is a single step, and all information must be in the form of binary numbers. For computer programmers, languages have been developed that make the task of programming increasingly intuitive.

programming


programming In the broadest sense, all technical activities involved in the production of a program, including analysis of requirements and all stages of design and implementation. In a much narrower sense it is the coding and testing of a program from some given design. This narrower usage is most common in the context of commercial programming, where a distinction is often drawn between systems analysts, who are responsible for analysis of requirements and design, and programmers, who are responsible for implementation and testing.

Updated Aug 08 2016 About encyclopedia.com content Print Topic