In an optimization problem, there is a (real-valued) function that is to be maximized or minimized. This function is frequently called the objective function, a term that seems to have arisen in the realm of planning and programming, particularly linear programming, through the work of mathematician George Dantzig (1914–2005). Prior to 1947, when Dantzig invented the linear programming problem and the simplex method for its solution, military logistical plans, called “programs,” involved large-scale decision-making based on ground rules. Dantzig created mathematical models to capture the conditions that needed to be satisfied and a criterion for choosing one feasible solution over another. This made a significant contribution to a vital sphere of activity. Dantzig ushered in a new era in decision-making and brought forth the term objective function as a numerical mathematical expression for the objective that was to be achieved by the program.
Thus, an objective function measures the "goodness" of a feasible vector, that is, a vector whose coordinates satisfy all the imposed side conditions, if any. To illustrate, in a linear programming problem, the objective function is the linear form p1x1 + p2x2 + … + pnxn, which might, for instance, measure the total revenue resulting from sales in the amounts x1, x2, …, xn at unit prices p1, p2, …, pn. Linear inequalities in such a problem, for example the nonnegativity conditions x1 ≥ 0, …, xn ≥ 0 or limits on available resources, represent side conditions (or constraints) on the variables x1, x2, …, xn.
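The linear objective function and feasibility test described above can be sketched in a few lines of code. The prices and amounts here are hypothetical illustrations, not data from the text:

```python
# Hypothetical unit prices p1..p3 and sales amounts x1..x3.
prices = [4.0, 2.5, 3.0]
amounts = [10, 20, 5]

def objective(p, x):
    """The linear objective function p1*x1 + p2*x2 + ... + pn*xn
    (here interpreted as total revenue)."""
    return sum(pi * xi for pi, xi in zip(p, x))

def is_feasible(x):
    """One typical side condition: every amount must be nonnegative."""
    return all(xi >= 0 for xi in x)

print(objective(prices, amounts))  # 105.0
print(is_feasible(amounts))        # True
```

A linear program then asks for the feasible vector x that makes `objective` as large (or as small) as possible, which is what Dantzig's simplex method computes.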
This is not to say that all objective functions (or all constraints) are of this type. They may be linear or nonlinear, depending on how goodness is defined in the applied context. The function minimized in parameter estimation by the "least-squares" criterion, namely the sum of squared residuals, is an example of a nonlinear (actually quadratic) objective function. In problems of this sort, the "variables" in question may be "free" (unconstrained) or constrained. In the nonlinear case, convexity (or the lack of it) becomes an important issue from the optimization-theoretic standpoint.
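The least-squares criterion mentioned above can be made concrete with a small sketch. The data points are hypothetical; the objective is the sum of squared residuals for a line y ≈ a·x + b, which is quadratic in the free parameters a and b:

```python
# Hypothetical data points for fitting y ≈ a*x + b by least squares.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 2.9, 5.1, 7.0]

def sum_sq_residuals(a, b):
    """The least-squares objective function: the sum of squared
    residuals, a quadratic function of the parameters (a, b)."""
    return sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))

# Evaluating the objective at two candidate parameter vectors:
print(sum_sq_residuals(2.0, 1.0))  # 0.02  (a near-optimal fit)
print(sum_sq_residuals(0.0, 0.0))  # 84.42 (a poor fit)
```

Minimizing this function over all (a, b) is an unconstrained ("free") optimization problem, and because the objective is convex quadratic, any stationary point is a global minimum.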
The underlying concept of an objective function, under a different name or no name at all, had existed for centuries before Dantzig introduced this particular terminology. One has only to recall the method of multipliers devised by Joseph-Louis Lagrange (1736–1813) for equality-constrained optimization problems. Many synonymous terms are in use. Among the more abstract ones are maximand for maximization problems and minimand for minimization problems. These terms can be used in the respective optimization problems no matter what the application may be. In applied areas such as econometrics, one finds the term criterion function. Still others with an obvious connection to economics are social welfare function, economic welfare function, loss function, and profit function. Further examples coming from other fields are distance function and flow value. The point is that the term used in place of objective function often refers to what the function measures.
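To see Lagrange's method of multipliers at work on a small equality-constrained problem (a standard textbook instance, not an example drawn from Lagrange's own writings), consider maximizing f(x, y) = xy subject to x + y = 1. One forms the Lagrangian and sets its partial derivatives to zero:

```latex
L(x, y, \lambda) = xy - \lambda (x + y - 1)
\frac{\partial L}{\partial x} = y - \lambda = 0, \qquad
\frac{\partial L}{\partial y} = x - \lambda = 0, \qquad
\frac{\partial L}{\partial \lambda} = -(x + y - 1) = 0
\Rightarrow \; x = y = \lambda = \tfrac{1}{2}, \qquad
f\!\left(\tfrac{1}{2}, \tfrac{1}{2}\right) = \tfrac{1}{4}.
```

Here xy plays exactly the role of an objective function, more than a century and a half before the term itself was coined.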
SEE ALSO Koopmans, Tjalling; Maximization; Preferences; Preferences, Interdependent; Principal-Agent Models; Programming, Linear and Nonlinear; Rationality; Representative Agent; Social Welfare Functions; Utility Function
BIBLIOGRAPHY
Bergson, Abram. 1938. A Reformulation of Certain Aspects of Welfare Economics. Quarterly Journal of Economics 52: 310–334.
Dantzig, George B. 1963. Linear Programming and Extensions. Princeton, NJ: Princeton University Press.
Koopmans, Tjalling C. 1951. Introduction. In Activity Analysis of Production and Allocation, ed. Tjalling C. Koopmans, 1–12. New York: Wiley.
Lagrange, Joseph-Louis. 1797. Théorie des fonctions analytiques. Paris: Imprimerie de la République.
Lange, Oskar. 1942. The Foundations of Welfare Economics. Econometrica 10: 215–228.
Wood, Marshall K., and George B. Dantzig. 1951. The Programming of Interdependent Activities: General Discussion. In Activity Analysis of Production and Allocation, ed. Tjalling C. Koopmans, 15–18. New York: Wiley.
Richard W. Cottle