Econometrics



Succinctly defined, econometrics is the study of economic theory in its relations to statistics and mathematics. The essential premise is that economic theory lends itself to mathematical formulation, usually as a system of relationships which may include random variables. Economic observations are generally regarded as a sample drawn from a universe described by the theory. Using these observations and the methods of statistical inference, the econometrician tries to estimate the relationships that constitute the theory. Next, these estimates may be assessed in terms of their statistical properties and their capacity to predict further observations. The quality of the estimates and the nature of the prediction errors may in turn feed back into a revision of the very theory by which the observations were organized and on the basis of which the numerical characteristics of the universe postulated were inferred. Thus, there is a reciprocating relationship between the formulation of theory and empirical estimation and testing. The salient feature is the explicit use of mathematics and statistical inference. Nonmathematical theorizing and purely descriptive statistics are not part of econometrics.

The union of economic theory, mathematics, and statistics has been more an aspiration of the econometrician than a daily achievement. Much of what is commonly known as econometrics is mathematical economic theory that stops short of empirical work; and some of what is known as econometrics is the statistical estimation of ad hoc relationships that have only a frail basis in economic theory. That achievement falls short of aspiration, however, ought not to be discouraging. It is part of the developmental process of science that theories may be advanced untested and that the search for empirical regularities may precede the systematic development of a theoretical framework. A consequence of this, however, is that although the word “econometrics” clearly implies measurement, much abstract mathematical theorizing that may or may not ultimately lend itself to empirical validation is often referred to as part of econometrics. The meaning of the word has frequently been stretched to apply to mathematical economics as well as statistical economics; and in common parlance the “econometrician” is the economist skilled and interested in the application of mathematics, be it mathematical statistics or not. In this article I shall accept this extended definition and consider both econometrics in its narrow sense and mathematical economic theory.

A brief history

The use of mathematics and statistics in economics is not of recent origin. In the latter part of the seventeenth century Sir William Petty wrote his essays on “political arithmetick” [see the biography of Petty]. This fledgling work, remarkable for its time, was econometric in its methodological framework, even from the modern point of view. Despite the fact that it was not referred to by Adam Smith, it had a discernible influence on later writers. In 1711 Giovanni Ceva, an Italian engineer, urged the adoption of the mathematical method in economic theory. Although many statistical studies appeared during the intervening years, the revolutionary impact of the mathematical method did not occur until the latter part of the nineteenth century. More than any other man, Léon Walras, professor at the University of Lausanne, is acknowledged to be the originator of general equilibrium economics, which is the basic framework of modern mathematical economics [see the biography of Walras]. His work, removed from any immediate statistical application, developed a comprehensive system of relationships between economic variables, including money, in order to explain the mutual determination of prices and quantities of commodities and capital goods produced and exchanged. Walras conceived of the economy as operating along the lines of classical mechanics, the state of the economy being determined by a balancing of forces between all market participants. His general equilibrium system was, however, essentially static because the values of the economic variables did not themselves determine their own time rates of change. For that reason the term “equilibrium” is something of a misnomer: since Walras’ general system was not explicitly dynamic, its solution cannot be described as an equilibrium state. Nevertheless, as is still true in much of economic theorizing, there were side discussions of the adjustment properties of the economy, and so, in a wider context, the solution can be regarded as the result of a balancing of dynamic forces of adjustment.

The significant combination of mathematical theory and statistical estimation first occurred in the work of Henry Ludwell Moore, a professor at Columbia University during the early part of the twentieth century [see the biography of Moore, Henry L.]. Moore did genuine econometric work on business cycles, on the determination of wage rates, and on the demand for certain commodities. His major publication, culminating some three decades of labor, was Synthetic Economics, which appeared in 1929. Incredibly, this work, of such seminal importance for the later development of a significant area of social science, sold only 873 copies (Stigler 1962).

Econometrics came to acquire its identity as a distinct approach to the study of economics during the 1920s. The number of persons dedicated to this infant field grew steadily, and, on December 29, 1930, they established an international association called the Econometric Society. This was achieved in large measure through the energy and persistence of Ragnar Frisch of the University of Oslo, with the assistance and support of the distinguished American economist, Irving Fisher, a professor at Yale University [see the biography of Fisher, Irving]. To call this small minority of economists a cult would impute to them too parochial and evangelical a view; nevertheless, they had a sense of mission “to promote studies that aim at a unification of the theoretical-quantitative and the empirical-quantitative approach to economic problems and that are penetrated by constructive and rigorous thinking similar to that which has come to dominate in the natural sciences” (Frisch 1933).

Their insights and ambitions were well founded. During the following years and through many a methodological controversy about the role of mathematics in economics (a topic now rather passé) their numbers grew and their influence within the wider profession of economics was steadily extended. Today all major university departments of economics in the Western world, including most recently those in the Soviet-bloc countries, offer work in econometrics, and many place considerable stress upon it. Specific courses in econometrics have been introduced even at the undergraduate level; textbooks have been written; the younger generation of economists entering graduate schools arrive with improved training in mathematics and statistical methods, gravitate in what appear to be increasing proportions toward specialization in econometrics, and soon excel their teachers in their command of econometric techniques. Membership in the Econometric Society increased from 163 in 1931 to over 2,500 in 1966. The society’s journal, Econometrica, has virtually doubled in size over these years, and nearly all other scholarly journals in economics publish a regular fare of articles whose mathematical and statistical sophistication would have dazzled the movement’s founders in the 1920s and 1930s.

Areas of application of econometrics within economics have been steadily widened. There is now scarcely a field of applied economics into which mathematical and statistical theory has not penetrated, including economic history. With the increasing interest and concentration in econometrics on the part of the economics profession, the very notion of specialization has become blurred. With its success as a major intellectual movement within economics, econometrics is losing its identity and is disappearing as a special branch of the discipline, becoming now nearly conterminous with the entire field of economics. These remarks must not be misunderstood, however. There remain many problems and much research in economics that is neither mathematical nor statistical, and although the modern economist’s general level of training and interest in mathematics and statistics far exceeds that of his predecessors, a quite proper gradation of these skills and interests inevitably continues to exist. Moreover, to repeat, much of what is known as econometrics still falls short of the interrelating of the mathematical-theoretical and the statistical, which is the aspiration contained in the field’s definition.

A survey of econometrics

Since econometrics is no longer a small enclave within economics, a survey of its subject matter must cover much of economics itself.

General equilibrium

Pursuing Walras’ conception of a general economic equilibrium, mathematical economists have in recent years been engaged in a far more thorough analysis of the problem than Walras offered [see Economic Equilibrium]. In the earlier work a general economic equilibrium was described by a system of equalities involving an indefinitely large number of economic variables, but a number equal to the number of independent equations. It was presumed that a system of simultaneous equations with the same number of unknowns as independent equations would have an “equilibrium” solution. This is loose mathematics, and in recent times economic theorists have been concerned to redevelop the earlier theory with greater rigor. Equality of equations and unknowns is neither a necessary nor a sufficient condition for either the existence or the uniqueness of a solution. Consequently, one cannot be sure that the early theory is adequate to explain the general equilibrium state to which the economy is postulated to converge. This might be because the theory does not impose conditions necessary to assure the existence of a general equilibrium state or because the theory might be indeterminate in that several different solutions are implied by it. The modern equilibrium theorist has therefore tried to nail down the necessary and sufficient conditions for the existence and uniqueness of the general economic equilibrium.

The concept of an equilibrium is that of a state in which no forces within the model, operating over time, tend to unbalance the system. Even if such a state can be demonstrated to exist within the framework of some general equilibrium model, there remains the question of whether it is stable or unstable, that is, whether, for any departure of the system from it, forces tend to restore the original equilibrium or to move the system further away. An analysis of these questions, which are rather more involved than suggested here, requires the explicit introduction of dynamical adjustment relationships.

Questions of the existence, uniqueness, and stability of an equilibrium are, in the present context, not questions about the actual economy, but questions regarding the properties of a theoretical model asserted to describe an actual economy. In this sense, their examination is oriented toward an improved understanding of the implications of alternate specifications of the theory itself rather than toward an improved empirical understanding of how our economy works.

Most of this work, moreover, has been restricted to an examination of the general equilibrium model of a competitive economy, which is a special case indeed. It is a case of particular interest, however, because, under idealized assumptions, welfare economists have imputed features to a competitive equilibrium that satisfy criteria which are regarded as interesting for a social evaluation of economic performance [see Welfare Economics]. According to a concept of Pareto’s, a state of the economy (not to be thought of as unique) is said to be optimal if there is no other state that is technologically feasible in which some individual would be in a position he prefers while no individual would be in a position that he finds worse [see the biography of Pareto]. The conditions under which a general economic equilibrium would be optimal in this sense have therefore been subject to rigorous scrutiny. Thus, Pareto welfare economics is intimately involved in the modern examination of general equilibrium systems, but it is not well developed as an empirical study.

The positive economist, concerned with prediction, has also been concerned with general equilibrium systems in principle, but from a different point of view. His central question is, How does a change in an economic parameter (a coefficient or perhaps the value of some autonomous variable not itself determined by the system) induce a change in the equilibrium value of one or more other variables that are determined by the system? In short, How does the equilibrium solution depend upon the parameters? This is a problem in comparative statics, which contrasts two different equilibria defined by a difference in the values of one or more parameters.

Comparative statics—partial equilibrium

It is in the problem of comparative statics—the comparison of alternative equilibrium states—that we can most clearly distinguish between general equilibrium economics and partial equilibrium economics, a familiar contrast in the literature.

Suppose that, in the neighborhood of an equilibrium, a general system of simultaneous economic relationships is differentiated totally with respect to the change in a particular parameter, so that all direct and indirect effects of that change are accounted for. One might then hope to ascertain the direction of change of a particular economic variable with respect to that parameter. For example, if a certain tax rate is increased or if consumer preferences shift in favor of a particular commodity, will the quantity demanded of some other commodity increase, diminish, or stay the same? This question can sometimes be answered on the basis of the constellation of signs (plus, minus, zero) of many or all of the partial derivatives of the functions constituting the system (assuming here that they are continuously differentiable). Theoretical considerations or common sense may enable one to specify a priori the signs of these partial derivatives, for example, to assert that an elasticity of demand is negative or that a cross elasticity of demand is positive. In some cases, however, the theorist is not comfortable in making such assertions about a derivative, and hence some signs may be left unspecified. The question is whether the restrictions that the theorist is willing to impose a priori suffice to determine whether the total derivative of the economic variable of interest with respect to a given parameter is positive, negative, or zero. The formal consideration of the necessary and sufficient restrictions needed to resolve this question unambiguously constitutes the study of qualitative economics and presents a mathematical problem in its own right (Samuelson 1947; Lancaster 1965). In some situations it may be critical to know not only the signs of various partial derivatives but also their relative algebraic magnitudes. This points to the need for the statistical estimation of these derivatives, a task belonging to econometrics in its most narrow meaning.
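
A minimal single-market illustration, not tied to any particular study, shows how such sign restrictions may or may not settle a comparative-statics question. Let demand be Q = D(P, α), with α a demand-shift parameter, and supply Q = S(P). Equilibrium requires D(P, α) = S(P), and total differentiation with respect to α gives

\[
D_P \frac{dP}{d\alpha} + D_\alpha = S_P \frac{dP}{d\alpha}
\qquad\Longrightarrow\qquad
\frac{dP}{d\alpha} = \frac{D_\alpha}{S_P - D_P}.
\]

If the theorist is willing to assert a priori only that D_α > 0, D_P < 0, and S_P > 0, the denominator is positive and the equilibrium price unambiguously rises with α. If instead the sign of S_P is left unspecified (a backward-bending supply curve, say), the qualitative restrictions alone no longer determine the sign, and estimates of the relative magnitudes of the derivatives are required.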

At times it is also useful to know that certain derivatives are sufficiently close to zero that if they are assumed equal to zero the conclusion about the sign of the total derivative being investigated would not be affected. The trick or art of deciding when to regard certain partial derivatives as zero, that is, of deciding that certain economic variables do not enter in any significant way into certain relationships, is the essence of partial equilibrium analysis, so called because it tends to isolate a portion of the general system from other portions that have little interaction with it. Partial equilibrium analysis is, thus, a special case of general equilibrium analysis, in which more daring a priori restrictions have been introduced with the object of deducing more specific and meaningful results in comparative statics. Just as general equilibrium economics has been commonly associated with the name of Walras, so partial equilibrium economics has been associated with the work of Alfred Marshall [see the biography of Marshall].

In qualitative economics some light is shed upon the signs of the partial derivatives of the system by considering the dynamic stability of the model. With assumptions about the nature of the dynamic adjustment relationships, correspondences might be found between the conditions necessary for an equilibrium to be stable and the signs of the partial derivatives. Thus, just as stability depends upon assumptions about whether different variables enter a given relationship positively or negatively, so, also, the way in which those variables enter a given relationship may sometimes be inferred from the assumption that an equilibrium is stable. This is the famous correspondence principle, due to Samuelson. [See Statics and dynamics in economics.]

Spatial models

Most general equilibrium models have conceived of the economy as existing at a single point in space, thereby ignoring transportation costs, the regional specialization of resources, and locational preferences. Some studies, however, explicitly introduce the spatial dimension in which a general equilibrium occurs. This provides a framework for the study of interregional location, specialization, and interdependency in exchange. [See Spatial economics, article on the general equilibrium approach.] These models, because of their greater complexity, generally involve more special assumptions, such as linearity of relationships and the absence of opportunities for substitution among factor services in production. They have also, however, lent themselves more directly to empirical work.

In the application of partial equilibrium analysis to problems of spatial economics, it is assumed, moreover, that the locations of certain economic activities are determined independently of the location decisions regarding other economic activities, and therefore the former can be regarded as fixed in the analysis of the latter. [For a discussion of this line of inquiry, see Spatial economics, article on the partial equilibrium approach.]

Aggregation and aggregative models

Since general equilibrium systems are conceived of as embracing millions of individual relationships, they obviously do not lend themselves to quantitative estimation. Much interest, therefore, inheres in reducing the dimensionality of the system, so that there is some possibility of econometric estimation. This means that relations of a common type, such as those describing the behavior of firms in a given industry or households of a certain character, need to be aggregated into a single relationship describing the behavior of a collectivity of comparable economic agents. The conditions needed to make such aggregation possible and the methods to be used are still in a rather preliminary stage of exploration. But a literature is developing on this subject. [See Aggregation.]

An older problem is simply that of aggregating into a single variable a multiplicity of similar variables. This is the familiar problem of “index numbers”—for example, how best to represent the prices of a great variety of different commodities by a single price index. The index number problem, therefore, has its theoretical aspects [see Index numbers, article on theoretical aspects] as well as its statistical aspects [see Index numbers, articles on practical applications and sampling]. The theory has been useful in guiding the interpretation of alternative statistical formulas.

The major efforts in the empirical study of general equilibrium systems that have to some limited degree been aggregated come under the heading input-output analysis. This approach, originated by Wassily Leontief in the late 1930s, consists essentially in considering the economy as a system of simultaneous linear relationships and regarding as constant the relative magnitudes of the inputs into a production process that are necessary to produce the process’s output. These inputs may, of course, be the outputs of other processes. Thus, with fixity of coefficients relating the inputs and outputs of an integrated production structure, it is possible to determine what “bill of goods” can be produced, given an itemization of the quantities of various “primary” nonproduced inputs that are available. Alternatively, the quantities of primary inputs necessary to produce a given bill of goods can also be determined. The coefficients of such a system can be estimated by observing the ratios of inputs to outputs for various processes in a given year or by averaging these ratios over a sequence of years or by using engineering estimates. This may be done for an economy divided into a large number of different sectors (a hundred or more), or it may be done for portions of an economy, such as a metropolitan area. Moreover, the sectoring of the economy may be by regions as well as by industries, and the former makes the method applicable to the study of interregional or international trading relations. A great deal of empirical research has been done on input–output models, tables of coefficients having now been developed for over forty countries. The quantitative analysis of the workings of these models has, as one may readily surmise, required the availability of large-scale computers. [See Input–output analysis.]
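
The arithmetic of the fixed-coefficient system can be sketched in a few lines. In the sketch below the three-sector coefficient matrix, the bill of goods, and the primary-input coefficients are all invented for illustration; only the relation between gross outputs, coefficients, and final demand comes from the input–output framework itself.

```python
import numpy as np

# Hypothetical 3-sector matrix of input coefficients: A[i, j] is the amount
# of sector i's output needed to produce one unit of sector j's output.
A = np.array([
    [0.2, 0.3, 0.1],
    [0.1, 0.1, 0.4],
    [0.2, 0.2, 0.1],
])

# Hypothetical final "bill of goods" demanded from each sector.
d = np.array([100.0, 50.0, 80.0])

# With fixed coefficients, gross outputs x must satisfy x = A @ x + d,
# so x = (I - A)^{-1} d (the Leontief inverse applied to final demand).
x = np.linalg.solve(np.eye(3) - A, d)

# Hypothetical primary (e.g., labor) input requirements per unit of gross
# output give the total primary input needed to produce this bill of goods.
labor_coeffs = np.array([0.5, 0.7, 0.6])
total_labor = labor_coeffs @ x

print("gross outputs:", x.round(2))
print("total primary input required:", round(total_labor, 2))
```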

Aggregative models in economics may be of either the partial or the general equilibrium type. Those of a partial equilibrium type deal with a single sector of the economy in isolation, under the assumption that the external economic variables that have an important impact on that sector are not in turn influenced by its behavior. Thus, for example, a market model of demand and supply for a particular commodity may regard the total income of consumers and its distribution as determined independently of the price and output of the particular commodity being studied. Yet, the market demand and supply functions are aggregates of the demand and supply functions of many individuals and firms. Aggregative models of the general equilibrium type may explain the mutual determination of many major economic variables that are aggregates of vast numbers of individual variables. Examples of the aggregate variables are total employment, total imports, total inventory investment, etc. These models are generally called macroeconomic models, in contrast to microeconomic models, which deal, in a partial equilibrium sense, with the individual household, firm, trade union, etc. Many macroeconomic models treat not only so-called real variables, which are physical stocks and flows of goods and productive services, but also monetary variables, such as price levels, the quantity of money, the value of total output, and the interest rate. Models of this sort have been especially common since 1936, having been stimulated by John Maynard Keynes’s General Theory of Employment, Interest and Money and by the literature that devolved therefrom.

One type of aggregative, macroeconomic model is that which distinguishes a few important sectors of the economy or which relates macroeconomic variables of two or more economies interrelated in trade. Much of the theory of international trade deals with models of this sort [see International trade, article on mathematical theory]. In fact, since this has been a natural mode for the analysis of international economic problems, international trade theory has historically been one of the liveliest areas for the development of economic theory, both mathematical and otherwise. More narrowly econometric studies in this area have focused on estimates of import demand elasticities.

Moreover, macroeconomic models have lent themselves to the study of economic change, and it is with these models that the most significant work in economic dynamics has occurred. Dynamical systems in economics are those in which the values of the economic variables at a given point in time determine either their own rates of change (continuous, differential equation models) or their values at a subsequent point in time (discrete, difference equation models). [For a general discussion of dynamic models see Statics and dynamics in economics.] Thus, dynamic models involve both variables and a measure of their changes over time. The former often occur as “stocks,” and the latter as “flows.” When both stocks and flows enter into a given model, there are complexities in reconciling the desired quantities of each. These problems become especially important when monetary variables are introduced, for example, when we consider the desire of individuals both to hold a certain value of monetary assets and to save (add to assets) at a certain rate. [Specific problems of stock-flow models are discussed in Stock-flow analysis.]

Dynamic models arise both in the theory of long-run economic growth [see Economic growth, article on mathematical theory], where both macroeconomic and completely disaggregated general equilibrium models have been employed, and in the theory of business fluctuations or business cycles [see Business cycles, article on mathematical models], where macroeconomic models are most common. Not all models intended to explain the level of business activity need be cyclical in character. The modern emphasis is more on macroeconomic models, cyclical or not, that explain the level of business activity and its change by a dynamical system that responds to external variables. These include variables of economic policy (government deficit, central bank policy, etc.) and other variables that, while having an important impact on the economy, have their explanation outside the bounds of the theory, for example, population growth and the rate of technological change. Thus, outside variables, known as exogenous or autonomous variables, play upon the dynamical economic system and generate fluctuations over time that need not be periodic. These models lend themselves to empirical investigation, and a great deal of work has been done in estimating them [see Econometric models, aggregate]. The structure of these models has been refined and developed as a consequence of the empirical work.

The great advantage of aggregative models, of course, is that they substantially reduce the vast number of variables and equations that appear in general equilibrium systems and thereby make estimation possible. Even so, these models can be quite complex, either because they still contain a large number of variables and equations or because of nonlinearities in their functional forms. The modern computer makes it possible to estimate systems of this degree of complexity, however. But if one is interested in analyzing the dynamical behavior of these systems, the difficulties often transcend our capabilities in mathematical analysis. The computer once again comes to the rescue. It is possible, with the computer, to simulate complex systems of the type being considered, to drive them with exogenous variables, and to shock them with random disturbances drawn from defined probability distributions. In that way the performance of these systems under a variety of assumptions regarding the behavior of the exogenous variables and for a large sample of random variables can be surveyed.
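
A sketch of what such a simulation involves is given below. The model is a deliberately simple second-order difference equation of the multiplier–accelerator type; the coefficient values, the path of the exogenous variable, and the shock distribution are all invented for illustration and imply damped, irregular fluctuations rather than any inherent periodicity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical coefficients: marginal propensity to consume and accelerator.
b, v = 0.6, 0.8
T = 60

# Exogenous (autonomous) expenditure: a constant level with a step increase,
# purely illustrative of "driving" the system with an outside variable.
G = np.where(np.arange(T) < 30, 50.0, 60.0)

# Random disturbances drawn from a defined probability distribution.
shocks = rng.normal(0.0, 2.0, size=T)

# Income follows Y_t = b*Y_{t-1} + v*(Y_{t-1} - Y_{t-2}) + G_t + e_t.
Y = np.zeros(T)
Y[0] = Y[1] = 125.0
for t in range(2, T):
    Y[t] = b * Y[t - 1] + v * (Y[t - 1] - Y[t - 2]) + G[t] + shocks[t]

print(Y.round(1))
```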

[Simulation studies of this sort are discussed in Simulation, article on economic processes.]

Variables that commonly arise in macroeconomic models are aggregate consumer expenditure, inventory investment, and plant and equipment investment. Aggregate consumer expenditure, or consumption, reflects the behavior of households in deciding how much to spend on consumer goods, which in some studies may be further broken down into categories such as consumer durables, nondurables, and services. Using regression techniques, consumer expenditure is made to depend upon other variables, some of which are economic in character (consumer income, change in income, highest past income, the consumer price level and its rate of change, interest rates and terms of consumer credit, liquid assets, etc.) and some of which are demographic (race, family size, urban-rural residence, etc.). The empirical study of the dependency of consumer expenditure on variables of these kinds has been intensive during the past twenty years. [For a survey of this work, see Consumption function.]

The behavior of inventory investment has likewise been the object of intensive study, both in terms of how inventories have varied over time relative to the general level of business activity and in terms of how inventory investment has responded to such variables as the interest rate, sales changes, unfilled orders, etc. [This work is reviewed in Inventories, article on inventory behavior.] There are some subtle issues involved in formulating an inventory investment function. Sometimes inventories accumulate when firms intend they should, and other times they accumulate despite the desire of firms to reduce them, for example, when sales fall off rapidly relative to the capability of firms to alter their rates of output. Theoretical work concerned with the optimum behavior of firms in matters of inventory policy can therefore provide some underpinning to the selection and interpretation of the role of different variables in an inventory investment function. [See Inventories, article on inventory control theory.]

The dependency of plant and equipment investment upon such variables as business sales, sales changes, business profits, liquidity, etc., may also be studied by econometric techniques, and different theories have been advanced to support notions about the relative importance of these different variables. As with the consumption function and the determination of inventory investment, the plant and equipment investment function has also been the subject of intensive empirical research over the past couple of decades. [This work is reviewed in Investment, article on the aggregate investment function.]

Decision making

Though it is methodologically proper for the economist to postulate ad hoc relationships between macroeconomic variables (Peston 1959), it is more gratifying, more unifying of economic theory, if the behavior of the macrovariables can be derived from elementary propositions regarding the behavior of the microvariables whose aggregates they are. This is the aggregation problem, referred to earlier. The aspiration is that an axiomatic theory of the behavior of the individual economic decision maker, most importantly the individual (or household) and the firm, can serve as a fundament to theories of the interaction of the aggregated macrovariables. Most of the behavioral theory of firms and households, however, is in the context of partial equilibrium analysis, because the individual economic agent does not bother to take account of the very slight influence that his own decisions exert on the market or on the economy as a whole. Thus, each household and each competitive (but not monopolistic) firm regards market prices as fixed and unaffected by its own choices. But in linking together such partial equilibrium models of the behavior of vast multitudes of individual households and firms, one cannot ignore the impact of their combined behavior on the very market variables they regard as constants. Thus, the partial equilibrium micromodels must be reorganized into more general models allowing for these individually unperceived but collectively important interactions.

Microeconomic theory is largely deductive, proceeding systematically from axioms regarding preference and choice to theorems regarding economic behavior. To proceed carefully through the logical intricacies of this deductive theory, formal mathematics is heavily invoked. Market decisions of economic agents are usually hypothesized to be prudent or rational decisions, by which is meant that they conform by and large to certain basic criteria of decision making that are thought to have wide intuitive appeal as precepts of prudent or rational choice. The situations in which a decision maker may be called upon to choose can be formulated in a variety of ways. There are “static” situations, where the decision is not assumed to have a temporal or sequential character. There are “dynamic” situations, in which a sequence of decisions must be made, and made in some consistent way. The decision problem may also be categorized according to the knowledge the decision maker believes he has about the consequences of his decisions. At one extreme is the case of complete certainty, where the consequences are thought to be completely known in advance. Other cases involve risk and arise when the decision maker is assumed to know only the probability distribution of the various outcomes that can result from the decision he makes. Finally, at the other extreme, the decision problem may be conceived of as involving almost complete uncertainty, in which case the decision maker knows what the possible outcomes are but has no a priori information about their probabilities. [For a discussion of various criteria proposed for these different situations, see Decision making, article on economic aspects.] Fundamental, however, is the notion that the decision maker has preferences and that he exercises these within the range of choice available to him. An index giving his preferences is commonly called utility and is conceived of as a function of the objects of his choice. In particular, where an individual with a fixed income is choosing among various “market baskets” of commodities, utility is commonly postulated to be a function of the components of the market basket. Axiomatic systems that are necessary and sufficient for the existence of such a function have been the object of intensive study by mathematical economists. [This central problem and many subtle aspects of it are considered in Utility.] Great effort, with perhaps little benefit for empirical economics, has gone into the refinement of the axiomatics of utility theory or of the theory of consumer choice; unfortunately, much less work has been done to strengthen the assumptions of the theory so as to increase its empirical content. From the theory of consumer behavior comes the concept of the demand function of the consumer for a particular commodity, depending characteristically on all prices and on income.
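
To make the link from utility to the demand function concrete, a standard textbook example may be inserted here; the particular (logarithmic) utility function is chosen only for tractability. A consumer with income m facing prices p_1 and p_2 solves

\[
\max_{x_1, x_2}\; \alpha \ln x_1 + (1-\alpha)\ln x_2
\quad\text{subject to}\quad p_1 x_1 + p_2 x_2 = m .
\]

The first-order conditions of the Lagrangean, \( \alpha/x_1 = \lambda p_1 \) and \( (1-\alpha)/x_2 = \lambda p_2 \), combined with the budget constraint, yield the demand functions

\[
x_1 = \frac{\alpha m}{p_1}, \qquad x_2 = \frac{(1-\alpha)\,m}{p_2}.
\]

In this special case the cross-price effects happen to vanish, but in general each demand function depends, as the text states, on all prices and on income.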

As for the theory of the firm, prudent, purposeful behavior is also assumed, and in the theory’s most common formulation it is supposed that the firm wishes to maximize some measure of its preference among streams of future profits. This must be done subject to the prices that the firm must pay for factor services, the market opportunities it confronts when selling its products, and its internal technology of production. From this analysis comes the theory of production and of supply. [For the theory of production of the firm, see Production; for econometric studies of production relationships and of the cost of production, see Production and cost analysis; and for econometric studies of demand and supply, see Demand and supply, article on econometric studies.]

In the derivation of the theory of consumer behavior and the theory of the firm, purposeful and prudent behavior has characteristically been associated with the notion that the decision maker attempts to maximize some function subject to market and technological constraints. Thus, the mathematics of constrained maximization has served the economist as the most important tool of his trade. In an effort to develop models of maximizing behavior that would lend themselves better to quantitative formulation and solution, interest came to focus on problems where the function being maximized is linear and the constraints constitute a set of linear inequalities. Methods for solving such problems became known as linear programming. With further advances, nonlinearities and random elements were introduced, and the method came to be applied, as well, to problems of sequential decision making. The entire area is now known as mathematical programming [see Programming]. Because of their practical usefulness, these methods lent themselves to the analysis of various specific planning and optimization problems, especially to problems internal to the operation of the firm. Stimulated by the availability of these techniques, as well as by advances in probability theory and some wartime experience in systems analysis, there has come to flourish a modern quantitative approach to the problems of production and business management. This is known as management science or operations research [see Operations research]. This development is a case of fission, management science now being regarded as distinct from econometrics, although both fields have much in common and share many a professor and practitioner.
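
A compact linear-programming sketch follows. The firm, its two products, and every number in it are hypothetical; the point is only the structure of the problem—a linear objective maximized subject to linear inequality constraints.

```python
from scipy.optimize import linprog

# Hypothetical problem: a firm makes two products with unit profits 3 and 5
# and faces limited supplies of two factor services (machine and labor hours).
# linprog minimizes, so the profit coefficients are negated to maximize profit.
profit = [-3.0, -5.0]

# Each row: factor requirements per unit of product 1 and product 2.
A_ub = [
    [1.0, 2.0],   # machine hours
    [3.0, 1.0],   # labor hours
]
b_ub = [40.0, 60.0]  # available machine and labor hours

res = linprog(c=profit, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])

print("optimal output plan:", res.x)
print("maximum profit:", -res.fun)
```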

The most complex problems in the area of prudent decision making are those that involve strategical considerations. In its essence this means that the consequence of a decision or action taken by one participant depends upon the actions taken by others; but their actions in turn depend upon the actions of each of the other participants. Thus, the structure of the problem is not that of simple maximizing, even in the face of risk or uncertainty, but is that of the strategical game. [See Game theory, article on theoretical aspects.] Based upon considerations of the prudent strategy of the individual participant and of the incentives for subsets of participants to form coalitions, the theory of games can be presented as a general equilibrium problem and has become intimately associated with the modern work in general equilibrium economics. In a more partial context, game theory has appeared applicable to the decision problems of firms in oligopolistic and bilateral monopoly situations. These are characterized by the fact that each firm, in choosing its best course of action, must take into account the effect of its action on the actions of other firms, which also perform in a prudent way. In general, the early enthusiasm for the application of game theory to these problems of industrial behavior has thus far been vindicated only to a limited degree. [For a review of the applications of game theory to business behavior, see Game theory, article on economic applications.]

Distribution processes

A concern of long standing in economics has been the size distribution of economic variables. What determines the distribution of family incomes or the distribution of the assets or sales of firms in a given industry? In past years these problems have been dealt with descriptively by fitting frequency distributions to the data of different countries, different years, or different industries. A good fit to data from different sources could be declared an empirical “law”; thus, the Pareto law of income distribution. In more recent years the size distribution problem has been redefined. Econometricians now regard it as one of formulating a dynamic process of growth or decay with random elements. The task is to estimate the parameters of the process and to determine whether there is an equilibrium distribution of the size of units and what that distribution is. A good fit can thus have a theoretical mechanism behind it, and the parameters can be made to depend upon other economic variables that may change or may be controlled. [In this connection, see Size distributions in economics and Markov chains.]
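
A stylized sketch of one such stochastic growth process appears below. It implements a Gibrat-type random proportional-growth mechanism with a lower reflecting barrier—a textbook device known to generate a skewed, Pareto-like upper tail—and every number in it is illustrative rather than taken from data.

```python
import numpy as np

rng = np.random.default_rng(1)

n_firms, n_periods = 5000, 200
size = np.full(n_firms, 10.0)   # all units start at the same hypothetical size
floor = 1.0                      # lower barrier (e.g., minimum viable size)

for _ in range(n_periods):
    # Proportional random growth: each unit's size is multiplied by a random factor.
    growth = rng.lognormal(mean=0.0, sigma=0.15, size=n_firms)
    size = np.maximum(size * growth, floor)

# Inspect the upper tail: under this mechanism the distribution becomes highly
# skewed, with the largest units far larger than the median.
print("median size:", round(np.median(size), 2))
print("99th percentile:", round(np.percentile(size, 99), 2))
```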

Statistical methods

In the natural sciences the investigator must make his own measurements. In economics, however, the economy itself generates data in vast quantities. Taxpayers, business firms, banks, etc., all record their operations, and in many cases these records are available to the economist. Unfortunately, these data are not always precisely the kind the economist wants, and they must frequently be adjusted for scientific purposes. In recent decades the government has been engaged increasingly in the accumulation and processing of economic data. This has been of tremendous help in the development of econometrics. Not only is this the case for the United States and western European governments but data are also accumulated in the planned economies, where they are of critical importance in the planning operation. [See Economic data.] The absence of adequate data is felt most severely in the study of the underdeveloped economies, although, through the United Nations and other organizations, an increasing amount of data for those parts of the world is being gathered and collated.

A major form in which economic data occur is that of successive recordings of economic observations over time. Thus, there may be many years of price data for particular commodities, of employment data, etc. The econometrician, therefore, has traditionally been heavily concerned with time series analysis [seeTime series] and especially with the use of regression methods, where the various observations are ordered in a temporal sequence. This has lent itself to the development of dynamical regression equations attempting to explain the observation of a particular date as a function not only of other variables but also of one or more past values of the same variable. Thus, the dynamic regression relationship is a difference equation incorporating a random term. When many past values of the variable are introduced into the difference equation, so that it is one of very high order, it becomes difficult to estimate the coefficients of these past variables without losing many degrees of freedom. As a result, the econometrician has tried to impose some pattern of relationship on these coefficients, so that they may all be estimated as functions of relatively few parameters. This is the technique of distributed lag regressions. [SeeDistributed lags.]
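
The simplest such pattern, the geometric (Koyck) lag, illustrates the idea; the particular specification is offered here only as an example. Suppose the influence of past values of x declines geometrically:

\[
y_t = \alpha + \beta \sum_{i=0}^{\infty} \lambda^{i} x_{t-i} + u_t ,
\qquad 0 < \lambda < 1 .
\]

Lagging the equation once, multiplying by λ, and subtracting yields

\[
y_t = \alpha(1-\lambda) + \beta x_t + \lambda y_{t-1} + \left(u_t - \lambda u_{t-1}\right),
\]

so the entire lag distribution is summarized by the two parameters β and λ, at the cost of introducing a lagged dependent variable and a moving-average disturbance on the right-hand side.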

The techniques just described have largely come to replace the older methods of time series decomposition, whereby a time series is split up into such components as trend, cycles of various lengths, a seasonal pattern of variation, and a random component. These methods implied the interaction of recurrent influences of regular periodicity and amplitude. With the move toward the difference equation and regression approach, exogenous variables have been introduced and random disturbances made cumulative in their effects. The temporal performance of a time series is thereby described less in terms of some inherent law of periodicity and more in terms of a succession of responses to random influences and to the temporal variation of other causal variables. Forecasting is not, therefore, the inexorable extrapolation of rhythms but is the revised projection, period by period, of an incremental relationship depending on present and past values, on exogenous variables, and on random elements. [See Prediction and forecasting, economic.]

Nevertheless, it has always been sensible to assume a rather strict periodicity for the seasonal component because of the recurrent nature of seasons, holidays, etc. As a result, when studying time series where the observations are daily, weekly, or monthly, it is customary first to estimate and remove the seasonal influence. [Techniques for doing this are discussed in Time series, article on seasonal adjustment.]

The other kind of data that the economist uses is cross-sectional. For example, he may use a sample of observations, all made at approximately the same time, of assets, income, and expenditures of different households, firms, or industries. [See Cross-section analysis.] By observing differences in the behavior of the individuals in the sample and, again usually through regression analysis, ascribing these differences to differences in other variables beyond the control of these individuals, the econometrician attempts to infer how the behavior of similar economic units would change over time if the values of the independent variables were to alter. There are many pitfalls in this process of inferring change over time for a given firm or household on the basis of differences among firms and households at a given point of time. What becomes especially useful are data that are both cross sectional and time series in character, as, for example, when the budgets of a sample of households are observed, each over a number of successive years. To obtain usable information of a cross-section or of a cross-section and time-series sort commonly requires the design of a sample survey. [The application of survey methods in economics is discussed in Survey analysis, article on applications in economics.]

A very common problem in econometrics arises when different variables are related in different ways. For example, aggregate investment depends on national income, but national income depends, in a different way, on aggregate investment. In demand and supply analysis, equilibrium quantity exchanged and market price must satisfy both a demand and a supply function simultaneously. This simultaneity of multiple relationships between the same variables presents special problems in the application of regression methods. These problems have been much studied over the past twenty years, and various devices for dealing with them are now available. These methods are often quite complex, but with advances in statistical theory and in the availability of data and with the use of the large-scale computer they have come into common use in estimating both partial equilibrium and macroeconomic models, sometimes of quite large dimension. Although touched upon only briefly here, this commanding problem in statistical methodology is perhaps the most central feature of econometric analysis and is the subject of a number of texts and treatises. It is also probably the largest block of material covered in most special courses in econometrics. [See Simultaneous equation estimation.]
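
A bare-bones sketch of one such device—two-stage least squares applied to a demand equation whose price variable is endogenous—is given below. The simulated market, the instrument (an exogenous cost shifter entering only the supply side), and all parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

# Simulated market: cost shifter z moves supply only, so it can serve as an
# instrument for price in the demand equation.
z = rng.normal(size=n)
u_d = rng.normal(size=n)          # demand disturbance
u_s = rng.normal(size=n)          # supply disturbance

# Structural demand: q = 10 - 1.0*p + u_d ; supply: q = 2 + 0.5*p + 1.0*z + u_s.
# Solving the two equations gives the equilibrium (reduced-form) p and q.
p = (10 - 2 - 1.0 * z + u_d - u_s) / (0.5 + 1.0)
q = 10 - 1.0 * p + u_d

def ols(y, X):
    return np.linalg.lstsq(X, y, rcond=None)[0]

X = np.column_stack([np.ones(n), p])

# Naive OLS on the demand equation: biased because p is correlated with u_d.
print("OLS slope:", ols(q, X)[1].round(2))

# Stage 1: project the endogenous price on the instruments (constant and z).
Z = np.column_stack([np.ones(n), z])
p_hat = Z @ ols(p, Z)

# Stage 2: regress quantity on the fitted price.
X2 = np.column_stack([np.ones(n), p_hat])
print("2SLS slope:", ols(q, X2)[1].round(2))
```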

To those engaged in research at the frontiers of any science, progress seems always to be exceedingly slow; but a review of the accomplishments of econometricians both in the development of economic theory and in its quantitative estimation and testing over the past two or three decades gives one the feeling of great achievement. But as old problems are solved, new ones are invented. Thus, the advance of econometrics continues unabated.

Robert H. Strotz

BIBLIOGRAPHY

Works dealing with the nature and history of econometrics are Divisia 1953; Frisch 1933; Tintner 1953; 1954. Basic works in the field are Allen 1956; Malinvaud 1964; Samuelson 1947.

Allen, R. G. D. (1956) 1963 Mathematical Economics. 2d ed. New York: St. Martin’s; London: Macmillan.

Divisia, François 1953 La Société d’Économétrie a atteint sa majorité. Econometrica 21:1–30.

[Frisch, Ragnar] 1933 Editorial. Econometrica 1:1–4.

Lancaster, K. J. 1965 The Theory of Qualitative Linear Systems. Econometrica 33:395–408.

Malinvaud, Edmond (1964) 1966 Statistical Methods in Econometrics. Chicago: Rand McNally. → First published in French.

Peston, M. H. 1959 A View of the Aggregation Problem. Review of Economic Studies 27, no. 1:58–64.

Samuelson, Paul A. (1947) 1958 Foundations of Economic Analysis. Harvard Economic Studies, Vol. 80. Cambridge, Mass.: Harvard Univ. Press. → A paperback edition was published in 1965 by Atheneum.

Stigler, George J. 1962 Henry L. Moore and Statistical Economics. Econometrica 30:1–21.

Tintner, Gerhard 1953 The Definition of Econometrics. Econometrica 21:31–40.

Tintner, Gerhard 1954 The Teaching of Econometrics. Econometrica 22:77–100.

Econometrics


Econometrics is a branch of economics that confronts economic models with data. The “metric” in econometrics suggests measurement. As Lawrence Klein (1974, p. 1) pointed out, measurement alone describes only the theoretical side of econometrics. Its empirical side deals with data and the estimation of relationships. Econometricians construct models, gather data, consider alternative specifications, and make forecasts or decisions based on econometric models (Granger 1999, p. 62). Many textbooks do econometrics rather than define it, mainly because it is not all science, for it requires “a set of assumptions which are both sufficiently specific and sufficiently realistic” (Malinvaud 1966, p. 514). As with any empirical discipline, econometric model building may not precede data analysis. One may be amused to find that econometrics can be used to answer the question “Which came first: the chicken or the egg?” by the use of causality testing (Thurman and Fisher 1988). Sometimes econometricians use a minimum of assistance from theoretical conceptions or hypotheses regarding the nature of the economic process by which the variables studied are generated (Koopmans 1970, p. 113). Other times, econometric models, such as those in time-series analysis, use clearly defined approaches such as identification, estimation, and diagnostics.

The tradition for introductory econometrics is to start with a single equation emanating from economic theory and knowledge of how to fit the theory to a sample of data. For example, on the economic side, econometricians have some a priori notions of the demand schedule such as the law of demand, implying that more will be bought as the price falls. This is enough of a hypothesis to allow statistical testing. The econometrician needs to confront this demand hypothesis with a sample of data, which is either time-series or cross-section.

The econometrician’s best friend is randomness. One way to appreciate randomness is to assume that the econometrician wants to explain how prices vary with the quantity sold in the form of a linear single-equation model Pt = a + bQt + εt, where P is price, Q is quantity, a and b are coefficients to be estimated, t is time, and ε is an error term. The error term is the main random mechanism in this model. It is assumed to be normally distributed with zero mean and constant variance, independent of the explanatory variables, and uncorrelated across different observations. Besides carrying these assumptions, the error term makes the dependent variable probabilistic, making clear that a statistical test cannot be based on the independent variables, which are not stochastic. Another requirement of randomness is that the observations should be kept sequentially in time in order to detect whether the errors are related serially, which is called serial correlation or autocorrelation of the error terms. This is measured by the Durbin-Watson statistic, ideally equal to 2. Other preliminary diagnostic tests would require the t-statistics of the coefficients to be approximately 2 or greater and the adjusted R-squared to be in the 90 percent range. The test of a good econometric model should emphasize the quality of the output of the model rather than merely the apparent quality of the model (Granger 1999, p. 62).
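
A minimal sketch of fitting such an equation and computing the diagnostics just mentioned follows. The price–quantity data are simulated with invented “true” parameter values, and the equation is estimated by ordinary least squares; nothing here refers to any particular data set.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 80

# Simulated observations on quantity and price: P_t = a + b*Q_t + e_t,
# with illustrative "true" values a = 20, b = -0.5.
Q = rng.uniform(10, 50, size=T)
e = rng.normal(0, 2, size=T)
P = 20 - 0.5 * Q + e

# Ordinary least squares.
X = np.column_stack([np.ones(T), Q])
beta = np.linalg.lstsq(X, P, rcond=None)[0]
resid = P - X @ beta

# Diagnostics discussed in the text.
sigma2 = resid @ resid / (T - 2)
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
t_stats = beta / se
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)   # Durbin-Watson, ideally near 2
r2 = 1 - (resid @ resid) / np.sum((P - P.mean()) ** 2)
adj_r2 = 1 - (1 - r2) * (T - 1) / (T - 2)

print("a, b estimates:", beta.round(3))
print("t-statistics:", t_stats.round(2))
print("Durbin-Watson:", round(dw, 2))
print("adjusted R-squared:", round(adj_r2, 3))
```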

Besides single equations, econometricians study models consisting of systems of equations. A system of equations is necessary to capture interrelations or feedbacks among economic variables. In microeconomics, the demand and supply curves and their equality are thought of as a model to study market conditions such as equilibrium, excess demand, or excess supply. In macroeconomics, the Keynesian consumption and investment functions and a national income identity are required to study full employment and full production. A system of equations is usually solved or reduced to a single equation for forecasting purposes, which requires variables to be classified either as given (exogenous), such as the money supply and tax rates, or as variables determined by structural equations within the system (endogenous), such as prices and quantities. When the value of a variable is not in doubt at the current time, perhaps because we are relying on its previous values, the variable is classified as predetermined. Structural equations are required in order to estimate the coefficients, whereas identity equations sum up definitional terms, such as the statement that gross national product is the sum of consumption and investment. The Keynesian system of equations requires that planned savings be equal to planned investment, which is referred to as an ex ante condition, as opposed to an ex post condition, where the variables are equal from an accounting perspective. The reduced form of the model can be used for policy purposes as instrument-versus-target models, as suggested by Jan Tinbergen (1952), for the attainment of social welfare goals, as suggested by Henri Theil (1961), or to simulate probable outcomes.
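
A two-equation Keynesian illustration, included here only to make the distinction between structural and reduced forms concrete, is

\[
C_t = a + bY_t + \varepsilon_t \quad\text{(structural consumption equation)},
\qquad
Y_t = C_t + I_t \quad\text{(identity)},
\]

with investment I_t treated as exogenous. Substituting the consumption equation into the identity gives the reduced form

\[
Y_t = \frac{a + I_t + \varepsilon_t}{1 - b},
\]

so a unit change in the exogenous variable raises income by the multiplier 1/(1 − b). The reduced form also shows why simultaneity matters for estimation: Y_t depends on ε_t, so the regressor in the consumption equation is correlated with its own disturbance.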

A system of equations has peculiarities on both the modeling and the estimation sides. On the modeling side, the main difficulties reside with the identification and reflection problems. Briefly stated, the identification problem requires that enough information be present in the model to make each equation represent a definite economic relation, such as supply or demand. The reflection problem is concerned with obtaining unique group data in order to explain individual behavior. Depending on the results of the identification analysis, appropriate techniques for estimating a system of equations are available, such as ordinary least squares (OLS) and three-stage least squares (3SLS).

Some pitfalls are common to both single equations and systems of equations. Multicollinearity occurs when the independent variables are related, such as when one variable measures activity for a day and another variable measures the same activity for a week, so that one is exactly seven times the other. A dummy variable trap occurs when binary variables—used, for example, to represent sex, seasonality, or shocks—add up to a column of ones and thereby duplicate the constant term.
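
The sketch below, with made-up quarterly data, shows the trap directly: four quarterly dummies plus a constant give a design matrix whose columns are linearly dependent, so the ordinary least-squares normal equations have no unique solution.

```python
import numpy as np

T = 12  # three years of invented quarterly observations
quarters = np.arange(T) % 4

# One dummy per quarter plus an intercept column.
dummies = np.equal.outer(quarters, np.arange(4)).astype(float)
X = np.column_stack([np.ones(T), dummies])

# The four dummies sum to the column of ones, so X has deficient column rank
# and X'X is singular: this is the dummy variable trap.
print("columns:", X.shape[1], " rank:", np.linalg.matrix_rank(X))

# Dropping one dummy (or the intercept) restores full column rank.
X_fixed = np.column_stack([np.ones(T), dummies[:, 1:]])
print("after dropping one dummy, rank:", np.linalg.matrix_rank(X_fixed))
```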

Expectations can be treated in both single equations and systems of equations. An expected variable may be present in the model, which requires one to specify, before estimation, how expectations are formed. One method calls for an adaptive mechanism that corrects for past errors. The more recent method of rational expectations requires the econometrician to adjust the expected value of the variable for all the information that is available. For instance, if one’s average commuting time to work is 10 minutes and one hears on the news that a traffic jam has occurred, an adjustment must be made to the average time for the forecast of the arrival time to be rational. Econometricians are trying to build large-scale rational expectations models to rival standard models such as the Wharton Econometric model, the Data Resource model, or the Federal Reserve Board U.S. model, but such achievements are not in sight as yet.
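
A sketch of the adaptive mechanism mentioned above is given here; the series and the adjustment coefficient are invented. Each period the expectation is revised by a fraction of the most recent forecast error.

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented series to be forecast (e.g., inflation) and an adjustment coefficient.
actual = 2.0 + np.cumsum(rng.normal(0, 0.3, size=40))
gamma = 0.4          # fraction of the last forecast error corrected each period

expected = np.zeros_like(actual)
expected[0] = actual[0]
for t in range(1, len(actual)):
    # Adaptive expectations: E_t = E_{t-1} + gamma * (actual_{t-1} - E_{t-1}).
    expected[t] = expected[t - 1] + gamma * (actual[t - 1] - expected[t - 1])

print(np.column_stack([actual, expected]).round(2)[:10])
```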

SEE ALSO Bayesian Econometrics; Causality; Classical Statistical Analysis; Expectations; Heteroskedasticity; Klein, Lawrence; Koopmans, Tjalling; Least Squares, Ordinary; Matrix Algebra; Models and Modeling; Multicollinearity; Random Samples; Regression; Regression Analysis; Residuals; Statistics; Structural Equation Models; Tinbergen, Jan

BIBLIOGRAPHY

Granger, Clive W. J. 1999. Empirical Modeling in Economics: Specification and Evaluation. London: Cambridge University Press.

Klein, Lawrence R. 1974. A Textbook of Econometrics. 2nd ed. Englewood Cliffs, NJ: Prentice Hall.

Koopmans, Tjalling C. 1970. Scientific Papers of Tjalling C. Koopmans. Vol. 1. New York: Springer-Verlag.

Malinvaud, Edmond. 1966. Statistical Methods of Econometrics. Amsterdam: North-Holland.

Theil, Henri. 1961. Economic Forecasts and Policy. 2nd ed. Amsterdam: North-Holland.

Thurman, Walter N., and Mark E. Fisher. 1988. Chicken, Eggs, and Causality, or Which Came First? American Journal of Agricultural Economics (May): 237–238.

Tinbergen, Jan. 1952. On the Theory of Economic Policy. Amsterdam: North-Holland.

Lall Ramrattan
Michael Szenberg

econometrics


econometrics Economic analysis using a combination of empirical data, techniques of statistical estimation, and (usually) some form of multivariate analysis, such as regression analysis, applied to economic theory. Econometric models of the economy are used in forecasting and policy analysis.
