Loss Functions


The loss function (or cost function) is a crucial ingredient in all optimizing problems, such as statistical decision theory, policymaking, estimation, forecasting, learning, classification, financial investment, and so on. The discussion here will be limited to the use of loss functions in econometrics, particularly in time series forecasting.

When a forecast $f_{t,h}$ of a variable $Y_{t+h}$ is made at time $t$ for $h$ periods ahead, a loss (or cost) arises if the forecast differs from the actual value. The loss function of the forecast error $e_{t+h} = Y_{t+h} - f_{t,h}$ is denoted as $c(Y_{t+h} - f_{t,h})$. The loss function can depend on the time of prediction, and so it can be written $c_{t+h}(Y_{t+h}, f_{t,h})$. If the loss function does not change with time and does not depend on the value of the variable $Y_{t+h}$, the loss can be written simply as a function of the error only, $c_{t+h}(Y_{t+h}, f_{t,h}) = c(e_{t+h})$.

Clive Granger (1999) discusses the following required properties for a loss function: (1) $c(0) = 0$ (no error, no loss); (2) $\min_e c(e) = 0$, so that $c(e) \ge 0$; and (3) $c(e)$ is monotonically nondecreasing as $e$ moves away from zero, so that $c(e_1) \ge c(e_2)$ if $e_1 > e_2 > 0$ and if $e_1 < e_2 < 0$.

When $c_1(e)$ and $c_2(e)$ are both loss functions, Granger (1999) shows that further examples of loss functions can be generated: $c(e) = a\,c_1(e) + b\,c_2(e)$, $a \ge 0$, $b \ge 0$, will be a loss function; $c(e) = c_1(e)^a\, c_2(e)^b$, $a > 0$, $b > 0$, will be a loss function; and $c(e) = 1(e > 0)\,c_1(e) + 1(e < 0)\,c_2(e)$ will be a loss function. If $h(\cdot)$ is a positive monotonic nondecreasing function with $h(0)$ finite, then $c(e) = h(c_1(e)) - h(0)$ is a loss function.
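As an illustration, here is a minimal Python sketch of these combination rules (the function names are ours, not Granger's), using the squared and absolute error losses as building blocks:

```python
import numpy as np

# Two elementary loss functions satisfying Granger's properties.
def c1(e):
    return e ** 2          # squared error loss

def c2(e):
    return np.abs(e)       # absolute error loss

# Rule 1: a non-negative weighted sum is again a loss function.
def c_sum(e, a=1.0, b=2.0):
    return a * c1(e) + b * c2(e)

# Rule 2: a product of positive powers is again a loss function.
def c_prod(e, a=1.0, b=1.0):
    return c1(e) ** a * c2(e) ** b

# Rule 3: different losses on each side of zero.
def c_asym(e):
    return np.where(e > 0, c1(e), 0.0) + np.where(e < 0, c2(e), 0.0)

errors = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
for c in (c_sum, c_prod, c_asym):
    losses = c(errors)
    assert losses[2] == 0 and np.all(losses >= 0)   # c(0) = 0 and c(e) >= 0
```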

LOSS FUNCTIONS AND RISK

Granger (2002) notes that an expected loss (a risk measure) of a financial return $Y_{t+1}$ that has a conditional predictive distribution $F_t(y) \equiv \Pr(Y_{t+1} \le y \,|\, I_t)$, with $X_t \in I_t$, may be written as

$$E[c(e_{t+1})] = A_1 \int_{f}^{\infty} |y - f|^{\theta}\, dF_t(y) + A_2 \int_{-\infty}^{f} |y - f|^{\theta}\, dF_t(y),$$

with $A_1, A_2$ both $> 0$ and some $\theta > 0$. Considering the symmetric case $A_1 = A_2$, one has a class of volatility measures $V_\theta = E[\,|y - f|^{\theta}\,]$, which includes the variance with $\theta = 2$ and the mean absolute deviation with $\theta = 1$.

Zhuanxin Ding, Clive Granger, and Robert Engle (1993) study the time series and distributional properties of these measures empirically and show that the absolute deviations have some particular properties, such as the longest memory. Granger remarks that, given that financial returns are known to come from a long-tail distribution, $\theta = 1$ may be preferable.

Another problem raised by Granger is how to choose the optimal $L_p$-norm in empirical work: minimize $E[\,|\varepsilon_t|^p\,]$ for some $p$ to estimate the regression model $Y_t = X_t \beta + \varepsilon_t$. As the asymptotic covariance matrix of $\hat\beta$ depends on $p$, the most appropriate value of $p$ can be chosen to minimize the covariance matrix. In particular, Granger (2002) refers to a trio of papers (Nyquist 1983; Money et al. 1982; Harter 1977) that find that the optimal $p = 1$ for the Laplace and Cauchy distributions, $p = 2$ for the Gaussian, and $p = \infty$ (the min/max estimator) for a rectangular distribution. Granger (2002) also notes that, in terms of the kurtosis $\kappa$, H. L. Harter (1977) suggests using $p = 1$ for $\kappa > 3.8$, $p = 2$ for $2.2 \le \kappa \le 3.8$, and $p = 3$ for $\kappa < 2.2$. In finance, the kurtosis of returns can be thought of as being well over 4, so $p = 1$ is preferred.
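A small sketch, assuming a simple moment-based sample kurtosis, of how Harter's rule of thumb might be applied in practice:

```python
import numpy as np

def sample_kurtosis(x):
    """Non-excess kurtosis: E[(x - mu)^4] / sigma^4 (Gaussian = 3)."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    s2 = x.var()
    return ((x - m) ** 4).mean() / s2 ** 2

def harter_p(x):
    """Harter's (1977) kurtosis-based choice of the L_p estimation norm."""
    k = sample_kurtosis(x)
    if k > 3.8:
        return 1       # heavy tails: least absolute deviations
    elif k >= 2.2:
        return 2       # near-Gaussian: least squares
    else:
        return 3       # short tails

rng = np.random.default_rng(0)
returns = rng.standard_t(df=5, size=5000)   # fat-tailed: kurtosis of t(5) is 9
print(sample_kurtosis(returns), harter_p(returns))  # p = 1 expected
```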

We consider some variant loss functions with $\theta = 1, 2$ below.

LOSS FUNCTIONS AND REGRESSION FUNCTIONS

Optimal forecasting of a time series model depends extensively on the specification of the loss function. The symmetric quadratic loss function is the most prevalent in applications due to its simplicity. The optimal forecast under quadratic loss is simply the conditional mean, but an asymmetric loss function implies a more complicated forecast that depends on the distribution of the forecast error as well as on the loss function itself (Granger 1999), as the expected loss is formulated with the expectation taken with respect to the conditional distribution. Specification of the loss function defines the model under consideration.

Consider a stochastic process $Z_t \equiv (Y_t, X_t)$, where $Y_t$ is the variable of interest and $X_t$ is a vector of other variables. Suppose there are $T + 1$ ($\equiv R + P$) observations. We use the observations available at time $t$, $R \le t < T + 1$, to generate $P$ forecasts using each model. For each time $t$ in the prediction period, we use either a rolling sample $\{Z_{t-R+1}, \ldots, Z_t\}$ of size $R$ or the whole past sample $\{Z_1, \ldots, Z_t\}$ to estimate model parameters $\hat\beta_t$. We can then generate a sequence of one-step-ahead forecasts $\{f(Z_t, \hat\beta_t)\}_{t=R}^{T}$.

Suppose that there is a decision maker who takes a one-step point forecast $f_{t,1} \equiv f(Z_t, \hat\beta_t)$ of $Y_{t+1}$ and uses it in some relevant decision. The one-step forecast error $e_{t+1} \equiv Y_{t+1} - f_{t,1}$ will result in a cost of $c(e_{t+1})$, where the function $c(e)$ will increase as $e$ increases in size, but not necessarily symmetrically or continuously. The optimal forecast $f^*_{t,1}$ will be chosen to produce the forecast errors that minimize the expected loss

$$\min_{f_{t,1}} \int_{-\infty}^{\infty} c(y - f_{t,1})\, dF_t(y),$$

where $F_t(y) \equiv \Pr(Y_{t+1} \le y \,|\, I_t)$ is the conditional distribution function, with $I_t$ being some proper information set at time $t$ that includes $Z_{t-j}$, $j \ge 0$. The corresponding optimal forecast error will be $e^*_{t+1} = Y_{t+1} - f^*_{t,1}$.

Then the optimal forecast would satisfy

$$\frac{\partial}{\partial f_{t,1}} \int_{-\infty}^{\infty} c(y - f^*_{t,1})\, dF_t(y) = 0.$$

When we interchange the operations of differentiation and integration, $E\big[\tfrac{\partial}{\partial f_{t,1}} c(Y_{t+1} - f^*_{t,1}) \,\big|\, I_t\big] = 0$, so the generalized forecast error,

$$g_{t+1} \equiv \frac{\partial}{\partial f_{t,1}} c(Y_{t+1} - f^*_{t,1}),$$

forms the condition of forecast optimality:

$$H_0 : E(g_{t+1} \,|\, I_t) = 0 \quad \text{a.s.},$$

that is, a martingale difference (MD) property of the generalized forecast error. This forms the optimality condition of the forecasts and gives an appropriate regression function corresponding to the specified loss function $c(\cdot)$.

To see this, consider the following two examples. First, when the loss function is the squared error loss,

$$c(e_{t+1}) = e_{t+1}^2,$$

the generalized forecast error will be $g_{t+1} = -2 e_{t+1}$, and thus $E(e_{t+1} \,|\, I_t) = 0$ a.s., which implies that the optimal forecast

$$f^*_{t,1} = E(Y_{t+1} \,|\, I_t)$$

is the conditional mean. Next, when the loss is the check function, $c(e) = [\alpha - 1(e < 0)] \cdot e \equiv \rho_\alpha(e_{t+1})$, the optimal forecast $f^*_{t,1}$, for given $\alpha \in (0, 1)$, minimizing

$$\min_{f_{t,1}} \int_{-\infty}^{\infty} \rho_\alpha(y - f_{t,1})\, dF_t(y)$$

can be shown to satisfy

$$\int_{-\infty}^{\infty} \big[\alpha - 1(y < f^*_{t,1})\big]\, dF_t(y) = 0.$$

Hence, $g_{t+1} = \alpha - 1(e_{t+1} < 0)$ is the generalized forecast error. Therefore, $E(g_{t+1} \,|\, I_t) = \alpha - \Pr(e_{t+1} < 0 \,|\, I_t) = 0$ a.s., and the optimal forecast

$$f^*_{t,1} = q_\alpha(Y_{t+1} \,|\, I_t)$$

is the conditional α-quantile.
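A quick numerical illustration (a sketch, not from the original article) that minimizing the average check loss over candidate forecasts recovers the α-quantile:

```python
import numpy as np

def check_loss(e, alpha):
    """rho_alpha(e) = [alpha - 1(e < 0)] * e."""
    return (alpha - (e < 0)) * e

rng = np.random.default_rng(1)
y = rng.normal(size=100_000)            # draws approximating F_t
alpha = 0.25

# Scan candidate forecasts f and pick the minimizer of the mean check loss.
grid = np.linspace(-3, 3, 1201)
risk = [check_loss(y - f, alpha).mean() for f in grid]
f_star = grid[int(np.argmin(risk))]

print(f_star, np.quantile(y, alpha))    # both approx -0.674, the N(0,1) 25% quantile
```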

LOSS FUNCTIONS FOR TRANSFORMATIONS

Granger (1999) notes that it is implausible to use the same loss function for forecasting $Y_{t+h}$ and for forecasting $h_{t+h} = h(Y_{t+h})$, where $h(\cdot)$ is some function, such as the log or the square, if one is interested in forecasting volatility. Suppose the loss functions $c_1(\cdot)$ and $c_2(\cdot)$ are used for forecasting $Y_{t+h}$ and $h(Y_{t+h})$, respectively. The error $e_{t+1} \equiv Y_{t+1} - f_{t,1}$ will result in a cost of $c_1(e_{t+1})$, for which the optimal forecast $f^*_{t,1}$ will be chosen from $\min_{f_{t,1}} \int c_1(y - f_{t,1})\, dF_t(y)$, where $F_t(y) \equiv \Pr(Y_{t+1} \le y \,|\, I_t)$. The error $\varepsilon_{t+1} \equiv h_{t+1} - h_{t,1}$ will result in a cost of $c_2(\varepsilon_{t+1})$, for which the optimal forecast $h^*_{t,1}$ will be chosen from $\min_{h_{t,1}} \int c_2(h - h_{t,1})\, dH_t(h)$, where $H_t(h) \equiv \Pr(h_{t+1} \le h \,|\, I_t)$. Then the optimal forecasts for $Y$ and $h$ would respectively satisfy

$$\int \frac{\partial}{\partial f_{t,1}} c_1(y - f^*_{t,1})\, dF_t(y) = 0, \qquad \int \frac{\partial}{\partial h_{t,1}} c_2(h - h^*_{t,1})\, dH_t(h) = 0.$$

It is easy to see that the optimality condition for $f^*_{t,1}$ does not imply the optimality condition for $h^*_{t,1}$ in general. Under some strong conditions on the functional forms of the transformation $h(\cdot)$ and of the two loss functions $c_1(\cdot)$ and $c_2(\cdot)$, the two conditions may coincide. Granger (1999) remarks that it would be strange behavior to use the same loss function for $Y$ and $h(Y)$. This awaits further analysis in future research.

LOSS FUNCTIONS FOR ASYMMETRY

The most prevalent loss function for the evaluation of a forecast is the symmetric quadratic function. Negative and positive forecast errors of the same magnitude have the same loss. This functional form is assumed because mathematically it is very tractable, but from an economic point of view, it is not very realistic. For a given information set and under a quadratic loss, the optimal forecast is the conditional mean of the variable under study. The choice of the loss function is fundamental to the construction of an optimal forecast. For asymmetric loss functions, the optimal forecast can be more complicated as it will depend not only on the choice of the loss function but also on the characteristics of the probability density function of the forecast error (Granger 1999).

As Granger (1999) notes, the overwhelming majority of forecasting work uses the cost function $c(e) = a e^2$, $a > 0$, largely for mathematical convenience. An asymmetric loss function, however, is often more relevant. A few examples from Granger (1999) follow. The cost of arriving ten minutes early at the airport is quite different from the cost of arriving ten minutes late. The cost of having a computer that is 10 percent too small for a task is different from the cost of one that is 10 percent too big. The loss from booking a lecture room with ten seats too many for your class is different from that of a room with ten seats too few. In dam construction, an underestimate of the peak water level is usually much more serious than an overestimate (Zellner 1986).

There are some commonly used asymmetric loss functions. The check loss function $c(y, f) \equiv [\alpha - 1(y < f)] \cdot (y - f)$, or $c(e) \equiv [\alpha - 1(e < 0)] \cdot e$, makes the optimal predictor $f$ the conditional α-quantile. The check loss function is also known as the tick function or lin-lin loss. The asymmetric quadratic loss $c(e) \equiv |\alpha - 1(e < 0)| \cdot e^2$ can also be considered. A value of $\alpha = 0.5$ gives the symmetric squared error loss.

A particularly interesting asymmetric loss is the linex function of Hal Varian (1975), which takes the form

$$c_1(e, \alpha) = \exp(\alpha e_{t+1}) - \alpha e_{t+1} - 1,$$

where α is a scalar that controls the aversion toward either positive (α > 0) or negative (α < 0) forecast errors. The linex function is differentiable. If α > 0, the linex is exponential for $e > 0$ and linear for $e < 0$. If α < 0, the linex is exponential for $e < 0$ and linear for $e > 0$. To make the linex more flexible, it can be modified to the double linex loss function,

$$c(e) = \exp(\alpha e) - \alpha e - 1 + \exp(-\beta e) + \beta e - 1, \qquad \alpha, \beta > 0,$$

which is exponential for all values of $e$ (Granger 1999). When $\alpha = \beta$, it becomes the symmetric double linex loss function.
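For concreteness, a short sketch of the linex and double linex losses as defined above (parameter names are ours):

```python
import numpy as np

def linex(e, a):
    """Linex loss: exponential on one side of zero, roughly linear on the other."""
    return np.exp(a * e) - a * e - 1.0

def double_linex(e, a, b):
    """Double linex: exponential in both tails; symmetric when a == b."""
    return np.exp(a * e) - a * e - 1.0 + np.exp(-b * e) + b * e - 1.0

e = np.array([-1.0, 0.0, 1.0])
print(linex(e, 0.5))              # a > 0: heavier penalty on positive errors
print(double_linex(e, 0.5, 0.5))  # symmetric double linex
```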

LOSS FUNCTIONS FOR FORECASTING FINANCIAL RETURNS

Some simple examples of the loss function for evaluating the point forecasts of financial returns are the out-of-sample means of the following loss functions studied in Yongmiao Hong and Tae-Hwy Lee (2003): the squared error loss $c(y, f) = (y - f)^2$; the absolute error loss $c(y, f) = |y - f|$; the trading return $c(y, f) = -\mathrm{sign}(f) \cdot y$ (when $y$ is a financial asset return); and the correct direction $c(y, f) = -\mathrm{sign}(f) \cdot \mathrm{sign}(y)$, where $\mathrm{sign}(x) = 1(x > 0) - 1(x < 0)$ and $1(\cdot)$ takes the value of 1 if the statement in the parentheses is true and 0 otherwise. The negative signs in the latter two make them losses to minimize (rather than maximize). The out-of-sample means of these loss functions are the mean squared forecast error (MSFE), mean absolute forecast error (MAFE), mean forecast trading return (MFTR), and mean correct forecast direction (MCFD):

$$\mathrm{MSFE} = \frac{1}{P} \sum_{t=R}^{T-1} (Y_{t+1} - f_{t,1})^2, \qquad \mathrm{MAFE} = \frac{1}{P} \sum_{t=R}^{T-1} |Y_{t+1} - f_{t,1}|,$$

$$\mathrm{MFTR} = \frac{1}{P} \sum_{t=R}^{T-1} \mathrm{sign}(f_{t,1})\, Y_{t+1}, \qquad \mathrm{MCFD} = \frac{1}{P} \sum_{t=R}^{T-1} 1\big(\mathrm{sign}(f_{t,1})\, \mathrm{sign}(Y_{t+1}) > 0\big).$$

These loss functions may further incorporate issues such as interest differentials, transaction costs, and market depth. Because investors are ultimately trying to maximize profits rather than minimize forecast errors, MSFE and MAFE may not be the most appropriate evaluation criteria. Granger (1999) emphasizes the importance of model evaluation using economic measures such as MFTR rather than statistical criteria such as MSFE and MAFE. Note that MFTR for the buy-and-hold trading strategy with $\mathrm{sign}(f_{t,1}) = 1$ is the unconditional mean return of an asset, because $\mathrm{MFTR} \to \mu$ in probability as $P \to \infty$, where $\mu \equiv E(Y_t)$. MCFD is closely associated with an economic measure, as it relates to market timing. Mutual fund managers, for example, can adjust investment portfolios in a timely manner if they can predict the directions of changes, thus earning a return higher than the market average.
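A compact sketch of the four out-of-sample criteria; the toy forecast series below is hypothetical:

```python
import numpy as np

def sign(x):
    """sign(x) = 1(x > 0) - 1(x < 0), as defined in the text."""
    return (x > 0).astype(float) - (x < 0).astype(float)

def evaluate_forecasts(y, f):
    """y: realized returns; f: one-step-ahead point forecasts (same length P)."""
    y, f = np.asarray(y), np.asarray(f)
    return {
        "MSFE": np.mean((y - f) ** 2),
        "MAFE": np.mean(np.abs(y - f)),
        "MFTR": np.mean(sign(f) * y),           # mean trading return (maximize)
        "MCFD": np.mean(sign(f) * sign(y) > 0), # fraction of correct directions
    }

rng = np.random.default_rng(2)
y = rng.normal(0.0005, 0.01, size=250)
f = 0.5 * y + rng.normal(0, 0.005, size=250)    # a noisy but informative forecast
print(evaluate_forecasts(y, f))
```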

LOSS FUNCTIONS FOR ESTIMATION AND EVALUATION

When the forecast is based on an econometric model, the model must be estimated before the forecast can be constructed. Inconsistent choices of loss functions in estimation and forecasting are often observed. We may choose a symmetric quadratic objective function to estimate the parameters of the model, but the evaluation of the model-based forecast may be based on an asymmetric loss function. This logical inconsistency is not inconsequential for tests assessing the predictive ability of the forecasts. The error introduced by parameter estimation affects the uncertainty of the forecast and, consequently, any test based on it.

However, in applications it is often the case that the loss function used for estimation of a model is different from the one(s) used in its evaluation. This logical inconsistency can have significant consequences for comparisons of the predictive ability of competing models. The uncertainty associated with parameter estimation may result in invalid inference on predictive ability (West 1996). When the objective function in estimation is the same as the loss function in forecasting, the effect of parameter estimation vanishes. If one believes that a particular criterion should be used to evaluate forecasts, then it may also be used at the estimation stage of the modeling process. Gloria González-Rivera, Tae-Hwy Lee, and Emre Yoldas (2007) show this in the context of the VaR model of RiskMetrics, which provides a set of tools to measure market risk and eventually forecast the value-at-risk (VaR) of a portfolio of financial assets. RiskMetrics offers a prime example in which the loss function of the forecaster is very well defined. They point out that a VaR is a quantile, and thus the check loss function can be the objective function used to estimate the parameters of the RiskMetrics model.
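As an illustration of matching the estimation loss to the evaluation loss, here is a sketch that estimates a one-parameter VaR scaling by minimizing the check loss; the simple location-scale model is a stand-in for illustration, not the actual RiskMetrics recursion:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def check_loss(e, alpha):
    return (alpha - (e < 0)) * e

rng = np.random.default_rng(3)
returns = rng.normal(0, 0.01, size=1000)
alpha = 0.05

# Hypothetical model: VaR_t = theta * sigma_t, with a fixed volatility estimate.
sigma = np.full_like(returns, returns.std())

def objective(theta):
    var_forecast = theta * sigma
    return check_loss(returns - var_forecast, alpha).mean()

# Estimating theta under the same check loss used to evaluate VaR forecasts.
res = minimize_scalar(objective, bounds=(-5, 5), method="bounded")
print(res.x, np.quantile(returns / sigma, alpha))  # theta approx the 5% quantile
```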

LOSS FUNCTION FOR BINARY FORECAST AND MAXIMUM SCORE

Given a series $\{Y_t\}$, consider the binary variable $G_{t+1} \equiv 1(Y_{t+1} > 0)$. We consider the asymmetric risk function to discuss binary prediction. To define the asymmetric risk with $A_1 \neq A_2$ and $p = 1$, we consider the binary decision problem of Clive Granger and Hashem Pesaran (2000b) and Tae-Hwy Lee and Yang Yang (2006), with the following 2×2 payoff or utility matrix:

Utility               G_{t+1} = 1    G_{t+1} = 0
G_{t,1}(X_t) = 1      u_11           u_01
G_{t,1}(X_t) = 0      u_10           u_00

where $u_{ij}$ is the utility when $G_{t,1}(X_t) = j$ is predicted and $G_{t+1} = i$ is realized ($i, j = 0, 1$). Assume $u_{11} > u_{10}$ and $u_{00} > u_{01}$, and that the $u_{ij}$ are constant over time; $(u_{11} - u_{10}) > 0$ is the utility gain from a correct forecast when $G_{t+1} = 1$, and $(u_{00} - u_{01}) > 0$ is the utility gain from a correct forecast when $G_{t+1} = 0$. Denote

$$\pi(X_t) \equiv E(G_{t+1} \,|\, X_t) = \Pr(G_{t+1} = 1 \,|\, X_t).$$

The expected utility of predicting $G_{t,1}(X_t) = 1$ is $u_{11}\, \pi(X_t) + u_{01}(1 - \pi(X_t))$, and the expected utility of predicting $G_{t,1}(X_t) = 0$ is $u_{10}\, \pi(X_t) + u_{00}(1 - \pi(X_t))$. Hence, to maximize utility, conditional on the values of $X_t$, the prediction $G_{t,1}(X_t) = 1$ will be made if

$$u_{11}\, \pi(X_t) + u_{01}(1 - \pi(X_t)) > u_{10}\, \pi(X_t) + u_{00}(1 - \pi(X_t)),$$

or

$$\pi(X_t) > \frac{u_{00} - u_{01}}{(u_{11} - u_{10}) + (u_{00} - u_{01})} \equiv 1 - \alpha.$$

By making a correct prediction, our net utility gain is $(u_{00} - u_{01})$ when $G_{t+1} = 0$, and $(u_{11} - u_{10})$ when $G_{t+1} = 1$. Put another way, our opportunity cost (in the sense that we lose the gain) of a wrong prediction is $(u_{00} - u_{01})$ when $G_{t+1} = 0$ and $(u_{11} - u_{10})$ when $G_{t+1} = 1$. Since a positive multiple of a utility function represents the same preference,

$$1 - \alpha = \frac{u_{00} - u_{01}}{(u_{11} - u_{10}) + (u_{00} - u_{01})}$$

can be viewed as the utility gain from a correct prediction when $G_{t+1} = 0$, or the opportunity cost of a false alert. Similarly,

$$\alpha = \frac{u_{11} - u_{10}}{(u_{11} - u_{10}) + (u_{00} - u_{01})}$$

can be treated as the utility gain from a correct prediction when $G_{t+1} = 1$ is realized, or the opportunity cost of a failure-to-alert. We thus can define a cost function $c(e_{t+1})$ with $e_{t+1} = G_{t+1} - G_{t,1}(X_t)$:

Cost                  G_{t+1} = 1    G_{t+1} = 0
G_{t,1}(X_t) = 1      0              1 - α
G_{t,1}(X_t) = 0      α              0

That is,

$$c(e_{t+1}) = \alpha\, 1(e_{t+1} > 0) + (1 - \alpha)\, 1(e_{t+1} < 0),$$

which can be equivalently written as $c(e_{t+1}) = \rho_\alpha(e_{t+1})$, where $\rho_\alpha(e) \equiv [\alpha - 1(e < 0)]\, e$ is the check function. Hence, the optimal binary predictor maximizing the expected utility minimizes the expected cost $E(\rho_\alpha(e_{t+1}) \,|\, X_t)$.

The optimal binary prediction that minimizes $E_{Y_{t+1}}(\rho_\alpha(e_{t+1}) \,|\, X_t)$ is the conditional α-quantile of $G_{t+1}$, denoted as

$$G^*_{t,1}(X_t) = Q_\alpha(G_{t+1} \,|\, X_t).$$

This is a maximum score problem of Charles Manski (1975).

Also, as noted by James Powell (1986), using the fact that for any monotonic function $h(\cdot)$, $Q_\alpha(h(Y_{t+1}) \,|\, X_t) = h(Q_\alpha(Y_{t+1} \,|\, X_t))$, which follows immediately from observing that $\Pr(Y_{t+1} < y \,|\, X_t) = \Pr[h(Y_{t+1}) < h(y) \,|\, X_t]$, and noting that the indicator function is monotonic, $Q_\alpha(G_{t+1} \,|\, X_t) = Q_\alpha(1(Y_{t+1} > 0) \,|\, X_t) = 1(Q_\alpha(Y_{t+1} \,|\, X_t) > 0)$.

Hence,

$$G^*_{t,1}(X_t) = 1\big(Q_\alpha(Y_{t+1} \,|\, X_t) > 0\big),$$

where $Q_\alpha(Y_{t+1} \,|\, X_t)$ is the α-quantile function of $Y_{t+1}$ conditional on $X_t$. Note that $E_{Y_{t+1}}(\rho_\alpha(e_{t+1}) \,|\, X_t)$, with $e_{t+1} \equiv G_{t+1} - Q_\alpha(G_{t+1} \,|\, X_t)$, is the binary analogue of $E_{Y_{t+1}}(\rho_\alpha(u_{t+1}) \,|\, X_t)$, with $u_{t+1} \equiv Y_{t+1} - Q_\alpha(Y_{t+1} \,|\, X_t)$. Therefore, the optimal binary prediction can be made from binary quantile regression for $G_{t+1}$. Binary prediction can also be made from a binary function of the α-quantile for $Y_{t+1}$.
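A minimal sketch translating a 2×2 utility matrix into the threshold $1 - \alpha$ and the resulting binary prediction, following the derivation above (the utility values are hypothetical):

```python
import numpy as np

def binary_predict(pi, u11, u00, u10, u01):
    """Predict G = 1 when expected utility favors it, i.e., pi > 1 - alpha."""
    gain_1 = u11 - u10          # utility gain from a correct call when G = 1
    gain_0 = u00 - u01          # utility gain from a correct call when G = 0
    alpha = gain_1 / (gain_1 + gain_0)
    return (np.asarray(pi) > 1 - alpha).astype(int), alpha

# A failure-to-alert (missing G = 1) is three times as costly as a false alert.
pred, alpha = binary_predict([0.2, 0.3, 0.5], u11=3, u00=1, u10=0, u01=0)
print(alpha, pred)   # alpha = 0.75, so predict 1 whenever pi > 0.25
```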

LOSS FUNCTIONS FOR PROBABILITY FORECASTS

Francis Diebold and Glenn Rudebusch (1989) consider probability forecasts for business-cycle turning points. To measure the accuracy of predicted probabilities, we need the average distance between the predicted probabilities and the observed realizations (as measured by a zero-one dummy variable). Suppose we have a time series of probability forecasts $\{p_t\}_{t=1}^{T}$, where $p_t$ is the probability of the occurrence of a turning point at date $t$. Let $\{d_t\}_{t=1}^{T}$ be the corresponding realizations, with $d_t = 1$ if a business-cycle turning point (or any defined event) occurs in period $t$ and $d_t = 0$ otherwise. The loss function analogous to the squared error is the Brier score, based on the quadratic probability score (QPS):

$$\mathrm{QPS} = \frac{1}{T} \sum_{t=1}^{T} 2\, (p_t - d_t)^2.$$

The QPS ranges from 0 to 2, with 0 for perfect accuracy. As noted by Diebold and Rudebusch (1989), the use of a symmetric loss function may not be appropriate, as one may want to penalize a forecaster more heavily for missing a call (making a Type II error) than for signaling a false alarm (making a Type I error). Another loss function is given by the log probability score (LPS),

$$\mathrm{LPS} = -\frac{1}{T} \sum_{t=1}^{T} \big[(1 - d_t) \ln(1 - p_t) + d_t \ln p_t\big],$$

which is similar to the loss for the interval forecast. Major mistakes are penalized more heavily under LPS than under QPS. Further loss functions are discussed in Diebold and Rudebusch (1989).
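A short sketch of QPS and LPS for probability forecasts $p_t$ and 0/1 realizations $d_t$:

```python
import numpy as np

def qps(p, d):
    """Quadratic probability score; 0 = perfect accuracy, 2 = worst."""
    p, d = np.asarray(p, float), np.asarray(d, float)
    return np.mean(2.0 * (p - d) ** 2)

def lps(p, d):
    """Log probability score; penalizes major mistakes more heavily than QPS."""
    p, d = np.asarray(p, float), np.asarray(d, float)
    return -np.mean(d * np.log(p) + (1 - d) * np.log(1 - p))

p = np.array([0.9, 0.2, 0.7, 0.1])
d = np.array([1, 0, 1, 1])
print(qps(p, d), lps(p, d))   # the missed call (p = 0.1, d = 1) dominates LPS
```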

Another loss function useful in this context is the Kuipers score (KS), which is defined by

KS = Hit Rate − False Alarm Rate,

where the hit rate is the fraction of the bad events that were correctly predicted as bad events (power, or 1 minus the probability of a Type II error), and the false alarm rate is the fraction of good events that were incorrectly predicted as bad events (the probability of a Type I error).
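A matching sketch of the Kuipers score for 0/1 predictions and realizations:

```python
import numpy as np

def kuipers_score(pred, d):
    """KS = hit rate - false alarm rate, for 0/1 predictions and realizations."""
    pred, d = np.asarray(pred), np.asarray(d)
    hit_rate = np.mean(pred[d == 1] == 1)      # bad events correctly called
    false_alarm = np.mean(pred[d == 0] == 1)   # good events incorrectly called
    return hit_rate - false_alarm

print(kuipers_score([1, 0, 1, 1, 0], [1, 0, 1, 0, 0]))  # 1.0 - 0.333 = 0.667
```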

LOSS FUNCTION FOR INTERVAL FORECASTS

Suppose $Y_t$ is a stationary series. Let the one-period-ahead conditional interval forecast made at time $t$ from a model be denoted as

$$J_{t,1}(\alpha) = \big(L_{t,1}(\alpha),\, U_{t,1}(\alpha)\big), \qquad t = R, \ldots, T,$$

where $L_{t,1}(\alpha)$ and $U_{t,1}(\alpha)$ are the lower and upper limits of the ex ante interval forecast for time $t + 1$ made at time $t$ with coverage probability α. Define the indicator variable $X_{t+1}(\alpha) = 1[Y_{t+1} \in J_{t,1}(\alpha)]$. The sequence $\{X_{t+1}(\alpha)\}$ is IID Bernoulli(α). The optimal interval forecast would satisfy $E(X_{t+1}(\alpha) \,|\, I_t) = \alpha$, so that $\{X_{t+1}(\alpha) - \alpha\}$ will be an MD. A better model has a larger expected Bernoulli likelihood

$$E\big[\alpha^{X_{t+1}(\alpha)} (1 - \alpha)^{1 - X_{t+1}(\alpha)}\big].$$

Hence, we can choose a model for interval forecasts with the smallest out-of-sample mean of the negative predictive log-likelihood, defined by

$$-\frac{1}{P} \sum_{t=R}^{T-1} \big[X_{t+1}(\alpha) \ln \alpha + (1 - X_{t+1}(\alpha)) \ln(1 - \alpha)\big].$$
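A sketch of this negative predictive log-likelihood for a fixed-width interval forecast (the interval limits here are illustrative):

```python
import numpy as np

def interval_loss(y, lower, upper, alpha):
    """Out-of-sample mean negative Bernoulli log-likelihood of hit indicators."""
    y, lower, upper = map(np.asarray, (y, lower, upper))
    x = ((y >= lower) & (y <= upper)).astype(float)   # hit indicator X_{t+1}(alpha)
    return -np.mean(x * np.log(alpha) + (1 - x) * np.log(1 - alpha))

rng = np.random.default_rng(4)
y = rng.normal(size=500)
# A 90% interval from the standard normal quantiles (+/- 1.645).
print(interval_loss(y, -1.645, 1.645, alpha=0.90))
```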

LOSS FUNCTION FOR DENSITY FORECASTS

Consider a financial return series $\{y_t\}_{t=1}^{T}$. This observed data on a univariate series is a realization of a stochastic process $Y^T \equiv \{Y_t : \Omega \to \mathbb{R},\ t = 1, 2, \ldots, T\}$ on a complete probability space $(\Omega, \mathcal{F}^T, P^T_0)$, where $\Omega = \mathbb{R}^T \equiv \times_{t=1}^{T} \mathbb{R}$ and $\mathcal{F}^T = \mathcal{B}(\mathbb{R}^T)$ is the Borel σ-field generated by the open sets of $\mathbb{R}^T$, and the joint probability measure $P^T_0(B) \equiv P_0[Y^T \in B]$, $B \in \mathcal{B}(\mathbb{R}^T)$, completely describes the stochastic process. A sample of size $T$ is denoted as $y^T \equiv (y_1, \ldots, y_T)'$.

Let a σ-finite measure $\nu^T$ on $\mathcal{B}(\mathbb{R}^T)$ be given. Assume $P^T_0(B)$ is absolutely continuous with respect to $\nu^T$ for all $T = 1, 2, \ldots$, so that there exists a measurable Radon-Nikodým density $g^T(y^T) = dP^T_0 / d\nu^T$, unique up to a set of $\nu^T$-measure zero.

Following Halbert White (1994), we define a probability model $\mathcal{P}$ as a collection of distinct probability measures on the measurable space $(\Omega, \mathcal{F}^T)$. A probability model $\mathcal{P}$ is said to be correctly specified for $Y^T$ if $\mathcal{P}$ contains $P^T_0$. Our goal is to evaluate and compare a set of parametric probability models $\{P^T_\theta\}$. Suppose there exists a measurable Radon-Nikodým density $f^T(y^T; \theta) = dP^T_\theta / d\nu^T$ for each $\theta \in \Theta$, where $\theta$ is a finite-dimensional vector of parameters and is assumed to be identified on $\Theta$, a compact subset of $\mathbb{R}^k$ (see White 1994, Theorem 2.6).

In the context of forecasting, instead of the joint density $g^T(y^T)$, we consider forecasting the conditional density of $Y_t$ given the information $\mathcal{F}_{t-1}$ generated by $Y^{t-1}$. Let $\pi_t(Y_t) \equiv \pi_t(Y_t \,|\, \mathcal{F}_{t-1}) \equiv g^t(Y^t) / g^{t-1}(Y^{t-1})$ for $t = 2, 3, \ldots$, and $\pi_1(Y_1) \equiv \pi_1(Y_1 \,|\, \mathcal{F}_0) \equiv g^1(Y^1) = g_1(Y_1)$. Thus the goal is to forecast the (true, unknown) conditional density $\pi_t(Y_t)$.

For this, we use a one-step-ahead conditional density forecast model $\psi_t(Y_t; \theta) \equiv \psi_t(Y_t \,|\, \mathcal{F}_{t-1}; \theta)$ for $t = 2, 3, \ldots$. If $\psi_t(Y_t; \theta_0) = \pi_t(Y_t)$ almost surely for some $\theta_0 \in \Theta$, then the one-step-ahead density forecast is correctly specified, and it is said to be optimal because it dominates all other density forecasts for any loss function, as discussed in the previous section (see Granger and Pesaran 2000a, 2000b; Diebold et al. 1998; Granger 1999).

In practice, it is rarely the case that we can find an optimal model. As it is very likely that the true distribution is too complicated to be represented by a simple mathematical function (Sawa 1978), all the models proposed by different researchers may be misspecified, and we therefore regard each model as an approximation to the truth. Our task is then to investigate which density forecast model approximates the true conditional density most closely. We first define a metric to measure the distance of a given model from the truth, and then compare different models in terms of this distance.

The adequacy of a density forecast model can be measured by the conditional Kullback-Leibler information criterion (KLIC; Kullback and Leibler 1951) divergence measure between two conditional densities,

$$\mathbb{I}_t(\pi : \psi, \theta) = E_{\pi_t}\big[\ln \pi_t(Y_t) - \ln \psi_t(Y_t; \theta)\big],$$

where the expectation $E_{\pi_t}$ is with respect to the true conditional density $\pi_t(\cdot \,|\, \mathcal{F}_{t-1})$. Following White (1994), we define the distance between a density model and the true density as the minimum of the KLIC,

$$\mathbb{I}_t(\pi : \psi, \theta^*_{t-1}) = E_{\pi_t}\big[\ln \pi_t(Y_t) - \ln \psi_t(Y_t; \theta^*_{t-1})\big],$$

where $\theta^*_{t-1} = \arg\min_\theta \mathbb{I}_t(\pi : \psi, \theta)$ is the pseudo-true value of $\theta$ (Sawa 1978). We assume that $\theta^*_{t-1}$ is an interior point of $\Theta$. The smaller this distance, the closer the density forecast $\psi_t(\cdot \,|\, \mathcal{F}_{t-1}; \theta^*_{t-1})$ is to the true density $\pi_t(\cdot \,|\, \mathcal{F}_{t-1})$.

However, $\mathbb{I}_t(\pi : \psi, \theta^*_{t-1})$ is unknown, since $\theta^*_{t-1}$ is not observable. We need to estimate $\theta^*_{t-1}$. If our purpose is to compare the out-of-sample predictive abilities among competing density forecast models, we split the data into two parts, one for estimation and the other for out-of-sample validation. At each period $t$ in the out-of-sample period ($t = R + 1, \ldots, T$), we estimate the unknown parameter vector $\theta_{t-1}$ and denote the estimate as $\hat\theta_{t-1}$. Using $\{\hat\theta_{t-1}\}_{t=R+1}^{T}$, we can obtain the out-of-sample estimate of the KLIC by

$$\mathbb{I}_P(\pi : \psi) \equiv \frac{1}{P} \sum_{t=R+1}^{T} \ln\big[\pi_t(y_t) / \psi_t(y_t; \hat\theta_{t-1})\big],$$

where $P = T - R$ is the size of the out-of-sample period. Note that

$$\mathbb{I}_P(\pi : \psi) = \frac{1}{P} \sum_{t=R+1}^{T} \ln\big[\pi_t(y_t) / \psi_t(y_t; \theta^*_{t-1})\big] + \frac{1}{P} \sum_{t=R+1}^{T} \ln\big[\psi_t(y_t; \theta^*_{t-1}) / \psi_t(y_t; \hat\theta_{t-1})\big],$$

where the first term in $\mathbb{I}_P(\pi : \psi)$ measures model uncertainty (the distance between the optimal density $\pi_t(y_t)$ and the model $\psi_t(y_t; \theta^*_{t-1})$), and the second term measures parameter estimation uncertainty due to the distance between $\theta^*_{t-1}$ and $\hat\theta_{t-1}$.

Since the KLIC measure takes on a smaller value when a model is closer to the truth, we can regard it as a loss function and use $\mathbb{I}_P(\pi : \psi)$ to formulate the loss-differential. The out-of-sample average of the loss-differential between model 1 and model 2 is

$$\mathbb{I}_P(\pi : \psi^1) - \mathbb{I}_P(\pi : \psi^2) = \frac{1}{P} \sum_{t=R+1}^{T} \ln\big[\psi^2_t(y_t; \hat\theta^2_{t-1}) / \psi^1_t(y_t; \hat\theta^1_{t-1})\big],$$

which is the log-ratio of the two predictive likelihood functions, in which the unknown true density $\pi_t$ cancels. Treating model 1 as a benchmark model (for model selection) or as the model under the null hypothesis (for hypothesis testing), $\mathbb{I}_P(\pi : \psi^1) - \mathbb{I}_P(\pi : \psi^2)$ can be considered as a loss differential to minimize. To sum up, the KLIC differential can serve as a loss function for density forecast evaluation, as discussed in Yong Bao, Tae-Hwy Lee, and Burak Saltoglu (2007).
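A sketch of the resulting model comparison: since the true density cancels in the differential, two density forecast models can be ranked by their average out-of-sample predictive log-likelihoods. The normal-versus-Student-t setup and the single estimation split below are our illustrative assumptions, not the procedure of Bao, Lee, and Saltoglu (2007):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
data = rng.standard_t(df=5, size=2000)
in_sample, out_sample = data[:1500], data[1500:]

# "Estimation": fit each candidate density model on the in-sample period.
mu, sig = in_sample.mean(), in_sample.std()
df_hat, loc, scale = stats.t.fit(in_sample)

# Out-of-sample average predictive log-likelihoods of the two density forecasts.
ll_normal = stats.norm.logpdf(out_sample, mu, sig).mean()
ll_t = stats.t.logpdf(out_sample, df_hat, loc, scale).mean()

# KLIC differential (model 1 = normal benchmark, model 2 = Student-t):
# a positive value favors the t model.
print(ll_t - ll_normal)
```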

LOSS FUNCTIONS FOR VOLATILITY FORECASTS

Gloria González-Rivera, Tae-Hwy Lee, and Santosh Mishra (2004) analyze the predictive performance of various volatility models for stock returns. To compare the performance, they choose loss functions for which volatility estimation is of paramount importance. They deal with two economic loss functions (an option pricing function and a utility function) and two statistical loss functions (the check loss for a value-at-risk calculation and a predictive likelihood function of the conditional variance).

LOSS FUNCTIONS FOR TESTING GRANGER-CAUSALITY

In time series forecasting, a concept of causality is due to Granger (1969), who defined it in terms of conditional distributions. Tae-Hwy Lee and Weiping Yang (2007) use loss functions to test for Granger-causality in conditional mean, in conditional distribution, and in conditional quantiles. The causal relationship between money and income (output) has been an important topic that has been extensively studied, but almost entirely through Granger-causality in the conditional mean. Compared to the conditional mean, conditional quantiles give a broader picture of a variable in various scenarios. Lee and Yang (2007) explore whether forecasting the conditional quantile of output growth may be improved using money. They compare the check (tick) losses of the quantile forecasts of output growth with and without using past information on money growth, and assess the statistical significance of the loss-differential of the unconditional and conditional predictive abilities. As conditional quantiles can be inverted to the conditional distribution, they also test for Granger-causality in the conditional distribution (using a nonparametric copula function). Using U.S. monthly series of real personal income and industrial production for income, and M1 and M2 for money, for 1959 to 2001, they find that out-of-sample quantile forecasting for output growth, particularly in the tails, is significantly improved by accounting for money. On the other hand, money-income Granger-causality in the conditional mean is quite weak and unstable. Their results have important implications for monetary policy, showing that the effectiveness of monetary policy has been underestimated by testing for Granger-causality in the mean alone. Money-income Granger-causality is stronger than previously understood, and therefore the information in money growth can (and should) be more widely utilized in implementing monetary policy.
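A stylized sketch of the tick-loss comparison underlying such a test: out-of-sample check losses of α-quantile forecasts with and without the candidate predictor are compared. The rolling linear-quantile construction and the toy data-generating process below are ours for illustration, not the Lee-Yang specification:

```python
import numpy as np

def check_loss(e, alpha):
    return (alpha - (e < 0)) * e

rng = np.random.default_rng(6)
T, R, alpha = 400, 200, 0.1
money = rng.normal(size=T)
output = 0.3 * np.roll(money, 1) + rng.normal(size=T)  # output_t depends on money_{t-1}

loss_without, loss_with = [], []
for t in range(R, T - 1):
    window_y = output[t - R + 1:t + 1]
    window_x = money[t - R:t]                  # money lagged one period
    # Model 1: rolling unconditional quantile (no money information).
    q0 = np.quantile(window_y, alpha)
    # Model 2: linear model in lagged money, plus the residual quantile.
    b = np.polyfit(window_x, window_y, 1)
    resid = window_y - np.polyval(b, window_x)
    q1 = np.polyval(b, money[t]) + np.quantile(resid, alpha)
    loss_without.append(check_loss(output[t + 1] - q0, alpha))
    loss_with.append(check_loss(output[t + 1] - q1, alpha))

print(np.mean(loss_without), np.mean(loss_with))  # money lowers the tick loss
```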

SEE ALSO Autoregressive Models; Generalized Least Squares; Least Squares, Ordinary; Logistic Regression; Maximum Likelihood Regression; Optimizing Behavior; Regression; Regression Analysis; Time Series Regression

BIBLIOGRAPHY

Bao, Yong, Tae-Hwy Lee, and Burak Saltoglu. 2007. Comparing Density Forecast Models. Journal of Forecasting 26: 203–225.

Diebold, Francis X., and Glenn D. Rudebusch. 1989. Scoring the Leading Indicators. Journal of Business 62 (3): 369–391.

Diebold, Francis X., Todd A. Gunther, and Anthony S. Tay. 1998. Evaluating Density Forecasts with Applications to Financial Risk Management. International Economic Review 39: 863–883.

Ding, Zhuanxin, Clive W. J. Granger, and Robert F. Engle. 1993. A Long Memory Property of Stock Market Returns and a New Model. Journal of Empirical Finance 1: 83–106.

González-Rivera, Gloria, Tae-Hwy Lee, and Santosh Mishra. 2004. Forecasting Volatility: A Reality Check Based on Option Pricing, Utility Function, Value-at-Risk, and Predictive Likelihood. International Journal of Forecasting 20 (4): 629–645.

González-Rivera, Gloria, Tae-Hwy Lee, and Emre Yoldas. 2007. Optimality of the RiskMetrics VaR Model. Unpublished manuscript, University of California, Riverside.

Granger, Clive W. J. 1969. Investigating Causal Relations by Econometric Models and Cross-Spectral Methods. Econometrica 37: 424–438.

Granger, Clive W. J. 1999. Outline of Forecast Theory Using Generalized Cost Functions. Spanish Economic Review 1: 161–173.

Granger, Clive W. J. 2002. Some Comments on Risk. Journal of Applied Econometrics 17: 447–456.

Granger, Clive W. J., and M. Hashem Pesaran. 2000a. A Decision Theoretic Approach to Forecasting Evaluation. In Statistics and Finance: An Interface, eds. Wai-Sum Chan, Wai Keung Li, and Howell Tong. London: Imperial College Press.

Granger, Clive W. J., and M. Hashem Pesaran. 2000b. Economic and Statistical Measures of Forecast Accuracy. Journal of Forecasting 19: 537–560.

Harter, H. L. 1977. Nonuniqueness of Least Absolute Values Regression. Communications in Statistics – Theory and Methods A6: 829–838.

Hong, Yongmiao, and Tae-Hwy Lee. 2003. Inference on Predictability of Foreign Exchange Rates via Generalized Spectrum and Nonlinear Time Series Models. Review of Economics and Statistics 85 (4): 1048–1062.

Koenker, Roger, and Gilbert Bassett Jr. 1978. Regression Quantiles. Econometrica 46 (1): 33–50.

Kullback, S., and R. A. Leibler. 1951. On Information and Sufficiency. Annals of Mathematical Statistics 22: 79–86.

Lee, Tae-Hwy, and Weiping Yang. 2007. Money-Income Granger-Causality in Quantiles. Unpublished manuscript, University of California, Riverside.

Lee, Tae-Hwy, and Yang Yang. 2006. Bagging Binary and Quantile Predictors for Time Series. Journal of Econometrics 135: 465–497.

Manski, Charles F. 1975. Maximum Score Estimation of the Stochastic Utility Model of Choice. Journal of Econometrics 3 (3): 205–228.

Money, A. H., J. F. Affleck-Graves, M. L. Hart, and G. D. I. Barr. 1982. The Linear Regression Model and the Choice of p. Communications in Statistics – Simulations and Computations 11 (1): 89–109.

Nyquist, Hans. 1983. The Optimal Lp-norm Estimation in Linear Regression Models. Communications in Statistics – Theory and Methods 12: 2511–2524.

Powell, James L. 1986. Censored Regression Quantiles. Journal of Econometrics 32: 143–155.

Sawa, Takamitsu. 1978. Information Criteria for Discriminating among Alternative Regression Models. Econometrica 46: 1273–1291.

Varian, Hal R. 1975. A Bayesian Approach to Real Estate Assessment. In Studies in Bayesian Econometrics and Statistics: In Honor of Leonard J. Savage, eds. Stephen E. Fienberg and Arnold Zellner, 195–208. Amsterdam: North-Holland.

West, Kenneth D. 1996. Asymptotic Inference about Prediction Ability. Econometrica 64: 1067–1084.

White, Halbert. 1994. Estimation, Inference, and Specification Analysis. Cambridge, U.K.: Cambridge University Press.

Zellner, Arnold. 1986. Bayesian Estimation and Prediction Using Asymmetric Loss Functions. Journal of the American Statistical Association 81: 446–451.

Tae-Hwy Lee