The Cholesky decomposition factorizes a positive definite matrix A into the product of a lower triangular matrix L and its transpose L’:
A = LL’
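As a quick numerical check of the factorization, one can compute L for a small illustrative matrix with NumPy's linalg.cholesky routine and verify that LL’ reproduces A (the matrix values here are arbitrary):

```python
import numpy as np

# An illustrative 2 x 2 symmetric positive definite matrix.
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

# NumPy returns the lower triangular Cholesky factor.
L = np.linalg.cholesky(A)

# Multiplying L by its transpose reproduces A.
reconstructed = L @ L.T
```

For this particular matrix the factor works out to L = [[2, 0], [1, √2]].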
This decomposition is named after André-Louis Cholesky (1875-1918), a French artillery officer who invented the method in the context of his work in the Geodesic Section of the Army Geographic Service.
The k × k real symmetric matrix A is positive definite if and only if x’Ax > 0 for every nonzero k-vector x. For such matrices, the corresponding Cholesky factor L (sometimes called the matrix square root) always exists and is unique. Matrices of this sort arise in many econometric contexts, making the Cholesky decomposition a very useful computational tool. For example, it can be used to solve the normal equations of least squares to produce coefficient estimates in multiple regression analysis. In this case, the place of A is occupied by the matrix of squares and cross-products of the regressors, X’X.
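As an illustration of this use, the sketch below forms the normal equations from simulated regression data (the sample size, regressors, and coefficient values are invented for the example) and solves them via the Cholesky factor of X’X:

```python
import numpy as np

# Simulated data: n observations, an intercept plus two regressors.
rng = np.random.default_rng(0)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + 0.1 * rng.normal(size=n)

# Normal equations: (X'X) beta = X'y.  With full column rank,
# X'X is symmetric positive definite, so its Cholesky factor exists.
XtX = X.T @ X
Xty = X.T @ y
L = np.linalg.cholesky(XtX)

# Solve L z = X'y (lower triangular), then L' beta = z (upper triangular).
z = np.linalg.solve(L, Xty)
beta_hat = np.linalg.solve(L.T, z)
```

The two triangular solves replace a single general solve against X’X, which is the source of the method's computational savings.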
Given the Cholesky decomposition of A, the set of linear equations Ax = b in the unknown vector x may be written as LL’x = b. Writing y for L’x, we get Ly = b, which is solved for y; then L’x = y is solved for x. Equations of the pattern Mx = b are straightforward to solve when M is triangular, by forward substitution (lower triangular M) or back substitution (upper triangular M).
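The two triangular solves can be written out directly. This is a minimal sketch with an arbitrary illustrative system:

```python
import numpy as np

def forward_substitution(L, b):
    """Solve L y = b for lower triangular L, working down the rows."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_substitution(U, b):
    """Solve U x = b for upper triangular U, working from the last row up."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Solve A x = b via A = L L': first L y = b, then L' x = y.
A = np.array([[4.0, 2.0], [2.0, 3.0]])
b = np.array([1.0, 2.0])
L = np.linalg.cholesky(A)
y = forward_substitution(L, b)
x = back_substitution(L.T, y)
```

Each substitution step uses one division and at most k multiplications, which is why triangular systems are considered trivial to solve.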
Algorithms for computing the decomposition are based on the following relationships between the elements aij of A and the elements lij of L:

lij = (aij − Σk lik ljk) / ljj, for i > j (1)

ljj = √(ajj − Σk ljk²) (2)

where each sum runs over k = 1, …, j − 1. Element lij can be computed once the elements to its left and above are known. The Cholesky-Crout algorithm starts from the upper left corner of L and calculates the matrix column by column.
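A minimal sketch of the Cholesky-Crout algorithm in this column-by-column form might look as follows (illustrative rather than optimized; the function name is invented):

```python
import math

def cholesky_crout(A):
    """Compute the lower triangular Cholesky factor column by column.

    A is a symmetric positive definite matrix given as a list of lists.
    Raises ValueError when a pivot is not positive, i.e. when A is not
    positive definite (or too ill-conditioned in finite precision).
    """
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        # Diagonal entry: square root of a_jj minus the squares of the
        # entries already computed to its left in row j.
        pivot = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
        if pivot <= 0.0:
            raise ValueError("matrix is not positive definite")
        L[j][j] = math.sqrt(pivot)
        # Remaining entries of column j, below the diagonal.
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return L
```

Note that each column only reads entries in columns to its left, so the computation can proceed strictly left to right.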
The beauty of the Cholesky method is that it is numerically stable and accurate (as noted by Turing 1948) while requiring fewer floating-point operations and less workspace (computer memory) than alternative methods. It does, however, have a problem if the matrix A is very ill-conditioned. (In the econometric context mentioned above, this occurs if there is a high degree of collinearity among the variables in the data matrix X.) Computing the decomposition requires that we calculate a sequence of square roots. The values under the square root sign in equation (2) are always positive in exact arithmetic, but for ill-conditioned A they may be very small. Given the rounding error inherent in finite-precision computer arithmetic, these values may go negative—in which case the algorithm cannot continue—or they may simply fall below the magnitude at which rounding error is an acceptable proportion of the computed value. A practical implementation of the Cholesky algorithm for a digital computer must check for this condition and terminate if need be. If the Cholesky method fails, one can resort to the computationally more expensive QR or SVD decomposition methods.
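The failure mode can be demonstrated with a symmetric matrix that is not positive definite (the values are arbitrary): NumPy's Cholesky routine detects the bad pivot and terminates with an error rather than continuing.

```python
import numpy as np

# Symmetric but NOT positive definite: its eigenvalues are 3 and -1,
# so x'Ax is negative for some vectors x.
A_bad = np.array([[1.0, 2.0],
                  [2.0, 1.0]])

failed = False
try:
    np.linalg.cholesky(A_bad)
except np.linalg.LinAlgError:
    # A non-positive value appears under the square root sign, so the
    # routine stops, as a practical implementation must.
    failed = True
```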
Besides solving sets of linear equations, the Cholesky decomposition has a further use in econometrics that deserves mention, namely, decomposing the covariance matrix for a set of residuals in the context of a vector autoregression. This sort of analysis was pioneered by Christopher Sims (1980) and quickly became popular. The role of the decomposition is to permit the simulation of the response of a system to a disturbance in any one of the variables, and also to perform an accounting of the proportions of the forecast error variance attributable to disturbances in each of the variables.
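A sketch of the decomposition's role in this setting, using a hypothetical two-variable residual covariance matrix (the numbers are invented for illustration):

```python
import numpy as np

# Hypothetical residual covariance matrix from a fitted two-variable VAR.
Sigma = np.array([[1.0, 0.4],
                  [0.4, 0.25]])

# Cholesky factor P with Sigma = P P'.  Writing the residuals as u = P e,
# where the shocks e have identity covariance, makes the shocks mutually
# uncorrelated; column j of P gives the impact response of the system to
# a one-standard-deviation shock in variable j.
P = np.linalg.cholesky(Sigma)

# The orthogonalization reproduces the residual covariance exactly.
recovered = P @ P.T
```

Here P = [[1, 0], [0.4, 0.3]], so a one-standard-deviation shock to the first variable moves the second by 0.4 on impact, while a shock to the second leaves the first unchanged on impact; this asymmetry is why the ordering of the variables matters in such exercises.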
SEE ALSO Matrix Algebra
Golub, Gene H., and Charles F. Van Loan. 1996. Matrix Computations. 3rd ed. Baltimore, MD: Johns Hopkins University Press.
O’Connor, John J., and Edmund F. Robertson. 2005. André-Louis Cholesky. School of Mathematics and Statistics, University of St. Andrews: MacTutor History of Mathematics archive. http://www-history.mcs.st-andrews.ac.uk/Biographies/Cholesky.html.
Sims, Christopher. 1980. Macroeconomics and Reality. Econometrica 48 (1): 1-48.
Turing, Alan M. 1948. Rounding-Off Errors in Matrix Processes. Quarterly Journal of Mechanics and Applied Mathematics 1 (1): 287-308.