Matrix Algebra


Real numbers can be used to convey one-dimensional information, such as a family's total expenditure in a month. However, if one wants to record the monthly expenditures of two families (indexed by 1, 2) on three items (food, entertainment, and health, indexed by 1, 2, 3), then one needs to use a rectangular array of real numbers, or a matrix. A matrix (A) is defined as a rectangular array of numbers, parameters, or variables. The members of the array are referred to as the elements of the matrix and are usually enclosed in brackets, parentheses, or double vertical lines. For the two families, the expenditure matrix is:

A = | a11  a12  a13 |
    | a21  a22  a23 |

The first row of matrix A provides information on the first family's expenditures on food, entertainment, and health, and the second row gives similar information for the second family. For instance, a23 is the expenditure of the second family on health, and a12 is the first family's expenditure on entertainment. The matrix is a concept from linear algebra, and it has wide applications in many fields, including economics, statistics, computer programming, operations research, industrial organization, and engineering.

As with numbers, elementary operations such as addition and multiplication can be performed on matrices. A typical element of matrix A is written as aij, the element located in row i and column j of the matrix. For instance, a13 is the element in the first row and third column, or the first family's expenditure on health in the first month. Matrix A is also written as {aij}2×3, which simply means that A is a matrix with two rows and three columns whose typical element is aij. The expenditures of the two families in the second month on food, entertainment, and health can be written as another matrix, B = {bij}2×3. Then the sum of the two months' expenditures for both families on all three items can be written as a third matrix C, the sum of the corresponding elements of A and B; it is defined as:

C = {aij + bij}2×3 = A + B

Let us now assume that in the third month the two families spend exactly the same amount of money on every item as they spent in the first month. In other words, the matrix of expenditure in the third month is the same as the matrix of expenditure in the first month. Then we can write a matrix D, which will tell us the total expenditure of the two families in three months on each of the three items as:

D = {2aij + bij}2×3 = 2A + B

The product 2A is an example of scalar multiplication of a matrix, in which every element of A is multiplied by the scalar 2.
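As a concrete illustration, here is a minimal NumPy sketch of matrix addition and scalar multiplication for the two-family example; all expenditure figures are invented for the illustration.

```python
import numpy as np

# Hypothetical expenditures (rows: families 1-2; columns: food,
# entertainment, health). The figures are made up for illustration.
A = np.array([[400.0, 120.0, 80.0],   # month 1
              [350.0, 200.0, 60.0]])
B = np.array([[420.0, 100.0, 90.0],   # month 2
              [360.0, 180.0, 75.0]])

C = A + B       # element-wise sum: two months' spending per family and item
D = 2 * A + B   # month 3 repeats month 1, so the three-month total is 2A + B

print(C)
print(D)
```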

Multiplication of two matrices can be illustrated with the input-output models used in producer theory in economics. Let us suppose that there are two firms, each producing three goods (indexed by 1, 2, 3) with two factors of production (indexed by 1, 2). Let A = {aij}2×3 be the matrix of input-output coefficients; thus, aij is the amount of factor i needed to produce one unit of good j. The first firm wants to produce quantities X1, X2, X3 and the second firm quantities Y1, Y2, Y3 of the three goods. Let us define a matrix of these outputs, with one column per firm, as:

Z = | X1  Y1 |
    | X2  Y2 |
    | X3  Y3 |

Then the matrix multiplication of A and Z can be written as:

AZ = | a11X1 + a12X2 + a13X3   a11Y1 + a12Y2 + a13Y3 |
     | a21X1 + a22X2 + a23X3   a21Y1 + a22Y2 + a23Y3 |

It may be noticed that A is a 2×3 and Z is a 3×2 matrix; their product is a 2×2 matrix. The general rule is that if A is m×n and Z is n×q, then their product is a matrix of order m×q. The number of columns of A must equal the number of rows of Z; if this condition is not satisfied, matrix multiplication is not possible. The meaning of the product matrix AZ is very simple. Suppose labor and capital are the two factors of production. The first column of AZ gives the total quantities of labor and capital used by the first firm to produce all three goods, and the second column of AZ gives the amounts of the two factors used by the second firm to produce these three goods.
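To make the computation concrete, here is a minimal NumPy sketch of this input-output product; the coefficients and output levels are invented for illustration.

```python
import numpy as np

# Hypothetical input-output coefficients: A[i, j] is the amount of
# factor i (labor, capital) needed to produce one unit of good j.
A = np.array([[2.0, 1.0, 3.0],
              [1.0, 4.0, 2.0]])

# Output matrix Z: column 1 holds firm 1's outputs (X1, X2, X3),
# column 2 holds firm 2's outputs (Y1, Y2, Y3).
Z = np.array([[10.0, 8.0],
              [ 5.0, 6.0],
              [ 4.0, 7.0]])

AZ = A @ Z    # (2x3) @ (3x2) -> (2x2)
print(AZ)     # column 1: factor use of firm 1; column 2: factor use of firm 2
```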

The transpose of a matrix A, written AT, is obtained by interchanging the rows and columns of A: the ith row of A becomes the ith column of AT. For instance, the transpose of the 2×3 expenditure matrix A is:

AT = | a11  a21 |
     | a12  a22 |
     | a13  a23 |

A square matrix is a matrix with an equal number of rows and columns. The equivalent of the number one is an identity matrix I, which is a square matrix with 1s on its principal diagonal and 0s elsewhere. The following identity matrix is of dimension 3×3:

I = | 1  0  0 |
    | 0  1  0 |
    | 0  0  1 |

Obviously, IT = I, and AI = IA = A for any matrix A for which these products are defined. A null matrix (written as O) is a matrix whose elements are all zero.

Matrix operations satisfy certain mathematical laws:

  1. A + (B+C) = (A+B) + C (associative law for addition)
  2. A(BC) = (AB)C (associative law for multiplication)
  3. A + B = B + A (commutative law)
  4. c(A+B) = cA + cB, where c is a scalar (distributive law for scalar products)
  5. C(A+B) = CA + CB, provided the products are defined (distributive law for matrix multiplication)
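These laws can be checked numerically. The sketch below draws three arbitrary 3×3 matrices and verifies each of the five identities with NumPy; the dimensions and the scalar are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.integers(-5, 5, size=(3, 3)).astype(float) for _ in range(3))
c = 2.5  # an arbitrary scalar

print(np.array_equal(A + (B + C), (A + B) + C))    # 1. associative (addition)
print(np.allclose(A @ (B @ C), (A @ B) @ C))       # 2. associative (multiplication)
print(np.array_equal(A + B, B + A))                # 3. commutative (addition)
print(np.array_equal(c * (A + B), c * A + c * B))  # 4. distributive (scalar)
print(np.allclose(C @ (A + B), C @ A + C @ B))     # 5. distributive (matrix)
```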

A crucial application of matrix algebra is solving a system of linear simultaneous equations of the form AX = B, where A is a matrix of order m×n whose elements are real numbers, X is an n×1 matrix of variables whose values have to be solved for, and B is an m×1 matrix of the right-hand-side constant terms of the m linear equations. An example of these equations for the case m = n = 3 follows:

X + Y + Z = 4

3X + Y - Z = 6

X + Y - 2Z = 4

In the above system of equations,

A = | 1  1   1 |      X = | X |      B = | 4 |
    | 3  1  -1 |          | Y |          | 6 |
    | 1  1  -2 |          | Z |          | 4 |

The matrix-algebra counterpart of division for real numbers is matrix inversion. Only a square matrix can be inverted. If A is a square matrix of order n×n, then its inverse, A-1, is a matrix such that AA-1 = A-1A = I. If there is a system of linear simultaneous equations AX = B, where A is n×n, X is n×1, and B is n×1, then the solution is X = A-1B, provided A-1 exists.
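The following NumPy sketch solves the 3×3 system above both ways: directly with a linear solver and via the inverse, as X = A-1B. With the equations as reconstructed here, the solution works out to X = 1, Y = 3, Z = 0.

```python
import numpy as np

# Coefficient matrix and right-hand side of the system above.
A = np.array([[1.0, 1.0,  1.0],
              [3.0, 1.0, -1.0],
              [1.0, 1.0, -2.0]])
B = np.array([4.0, 6.0, 4.0])

x = np.linalg.solve(A, B)             # numerically preferred
x_via_inverse = np.linalg.inv(A) @ B  # the literal X = A^-1 B from the text
print(x)                              # [1. 3. 0.]
print(x_via_inverse)
```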

SEE ALSO Cholesky Decomposition; Hessian Matrix; Input-Output Matrix; Inverse Matrix; Jacobian Matrix; Linear Systems; Mathematical Economics; Programming, Linear and Nonlinear; Vectors

BIBLIOGRAPHY

Chiang, Alpha C., and Kevin Wainwright. 2005. Fundamental Methods of Mathematical Economics. 4th ed. Boston: McGraw-Hill/Irwin.

Monica Das

Identity Matrix


The identity matrix In is an n × n matrix with 1s along the main diagonal and 0s in the off-diagonal elements. It can be written as In = diag(1, 1, …, 1). For instance, for n = 3, the matrix looks like

I3 = | 1  0  0 |
     | 0  1  0 |
     | 0  0  1 |

The columns of the identity matrix are known as the unit vectors. For the above example, these are e1 = (1 0 0)′, e2 = (0 1 0)′, and e3 = (0 0 1)′. If the dimension of the matrix is 1 × 1, the matrix reduces to the scalar 1.

The identity matrix has the following properties:

  1. It is square, that is, it has the same number of rows and columns.
  2. It is symmetric, that is, transposing rows with columns (or vice versa) yields the matrix itself, that is, I′ = I, where I′ is the transpose of I.
  3. It is idempotent, that is, I² = I; in the scalar case this is equivalent to 1² = 1.
  4. For any n × n matrix A, multiplication by the identity matrix delivers the matrix A itself, that is, AI = A; in the scalar case this is equivalent to a × 1 = a.
  5. It has the commutative property, that is, for any n × n matrix A, AI = IA = A; in the scalar case, this is equivalent to a × 1 = 1 × a = a.
  6. For any nonsingular n × n matrix A, there exists a matrix A-1 such that AA-1 = A-1A = I, where A-1 is called the inverse matrix of A. In the scalar case, this property is equivalent to the inverse operation of multiplication (division), that is, a × (1/a) = (1/a) × a = 1 for any a ≠ 0.
  7. It has full rank; the n columns (or the n rows) of the matrix are linearly independent vectors, and consequently the determinant is different from zero. The only symmetric, idempotent, full-rank matrix is the identity matrix.
  8. Because I is a diagonal matrix, its determinant is equal to the product of the elements on the main diagonal, which is equal to 1 regardless of the dimension of the matrix. The identity matrix is also a positive definite matrix, since all of its leading principal minors equal 1. The trace of the identity matrix is tr(In) = n, the sum of the elements on the main diagonal.
  9. The n eigenvectors of the identity matrix are the unit vectors, and all n eigenvalues are equal to 1.
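A short NumPy check of several of these properties for n = 3; this is only a numerical illustration of the list above.

```python
import numpy as np

I3 = np.eye(3)  # the 3x3 identity matrix

print(np.array_equal(I3, I3.T))     # symmetric: I' = I
print(np.array_equal(I3 @ I3, I3))  # idempotent: I^2 = I
A = np.arange(9.0).reshape(3, 3)    # an arbitrary 3x3 matrix
print(np.array_equal(A @ I3, A))    # AI = A
print(np.trace(I3))                 # trace = n = 3
print(np.linalg.det(I3))            # determinant = 1
print(np.linalg.matrix_rank(I3))    # full rank: 3
print(np.linalg.eigvals(I3))        # all eigenvalues equal to 1
```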

Matrix algebra is a fundamental tool for the econometric analysis of general regression models. Classical estimation methodologies such as Ordinary Least Squares (OLS), Nonlinear Least Squares, Generalized Least Squares, Maximum Likelihood, and the Method of Moments rely on matrix algebra to derive their estimators and their properties in an elegant and compact format. The identity matrix shows up in several technical proofs. For instance, in the OLS regression of y on X with a sample of size n, the identity matrix is an integral part of the residual-maker matrix M = In - X(X′X)-1X′, the companion of the projection matrix P = X(X′X)-1X′. The projection matrix is important because when P is applied to a vector such as y, the result is the vector of fitted values of y from the regression, that is, ŷ = Py.
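As an illustration, the sketch below builds the projection and residual-maker matrices for a small simulated OLS regression; the sample size, regressors, and coefficients are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # intercept + 2 regressors
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=n)     # simulated outcome

P = X @ np.linalg.inv(X.T @ X) @ X.T  # projection ("hat") matrix
M = np.eye(n) - P                     # residual maker, built from the identity matrix

y_hat = P @ y                         # fitted values
residuals = M @ y
print(np.allclose(y_hat + residuals, y))  # True: the two parts recompose y
```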

BIBLIOGRAPHY

Strang, Gilbert. 1980. Linear Algebra and Its Applications. 2nd ed. New York: Academic Press.

Gloria González-Rivera

Matrices



A matrix (plural: matrices) is a rectangular array of numbers. Matrices arise naturally in describing a special class of functions called linear transformations, but the concept of matrices originated in the work of the two mathematicians Arthur Cayley and James Sylvester on solving systems of linear equations. In 1858, Cayley published A Memoir on the Theory of Matrices.

A matrix can be seen as a collection of rows of numbers. Each number is called an element, or entry, of the matrix. An illustrative example of a matrix, C, is below:

C = | 1  2  3 |
    | 4  5  6 |
    | 7  8  9 |

The order of the numbers within the row as well as the order of the rows are important. A matrix can be described by rows and columns. C has 3 rows and 3 columns, and hence it is a 3 × 3 matrix. A 2 × 3 matrix has 2 rows and 3 columns and a 4 × 2 matrix has 4 rows and 2 columns.

The size or dimension of a matrix is the number of rows and the number of columns, written in that order, and in the format m × n, read "m by n." If n = m, which means that the number of rows equals the number of columns, then the matrix is called a square matrix.

Symbolically, the elements of the first row of a matrix are a11, a12, a13, …; the elements of the second row are a21, a22, a23, …; and so on. The first digit in the subscript indicates the row number and the second digit indicates the column number. Therefore, the element aij is located in row i and column j.

Addition and Subtraction

Addition and subtraction are defined for matrices. To add or subtract two matrices, they must have the same dimension. Two matrices are added or subtracted by adding or subtracting the corresponding elements of each matrix. A matrix can also be multiplied by a real number: if C is a matrix and k a real number, then the matrix kC is formed by multiplying each element of C by k.

How can two matrices be multiplied? A useful definition of matrix multiplication involves a unique and unusual technique. To multiply two matrices A and B, the number of columns of the first matrix A has to equal the number of rows of the second matrix B. Let A be an m × n matrix, and B an n × p matrix. The product AB is a matrix C of dimension m × p. The first element c11 of the matrix C is obtained by multiplying the elements of the first row of A with the corresponding elements of the first column of B and summing the products.

The second element in the first row, c12, is obtained by multiplying the first row of A by the second column of B in the same way. Similarly, multiplying row i of A with column j of B produces the element cij of matrix C: cij = ai1b1j + ai2b2j + … + ainbnj.

Matrix multiplication is not commutative: that is, AB is not always equal to BA. The number 1 has a special property in arithmetic: every number multiplied by 1 remains unchanged, hence 1 is called the multiplicative identity. Is there a special matrix that, when multiplied by any matrix A, leaves A unchanged? If A is a square matrix, then there exists an identity matrix I such that AI = IA = A.
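A brief NumPy check of non-commutativity, using two arbitrary 2 × 2 matrices chosen for the illustration:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

print(A @ B)                         # [[2 1] [4 3]]
print(B @ A)                         # [[3 4] [1 2]]
print(np.array_equal(A @ B, B @ A))  # False: AB != BA
```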

If A is a 3 × 3 square matrix, then I is given by:

I = | 1  0  0 |
    | 0  1  0 |
    | 0  0  1 |

In an identity matrix, all elements are 0 except the diagonal elements, which are all equal to 1. In arithmetic, every nonzero number has a multiplicative inverse, such that multiplying the number and its multiplicative inverse always produces 1, the identity. Does every square matrix B have an inverse B-1? The answer is no. Only a special class of square matrices have an inverse, such that BB-1 = I = B-1B: these are the square matrices with a nonzero determinant.
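A short NumPy illustration of this last point: a matrix with zero determinant has no inverse, while one with nonzero determinant does. The example matrices are arbitrary.

```python
import numpy as np

B = np.array([[2.0, 4.0],
              [1.0, 2.0]])   # second row is half the first, so det(B) = 0

print(np.linalg.det(B))      # 0.0: B is singular and has no inverse
try:
    np.linalg.inv(B)
except np.linalg.LinAlgError as err:
    print("inversion fails:", err)

D = np.array([[2.0, 1.0],
              [1.0, 1.0]])   # det(D) = 1, so D is invertible
print(np.linalg.inv(D) @ D)  # recovers the identity matrix
```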

Rafiq Ladhani

Bibliography

Dugopolski, Mark. Elementary Algebra, 3rd ed. Boston: McGraw-Hill, 2000.

Miller, Charles D., Vern E. Heeren, and E. John Hornsby, Jr. Mathematical Ideas, 9th ed. Boston: Addison-Wesley, 2001.

matrices


ma·tri·ces / ˈmātrəˌsēz/ • plural form of matrix.

identity matrix


identity matrix (unit matrix) A diagonal matrix, symbol I, with each diagonal element equal to one.