Linear systems are systems of equations in which the variables are never multiplied with each other but only with constants and then summed up. Linear systems are used to describe both static and dynamic relations between variables.
In the case of the description of static relations, systems of linear algebraic equations describe invariants between variables such as:
a11x1 + a12x2 = c1
a21x1 + a22x2 = c2
Here, one would be interested in the values of x1 and x2 for which both equations hold. This system of equations can easily be written in matrix form:

( a11  a12 ) ( x1 )   ( c1 )
( a21  a22 ) ( x2 ) = ( c2 )

or, more concisely:
Ax = c
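As an illustration, such a system can be solved numerically; the following is a minimal sketch with hypothetical coefficients, assuming NumPy is available:

```python
import numpy as np

# Hypothetical coefficients for the system above:
#   2*x1 + 1*x2 = 5
#   1*x1 + 3*x2 = 10
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
c = np.array([5.0, 10.0])

# Solve A x = c; np.linalg.solve uses an LU factorization rather than
# forming the inverse of A explicitly, which is numerically preferable.
x = np.linalg.solve(A, c)  # x1 = 1, x2 = 3
```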
The solution can be written as x = A-1 c if the matrix A is invertible. Another frequent application of systems of linear algebraic equations is the following:
yi = b0 + b1xi1 + b2xi2 + b3xi3 + … + bmxim + ei
The above is a regression equation stating that for every object i, its attribute Y has a value that can be expressed as the weighted sum of its attributes X1 through Xm, with a measurement error of E (whose variance should be minimal in classical regression analysis). The term linear derives from the fact that the graphical representation of the above equation for m = 1 is a straight line (with intercept b0 and slope b1). The above equation holds for all i, hence the system of equations:
y1 = b0 + b1x11 + b2x12 + b3x13 + … + bmx1m + e1
y2 = b0 + b1x21 + b2x22 + b3x23 + … + bmx2m + e2
⋮
yn = b0 + b1xn1 + b2xn2 + b3xn3 + … + bmxnm + en
for all n objects is often written in the abbreviated form, using matrices and vectors,
y = Xb + e
where y and e are column vectors containing all yi and ei, respectively (i = 1 … n), b is a column vector containing all bj (j = 0 … m), and X is an n × (m + 1) matrix (with n rows and m + 1 columns) containing xij in the cell in row i and column j (where xi0 = 1 for all i). Here one is interested in the values of the regression coefficients bj (j = 0 … m) that minimize the variance of the regression residual E. This is solved by calculating eT e, which is the sum of the squares of the still unknown ei, and calculating the derivatives of eT e with respect to all (also unknown) bj. These derivatives are 0 where eT e is minimal, and the solution of this minimization problem is expressed as follows:

b = (XT X)-1 XT y
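The least-squares solution b = (XT X)-1 XT y can be illustrated with a small numerical sketch (hypothetical, noise-free data; assuming NumPy):

```python
import numpy as np

# Hypothetical data: n = 5 observations of one regressor (m = 1),
# generated without noise from y = 2 + 3*x so the fit is exact.
x1 = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 + 3.0 * x1

# Design matrix X with a leading column of ones (x_i0 = 1 for all i).
X = np.column_stack([np.ones_like(x1), x1])

# Normal equations: (X^T X) b = X^T y, i.e. b = (X^T X)^-1 X^T y.
b = np.linalg.solve(X.T @ X, X.T @ y)  # b0 = 2, b1 = 3
```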
Linear systems are also used to describe dynamic relationships between variables. An early standard example from political science is English physicist Lewis Fry Richardson’s (1881-1953) model of arms races, which consists of the following simplifying hypotheses:
- The higher the armament expenses of one military block, the faster the increase of the other block’s armament expenses (as the latter wants to adapt to the threat as quickly as it can).
- The higher the armament expenses of a military block, the slower those expenses will increase (as it becomes more difficult to increase the proportion of military expenses with respect to the gross national product).
Calling the armament expenses of the two blocks x1 and x2, respectively, and their increase rates ẋ1 and ẋ2, respectively, one can model the increase rates as proportional both to the military expenses of the other block and to the nonmilitary expenses of one's own block (the total expenses x1max and x2max being constant), with some proportionality constants a, b, m, and n:

ẋ1 = ax2 + m(x1max − x1)
ẋ2 = bx1 + n(x2max − x2)

or, in shorter form:

q̇ = Aq + g

where q = (x1, x2)T, A is the coefficient matrix ((−m, a), (b, −n)), and g = (mx1max, nx2max)T.
Such linear systems of differential equations usually have a closed-form solution; that is, there is a vector-valued function q(t) that fulfills this vector-valued differential equation. Usually the precise form of q(t) is not very interesting, but generally speaking it has the form:
q(t) = θ1q1e^(λ1t) + θ2q2e^(λ2t) + qs
where q1, q2, and qs are constant vectors and θ1, θ2, λ1, and λ2 are constant numbers, of which mainly qs, λ1, and λ2 are of special interest. qs is the stationary state of the system of differential equations; that is, once the system has acquired this state, it will never leave it, as the derivatives with respect to time vanish. λ1 and λ2 are the so-called eigenvalues of the matrix A, which, as the exponents of the two exponential functions in the above equation, determine whether the function q(t) will grow beyond all limits over time or whether the first two terms on the right-hand side of the above equation will vanish as time approaches infinity. If both λ1 and λ2 are negative, the latter happens, and the overall function approaches its stationary state (in which case the stationary state is called stable, an attractor, or a sink). If both eigenvalues are positive, the function grows beyond all limits (in which case the stationary state is called unstable, a repellor, or a source). If one of the λs is positive while the other is negative, the stationary state is also unstable, but it is called a saddle point because the function first approaches the stationary state and then moves away from it. There is a special case in which λ1 and λ2 are purely imaginary numbers, which happens if 4ab < 0 and m = n = 0, as λ1,2 = (−(m + n) ± √((m − n)² + 4ab))/2. In this case both variables x1 and x2 oscillate around the stationary state (which is then called a center). In the arms race model this is not a reasonable assumption: exactly one of a and b would have to be negative, so that one block would increase its arms expenses faster the smaller the arms expenses of the other block are (while the other block would behave normally), and m = n = 0 would mean that the current value of a block's arms expenses has no influence upon its own change.
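The role of the eigenvalues can be checked numerically; the following is a minimal sketch with hypothetical constants a = b = 1, m = n = 2 and a coefficient matrix of the assumed form A = ((−m, a), (b, −n)), using NumPy:

```python
import numpy as np

# Illustrative (hypothetical) Richardson constants: reaction rates a, b
# and damping rates m, n, all positive.
a, b, m, n = 1.0, 1.0, 2.0, 2.0

# Coefficient matrix of the linear arms race model, assuming the
# form dq/dt = A q + const with A = ((-m, a), (b, -n)).
A = np.array([[-m, a],
              [b, -n]])

eigenvalues = np.linalg.eigvals(A)  # here: -1 and -3

# Negative real parts for both eigenvalues -> stable stationary state.
stable = bool(np.all(eigenvalues.real < 0))
```

With both eigenvalues negative the stationary state is an attractor; choosing, say, a = 1, b = −1, m = n = 0 instead yields the purely imaginary pair ±i and the oscillating (center) case.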
The example demonstrates that systems of linear differential equations always have a closed-form solution, which can be expressed in several different forms. There is always exactly one stationary state (except when the matrix A is not invertible), which can be a sink, a source, a saddle, or a center. Nonlinear systems often have more than one stationary state, but their behavior can be analyzed in a similar way, taking into account that a linear approximation of a nonlinear system behaves approximately the same way in a small neighborhood of each of its stationary states.
SEE ALSO Input-Output Matrix; Matrix Algebra; Nonlinear Systems; Simultaneous Equation Bias; Vectors
Arnold, Vladimir I. 1973. Ordinary Differential Equations. Cambridge, MA: MIT Press.
Richardson, Lewis Fry. 2003. Mathematics of War and Foreign Politics. In The World of Mathematics, vol. 2, ed. James R. Newman. Mineola, NY: Dover Publications.
Robinson, Derek J. S. 1991. A Course in Linear Algebra with Applications. River Edge, NJ: World Scientific.
Klaus G. Troitzsch