# approximation theory


approximation theory A subject concerned with the approximation of a class of objects, say F, by a subclass, say $P \subset F$, that is in some sense simpler. For example, let $F = C[a,b]$, the real continuous functions on $[a,b]$; a subclass of practical use is then $P_n$, the polynomials of degree at most n. The closeness or accuracy of the approximation is measured by a metric or norm. This is a nonnegative function that is defined on F and measures the size of its elements. Norms of particular value in the approximation of mathematical functions (for computer subroutines, say) are the Chebyshev norm and the 2-norm (or Euclidean norm). For functions $f \in C[a,b]$

these norms are given respectively as

$$\|f\|_\infty = \max_{a \le x \le b} |f(x)|, \qquad \|f\|_2 = \left( \int_a^b f(x)^2 \, dx \right)^{1/2}$$

For approximation of data these norms take the discrete form

$$\|f\|_\infty = \max_{1 \le i \le m} |f(x_i)|, \qquad \|f\|_2 = \left( \sum_{i=1}^{m} f(x_i)^2 \right)^{1/2}$$

where $x_1, x_2, \ldots, x_m$ are the data points.
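As a brief sketch (the data values and weights below are arbitrary illustrations, not from the text), the discrete norms, together with a weighted variant of the 2-norm, can be computed directly:

```python
import math

# Discrete norms of a data vector f = (f(x_1), ..., f(x_m)).
f = [1.0, -3.0, 2.0, -0.5]

norm_inf = max(abs(v) for v in f)          # Chebyshev norm: largest |f(x_i)|
norm_2 = math.sqrt(sum(v * v for v in f))  # 2-norm: root of sum of squares

# Weighted 2-norm with (hypothetical) positive weights w_i.
w = [0.25, 0.25, 0.25, 0.25]
norm_2w = math.sqrt(sum(wi * v * v for wi, v in zip(w, f)))
```

With equal weights $w_i = 1/4$, the weighted 2-norm is simply half the unweighted one; in practice the weights reflect the relative reliability of the data.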

The 2-norm frequently incorporates a weight function (or weights). From these two norms the problems of Chebyshev approximation and least squares approximation arise. For example, with polynomial approximation we seek $p_n \in P_n$ for which $\|f - p_n\|_\infty$ or $\|f - p_n\|_2$ is acceptably small. Best approximation problems arise when, for example, we seek $p_n \in P_n$ for which these measures of error are as small as possible with respect to $P_n$.
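As an illustrative sketch (the function, degree, and sample points are arbitrary choices, not from the text), a discrete least squares polynomial approximation can be computed with NumPy:

```python
import numpy as np

# Least squares fit of a cubic p_3 to samples of exp(x) on [0, 1].
# np.polyfit minimizes the discrete 2-norm of the residual f - p_n.
x = np.linspace(0.0, 1.0, 50)   # data points x_1, ..., x_m
f = np.exp(x)                   # data values f(x_i)

coeffs = np.polyfit(x, f, deg=3)   # coefficients, highest power first
p3 = np.poly1d(coeffs)             # evaluable polynomial p_3

residual = f - p3(x)
err_2 = np.sqrt(np.sum(residual**2))  # discrete 2-norm of the error
err_inf = np.max(np.abs(residual))    # discrete Chebyshev norm of the error
```

Minimizing the Chebyshev norm instead (best uniform approximation) requires an iterative exchange method such as the Remez algorithm; least squares is the problem `polyfit` solves directly.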

Other examples of norms that are particularly important are vector and matrix norms. For n-component vectors $x = (x_1, x_2, \ldots, x_n)^T$, important examples are

$$\|x\|_\infty = \max_{1 \le i \le n} |x_i|, \qquad \|x\|_2 = \left( \sum_{i=1}^{n} x_i^2 \right)^{1/2}$$

Corresponding to a given vector norm, a subordinate matrix norm can be defined for n × n matrices A by

$$\|A\| = \max_{x \ne 0} \frac{\|Ax\|}{\|x\|}$$

For the vector norm $\|x\|_\infty$ this reduces to the expression

$$\|A\|_\infty = \max_{1 \le i \le n} \sum_{j=1}^{n} |a_{ij}|$$

where $a_{ij}$ is the $i,j$th element of A. Vector and matrix norms are indispensable in most areas of numerical analysis.
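A small sketch (the matrix and vector are arbitrary examples) checking that the subordinate ∞-norm equals the maximum absolute row sum, and that it bounds the ratio $\|Ax\|_\infty / \|x\|_\infty$ for any nonzero x:

```python
import numpy as np

# A sample symmetric tridiagonal matrix (arbitrary choice).
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

row_sum_norm = np.max(np.sum(np.abs(A), axis=1))  # max absolute row sum
lib_norm = np.linalg.norm(A, ord=np.inf)          # NumPy's matrix inf-norm

# The subordinate norm bounds ||Ax|| / ||x|| for every nonzero x.
x = np.array([1.0, -2.0, 3.0])
ratio = np.linalg.norm(A @ x, ord=np.inf) / np.linalg.norm(x, ord=np.inf)
```

Here both computations give $\|A\|_\infty = 4$, and any particular ratio $\|Ax\|_\infty / \|x\|_\infty$ cannot exceed it; the subordinate norm is the maximum of that ratio over all nonzero x.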