# approximation theory

**approximation theory** A subject that is concerned with the approximation of a class of objects, say *F*, by a subclass, say *P* ⊂ *F*, that is in some sense simpler. For example, let *F* = *C*[*a*,*b*], the real continuous functions on [*a*,*b*]; then a subclass of practical use is *P*_{n}, the polynomials of degree *n*. The means of measuring the closeness or accuracy of the approximation is provided by a metric or *norm*. This is a nonnegative function that is defined on *F* and measures the size of its elements. Norms of particular value in the approximation of mathematical functions (for computer subroutines, say) are the *Chebyshev norm* and the *2-norm* (or *Euclidean norm*). For functions *f* ∈ *C*[*a*,*b*] these norms are given respectively as

$$\|f\|_{\infty} = \max_{a \le x \le b} |f(x)|, \qquad \|f\|_{2} = \left(\int_{a}^{b} f(x)^{2}\,dx\right)^{1/2}$$

For approximation of data these norms take the discrete form

$$\|f\|_{\infty} = \max_{i} |f(x_{i})|, \qquad \|f\|_{2} = \left(\sum_{i} f(x_{i})^{2}\right)^{1/2}$$
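These discrete norms are straightforward to compute. A minimal sketch in Python (the function names are illustrative, not part of the entry; the weighted variant anticipates the weight function mentioned below):

```python
def chebyshev_norm(values):
    """Discrete Chebyshev (infinity) norm: the largest absolute value."""
    return max(abs(v) for v in values)

def two_norm(values, weights=None):
    """Discrete 2-norm; an optional weight per data point may be supplied."""
    if weights is None:
        weights = [1.0] * len(values)
    return sum(w * v * v for w, v in zip(weights, values)) ** 0.5

# Hypothetical errors f(x_i) - p(x_i) at five data points
errors = [0.1, -0.3, 0.2, 0.0, -0.1]
print(chebyshev_norm(errors))  # 0.3
print(two_norm(errors))        # sqrt(0.15), about 0.387
```

The two norms can rank approximations differently: the Chebyshev norm sees only the worst single error, while the 2-norm aggregates all of them.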

The 2-norm frequently incorporates a weight function (or weights). From these two norms the problems of *Chebyshev approximation* and *least squares approximation* arise. For example, with polynomial approximation we seek *p*_{n} ∈ *P*_{n} for which ‖*f* − *p*_{n}‖_{∞} or ‖*f* − *p*_{n}‖_{2} is acceptably small. Best approximation problems arise when, for example, we seek *p*_{n} ∈ *P*_{n} for which these measures of error are as small as possible with respect to *P*_{n}.
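As an illustration of discrete least squares approximation, the following sketch fits a straight line (a polynomial from *P*_{1}) to data by solving the 2×2 normal equations; the function name and the data are hypothetical:

```python
def least_squares_line(xs, ys):
    """Fit p(x) = c0 + c1*x minimizing the discrete 2-norm of f - p,
    by solving the normal equations for the two coefficients."""
    n = len(xs)
    sx = sum(xs)
    sxx = sum(x * x for x in xs)
    sy = sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    # Solve [n   sx ] [c0]   [sy ]
    #       [sx  sxx] [c1] = [sxy]
    det = n * sxx - sx * sx
    c0 = (sy * sxx - sx * sxy) / det
    c1 = (n * sxy - sx * sy) / det
    return c0, c1

# Data lying exactly on y = 1 + 2x is recovered exactly
c0, c1 = least_squares_line([0, 1, 2, 3], [1, 3, 5, 7])
print(c0, c1)  # 1.0 2.0
```

For higher-degree polynomials the same idea leads to larger (and increasingly ill-conditioned) normal-equation systems, which is why library routines use orthogonal polynomials or QR factorization instead.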

Other examples of norms that are particularly important are *vector* and *matrix norms*. For *n*-component vectors **x** = (*x*_{1}, *x*_{2},…, *x*_{n})^{T} important examples are

$$\|\mathbf{x}\|_{\infty} = \max_{1 \le i \le n} |x_{i}|, \qquad \|\mathbf{x}\|_{2} = \left(\sum_{i=1}^{n} x_{i}^{2}\right)^{1/2}$$

Corresponding to a given vector norm, a subordinate matrix norm can be defined for *n* × *n* matrices *A* by

$$\|A\| = \max_{\mathbf{x} \ne \mathbf{0}} \frac{\|A\mathbf{x}\|}{\|\mathbf{x}\|}$$

For the vector norm ‖**x**‖_{∞} this reduces to the expression

$$\|A\|_{\infty} = \max_{1 \le i \le n} \sum_{j=1}^{n} |a_{ij}|$$

where *a*_{ij} is the *i*,*j*th element of *A*. Vector and matrix norms are indispensable in most areas of numerical analysis.
