# numerical integration


numerical integration (quadrature) The problem of finding the numerical value for a definite integral. The underlying approximation behind most methods is the replacement of a function f(x) by an interpolation polynomial, based on a set of points x1, x2,…,xn. This leads to integration rules of the form ∫ab w(x)f(x)dx ≈ w1f(x1) + w2f(x2) + … + wnf(xn)

in which the wi are called weights.

The standard problem has a, b finite and w(x) ≡ 1. For this case the rules with equally spaced points xi are called Newton–Cotes rules; well-known examples are the trapezium rule and Simpson's rule. Most program libraries implement the more powerful Gaussian rules, in which the points xi are chosen to maximize the degree of precision. This is achieved by choosing the xi as the zeros of the Legendre polynomials, which are the orthogonal polynomials with respect to w(x) ≡ 1 on the interval [–1, 1]. Another important idea is the extrapolation method due to Romberg, based on the trapezium rule.
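As a sketch of these two families of rules, the following uses NumPy's leggauss to obtain the Gauss–Legendre nodes and weights and compares it with a composite Simpson's rule; the integrand and interval are illustrative, not from the original text:

```python
import numpy as np

# n-point Gauss-Legendre rule: the x_i are the zeros of the degree-n
# Legendre polynomial, giving degree of precision 2n - 1 on [-1, 1].
def gauss_legendre(f, a, b, n):
    x, w = np.polynomial.legendre.leggauss(n)  # nodes and weights on [-1, 1]
    t = 0.5 * (b - a) * x + 0.5 * (b + a)      # map nodes to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(t))

# Composite Simpson's rule (a Newton-Cotes rule with equally spaced points).
def simpson(f, a, b, n):  # n subintervals, n must be even
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    return h / 3 * (f(x[0]) + 4 * f(x[1:-1:2]).sum()
                    + 2 * f(x[2:-1:2]).sum() + f(x[-1]))

exact = np.e - 1.0  # integral of e^x over [0, 1]
```

For a smooth integrand such as this, a 5-point Gaussian rule is already accurate to near machine precision, illustrating why libraries favour Gaussian rules over Newton–Cotes rules of comparable cost.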

For infinite-range problems Gaussian rules can also be defined in terms of suitable orthogonal polynomials. A useful case is w(x) = e–x, a = 0, b = ∞, where the appropriate orthogonal polynomials determining the xi are the Laguerre polynomials.
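A minimal sketch of such a Gauss–Laguerre rule, assuming NumPy's laggauss; the integrand x² is an illustrative choice:

```python
import numpy as np

# 4-point Gauss-Laguerre rule: the x_i are the zeros of the degree-4
# Laguerre polynomial, and the rule approximates
#   integral over [0, inf) of e^(-x) f(x) dx  ~  sum of w_i * f(x_i).
x, w = np.polynomial.laguerre.laggauss(4)

# Example: integral of e^(-x) * x^2 over [0, inf) equals 2.
# A 4-point Gaussian rule has degree of precision 7, so a degree-2
# integrand is integrated exactly up to rounding.
approx = np.sum(w * x**2)
```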

In practice the interval of integration is subdivided and the chosen rule applied to each subinterval, together with a companion rule to provide an error estimate (see error analysis). By then subdividing the interval where the error is largest, a greater concentration of effort is placed where the integrand is most difficult. This is known as adaptive quadrature. Such nonuniform distribution of effort, adapted to the particular problem, is essential for the efficient solution of all practical problems.
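One common realisation of this idea is adaptive Simpson quadrature, sketched below; here the companion error estimate comes from comparing Simpson's rule on an interval with its refinement on the two halves (the tolerance-halving strategy is one simple choice, not the only one):

```python
import math

def simpson_rule(f, a, b):
    # Simpson's rule on a single interval [a, b].
    m = 0.5 * (a + b)
    return (b - a) / 6.0 * (f(a) + 4.0 * f(m) + f(b))

def adaptive_simpson(f, a, b, tol=1e-9):
    # Compare the rule on [a, b] with its two-half refinement; the
    # discrepancy is roughly 15 times the error of the refined value.
    m = 0.5 * (a + b)
    whole = simpson_rule(f, a, b)
    halves = simpson_rule(f, a, m) + simpson_rule(f, m, b)
    if abs(halves - whole) < 15.0 * tol:
        return halves + (halves - whole) / 15.0  # extrapolated correction
    # Subdivide wherever the estimated error is still too large,
    # concentrating effort where the integrand is most difficult.
    return (adaptive_simpson(f, a, m, tol / 2.0)
            + adaptive_simpson(f, m, b, tol / 2.0))
```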

Multiple integrals over a large number of dimensions may be treated by Monte Carlo methods, involving the use of randomly generated evaluation points.
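A sketch of the Monte Carlo approach over the unit cube; the dimension, sample count, and integrand below are illustrative:

```python
import numpy as np

def monte_carlo(f, dim, n, seed=0):
    # Average f at n uniformly random points in the unit cube [0, 1]^dim.
    # The statistical error decays like 1/sqrt(n) regardless of dimension,
    # which is why the method pays off for high-dimensional integrals.
    rng = np.random.default_rng(seed)
    points = rng.random((n, dim))
    return f(points).mean()

# Example: integral of x1 + ... + x5 over [0, 1]^5 is 5/2.
estimate = monte_carlo(lambda p: p.sum(axis=1), dim=5, n=100_000)
```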