BIT 0006-3835/00/4001-0084 $15.00
2000, Vol. 40, No. 1, pp. 084–101
© Swets & Zeitlinger
Received August 1998. Communicated by Lothar Reichel.
ADAPTIVE QUADRATURE—REVISITED
WALTER GANDER and WALTER GAUTSCHI
Institut für Wissenschaftliches Rechnen, ETH, CH-8092 Zürich, Switzerland
email: gander@inf.ethz.ch, wxg@cs.purdue.edu
Dedicated to Cleve B. Moler on his 60th birthday
Abstract.
First, the basic principles of adaptive quadrature are reviewed. Adaptive quadrature
programs being recursive by nature, the choice of a good termination criterion is given
particular attention. Two Matlab quadrature programs are presented. The first is
an implementation of the well-known adaptive recursive Simpson rule; the second is
new and is based on a four-point Gauss–Lobatto formula and two successive Kronrod
extensions. Comparative test results are described and attention is drawn to serious
deficiencies in the adaptive routines quad and quad8 provided by Matlab.
AMS subject classification: 65D30, 65D32.
Key words: Adaptive quadrature, Gauss quadrature, Kronrod rules.
1 The basic idea of adaptive quadrature.
Let $[a, b]$ be the interval of integration, assumed to be bounded, and $f$ a real
integrable function. To compute
$$I = \int_a^b f(x)\,dx,$$
one generally proceeds as follows (see also, e.g., [2, Ch. 6]). First one integrates
$f$ using two different numerical integration methods, thus obtaining the approximations
$I_1$ and $I_2$. Typically, one, say $I_1$, is more accurate than the other. If
the relative difference of the two approximations is smaller than some prescribed
tolerance, one accepts $I_1$ as the value of the integral. Otherwise the interval $[a, b]$
is divided, e.g., in two equal parts $[a, m]$ and $[m, b]$, where $m = (a+b)/2$, and
the two respective integrals are computed independently:
$$I = \int_a^m f(x)\,dx + \int_m^b f(x)\,dx.$$
One now again computes recursively two approximations for each integral and,
if necessary, continues to subdivide the smaller intervals.
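To make the scheme concrete, here is a minimal Matlab sketch (ours, not one of the routines developed below); it uses Simpson's rule for the more accurate estimate $I_1$ and the trapezoidal rule for $I_2$, and the function name adaptrec is our own choice. As discussed in Section 2, the naive relative test used here is not sufficient in general.

function Q = adaptrec(f, a, b, tol)
% ADAPTREC  Minimal illustration (ours) of the generic recursive scheme:
% compare two estimates of the integral over [a,b] and bisect at the
% midpoint if they disagree.  f is a function handle for scalar arguments.
m  = (a + b)/2;
i1 = (b - a)/6*(f(a) + 4*f(m) + f(b));   % Simpson's rule (more accurate)
i2 = (b - a)/2*(f(a) + f(b));            % trapezoidal rule (less accurate)
if abs(i1 - i2) < tol*abs(i1)            % naive criterion; see Section 2
    Q = i1;
else                                     % recurse on the two halves
    Q = adaptrec(f, a, m, tol) + adaptrec(f, m, b, tol);
end

For a smooth integrand, a call such as Q = adaptrec(@exp, 0, 1, 1e-8) behaves as expected; Section 2 explains why this criterion alone can fail.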
As is well known, it is impossible to construct a program that is foolproof, i.e.,
that correctly integrates any given function [1]. It is easy to construct a function
f which a given program will not integrate correctly [10]. Our task is therefore
to design an algorithm that works well for as many functions as possible.
M.T. Heath writes in his recent textbook [9]:
“If the integrand is noisy, or if the error tolerance is unrealistically
tight relative to the machine precision, then an adaptive quadrature
routine may be unable to meet the error tolerance and will likely ex-
pend a large number of function evaluations only to return a warning
message that its subdivision limit was exceeded. Such a result should
not be regarded as a fault of the adaptive routine but as a reflection
of the difficulty of the problem or unrealistic expectations on the part
of the user, or both.”
This is true for the two adaptive quadrature functions quad and quad8 provided
by Matlab. However, our routines will show that Heath’s assessment is overly
pessimistic in general—we can do much better than he says.
2 Termination criterion.
If $I_1$ and $I_2$ are two estimates for the integral, a conventional stopping criterion
is
if abs(i1-i2) < tol*abs(i1),   (2.1)
where tol is some prescribed error tolerance. This criterion by itself is not
sufficient. Consider for example $f(x) = \sqrt{x}$, the integration interval $[0,1]$, $I_2$:
Simpson's rule using the step size $h$, $I_1$: Simpson's rule for the step size $h/2$,
and $tol = 10^{-4}$. If implemented in Matlab, this procedure terminates fatally
with the error message Segmentation fault. In this example the two Simpson
values never agree to 4 decimals in the first interval containing 0.
The function quad in Matlab is based on an adaptive recursive Simpson’s rule.
The stopping criterion (2.1) is supplemented by a limitation on the number of
recursion levels. This prevents failure in the example above. However, it is not
clear how many recursion levels should be allowed, and the value LEVMAX = 10
used in quad is often inadequate. A warning message is given if the recursion
level limit is reached. In the case of $f(x) = \sqrt{x}$ and $[a, b] = [0,1]$ we obtain with
the call quad('f',0,1,1e-12) the warning Recursion level limit reached
1024 times, and the value 0.666665790717630 is returned, which is correct to
only 6 digits instead of the requested 12 digits.
We somehow have to terminate the recursion if the magnitude of the partial
integral $I_1$ or $I_2$ is negligible compared to the whole integral (see [4, p. 209],
[3]). Therefore, we have to add the criterion
$$|I_1| \le \eta \left| \int_a^b f(x)\,dx \right|, \qquad (2.2)$$
where $\eta$ is another prescribed tolerance and where we have to use an estimate
for the unknown integral. With both criteria (2.1) and (2.2) used together, and
with some reasonable choices of tol and η, a working algorithm can be obtained.
If, e.g., for the above example we use
if (abs(i1-i2) < 1e-4*abs(i1)) | (abs(i1) < 1e-4)   (2.3)
as stopping criterion, we obtain $I = 0.666617217$ in 41 function evaluations.
The stopping criterion (2.3), however, is still not satisfactory because the user
has to choose tol and η, which depend on the machine and on the problem. A
wrong selection is easily possible.
To improve the criterion, we first need a rough estimate
$$is, \qquad is \neq 0, \qquad (2.4)$$
of the modulus of the integral $I$. The stopping criterion (2.2) would then be
$|I_1| \le \eta \cdot is$. In order to eliminate $\eta$, we stop the recursion machine-independently
by
if is + i1 == is.   (2.5)
In the same spirit we may as well replace the criterion (2.1) by
if is + (i1-i2) == is.   (2.6)
The criterion (2.6) will in general be met before the criterion (2.5), and therefore
we shall require only (2.6). There are cases, e.g., $\int_0^1 1/\sqrt{1-x^2}\,dx$, where, when
ignoring the singularity, the subdivision will continue until an interval contains
no machine number other than the end points. In this case we also need to
terminate the recursion. Thus, our termination criterion is
if (is + (i1-i2) == is) | (m <= a) | (b <= m),   (2.7)
where $m = (a+b)/2$. This, in particular, guarantees termination of the program,
and an explicit limitation on the number of recursion levels is no longer necessary.
Using the stopping criterion (2.7), we attempt to compute the integral to
machine precision. If we wish to compute the integral with less accuracy, say
within the tolerance tol, it suffices to magnify the estimated value is:
is = is*tol/eps,
where eps denotes the machine precision.
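To see the effect of the test in (2.5)-(2.7) and of the scaling is = is*tol/eps in isolation, the following small Matlab fragment (ours, with arbitrary numbers) shows when the floating-point comparison would stop the recursion.

% Illustration (ours) of the machine-independent stopping test: a
% difference i1-i2 that is negligible relative to is leaves is unchanged
% in floating-point arithmetic, so the recursion stops.
is  = 2/3;                       % rough estimate of |I|
tol = 1e-6;  is = is*tol/eps;    % magnify is to request tolerance tol
d   = 1e-12;                     % a possible difference i1 - i2
disp(is + d == is)               % 1 (true): d is negligible, stop recursion
d   = 1e-3;
disp(is + d == is)               % 0 (false): keep subdividing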
3 Adaptive Simpson quadrature.
The idea of adaptive Simpson quadrature is old [12]. However, in order to
obtain good performance, a careful implementation is necessary. The Matlab
function quad compares two successive Simpson values (relative and absolute
difference) and has a limitation on the number of recursion steps. If we compute
$\int_0^1 \sqrt{x}\,dx$ with $tol = 10^{-8}$, we obtain the message “Warning: Recursion level
limit reached in quad. Singularity likely.” The routine returns
0.6666657907152264 (a value correct to only 6 digits) and needs 800 function
evaluations.
First, we propose to use for is a Monte Carlo estimate which also uses the
function values in the middle and at the end points of the interval (those values
are used for Simpson's rule):
$$is = \frac{b-a}{8}\left[ f(a) + f(m) + f(b) + \sum_{i=1}^{5} f(\xi_i) \right]. \qquad (3.1)$$
Here $m = (a+b)/2$ and $\xi = a + [0.9501\;\,0.2311\;\,0.6068\;\,0.4860\;\,0.8913]\,(b-a)$ is
a vector of random numbers in $(a, b)$. If by accident we get $is = 0$, then we use
the value $is = b - a$.
With this choice of is, we adopt the stopping criterion (2.7). Furthermore,
we do not compare successive Simpson values $i1 = S(h)$ and $i2 = S(h/2)$, but
overwrite i1 with one step of Romberg extrapolation:
i1 = (16*i2 - i1)/15.
In order to avoid recomputation of function values, we pass $fa = f(a)$, $fm = f((a+b)/2)$
and $fb = f(b)$ as parameters. In every recursion step, only two
new function evaluations are necessary to compute the approximations i1 and
i2. The following Matlab function adaptsim has the same structure as quad. For
$\int_0^1 \sqrt{x}\,dx$ with $tol = 10^{-8}$ we obtain with adaptsim the value 0.6666666539870345
(correct to almost 8 digits) using only 126 function evaluations.
function Q = adaptsim(f,a,b,tol,trace,varargin)
%ADAPTSIM Numerically evaluate integral using adaptive
% Simpson rule.
%
% Q = ADAPTSIM('F',A,B) approximates the integral of
% F(X) from A to B to machine precision. 'F' is a
% string containing the name of the function. The
% function F must return a vector of output values if
% given a vector of input values.
%
% Q = ADAPTSIM('F',A,B,TOL) integrates to a relative
% error of TOL.
%
% Q = ADAPTSIM('F',A,B,TOL,TRACE) displays the left
% end point of the current interval, the interval
% length, and the partial integral.
%
% Q = ADAPTSIM('F',A,B,TOL,TRACE,P1,P2,...) allows
% coefficients P1,... to be passed directly to the
% function F: G = F(X,P1,P2,...). To use default values
% for TOL or TRACE, one may pass the empty matrix ([]).
%
% See also ADAPTSIMSTP.
%
% Walter Gander, 08/03/98
% Reference: Gander, Computermathematik, Birkhaeuser, 1992.
global termination2
termination2 = 0;
if (nargin < 4), tol = []; end;
if (nargin < 5), trace = []; end;
if (isempty(tol)), tol = eps; end;
if (isempty(trace)), trace = 0; end;
if tol<eps
tol= eps;
end
x = [a (a+b)/2 b];
y = feval(f, x, varargin{:});
fa = y(1); fm = y(2); fb = y(3);
yy = feval(f, a+[.9501 .2311 .6068 .4860 .8913]*(b-a),...
varargin{:});
is = (b - a)/8*(sum(y)+sum(yy));
if is==0, is = b-a; end;
is = is*tol/eps;
Q = adaptsimstp(f,a,b,fa,fm,fb,is,trace,varargin{:});
function Q = adaptsimstp (f,a,b,fa,fm,fb,is,trace,varargin)
%ADAPTSIMSTP Recursive function used by ADAPTSIM.
%
% Q = ADAPTSIMSTP('F',A,B,FA,FM,FB,IS,TRACE) tries to
% approximate the integral of F(X) from A to B to
% an appropriate relative error. The argument 'F' is
% a string containing the name of f. The remaining
% arguments are generated by ADAPTSIM or by recursion.
%
% See also ADAPTSIM.
%
% Walter Gander, 08/03/98
global termination2
m = (a + b)/2; h = (b - a)/4;
x=[a+h,b-h];
y = feval(f, x, varargin{:});
fml = y(1); fmr = y(2);
i1 = h/1.5 * (fa + 4*fm + fb);
i2 = h/3 * (fa + 4*(fml + fmr) + 2*fm + fb);
i1 = (16*i2 - i1)/15;
if (is + (i1-i2) == is) | (m <= a) | (b<=m),
if ((m <= a) | (b<=m)) & (termination2==0);
warning([’Interval contains no more machine number.’,...
’Required tolerance may not be met.’]);
termination2 =1;
end;
Q = i1;
if (trace), disp([a b-a Q]), end;
else
Q = adaptsimstp (f,a,m,fa,fml,fm,is,trace,varargin{:}) + ...
adaptsimstp (f,m,b,fm,fmr,fb,is,trace,varargin{:});
end;
The minimal number of function evaluations is 10, which is attained if the
error test is met in the very first call to adaptsimstp.
Discontinuous functions are integrated quite well by adaptsim. For example,
if we integrate
$$f(x) = \begin{cases} x+1, & x < 1,\\ 3-x, & 1 \le x \le 3,\\ 2, & x > 3, \end{cases} \qquad (3.2)$$
on $[0,5]$ with adaptsim('f',0,5,1e-6), i.e., with $tol = 10^{-6}$, we obtain instead
of the exact value 7.5 the value 7.49996609147638 with 98 function evaluations.
Using quad with the same tolerance $tol = 10^{-6}$, one obtains the value
7.50227769215902 (correct to only 3 digits) with 88 function evaluations. The
difference in performance is due to the different termination criteria and the
artificial limitation to 10 recursion levels used in quad.
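For reference, the integrand (3.2) can be supplied to adaptsim as a vectorized Matlab function along the following lines; the file name f32 is our own choice.

function y = f32(x)
% F32  The piecewise integrand (3.2).  Accepts a vector of abscissas,
% as required by ADAPTSIM and ADAPTLOB.
y = zeros(size(x));
i = x < 1;             y(i) = x(i) + 1;
i = x >= 1 & x <= 3;   y(i) = 3 - x(i);
i = x > 3;             y(i) = 2;

The call in the text then reads Q = adaptsim('f32',0,5,1e-6).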
4 Adaptive Lobatto quadrature.
4.1 The basic quadrature rule.
As basic quadrature rule we use the Gauss–Lobatto rule with two (symmetric)
interior nodes. On the canonical interval $[-1,1]$, the two interior nodes are the
zeros of $\pi_2(x)$, where
$$\int_{-1}^{1} (1-x^2)\,\pi_2(x)\,p(x)\,dx = 0 \quad \text{for all } p \in \mathbb{P}_1.$$
Thus, up to a constant factor, $\pi_2$ is the Jacobi polynomial $P_2^{(\alpha,\beta)}$ of degree 2
corresponding to parameters $\alpha = \beta = 1$. Since $P_2^{(1,1)}(x) = \mathrm{const}\cdot(x^2 - \tfrac15)$, the
interior nodes are $x_{\pm 1} = \pm\tfrac{1}{\sqrt5}$. By symmetry, the formula has the form
$$\int_{-1}^{1} f(x)\,dx = a\,[f(-1) + f(1)] + b\left[f\Bigl(-\tfrac{1}{\sqrt5}\Bigr) + f\Bigl(\tfrac{1}{\sqrt5}\Bigr)\right] + R_{GL}(f),$$
where
$$R_{GL}(f) = 0 \quad \text{for } f \in \mathbb{P}_5.$$
Exactness for $f(x) = 1$ and $f(x) = x^2$ yields
$$2a + 2b = 2, \qquad 2a + \tfrac25\,b = \tfrac23, \qquad \text{hence } a = \tfrac16,\ b = \tfrac56.$$
Thus, the basic quadrature rule on $[-1,1]$ is
$$\int_{-1}^{1} f(x)\,dx = \tfrac16\,[f(-1) + f(1)] + \tfrac56\left[f\Bigl(-\tfrac{1}{\sqrt5}\Bigr) + f\Bigl(\tfrac{1}{\sqrt5}\Bigr)\right] + R_{GL}(f). \qquad (4.1)$$
We note that using Maple one can compute the basic quadrature rule directly
by means of the ansatz $a\,(f(-1) + f(1)) + b\,(f(-x_1) + f(x_1))$ and requiring that
it be exact for $f(x) = 1$, $x^2$ and $x^4$:
u1 := 2*a +2*b:
u2 := 2*a + 2*b*x1^2:
u3 := 2*a + 2*b*x1^4:
solve({u1=2, u2=2/3, u3=2/5},{a,b,x1});
The result is:
$$x1 = \mathrm{RootOf}(5\,\_Z^2 - 1), \qquad a = 1/6, \qquad b = 5/6,$$
in agreement with (4.1).
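The rule (4.1) is also easily checked numerically; the following Matlab fragment (ours) verifies exactness for the monomials $1, x, \ldots, x^5$ on $[-1,1]$.

% Numerical check (ours) of the 4-point Gauss-Lobatto rule (4.1):
% exact for all monomials of degree <= 5 on [-1,1].
x = [-1, -1/sqrt(5), 1/sqrt(5), 1];        % nodes
w = [1/6,       5/6,       5/6, 1/6];      % weights
for k = 0:5
    exact = (1 - (-1)^(k+1))/(k+1);        % integral of x^k over [-1,1]
    fprintf('degree %d: error %.1e\n', k, abs(w*(x.^k)' - exact));
end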
4.2 Kronrod extension of the Gauss–Lobatto formula.
To estimate the error of (4.1) we construct the Kronrod extension of (4.1). (For
background information and history on Gauss–Kronrod extensions see, e.g., [7].)
By a well-known theorem on quadrature rules of maximum algebraic degree of
exactness (cf. [8, Theorem 3.2.1]), the three Kronrod points are the zeros of
$\pi_3^*(x)$, a (monic) polynomial of degree 3 satisfying
$$\int_{-1}^{1} (1-x^2)\,\pi_2(x)\,\pi_3^*(x)\,p(x)\,dx = 0 \quad \text{for all } p \in \mathbb{P}_2.$$
Here, $\pi_2(x) = x^2 - \tfrac15 =: \pi_1(x^2)$, and by symmetry
$$\pi_3^*(x) = x\,\pi_1^*(x^2)$$
for some $\pi_1^* \in \mathbb{P}_1$. It suffices, therefore, to choose $\pi_1^*(x^2)$ such that
$$\int_{-1}^{1} (1-x^2)\,\pi_1(x^2)\,\pi_1^*(x^2)\,x^2\,dx = 0.$$
Putting $x^2 = t$ yields
$$\int_0^1 (1-t)\,\pi_1(t)\,\pi_1^*(t)\,t^{1/2}\,dt = 0,$$
and, with $\pi_1^*(t) = t - c$, we obtain
$$\int_0^1 (1-t)\bigl(t - \tfrac15\bigr)(t - c)\,t^{1/2}\,dt = 0,$$
that is,
$$c \int_0^1 t^{1/2}\bigl(t^2 - \tfrac65\,t + \tfrac15\bigr)\,dt = \int_0^1 t^{3/2}\bigl(t^2 - \tfrac65\,t + \tfrac15\bigr)\,dt,$$
giving $32c = \tfrac{64}{3}$, or $c = \tfrac23$. The three Kronrod points, therefore, are
$$x_{\pm 1}^* = \pm\sqrt{\tfrac23}, \qquad x_0^* = 0.$$
By symmetry, the Kronrod extension has the form
$$\begin{aligned}
\int_{-1}^{1} f(x)\,dx ={}& A\,[f(-1)+f(1)] + B\left[f\Bigl(-\sqrt{\tfrac23}\Bigr) + f\Bigl(\sqrt{\tfrac23}\Bigr)\right]\\
&+ C\left[f\Bigl(-\tfrac{1}{\sqrt5}\Bigr) + f\Bigl(\tfrac{1}{\sqrt5}\Bigr)\right] + D\,f(0) + R_{GLK}(f),
\end{aligned}$$
where
$$R_{GLK}(f) = 0 \quad \text{for } f \in \mathbb{P}_9.$$
Exactness for $f(x) = 1$, $x^2$, $x^4$, $x^6$ yields
$$\begin{aligned}
2A + 2B + 2C + D &= 2,\\
2A + 2\cdot\tfrac23\,B + 2\cdot\tfrac15\,C &= \tfrac23,\\
2A + 2\cdot\tfrac49\,B + 2\cdot\tfrac{1}{25}\,C &= \tfrac25,\\
2A + 2\cdot\tfrac{8}{27}\,B + 2\cdot\tfrac{1}{125}\,C &= \tfrac27.
\end{aligned}$$
Gauss elimination gives
$$A = \tfrac{11}{210}, \qquad B = \tfrac{72}{245}, \qquad C = \tfrac{125}{294}, \qquad D = \tfrac{16}{35}.$$
Thus,
$$\begin{aligned}
\int_{-1}^{1} f(x)\,dx ={}& \tfrac{11}{210}\,[f(-1)+f(1)] + \tfrac{72}{245}\left[f\Bigl(-\sqrt{\tfrac23}\Bigr) + f\Bigl(\sqrt{\tfrac23}\Bigr)\right]\\
&+ \tfrac{125}{294}\left[f\Bigl(-\tfrac{1}{\sqrt5}\Bigr) + f\Bigl(\tfrac{1}{\sqrt5}\Bigr)\right] + \tfrac{16}{35}\,f(0) + R_{GLK}(f). \qquad (4.2)
\end{aligned}$$
Again, we can compute this extension directly, using Maple and the ansatz
$$A\,[f(-1) + f(1)] + B\,[f(-x_1) + f(x_1)] + C\,[f(-1/\sqrt5) + f(1/\sqrt5)] + D\,f(0),$$
requiring exactness for $f(x) = 1$, $x^2$, $x^4$, $x^6$ and $x^8$:
x2 := 1/sqrt(5);
u1 := 2*A + 2*B +2*C +D = 2;
u2 := 2*A + 2*B*x1^2 + 2*C*x2^2 = 2/3;
u3 := 2*A + 2*B*x1^4 + 2*C*x2^4 = 2/5;
u4 := 2*A + 2*B*x1^6 + 2*C*x2^6 = 2/7;
u5 := 2*A + 2*B*x1^8 + 2*C*x2^8 = 2/9;
solve({u1,u2,u3,u4,u5},{A,B,C,D,x1});
The result, as above, is:
$$B = \tfrac{72}{245}, \qquad A = \tfrac{11}{210}, \qquad D = \tfrac{16}{35}, \qquad C = \tfrac{125}{294}, \qquad x1 = \mathrm{RootOf}(3\,\_Z^2 - 2).$$
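An analogous numerical check (ours) confirms that the extension (4.2) integrates all monomials up to degree 9 exactly.

% Numerical check (ours) of the 7-point Kronrod extension (4.2):
% exact for all monomials of degree <= 9 on [-1,1].
x = [-1, -sqrt(2/3), -1/sqrt(5), 0, 1/sqrt(5), sqrt(2/3), 1];
w = [11/210, 72/245, 125/294, 16/35, 125/294, 72/245, 11/210];
for k = 0:9
    exact = (1 - (-1)^(k+1))/(k+1);        % integral of x^k over [-1,1]
    fprintf('degree %d: error %.1e\n', k, abs(w*(x.^k)' - exact));
end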
4.3 Kronrod extension of (4.2).
It will be desirable to estimate how much more accurate (4.2) is compared
to (4.1). We try to estimate the respective errors by constructing a Kronrod
extension of (4.2), hoping that one exists with real nodes and positive weights.
There will be six symmetrically located Kronrod points $\pm x_1, \pm x_2, \pm x_3$, which,
it is hoped, interlace with the nodes of (4.2). Again, on the basis of [8, Theorem
3.2.1], with $n = 13$ and
$$\omega_n(x) = (x^2-1)\bigl(x^2 - \tfrac23\bigr)\bigl(x^2 - \tfrac15\bigr)\,x\,\pi_6^*(x), \qquad \pi_6^*(x) = (x^2 - x_1^2)(x^2 - x_2^2)(x^2 - x_3^2),$$
the 13-point quadrature rule to be constructed will have degree of exactness
$d = 12 + k$ provided that $\pi_6^*$ is chosen to satisfy the “orthogonality” condition
$$\int_{-1}^{1} (x^2-1)\bigl(x^2 - \tfrac23\bigr)\bigl(x^2 - \tfrac15\bigr)\,x\,\pi_6^*(x)\,p(x)\,dx = 0 \quad \text{for all } p \in \mathbb{P}_{k-1}.$$
The optimal value of $k$ is $k = 6$, yielding a formula of degree $d = 18$ (actually,
$d = 19$ because of symmetry). If we let
$$\alpha_i = x_i^2, \qquad i = 1, 2, 3,$$
and make the substitution $x^2 = t$, $\pi_6^*(x) = \pi_3^*(x^2)$, the orthogonality relation
becomes
$$\int_0^1 (t-1)\bigl(t - \tfrac23\bigr)\bigl(t - \tfrac15\bigr)\,\sqrt{t}\,\pi_3^*(t)\,p(t)\,dt = 0 \quad \text{for all } p \in \mathbb{P}_2,$$
where
$$\pi_3^*(t) = (t - \alpha_1)(t - \alpha_2)(t - \alpha_3) = t^3 - a\,t^2 + b\,t - c.$$
Putting $p(t) = 1, t, t^2$ in this relation, one finds, after some tedious calculations,
that the coefficients $a, b, c$ must satisfy
$$\begin{aligned}
30a - 13b &= 35,\\
595a - 510b + 221c &= 588,\\
11172a - 11305b + 9690c &= 10395.
\end{aligned}$$
Solving for $a, b, c$ gives
$$a = \tfrac{37975}{27987}, \qquad b = \tfrac{4095}{9329}, \qquad c = \tfrac{9737}{475779},$$
and then solving the cubic equation $\pi_3^*(t) = 0$ yields, with the help of Maple, to
38 decimal digits,
$$\begin{aligned}
\alpha_1 &= .88902724982774341965844097377815423496,\\
\alpha_2 &= .41197571308045073755318461761021278774,\\
\alpha_3 &= .055877017082515815275600620781569019026,
\end{aligned}$$
and hence
$$\begin{aligned}
x_1 &= \sqrt{\alpha_1} = .94288241569547971905635175843185720232,\\
x_2 &= \sqrt{\alpha_2} = .64185334234578130578123554132903188354,\\
x_3 &= \sqrt{\alpha_3} = .23638319966214988028222377349205292599.
\end{aligned}$$
It is a fortunate circumstance that the $\alpha_i$ turn out to be all positive, hence also
the $x_i$, and moreover the $\pm x_i$ interlace with the nodes of (4.2).
Alternatively, Maple can be used to compute the zeros of $\pi_3^*$ directly as follows:
restart;
Digits :=40;
pis := t->(t-a1)*(t-a2)*(t-a3);
sols := solve({int((t-1)*(t-2/3)*(t-1/5)*sqrt(t)*pis(t),t=0..1)=0,
int((t-1)*(t-2/3)*(t-1/5)*sqrt(t)*pis(t)*t,t=0..1)=0,
int((t-1)*(t-2/3)*(t-1/5)*sqrt(t)*pis(t)*t^2,t=0..1)=0},
{a1,a2,a3}):
evalf(sols);
The desired Kronrod extension has the form
$$\begin{aligned}
\int_{-1}^{1} f(x)\,dx ={}& A\,[f(-1)+f(1)] + B\,[f(-x_1)+f(x_1)] + C\left[f\Bigl(-\sqrt{\tfrac23}\Bigr) + f\Bigl(\sqrt{\tfrac23}\Bigr)\right]\\
&+ D\,[f(-x_2)+f(x_2)] + E\left[f\Bigl(-\tfrac{1}{\sqrt5}\Bigr) + f\Bigl(\tfrac{1}{\sqrt5}\Bigr)\right]\\
&+ F\,[f(-x_3)+f(x_3)] + G\,f(0) + R_{GLKK}(f), \qquad R_{GLKK}(\mathbb{P}_{19}) = 0. \qquad (4.3)
\end{aligned}$$
Exactness for the first seven powers of $x^2$ yields, after division by 2, the system
$$\begin{aligned}
A + B + C + D + E + F + \tfrac12\,G &= 1,\\
A + \alpha_1 B + \tfrac23\,C + \alpha_2 D + \tfrac15\,E + \alpha_3 F &= \tfrac13,\\
A + \alpha_1^2 B + \tfrac49\,C + \alpha_2^2 D + \tfrac{1}{25}\,E + \alpha_3^2 F &= \tfrac15,\\
A + \alpha_1^3 B + \tfrac{8}{27}\,C + \alpha_2^3 D + \tfrac{1}{125}\,E + \alpha_3^3 F &= \tfrac17,\\
A + \alpha_1^4 B + \tfrac{16}{81}\,C + \alpha_2^4 D + \tfrac{1}{625}\,E + \alpha_3^4 F &= \tfrac19,\\
A + \alpha_1^5 B + \tfrac{32}{243}\,C + \alpha_2^5 D + \tfrac{1}{3125}\,E + \alpha_3^5 F &= \tfrac{1}{11},\\
A + \alpha_1^6 B + \tfrac{64}{729}\,C + \alpha_2^6 D + \tfrac{1}{15625}\,E + \alpha_3^6 F &= \tfrac{1}{13}.
\end{aligned}$$
The solution is, to 38 digits,
A=.015827191973480183087169986733305510591,
B=.094273840218850045531282505077108171960,
C=.15507198733658539625363597980210298680,
D=.18882157396018245442000533937297167125,
E=.19977340522685852679206802206648840246,
F=.22492646533333952701601768799639508076,
G=.24261107190140773379964095790325635233.
By good fortune, it consists of entirely positive entries.
Note that even here we can use Maple to obtain the result by “brute force”.
Using the ansatz (4.3) with unknown knots $a_1$, $a_2$ and $a_3$, and requiring that it
be exact for the monomials $1, x^2, x^4, \ldots, x^{18}$, we obtain 10 nonlinear equations
in 10 unknowns:
x1:=sqrt(2/3); x2:=sqrt(1/5);
u1:=2*A+2*B+2*C+2*D+2*E+2*F+G = 2;
u2:=2*A+2*B*a1^2+2*C*x1^2+2*D*a2^2+2*E*x2^2+2*F*a3^2 = 2/3;
u3:=2*A+2*B*a1^4+2*C*x1^4+2*D*a2^4+2*E*x2^4+2*F*a3^4 = 2/5;
u4:=2*A+2*B*a1^6+2*C*x1^6+2*D*a2^6+2*E*x2^6+2*F*a3^6 = 2/7;
u5:=2*A+2*B*a1^8+2*C*x1^8+2*D*a2^8+2*E*x2^8+2*F*a3^8 = 2/9;
u6:=2*A+2*B*a1^10+2*C*x1^10+2*D*a2^10+2*E*x2^10+2*F*a3^10 = 2/11;
u7:=2*A+2*B*a1^12+2*C*x1^12+2*D*a2^12+2*E*x2^12+2*F*a3^12 = 2/13;
u8:=2*A+2*B*a1^14+2*C*x1^14+2*D*a2^14+2*E*x2^14+2*F*a3^14 = 2/15;
u9:=2*A+2*B*a1^16+2*C*x1^16+2*D*a2^16+2*E*x2^16+2*F*a3^16 = 2/17;
u10:=2*A+2*B*a1^18+2*C*x1^18+2*D*a2^18+2*E*x2^18+2*F*a3^18 = 2/19;
sols:=solve({u1,u2,u3,u4,u5,u6,u7,u8,u9,u10},
{A,B,C,D,E,F,G,a1,a2,a3});
Maple solves this system in 7 minutes on a SUN Sparcstation 20/514
(50 MHz SuperSparc processor) and gives a solution containing very complicated
expressions (several pages long). However, evaluating the expressions as floating
point numbers (Digits:=15; evalf(sols);) yields (rounded to 10 digits)
{a1 = -.2363831997, a2 = -.6418533423, E = .1997734052,
a3 = -.9428824157, D = .1888215742, F = .09427384020,
G = .2426110719, B = .2249264653, C = .1550719873,
A = .01582719197},
a permutation of the solution given above.
4.4 The adaptive procedure.
For an arbitrary interval $[a, b]$, the formulae (4.1) and (4.2) can be written
respectively as
$$\int_a^b f(x)\,dx \approx \frac{h}{6}\,\bigl\{f(a) + f(b) + 5\,[f(m - \beta h) + f(m + \beta h)]\bigr\} \qquad (4.4)$$
and
$$\begin{aligned}
\int_a^b f(x)\,dx \approx \frac{h}{1470}\,\bigl\{&77\,[f(a) + f(b)] + 432\,[f(m - \alpha h) + f(m + \alpha h)]\\
&+ 625\,[f(m - \beta h) + f(m + \beta h)] + 672\,f(m)\bigr\}, \qquad (4.5)
\end{aligned}$$
where
$$h = \tfrac12\,(b-a), \qquad m = \tfrac12\,(a+b), \qquad \alpha = \sqrt{\tfrac23}, \qquad \beta = \tfrac{1}{\sqrt5}.$$
A similar reformulation holds for (4.3).
The adaptive Lobatto procedure is similar to the adaptive Simpson procedure
of Section 3, with the second Kronrod extension (i.e., the formula (4.3) relative
to the initial interval [a, b]) providing the estimate is, and (4.4) and (4.5) playing
the roles of i2 and i1, respectively. There are three additional features, however:
(i) If the ratio $\rho$ of the error of (4.5) and the error of (4.4), as determined for
the initial interval $[a, b]$ by comparison with is, is less than 1, then the
basic error tolerance tol is relaxed to tol$/\rho$, since we always accept the
more accurate approximation (4.5). (A similar relaxation of the tolerance
has already been suggested by Lyness in [12, Modification 1].)
(ii) At each recursive level, the current interval $[a, b]$ is subdivided into six
subintervals when the error tolerance is not met, namely the intervals
$[a, m-\alpha h]$, $[m-\alpha h, m-\beta h]$, $[m-\beta h, m]$, $[m, m+\beta h]$, $[m+\beta h, m+\alpha h]$,
and $[m+\alpha h, b]$ determined by (4.4) and (4.5). In this way, all function
values computed are being reused.
(iii) Consistent with (ii), the termination criterion (2.7) is modified by replacing
the last two conditions by $m - \alpha h \le a$ and $b \le m + \alpha h$, respectively.
The adaptive Lobatto procedure requires five new values of f to be computed
at each level of the recursion.
4.5 Matlab code.
The adaptive Lobatto procedure is implemented by the recursive Matlab pro-
gram below.
function Q=adaptlob(f,a,b,tol,trace,varargin)
%ADAPTLOB Numerically evaluate integral using adaptive
% Lobatto rule.
%
% Q=ADAPTLOB('F',A,B) approximates the integral of
% F(X) from A to B to machine precision. 'F' is a
% string containing the name of the function. The
% function F must return a vector of output values if
% given a vector of input values.
%
% Q=ADAPTLOB('F',A,B,TOL) integrates to a relative
% error of TOL.
%
% Q=ADAPTLOB('F',A,B,TOL,TRACE) displays the left
% end point of the current interval, the interval
% length, and the partial integral.
%
% Q=ADAPTLOB('F',A,B,TOL,TRACE,P1,P2,...) allows
% coefficients P1,... to be passed directly to the
% function F: G=F(X,P1,P2,...). To use default values
% for TOL or TRACE, one may pass the empty matrix ([]).
%
% See also ADAPTLOBSTP.
% Walter Gautschi, 08/03/98
% Reference: Gander, Computermathematik, Birkhaeuser, 1992.
global termination2
termination2 = 0;
if(nargin<4), tol=[]; end;
if(nargin<5), trace=[]; end;
if(isempty(tol)), tol=eps; end;
if(isempty(trace)), trace=0; end;
if tol < eps
tol = eps;
end
m=(a+b)/2; h=(b-a)/2;
alpha=sqrt(2/3); beta=1/sqrt(5);
x1=.942882415695480; x2=.641853342345781;
x3=.236383199662150;
x=[a,m-x1*h,m-alpha*h,m-x2*h,m-beta*h,m-x3*h,m,m+x3*h,...
m+beta*h,m+x2*h,m+alpha*h,m+x1*h,b];
y=feval(f,x,varargin{:});
fa=y(1); fb=y(13);
i2=(h/6)*(y(1)+y(13)+5*(y(5)+y(9)));
i1=(h/1470)*(77*(y(1)+y(13))+432*(y(3)+y(11))+ ...
625*(y(5)+y(9))+672*y(7));
is=h*(.0158271919734802*(y(1)+y(13))+.0942738402188500 ...
*(y(2)+y(12))+.155071987336585*(y(3)+y(11))+ ...
.188821573960182*(y(4)+y(10))+.199773405226859 ...
*(y(5)+y(9))+.224926465333340*(y(6)+y(8))...
+.242611071901408*y(7));
s=sign(is); if(s==0), s=1; end;
erri1=abs(i1-is);
erri2=abs(i2-is);
R=1; if(erri2~=0), R=erri1/erri2; end;
if(R>0 & R<1), tol=tol/R; end;
is=s*abs(is)*tol/eps;
if(is==0), is=b-a; end;
Q=adaptlobstp(f,a,b,fa,fb,is,trace,varargin{:});
function Q=adaptlobstp(f,a,b,fa,fb,is,trace,varargin)
%ADAPTLOBSTP Recursive function used by ADAPTLOB.
%
% Q = ADAPTLOBSTP('F',A,B,FA,FB,IS,TRACE) tries to
% approximate the integral of F(X) from A to B to
% an appropriate relative error. The argument 'F' is
% a string containing the name of f. The remaining
% arguments are generated by ADAPTLOB or by recursion.
%
% See also ADAPTLOB.
% Walter Gautschi, 08/03/98
global termination2
h=(b-a)/2; m=(a+b)/2;
alpha=sqrt(2/3); beta=1/sqrt(5);
mll=m-alpha*h; ml=m-beta*h; mr=m+beta*h; mrr=m+alpha*h;
x=[mll,ml,m,mr,mrr];
y=feval(f,x,varargin{:});
fmll=y(1); fml=y(2); fm=y(3); fmr=y(4); fmrr=y(5);
i2=(h/6)*(fa+fb+5*(fml+fmr));
i1=(h/1470)*(77*(fa+fb)+432*(fmll+fmrr)+625*(fml+fmr) ...
+672*fm);
if(is+(i1-i2)==is) | (mll<=a) | (b<=mrr),
if ((m <= a) | (b<=m)) & (termination2==0);
warning([’Interval contains no more machine number. ’,...
’Required tolerance may not be met.’]);
termination2 =1;
end;
Q=i1;
if(trace), disp([a b-a Q]), end;
else
Q=adaptlobstp(f,a,mll,fa,fmll,is,trace,varargin{:})+...
adaptlobstp(f,mll,ml,fmll,fml,is,trace,varargin{:})+...
adaptlobstp(f,ml,m,fml,fm,is,trace,varargin{:})+...
adaptlobstp(f,m,mr,fm,fmr,is,trace,varargin{:})+...
adaptlobstp(f,mr,mrr,fmr,fmrr,is,trace,varargin{:})+...
adaptlobstp(f,mrr,b,fmrr,fb,is,trace,varargin{:});
end;
The minimal number of function evaluations is 18 and occurs if the error test
is met in the very first call to adaptlobstp. This can be expected only in
cases where $f$ is very regular on $[a, b]$ and the tolerance tol is not too stringent.
Discontinuities of $f$ in the interior of $[a, b]$, on the other hand, cannot be expected
to be handled efficiently by our routine; but the routine has been observed to
cope rather efficiently with other difficult behavior, as long as $f$ remains bounded
on the interval $[a, b]$ and smooth in its interior.
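A typical call mirrors the adaptsim examples above; in the sketch below the integrand, Matlab's built-in demo function humps (which accepts vector arguments), is our own choice and serves only to illustrate the interface.

% Example calls (ours): integrate Matlab's built-in humps function on [0,1].
Q1 = adaptlob('humps', 0, 1);          % to machine precision
Q2 = adaptlob('humps', 0, 1, 1e-6);    % to a relative tolerance of 1e-6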
5 Test results.
In comparing adaptive quadrature routines, one must take into account a
number of characteristics, of which the more important ones are:
(i) efficiency, as measured by the number of function evaluations required to
meet a given error tolerance;
(ii) reliability, the extent to which the requested error tolerance is achieved;
and
(iii) tolerance responsiveness, the extent to which the efficiency is sensitive to
changes in the error tolerance.
We will try to convey these characteristics graphically by displaying a histogram
over four tolerances: tol = eps (the machine precision¹), tol = $10^{-9}$, tol
= $10^{-6}$, and tol = $10^{-3}$, the height of each of the four bars in the histogram
indicating the number of function evaluations in a logarithmic scale. A bar that
is completely white signifies that the requested tolerance has been attained; a
shaded bar means that the result produced has a relative error that exceeds the
tolerance by a factor larger than 1 but less than or equal to 10. A black bar
indicates a discrepancy by a factor larger than 10. Thus, a white bar identi-
fies a routine that is reliable for the tolerance in question, a shaded bar one
that is slightly unreliable, and a black bar one that might be severely unreliable.
The tolerance responsiveness can be seen from how rapidly the histogram falls off
with decreasing tolerance. A histogram that is flat (or partially flat) at relatively
high numbers of function evaluations indicates poor tolerance responsiveness.
We compared our routines adaptsim and adaptlob with the worst and best
routines in the IMSL library (DQDAG,DQDAGS), the worst and best routines of the
NAG library (D01AHF,D01AJF,D01AKF),andwiththeroutinesquad and quad8
from Matlab. The results are displayed in the 23 histograms of Table 2 in [5,
pp. 17–20], of which the first 21 refer to Kahaner’s collection of test functions
[11] and the last two to functions taken from [6]. Here, in Figure 5.1 we present
four typical examples.

[Figure 5.1: Comparison with other adaptive quadrature routines. Each of the four panels plots the number of function calls (logarithmic scale, 10^0 to 10^4) of adaptsim, adaptlob, IMSL−, IMSL+, NAG−, NAG+, quad, and quad8 for one test integrand on [0,1]: f(x) = 1 if x >= 0.3, else 0; f(x) = 1/(1+x); f(x) = x/(exp(x)−1) for x > 0, f(0) = 0; and f(x) = log(x) if x > 1e−15, else 0.]

¹The choice tol = eps makes our routines, especially adaptsim, work much harder than necessary, without yielding any noticeable gain in accuracy compared to, say, tol = 10·eps.

The tests were conducted on three different machines
with four different versions of Matlab: SGI O2 R5000 (Matlab 5.0 and 5.1), HP
A 9000/770 (Matlab 5.0), and SUN SPARCstation 20 (Matlab 5.0, 5.1, 5.2,
and 5.2.1). The results were nearly identical. (The only significant difference
was observed in connection with function #22, for which the routine adaptsim
with tol = eps—and only for this tolerance—on the machine SGI O2 returns
a totally false answer with the minimum number 10 of function evaluations,
whereas on the SUN SPARCstation it integrates the function correctly in 49926
function calls.) The graphics shown is based on the results obtained on the
SUN SPARCstation with Matlab version 5.0. The tests of the NAG and IMSL
routines were carried out in fortran on the HP/Convex Exemplar SPP2000/X-32
machine.
The following observations can be made.
In terms of efficiency, the routine adaptlob performs distinctly better than
adaptsim when the accuracy requirement is high. For machine precision
eps it outperforms adaptsim in all but one example, and often significantly
so. (The one exception is the discontinuous function #2, for which, how-
ever, adaptsim is slightly unreliable.) For the accuracy tolerance $10^{-9}$, it
does so in about half the cases. For lower tolerances, adaptsim is generally
(but not always) more efficient than adaptlob, but less reliable.
Compared with the other routines, those of the IMSL and NAG libraries
are the most serious competitors. The best of them performs distinctly
better than our routines in about one-third of the cases.
In terms of reliability, the routine adaptlob is by far the best, exhibiting
only one serious failure out of the 4 ·23 = 92 individual runs. It is followed
by the IMSL and NAG library routines, which failed 6 or 7 times. The
routines quad and quad8 are by far the least reliable, having seriously failed
in 30 resp. 15 cases. It is perhaps of interest to note that the second half
of the termination criterion (2.7) for adaptsim and the analogous one for
adaptlob has never been invoked in any of the 23 test cases. As already
observed in Section 2, there are cases, however, for example the function
$f(x) = 1/\sqrt{1-x^2}$ for $0 \le x < 1$ and $f(1) = 0$, where for tol = eps that
part of the stopping criterion is indeed activated, both in adaptsim and
adaptlob. Also for the example (3.2) and tol = eps, one of our routines,
adaptlob (but not the other), terminates in this manner.
Both of our routines show excellent response to changes in the tolerance, in
contrast to some of the other routines, where the response is more sluggish.
In view of these (admittedly limited) test results it would appear that the rou-
tines adaptsim and adaptlob are worthy contenders for inclusion in software
libraries.
Acknowledgments.
The authors are indebted to Leonhard Jaschke for carrying out the extensive
testing and for providing the graphical representation of the results. The second
author is grateful to the first for the kind hospitality accorded him during his
stays at the ETH.
REFERENCES
1. Carl de Boor, On writing an automatic integration algorithm, in Mathematical Software, John R. Rice, ed., Academic Press, New York, 1971, pp. 201–209.
2. Philip J. Davis and Philip Rabinowitz, Methods of Numerical Integration, 2nd ed., Academic Press, Orlando, 1984.
3. Walter Gander, A simple adaptive quadrature algorithm, Research Report No. 83-03, Seminar für Angewandte Mathematik, ETH, Zürich, 1993.
4. Walter Gander, Computermathematik, Birkhäuser, Basel, 1992.
5. Walter Gander and Walter Gautschi, Adaptive quadrature—revisited, Research Report #306, Institut für Wissenschaftliches Rechnen, ETH, Zürich, 1998.
6. S. Garribba, L. Quartapelle, and G. Reina, Algorithm 36—SNIFF: Efficient self-
tuning algorithm for numerical integration, Computing, 20 (1978), pp. 363–375.
7. Walter Gautschi, Gauss–Kronrod quadrature—a survey, in Numerical Methods and Approximation Theory III, G. V. Milovanović, ed., Faculty of Electronic Engineering, University of Niš, Niš, 1988, pp. 39–66.
8. Walter Gautschi, Numerical Analysis: An Introduction, Birkhäuser, Boston, 1997.
9. Michael T. Heath, Scientific Computing, McGraw-Hill, New York, 1997.
10. William M. Kahan, Handheld calculator evaluates integrals, Hewlett-Packard Jour-
nal 31:8 (1980), pp. 23–32.
11. David K. Kahaner, Comparison of numerical quadrature formulas, in Mathematical Software, John R. Rice, ed., Academic Press, New York, 1971, pp. 229–259.
12. J. N. Lyness, Notes on the adaptive Simpson quadrature routine, J. Assoc. Comput.
Mach., 16 (1969), pp. 483–495.