
TR/61 April 1976

THE ACCURATE NUMERICAL INVERSION OF

LAPLACE TRANSFORMS

by

A. Talbot


The accurate numerical inversion of Laplace transforms

A. TALBOT

Mathematics Department, Brunel University, Uxbridge

Abstract

Inversion of almost arbitrary Laplace transforms is

effected by trapezoidal integration along a special

contour. The number n of points to be used is one

of several parameters, in most cases yielding absolute

errors of order 10^-7 for n = 10, 10^-11 for n = 20,
10^-23 for n = 40 (with double precision working), and

so on, for all values of the argument from 0+ up to

some large maximum.

The extreme accuracy of which the method is capable

means that it has many possible applications of

various kinds, and some of these are indicated.


1. Introduction

The inversion of Laplace transforms is a topic of fundamental

importance in many areas of applied mathematics, as would be

evident by a glance at, for example, Carslaw and Jaeger (1948).

In the more standard applications the inversion can be accomplished

by the use of a dictionary of transforms, or in the case of

rational function transforms by partial fraction decomposition.

Where these methods are of no avail recourse may be had to the

inversion integral formula, which is likely to lead to an

intractable integral, or to an infinite series, often with

terms involving the roots of some transcendental function. It

is clear that in all but the simplest cases considerable effort

is needed to obtain an accurate numerical value of the inverse

for a specified value of the argument.

It is therefore natural that attention has been paid by

mathematicians, engineers, physicists and others to alternative

ways of evaluating the inverse. Early methods (e.g. Widder (1935),

Tricomi (1935), Shohat (1940)) involved expansion of the inverse

in series of Laguerre functions. Salzer (1955) evaluated the

inversion integral by Gaussian quadrature using an appropriate

system of orthogonal polynomials. Since 1955 a very large number

of methods for numerical inversion have been published: see for

example the partial bibliography in Piessens and Branders (1971)

or the fuller one in Piessens (1974). A useful critical survey

of earlier work was given by Weeks (1966).

Many of the methods use either orthogonal series expansions, or

weighted sums of values of the transform at a set of points,

usually complex points. In either case considerable preliminary

work must be carried out. In the second type this may be done in

advance once and for all for each selected set of points, and the

points and weights stored in the computer. However, if more points

are desired for the sake of gaining increased accuracy, much further

computational effort must be expended first.

In general the methods hitherto published have been intended for

use with transforms of particular types, e.g. rational functions



in the transform variable s, functions of √s, functions representable by polynomials in 1/s, and so on. The accuracies attainable have depended very much on the particular transform F(s) to be inverted, as well as on the argument, t, of the inverse f(t). The highest accuracies so far claimed have probably been those obtained by Piessens and Branders (1971), Piessens (1971, 1972), and Levin (1975), who in particular cases obtained errors of orders 10^-12 to 10^-15.

The method to be described in the present paper is of the second

type, but is unlike any previously published method. The

number n of points to be used is one of several arbitrary parameters. No preliminary computational work is required. The

method is almost universal in its application. The theoretical

error is expressible in closed form by means of contour integrals,

and for a given t decreases roughly exponentially with increase

of n, being typically of order 10^-4 or 10^-5 for n = 6, 10^-7 for
n = 10, 10^-11 for n = 20, 10^-23 for n = 40 (with double precision

working) and so on. The actual decrease of error is of course

limited by the precision of the computer, but the "round-off"

error is very easily estimated from the value of one single term.

In practice the orders of error quoted are always attainable, by

proper choice of the other parameters, for all values of t from

0+ to some maximum value, usually ranging between 20 and 100 or

more, and depending on the accuracy required and the positions of

the singularities of F(s). The computer execution time is roughly

proportional to n. Using a CDC 7600 the average time per

inversion when n = 20 (giving errors nearly all of order 10^-11

or less) is about 1 ms.

In essence the method is contained in an unpublished Ph.D. thesis

(J.S. Green, 1955) which was supervised by the present author.

However the potentialities as regards accuracy attainable only

became apparent much later, and turn on the correct choice of

the various parameters.

Possible applications of the method are numerous, and many have

already been tested. These include:

(a) The direct one-step solution, for any specified value of


the independent variable, of any linear constant-coefficient

differential equation with arbitrary right-hand side possessing

a Laplace transform calculable as a function of the complex

transform variable s.

(b) The time-domain solution of any linear network or system

(e.g. control system) using either standard network or system

analysis or the solution of simultaneous algebraic linear

equations.

(c) In particular, the solution of a system governed by a

state-matrix A:

du/dt = Au + v(t),

by combining the inversion process with Fadeev's method for

evaluation of (sI − A)^-1. That is to say, given any vector v(t)

and any initial conditions, the equation can be solved for the

vector u(t) for a given t in one step to almost any desired

degree of accuracy.

(d) The direct one-step solution, for any specified x and t, of the parabolic equation

∂u/∂t = ∂²u/∂x²,  a ≤ x ≤ b,  t > 0,

with arbitrary initial condition

u(x,0) = φ(x),  a < x < b

and a variety of (or perhaps arbitrary) end-conditions on u

or ∂u/∂x.

(e) The evaluation of some difficult integrals to great accuracy,

by inversion of their transforms taken with respect to a pre-

existing or artificially introduced parameter.

(f) The direct evaluation of transcendental functions, by inversion of their transforms, to many more decimal places than are available at present, provided a computer of sufficient precision in its arithmetic and in its exponential and sine-cosine subroutines is at hand. For example, with a CDC 7600 in double precision J0(t) can be found to 21 or more decimal places for t ≤ 100. Triple precision would raise the number of places to about 35, and so on.


2. Description of the method

Let f(t), defined for t > 0, have the Laplace transform

F(s) = ∫_0^∞ e^{-st} f(t) dt  (1)

with abscissa of convergence γ0, so that the integral converges in the half-plane Re s > γ0 but diverges in Re s < γ0. Our starting point for numerical inversion of F(s) is the standard inversion formula

f(t) = (1/2πi) ∫_B e^{st} F(s) ds,  t > 0,  (2)

where B is the "Bromwich contour" from γ − i∞ to γ + i∞, where γ > γ0, so that B is to the right of all singularities of F(s).

Direct numerical integration along B is impractical on account of the oscillations of e^{st} as Im s → ±∞. The difficulties were

to some extent overcome by Filon (1929) and others since, and

probably Levin (1975) has gone as far as anybody in this direction

by use of his remarkable convergence-acceleration algorithms. But

his method would require considerably more effort to improve on

the orders of error 10^-12 to 10^-15 which he has been able to achieve

for some functions F(s) and some values of t.

Here we overcome the difficulty by avoiding it: we replace B by

an equivalent contour L starting and ending in the left half-plane,

so that Re s → - ∞ at each end. This replacement is permissible,

i.e. L is equivalent to B, if

(i) L encloses all singularities of F(s), (3)

and

(ii) |F(s)| → 0 uniformly in Re s ≤ γ0 as |s| → ∞.  (4)

Condition (ii) holds for almost all functions likely to be

encountered, and we shall assume it satisfied by all F(s) considered

here. Condition (i) may well not be satisfied by a given F(s),

but for the time being we shall assume that it does hold.

To choose one particular path out of all possible equivalent paths,

we write the integral in (2) as

∫_B e^{w(s)} ds,  w = u + iv.  (5)


Now if ŝ is a saddle-point of the integrand, i.e. a zero of dw/ds, then in general, as is well known, there is a pair of steepest-descent paths through ŝ on which, if we write w(ŝ) = û + iv̂,

(a) v = const. = v̂, so that e^w does not oscillate, and

(b) u decreases steadily from û at ŝ to −∞ at both ends.

It is obvious that this pair of paths forms a contour L which, if equivalent to B, is likely to be very suitable for numerical integration of (5).

For later reference we note that if we write μ² = û − u, then μ² ≥ 0 on L, and we may suppose μ to increase steadily from −∞ to +∞ along L. Furthermore, on L we have

μ² = ŵ − w = −½ ŵ″ (s − ŝ)² + ...,  (6)

from which, retaining only the first term of the Taylor expansion, we derive the Saddle-Point approximation formula

(1/2πi) ∫_B e^w ds = (e^ŵ/2πi) ∫_{−∞}^{∞} e^{−μ²} (ds/dμ) dμ  (7)

≈ e^ŵ/√(2π ŵ″).  (8)

Now referring to (7) we recall that for real integrals of the form ∫_{−∞}^{∞} e^{−x²} φ(x) dx trapezoidal quadrature has long been known to yield abnormally accurate results. Loosely speaking, this can be explained by reference to the derivative form of the Euler-Maclaurin formula, on noting that the integrand and all its derivatives vanish at ±∞. (See for example Hartree (1952), sec. 6.54.) An analysis by Goodwin (1949) using contour integration provides a strict explanation of the phenomenon.
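This exponential accuracy is easy to observe numerically. The short sketch below is illustrative only (the integrand e^{-x²} and the step sizes are my choice, not taken from the report): it applies the trapezoidal rule to ∫ e^{-x²} dx = √π and shows the error collapsing far faster than the O(h²) of the classical trapezoidal error bound.

```python
import math

def trap_gauss(h, half_width=8.0):
    """Trapezoidal approximation to the integral of exp(-x^2) over the real line.

    The integrand and all its derivatives vanish at the truncation points,
    so by the Euler-Maclaurin argument the error decreases roughly like
    exp(-pi^2/h^2) rather than like h^2.
    """
    m = int(half_width / h)
    # end weights are irrelevant here: exp(-64) is negligible at |x| = 8
    return h * sum(math.exp(-(k * h) ** 2) for k in range(-m, m + 1))

exact = math.sqrt(math.pi)
for h in (1.0, 0.5):
    print(h, abs(trap_gauss(h) - exact))
```

Halving h from 1.0 to 0.5 drives the error from roughly 10^-4 down to round-off level, mirroring the behaviour the text attributes to Goodwin's analysis.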

We may therefore expect that trapezoidal quadrature applied on a

steepest-descent (S.D.) contour will be exceptionally accurate.

Now it would be quite impracticable to calculate the S.D. contour

for each function F(s) to be inverted. A little thought however

shows that this is unnecessary, for by the discussion above the S.D.

contour for any F(s) is likely to produce good results for all F,

and this does indeed turn out to be the case. Moreover, it is clear

that this will continue to hold if u in (7) is replaced by any


other suitable parameter, for the integrand will still have the same type of behaviour at the end-points.

For our method we take the simplest possible F, viz. F(s) = 1/s, and for simplicity take t = 1. This gives

w = s − ln s,  ŝ = 1,  v̂ = 0,

and the S.D. contour is, taking θ = arg s as a parameter,

L : s = s_c(θ) = α + iθ,  α = θ cot θ;  −π < θ < π,  (9)

or

r = r_c(θ) = θ cosec θ.

Here the suffix c denotes 'critical'; the reason for its use will become apparent later. The contour is shown in Fig. 1.
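In code, the contour (9) is a one-liner. The small check below is my own (not from the report); it confirms that the Cartesian form α + iθ and the polar form r_c(θ) = θ cosec θ describe the same point, since s_c(θ) = (θ/sin θ)(cos θ + i sin θ).

```python
import cmath
import math

def s_c(theta):
    """Point of the contour L, eq. (9): s_c(theta) = theta*cot(theta) + i*theta."""
    if theta == 0.0:
        return complex(1.0, 0.0)  # limit of theta*cot(theta) as theta -> 0
    return complex(theta / math.tan(theta), theta)

# polar form check: |s_c(theta)| = theta/sin(theta), arg s_c(theta) = theta
theta = 1.0
s = s_c(theta)
print(abs(s) - theta / math.sin(theta), cmath.phase(s) - theta)
```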

If with L as in (9) condition (3) is satisfied by a given F(s), we can apply (2) with L in place of B. If however the condition is not satisfied by F(s), it will in general be satisfied by the modified function F(λs + σ) for suitable choice of the positive scaling parameter λ and the shift parameter σ, for if sj is a singularity of F(s), F(λs + σ) has the corresponding singularity

sj* = (sj − σ)/λ.  (10)

We may then replace (2) by

f(t) = (λ/2πi) ∫_L e^{(λs+σ)t} F(λs + σ) ds,  (11)

and if F(s) is a real function it is now easy to derive the trapezoidal approximation

f̃(t) = (λ e^{σt}/n) Σ′_{k=0}^{n−1} Re[(1 + iβ) e^{τ s_c} F(λ s_c + σ)]_{θ=θk},  (12)

where the prime denotes that the first term is halved, τ = λt,

θk = kπ/n,  k = 0, 1, ..., n−1,  (13)

and

β = θ + α(α − 1)/θ = −dα/dθ.  (14)

The term for k = n, i.e. θ = π, is omitted since it is zero. If

F(λs_c + σ) = G + iH,  (15)

where G and H are real, (12) takes the real form

f̃(t) = (λ e^{σt}/n) Σ′_{k=0}^{n−1} [e^{ατ} {(G − βH) cos τθ − (H + βG) sin τθ}]_{θ=θk}.  (16)

Equations (12) and (16) are the basic formulae of the method.

By suitable choice of the parameters n, λ and σ they are in general

capable of yielding extreme accuracy. The principles governing

the choice are simple, and depend on the error analysis given in

the next section.
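As a concrete illustration, here is a minimal sketch of formula (12) in Python; this is my own transcription, not a program from the report. The default τ = 0.4n follows the later recommendation that ρ = τ/n lie between 0.3 and 0.5, and the k = 0 term is taken with half weight, consistent with (28).

```python
import cmath
import math

def talbot(F, t, n=32, tau=None, sigma=0.0):
    """Trapezoidal sum (12) on the contour s_c(theta) = theta*cot(theta) + i*theta.

    F     : the transform, callable on complex s
    t     : argument of the inverse f(t)
    n     : number of quadrature points
    tau   : lambda*t; defaults to 0.4*n so that rho = tau/n is about 0.4
    sigma : shift parameter (singularities of F(lambda*s + sigma) must lie inside L)
    """
    if tau is None:
        tau = 0.4 * n
    lam = tau / t
    # k = 0 term: s_c(0) = 1, beta(0) = 0, taken with half weight
    total = 0.5 * (cmath.exp(tau) * F(lam + sigma)).real
    for k in range(1, n):
        theta = k * math.pi / n
        alpha = theta / math.tan(theta)
        beta = theta + alpha * (alpha - 1.0) / theta   # eq. (14)
        s = complex(alpha, theta)                      # s_c(theta), eq. (9)
        total += ((1 + 1j * beta) * cmath.exp(tau * s)
                  * F(lam * s + sigma)).real
    return (lam * math.exp(sigma * t) / n) * total

# a quick check against a known pair: F(s) = 1/(s + 1), f(t) = exp(-t)
print(abs(talbot(lambda s: 1 / (s + 1), 2.0) - math.exp(-2.0)))
```

With ordinary double precision the absolute error for such well-behaved transforms is limited by the round-off term Er of the next section rather than by the quadrature itself.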

3. Error Analysis

Consider the conformal transformation

s = S(z) = z/(1 − e^{−z}) = (z/2)(1 + coth(z/2)).  (17)

This has branch-points at z = ± 2rπi , r = 1,2, ... , and maps

the imaginary z-plane interval M between - 2πi and 2πi on to the

curve L in the s-plane. Some idea of the nature of the mapping

in relation to L is given by Fig.2, where correspondences between

z-plane and s-plane regions are indicated by shading, or its absence,

and correspondences between points by their labels. In particular

the region enclosed by L is mapped 1 - 1 from a z-plane region R

bounded on the right by M, above by a curve CFH between -∞ + πi

and 2πi, and below by the conjugate curve. We shall call points

z in this region "principal inverses" of the corresponding points

s, and shall denote them by the notation

z = Z(s). (18)

Writing z = x + iy, and

Q(z) = λ e^{(λS+σ)t} F(λS + σ) dS/dz,  (19)

(11) becomes

f(t) = (1/2πi) ∫_M Q dz = (1/2π) ∫_{−2π}^{2π} Q(iy) dy,  (20)

and the trapezoidal approximation to f(t) is

f̃(t) = (1/n) Σ′_{k=0}^{n−1} 2 Re Q(zk),  zk = 2kπi/n,  (21)

which is seen to agree with (12) on noting that zk = 2iθk and S(zk) = s_c(θk).

If the singularities of F(λs + σ) are all inside L, those of Q are all in the region R, and if M1 is any path from −2πi to 2πi passing


to the right of M, and M′2 any such path to the left of M but close enough to it to exclude the singularities of Q, then by (21) and the Residue Theorem

f̃(t) = (1/2πi) ∫_{M1−M′2} Q dz/(1 − e^{−nz}).  (22)

(The integrand in (22) is regular at z = ± 2πi since by assumption

F and hence Q satisfies condition (4).)

Paths M1, M′2 and their maps L1, L′2 are shown in Fig. 3.

It now follows by (20) and (22) that the theoretical approximation error is

E(t) = f̃(t) − f(t) = E1 + E′2,

where

E1 = (1/2πi) ∫_{M1} Q dz/(e^{nz} − 1),  E′2 = (1/2πi) ∫_{M′2} Q dz/(e^{−nz} − 1).  (23)

Now if L2 can be found such that L2, L′2 enclose the poles but no other singularities of F(λs + σ), then M2, the principal inverse of L2, and M′2 enclose the poles but no other singularities of Q, and we may write

E′2 = E2 + E0,

where

E2 = (1/2πi) ∫_{M2} Q dz/(e^{−nz} − 1),  (24)

and, after simplification,

E0 = Σ_j res F(sj) e^{sj t}/(e^{−n zj*} − 1).  (25)

Here the summation is over poles sj of F(s), and

sj* = (sj − σ)/λ,  zj* = Z(sj*).  (26)

Thus the theoretical error is

Ẽ = E0 + E1 + E2.  (27)

Since −zj* in (25), z in (23) and −z in (24) all have positive real parts, it is obvious that the components E0, E1 and E2 all decrease exponentially to zero as n increases. However, there


is still a round-off error component to consider. Now it is clear that the largest or near-largest term in (12) is the first, viz.

T0 = (λ/2n) e^{(λ+σ)t} F(λ + σ).  (28)

Moreover, because of cancellations, |f̃| << |T0|. Thus if the computer evaluates T0 correct to c significant figures, the round-off error in f̃ is

Er = O(10^{−c} T0),  (29)

all other round-off errors in the evaluation of f̃ being negligible by comparison. Finally, we may state that the actual error in f̃ is

E = E0 + E1 + E2 + Er,  (30)

the four components being given by (25), (23), (24), (29)

respectively. We shall now consider these components one by one,

and obtain estimates of their orders of magnitude.

Component E0. By (25) we may indicate the dominant exponential factor in E0 by writing

E0 ~ e^{−A0},  A0 = min_j (n uj* − pj t),  (31)

where uj* = −Re zj* > 0, but pj = Re sj may have either sign.

If sj is known and λ and σ fixed, sj* is found as in (26). A rough idea of the value of uj* can then be obtained from Fig. 4, which shows s-plane loci u = constant, s = S(z), z = −u + iy, for various values of u. More accurate values of uj* may be found from Table 1. This gives values of u = −Re z, where S(z) = s = re^{iθ}, for various values of s inside L. Table 1 was computed by applying Newton's method to the equation

s(1 − e^{−z}) − z = 0.  (32)

Except when θ is near to π, a suitable starting value, ensuring convergence, is z0 = (θ − 3)/18 + 2iθ, and it is convenient to take as independent variables in the Table θ and κ, where

κ = r_c(θ)/r.  (33)

For θ near to π we take r and θ as the variables, and z0 = −4 + iπ.
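Equation (32) is easy to reproduce. The sketch below is my own, not the report's program for Table 1: it applies Newton's method to s(1 − e^{−z}) − z = 0 with the starting value quoted in the text, and recovers the principal inverse z* = −0.6465 + 2.7961i of s* = i that appears in the worked example for the poles of s³/(s⁴ + 4).

```python
import cmath

def principal_inverse(s, theta, iterations=50):
    """Principal inverse z = Z(s) of s = S(z) = z/(1 - exp(-z)), found by
    Newton's method on g(z) = s*(1 - exp(-z)) - z, eq. (32).

    theta = arg s is used only to form the starting value quoted in the text,
    z0 = (theta - 3)/18 + 2i*theta (suitable except for theta near pi).
    """
    z = complex((theta - 3.0) / 18.0, 2.0 * theta)
    for _ in range(iterations):
        g = s * (1 - cmath.exp(-z)) - z
        dg = s * cmath.exp(-z) - 1
        z = z - g / dg
    return z

# s* = i corresponds, e.g., to the pole s_j = 1 + i with sigma = 1, lambda = 1
z_star = principal_inverse(1j, cmath.pi / 2)
print(z_star)  # close to -0.6465 + 2.7961i, as quoted in the text
```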

It will be seen from Table 1 that values of u increase rapidly as r approaches zero, and thus A0 increases rapidly as λ increases.


We note also that poles with sj* = 0 (i.e. sj = σ, or sj = 0 if σ = 0) make no contribution to E0, since u → ∞ as s → 0.

We may remark that in cases where E0 is the dominant component,

(25) is an accurate formula for E, and thus could be used as a

correction term to convert f̃(t) to f(t). However in such cases

it is much simpler and more efficient merely to increase the

value of n.

Component E1. Except near the end points of M1, the order of magnitude of the integrand in (23) is mainly determined by the factor e^{τS − nz}. Since S(z) ≈ z when Re z > 2, say, and M1 may be deformed arbitrarily far to the right, the modulus of this factor is of order e^{(τ−n)Re z} over much of M1. It follows at once that a necessary condition for E1 and therefore E(t) to be small is that

n > τ = λt.  (34)

Thus for large t coupled with large λ necessitated by the presence of remote singularities of F(s), the use of large n is essential for good results. (Large n may not suffice, however, because of the Er term, as we shall see.)

A close estimate of E1 may be obtained by applying the saddle-point formula (8) to the integral (23). Details of the calculation are given in Appendix 1, but it is immediately obvious from (8) that if z1 = x1 + iy1 is a saddle-point in (23), and s1 = S(z1) = p1 + iq1, then

E1 ~ e^{−A1},  A1 = n b1 − σt,  (35)

where b1 = x1 − ρ p1, ρ = τ/n.

Now it has been found empirically (but not so far explained) that for a great variety of functions F(s) and values of τ and n, b1 is practically a function of ρ only, even though x1 and p1 may vary appreciably. Thus the estimation of E1 and hence of E is greatly facilitated by Table 2, which shows values of b1(ρ) for a range of ρ < 1. (By (34), E1 will be large if ρ > 1.) It will be seen that for optimum E1, ρ should be between 0.3 and 0.5.


Component E2. Similar results hold in general for E2, though there is a complication here in that, unlike M1, M2 cannot be deformed arbitrarily far from M since it must remain to the right of non-polar singularities of F(λs + σ). Nevertheless, if z2 = −u2 + iy2 is a saddle-point in (24) and s2 = S(z2) = p2 + iq2, then in a large number of cases

E2 ~ e^{−A2},  A2 = n b2 − σt,  (36)

where b2 = u2 − ρ p2 is practically a function of ρ only, and is also given in Table 2.

It is clear that except for very small ρ , which are unlikely to

be used, b2 > b1 . Thus in practice we can usually neglect E2

in comparison with E1, and this we shall do henceforth unless

otherwise indicated.

However, exceptional cases can arise when E2 is by no means negligible, and in fact dominates E. To see this, suppose that F(s) has a branch point s0, and that s0* = (s0 − σ)/λ is inside L but "near" to it, i.e. such that if z0 = Z(s0*), u0 = −Re z0 is small and positive. Then any modification M2 of the path M′2 in (23), enclosing with M′2 only poles of F(λs + σ), must remain to the right of z0 and thus must contain an arc on which Re z is small. Thus E2 will be abnormally large, although it will still decrease exponentially as n increases. An example of this situation occurs when F(s) = 1/√(s² + 1) and t is large. Here s0 = i, and if σ = 0 the 'critical', i.e. minimum, value of λ to keep s0* inside L is λc = 2/π.

But as is clear from (28) and (29), an increase of t tends to increase Er unless λ is reduced. Thus to make Er acceptably small when t is large it may be necessary to reduce λ nearly to the value λc, and then E2 may be unacceptably large unless n is very large. More details of this case, and of the improvement effected by judicious choice of σ, will be given later.

Much more work remains to be done in investigating this phenomenon. Experiments with F(s) = 1/√(s² + 1) indicate that d2 ≈ ½ n u0 in this case, at any rate when d = d2 < d1r. (See (40).) To what extent this generalises to other functions is not at present known.


Component Er. In the approximations (31), (35), (36) we have included only the exponential factors, and ignored factors of the form λF and other factors. We shall do the same for Er, but shall include a contribution representing the denominator factor 2n in (28), which may be appreciable. Thus we write

Er ~ e^{−Ar},  Ar = 2.3(c + log10 2n) − τ − σt.  (37)

In practice it is always found that for fixed t, λ and σ, E decreases exponentially as n increases until dominated by Er (which is practically independent of n), and then remains approximately steady.

As an illustration of the results of this section, consider the case

F(s) = s³/(s⁴ + 4),  f(t) = cos t cosh t,

with t = 10, λ = 1, σ = 1. Using the CDC 7600 in double precision, for which c is about 27 or 28 for this function F, the following values of E (to 3 figures) were obtained for various n:

n :  20        30       40        60         80        100        120       150
E : -2.67D-2   3.88D-5  -5.03D-8  -4.09D-14  1.3D-18   -4.96D-22  2.17D-22  4.91D-22

First, T0 = (9.7D7)/n, so for n ≥ 100, (29) gives Er = O(10^-21) or O(10^-22), agreeing with the above values. Next, F(s) has poles 1 ± i, −1 ± i, and we find zj* = −0.6465 ± 2.7961i, −0.5572 ± 4.6273i respectively, and res F(sj) = 1/4 for each pole. Then by (31)

A0 = min(0.6465n − 10, 0.5572n + 10),

and clearly the first pole-pair is dominant in E0 if n < 220. Considering now E1, we have τ = 10 and b1 > 0.7 for n between 20 and 200, by Table 2. Thus E0 >> E1 for all n under consideration, and E0 is dominant in E up to n = 90, when Er begins to dominate. One does indeed find that the terms in (25) for the pole-pair 1 ± i do accurately give the above values of E for n ≤ 80. (A slight discrepancy, of about 2D-21, for n = 80 is almost certainly due to the effect of Er.)


4. Strategy, and some results

The order of E in (30) is determined by that of its largest component. Thus, writing d0 = A0/2.3, d1 = A1/2.3, etc., we have

E ~ 10^{−d},  d = min(d0, d1, d2, dr).  (38)

For a given F(s), the relative sizes of the d's vary as we vary t, n, λ, σ, c. Thus the best strategy (i.e. choice of the parameters n, λ, σ) will depend on the value of t, the nature and position of the singularities of F(s), the computer precision c, and the accuracy desired. Fortunately however, except when there are remote singularities or when t is large, a simple general strategy applies in all cases.

Noting that, as pointed out earlier, in general d2 > d1, we may write

d = min(d0, d1r),  (39)

where

d1r = min(d1, dr) ≈ δ1r − σt/2.3,  δ1r = min(n b1/2.3, c + 2 − τ/2.3).  (40)

Table 3 gives values (rounded below) of δ1r when c = 14 or 27, for various values of n and τ, using Table 2 for b1. In each column dr is dominant at the bottom entry, and remains so below this. Whenever d0 > d1 or dr, δ1r − σt/2.3 gives a safe estimate of the order d of error (to within one or two units), and pairs (n, τ) can be readily selected to produce any attainable accuracy.

If dr ≤ d1, larger n yields practically no increase in d1r, but may

be needed to ensure that d0 > dr , i.e. that the value d = d1r

is actually attained. This will certainly be the case if F(s)

has no poles except possibly at the origin (so that E0 = 0), provided

λ = τ /t is large enough to bring all singularities inside L. In

the general case when F(s) has poles away from s = 0, d0 can be found

with the help of Table 1. In fact, referring to Fig. 1, let

S0 = (p0, q.0) be any singularity, polar or non-polar, of F(s),

and for fixed σ let s0 - σ = r0eiθ.The radius to s0 - σ meets

L at the 'critical' point

, )q,p (ers

cc

i

cc

==

θ

(41)

rc= θ cosec θ , q

c = θ.


The value of λ used must be greater than the critical value λc bringing s0 − σ to s_c, say

λ = κλc,  λc = q0/θ,  κ > 1,  (42)

and then

s0* = (s0 − σ)/λ = s_c/κ,  (43)

giving

λ = κq0/θ,  σ = p0 − q0 cot θ.  (44)

If in particular s0 is a pole sj, then the corresponding uj* in (31) can be found by entering Table 1 with θ and κ, or θ and r, r = |s*|.

In general, little is gained by taking σ non-zero, and in most of our results σ = 0. However, in cases where dr and hence d is small because τ is large (see (40)), non-zero σ can be used to advantage. In fact, we have

dr ≈ c + 2 − (λ + σ)t/2.3,  (45)

and to increase dr we must make λ + σ small. Now with a singularity s0 as above and λ, σ as in (44), λ + σ is minimised if

κ = θ² cosec²θ = κ(θ).  (46)

Then

λ + σ = p0 + q0 γ(θ),  γ(θ) = θ cosec²θ − cot θ.  (47)

Table 4 gives values of κ(θ) and γ(θ), and also of u(θ) corresponding to θ and κ(θ), for a number of values of θ. Clearly a compromise is needed between large θ, giving large u (and thus large d0 if s0 is a pole) and also large γ (hence small dr), and small θ, giving small d0 and large dr, though it should be noted that one can always compensate for small u by taking n large.
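The recipe (44)-(47) is mechanical once θ is chosen. The helper below is my own arrangement of those formulae (the singularity s0 = i and trial angle θ = 0.6 are just sample inputs): it returns the (λ, σ) pair for a singularity s0 = p0 + iq0 and checks the identity λ + σ = p0 + q0 γ(θ) of (47).

```python
import math

def talbot_shift_params(p0, q0, theta):
    """lambda and sigma placing the singularity s0 = p0 + i*q0 at angle theta,
    with kappa = theta^2/sin^2(theta) as in (46); eq. (44)."""
    kappa = (theta / math.sin(theta)) ** 2    # eq. (46)
    lam = kappa * q0 / theta                  # eq. (44), lambda = kappa*q0/theta
    sigma = p0 - q0 / math.tan(theta)         # eq. (44)
    return lam, sigma

def gamma(theta):
    """gamma(theta) = theta*cosec^2(theta) - cot(theta), eq. (47)."""
    return theta / math.sin(theta) ** 2 - 1.0 / math.tan(theta)

# branch point s0 = i (p0 = 0, q0 = 1), trial angle theta = 0.6
lam, sigma = talbot_shift_params(0.0, 1.0, 0.6)
print(lam + sigma, gamma(0.6))  # the two agree, per (47)
```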

As an illustration of this strategy consider

F(s) = 1/√(s² + 1),  f(t) = J0(t),

and suppose we require f(t) for t = 50. F(s) has branch-points at s0 = ±i. First, if σ = 0, the minimum λ is λc = 2/π = 0.637, and the corresponding τc ≈ 32.


The table below records values of d obtained with various n and τ, and the corresponding dr values. (In all cases d1r = dr.)

n  : 60  60  60  60  60  80  100 120 120 120
τ  : 35  40  45  50  55  50  50  40  50  60
d  : 5   10  10  7   6   7   8   12  7   3
dr : 14  12  10  7   5   7   8   12  8   3

It will be seen that in all except the first two cases d ≈ dr. With the (n, τ) pair (60, 35), κ = τ/τc = 1.10 and u = 0.14. The approximate empirical formula d2 ≈ nu/2 mentioned earlier gives d2 ≈ 4: cf. d = 5. With (60, 40), κ = 1.26 and u = 0.33, giving d2 ≈ 10 = d. On the other hand with (120, 40) u is the same but d2 ≈ 20, so that now d = dr, and does not increase with further increase of n.

If however we use (44), (47) and Table 4 we note that by (45) dr decreases from 21 to 12 as θ increases from 0.5 to 1, and any such dr value is attainable with sufficiently large n, viz. n > n2 = 2dr/u if we assume d2 ≈ nu/2. For θ = 0.5, 0.6, ..., 1 the values of n2 are about 260, 180, 120, 80, 60, 40, corresponding to dr = 21, 20, 18, 16, 14, 12. Thus if we aim for d = 20, then θ = 0.6 and n ≈ 180 are indicated. In fact, n = 180 gives an error 1D-19, while n = 200 gives 1D-20, thus confirming the general strategy.

For larger t the dr values and hence attainable d values would

be smaller. For example, if t = 100 they would be 13, 11, 7 , 3, ...

for θ = 0.5, ... 1 . To obtain dr = 20 we would need θ = 0.3,

giving n2 = 670. If this is thought excessively high, recourse

may be had to a modified contour, viz. our contour L expanded

vertically by a factor v . With this it is possible to achieve

d = 21 with n = 250 for t = 100, using a similar strategy. Some

details of the use of this new parameter v are given in Appendix 2,

but much work remains to be done in investigating its effect on the

error.

We now consider various types of function F(s), and in a number of selected examples compare actual d values obtained using various (n, τ) pairs with those obtained by other authors.

For all results quoted a CDC 7600 was used, with double

precision working unless otherwise stated. The execution

time per answer is roughly proportional to the number of points n,

i.e. the number of transform function evaluations, and is about

3 ms when n = 20. If single precision is used, then as Table 3

shows d -values of 11 are readily achieved with n = 20, and

the average execution time is then 1 ms. If n < 20, single

precision gives the same d - values as double precision.

In the examples below, each line of results starts with an

(n, τ ) pair.

I. Singularities only at s = 0.

In these cases E0 = 0, and Table 3, adjusted for the appropriate

value of c, may be used with confidence (to within one or two

units) for d.

(a) F(s) = e^{-1/s}/√s,  f(t) = cos(2√t)/√(πt).

(10,4): d = 5-8, t ≤ 20
(20,8.5): d = 11-14, t ≤ 50
(40,10.5): d = 23-24, t ≤ 50.

Cf. Nakhla et al. (1973), where 22 points (per value of t) yield d = 3 up to t = 50. The method of Piessens (1972) for this function yields d = 12 to 14 for t between 1 and 10, using 31 points.

(b) F(s) = e^{-1/√s}/√s,  f(t) = (1/(2t√(πt))) ∫_0^∞ u e^{-u²/4t} J0(2√u) du.

(20,6): d ≥ 11, t < 10; d = 14, 10 ≤ t ≤ 100
(30,13.5): d ≥ 17, t < 4; d = 19, 4 ≤ t ≤ 100
(40,12): d = 20+ (probably about 25), t ≤ 100. (Values inferred from 20-figure results using 10 different (n, τ) pairs and comparison with Table 3.)

Cf. Piessens and Branders (1971), example 6, where d = 9, 10 ≤ t ≤ 100, with 51 points; Piessens (1971), where d = 15, 14, 12 for t = 1, 10, 100, with 12 points; and Levin (1975), where Gaussian quadrature together with Levin's rational transformation of order 14/14 yields d = 12, 15, 12 for t = 1, 10, 100.


(c) F(s) = (√s + 0.5)/(s + √s + 0.5). (For f(t), see Piessens and Branders (1971), example 3.)

(20,6): d = 11, t = 0.001; 13, 0.1 ≤ t ≤ 10; 14, 10 < t ≤ 100
(30,13.5): d = 17, t = 0.001; 18, 0.1 ≤ t ≤ 4; 19, 6 ≤ t ≤ 100
(40,12): d = 20+, t ≤ 100.

Cf. Piessens and Branders (1971), where d = 7 is claimed for

t = 2, 4, 6, 10 and d = 5 for t = 14, 20. (In fact the "exact"

values quoted for t = 4 to 20 are in error, and d = 7 or 8 is

achieved throughout.) Such functions F(s) are important in

connection with electric networks containing mixed lumped and

distributed elements.

Similar values of d have been obtained by our method for F(s) = (1/√s) e^{-√s} (f(t) = e^{-1/4t}/√(πt)), F(s) = e^{-√s} (f(t) = e^{-1/4t}/(2t√(πt))), and other such cases.

II. Poles (if any) at s = 0, other singularities elsewhere

As already indicated, the presence of branch points has a depressing

effect on d2 , which can be countered by increasing λ (and so dr ) or

n. In these cases d may be less than d1r .

(d) F(s) = 1/√(s² + 1), f(t) = J0(t).

(10,6) : d = 7, t ≤ 1; 5, t = 5

(20,10) : d = 13, t ≤ 5 ; 7, t = 10

(40,18) : d = 20, t ≤ 10; 13, t = 20

(50,10) : d = 25, t ≤ 6; 16, t = 10

(60, max(20,t)): d = 19 or 20, t ≤ 20; 13, t = 40; 8, t = 50. (Taking τ = max(20, t) ensures that λ ≥ 1, i.e. κ ≥ 1.57 and u* ≥ 0.65.)

Cf. Piessens and Branders (1971), ex. 4, where 251 points yield

d = 12 up to t = 10, and 11 at t = 20 decreasing to 3 at t = 100.

An alternative method applicable only to special classes of

functions yields d = 14 up to t = 20 but poor results thereafter.

As already discussed, we can obtain even better results by using

σ < 0. For example, with σ = −1, n = 160 and τ = max(50, 1.5t) we obtain d = 12, 14, 14, 18, 14, 12, 8 for t = 10, 20, 40, 50, 60, 80, 100.

(Here we are not using the special strategy described earlier.)

Good results can likewise be obtained for F(s) = 1/√(s^2 − 1),

f(t) = I0(t), though the singularity positions are quite different.

Taking n = 60 and τ = max(7, 2t) for example gives d ≥ 20, t ≤ 5;

19, t = 10; 9, t = 20.

(e) F(s) = e^(−√(s(s+1)))/s.

This transform arises in pulse-propagation problems (see Longman,

1973), and its inverse f(t) is not known in explicit form.

Levin (1975) gives 10- to 13-decimal-place values of f(t) for

t = 0.5(.5)2.5. The pairs (n, τ) = (40,12) and (40,18), with

d1r = 23 and 21 respectively, give values of f(t) which are

identical to at least 20 decimal places for t = 0.1 to 100, and

may be presumed correct to 20 d.p. They confirm Levin's figures

to 12 d.p., but show errors of 3 × 10^−13 in his figures for

t = 2 and t = 2.5.

Similar results are obtained with F(s) = (1/s)ℓn(1 + s), f(t) = E1(t);

F(s) = tan^−1(1/s) (with logarithmic branch-points at s = ±i),

f(t) = (sin t)/t; and so on.

III Rational functions

Since the method is based on an S.D. path for the inversion integral

(2) when F(s) = 1/s, it is not surprising that it should give good

results with rational functions. Now, however, with poles present

other than at s = 0, E0 ≠ 0 and

d = d01r = min(d0, d1r),

assuming as before that E2 is negligible. Here d0 = A0/2.3 is

given by (31). If pj < 0, large t actually helps to increase

d0 and d, but if pj > 0 the opposite occurs. d0 can be made as

large as desired by increasing n, but increase of λ (and use of σ)

to increase u* will permit of smaller n, though dr may also decrease.

Thus compromise is needed to achieve a specified d, but there is

usually no difficulty if d is not too close to the computer

precision constant c.

The only properties of F(s) relevant to the choice of n, λ and σ are

the distances and polar angles of its most distant poles, though

only a rough indication is needed, and even without this a succession

of choices with increasing n (or, up to a point, τ) will pinpoint

f(t) with increasing accuracy. We give two examples.

(f) F(s) = (s^4 + 4s^3 + 4s^2 + 4s + 8)/(s + 1)^5, f(t) = e^−t(1 − t^2 + 2t^3/3 + 5t^4/24).

The pole sj = −1 being negative real, arbitrary λ may be used, but

we note that u* increases with λ. If say n = 20 and τ = 9, d1r = 11

by Table 3b. Now s*j = p/λ, p = −1, |s*j| = |p|/λ = r*, say, and


A0 = nu* − pt = n(u* + ρr*),

which for varying t, i.e. varying r*, is minimum when

du*/dr* = −ρ = −0.45. Inspection of Table 1 shows that for

θ = 180°, this occurs roughly when r* = 1.7, i.e. λ = 0.6,

t = 15, u* = 1.06, A0 = 36, d0 = 16 > d1r. It follows that

d = d1r for all t. In fact we find that d ranges between

12 and 14 for t ≤ 100, and E(t) is indeed largest for t about 15.

(Note that the multiple pole is not a problem, as in other methods.)

Similarly n = 30, τ = 13.5 gives d = 19 or 20 for t ≤ 100, while

n = 40, τ = 12 gives d between 22 and 25, mostly 24. In Piessens and

Branders (1971), ex. 1, d is 5-7 up to t = 16.

(g) F(s) = 999/[(s+1)(s+1000)] = 1/(s+1) − 1/(s+1000), f(t) = e^−t − e^−1000t.

Such cases, having a large ratio of time-constants, are often

described as presenting difficulties for numerical inversion, but

they are no problem with our method. For example, in Nakhla et al.

(1973), 22 points yield d between 5 and 7 for t ≤ 50, whereas

here the (n, τ ) pair (20,6) gives d between 13 and 17 for t ≤ 100,

(30,13.5) gives d between 19 and 22, and so on (up to dr).

IV General F(s)

No new principles are involved. We give two examples.

(h) F(s) = s/[√(s + 1)(s^2 + 1)], f(t) = (1/√π) ∫ from 0 to t of cos(t − u) e^−u du/√u.

Piessens and Branders (1971) obtain d = 5 or 6 up to t = 14,

d = 4 at t = 20. With the pair (40,24) we obtain d = 19 or 20

up to t = 10, 17 up to t = 20.

(i) F(s) = sℓns/(s^2 + 1), f(t) = −sin t Si(t) − cos t Ci(t).

Levin (1975) obtained d = 5, t = 0.1; 12-14, t = 1-4.

Results in Piessens and Branders (1971) and others quoted there

are poor. This is stated elsewhere by Piessens to be due to

the logarithmic singularity. However this does not affect the

present method. For example, n = 40, τ = max(10.5, 1.8t) gives

d between 21 and 11 for t ≤ 20.

5. General remarks

It will be clear from the examples that the method is almost

universal in its scope, except that there may be difficulties for

large t, depending on the positions of singularities. Where the

inversion problem arises from the solution of linear constant-

coefficient differential equations the difficulty for large t can

be overcome by using two or three steps to reach t instead of


only one, and making terminal values (including derivatives if

necessary) of one step serve as initial values for the next.

This would be particularly simple for state-matrix, i.e. first-

order problems since no derivatives would need to be found.

Alternatively, for all types of problem, the difficulty can often

be overcome, as we have seen, by careful use of the shift parameter

σ, and of the new expansion parameter ν. It may be remarked

that if the difficulty is due to the existence of a remote pole

sj whose location is accurately known, then there is no need to

choose λ so large as to bring the pole inside L : it can be

left outside L and its effect taken into account by adding the

residue term

e^(sjt) res(sj) F(s)    (48)

to f~(t).

Problems involving delay would seem at first sight to be failing

cases. For example, if

F(s) = e^(−as) G(s),    (49)

where a > 0 and G satisfies condition (4), then F will in general

not satisfy the condition, and the method would be inapplicable to

F. However, this is a trivial failure, for we know that the

inverse g(t) of G(s) can be found, and

f(t) = 0, t < a,

f(t) = g(t − a), t > a.

Indeed, the integrand in (2) may be written e^(s(t−a)) G(s), so that

for t > a the method may in fact be applied directly to F.
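The direct application for t > a can be seen in a small sketch (again a modern reconstruction with illustrative parameters, not the original program), inverting F(s) = e^(−as)/s, a unit step delayed by a:

```python
import numpy as np

def talbot(F, t, n=24, tau=6.0):
    """Midpoint rule along the contour s = lam*(th*cot(th) + i*th), lam = tau/t."""
    lam = tau / t
    th = (np.arange(n) + 0.5) * np.pi / n
    cot = np.cos(th) / np.sin(th)
    s = lam * (th * cot + 1j * th)
    ds = lam * (cot - th / np.sin(th) ** 2 + 1j)
    return (np.exp(s * t) * F(s) * ds).imag.sum() / n

a = 1.0
F = lambda s: np.exp(-a * s) / s   # G(s) = 1/s: f(t) is a unit step delayed by a
print(talbot(F, 2.0))              # t > a: close to 1
```

For t > a the factor e^(−as) simply shifts the exponent to e^(s(t−a)), so the quadrature behaves as it would for G(s) itself at argument t − a.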

We note in passing that in this method, unlike others, there is no

"Gibbs phenomenon" for t close to a. However, it can be shown

that for G(s) = 1/s, i.e. f(t) a delayed unit step, f~(a) = 1 − 1/2n,

while for t > a, f~(t) is a function of n and τ only (if σ = 0),

not of t or λ separately. For fixed n and λ, f~(t) → f~(a) as

t → a+. For fixed τ, f~(t) → 1 as n increases.

In general, if F(s) has an infinite number of complex singularities

the method will fail, for no value of λ can bring them all inside L.

If however as a special case

F(s) = G(s)/(1 − e^(−as)),    (50)

where the inverse g(t) of G(s) is a pulse between t = 0 and t = a,

then the method will give g(t) in (0,a), and f(t) is a periodic

repetition of g(t).

6. Applications

Some indication of the variety of possible applications of the

inversion method has already been given in the Introduction.

Here we will mention a few of the results so far obtained,

leaving a full description of the various processes involved

to later papers.

(i) State matrix problems. The solution-vector u(t) of the

system

du/dt = Au + v(t)    (51)

is normally obtained either by using some Runge-Kutta process, or

by inverting the transform

U(s) = (sI − A)^−1 W(s), W(s) = V(s) + u(0)    (52)

by partial-fraction expansion using the eigenvalues of A,

assuming W(s) is rational. In the first case the accuracy

attainable is very limited, and in the second case great care

has to be taken to avoid serious loss of accuracy through errors

in eigenvalues and residues. To find u(t) by numerical inversion

of U(s), one must be able to evaluate (sI-A)-1 for arbitrary

complex s, and the Fadeev algorithm enables this to be done very

efficiently and accurately. As an example of results obtainable,

in a control problem concerning a boiler system of order 8, the

vector output was obtained correct to 11 or more d.p. for 38 values

of t ≤ 20 using n = 20, τ =8 and single precision, with execution

time of about 3 ms. per component per value of t. With double

precision and n = 40, τ = 14, 22 or more d.p. were obtained.

Clearly the Fadeev algorithm has contributed very little error to

these answers, and it is unlikely that the accuracy would be

appreciably reduced in the case of much larger systems, but this

has yet to be tested.

(ii) Diffusion equation. For the solution of

∂u/∂t = ∂^2u/∂x^2,  a ≤ x ≤ b    (53)


with initial condition

u(x,0) = φ(x), a < x < b    (54)

and various end-conditions, two transform variables and

successive inversions are required. A new feature here is

that the result of the first inversion, which is involved in

the second inversion, is a non-real function of the second

transform variable, and our formulae (12) and (16) have to be

modified accordingly.

As an example of results obtainable, if a = 0, b = 1, u(0,t) = u(1,t) = 0,

and

φ(x) = 1 − |2x − 1|,

then with (n, τ, σ) = (30, 25, 0) in the first inversion, and

(10, 2.5, −λ/2) in the second, and using single precision,

d ≥ 7 for all x for t ≥ 0.1. Here, unlike the normal situation,

and in accordance with the theoretical analysis, results are less

good for small t: for t = 0.01, d reduces from 6 at x = 0.1 to

4 at x = 0.9. The execution time was about 20 ms per value of

u. Similar results are obtained when φ(x) = 1, with a discontinuity

in φ at x = 0 and 1 instead of the previous discontinuity in

dφ/dx at x = 0.5. There is no reason to think that either

of these discontinuities worsens the results. (Later work with φ(x) = 1,

still using single precision, has given d ≥ 12 for t ≥ 0.3, but this work is

not yet completed.)

It may prove possible to tackle Laplace's equation by similar means, but

this remains to be investigated.

(iii) Miscellaneous quadratures. To illustrate the use of

our method for numerical quadrature, we consider the integrals

which were the subject of Burnett and Soroka (1972), namely

C(t,R), S(t,R) = ∫ from c to d of (cos tx, sin tx) √(1 − R/x^2) dx,  c = √R, d = √(R + 1/R).

In the paper a complicated approximation procedure was described

by which C and S were evaluated correct to 7 d.p. for a range of

values of R between 1 and 32 and between 1/2 and 1/32, and for t

between 0.1 and 100. By transforming C and S with respect to t,

and inverting the transforms, using the expansion parameter ν and

the special strategy embodied in Table 4, one can easily obtain

20 d.p. over the whole range of R and of t, using double precision.


(iv) Evaluation of mathematical functions. We have already

seen that we can evaluate J0(t) for t ≤ 100 correct to 21 or

more d.p. by using the CDC 7600, in double precision, to invert

its transform 1/√(s^2 + 1). The only preparatory work needed

is the choice of the parameters n, λ, σ and ν, or of n and θ if

Table 4 is used. It would be a simple matter to obtain similar

results with other standard mathematical functions.

Now the accuracy obtainable is limited by dr and therefore by

the precision constant c. If a computer were available giving

higher precision, not only in its arithmetic working but also

in its exponential and sine-cosine subroutines, then the attainable

accuracy would be correspondingly increased, for it is

always possible to obtain d = dr by using sufficiently large n.

For example, the CDC in triple precision would have c = 41, and

without any additional work we would be able to obtain J0(t)

correct to 35 d.p.

7. Conclusion

An almost universal method of numerical inversion has been

described which is applicable over very wide ranges of t. It

gives errors which can be made smaller than those obtained in

any other hitherto published method, and uses only modest amounts

of computer time. The method is simple, but requires the evaluation

of the transform F(s) at sets of complex points.

So great is the accuracy attainable that it is suggested that

transform inversion be considered as a method of solving any

mathematical problem the solution of which can be regarded as

a value or set of values of a function f(t) whose transform F(s)

can be found as a function of s.

Acknowledgment

It has already been mentioned in the Introduction that the method

described here was essentially contained in an unpublished Ph.D.

thesis, Green (1955). It seems appropriate therefore to indicate

which parts of the present paper are in essence to be found in the

thesis. They are the following: most of section 2, but with λ = 1


and σ = 0 in (12) and (16); the introduction of shift σ and

scaling factor λ (though not in combination) for the purpose

of bringing singularities inside L ; the use of a fixed λ

(e.g. λ = 2) for reducing errors for certain values of t (but

not the use of a fixed τ and hence varying λ = τ/t, which

is the key to the success of the present method); the error

analysis of section 3 leading to the components E0, E1, E2 (but

not the component Er, without which no proper understanding of

the behaviour of the error or rational choice of parameters is

possible); the condition (34) (with λ = 1) and the saddle-point

formulae (35), (36) for E1, E2 (but not the property that

b1 and b2 are practically functions of ρ only); Figs. 2-4

and Appendix 1. Needless to say, the thesis was a constant

source of reference in the earlier stages of the work leading

to the present paper, and the author is glad to make this

acknowledgment to his former student.

REFERENCES

BURNETT, D.S. and SOROKA, W.W. 1972 J. Inst. Maths Applics 10, 325-332.

CARSLAW, H.S. and JAEGER, J.C. 1948 Operational methods in applied mathematics. Oxford : University Press.

FILON, L.N.G. 1928 Proc. Roy. Soc. Edin. 49, 38-47.

GOODWIN, E.T. 1949 Proc. Camb. Phil. Soc. 45, 241-245.

GREEN, J.S. 1955 The calculation of the time-responses of linear systems. Ph.D. Thesis, University of London.

HARTREE, D.R. 1952 Numerical analysis. Oxford : University Press.

LEVIN, D. 1975 J. Comp. Appl. Maths. 1, 247-250.

LONGMAN, I.M. 1973 SIAM J. Appl. Math. 24, 429-440.

NAKHLA, M., SINGHAL, K. and VLACH, J. 1973 Proc. 16th Midwest Symposium on Circuit Theory 2, XIV.5-1-9.

PIESSENS, R. 1971 J. Eng. Math. 5, 1-9.

PIESSENS, R. 1972 J. Inst. Maths Applics 10, 185-192.

PIESSENS, R. 1974 A bibliography on numerical inversion of the Laplace transform. Report TW20, Applied Mathematics and Programming Division, Katholieke Universiteit, Leuven, Belgium.

PIESSENS, R. and BRANDERS, M. 1971 Proc. IEE 118, 1517-1522.

SALZER, H.E. 1955 M.T.A.C. 9, 164-177.

SHOHAT, J. 1940 Duke Math. J. 6, 615-626.


TRICOMI, F. 1935 R.C. Accad. Naz. Lincei, Cl. Sci. Fis. 13, 232-239, 420-426.

WEEKS, W.T. 1966 J. ACM 13, 419-426.

WIDDER, D.V. 1935 Duke Math. J. 1, 126-136.

Appendix 1 : S.P. estimation of E1 and E2.

The integral for E1 in (23) is of the form ∫ e^(w(z)) dz, with

w(z) = τS(z) + ℓn S′ + ℓn F(φ) − ℓn(e^(nz) − 1),    (A1)

where φ = φ(z) = λs + σ, s = S(z). Then

w′(z) = (τ + λP)S′ + S″/S′ − n e^(nz)/(e^(nz) − 1),    (A2)

w″(z) = (τ + λP)S″ + λ^2 P′(φ)S′^2 + (S″/S′)′ + n^2 e^(nz)/(e^(nz) − 1)^2,    (A3)

where P = P(φ) = F′(φ)/F(φ).

With S(z) given by (17), we find, for efficient computation of S′ etc.,

S′ = (S/z)(1 + z − S),

S″/S′ = 1 − 2S/z + 1/(1 + z − S),

(S″/S′)′ = (2S/z)(S/z − 1) − (1 − S′)/(1 + z − S)^2.    (A4)

The saddle-point z1 is a root of w′(z), and may be found by

a (complex) Newton process. A suitable starting value z10,

usually leading to rapid convergence of the iteration, is obtained

by considering the special case when F(s) = 1/s, and is given by

z10 = 2πi + (1 − i)√(πρ).    (A5)

Taking into account the conjugate saddle-point z̄1 (assuming F(s)

is real) we find by (8) the approximate S.P. formula

E1 ≈ E1s = Re{ e^(τS+σt) F(λS + σ) S′(z) / [(e^(nz) − 1) √(πw″(z)/2)] } at z = z1.    (A6)

It will be noticed that the value of w″(z1) required for E1s

will already have been found for the Newton algorithm.

The evaluation of (24) for E2 is almost identical: the only

change needed in the computer program is the replacement of n by

−n, and the corresponding replacement of √(πρ) in (A5) by √(πρ)/i,

since ρ = τ/n. Thus a starting value for the saddle-point z2 is

z20 = 2πi − (1 + i)√(πρ),    (A7)


and

E2 ≈ E2s = Re{ e^(τS+σt) F(λS + σ) S′(z) / [(e^(−nz) − 1) √(πw″(z)/2)] } at z = z2.    (A8)

Numerous checks have shown that (A6) gives an approximation to

E1 which is accurate to within one or two per cent; similarly for

(A8) and E2, except where a branch-point of F(s) abnormally

increases E2.
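The algebra of this appendix can be checked numerically. The sketch below is a modern reconstruction under assumptions: the parameter values are illustrative, and S(z) is taken to be z/(1 − e^(−z)), which reproduces the curve of (A10) with ν = 1. It implements S′, S″/S′ and (S″/S′)′ as in (A4), then runs the Newton process for z1 from the starting value (A5) in the case F(s) = 1/s, for which P(φ) = −1/φ and P′(φ) = 1/φ^2.

```python
import numpy as np

# Illustrative parameters (assumed, not taken from the paper's tables)
n, tau, lam, sigma = 20, 6.0, 6.0, 0.0
rho = tau / n

# S(z) = z/(1 - e^{-z}) maps z = 2i*theta to s = theta*cot(theta) + i*theta
S  = lambda z: z / (1 - np.exp(-z))
Sp = lambda z: S(z) / z * (1 + z - S(z))                     # S'        (A4)
Q  = lambda z: 1 - 2 * S(z) / z + 1 / (1 + z - S(z))         # S''/S'    (A4)
Qp = lambda z: (2 * S(z) / z) * (S(z) / z - 1) \
               - (1 - Sp(z)) / (1 + z - S(z)) ** 2           # (S''/S')' (A4)

def wp(z):   # w'(z) of (A2) with F(s) = 1/s, so P(phi) = -1/phi
    phi = lam * S(z) + sigma
    return (tau - lam / phi) * Sp(z) + Q(z) - n / (1 - np.exp(-n * z))

def wpp(z):  # w''(z) of (A3); P'(phi) = 1/phi^2
    phi = lam * S(z) + sigma
    e = np.exp(-n * z)
    return ((tau - lam / phi) * Q(z) * Sp(z)      # (tau + lam P) S''
            + lam ** 2 * Sp(z) ** 2 / phi ** 2    # lam^2 P' S'^2
            + Qp(z) + n ** 2 * e / (1 - e) ** 2)

z = 2j * np.pi + (1 - 1j) * np.sqrt(np.pi * rho)  # starting value z10 of (A5)
for _ in range(40):
    z -= wp(z) / wpp(z)                           # complex Newton process
print(z, abs(wp(z)))                              # saddle-point and residual
```

Comparing each closed form against a central-difference derivative provides a further check that (A2)-(A4) are mutually consistent.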

Appendix 2 : Generalized contours.

If the error analysis leading to (27) is examined, it will be

seen that it does not rest on the particular nature of the

function S(z): any function which maps the z-interval

M = (−2πi, 2πi) onto an s-curve similar in appearance to L would probably

do equally well, and would give errors E0, E1, E2 tending

exponentially to zero as n increases.

As a particular case, consider the family of mappings

s = (z/2)(1 + coth(z/2)) + az.    (A9)

Taking z = 2iθ, −π < θ < π, on M gives the s-curve

Lν : sν = θ cot θ + νiθ, ν = 2a + 1.    (A10)

In the special case ν = 1 (a = 0) we obtain the curve L. For

ν > 1, Lν consists of L expanded 'vertically' by a factor ν.

It is immediately obvious that by using an appreciable value of

ν, one can reduce the value of λ and hence τ required to bring a

singularity of F(s) inside L, and thus enable dr and so d to be

increased, though naturally at the cost of an increase in n.
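The equivalence of (A9) and (A10) is easy to verify numerically; in this sketch coth is written out as cosh/sinh, and the sample values of a and θ are arbitrary.

```python
import cmath, math

def s_map(z, a):
    # (A9): s = (z/2)(1 + coth(z/2)) + a z
    return (z / 2) * (1 + cmath.cosh(z / 2) / cmath.sinh(z / 2)) + a * z

a, theta = 0.5, 0.7          # so nu = 2a + 1 = 2
z = 2j * theta               # a point of the interval M
s_nu = theta / math.tan(theta) + 1j * (2 * a + 1) * theta   # (A10)
print(s_map(z, a), s_nu)     # the two values agree
```

Setting a = 0 recovers the original curve L: s = θ cot θ + iθ.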

The use of this new parameter ν entails slight changes to previous

formulae. We replace sc by sν in (12) and (15), and 1 by ν in (12). The

angles θν and the terms G, H are multiplied by ν in (16).

The strategy described in Section 4 involving the use of (44),

(46) and (47) can still be used, the only change being that q0 is

replaced by q0/ν. This strategy was used in obtaining J0(100)

correct to 21 d.p.

Many features of the use of the parameter ν have still to be

investigated, as has the possibility of the use of quite different

mapping functions and contours.

Table 1. Values of u = −Re(z(s)) for arg(s) = θ, |s| = r; κ = r(crit.)/r. [u is tabulated for θ = 0°(5°)165° against κ = 1.05 to 3.0, and for θ = 170°, 175°, 180° against r = 0.1 to 12.0.]

Table 2.

 ρ    .025  .05   .075  .10   .15   .20   .25   .30   .35   .40   .45   .50
 b1   .530  .726  .862  .966  1.118 1.221 1.290 1.332 1.352 1.353 1.338 1.306
 b2   .535  .743  .896  1.021 1.223 1.385 1.523 1.644 1.751 1.848 1.937 2.019

 ρ    .55   .60   .65   .70   .80   .90
 b1   1.260 1.199 1.124 1.033 .803  .491
 b2   2.094 2.165 2.232 2.295 2.410 2.515

Table 3a. Values of δ1r when c = 14.

 n \ τ    4   6   8  10  12  14  16  18  20
 10       5   5   3
 15       8   8   8   7   5
 20      10  11  11  11  10   9   7
 25      12  13  12  12  11  10   9   7   6
 30      14   …   5   4   3
 35       …   3
 40       …   2

Table 3b. Values of δ1r when c = 27.

 n \ τ    4   6   8  10  12  14  16  18  20  22  24  26  28  30  33  36
 10       5   5   3
 15       8   8   8   7   5
 20      10  11  11  10   9   7   4
 25      12  13  14  14  14  13  12  10   8   6
 30      14  15  17  17  17  17  16  15  14  12  10   8   4
 35      15  17  19  20  20  20  20  19  18  17  16  14  12   9   3
 40      16  19  21  22  23  23  22  21  20  19  18  17  16  16  12   8

(The rows for n = 45, 50, 55, 60, 65 and 100, with τ extending to 57, begin 18 20 22 24 24 …; 19 22 24 25 …; 20 23 25 …; 21 25 …; 22 26 …; 27 ….)

Table 4.

 θ      0.4   0.5   0.6   0.7   0.8   0.9   1.0   1.1   1.2   1.3   1.4   1.5
 K(θ)  1.055 1.088 1.129 1.181 1.244 1.320 1.412 1.523 1.658 1.820 2.018 2.261
 γ(θ)  0.272 0.345 0.420 0.499 0.583 0.673 0.770 0.876 0.993 1.123 1.269 1.437
 u(θ)  0.104 0.161 0.228 0.307 0.394 0.489 0.592 0.701 0.816 0.936 1.062 1.191
