Numerical Computation of Inverse Complete Elliptic
Integrals of First and Second Kind
Toshio Fukushima
National Astronomical Observatory of Japan, 2-21-1, Ohsawa, Mitaka, Tokyo 181-8588,
Japan
Abstract
We developed the numerical procedures to evaluate the inverse functions of
the complete elliptic integrals of the first and second kind, K(m) and E(m),
with respect to the parameter m. The evaluation is executed by inverting
eight sets of the truncated Taylor series expansions of the integrals in terms
of m or of −log(1 − m). The developed procedures are (1) so precise that
the maximum absolute errors are less than 3-5 machine epsilons, and (2)
30-40% faster than the evaluation of the integrals themselves by the fastest
procedures (Fukushima 2009a, 2011).
Keywords: complete elliptic integral; inversion
1. Introduction
1.1. Complete elliptic integrals of first and second kind
The complete elliptic integrals of the first and the second kind are defined
[1, Section 19.2(i)] as

$$K(m) \equiv \int_0^{\pi/2} \frac{d\theta}{\sqrt{1 - m\sin^2\theta}}, \qquad E(m) \equiv \int_0^{\pi/2} \sqrt{1 - m\sin^2\theta}\,d\theta, \qquad (1)$$

where m is called the parameter. They appear in various fields of mathematical
physics and engineering [2, 3, 4, 5]. In discussing the real-valued
integrals, we assume that m is in the standard domain,

$$0 \le m < 1. \qquad (2)$$
Email address: Toshio.Fukushima@nao.ac.jp (Toshio Fukushima)
Preprint submitted to Journal of Computational and Applied Mathematics, January 31, 2013
Figure 1: Dependence of the complete elliptic integrals on the parameter m.
Refer to [6, 7] for the process to reduce the other cases to the standard
one. See Figure 1 for the functional dependence of K(m) and E(m) on the
parameter m in the standard domain.
1.2. Numerical computation of complete elliptic integrals
Many methods have been developed to compute the values of K(m)
and/or E(m) when m is given. The popular iterative schemes are those
using (1) the arithmetic-geometric mean [1, Section 19.8(i)], (2) the Landen
transformation [8, 9], (3) the Bartky transformation [10, 11], or (4) the
duplication theorems [12, 13]. They are useful in computing the integrals in
arbitrary precision arithmetic.
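As a concrete illustration of scheme (1), the arithmetic-geometric mean yields K(m) in a handful of quadratically convergent steps. The sketch below is our own Python illustration, not one of the published procedures:

```python
from math import pi, sqrt

def ellipk_agm(m):
    """K(m) via the arithmetic-geometric mean: K = pi / (2 * agm(1, sqrt(1 - m))).

    Valid in the standard domain 0 <= m < 1; convergence is quadratic,
    so only a few iterations are needed in double precision.
    """
    a, b = 1.0, sqrt(1.0 - m)
    while abs(a - b) > 1e-15 * a:
        a, b = 0.5 * (a + b), sqrt(a * b)
    return pi / (2.0 * a)
```

For example, `ellipk_agm(0.5)` reproduces K(1/2) = 1.85407467730137... to full double precision.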
For practical purposes, however, their Chebyshev approximations [14, 15,
16, 17, 18] are sufficient. Recently, we developed a method based on their
Taylor series expansions [6]. It is as precise as the Chebyshev approximation
and runs twice as fast. Also, we extended it to the case of a general
complete elliptic integral of the second kind, αK(m) + βE(m) [7]. The
extended method is as precise as Bulirsch's cel2 and Carlson's rf and rd
and runs 5-18 times faster than them.
1.3. Necessity of inverse complete elliptic integrals
In practical applications, we sometimes encounter the inversion problem
of K(m) or E(m), namely the determination of m when K(m) or E(m)
is given. Especially, that of K(m) is required in estimating a physical
quantity related to m from 4K(m), the observed period of a certain physical
phenomenon expressed by the Jacobian elliptic functions, sn(u|m), cn(u|m),
and dn(u|m) [1, Chapter 22]. A classic example of such a phenomenon is the
motion of a simple gravity pendulum [1, Section 22.19(i)]. Another is the
torque-free rotation of an asymmetrical rigid body [19].
1.4. Outline of article
Despite such practical needs, the inversion problem of K(m) and E(m)
has not been studied comprehensively. This might be due to their logarithmic
singularity around m = 1. In this short article, we report a numerical method
to invert the integrals by adopting the same approach as in our previous work
on their evaluation [6, 7]: the piecewise polynomial approximation based on
the Taylor series expansion. In §2, we discuss the difficulties of the inversion
problem and find a clue to resolve them. In §3, we present the details of the
method we developed based on the hint found in §2. In §4, we show the
results of the numerical experiments conducted in order to illustrate the
computational errors and the CPU times of the new method.
2. Consideration
2.1. Slow convergence of inverted Maclaurin series
If m is expected to be small, we may compute it by inverting the Maclaurin
series expansion of the integrals by the Lagrange inversion theorem [1,
Section 1.10(vii)]. The Maclaurin series of the integrals are expressed in
terms of Gauss's hypergeometric series [1, Formulas 19.5.1 and 19.5.2] as

$$K(m) = \frac{\pi}{2}\,{}_2F_1\!\left(\frac{1}{2},\frac{1}{2};1;m\right), \qquad E(m) = \frac{\pi}{2}\,{}_2F_1\!\left(-\frac{1}{2},\frac{1}{2};1;m\right). \qquad (3)$$

Their inversion series around K(0) = E(0) = π/2 are given as

$$m \approx 4x - 9x^2 + \frac{31}{2}x^3 - \frac{185}{8}x^4 + \frac{507}{16}x^5 + \cdots, \qquad (4)$$

$$m \approx -4y - 3y^2 + \frac{1}{2}y^3 - \frac{5}{8}y^4 + \frac{15}{16}y^5 + \cdots, \qquad (5)$$

where

$$x \equiv \frac{2K(m)}{\pi} - 1, \qquad y \equiv \frac{2E(m)}{\pi} - 1. \qquad (6)$$
One may obtain the higher order expressions by issuing a command in
Mathematica [21] such as

InverseSeries[Series[EllipticK[m],{m,0,65}]]
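To see how the truncated inversion series behaves, Eq. (4) can be evaluated directly through fifth order. The following sketch is our own illustration, with K supplied by an AGM helper in place of the paper's celk; it recovers m = 0.1 to roughly six digits:

```python
from math import pi, sqrt

def ellipk_agm(m):
    """K(m) via the arithmetic-geometric mean (helper for the check below)."""
    a, b = 1.0, sqrt(1.0 - m)
    while abs(a - b) > 1e-15 * a:
        a, b = 0.5 * (a + b), sqrt(a * b)
    return pi / (2.0 * a)

def m_from_K_series(K):
    """Fifth-order truncation of the inversion series, Eq. (4), by Horner's method."""
    x = 2.0 * K / pi - 1.0
    return x * (4.0 + x * (-9.0 + x * (31.0 / 2.0
             + x * (-185.0 / 8.0 + x * (507.0 / 16.0)))))
```

For larger m the order required for a fixed accuracy grows rapidly, which is exactly what Figure 2 illustrates.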
The convergence of the hypergeometric series to compute K(m) and E(m) is
slow when m increases [22]. The same holds for the inverted series. Figure 2
illustrates the order of these inverted power series needed to assure a certain
level of accuracy, say 14 digits in this case, as functions of m. Apparently,
the required orders increase significantly as m → 1.
Figure 2: Necessary order of the truncated inversion series of K(m) and E(m) around
m = 0. Plotted are the minimum orders of the truncated inversion series of K(m) and
E(m) to provide 14 correct digits of m.
2.2. Difficulty in root finding
Another approach to the inversion is the application of a root finding
algorithm [5, Chapter 9] to the defining equations

$$K(m) = K, \quad \text{or} \quad E(m) = E. \qquad (7)$$

Although the precise and fast evaluation of K(m) and E(m) is established
[6, 7], it is not enough. We must find a robust and sufficiently accurate
approximate solution such that the subsequent repetition of an update formula,
such as that of the Newton method, surely converges.
However, this is a difficult task. The feature of the curves depicted in
Figure 1 supports this conjecture. Also, the convergence of the iteration
becomes slow when m → 1 because the absolute magnitudes of the derivatives
of K(m) and E(m) grow without bound when m → 1.
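For reference, a Newton iteration for Eq. (7) needs the derivative dK/dm = [E − (1 − m)K] / (2m(1 − m)) and a good starting guess. The sketch below is our own illustration, using an AGM evaluation of K and E rather than the paper's procedures; it converges comfortably from m = 0.5 for moderate targets, while stalling progressively as the target approaches m = 1:

```python
from math import pi, sqrt

def ellipke_agm(m):
    """K(m) and E(m) via the AGM (Abramowitz & Stegun, Section 17.6)."""
    a, b = 1.0, sqrt(1.0 - m)
    c, s, n = sqrt(m), 0.0, 0
    while c > 1e-15:
        s += 2.0 ** (n - 1) * c * c          # accumulate sum of 2^(n-1) c_n^2
        a, b, c, n = 0.5 * (a + b), sqrt(a * b), 0.5 * (a - b), n + 1
    K = pi / (2.0 * a)
    return K, K * (1.0 - s)

def invert_K_newton(K_target, m0=0.5, tol=1e-14):
    """Solve K(m) = K_target by Newton's method,
    using dK/dm = (E - (1 - m) K) / (2 m (1 - m))."""
    m = m0
    for _ in range(50):
        K, E = ellipke_agm(m)
        dKdm = (E - (1.0 - m) * K) / (2.0 * m * (1.0 - m))
        step = (K - K_target) / dKdm
        m -= step
        m = min(max(m, 1e-12), 1.0 - 1e-12)  # keep the iterate inside (0, 1)
        if abs(step) < tol:
            break
    return m
```

The pair returned by `ellipke_agm` can be checked against Legendre's relation, E(m)K(1−m) + E(1−m)K(m) − K(m)K(1−m) = π/2, which holds identically.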
2.3. Logarithmic singularity
The difficulties explained in the previous subsections are caused by the
logarithmic singularity of K(m) and E(m) at m = 1 [1, Formulas 19.12.1
and 19.12.2]. Their simple expressions around the singularity are given [7] as

$$K(m) = K_1 - K_X \log m_c, \qquad E(m) = E_1 - E_X \log m_c, \qquad (8)$$

where m_c ≡ 1 − m is the complementary parameter and K_1 through E_X are
the quantities being regular around m_c = 0 as

$$K_1 \equiv \frac{K(m_c)}{\pi} \log\!\left(\frac{m_c}{q(m_c)}\right) = 2\log 2 + \left(\frac{\log 2}{2} - \frac{1}{4}\right) m_c + \cdots, \qquad (9)$$

$$K_X \equiv \frac{K(m_c)}{\pi} = \frac{1}{2} + \frac{m_c}{8} + \cdots, \qquad (10)$$

$$E_1 \equiv \frac{1}{2K_X} + \left(1 - \frac{E(m_c)}{K(m_c)}\right) K_1 = 1 + \left(\log 2 - \frac{1}{4}\right) m_c + \cdots, \qquad (11)$$

$$E_X \equiv \left(1 - \frac{E(m_c)}{K(m_c)}\right) K_X = \frac{m_c}{4} + \cdots, \qquad (12)$$

where q(m) is Jacobi's nome [1, Formula 19.5.5]. These are the base
expressions of the Chebyshev approximations [14, 16].
Figure 3: Dependence of complete elliptic integrals on p. Shown are K(m) and
r ≡ −log(E(m) − 1) as functions of the alternate variable, p ≡ −log m_c.
2.4. Variable transformation
The asymptotic forms of the integrals around m = 1 shown in the previous
subsection prompt us to change the main variable from m to

$$p \equiv -\log m_c. \qquad (13)$$

The domain of p is semi-infinite since p → +∞ when m → 1. However, its
practical domain can be limited since we may set the meaningful maximum of
p as p_MAX ≡ −log ε, where ε is the machine epsilon. It is at most a few tens,
as p_MAX ≈ 16.6 and 36.7 in the single and the double precision environments,
respectively.
If m_c is sufficiently small, say less than ε, we may retain only the leading
terms in Eqs (9) and (10). Thus, we obtain an asymptotic expression of
K ≡ K(m) as

$$K = 2\log 2 + \frac{p}{2} + \cdots. \qquad (14)$$

This leads to the inversion in terms of p as

$$p = 2K - 4\log 2 + \cdots. \qquad (15)$$

Namely, p is almost linear with respect to K when K is sufficiently large, say
larger than 2. Figure 3 confirms this conjecture.
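Eq. (15) is easy to verify numerically. In the following sketch (our own illustration, with K again obtained from an AGM helper rather than the paper's procedures), the two-term inversion already recovers p to better than six digits at m = 1 − 10^{−8}:

```python
from math import pi, log, sqrt

def ellipk_agm(m):
    """K(m) via the arithmetic-geometric mean."""
    a, b = 1.0, sqrt(1.0 - m)
    while abs(a - b) > 1e-15 * a:
        a, b = 0.5 * (a + b), sqrt(a * b)
    return pi / (2.0 * a)

mc = 1.0e-8                          # complementary parameter, m = 1 - mc
p_exact = -log(mc)                   # p = -log(mc), roughly 18.42 here
K = ellipk_agm(1.0 - mc)
p_approx = 2.0 * K - 4.0 * log(2.0)  # two-term inversion, Eq. (15)
```

The residual is of order m_c p, consistent with the neglected terms of Eqs (9) and (10).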
In the case of E ≡ E(m), we must keep up to the first order terms in Eqs
(11) and (12). Then, we arrive at a little more complicated expression,

$$E = 1 + \left(p + 4\log 2 - 1\right)\frac{e^{-p}}{4} + \cdots. \qquad (16)$$

We rewrite this as

$$r \equiv -\log(E - 1) = p - \log\!\left(\frac{p-1}{4} + \log 2\right) + \cdots. \qquad (17)$$

The variability of the second term on the right hand side is relatively small,
as

$$-2.264 < -\log\!\left(\frac{p-1}{4} + \log 2\right) < 0.814, \qquad (18)$$

when 0 < p < 36.7. Thus, we ignore its contribution and obtain a solution
as

$$p \approx r. \qquad (19)$$

This is a crude approximation. A better solution would be obtained by
directly solving Eq. (16) with the help of W_{−1}(x), the secondary branch of
the Lambert W-function [21]. Since the numerical evaluation of W_{−1}(x) is
itself a difficult problem [23, 24, 25], we do not go further in this direction.
At any rate, Figure 3 indicates that K and r are almost linear with respect
to p. Then, they may be candidates for the functions to be expanded in terms
of the new variable, p.
3. Method
3.1. Strategy
Based on the discussions in §2, we developed a method to invert K and
E with respect to m in the standard domain, 0 ≤ m < 1. Let us denote the
inverted solutions by m_K(K) and m_E(E), respectively. They are defined as
the functions satisfying the relations

$$K(m_K(x)) = x, \qquad E(m_E(x)) = x. \qquad (20)$$

The key idea is splitting the integral value domain into two regions: that
corresponding to not-so-large m, say less than 0.9 or so, and the rest.
In the former region, we approximate m by a piecewise polynomial of K
or E. Each piece of the approximate polynomials is obtained by inverting
the Taylor series expansion of K(m) and E(m) around a certain reference
value of m. In the second region, we approximate p ≡ −log m_c by a piecewise
polynomial of K or r ≡ −log(E − 1). Similarly, each piece of the approximate
polynomials is obtained by inverting the Taylor series expansion of K(m(p))
and −log(E(m(p)) − 1) around a certain reference value of p, where

$$m(p) \equiv 1 - e^{-p}. \qquad (21)$$

Once p is obtained, we can numerically evaluate m by this relation.
In the double precision environment, we limit the order of the approximate
polynomials to less than 19. This is in order to reduce the typical
computational time of the inverted solutions down to the level of that of
elementary functions like exp. Under this restriction, we try to minimize
the number of pieces by adjusting the reference values in terms of m or p.
After the approximate polynomials are fixed, we try to apply the Lanczos
economization [5, Section 5.11] to them. Namely, we rewrite each piece of
them into a finite series of the Chebyshev polynomials of the first kind [26].
Except for the cases near the singularity, the magnitude of the series coefficients
smoothly decreases as the degree of the Chebyshev polynomial increases.
In that case, by removing the Chebyshev polynomials of some highest
degrees from the series, we can reduce the degree of the original approximate
polynomials significantly while keeping their precision practically unchanged.
It is well known that a finite linear sum of the Chebyshev polynomials
can be efficiently evaluated by Clenshaw's summation algorithm [26,
Section 3.2]. See also [27, Section 3.7]. Nevertheless, the fastest evaluation is
achieved by Horner's method applied to the power series expanded around the
middle point of the defining interval. The coefficients of the rearranged power
series are rigorously obtained from the Chebyshev polynomial coefficients by
a conversion procedure such as that given in Appendix A.2.
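With the zeroth coefficient taken at full weight (our convention below, matching Eq. (23)), Clenshaw's recurrence reads b_k = c_k + 2t b_{k+1} − b_{k+2}, with the sum recovered as c_0 + t b_1 − b_2. A minimal sketch of our own, with made-up coefficients:

```python
from math import cos, acos

def clenshaw(c, t):
    """Evaluate sum_{n=0}^{N} c[n] T_n(t), with c[0] at full weight (no divisor 2)."""
    b1 = b2 = 0.0
    for ck in reversed(c[1:]):            # backward recurrence b_k
        b1, b2 = ck + 2.0 * t * b1 - b2, b1
    return c[0] + t * b1 - b2

# Made-up coefficients for the demonstration only.
c = [1.0, 0.5, -0.25, 0.125]
t = 0.3
# Direct evaluation via T_n(t) = cos(n * arccos(t)) for comparison.
direct = sum(cn * cos(n * acos(t)) for n, cn in enumerate(c))
```

Both evaluations agree to machine precision; Horner's method on the converted power series saves a few operations per term, which is why it is preferred here.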
In place of the economized power series, however, we present here the
Chebyshev polynomial coefficients such that one can truncate them at an
arbitrary precision. This will be useful in constructing the single precision
procedures. Skipping the details, we show the final results below.
3.2. Case of small parameter
When the solution m is not expected to be large, say less than 0.9 or so,
we evaluate m_K and m_E by their piecewise approximate polynomials derived
from the inverted Taylor series expansions. The piecewise polynomials consist
of four polynomials corresponding to the four sub domains of the integral
values denoted by A, B, C, and D. Table 1 shows the integral values separating
these sub domains. For example, as m_K, we use the approximate
polynomial obtained in the sub domain A if K_A ≤ K < K_B, that in the sub
domain B if K_B ≤ K < K_C, and so on.
In constructing the piecewise polynomials, we set m_0, the reference value
of m in each sub domain, as 0.2, 0.5, 0.7, and 0.85 for K and 0.2, 0.5, 0.7,
and 0.83 for E. In each sub domain, we first obtained the approximate
polynomials in the form of truncated Taylor series such as

$$m_K = \sum_{n=0}^{N} a_n (K - K_0)^n, \qquad (22)$$

where K_0 ≡ K(m_0) is the reference integral value. We determined the power
Table 1: Separation values of K and E. Listed are the values of K and E used to select
the best approximate polynomial. The indexing means that, for K, the sub domain A is
the interval [K_A, K_B), the sub domain B is the interval [K_B, K_C), ..., and the sub
domain H is the interval [K_H, ∞). On the other hand, for E, the sub domain A is the
interval (E_B, E_A], the sub domain B is the interval (E_C, E_B], ..., and the sub domain
H is the interval [1, E_H]. Also, we added the values of r ≡ −log(E − 1) for the indices
E through H.
Index K E r
A 1.5707963267948966 1.5707963267948966
B 1.7539969906494259 1.4111237670965148
C 1.9495677498060259 1.2907868683852396
D 2.1998873957149437 1.1878333138534634
E 2.5687780604965515 1.1199612906642078 2.1286317858706077
F 3.6189656655262064 1.0556008611359877 2.8895565897882066
G 5.3616789503774974 1.0048203911429256 5.3349002042275533
H 8.7012952137792582 1.0000443476373730 10.0234511229381364
series coefficients, a_n, by issuing commands of Mathematica [21] like

InverseSeries[Series[EllipticK[m],{m,0.2,17}]]

which gives those of m_K in the sub domain A.
Next, we examined the errors of the piecewise polynomials, m_K(K) and
m_E(E), by comparing the corresponding integral values, K(m_K(K)) and
E(m_E(E)), with the original input values of K and E, respectively. We
conducted the comparison in the quadruple precision by using qcel, the
quadruple precision extension of Bulirsch's cel [11]. In the errors of some
pieces, we found a constant offset of the order of a few machine epsilons. It
seems to be caused by the accumulation of round-off errors in the inversion
process. At any rate, we adjusted a_0 in those cases.
Third, by balancing the measured errors of the approximate polynomials
obtained in the adjacent sub domains, we determined their separation values
in terms of m as 0.365, 0.600, 0.773, and 0.898 for K and 0.375, 0.614, 0.786,
and 0.881 for E. These are translated into the corresponding integral values
as K_B = K(0.365), ..., K_E = K(0.898), and E_B = E(0.375), ..., E_E =
E(0.881). They are listed in Table 1 together with the lower end values,
K_A ≡ K(0) = π/2 and E_A ≡ E(0) = π/2.
Finally, we transformed the obtained polynomials into finite sums of
Chebyshev polynomials as

$$m_K = \sum_{n=0}^{N} c_n T_n(t) = c_0 T_0(t) + c_1 T_1(t) + c_2 T_2(t) + \cdots, \qquad (23)$$

where N is the degree of the obtained polynomial, T_n(t) is the Chebyshev
polynomial of the first kind of degree n [1, Table 18.3.1], and c_n is its n-th
coefficient. Appendix A.1 describes an algorithm to do the conversion from
a_n to c_n. Notice that we define the zeroth coefficient, c_0, without the divisor
2. This is different from the definition of the Chebyshev expansion in some
of the literature [26, 5, 27].
Meanwhile, the argument of the Chebyshev polynomials, t, which is in
the range [−1, +1], is computed from the integral values as

$$t \equiv a(K - b), \qquad (24)$$

where

$$a \equiv \frac{2}{K_U - K_L}, \qquad b \equiv \frac{K_U + K_L}{2}, \qquad (25)$$

are the transformation coefficients determined from K_U and K_L, the upper
and lower ends of the sub domain. These end values are already listed in
Table 1. We conducted exactly the same treatment for E. The resulting
Chebyshev polynomial coefficients and the transformation coefficients are
explicitly given in Tables 2 and 3 for m_K and m_E, respectively.
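Putting Table 1 and Table 2 together, the sub domain A piece of m_K can be evaluated as follows. This sketch is our own illustration: K is supplied by an AGM helper in place of the paper's celk, and the Table 2 coefficients (scaled by 10^{−17}) are truncated after n = 8, so the result is accurate to roughly the 10^{−12} level rather than the full double precision of the complete series:

```python
from math import pi, sqrt

def ellipk_agm(m):
    """K(m) via the arithmetic-geometric mean (stand-in for celk)."""
    a, b = 1.0, sqrt(1.0 - m)
    while abs(a - b) > 1e-15 * a:
        a, b = 0.5 * (a + b), sqrt(a * b)
    return pi / (2.0 * a)

# Sub domain A of Table 2: coefficients c_n * 10^17, truncated at n = 8.
C_A = [x * 1e-17 for x in (+19401242299345095.0, +18194753782220755.0,
                           -1148955521044524.0, +55159476593419.0,
                           -2283669657743.0, +86634143620.0,
                           -3105064734.0, +106925302.0, -3574353.0)]
A_A = +10.916991008220921      # transformation coefficient a (Table 2, A)
B_A = +1.6623966587221613      # transformation coefficient b (Table 2, A)

def mK_subdomain_A(K):
    """Evaluate m_K by the Chebyshev series, Eq. (23), with t = a(K - b)."""
    t = A_A * (K - B_A)
    b1 = b2 = 0.0
    for ck in reversed(C_A[1:]):           # Clenshaw summation
        b1, b2 = ck + 2.0 * t * b1 - b2, b1
    return C_A[0] + t * b1 - b2
```

The round trip K(m) → m_K(K) reproduces, e.g., m = 0.2 and m = 0.3 (both inside sub domain A, m < 0.365).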
3.3. Case of large parameter
If K ≥ K_E or E ≤ E_E, the parameter m becomes significantly large, say
larger than 0.89 or so. In that case, we approximate p ≡ −log m_c by its
piecewise polynomial in terms of K or r ≡ −log(E − 1). In constructing the
piecewise polynomials, we set p_0, the reference value of p in each sub domain,
as 3.3, 6.0, 10.5, and 17.6 for K and 1.8, 4.2, 8.0, and 15.0 for r. In each sub
domain, the approximate polynomials are written as

$$p_K = \sum_{n=0}^{N_K} p_n (K - K_0)^n, \qquad p_E = \sum_{n=0}^{N_E} q_n (r - r_0)^n, \qquad (26)$$

where

$$K_0 \equiv K\!\left(1 - e^{-p_0}\right), \qquad r_0 \equiv -\log\!\left(E\!\left(1 - e^{-p_0}\right) - 1\right), \qquad (27)$$
Table 2: Chebyshev polynomial coefficients of m_K: sub domains A through D. The
numerical values of the coefficients, c_n, are multiplied by a certain power of 10 such as
10^17 in order to suppress the leading zeros. For example, the zeroth coefficients must
read +0.19···, +0.49···, +0.69···, and +0.84···. These zeroth coefficients are defined
without the divisor 2. Namely, m_K is approximated as m_K = c_0 T_0(t) + c_1 T_1(t) + ···,
where T_n is the Chebyshev polynomial of the first kind of degree n. Also, we listed a and
b, the transformation coefficients from K to t, the argument of the Chebyshev polynomials,
such that t = a(K − b). All the values are shown with a few more digits than necessary
in the double precision computation. This is in order to avoid round-off errors in the
implementation.

 n   10^17 c_n (A)         10^17 c_n (B)         10^17 c_n (C)         10^17 c_n (D)
 0   +19401242299345095    +48990719448270469    +69305133428229853    +84201552855821970
 1   +18194753782220755    +11715040871065477     +8613490854834244     +6201036207545733
 2    -1148955521044524      -739311643473559      -653420708906968      -648514448920333
 3      +55159476593419       +34907474599398       +36436506978491       +48795524452071
 4       -2283669657743        -1406019129690        -1709827595538        -3029715923753
 5         +86634143620          +51595118298          +72527965477         +167839458005
 6          -3105064734           -1783762901           -2887658427           -8670547452
 7           +106925302             +59157015            +110075215            +427595012
 8             -3574353              -1902485              -4063750             -20387219
 9              +116785                +59756               +146392               +947243
10                -3747                 -1842                 -5173                -43128
11                 +118                   +56                  +180                 +1932
12                                                                                    -85
 a   +10.916991008220921   +10.226477662739623   +7.9897843924233070   +5.4216606462081334
 b   +1.6623966587221613   +1.8517823702277259   +2.0747275727604848   +2.3843327281057476
Table 3: Chebyshev polynomial coefficients of m_E: sub domains A through D. Same as
Table 2 but for m_E. The zeroth coefficients must read +0.19···, +0.49···, +0.70···, and
+0.83···. Also, the transformation from E to t is expressed as t = a(E − b).

 n   10^18 c_n (A)          10^17 c_n (B)          10^17 c_n (C)          10^17 c_n (D)
 0   +191491596762105577    +49694125666025830     +70197876373537509     +83448854563061028
 1   +187521750862407743    +11951688581417314      +8602065171825419      +4751257054900759
 2     -3990799269547337      -244061880474704       -197779951710636        -98795130570153
 3       -21715474908196        -1685618407984         -2059553190095         -1253506060813
 4         -795715611047          -63628928178           -96047713357           -59189763242
 5          -35290453296           -2953987502            -5591458166            -3530719901
 6           -1771319411            -156070789             -372015072             -241289912
 7             -96704988              -8986478              -27007660              -18000725
 8              -5606172               -549838               -2084262               -1427478
 9               -339873                -35191                -168273                -118417
10                -21330                 -2332                 -14065                 -10167
11                 -1376                  -159                  -1208                   -897
12                  -106                                                                 -81
 a   -12.5256337330469197   -16.6200061778109266   -19.4262355398589756   -29.4672223697113173
 b   +1.4909600469457057    +1.3509553177408772    +1.2393100911193515    +1.1538973022588356
are the reference function values. We obtained the approximate polynomials
by issuing commands of Mathematica [21] like

InverseSeries[Series[-Log[EllipticE[1-Exp[-p]]-1],{p,1.8,17}]]

which gives the coefficients of p_E for p_0 = 1.8.
After constructing the approximate polynomials, we again examined their
errors by using qcel, and determined the separation values in terms of p
as 4.45, 7.95, and 14.63 for K and 3.1, 6.0, and 11.2 for E. These are
translated into the corresponding function values as K_F = K(1 − e^{−4.45}),
..., K_H = K(1 − e^{−14.63}), E_F = E(1 − e^{−3.1}), ..., E_H = E(1 − e^{−11.2}),
and r_E = −log(E_E − 1), ..., r_H = −log(E_H − 1). They are already listed
in Table 1.
For the sub domains E and F, we again transformed the obtained polynomials
into the form of a series of Chebyshev polynomials such as

$$p_E = \sum_{n=0}^{N} c_n T_n(t), \qquad (28)$$

where the transformation of the argument becomes

$$t \equiv a(r - b), \qquad (29)$$

with the transformation coefficients defined as

$$a \equiv \frac{2}{r_U - r_L}, \qquad b \equiv \frac{r_U + r_L}{2}, \qquad (30)$$

where r_U and r_L are the upper and lower ends of the sub domain in terms of r,
which are already listed in Table 1. The Chebyshev polynomial coefficients
and the transformation coefficients are explicitly given in Tables 4 and 5.
Noting that |∆m| ≈ e^{−p}|∆p|, we may truncate the Chebyshev expansion in
terms of p earlier than that in terms of m.
3.4. Case near the singularity
For the sub domains G and H, we find that the Lanczos economization is
not effective in reducing the computational labor. Of course, one may execute
the transformation into a Chebyshev polynomial series. For the sub domain H,
which is semi-infinite, this is practically feasible by setting the upper end as
K_I = +19.755 or r_I = +34.472, which roughly correspond to the numerical
values of K and r when m = 1 − ε_double, where ε_double ≡ 2^{−53} ≈ 1.11 × 10^{−16} is
the double precision machine epsilon.
However, we learn that the magnitude of the resulting coefficients of the
Chebyshev polynomials decreases quite slowly. This is because the minimax
type approximation with respect to p does not guarantee a uniform error
distribution in terms of m since |∆m| ≈ e^{−p}|∆p|. Therefore, we abandon
the economization and present the original Taylor series themselves in
Tables 6 and 7. We truncated them such that the numbers of terms are
the necessary minimum to assure that the maximum of |∆m| remains at the
level of ε_double.
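As an illustration of Table 6, the sub domain G series can be exercised at p = 10, i.e. K ≈ 6.3864, inside [K_G, K_H). The sketch below is our own: K comes from an AGM evaluation instead of the paper's celk, and the series is truncated at n = 5, which already recovers p essentially to the accuracy of the supplied K:

```python
from math import pi, sqrt, exp

def ellipk_agm(m):
    """K(m) via the arithmetic-geometric mean (stand-in for celk)."""
    a, b = 1.0, sqrt(1.0 - m)
    while abs(a - b) > 1e-15 * a:
        a, b = 0.5 * (a + b), sqrt(a * b)
    return pi / (2.0 * a)

# Sub domain G of Table 6 (reference value p0 = 10.5), truncated at n = 5.
A_G = (+10.500000000000, +2.0001414497655, -1.277026552e-4,
       +7.59811036e-5, -3.34231082e-5, +1.15490151e-5)
K0_G = +6.6363331625866755

def pK_subdomain_G(K):
    """Evaluate p_K by the Taylor series of Table 6, using Horner's method."""
    d = K - K0_G
    s = 0.0
    for an in reversed(A_G):
        s = an + d * s
    return s

p = 10.0
K = ellipk_agm(1.0 - exp(-p))
```

Note that the leading coefficients, a_0 = p_0 and a_1 ≈ 2, reflect Eq. (15): p is nearly linear in K with slope 2 in this region.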
4. Numerical Experiments
Let us examine the computational cost and performance of the method
described in the previous section. The absolute errors of mobtained by the
method in the double precision environment are illustrated in Figs 4 and
5. They are measured by using qcel, the quadruple precision extension of
Bulirsch’s procedure to evaluate the general complete elliptic integral [11].
Table 4: Chebyshev polynomial coefficients of p_K: sub domains E and F. Same as Table
2 but for p_K. The zeroth coefficients must read +3.37··· and +6.20···. The argument
transformation becomes t = a(K − b) again. In truncating the series of Chebyshev
polynomials, one should note that the final error in m is reduced by the factor e^{−p}, as
|∆m| ≈ e^{−p}|∆p|. The maximum reduction factor in each sub domain is also listed.

 n   10^15 c_n (E)        10^14 c_n (F)
 0   +3372842411096568    +620234816387263
 1   +1082799019684612    +174947955583875
 2      -6377916757929       -226305230130
 3       +804553258630        +50978226844
 4        -72934012596         -8407174303
 5         +5172172149         +1058249590
 6          -325770457          -103492197
 7           +21898786            +7909234
 8            -1696166             -487701
 9             +136104              +30082
10              -10427               -2890
11                +780                +372
12                 -60                 -42
13                  +5                  +4
 a   +1.9044216389732810  +1.1476357111552424
 b   +3.0938718630113789  +4.4903223079518519
 max e^{-p}   0.102         0.0450
Table 5: Chebyshev polynomial coefficients of p_E: sub domains E and F. Same as Table
4 but for p_E. The zeroth coefficients must read +2.62··· and +4.56···. The argument
transformation is written as t = a(r − b) where r ≡ −log(E − 1).

 n   10^15 c_n (E)         10^15 c_n (F)
 0   +2621927976267172     +4563857924247435
 1    +480434270533396     +1448978332531438
 2      -2421516965161       -13787853000146
 3        +60368511503       +1017879586053
 4         -1103238537         -69972247182
 5            +4267375          +3797035826
 6             +654240           -100760933
 7              -30728             -8981925
 8                +745            +1749497
 9                  -5             -170996
10                                  +11351
11                                    -400
12                                     -22
 a   +2.62838060962536483  +0.817880966989805936
 b   +2.5090941878294072   +4.1122283970078800
 max e^{-p}   0.119          0.0450
Table 6: Taylor series coefficients of p_K: sub domains G and H. Listed are the coefficients
of the power series expression p_K = Σ_{n=0}^{N} a_n (K − K_0)^n.

 n    a_n (G)              a_n (H)
 0    +10.500000000000     +17.600000000000
 1    +2.0001414497655     +2.0000001973566
 2    -1.277026552E-4      -1.859964E-7
 3    +7.59811036E-5       +1.164242E-7
 4    -3.34231082E-5       -5.44254E-8
 5    +1.15490151E-5       +2.02555E-8
 6    -3.2468658E-6        -6.2470E-9
 7    +7.573936E-7         +1.6406E-9
 8    -1.476109E-7         -3.74E-10
 9    +2.383963E-8         +7.5E-11
10    -3.07883E-9          -1.3E-11
11    +2.83187E-10
12    -9.125E-12
13    -2.572E-12
14    +5.45E-13
15    -3.E-14
16    -2.E-14
17    +6.2E-15
 K_0  +6.6363331625866755  +10.186294413299100
Table 7: Taylor series coefficients of p_E: sub domains G and H. Same as Table 6 but for
p_E, expressed as p_E = Σ_{n=0}^{N} a_n (r − r_0)^n where r ≡ −log(E − 1).

 n    a_n (G)                    a_n (H)
 0    +8.000000000000000000      +15.00000000000000000
 1    +1.1138558445808138764     +1.0634010131365226997
 2    -7.160737009252904412E-3   -2.137214443579717823E-3
 3    +6.14816637761486272E-4    +9.8905527968533563E-5
 4    -5.7949395860221721E-5     -5.240426861207633E-6
 5    +5.451526070327390E-6      +2.99271455626484E-7
 6    -4.80075731204772E-7       -1.7869634729734E-8
 7    +3.7093529426652E-8        +1.092804281815E-9
 8    -2.222894130707E-9         -6.7254295377E-11
 9    +5.377967260E-11           +4.096019951E-12
10    +1.071061734E-11           -2.42723757E-13
11    -2.33974846E-12            +1.3740808E-14
12    +3.0858084E-13             -7.2627E-16
13    -3.239999E-14              +3.455E-17
14    +2.84089E-15               -1.36E-18
15    -2.0071E-16                +3.E-20
16    +9.0E-18                   -1.4E-21
17    +3.E-19                    +7.E-23
18    -2.E-19
 r_0  +7.10660216677717055       +13.5665483242003067
Figure 4: Parameter dependence of the errors of the inverse complete elliptic integrals: case
of small parameters. Shown are the absolute errors of the double precision computation
of m_K(K) and m_E(E) as functions of their values. The unit of the errors is the double
precision machine epsilon, ε_double ≈ 1.11 × 10^{−16}.
Figure 5: Parameter dependence of the errors of the inverse complete elliptic integrals: case
of large parameters. Same as Fig. 4 but as functions of p ≡ −log(1 − m). The maximum
value of p shown here is that corresponding to the extreme case, m = 1 − ε_double.
More specifically speaking, we first prepare m in the quadruple precision
environment. Next, we compute K(m) and E(m) in the quadruple precision
by qcel as

$$K(m) = \mathrm{qcel}\!\left(\sqrt{1.0 - m},\ 1.0,\ 1.0,\ 1.0\right), \qquad (31)$$

$$E(m) = \mathrm{qcel}\!\left(\sqrt{1.0 - m},\ 1.0,\ 1.0,\ 1.0 - m\right). \qquad (32)$$

Third, we compute the double precision values of m_K and m_E from the
double precision values of K(m) and E(m) using the procedures described in
the previous section, namely the finite series of the Chebyshev polynomials
in the sub domains A through F and the inverted Taylor series in the sub
domains G and H. Finally, we take the differences of m_K and m_E from the
original m in the quadruple precision environment.
Figs 4 and 5 show that, in the double precision computation, the magnitudes
of the absolute errors of m_K and m_E are less than 3 and 5 machine
epsilons, respectively.
On the other hand, Table 8 compares the CPU times of the fastest procedures
to compute K(m) and E(m) [6, 7] as well as the procedures to obtain
m_K(K) and m_E(E) using Horner's method to evaluate the economized or
original power series. The coefficients of the economized power series are
obtained from the tables in the previous section by the conversion procedure
described in Appendix A.2. The CPU times are averaged over 2^26 grid points
evenly distributed in the parameter domain, 0 ≤ m < 1. The measurements
are conducted on a PC with an Intel Core i7-2675QM running at a 2.20 GHz
clock under Windows 7.
Table 8 shows that the procedures to compute the inverse functions of
K(m) and E(m) are 30 and 40% faster than those to compute the integrals
themselves [6, 7], respectively. This is mainly due to the faster convergence
of the inverted Taylor series compared with the original Taylor series, and
partly due to the Lanczos economization of the polynomial expressions.
Table 8: CPU time comparison. Listed are the averaged CPU times of some procedures to
compute the complete elliptic integrals and their inverses in the double precision environment.
The unit of CPU time is nanoseconds on a recent consumer PC with an Intel Core
i7-2675QM running at a 2.20 GHz clock.

 Target    Procedure   Reference      CPU time
 K(m)      celk        [6]            71
 m_K(K)    icelk       This article   55
 E(m)      celbd       [7]            83
 m_E(E)    icele       This article   58
5. Conclusion
We developed the numerical procedures to compute m_K(K) and m_E(E),
the inverse complete elliptic integrals of the first and the second kind, K ≡
K(m) and E ≡ E(m), in the double precision environment, respectively.
The key idea of the procedures is splitting the integral value domain into two
regions: that corresponding to not-so-large m, say smaller than 0.9 or so,
and that near the logarithmic singularity, m = 1. In the former region, we
obtained a piecewise power series approximation of the inverse functions by
using the inverted Taylor series expansion with respect to m. In the latter
region, we (1) changed the main variable from m to p ≡ −log(1 − m), (2)
regarded K or r ≡ −log(E − 1) as a function of p, and (3) inverted the Taylor
series of K and r with respect to p. The resulting piecewise polynomials
consist of four polynomials for each region. Their selection rules are determined
from the error measurement of the obtained polynomials by means of the
quadruple precision computation of the integral values. Except for the two
sub domains near the singularity point, m = 1, we rearranged the polynomial
expressions into finite series of Chebyshev polynomials such that one may
truncate them at an arbitrary level of precision. This makes it easy to obtain
the single precision procedures. Thanks to the effectiveness of the
divide-and-rule policy, the developed procedures are so precise that, in the double
precision environment, the maximum absolute errors of m_K and m_E are less
than 3 and 5 machine epsilons, respectively. Also, the procedures run
significantly faster than the fastest procedures of the integral value computation
[6].
The double precision Fortran functions of the developed procedures, icelk
and icele, are available from the author's personal Web page at ResearchGate:

https://www.researchgate.net/profile/Toshio_Fukushima/
Acknowledgments
The author thanks the anonymous referee for much valuable advice improving
the quality of the present article, including the application of the
Lanczos economization.
Appendix A. Transformation between power series and Chebyshev polynomial series
Any polynomial can be expressed in two ways, as a power series and as a
series of Chebyshev polynomials [26]. Namely, it is written as

$$\sum_{n=0}^{N} a_n x^n = \sum_{n=0}^{N} c_n T_n(x), \qquad (A.1)$$

where N is the degree of the polynomial, T_n(x) is the Chebyshev polynomial
of the first kind of degree n [1, Table 18.3.1], and we assumed that the
argument x is in the standard interval, [−1, +1]. Below, we summarize the
transformation between the coefficients of the power series, {a_n}, and those
of the Chebyshev polynomials, {c_n}.
Appendix A.1. From power series to Chebyshev polynomial series
It is well known that T_n(x) is a polynomial of degree n with integer
coefficients [27, Eq. (3.32)], namely

$$T_n(x) = \sum_{\ell=0}^{\lfloor n/2 \rfloor} T_{n\ell}\, x^{n-2\ell}, \qquad (A.2)$$

where T_{nℓ} is an integer and ⌊x⌋ is the floor function. Note that the range of
the second index is 0 ≤ ℓ ≤ ⌊n/2⌋ ≤ ⌊N/2⌋. Its length is roughly half of
that of the first one since 0 ≤ n ≤ N.
The explicit expression of T_{nℓ} is known [27, Eq. (3.33)] as

$$T_{00} = T_{10} = 1, \qquad (A.3)$$

$$T_{n\ell} = (-1)^{\ell}\, 2^{n-2\ell-1}\, \frac{n}{n-\ell} \binom{n-\ell}{\ell}. \quad (2 \le n,\ 0 \le \ell \le \lfloor n/2 \rfloor) \qquad (A.4)$$

The latter is simplified when n = 2ℓ [27, Eq. (3.34)] as

$$T_{2\ell,\,\ell} = (-1)^{\ell}. \qquad (A.5)$$

Instead of evaluating these expressions, we compute T_{nℓ} by recursion as

$$T_{n+1,\,0} = 2T_{n0}, \quad (1 \le n \le N-1) \qquad (A.6)$$

$$T_{n+1,\,\ell} = 2T_{n\ell} - T_{n-1,\,\ell-1}, \quad (2 \le n \le N-1,\ 1 \le \ell \le \lfloor n/2 \rfloor) \qquad (A.7)$$

$$T_{2\ell,\,\ell} = -T_{2(\ell-1),\,\ell-1}, \quad (1 \le \ell \le \lfloor N/2 \rfloor) \qquad (A.8)$$

starting from the initial conditions, Eq. (A.3). The above recurrence formulas
are nothing but the translation of the three term recurrence relation of T_n(x)
[27, Eq. (3.20)],

$$T_{n+1}(x) = 2x\,T_n(x) - T_{n-1}(x), \quad (1 \le n) \qquad (A.9)$$

in terms of the coefficients. The recursion consists of integer operations only,
and therefore it suffers no round-off errors when n is sufficiently small, say
less than 30 when using 4-byte integers. Also, its execution is fairly fast.
Using Tn`, we obtain a transformation formula from {cn}to {an}as
an=
b(Nn)/2c
X
`=0
Tn+2`, ` cn+2`.(0 nN) (A.10)
This is simplified when n= 0 as
a0=
bN/2c
X
`=0
(1)`cn+2`.(A.11)
Usually the magnitude of $c_n$ is monotonically decreasing with the index, $n$. Then, we execute the summation in the above transformation formula in the reverse order, i.e. in the order of decreasing $\ell$. This trick significantly suppresses the accumulation of round-off errors in the transformation procedure. This is the reason why we prepare the coefficient matrix, $T_{n\ell}$, at the cost of additional memory. For the readers' convenience, we present a double precision Fortran subroutine to do the transformation in Table A.9.
Table A.9: Double precision Fortran subroutine to transform the Chebyshev polynomial coefficients, $\{c_n\}$, into the power series coefficients, $\{a_n\}$.

      subroutine cheb2poly(n,c,a)
      integer JMAX,NMAX
      parameter (JMAX=15,NMAX=JMAX*2)
      integer n,T(0:NMAX,0:JMAX),m,j
      real*8 c(0:n),a(0:n),am
      logical first/.TRUE./
      save first,T
      if(first) then
        first=.FALSE.;T(0,0)=1;T(1,0)=1
        do j=1,JMAX
          T(2*j,j)=-T(2*(j-1),j-1)
        enddo
        do m=2,NMAX
          T(m,0)=2*T(m-1,0)
          do j=1,(m-1)/2
            T(m,j)=2*T(m-1,j)-T(m-2,j-1)
          enddo
        enddo
      endif
      do m=0,n
        am=0.d0
        do j=(n-m)/2,0,-1
          am=am+dble(T(m+2*j,j))*c(m+2*j)
        enddo
        a(m)=am
      enddo
      return;end
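For readers without a Fortran compiler at hand, a Python equivalent of Table A.9, including the decreasing-$\ell$ summation, might look as follows. This is our sketch, not the paper's code; the function name `cheb2poly` simply mirrors the Fortran subroutine:

```python
def cheb2poly(c):
    """Convert Chebyshev coefficients {c_n} to power coefficients {a_n}, Eq. (A.10).

    Mirrors the Fortran of Table A.9: an exact integer table T[n][l] is built
    by the recursion (A.6)-(A.8), and each sum is run in decreasing l to
    suppress round-off accumulation.
    """
    N = len(c) - 1
    T = [[0]*(N//2 + 1) for _ in range(N + 1)]
    T[0][0] = 1
    if N >= 1:
        T[1][0] = 1
    for l in range(1, N//2 + 1):
        T[2*l][l] = -T[2*(l-1)][l-1]               # diagonal, Eq. (A.8)
    for m in range(2, N + 1):
        T[m][0] = 2*T[m-1][0]                      # Eq. (A.6)
        for l in range(1, (m-1)//2 + 1):
            T[m][l] = 2*T[m-1][l] - T[m-2][l-1]    # Eq. (A.7)
    a = [0.0]*(N + 1)
    for n in range(N + 1):
        s = 0.0
        for l in reversed(range((N - n)//2 + 1)):  # reverse order, as in the paper
            s += T[n + 2*l][l] * c[n + 2*l]
        a[n] = s
    return a

# The pure-T_4 series must expand to 1 - 8x^2 + 8x^4
assert cheb2poly([0, 0, 0, 0, 1]) == [1.0, 0.0, -8.0, 0.0, 8.0]
print("ok")
```

A second check: $1 + 2T_1(x) + 3T_2(x) = -2 + 2x + 6x^2$, i.e. `cheb2poly([1, 2, 3])` yields `[-2.0, 2.0, 6.0]`.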
Appendix A.2. From power series to Chebyshev polynomial series
A monomial of degree $n$ is expressed as a finite linear sum of the Chebyshev polynomials of the same or lower degree [27, Eq.(3.35)] as
\[
x^n = \sum_{\ell=0}^{\lfloor n/2 \rfloor} B_{n\ell}\, T_{n-2\ell}(x), \tag{A.12}
\]
where the coefficients $B_{n\ell}$ are explicitly written as
\[
B_{00} = 1, \tag{A.13}
\]
\[
B_{n\ell} = \frac{1}{2^{n-1}} \binom{n}{\ell}. \quad (1 \le n,\ 0 \le \ell \le \lfloor n/2 \rfloor) \tag{A.14}
\]
These coefficients are finite fractions in binary machines, and therefore can be expressed without round-off errors when $n$ is sufficiently small, say less than 50 in the double precision environment. Again, we prefer computing them by recursion as
\[
B_{n+1,0} = 2^{-1}B_{n0}, \quad (1 \le n \le N-1) \tag{A.15}
\]
\[
B_{n+1,\ell} = 2^{-1}\left(B_{n,\ell-1} + B_{n\ell}\right), \quad (2 \le n \le N-1,\ 1 \le \ell \le \lfloor (n-1)/2 \rfloor) \tag{A.16}
\]
\[
B_{2\ell,\ell} = B_{2\ell-1,\ell-1}, \quad (1 \le \ell \le \lfloor N/2 \rfloor) \tag{A.17}
\]
starting from the initial condition, Eq.(A.13). At any rate, the transformation from $\{a_n\}$ to $\{c_n\}$ is of the same form as Eq.(A.10) as
\[
c_n = \sum_{\ell=0}^{\lfloor (N-n)/2 \rfloor} B_{n+2\ell,\ell}\, a_{n+2\ell}. \quad (1 \le n \le N) \tag{A.18}
\]
We anticipate that the magnitude of $a_n$ is also monotonically decreasing with the index, $n$. Then, we again execute the summation in the transformation formula in the reverse order.
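The recursion (A.15)-(A.17) can be cross-checked against the closed form (A.14); since every $B_{n\ell}$ is a dyadic rational of modest size, the comparison below is exact in double precision. A small Python sketch (the helper name `build_B` is ours, not from the paper):

```python
from math import comb

def build_B(N):
    """Table B[n][l] of Eq. (A.12), built by the recursion (A.15)-(A.17)."""
    B = [[0.0]*(N//2 + 1) for _ in range(N + 1)]
    B[0][0] = B[1][0] = 1.0                          # initial condition, Eq. (A.13)
    for m in range(2, N + 1):                        # row m corresponds to n+1
        B[m][0] = 0.5*B[m-1][0]                      # Eq. (A.15)
        for l in range(1, (m-1)//2 + 1):
            B[m][l] = 0.5*(B[m-1][l-1] + B[m-1][l])  # Eq. (A.16)
        if m % 2 == 0:
            B[m][m//2] = B[m-1][m//2 - 1]            # diagonal, Eq. (A.17)
    return B

B = build_B(12)
# Recursion agrees exactly with the closed form B_nl = C(n,l)/2^(n-1), Eq. (A.14)
assert all(B[n][l] == comb(n, l)/2**(n-1)
           for n in range(1, 13) for l in range(n//2 + 1))
print("ok")
```

All intermediate values are sums of dyadic rationals well within double precision range, so the halvings introduce no rounding at all.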
As for the most important coefficient, $c_0$, we adopt a different approach. We solve the expression of $a_0$, Eq.(A.11), with respect to $c_0$. Then, using the thus-computed coefficients $c_n$ for $n \ne 0$ and $a_0$, we evaluate the solution form of $c_0$ by Horner's method as
\[
c_0 = a_0 + \left(c_2 - \left(c_4 - \left(\cdots\left(c_{M-2} - c_M\right)\cdots\right)\right)\right), \tag{A.19}
\]
where $M \equiv 2\lfloor N/2 \rfloor$. This technique further reduces the accumulation of round-off errors. Table A.10 lists a double precision Fortran subroutine to do the transformation. We used its quadruple precision extension in deriving the tables in the main text.
Table A.10: Double precision Fortran subroutine to transform the power series coefficients, $\{a_n\}$, into the Chebyshev polynomial coefficients, $\{c_n\}$.

      subroutine poly2cheb(n,a,c)
      integer JMAX,NMAX
      parameter (JMAX=15,NMAX=JMAX*2)
      integer n,m,j
      real*8 a(0:n),c(0:n),B(0:NMAX,0:JMAX),cm
      logical first/.TRUE./
      save first,B
      if(first) then
        first=.FALSE.;B(0,0)=1.d0;B(1,0)=1.d0
        do m=2,NMAX
          B(m,0)=0.5d0*B(m-1,0)
          do j=1,(m-1)/2
            B(m,j)=0.5d0*(B(m-1,j-1)+B(m-1,j))
          enddo
          if(m.eq.m/2*2) then
            B(m,m/2)=B(m-1,m/2-1)
          endif
        enddo
      endif
      do m=1,n
        cm=0.d0
        do j=(n-m)/2,0,-1
          cm=cm+B(m+2*j,j)*a(m+2*j)
        enddo
        c(m)=cm
      enddo
      cm=c(n/2*2)
      do j=n/2-1,1,-1
        cm=c(2*j)-cm
      enddo
      c(0)=a(0)+cm
      return;end
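A Python counterpart of Table A.10, reproducing the reverse-order summation of Eq. (A.18) and the Horner-style recovery of $c_0$ via Eq. (A.19), might read as follows (our sketch under the same conventions; the name `poly2cheb` mirrors the Fortran):

```python
def poly2cheb(a):
    """Convert power coefficients {a_n} to Chebyshev coefficients {c_n}.

    Mirrors Table A.10: the B table is built by the recursion (A.15)-(A.17),
    c_n for n >= 1 by Eq. (A.18) summed in decreasing l, then c_0 is
    recovered from a_0 by the Horner form of Eq. (A.19).
    """
    N = len(a) - 1
    B = [[0.0]*(N//2 + 1) for _ in range(N + 1)]
    B[0][0] = 1.0
    if N >= 1:
        B[1][0] = 1.0
    for m in range(2, N + 1):
        B[m][0] = 0.5*B[m-1][0]
        for l in range(1, (m-1)//2 + 1):
            B[m][l] = 0.5*(B[m-1][l-1] + B[m-1][l])
        if m % 2 == 0:
            B[m][m//2] = B[m-1][m//2 - 1]
    c = [0.0]*(N + 1)
    for n in range(1, N + 1):
        s = 0.0
        for l in reversed(range((N - n)//2 + 1)):   # reverse order, Eq. (A.18)
            s += B[n + 2*l][l]*a[n + 2*l]
        c[n] = s
    cm = c[(N//2)*2]                    # innermost term c_M, M = 2*floor(N/2)
    for j in range(N//2 - 1, 0, -1):
        cm = c[2*j] - cm                # Horner form of Eq. (A.19)
    c[0] = a[0] + cm
    return c

# The x-expansion of T_4, 1 - 8x^2 + 8x^4, must map back to the pure c_4 series
assert poly2cheb([1, 0, -8, 0, 8]) == [0.0, 0.0, 0.0, 0.0, 1.0]
print("ok")
```

As a round-trip spot check, `poly2cheb([-2, 2, 6])` returns `[1.0, 2.0, 3.0]`, recovering $1 + 2T_1 + 3T_2$ from its power form.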
References

[1] Olver, F.W.J., Lozier, D.W., Boisvert, R.F., & Clark, C.W. (eds.), NIST Handbook of Mathematical Functions, Cambridge Univ. Press, Cambridge, 2010, Chapter 4. Freely accessible at http://dlmf.nist.gov/

[2] Byrd, P.F., & Friedman, M.D., Handbook on Elliptic Integrals for Engineers and Physicists, 2nd ed., Springer-Verlag, Berlin, 1971.

[3] Press, W.H., Flannery, B.P., Teukolsky, S.A., & Vetterling, W.T., Numerical Recipes: the Art of Scientific Computing, Cambridge Univ. Press, Cambridge, 1986.

[4] Jeffrey, A., & Zwillinger, D. (eds.), Gradshteyn and Ryzhik's Table of Integrals, Series, and Products, 7th ed., Academic Press, New York, 2007.

[5] Press, W.H., Teukolsky, S.A., Vetterling, W.T., & Flannery, B.P., Numerical Recipes: the Art of Scientific Computing, 3rd ed., Cambridge Univ. Press, Cambridge, 2007.

[6] Fukushima, T., Fast Computation of Complete Elliptic Integrals and Jacobian Elliptic Functions, Celest. Mech. Dyn. Astron., 105 (2009a) 305-328.

[7] Fukushima, T., Precise and Fast Computation of the General Complete Elliptic Integral of the Second Kind, Math. Comp., 80 (2011) 1725-1743.

[8] Bulirsch, R., Numerical Computation of Elliptic Integrals and Elliptic Functions, Numer. Math., 7 (1965a) 78-90.

[9] Bulirsch, R., Numerical Computation of Elliptic Integrals and Elliptic Functions II, Numer. Math., 7 (1965b) 353-354.

[10] Bulirsch, R., An Extension of the Bartky-Transformation to Incomplete Elliptic Integrals of the Third Kind, Numer. Math., 13 (1969a) 266-284.

[11] Bulirsch, R., Numerical Computation of Elliptic Integrals and Elliptic Functions III, Numer. Math., 13 (1969b) 305-315.

[12] Carlson, B.C., Computing Elliptic Integrals by Duplication, Numer. Math., 33 (1979) 1-16.

[13] Carlson, B.C., & Notis, E.M., Algorithm 577. Algorithms for Incomplete Elliptic Integrals, ACM Trans. Math. Software, 7 (1981) 398-403.

[14] Hastings, C. Jr., Approximations for Digital Computers, Princeton Univ. Press, Princeton, 1955.

[15] Abramowitz, M., & Stegun, I.A. (eds.), Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, National Bureau of Standards, Washington, 1964, Chapter 17.

[16] Cody, W.J., Chebyshev Approximations for the Complete Elliptic Integrals K and E, Math. Comp., 19 (1965a) 105-112.

[17] Cody, W.J., Chebyshev Polynomial Expansions of Complete Elliptic Integrals K and E, Math. Comp., 19 (1965b) 249-259.

[18] Cody, W.J., Corrigenda: Chebyshev Approximations for the Complete Elliptic Integrals K and E, Math. Comp., 20 (1966) 207.

[19] Fukushima, T., Efficient Solution of Initial-Value Problem of Torque-Free Rotation, Astron. J., 137 (2009b) 210-218.

[20] Fukushima, T., Numerical inversion of a general incomplete elliptic integral, J. Comp. Appl. Math., 237 (2013a) 43-61.

[21] Wolfram, S., The Mathematica Book, 5th ed., Wolfram Research Inc./Cambridge Univ. Press, Cambridge, 2003.

[22] Whittaker, E.T., & Watson, G.N., A Course of Modern Analysis, 4th ed., Cambridge Univ. Press, Cambridge, 1958.

[23] Chapeau-Blondeau, F., & Monir, A., Evaluation of the Lambert W function and application to generation of generalized Gaussian noise with exponent 1/2, IEEE Trans. Signal Processing, 50 (2002) 2160-2165.

[24] Veberic, D., Lambert W function for applications in physics, Comp. Phys. Comm., 183 (2012) 2622-2628.

[25] Fukushima, T., Precise and fast computation of Lambert W-functions without transcendental function evaluations, J. Comp. Appl. Math., 244 (2013b) 77-89.

[26] Rivlin, T.J., Chebyshev Polynomials, 2nd ed., Wiley-Interscience, New York, 1990.

[27] Gil, A., Segura, J., & Temme, N.M., Numerical Methods for Special Functions, SIAM, Philadelphia, 2007.
Numerical analysts and computer operators in all fields will welcome this publication in book form of Cecil Hastings' well-known approximations for digital computers, formerly issued in loose sheets and available only to a limited number of specialists. In a new method that combines judgment and intuition with mathematics, Mr. Hasting has evolved a set of approximations which far surpasses in simplicity earlier approximations developed by conventional methods. Part I of this book introduces the collection of useful and illustrative approximations, each of which is presented with a carefully drawn error curve in Part II.