Applied Mathematical Sciences, Vol. 8, 2014, no. 87, 4323 - 4341
HIKARI Ltd, www.m-hikari.com
http://dx.doi.org/10.12988/ams.2014.45338
Very Simply Explicitly Invertible Approximations of
Normal Cumulative and Normal Quantile Function
Alessandro Soranzo
Dipartimento di Matematica e Geoscienze
University of Trieste
Trieste – Italy
Emanuela Epure
European Commission
DG Joint Research Center
Ispra - Italy
Copyright © 2014 Alessandro Soranzo and Emanuela Epure. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
For the normal cumulative distribution function Φ(x) we give the new approximation 2^(-22^(1-41^(x/10))) for any x ≥ 0, which is very simple (with only integer constants and the operations -, / and power elevation ^) and is very simply explicitly invertible, having 1 entry of x. It has 3 decimals of precision, with absolute error less than 0.00013. We compute the inverse, which approximates the normal quantile function, or probit, and it has relative precision of 1 percent from 0.5 up to beyond 0.999. We state an open problem and give a substantial bibliography. We also report several other approximations.
Mathematics Subject Classification: 33B20, 33F05, 65D20, 97N50
Keywords: normal distribution function, normal cumulative, normal cdf, Φ, normal quantile, probit, error function, erf, erfc, Q function, cPhi, inverse erf, erf⁻¹, approximation
1 Introduction
This paper deals with the approximation of 2 special functions, Φ(x) and φ_α. Let us recall that Φ(x) and its inverse φ_α := Φ⁻¹(α) play a central role in Statistics, essentially as a consequence of the Central Limit Theorem.
Papers [11] and the recent [24] list several approximations of Φ(x), which were published in the literature directly as approximations, or bounds, for that function, or are immediately derived from approximations or bounds for related functions (see Remark 8 below), and they give new ones.
Remark 1. Though computers now allow computing them with arbitrary precision, such approximations are still valuable for several reasons, including capturing the essence of the considered functions, allowing their behaviour to be understood at a glance. Furthermore, here we produce only explicitly invertible (and, in fact, simply invertible) approximations, which make it possible to work coherently with the considered functions and their respective inverses at the same time. Let us add, finally, that despite technological progress, those functions, which are of wide practical use, are not always available on pocket calculators.
Remark 2. The research on approximating Φ(x) balances among:
- exactness, but requiring limits, as in series and continued fractions;
- width of the domain of approximation (usually x ≥ 0, but not always);
- precision of the approximation, but affecting its simplicity;
- simplicity of the approximation, but affecting its precision:
  there are few and/or short decimal constants,
  or, if possible, no decimal constants at all;
- explicit invertibility by elementary functions.
Remark 3. Invertibility generates these categories:
(a) not explicitly invertible;
(b) explicitly invertible by solving a quartic equation;
(c) explicitly invertible by solving a generic cubic equation;
(d) explicitly invertible by solving a particular cubic equation x³ + ax + b = 0;
(e) simply explicitly invertible by solving a quadratic (or biquadratic) equation;
(f) very simply explicitly invertible, with only 1 entry of x.
Remark 4. Some special functions, among which those we consider in this paper, are monotonic and hence invertible, though not by elementary functions.
Remark 5. Of course the inverse of an approximation of an invertible function f is an approximation (how good, it has to be seen) of the inverse of f.
Remark 6. Usually the approximations of Φ(x) are not designed to be explicitly invertible by means of elementary functions, but sometimes they are, by solving cubic or quartic equations (after obvious substitutions) or, more rarely, in simpler ways.
Remark 7. As is well known, it is possible to explicitly solve cubic and even quartic equations by complicated formulas, but this is not a standard procedure in usual mathematical practice. (In the literature, such explicit invertibility usually is not even stated when presenting the approximations of Φ(x).)
2 Preliminary Notes
Remark 8. Similar considerations to those of Remarks 1-7 apply to the functions erf(x), erfc(x) and Q(x) we are going to define.
Definition 1. (The most standard; unfortunately there are ambiguities in the literature.)

Normal cumulative distribution function:

Φ(x) := ∫_{-∞}^{x} (1/√(2π))·e^(-t²/2) dt        (1)

Error function:

erf(x) := ∫_{0}^{x} (2/√π)·e^(-t²) dt        (2)

Q-function:

Q(x) := ∫_{x}^{+∞} (1/√(2π))·e^(-t²/2) dt        (3)

Complementary error function:

erfc(x) = ∫_{x}^{+∞} (2/√π)·e^(-t²) dt        (4)
Remark 8. Mutual relations, holding for any x ∈ IR:

Φ(x) = 1/2 + (1/2)·erf(x/√2)        (5)

erf(x) = 2·Φ(x√2) - 1        (6)

Q(x) := 1 - Φ(x)        (7)

erfc(x) := 1 - erf(x)        (8)
Remark 9. We wrote := both in (3) and (7) because both are used as definitions in the literature. We wrote := in (8) because that, and not (4), is what is usually taken as the definition.
Remark 10. The approximation of Φ(x) for x ≥ 0 and of its inverse for 0 < α ≤ 1/2 are sufficient because of the symmetries:

Φ(-x) = 1 - Φ(x)   for all x ∈ IR        (9)

φ_{1-α} = -φ_α   for all α ∈ ]0, 1[        (10)
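Since Remark 9 warns about ambiguities in the literature, the conventions and relations (1)-(10) are easy to sanity-check numerically; the following is a minimal Mathematica sketch (a check, not part of the derivations below), using the built-ins Erf and InverseErf:

phi[x_]  := 1/2 + (1/2) Erf[x/Sqrt[2]];      (* Φ(x), via relation (5) *)
iphi[a_] := Sqrt[2] InverseErf[2 a - 1];     (* its inverse, the quantile φ_α *)
N[{2 phi[Sqrt[2]] - 1 - Erf[1],              (* relation (6) at x = 1 *)
   (1 - phi[1.3]) - (1 - Erf[1.3/Sqrt[2]])/2,(* Q(x) = erfc(x/√2)/2, from (5), (7), (8) *)
   phi[-1.3] - (1 - phi[1.3]),               (* symmetry (9) *)
   iphi[1 - 0.8] + iphi[0.8]}]               (* symmetry (10) *)
(* all four differences are 0 up to machine rounding *)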
3 Our Results
3.1 New Approximation of Φ(x)
Denoting by |ε(x)| the absolute error and by ε_r(x) the relative error, we give the following approximation:

(A)   Φ(x) ≈ 2^(-22^(1-41^(x/10)))        |ε(x)| < 1.28·10⁻⁴,   |ε_r(x)| < 1.66·10⁻⁴,   x ≥ 0

Let η(x) be the approximation of Φ(x) considered in Formula (A):

η(x) := 2^(-22^(1-41^(x/10))).
The function 1.3·10⁻⁴ - |Φ(x) - η(x)| is positive for 0 ≤ x ≤ 5, as may be seen by plotting it (see Figures 1 and 2). All the graphs may be obtained with the professional software Mathematica® or for free at the site www.wolframalpha.com: for the considered case(1), write Plot[1.3 10^(-4) - Abs[1/2 + (1/2) Erf[x/Sqrt[2]] - 2^(-22^(1 - 41^(x/10)))], {x, 0, 5}].
(1) All graphs may be obtained by these instructions, using as options (for example) WorkingPrecision -> 100, PlotStyle -> Black:
phi[x_] = 1/2 + (1/2) Erf[x/Sqrt[2]]
iphi[α_] = Sqrt[2] InverseErf[2 α - 1]
PHI41[x_] = 2^(-22^(1 - 41^(x/10)))
iPHI41[α_] = (10/Log[41]) Log[1 - (Log[(-Log[α])/Log[2]])/Log[22]]
Fig. 1: Plot[{0, 128/10^6 - Abs[PHI41[x] - phi[x]]}, {x, 0, 5}, (options)]
Fig. 2: Plot[{0, 128/10^6 - Abs[PHI41[x] - phi[x]]}, {x, 2.6, 2.8}, ...]
Fig. 3: Plot[{0, 166/10^6 - Abs[(PHI41[x] - phi[x])/phi[x]]}, {x, 0, 5}, ...]
Fig. 4: Plot[{0, 166/10^6 - Abs[(PHI41[x] - phi[x])/phi[x]]}, {x, 0.16, 0.18}, ...]
Fig. 5: Plot[{0, 5/1000 - Abs[iPHI41[x] - iphi[x]]}, {x, 0.5, 0.9926}, ...]
Fig. 6: Plot[{0, 5/1000 - Abs[iPHI41[x] - iphi[x]]}, {x, 0.9924, 0.9926}, ...]
Fig. 7: Plot[{0, 1/100 - Abs[(iPHI41[x] - iphi[x])/iphi[x]]}, {x, 0.5, 0.99909}, ...]
Fig. 8: Plot[{0, 1/100 - Abs[(iPHI41[x] - iphi[x])/iphi[x]]}, {x, 0.99907, 0.99909}, ...]
For x > 5 let’s consider that it is Φ(5) = 0.9999997... and Φ(x)1 and Φ is
increasing, then
(x > 5) |1Φ(x)|<106.(11)
It is, for x > 5,
x > 5>3.6378... =10
log 41 log 11
log 22 log log(1 104)
log 2 
10 log41(1 log22(log2(1 104))) < x
log41(1 log22(log2(1 104))) < x/10
1log22(log2(1 104)) <41x/10
log22(log2(1 104)) >141x/10
log2(1 104)<22141x/10
1104<222141x/10
0<1222141x/10
<104
that is to say
(x > 5) |1η(x)|<104.(12)
By (11) and (12) it is
(x > 5) |Φ(x)η(x)| ≤ |1Φ(x)|+|1η(x)|<106+ 104<1.3·104.
Then, for the relative error of Formula (A): for 0 ≤ x ≤ 5 see Fig. 3 and Fig. 4, and for x ≥ 5 it is Φ(x) > 0.9 (see above), and then

|2^(-22^(1-41^(x/10))) - Φ(x)| / |Φ(x)|  <  |2^(-22^(1-41^(x/10))) - Φ(x)| / 0.9  =  |ε(x)|/0.9  <  1.3·10⁻⁴/0.9  <  1.7·10⁻⁴.
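The bounds just obtained can also be spot-checked numerically; here is a minimal Mathematica sketch in the spirit of the footnote (same names phi and PHI41), sampling the errors on a grid of [0, 5]:

phi[x_]   := 1/2 + (1/2) Erf[x/Sqrt[2]];     (* exact CDF *)
PHI41[x_] := 2^(-22^(1 - 41^(x/10)));        (* the approximation η(x) of Formula (A) *)
Max[Table[Abs[PHI41[x] - phi[x]], {x, 0., 5., 0.001}]]          (* absolute error *)
Max[Table[Abs[PHI41[x] - phi[x]]/phi[x], {x, 0., 5., 0.001}]]   (* relative error *)
(* the two maxima agree with the stated bounds 1.28·10⁻⁴ and 1.66·10⁻⁴ *)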
Fig. 1: Absolute error of (A) on 0 ≤ x ≤ 5 [PHI41].   Fig. 2: Its zoom on 2.6 ≤ x ≤ 2.8 [PHI41Z].
Fig. 3: Relative error of (A) on 0 ≤ x ≤ 5 [PHI41R].   Fig. 4: Its zoom on 0.16 ≤ x ≤ 0.18 [PHI41RZ].
3.2 Inversion: Approximation of φ_α

Remembering Remark 5, inverting (A), and now denoting by |ε(α)| the absolute error and by ε_r(α) the relative error, we give the following approximation of the normal quantile function φ_α = Φ⁻¹(α):

(a)   φ_α ≈ (10/log 41)·log(1 - log((-log α)/log 2)/log 22)

|ε(α)| < 5·10⁻³   for α ∈ [0.5, 0.9925]
|ε_r(α)| < 1%    for α ∈ [0.5, 0.99908]

For the absolute error of (a) see Figures 5 and 6. For the relative error of (a) see Figures 7 and 8.
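Formula (a) is equally direct to evaluate; here is a minimal Mathematica sketch in the spirit of the footnote (same names iPHI41 and iphi), with a spot check at the familiar 97.5% quantile:

iPHI41[a_] := (10/Log[41]) Log[1 - Log[(-Log[a])/Log[2]]/Log[22]];   (* Formula (a) *)
iphi[a_]   := Sqrt[2] InverseErf[2 a - 1];                           (* exact quantile *)
N[{iPHI41[0.975], iphi[0.975]}]              (* both close to 1.96 *)
Max[Table[Abs[iPHI41[a] - iphi[a]], {a, 0.5, 0.99, 0.0001}]]
(* consistent with the stated bound |ε(α)| < 5·10⁻³, which holds up to α = 0.9925 *)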
Fig. 5: Absolute error of (a) on 0.5 ≤ α ≤ 0.9926 [iPHI41].   Fig. 6: Its zoom on 0.9924 ≤ α ≤ 0.9926 [iPHI41Z].
Fig. 7: Relative error of (a) on 0.5 ≤ α ≤ 0.99909 [iPHI41R].   Fig. 8: Its zoom on 0.99907 ≤ α ≤ 0.99909 [iPHI41RZ].
4 Conclusions
In this paper, for the normal cumulative distribution function Φ(x) and the normal quantile function φ_α respectively, we gave these very simply explicitly invertible (with 1 entry of x) corresponding approximations:

(A)   Φ(x) ≈ 2^(-22^(1-41^(x/10))),   x ≥ 0

(a)   φ_α ≈ (10/log 41)·log(1 - log((-log α)/log 2)/log 22),   0.5 ≤ α < 1
As quantified more precisely in Sections 3.1 and 3.2, the approximation (A) of Φ(x) abundantly grants 3 decimals of precision (having absolute error less than 0.00013), is very simple, with only 1 entry of x, and is very simply explicitly invertible; the inverse (a) has essentially the same characteristics, giving an approximation of the normal quantile function φ_α which maintains 1% relative precision from 0.5 up to beyond 0.999.
Finally, we recall that by the symmetry formulas (9) and (10) the approximations of Φ(x) for x ≥ 0 and of φ_α for 0.5 ≤ α < 1 are sufficient.
Remark 11. Because of the mutual relations (see Remark 8) among the functions Φ(x), erf(x), Q(x) and erfc(x), approximating one of them is equivalent to approximating the others.
We searched a wide literature for approximations published directly for Φ(x), and also for the approximations of Φ(x) implicitly contained in the approximations of the other 3 functions.
Remark 12. We report the other Authors' formulas in a standard format. This allows easy comparison.
We use x as the independent variable. We write Φ(x) ≈ ..., and always consider both the absolute and the relative error, in absolute value, written respectively |ε(x)| and |ε_r(x)|. Authors do not always report both, and they write them with different precisions. We found and wrote those errors with 2 digits after the decimal point, in the form a.bc·10⁻ⁿ.
Of course any function may be written in several ways. We did our best in reporting the other Authors' formulas, sometimes changing the formal appearance. In particular

1/2 + (1/2)·√(1 - e^(f(x)))  =  0.5 + 0.5·(1 - exp f(x))^0.5  =  (1 + (1 - exp f(x))^(1/2)) / 2

and we will write in the first way whenever possible.
Remark 13. The most recent approximation of Φ(x) we have found in the literature is in paper [24] (2014), which gives this new approximation

Φ(x) ≈ 1 - (1/√(2π))·e^(-x²/2) / (0.226 + 0.64·x + 0.33·√(x² + 3)),   x > 0,

for which we found |ε(x)| < 1.93·10⁻⁴ and |ε_r(x)| < 3.86·10⁻⁴; it is not explicitly invertible. The same paper lists 16 other approximations of Φ(x); the last is

Φ(x) ≈ 1/2 + (1/2)·√(1 - e^(-(x²/2)·(4/π + 0.147·x²/2)/(1 + 0.147·x²/2)))        (13)

holding for x ≥ 0, for which we found |ε(x)| < 6.21·10⁻⁵ and |ε_r(x)| < 6.30·10⁻⁵, originally published in [109] as

erf(x) ≈ √(1 - e^(-x²·(4/π + 0.147·x²)/(1 + 0.147·x²))),   x ≥ 0.        (14)
Both (13) and (14) are explicitly invertible, essentially by solving a biquadratic equation after obvious substitutions, just as the following improvements of (13), which we already made available on the net in [95],

Φ(x) ≈ 1/2 + (1/2)·√(1 - e^(-x²·(17 + x²)/(26.694 + 2·x²))),        |ε(x)| < 4.00·10⁻⁵,   |ε_r(x)| < 4.53·10⁻⁵,   x ≥ 0,

and in [94],

Φ(x) ≈ 1/2 + (1/2)·√(1 - e^(-(1.2735457·x² + 0.0743968·x⁴)/(2 + 0.1480931·x² + 0.0002580·x⁴))),        |ε(x)| < 1.14·10⁻⁵,   |ε_r(x)| < 1.78·10⁻⁵,   x ≥ 0.

Both the above improvements reach 4 decimals of precision.
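To make the biquadratic inversion concrete, here is a minimal Mathematica sketch for (14) (the names winitzkiErf and winitzkiInvErf are ours): setting y = erf(x), a = 0.147 and L = log(1 - y²), Formula (14) rearranges to a·x⁴ + (4/π + a·L)·x² + L = 0, a quadratic in x², whose positive root gives x.

a = 0.147;
winitzkiErf[x_]    := Sqrt[1 - Exp[-x^2 (4/Pi + a x^2)/(1 + a x^2)]];   (* Formula (14) *)
winitzkiInvErf[y_] := Module[{L = Log[1 - y^2]},
   Sqrt[Sqrt[(2/(Pi a) + L/2)^2 - L/a] - (2/(Pi a) + L/2)]];
winitzkiInvErf[winitzkiErf[1.5]]   (* returns 1.5, confirming the algebraic inversion *)

Through relation (5), the corresponding inverse of (13) is then φ_α ≈ Sqrt[2] winitzkiInvErf[2α - 1], and the two improvements quoted above invert in exactly the same way.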
Remark 14. As far as we know, the most recent new approximations (all of 2013) of Q(x) or erf(x) or erfc(x) (from which one could immediately obtain approximations of Φ(x)) are this double inequality

√(2/π)·e^(-x²/2) / (x + √(4 + x²))  ≤  Q(x)  ≤  √(2/π)·e^(-x²/2) / √(x² + x + 8/π)

in [19] (originally published for √(π/2)·e^(x²/2)·Q(x), and notice that the lower bound is that of [10]), this bound

Q(x)  ≥  (1/√(2π)) · e^(-x²/2) / (1 + x²)

in [39] (year 2013, originally published for √(2π)·Q(x)), a family

Q(x)  ≤  Σ_{k=0}^{n} (a_k/x)·e^(-b_k·x²)

of upper bounds in [41] (year 2013 too), and this family

Q(x)  ≥  Σ_{k=0}^{n} a_k·x·e^(-b_k·x²)

of lower bounds in [42] (year 2013 too); from those lower and upper bounds one could obtain approximations of Φ(x) which are not explicitly invertible by elementary functions. Those approximations are especially valuable not only because they are bounds, but also for their small relative errors for the function Q(x) for great values of x. (Notice that Q(x) → 0 as x → +∞.)
Remark 15. As far as we know, the most recent new approximation of Φ(x) or Q(x) or erf(x) or erfc(x) having 1 entry of x is this

Φ(x) ≈ 1 - 0.24015·e^(-0.5616·x²),

originally published as

erfc(x) ≈ Σ_{k=1}^{N} a_k·e^(-k·b·x²),   N := 1, a₁ = 0.4803, b = 1.1232,

in [78] and [79] (both year 2012); the Authors then give other approximations, with 2 and 3 entries of x. Clearly that approximation is not intended to minimize the absolute error, which at 0 is about 0.52 for erfc(x) (and 0.26 for the derived approximation of Φ(x)); in fact its quality is the small relative error for the function erfc(x) for great values of x. (Notice that erfc(x) → 0 as x → +∞.)
Another recent (2009) approximation of Φ(x) (or Q(x) or erf(x) or erfc(x)) having 1 entry of x is this of [11]:

Φ(x) ≈ 1/(1 + e^(-1.702·x)),   x ∈ IR,

for which we found |ε(x)| < 9.49·10⁻³ and |ε_r(x)| < 1.35·10⁻²: it is simple and very simply explicitly invertible, but not so precise; the same paper gives also this approximation:

Φ(x) ≈ 1/(1 + e^(-0.07056·x³ - 1.5976·x)),   x ∈ IR,

for which we found |ε(x)| < 1.42·10⁻⁴ and |ε_r(x)| < 2.08·10⁻⁴, which is explicitly invertible by solving a particular cubic equation.
Both approximations have the quality of holding on the whole IR.
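To illustrate the two inversion categories involved (categories (f) and (d) of Remark 3), here is a minimal Mathematica sketch of these two logistic approximations and of their explicit inverses (the names logistic1, invLogistic2, etc. are ours):

logistic1[x_]    := 1/(1 + Exp[-1.702 x]);
invLogistic1[a_] := Log[a/(1 - a)]/1.702;                   (* a single logarithm *)
logistic2[x_]    := 1/(1 + Exp[-0.07056 x^3 - 1.5976 x]);
invLogistic2[a_] := Module[{p = 1.5976/0.07056, q = -Log[a/(1 - a)]/0.07056, d},
   (* Cardano's formula for the depressed cubic t^3 + p t + q = 0,
      obtained from 0.07056 t^3 + 1.5976 t = Log[a/(1-a)] *)
   d = Sqrt[q^2/4 + p^3/27];
   Surd[-q/2 + d, 3] + Surd[-q/2 - d, 3]];
{invLogistic1[logistic1[1.3]], invLogistic2[logistic2[1.3]]}   (* both return 1.3 *)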
Remark 16. (Conclusions) As far as we know, before our Formula (A), the most precise (with respect both to the absolute error and to the relative error) approximation of Φ(x)

(α) published as an approximation or bound for Φ(x) or Q(x) or erf(x) or erfc(x),
(β) holding at least for x ≥ 0 (and, then, using Φ(-x) = 1 - Φ(x)),
(γ) defined by a single expression (that is, not piecewise defined),
(δ) very simply explicitly invertible, with 1 entry of x,

was this of [6]:

Φ(x) ≈ 1/2 + (1/2)·√(1 - e^(-√(π/8)·x²)),   x ≥ 0,

for which we found |ε(x)| < 1.98·10⁻³ and |ε_r(x)| < 2.04·10⁻³. The Authors of [6] also provide the inverse, approximating the normal quantile function φ_α.
Our Formula (A), approximating the normal cumulative distribution function Φ(x) with |ε(x)| < 1.28·10⁻⁴ and |ε_r(x)| < 1.66·10⁻⁴, appears quite noticeable for simplicity, precision and explicit invertibility.
That also makes our Formula (a), approximating the normal quantile function φ_α, the inverse of Φ(x), quite valuable.
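The comparison with [6] is easy to reproduce in the same style as the footnote (the name aludaat is ours; phi as defined there):

phi[x_]     := 1/2 + (1/2) Erf[x/Sqrt[2]];
aludaat[x_] := 1/2 + (1/2) Sqrt[1 - Exp[-Sqrt[Pi/8] x^2]];   (* the formula of [6] *)
Max[Table[Abs[aludaat[x] - phi[x]], {x, 0., 5., 0.001}]]
(* about 1.98·10⁻³, an order of magnitude larger than the 1.28·10⁻⁴ of Formula (A) *)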
Remark 17. (Open problem) Modify the constants so as to approximate erf(x), applying erf(x) = 2Φ(x√2) - 1 to our Formula (A), possibly avoiding the √2.
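For reference, the straightforward composition that Remark 17 would like to simplify reads, as a minimal Mathematica sketch (the name erf41 is ours):

erf41[x_] := 2*2^(-22^(1 - 41^(x Sqrt[2]/10))) - 1;   (* relation (6) applied to Formula (A) *)
Max[Table[Abs[erf41[x] - Erf[x]], {x, 0., 5., 0.001}]]
(* by (6) this error is exactly twice that of Formula (A) at x·Sqrt[2], hence below about 2.6·10⁻⁴ *)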
References
[1] R.W. Abernathy: Finding Normal Probabilities with Hand held Calcula-
tors, Mathematics Teacher, 81 (1981) 651 - 652.
[2] M. Abramowitz, I.A. Stegun: Handbook of Mathematical Func-
tions. With Formulas, Graphs, and Mathematical Tables. (1964) Na-
tional Bureau of Standards Applied Mathematics Series 55, Tenth
Printing, December 1972, with corrections. Available (free) at
http://people.math.sfu.ca/cbm/aands
[3] G. de Abreu: Jensen-Cotes upper and lower bounds on the Gaussian Q-
function and related functions, IEEE Trans. Commun. 57 (2009), no. 11,
pp. 3328 - 3338.
[4] W. Abu-Dayyeh, M.S. Ahmed: A double inequality for the tail probability
of standard normal distribution, J. Inform. Optim. Sci. 14 (1993), no. 2,
155- 159.
[5] G. Allasia: Approximation of the normal distribution functions by means
of a spline function, Statistica (Bologna) 41 (1981) no. 2, 325 - 332.
[6] K.M. Aludaat, M.T. Alodat: A note on approximating the normal distri-
bution function, Appl. Math. Sci. (Ruse), 2(2008), no. 9-12, 425 - 429.
[7] S.K. Badhe: New approximation of the normal distribution function, Com-
munications in Statistics-Simulation and Computation, 5(1976), no. 4,
173 - 176.
[8] R.J. Bagby: Calculating Normal Probabilities, Amer. Math. Monthly, 102
(1995), no. 1, 46 - 49.
[9] R.K. Bhaduri, B.K. Jennings: Note on the error function, Amer. J. Phys.
44 (1976), no. 6, 590 - 592.
[10] Z.W. Birnbaum: An inequality for Mill’s ratio, Ann. Math. Statistics 13
(1942), 245 - 246.
[11] S.R. Bowling, M.T. Khasawneh, S. Kaewkuekool, B.R. Cho: A logistic
approximation to the cumulative normal distribution, Journal of Industrial
Engineering and Management, 2(2009), no. 1, 114 - 127.
[12] A.V. Boyd: Inequalities for Mills' ratio, Reports of Statistical Application Research, Union of Japanese Scientists and Engineers 6 (1959), no. 2, 44 - 46.
[13] P.O. B¨orjesson, C-E. W. Sundberg: Simple approximations of the error
function Q(x) for communications applications, IEEE Trans. Commun.
Vol. COM-27 (1979), no. 3, 639 - 643.
[14] A.L. Brophy: Accuracy and speed of seven approximations of the normal
distribution function, Behavior Research Methods & Instrumentation 15
(1983), no. 6, 604 - 605.
[15] W. Bryc: A uniform approximation to the right normal tail integral, Appl.
Math. Comput. 127 (2002), no. 2-3, 365 - 374.
[16] I.W. Burr: A useful approximation to the normal distribution function,
with application to simulation, Technometrics 9(1967), no. 4, 647 - 651.
[17] J.H. Cadwell: The Bivariate Normal Integral, Biometrika 38 (1951) 475
- 479.
[18] G.D. Carta: Low-order approximations for the normal probability integral
and the error function, Math. Comp. 29 (1975) 856 - 862.
[19] Chaitanya: A lower bound for the tail probability of a normal distribu-
tion, on-line text (2013), available at http://ckrao.wordpress.com/2013/
10/24/a-lower-bound-for-the-tail-probability-of-a-normal-distribution/
(Read March 2014).
[20] S.-H. Chang, P. C. Cosman, L. B. Milstein: Chernoff-type bounds for the
Gaussian error function, IEEE Trans. Commun. 59 (2011), no. 11, 2939
- 2944.
[21] Y. Chen, N.C. Beaulieu: A simple polynomial approximation to the gaus-
sian Q-function and its application, IEEE Communications Letters 13
(2009), no 2, 124 - 126.
[22] M. Chiani, D. Dardari: Improved exponential bounds and approximation
for the Q-function with application to average error probability compu-
tation, Global Telecommunications Conference, 2002. GLOBECOM ’02.
IEEE, 2(2002), 1399 - 1402.
[23] M. Chiani, D. Dardari, M.K. Simon: New exponential bounds and ap-
proximations for the computation of error probability in fading channels,
IEEE Trans. Wireless Commun. 2(2003) no. 4, pp. 840 - 845.
[24] A. Choudhury: A Simple Approximation to the Area Under Standard
Normal Curve, Mathematics and Statistics 2(2014) no. 3, 147-149.
[25] A. Choudhury, P. Roy: A fairly accurate approximation to the area under
normal curve, Comm. Statist. Simulation Comput. 38 (2009), no. 6-7,
1485 - 1491.
[26] A. Choudhury, S. Ray, P. Sarkar: Approximating the cumulative distribu-
tion function of the normal distribution, J. Statist. Res. 41 (2007), no. 1,
59 - 67.
[27] J.T. Chu: On bounds for the normal integral, Biometrika 42 (1955), 263 - 265.
[28] W.W. Clendenin: Rational approximations for the error function and for
similar functions, Comm. ACM 4(1961) 354 - 355.
[29] W. Cody: Rational Chebyshev Approximations for the Error Function,
Math. Comp. 23 (1969) 631 - 637.
[30] J.D. Cook: Upper and lower bounds for the normal
distribution function, on-line text (2009), available at
http://www.johndcook.com/normalbounds.pdf (Read March 2014).
[31] S.E. Derenzo: Approximations for hand calculators using small integral
coefficients, Math. Comput.31 (1977) 214 - 225.
[32] D.R. Divgi: Calculation of univariate and bivariate normal probability
function,The Annals of Statistics, 7(1979), no. 4, 903 - 910.
[33] Chokri Dridi: A short note on the numerical approximation of the standard normal cumulative distribution and its inverse, REAL 03-T-7, online manuscript (2003). Available at http://128.118.178.162/eps/comp/papers/0212/0212001.pdf (Read 2014, March)
[34] L. Dümbgen: Bounding Standard Gaussian Tail Probabilities, Technical Report 76 (2010) of the Institute of Mathematical Statistics and Actuarial Science of the University of Bern, on-line text, http://arxiv.org/pdf/1012.2063v3.pdf
[35] S.A. Dyer, J.S. Dyer: Approximations to error function, IEEE Instrumen-
tation & Measurement Magazine 10 (2007), no. 6, 45 - 48.
[36] J.S. Dyer, S.A. Dyer: Corrections to, and comments on, ”An Improved
Approximation for the Gaussian Q-Function”, IEEE Commun. Lett. 12
(2008), no. 4, 231.
[37] R.L. Edgeman: Normal distribution probabilities and quantiles without
tables, Mathematics of Computer Education, 22 (1988), no.2, 95 - 99.
[38] P.W.J. Van Eetvelt, S.J. Shepherd: Accurate, computable approximations to the error function, Math. Today (Southend-on-Sea) 40 (2004), no. 1, 25 - 27.
[39] P. Fan: New inequalities of Mill’s ratio and application to the inverse Q-
function approximation, Aust. J. Math. Anal. Appl. 10 (2013), no. 1, 1 -
11.
[40] W. Feller: An introduction to probability theory and its applications.
Vol. 1 (1968). Third edition. John Wiley & Sons, Inc., New York-London-
Sydney.
[41] Hua Fu, Ming-Wei Wu, Pooi-Yuen Kam: Explicit, closed-form perfor-
mance analysis in fading via new bound on Gaussian Q-function, IEEE
International Conference on Communications (ICC), (2013), 5819 - 5823.
[42] Hua Fu, Ming-Wei Wu, Pooi-Yuen Kam: Lower Bound on Averages of
the Product of L Gaussian Q-Functions over Nakagami-m Fading, 77th
IEEE Vehicular Technology Conference (VTC Spring), (2013), 1 - 5.
[43] R.D. Gordon: Values of Mill’s ratio of area to bounding ordinate of the
normal probability integral for large values of the argument, Ann. Math.
Statistics 12 (1941), no. 3, 364 -366.
[44] P. Van Halen: Accurate analytical approximations for error function and
its integral (semiconductor junctions), Electronics Letters 25 (1989), no.
9, 561 - 563.
[45] H. Hamaker: Approximating the cumulative normal distribution and its
inverse, Appl. Statist. 27 (1978), 76 - 77.
[46] J.F. Hart, E.W. Cheney, C.L. Lawson, H.J. Maehly, C.K. Mesztenyi, J.R.
Rice, H.C. Thacher Jr., C. Witzgall: Computer Approximations, SIAM
series in applied mathematics, (1968) John Wiley & Sons, Inc., New York
- London - Sydney.
[47] R.G. Hart: A formula for the approximation of definite integrals of the
normal distribution function, Math. Tables Aids Comput. 11 (1957), 265.
[48] R.G. Hart: A close approximation related to the error function, Math.
Comp. 20 (1966), 600 - 602.
[49] C. Hastings, Jr.: Approximations for digital computers, (1955) Princeton
University Press, Princeton, N.J.
[50] A.G. Hawkes: Approximating the Normal Tail, The Statistician 3(1982),
no. 3, 231 - 236.
[51] T.J. Heard: Approximation to the normal distribution function, Mathe-
matical Gazette, 63 (1979) 39 - 40.
[52] J.P. Hoyt: A Simple Approximation to the Standard Normal Probability
Density Function, (in The Teacher’s Caorner of) Amer. Statist. 22 (1968)
no. 2, 25 - 26.
[53] Y. Isukapalli, B. D. Rao: An analytically tractable approximation for the
Gaussian Q-function, IEEE Commun. Lett. 12 (2008), no. 9, 669 - 671.
[54] N. Jaspen: The Calculation of Probabilities Corresponding To Values of
z, t, F, and Chi-Square, Educational and Psychological Measurement 25
(1965), no. 3, 877 - 880.
[55] N.L. Johnson, S. Kotz, N. Balakrishnan: Continuous Univariate Distri-
butions, (1994) Vol. 1, 2nd ed. Boston, MA: Houghton Mifflin.
[56] G.K. Karagiannidis, A.S. Lioumpas: An Improved Approximation for the
Gaussian Q-Function, IEEE Communications Letters, 11 (2007), no. 8,
644 - 646.
[57] R.P. Kenan: Comment on ”Approximate closed form solution for the error
function”, Am. J. of Phys. 44 (1976), no. 6, 592 - 592.
[58] D.F. Kerridge, G.W. Cook: Yet another series for the normal integral,
Biometrika 63 (1976), 401 - 407.
[59] C.G. Khatri: On Certain Inequalities for Normal Distributions and their
Applications to Simultaneous Confidence Bounds, The Annals of Mathe-
matical Statistics 38 (1967), no. 6, 1853 - 1867.
[60] M. Kiani, J. Panaretos, S. Psarakis, M. Saleem: Approximations to the
normal distribution function and an extended table for the mean range of
the normal variables, J. Iran. Stat. Soc. (JIRSS) 7(2008), no. 1-2, 57 -
72.
[61] N. Kingsbury: Approximation Formulae for the Gaus-
sian Error Integral, Q(x), on-line text, available (free) at
http://cnx.org/content/m11067/2.4/ (read Nov 2011)
[62] R.A. Lew: An approximation to the cumulative normal distribution with
simple coefficients, Applied Statistics, 30 (1981), no. 3, 299 - 301.
[63] J.T. Lin: Alternative to Hamaker's Approximations to the Cumulative Normal Distribution and its Inverse, J. Roy. Statist. Soc. Ser. D 37 (1988), no. 4-5, 413 - 414.
[64] J.T. Lin: Approximating the normal tail probability and its inverse for use
on a pocket calculator, J. Roy. Statist. Soc. Ser. C38 (1989), no. 1, 69 -
70.
[65] J. T. Lin: A simpler logistic approximation to the normal tail probability
and its inverse, J. Roy. Statist. Soc. Ser. C39 (1990), no. 2, 255 - 257.
[66] J.M. Linhart: Algorithm 885: Computing the Logarithm of the Normal
Distribution, ACM Transactions on Mathematical Software (TOMS) 35
(2008), no. 3, Article No. 20.
[67] P. Loskot, N. Beaulieu: Prony and Polynomial Approximations for Eval-
uation of the Average Probability of Error Over Slow-Fading Channels,
IEEE Trans. Vehic. Tech. 58 (2009), no 3, 1269 - 1278.
[68] G. Marsaglia: Evaluating the Normal Distribution, Journal of Statistical
Software, 11, (2004), no. 4, 1 - 11.
[69] G.V. Martynov: Evaluation of the normal distribution function, Journal
of Soviet Mathematics, 17 (1981) 1857 - 1876.
[70] F. Matta, A. Reichel: Uniform computation of the error function and
other related functions, Math. Comp. 25 (1971), 339 - 344.
[71] C.R. McConnell: Pocket-computer approximation for areas under the
standard normal curve, (in Letters To The Editor of) Amer. Statist. 44
(1990), no. 1, 63.
[72] R.Menzel: Approximate closed form solution to the error function, Amer.
J. Phys. 43 (1976), no. 4, 366 - 367.
[73] R. Milton, R. Hotchkiss: Computer evaluation of the normal and inverse
normal distribution functions, Technometrics 11 (1969), no. 4, 817 - 822.
[74] J.F. Monahan: Approximation the log of the normal cumulative, Com-
puter Science and Statistic: Proceeding of the Thirteenth Symposium on
the Interface W. F. Eddy (editor), (1981) 304 - 307, New York. Springer-
Verlag.
[75] P.A.P. Moran: Calculation of the normal distribution function, Biometrika
67 (1980), no. 3, 675 - 676.
[76] M. Mori: A method for evaluation of the error function of real and complex
variable with high relative accuracy, Publ. Res. Inst. Math. Sci. 19 (1983),
no. 3, 1081 - 1094.
[77] R.M. Norton: Pocket-calculator approximation for areas under the stan-
dard normal curve, (in The Teacher’s Corner of) Amer. Statist. 43 (1989),
no. 1, 24 - 26.
[78] O. Olabiyi, A. Annamalai: New exponential-type approximations for the
erfc(.) and erfcp(.) functions with applications, Wireless Communications
and Mobile Computing Conference (IWCMC), 8th International (2012),
1221 - 1226.
[79] O. Olabiyi, A. Annamalai: Invertible Exponential-Type Approximations
for the Gaussian Probability Integral Q(x) with Applications, IEEE Wire-
less Communications Letters 1(2012), no. 5, 544 - 547.
[80] E. Page: Approximation to the cumulative normal function and its inverse
for use on a pocket calculator, Appl. Statist. 26 (1977), 75 - 76.
[81] S.L. Parsonson: An approximation to the normal distribution function,
Mathematical Gazette, 62 (1978) 118 - 121.
[82] J.K. Patel, C.B. Read: Handbook of the Normal Distribution. CRC Press
(1996).
[83] G. Pólya: Remarks on computing the probability integral in one and two dimensions, Proceedings of the Berkeley Symposium on Mathematical Statistics and Probability (1945, 1946), pp. 63 - 78, University of California Press, Berkeley and Los Angeles, 1949.
[84] G.A. Pugh: Computing with Gaussian Distribution: A Survey of Algo-
rithms, ASQC Technical Conference Transactions, 46 (1989) 496-501.
[85] D.R. Raab, E.H. Green: A cosine approximation to the normal distribu-
tion, Psychometrika 26 (1961), 447 - 450.
[86] C. Ren, A.R. MacKenzie: Closed-form approximations to the error and
complementary error functions and their applications in athmospheric sci-
ence, Atmospheric Science Letters 8(2007), 70 - 73.
[87] K.J.A. Revfeim: More approximations for the cumulative and inverse nor-
mal distribution, (in Letters To The Editor of) Amer. Statist. 44 (1990),
no. 1, 63.
[88] M.R. Sampford: Some Inequalities on Mill’s Ratio and Related Functions
The Annals of Mathematical Statistics 24 (1953), no. 1, 130 - 132.
[89] W.R. Schucany, H.L. Gray: A New Approximation Related to the Error
Function, Math. Comp. 22 (1968), 201 - 202.
[90] C.R. Selvakumar: Approximations to complementary error function by
method of least squares, Proceedings of the IEEE 70 (1982), no. 4, 410 -
413.
[91] A.K. Shah: A Simpler Approximation for Areas Under the Standard Nor-
mal Curve, (in Commentaries of) Amer. Statist. 39, (1985), no. 1, p.
80.
[92] H. Shore: Response modeling methodology (RMM)-current statistical dis-
tributions, transformations and approximations as special cases of RMM,
Communications in Statistics-Theory and Methods, 33 (2004), no. 7, 1491
- 1510.
[93] H. Shore: Accurate RMM-based approximations for the CDF of the normal distribution, Communications in Statistics - Theory and Methods, 34 (2005), 507 - 513.
[94] A. Soranzo, A. Epure: Simply Explicitly Invertible Approximations to 4
Decimals of Error Function and Normal Cumulative Distribution Func-
tion, (2012), on-line text, (arXiv:1201.1320 [stat.CO]) available (free) at
http://arxiv.org/abs/1201.1320v1 and www.intellectualarchive.com (se-
lecting Mathematics).
[95] A. Soranzo, E. Epure: Practical Explicitly Invertible Approximation
to 4 Decimals of Normal Cumulative Distribution Function Modifying
Winitzki’s Approximation of erf, (2012), on-line text, (arXiv:1211.6403
[math.ST]) available (free) at http://arxiv.org/abs/1211.6403
[96] A.J. Strecock: On the calculation of the inverse of the error function,
Mathematics of Computation, 22 (1968), no. 101, 144 - 158.
[97] S.J. Szarek, E. Werner: A Nonsymmetric Correlation Inequality for
Gaussian Measure, Journal of Multivariate Analysis 68 (1999), no. 2,
193 - 211.
[98] R.F. Tate: On a double inequality of the normal distribution, Ann. Math.
Statistics 24 (1953), 132 - 134.
[99] G.J. Tee: Further approximations to the error function, Math. Today
(Southend-on-Sea) 41 (2005), no. 2, 58 - 60.
[100] K.D. Tocher: The art of simulation. (1963) English Universities Press,
London.
[101] J.D. Vedder: Simple approximations for the error function and its in-
verse, Amer. J. Phys. 55 (1987), no. 8, p. 762 - 763.
[102] J.D. Vedder: An invertible approximation to the normal-distribution
function, Comput. Statist. Data Anal. 16 (1993), 119 - 123.
[103] G.R. Waissi, D.F. Rossin: A sigmoid approximation of the standard nor-
mal integral, Appl. Math. Comput. 77 (1996), 91 - 95.
[104] E.W. Weisstein: Normal Distribution Function, MathWorld - A Wolfram Web Resource, on-line text, http://mathworld.wolfram.com/NormalDistributionFunction.html (Read 2014, March)
[105] G. West: Better approximations to cumulative normal functions,
Wilmott Magazine (2005), 70 - 76.
[106] Wikipedia: Error Function, on-line text, http://en.wikipedia.org/wiki/
Error function (read 2014, March)
[107] Wikipedia: Normal distribution, on-line text, http://en.wikipedia.org/
wiki/Normal distribution (read 2014, March)
[108] J.D. Williams: An approximation to the probability integral, Ann. Math.
Statistics 17 (1946). 363 - 365.
[109] S. Winitzki: A handy approximation for the error function and its inverse,
on-line text, http://docs.google.com/viewer?a=v&pid=sites&srcid=ZGV
mYXVsdGRvbWFpbnx3aW5pdHpraXxneDoxYTUzZTEzNWQwZjZlO
WY2 (read 2014, May),
[110] M. Wu, X. Lin, P.-Y. Kam: New Exponential Lower Bounds on the Gaus-
sian Q-Function via Jensen’s Inequality, IEEE 3rd Vehicular Technology
Conference (VTC Spring), (2011), 1 - 5.
[111] R. Yerukala, N.K. Boiroju, M.K. Reddy: An Approximation to the Cdf
of Standard Normal Distribution, (2011) International Journal of Mathe-
matical Archive 2(2011) no. 7, 1077 - 1079.
[112] B.I. Yun: Approximation to the cumulative normal distribution using
hyperbolic tangent based functions, J. Korean Math. Soc. 46 (2009), no.
6, 1267 - 1276.
[113] M. Zelen, N.C. Severo: Probability Functions. In: Handbook of Math-
ematical Functions, Ed. M. Abramowitz and I.A. Stegun, Washington
D.C.: Department of US Government. (1966).
[114] I.H. Zimmerman: Extending Menzel’s closed-form approximation for the
error function, Am. J. Phys. 44 (1976), no. 6, 592-593
[115] B. Zogheib, M. Hlynka: Approximations of the Standard Normal Dis-
tribution, University of Windsor, Dept. of Mathematics and Statistics,
(2009), on-line text,
Received: May 11, 2014