A Convenient Way of Generating Gamma
Random Variables Using Generalized
Exponential Distribution
Debasis Kundu^1 & Rameshwar D. Gupta^2
Abstract
In this paper we propose a very convenient way to generate gamma random variables
using the generalized exponential distribution when the shape parameter lies between 0
and 1. The new method is compared with the most popular Ahrens & Dieter method
and the method proposed by Best. Like the Ahrens & Dieter and Best methods, our
method also uses the acceptance-rejection principle, but it is observed that our method
has a greater acceptance proportion than either the Ahrens & Dieter or the Best method.
Key Words and Phrases: Generalized exponential distribution; Random number gener-
ator; Gamma generator; Ahrens & Dieter method; Best method.
Short Running Title: Generating gamma random numbers
Address of correspondence: Debasis Kundu, e-mail: kundu@iitk.ac.in, Phone no. 91-
512-2597141, Fax no. 91-512-2597500.
^1 Department of Mathematics and Statistics, Indian Institute of Technology Kanpur, Pin
208016, India.
^2 Department of Computer Science and Applied Statistics, The University of New Brunswick,
Saint John, Canada, E2L 4L5. Part of the work was supported by a grant from the Natural
Sciences and Engineering Research Council.
1 Introduction
Generating gamma random numbers is an old and very important problem in the statistical
literature. In recent years it has gained even more importance because of the popularity of
MCMC techniques. Several methods are available in the literature to generate gamma
random numbers; the book of Johnson et al. [11] provides an extensive list of references to
the different methods available today. It is well known that the available algorithms can be
divided into two distinct cases. Case 1: shape parameter < 1; Case 2: shape parameter > 1.
Although several methods are available for Case 2, for Case 1 mainly two methods are well
known: (a) the most popular and very simple method proposed by Ahrens & Dieter [1], and
(b) the modified Ahrens & Dieter method proposed by Best [3]; see, for example, the books
by Law and Kelton [12] or Fishman [4]. Both methods rely on majorization functions and
the acceptance-rejection principle. It is known that Best's method has a lower rejection
proportion than Ahrens & Dieter's method. Since nowadays, particularly for MCMC sampling,
a large number of gamma random numbers often needs to be generated, one naturally prefers
an acceptance-rejection method with a lower rejection proportion, so that fewer uniform
deviates are consumed per gamma variate.
Recently the generalized exponential (GE) distribution has been proposed and studied quite
extensively by Gupta and Kundu [5, 6, 7, 8, 9]. The reader may also be referred to the
related literature on the GE distribution by Raqab [13], Raqab and Ahsanullah [14], Raqab
and Madi [15] and Zheng [16]. The two-parameter GE distribution, for α, λ > 0, has the
following distribution function:
F_{GE}(x;\alpha,\lambda) =
\begin{cases}
0 & \text{if } x < 0, \\
\left(1 - e^{-\lambda x}\right)^{\alpha} & \text{if } x > 0.
\end{cases}
\tag{1}
The corresponding density function is:

f_{GE}(x;\alpha,\lambda) =
\begin{cases}
0 & \text{if } x < 0, \\
\alpha\lambda\left(1 - e^{-\lambda x}\right)^{\alpha-1} e^{-\lambda x} & \text{if } x > 0.
\end{cases}
\tag{2}
Here α and λ are the shape and scale parameters, respectively. When α = 1, the GE
distribution coincides with the exponential distribution. For α ≤ 1 the density function of a
GE distribution is strictly decreasing, and for α > 1 the density function is unimodal. The
shape of the density function of the GE distribution for different α can be found in Gupta
and Kundu [6].
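Since the distribution function (1) has a closed-form inverse, GE random variates can be generated directly by inversion. A minimal Python sketch illustrates this (the function name ge_rvs is our own label, not from the paper):

```python
import math
import random

def ge_rvs(alpha, lam, rng=random):
    """One GE(alpha, lam) variate by inverting u = (1 - exp(-lam*x))**alpha."""
    u = rng.random()
    return -math.log(1.0 - u ** (1.0 / alpha)) / lam
```

For α = 1 this reduces to the familiar exponential inversion −ln(1 − U)/λ.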
In a recent paper, Gupta and Kundu [10] observed that the distribution functions of the GE
and gamma distributions can be very close, and that it is sometimes very difficult to
distinguish between the two. In fact, using this closeness property, the authors proposed
generating approximate gamma random variables from the GE distribution for certain ranges
of the shape parameter.
In this paper we mainly consider the generation of gamma random deviates for the shape
parameter less than one. We denote the density function of a gamma random variable with
scale parameter 1 and shape parameter α as
f_{GA}(x;\alpha) = \frac{1}{\Gamma(\alpha)}\, x^{\alpha-1} e^{-x}, \qquad x > 0.
\tag{3}
From now on, we take 0 < α < 1, unless otherwise mentioned. Note that it is enough to
consider generation for scale parameter equal to 1. It is observed that when the shape
parameter is less than one, a constant multiple of the GE density function can be used as a
majorization function. Therefore, using the acceptance-rejection principle, gamma random
deviates can be easily generated using GE random numbers. We further make some simple
modifications to the proposed generator. It is observed that our proposed method has a
greater acceptance proportion than the Ahrens & Dieter and Best generators for all 0 < α < 1.
The rest of the paper is organized as follows. In section 2 we briefly describe the two
most popular methods. The proposed methods are discussed in section 3. The numerical
comparisons are provided in section 4 and finally we conclude the paper in section 5.
2 The Two Most Popular Methods

In this section we briefly describe the two most popular methods which have been used to
generate gamma random numbers when the shape parameter is less than 1. The first method
was proposed by Ahrens & Dieter [1] and the second one by Best [3]. Both methods are
based on the acceptance-rejection principle with a proper choice of majorization function.
Ahrens & Dieter [1] used the following majorization function:

t_{AD}(x;\alpha) =
\begin{cases}
\dfrac{x^{\alpha-1}}{\Gamma(\alpha)} & \text{if } 0 < x < 1, \\[1ex]
\dfrac{1}{\Gamma(\alpha)}\, e^{-x} & \text{if } x > 1.
\end{cases}
\tag{4}

Since c_{AD} = \int_0^{\infty} t_{AD}(x;\alpha)\,dx = \dfrac{e + \alpha}{e\,\Gamma(\alpha+1)}, one needs to generate a random deviate from the following probability density function:

r_{AD}(x;\alpha) = \frac{t_{AD}(x;\alpha)}{\int_0^{\infty} t_{AD}(x;\alpha)\,dx} =
\begin{cases}
\dfrac{\alpha e\, x^{\alpha-1}}{e + \alpha} & \text{if } 0 < x < 1, \\[1ex]
\dfrac{\alpha e}{e + \alpha}\, e^{-x} & \text{if } x > 1,
\end{cases}

and the gamma random deviate can then be easily obtained by the standard acceptance-rejection method. Note that in this case the rejection proportion is c_{AD} − 1.
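As an illustration, the whole scheme fits in a few lines of Python (the function name gamma_ad is ours; this is a sketch of the classical Ahrens-Dieter construction). The candidate is drawn from r_AD by inversion, and the acceptance probability f_GA/t_AD equals e^{-x} on (0, 1) and x^{α−1} on (1, ∞):

```python
import math
import random

def gamma_ad(alpha, rng=random):
    """Gamma(alpha, 1) variate, 0 < alpha < 1, by Ahrens-Dieter rejection."""
    b = (math.e + alpha) / math.e  # = c_AD * Gamma(alpha + 1)
    while True:
        p = b * rng.random()
        if p <= 1.0:
            x = p ** (1.0 / alpha)              # candidate from x^(alpha-1) on (0, 1)
            if rng.random() <= math.exp(-x):    # accept with probability e^{-x}
                return x
        else:
            x = -math.log((b - p) / alpha)      # candidate from e^{-x} on (1, inf)
            if rng.random() <= x ** (alpha - 1.0):  # accept with probability x^{alpha-1}
                return x
```

The single uniform p serves double duty: the event p ≤ 1 occurs with probability e/(e + α), exactly the mass r_AD places on (0, 1).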
Best [3] modified Ahrens & Dieter's [1] method by using the following majorization function:

t_{B}(x;\alpha) =
\begin{cases}
\dfrac{x^{\alpha-1}}{\Gamma(\alpha)} & \text{if } 0 < x < d, \\[1ex]
\dfrac{d^{\alpha-1}}{\Gamma(\alpha)}\, e^{-x} & \text{if } x > d.
\end{cases}
\tag{5}

Here d should be chosen so that c_{B} = \int_0^{\infty} t_{B}(x;\alpha)\,dx is minimum. It can easily be seen that in that case d must satisfy the following non-linear equation:

d = e^{-d}(1 - \alpha + d).

Best suggested using the approximation d = 0.07 + 0.75\sqrt{1-\alpha}. Once d is known, the acceptance-rejection method can easily be used to generate a gamma random deviate using
the majorization function (5). It boils down to generating a random deviate from the following probability density function:

r_{B}(x;\alpha) = \frac{t_{B}(x;\alpha)}{\int_0^{\infty} t_{B}(x;\alpha)\,dx} =
\begin{cases}
\dfrac{\alpha x^{\alpha-1}}{b\, d^{\alpha}} & \text{if } 0 < x < d, \\[1ex]
\dfrac{\alpha}{b\, d}\, e^{-x} & \text{if } x > d,
\end{cases}

where b = 1 + \dfrac{\alpha e^{-d}}{d}. It has been shown theoretically that Best's method has a lower rejection proportion than Ahrens & Dieter's method.
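A sketch of Best's scheme in the same style (the function name gamma_best is ours), using the approximate change point d above; the acceptance probability f_GA/t_B equals e^{-x} on (0, d) and (d/x)^{1−α} on (d, ∞):

```python
import math
import random

def gamma_best(alpha, rng=random):
    """Gamma(alpha, 1) variate, 0 < alpha < 1, by Best's rejection method."""
    d = 0.07 + 0.75 * math.sqrt(1.0 - alpha)  # Best's approximation to the optimal d
    b = 1.0 + math.exp(-d) * alpha / d
    while True:
        v = b * rng.random()
        if v <= 1.0:
            x = d * v ** (1.0 / alpha)            # candidate from x^(alpha-1) on (0, d)
            if rng.random() <= math.exp(-x):
                return x
        else:
            x = -math.log(d * (b - v) / alpha)    # candidate from e^{-x} on (d, inf)
            if rng.random() <= (d / x) ** (1.0 - alpha):
                return x
```

The event v ≤ 1 occurs with probability 1/b, which is exactly the mass r_B places on (0, d).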
3 Proposed Methodology
In this section, we provide the new gamma random number generator using the generalized
exponential distribution. Observe that for all x ≥ 0,
1 - e^{-x} \le x.
\tag{6}

Suppose β = 1 − α; then for all x ≥ 0, using (6), we have

\frac{x^{\alpha-1} e^{-x}}{\left(1 - e^{-x}\right)^{\alpha-1}} = \frac{e^{-x}\left(1 - e^{-x}\right)^{\beta}}{x^{\beta}} \le e^{-x}.
\tag{7}
Therefore, applying (7) with x replaced by x/2,

\frac{\frac{1}{\Gamma(\alpha)}\, x^{\alpha-1} e^{-x}}{\frac{\alpha}{2}\left(1 - e^{-x/2}\right)^{\alpha-1} e^{-x/2}} = \frac{2\, e^{-x/2}\, x^{\alpha-1}}{\Gamma(\alpha+1)\left(1 - e^{-x/2}\right)^{\alpha-1}} \le \frac{2^{\alpha}}{\Gamma(\alpha+1)}.
\tag{8}

From (8), we obtain

f_{GA}(x;\alpha) \le \frac{2^{\alpha}}{\Gamma(\alpha+1)}\, f_{GE}\!\left(x;\alpha,\tfrac{1}{2}\right).
\tag{9}

Based on (9), we provide the following algorithm.
Algorithm 1:

1. Generate U from uniform (0,1).

2. Compute X = -2\ln\left(1 - U^{1/\alpha}\right).

3. Generate V from uniform (0,1), independent of U.

4. If V \le \dfrac{X^{\alpha-1} e^{-X/2}}{2^{\alpha-1}\left(1 - e^{-X/2}\right)^{\alpha-1}}, accept X; otherwise go to 1.
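Algorithm 1 translates directly into Python (the function name gamma_ge_alg1 is ours): step 2 is GE(α, 1/2) sampling by inversion, and step 4 is the acceptance check derived in (8):

```python
import math
import random

def gamma_ge_alg1(alpha, rng=random):
    """Gamma(alpha, 1) variate, 0 < alpha < 1, via the GE(alpha, 1/2) envelope."""
    while True:
        u = rng.random()
        x = -2.0 * math.log(1.0 - u ** (1.0 / alpha))  # GE(alpha, 1/2) by inversion
        v = rng.random()
        ratio = (x ** (alpha - 1.0) * math.exp(-x / 2.0)
                 / (2.0 ** (alpha - 1.0)
                    * (1.0 - math.exp(-x / 2.0)) ** (alpha - 1.0)))
        if v <= ratio:
            return x
```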
Although (9) is true for all x > 0, it is observed that the bound provided by (9) is quite
sharp for 0 < x < 1, but not very sharp for 1 < x < ∞. Therefore, we propose the following
majorization function t1(x;α) of fGA(x;α), i.e., for all x > 0, fGA(x;α) ≤ t1(x;α), where
t_{1}(x;\alpha) =
\begin{cases}
\dfrac{2^{\alpha}}{\Gamma(\alpha+1)}\, f_{GE}\!\left(x;\alpha,\tfrac{1}{2}\right) & \text{if } 0 < x < 1, \\[1ex]
\dfrac{1}{\Gamma(\alpha)}\, e^{-x} & \text{if } x > 1.
\end{cases}
\tag{10}

Note that

\int_0^{\infty} t_{1}(x;\alpha)\,dx = \frac{1}{\Gamma(\alpha+1)} \left[ 2^{\alpha} \left(1 - e^{-\frac{1}{2}}\right)^{\alpha} + \alpha e^{-1} \right] = c_{1} \ \text{(say)}.

We need to generate from the following density function:

r_{1}(x;\alpha) = \frac{1}{c_{1}}\, t_{1}(x;\alpha), \qquad x > 0,

which has the following distribution function:

R_{1}(x;\alpha) =
\begin{cases}
0 & \text{if } x < 0, \\[1ex]
\dfrac{2^{\alpha}}{c_{1}\,\Gamma(\alpha+1)} \left(1 - e^{-x/2}\right)^{\alpha} & \text{if } 0 < x < 1, \\[1ex]
1 - \dfrac{1}{c_{1}\,\Gamma(\alpha)}\, e^{-x} & \text{if } x > 1.
\end{cases}
\tag{11}
Now we propose the following modified algorithm.
Algorithm 2:

Set a = \dfrac{\left(1 - e^{-1/2}\right)^{\alpha}}{\left(1 - e^{-1/2}\right)^{\alpha} + \alpha e^{-1}/2^{\alpha}} and b = \left(1 - e^{-1/2}\right)^{\alpha} + \dfrac{\alpha e^{-1}}{2^{\alpha}}.

1. Generate U from uniform (0,1).

2. If U \le a, then X = -2\ln\left(1 - (Ub)^{1/\alpha}\right); otherwise X = -\ln\left(\dfrac{2^{\alpha}\, b\,(1-U)}{\alpha}\right).

3. Generate V from uniform (0,1). If X \le 1, check whether V \le \dfrac{X^{\alpha-1} e^{-X/2}}{2^{\alpha-1}\left(1 - e^{-X/2}\right)^{\alpha-1}}; if true, return X, otherwise go back to 1. If X > 1, check whether V \le X^{\alpha-1}; if true, return X, otherwise go back to 1.
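A Python sketch of Algorithm 2 (the function name gamma_ge_alg2 is ours); the branch chosen by U ≤ a inverts the corresponding piece of R_1 in (11):

```python
import math
import random

def gamma_ge_alg2(alpha, rng=random):
    """Gamma(alpha, 1) variate, 0 < alpha < 1, by Algorithm 2."""
    w = (1.0 - math.exp(-0.5)) ** alpha
    b = w + alpha * math.exp(-1.0) / 2.0 ** alpha
    a = w / b
    while True:
        u = rng.random()
        if u <= a:
            # GE piece of the envelope: 0 < x <= 1
            x = -2.0 * math.log(1.0 - (u * b) ** (1.0 / alpha))
            ratio = (x ** (alpha - 1.0) * math.exp(-x / 2.0)
                     / (2.0 ** (alpha - 1.0)
                        * (1.0 - math.exp(-x / 2.0)) ** (alpha - 1.0)))
        else:
            # exponential tail of the envelope: x > 1
            x = -math.log(2.0 ** alpha * b * (1.0 - u) / alpha)
            ratio = x ** (alpha - 1.0)
        if rng.random() <= ratio:
            return x
```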
Note that the majorization function t1(x;α) is a two-part envelope with the change point
at 1, similar to Ahrens and Dieter's [1] algorithm. However, it was suggested by Atkinson
and Pearce [2], and later implemented by Best [3], that letting the change point depend on
the shape parameter α enhances the performance. Using that idea, we propose the following
majorization function t2(x;α) such that fGA(x;α) ≤ t2(x;α) for all x > 0, where

t_{2}(x;\alpha) =
\begin{cases}
\dfrac{2^{\alpha}}{\Gamma(\alpha+1)}\, f_{GE}\!\left(x;\alpha,\tfrac{1}{2}\right) & \text{if } 0 < x < d_{\alpha}, \\[1ex]
\dfrac{d_{\alpha}^{\alpha-1}}{\Gamma(\alpha)}\, e^{-x} & \text{if } x > d_{\alpha},
\end{cases}
\tag{12}
where dα is a change point to be chosen; how to choose the optimum dα will be discussed
shortly. Note that for dα = 1, t1(x;α) = t2(x;α). For 0 < α < 1, the majorization is obvious.
Now

\int_0^{\infty} t_{2}(x;\alpha)\,dx = \frac{1}{\Gamma(\alpha+1)} \left[ 2^{\alpha} \left(1 - e^{-\frac{d_{\alpha}}{2}}\right)^{\alpha} + \alpha\, d_{\alpha}^{\alpha-1} e^{-d_{\alpha}} \right] = c_{2} \ \text{(say)}.
Therefore, we need to generate from the following distribution function:

R_{2}(x;\alpha) =
\begin{cases}
0 & \text{if } x < 0, \\[1ex]
\dfrac{2^{\alpha}}{c_{2}\,\Gamma(\alpha+1)} \left(1 - e^{-x/2}\right)^{\alpha} & \text{if } 0 < x < d_{\alpha}, \\[1ex]
1 - \dfrac{d_{\alpha}^{\alpha-1}}{c_{2}\,\Gamma(\alpha)}\, e^{-x} & \text{if } x > d_{\alpha}.
\end{cases}
\tag{13}
Now we discuss the optimum choice of dα. Note that c2 ≥ 1 is the normalizing constant
and c2 − 1 denotes the rejection proportion. Therefore, we should choose dα so that c2 is
minimum for a given α; it has to be obtained iteratively by solving a non-linear equation.
We denote the optimum dα as d^o_α. We have performed a non-linear regression between α and
d^o_α, and based on that we suggest the following approximation d^*_α to the optimum d^o_α:

d^{*}_{\alpha} = 1.0334 - 0.0766\, e^{2.2942\,\alpha}.
\tag{14}
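The optimum dα can be checked numerically by minimizing Γ(α+1)c2 as a function of the change point d. A sketch using ternary search (the helper names are ours, and unimodality of the objective in d is an assumption we verified only empirically):

```python
import math

def c2_scaled(d, alpha):
    """Gamma(alpha + 1) * c2 as a function of the change point d."""
    return (2.0 ** alpha * (1.0 - math.exp(-d / 2.0)) ** alpha
            + alpha * d ** (alpha - 1.0) * math.exp(-d))

def optimal_d(alpha, lo=1e-6, hi=5.0, iters=200):
    """Ternary search for the minimizing d (c2_scaled assumed unimodal in d)."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if c2_scaled(m1, alpha) < c2_scaled(m2, alpha):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

# Compare the numerical optimum with approximation (14) at alpha = 0.5:
d_opt = optimal_d(0.5)
d_star = 1.0334 - 0.0766 * math.exp(2.2942 * 0.5)
```

At α = 0.5 both the numerical optimum and the approximation (14) give Γ(α+1)c2 ≈ 1.063, in agreement with the c3 and c4 entries of Table 1.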
We therefore have the final algorithm.

Algorithm 3:

Set d = 1.0334 - 0.0766\, e^{2.2942\,\alpha}, \quad a = 2^{\alpha}\left(1 - e^{-d/2}\right)^{\alpha}, \quad b = \alpha\, d^{\alpha-1} e^{-d}, \quad c = a + b.

1. Generate U from uniform (0,1).

2. If U \le \dfrac{a}{a+b}, then X = -2\ln\left(1 - \dfrac{(cU)^{1/\alpha}}{2}\right); otherwise X = -\ln\left(\dfrac{c\,(1-U)}{\alpha\, d^{\alpha-1}}\right).

3. Generate V from uniform (0,1). If X \le d, check whether V \le \dfrac{X^{\alpha-1} e^{-X/2}}{2^{\alpha-1}\left(1 - e^{-X/2}\right)^{\alpha-1}}; if true, return X, otherwise go back to 1. If X > d, check whether V \le \left(\dfrac{d}{X}\right)^{1-\alpha}; if true, return X, otherwise go back to 1.
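A Python sketch of Algorithm 3 (the function name gamma_ge_alg3 is ours), combining the set-up constants with the two-branch inversion and the acceptance checks:

```python
import math
import random

def gamma_ge_alg3(alpha, rng=random):
    """Gamma(alpha, 1) variate, 0 < alpha < 1, by Algorithm 3."""
    d = 1.0334 - 0.0766 * math.exp(2.2942 * alpha)   # approximation (14)
    a = 2.0 ** alpha * (1.0 - math.exp(-d / 2.0)) ** alpha
    b = alpha * d ** (alpha - 1.0) * math.exp(-d)
    c = a + b
    while True:
        u = rng.random()
        if u <= a / c:
            # GE piece of the envelope: 0 < x <= d
            x = -2.0 * math.log(1.0 - (c * u) ** (1.0 / alpha) / 2.0)
            ratio = (x ** (alpha - 1.0) * math.exp(-x / 2.0)
                     / (2.0 ** (alpha - 1.0)
                        * (1.0 - math.exp(-x / 2.0)) ** (alpha - 1.0)))
        else:
            # exponential tail of the envelope: x > d
            x = -math.log(c * (1.0 - u) / (alpha * d ** (alpha - 1.0)))
            ratio = (d / x) ** (1.0 - alpha)
        if rng.random() <= ratio:
            return x
```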
4 Numerical Comparison

In this section we compare the different methods numerically. All the methods discussed
here are based on the acceptance-rejection principle. We compare them in terms of their
rejection proportions or, equivalently, the expected number of uniform random deviates
needed to produce one gamma variate; see Law and Kelton [12]. We report the different
expected numbers below:
1. Ahrens and Dieter method:

c_{AD} = \frac{1 + e^{-1}\alpha}{\Gamma(\alpha+1)}.

2. Best method:

c_{B} = \frac{\left(d + e^{-d}\alpha\right) d^{\alpha-1}}{\Gamma(\alpha+1)}.

3. Algorithm 1:

c_{1} = \frac{2^{\alpha}}{\Gamma(\alpha+1)}.

4. Algorithm 2:

c_{2} = \frac{1}{\Gamma(\alpha+1)} \left[ 2^{\alpha} \left(1 - e^{-\frac{1}{2}}\right)^{\alpha} + \alpha e^{-1} \right].

5. Algorithm 3:

c_{3} = \frac{1}{\Gamma(\alpha+1)} \left[ 2^{\alpha} \left(1 - e^{-\frac{d^{*}_{\alpha}}{2}}\right)^{\alpha} + \alpha \left(d^{*}_{\alpha}\right)^{\alpha-1} e^{-d^{*}_{\alpha}} \right].
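All five quantities are simple closed-form expressions, so the rows of Table 1 can be reproduced directly; a short Python sketch (the helper name expected_numbers is ours):

```python
import math

def expected_numbers(alpha):
    """Return Gamma(alpha+1) * (c_AD, c_B, c_1, c_2, c_3) for 0 < alpha < 1."""
    c_ad = 1.0 + math.exp(-1.0) * alpha
    d_b = 0.07 + 0.75 * math.sqrt(1.0 - alpha)          # Best's change point
    c_b = (d_b + math.exp(-d_b) * alpha) * d_b ** (alpha - 1.0)
    c1 = 2.0 ** alpha
    c2 = 2.0 ** alpha * (1.0 - math.exp(-0.5)) ** alpha + alpha * math.exp(-1.0)
    d3 = 1.0334 - 0.0766 * math.exp(2.2942 * alpha)     # approximation (14)
    c3 = (2.0 ** alpha * (1.0 - math.exp(-d3 / 2.0)) ** alpha
          + alpha * d3 ** (alpha - 1.0) * math.exp(-d3))
    return c_ad, c_b, c1, c2, c3
```

For example, expected_numbers(0.5) gives approximately (1.1839, 1.1289, 1.4142, 1.0710, 1.0632), in agreement with the α = 0.5 row of Table 1.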
We report Γ(α+1)c for different values of α in Table 1. We also report Γ(α+1)c4, where c4 is
obtained from c3 by replacing d^*_α with d^o_α. We further provide graphs of the performance
of the different methods in Figure 1 for a much finer range of α in (0,1) (here α varies between
0.01 and 0.99 with an increment of 0.01). From the table and also from the graphs it is clear
that our proposed final algorithm, Algorithm 3, has a lower rejection proportion than Best's
method.
Finally, we examine how well our proposed approximation d^*_α of d^o_α works. For that
purpose we have plotted in Figure 2 the performance of Algorithm 3 and also the performance
of the method obtained by replacing d^*_α with d^o_α; we obtained d^o_α numerically and
name the resulting method the 'Optimum Algorithm'. From Figure 2 it is clear that the
proposed approximation works very well for 0 < α < 0.9. When α is very close to 1, the
approximation is not as good.
5 Conclusions
In this paper we have proposed a new algorithm for generating gamma variates using the
generalized exponential distribution when the shape parameter is less than 1. It is observed
that our algorithm has a lower rejection proportion than the popular Ahrens-Dieter and Best
methods.
Our method is also quite simple to implement. Moreover, like any other gamma generator
for shape parameter less than one, our method can also be used for generating normal
random numbers through a chi-square generator.

Table 1: The expected numbers multiplied by Γ(α + 1) for different methods and for different
values of α.

   α      Γ(α+1)c_AD   Γ(α+1)c_B   Γ(α+1)c_1   Γ(α+1)c_2   Γ(α+1)c_3   Γ(α+1)c_4
 0.1000     1.0368       1.0328      1.0718      1.0131      1.0129      1.0129
 0.2000     1.0736       1.0630      1.1487      1.0268      1.0261      1.0261
 0.3000     1.1104       1.0897      1.2311      1.0410      1.0392      1.0392
 0.4000     1.1472       1.1121      1.3195      1.0558      1.0517      1.0517
 0.5000     1.1839       1.1289      1.4142      1.0710      1.0632      1.0632
 0.6000     1.2207       1.1383      1.5157      1.0868      1.0725      1.0725
 0.7000     1.2575       1.1381      1.6245      1.1031      1.0780      1.0780
 0.8000     1.2943       1.1246      1.7411      1.1199      1.0769      1.0768
 0.9000     1.3311       1.0906      1.8661      1.1371      1.0625      1.0624
Acknowledgements:
The authors would like to thank two referees and one associate editor for their constructive
suggestions.
References
[1] Ahrens, J.H. and Dieter, U. (1974), "Computer methods for sampling from gamma, beta,
Poisson and binomial distributions", Computing, vol. 12, 223-246.
[2] Atkinson, A.C. and Pearce, M.C. (1976), “The computer generation of beta, gamma
and normal random variables (with discussion)”, Journal of the Royal Statistical Society,
A139, 431 - 461.
[Figure: expected numbers (vertical axis, from 1 to 2) plotted against α (horizontal axis,
from 0 to 1) for 'Ahrens-Dieter', 'Best', 'Algo-1', 'Algo-2' and 'Algo-3'.]

Figure 1: Expected numbers multiplied by Γ(α + 1) are reported for the different methods
[3] Best, D.J. (1983), “A note on gamma variate generators with shape parameter less than
unity”, Computing, vol. 30, 185 - 188.
[4] Fishman, G.S. (1995), Monte Carlo: Concepts, Algorithms and Applications, Springer,
New York.
[5] Gupta, R.D. and Kundu, D. (1999), “Generalized exponential distributions”, Australian
and New Zealand Journal of Statistics, vol. 41, no. 2, 173-188.
[6] Gupta, R.D. and Kundu, D. (2001), “Exponentiated exponential distribution: An alter-
native to gamma and Weibull distributions”, Biometrical Journal, vol. 43, no. 1, 117-130.
[7] Gupta, R.D. and Kundu, D. (2001), “Generalized exponential distributions: different
methods of estimations”, Journal of Statistical Computation and Simulation, vol. 69, no.
4, 315-338.
[8] Gupta, R.D. and Kundu, D. (2002), "Generalized exponential distributions: statistical
inferences", Journal of Statistical Theory and Applications, vol. 1, no. 2, 101-118.
[Figure: expected numbers (vertical axis, from 1 to 1.08) plotted against α (horizontal axis,
from 0 to 1) for 'Algo-3' and 'Opt-Algo'.]

Figure 2: Expected numbers multiplied by Γ(α + 1) are reported for Algorithm 3 and for the
Optimum Algorithm
[9] Gupta, R.D. and Kundu, D. (2003), “Discriminating Between Weibull and Generalized
Exponential Distributions”, Computational Statistics and Data Analysis, vol. 43, 179-
196.
[10] Gupta, R.D. and Kundu, D. (2003), “Closeness of gamma and generalized exponential
distribution”, Communications in Statistics - Theory and Methods, vol. 32, no. 4, 705-
721.
[11] Johnson, N., Kotz, S. and Balakrishnan, N. (1995), Continuous Univariate Distributions,
Vol. 1, John Wiley and Sons, New York.
[12] Law, A.M. and Kelton, W.D. (2000), Simulation Modeling and Analysis, 3rd edition,
McGraw-Hill, Singapore.
[13] Raqab, M.Z. (2002), “Inference for generalized exponential distribution based on record
statistics”, Journal of Statistical Planning and Inference, vol. 104, no. 2, 339-350.
[14] Raqab, M.Z. and Ahsanullah, M. (2001), “Estimation of the location and scale param-
eters of the generalized exponential distribution based on order statistics”, Journal of
Statistical Computation and Simulation, vol. 69, no. 2, 109-124.
[15] Raqab, M.Z. and Madi, M.T. (2005), "Bayesian inference for the generalized exponential
distribution", Journal of Statistical Computation and Simulation, vol. 75, no. 10, 841-852.
[16] Zheng, G. (2002), “On the Fisher information matrix in type-II censored data from the
exponentiated exponential family”, Biometrical Journal, vol. 44, no. 3, 353-357.