
# On the limit behaviour of finite-support bivariate discrete probability distributions under iterated partial summations


## Abstract

Bivariate partial-sums discrete probability distributions are defined. The question of the existence of a limit distribution for iterated partial summations is solved for finite-support bivariate distributions which satisfy conditions under which the power method (known from matrix theory) can be used. An oscillating sequence of distributions, a phenomenon which has never been reported before, is presented.
arXiv:1903.03316v1 [math.PR] 8 Mar 2019
Lívia Lešová and Ján Mačutek
Department of Applied Mathematics and Statistics
Comenius University in Bratislava
Mlynská dolina, 842 48 Bratislava
livia.lessova@fmph.uniba.sk
jmacutek@yahoo.com
Key Words: discrete probability distributions, partial-sums distributions, convergence.
1. INTRODUCTION
Let $\{P^{(1)}_x\}_{x=0}^{\infty}$ and $\{P_x\}_{x=0}^{\infty}$ be probability mass functions of two univariate discrete probability distributions defined on nonnegative integers. The distribution $\{P^{(1)}_x\}_{x=0}^{\infty}$ (the descendant distribution) is a partial-sums distribution created from $\{P_x\}_{x=0}^{\infty}$ (the parent distribution) if
$$P^{(1)}_x = c_1 \sum_{j=x}^{\infty} g(j) P_j, \qquad x = 0, 1, 2, \ldots, \qquad (1)$$
where $c_1$ is a normalization constant and $g(j)$ a real function. Several types of partial summations, for different choices of $g(j)$, are mentioned in the comprehensive monograph by Johnson et al. (2005). An extensive survey of pairs of parents and descendants was provided by Wimmer and Altmann (2000). More detailed analyses (e.g. relations between probability generating functions of the parent and descendant distributions) can be found in Mačutek (2003).
Partial summations from (1) can be applied iteratively. Take $\{P^{(1)}_x\}_{x=0}^{\infty}$, i.e. the descendant distribution from (1), as the parent, with the function $g(j)$ remaining unaltered. We obtain the descendant of the second generation
$$P^{(2)}_x = c_2 \sum_{j=x}^{\infty} g(j) P^{(1)}_j, \qquad x = 0, 1, 2, \ldots,$$
and, repeatedly applying the partial summation, for any $k \in \mathbb{N}$ the descendant of the $k$-th generation
$$P^{(k)}_x = c_k \sum_{j=x}^{\infty} g(j) P^{(k-1)}_j, \qquad x = 0, 1, 2, \ldots,$$
with $c_2, \ldots, c_k$ being normalization constants.
The question whether the sequence of the descendant distributions has a limit was investigated by Mačutek (2006) for a constant function $g(j)$. In this case, the answer is positive for a wide class of parent distributions, with the limit distribution being geometric. Koščová et al. (2018) presented a solution, albeit not a general one, to the problem when the parent distribution has a finite support.
In this paper we extend the result from Koščová et al. (2018) to bivariate discrete probability distributions.
2. BIVARIATE PARTIAL-SUMS DISTRIBUTIONS
Research on partial-sums distributions is almost exclusively dedicated to univariate distributions (see Wimmer and Mačutek (2012), and references therein). The only note on bivariate (and $r$-variate) partial-sums distributions can be found in Kotz and Johnson (1991), who more or less restrict themselves to a suggestion to study multivariate cases.
Univariate partial-sums distributions from Section 1 can be naturally generalized to two dimensions as follows.
Let $\{P_{x,y}\}_{x,y=0}^{\infty}$ and $\{P^{(1)}_{x,y}\}_{x,y=0}^{\infty}$ be bivariate discrete distributions and let $g(x,y)$ be a real function. Then $\{P^{(1)}_{x,y}\}_{x,y=0}^{\infty}$ is the descendant of the parent $\{P_{x,y}\}_{x,y=0}^{\infty}$ if
$$P^{(1)}_{x,y} = c_1 \sum_{i=x}^{\infty} \sum_{j=y}^{\infty} g(i,j) P_{i,j}. \qquad (2)$$
We obtain the descendant of the $k$-th generation analogously to the univariate case; the $k$-th descendant is
$$P^{(k)}_{x,y} = c_k \sum_{i=x}^{\infty} \sum_{j=y}^{\infty} g(i,j) P^{(k-1)}_{i,j}.$$
We will show that if the parent distribution has a finite support of size $m \times n$, the power method, which is a computational approach to finding matrix eigenvalues and eigenvectors, can in some cases be used to find the limit distribution.
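In code, one bivariate summation step of the kind defined by (2) amounts to a double tail sum. The sketch below is our own illustration; the function name and the toy matrices are ours:

```python
import numpy as np

def bivariate_partial_summation(P, G):
    """One step of a bivariate partial summation:
    P1[x, y] = c * sum_{i >= x} sum_{j >= y} G[i, j] * P[i, j]."""
    W = G * P
    # double tail sums, built by cumulating from the bottom-right corner
    T = np.cumsum(np.cumsum(W[::-1, ::-1], axis=0), axis=1)[::-1, ::-1]
    return T / T.sum()

# k iterations produce the descendant of the k-th generation
P = np.array([[0.3, 0.2],
              [0.4, 0.1]])
G = np.ones((2, 2))  # hypothetical g(i, j) = 1, chosen only for the demo
for _ in range(3):
    P = bivariate_partial_summation(P, G)
```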
3. POWER METHOD AND ITS APPLICATION
The power method (see e.g. Golub and Van Loan (1996)) was suggested as a computational tool which makes it possible, under certain conditions, to find an approximation of the eigenvalues of a square matrix. The method can be applied to a diagonalizable matrix (i.e. a matrix which has linearly independent eigenvectors, or, equivalently, is similar to a diagonal matrix) with a unique dominant eigenvalue (denote the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$; there exists $k$ such that $|\lambda_k| > |\lambda_i|$ for all $i \neq k$). The eigenvector corresponding to the dominant eigenvalue is the dominant eigenvector.
If a matrix $A$ satisfies the abovementioned conditions, then there exists a non-zero vector $x_0$ such that the sequence $\{A^k x_0\}_{k=1}^{\infty}$ converges to a multiple of the dominant eigenvector.
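A minimal sketch of the power method itself (standard textbook material; the concrete matrix is our own toy example):

```python
import numpy as np

def power_method(A, x0, iterations=500):
    """Approximate the dominant eigenvector by repeated multiplication."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iterations):
        x = A @ x
        x = x / np.linalg.norm(x)  # renormalize to keep the iterates bounded
    return x

# symmetric (hence diagonalizable) matrix with eigenvalues 3 and 1
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v = power_method(A, [1.0, 0.0])
# v approximates the unit eigenvector (1, 1)/sqrt(2) for the eigenvalue 3
```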
While the application of the power method is straightforward for univariate iterated partial summations (see Koščová et al. (2018)), a bivariate distribution requires an additional step, namely a vectorization of the probability matrix, which is, however, a standard operation in matrix theory (see e.g. Golub and Van Loan (1996)). Denote by $P$ the parent distribution, i.e.
$$P = \begin{pmatrix} P_{0,0} & P_{0,1} & \ldots & P_{0,n-1} \\ P_{1,0} & P_{1,1} & \ldots & P_{1,n-1} \\ \vdots & \vdots & \ddots & \vdots \\ P_{m-1,0} & P_{m-1,1} & \ldots & P_{m-1,n-1} \end{pmatrix},$$
the vectorization of $P$ yields a vector of probabilities
$$v(P) = \left(P_{0,0}, \ldots, P_{m-1,0},\ P_{0,1}, \ldots, P_{m-1,1},\ \ldots,\ P_{0,n-1}, \ldots, P_{m-1,n-1}\right)^T.$$
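Column-stacking vectorization is what NumPy's Fortran-order flatten does, so this step can be checked in one line (our own note):

```python
import numpy as np

# a 2 x 3 matrix standing in for the probability matrix P
P = np.array([[1, 2, 3],
              [4, 5, 6]])

# v(P) stacks the columns: (P00, P10, P01, P11, P02, P12)
vP = P.flatten(order="F")
# vP is [1, 4, 2, 5, 3, 6]
```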
Now we will construct a matrix $\tilde{G}$ from the values of the function $g(i,j)$ from (2),
$$\tilde{G} = \begin{pmatrix}
g_{0,0} & g_{1,0} & \cdots & g_{m-1,0} & g_{0,1} & g_{1,1} & \cdots & g_{m-1,n-1} \\
0 & g_{1,0} & \cdots & g_{m-1,0} & 0 & g_{1,1} & \cdots & g_{m-1,n-1} \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & g_{m-1,0} & 0 & 0 & \cdots & g_{m-1,n-1} \\
0 & 0 & \cdots & 0 & g_{0,1} & g_{1,1} & \cdots & g_{m-1,n-1} \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 0 & 0 & 0 & \cdots & g_{m-1,n-1}
\end{pmatrix}.$$
.
Denote D=diag(g(0,0), g(1,0),...,g(m1, n 1)) and Athe upper triangular matrix of
ones with dimensions m×m, i.e.
A=
111··· 1
011··· 1
001··· 1
.
.
..
.
..
.
.....
.
.
000··· 1
m×m
.
Then it holds that
$$\tilde{G} = \begin{pmatrix} A & A & A & \cdots & A \\ 0 & A & A & \cdots & A \\ 0 & 0 & A & \cdots & A \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & A \end{pmatrix} D.$$
The matrix $\tilde{G}$ is an upper triangular matrix with dimensions $nm \times nm$; each of its columns contains only one particular value $g(i,j)$ (possibly several times). Its diagonal consists of the elements $g(i,j)$, each of them occurring exactly once.
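The block pattern above is exactly a Kronecker product: with $B$ the $n \times n$ upper triangular matrix of ones, the block matrix equals $B \otimes A$, so $\tilde{G} = (B \otimes A) D$. A sketch of the construction (the values of $g$ are our own placeholders):

```python
import numpy as np

m, n = 2, 2
A = np.triu(np.ones((m, m)))  # m x m upper triangular matrix of ones
B = np.triu(np.ones((n, n)))  # n x n upper triangular matrix of ones

# placeholder values g(i, j); D lists them in the column-major order of v(P)
g = np.array([[0.5, 0.3],
              [0.2, 1.0]])
D = np.diag(g.flatten(order="F"))

G = np.kron(B, A) @ D  # the nm x nm matrix G-tilde
```

As stated above, `G` comes out upper triangular with each $g(i,j)$ appearing exactly once on the diagonal.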
The notation established above allows us to write
$$v\left(P^{(1)}\right) = \frac{\tilde{G}\, v(P)}{\bigl\|\tilde{G}\, v(P)\bigr\|_1},$$
and the $k$-th descendant can be expressed in its vector form as
$$v\left(P^{(k)}\right) = \frac{\tilde{G}\, v\left(P^{(k-1)}\right)}{\bigl\|\tilde{G}\, v\left(P^{(k-1)}\right)\bigr\|_1} = \frac{\tilde{G}^k v(P)}{\bigl\|\tilde{G}^k v(P)\bigr\|_1}.$$
If the assumptions under which the power method converges are satisfied (i.e. a finite support of the parent distribution, a unique dominant eigenvalue of a diagonalizable matrix $\tilde{G}$, and a suitable starting vector $v(P)$), the sequence
$$\frac{\tilde{G}\, v(P)}{\bigl\|\tilde{G}\, v(P)\bigr\|_2},\ \frac{\tilde{G}\, v\left(P^{(1)}\right)}{\bigl\|\tilde{G}\, v\left(P^{(1)}\right)\bigr\|_2},\ \ldots,\ \frac{\tilde{G}\, v\left(P^{(k)}\right)}{\bigl\|\tilde{G}\, v\left(P^{(k)}\right)\bigr\|_2},\ \ldots$$
converges to the unit dominant eigenvector of the matrix $\tilde{G}$. The eigenvalues of an upper triangular matrix are its diagonal elements, so the dominant eigenvalue is unique if and only if the greatest absolute value among the $g(i,j)$ is attained by exactly one of them. We will obtain the limit distribution by multiplying the dominant unit eigenvector by a normalization constant, i.e. the limit distribution will be
$$v\left(P^{(\infty)}\right) = \lim_{k \to \infty} \frac{v\left(P^{(k)}\right)}{\bigl\|v\left(P^{(k)}\right)\bigr\|_1} = \lim_{k \to \infty} \frac{\tilde{G}^k v(P)}{\bigl\|\tilde{G}^k v(P)\bigr\|_1}.$$
4. EXAMPLES
4.1. LIMIT DISTRIBUTION
Let $N_1, N_2, N_3 \in \mathbb{N}$, $k \in \{1, 2, \ldots, N_3\}$ and $N = N_1 + N_2 + N_3$. The vector $\binom{X}{Y}$ has a bivariate inverse hypergeometric distribution (see Johnson et al. (1997)) if
$$P(X = x, Y = y) = \frac{N_3 - k + 1}{N - (x + y + k - 1)} \cdot \frac{\binom{N_1}{x} \binom{N_2}{y} \binom{N_3}{k-1}}{\binom{N}{x + y + k - 1}},$$
for $x = 0, 1, 2, \ldots, N_1$, $y = 0, 1, 2, \ldots, N_2$.
We will consider a function $g(i,j)$ which leaves the bivariate inverse hypergeometric distribution unchanged. We choose the parameter values $N_1 = N_2 = 2$, $N_3 = 5$, $k = 2$, i.e.
$$P = \begin{pmatrix} \frac{5}{18} & \frac{10}{63} & \frac{5}{126} \\ \frac{10}{63} & \frac{10}{63} & \frac{4}{63} \\ \frac{5}{126} & \frac{4}{63} & \frac{5}{126} \end{pmatrix}.$$
The corresponding matrix $\tilde{G}$ is
$$\tilde{G} = \begin{pmatrix}
\frac{3}{7} & \frac{3}{20} & -\frac{3}{5} & \frac{3}{20} & \frac{9}{20} & \frac{3}{8} & -\frac{3}{5} & \frac{3}{8} & 1 \\
0 & \frac{3}{20} & -\frac{3}{5} & 0 & \frac{9}{20} & \frac{3}{8} & 0 & \frac{3}{8} & 1 \\
0 & 0 & -\frac{3}{5} & 0 & 0 & \frac{3}{8} & 0 & 0 & 1 \\
0 & 0 & 0 & \frac{3}{20} & \frac{9}{20} & \frac{3}{8} & -\frac{3}{5} & \frac{3}{8} & 1 \\
0 & 0 & 0 & 0 & \frac{9}{20} & \frac{3}{8} & 0 & \frac{3}{8} & 1 \\
0 & 0 & 0 & 0 & 0 & \frac{3}{8} & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & -\frac{3}{5} & \frac{3}{8} & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{3}{8} & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}.$$
The conditions that the matrix $\tilde{G}$ must be diagonalizable and must have a unique dominant eigenvalue are satisfied in this case. If we start from any suitable probability vector (it cannot be orthogonal to the eigenspace of the dominant eigenvalue), the iterated partial summations will converge to the unit eigenvector corresponding to the dominant eigenvalue of the matrix $\tilde{G}$. In this case the dominant eigenvalue is 1, so a multiple of its corresponding eigenvector will be the limit distribution
$$P^{(\infty)} = \begin{pmatrix} \frac{5}{18} & \frac{10}{63} & \frac{5}{126} \\ \frac{10}{63} & \frac{10}{63} & \frac{4}{63} \\ \frac{5}{126} & \frac{4}{63} & \frac{5}{126} \end{pmatrix}.$$
We note that, (almost) regardless of the parent distribution, i.e. with the exception of parents whose vectorizations are orthogonal to the eigenspace of the dominant eigenvalue, the limit distribution $P^{(\infty)}$ is the bivariate inverse hypergeometric distribution with the parameters $N_1 = N_2 = 2$, $N_3 = 5$, $k = 2$, which in our example determines the function $g(i,j)$.
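The fixed-point property in this example can be verified in exact arithmetic. This is our own check; the matrices are typed in from the example, with the entries $g(0,2) = g(2,0) = -3/5$ that make the summation leave $P$ unchanged:

```python
import numpy as np
from fractions import Fraction as F

# the parent: bivariate inverse hypergeometric, N1 = N2 = 2, N3 = 5, k = 2
P = np.array([[F(5, 18),  F(10, 63), F(5, 126)],
              [F(10, 63), F(10, 63), F(4, 63)],
              [F(5, 126), F(4, 63),  F(5, 126)]])

# values g(i, j) for which P is invariant under the partial summation
g = np.array([[F(3, 7),  F(3, 20), F(-3, 5)],
              [F(3, 20), F(9, 20), F(3, 8)],
              [F(-3, 5), F(3, 8),  F(1)]])

# one summation step: P1[x, y] = sum_{i >= x} sum_{j >= y} g(i, j) * P[i, j]
W = g * P
P1 = np.array([[W[i:, j:].sum() for j in range(3)] for i in range(3)])

assert (P1 == P).all()  # P is a fixed point; the constant c equals 1
```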
4.2. OSCILLATION
There are also sequences of descendant distributions which do not converge. Let the parent be the bivariate hypergeometric distribution (see Johnson et al. (1997)) with the parameters $N_1 = N_2 = 1$, $N_3 = 2$, $n = 1$, i.e.
$$P = \begin{pmatrix} \frac{1}{2} & \frac{1}{4} \\ \frac{1}{4} & 0 \end{pmatrix},$$
and let the matrix $\tilde{G}$ be
$$\tilde{G} = \begin{pmatrix} -1 & 1 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$
After the first partial summation we obtain
$$P^{(1)} = \begin{pmatrix} 0 & \frac{1}{2} \\ \frac{1}{2} & 0 \end{pmatrix},$$
and after the second summation
$$P^{(2)} = \begin{pmatrix} \frac{1}{2} & \frac{1}{4} \\ \frac{1}{4} & 0 \end{pmatrix},$$
i.e. the distribution identical to the parent $P$. The sequence of descendants therefore oscillates between these two distributions and has no limit.
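The two-cycle can be replicated in vector form (our own sketch; note that the top-left entry of $\tilde{G}$ must be $-1$ for the probability $P^{(1)}_{0,0}$ to vanish):

```python
import numpy as np

# G-tilde of the oscillation example; the -1 makes the (0,0) probability vanish
G = np.array([[-1.0, 1.0, 1.0, 0.0],
              [ 0.0, 1.0, 0.0, 0.0],
              [ 0.0, 0.0, 1.0, 0.0],
              [ 0.0, 0.0, 0.0, 0.0]])

def step(v, G):
    w = G @ v
    return w / np.abs(w).sum()  # 1-norm normalization

v0 = np.array([0.5, 0.25, 0.25, 0.0])  # v(P): columns of P stacked
v1 = step(v0, G)   # (0, 1/2, 1/2, 0)   -- the first descendant
v2 = step(v1, G)   # (1/2, 1/4, 1/4, 0) -- identical to the parent again
```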
BIBLIOGRAPHY
Golub, G. H. and Van Loan, C. F. (1996). Matrix Computations. Baltimore, London: The Johns Hopkins University Press.
Johnson, N. L., Kemp, A. W. and Kotz, S. (2005). Univariate Discrete Distributions. Hoboken (NJ): Wiley.
Johnson, N. L., Kotz, S. and Balakrishnan, N. (1997). Discrete Multivariate Distributions. Hoboken (NJ): Wiley.
Koščová, M., Harman, R. and Mačutek, J. (2018). Iterated partial summations applied to finite-support discrete distributions.
http://www.iam.fmph.uniba.sk/ospm/Harman/KoscovaHarmanMacutek2018preprint.pdf (accessed on 30-Dec-2018)
Kotz, S. and Johnson, N. L. (1991). A note on renewal (partial sums) distributions for discrete variables. Statistics & Probability Letters, 12, 229–231.
Mačutek, J. (2006). A limit property of the geometric distribution. Theory of Probability and its Applications, 50(2), 316–319.
Mačutek, J. (2003). On two types of partial summations. Tatra Mountains Mathematical Publications, 26, 403–410.
Wimmer, G. and Altmann, G. (2000). On the generalization of the STER distribution applied to generalized hypergeometric parents. Acta Universitatis Palackianae Olomucensis Facultas Rerum Naturalium Mathematica, 39(1), 215–247.
Wimmer, G. and Mačutek, J. (2012). New integrated view at partial-sums distributions. Tatra Mountains Mathematical Publications, 51, 183–190.