arXiv:1902.00576v1 [math.PR] 1 Feb 2019
FLUCTUATIONS OF EXTREMAL MARKOV CHAINS OF THE KENDALL TYPE

B.H. JASIULIS-GOŁDYN¹ AND M. STANIAK²

Abstract. The paper deals with fluctuations of Kendall random walks, which are extremal Markov chains. We give the joint distribution of the first ascending ladder epoch and height over any level $a \geq 0$ and the distribution of the maximum and minimum for these extremal Markovian sequences. We show that the distribution of the first crossing time of a level $a \geq 0$ is a mixture of geometric and negative binomial distributions. The Williamson transform is the main tool for the considered problems connected with the Kendall convolution.
1. Introduction
Addition of independent random variables and the corresponding operation on their measures - convolution - is one of the most commonly occurring operations in probability theory and its applications. The classical convolution is a special case of a much more general operation called a generalized convolution.
The origins of generalized convolutions can be found in Delphic semigroups ([9, 10, 18]). Inspired by Kingman's study of spherical random walks ([20]), Urbanik introduced the notion of a generalized convolution for measures concentrated on the positive half-line in a series of papers [29]. This definition was extended to symmetric measures on $\mathbb{R}$ by Jasiulis-Gołdyn in [12]. Generalized convolutions were explored with the use of regular variation ([2, 3, 13]) and were used to construct Lévy processes and stochastic integrals ([4], [28]).
In the theory of generalized convolutions we create new mathematical objects that have potential in applications. It is enough to look at the case of the maximum convolution corresponding to extreme value theory ([5]). Currently, the limit distributions for extremes, i.e. the generalized extreme value distributions (Fréchet, Gumbel, Weibull), are commonly used for modeling rainfall, floods, droughts, cyclones, extreme air pollutants, etc. Random walks with respect to generalized convolutions form a class of extremal Markov chains (see [1, 4, 13]). Studying them in the appropriate algebras will be a meaningful contribution to extreme value theory ([5]). Kendall random walks ([11, 13]), which are the main objects of investigation in this paper, are related to maximal processes (and thus the maximum convolution), Pareto processes ([8, 32]) and pRARMAX processes ([7]). One of the differences lies in the fact that in this case the values of the process are randomly multiplied by heavy-tailed Pareto random variables, which results in even more extreme behavior than in the case of the classical maximum process.

¹ Institute of Mathematics, University of Wrocław, pl. Grunwaldzki 2/4, 50-384 Wrocław, Poland, e-mail: jasiulis@math.uni.wroc.pl
² Faculty of Mathematics and Information Science, Warsaw University of Technology, ul. Koszykowa 75, 00-662 Warszawa, Poland, e-mail: m.staniak@mini.pw.edu.pl
Keywords and phrases: Kendall convolution, Markov process, Pareto distribution, random walk, Spitzer identity, Pollaczek-Khintchine formula, Williamson transform.
Mathematics Subject Classification: 60K05, 60G70, 44A35, 60J05, 60E10.
Generalized convolutions have connections with the theory of weakly stable measures ([12, 17]) and non-commutative probability ([16]). Through the Williamson transform one can also find some connections with copula theory ([24, 25]).
Fluctuations of classical random walks and Lévy processes have been widely described in the literature (see e.g. [6, 27]) and are still an object of interest [19, 21, 22, 23, 26]. This paper is a continuation of research initiated in [15]. The main result of this paper is a description of the fluctuations of random walks generated by a particular generalized convolution - the Kendall convolution - in terms of the first ladder moment (epoch) and the first ladder height of the random walk over any level $a \geq 0$. It turns out that the distribution of the first ladder moment is a mixture of three negative binomial distributions whose coefficients and parameters depend on the unit step distribution. We also present the distributions of maxima and minima for Kendall random walks in terms of the Williamson transform and the unit step cumulative distribution function. This description of the behavior of the extremes of the stochastic processes is an analogue of the Pollaczek-Khintchine equation from the classical theory.
Organization of the paper: We begin Section 2 by recalling the Kendall convolution ([11, 15]), which is an example of a generalized convolution in the sense defined by K. Urbanik [29]. This convolution is quite specific, because the Kendall convolution of two point-mass probability measures is a convex linear combination of a measure concentrated at one point and a Pareto distribution. Next, we present the definition and main properties of the Williamson transform ([15, 31]), which is an analogue of the characteristic function in the Kendall convolution algebra. This transform is very easy to invert and allows us to obtain many results for extremal Markov sequences of the Kendall type.
In the third section we consider the first ascending ladder epoch over any level $a \geq 0$ and prove that its distribution is a convex linear combination of geometric and negative binomial distributions. Section 4 consists of two parts: the distribution of the first ladder height, and the maximum and minimum distributions for the studied stochastic processes.
Notation: The distribution of a random element $X$ is denoted by $\mathcal{L}(X)$. For a probability measure $\lambda$ and $a \in \mathbb{R}_+$ the rescaling operator is given by $T_a\lambda = \mathcal{L}(aX)$ if $\lambda = \mathcal{L}(X)$. By $\mathcal{P}$ we denote the family of all probability measures on the Borel subsets of $\mathbb{R}$, while by $\mathcal{P}_s$ we denote the symmetric probability measures on $\mathbb{R}$. For abbreviation, the set of all natural numbers including zero is denoted by $\mathbb{N}_0$. Additionally, $\widetilde{\pi}_{2\alpha}$ denotes the symmetric Pareto measure with the density $\widetilde{\pi}_{2\alpha}(dy) = \alpha |y|^{-2\alpha-1} \mathbf{1}_{[1,\infty)}(|y|)\,dy$. In general, by $\widetilde{\nu}$ we denote the symmetrization of a probability measure $\nu$. In this paper we usually consider symmetric probability measures, assuming that $\nu \in \mathcal{P}_s$.
We study positive and negative excursions for the Kendall random walk $\{X_n : n \in \mathbb{N}_0\}$, which is defined by the following construction:

Definition 1. A stochastic process $\{X_n : n \in \mathbb{N}_0\}$ is a discrete-time Kendall random walk with parameter $\alpha > 0$ and step distribution $\nu$ if there exist
1. $(Y_k)$, i.i.d. random variables with distribution $\nu \in \mathcal{P}$,
2. $(\xi_k)$, i.i.d. random variables with the uniform distribution on $[0,1]$,
3. $(\theta_k)$, i.i.d. random variables with the symmetric Pareto distribution with the density $\widetilde{\pi}_{2\alpha}(dy) = \alpha |y|^{-2\alpha-1} \mathbf{1}_{[1,\infty)}(|y|)\,dy$,
4. sequences $(Y_k)$, $(\xi_k)$ and $(\theta_k)$ that are independent,
such that
\[
X_0 = 1, \quad X_1 = Y_1, \quad X_{n+1} = M_{n+1}\, r_{n+1} \bigl[ I(\xi_n > \varrho_{n+1}) + \theta_{n+1} I(\xi_n < \varrho_{n+1}) \bigr],
\]
where $\theta_{n+1}$ and $M_{n+1}$ are independent,
\[
M_{n+1} = \max\{|X_n|, |Y_{n+1}|\}, \quad m_{n+1} = \min\{|X_n|, |Y_{n+1}|\}, \quad \varrho_{n+1} = \frac{m_{n+1}^{\alpha}}{M_{n+1}^{\alpha}}
\]
and
\[
r_{n+1} = \{\operatorname{sgn}(u) : \max\{|X_n|, |Y_{n+1}|\} = |u|\}.
\]

The Kendall random walk is an extremal Markov chain with $X_0 \equiv 0$ and the transition probabilities
\[
P_n(x, A) = \mathbf{P}\left(X_{n+k} \in A \mid X_k = x\right) = \left(\delta_x \vartriangle_{\alpha} \nu^{\vartriangle n}\right)(A), \quad n, k \in \mathbb{N},
\]
where the measure $\nu \in \mathcal{P}_s$ is called the step distribution. The construction and some basic properties of this particular process are described in [4, 11, 13, 14].
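The transition mechanism can be simulated directly: given the current state $x$ and a fresh step $y \sim \nu$, the next value is $M = \max(|x|,|y|)$ times a symmetric Pareto factor with probability $(m/M)^{\alpha}$, and $\pm M$ otherwise, with a symmetrically drawn sign (the Kendall convolution of two point masses is a symmetric measure, so the one-step kernel $\delta_x \vartriangle_\alpha \nu$ is symmetric). The sketch below is our illustration only, not part of the paper; the test choices $\nu = \widetilde{\delta}_1$ (a fair $\pm 1$ step) and $\alpha = 1$ are arbitrary.

```python
import random

def kendall_step(x, y, alpha, rng):
    """Sample X_{n+1} from the one-step kernel: the Kendall convolution of two
    point masses is T_M(rho^alpha * Pareto_{2alpha} + (1 - rho^alpha) * delta_1),
    symmetrized, where M = max(|x|,|y|), m = min(|x|,|y|), rho = m/M."""
    big = max(abs(x), abs(y))
    small = min(abs(x), abs(y))
    if big == 0.0:
        return 0.0
    sign = 1.0 if rng.random() < 0.5 else -1.0
    if rng.random() < (small / big) ** alpha:
        # Pareto branch: |theta| has tail P(|theta| > u) = u^(-2*alpha), u >= 1
        return sign * big * rng.random() ** (-1.0 / (2.0 * alpha))
    return sign * big

def simulate(n_steps, alpha, step_sampler, rng):
    """One trajectory X_1, ..., X_n of the Kendall random walk started at X_0 = 0."""
    x, path = 0.0, []
    for _ in range(n_steps):
        x = kendall_step(x, step_sampler(rng), alpha, rng)
        path.append(x)
    return path

# step distribution nu = symmetrized delta_1: a fair +-1 step
pm1 = lambda rng: 1.0 if rng.random() < 0.5 else -1.0

rng = random.Random(0)
N = 200000
p_hat = sum(1 for _ in range(N) if simulate(2, 1.0, pm1, rng)[-1] <= 2.0) / N
```

For this step law and $\alpha = 1$ the two-step distribution is available in closed form (cf. Example 2.1 below), giving $\mathbf{P}(X_2 \leq 2) = 7/8$, which the estimate `p_hat` reproduces.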
2. Williamson transform and Kendall convolution

The stochastic process considered here was constructed using the Kendall convolution, defined in the following way:

Definition 2. The commutative and associative binary operation $\vartriangle_{\alpha} : \mathcal{P}_s^2 \to \mathcal{P}_s$, defined for discrete measures by
\[
\widetilde{\delta}_x \vartriangle_{\alpha} \widetilde{\delta}_y := T_M\left( \varrho^{\alpha}\, \widetilde{\pi}_{2\alpha} + (1 - \varrho^{\alpha})\, \widetilde{\delta}_1 \right),
\]
where $M = \max\{|x|,|y|\}$, $m = \min\{|x|,|y|\}$, $\varrho = m/M$, is called the Kendall convolution. The extension of $\vartriangle_{\alpha}$ to the whole of $\mathcal{P}_s$ is given by
\[
\nu_1 \vartriangle_{\alpha} \nu_2(A) = \int_{\mathbb{R}^2} \widetilde{\delta}_x \vartriangle_{\alpha} \widetilde{\delta}_y(A)\, \nu_1(dx)\, \nu_2(dy).
\]

Notice that the operation $\vartriangle_{\alpha}$ is a generalized convolution in the sense introduced by Urbanik (see [12, 29, 30]), having the following properties:
- $\nu \vartriangle_{\alpha} \delta_0 = \nu$ for each $\nu \in \mathcal{P}_s$;
- $(p\nu_1 + (1-p)\nu_2) \vartriangle_{\alpha} \nu = p\,(\nu_1 \vartriangle_{\alpha} \nu) + (1-p)\,(\nu_2 \vartriangle_{\alpha} \nu)$ for each $p \in [0,1]$ and all $\nu, \nu_1, \nu_2 \in \mathcal{P}_s$;
- if $\lambda_n \to \lambda$ and $\nu_n \to \nu$, then $(\lambda_n \vartriangle_{\alpha} \nu_n) \to (\lambda \vartriangle_{\alpha} \nu)$, where $\to$ denotes weak convergence;
- $T_a(\nu_1 \vartriangle_{\alpha} \nu_2) = (T_a\nu_1) \vartriangle_{\alpha} (T_a\nu_2)$ for each $\nu_1, \nu_2 \in \mathcal{P}_s$.

The Kendall convolution is strictly connected with the Williamson transform, which plays a role similar to that of the characteristic function in the classical algebra.
Definition 3. By the Williamson transform we understand the operation $\nu \mapsto \widehat{\nu}$ given by
\[
\widehat{\nu}(t) = \int_{\mathbb{R}} (1 - |xt|^{\alpha})_{+}\, \nu(dx), \quad \nu \in \mathcal{P}_s,
\]
where $a_+ = a$ if $a > 0$ and $a_+ = 0$ otherwise.
For convenience we use the following notation:
\[
\Psi(t) = (1 - |t|^{\alpha})_{+}, \qquad G(t) = \widehat{\nu}(1/t).
\]

The next lemma is almost evident and well known. It provides the inverse of the Williamson transform, which is surprisingly simple.

Lemma 1. The correspondence between a measure $\nu \in \mathcal{P}_s$ and its Williamson transform is $1$-$1$. Moreover, denoting by $F$ the cumulative distribution function of $\nu$, $\nu(\{0\}) = 0$, we have
\[
F(t) = \begin{cases}
\dfrac{1}{2\alpha}\left[ \alpha\left(G(t) + 1\right) + t\,G'(t) \right] & \text{if } t > 0; \\[4pt]
1 - F(-t) & \text{if } t < 0,
\end{cases}
\]
except for at most countably many $t \in \mathbb{R}$.

For details of the proof of the above lemma see [15].
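As a sanity check of Lemma 1 (our illustration only; the choice $\nu = \widetilde{\pi}_{2\alpha}$ with $\alpha = 1$ is arbitrary), one can evaluate $G(t) = \widehat{\nu}(1/t)$ numerically from Definition 3 and recover the CDF $F(t) = 1 - \tfrac{1}{2}t^{-2\alpha}$, $t \geq 1$, of the symmetric Pareto measure, approximating $G'$ by a central difference:

```python
ALPHA = 1.0

def nu_hat(s, alpha=ALPHA, grid=50000):
    """Williamson transform of the symmetric Pareto measure pi~_{2alpha}:
    integral of (1 - |x s|^alpha)_+ against alpha*|x|^(-2alpha-1) on |x| >= 1,
    computed by the midpoint rule (by symmetry, twice the integral over x > 0)."""
    s = abs(s)
    if s == 0.0:
        return 1.0
    if s >= 1.0:
        return 0.0  # integrand vanishes: |x s| >= 1 for all |x| >= 1
    lo, hi = 1.0, 1.0 / s
    h = (hi - lo) / grid
    acc = 0.0
    for i in range(grid):
        x = lo + (i + 0.5) * h
        acc += (1.0 - (x * s) ** alpha) * alpha * x ** (-2.0 * alpha - 1.0)
    return 2.0 * acc * h

def G(t):
    return nu_hat(1.0 / t)

def F_from_G(t, dt=1e-4, alpha=ALPHA):
    """Lemma 1: F(t) = (alpha*(G(t) + 1) + t*G'(t)) / (2*alpha) for t > 0."""
    g_prime = (G(t + dt) - G(t - dt)) / (2.0 * dt)
    return (alpha * (G(t) + 1.0) + t * g_prime) / (2.0 * alpha)
```

For $\alpha = 1$ one has $G(t) = (1 - 1/t)^2$ in closed form, so the recovered CDF can be compared against $F(2) = 1 - \tfrac12\, 2^{-2} = 0.875$.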
As mentioned above, the Williamson transform ([31]) plays the same role for the Kendall convolution as the Fourier transform does for the classical convolution (for the proof see Proposition 2.2 in [15]):

Proposition 1. Let $\nu_1, \nu_2 \in \mathcal{P}_s$ be probability measures with Williamson transforms $\widehat{\nu}_1, \widehat{\nu}_2$. Then
\[
\int_{\mathbb{R}} \Psi(xt)\, \left(\nu_1 \vartriangle_{\alpha} \nu_2\right)(dx) = \widehat{\nu}_1(t)\, \widehat{\nu}_2(t).
\]
The following fact is a simple consequence of Lemma 1 and Proposition 1.

Proposition 2. Let $\nu \in \mathcal{P}_s$. For each natural number $n \geq 2$ the cumulative distribution function $F_n$ of the measure $\nu^{\vartriangle n}$ is equal to
\[
F_n(t) = \frac{1}{2}\left[ G(t)^n + 1 + n\, G(t)^{n-1} H(t) \right], \quad t > 0,
\]
where
\[
H(t) = 2F(t) - G(t) - 1 = t^{-\alpha} \int_{-t}^{t} |x|^{\alpha}\, \nu(dx)
\]
and $F_n(t) = 1 - F_n(-t)$ for $t < 0$, where $G(t) = \widehat{\nu}(1/t)$.

Proof. At the beginning it is worth noting that, by Proposition 1,
\[
G_n(t) := \widehat{\nu^{\vartriangle n}}(1/t) = G(t)^n.
\]
Then by Lemma 1 we arrive at the following formula:
\[
F_n(t) = \frac{1}{2\alpha}\left[ \alpha\left(G(t)^n + 1\right) + t\, n\, G(t)^{n-1} G'(t) \right]
\]
for $t > 0$, and we also have
\[
G'(t) = \frac{\alpha}{t} H(t),
\]
which ends the proof. □
Example 2.1. Let $\nu = \widetilde{\delta}_1$. Then
\[
G(t) = \left(1 - |t|^{-\alpha}\right)_{+}
\]
and
\[
dF_n(t) = \frac{\alpha n(n-1)}{2}\, |t|^{-2\alpha-1} \left(1 - |t|^{-\alpha}\right)^{n-2} \mathbf{1}_{[1,\infty)}(|t|)\, dt.
\]

Example 2.2. For a Kendall random walk with unit step distribution $X_1 \sim \nu_{\alpha} \in \mathcal{P}$ such that $E|X_1|^{\alpha} = m_{\alpha} < \infty$, the stable distribution has the following density:
\[
\nu_{\alpha}(dx) = \frac{\alpha\, m_{\alpha}^2}{2}\, |x|^{-2\alpha-1} \exp\left\{-m_{\alpha} |x|^{-\alpha}\right\} dx.
\]
Then
\[
F_1(t) = \begin{cases}
\frac{1}{2} + \frac{1}{2}\left(1 + m_{\alpha} t^{-\alpha}\right) \exp\{-m_{\alpha} t^{-\alpha}\} & \text{if } t > 0; \\
1 - F_1(-t) & \text{if } t < 0
\end{cases}
\]
and
\[
G(t) = \exp\left\{-m_{\alpha} |t|^{-\alpha}\right\},
\]
\[
F_n(t) = \begin{cases}
\frac{1}{2} + \frac{1}{2}\left(1 + n m_{\alpha} t^{-\alpha}\right) \exp\{-n m_{\alpha} t^{-\alpha}\} & \text{if } t > 0; \\
1 - F_n(-t) & \text{if } t < 0.
\end{cases}
\]
It is evident that we have
\[
F_n(t) = F_1\left(n^{-1/\alpha}\, t\right).
\]

Example 2.3. Let $\nu = \widetilde{\pi}_{2\alpha}$ for $\alpha \in (0,1]$. Since $\widetilde{\delta}_1 \vartriangle_{\alpha} \widetilde{\delta}_1 = \widetilde{\pi}_{2\alpha}$, using Example 2.1 we arrive at
\[
dF_n(t) = \alpha n(2n-1)\, |t|^{-2\alpha-1} \left(1 - |t|^{-\alpha}\right)^{2(n-1)} \mathbf{1}_{[1,\infty)}(|t|)\, dt.
\]
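The scaling identity in Example 2.2 can be checked directly from the closed form of $F_n$ (an illustration only; the parameter values below are arbitrary test choices):

```python
import math

def F_n(t, n, alpha, m_alpha):
    """CDF of the n-fold Kendall convolution of the stable law of Example 2.2 (t > 0)."""
    z = n * m_alpha * t ** (-alpha)
    return 0.5 + 0.5 * (1.0 + z) * math.exp(-z)

# F_n(t) = F_1(n^{-1/alpha} t): taking n steps only rescales space by n^{1/alpha}
alpha, m_alpha = 0.7, 1.3
for n in (2, 5, 17):
    for t in (0.3, 1.0, 4.2):
        assert math.isclose(F_n(t, n, alpha, m_alpha),
                            F_n(n ** (-1.0 / alpha) * t, 1, alpha, m_alpha),
                            rel_tol=1e-12)
```

This identity is the Kendall-convolution analogue of strict stability for the classical convolution.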
The explicit formula for the transition probabilities of Kendall random walks is given by:

Lemma 2. For all $n \in \mathbb{N}$ and $x \geq 0$
\[
\delta_x \vartriangle_{\alpha} \nu^{\vartriangle n}(0,t) = P_n(x, [0,t)) = \frac{1}{2}\left[ \Psi\!\left(\tfrac{x}{t}\right) H_n(t) + G_n(t) \right] \mathbf{1}_{\{|x|<t\}},
\]
where
\[
H_n(t) = 2F_n(t) - 1 - G_n(t).
\]

Proof. By Lemma 3.1 in [15] we have
\[
(\delta_x \vartriangle_{\alpha} \delta_y)(0,t) = \frac{1}{2}\left(1 - \left|\tfrac{xy}{t^2}\right|^{\alpha}\right) \mathbf{1}_{\{|x|<t,\,|y|<t\}}
= \frac{1}{2}\left[ \Psi\!\left(\tfrac{x}{t}\right) + \Psi\!\left(\tfrac{y}{t}\right) - \Psi\!\left(\tfrac{x}{t}\right)\Psi\!\left(\tfrac{y}{t}\right) \right] \mathbf{1}_{\{|x|<t,\,|y|<t\}},
\]
\[
(\delta_x \vartriangle_{\alpha} \nu)(0,t) = P_1(x, [0,t))
= \frac{1}{2}\left[ \Psi\!\left(\tfrac{x}{t}\right)\left(2F(t) - 1 - G(t)\right) + G(t) \right] \mathbf{1}_{\{|x|<t\}}.
\]
The transition probability can now be computed by replacing $\nu$ with $\nu^{\vartriangle n}$ in the last formula. □
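Lemma 2 can be checked against simulation of the chain (our illustration; $\nu = \widetilde{\delta}_1$, $\alpha = 1$, $x = 1/2$, $t = 2$ are arbitrary test values, for which $F(2) = 1$, $G(2) = 1/2$ and the right-hand side equals $\tfrac12[\Psi(1/4)H(2) + G(2)] = 7/16$):

```python
import random

def kendall_step(x, y, alpha, rng):
    # sample from the kernel of Definition 2: the next value is M times a symmetric
    # Pareto factor with probability (m/M)^alpha, and +-M (fair sign) otherwise
    big, small = max(abs(x), abs(y)), min(abs(x), abs(y))
    if big == 0.0:
        return 0.0
    sign = 1.0 if rng.random() < 0.5 else -1.0
    if rng.random() < (small / big) ** alpha:
        return sign * big * rng.random() ** (-1.0 / (2.0 * alpha))
    return sign * big

alpha, x0, t = 1.0, 0.5, 2.0
# closed form of Lemma 2 with n = 1 for nu = symmetrized delta_1
G = 1.0 - 1.0 / t
H = 2.0 * 1.0 - 1.0 - G
psi = 1.0 - abs(x0 / t) ** alpha
p_formula = 0.5 * (psi * H + G)        # P_1(x0, [0, t)), here 7/16

rng = random.Random(1)
N = 200000
hits = sum(
    1 for _ in range(N)
    if 0.0 <= kendall_step(x0, 1.0 if rng.random() < 0.5 else -1.0, alpha, rng) < t
)
p_mc = hits / N
```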
In the following section we will also need a formula for the integral
\[
\int_{-\infty}^{a} \Psi\!\left(\tfrac{x}{t}\right) (\delta_y \vartriangle_{\alpha} \nu)(dx).
\]
In order to find it, we first need the following truncated moment of order $\alpha$.

Lemma 3. For all $a > 0$
\[
\int_{0}^{a} x^{\alpha}\, (\delta_y \vartriangle_{\alpha} \nu)(dx)
= \frac{1}{2}\left[ H(a)\left(a^{\alpha} - |y|^{\alpha}\right) + |y|^{\alpha} G(a) \right] \mathbf{1}_{\{|y|<a\}}
= \frac{a^{\alpha}}{2}\left[ H(a)\,\Psi\!\left(\tfrac{y}{a}\right) + \left(1 - \Psi\!\left(\tfrac{y}{a}\right)\right) G(a) \right] \mathbf{1}_{\{|y|<a\}}.
\]
Proof. By Lemma 2 we have
\[
(\delta_y \vartriangle_{\alpha} \delta_z)(0,x) = \frac{1}{2}\left(1 - \left|\tfrac{yz}{x^2}\right|^{\alpha}\right) \mathbf{1}_{\{|y|<x,\,|z|<x\}}.
\]
Integrating by parts, we obtain
\[
\int_{0}^{a} x^{\alpha}\, (\delta_y \vartriangle_{\alpha} \delta_z)(dx)
= a^{\alpha} (\delta_y \vartriangle_{\alpha} \delta_z)(0,a) - \int_{0}^{a} \alpha x^{\alpha-1} (\delta_y \vartriangle_{\alpha} \delta_z)(0,x)\, dx
= \left[ \frac{1}{2}|y|^{\alpha} - \frac{|yz|^{\alpha}}{a^{\alpha}} + \frac{1}{2}|z|^{\alpha} \right] \mathbf{1}_{\{|y|<a,\,|z|<a\}},
\]
from which it follows that
\[
\int_{0}^{a} x^{\alpha}\, (\delta_y \vartriangle_{\alpha} \nu)(dx)
= \int_{-\infty}^{\infty} \int_{0}^{a} x^{\alpha}\, (\delta_y \vartriangle_{\alpha} \delta_z)(dx)\, \nu(dz)
= \int_{-a}^{a} \left[ \frac{1}{2}|y|^{\alpha} - \frac{|yz|^{\alpha}}{a^{\alpha}} + \frac{1}{2}|z|^{\alpha} \right] \nu(dz)\, \mathbf{1}_{\{|y|<a\}}
\]
\[
= \frac{1}{2}\left[ H(a)\left(a^{\alpha} - |y|^{\alpha}\right) + |y|^{\alpha} G(a) \right] \mathbf{1}_{\{|y|<a\}}
= \frac{a^{\alpha}}{2}\left[ H(a)\,\Psi\!\left(\tfrac{y}{a}\right) + \left(1 - \Psi\!\left(\tfrac{y}{a}\right)\right) G(a) \right] \mathbf{1}_{\{|y|<a\}}. \qquad \square
\]
Now we can find the formula for $\int_{-\infty}^{a} \Psi\!\left(\tfrac{x}{t}\right)(\delta_y \vartriangle_{\alpha} \nu)(dx)$.

Lemma 4. For all $t \geq a \geq 0$ the following equality holds:
\[
\int_{-\infty}^{a} \Psi\!\left(\tfrac{x}{t}\right)(\delta_y \vartriangle_{\alpha} \nu)(dx)
= \left[ \frac{1}{2}\,\Psi\!\left(\tfrac{y}{a}\right) M(a,t) + \frac{1}{2}\, G(a)\,\Psi\!\left(\tfrac{a}{t}\right) \right] \mathbf{1}_{\{|y|<a\}} + \frac{1}{2}\,\Psi\!\left(\tfrac{y}{t}\right) G(t),
\]
where
\[
M(a,t) = H(a)\,\Psi\!\left(\tfrac{a}{t}\right) + \left(1 - \Psi\!\left(\tfrac{a}{t}\right)\right) G(a).
\]
Proof. By Lemmas 2 and 3 we have
\[
\int_{-\infty}^{a} \Psi\!\left(\tfrac{x}{t}\right)(\delta_y \vartriangle_{\alpha} \nu)(dx)
= (\delta_y \vartriangle_{\alpha} \nu)(-t,a) - t^{-\alpha}\int_{-t}^{a} |x|^{\alpha}\, (\delta_y \vartriangle_{\alpha} \nu)(dx)
\]
\[
= (\delta_y \vartriangle_{\alpha} \nu)(0,a) + (\delta_y \vartriangle_{\alpha} \nu)(0,t)
- t^{-\alpha}\int_{0}^{a} x^{\alpha}\, (\delta_y \vartriangle_{\alpha} \nu)(dx) - t^{-\alpha}\int_{0}^{t} x^{\alpha}\, (\delta_y \vartriangle_{\alpha} \nu)(dx)
\]
\[
= \frac{1}{2}\left[ \Psi\!\left(\tfrac{y}{a}\right)H(a) + G(a) \right]\mathbf{1}_{\{|y|<a\}}
+ \frac{1}{2}\left[ \Psi\!\left(\tfrac{y}{t}\right)H(t) + G(t) \right]\mathbf{1}_{\{|y|<t\}}
\]
\[
- \frac{a^{\alpha}}{2t^{\alpha}}\left[ H(a)\,\Psi\!\left(\tfrac{y}{a}\right) + \left(1 - \Psi\!\left(\tfrac{y}{a}\right)\right)G(a) \right]\mathbf{1}_{\{|y|<a\}}
- \frac{1}{2}\left[ H(t)\,\Psi\!\left(\tfrac{y}{t}\right) + \left(1 - \Psi\!\left(\tfrac{y}{t}\right)\right)G(t) \right]\mathbf{1}_{\{|y|<t\}},
\]
from which we obtain the desired result by regrouping the terms and noticing that, since $t \geq a$, we have $\left(\tfrac{a}{t}\right)^{\alpha} = 1 - \Psi\!\left(\tfrac{a}{t}\right)$. □

Let us notice that in particular for $t = a$
\[
\int_{-\infty}^{a} \Psi\!\left(\tfrac{x}{a}\right)(\delta_y \vartriangle_{\alpha} \nu)(dx) = G(a)\,\Psi\!\left(\tfrac{y}{a}\right),
\]
because it is the Williamson transform of the Kendall convolution of the two measures $\delta_y$ and $\nu$.
Based on the previous results, we will find closed-form formulas for two more important integrals. Let us define
\[
I(n,a,t) := \int_{-\infty}^{a} \dots \int_{-\infty}^{a} \Psi\!\left(\tfrac{x_n}{t}\right)(\delta_{x_{n-1}} \vartriangle_{\alpha} \nu)(dx_n) \dots \nu(dx_1),
\]
\[
II(n,a,t) := \int_{-\infty}^{a} \dots \int_{-\infty}^{a} \mathbf{1}_{\{|x_n|<t\}}\,(\delta_{x_{n-1}} \vartriangle_{\alpha} \nu)(dx_n) \dots \nu(dx_1).
\]
In both of these expressions we integrate $n$ times.
First we will find $I(n,a,a)$ and $II(n,a,a)$, which is much simpler than the general case and will be used in the following calculations.
Lemma 5. For all $n \geq 1$
\[
I(n,a,a) = G(a)^n.
\]

Proof. First, let us notice that
\[
I(1,a,a) = \int_{-\infty}^{a} \Psi\!\left(\tfrac{x_1}{a}\right)\nu(dx_1) = G(a)
\]
by the definition of the Williamson transform. By Lemma 4 (the case $t = a$) we have
\[
I(n,a,a) = \int_{-\infty}^{a} \dots \int_{-\infty}^{a} \Psi\!\left(\tfrac{x_n}{a}\right)(\delta_{x_{n-1}} \vartriangle_{\alpha} \nu)(dx_n) \dots \nu(dx_1)
= \int_{-\infty}^{a} \dots \int_{-\infty}^{a} G(a)\,\Psi\!\left(\tfrac{x_{n-1}}{a}\right)(\delta_{x_{n-2}} \vartriangle_{\alpha} \nu)(dx_{n-1}) \dots \nu(dx_1)
= G(a)\, I(n-1,a,a).
\]
It follows that $I(n,a,a)$ is a geometric sequence with common ratio equal to $G(a)$. □
Lemma 6. For all $n \geq 1$
\[
II(n,a,a) = G(a)^{n-1}\left[ n H(a) + G(a) \right].
\]

Proof. First, let us notice that
\[
II(1,a,a) = \int_{-\infty}^{a} \mathbf{1}_{\{|x_1|<a\}}\,\nu(dx_1) = 2F(a) - 1 = H(a) + G(a).
\]
By Lemma 2, the sequence $II(n,a,a)$ solves the following recurrence equation:
\[
II(n,a,a) = \int_{-\infty}^{a} \dots \int_{-\infty}^{a} \mathbf{1}_{\{|x_n|<a\}}\,(\delta_{x_{n-1}} \vartriangle_{\alpha} \nu)(dx_n) \dots \nu(dx_1)
\]
\[
= H(a)\int_{-\infty}^{a} \dots \int_{-\infty}^{a} \Psi\!\left(\tfrac{x_{n-1}}{a}\right)(\delta_{x_{n-2}} \vartriangle_{\alpha} \nu)(dx_{n-1}) \dots \nu(dx_1)
+ G(a)\int_{-\infty}^{a} \dots \int_{-\infty}^{a} \mathbf{1}_{\{|x_{n-1}|<a\}}\,(\delta_{x_{n-2}} \vartriangle_{\alpha} \nu)(dx_{n-1}) \dots \nu(dx_1)
\]
\[
= H(a)\, I(n-1,a,a) + G(a)\, II(n-1,a,a) = H(a)\, G(a)^{n-1} + G(a)\, II(n-1,a,a).
\]
On the other hand, we have
\[
H(a)G(a)^{n-1} + G(a)\, II(n-1,a,a)
= H(a)G(a)^{n-1} + G(a)\cdot G(a)^{n-2}\left[(n-1)H(a) + G(a)\right]
= G(a)^{n-1}\left[ n H(a) + G(a) \right] = II(n,a,a),
\]
which ends the proof. □
Using these results we can find an expression for $I(n,a,t)$.

Theorem 1. For $n \geq 2$ and $G(t) \neq 2G(a)$, the integral $I(n,a,t)$ is given by
\[
I(n,a,t) = C_1 \left(\frac{G(t)}{2}\right)^{n-1} + G(a)^n \left[ C_2\, n + C_3 \right],
\]
and for $n = 1$ by
\[
I(1,a,t) = \frac{G(t)}{2} + \frac{1}{2} H(a)\,\Psi\!\left(\tfrac{a}{t}\right) + \frac{G(a)}{2},
\]
where
\[
C_1(a,t) = I(1,a,t) - \frac{G(a)}{2G(a) - G(t)}\left[ G(a) + H(a)\,\Psi\!\left(\tfrac{a}{t}\right)\left(1 - \frac{G(t)}{2G(a) - G(t)}\right) \right],
\]
\[
C_2(a,t) = \frac{H(a)\,\Psi\!\left(\tfrac{a}{t}\right)}{2G(a) - G(t)},
\]
\[
C_3(a,t) = \frac{H(a)\,\Psi\!\left(\tfrac{a}{t}\right) + G(a)}{2G(a) - G(t)} - \frac{2H(a)G(a)\,\Psi\!\left(\tfrac{a}{t}\right)}{(2G(a) - G(t))^2}.
\]
For simplicity of notation we will write $C_i$ for $C_i(a,t)$, $i = 1,2,3$.
Proof. First, by Lemma 4 we find that
\[
I(1,a,t) = \int_{-\infty}^{a} \Psi\!\left(\tfrac{x_1}{t}\right)\nu(dx_1) = \frac{G(t)}{2} + \frac{1}{2}H(a)\,\Psi\!\left(\tfrac{a}{t}\right) + \frac{G(a)}{2}.
\]
By the same lemma we have
\[
I(n,a,t) = \frac{G(t)}{2}\, I(n-1,a,t) + \frac{M(a,t)}{2}\, I(n-1,a,a) + \frac{G(a)\,\Psi\!\left(\tfrac{a}{t}\right)}{2}\, II(n-1,a,a).
\]
Iterating this equation, we can see that
\[
I(n,a,t) = \left(\frac{G(t)}{2}\right)^{n-1} I(1,a,t)
+ \frac{M(a,t)}{2}\sum_{k=1}^{n-1} I(k,a,a)\left(\frac{G(t)}{2}\right)^{n-1-k}
+ \frac{G(a)\,\Psi\!\left(\tfrac{a}{t}\right)}{2}\sum_{k=1}^{n-1} II(k,a,a)\left(\frac{G(t)}{2}\right)^{n-1-k}.
\]
To check this equality, we find that
\[
\frac{G(t)}{2}\, I(n-1,a,t) + \frac{M(a,t)}{2}\, I(n-1,a,a) + \frac{G(a)\,\Psi\!\left(\tfrac{a}{t}\right)}{2}\, II(n-1,a,a)
\]
\[
= \left(\frac{G(t)}{2}\right)^{n-1} I(1,a,t)
+ \frac{M(a,t)}{2}\sum_{k=1}^{n-2} I(k,a,a)\left(\frac{G(t)}{2}\right)^{n-1-k}
+ \frac{G(a)\,\Psi\!\left(\tfrac{a}{t}\right)}{2}\sum_{k=1}^{n-2} II(k,a,a)\left(\frac{G(t)}{2}\right)^{n-1-k}
+ \frac{M(a,t)}{2}\, I(n-1,a,a) + \frac{G(a)\,\Psi\!\left(\tfrac{a}{t}\right)}{2}\, II(n-1,a,a)
\]
\[
= \left(\frac{G(t)}{2}\right)^{n-1} I(1,a,t)
+ \frac{M(a,t)}{2}\sum_{k=1}^{n-1} I(k,a,a)\left(\frac{G(t)}{2}\right)^{n-1-k}
+ \frac{G(a)\,\Psi\!\left(\tfrac{a}{t}\right)}{2}\sum_{k=1}^{n-1} II(k,a,a)\left(\frac{G(t)}{2}\right)^{n-1-k}
= I(n,a,t).
\]
It is enough to find the two sums used in the above formula. Using simple algebra we find that
\[
\sum_{k=1}^{n-1} I(k,a,a)\left(\frac{G(t)}{2}\right)^{n-1-k}
= \sum_{k=1}^{n-1} G(a)^k \left(\frac{G(t)}{2}\right)^{n-1-k}
= \frac{2G(a)}{2G(a) - G(t)}\left[ G(a)^{n-1} - \left(\frac{G(t)}{2}\right)^{n-1} \right]
\]
and
\[
\sum_{k=1}^{n-1} II(k,a,a)\left(\frac{G(t)}{2}\right)^{n-1-k}
= \sum_{k=1}^{n-1} G(a)^{k-1}\left[ kH(a) + G(a) \right]\left(\frac{G(t)}{2}\right)^{n-1-k}
\]
\[
= 2G(a)^{n-1}\left[ \frac{nH(a) + G(a)}{2G(a) - G(t)} - \frac{2G(a)H(a)}{(2G(a) - G(t))^2} \right]
- 2\left(\frac{G(t)}{2}\right)^{n-1}\left[ \frac{G(a)}{2G(a) - G(t)} - \frac{G(t)H(a)}{(2G(a) - G(t))^2} \right].
\]
Combining these results ends the proof. □
Now we can find the formula for $II(n,a,t)$.

Theorem 2. For $n \geq 2$ and $G(t) \neq 2G(a)$, the integral $II(n,a,t)$ is given by
\[
II(n,a,t) = G(a)^n \left[ \frac{(n+1)H(a) + G(a)}{2G(a) - G(t)} - \frac{2G(a)H(a)}{(2G(a) - G(t))^2}
+ \frac{H(t)}{2G(a) - G(t)}\left( nC_2 + C_3 - \frac{2C_2 G(a)}{2G(a) - G(t)} \right) \right]
\]
\[
+ \left(\frac{G(t)}{2}\right)^{n-1} \left[ II(1,a,t) - \frac{G(a)(H(a) + G(a))}{2G(a) - G(t)} + \frac{G(a)G(t)H(a)}{(2G(a) - G(t))^2}
+ (n-1)\,\frac{C_1 H(t)}{G(t)} - \frac{G(a)H(t)}{2G(a) - G(t)}\left( C_3 - \frac{C_2 G(t)}{2G(a) - G(t)} \right) \right].
\]
Proof. First, let us notice that $G(t) > 0$, since $t > 0$. By Lemma 2 and the definition of $H(t)$ we have
\[
II(1,a,t) = \frac{H(a) + H(t) + G(a) + G(t)}{2}.
\]
By the same lemma we find that
\[
II(n,a,t) = \frac{H(a)}{2}\, I(n-1,a,a) + \frac{G(a)}{2}\, II(n-1,a,a)
+ \frac{H(t)}{2}\, I(n-1,a,t) + \frac{G(t)}{2}\, II(n-1,a,t).
\]
By iterating the above formula for $II(n,a,t)$ we can see that
\[
II(n,a,t) = II(1,a,t)\left(\frac{G(t)}{2}\right)^{n-1}
+ \frac{H(a)}{2}\sum_{k=1}^{n-1} I(k,a,a)\left(\frac{G(t)}{2}\right)^{n-1-k}
+ \frac{G(a)}{2}\sum_{k=1}^{n-1} II(k,a,a)\left(\frac{G(t)}{2}\right)^{n-1-k}
+ \frac{H(t)}{2}\sum_{k=1}^{n-1} I(k,a,t)\left(\frac{G(t)}{2}\right)^{n-1-k}.
\]
An argument similar to the one provided for $I(n,a,t)$ shows that this expression solves the recurrence equation that defines $II(n,a,t)$. It is sufficient to find a closed form of the sum $\sum_{k=1}^{n-1} I(k,a,t)\left(\tfrac{G(t)}{2}\right)^{n-1-k}$. We have
\[
\sum_{k=1}^{n-1} I(k,a,t)\left(\frac{G(t)}{2}\right)^{n-1-k}
= \frac{2G(a)^n}{2G(a) - G(t)}\left( nC_2 + C_3 - \frac{2C_2 G(a)}{2G(a) - G(t)} \right)
+ \left(\frac{G(t)}{2}\right)^{n-2}\left[ (n-1)C_1 - \frac{G(a)G(t)}{2G(a) - G(t)}\left( C_3 - \frac{C_2 G(t)}{2G(a) - G(t)} \right) \right].
\]
A simple use of algebra ends the proof. □
For both $I(n,a,t)$ and $II(n,a,t)$ we needed to assume that $G(t) \neq 2G(a)$. In the case $G(t) = 2G(a)$ it is easy to check that we have
\[
I(n,a,t) = G(a)^{n-1}\left[ I(1,a,t) + \frac{n-1}{2}\left( B + \frac{n\, H(a)\,\Psi\!\left(\tfrac{a}{t}\right)}{2} \right) \right],
\]
\[
II(n,a,t) = G(a)^{n-1}\Bigg[ II(1,a,t) + \frac{H(a)(n-1)}{2}\left(1 + \frac{n}{2}\right) + \frac{G(a)(n-1)}{2}
+ \frac{H(t)}{2G(a)}\left( (n-1)\, I(1,a,t) + \frac{B(n-1)(n-2)}{4} + \frac{H(a)\,\Psi\!\left(\tfrac{a}{t}\right)\, n(n-1)(n-2)}{12} \right) \Bigg],
\]
where $B = H(a)\,\Psi\!\left(\tfrac{a}{t}\right) + G(a)$.
3. Fluctuations of Kendall random walk

For any positive $a$ we introduce the first hitting times of the half-lines $(a,\infty)$ and $(-\infty,a)$ for the random walk $\{X_n : n \in \mathbb{N}_0\}$:
\[
\tau_a^{+} = \min\{n \geq 1 : X_n > a\}, \qquad \tau_a^{-} = \min\{n \geq 1 : X_n < a\},
\]
and the weak ascending and descending ladder variables:
\[
\widetilde{\tau}_a^{+} = \min\{n \geq 1 : X_n \geq a\}, \qquad \widetilde{\tau}_a^{-} = \min\{n \geq 1 : X_n \leq a\},
\]
with the convention $\min \emptyset = \infty$. In [15] the authors found the joint distribution of the random vector $(\tau_0^{+}, X_{\tau_0^{+}})$. Our main goal here is to extend this result to any $a \geq 0$.

Lemma 7. The random variable $\tau_0^{+}$ (and, by the symmetry of the Kendall random walk, also the variable $\tau_0^{-}$) has geometric distribution
\[
\mathbf{P}(\tau_0^{+} = k) = \frac{1}{2^k}, \quad k = 1,2,\dots.
\]
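Lemma 7 is straightforward to test by simulation; notably, the geometric law of $\tau_0^{+}$ does not depend on the step distribution or on $\alpha$. The sketch below is our illustration with the arbitrary choices $\nu = \widetilde{\delta}_1$ and $\alpha = 1$:

```python
import random

def kendall_step(x, y, alpha, rng):
    # one transition of the Kendall random walk (kernel of Definition 2)
    big, small = max(abs(x), abs(y)), min(abs(x), abs(y))
    if big == 0.0:
        return 0.0
    sign = 1.0 if rng.random() < 0.5 else -1.0
    if rng.random() < (small / big) ** alpha:
        return sign * big * rng.random() ** (-1.0 / (2.0 * alpha))
    return sign * big

def tau0_plus(alpha, rng, cap=10000):
    """First n >= 1 with X_n > 0, starting from X_0 = 0 (fair +-1 step law)."""
    x = 0.0
    for n in range(1, cap + 1):
        x = kendall_step(x, 1.0 if rng.random() < 0.5 else -1.0, alpha, rng)
        if x > 0.0:
            return n
    return cap  # practically unreachable: P(tau > k) = 2^(-k)

rng = random.Random(2)
N = 200000
counts = {}
for _ in range(N):
    k = tau0_plus(1.0, rng)
    counts[k] = counts.get(k, 0) + 1
```

The empirical frequencies of $\{\tau_0^{+} = k\}$ match $2^{-k}$ for small $k$.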
We will now investigate the distribution of the random variable $\tau_a^{+}$. First, we notice that
\[
\mathbf{P}(\tau_a^{+} = n) = \mathbf{P}(X_0 \leq a, X_1 \leq a, \dots, X_{n-1} \leq a, X_n > a)
= \int_{-\infty}^{a} \dots \int_{-\infty}^{a} \int_{a}^{\infty} P_1(x_{n-1}, dx_n)\, P_1(x_{n-2}, dx_{n-1}) \dots P_1(0, dx_1).
\]
At the beginning we will compute the value of the innermost integral. The result is given in the following lemma.

Lemma 8.
\[
\int_{a}^{\infty} P_1(x_{n-1}, dx_n) = \frac{1}{2} - \frac{1}{2}\left[ \Psi\!\left(\tfrac{x_{n-1}}{a}\right) H(a) + G(a) \right]\mathbf{1}_{\{|x_{n-1}|<a\}},
\]
where $H(a) = 2F(a) - 1 - G(a)$.

Proof.
\[
\int_{a}^{\infty} P_1(x_{n-1}, dx_n) = \int_{a}^{\infty} (\delta_{x_{n-1}} \vartriangle_{\alpha} \nu)(dx_n) = (\delta_{x_{n-1}} \vartriangle_{\alpha} \nu)(a,\infty)
= \frac{1}{2} - \frac{1}{2}\left[ \Psi\!\left(\tfrac{x_{n-1}}{a}\right)\left(2F(a) - 1 - G(a)\right) + G(a) \right]\mathbf{1}_{\{|x_{n-1}|<a\}}
\]
by the symmetry of the measure $\delta_{x_{n-1}} \vartriangle_{\alpha} \nu$ and by Lemma 2. □
Iterating Lemma 8 $n$ times, we arrive at the tail distribution of $\widetilde{\tau}_a^{-}$:

Lemma 9.
\[
\mathbf{P}(\widetilde{\tau}_a^{-} > n+1) = \frac{1}{2^n}\left(1 - F(a)\right)
\quad\text{and}\quad
\mathbf{P}(\widetilde{\tau}_a^{-} = 1) = F(a).
\]

Proof. Indeed,
\[
\mathbf{P}(\widetilde{\tau}_a^{-} > n+1) = \mathbf{P}(X_1 > a, \dots, X_n > a, X_{n+1} > a)
= \int_{a}^{\infty} \dots \int_{a}^{\infty} \int_{a}^{\infty} P_1(x_n, dx_{n+1})\, P_1(x_{n-1}, dx_n) \dots P_1(0, dx_1).
\]
Since the inner integral is given by Lemma 8 as
\[
\int_{a}^{\infty} P_1(x_n, dx_{n+1}) = (\delta_{x_n} \vartriangle_{\alpha} \nu)(a,\infty)
= \frac{1}{2} - \frac{1}{2}\left[ \Psi\!\left(\tfrac{x_n}{a}\right) H(a) + G(a) \right]\mathbf{1}_{\{|x_n|<a\}},
\]
and the second term of this expression vanishes for $x_n > a$, we see that
\[
\int_{a}^{\infty} \int_{a}^{\infty} P_1(x_n, dx_{n+1})\, P_1(x_{n-1}, dx_n) = \frac{1}{2}\,(\delta_{x_{n-1}} \vartriangle_{\alpha} \nu)(a,\infty).
\]
Hence
\[
\int_{a}^{\infty} \dots \int_{a}^{\infty} \int_{a}^{\infty} P_1(x_n, dx_{n+1})\, P_1(x_{n-1}, dx_n) \dots P_1(0, dx_1) = \frac{1}{2^n}\,(\delta_0 \vartriangle_{\alpha} \nu)(a,\infty),
\]
which ends the proof of the first formula, because $\delta_0 \vartriangle_{\alpha} \nu = \nu$. Moreover, taking $n = 0$ in the tail formula gives $\mathbf{P}(\widetilde{\tau}_a^{-} > 1) = 1 - F(a)$, i.e. the distribution of $\widetilde{\tau}_a^{-}$ has an atom at $1$ with mass $F(a)$, and
\[
\mathbf{P}(\widetilde{\tau}_a^{-} = n) = \frac{1}{2^{n-1}}\left(1 - F(a)\right)
\]
for $n \geq 2$. This means exactly that $\widetilde{\tau}_a^{-}$ is geometric on $\{n \geq 2\}$, with total mass $\mathbf{P}(\widetilde{\tau}_a^{-} > 1) = 1 - F(a)$ there. □
Now it is evident that the distribution of $\widetilde{\tau}_a^{-}$ depends only on the cumulative distribution function of the unit step and the distribution of $\tau_0^{+}$:

Corollary 1.
\[
\mathbf{P}(\widetilde{\tau}_a^{-} = n) = \begin{cases}
F(a) & \text{if } n = 1, \\
(1 - F(a))\,\mathbf{P}(\tau_0^{+} = n-1) & \text{if } n \geq 2.
\end{cases}
\]

Let $\tau_a^{+}$ denote the first ladder moment of the Kendall random walk, meaning that
\[
\tau_a^{+} = \inf\{n \geq 1 : X_n > a\},
\]
where $(X_n)$ is the Kendall random walk. In this section we prove the first important result of this paper: a formula for the probability distribution function of the random variable $\tau_a^{+}$.
Theorem 3. For any $a \geq 0$ and $n \in \mathbb{N}$
\[
\mathbf{P}(\tau_a^{+} = n) = A(a)\left(\frac{1}{2}\right)^n + B(a)\, n\left(1 - G(a)\right)^2 G(a)^{n-1} + C(a)\, G(a)^{n-1}\left(1 - G(a)\right),
\]
where
\[
A(a) = 1 + \frac{H(a)}{(2G(a)-1)^2} - \frac{G(a)}{2G(a)-1},
\]
\[
B(a) = \frac{H(a)}{(2G(a)-1)(1-G(a))},
\]
\[
C(a) = \frac{G(a)}{2G(a)-1} - \frac{H(a)}{(2G(a)-1)^2}\cdot\frac{G(a)}{1-G(a)}.
\]
It is easy to see that $A(a) + B(a) + C(a) = 1$. The distribution of $\tau_a^{+}$ is a convex combination of two geometric distributions, one with probability of success equal to $\tfrac12$ and one with probability of success equal to $1 - G(a)$, and a shifted negative binomial distribution with parameters $2$ and $1 - G(a)$. Thus it is a mixture of negative binomial distributions with coefficients that depend on the unit step distribution of the associated Kendall random walk through both its CDF and its Williamson transform.
The proof of this formula uses the Markov property of the Kendall random walk. The probability distribution function of $\tau_a^{+}$ is expressed as an iterated integral with respect to the transition kernels. The results of consecutive integrations form a sequence whose closed form we will find in order to calculate $\mathbf{P}(\tau_a^{+} = n)$.
Proof. Since $(X_n)$ is a Markov process, we have
\[
\mathbf{P}(\tau_a^{+} = n) = \mathbf{P}(X_0 \leq a, X_1 \leq a, \dots, X_{n-1} \leq a, X_n > a)
= \int_{-\infty}^{a} \dots \int_{-\infty}^{a} \int_{a}^{\infty} P_1(x_{n-1}, dx_n)\, P_1(x_{n-2}, dx_{n-1}) \dots P_1(0, dx_1).
\]
Using Lemma 2, we can compute the innermost integral:
\[
\int_{a}^{\infty} P_1(x_{n-1}, dx_n) = \frac{1}{2} - \frac{1}{2}\left[ \Psi\!\left(\tfrac{x_{n-1}}{a}\right) H(a) + G(a) \right]\mathbf{1}_{\{|x_{n-1}|<a\}}.
\]
Now we can define a sequence $(I_j)$, where $I_j$ denotes the result of the $j$-th integration. Let
\[
I_1 = \int_{a}^{\infty} P_1(x_{n-1}, dx_n)
\quad\text{and}\quad
I_{j+1} = \int_{-\infty}^{a} I_j\, (\delta_{x_{n-j-1}} \vartriangle_{\alpha} \nu)(dx_{n-j}).
\]
Let us notice that $I_j$ is of the form
\[
A_j + \left[ B_j\, H(a)\,\Psi\!\left(\tfrac{x_{n-j}}{a}\right) + C_j\, G(a) \right]\mathbf{1}_{\{|x_{n-j}|<a\}},
\]
which is easy to verify by integrating this expression with respect to $x_{n-j}$. This way we also obtain recurrence equations for the sequences $A_j$, $B_j$ and $C_j$. We have
\[
A_{j+1} = \frac{1}{2}A_j, \qquad
B_{j+1} = \frac{1}{2}A_j + G(a)\left(B_j + C_j\right), \qquad
C_{j+1} = \frac{1}{2}A_j + C_j\, G(a),
\]
with the initial conditions $A_1 = \tfrac12$, $B_1 = C_1 = -\tfrac12$. It is easy to see that $A_j = \left(\tfrac12\right)^j$ is a geometric sequence. After plugging the formula for $A_j$ into the equations that define $B_j$ and $C_j$ and iterating, we arrive at the following solutions:
\[
B_j = G(a)^{j-1}\,\frac{j(1 - G(a))(2G(a)-1) - G(a)}{(2G(a)-1)^2} + \left(\frac{1}{2}\right)^j \frac{1}{(2G(a)-1)^2},
\]
\[
C_j = \frac{G(a)^{j-1}\left(1 - G(a)\right) - 2^{-j}}{2G(a)-1}
= G(a)^{j-1}\,\frac{1 - G(a)}{2G(a)-1} - \left(\frac{1}{2}\right)^j \frac{1}{2G(a)-1}.
\]
We will check that these sequences satisfy our recurrence equations. For the sequence $B_j$:
\[
\left(\frac{1}{2}\right)^{j+1} + G(a)\left(B_j + C_j\right)
= G(a)^j\left[ \frac{j(1-G(a))(2G(a)-1) - G(a)}{(2G(a)-1)^2} + \frac{1-G(a)}{2G(a)-1} \right]
+ \left(\frac{1}{2}\right)^{j+1}\left[ 1 + \frac{2G(a)}{(2G(a)-1)^2} - \frac{2G(a)}{2G(a)-1} \right]
\]
\[
= G(a)^j\,\frac{(j+1)(1-G(a))(2G(a)-1) - G(a)}{(2G(a)-1)^2}
+ \left(\frac{1}{2}\right)^{j+1}\frac{1}{(2G(a)-1)^2} = B_{j+1}.
\]
For the sequence $C_j$ we have
\[
G(a)\, C_j + 2^{-(j+1)} = \frac{G(a)^j(1-G(a)) - 2^{-j}G(a)}{2G(a)-1} + 2^{-(j+1)}
= \frac{G(a)^j(1-G(a)) - 2^{-j}G(a) + 2^{-(j+1)}(2G(a)-1)}{2G(a)-1}
= \frac{G(a)^j(1-G(a)) - 2^{-(j+1)}}{2G(a)-1} = C_{j+1}.
\]
Since $\mathbf{1}_{\{|x_0|<a\}} = 1$ and substituting $x_0 = 0$ gives $\Psi\!\left(\tfrac{x_0}{a}\right) = 1$, we obtain
\[
\mathbf{P}(\tau_a^{+} = n) = A_n + B_n H(a) + C_n G(a),
\]
which, after grouping the terms in the formulas for $B_n$ and $C_n$, proves the formula for the probability distribution function of $\tau_a^{+}$. □
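Theorem 3 lends itself to a direct numerical check (our illustration only; all concrete values are arbitrary test choices). For $\nu = \widetilde{\delta}_1$, $\alpha = 1$ and $a = 3/2$ one has $G(a) = 1/3$ and $H(a) = 2/3$; the formula then gives $\mathbf{P}(\tau_a^+ = 1) = 0$ (consistent with $|X_1| = 1 < a$), $\mathbf{P}(\tau_a^+ = 2) = 2/9$ and $\mathbf{P}(\tau_a^+ = 3) = 7/27$, and the probabilities sum to $1$:

```python
import random

def tau_pmf(n, g, h):
    """P(tau_a^+ = n) from Theorem 3, as a function of g = G(a), h = H(a)."""
    d = 2.0 * g - 1.0
    A = 1.0 + h / d ** 2 - g / d
    B = h / (d * (1.0 - g))
    C = g / d - h / d ** 2 * g / (1.0 - g)
    return (A * 0.5 ** n
            + B * n * (1.0 - g) ** 2 * g ** (n - 1)
            + C * g ** (n - 1) * (1.0 - g))

g, h = 1.0 / 3.0, 2.0 / 3.0          # G(3/2), H(3/2) for the +-1 step, alpha = 1
total = sum(tau_pmf(n, g, h) for n in range(1, 400))

def kendall_step(x, y, alpha, rng):
    # one transition of the Kendall random walk (kernel of Definition 2)
    big, small = max(abs(x), abs(y)), min(abs(x), abs(y))
    if big == 0.0:
        return 0.0
    sign = 1.0 if rng.random() < 0.5 else -1.0
    if rng.random() < (small / big) ** alpha:
        return sign * big * rng.random() ** (-1.0 / (2.0 * alpha))
    return sign * big

def tau_a(a, rng, cap=10000):
    x = 0.0
    for n in range(1, cap + 1):
        x = kendall_step(x, 1.0 if rng.random() < 0.5 else -1.0, 1.0, rng)
        if x > a:
            return n
    return cap

rng = random.Random(3)
N = 200000
freq = {}
for _ in range(N):
    k = tau_a(1.5, rng)
    freq[k] = freq.get(k, 0) + 1
```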
4. Distribution of the first ladder height

In this section we give the formula for the cumulative distribution function of the first ladder height over any level $a \geq 0$. At the beginning let us look at a special case of the desired result, for $a = 0$, which was proved in [15].

Theorem 4. The cumulative distribution function of the joint distribution of the random variables $\tau_0^{+}$ and $X_{\tau_0^{+}}$ is given by
\[
\Phi_n^0(t) = \mathbf{P}\left(X_{\tau_0^{+}} \leq t,\ \tau_0^{+} = n\right)
= \frac{1}{2^n}\, G(t)^{n-1}\left[ 2nF(t) - n - (n-1)G(t) \right].
\]
Notice that
\[
\Phi_n^0(t) = \mathbf{P}\left(\tau_0^{+} = n\right)\,\mathbf{P}\left(|X_n| < t\right)
\]
and
\[
\mathbf{P}\left(X_{\tau_0^{+}} \leq t\right) = \frac{4F(t) - 2 - G(t)^2}{\left(2 - G(t)\right)^2}.
\]

Our goal in this section is to generalize this result to any level $a \geq 0$ in the following way:
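The marginal law in Theorem 4 can be verified by simulation (our illustration; the choices $\nu = \widetilde{\delta}_1$, $\alpha = 1$, $t = 2$ are arbitrary). Then $F(2) = 1$ and $G(2) = 1/2$, so $\mathbf{P}(X_{\tau_0^+} \leq 2) = (4 - 2 - 1/4)/(3/2)^2 = 7/9$, and the joint value $\Phi_2^0(2) = \tfrac14\, G(2)\,[4F(2) - 2 - G(2)] = 3/16$:

```python
import random

def kendall_step(x, y, alpha, rng):
    # one transition of the Kendall random walk (kernel of Definition 2)
    big, small = max(abs(x), abs(y)), min(abs(x), abs(y))
    if big == 0.0:
        return 0.0
    sign = 1.0 if rng.random() < 0.5 else -1.0
    if rng.random() < (small / big) ** alpha:
        return sign * big * rng.random() ** (-1.0 / (2.0 * alpha))
    return sign * big

def first_ladder(rng, cap=10000):
    """Return (tau_0^+, X_{tau_0^+}) for the fair +-1 step law, alpha = 1."""
    x = 0.0
    for n in range(1, cap + 1):
        x = kendall_step(x, 1.0 if rng.random() < 0.5 else -1.0, 1.0, rng)
        if x > 0.0:
            return n, x
    return cap, x

rng = random.Random(4)
N = 200000
marg = joint2 = 0
for _ in range(N):
    n, height = first_ladder(rng)
    if height <= 2.0:
        marg += 1
        if n == 2:
            joint2 += 1

t = 2.0
F, G = 1.0, 1.0 - 1.0 / t
p_marginal = (4.0 * F - 2.0 - G ** 2) / (2.0 - G) ** 2          # Theorem 4: 7/9
phi2_formula = (1.0 / 2.0 ** 2) * G * (2 * 2 * F - 2 - 1 * G)   # Phi_2^0(2): 3/16
```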
Theorem 5. The cumulative distribution function of the joint distribution of the random variables $\tau_a^{+}$ and $X_{\tau_a^{+}}$ is given by
\[
\Phi_n^a(t) := \mathbf{P}\left(X_{\tau_a^{+}} \leq t,\ \tau_a^{+} = n\right)
= \left(\frac{G(t)}{2}\right)^{n-1}\Bigg[ \frac{2G(a)H(a)\left(G(t) - G(a)\right)}{(2G(a) - G(t))^2} - \frac{G(a)^2}{2G(a) - G(t)}
+ (n-1)\,\frac{C_1 H(t)}{G(t)} - \frac{G(a)H(t)}{2G(a) - G(t)}\left( C_3 - \frac{C_2 G(t)}{2G(a) - G(t)} \right) + II(1,a,t) \Bigg]
\]
\[
+ G(a)^{n-1}\Bigg[ \frac{\left(nH(a) + G(a)\right)\left(G(t) - G(a)\right)}{2G(a) - G(t)} - \frac{G(a)G(t)H(a)}{(2G(a) - G(t))^2}
+ \frac{G(a)H(t)\, C_2\, n}{2G(a) - G(t)} + \frac{G(t)H(t)}{2\left(2G(a) - G(t)\right)}\left( C_3 - C_2 - \frac{2G(a)C_2}{2G(a) - G(t)} \right) + \frac{H(t)\left(C_3 - C_2\right)}{2} \Bigg]
\]
for $n \geq 2$ and $t > a$ such that $|G(t)| < 1$, with the expressions $C_1, C_2, C_3$ defined in Theorem 1. Moreover, $\Phi_n^a(0) = 0$. For $n = 1$ we simply have $\mathbf{P}\left(X_{\tau_a^{+}} \leq t,\ \tau_a^{+} = 1\right) = F(t) - F(a)$. Since $G(0) = 0$ and consequently $H(0) = 0$, it is easy to see that for $a = 0$ this expression simplifies to the expression given in Theorem 4. We also need the technical assumption that $G(t) \neq 2G(a)$.
Proof. Let us introduce the notation
\[
\Phi_n^a(t) := \mathbf{P}\left(\tau_a^{+} = n,\ X_{\tau_a^{+}} \leq t\right) = \mathbf{P}\left(X_1 \leq a, \dots, X_{n-1} \leq a,\ a < X_n \leq t\right)
= \int_{-\infty}^{a} \dots \int_{-\infty}^{a} \int_{a}^{t} (\delta_{x_{n-1}} \vartriangle_{\alpha} \nu)(dx_n)\,(\delta_{x_{n-2}} \vartriangle_{\alpha} \nu)(dx_{n-1}) \dots \nu(dx_1).
\]
The innermost integral can be computed using Lemma 2. Then this probability can be expressed as
\[
\Phi_n^a(t) = \frac{H(t)}{2}\, I(n-1,a,t) + \frac{G(t)}{2}\, II(n-1,a,t) - \frac{H(a)}{2}\, I(n-1,a,a) - \frac{G(a)}{2}\, II(n-1,a,a).
\]
Substituting the expressions obtained in Theorems 1 and 2 for the terms $I(n-1,a,t)$ and $II(n-1,a,t)$, and those from Lemmas 5 and 6 for $I(n-1,a,a)$ and $II(n-1,a,a)$, ends the proof. □
We can now find the marginal distribution of the random variable $X_{\tau_a^{+}}$.

Corollary 2. The cumulative distribution function of the random variable $X_{\tau_a^{+}}$ is given by the following formula:
\[
\mathbf{P}\left(X_{\tau_a^{+}} \leq t\right) = F(t) - F(a)
+ \frac{G(t)}{2 - G(t)}\Bigg[ \frac{H(t)\left(4 - G(t)\right)C_1}{G(t)\left(2 - G(t)\right)} + \frac{2G(a)H(a)\left(G(t) - G(a)\right)}{(2G(a) - G(t))^2}
- \frac{G(a)^2}{2G(a) - G(t)} - \frac{C_1 H(t)}{G(t)} - \frac{G(a)H(t)}{2G(a) - G(t)}\left( C_3 - \frac{C_2 G(t)}{2G(a) - G(t)} \right) + II(1,a,t) \Bigg]
\]
\[
+ \frac{G(a)}{1 - G(a)}\Bigg[ \frac{\left(2 - G(a)\right)\left( H(a)\left(G(t) - G(a)\right) + G(a)H(t)C_2 \right)}{\left(1 - G(a)\right)\left(2G(a) - G(t)\right)} + \frac{G(a)\left(G(t) - G(a)\right)}{2G(a) - G(t)}
- \frac{G(a)G(t)H(a)}{(2G(a) - G(t))^2} + \frac{G(t)H(t)}{2\left(2G(a) - G(t)\right)}\left( C_3 - C_2 - \frac{2G(a)C_2}{2G(a) - G(t)} \right) + \frac{H(t)\left(C_3 - C_2\right)}{2} \Bigg].
\]
Again, it is easy to check that for $a = 0$ this expression simplifies to the expression given in Theorem 4.
4.1. Maxima and minima of Kendall random walks. In this section we prove an analogue of the Pollaczek-Khintchine formula. We start with a lemma that describes the distribution of the maximum of the first $n$ steps of a Kendall random walk.

Lemma 10. Let $\{X_n : n \in \mathbb{N}_0\}$ denote the Kendall random walk. Then the distribution of $\max_{0 \leq i \leq n} X_i$ is given by
\[
\mathbf{P}\left(\max_{0 \leq i \leq n} X_i \leq t\right)
= A(t)\,\mathbf{P}(\tau_0^{+} = n)
+ B(t)\,\frac{G(t)}{1 - G(t)}\,\left(1 - G(t)\right)^2 n\, G(t)^{n-1}
+ \left(B(t) + C(t)\right)\frac{G(t)}{1 - G(t)}\, G(t)^{n-1}\left(1 - G(t)\right)
\]
for the functions $A$, $B$ and $C$ defined in Theorem 3 and $t > 0$.

Proof. It is sufficient to see that
\[
\mathbf{P}\left(\max_{0 \leq i \leq n} X_i \leq t\right) = \mathbf{P}(X_1 \leq t, \dots, X_n \leq t)
= \mathbf{P}(X_1 \leq t, \dots, X_n \leq t, X_{n+1} \leq t) + \mathbf{P}(X_1 \leq t, \dots, X_n \leq t, X_{n+1} > t)
\]
\[
= \mathbf{P}(\tau_t^{+} > n+1) + \mathbf{P}(\tau_t^{+} = n+1) = \mathbf{P}(\tau_t^{+} \geq n+1).
\]
Summation of the formula from Theorem 3 for the distribution of $\tau_t^{+}$ ends the proof. □
Lemma 11. Let $\{X_n : n \in \mathbb{N}_0\}$ denote the Kendall random walk. Then the distribution of $\min_{0 \leq i \leq n} X_i$ is given by
\[
\mathbf{P}\left(\min_{0 \leq i \leq n} X_i \leq t\right)
= 1 - \frac{1}{2^n}\left[ 1 + \frac{H(t)}{(2G(t)-1)^2} - \frac{G(t)}{2G(t)-1} \right]
- G(t)^n\left[ \frac{G(t)}{2G(t)-1} - \frac{H(t)}{(2G(t)-1)^2} \right]
- n\, G(t)^{n-1}\,\frac{H(t)\,G(t)}{2G(t)-1}
\]
for $t < 0$ (here, by the symmetry of $\nu$, $G(t) = G(|t|)$ and $H(t) = H(|t|)$).

Proof. The proof is a simple modification of the proof of Theorem 3. For $t < 0$ we have
\[
\mathbf{P}\left(\min_{0 \leq i \leq n} X_i \leq t\right) = 1 - \mathbf{P}\left(\min_{0 \leq i \leq n} X_i > t\right)
= 1 - \mathbf{P}(X_0 > t, X_1 > t, \dots, X_n > t)
= 1 - \int_{t}^{\infty} \dots \int_{t}^{\infty} (\delta_{x_{n-1}} \vartriangle_{\alpha} \nu)(dx_n) \dots (\delta_{x_0} \vartriangle_{\alpha} \nu)(dx_1).
\]
We define
\[
I_1 := \int_{t}^{\infty} (\delta_{x_{n-1}} \vartriangle_{\alpha} \nu)(dx_n)
\quad\text{and recursively}\quad
I_{j+1} := \int_{t}^{\infty} I_j\,(\delta_{x_{n-j-1}} \vartriangle_{\alpha} \nu)(dx_{n-j})
\]
for $1 \leq j \leq n-1$ and $x_0 = 0$. Assuming $I_j = A_j + B_j\,\Psi\!\left(\tfrac{x_{n-j}}{t}\right) + C_j\,\mathbf{1}_{\{|x_{n-j}|<|t|\}}$, we have
\[
I_{j+1} = \int_{t}^{\infty} \left[ A_j + B_j\,\Psi\!\left(\tfrac{x_{n-j}}{t}\right) + C_j\,\mathbf{1}_{\{|x_{n-j}|<|t|\}} \right](\delta_{x_{n-j-1}} \vartriangle_{\alpha} \nu)(dx_{n-j})
\]
\[
= A_j\left[ \frac{1}{2} + \frac{1}{2}\left( H(t)\,\Psi\!\left(\tfrac{x_{n-j-1}}{t}\right) + G(t) \right)\mathbf{1}_{\{|x_{n-j-1}|<|t|\}} \right]
+ B_j\, G(t)\,\Psi\!\left(\tfrac{x_{n-j-1}}{t}\right)
+ C_j\left[ H(t)\,\Psi\!\left(\tfrac{x_{n-j-1}}{t}\right) + G(t) \right]\mathbf{1}_{\{|x_{n-j-1}|<|t|\}}
\]
\[
= \frac{1}{2}A_j + \Psi\!\left(\tfrac{x_{n-j-1}}{t}\right)\left[ \frac{1}{2}H(t)A_j + G(t)B_j + C_j H(t) \right]
+ \mathbf{1}_{\{|x_{n-j-1}|<|t|\}}\left[ \frac{1}{2}G(t)A_j + G(t)C_j \right].
\]
Thus we arrive at the following set of recurrence equations:
\[
A_{j+1} = \frac{1}{2}A_j, \qquad
B_{j+1} = G(t)B_j + H(t)\left(A_{j+1} + C_j\right), \qquad
C_{j+1} = G(t)A_{j+1} + G(t)C_j,
\]
with the initial conditions $A_1 = \tfrac12$, $B_1 = \tfrac12 H(t)$, $C_1 = \tfrac12 G(t)$. It is easy to check that the solutions are given by the sequences
\[
A_j = \frac{1}{2^j}, \qquad
B_j = j\, G(t)^j\,\frac{H(t)}{2G(t)-1} - G(t)^j\,\frac{H(t)}{(2G(t)-1)^2} + \frac{1}{2^j}\,\frac{H(t)}{(2G(t)-1)^2}, \qquad
C_j = \frac{G(t)}{2G(t)-1}\left( G(t)^j - \frac{1}{2^j} \right).
\]
Substituting $x_0 = 0$, for which $\Psi\!\left(\tfrac{x_0}{t}\right) = 1$ and $\mathbf{1}_{\{|x_0|<|t|\}} = 1$, gives $\mathbf{P}\left(\min_{0 \leq i \leq n} X_i > t\right) = A_n + B_n + C_n$, which yields the stated formula. □
Acknowledgements. This paper is a part of the project "First order Kendall maximal autoregressive processes and their applications", which is carried out within the POWROTY/REINTEGRATION programme of the Foundation for Polish Science, co-financed by the European Union under the European Regional Development Fund.
References
[1] T. Alpuim. An Extremal Markovian Sequence. J. Appl. Probab., 26(2), 219–232, 1989.
[2] N. H. Bingham. Factorization theory and domains of attraction for generalized convolution algebra. Proc. London Math. Soc. 23(3), 16–30, 1971.
[3] N. H. Bingham. On a theorem of Kłosowska about generalized convolutions. Coll.
Math. 48(1), 117–125, 1984.
[4] M. Borowiecka-Olszewska, B.H. Jasiulis-Gołdyn, J.K. Misiewicz, J. Rosiński. Lévy
processes and stochastic integrals in the sense of generalized convolutions. Bernoulli
21(4), 2513–2551, 2015.
[5] P. Embrechts, C. Klüppelberg, T. Mikosch. Modelling Extremal Events: For Insurance and Finance. Applications of Mathematics, Stochastic Modelling and Applied Probability 33, Springer-Verlag Berlin Heidelberg, 1997.
[6] W. Feller. An Introduction to Probability Theory and Its Applications II, 2-nd edition,
John Wiley & Sons, 1971.
[7] M. Ferreira, L. Canto e Castro. Modeling rare events through a pRARMAX process.
Journal of Statistical Planning and Inference 140, 3552–3566, 2010.
[8] M. Ferreira. On the extremal behavior of a Pareto process: an alternative for ARMAX modeling. Kybernetika 48(1), 31–49, 2012.
[9] J. Gilewski. Generalized convolutions and delphic semigroups. Coll. Math. 25, 281–289, 1972.
[10] J. Gilewski, K. Urbanik. Generalized convolutions and generating functions. Bull.
Acad. Sci. Polon. Ser. Math. Astr. Phys. 16, 481–487, 1968.
[11] B.H. Jasiulis-Gołdyn. Kendall random walks. Probab. Math. Stat. 36(1), 165–185,
2016.
[12] B.H. Jasiulis. Limit property for regular and weak generalized convolution. J. Theoret.
Probab. 23(1), 315–327, 2010.
[13] B. H. Jasiulis-Gołdyn, K. Naskręt, J.K. Misiewicz, E. Omey. Renewal theory for extremal Markov sequences of the Kendall type, submitted, 2018, https://arxiv.org/pdf/1803.11090.pdf.
[14] B.H. Jasiulis-Gołdyn, J.K. Misiewicz. Classical definitions of the Poisson process do not coincide in the case of weak generalized convolution. Lith. Math. J., 55(4), 518–542, 2015.
[15] B.H. Jasiulis-Gołdyn, J.K. Misiewicz. Kendall random walk, Williamson transform
and the corresponding Wiener-Hopf factorization. Lith. Math. J., 57(4), 479–489,
2017.
[16] B.H. Jasiulis-Gołdyn, A. Kula. The Urbanik generalized convolutions in the non-commutative probability and a forgotten method of constructing generalized convolution. Proc. Math. Sci. 122(3), 437–458, 2012.
[17] B.H. Jasiulis-Gołdyn, J.K. Misiewicz. Weak Lévy-Khintchine representation for weak infinite divisibility. Theory of Probability and Its Applications, 60(1), 132–151, 2016.
[18] D. G. Kendall. Delphic semi-groups, infinitely divisible regenerative phenomena, and
the arithmetic of p-functions. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 9(3),
163–195, 1968.
[19] J. Kennedy. Understanding the Wiener-Hopf factorization for the simple random
walk. J. Appl. Math., 31, 561–563, 1994.
[20] J. F. C. Kingman. Random Walks with Spherical Symmetry. Acta Math. 109(1), 11–53, 1963.
[21] A.E. Kyprianou. Introductory lectures on fluctuations of Lévy processes and applications. Springer, 2006.
[22] A.E. Kyprianou, Z. Palmowski. Fluctuations of spectrally negative Markov Additive
processes. Séminaire de Probabilités, XLI, 121–135, 2008.
[23] A. Lachal. A note on Spitzer identity for random walk. Statistics & Probability Letters, 78(2), 97–108, 2008.
[24] A.J. McNeil, J. Nešlehová. Multivariate Archimedean copulas, d-monotone functions and ℓ1-norm symmetric distributions. The Annals of Statistics 37(5B), 3059–3097, 2009.
[25] A.J. McNeil, J. Nešlehová. From Archimedean to Liouville Copulas. J. Multivariate
Analysis 101(8), 1771-1790, 2010.
[26] T. Nakajima. Joint distribution of first hitting time and first hitting place for random
walk. Kodai Math. J. 21, 192–200, 1998.
[27] K-I. Sato. Lévy Processes and Infinitely Divisible Distributions. Cambridge University
Press, 2007.
[28] R. Sousa, M. Guerra, S. Yakubovich. Lévy processes with respect to the index Whittaker convolution. arXiv: https://arxiv.org/pdf/1805.03051.pdf, 2018.
[29] K. Urbanik. Generalized convolutions I–V. Studia Math. 23 (1964), 217–245;
45 (1973), 57–70; 80 (1984), 167–189; 83 (1986), 57–95; 91 (1988), 153–178.
[30] V. Vol’kovich, D. Toledano-Ketai, R. Avros. On analytical properties of generalized
convolutions. Banach Center Publications, Stability in Probability 5(3), 243–274.
2010.
[31] R.E. Williamson, Multiply monotone functions and their Laplace transforms. Duke
Math. J. 23, 189–207, 1956.
[32] H. Ch. Yeh, B.C. Arnold, C.A. Robertson. Pareto Processes. Journal of Applied
Probability, 25(2), 291–301, 1988.