
arXiv:1902.00576v1 [math.PR] 1 Feb 2019

FLUCTUATIONS OF EXTREMAL MARKOV CHAINS OF THE KENDALL TYPE

B.H. JASIULIS-GOŁDYN¹ AND M. STANIAK²

Abstract. The paper deals with fluctuations of Kendall random walks, which are extremal Markov chains. We give the joint distribution of the first ascending ladder epoch and ladder height over any level $a \geq 0$, as well as the distributions of the maximum and minimum for these extremal Markovian sequences. We show that the distribution of the first crossing time of a level $a \geq 0$ is a mixture of geometric and negative binomial distributions. The Williamson transform is the main tool for the problems considered here, connected with the Kendall convolution.

1. Introduction

Addition of independent random variables and the corresponding operation on their distributions, the convolution, is one of the most common operations in probability theory and its applications. The classical convolution is a special case of a much more general operation called a generalized convolution.
The origins of generalized convolutions can be found in the theory of Delphic semigroups ([9, 10, 18]). Inspired by Kingman's study of spherical random walks ([20]), Urbanik introduced the notion of a generalized convolution for measures concentrated on the positive half-line in a series of papers [29]. This definition was extended to symmetric measures on $\mathbb{R}$ by Jasiulis-Gołdyn in [12]. Generalized convolutions were explored with the use of regular variation ([2, 3, 13]) and were used to construct Lévy processes and stochastic integrals ([4], [28]).

In the theory of generalized convolutions, we create new mathematical objects with potential for applications. It is enough to look at the case of the maximum convolution, which corresponds to extreme value theory ([5]). Currently, the limit distributions for extremes, i.e. the generalized extreme value distributions (Fréchet, Gumbel, Weibull), are commonly used for modeling rainfall, floods, droughts, cyclones, extreme air pollution, etc. Random walks with respect to generalized convolutions form a class of extremal Markov chains (see [1, 4, 13]). Studying them in the appropriate algebras will be a meaningful contribution to extreme value theory ([5]). Kendall random walks ([11, 13]), which are the main objects of investigation in this paper, are related to maximal processes (and thus to the maximum convolution), Pareto processes ([8, 32]) and pRARMAX processes ([7]). One of the differences lies in

¹ Institute of Mathematics, University of Wrocław, pl. Grunwaldzki 2/4, 50-384 Wrocław, Poland, e-mail: jasiulis@math.uni.wroc.pl
² Faculty of Mathematics and Information Science, Warsaw University of Technology, ul. Koszykowa 75, 00-662 Warszawa, Poland, e-mail: m.staniak@mini.pw.edu.pl
Keywords and phrases: Kendall convolution, Markov process, Pareto distribution, random walk, Spitzer identity, Pollaczek-Khintchine formula, Williamson transform

Mathematics Subject Classification: 60K05, 60G70, 44A35, 60J05, 60E10.


the fact that in this case the values of the process are randomly multiplied by a heavy-tailed Pareto random variable, which results in even more extreme behavior than in the case of the classical maximum process.

Generalized convolutions have connections with the theory of weakly stable measures ([12, 17]) and with non-commutative probability ([16]). Through the Williamson transform one can also find connections with copula theory ([24, 25]).

Fluctuations of classical random walks and Lévy processes have been widely described in the literature (see e.g. [6, 27]) and are still an object of interest [19, 21, 22, 23, 26]. This paper is a continuation of research initiated in [15]. The main result of this paper is a description of the fluctuations of random walks generated by a particular generalized convolution, the Kendall convolution, in terms of the first ladder epoch and the first ladder height of the random walk over any level $a \geq 0$. It turns out that the distribution of the first ladder epoch is a mixture of three negative binomial distributions whose coefficients and parameters depend on the unit step distribution. We also present the distributions of maxima and minima for Kendall random walks in terms of the Williamson transform and the unit step cumulative distribution function. This description of the behavior of the extremes of the studied stochastic processes is an analogue of the Pollaczek-Khintchine equation from the classical theory.

Organization of the paper: We begin Section 2 by recalling the Kendall convolution ([11, 15]), which is an example of a generalized convolution in the sense defined by K. Urbanik [29]. This convolution is quite specific because the result of convolving two point-mass probability measures is a convex linear combination of a measure concentrated at one point and a Pareto distribution. Next, we present the definition and main properties of the Williamson transform ([15, 31]), which is the analog of the characteristic function in the Kendall convolution algebra. This transform is very easy to invert and allows us to obtain many results for extremal Markov sequences of the Kendall type.
In the third section, we consider the first ascending ladder epoch over any level $a \geq 0$ and prove that its distribution is a convex linear combination of geometric and negative binomial distributions. Section 4 consists of two parts: the distribution of the first ladder height, and the maximum and minimum distributions for the studied stochastic processes.

Notation: The distribution of a random element $X$ is denoted by $\mathcal{L}(X)$. For a probability measure $\lambda$ and $a \in \mathbb{R}_+$ the rescaling operator is given by $T_a\lambda = \mathcal{L}(aX)$ if $\lambda = \mathcal{L}(X)$. By $\mathcal{P}$ we denote the family of all probability measures on the Borel subsets of $\mathbb{R}$, while by $\mathcal{P}_s$ we denote the symmetric probability measures on $\mathbb{R}$. For brevity, the set of all natural numbers including zero is denoted by $\mathbb{N}_0$. Additionally, $\tilde\pi_{2\alpha}$ denotes the Pareto measure with density $\tilde\pi_{2\alpha}(dy) = \alpha |y|^{-2\alpha-1}\mathbf{1}_{[1,\infty)}(|y|)\,dy$. In general, by $\tilde\nu$ we denote the symmetrization of a probability measure $\nu$. In this paper, we usually consider symmetric probability measures, assuming that $\nu \in \mathcal{P}_s$.


We study positive and negative excursions for the Kendall random walk $\{X_n : n \in \mathbb{N}_0\}$, which is defined by the following construction:

Definition 1. A stochastic process $\{X_n : n \in \mathbb{N}_0\}$ is a discrete-time Kendall random walk with parameter $\alpha > 0$ and step distribution $\nu$ if there exist
1. $(Y_k)$ i.i.d. random variables with distribution $\nu \in \mathcal{P}$,
2. $(\xi_k)$ i.i.d. random variables with the uniform distribution on $[0,1]$,
3. $(\theta_k)$ i.i.d. random variables with the symmetric Pareto distribution with density $\tilde\pi_{2\alpha}(dy) = \alpha|y|^{-2\alpha-1}\mathbf{1}_{[1,\infty)}(|y|)\,dy$,
4. the sequences $(Y_k)$, $(\xi_k)$ and $(\theta_k)$ independent,
such that
$$X_0 = 1, \quad X_1 = Y_1, \quad X_{n+1} = M_{n+1}\, r_{n+1}\left[ I(\xi_n > \varrho_{n+1}) + \theta_{n+1}\, I(\xi_n < \varrho_{n+1}) \right],$$
where $\theta_{n+1}$ and $M_{n+1}$ are independent,
$$M_{n+1} = \max\{|X_n|, |Y_{n+1}|\}, \qquad m_{n+1} = \min\{|X_n|, |Y_{n+1}|\}, \qquad \varrho_{n+1} = \frac{m_{n+1}^\alpha}{M_{n+1}^\alpha}$$
and
$$r_{n+1} = \left\{\operatorname{sgn}(u) : \max\{|X_n|, |Y_{n+1}|\} = |u|\right\}.$$

The Kendall random walk is an extremal Markov chain with $X_0 \equiv 0$ and transition probabilities
$$P_n(x, A) = \mathbf{P}\left(X_{n+k} \in A \mid X_k = x\right) = \left(\delta_x \triangle_\alpha \nu^{\triangle_\alpha n}\right)(A), \quad n, k \in \mathbb{N},$$
where the measure $\nu \in \mathcal{P}_s$ is called the step distribution. The construction and some basic properties of this particular process are described in [4, 11, 13, 14].
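The recursive construction above is straightforward to simulate. The sketch below is our illustration, not part of the paper: the step distribution $\nu$ is chosen (arbitrarily) to be standard normal, and the walk is started from $0$ as in the transition-probability description. The final assertion checks the extremal property $|X_{n+1}| \geq \max\{|X_n|, |Y_{n+1}|\}$, which is immediate from the construction because $|\theta_{n+1}| \geq 1$.

```python
import math
import random

def kendall_step(x, y, alpha, rng):
    """One step X_n -> X_{n+1} of the Kendall random walk (Definition 1)."""
    M = max(abs(x), abs(y))                     # M_{n+1}
    m = min(abs(x), abs(y))                     # m_{n+1}
    rho = (m / M) ** alpha if M > 0 else 0.0    # rho_{n+1} = m^alpha / M^alpha
    r = math.copysign(1.0, x if abs(x) >= abs(y) else y)  # sign of the maximizer
    if rng.random() < rho:
        # theta ~ symmetric Pareto pi_{2 alpha}: |theta| has CDF 1 - u^(-2 alpha), u >= 1
        theta = rng.random() ** (-1.0 / (2 * alpha)) * rng.choice([-1.0, 1.0])
        return M * r * theta
    return M * r

def kendall_walk(n, alpha, rng):
    """Trajectory X_0, ..., X_n; step distribution nu = N(0,1) is our arbitrary choice."""
    xs = [0.0]
    for _ in range(n):
        xs.append(kendall_step(xs[-1], rng.gauss(0.0, 1.0), alpha, rng))
    return xs

rng = random.Random(2019)
xs = kendall_walk(50, alpha=0.5, rng=rng)
# extremal behaviour: |X_{n+1}| >= max(|X_n|, |Y_{n+1}|) >= |X_n|
assert all(abs(a) <= abs(b) for a, b in zip(xs, xs[1:]))
```

The monotonicity of $|X_n|$ holds for every realization, so the assertion is deterministic despite the randomness of the trajectory.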

2. Williamson transform and Kendall convolution

The stochastic process considered here was constructed using the Kendall convolution, defined in the following way:

Definition 2. The commutative and associative binary operation $\triangle_\alpha \colon \mathcal{P}_s^2 \to \mathcal{P}_s$ defined for discrete measures by
$$\tilde\delta_x \triangle_\alpha \tilde\delta_y := T_M\left( \varrho^\alpha \tilde\pi_{2\alpha} + \left(1 - \varrho^\alpha\right)\tilde\delta_1 \right),$$
where $M = \max\{|x|,|y|\}$, $m = \min\{|x|,|y|\}$, $\varrho = m/M$, we call the Kendall convolution. The extension of $\triangle_\alpha$ to the whole of $\mathcal{P}_s$ is given by
$$\left(\nu_1 \triangle_\alpha \nu_2\right)(A) = \int_{\mathbb{R}^2} \left(\tilde\delta_x \triangle_\alpha \tilde\delta_y\right)(A)\, \nu_1(dx)\, \nu_2(dy).$$

Notice that the operation $\triangle_\alpha$ is a generalized convolution in the sense introduced by Urbanik (see [12, 29, 30]), having the following properties:
• $\nu \triangle_\alpha \delta_0 = \nu$ for each $\nu \in \mathcal{P}_s$;
• $(p\nu_1 + (1-p)\nu_2) \triangle_\alpha \nu = p\left(\nu_1 \triangle_\alpha \nu\right) + (1-p)\left(\nu_2 \triangle_\alpha \nu\right)$ for each $p \in [0,1]$ and each $\nu, \nu_1, \nu_2 \in \mathcal{P}_s$;
• if $\lambda_n \to \lambda$ and $\nu_n \to \nu$, then $\lambda_n \triangle_\alpha \nu_n \to \lambda \triangle_\alpha \nu$, where $\to$ denotes weak convergence;
• $T_a\left(\nu_1 \triangle_\alpha \nu_2\right) = T_a\nu_1 \triangle_\alpha T_a\nu_2$ for each $\nu_1, \nu_2 \in \mathcal{P}_s$.
The Kendall convolution is strictly connected with the Williamson transform, which plays a role similar to that of the characteristic function in the classical algebra.


Definition 3. By the Williamson transform we understand the operation $\nu \to \hat\nu$ given by
$$\hat\nu(t) = \int_{\mathbb{R}} \left(1 - |xt|^\alpha\right)_+\, \nu(dx), \quad \nu \in \mathcal{P}_s,$$
where $a_+ = a$ if $a > 0$ and $a_+ = 0$ otherwise.

For convenience we use the following notation:
$$\Psi(t) = \left(1 - |t|^\alpha\right)_+, \qquad G(t) = \hat\nu(1/t).$$

The next lemma is almost evident and well known. It provides the inverse of the Williamson transform, which is surprisingly simple.

Lemma 1. The correspondence between a measure $\nu \in \mathcal{P}_s$ and its Williamson transform is one-to-one. Moreover, denoting by $F$ the cumulative distribution function of $\nu$ with $\nu(\{0\}) = 0$, we have
$$F(t) = \begin{cases} \frac{1}{2\alpha}\left[\alpha\left(G(t) + 1\right) + tG'(t)\right] & \text{if } t > 0; \\ 1 - F(-t) & \text{if } t < 0, \end{cases}$$
except for countably many $t \in \mathbb{R}$.

For details of the proof of the above lemma see [15].
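The inversion formula of Lemma 1 is easy to verify numerically. In the sketch below (our illustration, not part of the paper), we take $\nu = \tilde\delta_1$, the symmetric two-point unit law, for which $G(t) = (1 - |t|^{-\alpha})_+$ (cf. Example 2.1), approximate $G'$ by a central difference, and recover the CDF of $\tilde\delta_1$: $F \equiv 1/2$ on $(0,1)$ and $F \equiv 1$ on $(1,\infty)$.

```python
alpha = 0.5

def G(t):
    """G(t) = Williamson transform of (delta_1 + delta_{-1})/2 evaluated at 1/t."""
    return max(0.0, 1.0 - t ** (-alpha)) if t > 0 else 0.0

def F_from_G(t, h=1e-6):
    """Inverse Williamson transform of Lemma 1:
    F(t) = (1/(2 alpha)) * (alpha (G(t) + 1) + t G'(t)), t > 0."""
    Gprime = (G(t + h) - G(t - h)) / (2 * h)   # central difference for G'
    return (alpha * (G(t) + 1.0) + t * Gprime) / (2.0 * alpha)

# CDF of the symmetric two-point unit law: 1/2 on (0,1), 1 beyond 1
assert abs(F_from_G(0.5) - 0.5) < 1e-6
assert abs(F_from_G(2.0) - 1.0) < 1e-6
```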

As we mentioned above, the Williamson transform ([31]) plays the same role for the Kendall convolution as the Fourier transform does for the classical convolution (for the proof see Proposition 2.2 in [15]), i.e.

Proposition 1. Let $\nu_1, \nu_2 \in \mathcal{P}_s$ be probability measures with Williamson transforms $\hat\nu_1, \hat\nu_2$. Then
$$\int_{\mathbb{R}} \Psi(xt)\, \left(\nu_1 \triangle_\alpha \nu_2\right)(dx) = \hat\nu_1(t)\,\hat\nu_2(t).$$

The following fact is a simple consequence of Lemma 1 and Proposition 1.

Proposition 2. Let $\nu \in \mathcal{P}$. For each natural number $n \geq 2$ the cumulative distribution function $F_n$ of the measure $\nu^{\triangle_\alpha n}$ equals
$$F_n(t) = \frac{1}{2}\left[G(t)^n + 1 + nG(t)^{n-1}H(t)\right], \quad t > 0,$$
where
$$H(t) = 2F(t) - G(t) - 1 = t^{-\alpha}\int_{-t}^{t} |x|^\alpha\, \nu(dx)$$
and $F_n(t) = 1 - F_n(-t)$ for $t < 0$, where $G(t) = \hat\nu(1/t)$.

Proof. At the beginning, it is worth noting that, by Proposition 1,
$$G_n(t) = G(t)^n.$$
Then by Lemma 1 we arrive at the following formula for $t > 0$:
$$F_n(t) = \frac{1}{2\alpha}\left[\alpha\left(G(t)^n + 1\right) + tnG(t)^{n-1}G'(t)\right],$$
and we also have
$$G'(t) = \frac{\alpha}{t}H(t),$$
which ends the proof. □


Example 2.1. Let $\nu = \tilde\delta_1$. Then
$$G(t) = \left(1 - |t|^{-\alpha}\right)_+$$
and, for $n \geq 2$,
$$dF_n(t) = \frac{\alpha n(n-1)}{2}\,|t|^{-2\alpha-1}\left(1 - |t|^{-\alpha}\right)^{n-2}\mathbf{1}_{[1,\infty)}(|t|)\,dt.$$
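As a numerical sanity check (ours, not part of the paper), this density can be integrated against the closed form of $F_n$ from Proposition 2: for $\nu = \tilde\delta_1$ we have $G(t) = 1 - t^{-\alpha}$ and $H(t) = t^{-\alpha}$ for $t > 1$, and a trapezoidal integration of $dF_n$ over $(1, T]$ reproduces $F_n(T) - F_n(1)$.

```python
alpha, n = 1.0, 3  # illustrative parameter values (our choice)

def G(t):  # G for nu = symmetric unit two-point law (Example 2.1)
    return max(0.0, 1.0 - abs(t) ** (-alpha))

def H(t):  # H(t) = 2F(t) - G(t) - 1 = t^(-alpha) for t > 1, and 0 on (0, 1]
    return abs(t) ** (-alpha) if abs(t) > 1 else 0.0

def Fn(t):  # cumulative distribution function of nu^{triangle_alpha n} (Proposition 2)
    return 0.5 * (G(t) ** n + 1.0 + n * G(t) ** (n - 1) * H(t))

def density(t):  # dF_n/dt on (1, infinity), the formula of Example 2.1
    return (alpha * n * (n - 1) / 2.0) * t ** (-2 * alpha - 1) * (1 - t ** (-alpha)) ** (n - 2)

# trapezoidal integration of the density from 1 to T reproduces F_n(T) - F_n(1)
T, N = 50.0, 100000
h = (T - 1.0) / N
integral = sum(density(1.0 + i * h) for i in range(1, N)) * h \
    + 0.5 * h * (density(1.0) + density(T))
assert abs(integral - (Fn(T) - Fn(1.0))) < 1e-4
```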

Example 2.2. For a Kendall random walk with unit step distribution $X_1 \sim \nu_\alpha \in \mathcal{P}$ such that $E|X_1|^\alpha = m_\alpha < \infty$, the stable distribution has the density
$$\nu_\alpha(dx) = \frac{\alpha m_\alpha^2}{2}\,|x|^{-2\alpha-1}\exp\left\{-m_\alpha |x|^{-\alpha}\right\}dx.$$
Then
$$F_1(t) = \begin{cases} \frac{1}{2} + \frac{1}{2}\left(1 + m_\alpha t^{-\alpha}\right)\exp\left\{-m_\alpha t^{-\alpha}\right\} & \text{if } t > 0; \\ 1 - F_1(-t) & \text{if } t < 0 \end{cases}$$
and
$$G(t) = \exp\left\{-m_\alpha |t|^{-\alpha}\right\},$$
$$F_n(t) = \begin{cases} \frac{1}{2} + \frac{1}{2}\left(1 + nm_\alpha t^{-\alpha}\right)\exp\left\{-nm_\alpha t^{-\alpha}\right\} & \text{if } t > 0; \\ 1 - F_n(-t) & \text{if } t < 0. \end{cases}$$
It is evident that we have
$$F_n(t) = F_1\left(n^{-1/\alpha}t\right).$$
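The scaling identity $F_n(t) = F_1(n^{-1/\alpha}t)$, which expresses the stability of $\nu_\alpha$ under the Kendall convolution, can be confirmed numerically (our sketch; the parameter values are arbitrary):

```python
import math

alpha, m_alpha = 0.5, 2.0  # illustrative parameter values (our choice)

def F1(t):
    """F_1 of Example 2.2 for t > 0."""
    u = m_alpha * t ** (-alpha)
    return 0.5 + 0.5 * (1.0 + u) * math.exp(-u)

def Fn(t, n):
    """F_n of Example 2.2 for t > 0."""
    u = n * m_alpha * t ** (-alpha)
    return 0.5 + 0.5 * (1.0 + u) * math.exp(-u)

# the Kendall-stable scaling F_n(t) = F_1(n^{-1/alpha} t)
for n in (2, 5, 10):
    for t in (0.5, 1.0, 3.0, 10.0):
        assert abs(Fn(t, n) - F1(n ** (-1.0 / alpha) * t)) < 1e-12
```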

Example 2.3. Let $\nu = \tilde\pi_{2\alpha}$ for $\alpha \in (0,1]$. Since $\tilde\delta_1 \triangle_\alpha \tilde\delta_1 = \tilde\pi_{2\alpha}$, using Example 2.1 we arrive at
$$dF_n(t) = \frac{\alpha n(2n-1)}{|t|^{2\alpha+1}}\left(1 - |t|^{-\alpha}\right)^{2(n-1)}\mathbf{1}_{[1,\infty)}(|t|)\,dt.$$

The explicit formula for the transition probabilities of Kendall random walks is given by:

Lemma 2. For all $n \in \mathbb{N}$ and $x \geq 0$,
$$\left(\delta_x \triangle_\alpha \nu^{\triangle_\alpha n}\right)(0,t) = P_n(x,[0,t)) = \frac{1}{2}\left[\Psi\!\left(\frac{x}{t}\right)H_n(t) + G_n(t)\right]\mathbf{1}\{|x| < t\},$$
where
$$H_n(t) = 2F_n(t) - 1 - G_n(t).$$

Proof. By Lemma 3.1 in [15] we have
$$\left(\delta_x \triangle_\alpha \delta_y\right)(0,t) = \frac{1}{2}\left(1 - \frac{|xy|^\alpha}{t^{2\alpha}}\right)\mathbf{1}\{|x| < t, |y| < t\} = \frac{1}{2}\left[\Psi\!\left(\frac{x}{t}\right) + \Psi\!\left(\frac{y}{t}\right) - \Psi\!\left(\frac{x}{t}\right)\Psi\!\left(\frac{y}{t}\right)\right]\mathbf{1}\{|x| < t, |y| < t\},$$
$$\left(\delta_x \triangle_\alpha \nu\right)(0,t) = P_1(x,[0,t)) = \frac{1}{2}\left[\Psi\!\left(\frac{x}{t}\right)\left(2F(t) - 1 - G(t)\right) + G(t)\right]\mathbf{1}\{|x| < t\}.$$
The transition probability can now be computed by replacing $\nu$ with $\nu^{\triangle_\alpha n}$ in the last formula. □

In the following section, we will also need a formula for the integral
$$\int_{-\infty}^{a} \Psi\!\left(\frac{x}{t}\right)\left(\delta_y \triangle_\alpha \nu\right)(dx).$$
In order to find it, we first need the following truncated moment of order $\alpha$.

Lemma 3. For all $a > 0$,
$$\int_0^a x^\alpha\, \left(\delta_y \triangle_\alpha \nu\right)(dx) = \frac{1}{2}\left[H(a)\left(a^\alpha - |y|^\alpha\right) + |y|^\alpha G(a)\right]\mathbf{1}\{|y| < a\} = \frac{a^\alpha}{2}\left[H(a)\Psi\!\left(\frac{y}{a}\right) + \left(1 - \Psi\!\left(\frac{y}{a}\right)\right)G(a)\right]\mathbf{1}\{|y| < a\}.$$

Proof. By Lemma 2 we have
$$\left(\delta_y \triangle_\alpha \delta_z\right)(0,x) = \frac{1}{2}\left(1 - \frac{|yz|^\alpha}{x^{2\alpha}}\right)\mathbf{1}\{|y| < x, |z| < x\}.$$
Integrating by parts, we obtain
$$\int_0^a x^\alpha\,\left(\delta_y \triangle_\alpha \delta_z\right)(dx) = a^\alpha\left(\delta_y \triangle_\alpha \delta_z\right)(0,a) - \int_0^a \alpha x^{\alpha-1}\left(\delta_y \triangle_\alpha \delta_z\right)(0,x)\,dx = \left[\frac{1}{2}|y|^\alpha - \frac{|yz|^\alpha}{a^\alpha} + \frac{1}{2}|z|^\alpha\right]\mathbf{1}\{|y| < a, |z| < a\},$$
from which it follows that
$$\int_0^a x^\alpha\,\left(\delta_y \triangle_\alpha \nu\right)(dx) = \int_{-\infty}^{\infty}\int_0^a x^\alpha\,\left(\delta_y \triangle_\alpha \delta_z\right)(dx)\,\nu(dz) = \int_{-a}^{a}\left[\frac{1}{2}|y|^\alpha - \frac{|yz|^\alpha}{a^\alpha} + \frac{1}{2}|z|^\alpha\right]\nu(dz)\,\mathbf{1}\{|y| < a\}$$
$$= \frac{1}{2}\left[H(a)\left(a^\alpha - |y|^\alpha\right) + |y|^\alpha G(a)\right]\mathbf{1}\{|y| < a\} = \frac{a^\alpha}{2}\left[H(a)\Psi\!\left(\frac{y}{a}\right) + \left(1 - \Psi\!\left(\frac{y}{a}\right)\right)G(a)\right]\mathbf{1}\{|y| < a\}. \qquad \square$$

Now we can find the formula for $\int_{-\infty}^a \Psi\left(\frac{x}{t}\right)\left(\delta_y \triangle_\alpha \nu\right)(dx)$.

Lemma 4. For all $t \geq a \geq 0$ the following equality holds:
$$\int_{-\infty}^{a} \Psi\!\left(\frac{x}{t}\right)\left(\delta_y \triangle_\alpha \nu\right)(dx) = \frac{1}{2}\left[\Psi\!\left(\frac{y}{a}\right)M(a,t) + G(a)\Psi\!\left(\frac{a}{t}\right)\right]\mathbf{1}\{|y| < a\} + \frac{1}{2}\Psi\!\left(\frac{y}{t}\right)G(t),$$
where
$$M(a,t) = H(a)\Psi\!\left(\frac{a}{t}\right) + \left(1 - \Psi\!\left(\frac{a}{t}\right)\right)G(a).$$


Proof. By Lemmas 2 and 3 we have
$$\int_{-\infty}^{a} \Psi\!\left(\frac{x}{t}\right)\left(\delta_y \triangle_\alpha \nu\right)(dx) = \left(\delta_y \triangle_\alpha \nu\right)(-t,a) - t^{-\alpha}\int_{-t}^{a} |x|^\alpha\,\left(\delta_y \triangle_\alpha \nu\right)(dx)$$
$$= \left(\delta_y \triangle_\alpha \nu\right)(0,a) + \left(\delta_y \triangle_\alpha \nu\right)(0,t) - t^{-\alpha}\int_0^a x^\alpha\,\left(\delta_y \triangle_\alpha \nu\right)(dx) - t^{-\alpha}\int_0^t x^\alpha\,\left(\delta_y \triangle_\alpha \nu\right)(dx)$$
$$= \frac{1}{2}\left[\Psi\!\left(\frac{y}{a}\right)H(a) + G(a)\right]\mathbf{1}\{|y| < a\} + \frac{1}{2}\left[\Psi\!\left(\frac{y}{t}\right)H(t) + G(t)\right]\mathbf{1}\{|y| < t\}$$
$$- \frac{a^\alpha}{2t^\alpha}\left[H(a)\Psi\!\left(\frac{y}{a}\right) + \left(1 - \Psi\!\left(\frac{y}{a}\right)\right)G(a)\right]\mathbf{1}\{|y| < a\} - \frac{1}{2}\left[H(t)\Psi\!\left(\frac{y}{t}\right) + \left(1 - \Psi\!\left(\frac{y}{t}\right)\right)G(t)\right]\mathbf{1}\{|y| < t\},$$
from which we obtain the desired result by regrouping the terms and noticing that, since $t \geq a$, $\left(\frac{a}{t}\right)^\alpha = 1 - \Psi\!\left(\frac{a}{t}\right)$. □

Let us notice that in particular, for $t = a$,
$$\int_{-\infty}^{a} \Psi\!\left(\frac{x}{a}\right)\left(\delta_y \triangle_\alpha \nu\right)(dx) = G(a)\Psi\!\left(\frac{y}{a}\right),$$
because it is the Williamson transform of the Kendall convolution of the two measures $\delta_y$ and $\nu$.
Based on the previous results, we will find closed-form formulas for two more important integrals. Let us define
$$\mathrm{I}(n,a,t) := \int_{-\infty}^{a}\!\!\cdots\int_{-\infty}^{a} \Psi\!\left(\frac{x_n}{t}\right)\left(\delta_{x_{n-1}} \triangle_\alpha \nu\right)(dx_n)\cdots\nu(dx_1),$$
$$\mathrm{II}(n,a,t) := \int_{-\infty}^{a}\!\!\cdots\int_{-\infty}^{a} \mathbf{1}\{|x_n| < t\}\,\left(\delta_{x_{n-1}} \triangle_\alpha \nu\right)(dx_n)\cdots\nu(dx_1).$$
In both of these expressions we integrate $n$ times.
First, we will find $\mathrm{I}(n,a,a)$ and $\mathrm{II}(n,a,a)$, which is much simpler than the general case and will be used in the following calculations.

First, we will ﬁnd I(n, a, a)and II (n, a, a), which is much simpler than the

general case and will be used in following calculations.

Lemma 5. For all n≥1

I(n, a, a) = G(a)n.

Proof. First, let us notice that

I(1, a, a) = Za

−∞

Ψxn

aν(dx1) = G(a)

by the deﬁnition of Williamson transform. By Lemma 1 we have

I(n, a, a) = Za

−∞

...Za

−∞

Ψxn

a(δxn−1△αν)(dxn). . . ν(dx1)

=Za

−∞

...Za

−∞

G(a)Ψ xn−1

a(δxn−2△αν)(dxn−1). . . ν(dx1)

=G(a)I(n−1, a, a).

8 B.H. JASIULIS-GOŁDYN1AND M. STANIAK2

It follows that I(n, a, t)is geometric sequence with common ratio equal to

G(a).

Lemma 6. For all $n \geq 1$,
$$\mathrm{II}(n,a,a) = G(a)^{n-1}\left[nH(a) + G(a)\right].$$

Proof. First, let us notice that
$$\mathrm{II}(1,a,a) = \int_{-\infty}^{a} \mathbf{1}\{|x_1| < a\}\,\nu(dx_1) = 2F(a) - 1 = H(a) + G(a).$$
By Lemma 2 the sequence $\mathrm{II}(n,a,a)$ solves the following recurrence equation:
$$\mathrm{II}(n,a,a) = \int_{-\infty}^{a}\!\!\cdots\int_{-\infty}^{a} \mathbf{1}\{|x_n| < a\}\,\left(\delta_{x_{n-1}} \triangle_\alpha \nu\right)(dx_n)\cdots\nu(dx_1)$$
$$= H(a)\int_{-\infty}^{a}\!\!\cdots\int_{-\infty}^{a} \Psi\!\left(\frac{x_{n-1}}{a}\right)\left(\delta_{x_{n-2}} \triangle_\alpha \nu\right)(dx_{n-1})\cdots\nu(dx_1) + G(a)\int_{-\infty}^{a}\!\!\cdots\int_{-\infty}^{a} \mathbf{1}\{|x_{n-1}| < a\}\,\left(\delta_{x_{n-2}} \triangle_\alpha \nu\right)(dx_{n-1})\cdots\nu(dx_1)$$
$$= H(a)\,\mathrm{I}(n-1,a,a) + G(a)\,\mathrm{II}(n-1,a,a) = H(a)G(a)^{n-1} + G(a)\,\mathrm{II}(n-1,a,a).$$
On the other hand, we have
$$H(a)G(a)^{n-1} + G(a)\,\mathrm{II}(n-1,a,a) = H(a)G(a)^{n-1} + G(a)\cdot G(a)^{n-2}\left[(n-1)H(a) + G(a)\right] = G(a)^{n-1}\left[nH(a) + G(a)\right] = \mathrm{II}(n,a,a),$$
which ends the proof. □

Using these results we can find an expression for $\mathrm{I}(n,a,t)$.

Theorem 1. The integral $\mathrm{I}(n,a,t)$ is given by
$$\mathrm{I}(n,a,t) = C_1\left(\frac{G(t)}{2}\right)^{n-1} + G(a)^n\left[C_2 n + C_3\right]$$
for $G(t) \neq 2G(a)$ and $n \geq 2$, and by
$$\mathrm{I}(1,a,t) = \frac{G(t)}{2} + \frac{1}{2}H(a)\Psi\!\left(\frac{a}{t}\right) + \frac{G(a)}{2}$$
for $n = 1$, where
$$C_1(a,t) = \mathrm{I}(1,a,t) - \frac{G(a)}{2G(a) - G(t)}\left[G(a) + H(a)\Psi\!\left(\frac{a}{t}\right)\left(1 - \frac{G(t)}{2G(a) - G(t)}\right)\right],$$
$$C_2(a,t) = \frac{H(a)\Psi\!\left(\frac{a}{t}\right)}{2G(a) - G(t)},$$
$$C_3(a,t) = \frac{H(a)\Psi\!\left(\frac{a}{t}\right) + G(a)}{2G(a) - G(t)} - \frac{2H(a)G(a)\Psi\!\left(\frac{a}{t}\right)}{\left(2G(a) - G(t)\right)^2}.$$
For simplicity of notation, we will write $C_i$ for $C_i(a,t)$, $i = 1, 2, 3$.

Proof. First, by Lemma 4 we find that
$$\mathrm{I}(1,a,t) = \int_{-\infty}^{a} \Psi\!\left(\frac{x_1}{t}\right)\nu(dx_1) = \frac{G(t)}{2} + \frac{1}{2}H(a)\Psi\!\left(\frac{a}{t}\right) + \frac{G(a)}{2}.$$
By the same lemma we have
$$\mathrm{I}(n,a,t) = \frac{G(t)}{2}\,\mathrm{I}(n-1,a,t) + \frac{M(a,t)}{2}\,\mathrm{I}(n-1,a,a) + \frac{G(a)}{2}\Psi\!\left(\frac{a}{t}\right)\mathrm{II}(n-1,a,a).$$
Iterating this equation, we can see that
$$\mathrm{I}(n,a,t) = \left(\frac{G(t)}{2}\right)^{n-1}\mathrm{I}(1,a,t) + \frac{M(a,t)}{2}\sum_{k=1}^{n-1}\mathrm{I}(k,a,a)\left(\frac{G(t)}{2}\right)^{n-1-k} + \frac{G(a)\Psi\!\left(\frac{a}{t}\right)}{2}\sum_{k=1}^{n-1}\mathrm{II}(k,a,a)\left(\frac{G(t)}{2}\right)^{n-1-k}.$$
To check this equality, we find that
$$\frac{G(t)}{2}\,\mathrm{I}(n-1,a,t) + \frac{M(a,t)}{2}\,\mathrm{I}(n-1,a,a) + \frac{G(a)\Psi\!\left(\frac{a}{t}\right)}{2}\,\mathrm{II}(n-1,a,a)$$
$$= \left(\frac{G(t)}{2}\right)^{n-1}\mathrm{I}(1,a,t) + \frac{M(a,t)}{2}\sum_{k=1}^{n-2}\mathrm{I}(k,a,a)\left(\frac{G(t)}{2}\right)^{n-1-k} + \frac{G(a)\Psi\!\left(\frac{a}{t}\right)}{2}\sum_{k=1}^{n-2}\mathrm{II}(k,a,a)\left(\frac{G(t)}{2}\right)^{n-1-k} + \frac{M(a,t)}{2}\,\mathrm{I}(n-1,a,a) + \frac{G(a)}{2}\Psi\!\left(\frac{a}{t}\right)\mathrm{II}(n-1,a,a)$$
$$= \left(\frac{G(t)}{2}\right)^{n-1}\mathrm{I}(1,a,t) + \frac{M(a,t)}{2}\sum_{k=1}^{n-1}\mathrm{I}(k,a,a)\left(\frac{G(t)}{2}\right)^{n-1-k} + \frac{G(a)\Psi\!\left(\frac{a}{t}\right)}{2}\sum_{k=1}^{n-1}\mathrm{II}(k,a,a)\left(\frac{G(t)}{2}\right)^{n-1-k} = \mathrm{I}(n,a,t).$$
It is enough to find the two sums used in the above formula. Using simple algebra we find that
$$\sum_{k=1}^{n-1}\mathrm{I}(k,a,a)\left(\frac{G(t)}{2}\right)^{n-1-k} = \sum_{k=1}^{n-1}G(a)^k\left(\frac{G(t)}{2}\right)^{n-1-k} = \frac{2G(a)}{2G(a) - G(t)}\left[G(a)^{n-1} - \left(\frac{G(t)}{2}\right)^{n-1}\right]$$
and
$$\sum_{k=1}^{n-1}\mathrm{II}(k,a,a)\left(\frac{G(t)}{2}\right)^{n-1-k} = \sum_{k=1}^{n-1}G(a)^{k-1}\left[kH(a) + G(a)\right]\left(\frac{G(t)}{2}\right)^{n-1-k}$$
$$= 2G(a)^{n-1}\left[\frac{nH(a) + G(a)}{2G(a) - G(t)} - \frac{2G(a)H(a)}{\left(2G(a) - G(t)\right)^2}\right] - 2\left(\frac{G(t)}{2}\right)^{n-1}\left[\frac{G(a)}{2G(a) - G(t)} - \frac{G(t)H(a)}{\left(2G(a) - G(t)\right)^2}\right].$$
Combining these results ends the proof. □

Now we can find the formula for $\mathrm{II}(n,a,t)$.

Theorem 2. The integral $\mathrm{II}(n,a,t)$ is given by
$$\mathrm{II}(n,a,t) = G(a)^n\left[\frac{(n+1)H(a) + G(a)}{2G(a) - G(t)} - \frac{2G(a)H(a)}{\left(2G(a) - G(t)\right)^2} + \frac{H(t)}{2G(a) - G(t)}\left(nC_2 + C_3 - \frac{2C_2G(a)}{2G(a) - G(t)}\right)\right]$$
$$+ \left(\frac{G(t)}{2}\right)^{n-1}\left[\mathrm{II}(1,a,t) - \frac{G(a)\left(H(a) + G(a)\right)}{2G(a) - G(t)} + \frac{G(a)G(t)H(a)}{\left(2G(a) - G(t)\right)^2} + \frac{(n-1)C_1H(t)}{G(t)} - \frac{G(a)H(t)}{2G(a) - G(t)}\left(C_3 - \frac{C_2G(t)}{2G(a) - G(t)}\right)\right]$$
for $n \geq 2$ and $G(t) \neq 2G(a)$.

Proof. First, let us notice that $G(t) > 0$, since $t > 0$. By Lemma 2 and the definition of $H(t)$ we have
$$\mathrm{II}(1,a,t) = \frac{H(a) + H(t) + G(a) + G(t)}{2}.$$
By the same lemma we find that
$$\mathrm{II}(n,a,t) = \frac{H(a)}{2}\,\mathrm{I}(n-1,a,a) + \frac{G(a)}{2}\,\mathrm{II}(n-1,a,a) + \frac{H(t)}{2}\,\mathrm{I}(n-1,a,t) + \frac{G(t)}{2}\,\mathrm{II}(n-1,a,t).$$
By iterating the above formula for $\mathrm{II}(n,a,t)$ we can see that
$$\mathrm{II}(n,a,t) = \mathrm{II}(1,a,t)\left(\frac{G(t)}{2}\right)^{n-1} + \frac{H(a)}{2}\sum_{k=1}^{n-1}\mathrm{I}(k,a,a)\left(\frac{G(t)}{2}\right)^{n-1-k} + \frac{G(a)}{2}\sum_{k=1}^{n-1}\mathrm{II}(k,a,a)\left(\frac{G(t)}{2}\right)^{n-1-k} + \frac{H(t)}{2}\sum_{k=1}^{n-1}\mathrm{I}(k,a,t)\left(\frac{G(t)}{2}\right)^{n-1-k}.$$
An argument similar to the one provided for $\mathrm{I}(n,a,t)$ convinces us that this expression solves the recurrence equation that defines $\mathrm{II}(n,a,t)$. It is sufficient to find a closed form of the sum $\sum_{k=1}^{n-1}\mathrm{I}(k,a,t)\left(\frac{G(t)}{2}\right)^{n-1-k}$. We have
$$\sum_{k=1}^{n-1}\mathrm{I}(k,a,t)\left(\frac{G(t)}{2}\right)^{n-1-k} = \frac{2G(a)^n}{2G(a) - G(t)}\left(nC_2 + C_3 - \frac{2C_2G(a)}{2G(a) - G(t)}\right) + \left(\frac{G(t)}{2}\right)^{n-2}\left[(n-1)C_1 - \frac{G(a)G(t)}{2G(a) - G(t)}\left(C_3 - \frac{C_2G(t)}{2G(a) - G(t)}\right)\right].$$
A simple use of algebra ends the proof. □

For both $\mathrm{I}(n,a,t)$ and $\mathrm{II}(n,a,t)$ we needed to assume that $G(t) \neq 2G(a)$. In the case $G(t) = 2G(a)$ it is easy to check that we have
$$\mathrm{I}(n,a,t) = G(a)^{n-1}\left[\mathrm{I}(1,a,t) + \frac{n-1}{2}\left(B + \frac{nH(a)\Psi\!\left(\frac{a}{t}\right)}{2}\right)\right],$$
$$\mathrm{II}(n,a,t) = G(a)^{n-1}\left[\mathrm{II}(1,a,t) + \frac{H(a)(n-1)}{2}\left(1 + \frac{n}{2}\right) + \frac{G(a)(n-1)}{2} + \frac{H(t)}{2G(a)}\left((n-1)\,\mathrm{I}(1,a,t) + \frac{B(n-1)(n-2)}{4} + H(a)\Psi\!\left(\frac{a}{t}\right)\frac{n(n-1)(n-2)}{12}\right)\right],$$
$$B = H(a)\Psi\!\left(\frac{a}{t}\right) + G(a).$$

3. Fluctuations of Kendall random walks

For any positive $a$ we introduce the first passage times above and below the level $a$ for the random walk $\{X_n : n \in \mathbb{N}_0\}$:
$$\tau_a^+ = \min\{n \geq 1 : X_n > a\}, \qquad \tau_a^- = \min\{n \geq 1 : X_n < a\},$$
and the weak ascending and descending ladder variables:
$$\tilde\tau_a^+ = \min\{n \geq 1 : X_n \geq a\}, \qquad \tilde\tau_a^- = \min\{n \geq 1 : X_n \leq a\},$$
with the convention $\min\emptyset = \infty$. In [15] the authors found the joint distribution of the random vector $\left(\tau_0^+, X_{\tau_0^+}\right)$. Our main goal here is to extend this result to any $a \geq 0$.

Lemma 7. The random variable $\tau_0^+$ (and, by the symmetry of the Kendall random walk, also the variable $\tau_0^-$) has the geometric distribution
$$\mathbf{P}\left(\tau_0^+ = k\right) = \frac{1}{2^k}, \quad k = 1, 2, \ldots$$

We will now investigate the distribution of the random variable $\tau_a^+$. First, we notice that
$$\mathbf{P}\left(\tau_a^+ = n\right) = \mathbf{P}(X_0 \leq a, X_1 \leq a, \ldots, X_{n-1} \leq a, X_n > a) = \int_{-\infty}^{a}\!\!\cdots\int_{-\infty}^{a}\int_{a}^{\infty} P_1(x_{n-1}, dx_n)\,P_1(x_{n-2}, dx_{n-1})\cdots P_1(0, dx_1).$$
At the beginning, we will compute the value of the innermost integral. The result is given in the following lemma.

Lemma 8.
$$\int_{a}^{\infty} P_1(x_{n-1}, dx_n) = \frac{1}{2} - \frac{1}{2}\left[\Psi\!\left(\frac{x_{n-1}}{a}\right)H(a) + G(a)\right]\mathbf{1}\{|x_{n-1}| < a\},$$
where $H(a) = 2F(a) - 1 - G(a)$.

Proof.
$$\int_{a}^{\infty} P_1(x_{n-1}, dx_n) = \int_{a}^{\infty}\left(\delta_{x_{n-1}} \triangle_\alpha \nu\right)(dx_n) = \left(\delta_{x_{n-1}} \triangle_\alpha \nu\right)(a,\infty) = \frac{1}{2} - \frac{1}{2}\left[\Psi\!\left(\frac{x_{n-1}}{a}\right)\left(2F(a) - 1 - G(a)\right) + G(a)\right]\mathbf{1}\{|x_{n-1}| < a\}$$
by the symmetry of the measure $\delta_{x_{n-1}} \triangle_\alpha \nu$ and by Lemma 2. □

Iterating Lemma 8 $n$ times, we arrive at the tail distribution of $\tilde\tau_a^-$:

Lemma 9.
$$\mathbf{P}\left(\tilde\tau_a^- > n + 1\right) = \frac{1}{2^n}\left(1 - F(a)\right)$$
and
$$\mathbf{P}\left(\tilde\tau_a^- = 1\right) = F(a).$$

Proof. Indeed,
$$\mathbf{P}\left(\tilde\tau_a^- > n + 1\right) = \mathbf{P}(X_0 > a, X_1 > a, \ldots, X_n > a, X_{n+1} > a) = \int_{a}^{\infty}\!\!\cdots\int_{a}^{\infty}\int_{a}^{\infty} P_1(x_n, dx_{n+1})\,P_1(x_{n-1}, dx_n)\cdots P_1(0, dx_1).$$
Since the inner integral is given by Lemma 8 as
$$\int_{a}^{\infty} P_1(x_n, dx_{n+1}) = \left(\delta_{x_n} \triangle_\alpha \nu\right)(a,\infty) = \frac{1}{2} - \frac{1}{2}\left[\Psi\!\left(\frac{x_n}{a}\right)H(a) + G(a)\right]\mathbf{1}\{|x_n| < a\}$$
and the second term of this expression vanishes for $x_n > a$, we see that
$$\int_{a}^{\infty}\int_{a}^{\infty} P_1(x_n, dx_{n+1})\,P_1(x_{n-1}, dx_n) = \frac{1}{2}\left(\delta_{x_{n-1}} \triangle_\alpha \nu\right)(a,\infty).$$
Hence
$$\int_{a}^{\infty}\!\!\cdots\int_{a}^{\infty}\int_{a}^{\infty} P_1(x_n, dx_{n+1})\,P_1(x_{n-1}, dx_n)\cdots P_1(0, dx_1) = \frac{1}{2^n}\left(\delta_0 \triangle_\alpha \nu\right)(a,\infty),$$
which ends the proof of the first formula because $\delta_0 \triangle_\alpha \nu = \nu$.
Moreover,
$$\mathbf{P}\left(\tilde\tau_a^- = 1\right) = \mathbf{P}(X_1 \leq a) = F(a),$$
i.e. the distribution of $\tilde\tau_a^-$ has an atom at $1$ with mass $F(a)$, and
$$\mathbf{P}\left(\tilde\tau_a^- = k\right) = \frac{1}{2^{k-1}}\left(1 - F(a)\right)$$
for $k \geq 2$. This means exactly that $\tilde\tau_a^-$ has a geometric distribution on $\{2, 3, \ldots\}$, with mass
$$\mathbf{P}\left(\tilde\tau_a^- > 1\right) = 1 - F(a). \qquad \square$$

Now it is evident that the distribution of $\tilde\tau_a^-$ depends only on the cumulative distribution function of the unit step and on the distribution of $\tau_0^+$:

Corollary 1.
$$\mathbf{P}\left(\tilde\tau_a^- = n\right) = \begin{cases} F(a) & \text{if } n = 1, \\ \left(1 - F(a)\right)\mathbf{P}\left(\tau_0^+ = n - 1\right) & \text{if } n \geq 2. \end{cases}$$

Let $\tau_a^+$ denote the first ladder moment (epoch) of the Kendall random walk over the level $a$, meaning that
$$\tau_a^+ = \inf\{n \geq 1 : X_n > a\},$$
where $(X_n)$ is the Kendall random walk. In this section, we prove the first important result of this paper: a formula for the probability mass function of the random variable $\tau_a^+$.

Theorem 3. For any $a \geq 0$ and $n \in \mathbb{N}$,
$$\mathbf{P}\left(\tau_a^+ = n\right) = A(a)\left(\frac{1}{2}\right)^n + B(a)\,n\left(1 - G(a)\right)^2G(a)^{n-1} + C(a)\,G(a)^{n-1}\left(1 - G(a)\right),$$
where
$$A(a) = 1 + \frac{H(a)}{\left(2G(a) - 1\right)^2} - \frac{G(a)}{2G(a) - 1},$$
$$B(a) = \frac{H(a)}{\left(2G(a) - 1\right)\left(1 - G(a)\right)},$$
$$C(a) = \frac{G(a)}{2G(a) - 1} - \frac{H(a)}{\left(2G(a) - 1\right)^2}\cdot\frac{G(a)}{1 - G(a)}.$$

It is easy to check that $A(a) + B(a) + C(a) = 1$. The distribution of $\tau_a^+$ is thus a convex combination of two geometric distributions, one with ratio $\frac{1}{2}$ and one with ratio $G(a)$, and a shifted negative binomial distribution with parameters $2$ and $G(a)$. In other words, it is a mixture of negative binomial distributions whose coefficients depend on the unit step distribution of the associated Kendall random walk, both through its CDF and through its Williamson transform.
The proof of this formula uses the Markov property of the Kendall random walk. The probability mass function of $\tau_a^+$ is expressed as an iterated integral with respect to the transition kernels. The results of consecutive integrations form a sequence, whose closed form we will find in order to calculate $\mathbf{P}\left(\tau_a^+ = n\right)$.

Proof. Since $X_n$ is a Markov process, we have
$$\mathbf{P}\left(\tau_a^+ = n\right) = \mathbf{P}(X_0 \leq a, X_1 \leq a, \ldots, X_{n-1} \leq a, X_n > a) = \int_{-\infty}^{a}\!\!\cdots\int_{-\infty}^{a}\int_{a}^{\infty} P_1(x_{n-1}, dx_n)\,P_1(x_{n-2}, dx_{n-1})\cdots P_1(0, dx_1).$$
Using Lemma 2, we can compute the innermost integral:
$$\int_{a}^{\infty} P_1(x_{n-1}, dx_n) = \frac{1}{2} - \frac{1}{2}\left[\Psi\!\left(\frac{x_{n-1}}{a}\right)H(a) + G(a)\right]\mathbf{1}\{|x_{n-1}| < a\}.$$
Now we can define a sequence $(I_j)$, where $I_j$ denotes the result of the $j$-th integration. Let
$$I_1 = \int_{a}^{\infty} P_1(x_{n-1}, dx_n)$$
and
$$I_{j+1} = \int_{-\infty}^{a} I_j\left(\delta_{x_{n-j-1}} \triangle_\alpha \nu\right)(dx_{n-j}).$$
Let us notice that $I_j$ is of the form
$$A_j + \Psi\!\left(\frac{x_{n-j}}{a}\right)H(a)B_j + C_jG(a)\,\mathbf{1}\{|x_{n-j}| < a\},$$
which is easy to verify by integrating this formula with respect to $x_{n-j}$. This way we also obtain recurrence equations for the sequences $A_j$, $B_j$ and $C_j$. We have
$$A_{j+1} = \frac{1}{2}A_j, \qquad B_{j+1} = \frac{1}{2}A_j + G(a)\left(B_j + C_j\right), \qquad C_{j+1} = \frac{1}{2}A_j + C_jG(a)$$
with the initial conditions $A_1 = \frac{1}{2}$, $B_1 = C_1 = -\frac{1}{2}$. It is easy to see that $A_j$ is a geometric sequence. After plugging the formula for $A_j$ into the equations that define $B_j$ and $C_j$ and iterating the formulas for $B_j$ and $C_j$, we arrive at the following solutions:
$$B_j = G(a)^{j-1}\,\frac{j\left(1 - G(a)\right)\left(2G(a) - 1\right) - G(a)}{\left(2G(a) - 1\right)^2} + \frac{1}{2^j}\cdot\frac{1}{\left(2G(a) - 1\right)^2},$$
$$C_j = \frac{G(a)^{j-1}\left(1 - G(a)\right) - 2^{-j}}{2G(a) - 1} = G(a)^{j-1}\,\frac{1 - G(a)}{2G(a) - 1} - \frac{1}{2^j}\cdot\frac{1}{2G(a) - 1}.$$
We will check that these sequences satisfy our recurrence equations. For the sequence $B_j$:
$$\frac{1}{2^{j+1}} + G(a)\left(B_j + C_j\right) = G(a)^j\left[\frac{j\left(1 - G(a)\right)\left(2G(a) - 1\right) - G(a)}{\left(2G(a) - 1\right)^2} + \frac{1 - G(a)}{2G(a) - 1}\right] + \frac{1}{2^{j+1}}\left[1 + \frac{2G(a)}{\left(2G(a) - 1\right)^2} - \frac{2G(a)}{2G(a) - 1}\right]$$
$$= G(a)^j\,\frac{(j+1)\left(1 - G(a)\right)\left(2G(a) - 1\right) - G(a)}{\left(2G(a) - 1\right)^2} + \frac{1}{2^{j+1}}\cdot\frac{1}{\left(2G(a) - 1\right)^2} = B_{j+1}.$$
For the sequence $C_j$ we have
$$G(a)C_j + 2^{-(j+1)} = \frac{G(a)^j\left(1 - G(a)\right) - 2^{-j}G(a)}{2G(a) - 1} + 2^{-(j+1)} = \frac{G(a)^j\left(1 - G(a)\right) - 2^{-j}G(a) + 2^{-(j+1)}\left(2G(a) - 1\right)}{2G(a) - 1} = \frac{G(a)^j\left(1 - G(a)\right) - 2^{-(j+1)}}{2G(a) - 1} = C_{j+1}.$$
Since $\mathbf{1}\{|x_0| < a\} = 1$ and substituting $x_0 = 0$ gives $\Psi\!\left(\frac{x_0}{a}\right) = 1$, we have proven the formula for the probability mass function of $\tau_a^+$, which can be seen after we group terms in the formulas for the sequences $B_j$ and $C_j$. □
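Theorem 3 can be sanity-checked numerically. The sketch below is our illustration, not part of the paper: it takes the Kendall-stable unit step of Example 2.2 (arbitrary parameter values), for which $G(a) = \exp\{-m_\alpha a^{-\alpha}\}$ and $H(a) = m_\alpha a^{-\alpha}\exp\{-m_\alpha a^{-\alpha}\}$, and verifies that $A(a) + B(a) + C(a) = 1$, that $\mathbf{P}(\tau_a^+ = 1) = 1 - F(a)$, and that the probabilities sum to one.

```python
import math

alpha, m = 1.0, 1.0   # Kendall-stable unit step of Example 2.2 (our choice)
a = 2.0               # level (our choice)
u = m * a ** (-alpha)
G = math.exp(-u)                            # G(a)
H = u * math.exp(-u)                        # H(a) = 2F(a) - 1 - G(a) = u e^{-u}
F = 0.5 + 0.5 * (1.0 + u) * math.exp(-u)    # F(a) for the Kendall-stable law

# coefficients of Theorem 3
A = 1.0 + H / (2*G - 1.0)**2 - G / (2*G - 1.0)
B = H / ((2*G - 1.0) * (1.0 - G))
C = G / (2*G - 1.0) - (H / (2*G - 1.0)**2) * (G / (1.0 - G))
assert abs(A + B + C - 1.0) < 1e-12

def p(n):
    """P(tau_a^+ = n) according to Theorem 3."""
    return A * 0.5**n + B * n * (1-G)**2 * G**(n-1) + C * G**(n-1) * (1-G)

# P(tau_a^+ = 1) = P(X_1 > a) = 1 - F(a)
assert abs(p(1) - (1.0 - F)) < 1e-12
# the probabilities are nonnegative and sum to one
assert all(p(n) >= -1e-12 for n in range(1, 200))
total = sum(p(n) for n in range(1, 4000))
assert abs(total - 1.0) < 1e-9
```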

4. Distribution of the first ladder height

In this section, we give the formula for the cumulative distribution function of the first ladder height over any level $a \geq 0$. At the beginning let us look at a special case of the desired result, the case $a = 0$, which was proved in [15].

Theorem 4. The cumulative distribution function of the joint distribution of the random variables $\tau_0^+$ and $X_{\tau_0^+}$ is given by
$$\Phi_n^0(t) = \mathbf{P}\left(X_{\tau_0^+} \leq t,\ \tau_0^+ = n\right) = \frac{1}{2^n}\,G(t)^{n-1}\left[2n\left(F(t) - \frac{1}{2}\right) - (n-1)G(t)\right].$$
Notice that
$$\Phi_n^0(t) = \mathbf{P}\left(\tau_0^+ = n\right)\mathbf{P}\left(|X_n| < t\right)$$
and
$$\mathbf{P}\left(X_{\tau_0^+} \leq t\right) = \frac{4F(t) - 2 - G(t)^2}{\left(2 - G(t)\right)^2}.$$
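The marginal distribution of $X_{\tau_0^+}$ in Theorem 4 can be cross-checked numerically (our sketch, not part of the paper) against the series $\sum_n \mathbf{P}(\tau_0^+ = n)\mathbf{P}(|X_n| < t) = \sum_n 2^{-n}\left(2F_n(t) - 1\right)$ computed from Proposition 2, again with the Kendall-stable step of Example 2.2 and arbitrary parameter values:

```python
import math

alpha, m = 1.0, 1.0  # Kendall-stable unit step (Example 2.2), our choice

def G(t):
    return math.exp(-m * t ** (-alpha))

def F(t):
    u = m * t ** (-alpha)
    return 0.5 + 0.5 * (1.0 + u) * math.exp(-u)

def ladder_height_cdf(t):
    """P(X_{tau_0^+} <= t) = (4F(t) - 2 - G(t)^2) / (2 - G(t))^2  (Theorem 4)."""
    return (4.0 * F(t) - 2.0 - G(t) ** 2) / (2.0 - G(t)) ** 2

def series(t, N=200):
    """sum_n 2^{-n} (2 F_n(t) - 1) with 2F_n - 1 = G^n + n G^{n-1} H (Proposition 2)."""
    g = G(t)
    h = 2.0 * F(t) - 1.0 - g  # H(t)
    return sum(0.5 ** n * (g ** n + n * g ** (n - 1) * h) for n in range(1, N))

for t in (0.5, 1.0, 2.0, 10.0):
    assert abs(ladder_height_cdf(t) - series(t)) < 1e-12

# it behaves as a CDF: nondecreasing, approaching 1
ts = [0.1 * k for k in range(1, 500)]
vals = [ladder_height_cdf(t) for t in ts]
assert all(x <= y + 1e-12 for x, y in zip(vals, vals[1:]))
assert vals[-1] < 1.0 and ladder_height_cdf(1e6) > 0.999
```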

Our goal in this section is to generalize this result to any level $a \geq 0$ in the following way:

Theorem 5. The cumulative distribution function of the joint distribution of the random variables $\tau_a^+$ and $X_{\tau_a^+}$ is given by
$$\Phi_n^a(t) := \mathbf{P}\left(X_{\tau_a^+} \leq t,\ \tau_a^+ = n\right)$$
$$= \left(\frac{G(t)}{2}\right)^{n-1}\left[\frac{2G(a)H(a)\left(G(t) - G(a)\right)}{\left(2G(a) - G(t)\right)^2} - \frac{G(a)^2}{2G(a) - G(t)} + \frac{(n-1)C_1H(t)}{G(t)} - \frac{G(a)H(t)}{2G(a) - G(t)}\left(C_3 - \frac{C_2G(t)}{2G(a) - G(t)}\right) + \mathrm{II}(1,a,t)\right]$$
$$+ G(a)^{n-1}\left[\frac{\left(nH(a) + G(a)\right)\left(G(t) - G(a)\right)}{2G(a) - G(t)} - \frac{G(a)G(t)H(a)}{\left(2G(a) - G(t)\right)^2} + \frac{nC_2G(a)H(t)}{2G(a) - G(t)} + \frac{G(t)H(t)}{2G(a) - G(t)}\left(\frac{C_3 - C_2}{2} - \frac{G(a)C_2}{2G(a) - G(t)}\right) + \frac{H(t)\left(C_3 - C_2\right)}{2}\right]$$
for $n \geq 2$ and $t > a$ such that $|G(t)| < 1$, with the expressions $C_1, C_2, C_3$ defined in Theorem 1. Moreover $\Phi_n^a(0) = 0$. For $n = 1$ we simply have $\mathbf{P}\left(X_{\tau_a^+} \leq t,\ \tau_a^+ = 1\right) = F(t) - F(a)$. Since $G(0) = 0$ and consequently $H(0) = 0$, it is easy to see that for $a = 0$ this expression simplifies to the expression given in Theorem 4. We also need the technical assumption that $G(t) \neq 2G(a)$.

Proof. Let us introduce the notation
$$\Phi_n^a(t) := \mathbf{P}\left(\tau_a^+ = n,\ X_{\tau_a^+} \leq t\right) = \mathbf{P}(X_1 \leq a, \ldots, X_{n-1} \leq a,\ a < X_n \leq t) = \int_{-\infty}^{a}\!\!\cdots\int_{-\infty}^{a}\int_{a}^{t}\left(\delta_{x_{n-1}} \triangle_\alpha \nu\right)(dx_n)\left(\delta_{x_{n-2}} \triangle_\alpha \nu\right)(dx_{n-1})\cdots\nu(dx_1).$$
The innermost integral can be computed using Lemma 2. Then this probability can be expressed as
$$\Phi_n^a(t) = \frac{H(t)}{2}\,\mathrm{I}(n-1,a,t) + \frac{G(t)}{2}\,\mathrm{II}(n-1,a,t) - \frac{H(a)}{2}\,\mathrm{I}(n-1,a,a) - \frac{G(a)}{2}\,\mathrm{II}(n-1,a,a).$$
Substituting the expressions obtained in Theorems 1 and 2 for the terms $\mathrm{I}(n-1,a,t)$, $\mathrm{II}(n-1,a,t)$, $\mathrm{I}(n-1,a,a)$ and $\mathrm{II}(n-1,a,a)$ ends the proof. □

We can now find the marginal distribution of the random variable $X_{\tau_a^+}$.

Corollary 2. The cumulative distribution function of the random variable $X_{\tau_a^+}$ is given by the following formula:
$$\mathbf{P}\left(X_{\tau_a^+} \leq t\right) = F(t) - F(a)$$
$$+ \frac{G(t)}{2 - G(t)}\left[\frac{H(t)\left(4 - G(t)\right)C_1}{G(t)\left(2 - G(t)\right)} + \frac{2G(a)H(a)\left(G(t) - G(a)\right)}{\left(2G(a) - G(t)\right)^2} - \frac{G(a)^2}{2G(a) - G(t)} - \frac{C_1H(t)}{G(t)} - \frac{G(a)H(t)}{2G(a) - G(t)}\left(C_3 - \frac{C_2G(t)}{2G(a) - G(t)}\right) + \mathrm{II}(1,a,t)\right]$$
$$+ \frac{G(a)}{1 - G(a)}\left[\frac{\left(2 - G(a)\right)\left(H(a)\left(G(t) - G(a)\right) + G(a)H(t)C_2\right)}{\left(1 - G(a)\right)\left(2G(a) - G(t)\right)} + \frac{G(a)\left(G(t) - G(a)\right)}{2G(a) - G(t)} - \frac{G(a)G(t)H(a)}{\left(2G(a) - G(t)\right)^2} + \frac{G(t)H(t)}{2G(a) - G(t)}\left(\frac{C_3 - C_2}{2} - \frac{G(a)C_2}{2G(a) - G(t)}\right) + \frac{H(t)\left(C_3 - C_2\right)}{2}\right].$$
Again, it is easy to check that for $a = 0$ this expression simplifies to the expression given in Theorem 4.

4.1. Maxima and minima of Kendall random walks. In this section we will prove an analog of the Pollaczek-Khintchine formula. We will start with a lemma that describes the distribution of the maximum of the first $n$ steps of a Kendall random walk.

Lemma 10. Let $\{X_n : n \in \mathbb{N}_0\}$ denote the Kendall random walk. Then the distribution of $\max_{0 \leq i \leq n} X_i$ is given by
$$\mathbf{P}\left(\max_{0 \leq i \leq n} X_i \leq t\right) = A(t)\,\mathbf{P}\left(\tau_0^+ = n\right) + B(t)\,\frac{G(t)}{1 - G(t)}\left(1 - G(t)\right)^2 nG(t)^{n-1} + \left(B(t) + C(t)\right)\frac{G(t)}{1 - G(t)}\,G(t)^{n-1}\left(1 - G(t)\right)$$
for the functions $A$, $B$ and $C$ defined in Theorem 3 and $t > 0$.

Proof. It is sufficient to see that
$$\mathbf{P}\left(\max_{0 \leq i \leq n} X_i \leq t\right) = \mathbf{P}(X_1 \leq t, \ldots, X_n \leq t) = \mathbf{P}(X_1 \leq t, \ldots, X_n \leq t, X_{n+1} \leq t) + \mathbf{P}(X_1 \leq t, \ldots, X_n \leq t, X_{n+1} > t) = \mathbf{P}\left(\tau_t^+ > n + 1\right) + \mathbf{P}\left(\tau_t^+ = n + 1\right) = \mathbf{P}\left(\tau_t^+ \geq n + 1\right).$$
Summation of the formula for the distribution of $\tau_t^+$ ends the proof. □

Lemma 11. Let $\{X_n : n \in \mathbb{N}_0\}$ denote the Kendall random walk. Then the distribution of $\min_{0 \leq i \leq n} X_i$ is given by
$$\mathbf{P}\left(\min_{0 \leq i \leq n} X_i \leq t\right) = 1 - \frac{1}{2^n}\left[1 + \frac{H(-t)}{\left(2G(t) - 1\right)^2} - \frac{G(t)}{2G(t) - 1}\right] - G(t)^n\left[\frac{G(t)}{2G(t) - 1} - \frac{H(-t)}{\left(2G(t) - 1\right)^2}\right] - nG(t)^n\,\frac{H(-t)}{2G(t) - 1}$$
for $t < 0$.

Proof. The proof is a simple modification of the proof of Theorem 3. For $t < 0$ we have
$$\mathbf{P}\left(\min_{0 \leq i \leq n} X_i \leq t\right) = 1 - \mathbf{P}\left(\min_{0 \leq i \leq n} X_i > t\right) = 1 - \mathbf{P}(X_0 > t, X_1 > t, \ldots, X_n > t) = 1 - \int_{t}^{\infty}\!\!\cdots\int_{t}^{\infty}\left(\delta_{x_{n-1}} \triangle_\alpha \nu\right)(dx_n)\cdots\left(\delta_{x_0} \triangle_\alpha \nu\right)(dx_1).$$
We define
$$I_1 := \int_{t}^{\infty}\left(\delta_{x_{n-1}} \triangle_\alpha \nu\right)(dx_n)$$
and recursively
$$I_{j+1} := \int_{t}^{\infty} I_j\left(\delta_{x_{n-j-1}} \triangle_\alpha \nu\right)(dx_{n-j})$$
for $1 \leq j \leq n - 1$ and $x_0 = 0$. Assuming $I_j = A_j + B_j\Psi\!\left(\frac{x_{n-j}}{t}\right) + C_j\,\mathbf{1}\{|x_{n-j}| < -t\}$, we have
$$I_{j+1} = \int_{t}^{\infty}\left[A_j + B_j\Psi\!\left(\frac{x_{n-j}}{t}\right) + C_j\,\mathbf{1}\{|x_{n-j}| < -t\}\right]\left(\delta_{x_{n-j-1}} \triangle_\alpha \nu\right)(dx_{n-j})$$
$$= A_j\int_{t}^{\infty}\left(\delta_{x_{n-j-1}} \triangle_\alpha \nu\right)(dx_{n-j}) + B_j\int_{t}^{\infty}\Psi\!\left(\frac{x_{n-j}}{t}\right)\left(\delta_{x_{n-j-1}} \triangle_\alpha \nu\right)(dx_{n-j}) + C_j\int_{t}^{-t}\left(\delta_{x_{n-j-1}} \triangle_\alpha \nu\right)(dx_{n-j})$$
$$= A_j\left[\frac{1}{2} + \frac{1}{2}\left(H(-t)\Psi\!\left(\frac{x_{n-j-1}}{t}\right) + G(t)\,\mathbf{1}\{|x_{n-j-1}| < -t\}\right)\right] + B_jG(t)\Psi\!\left(\frac{x_{n-j-1}}{t}\right) + C_j\left[H(-t)\Psi\!\left(\frac{x_{n-j-1}}{t}\right) + G(t)\,\mathbf{1}\{|x_{n-j-1}| < -t\}\right]$$
$$= \frac{1}{2}A_j + \Psi\!\left(\frac{x_{n-j-1}}{t}\right)\left[\frac{1}{2}H(-t)A_j + G(t)B_j + C_jH(-t)\right] + \mathbf{1}\{|x_{n-j-1}| < -t\}\left[\frac{1}{2}G(t)A_j + G(t)C_j\right].$$
Thus we arrive at the following set of recurrence equations:
$$A_{j+1} = \frac{1}{2}A_j, \qquad B_{j+1} = G(t)B_j + H(-t)\left(A_{j+1} + C_j\right), \qquad C_{j+1} = G(t)A_{j+1} + G(t)C_j$$
with the initial conditions $A_1 = \frac{1}{2}$, $B_1 = \frac{1}{2}H(-t)$, $C_1 = \frac{1}{2}G(t)$. It is easy to check that the solutions are given by the following sequences:
$$A_j = \frac{1}{2^j},$$
$$B_j = H(-t)\left[\frac{jG(t)^j}{2G(t) - 1} - \frac{G(t)^j}{\left(2G(t) - 1\right)^2} + \frac{2^{-j}}{\left(2G(t) - 1\right)^2}\right],$$
$$C_j = \frac{G(t)}{2G(t) - 1}\left(G(t)^j - \frac{1}{2^j}\right).$$
Evaluating $1 - A_n - B_n - C_n$ (at $x_0 = 0$ we have $\Psi(x_0/t) = 1$ and $\mathbf{1}\{|x_0| < -t\} = 1$) and grouping the terms gives the formula of the lemma. □

Acknowledgements. This paper is a part of the project "First order Kendall maximal autoregressive processes and their applications", which is carried out within the POWROTY/REINTEGRATION programme of the Foundation for Polish Science, co-financed by the European Union under the European Regional Development Fund.

References

[1] T. Alpuim. An extremal Markovian sequence. J. Appl. Prob., 26(2), 219–232, 1989.
[2] N. H. Bingham. Factorization theory and domains of attraction for generalized convolution algebras. Proc. London Math. Soc. 23(3), 16–30, 1971.
[3] N. H. Bingham. On a theorem of Kłosowska about generalized convolutions. Coll. Math. 48(1), 117–125, 1984.
[4] M. Borowiecka-Olszewska, B.H. Jasiulis-Gołdyn, J.K. Misiewicz, J. Rosiński. Lévy processes and stochastic integrals in the sense of generalized convolutions. Bernoulli 21(4), 2513–2551, 2015.
[5] P. Embrechts, C. Klüppelberg, T. Mikosch. Modelling Extremal Events: for Insurance and Finance. Applications of Mathematics, Stochastic Modelling and Applied Probability 33, Springer-Verlag Berlin Heidelberg, 1997.
[6] W. Feller. An Introduction to Probability Theory and Its Applications II, 2nd edition, John Wiley & Sons, 1971.
[7] M. Ferreira, L. Canto e Castro. Modeling rare events through a pRARMAX process. Journal of Statistical Planning and Inference 140, 3552–3566, 2010.
[8] M. Ferreira. On the extremal behavior of a Pareto process: an alternative for ARMAX modeling. Kybernetika 48(1), 31–49, 2012.
[9] J. Gilewski. Generalized convolutions and delphic semigroups. Coll. Math. 25, 281–289, 1972.
[10] J. Gilewski, K. Urbanik. Generalized convolutions and generating functions. Bull. Acad. Sci. Polon. Ser. Math. Astr. Phys. 16, 481–487, 1968.
[11] B.H. Jasiulis-Gołdyn. Kendall random walks. Probab. Math. Stat. 36(1), 165–185, 2016.
[12] B.H. Jasiulis. Limit property for regular and weak generalized convolution. J. Theoret. Probab. 23(1), 315–327, 2010.
[13] B.H. Jasiulis-Gołdyn, K. Naskręt, J.K. Misiewicz, E. Omey. Renewal theory for extremal Markov sequences of the Kendall type, submitted, 2018, https://arxiv.org/pdf/1803.11090.pdf.
[14] B.H. Jasiulis-Gołdyn, J.K. Misiewicz. Classical definitions of the Poisson process do not coincide in the case of weak generalized convolution. Lith. Math. J. 55(4), 518–542, 2015.
[15] B.H. Jasiulis-Gołdyn, J.K. Misiewicz. Kendall random walk, Williamson transform and the corresponding Wiener-Hopf factorization. Lith. Math. J. 57(4), 479–489, 2017.
[16] B.H. Jasiulis-Gołdyn, A. Kula. The Urbanik generalized convolutions in the non-commutative probability and a forgotten method of constructing generalized convolution. Proc. Math. Sci. 122(3), 437–458, 2012.
[17] B.H. Jasiulis-Gołdyn, J.K. Misiewicz. Weak Lévy-Khintchine representation for weak infinite divisibility. Theory of Probability and Its Applications 60(1), 132–151, 2016.
[18] D. G. Kendall. Delphic semi-groups, infinitely divisible regenerative phenomena, and the arithmetic of p-functions. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 9(3), 163–195, 1968.
[19] J. Kennedy. Understanding the Wiener-Hopf factorization for the simple random walk. J. Appl. Prob. 31, 561–563, 1994.
[20] J. F. C. Kingman. Random walks with spherical symmetry. Acta Math. 109(1), 11–53, 1963.
[21] A.E. Kyprianou. Introductory lectures on fluctuations of Lévy processes and applications. Springer, 2006.
[22] A.E. Kyprianou, Z. Palmowski. Fluctuations of spectrally negative Markov additive processes. Séminaire de Probabilités XLI, 121–135, 2008.
[23] A. Lachal. A note on Spitzer identity for random walk. Statistics & Probability Letters 78(2), 97–108, 2008.
[24] A.J. McNeil, J. Nešlehová. Multivariate Archimedean copulas, d-monotone functions and ℓ1-norm symmetric distributions. The Annals of Statistics 37(5B), 3059–3097, 2009.
[25] A.J. McNeil, J. Nešlehová. From Archimedean to Liouville copulas. J. Multivariate Analysis 101(8), 1771–1790, 2010.
[26] T. Nakajima. Joint distribution of first hitting time and first hitting place for random walk. Kodai Math. J. 21, 192–200, 1998.
[27] K.-I. Sato. Lévy Processes and Infinitely Divisible Distributions. Cambridge University Press, 2007.
[28] R. Sousa, M. Guerra, S. Yakubovich. Lévy processes with respect to the index Whittaker convolution. arXiv: https://arxiv.org/pdf/1805.03051.pdf, 2018.
[29] K. Urbanik. Generalized convolutions I–V. Studia Math. 23 (1964), 217–245; 45 (1973), 57–70; 80 (1984), 167–189; 83 (1986), 57–95; 91 (1988), 153–178.
[30] V. Vol'kovich, D. Toledano-Ketai, R. Avros. On analytical properties of generalized convolutions. Banach Center Publications, Stability in Probability 5(3), 243–274, 2010.
[31] R.E. Williamson. Multiply monotone functions and their Laplace transforms. Duke Math. J. 23, 189–207, 1956.
[32] H. Ch. Yeh, B.C. Arnold, C.A. Robertson. Pareto processes. Journal of Applied Probability 25(2), 291–301, 1988.