# Kendall random walk, Williamson transform and the corresponding Wiener-Hopf factorization

B.H. Jasiulis-Gołdyn and J.K. Misiewicz

arXiv:1501.05873v1 [math.PR] 23 Jan 2015

## Abstract

The paper gives some properties of hitting times and an analogue of the Wiener-Hopf factorization for the Kendall random walk. We also show that the Williamson transform is the best tool for problems connected with the Kendall generalized convolution.
## 1. Introduction
In this paper we study positive and negative excursions for the Kendall random walk $\{X_n : n\in\mathbb{N}_0\}$, which can be defined by the following recurrence construction:

Definition 1.1. The stochastic process $\{X_n : n\in\mathbb{N}_0\}$ is a discrete time Kendall random walk with parameter $\alpha>0$ and step distribution $\nu$ if there exist

1. $(Y_k)$ i.i.d. random variables with distribution $\nu$,
2. $(\xi_k)$ i.i.d. random variables with uniform distribution on $[0,1]$,
3. $(\theta_k)$ i.i.d. random variables with symmetric Pareto distribution with density $\widetilde{\pi}_{2\alpha}(dy) = \alpha|y|^{-2\alpha-1}\mathbf{1}_{[1,\infty)}(|y|)\,dy$,
4. sequences $(Y_k)$, $(\xi_k)$ and $(\theta_k)$ that are independent,

such that
$$X_0 = 1,\qquad X_1 = Y_1,\qquad X_{n+1} = M_{n+1}\left[I(\xi_n < \varrho_{n+1}) + \theta_{n+1}\, I(\xi_n > \varrho_{n+1})\right],$$
where $\theta_{n+1}$ and $M_{n+1}$ are independent and
$$M_{n+1} = \max\{X_n, Y_{n+1}\},\qquad m_{n+1} = \min\{X_n, Y_{n+1}\},\qquad \varrho_{n+1} = \frac{m_{n+1}^{\alpha}}{M_{n+1}^{\alpha}}.$$
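The symmetric Pareto variables $(\theta_k)$ appearing above are easy to generate by inverse-cdf sampling of the magnitude. The sketch below is our own illustration (the function names are ours, not the paper's): since $P(|\theta|\leq r) = 1 - r^{-2\alpha}$ for $r\geq 1$, the quantile function of $|\theta|$ is $(1-u)^{-1/(2\alpha)}$.

```python
import random

def sym_pareto_quantile(alpha, u):
    """Quantile function of |theta|, where theta has the symmetric Pareto
    density alpha * |y|**(-2*alpha - 1) on |y| >= 1, so that
    P(|theta| <= r) = 1 - r**(-2*alpha) for r >= 1."""
    return (1.0 - u) ** (-1.0 / (2.0 * alpha))

def sample_sym_pareto(alpha, rng):
    """Inverse-cdf sampling of the magnitude, with an independent random sign."""
    mag = sym_pareto_quantile(alpha, rng.random())
    return mag if rng.random() < 0.5 else -mag

rng = random.Random(0)
draws = [sample_sym_pareto(1.0, rng) for _ in range(10000)]
print(min(abs(d) for d in draws) >= 1.0)  # the support is |y| >= 1
```

For $\alpha=1$ the quantile at level $u=0.75$ is $0.25^{-1/2}=2$, which provides a quick sanity check of the inversion.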
The Kendall random walk is an example of a discrete time Lévy random walk under generalized convolution studied in [1]. Some basic properties of this particular process are described in [3].
¹ Institute of Mathematics, University of Wrocław, pl. Grunwaldzki 2/4, 50-384 Wrocław, Poland, e-mail: jasiulis@math.uni.wroc.pl
² Faculty of Mathematics and Information Science, Warsaw University of Technology, ul. Koszykowa 75, 00-662 Warsaw, Poland, e-mail: j.misiewicz@mini.pw.edu.pl
Key words and phrases: generalized convolution, Kendall convolution, Markov process, Pareto distribution, random walk, weakly stable distribution, Williamson transform, Wiener-Hopf factorization.
Mathematics Subject Classification: 60G50, 60J05, 44A35, 60E10.
In the second section we present the definition and main properties of the Williamson transform, together with its unexpectedly simple inversion formula. We also give the definition of the Kendall convolution ([4]), which is an example of a generalized convolution in the sense defined by Urbanik [8, 9]. This section is included only for the completeness of the paper, since all the presented facts are known but rather difficult to find in the literature.
In the third section we recall another definition of the Kendall random walk, following the multidimensional distributions approach given in [1]. By this construction it is clear that the Kendall random walk is a Markov process. Then we study hitting times and positive and negative excursions for this process. The main result of the paper is the analogue of the Wiener-Hopf factorization.
For simplicity we use the notation $T_a$ for the rescaling operator (dilation) defined by $(T_a\lambda)(A) = \lambda(A/a)$ for every Borel set $A$ when $a\neq 0$, and $T_0\lambda = \delta_0$. Throughout the paper the parameter $\alpha>0$ is fixed.
## 2. Williamson transform and Kendall convolution
Let us start by recalling the Williamson transform (see [12]) and its basic properties. In this paper we apply it to the class $\mathcal{P}_s$ of symmetric probability measures on $\mathbb{R}$, but equivalently it can be considered on the class $\mathcal{P}_+$ of measures on $[0,\infty)$.

Definition 2.1. By the Williamson transform we understand the operation $\nu\to\widehat{\nu}$ given by
$$\widehat{\nu}(t)=\int_{\mathbb{R}}\left(1-|xt|^{\alpha}\right)_+\nu(dx),\qquad \nu\in\mathcal{P}_s,$$
where $a_+=a$ if $a>0$ and $a_+=0$ otherwise.

For convenience we use the following notation:
$$\Psi(t)=\left(1-|t|^{\alpha}\right)_+,\qquad G(t)=\widehat{\nu}(1/t).$$
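For a discrete symmetric measure the transform is a finite sum, which makes the definition easy to check numerically. The following sketch is our own illustration (not code from the paper); it evaluates $\widehat{\nu}(t)$ for $\nu = \widetilde{\delta}_1 = \frac12\delta_1+\frac12\delta_{-1}$, whose transform is $\Psi(t)$:

```python
def williamson(points, weights, t, alpha=1.0):
    """Williamson transform nu_hat(t) = sum_i w_i * (1 - |x_i * t|**alpha)_+
    of the discrete measure nu = sum_i w_i * delta_{x_i}."""
    return sum(w * max(0.0, 1.0 - abs(x * t) ** alpha)
               for x, w in zip(points, weights))

# (delta_1 + delta_{-1}) / 2 has nu_hat(t) = Psi(t) = (1 - |t|**alpha)_+
print(williamson([1.0, -1.0], [0.5, 0.5], 0.25))  # Psi(0.25) = 0.75
print(williamson([1.0, -1.0], [0.5, 0.5], 2.0))   # Psi vanishes for |t| >= 1
```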
The next lemma is almost evident and well known. It contains the inverse of the Williamson transform, which is surprisingly simple.

Lemma 2.1. The correspondence between a measure $\nu\in\mathcal{P}$ and its Williamson transform is one-to-one. Moreover, denoting by $F$ the cumulative distribution function of $\nu$, $\nu(\{0\})=0$, we have
$$F(t)=\begin{cases}\dfrac{1}{2\alpha}\left[\alpha\left(G(t)+1\right)+t\,G'(t)\right] & \text{if } t>0;\\[4pt] 1-F(-t) & \text{if } t<0,\end{cases}$$
except for countably many $t\in\mathbb{R}$.
Proof. Of course the assumption $\nu(\{0\})=0$ is only a technical simplification. Notice first that for $t>0$, integrating by parts, we obtain
$$G(t)=\int_{\mathbb{R}}\left(1-|x/t|^{\alpha}\right)_+dF(x)=2\left[\left(1-x^{\alpha}t^{-\alpha}\right)F(x)\right]_0^t+2\alpha t^{-\alpha}\int_0^t x^{\alpha-1}F(x)\,dx$$
$$=-1+2\alpha t^{-\alpha}\int_0^t x^{\alpha-1}F(x)\,dx.$$
Since
$$\lim_{h\to 0^+}\frac1h\left[\int_0^{t+h}x^{\alpha-1}F(x)\,dx-\int_0^t x^{\alpha-1}F(x)\,dx\right]=t^{\alpha-1}F(t^+)$$
and
$$\lim_{h\to 0^-}\frac1h\left[\int_0^{t+h}x^{\alpha-1}F(x)\,dx-\int_0^t x^{\alpha-1}F(x)\,dx\right]=t^{\alpha-1}F(t^-),$$
then, except at the points of jumps of $F$, we can differentiate the equality
$$2\alpha\int_0^t x^{\alpha-1}F(x)\,dx=t^{\alpha}\left(G(t)+1\right)$$
and obtain
$$2\alpha t^{\alpha-1}F(t)=\alpha t^{\alpha-1}\left(G(t)+1\right)+t^{\alpha}G'(t). \qquad\square$$
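The inversion formula can also be checked numerically. Below we take $\nu$ uniform on $[-1,1]$ with $\alpha=1$, for which a direct computation gives $G(t)=t/2$ for $0<t<1$ and $G(t)=1-1/(2t)$ for $t\geq 1$, and recover $F$ from $G$ with a finite-difference derivative. This is an illustrative sketch of ours, not part of the paper:

```python
def G_uniform(t):
    """G(t) = nu_hat(1/t) for nu uniform on [-1, 1] and alpha = 1."""
    return 0.5 * t if t < 1.0 else 1.0 - 0.5 / t

def F_from_G(G, t, alpha=1.0, h=1e-6):
    """Lemma 2.1: F(t) = [alpha * (G(t) + 1) + t * G'(t)] / (2 * alpha) for t > 0,
    with G'(t) approximated by a central difference."""
    dG = (G(t + h) - G(t - h)) / (2.0 * h)
    return (alpha * (G(t) + 1.0) + t * dG) / (2.0 * alpha)

print(round(F_from_G(G_uniform, 0.5), 6))  # uniform cdf: F(0.5) = 0.75
print(round(F_from_G(G_uniform, 2.0), 6))  # F(2.0) = 1.0
```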
In the following we use the notation $\widetilde{\delta}_x=\frac12\delta_x+\frac12\delta_{-x}$, and $\widetilde{\pi}_{2\alpha}(dy)=\alpha|y|^{-2\alpha-1}\mathbf{1}_{[1,\infty)}(|y|)\,dy$ is the symmetric Pareto distribution.
Definition 2.2. By the Kendall convolution we understand the operation $\vartriangle_{\alpha}\colon\mathcal{P}_s^2\to\mathcal{P}_s$ defined for discrete measures by
$$\widetilde{\delta}_x\vartriangle_{\alpha}\widetilde{\delta}_y:=T_M\left(\varrho^{\alpha}\,\widetilde{\pi}_{2\alpha}+\left(1-\varrho^{\alpha}\right)\widetilde{\delta}_1\right),$$
where $M=\max\{|x|,|y|\}$, $m=\min\{|x|,|y|\}$, $\varrho=m/M$. The extension of $\vartriangle_{\alpha}$ to the whole of $\mathcal{P}_s$ is given by
$$\left(\nu_1\vartriangle_{\alpha}\nu_2\right)(A)=\int_{\mathbb{R}^2}\left(\widetilde{\delta}_x\vartriangle_{\alpha}\widetilde{\delta}_y\right)(A)\,\nu_1(dx)\,\nu_2(dy).$$
It is easy to see that the operation $\vartriangle_{\alpha}$ is commutative and associative, and moreover:

- $\nu\vartriangle_{\alpha}\delta_0=\nu$ for each $\nu\in\mathcal{P}_s$;
- $\left(p\nu_1+(1-p)\nu_2\right)\vartriangle_{\alpha}\nu=p\left(\nu_1\vartriangle_{\alpha}\nu\right)+(1-p)\left(\nu_2\vartriangle_{\alpha}\nu\right)$ for each $p\in[0,1]$ and all $\nu,\nu_1,\nu_2\in\mathcal{P}_s$;
- if $\lambda_n\to\lambda$ and $\nu_n\to\nu$, then $\lambda_n\vartriangle_{\alpha}\nu_n\to\lambda\vartriangle_{\alpha}\nu$, where $\to$ denotes weak convergence;
- $T_a\left(\nu_1\vartriangle_{\alpha}\nu_2\right)=\left(T_a\nu_1\right)\vartriangle_{\alpha}\left(T_a\nu_2\right)$ for each $\nu_1,\nu_2\in\mathcal{P}_s$.
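Definition 2.2 translates directly into a two-branch sampling scheme: rescale by $M$, then draw either a symmetric Pareto variable (with probability $\varrho^{\alpha}$) or $\pm 1$. The sketch below is our own illustration of sampling from $\widetilde{\delta}_x\vartriangle_{\alpha}\widetilde{\delta}_y$; the naming is ours, and it assumes $x,y\neq 0$:

```python
import random

def sample_point_convolution(x, y, alpha, rng):
    """Draw from delta-tilde_x <>_alpha delta-tilde_y
    = T_M(rho**alpha * pi-tilde_{2 alpha} + (1 - rho**alpha) * delta-tilde_1),
    where M = max(|x|, |y|) and rho = min(|x|, |y|) / M (Definition 2.2); x, y != 0."""
    M = max(abs(x), abs(y))
    rho = min(abs(x), abs(y)) / M
    sign = 1.0 if rng.random() < 0.5 else -1.0
    if rng.random() < rho ** alpha:
        # Pareto branch: magnitude M * (1 - u)**(-1/(2*alpha)) >= M
        return sign * M * (1.0 - rng.random()) ** (-1.0 / (2.0 * alpha))
    return sign * M  # delta-tilde_1 branch, rescaled by M

rng = random.Random(1)
draws = [sample_point_convolution(1.0, 2.0, 1.0, rng) for _ in range(5000)]
print(all(abs(d) >= 2.0 for d in draws))  # support lies in |u| >= M = 2
```

Both branches produce magnitudes at least $M$, which reflects the extremal (max-type) character of the Kendall convolution.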
The next proposition, which is only a reformulation of a known property of the Williamson transform, shows that this transform plays the same role for the Kendall convolution as the Fourier transform does for the classical convolution.

Proposition 2.2. Let $\nu_1,\nu_2\in\mathcal{P}_s$ be probability measures with Williamson transforms $\widehat{\nu}_1,\widehat{\nu}_2$. Then
$$\int_{\mathbb{R}}\Psi(xt)\left(\nu_1\vartriangle_{\alpha}\nu_2\right)(dx)=\widehat{\nu}_1(t)\,\widehat{\nu}_2(t).$$
Proof. Notice first that for $M=\max\{|x|,|y|\}$, $m=\min\{|x|,|y|\}$, $\varrho=m/M$ we have
$$\int_{\mathbb{R}}\Psi(ut)\left(\widetilde{\delta}_x\vartriangle_{\alpha}\widetilde{\delta}_y\right)(du)=\int_{\mathbb{R}}\Psi(Mut)\left(\widetilde{\delta}_{\varrho}\vartriangle_{\alpha}\widetilde{\delta}_1\right)(du)$$
$$=\varrho^{\alpha}\,2\int_1^{\infty}\Psi(Mut)\,\frac{\alpha\,du}{u^{2\alpha+1}}+\left(1-\varrho^{\alpha}\right)\Psi(Mt)\,\mathbf{1}_{[-1,1]}(Mt)$$
$$=\varrho^{\alpha}\left(1-|Mt|^{\alpha}\right)^2\mathbf{1}_{[-1,1]}(Mt)+\left(1-\varrho^{\alpha}\right)\left(1-|Mt|^{\alpha}\right)\mathbf{1}_{[-1,1]}(Mt)$$
$$=\left(1-|Mt|^{\alpha}\right)\left(1-|mt|^{\alpha}\right)\mathbf{1}_{[-1,1]}(Mt)=\Psi(xt)\,\Psi(yt).$$
Now, by the definition of the Kendall convolution, we see that
$$\int_{\mathbb{R}}\Psi(xt)\left(\nu_1\vartriangle_{\alpha}\nu_2\right)(dx)=\int_{\mathbb{R}^3}\Psi(ut)\left(\widetilde{\delta}_x\vartriangle_{\alpha}\widetilde{\delta}_y\right)(du)\,\nu_1(dx)\,\nu_2(dy)$$
$$=\int_{\mathbb{R}^2}\Psi(xt)\,\Psi(yt)\,\nu_1(dx)\,\nu_2(dy)=\widehat{\nu}_1(t)\,\widehat{\nu}_2(t),$$
which was to be shown. $\square$
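The factorization can be verified in closed form for two point masses, using exactly the expression obtained in the proof. The check below is our own numerical illustration:

```python
def psi(t, alpha):
    """Psi(t) = (1 - |t|**alpha)_+."""
    return max(0.0, 1.0 - abs(t) ** alpha)

def transform_of_point_convolution(x, y, t, alpha):
    """Williamson transform of delta-tilde_x <>_alpha delta-tilde_y,
    via the closed form from the proof of Proposition 2.2."""
    M = max(abs(x), abs(y))
    rho_a = (min(abs(x), abs(y)) / M) ** alpha
    if abs(M * t) >= 1.0:
        return 0.0
    base = 1.0 - abs(M * t) ** alpha
    return rho_a * base ** 2 + (1.0 - rho_a) * base

x, y, t, alpha = 0.4, 0.8, 0.9, 1.5
lhs = transform_of_point_convolution(x, y, t, alpha)
rhs = psi(x * t, alpha) * psi(y * t, alpha)
print(abs(lhs - rhs) < 1e-9)  # equals Psi(xt) * Psi(yt), as the proposition asserts
```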
Proposition 2.3. There exists a sequence of positive numbers $(c_n)$ such that $T_{c_n}\widetilde{\delta}_1^{\vartriangle_{\alpha}n}$ converges weakly to a measure $\sigma\neq\delta_0$ (here $\nu^{\vartriangle_{\alpha}n}=\nu\vartriangle_{\alpha}\cdots\vartriangle_{\alpha}\nu$ denotes the Kendall convolution of $n$ copies of the measure $\nu$).

Proof. Since the Williamson transform of $\widetilde{\delta}_1$ equals $\Psi(t)$, putting $c_n=n^{-1/\alpha}$ we obtain
$$\int_{\mathbb{R}}\Psi(xt)\left(T_{c_n}\widetilde{\delta}_1^{\vartriangle_{\alpha}n}\right)(dx)=\int_{\mathbb{R}}\Psi(c_nxt)\,\widetilde{\delta}_1^{\vartriangle_{\alpha}n}(dx)=\left(1-|c_nt|^{\alpha}\right)_+^n\longrightarrow e^{-|t|^{\alpha}}\quad\text{as }n\to\infty.$$
The function $e^{-|t|^{\alpha}}$ is symmetric and monotonically decreasing as a function of $t^{\alpha}$ on the positive half line; thus there exists a measure $\sigma\in\mathcal{P}_s$ such that
$$e^{-|t|^{\alpha}}=\int_{\mathbb{R}}\left(1-|xt|^{\alpha}\right)_+\sigma(dx).$$
According to Lemma 2.1 we can calculate the cumulative distribution function $F$ of the measure $\sigma$, obtaining $F(0)=\frac12$ and
$$F(t)=\begin{cases}\dfrac12\left[1+\left(1+t^{-\alpha}\right)e^{-t^{-\alpha}}\right] & \text{if } t>0;\\[4pt] 1-F(-t) & \text{if } t<0.\end{cases} \qquad\square$$
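The convergence used in the proof is elementary to check numerically: with $c_n=n^{-1/\alpha}$ the transform $(1-|c_nt|^{\alpha})_+^n$ approaches $e^{-|t|^{\alpha}}$. An illustrative check of ours:

```python
import math

def scaled_transform(t, n, alpha):
    """Williamson transform of T_{c_n} delta-tilde_1^{<> n} with c_n = n**(-1/alpha):
    (1 - |c_n * t|**alpha)_+ ** n."""
    c = n ** (-1.0 / alpha)
    return max(0.0, 1.0 - abs(c * t) ** alpha) ** n

t, alpha = 1.3, 0.7
limit = math.exp(-abs(t) ** alpha)
print(abs(scaled_transform(t, 10**6, alpha) - limit) < 1e-3)
```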
Remark. Notice that the properties of the Kendall convolution listed above, together with the one described in the last proposition, show that the Kendall convolution is an example of a generalized convolution in the sense defined by K. Urbanik. More about generalized convolutions can be found in [2, 5, 6, 7, 8, 9, 10, 11].
As a simple consequence of Lemma 2.1 and Proposition 2.2 we obtain the following fact:

Proposition 2.4. Let $\nu\in\mathcal{P}$. For each natural number $n\geq 2$ the cumulative distribution function $F_n$ of the measure $\nu^{\vartriangle_{\alpha}n}$ is equal to
$$F_n(t)=\frac{1}{2\alpha}\left[\alpha\left(G(t)^n+1\right)+t\,n\,G(t)^{n-1}G'(t)\right],\qquad t>0,$$
and $F_n(t)=1-F_n(-t)$ for $t<0$, where $G(t)=\widehat{\nu}(1/t)$.
Examples.

1. Let $\nu$ be the uniform distribution on $[-1,1]$. Then
$$G(t)=\frac{\alpha}{\alpha+1}\,|t|\,\mathbf{1}_{(-1,1)}(t)+\left(1-\frac{|t|^{-\alpha}}{\alpha+1}\right)\mathbf{1}_{[1,\infty)}(|t|),$$
$$F_n(t)=\frac12+\frac12\,\mathrm{sgn}(t)\left(\frac{\alpha}{\alpha+1}\right)^{\!n}\left(1+\frac{n}{\alpha}\right)|t|^n\,\mathbf{1}_{[0,1)}(|t|)$$
$$\;+\;\frac12\,\mathrm{sgn}(t)\left(1-\frac{|t|^{-\alpha}}{\alpha+1}\right)^{\!n-1}\left(1+\frac{(n-1)\,|t|^{-\alpha}}{\alpha+1}\right)\mathbf{1}_{[1,\infty)}(|t|).$$
2. Let $\nu=\widetilde{\delta}_1:=\frac12\delta_1+\frac12\delta_{-1}$, thus $G(t)=\left(1-|t|^{-\alpha}\right)_+$. For each natural number $n\geq 2$ the probability measure $\nu^{\vartriangle_{\alpha}n}$ has the density
$$dF_n(t)=\frac{\alpha n(n-1)}{2}\,|t|^{-2\alpha-1}\left(1-|t|^{-\alpha}\right)^{n-2}\mathbf{1}_{[1,\infty)}(|t|)\,dt.$$
3. Let $\nu=p\,\widetilde{\delta}_1+(1-p)\,\widetilde{\pi}_p$, where $p\in(0,1]$ and $\widetilde{\pi}_p$ is the symmetric Pareto distribution with density $\widetilde{\pi}_p(dy)=\frac{p}{2}\,|y|^{-p-1}\mathbf{1}_{[1,\infty)}(|y|)\,dy$. For each natural number $n\geq 2$ we have
$$F_n(t)=\frac12+\frac12\,\mathrm{sgn}(t)\left[1-\frac{\alpha(1-p)}{\alpha-p}\,|t|^{-p}+\frac{p(1-\alpha)}{\alpha-p}\,|t|^{-\alpha}\right]^{n-1}$$
$$\times\left[1+\frac{(1-p)(np-\alpha)}{\alpha-p}\,|t|^{-p}-\frac{p(1-\alpha)(n-1)}{\alpha-p}\,|t|^{-\alpha}\right]\mathbf{1}_{[1,\infty)}(|t|).$$
4. Let $\nu=\widetilde{\pi}_{2\alpha}$ for $\alpha\in(0,1]$. Since $\widetilde{\delta}_1\vartriangle_{\alpha}\widetilde{\delta}_1=\widetilde{\pi}_{2\alpha}$, then using Example 2 we arrive at
$$dF_n(t)=\frac{\alpha n(2n-1)}{|t|^{2\alpha+1}}\left(1-|t|^{-\alpha}\right)^{2(n-1)}\mathbf{1}_{[1,\infty)}(|t|)\,dt.$$
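Example 2 admits a quick sanity check: the stated density should integrate to 1 over $|t|\geq 1$. The following midpoint-rule computation is our own illustration, for $n=3$ and $\alpha=1.5$:

```python
def example2_density(t, n, alpha):
    """Density of delta-tilde_1^{<>_alpha n} from Example 2, supported on |t| >= 1."""
    if abs(t) < 1.0:
        return 0.0
    return (alpha * n * (n - 1) / 2.0) * abs(t) ** (-2.0 * alpha - 1.0) \
        * (1.0 - abs(t) ** (-alpha)) ** (n - 2)

def midpoint_integral(f, a, b, steps):
    h = (b - a) / steps  # simple midpoint rule
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

# by symmetry, twice the integral over [1, R] should be close to 1 for large R
mass = 2.0 * midpoint_integral(lambda t: example2_density(t, 3, 1.5), 1.0, 200.0, 200000)
print(abs(mass - 1.0) < 1e-2)
```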
## 3. Kendall random walk
The direct construction of the symmetric Kendall random walk $\{X_n : n\in\mathbb{N}_0\}$, based on the sequence $(Y_k)$ of i.i.d. unit steps with distribution $\nu$, was already presented in Definition 1.1. A slightly different direct construction was given in [3]. We recall here one more definition of the Kendall random walk, following [1], where only the multidimensional distributions of this process are considered. This approach is more convenient for studying the positive and negative excursions which we consider in this section.

Definition 3.1. The Kendall random walk, or discrete time Lévy process under the Kendall generalized convolution, is the Markov process $\{X_n : n\in\mathbb{N}_0\}$ with $X_0\equiv 0$ and the transition probabilities
$$P_n(x,A)=P\left\{X_{n+k}\in A\,\middle|\,X_k=x\right\}=\left(\delta_x\vartriangle_{\alpha}\nu^{\vartriangle_{\alpha}n}\right)(A),\qquad n,k\in\mathbb{N},$$
where the measure $\nu\in\mathcal{P}_s$ is called the step distribution.
For any real $a$ we introduce the first hitting times of the half lines $(a,\infty)$ and $(-\infty,a)$ for the random walk $\{X_n : n\in\mathbb{N}_0\}$:
$$\tau_a^+=\min\{n\geq 1: X_n>a\},\qquad \tau_a^-=\min\{n\geq 1: X_n<a\},$$
with the convention $\min\emptyset=\infty$. We want to find the joint distribution of the random vector $(\tau_0^+, X_{\tau_0^+})$. In order to attain this we need some lemmas.
Lemma 3.1.
$$h(x,y,t):=\left(\delta_x\vartriangle_{\alpha}\delta_y\right)(0,t)=\frac12\left[1-\left(\frac{|xy|}{t^2}\right)^{\!\alpha}\right]\mathbf{1}_{\{|x|<t,\,|y|<t\}}$$
$$=\frac12\left[\Psi\!\left(\frac{x}{t}\right)+\Psi\!\left(\frac{y}{t}\right)-\Psi\!\left(\frac{x}{t}\right)\Psi\!\left(\frac{y}{t}\right)\right]\mathbf{1}_{\{|x|<t,\,|y|<t\}},$$
$$\left(\delta_x\vartriangle_{\alpha}\nu\right)(0,t)=P_1(x,[0,t))=\left[\Psi\!\left(\frac{x}{t}\right)\left(F(t)-\frac12\right)+\frac12G(t)-\frac12\Psi\!\left(\frac{x}{t}\right)G(t)\right]\mathbf{1}_{\{|x|<t\}}.$$
Proof. With the notation $a=\min\{|x|,|y|\}$, $b=\max\{|x|,|y|\}$ and $\varrho=(a/b)^{\alpha}$ we have
$$\left(\delta_x\vartriangle_{\alpha}\delta_y\right)(0,t)=T_b\left((1-\varrho)\widetilde{\delta}_1+\varrho\,\widetilde{\pi}_{2\alpha}\right)(0,t)$$
$$=\frac12(1-\varrho)\mathbf{1}(b<t)+\frac12\varrho\,\mathbf{1}(b<t)\int_1^{t/b}\frac{2\alpha}{s^{2\alpha+1}}\,ds$$
$$=\frac12\left[1-\varrho\left(\frac{b}{t}\right)^{\!2\alpha}\right]\mathbf{1}(b<t)=\frac12\left[1-\left(\frac{ab}{t^2}\right)^{\!\alpha}\right]\mathbf{1}(b<t)$$
$$=\frac12\left[1-\left(\frac{|xy|}{t^2}\right)^{\!\alpha}\right]\mathbf{1}(|x|<t,\,|y|<t).$$
The final form trivially follows from the identity $1-ab=(1-a)+(1-b)-(1-a)(1-b)$. The second formula follows from
$$\left(\delta_x\vartriangle_{\alpha}\nu\right)(0,t)=\int_{\mathbb{R}}\left(\delta_x\vartriangle_{\alpha}\delta_y\right)(0,t)\,\nu(dy). \qquad\square$$
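The second formula of the lemma can be cross-checked by numerically integrating $h(x,y,t)$ over the step distribution. The sketch below is our own; it takes $\nu$ uniform on $[-1,1]$ with $\alpha=1$, for which $F$ and $G$ are explicit:

```python
def psi(t, alpha=1.0):
    return max(0.0, 1.0 - abs(t) ** alpha)

def h(x, y, t, alpha=1.0):
    """(delta_x <>_alpha delta_y)(0, t) from Lemma 3.1."""
    if abs(x) >= t or abs(y) >= t:
        return 0.0
    return 0.5 * (1.0 - (abs(x * y) / t ** 2) ** alpha)

def P1_closed(x, t, F, G, alpha=1.0):
    """Closed form of P_1(x, [0, t)) = (delta_x <>_alpha nu)(0, t), Lemma 3.1."""
    if abs(x) >= t:
        return 0.0
    p = psi(x / t, alpha)
    return p * (F(t) - 0.5) + 0.5 * G(t) - 0.5 * p * G(t)

# nu uniform on [-1, 1], alpha = 1: F(t) = (t + 1)/2 on [-1, 1], G(t) = t/2 for t < 1
F = lambda t: min(1.0, max(0.0, 0.5 * (t + 1.0)))
G = lambda t: 0.5 * t if t < 1.0 else 1.0 - 0.5 / t
x, t, steps = -0.3, 0.8, 100000
hh = 2.0 / steps
direct = sum(h(x, -1.0 + (i + 0.5) * hh, t) for i in range(steps)) * hh * 0.5
print(abs(direct - P1_closed(x, t, F, G)) < 1e-4)
```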
The next lemma is crucial for further considerations.

Lemma 3.2. Let $\{X_n : n\in\mathbb{N}_0\}$ be the Kendall random walk. Then
$$\Phi_n(t):=P\left\{X_1\leq 0,\ldots,X_{n-1}\leq 0,\ 0<X_n<t\right\}=\frac{1}{2^n}\,G(t)^{n-1}\left[2n\left(F(t)-\frac12\right)-(n-1)\,G(t)\right].$$
Proof. For $k=1$ we have $\Phi_1(t)=P\{0<X_1<t\}=F(t)-\frac12$. For $k=2$,
$$\Phi_2(t)=\int_{-\infty}^0\int_0^t P_1(x_1,dx_2)\,P_1(0,dx_1)=\int_{-\infty}^0 P_1\left(x_1,(0,t)\right)\nu(dx_1)$$
$$=\frac12\int_{-t}^0\left[2\Psi(x_1/t)\left(F(t)-\tfrac12\right)+G(t)-\Psi(x_1/t)\,G(t)\right]\nu(dx_1)=\frac14\left[4G(t)\left(F(t)-\tfrac12\right)-G(t)^2\right].$$
In order to calculate $\Phi_3$ notice first that by Proposition 2.2
$$\int_{\mathbb{R}}\Psi(y/t)\left(\delta_x\vartriangle_{\alpha}\nu\right)(dy)=\Psi(x/t)\,G(t),$$
and applying the second formula from Lemma 3.1 to the measure $P_1(x_1,\cdot)=\delta_{x_1}\vartriangle_{\alpha}\nu$ we have
$$\int_{-\infty}^0\int_0^t P_1(x_2,dx_3)\,P_1(x_1,dx_2)=\int_{-\infty}^0\left(\delta_{x_2}\vartriangle_{\alpha}\nu\right)(0,t)\left(\delta_{x_1}\vartriangle_{\alpha}\nu\right)(dx_2)$$
$$=\frac12\int_{-t}^0\left[2\Psi(x_2/t)\left(F(t)-\tfrac12\right)+G(t)-\Psi(x_2/t)\,G(t)\right]\left(\delta_{x_1}\vartriangle_{\alpha}\nu\right)(dx_2)$$
$$=\frac12\Psi(x_1/t)\,G(t)\left(F(t)-\tfrac12\right)+\frac12G(t)\left(\delta_{x_1}\vartriangle_{\alpha}\nu\right)(0,t)-\frac14\Psi(x_1/t)\,G(t)^2$$
$$=\frac14\left[4\Psi(x_1/t)\,G(t)\left(F(t)-\tfrac12\right)+G(t)^2-2\Psi(x_1/t)\,G(t)^2\right].$$
Consequently
$$\Phi_3(t)=\int_{-\infty}^0\int_{-\infty}^0\int_0^t P_1(x_2,dx_3)\,P_1(x_1,dx_2)\,P_1(0,dx_1)$$
$$=\frac14\int_{-t}^0\left[4\Psi(x_1/t)\,G(t)\left(F(t)-\tfrac12\right)+G(t)^2-2\Psi(x_1/t)\,G(t)^2\right]\nu(dx_1)$$
$$=\frac14\left[2G(t)^2\left(F(t)-\tfrac12\right)+G(t)^2\left(F(t)-\tfrac12\right)-G(t)^3\right]=\frac{1}{2^3}\left[6G(t)^2\left(F(t)-\tfrac12\right)-2G(t)^3\right].$$
A simple, but laborious, application of mathematical induction ends the proof. $\square$
Lemma 3.3. The random variable $\tau_0^+$ (and, by the symmetry of the Kendall random walk, also the variable $\tau_0^-$) has the geometric distribution $P(\tau_0^+=k)=\frac{1}{2^k}$, $k=1,2,\ldots$; hence it has the following moment generating function:
$$E\,s^{\tau_0^+}=\frac{\frac{s}{2}}{1-\frac{s}{2}},\qquad 0\leq s<2.$$

Proof. Notice that $\lim_{t\to\infty}G(t)=1$ since $\lim_{t\to\infty}\Psi(1/t)=1$. Now it is enough to apply Lemma 3.2:
$$P(\tau_0^+=k)=P\left\{X_1\leq 0,\ldots,X_{k-1}\leq 0,\ X_k>0\right\}=\Phi_k(\infty)=\frac{1}{2^k}. \qquad\square$$
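In the limit $t\to\infty$ the expression from Lemma 3.2 indeed collapses to the geometric probabilities: substituting $F(\infty)=1$ and $G(\infty)=1$ gives $\Phi_k(\infty)=2^{-k}$. A one-line check of ours:

```python
def phi_limit(k):
    """Phi_k(infinity) from Lemma 3.2 with F(inf) = 1 and G(inf) = 1."""
    F, G = 1.0, 1.0
    return (G ** (k - 1) / 2 ** k) * (2 * k * (F - 0.5) - (k - 1) * G)

print([phi_limit(k) for k in range(1, 6)])  # [0.5, 0.25, 0.125, 0.0625, 0.03125]
```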
Lemma 3.4.
$$P\left\{X_{\tau_0^+}<t\right\}=\frac{4F(t)-2-G(t)^2}{\left(2-G(t)\right)^2}.$$

Proof. It is enough to apply Lemma 3.2:
$$P\left\{X_{\tau_0^+}<t\right\}=\sum_{k=1}^{\infty}P\left\{X_{\tau_0^+}<t,\ \tau_0^+=k\right\}=\sum_{k=1}^{\infty}\Phi_k(t)$$
$$=\sum_{n=1}^{\infty}\frac{1}{2^n}\,G(t)^{n-1}\left[2n\left(F(t)-\tfrac12\right)-(n-1)\,G(t)\right]=\frac{1}{\left(2-G(t)\right)^2}\left[4F(t)-2-G(t)^2\right]. \qquad\square$$
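The geometric-type series in the proof can be summed numerically and compared with the closed form for any admissible pair of values $(F(t), G(t))$. An illustrative check of ours with arbitrary fixed values:

```python
def phi(k, F, G):
    """Phi_k(t) from Lemma 3.2, as a function of F = F(t) and G = G(t)."""
    return (G ** (k - 1) / 2 ** k) * (2 * k * (F - 0.5) - (k - 1) * G)

F, G = 0.8, 0.55  # any values with 1/2 <= F <= 1 and 0 <= G <= 1
series = sum(phi(k, F, G) for k in range(1, 400))
closed = (4.0 * F - 2.0 - G ** 2) / (2.0 - G) ** 2
print(abs(series - closed) < 1e-12)
```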
Definition 3.2. Let $\{X_n : n\in\mathbb{N}_0\}$ be the Kendall random walk independent of the random variable $N_{s/2}$ with the geometric distribution $P\{N_{s/2}=k\}=\left(\frac{s}{2}\right)^{k-1}\left(1-\frac{s}{2}\right)$, $0\leq s\leq 1$, for $k=1,2,\ldots$. By the geometric-Kendall random variable we understand the following:
$$Z_{s/2}=\sum_{k=1}^{\infty}X_k\,\mathbf{1}_{\{N_{s/2}=k\}}.$$
Notice that $E\,\Psi(X_k/t)=G(t)^k$ by Proposition 2.2; thus we have
$$E\,\Psi(Z_{s/2}/t)=\sum_{k=1}^{\infty}E\,\Psi(X_k/t)\left(\frac{s}{2}\right)^{k-1}\left(1-\frac{s}{2}\right)=\sum_{k=1}^{\infty}G(t)^k\left(\frac{s}{2}\right)^{k-1}\left(1-\frac{s}{2}\right)=\frac{G(t)\left(1-\frac{s}{2}\right)}{1-\frac{s}{2}G(t)}.$$
We are now ready to prove the main result of the paper: the Wiener-Hopf factorization for the Kendall random walk. It gives the Laplace-Williamson transform $E\,s^{\tau_0^+}\Psi(uX_{\tau_0^+})$ and thus, consequently, also the joint distribution of $(\tau_0^+, X_{\tau_0^+})$.
Theorem 3.5. Let $\{X_n : n\in\mathbb{N}_0\}$ be a random walk under the Kendall convolution with unit step $X_1\sim F$ such that $F(0)=\frac12$. Then
$$E\,s^{\tau_0^+}\,\Psi\!\left(uX_{\tau_0^+}\right)=\frac{\frac{s}{2}}{1-\frac{s}{2}}\cdot\frac{\left(1-\frac{s}{2}\right)G(1/u)}{1-\frac{s}{2}G(1/u)}=E\,s^{\tau_0^+}\cdot E\,\Psi\!\left(uZ_{s/2}\right).$$
Proof. Let $H(s,u):=E\,s^{\tau_0^+}\Psi(uX_{\tau_0^+})$. Then
$$H(s,u)=\sum_{k=1}^{\infty}s^k\int_0^{\infty}\Psi(ux)\,d\Phi_k(x)=\int_0^{\infty}\Psi(ux)\,d\!\left(\sum_{k=1}^{\infty}s^k\Phi_k(x)\right)_{\!x}$$
$$=\int_0^{\infty}\Psi(ux)\,d\!\left(\sum_{k=1}^{\infty}\left[\frac{sG(x)}{2}\right]^{k-1}\frac{s}{2}\left[2k\left(F(x)-\tfrac12\right)-(k-1)\,G(x)\right]\right)_{\!x}$$
$$=\int_0^{\infty}\Psi(ux)\,d\!\left(\frac{s\left(F(x)-\frac12\right)-\frac14 s^2G(x)^2}{\left(1-\frac12 sG(x)\right)^2}\right)_{\!x}=\alpha u^{\alpha}\int_0^{1/u}x^{\alpha-1}\,\frac{s\left(F(x)-\frac12\right)-\frac14 s^2 G(x)^2}{\left(1-\frac12 sG(x)\right)^2}\,dx.$$
In these calculations we write $d(f(s,x))_x$ for the differential of the function $f(s,x)$ with respect to $x$. To get the last expression we need to know that $G(0)=0$. Now we calculate $G'(x)$. Since
$$G(x)=\int_{\mathbb{R}}\Psi\!\left(\frac{z}{x}\right)\nu(dz)=2\int_0^x\left(1-\left(\frac{z}{x}\right)^{\!\alpha}\right)\nu(dz),$$
then integrating by parts we obtain
$$\left(G(x)+1\right)x^{\alpha}=2\alpha\int_0^x z^{\alpha-1}F(z)\,dz.$$
Now it is enough to differentiate both sides of this equality to obtain
$$G'(x)=2\alpha x^{-1}\left[F(x)-\frac12-\frac12G(x)\right].$$
Using this formula for $G'(x)$ one checks that the integrand above is an exact derivative, $\alpha x^{\alpha-1}\,\frac{s(F(x)-\frac12)-\frac14 s^2G(x)^2}{(1-\frac12 sG(x))^2}=\frac{d}{dx}\!\left(x^{\alpha}\,\frac{\frac{s}{2}G(x)}{1-\frac{s}{2}G(x)}\right)$, so we see that
$$H(s,u)=u^{\alpha}\int_0^{1/u}d\!\left(x^{\alpha}\,\frac{\frac{s}{2}G(x)}{1-\frac{s}{2}G(x)}\right)_{\!x}=\frac{\frac{s}{2}G(1/u)}{1-\frac{s}{2}G(1/u)}. \qquad\square$$
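The integral representation derived in the proof can be tested numerically against the closed form for a concrete step distribution. Below, in our own check, we take the unit step $\widetilde{\delta}_1$ with $\alpha=1$, so that $F(x)=\frac12$ for $0<x<1$ and $F(x)=1$ for $x\geq 1$, and $G(x)=(1-1/x)_+$:

```python
def H_integral(s, u, F, G, alpha=1.0, steps=200000):
    """H(s, u) via the integral alpha * u**alpha * int_0^{1/u} x**(alpha-1) *
    [s*(F - 1/2) - s**2 * G**2 / 4] / (1 - s*G/2)**2 dx (proof of Theorem 3.5)."""
    top = 1.0 / u
    h = top / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        g = G(x)
        num = s * (F(x) - 0.5) - 0.25 * (s * g) ** 2
        total += x ** (alpha - 1.0) * num / (1.0 - 0.5 * s * g) ** 2
    return alpha * u ** alpha * total * h

F = lambda x: 0.5 if x < 1.0 else 1.0       # cdf of delta-tilde_1 on x > 0
G = lambda x: max(0.0, 1.0 - 1.0 / x)       # G for delta-tilde_1, alpha = 1
s, u = 1.0, 0.2
lhs = H_integral(s, u, F, G)
rhs = 0.5 * s * G(1.0 / u) / (1.0 - 0.5 * s * G(1.0 / u))
print(abs(lhs - rhs) < 1e-3)
```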
The same method as in the proof of the last theorem is used in the next lemma.
Lemma 3.6. $E\,X_{\tau_0^+}^{\alpha}=2\,E|Y_1|^{\alpha}$.
Proof. In the following calculations we take $R>0$:
$$E\,X_{\tau_0^+}^{\alpha}=\int_0^{\infty}x^{\alpha}\,d\!\left(\frac{F(x)-\frac12-\frac14G(x)^2}{\left(1-\frac12G(x)\right)^2}\right)_{\!x}$$
$$=\lim_{R\to\infty}\left\{\left[x^{\alpha}\,\frac{F(x)-\frac12-\frac14G(x)^2}{\left(1-\frac12G(x)\right)^2}\right]_0^R-\int_0^R d\!\left(x^{\alpha}\,\frac{\frac12G(x)}{1-\frac12G(x)}\right)_{\!x}\right\}$$
$$=\lim_{R\to\infty}\left[x^{\alpha}\,\frac{F(x)-\frac12-\frac12G(x)}{\left(1-\frac12G(x)\right)^2}\right]_0^R$$
$$=\lim_{R\to\infty}\left[\frac{x^{\alpha}}{\left(1-\frac12G(x)\right)^2}\left(\int_0^x\nu(du)-\int_0^x\left(1-\left(\frac{u}{x}\right)^{\!\alpha}\right)\nu(du)\right)\right]_0^R$$
$$=\lim_{R\to\infty}\left[\frac{1}{\left(1-\frac12G(x)\right)^2}\int_0^x u^{\alpha}\,\nu(du)\right]_0^R=4\int_0^{\infty}u^{\alpha}\,\nu(du),$$
and $4\int_0^{\infty}u^{\alpha}\nu(du)=2\,E|Y_1|^{\alpha}$ by the symmetry of $\nu$, which was to be shown. Notice that the assumption $E|Y_1|^{\alpha}<\infty$ is irrelevant in these calculations. $\square$
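For the unit step $\widetilde{\delta}_1$ with $\alpha=1$ we have $E|Y_1|^{\alpha}=1$, so the lemma predicts $E\,X_{\tau_0^+}^{\alpha}=2$. Using the distribution function from Lemma 3.4 (with $F(t)=1$ and $G(t)=1-1/t$ for $t\geq 1$), integrating the tail of $X_{\tau_0^+}$ reproduces this value; the following numerical illustration is our own:

```python
def tail(t):
    """P{X_{tau_0^+} >= t} for the unit step delta-tilde_1, alpha = 1,
    computed from Lemma 3.4 with F(t) = 1 and G(t) = 1 - 1/t for t >= 1."""
    if t <= 1.0:
        return 1.0  # X_{tau_0^+} >= 1 almost surely for this step distribution
    G = 1.0 - 1.0 / t
    return 1.0 - (2.0 - G ** 2) / (2.0 - G) ** 2

# E X_{tau_0^+} = int_0^inf tail(t) dt = 1 + int_1^R tail(t) dt + O(1/R)
steps, R = 500000, 10000.0
h = (R - 1.0) / steps
ex = 1.0 + sum(tail(1.0 + (i + 0.5) * h) for i in range(steps)) * h
print(abs(ex - 2.0) < 1e-2)  # Lemma 3.6: E X^alpha = 2 * E|Y_1|^alpha = 2
```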
References

[1] Borowiecka-Olszewska, M., Jasiulis-Gołdyn, B.H., Misiewicz, J.K., Rosiński, J. (2014). Lévy processes and stochastic integrals in the sense of generalized convolutions, accepted for publication in Bernoulli, http://arxiv.org/abs/1312.4083.
[2] Jasiulis, B.H. (2010). Limit property for regular and weak generalized convolutions. Journ. of Theor. Probab. 23(1), 315–327.
[3] Jasiulis-Gołdyn, B.H. (2014). Kendall random walks, submitted, http://arxiv.org/abs/1412.0220.
[4] Jasiulis-Gołdyn, B.H., Misiewicz, J.K. (2011). On the uniqueness of the Kendall generalized convolution. Journ. of Theor. Probab. 24(3), 746–755.
[5] Jasiulis-Gołdyn, B.H., Misiewicz, J.K. (2014). Weak Lévy-Khintchine representation for weak infinite divisibility, accepted for publication in Theory of Probability and Its Applications, http://arxiv.org/abs/1407.4097.
[6] Kucharczak, J., Urbanik, K. (1986). Transformations preserving weak stability. Bull. Polish Acad. Sci. Math. 34(7-8), 475–486.
[7] Misiewicz, J.K. (2006). Weak stability and generalized weak convolution for random vectors and stochastic processes. IMS Lecture Notes–Monograph Series, Dynamics & Stochastics 48, 109–118.
[8] Urbanik, K. (1993). Anti-irreducible probability measures. Probab. and Math. Statist. 14(1), 89–113.
[9] Urbanik, K. Generalized convolutions I–V. Studia Math. 23 (1964), 217–245; 45 (1973), 57–70; 80 (1984), 167–189; 83 (1986), 57–95; 91 (1988), 153–178.
[10] Van Thu, N. (2009). A Kingman convolution approach to Bessel processes. Probab. and Math. Statist. 29(1), 119–134.
[11] Vol'kovich, V., Toledano-Ketai, D., Avros, R. (2010). On analytical properties of generalized convolutions. Banach Center Publications, Stability in Probability 5(3), 243–274.
[12] Williamson, R.E. (1956). Multiply monotone functions and their Laplace transforms. Duke Math. J. 23, 189–207.