
# Asymptotic properties of extremal Markov processes driven by Kendall convolution

arXiv:1901.05698v2 [math.PR] 9 Oct 2019
Marek Arendarczyk1, Barbara Jasiulis-Gołdyn2, Edward Omey3
1,2Mathematical Institute, University of Wrocław,
pl. Grunwaldzki 2/4, 50-384 Wrocław, Poland.
3Faculty of Economics and Business-Campus Brussels,
KU Leuven, Warmoesberg 26, 1000 Brussels, Belgium
E-mail: 1marendar@math.uni.wroc.pl,
2jasiulis@math.uni.wroc.pl,
3edward.omey@kuleuven.be
October 10, 2019
Contents

1 Introduction
2 Stochastic representation and basic properties of Kendall random walk
3 Finite dimensional distributions
4 Limit theorems
5 Proofs
  5.1 Proof of Proposition 2.7
  5.2 Proof of Lemma 3.2
  5.3 Proof of Lemma 3.3
  5.4 Proof of Theorem 4.1
  5.5 Proof of Proposition 4.5
  5.6 Proof of Theorem 4.6
References
Abstract

This paper is devoted to the analysis of the finite-dimensional distributions and asymptotic behavior of extremal Markov processes connected to the Kendall convolution. In particular, based on its stochastic representation, we provide a general formula for the finite-dimensional distributions of the random walk driven by the Kendall convolution for a large class of step-size distributions. Moreover, we prove limit theorems for such random walks and for the continuous-time stochastic processes connected to them.

Key words: Markov process; Extremes; Kendall convolution; Regular variation; Williamson transform; Limit theorems; Exact asymptotics

Mathematics Subject Classification: 60G70, 60F05, 44A35, 60J05, 60J35, 60E07, 60E10.
1 Introduction
The Kendall convolution, the main building block in the construction of the extremal Markov process called the Kendall random walk $\{X_n : n \in \mathbb{N}_0\}$, is an important example of a generalized convolution, extending both the convolution corresponding to the classical sum and the one corresponding to the classical maximum. Originated by Urbanik [29] (see also [21]), generalized convolutions have regained popularity in recent years (see, e.g., [7, 19, 28] and references therein). In this paper we focus on the Kendall convolution (see, e.g., [16, 27]), which, thanks to its connections to heavy-tailed distributions, the Williamson transform [18], Archimedean copulas, renewal theory [19], non-commutative probability [15], and Delphic semi-groups [12, 13, 20], presents a high potential for applicability. We refer to [7] for the definition and a detailed description of the basics of the theory of generalized convolutions, for a survey of the most important classes of generalized convolutions, and for a discussion of Lévy processes and stochastic integrals based on such convolutions, as well as to [14] for the definition and basic properties of Kendall random walks.
Our main goal is to study the finite-dimensional distributions and asymptotic properties of extremal Markov processes connected to the Kendall convolution. In particular, we present many examples of finite-dimensional distributions parametrized by the unit-step characteristics, which create a new class of heavy-tailed distributions with potential applications. Innovative thinking about possible applications comes from the fact that the generalized convolution of two point-mass probability measures can be a non-degenerate probability measure. In particular, the Kendall convolution of two probability measures concentrated at 1 is a Pareto distribution, and consequently it generates heavy-tailed distributions. In this context the theory of regularly varying functions (see, e.g., [4, 5]) plays a crucial role. In this paper, regular variation techniques are used to investigate the asymptotic behavior of Kendall random walks and the convergence of the finite-dimensional distributions of continuous-time stochastic processes constructed from Kendall random walks. Most of the proofs presented in this paper are also based on the Williamson transform, which is a generalized characteristic function of a probability measure in the Kendall algebra (see, e.g., [7, 30]). The great advantage of the Williamson transform is that it is easy to invert. It is also worth mentioning that this kind of transform is the generator of Archimedean copulas [25, 26] and is used to compute radial measures for reciprocal Archimedean copulas [11]. Asymptotic properties of the Williamson transform in the context of Archimedean copulas and extreme value theory were given in [22]. In this context, we believe that the results on the Williamson transform obtained in the present paper might be applicable in copula theory, which can be an interesting topic for future research.
We start, in Section 2, by presenting the definition and basic properties of the Kendall convolution. Next, the definition and construction of the random walk under the Kendall convolution is presented, which leads to a stochastic representation of the process $\{X_n\}$. The basic properties of the transition probability kernel are also proved. We refer to [14, 17, 18, 19] for discussions and proofs of further properties of the process $\{X_n\}$. The structure of the processes considered here (see Definition 2.6) is similar to the first-order autoregressive maximal Pareto processes [2, 3, 24, 31], max-AR(1) sequences [1], minification processes [23, 24], the max-autoregressive moving average (MARMA) processes [9], and the pARMAX and pRARMAX processes [10]. Since Kendall random walks form a class of extremal Markov chains, we believe that studying them will yield an important contribution to extreme value theory. Additionally, the sequences considered in this paper have interesting dependence structures between factors, which also justifies their potential applicability.
Section 3 is devoted to the analysis of the finite-dimensional distributions of the process $\{X_n\}$. We derive a general formula and present some important examples of processes with different types of step distributions, which lead to new classes of heavy-tailed distributions.
In Section 4 we study different limiting behaviors of Kendall convolutions and connected processes. We present asymptotic properties of random walks under the Kendall convolution using regular variation techniques. In particular, we show the asymptotic equivalence between the Kendall convolution and the maximum convolution in the case of a regularly varying step distribution $\nu$. This result shows the connection with classical extreme value theory (see, e.g., [8]) and suggests possible applications of the Kendall random walk in modelling phenomena where independence between events cannot be assumed. Moreover, limit theorems for Kendall random walks are given in the case of a finite $\alpha$-moment as well as in the case of a regularly varying tail of the unit-step distribution. Finally, we define a continuous-time stochastic process based on the random walk under the Kendall convolution and prove convergence of its finite-dimensional distributions using regular variation techniques.
Notation.

Throughout this paper, the distribution of a random element $X$ is denoted by $\mathcal{L}(X)$. By $\mathcal{P}_+$ we denote the family of all probability measures on the Borel $\sigma$-algebra $\mathcal{B}(\mathbb{R}_+)$, where $\mathbb{R}_+ := [0,\infty)$. For a probability measure $\lambda \in \mathcal{P}_+$ and $a \in \mathbb{R}_+$ the rescaling operator is given by $T_a\lambda = \mathcal{L}(aX)$ if $\lambda = \mathcal{L}(X)$. The set of all natural numbers including zero is denoted by $\mathbb{N}_0$. Additionally, we use the notation $\pi_{2\alpha}$, $\alpha>0$, for the Pareto probability measure with probability density function (pdf)

$$\pi_{2\alpha}(dy) = 2\alpha\, y^{-2\alpha-1}\, \mathbf{1}_{[1,\infty)}(y)\, dy. \quad (1)$$

Moreover, by

$$m_\nu^{(\alpha)} := \int_0^\infty x^\alpha\, \nu(dx)$$

we denote the $\alpha$-th moment of a measure $\nu$. The truncated $\alpha$-moment of the measure $\nu$ with cumulative distribution function (cdf) $F$ is given by

$$H(t) := \int_0^t y^\alpha\, F(dy).$$

By $\nu_1 \vartriangle_\alpha \nu_2$ we denote the Kendall convolution of the measures $\nu_1, \nu_2 \in \mathcal{P}_+$. For all $n \in \mathbb{N}$, the Kendall convolution of $n$ identical measures $\nu$ is denoted by $\nu^{\vartriangle_\alpha n} := \nu \vartriangle_\alpha \cdots \vartriangle_\alpha \nu$ ($n$ times). By $F_n$ we denote the cumulative distribution function of the measure $\nu^{\vartriangle_\alpha n}$, whereas for the tail of $\nu^{\vartriangle_\alpha n}$ we use the standard notation $\overline{F}_n$. By $\xrightarrow{d}$ we denote convergence in distribution, whereas $\xrightarrow{fdd}$ denotes convergence of finite-dimensional distributions. Finally, we say that a measurable and positive function $f(\cdot)$ is regularly varying at infinity with index $\beta$ if, for all $x>0$, it satisfies $\lim_{t\to\infty} f(tx)/f(t) = x^\beta$ (see, e.g., [6]).
2 Stochastic representation and basic properties of Kendall random walk
We start with the definition of the Kendall generalized convolution (see, e.g., [7], Section 2).
Definition 2.1 The binary operation $\vartriangle_\alpha : \mathcal{P}_+^2 \to \mathcal{P}_+$ defined for point-mass measures by

$$\delta_x \vartriangle_\alpha \delta_y := T_M\left( \varrho^\alpha\, \pi_{2\alpha} + (1-\varrho^\alpha)\, \delta_1 \right),$$

where $\alpha>0$, $\varrho = m/M$, $m = \min\{x,y\}$, $M = \max\{x,y\}$, is called the Kendall convolution. The extension of $\vartriangle_\alpha$ to any $A \in \mathcal{B}(\mathbb{R}_+)$ and $\nu_1,\nu_2 \in \mathcal{P}_+$ is given by

$$\nu_1 \vartriangle_\alpha \nu_2(A) = \int_0^\infty \int_0^\infty (\delta_x \vartriangle_\alpha \delta_y)(A)\, \nu_1(dx)\, \nu_2(dy). \quad (2)$$

Remark 2.2 Note that the convolution of two point-mass measures is a continuous measure, which reduces to the Pareto distribution $\pi_{2\alpha}$ in the case $x = y = 1$. This is different from the classical convolution or the maximum convolution algebra, where the convolution of discrete measures is also discrete.
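To make the mixture in Definition 2.1 concrete, here is a small numerical sketch (our illustration, not part of the original text): it evaluates the cdf of $\delta_x \vartriangle_\alpha \delta_y$ by combining the rescaled Pareto component $\varrho^\alpha\, T_M \pi_{2\alpha}$ with the atom of mass $1-\varrho^\alpha$ at $M$.

```python
def pareto_2a_cdf(t, alpha):
    """cdf of the Pareto measure pi_{2 alpha} defined in (1); support [1, oo)."""
    return 1.0 - t ** (-2.0 * alpha) if t >= 1.0 else 0.0

def kendall_point_cdf(x, y, t, alpha):
    """cdf at t of the Kendall convolution of delta_x and delta_y (Definition 2.1)."""
    m, M = min(x, y), max(x, y)
    rho_a = (m / M) ** alpha
    # T_M rescales by M, so the Pareto component is evaluated at t / M;
    # the remaining mass (1 - rho^alpha) sits in an atom at M.
    return rho_a * pareto_2a_cdf(t / M, alpha) + (1.0 - rho_a) * (1.0 if t >= M else 0.0)

# delta_1 convolved with delta_1 is exactly the Pareto law pi_{2 alpha} (Remark 2.2):
print(kendall_point_cdf(1.0, 1.0, 2.0, 1.0))  # 1 - 2^(-2) = 0.75
```

For $t > \max(x,y)$ the value simplifies to $1 - x^\alpha y^\alpha t^{-2\alpha}$, the closed form used in the proofs of Section 5 (equation (17)).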
In the Kendall convolution algebra the main tool used in the analysis of a measure $\nu$ is the Williamson transform (see, e.g., [30]), which is the characteristic function for the Kendall convolution (see, e.g., [7], Definition 2.2) and plays the same role as the classical Laplace or Fourier transform for convolutions defined by the addition of independent random elements. We refer to [7], [27], and [29] for the definition and a detailed discussion of the properties of generalized characteristic functions and their connections to generalized convolutions. Throughout this paper, the function $G(t)$, which is the Williamson transform at the point $1/t$, plays a crucial role in the analysis of Kendall convolutions and connected stochastic processes.
Definition 2.3 The operation $G : \mathbb{R}_+ \to \mathbb{R}_+$ given by

$$G(t) = \int_0^\infty \Psi\!\left(\frac{x}{t}\right) \nu(dx), \qquad \nu \in \mathcal{P}_+,$$

where

$$\Psi\!\left(\frac{x}{t}\right) = \left(1 - \left(\frac{x}{t}\right)^{\!\alpha}\right)_{\!+}, \quad (3)$$

$a_+ = \max(0,a)$, $\alpha > 0$, is called the Williamson transform of the measure $\nu$ at the point $1/t$.

Remark 2.4 Note that $\Psi(x/t)$, as a function of $t$, is the Williamson transform of $\delta_x$, $x \geq 0$.

Remark 2.5 Due to Proposition 2.3 and Example 3.4 in [7], the function $G(1/t)$ is a generalized characteristic function for the Kendall convolution. Thus, the Williamson transform $G_n(t)$ of the measure $\nu^{\vartriangle_\alpha n}$ has the following important property (see, e.g., Definition 2.2 in [7]):

$$G_n(t) = G(t)^n. \quad (4)$$
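Since $G$ is just an expectation of the kernel $\Psi$, it is straightforward to evaluate numerically. The sketch below (our illustration; the quadrature routine and its names are ours) computes $G$ for a step distribution with a density and compares it against a closed form; the closed form used here for $\nu = U(0,1)$ is the one derived later in Example 3.5.

```python
import numpy as np

def williamson(t, alpha, density, upper, n=200_000):
    """G(t) = integral of (1 - (x/t)^alpha)_+ against a density on [0, upper],
    computed with the trapezoidal rule."""
    x = np.linspace(0.0, upper, n + 1)
    f = np.clip(1.0 - (x / t) ** alpha, 0.0, None) * density(x)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

alpha = 2.0
uniform01 = lambda x: ((x >= 0) & (x <= 1)).astype(float)
# Closed form for nu = U(0,1) (Example 3.5): G(t) = c - c^(alpha+1)/((alpha+1) t^alpha)
# with c = min(t, 1).
for t in (0.5, 1.0, 3.0):
    c = min(t, 1.0)
    closed = c - c ** (alpha + 1.0) / ((alpha + 1.0) * t ** alpha)
    print(t, williamson(t, alpha, uniform01, 1.0), closed)
```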
By using a recurrence construction, we define a stochastic process $\{X_n : n \in \mathbb{N}_0\}$, called the Kendall random walk (see also [14, 18, 19]). Further, we show the strict connection of the process $\{X_n\}$ to the Kendall convolution.
Definition 2.6 The stochastic process $\{X_n : n \in \mathbb{N}_0\}$ is a discrete-time Kendall random walk with parameter $\alpha>0$ and step distribution $\nu \in \mathcal{P}_+$ if there exist

1. $\{Y_k\}$ i.i.d. random variables with distribution $\nu$,
2. $\{\xi_k\}$ i.i.d. random variables with the uniform distribution on $[0,1]$,
3. $\{\theta_k\}$ i.i.d. random variables with the Pareto distribution with $\alpha>0$ and density $\pi_{2\alpha}(dy) = 2\alpha\, y^{-2\alpha-1}\, \mathbf{1}_{[1,\infty)}(y)\, dy$,

such that the sequences $\{Y_k\}$, $\{\xi_k\}$, and $\{\theta_k\}$ are independent and

$$X_0 = 1, \qquad X_1 = Y_1, \qquad X_{n+1} = M_{n+1}\left( \mathbf{1}_{(\xi_{n+1} > \varrho_{n+1})} + \theta_{n+1}\, \mathbf{1}_{(\xi_{n+1} \leq \varrho_{n+1})} \right),$$

where

$$M_{n+1} = \max\{X_n, Y_{n+1}\}, \qquad m_{n+1} = \min\{X_n, Y_{n+1}\}, \qquad \varrho_{n+1} = \frac{m_{n+1}^\alpha}{M_{n+1}^\alpha}.$$
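The recurrence of Definition 2.6 is straightforward to simulate. The following sketch (our illustration; all function names are ours) draws paths of the walk; note that, since $\theta_{n+1} \geq 1$ and $M_{n+1} \geq X_n$, the paths are nondecreasing. For $\nu = \delta_1$ we have $\varrho_2 = 1$, so $X_2 = \theta_2$ is exactly Pareto $\pi_{2\alpha}$, which gives a quick check of the simulation.

```python
import numpy as np

def kendall_walk(sample_step, n_steps, alpha, size, rng):
    """Simulate `size` independent paths of the Kendall random walk of
    Definition 2.6 and return X_{n_steps} for each path."""
    X = sample_step(size, rng)                      # X_1 = Y_1
    for _ in range(n_steps - 1):
        Y = sample_step(size, rng)
        M, m = np.maximum(X, Y), np.minimum(X, Y)
        rho = (m / M) ** alpha
        xi = rng.uniform(size=size)
        # Pareto(2 alpha) variable via the inverse cdf: U^(-1/(2 alpha)).
        theta = rng.uniform(size=size) ** (-1.0 / (2.0 * alpha))
        X = np.where(xi <= rho, M * theta, M)
    return X

rng = np.random.default_rng(0)
alpha = 1.0
# Unit step nu = delta_1: then X_2 is Pareto pi_{2 alpha}, so
# P(X_2 <= 2) = 1 - 2^(-2 alpha) = 0.75 for alpha = 1.
X2 = kendall_walk(lambda size, rng: np.ones(size), 2, alpha, 200_000, rng)
print(np.mean(X2 <= 2.0))  # approximately 0.75
```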
In the next proposition we show that the process constructed in Definition 2.6 is a homogeneous Markov process driven by the Kendall convolution. We refer to [7], Section 4, for the proof and a general discussion of the existence of Markov processes under generalized convolutions.

Proposition 2.7 The process $\{X_n : n \in \mathbb{N}_0\}$ with the stochastic representation given by Definition 2.6 is a homogeneous Markov process with transition probability kernel

$$P_{n,n+k}(x,A) := P_k(x,A) = \left(\delta_x \vartriangle_\alpha \nu^{\vartriangle_\alpha k}\right)(A), \quad (5)$$

where $k,n \in \mathbb{N}$, $A \in \mathcal{B}(\mathbb{R}_+)$, $x \geq 0$, $\alpha > 0$.
The proof of Proposition 2.7 is presented in Section 5.1.
3 Finite dimensional distributions
In this section we study the finite-dimensional distributions of the process $\{X_n\}$. We start with a proposition that describes the one-dimensional distributions of $\{X_n\}$ and their relationships with the Williamson transform and the truncated $\alpha$-moment.
Proposition 3.1 Let $\{X_n : n \in \mathbb{N}_0\}$ be a Kendall random walk with parameter $\alpha>0$ and unit-step distribution $\nu \in \mathcal{P}_+$ with cdf $F$. Then

(i) for any $t \geq 0$ we have

$$G(t) = \alpha t^{-\alpha} \int_0^t x^{\alpha-1} F(x)\, dx \qquad \text{and} \qquad F(t) = G(t) + \frac{t}{\alpha}\, G'(t);$$

(ii) for any $t \geq 0$ and $n \geq 1$ we have

$$F_n(t) = G(t)^{n-1}\left( n\, t^{-\alpha} H(t) + G(t) \right).$$
Proof. First, observe that

$$G(t) = F(t) - t^{-\alpha} \int_0^t x^\alpha\, \nu(dx) = \alpha t^{-\alpha} \int_0^t x^{\alpha-1} F(x)\, dx, \quad (6)$$

where the last equality follows from integration by parts. In order to complete the proof of (i) it suffices to take derivatives on both sides of the above equation.

In an analogous way we obtain

$$G(t)^n = \alpha t^{-\alpha} \int_0^t x^{\alpha-1} F_n(x)\, dx. \quad (7)$$

In order to complete the proof it is sufficient to take derivatives on both sides of equation (7) and apply (i).
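Proposition 3.1 (ii) can also be sanity-checked numerically (our illustration, not part of the original text): for $n = 2$, $F_2(t)$ can be computed directly from (2) as $\int\!\!\int (\delta_x \vartriangle_\alpha \delta_y)((0,t])\, \nu(dx)\,\nu(dy)$, using the fact that for $x, y < t$ the two-point cdf equals $1 - x^\alpha y^\alpha t^{-2\alpha}$ (this is (17) in Section 5.2). We take $\nu = U(0,1)$, whose $G$ and $H$ are given in closed form in Example 3.5.

```python
import numpy as np

alpha = 2.0

def G(t):
    # Williamson transform of U(0,1) (closed form of Example 3.5).
    c = min(t, 1.0)
    return c - c ** (alpha + 1.0) / ((alpha + 1.0) * t ** alpha)

def H(t):
    # truncated alpha-moment of U(0,1).
    return min(t, 1.0) ** (alpha + 1.0) / (alpha + 1.0)

def F2_formula(t):
    # Proposition 3.1 (ii) with n = 2.
    return G(t) * (2.0 * t ** (-alpha) * H(t) + G(t))

def F2_direct(t, n_grid=1500):
    # F_2(t) = E[(delta_X conv delta_Y)((0,t])] for X, Y iid U(0,1); for
    # x, y < t the two-point cdf equals 1 - x^a y^a / t^(2a), cf. (17).
    u = (np.arange(n_grid) + 0.5) / n_grid      # midpoint grid on (0,1)
    x, y = np.meshgrid(u, u)
    inside = (x < t) & (y < t)
    vals = (1.0 - x ** alpha * y ** alpha / t ** (2.0 * alpha)) * inside
    return float(vals.mean())

print(F2_formula(0.7), F2_direct(0.7))  # the two routes agree to grid accuracy
```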
The next two lemmas give characterizations of the transition probabilities of the process $\{X_n\}$ and play an important role in the analysis of its finite-dimensional distributions.

Lemma 3.2 Let $\{X_n : n \in \mathbb{N}_0\}$ be a Kendall random walk with parameter $\alpha>0$ and unit-step distribution $\nu \in \mathcal{P}_+$. For all $k, n \in \mathbb{N}$, $x, y, t \geq 0$ we have

(i)

$$P_n(x,(0,t]) = \left[ G(t)^n + n\, t^{-\alpha} H(t)\, G(t)^{n-1}\, \Psi\!\left(\frac{x}{t}\right) \right] \mathbf{1}_{\{x<t\}};$$

(ii)

$$\int_0^t w^\alpha\, P_n(x,dw) = \left[ x^\alpha\, G(t)^n + n\, G(t)^{n-1} H(t)\, \Psi\!\left(\frac{x}{t}\right) \right] \mathbf{1}_{\{x<t\}}.$$
The proof of Lemma 3.2 is presented in Section 5.2.
The following lemma is the main tool in finding the finite-dimensional distributions of $\{X_n\}$. In order to formulate the result it is convenient to introduce the notation

$$\mathcal{A}_k := \{0,1\}^k \setminus \{(0,0,\ldots,0)\}, \qquad \text{for any } k \in \mathbb{N}.$$

Additionally, for any $(\epsilon_1,\ldots,\epsilon_k) \in \mathcal{A}_k$ we denote

$$\tilde\epsilon_1 := \min\{ i \in \{1,\ldots,k\} : \epsilon_i = 1 \}, \qquad \tilde\epsilon_m := \min\{ i > \tilde\epsilon_{m-1} : \epsilon_i = 1 \}, \quad m = 2,\ldots,s, \qquad s := \sum_{i=1}^k \epsilon_i.$$

Lemma 3.3 Let $\{X_n : n \in \mathbb{N}_0\}$ be a Kendall random walk with parameter $\alpha>0$ and unit-step distribution $\nu \in \mathcal{P}_+$. Then for any $0 = n_0 \leq n_1 \leq n_2 \leq \cdots \leq n_k$, where $n_j \in \mathbb{N}$ for all $j \in \mathbb{N}$, and $0 \leq y_0 \leq x_1 \leq x_2 \leq \cdots \leq x_k \leq x_{k+1}$, we have

$$\int_0^{x_1}\!\!\int_0^{x_2}\!\!\cdots\!\!\int_0^{x_k} \Psi\!\left(\frac{y_k}{x_{k+1}}\right) P_{n_k-n_{k-1}}(y_{k-1}, dy_k)\, P_{n_{k-1}-n_{k-2}}(y_{k-2}, dy_{k-1}) \cdots P_{n_1}(y_0, dy_1)$$
$$= \sum_{(\epsilon_1,\ldots,\epsilon_k)\in\mathcal{A}_k} \Psi\!\left(\frac{y_0}{x_{\tilde\epsilon_1}}\right) \Psi\!\left(\frac{x_{\tilde\epsilon_s}}{x_{k+1}}\right) \prod_{i=1}^{s-1} \Psi\!\left(\frac{x_{\tilde\epsilon_i}}{x_{\tilde\epsilon_{i+1}}}\right) \prod_{j=1}^{k} \left(G(x_j)\right)^{n_j-n_{j-1}-\epsilon_j} \left( \frac{(n_j-n_{j-1})\, H(x_j)}{x_j^\alpha} \right)^{\!\epsilon_j} + \Psi\!\left(\frac{y_0}{x_{k+1}}\right) \prod_{j=1}^{k} \left(G(x_j)\right)^{n_j-n_{j-1}},$$

where

$$\prod_{i=1}^{s-1} \Psi\!\left(\frac{x_{\tilde\epsilon_i}}{x_{\tilde\epsilon_{i+1}}}\right) = 1 \qquad \text{for } s = 1.$$
The proof of Lemma 3.3 is presented in Section 5.3.
Now we are able to derive a general formula for the finite-dimensional distributions of the process $\{X_n\}$.

Theorem 3.4 Let $\{X_n : n \in \mathbb{N}_0\}$ be a Kendall random walk with parameter $\alpha>0$ and unit-step distribution $\nu \in \mathcal{P}_+$. Then for any $0 =: n_0 \leq n_1 \leq n_2 \leq \cdots \leq n_k$, where $n_j \in \mathbb{N}$ for all $j \in \mathbb{N}$, and $0 \leq x_1 \leq x_2 \leq \cdots \leq x_k$, we have

$$\mathbb{P}(X_{n_k} \leq x_k, X_{n_{k-1}} \leq x_{k-1}, \ldots, X_{n_1} \leq x_1) = \sum_{(\epsilon_1,\ldots,\epsilon_k)\in\{0,1\}^k} \prod_{i=1}^{s-1} \Psi\!\left(\frac{x_{\tilde\epsilon_i}}{x_{\tilde\epsilon_{i+1}}}\right) \prod_{j=1}^{k} \left(G(x_j)\right)^{n_j-n_{j-1}-\epsilon_j} \left( \frac{(n_j-n_{j-1})\, H(x_j)}{x_j^\alpha} \right)^{\!\epsilon_j},$$

where

$$\prod_{i=1}^{s-1} \Psi\!\left(\frac{x_{\tilde\epsilon_i}}{x_{\tilde\epsilon_{i+1}}}\right) = 1 \qquad \text{for } s \in \{0,1\}.$$

Proof. First, observe that

$$\mathbb{P}(X_{n_k} \leq x_k, \ldots, X_{n_1} \leq x_1) = \int_0^{x_1}\!\!\int_0^{x_2}\!\!\cdots\!\!\int_0^{x_k} P_{n_k-n_{k-1}}(y_{k-1}, dy_k)\, P_{n_{k-1}-n_{k-2}}(y_{k-2}, dy_{k-1}) \cdots P_{n_1}(0, dy_1).$$

Moreover, by the definition of $\Psi(\cdot)$, for any $a>0$ we have

$$\lim_{x_{k+1}\to\infty} \Psi\!\left(\frac{a}{x_{k+1}}\right) = \Psi(0) = 1.$$

Now, in order to complete the proof, it suffices to apply Lemma 3.3 with $y_0 = 0$ and $x_{k+1} \to \infty$.
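The sum over $\{0,1\}^k$ in Theorem 3.4 is mechanical to evaluate. Below is a sketch (our illustration; all names are ours) that computes the finite-dimensional cdf for arbitrary $G$ and $H$; for $k=1$ it reduces to Proposition 3.1 (ii).

```python
import itertools

def psi(r, alpha):
    return max(0.0, 1.0 - r ** alpha)

def fdd_cdf(ns, xs, G, H, alpha):
    """P(X_{n_1} <= x_1, ..., X_{n_k} <= x_k) via the sum of Theorem 3.4
    (n_0 = 0; ns and xs must be nondecreasing)."""
    k = len(ns)
    total = 0.0
    for eps in itertools.product((0, 1), repeat=k):
        ones = [j for j in range(k) if eps[j] == 1]
        term = 1.0
        for i in range(len(ones) - 1):          # product of Psi factors
            term *= psi(xs[ones[i]] / xs[ones[i + 1]], alpha)
        for j in range(k):
            dn = ns[j] - (ns[j - 1] if j > 0 else 0)
            if eps[j]:
                term *= dn * H(xs[j]) / xs[j] ** alpha * G(xs[j]) ** (dn - 1)
            else:
                term *= G(xs[j]) ** dn
        total += term
    return total

# Unit step nu = delta_1 with alpha = 1: G(x) = (1 - 1/x)_+, H(x) = 1_{[1,oo)}(x).
G1 = lambda x: max(0.0, 1.0 - 1.0 / x)
H1 = lambda x: 1.0 if x >= 1.0 else 0.0
print(fdd_cdf([3], [2.0], G1, H1, 1.0))  # F_3(2) = (1 + 2/2)(1 - 1/2)^2 = 0.5
```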
Finally, we present the cumulative distribution functions and characterizations of the finite-dimensional distributions of the process $\{X_n\}$ for the most interesting examples of unit-step distributions $\nu$. Since, by Theorem 3.4, the finite-dimensional distributions of $\{X_n\}$ are uniquely determined by the Williamson transform $G(\cdot)$ and the truncated $\alpha$-moment $H(\cdot)$ of the step distribution $\nu$, these two characteristics are presented for each of the analyzed cases. Additionally, in each example we derive the cdf of $\nu^{\vartriangle_\alpha n}$, that is, the one-dimensional distribution of the process $\{X_n\}$.
Example 3.1 Let $\nu = \delta_1$. Then the Williamson transform and the truncated $\alpha$-moment of the measure $\nu$ are given by

$$G(x) = \left(1 - \frac{1}{x^\alpha}\right)_{\!+} \qquad \text{and} \qquad H(x) = \mathbf{1}_{[1,\infty)}(x),$$

respectively. Hence, by Proposition 3.1 (ii), for any $n = 2,3,\ldots$, we have

$$F_n(x) = \left(1 + \frac{n-1}{x^\alpha}\right) \left(1 - \frac{1}{x^\alpha}\right)^{\!n-1} \mathbf{1}_{[1,\infty)}(x).$$
In the next example we consider a linear combination of $\delta_1$ and the Pareto distribution, which plays a crucial role in the construction of the Kendall convolution.
Example 3.2 Let $\nu = p\,\delta_1 + (1-p)\,\pi_p$, where $p \in (0,1]$ and $\pi_p$ is the Pareto distribution with pdf (1) with $2\alpha = p$. Then the Williamson transform and the truncated $\alpha$-moment of the measure $\nu$ are given by

$$G(x) = \begin{cases} \left(1 - \dfrac{\alpha(1-p)}{\alpha-p}\, x^{-p} + \dfrac{p(1-\alpha)}{\alpha-p}\, x^{-\alpha}\right) \mathbf{1}_{[1,\infty)}(x) & \text{if } p \neq \alpha, \\[2mm] \left(1 - (1-p)\,x^{-p} - p\,x^{-\alpha} - p(1-p)\, x^{-\alpha} \log(x)\right) \mathbf{1}_{[1,\infty)}(x) & \text{if } p = \alpha, \end{cases}$$

and

$$H(x) = \begin{cases} \left(\dfrac{p(1-\alpha)}{p-\alpha} + \dfrac{p(1-p)}{\alpha-p}\, x^{\alpha-p}\right) \mathbf{1}_{[1,\infty)}(x) & \text{if } p \neq \alpha, \\[2mm] \left(p + p(1-p)\log(x)\right) \mathbf{1}_{[1,\infty)}(x) & \text{if } p = \alpha, \end{cases}$$

respectively. Hence, by Proposition 3.1 (ii), for any $n = 2,3,\ldots$, we have

$$F_n(x) = \left(1 - \frac{\alpha(1-p)}{\alpha-p}\, x^{-p} + \frac{p(1-\alpha)}{\alpha-p}\, x^{-\alpha}\right)^{\!n-1} \left(1 + \frac{(1-p)(np-\alpha)}{\alpha-p}\, x^{-p} - \frac{p(1-\alpha)(n-1)}{\alpha-p}\, x^{-\alpha}\right) \mathbf{1}_{[1,\infty)}(x)$$

for $p \neq \alpha$, and

$$F_n(x) = \left(1 - (1-p)\,x^{-p} - p\,x^{-\alpha} - p(1-p)\,x^{-\alpha}\log(x)\right)^{\!n-1} \left(1 + (n-1)p\, x^{-\alpha} - (1-p)\,x^{-p} - p(1-p)(n-1)\, x^{-\alpha}\log(x)\right) \mathbf{1}_{[1,\infty)}(x)$$

for $p = \alpha$. (The signs of the logarithmic terms follow from the relation $G(x) = F(x) - x^{-\alpha}H(x)$ with $F(x) = 1-(1-p)x^{-p}$.)
In the next example we consider a distribution $\nu$ with the lack-of-memory property for the Kendall convolution. We refer to [17] for a general result on the existence of measures with the lack-of-memory property for so-called monotonic generalized convolutions.
Example 3.3 Let $\nu$ be the probability measure with cdf $F(x) = 1 - (1-x^\alpha)_+$, where $\alpha>0$. Then the Williamson transform and the truncated $\alpha$-moment of the measure $\nu$ are given by

$$G(x) = \frac{x^\alpha}{2}\, \mathbf{1}_{[0,1)}(x) + \left(1 - \frac{1}{2x^\alpha}\right) \mathbf{1}_{[1,\infty)}(x) \qquad \text{and} \qquad H(x) = \frac{x^{2\alpha}}{2}\, \mathbf{1}_{[0,1)}(x) + \frac{1}{2}\, \mathbf{1}_{[1,\infty)}(x),$$

respectively. Hence, by Proposition 3.1 (ii), for any $n = 2,3,\ldots$, we have

$$F_n(x) = \begin{cases} \dfrac{n+1}{2^n}\, x^{\alpha n} & \text{for } x \in [0,1], \\[2mm] \left(1 - \dfrac{1}{2x^\alpha}\right)^{\!n-1} \left(1 + \dfrac{n-1}{2x^\alpha}\right) & \text{for } x > 1. \end{cases}$$
In the next example we consider a unit-step distribution that is a stable probability measure for the Kendall random walk (see Section 4, Proposition 4.5).
Example 3.4 Let $\rho_{\nu,\alpha}$, $\alpha>0$, be the probability measure with cdf

$$F(x) = \left(1 + m_\nu^{(\alpha)}\, x^{-\alpha}\right) e^{-m_\nu^{(\alpha)} x^{-\alpha}}\, \mathbf{1}_{(0,\infty)}(x),$$

where $\alpha>0$ and $m_\nu^{(\alpha)}>0$ is a parameter. Then the Williamson transform and the truncated $\alpha$-moment of the measure $\rho_{\nu,\alpha}$ are given by

$$G(x) = \exp\{-m_\nu^{(\alpha)} x^{-\alpha}\}\, \mathbf{1}_{(0,\infty)}(x) \qquad \text{and} \qquad H(x) = m_\nu^{(\alpha)}\, G(x),$$

respectively. Hence, by Proposition 3.1 (ii), for any $n = 2,3,\ldots$, we have

$$F_n(x) = \left(1 + n\, m_\nu^{(\alpha)}\, x^{-\alpha}\right) \exp\{-n\, m_\nu^{(\alpha)} x^{-\alpha}\}\, \mathbf{1}_{(0,\infty)}(x).$$
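The family of Example 3.4 is closed under the Kendall convolution up to rescaling: comparing $F_n$ with the original cdf shows $F_n(n^{1/\alpha}x) = F(x)$, i.e. $\rho_{\nu,\alpha}$ is stable for this algebra (this reappears in Proposition 4.5). A quick numerical confirmation (our sketch, not part of the original text):

```python
import math

def rho_cdf(x, m, alpha):
    # cdf of Example 3.4
    return (1.0 + m * x ** (-alpha)) * math.exp(-m * x ** (-alpha)) if x > 0 else 0.0

def rho_Fn(x, n, m, alpha):
    # F_n from Example 3.4
    return (1.0 + n * m * x ** (-alpha)) * math.exp(-n * m * x ** (-alpha)) if x > 0 else 0.0

m, alpha, n = 0.8, 1.5, 7
for x in (0.5, 1.0, 2.0):
    # F_n(n^(1/alpha) x) = F(x): stability under the Kendall convolution.
    assert abs(rho_Fn(n ** (1.0 / alpha) * x, n, m, alpha) - rho_cdf(x, m, alpha)) < 1e-12
print("stability check passed")
```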
Example 3.5 Let $\nu = U(0,1)$ be the uniform distribution with density $\nu(dx) = \mathbf{1}_{(0,1)}(x)\,dx$. Then the Williamson transform and the truncated $\alpha$-moment of the measure $\nu$ are given by

$$G(x) = (x \wedge 1) - \frac{(x \wedge 1)^{\alpha+1}}{(\alpha+1)\,x^\alpha} \qquad \text{and} \qquad H(x) = \frac{(x \wedge 1)^{\alpha+1}}{\alpha+1},$$

respectively. Hence, by Proposition 3.1 (ii), for any $n = 2,3,\ldots$, we have

$$F_n(x) = \left(\frac{\alpha}{\alpha+1}\right)^{\!n} \left(1 + \frac{n}{\alpha}\right) x^n\, \mathbf{1}_{[0,1)}(x) + \left(1 - \frac{1}{(\alpha+1)x^\alpha}\right)^{\!n-1} \left(1 + \frac{n-1}{(\alpha+1)x^\alpha}\right) \mathbf{1}_{[1,\infty)}(x).$$
Example 3.6 Let $\nu = \gamma_{a,b}$, $a,b>0$, be the Gamma distribution with pdf

$$\gamma_{a,b}(dx) = \frac{b^a}{\Gamma(a)}\, x^{a-1} e^{-bx}\, \mathbf{1}_{(0,\infty)}(x)\, dx.$$

Then the Williamson transform and the truncated $\alpha$-moment of the measure $\nu$ are given by

$$G(x) = \gamma_{a,b}(0,x] - \frac{\Gamma(a+\alpha)}{\Gamma(a)\,b^\alpha}\, x^{-\alpha}\, \gamma_{a+\alpha,b}(0,x] \qquad \text{and} \qquad H(x) = \frac{\Gamma(a+\alpha)}{\Gamma(a)\,b^\alpha}\, \gamma_{a+\alpha,b}(0,x],$$

where $\gamma_{a,b}(0,x] = \frac{b^a}{\Gamma(a)} \int_0^x t^{a-1} e^{-bt}\, dt$. Hence, by Proposition 3.1 (ii), for any $n = 2,3,\ldots$, we have

$$F_n(x) = \left( \gamma_{a,b}(0,x] - \frac{\Gamma(a+\alpha)}{\Gamma(a)\,b^\alpha}\, x^{-\alpha}\, \gamma_{a+\alpha,b}(0,x] \right)^{\!n-1} \left( \gamma_{a,b}(0,x] + \frac{\Gamma(a+\alpha)}{\Gamma(a)}\, (n-1)\, b^{-\alpha} x^{-\alpha}\, \gamma_{a+\alpha,b}(0,x] \right).$$
4 Limit theorems
In this section we investigate limiting behaviors of Kendall random walks and connected continuous-time processes. The analysis is based on inverting the Williamson transform, as given in Proposition 3.1 (ii). Moreover, as shown in Section 2, the Kendall convolution is strongly related to the Pareto distribution. Hence, regular variation techniques play a crucial role in the analysis of the asymptotic behaviors and limit theorems for the processes studied in this section.

We start with the analysis of the asymptotic behavior of the tail distribution of the random variables $X_n$.

Theorem 4.1 Let $\{X_n : n \in \mathbb{N}_0\}$ be a Kendall random walk with parameter $\alpha>0$ and unit-step distribution $\nu \in \mathcal{P}_+$. Then

$$\overline{F}_n(x) = \left( n\, \overline{F}(x) + \tfrac{1}{2}\, n(n-1)\, (H(x))^2\, x^{-2\alpha} \right)(1 + o(1))$$

as $x \to \infty$.
The proof of Theorem 4.1 is presented in Section 5.4.

The following corollary is a direct consequence of Theorem 4.1.

Corollary 4.2 Let $\{X_n : n \in \mathbb{N}_0\}$ be a Kendall random walk with parameter $\alpha>0$ and unit-step distribution $\nu \in \mathcal{P}_+$. Moreover, let

(i) $\overline{F}(x)$ be regularly varying with index $\theta-\alpha$ as $x\to\infty$, where $0<\theta<\alpha$. Then

$$\overline{F}_n(x) = n\, \overline{F}(x)\,(1+o(1)) \qquad \text{as } x\to\infty.$$

(ii) $m_\nu^{(\alpha)} < \infty$. Then

$$\overline{F}_n(x) = n\,\overline{F}(x) + \tfrac{1}{2}\, n(n-1) \left(m_\nu^{(\alpha)}\right)^{\!2} x^{-2\alpha}\, (1+o(1)) \qquad \text{as } x\to\infty.$$

(iii) $\overline{F}(x) = o\left(x^{-2\alpha}\right)$ as $x\to\infty$. Then

$$\overline{F}_n(x) = \tfrac{1}{2}\, n(n-1) \left(m_\nu^{(\alpha)}\right)^{\!2} x^{-2\alpha}\, (1+o(1)) \qquad \text{as } x\to\infty.$$

Remark 4.3 Corollary 4.2 (i) shows that, in the case of a regularly varying step distribution $\nu$, the tail distribution of the random variable $X_n$ is asymptotically equivalent to that of the maximum of $n$ i.i.d. random variables with distribution $\nu$.
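Theorem 4.1 and Corollary 4.2 (ii) can be illustrated on Example 3.4, where every quantity is closed-form: there $H(x) \to m_\nu^{(\alpha)}$ and $\overline{F}(x) = O(x^{-2\alpha})$. The sketch below (ours, not part of the original text) checks that the ratio of the exact tail of $X_n$ to the two-term approximation tends to 1.

```python
import math

m, alpha, n = 1.0, 1.0, 5

def Fbar(x, c):
    # tail of the cdf of Example 3.4 with parameter c
    return 1.0 - (1.0 + c * x ** (-alpha)) * math.exp(-c * x ** (-alpha))

def H(x):
    # truncated alpha-moment of Example 3.4
    return m * math.exp(-m * x ** (-alpha))

for x in (10.0, 100.0, 1000.0):
    exact = Fbar(x, n * m)   # tail of X_n; see F_n in Example 3.4
    approx = n * Fbar(x, m) + 0.5 * n * (n - 1) * H(x) ** 2 * x ** (-2 * alpha)
    print(x, exact / approx)  # the ratio approaches 1 as x grows
```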
In the next proposition we investigate the limit distribution for Kendall random walks in the case of a finite $\alpha$-moment as well as in the case of a regularly varying tail of the unit step. We start with the following observation.

Remark 4.4 Due to Proposition 1 in [4], a random variable $X$ belongs to the domain of attraction of a stable measure with respect to the Kendall convolution if and only if $1-G(t)$ is a regularly varying function at $\infty$.

Notice that $1-G(t)$ is regularly varying whenever the random variable $X$ has a finite $\alpha$-moment or its tail is regularly varying at infinity. The following proposition formalizes this observation, providing formulas for stable distributions with respect to the Kendall convolution.

Proposition 4.5 Let $\{X_n : n \in \mathbb{N}_0\}$ be a Kendall random walk with parameter $\alpha>0$ and unit-step distribution $\nu \in \mathcal{P}_+$.

(i) If $m_\nu^{(\alpha)} < \infty$, then, as $n\to\infty$,

$$n^{-1/\alpha}\, X_n \xrightarrow{d} X,$$

where the cdf of the random variable $X$ is given by

$$\rho_{\nu,\alpha}(0,x] = \left(1 + m_\nu^{(\alpha)}\, x^{-\alpha}\right) e^{-m_\nu^{(\alpha)} x^{-\alpha}}\, \mathbf{1}_{(0,\infty)}(x) \quad (8)$$

and the pdf of $X$ is given by

$$\rho_{\nu,\alpha}(dx) = \alpha \left(m_\nu^{(\alpha)}\right)^{\!2} x^{-2\alpha-1} \exp\{-m_\nu^{(\alpha)} x^{-\alpha}\}\, \mathbf{1}_{(0,\infty)}(x)\, dx. \quad (9)$$

(ii) If $\overline{F}$ is regularly varying as $x\to\infty$ with index $\theta-\alpha$, where $0 \leq \theta < \alpha$, then there exists a sequence $\{a_n\}$, $a_n\to\infty$, such that

$$a_n^{-1}\, X_n \xrightarrow{d} X,$$

where the cdf of the random variable $X$ is given by

$$\rho_{\nu,\alpha,\theta}(0,x] = \left(1 + x^{-(\alpha-\theta)}\right) e^{-x^{-(\alpha-\theta)}}\, \mathbf{1}_{(0,\infty)}(x) \quad (10)$$

and the pdf of $X$ is given by

$$\rho_{\nu,\alpha,\theta}(dx) = (\alpha-\theta)\, x^{-2(\alpha-\theta)-1} \exp\{-x^{-(\alpha-\theta)}\}\, \mathbf{1}_{(0,\infty)}(x)\, dx. \quad (11)$$

The proof of Proposition 4.5 is presented in Section 5.5.
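The mechanism behind Proposition 4.5 (i) is the convergence of Williamson transforms, $[G(n^{1/\alpha}x)]^n \to e^{-m_\nu^{(\alpha)} x^{-\alpha}}$. For $\nu = U(0,1)$, where $m_\nu^{(\alpha)} = 1/(\alpha+1)$ and $G$ has the closed form of Example 3.5, this is visible numerically (our sketch, not part of the original text):

```python
import math

alpha = 2.0
m = 1.0 / (alpha + 1.0)   # alpha-moment of U(0,1)

def G(t):
    # Williamson transform of U(0,1); closed form of Example 3.5.
    c = min(t, 1.0)
    return c - c ** (alpha + 1.0) / ((alpha + 1.0) * t ** alpha)

x = 1.3
for n in (10, 100, 10000):
    print(n, G(n ** (1.0 / alpha) * x) ** n, math.exp(-m * x ** (-alpha)))
```

Here $G(n^{1/\alpha}x) = 1 - m_\nu^{(\alpha)}\, x^{-\alpha}/n$ exactly once $n^{1/\alpha}x \geq 1$, so the $n$-th power converges to the exponential at rate $O(1/n)$.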
Now we define a new stochastic process $\{Z_n(t) : n \in \mathbb{N}_0\}$ connected with the Kendall random walk $\{X_n : n \in \mathbb{N}_0\}$ by

$$Z_n(t) := a_n^{-1}\, X_{[nt]},$$

where $[\cdot]$ denotes the integer part and the sequence $\{a_n\}$ is such that $a_n>0$ and $\lim_{n\to\infty} a_n = \infty$.

In the following theorem we prove convergence of the finite-dimensional distributions of the process $\{Z_n(t)\}$ for an appropriately chosen sequence $\{a_n\}$.

Theorem 4.6 Let $\{X_n : n \in \mathbb{N}_0\}$ be a Kendall random walk with parameter $\alpha>0$ and unit-step distribution $\nu \in \mathcal{P}_+$.

(i) If $m_\nu^{(\alpha)} < \infty$ and $a_n = n^{1/\alpha}(1+o(1))$ as $n\to\infty$, then

$$\{Z_n(t)\} \xrightarrow{fdd} \{Z(t)\},$$

where, for any $0 = t_0 \leq t_1 \leq \ldots \leq t_k$, the finite-dimensional distributions of $\{Z(t)\}$ are given by

$$\mathbb{P}(Z(t_1) \leq z_1, Z(t_2) \leq z_2, \ldots, Z(t_k) \leq z_k) = \sum_{(\epsilon_1,\ldots,\epsilon_k)\in\{0,1\}^k} \prod_{i=1}^{s-1} \Psi\!\left(\frac{z_{\tilde\epsilon_i}}{z_{\tilde\epsilon_{i+1}}}\right) \prod_{j=1}^{k} \left( (t_j-t_{j-1})\, z_j^{-\alpha}\, m_\nu^{(\alpha)} \right)^{\!\epsilon_j} \exp\left\{ -m_\nu^{(\alpha)} \sum_{i=1}^{k} z_i^{-\alpha}\, (t_i - t_{i-1}) \right\}.$$

(ii) If $\overline{F}(\cdot)$ is regularly varying as $x\to\infty$ with index $\theta-\alpha$, where $0\leq\theta<\alpha$, then there exists a sequence $\{a_n\}$, $a_n\to\infty$, such that

$$\{Z_n(t)\} \xrightarrow{fdd} \{Z(t)\},$$

where, for any $0 = t_0 \leq t_1 \leq \ldots \leq t_k$, the finite-dimensional distributions of $\{Z(t)\}$ are given by

$$\mathbb{P}(Z(t_1) \leq z_1, Z(t_2) \leq z_2, \ldots, Z(t_k) \leq z_k) = \sum_{(\epsilon_1,\ldots,\epsilon_k)\in\{0,1\}^k} \prod_{i=1}^{s-1} \Psi\!\left(\frac{z_{\tilde\epsilon_i}}{z_{\tilde\epsilon_{i+1}}}\right) \prod_{j=1}^{k} \left( (t_j-t_{j-1})\, z_j^{\theta-\alpha} \right)^{\!\epsilon_j} \exp\left\{ -\sum_{i=1}^{k} z_i^{\theta-\alpha}\, (t_i - t_{i-1}) \right\},$$

where in both of the above cases we have

$$\prod_{i=1}^{s-1} \Psi\!\left(\frac{z_{\tilde\epsilon_i}}{z_{\tilde\epsilon_{i+1}}}\right) = 1 \qquad \text{for } s \in \{0,1\},$$

with

$$\tilde\epsilon_1 = \min\{i : \epsilon_i = 1\}, \qquad \tilde\epsilon_m := \min\{i > \tilde\epsilon_{m-1} : \epsilon_i = 1\}, \quad m = 2,\ldots,s, \qquad s = \sum_{i=1}^{k} \epsilon_i.$$

The proof of Theorem 4.6 is presented in Section 5.6.
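The limit in Theorem 4.6 (i) is an explicit function and can be evaluated directly. The sketch below (our illustration; all names are ours) computes it for arbitrary $0 \leq t_1 \leq \cdots \leq t_k$; for $k = 1$ it reduces to the cdf (8) with $m_\nu^{(\alpha)}$ replaced by $t_1 m_\nu^{(\alpha)}$, and for equal levels $z_1 = z_2$ the bivariate value collapses to the marginal at $t_2$, reflecting the fact that the paths of the walk are nondecreasing.

```python
import itertools
import math

def psi(r, alpha):
    return max(0.0, 1.0 - r ** alpha)

def limit_fdd(ts, zs, m, alpha):
    """Limit finite-dimensional cdf of Theorem 4.6 (i), with t_0 = 0."""
    k = len(ts)
    expo = math.exp(-m * sum(zs[i] ** (-alpha) * (ts[i] - (ts[i - 1] if i else 0.0))
                             for i in range(k)))
    total = 0.0
    for eps in itertools.product((0, 1), repeat=k):
        ones = [j for j in range(k) if eps[j] == 1]
        term = 1.0
        for i in range(len(ones) - 1):
            term *= psi(zs[ones[i]] / zs[ones[i + 1]], alpha)
        for j in ones:
            term *= (ts[j] - (ts[j - 1] if j else 0.0)) * zs[j] ** (-alpha) * m
        total += term * expo
    return total

# k = 1: the marginal of Z(t) is the cdf (8) with alpha-moment t * m.
m, alpha, t, z = 0.5, 1.0, 2.0, 3.0
u = t * m * z ** (-alpha)
print(limit_fdd([t], [z], m, alpha), (1.0 + u) * math.exp(-u))
```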
5 Proofs
In this section, we present detailed proofs of our results.
5.1 Proof of Proposition 2.7
Due to the independence of the sequences $\{Y_k\}$, $\{\xi_k\}$, and $\{\theta_k\}$, it follows directly from Definition 2.6 that the process $\{X_n\}$ satisfies the Markov property.

Now let $n \in \mathbb{N}$ be fixed. We shall show that, for all $k \in \mathbb{N}$, $A \in \mathcal{B}(\mathbb{R}_+)$, $x \geq 0$, $\alpha>0$, the transition probabilities of the process $\{X_n\}$ are of the form (5). In order to do this we proceed by induction. By Definition 2.6 we have

$$\mathbb{P}(X_n \in A \mid X_{n-1} = x) = \int_0^\infty \mathbb{P}(X_n \in A \mid X_{n-1}=x,\, Y_n=y)\, \nu(dy)$$
$$= \int_0^\infty \left[ \mathbb{P}\!\left( \max(x,y)\,\theta_n \in A,\ \xi_n < \left(\frac{\min(x,y)}{\max(x,y)}\right)^{\!\alpha} \right) + I_A(\max(x,y))\, \mathbb{P}\!\left( \xi_n > \left(\frac{\min(x,y)}{\max(x,y)}\right)^{\!\alpha} \right) \right] \nu(dy).$$

Moreover, by the independence of the random variables $\theta_n$ and $\xi_n$, the above expression is equal to

$$\int_0^\infty \left[ \left(\frac{\min(x,y)}{\max(x,y)}\right)^{\!\alpha} \mathbb{P}(\max(x,y)\,\theta_n \in A) + \left(1 - \left(\frac{\min(x,y)}{\max(x,y)}\right)^{\!\alpha}\right) I_A(\max(x,y)) \right] \nu(dy)$$
$$= \int_0^\infty T_{\max(x,y)} \left[ \left(\frac{\min(x,y)}{\max(x,y)}\right)^{\!\alpha} \pi_{2\alpha} + \left(1 - \left(\frac{\min(x,y)}{\max(x,y)}\right)^{\!\alpha}\right) \delta_1 \right](A)\, \nu(dy)$$
$$= \int_0^\infty \left( T_{\max(x,y)} \left( \delta_{\min(x,y)/\max(x,y)} \vartriangle_\alpha \delta_1 \right) \right)(A)\, \nu(dy) \quad (12)$$
$$= \int_0^\infty (\delta_x \vartriangle_\alpha \delta_y)(A)\, \nu(dy) = (\delta_x \vartriangle_\alpha \nu)(A), \quad (13)$$

where (12) follows from Definition 2.1 and (13) follows from (2). This completes the first step of the induction.

Now, assuming that

$$\mathbb{P}(X_{n+k} \in A \mid X_n = x) = \left(\delta_x \vartriangle_\alpha \nu^{\vartriangle_\alpha k}\right)(A) \quad (14)$$

holds for $k \geq 2$, we establish its validity for $k+1$. Due to the Chapman-Kolmogorov equation for the process $\{X_n\}$ we have

$$\mathbb{P}(X_{n+k+1} \in A \mid X_n = x) = \int_0^\infty \int_A P_1(y,dz)\, P_k(x,dy)$$
$$= \int_0^\infty (\delta_y \vartriangle_\alpha \nu)(A)\ \left(\delta_x \vartriangle_\alpha \nu^{\vartriangle_\alpha k}\right)(dy) \quad (15)$$
$$= \left(\delta_x \vartriangle_\alpha \nu^{\vartriangle_\alpha (k+1)}\right)(A), \quad (16)$$

where (15) follows from (13) and (14), while (16) follows from (2). This completes the induction argument and the proof.
5.2 Proof of Lemma 3.2
First notice that, by Definition 2.1, we have

$$(\delta_x \vartriangle_\alpha \delta_y)((0,t]) = \left(\frac{\min(x,y)}{\max(x,y)}\right)^{\!\alpha} \mathbb{P}(\max(x,y)\,\theta \leq t) + \left(1 - \left(\frac{\min(x,y)}{\max(x,y)}\right)^{\!\alpha}\right) \mathbf{1}_{\{\max(x,y)<t\}}$$
$$= \left(1 - \frac{x^\alpha y^\alpha}{t^{2\alpha}}\right) \mathbf{1}_{\{x<t,\, y<t\}} \quad (17)$$
$$= \left[ \Psi\!\left(\frac{x}{t}\right) + \Psi\!\left(\frac{y}{t}\right) - \Psi\!\left(\frac{x}{t}\right)\Psi\!\left(\frac{y}{t}\right) \right] \mathbf{1}_{\{x<t,\, y<t\}}, \quad (18)$$

where (18) is a direct application of (3). In order to prove (i), observe that by (2) and (18) we have

$$P_n(x,(0,t]) = \int_0^\infty (\delta_x \vartriangle_\alpha \delta_y)((0,t])\, \nu^{\vartriangle_\alpha n}(dy)$$
$$= \int_0^t \left[ \Psi\!\left(\frac{x}{t}\right) + \Psi\!\left(\frac{y}{t}\right) - \Psi\!\left(\frac{x}{t}\right)\Psi\!\left(\frac{y}{t}\right) \right] \mathbf{1}_{\{x<t\}}\, \nu^{\vartriangle_\alpha n}(dy)$$
$$= \left[ \Psi\!\left(\frac{x}{t}\right) F_n(t) + \left(1 - \Psi\!\left(\frac{x}{t}\right)\right) G_n(t) \right] \mathbf{1}_{\{x<t\}}, \quad (19)$$

where (19) holds by Definition 2.3. In order to complete the proof of case (i) it suffices to combine (19) with (4) and Proposition 3.1 (ii).

To prove (ii), observe that integration by parts leads to

$$\int_0^t w^\alpha (\delta_x \vartriangle_\alpha \delta_y)(dw) = t^\alpha (\delta_x \vartriangle_\alpha \delta_y)((0,t]) - \int_0^t \alpha w^{\alpha-1} (\delta_x \vartriangle_\alpha \delta_y)((0,w])\, dw$$
$$= \left( x^\alpha - \frac{2\, x^\alpha y^\alpha}{t^\alpha} + y^\alpha \right) \mathbf{1}_{\{x \vee y < t\}}, \quad (20)$$

where (20) follows from (17). Applying (20) we obtain

$$\int_0^t w^\alpha\, P_n(x,dw) = \int_0^\infty \int_0^t w^\alpha (\delta_x \vartriangle_\alpha \delta_y)(dw)\, \nu^{\vartriangle_\alpha n}(dy)$$
$$= \int_0^t \left( x^\alpha - \frac{2\, x^\alpha y^\alpha}{t^\alpha} + y^\alpha \right) \nu^{\vartriangle_\alpha n}(dy)\ \mathbf{1}_{\{x<t\}}$$
$$= \left( x^\alpha F_n(t) - \frac{2\, x^\alpha}{t^\alpha} H_n(t) + H_n(t) \right) \mathbf{1}_{\{x<t\}}, \quad (21)$$

where $H_n(t) := \int_0^t y^\alpha\, \nu^{\vartriangle_\alpha n}(dy) = t^\alpha (F_n(t) - G_n(t))$ by (6). Finally, the proof of case (ii) is completed by combining (21) with Proposition 3.1 (ii).
5.3 Proof of Lemma 3.3
Let $k=1$. Then by Lemma 3.2 we obtain

$$\int_0^{x_1} \Psi\!\left(\frac{y_1}{x_2}\right) P_{n_1}(y_0, dy_1) = P_{n_1}(y_0,(0,x_1]) - x_2^{-\alpha} \int_0^{x_1} y_1^\alpha\, P_{n_1}(y_0, dy_1)$$
$$= \left[ \Psi\!\left(\frac{y_0}{x_2}\right) G^{n_1}(x_1) + \frac{n_1}{x_1^\alpha}\, G^{n_1-1}(x_1)\, H(x_1)\, \Psi\!\left(\frac{y_0}{x_1}\right) \Psi\!\left(\frac{x_1}{x_2}\right) \right] \mathbf{1}_{\{y_0<x_1\}},$$
which ends the first step of the induction.

Now assume that the formula holds for $k \in \mathbb{N}$. We shall establish its validity for $k+1$. Let

$$\tilde\eta_1 := \min\{i \geq 2 : \epsilon_i = 1\}, \qquad \tilde\eta_m := \min\{i > \tilde\eta_{m-1} : \epsilon_i = 1\}, \quad m = 2,\ldots,s_2, \qquad s_2 := \sum_{i=2}^{k+1} \epsilon_i.$$

Moreover, let

$$\mathcal{A}^0_{k+1} := \{(0,\epsilon_2,\epsilon_3,\ldots,\epsilon_{k+1}) \in \{0,1\}^{k+1} : (\epsilon_2,\epsilon_3,\ldots,\epsilon_{k+1}) \in \mathcal{A}_k\},$$
$$\mathcal{A}^1_{k+1} := \{(1,\epsilon_2,\epsilon_3,\ldots,\epsilon_{k+1}) \in \{0,1\}^{k+1} : (\epsilon_2,\epsilon_3,\ldots,\epsilon_{k+1}) \in \mathcal{A}_k\}.$$
By splitting $\mathcal{A}_{k+1}$ into the four subfamilies $\mathcal{A}^0_{k+1}$, $\mathcal{A}^1_{k+1}$, $\{(1,0,\ldots,0)\}$, and $\{(0,\ldots,0)\}$, and applying the formula for $k$ together with the first induction step, we obtain

$$\int_0^{x_1}\!\!\int_0^{x_2}\!\!\cdots\!\!\int_0^{x_{k+1}} \Psi\!\left(\frac{y_{k+1}}{x_{k+2}}\right) P_{n_{k+1}-n_k}(y_k, dy_{k+1}) \cdots P_{n_2-n_1}(y_1, dy_2)\, P_{n_1}(y_0, dy_1)$$
$$= \sum_{(\epsilon_2,\ldots,\epsilon_{k+1})\in\mathcal{A}_k} \Psi\!\left(\frac{x_{\tilde\eta_{s_2}}}{x_{k+2}}\right) \prod_{i=1}^{s_2-1} \Psi\!\left(\frac{x_{\tilde\eta_i}}{x_{\tilde\eta_{i+1}}}\right) \prod_{j=2}^{k+1} (G(x_j))^{n_j-n_{j-1}-\epsilon_j} \left( \frac{(n_j-n_{j-1})H(x_j)}{x_j^\alpha} \right)^{\!\epsilon_j} \int_0^{x_1} \Psi\!\left(\frac{y_1}{x_{\tilde\eta_1}}\right) P_{n_1}(y_0,dy_1)$$
$$\qquad + \prod_{j=2}^{k+1} (G(x_j))^{n_j-n_{j-1}} \int_0^{x_1} \Psi\!\left(\frac{y_1}{x_{k+2}}\right) P_{n_1}(y_0,dy_1)$$
$$= S[\mathcal{A}^0_{k+1}] + S[\mathcal{A}^1_{k+1}] + S[\{(1,0,\ldots,0)\}] + S[\{(0,\ldots,0)\}], \quad (22)$$
where

$$S[\mathcal{A}^0_{k+1}] = \sum_{(0,\epsilon_2,\ldots,\epsilon_{k+1})\in\mathcal{A}^0_{k+1}} \Psi\!\left(\frac{y_0}{x_{\tilde\eta_1}}\right) \Psi\!\left(\frac{x_{\tilde\eta_{s_2}}}{x_{k+2}}\right) \prod_{i=1}^{s_2-1} \Psi\!\left(\frac{x_{\tilde\eta_i}}{x_{\tilde\eta_{i+1}}}\right) (G(x_1))^{n_1} \prod_{j=2}^{k+1} (G(x_j))^{n_j-n_{j-1}-\epsilon_j} \left( \frac{(n_j-n_{j-1})H(x_j)}{x_j^\alpha} \right)^{\!\epsilon_j},$$

$$S[\mathcal{A}^1_{k+1}] = \sum_{(1,\epsilon_2,\ldots,\epsilon_{k+1})\in\mathcal{A}^1_{k+1}} \Psi\!\left(\frac{y_0}{x_1}\right) \Psi\!\left(\frac{x_{\tilde\eta_{s_2}}}{x_{k+2}}\right) \prod_{i=1}^{s_2-1} \Psi\!\left(\frac{x_{\tilde\eta_i}}{x_{\tilde\eta_{i+1}}}\right) \Psi\!\left(\frac{x_1}{x_{\tilde\eta_1}}\right) \frac{n_1}{x_1^\alpha} (G(x_1))^{n_1-1} H(x_1) \prod_{j=2}^{k+1} (G(x_j))^{n_j-n_{j-1}-\epsilon_j} \left( \frac{(n_j-n_{j-1})H(x_j)}{x_j^\alpha} \right)^{\!\epsilon_j},$$

$$S[\{(1,0,\ldots,0)\}] = \frac{n_1}{x_1^\alpha}\, G^{n_1-1}(x_1)\, H(x_1)\, \Psi\!\left(\frac{y_0}{x_1}\right) \Psi\!\left(\frac{x_1}{x_{k+2}}\right) \prod_{j=2}^{k+1} (G(x_j))^{n_j-n_{j-1}},$$

$$S[\{(0,\ldots,0)\}] = \Psi\!\left(\frac{y_0}{x_{k+2}}\right) G(x_1)^{n_1} \prod_{j=2}^{k+1} (G(x_j))^{n_j-n_{j-1}}.$$
Observe that for any sequence $(0,\epsilon_2,\epsilon_3,\ldots,\epsilon_{k+1}) \in \mathcal{A}^0_{k+1}$ we have $(\tilde\epsilon_1,\tilde\epsilon_2,\ldots,\tilde\epsilon_{s_1}) = (\tilde\eta_1,\tilde\eta_2,\ldots,\tilde\eta_{s_2})$ with $s_1 = s_2$, which implies that

$$\Psi\!\left(\frac{y_0}{x_{\tilde\eta_1}}\right) \Psi\!\left(\frac{x_{\tilde\eta_{s_2}}}{x_{k+2}}\right) \prod_{i=1}^{s_2-1} \Psi\!\left(\frac{x_{\tilde\eta_i}}{x_{\tilde\eta_{i+1}}}\right) = \Psi\!\left(\frac{y_0}{x_{\tilde\epsilon_1}}\right) \Psi\!\left(\frac{x_{\tilde\epsilon_{s_1}}}{x_{k+2}}\right) \prod_{i=1}^{s_1-1} \Psi\!\left(\frac{x_{\tilde\epsilon_i}}{x_{\tilde\epsilon_{i+1}}}\right).$$

Moreover,

$$(G(x_1))^{n_1} \prod_{j=2}^{k+1} (G(x_j))^{n_j-n_{j-1}-\epsilon_j} \left( \frac{(n_j-n_{j-1})H(x_j)}{x_j^\alpha} \right)^{\!\epsilon_j} = \prod_{j=1}^{k+1} (G(x_j))^{n_j-n_{j-1}-\epsilon_j} \left( \frac{(n_j-n_{j-1})H(x_j)}{x_j^\alpha} \right)^{\!\epsilon_j}.$$

Hence

$$S[\mathcal{A}^0_{k+1}] = \sum_{(0,\epsilon_2,\ldots,\epsilon_{k+1})\in\mathcal{A}^0_{k+1}} \Psi\!\left(\frac{y_0}{x_{\tilde\epsilon_1}}\right) \Psi\!\left(\frac{x_{\tilde\epsilon_{s_1}}}{x_{k+2}}\right) \prod_{i=1}^{s_1-1} \Psi\!\left(\frac{x_{\tilde\epsilon_i}}{x_{\tilde\epsilon_{i+1}}}\right) \prod_{j=1}^{k+1} (G(x_j))^{n_j-n_{j-1}-\epsilon_j} \left( \frac{(n_j-n_{j-1})H(x_j)}{x_j^\alpha} \right)^{\!\epsilon_j}. \quad (23)$$
Analogously, for any sequence $(1,\epsilon_2,\epsilon_3,\ldots,\epsilon_{k+1}) \in \mathcal{A}^1_{k+1}$ we have $(\tilde\epsilon_1,\tilde\epsilon_2,\ldots,\tilde\epsilon_{s_1}) = (1,\tilde\eta_1,\tilde\eta_2,\ldots,\tilde\eta_{s_2})$ with $s_1 = s_2+1$, which implies that

$$\Psi\!\left(\frac{y_0}{x_1}\right) \Psi\!\left(\frac{x_1}{x_{\tilde\eta_1}}\right) \Psi\!\left(\frac{x_{\tilde\eta_{s_2}}}{x_{k+2}}\right) \prod_{i=1}^{s_2-1} \Psi\!\left(\frac{x_{\tilde\eta_i}}{x_{\tilde\eta_{i+1}}}\right) = \Psi\!\left(\frac{y_0}{x_{\tilde\epsilon_1}}\right) \Psi\!\left(\frac{x_1}{x_{\tilde\eta_1}}\right) \Psi\!\left(\frac{x_{\tilde\epsilon_{s_1}}}{x_{k+2}}\right) \prod_{i=1}^{s_1-2} \Psi\!\left(\frac{x_{\tilde\epsilon_{i+1}}}{x_{\tilde\epsilon_{i+2}}}\right)$$
$$= \Psi\!\left(\frac{y_0}{x_{\tilde\epsilon_1}}\right) \Psi\!\left(\frac{x_{\tilde\epsilon_{s_1}}}{x_{k+2}}\right) \prod_{i=1}^{s_1-1} \Psi\!\left(\frac{x_{\tilde\epsilon_i}}{x_{\tilde\epsilon_{i+1}}}\right), \quad (24)$$

where (24) is a consequence of

$$\Psi\!\left(\frac{x_1}{x_{\tilde\eta_1}}\right) \prod_{i=1}^{s_1-2} \Psi\!\left(\frac{x_{\tilde\epsilon_{i+1}}}{x_{\tilde\epsilon_{i+2}}}\right) = \prod_{i=1}^{s_1-1} \Psi\!\left(\frac{x_{\tilde\epsilon_i}}{x_{\tilde\epsilon_{i+1}}}\right).$$

Moreover,

$$\frac{n_1}{x_1^\alpha} (G(x_1))^{n_1-1} H(x_1) \prod_{j=2}^{k+1} (G(x_j))^{n_j-n_{j-1}-\epsilon_j} \left( \frac{(n_j-n_{j-1})H(x_j)}{x_j^\alpha} \right)^{\!\epsilon_j} = \prod_{j=1}^{k+1} (G(x_j))^{n_j-n_{j-1}-\epsilon_j} \left( \frac{(n_j-n_{j-1})H(x_j)}{x_j^\alpha} \right)^{\!\epsilon_j}.$$

Hence

$$S[\mathcal{A}^1_{k+1}] = \sum_{(1,\epsilon_2,\ldots,\epsilon_{k+1})\in\mathcal{A}^1_{k+1}} \Psi\!\left(\frac{y_0}{x_{\tilde\epsilon_1}}\right) \Psi\!\left(\frac{x_{\tilde\epsilon_{s_1}}}{x_{k+2}}\right) \prod_{i=1}^{s_1-1} \Psi\!\left(\frac{x_{\tilde\epsilon_i}}{x_{\tilde\epsilon_{i+1}}}\right) \prod_{j=1}^{k+1} (G(x_j))^{n_j-n_{j-1}-\epsilon_j} \left( \frac{(n_j-n_{j-1})H(x_j)}{x_j^\alpha} \right)^{\!\epsilon_j}. \quad (25)$$

Similarly,

$$S[\{(1,0,\ldots,0)\}] = \Psi\!\left(\frac{y_0}{x_{\tilde\epsilon_1}}\right) \Psi\!\left(\frac{x_{\tilde\epsilon_{s_1}}}{x_{k+2}}\right) \frac{n_1 H(x_1)}{x_1^\alpha} \prod_{j=1}^{k+1} (G(x_j))^{n_j-n_{j-1}-\epsilon_j} \quad (26)$$
and, due to the fact that $n_0 = 0$, we have

$$S[\{(0,\ldots,0)\}] = \Psi\!\left(\frac{y_0}{x_{k+2}}\right) \prod_{j=1}^{k+1} (G(x_j))^{n_j-n_{j-1}}. \quad (27)$$

Finally, by combining (23), (25), (26), and (27) with (22), we obtain

$$\int_0^{x_1}\!\!\int_0^{x_2}\!\!\cdots\!\!\int_0^{x_{k+1}} \Psi\!\left(\frac{y_{k+1}}{x_{k+2}}\right) P_{n_{k+1}-n_k}(y_k, dy_{k+1}) \cdots P_{n_2-n_1}(y_1, dy_2)\, P_{n_1}(y_0, dy_1)$$
$$= \sum_{(\epsilon_1,\ldots,\epsilon_{k+1})\in\mathcal{A}_{k+1}} \Psi\!\left(\frac{y_0}{x_{\tilde\epsilon_1}}\right) \Psi\!\left(\frac{x_{\tilde\epsilon_{s_1}}}{x_{k+2}}\right) \prod_{i=1}^{s_1-1} \Psi\!\left(\frac{x_{\tilde\epsilon_i}}{x_{\tilde\epsilon_{i+1}}}\right) \prod_{j=1}^{k+1} (G(x_j))^{n_j-n_{j-1}-\epsilon_j} \left( \frac{(n_j-n_{j-1})H(x_j)}{x_j^\alpha} \right)^{\!\epsilon_j} + \Psi\!\left(\frac{y_0}{x_{k+2}}\right) \prod_{j=1}^{k+1} (G(x_j))^{n_j-n_{j-1}}.$$

This completes the induction argument and the proof.
5.4 Proof of Theorem 4.1
Due to Proposition 3.1, for any $n \geq 2$, we have

$$F_n(x) = \left( F(x) - \frac{1}{x^\alpha} H(x) \right)^{\!n-1} \left( F(x) + \frac{n-1}{x^\alpha} H(x) \right)$$
$$= (F(x))^n + n \sum_{k=1}^{n-1} (-1)^{k-1} \binom{n-1}{k-1} \frac{k-1}{k} \left( \frac{H(x)}{x^\alpha} \right)^{\!k} (F(x))^{n-k} + (-1)^{n-1} (n-1) \left( \frac{H(x)}{x^\alpha} \right)^{\!n}, \quad (28)$$

where (28) follows from the observation that, for any $a \geq 0$ and $n \geq 2$, we have

$$(1-a)^{n-1} (1 + a(n-1)) = 1 + n \sum_{k=1}^{n-1} (-1)^{k-1} \binom{n-1}{k-1} \frac{k-1}{k}\, a^k + (-1)^{n-1} (n-1)\, a^n.$$

Thus

$$\overline{F}_n(x) = I_1 + I_2,$$

where

$$I_1 = 1 - (F(x))^n = n\, \overline{F}(x)(1+o(1))$$

as $x\to\infty$, and

$$I_2 = n \sum_{k=1}^{n-1} (-1)^{k} \binom{n-1}{k-1} \frac{k-1}{k} \left( \frac{H(x)}{x^\alpha} \right)^{\!k} (F(x))^{n-k} + (-1)^{n} (n-1) \left( \frac{H(x)}{x^\alpha} \right)^{\!n}.$$

Note that $\lim_{x\to\infty} H(x)/x^\alpha = 0$ for any measure $\nu \in \mathcal{P}_+$, and hence

$$n \sum_{k=3}^{n-1} (-1)^{k} \binom{n-1}{k-1} \frac{k-1}{k} \left( \frac{H(x)}{x^\alpha} \right)^{\!k} (F(x))^{n-k} + (-1)^{n} (n-1) \left( \frac{H(x)}{x^\alpha} \right)^{\!n} = o\!\left( \frac{1}{2}\, n(n-1) \left( \frac{H(x)}{x^\alpha} \right)^{\!2} \right)$$

as $x\to\infty$. This completes the proof.
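The binomial identity behind (28) is easy to confirm numerically (our check, not part of the original text):

```python
from math import comb

def lhs(a, n):
    return (1.0 - a) ** (n - 1) * (1.0 + a * (n - 1))

def rhs(a, n):
    s = sum((-1) ** (k - 1) * comb(n - 1, k - 1) * (k - 1) / k * a ** k
            for k in range(1, n))
    return 1.0 + n * s + (-1) ** (n - 1) * (n - 1) * a ** n

for n in (2, 3, 7):
    for a in (0.1, 0.5, 0.9):
        assert abs(lhs(a, n) - rhs(a, n)) < 1e-9
print("identity verified")
```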
5.5 Proof of Proposition 4.5
The following lemma plays a crucial role in the further analysis.

Lemma 5.1 Let $H \in RV_\theta$ with $0<\theta<\alpha$. Then there exists a sequence $\{a_n\}$ such that

$$\frac{H(a_n)}{a_n^\alpha} = \frac{1}{n}(1+o(1))$$

as $n\to\infty$.

Proof. First, observe that $W(x) := x^\alpha / H(x)$ is a regularly varying function with index $\alpha-\theta$ as $x\to\infty$. Then, due to Theorem 1.5.12 in [6], there exists an increasing function $V(x)$ such that $W(V(x)) = x(1+o(1))$ as $x\to\infty$. Now, in order to complete the proof, it suffices to take $a_n = V(n)$.

Proof of Proposition 4.5. Using (4) and (6), the Williamson transform of $a_n^{-1} X_n$ is given by

$$\left[ G(a_n x) \right]^n = \left[ F(a_n x) - \frac{x^{-\alpha}}{a_n^\alpha}\, H(a_n x) \right]^n. \quad (29)$$

In order to prove (i), observe that under the assumption of finiteness of $m_\nu^{(\alpha)}$ we have

$$\lim_{n\to\infty} H(n^{1/\alpha} x) = m_\nu^{(\alpha)} \qquad \text{and} \qquad \lim_{n\to\infty} F(n^{1/\alpha} x) = 1,$$

which, by (29), yields

$$\lim_{n\to\infty} \left[ G(n^{1/\alpha} x) \right]^n = e^{-m_\nu^{(\alpha)} x^{-\alpha}}.$$

Due to Proposition 3.1 (i), there exists a uniquely determined random variable $X$ with cdf (8) and pdf (9) such that $e^{-m_\nu^{(\alpha)} x^{-\alpha}}$ is its Williamson transform. This completes the proof of case (i).

In order to prove (ii), notice that, due to Theorem 1.5.8 in [6], $\overline{F} \in RV_{\theta-\alpha}$ implies $H \in RV_\theta$. Hence, for any $x>0$, we have

$$H(a_n x) = x^\theta\, H(a_n)(1+o(1)).$$

Moreover, by Lemma 5.1 we can choose a sequence $\{a_n\}$ such that

$$\frac{H(a_n)}{a_n^\alpha} = \frac{1}{n}(1+o(1))$$

as $n\to\infty$. Thus,

$$\lim_{n\to\infty} \left[ G(a_n x) \right]^n = \lim_{n\to\infty} \left[ F(a_n x) - \frac{x^{-\alpha}}{a_n^\alpha}\, H(a_n x) \right]^n = e^{-x^{-(\alpha-\theta)}}.$$

Due to Proposition 3.1 (i), there exists a random variable $X$ with cdf (10) and pdf (11) such that $e^{-x^{-(\alpha-\theta)}}$ is its Williamson transform. This completes the proof.
5.6 Proof of Theorem 4.6
Proof. Let 0 =: t0t1t2 · · · tk, where kN. By Theorem 3.4, the distribution of
(Zn(t1), Zn(t2),··· , Zn(tk)) is given by
P(Zn(t1)z1, Zn(t2)z2,··· , Zn(tk)zk)
=PX[nt1]anz1, X[nt2]anz2,··· , X[ntk]anzk
=X
(ǫ12,··· k)∈{0,1}k
s1
Y
i=1
Ψz˜ǫi
z˜ǫi+1 k
Y
j=1
(G(anzj))[ntj][ntj1]ǫj ([ntj][ntj1])H(anzj)
aα
nzα
j!ǫj
,(30)
where
s1
Y
i=1
Ψz˜ǫi
z˜ǫi+1 = 1 for s∈ {0,1}.
In analogous way to the proof of Theorem 4.5 (i) we obtain
lim
n→∞ G(anzj)[ntj][ntj1]ǫj= lim
n→∞ F(anzj)H(anzj)
(anzj)α[ntj][ntj1]ǫj
= exp nm(α)
ν(tjtj1)zθα
jo.
(31)
19
and
lim
n→∞ ([ntj][ntj1])H(anzj)
aα
nzα
j!ǫj
= (tjtj1)zα
jm(α)
ν(32)
In order to complete the proof of (i) it sufﬁces to pass with n→ ∞ in (30) applying (31) and (32).
In order to prove (ii), notice that, similarly to the proof of Proposition 4.5, for any $t_j, z_j > 0$ and $1 \leq j \leq k$, we obtain
$$\lim_{n\to\infty} G(a_n z_j)^{[nt_j] - [nt_{j-1}] - \epsilon_j} = \lim_{n\to\infty} \left[ F(a_n z_j) - \frac{H(a_n z_j)}{(a_n z_j)^\alpha} \right]^{[nt_j] - [nt_{j-1}] - \epsilon_j} = \exp\!\left\{ -(t_j - t_{j-1})\, z_j^{\theta - \alpha} \right\} \qquad (33)$$
and
$$\lim_{n\to\infty} \left( \left( [nt_j] - [nt_{j-1}] \right) \frac{H(a_n z_j)}{a_n^\alpha z_j^\alpha} \right)^{\epsilon_j} = \left( (t_j - t_{j-1})\, z_j^{\theta - \alpha} \right)^{\epsilon_j}. \qquad (34)$$
In order to complete the proof it suffices to pass to the limit as $n \to \infty$ in (30), applying (33) and (34).
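Similarly, (34) can be checked on the heavy-tailed toy example from case (ii) (our choice of parameters): with $\alpha = 2$, $\theta = 1$, $H(x) = x - 1$, and $a_n$ an exact root of $H(a_n)/a_n^\alpha = 1/n$, the factor converges to $(t_j - t_{j-1})\, z_j^{\theta - \alpha}$:

```python
import math

# Illustration only (our example): alpha = 2, theta = 1, H(x) = x - 1.
# Choose a_n as the larger root of (a - 1)/a^2 = 1/n, i.e. a^2 - n*a + n = 0,
# so that H(a_n)/a_n^alpha = 1/n holds exactly.
alpha, theta, n = 2.0, 1.0, 10**7
a_n = (n + math.sqrt(n * n - 4.0 * n)) / 2.0
for (s, t, z) in [(0.0, 1.0, 2.0), (1.0, 3.0, 0.5)]:
    lhs = (math.floor(n * t) - math.floor(n * s)) * (a_n * z - 1.0) / (a_n * z) ** alpha
    assert abs(lhs - (t - s) * z ** (theta - alpha)) < 1e-3
```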
Acknowledgements. B. Jasiulis-Gołdyn and E. Omey were supported by "First order Kendall maximal
autoregressive processes and their applications", within the POWROTY/REINTEGRATION programme
of the Foundation for Polish Science co-ﬁnanced by the European Union under the European Regional
Development Fund.
References
[1] M.T. Alpuim, N.A. Catkan, J. Hüsler, Extremes and clustering of non-stationary max-AR(1) sequences. Stoch. Proc. Appl., 56, 171–184, 1995.
[2] B. C. Arnold, Pareto Processes. Stochastic Processes: Theory and Methods. Handbook of Statistics, 19,
1–33, 2001.
[3] B. C. Arnold, Pareto Distributions. Monographs on Statistics and Applied Probability, 140, Taylor &
Francis Group, 2015.
[4] N. H. Bingham, Factorization theory and domains of attraction for generalized convolution algebras. Proc. London Math. Soc., 23(4), 16–30, 1971.
[5] N. H. Bingham, On a theorem of Kłosowska about generalized convolutions, Coll. Math. 48(1), 117–
125, 1984.
[6] N. H. Bingham, C. M. Goldie and J. L. Teugels, Regular variation. Cambridge University Press, Cambridge, 1987.
[7] M. Borowiecka-Olszewska, B.H. Jasiulis-Gołdyn, J.K. Misiewicz, J. Rosiński, Lévy processes and stochastic integrals in the sense of generalized convolutions. Bernoulli, 21(4), 2513–2551, 2015.
[8] P. Embrechts, C. Klüppelberg, T. Mikosch, Modelling Extremal Events for Insurance and Finance.
Springer, Berlin, 1997.
[9] M. Ferreira, On the extremal behavior of a Pareto process: an alternative for ARMAX modeling.
Kybernetika 48(1), 31–49, 2012.
[10] M. Ferreira, L. Canto e Castro, Modeling rare events through a pRARMAX process. Journal of Statistical Planning and Inference, 140, 3552–3566, 2010.
[11] C. Genest, J. Nešlehová, L.-P. Rivest, The class of multivariate max-id copulas with l1-norm symmetric exponent measure. Bernoulli, 24(4B), 3751–3790, 2018.
[12] J. Gilewski, Generalized convolutions and delphic semigroups. Coll. Math.,25, 281–289, 1972.
[13] J. Gilewski, K. Urbanik, Generalized convolutions and generating functions. Bull. Acad. Sci. Polon.
Ser. Math. Astr. Phys.,16, 481–487, 1968.
[14] B.H. Jasiulis-Gołdyn, Kendall random walks. Probab. Math. Stat.,36(1), 165–185, 2016.
[15] B.H. Jasiulis-Gołdyn, A. Kula, The Urbanik generalized convolutions in the non-commutative probability and a forgotten method of constructing generalized convolution. Proceedings - Math. Sci., 122(3), 437–458, 2012.
[16] B. H. Jasiulis-Gołdyn, J. K. Misiewicz, On the Uniqueness of the Kendall Generalized Convolution. J. Theor. Probab., 24(3), 746–755, 2011.
[17] B. H. Jasiulis-Gołdyn, J. K. Misiewicz, Classical definitions of the Poisson process do not coincide in the case of weak generalized convolution. Lith. Math. J., 55(4), 518–542, 2015.
[18] B. H. Jasiulis-Gołdyn, J. K. Misiewicz, Kendall random walk, Williamson transform and the corresponding Wiener-Hopf factorization. Lith. Math. J., 57(4), 479–489, 2017.
[19] B. H. Jasiulis-Gołdyn, K. Naskręt, J.K. Misiewicz, E. Omey, Renewal theory for extremal Markov sequences of the Kendall type, to appear: Stoch. Proc. Appl., 2019.
[20] D. G. Kendall, Delphic semi-groups, inﬁnitely divisible regenerative phenomena, and the arithmetic
of p-functions. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete,9(3), 163–195, 1968.
[21] J.F.C. Kingman, Random Walks with Spherical Symmetry. Acta Math.,109(1), 11–53, 1963.
[22] M. Larsson, J. Nešlehová, Extremal behavior of Archimedean copulas. Adv. Appl. Probab., 43, 195–216, 2011.
[23] P.A.W. Lewis, Ed McKenzie, Miniﬁcation Processes and Their Transformations. Journal of Applied
Probability,28(1), 45–57, 1991.
[24] J. Lopez-Diaz, M. Angeles Gil, P. Grzegorzewski, O. Hryniewicz, J. Lawry, Soft Methodology and Random Information Systems. Advances in Intelligent and Soft Computing, Springer, 2004.
[25] A.J. McNeil, J. Nešlehová, From Archimedean to Liouville Copulas, J. Multivariate Anal. 101(8), 1771–
1790, 2010.
[26] A.J. McNeil, J. Nešlehová, Multivariate Archimedean Copulas, d-monotone Functions and l1-norm Symmetric Distributions. Ann. Statist., 37(5B), 3059–3097, 2009.
[27] J. Misiewicz, Generalized convolutions and the Levi-Civita functional equation. Aequationes Mathematicae, 92(5), 911–933, 2018.
[28] J. Misiewicz, V. Volkovich, Symmetric weakly-stable random vector is pseudo-isotropic, to appear: J.
Math. Anal. Appl., 2019.
[29] K. Urbanik, Generalized convolutions I-V. Studia Math.,23(1964), 217–245, 45(1973), 57–70, 80(1984),
167–189, 83(1986), 57–95, 91(1988), 153–178.
[30] R.E. Williamson, Multiply monotone functions and their Laplace transforms. Duke Math. J., 23, 189–207, 1956.
[31] H. Ch. Yeh, B.C. Arnold, C.A. Robertson, Pareto Processes. Journal of Applied Probability, 25(2), 291–301, 1988.