
arXiv:1901.05698v2 [math.PR] 9 Oct 2019

Asymptotic properties of extremal Markov processes driven by Kendall convolution

Marek Arendarczyk1, Barbara Jasiulis-Gołdyn2, Edward Omey3

1,2Mathematical Institute, University of Wrocław,

pl. Grunwaldzki 2/4, 50-384 Wrocław, Poland.

3Faculty of Economics and Business-Campus Brussels,

KU Leuven, Warmoesberg 26, 1000 Brussels, Belgium

E-mail: 1marendar@math.uni.wroc.pl,

2jasiulis@math.uni.wroc.pl,

3edward.omey@kuleuven.be

October 10, 2019

Contents

1 Introduction
2 Stochastic representation and basic properties of Kendall random walk
3 Finite dimensional distributions
4 Limit theorems
5 Proofs
  5.1 Proof of Proposition 2.7
  5.2 Proof of Lemma 3.2
  5.3 Proof of Lemma 3.3
  5.4 Proof of Theorem 4.1
  5.5 Proof of Proposition 4.5
  5.6 Proof of Theorem 4.6
References

Abstract

This paper is devoted to the analysis of the finite-dimensional distributions and asymptotic behavior of extremal Markov processes connected to the Kendall convolution. In particular, based on its stochastic representation, we provide a general formula for the finite-dimensional distributions of the random walk driven by the Kendall convolution for a large class of step-size distributions. Moreover, we prove limit theorems for these random walks and for a related continuous-time stochastic process.

Key words: Markov process; Extremes; Kendall convolution; Regular variation; Williamson transform; Limit theorems; Exact asymptotics

Mathematics Subject Classification: 60G70, 60F05, 44A35, 60J05, 60J35, 60E07, 60E10.

1 Introduction

The Kendall convolution, the main building block in the construction of the extremal Markov process called the Kendall random walk {Xn : n ∈ N0}, is an important example of a generalization of the convolutions corresponding to the classical sum and the classical maximum. Originated by Urbanik [29] (see also [21]), generalized convolutions have regained popularity in recent years (see, e.g., [7, 19, 28] and references therein). In this paper we focus on the Kendall convolution (see, e.g., [16, 27]), which, thanks to its connections to heavy-tailed distributions, the Williamson transform [18], Archimedean copulas, renewal theory [19], non-commutative probability [15], and Delphic semi-groups [12, 13, 20], presents high potential for applicability. We refer to [7] for the definition and a detailed description of the basics of the theory of generalized convolutions, for a survey of the most important classes of generalized convolutions, and for a discussion of Lévy processes and stochastic integrals based on such convolutions, as well as to [14] for the definition and basic properties of Kendall random walks.

Our main goal is to study the finite-dimensional distributions and asymptotic properties of extremal Markov processes connected to the Kendall convolution. In particular, we present many examples of finite-dimensional distributions expressed through the unit-step characteristics, which create a new class of heavy-tailed distributions with potential applications. Innovative thinking about possible applications comes from the fact that the generalized convolution of two point-mass probability measures can be a non-degenerate probability measure. In particular, the Kendall convolution of two probability measures concentrated at 1 is a Pareto distribution, and consequently it generates heavy-tailed distributions. In this context the theory of regularly varying functions (see, e.g., [4, 5]) plays a crucial role. In this paper, regular variation techniques are used to investigate the asymptotic behavior of Kendall random walks and the convergence of the finite-dimensional distributions of continuous-time stochastic processes constructed from Kendall random walks. Most of the proofs presented in this paper are also based on the Williamson transform, which is a generalized characteristic function of a probability measure in the Kendall algebra (see, e.g., [7, 30]). The great advantage of the Williamson transform is that it is easy to invert. It is also worth mentioning that this kind of transform generates Archimedean copulas [25, 26] and is used to compute radial measures for reciprocal Archimedean copulas [11]. Asymptotic properties of the Williamson transform in the context of Archimedean copulas and extreme value theory were given in [22]. In this context we believe that the results on the Williamson transform obtained in the present paper may be applicable in copula theory, which can be an interesting topic for future research.

We start, in Section 2, by presenting the definition and basic properties of the Kendall convolution. Next, the definition and construction of the random walk under the Kendall convolution is presented, which leads to a stochastic representation of the process {Xn}. The basic properties of the transition probability kernel are also proved. We refer to [14, 17, 18, 19] for discussions and proofs of further properties of the process {Xn}. The structure of the processes considered here (see Definition 2.6) is similar to the first-order autoregressive maximal Pareto processes [2, 3, 24, 31], max-AR(1) sequences [1], minification processes [23, 24], the max-autoregressive moving average processes MARMA [9], and the pARMAX and pRARMAX processes [10]. Since Kendall random walks form a class of extremal Markov chains, we believe that studying them will yield an important contribution to extreme value theory. Additionally, the sequences considered in this paper have interesting dependence relationships between factors, which also justifies their potential applicability.

Section 3 is devoted to the analysis of the finite-dimensional distributions of the process {Xn}. We derive a general formula and present some important examples of processes with different types of step distributions, which lead to new classes of heavy-tailed distributions.

In Section 4 we study different limiting behaviors of Kendall convolutions and related processes. We present asymptotic properties of random walks under the Kendall convolution using regular variation techniques. In particular, we show asymptotic equivalence between the Kendall convolution and the maximum convolution in the case of a regularly varying step distribution ν. The result shows the connection with classical extreme value theory (see, e.g., [8]) and suggests possible applications of the Kendall random walk in modelling phenomena where independence between events cannot be assumed. Moreover, limit theorems for Kendall random walks are given in the case of a finite α-moment as well as in the case of a regularly varying tail of the unit-step distribution. Finally, we define a continuous-time stochastic process based on the random walk under the Kendall convolution and prove convergence of its finite-dimensional distributions using regular variation techniques.

Notation.

Throughout this paper, the distribution of a random element X is denoted by L(X). By P+ we denote the family of all probability measures on the Borel σ-algebra B(R+) with R+ := [0, ∞). For a probability measure λ ∈ P+ and a ∈ R+ the rescaling operator is given by T_a λ = L(aX) if λ = L(X). The set of all natural numbers including zero is denoted by N0. Additionally, we write π_{2α}, α > 0, for the Pareto probability measure with probability density function (pdf)
$$\pi_{2\alpha}(dy) = 2\alpha y^{-2\alpha-1}\mathbf{1}_{[1,\infty)}(y)\,dy. \qquad (1)$$
Moreover, by
$$m_\nu^{(\alpha)} := \int_0^\infty x^\alpha\,\nu(dx)$$
we denote the αth moment of the measure ν. The truncated α-moment of the measure ν with cumulative distribution function (cdf) F is given by
$$H(t) := \int_0^t y^\alpha\,F(dy).$$
By ν1 △α ν2 we denote the Kendall convolution of the measures ν1, ν2 ∈ P+. For all n ∈ N, the Kendall convolution of n identical measures ν is denoted by ν^{△α n} := ν △α ··· △α ν (n times). By F_n we denote the cumulative distribution function of the measure ν^{△α n}, whereas for the tail of ν^{△α n} we use the standard notation $\overline{F}_n$. By $\xrightarrow{d}$ we denote convergence in distribution, whereas $\xrightarrow{fdd}$ denotes convergence of finite-dimensional distributions. Finally, we say that a measurable and positive function f(·) is regularly varying at infinity with index β if, for all x > 0, it satisfies $\lim_{t\to\infty} f(tx)/f(t) = x^\beta$ (see, e.g., [6]).

2 Stochastic representation and basic properties of Kendall random walk

We start with the definition of the Kendall generalized convolution (see, e.g., [7], Section 2).

Definition 2.1 The binary operation △α : P+² → P+ defined for point-mass measures by
$$\delta_x \triangle_\alpha \delta_y := T_M\!\left(\varrho^\alpha \pi_{2\alpha} + (1-\varrho^\alpha)\,\delta_1\right),$$
where α > 0, ϱ = m/M, m = min{x, y}, M = max{x, y}, is called the Kendall convolution. The extension of △α to arbitrary ν1, ν2 ∈ P+ and A ∈ B(R+) is given by
$$\nu_1 \triangle_\alpha \nu_2(A) = \int_0^\infty\!\!\int_0^\infty (\delta_x \triangle_\alpha \delta_y)(A)\,\nu_1(dx)\,\nu_2(dy). \qquad (2)$$

Remark 2.2 Note that the convolution of two point-mass measures is a continuous measure that reduces to the Pareto distribution π_{2α} in the case x = y = 1. This differs from the classical convolution and the maximum convolution algebras, where the convolution of discrete measures is again discrete.
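To make Remark 2.2 concrete, the mixture form in Definition 2.1 translates directly into a two-line sampler. The following Python sketch (the function name is ours, not from the paper) draws from δx △α δy and checks by Monte Carlo that for x = y = 1 the result is Pareto π_{2α}:

```python
import random

def sample_point_mass_convolution(x, y, alpha, rng):
    """Draw from delta_x  kendall-conv  delta_y = T_M(rho^a * pi_{2a} + (1-rho^a) * delta_1),
    where m = min(x, y), M = max(x, y), rho = m/M (Definition 2.1)."""
    m, M = min(x, y), max(x, y)
    if rng.random() < (m / M) ** alpha:
        # Pareto pi_{2*alpha} on [1, inf): inverse transform of the cdf 1 - t^(-2*alpha)
        return M * rng.random() ** (-1.0 / (2 * alpha))
    return M  # the rescaled point mass T_M(delta_1)

# For x = y = 1 the law is exactly pi_{2*alpha}: compare the empirical tail with t^(-2*alpha)
rng = random.Random(1)
alpha, t = 1.0, 2.0
hits = sum(sample_point_mass_convolution(1.0, 1.0, alpha, rng) > t for _ in range(100000))
assert abs(hits / 100000 - t ** (-2 * alpha)) < 0.01
```

The sampler makes the non-degeneracy visible: two point masses produce a genuinely continuous (Pareto-tailed) output.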

In the Kendall convolution algebra the main tool used in the analysis of a measure ν is the Williamson transform (see, e.g., [30]), which is the characteristic function for the Kendall convolution (see, e.g., [7], Definition 2.2) and plays the same role as the classical Laplace or Fourier transform for convolutions defined by addition of independent random elements. We refer to [7], [27], and [29] for the definition and a detailed discussion of the properties of generalized characteristic functions and their connections to generalized convolutions. Throughout this paper, the function G(t), which is the Williamson transform at the point 1/t, plays a crucial role in the analysis of Kendall convolutions and related stochastic processes.

Definition 2.3 The operation G : R+ → R+ given by
$$G(t) = \int_0^\infty \Psi\!\left(\frac{x}{t}\right)\nu(dx), \qquad \nu \in \mathcal{P}_+,$$
where
$$\Psi\!\left(\frac{x}{t}\right) = \left(1 - \frac{x^\alpha}{t^\alpha}\right)_+, \qquad (3)$$
a+ = max(0, a), α > 0, is called the Williamson transform of the measure ν at the point 1/t.

Remark 2.4 Note that Ψ(x/t), as a function of t, is the Williamson transform of δx, x ≥ 0.

Remark 2.5 Due to Proposition 2.3 and Example 3.4 in [7], the function G(1/t) is a generalized characteristic function for the Kendall convolution. Thus, the Williamson transform G_n(t) of the measure ν^{△α n} has the following important property (see, e.g., Definition 2.2 in [7]):
$$G_n(t) = G(t)^n. \qquad (4)$$

By using a recurrence construction, we define a stochastic process {Xn : n ∈ N0}, called the Kendall random walk (see also [14, 18, 19]). Further, we show the strict connection of the process {Xn} to the Kendall convolution.

Definition 2.6 The stochastic process {Xn : n ∈ N0} is a discrete-time Kendall random walk with parameter α > 0 and step distribution ν ∈ P+ if there exist

1. {Yk} i.i.d. random variables with distribution ν,

2. {ξk} i.i.d. random variables with uniform distribution on [0, 1],

3. {θk} i.i.d. random variables with Pareto distribution π_{2α}, α > 0, with density
$$\pi_{2\alpha}(dy) = 2\alpha y^{-2\alpha-1}\mathbf{1}_{[1,\infty)}(y)\,dy,$$

such that the sequences {Yk}, {ξk}, and {θk} are independent and
$$X_0 = 1, \qquad X_1 = Y_1, \qquad X_{n+1} = M_{n+1}\,\mathbf{1}(\xi_n > \varrho_{n+1}) + M_{n+1}\theta_{n+1}\,\mathbf{1}(\xi_n < \varrho_{n+1}),$$
where
$$M_{n+1} = \max\{X_n, Y_{n+1}\}, \qquad m_{n+1} = \min\{X_n, Y_{n+1}\}, \qquad \varrho_{n+1} = \frac{m_{n+1}^\alpha}{M_{n+1}^\alpha}.$$
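The recursion above can be read directly as a simulation algorithm. A minimal Python sketch (function and parameter names are ours; the unit step distribution is passed in as a sampler):

```python
import random

def kendall_walk(nsteps, alpha, step_sampler, seed=None):
    """One path X_0, ..., X_nsteps of the Kendall random walk of Definition 2.6.

    step_sampler(rng) draws Y_k from the unit step distribution nu;
    theta_k is Pareto pi_{2*alpha} on [1, inf), via inverse transform U^(-1/(2*alpha)).
    """
    rng = random.Random(seed)
    x = step_sampler(rng)                 # X_1 = Y_1
    path = [1.0, x]                       # X_0 = 1
    for _ in range(nsteps - 1):
        y = step_sampler(rng)
        big, small = max(x, y), min(x, y)
        rho = (small / big) ** alpha      # rho_{n+1} = (m_{n+1} / M_{n+1})^alpha
        if rng.random() < rho:            # event {xi_n < rho_{n+1}}: Pareto multiplier
            x = big * rng.random() ** (-1.0 / (2 * alpha))
        else:                             # event {xi_n > rho_{n+1}}: take the maximum
            x = big
        path.append(x)
    return path
```

For instance, `kendall_walk(10, 1.0, lambda r: r.random())` simulates ten steps with uniform U(0, 1) unit steps.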

In the next proposition we show that the process constructed in Definition 2.6 is a homogeneous Markov process driven by the Kendall convolution. We refer to [7], Section 4, for the proof and a general discussion of the existence of Markov processes under generalized convolutions.

Proposition 2.7 The process {Xn : n ∈ N0} with the stochastic representation given by Definition 2.6 is a homogeneous Markov process with transition probability kernel
$$P_{n,n+k}(x, A) := P_k(x, A) = \delta_x \triangle_\alpha \nu^{\triangle_\alpha k}(A), \qquad (5)$$
where k, n ∈ N, A ∈ B(R+), x ≥ 0, α > 0.

The proof of Proposition 2.7 is presented in Section 5.1.

3 Finite dimensional distributions

In this section we study the finite-dimensional distributions of the process {Xn}. We start with a proposition that describes the one-dimensional distributions of {Xn} and their relationship with the Williamson transform and the truncated α-moment.

Proposition 3.1 Let {Xn : n ∈ N0} be a Kendall random walk with parameter α > 0 and unit step distribution ν ∈ P+ with cdf F. Then

(i) for any t ≥ 0 we have
$$G(t) = \alpha t^{-\alpha}\int_0^t x^{\alpha-1}F(x)\,dx \qquad \text{and} \qquad F(t) = G(t) + \frac{t}{\alpha}\,G'(t);$$

(ii) for any t ≥ 0, n ≥ 1 we have
$$F_n(t) = G(t)^{n-1}\left(n\,t^{-\alpha}H(t) + G(t)\right).$$

Proof. First, observe that
$$G(t) = F(t) - t^{-\alpha}\int_0^t x^\alpha\,\nu(dx) = \alpha t^{-\alpha}\int_0^t x^{\alpha-1}F(x)\,dx, \qquad (6)$$
where the last equality follows from integration by parts. In order to complete the proof of (i) it suffices to take derivatives on both sides of the above equation.

In an analogous way we obtain
$$(G(t))^n = \alpha t^{-\alpha}\int_0^t x^{\alpha-1}F_n(x)\,dx. \qquad (7)$$
In order to complete the proof it is sufficient to take derivatives on both sides of equation (7) and apply (i).
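As a numerical sanity check of part (i), for the uniform step ν = U(0, 1) (so F(x) = min(x, 1)) the Williamson transform computed directly from Definition 2.3 can be compared with the integral formula above. A sketch (function names are ours):

```python
def G_direct(t, alpha):
    # Williamson transform of nu = U(0,1) at the point 1/t, in closed form:
    # G(t) = int_0^1 (1 - (x/t)^alpha)_+ dx = u - u^(alpha+1) / ((alpha+1) t^alpha), u = min(t, 1)
    u = min(t, 1.0)
    return u - u ** (alpha + 1) / ((alpha + 1) * t ** alpha)

def G_via_cdf(t, alpha, npts=200000):
    # Proposition 3.1 (i): G(t) = alpha * t^(-alpha) * int_0^t x^(alpha-1) F(x) dx,
    # with F(x) = min(x, 1); midpoint quadrature
    h = t / npts
    total = sum(((i + 0.5) * h) ** (alpha - 1) * min((i + 0.5) * h, 1.0) for i in range(npts))
    return alpha * t ** (-alpha) * total * h

for t in (0.5, 1.0, 2.0):
    assert abs(G_direct(t, 2.0) - G_via_cdf(t, 2.0)) < 1e-4
```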

The next two lemmas give characterizations of the transition probabilities of the process {Xn} and play an important role in the analysis of its finite-dimensional distributions.

Lemma 3.2 Let {Xn : n ∈ N0} be a Kendall random walk with parameter α > 0 and unit step distribution ν ∈ P+. For all k, n ∈ N and x, y, t ≥ 0 we have

(i)
$$P_n(x, (0, t]) = \left(G(t)^n + \frac{n}{t^\alpha}H(t)\,G(t)^{n-1}\,\Psi\!\left(\frac{x}{t}\right)\right)\mathbf{1}_{\{x<t\}},$$

(ii)
$$\int_0^t w^\alpha\,P_n(x, dw) = \left(x^\alpha\,G(t)^n + n\,G(t)^{n-1}H(t)\,\Psi\!\left(\frac{x}{t}\right)\right)\mathbf{1}_{\{x<t\}}.$$

The proof of Lemma 3.2 is presented in Section 5.2.

The following lemma is the main tool in finding the finite-dimensional distributions of {Xn}. In order to formulate the result it is convenient to introduce the notation
$$\mathcal{A}_k := \{0,1\}^k \setminus \{(0,0,\ldots,0)\}, \qquad k \in \mathbb{N}.$$
Additionally, for any (ε1, ..., εk) ∈ A_k we denote
$$\tilde\epsilon_1 := \min\{i\in\{1,\ldots,k\} : \epsilon_i = 1\}, \quad \tilde\epsilon_m := \min\{i > \tilde\epsilon_{m-1} : \epsilon_i = 1\}, \quad m = 2,\ldots,s, \qquad s := \sum_{i=1}^{k}\epsilon_i.$$

Lemma 3.3 Let {Xn : n ∈ N0} be a Kendall random walk with parameter α > 0 and unit step distribution ν ∈ P+. Then for any 0 = n0 ≤ n1 ≤ n2 ≤ ··· ≤ nk, where nj ∈ N for all j ∈ N, and 0 ≤ y0 ≤ x1 ≤ x2 ≤ ··· ≤ xk ≤ x_{k+1} we have
$$\int_0^{x_1}\!\int_0^{x_2}\!\cdots\!\int_0^{x_k}\Psi\!\left(\frac{y_k}{x_{k+1}}\right)P_{n_k-n_{k-1}}(y_{k-1}, dy_k)\,P_{n_{k-1}-n_{k-2}}(y_{k-2}, dy_{k-1})\cdots P_{n_1}(y_0, dy_1)$$
$$= \sum_{(\epsilon_1,\ldots,\epsilon_k)\in\mathcal{A}_k}\Psi\!\left(\frac{y_0}{x_{\tilde\epsilon_1}}\right)\Psi\!\left(\frac{x_{\tilde\epsilon_s}}{x_{k+1}}\right)\prod_{i=1}^{s-1}\Psi\!\left(\frac{x_{\tilde\epsilon_i}}{x_{\tilde\epsilon_{i+1}}}\right)\prod_{j=1}^{k}(G(x_j))^{n_j-n_{j-1}-\epsilon_j}\left(\frac{(n_j-n_{j-1})H(x_j)}{x_j^\alpha}\right)^{\epsilon_j} + \Psi\!\left(\frac{y_0}{x_{k+1}}\right)\prod_{j=1}^{k}(G(x_j))^{n_j-n_{j-1}},$$
where
$$\prod_{i=1}^{s-1}\Psi\!\left(\frac{x_{\tilde\epsilon_i}}{x_{\tilde\epsilon_{i+1}}}\right) = 1 \qquad \text{for } s = 1.$$

The proof of Lemma 3.3 is presented in Section 5.3.

Now we are able to derive a general formula for the finite-dimensional distributions of the process {Xn}.

Theorem 3.4 Let {Xn : n ∈ N0} be a Kendall random walk with parameter α > 0 and unit step distribution ν ∈ P+. Then for any 0 =: n0 ≤ n1 ≤ n2 ≤ ··· ≤ nk, where nj ∈ N for all j ∈ N, and 0 ≤ x1 ≤ x2 ≤ ··· ≤ xk we have
$$\mathbb{P}(X_{n_k}\le x_k,\,X_{n_{k-1}}\le x_{k-1},\ldots,X_{n_1}\le x_1) = \sum_{(\epsilon_1,\ldots,\epsilon_k)\in\{0,1\}^k}\prod_{i=1}^{s-1}\Psi\!\left(\frac{x_{\tilde\epsilon_i}}{x_{\tilde\epsilon_{i+1}}}\right)\prod_{j=1}^{k}(G(x_j))^{n_j-n_{j-1}-\epsilon_j}\left(\frac{(n_j-n_{j-1})H(x_j)}{x_j^\alpha}\right)^{\epsilon_j},$$
where
$$\prod_{i=1}^{s-1}\Psi\!\left(\frac{x_{\tilde\epsilon_i}}{x_{\tilde\epsilon_{i+1}}}\right) = 1 \qquad \text{for } s \in \{0,1\}.$$

Proof. First, observe that
$$\mathbb{P}(X_{n_k}\le x_k,\ldots,X_{n_1}\le x_1) = \int_0^{x_1}\!\int_0^{x_2}\!\cdots\!\int_0^{x_k}P_{n_k-n_{k-1}}(y_{k-1}, dy_k)\,P_{n_{k-1}-n_{k-2}}(y_{k-2}, dy_{k-1})\cdots P_{n_1}(0, dy_1).$$
Moreover, by the definition of Ψ(·), for any a > 0 we have
$$\lim_{x_{k+1}\to\infty}\Psi\!\left(\frac{a}{x_{k+1}}\right) = \Psi(0) = 1.$$
Now, in order to complete the proof it suffices to apply Lemma 3.3 with y0 = 0 and x_{k+1} → ∞.

Finally, we present the cumulative distribution functions and characterizations of the finite-dimensional distributions of the process {Xn} for the most interesting examples of unit step distributions ν. Since, by Theorem 3.4, the finite-dimensional distributions of {Xn} are uniquely determined by the Williamson transform G(·) and the truncated α-moment H(·) of the step distribution ν, these two characteristics are presented for each of the analyzed cases. Additionally, in each example we derive the cdf of ν^{△α n}, which is the one-dimensional distribution of the process {Xn}.

We start with the basic case of a point-mass distribution ν.

Example 3.1 Let ν = δ1. Then the Williamson transform and truncated α-moment of the measure ν are given by
$$G(x) = \left(1 - \frac{1}{x^\alpha}\right)_+ \qquad \text{and} \qquad H(x) = \mathbf{1}_{[1,\infty)}(x),$$
respectively. Hence, by Proposition 3.1 (ii), for any n = 2, 3, ..., we have
$$F_n(x) = \left(1 + \frac{n-1}{x^\alpha}\right)\left(1 - \frac{1}{x^\alpha}\right)^{n-1}\mathbf{1}_{[1,\infty)}(x).$$
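This closed form can be checked against the stochastic representation of Definition 2.6: for ν = δ1 one has M_{n+1} = X_n and ϱ_{n+1} = X_n^{-α}, so the recursion reduces to X_{n+1} = X_n θ_{n+1} with probability X_n^{-α} and X_{n+1} = X_n otherwise. A Monte Carlo sketch (function names are ours):

```python
import random

def simulate_Xn(n, alpha, rng):
    # Kendall random walk with unit step nu = delta_1: X_1 = 1, and then
    # with probability X^(-alpha) multiply X by a Pareto pi_{2*alpha} factor
    x = 1.0
    for _ in range(n - 1):
        if rng.random() < x ** (-alpha):
            x *= rng.random() ** (-1.0 / (2 * alpha))
    return x

def Fn_closed(x, n, alpha):
    # Example 3.1: F_n(x) = (1 + (n-1)/x^alpha) (1 - 1/x^alpha)^(n-1), x >= 1
    return (1 + (n - 1) / x ** alpha) * (1 - 1 / x ** alpha) ** (n - 1)

rng = random.Random(0)
n, alpha, x0 = 5, 1.0, 3.0
emp = sum(simulate_Xn(n, alpha, rng) <= x0 for _ in range(200000)) / 200000
assert abs(emp - Fn_closed(x0, n, alpha)) < 0.01
```

For n = 2 the formula gives F_2(x) = 1 − x^{−2α}, i.e., exactly the Pareto law π_{2α}, in agreement with Remark 2.2.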

In the next example we consider a linear combination of δ1 and the Pareto distribution, which plays a crucial role in the construction of the Kendall convolution.

Example 3.2 Let ν = pδ1 + (1 − p)π_p, where p ∈ (0, 1] and π_p is the Pareto distribution with pdf (1) with 2α = p. Then the Williamson transform and truncated α-moment of the measure ν are given by
$$G(x) = \begin{cases}\left(1 - \dfrac{\alpha(1-p)}{\alpha-p}\,x^{-p} + \dfrac{p(1-\alpha)}{\alpha-p}\,x^{-\alpha}\right)\mathbf{1}_{[1,\infty)}(x) & \text{if } p\neq\alpha,\\[6pt] \left(1 - (1-p)x^{-p} - p\,x^{-\alpha} - p(1-p)\,x^{-\alpha}\log(x)\right)\mathbf{1}_{[1,\infty)}(x) & \text{if } p=\alpha\end{cases}$$
and
$$H(x) = \begin{cases}\left(\dfrac{p(1-\alpha)}{p-\alpha} + \dfrac{p(1-p)}{\alpha-p}\,x^{\alpha-p}\right)\mathbf{1}_{[1,\infty)}(x) & \text{if } p\neq\alpha,\\[6pt] \left(p + p(1-p)\log(x)\right)\mathbf{1}_{[1,\infty)}(x) & \text{if } p=\alpha,\end{cases}$$
respectively; in the p = α case the sign of the logarithmic term follows from G(x) = F(x) − x^{-α}H(x) with F(x) = 1 − (1 − p)x^{-p}. Hence, by Proposition 3.1 (ii), for any n = 2, 3, ..., we have
$$F_n(x) = \left(1 - \frac{\alpha(1-p)}{\alpha-p}\,x^{-p} + \frac{p(1-\alpha)}{\alpha-p}\,x^{-\alpha}\right)^{n-1}\left(1 + \frac{(1-p)(np-\alpha)}{\alpha-p}\,x^{-p} - \frac{p(1-\alpha)(n-1)}{\alpha-p}\,x^{-\alpha}\right)\mathbf{1}_{[1,\infty)}(x)$$
for p ≠ α, and
$$F_n(x) = \left(1 - (1-p)x^{-p} - p\,x^{-\alpha} - p(1-p)\,x^{-\alpha}\log(x)\right)^{n-1}\left(1 + (n-1)p\,x^{-\alpha} - (1-p)x^{-p} + p(1-p)(n-1)\,x^{-\alpha}\log(x)\right)\mathbf{1}_{[1,\infty)}(x)$$
for p = α.

In the next example we consider a distribution ν with the lack-of-memory property for the Kendall convolution. We refer to [17] for a general result about the existence of measures with the lack-of-memory property for the so-called monotonic generalized convolutions.

Example 3.3 Let ν be a probability measure with the cdf F(x) = 1 − (1 − x^α)+, where α > 0. Then the Williamson transform and truncated α-moment of the measure ν are given by
$$G(x) = \frac{x^\alpha}{2}\,\mathbf{1}_{[0,1)}(x) + \left(1 - \frac{1}{2x^\alpha}\right)\mathbf{1}_{[1,\infty)}(x) \qquad \text{and} \qquad H(x) = \frac{x^{2\alpha}}{2}\,\mathbf{1}_{[0,1)}(x) + \frac{1}{2}\,\mathbf{1}_{[1,\infty)}(x),$$
respectively. Hence, by Proposition 3.1 (ii), for any n = 2, 3, ..., we have
$$F_n(x) = \begin{cases}\dfrac{n+1}{2^n}\,x^{\alpha n} & \text{for } x\in[0,1];\\[6pt] \left(1 - \dfrac{1}{2x^\alpha}\right)^{n-1}\left(1 + \dfrac{n-1}{2x^\alpha}\right) & \text{for } x > 1.\end{cases}$$

In the next example we consider a unit step distribution which is a stable probability measure for the Kendall random walk with unit step distribution ν (see Section 4, Proposition 4.5).

Example 3.4 Let ρ_{ν,α}, α > 0, be a probability measure with cdf
$$F(x) = \left(1 + m_\nu^{(\alpha)}x^{-\alpha}\right)e^{-m_\nu^{(\alpha)}x^{-\alpha}}\,\mathbf{1}_{(0,\infty)}(x),$$
where α > 0 and m_ν^{(α)} > 0 is a parameter. Then the Williamson transform and truncated α-moment of the measure ρ_{ν,α} are given by
$$G(x) = \exp\{-m_\nu^{(\alpha)}x^{-\alpha}\}\,\mathbf{1}_{(0,\infty)}(x) \qquad \text{and} \qquad H(x) = m_\nu^{(\alpha)}\,G(x),$$
respectively. Hence, by Proposition 3.1 (ii), for any n = 2, 3, ..., we have
$$F_n(x) = \left(1 + n\,m_\nu^{(\alpha)}x^{-\alpha}\right)\exp\{-n\,m_\nu^{(\alpha)}x^{-\alpha}\}\,\mathbf{1}_{(0,\infty)}(x).$$

Example 3.5 Let ν = U(0, 1) be the uniform distribution with density ν(dx) = 1_{(0,1)}(x) dx. Then the Williamson transform and truncated α-moment of the measure ν are given by
$$G(x) = (x\wedge 1) - \frac{(x\wedge 1)^{\alpha+1}}{(\alpha+1)\,x^\alpha} \qquad \text{and} \qquad H(x) = \frac{(x\wedge 1)^{\alpha+1}}{\alpha+1},$$
respectively. Hence, by Proposition 3.1 (ii), for any n = 2, 3, ..., we have
$$F_n(x) = \left(\frac{\alpha}{\alpha+1}\right)^{n}\left(1 + \frac{n}{\alpha}\right)x^{n}\,\mathbf{1}_{[0,1)}(x) + \left(1 - \frac{1}{(\alpha+1)x^\alpha}\right)^{n-1}\left(1 + \frac{n-1}{(\alpha+1)x^\alpha}\right)\mathbf{1}_{[1,\infty)}(x).$$

Example 3.6 Let ν = γ_{a,b}, a, b > 0, be the Gamma distribution with pdf
$$\gamma_{a,b}(dx) = \frac{b^a}{\Gamma(a)}\,x^{a-1}e^{-bx}\,\mathbf{1}_{(0,\infty)}(x)\,dx.$$
Then the Williamson transform and truncated α-moment of the measure ν are given by
$$G(x) = \gamma_{a,b}(0,x] - \frac{\Gamma(a+\alpha)}{\Gamma(a)}\,b^{-\alpha}x^{-\alpha}\,\gamma_{a+\alpha,b}(0,x] \qquad \text{and} \qquad H(x) = \frac{\Gamma(a+\alpha)}{\Gamma(a)}\,b^{-\alpha}\,\gamma_{a+\alpha,b}(0,x],$$
where $\gamma_{a,b}(0,x] = \frac{b^a}{\Gamma(a)}\int_0^x t^{a-1}e^{-bt}\,dt$. Hence, by Proposition 3.1 (ii), for any n = 2, 3, ..., we have
$$F_n(x) = \left(\gamma_{a,b}(0,x] - \frac{\Gamma(a+\alpha)}{\Gamma(a)}\,b^{-\alpha}x^{-\alpha}\,\gamma_{a+\alpha,b}(0,x]\right)^{n-1}\left(\gamma_{a,b}(0,x] + \frac{\Gamma(a+\alpha)}{\Gamma(a)}\,(n-1)\,b^{-\alpha}x^{-\alpha}\,\gamma_{a+\alpha,b}(0,x]\right).$$

4 Limit theorems

In this section we investigate limiting behaviors of Kendall random walks and related continuous-time processes. The analysis is based on inverting the Williamson transform, as given in Proposition 3.1 (ii). Moreover, as shown in Section 2, the Kendall convolution is strongly related to the Pareto distribution. Hence, regular variation techniques play a crucial role in the analysis of the asymptotic behaviors and limit theorems for the processes studied in this section.

We start with the analysis of the asymptotic behavior of the tail distribution of the random variables Xn.

Theorem 4.1 Let {Xn : n ∈ N0} be a Kendall random walk with parameter α > 0 and unit step distribution ν ∈ P+. Then
$$\overline{F}_n(x) = n\,\overline{F}(x) + \frac{1}{2}\,n(n-1)\,(H(x))^2\,x^{-2\alpha}(1 + o(1))$$
as x → ∞.

The proof of Theorem 4.1 is presented in Section 5.4.

The following corollary is a direct consequence of Theorem 4.1.

Corollary 4.2 Let {Xn : n ∈ N0} be a Kendall random walk with parameter α > 0 and unit step distribution ν ∈ P+. Moreover, let

(i) $\overline{F}(x)$ be regularly varying with index θ − α as x → ∞, where 0 < θ < α. Then
$$\overline{F}_n(x) = n\,\overline{F}(x)(1 + o(1)) \quad \text{as } x\to\infty.$$

(ii) $m_\nu^{(\alpha)} < \infty$. Then
$$\overline{F}_n(x) = n\,\overline{F}(x) + \frac{1}{2}\,n(n-1)\left(m_\nu^{(\alpha)}\right)^{2}x^{-2\alpha}(1 + o(1)) \quad \text{as } x\to\infty.$$

(iii) $\overline{F}(x) = o\!\left(x^{-2\alpha}\right)$ as x → ∞. Then
$$\overline{F}_n(x) = \frac{1}{2}\,n(n-1)\left(m_\nu^{(\alpha)}\right)^{2}x^{-2\alpha}(1 + o(1)) \quad \text{as } x\to\infty.$$
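Case (iii) can be illustrated numerically with the uniform step ν = U(0, 1), for which F̄(x) = 0 for x ≥ 1 and m_ν^{(α)} = 1/(α + 1), using the exact cdf derived in Example 3.5. A sketch (function names are ours):

```python
def exact_tail(x, n, alpha):
    # 1 - F_n(x) for nu = U(0,1) and x >= 1, via the closed form of Example 3.5
    m = 1.0 / (alpha + 1)
    return 1.0 - (1.0 - m * x ** (-alpha)) ** (n - 1) * (1.0 + (n - 1) * m * x ** (-alpha))

def asymptotic_tail(x, n, alpha):
    # Corollary 4.2 (iii): (1/2) n (n-1) (m_nu^{(alpha)})^2 x^(-2 alpha)
    m = 1.0 / (alpha + 1)
    return 0.5 * n * (n - 1) * m ** 2 * x ** (-2 * alpha)

# The ratio approaches 1 as x grows
for x in (100.0, 1000.0):
    ratio = exact_tail(x, 3, 1.0) / asymptotic_tail(x, 3, 1.0)
    assert abs(ratio - 1.0) < 0.01
```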

Remark 4.3 This shows that in the case of a regularly varying step distribution ν, the tail distribution of the random variable Xn is asymptotically equivalent to that of the maximum of n i.i.d. random variables with distribution ν.

In the next proposition we investigate the limit distribution for Kendall random walks in the case of a finite α-moment as well as for a regularly varying tail of the unit step. We start with the following observation.

Remark 4.4 Due to Proposition 1 in [4], a random variable X belongs to the domain of attraction of a stable measure with respect to the Kendall convolution if and only if 1 − G(t) is a regularly varying function at ∞.

Notice that 1 − G(t) is regularly varying whenever the random variable X has a finite α-moment or its tail is regularly varying at infinity. The following proposition formalizes this observation, providing formulas for stable distributions with respect to the Kendall convolution.

Proposition 4.5 Let {Xn : n ∈ N0} be a Kendall random walk with parameter α > 0 and unit step distribution ν ∈ P+.

(i) If $m_\nu^{(\alpha)} < \infty$, then, as n → ∞,
$$n^{-1/\alpha}X_n \xrightarrow{d} X,$$
where the cdf of the random variable X is given by
$$\rho_{\nu,\alpha}(0,x] = \left(1 + m_\nu^{(\alpha)}x^{-\alpha}\right)e^{-m_\nu^{(\alpha)}x^{-\alpha}}\,\mathbf{1}_{(0,\infty)}(x) \qquad (8)$$
and the pdf of X is given by
$$\rho_{\nu,\alpha}(dx) = \alpha\left(m_\nu^{(\alpha)}\right)^{2}x^{-2\alpha-1}\exp\{-m_\nu^{(\alpha)}x^{-\alpha}\}\,\mathbf{1}_{(0,\infty)}(x)\,dx. \qquad (9)$$

(ii) If $\overline{F}$ is regularly varying as x → ∞ with index θ − α, where 0 ≤ θ < α, then there exists a sequence {a_n}, a_n → ∞, such that
$$a_n^{-1}X_n \xrightarrow{d} X,$$
where the cdf of the random variable X is given by
$$\rho_{\nu,\alpha,\theta}(0,x] = \left(1 + x^{-(\alpha-\theta)}\right)e^{-x^{-(\alpha-\theta)}}\,\mathbf{1}_{(0,\infty)}(x) \qquad (10)$$
and the pdf of X is given by
$$\rho_{\nu,\alpha,\theta}(dx) = (\alpha-\theta)\,x^{-2(\alpha-\theta)-1}\exp\{-x^{-(\alpha-\theta)}\}\,\mathbf{1}_{(0,\infty)}(x)\,dx, \qquad (11)$$
where the constant α − θ in (11) is the one for which (11) is the derivative of (10).

The proof of Proposition 4.5 is presented in Section 5.5.
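Part (i) can be verified numerically: for the uniform step ν = U(0, 1) the exact cdf of n^{−1/α}X_n follows from Proposition 3.1 (ii) with the G and H of Example 3.5, and it approaches the cdf (8). A sketch (function names are ours):

```python
import math

def Fn_scaled(x, n, alpha):
    # Exact cdf of n^(-1/alpha) X_n for the uniform U(0,1) step, via Proposition 3.1 (ii)
    # with G(t) = 1 - m t^(-alpha) and H(t) = m for t >= 1 (Example 3.5);
    # here t = n^(1/alpha) x is assumed to be >= 1
    t = n ** (1.0 / alpha) * x
    m = 1.0 / (alpha + 1)                      # m_nu^{(alpha)} = 1/(alpha+1) for U(0,1)
    G = 1.0 - m * t ** (-alpha)
    return G ** (n - 1) * (n * t ** (-alpha) * m + G)

def limit_cdf(x, alpha):
    # Proposition 4.5 (i): (1 + m x^(-alpha)) exp(-m x^(-alpha))
    m = 1.0 / (alpha + 1)
    u = m * x ** (-alpha)
    return (1.0 + u) * math.exp(-u)

for x in (0.5, 1.0, 2.0):
    assert abs(Fn_scaled(x, 10 ** 6, 1.5) - limit_cdf(x, 1.5)) < 1e-4
```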

Now we define a new stochastic process {Z_n(t) : n ∈ N0} connected with the Kendall random walk {Xn : n ∈ N0} by
$$\{Z_n(t)\} \stackrel{d}{=} \left\{a_n^{-1}X_{[nt]}\right\},$$
where [·] denotes the integer part and the sequence {a_n} is such that a_n > 0 and lim_{n→∞} a_n = ∞.

In the following theorem we prove convergence of the finite-dimensional distributions of the process {Z_n(t)} for an appropriately chosen sequence {a_n}.

Theorem 4.6 Let {Xn : n ∈ N0} be a Kendall random walk with parameter α > 0 and unit step distribution ν ∈ P+.

(i) If $m_\nu^{(\alpha)} < \infty$ and $a_n = n^{1/\alpha}(1+o(1))$ as n → ∞, then
$$\{Z_n(t)\} \xrightarrow{fdd} \{Z(t)\},$$
where, for any 0 = t0 ≤ t1 ≤ ... ≤ tk, the finite-dimensional distributions of {Z(t)} are given by
$$\mathbb{P}(Z(t_1)\le z_1,\ldots,Z(t_k)\le z_k) = \sum_{(\epsilon_1,\ldots,\epsilon_k)\in\{0,1\}^k}\prod_{i=1}^{s-1}\Psi\!\left(\frac{z_{\tilde\epsilon_i}}{z_{\tilde\epsilon_{i+1}}}\right)\prod_{j=1}^{k}\left(\frac{(t_j-t_{j-1})\,m_\nu^{(\alpha)}}{z_j^\alpha}\right)^{\epsilon_j}\exp\left\{-m_\nu^{(\alpha)}\sum_{i=1}^{k}z_i^{-\alpha}(t_i-t_{i-1})\right\}.$$

(ii) If $\overline{F}(\cdot)$ is regularly varying as x → ∞ with index θ − α, where 0 ≤ θ < α, then there exists a sequence {a_n}, a_n → ∞, such that
$$\{Z_n(t)\} \xrightarrow{fdd} \{Z(t)\},$$
where, for any 0 = t0 ≤ t1 ≤ ... ≤ tk, the finite-dimensional distributions of {Z(t)} are given by
$$\mathbb{P}(Z(t_1)\le z_1,\ldots,Z(t_k)\le z_k) = \sum_{(\epsilon_1,\ldots,\epsilon_k)\in\{0,1\}^k}\prod_{i=1}^{s-1}\Psi\!\left(\frac{z_{\tilde\epsilon_i}}{z_{\tilde\epsilon_{i+1}}}\right)\prod_{j=1}^{k}\left((t_j-t_{j-1})\,z_j^{\theta-\alpha}\right)^{\epsilon_j}\exp\left\{-\sum_{i=1}^{k}z_i^{\theta-\alpha}(t_i-t_{i-1})\right\}.$$

In both of the above cases
$$\prod_{i=1}^{s-1}\Psi\!\left(\frac{z_{\tilde\epsilon_i}}{z_{\tilde\epsilon_{i+1}}}\right) = 1 \qquad \text{for } s\in\{0,1\},$$
with
$$\tilde\epsilon_1 = \min\{i : \epsilon_i = 1\}, \quad \tilde\epsilon_m := \min\{i > \tilde\epsilon_{m-1} : \epsilon_i = 1\}, \quad m = 2,\ldots,s, \qquad s = \sum_{i=1}^{k}\epsilon_i.$$

The proof of Theorem 4.6 is presented in Section 5.6.

5 Proofs

In this section, we present detailed proofs of our results.

5.1 Proof of Proposition 2.7

Due to the independence of the sequences {Yk}, {ξk}, and {θk}, it follows directly from Definition 2.6 that the process {Xn} satisfies the Markov property.

Now, let n ∈ N be fixed. We shall show that, for all k ∈ N, A ∈ B(R+), x ≥ 0, α > 0, the transition probabilities of the process {Xn} are of the form (5). In order to do this we proceed by induction. By Definition 2.6 we have
$$\mathbb{P}(X_n\in A\mid X_{n-1}=x) = \int_0^\infty \mathbb{P}(X_n\in A\mid X_{n-1}=x,\,Y_n=y)\,\nu(dy)$$
$$= \int_0^\infty\left[\mathbb{P}\!\left(\max(x,y)\,\theta_n\in A,\ \xi_n<\left(\tfrac{\min(x,y)}{\max(x,y)}\right)^{\!\alpha}\right) + I_A(\max(x,y))\,\mathbb{P}\!\left(\xi_n>\left(\tfrac{\min(x,y)}{\max(x,y)}\right)^{\!\alpha}\right)\right]\nu(dy).$$
Moreover, by the independence of the random variables θ_n and ξ_n, the above expression is equal to
$$\int_0^\infty\left[\left(\frac{\min(x,y)}{\max(x,y)}\right)^{\!\alpha}\mathbb{P}(\max(x,y)\,\theta_n\in A) + \left(1-\left(\frac{\min(x,y)}{\max(x,y)}\right)^{\!\alpha}\right)I_A(\max(x,y))\right]\nu(dy)$$
$$= \int_0^\infty T_{\max(x,y)}\!\left(\left(\frac{\min(x,y)}{\max(x,y)}\right)^{\!\alpha}\pi_{2\alpha} + \left(1-\left(\frac{\min(x,y)}{\max(x,y)}\right)^{\!\alpha}\right)\delta_1\right)(A)\,\nu(dy)$$
$$= \int_0^\infty T_{\max(x,y)}\!\left(\delta_{\frac{\min(x,y)}{\max(x,y)}}\triangle_\alpha\,\delta_1\right)(A)\,\nu(dy) \qquad (12)$$
$$= \int_0^\infty(\delta_x\triangle_\alpha\delta_y)(A)\,\nu(dy) = (\delta_x\triangle_\alpha\nu)(A), \qquad (13)$$
where (12) follows from Definition 2.1 and (13) follows from (2). This completes the first step of the induction.

Now, assuming that
$$\mathbb{P}(X_{n+k}\in A\mid X_n=x) = \delta_x\triangle_\alpha\nu^{\triangle_\alpha k}(A) \qquad (14)$$
holds for some k ∈ N, we establish its validity for k + 1. Due to the Chapman–Kolmogorov equation for the process {Xn} we have
$$\mathbb{P}(X_{n+k+1}\in A\mid X_n=x) = \int_0^\infty\!\!\int_A P_1(y,dz)\,P_k(x,dy)$$
$$= \int_0^\infty(\delta_y\triangle_\alpha\nu)(A)\ \delta_x\triangle_\alpha\nu^{\triangle_\alpha k}(dy) \qquad (15)$$
$$= \delta_x\triangle_\alpha\nu^{\triangle_\alpha(k+1)}(A), \qquad (16)$$
where (15) follows from (13) and (14), while (16) follows from (2). This completes the induction argument and the proof.

5.2 Proof of Lemma 3.2

First notice that, by Definition 2.1, we have
$$(\delta_x\triangle_\alpha\delta_y)((0,t]) = \left[\left(\frac{\min(x,y)}{\max(x,y)}\right)^{\!\alpha}\mathbb{P}(\max(x,y)\,\theta\le t) + \left(1-\left(\frac{\min(x,y)}{\max(x,y)}\right)^{\!\alpha}\right)\right]\mathbf{1}_{\{\max(x,y)<t\}}$$
$$= \left(1 - \frac{x^\alpha y^\alpha}{t^{2\alpha}}\right)\mathbf{1}_{\{x<t,\,y<t\}} \qquad (17)$$
$$= \left[\Psi\!\left(\frac{x}{t}\right) + \Psi\!\left(\frac{y}{t}\right) - \Psi\!\left(\frac{x}{t}\right)\Psi\!\left(\frac{y}{t}\right)\right]\mathbf{1}_{\{x<t,\,y<t\}}, \qquad (18)$$
where (18) is a direct application of (3). In order to prove (i), observe that by (2) and (18) we have
$$P_n(x,(0,t]) = \int_0^\infty(\delta_x\triangle_\alpha\delta_y)((0,t])\,\nu^{\triangle_\alpha n}(dy)$$
$$= \int_0^t\left[\Psi\!\left(\frac{x}{t}\right) + \Psi\!\left(\frac{y}{t}\right) - \Psi\!\left(\frac{x}{t}\right)\Psi\!\left(\frac{y}{t}\right)\right]\mathbf{1}_{\{x<t\}}\,\nu^{\triangle_\alpha n}(dy)$$
$$= \left[\Psi\!\left(\frac{x}{t}\right)F_n(t) + \left(1-\Psi\!\left(\frac{x}{t}\right)\right)G_n(t)\right]\mathbf{1}_{\{x<t\}}, \qquad (19)$$
where (19) holds by Definition 2.3. In order to complete the proof of case (i) it suffices to combine (19) with (4) and Proposition 3.1 (ii).

To prove (ii), observe that integration by parts leads to
$$\int_0^t w^\alpha(\delta_x\triangle_\alpha\delta_y)(dw) = t^\alpha(\delta_x\triangle_\alpha\delta_y)((0,t]) - \int_0^t\alpha w^{\alpha-1}(\delta_x\triangle_\alpha\delta_y)((0,w])\,dw$$
$$= \left(x^\alpha - \frac{2x^\alpha y^\alpha}{t^\alpha} + y^\alpha\right)\mathbf{1}_{\{x\vee y<t\}}, \qquad (20)$$
where (20) follows from (17). Applying (20) we obtain
$$\int_0^t w^\alpha P_n(x,dw) = \int_0^\infty\!\!\int_0^t w^\alpha(\delta_x\triangle_\alpha\delta_y)(dw)\,\nu^{\triangle_\alpha n}(dy)$$
$$= \int_0^t\left(x^\alpha - \frac{2x^\alpha y^\alpha}{t^\alpha} + y^\alpha\right)\nu^{\triangle_\alpha n}(dy)\,\mathbf{1}_{\{x<t\}}$$
$$= \left(x^\alpha F_n(t) - \frac{2x^\alpha}{t^\alpha}H_n(t) + H_n(t)\right)\mathbf{1}_{\{x<t\}}, \qquad (21)$$
where $H_n(t) := \int_0^t y^\alpha\,\nu^{\triangle_\alpha n}(dy) = t^\alpha(F_n(t) - G_n(t))$ by (6). Finally, the proof of case (ii) is completed by combining (21) with Proposition 3.1 (ii).

5.3 Proof of Lemma 3.3

Let k = 1. Then by Lemma 3.2 we obtain
$$\int_0^{x_1}\Psi\!\left(\frac{y_1}{x_2}\right)P_{n_1}(y_0,dy_1) = P_{n_1}(y_0,(0,x_1]) - x_2^{-\alpha}\int_0^{x_1}y_1^\alpha\,P_{n_1}(y_0,dy_1)$$
$$= \left[\Psi\!\left(\frac{y_0}{x_2}\right)G(x_1)^{n_1} + \frac{n_1}{x_1^\alpha}\,G(x_1)^{n_1-1}H(x_1)\,\Psi\!\left(\frac{y_0}{x_1}\right)\Psi\!\left(\frac{x_1}{x_2}\right)\right]\mathbf{1}_{\{y_0<x_1\}},$$
which completes the first step of the proof by induction.

Now, assume that the formula holds for k ∈ N. We shall establish its validity for k + 1. Let
$$\tilde\eta_1 := \min\{i\ge 2 : \epsilon_i = 1\}, \quad \tilde\eta_m := \min\{i > \tilde\eta_{m-1} : \epsilon_i = 1\}, \quad m = 2,\ldots,s_2, \qquad s_2 := \sum_{i=2}^{k+1}\epsilon_i.$$
Additionally, we denote $s_1 := \sum_{i=1}^{k+1}\epsilon_i$. Moreover, let
$$\mathcal{A}^0_{k+1} := \{(0,\epsilon_2,\ldots,\epsilon_{k+1})\in\{0,1\}^{k+1} : (\epsilon_2,\ldots,\epsilon_{k+1})\in\mathcal{A}_k\},$$
$$\mathcal{A}^1_{k+1} := \{(1,\epsilon_2,\ldots,\epsilon_{k+1})\in\{0,1\}^{k+1} : (\epsilon_2,\ldots,\epsilon_{k+1})\in\mathcal{A}_k\}.$$
By splitting the expression according to the four families $\mathcal{A}^0_{k+1}$, $\mathcal{A}^1_{k+1}$, $\{(1,0,\ldots,0)\}$, and $\{(0,\ldots,0)\}$, and applying the formula for k together with the first induction step, we obtain
$$\int_0^{x_1}\!\cdots\!\int_0^{x_{k+1}}\Psi\!\left(\frac{y_{k+1}}{x_{k+2}}\right)P_{n_{k+1}-n_k}(y_k,dy_{k+1})\cdots P_{n_2-n_1}(y_1,dy_2)\,P_{n_1}(y_0,dy_1)$$
$$= \sum_{(\epsilon_2,\ldots,\epsilon_{k+1})\in\mathcal{A}_k}\Psi\!\left(\frac{x_{\tilde\eta_{s_2}}}{x_{k+2}}\right)\prod_{i=1}^{s_2-1}\Psi\!\left(\frac{x_{\tilde\eta_i}}{x_{\tilde\eta_{i+1}}}\right)\prod_{j=2}^{k+1}(G(x_j))^{n_j-n_{j-1}-\epsilon_j}\left(\frac{(n_j-n_{j-1})H(x_j)}{x_j^\alpha}\right)^{\epsilon_j}\int_0^{x_1}\Psi\!\left(\frac{y_1}{x_{\tilde\eta_1}}\right)P_{n_1}(y_0,dy_1)$$
$$\quad + \prod_{j=2}^{k+1}(G(x_j))^{n_j-n_{j-1}}\int_0^{x_1}\Psi\!\left(\frac{y_1}{x_{k+2}}\right)P_{n_1}(y_0,dy_1)$$
$$= S[\mathcal{A}^0_{k+1}] + S[\mathcal{A}^1_{k+1}] + S[\{(1,0,\ldots,0)\}] + S[\{(0,\ldots,0)\}], \qquad (22)$$
where
$$S[\mathcal{A}^0_{k+1}] = \sum_{(0,\epsilon_2,\ldots,\epsilon_{k+1})\in\mathcal{A}^0_{k+1}}\Psi\!\left(\frac{y_0}{x_{\tilde\eta_1}}\right)\Psi\!\left(\frac{x_{\tilde\eta_{s_2}}}{x_{k+2}}\right)\prod_{i=1}^{s_2-1}\Psi\!\left(\frac{x_{\tilde\eta_i}}{x_{\tilde\eta_{i+1}}}\right)(G(x_1))^{n_1}\prod_{j=2}^{k+1}(G(x_j))^{n_j-n_{j-1}-\epsilon_j}\left(\frac{(n_j-n_{j-1})H(x_j)}{x_j^\alpha}\right)^{\epsilon_j},$$
$$S[\mathcal{A}^1_{k+1}] = \sum_{(1,\epsilon_2,\ldots,\epsilon_{k+1})\in\mathcal{A}^1_{k+1}}\Psi\!\left(\frac{y_0}{x_1}\right)\Psi\!\left(\frac{x_{\tilde\eta_{s_2}}}{x_{k+2}}\right)\prod_{i=1}^{s_2-1}\Psi\!\left(\frac{x_{\tilde\eta_i}}{x_{\tilde\eta_{i+1}}}\right)\Psi\!\left(\frac{x_1}{x_{\tilde\eta_1}}\right)\frac{n_1}{x_1^\alpha}(G(x_1))^{n_1-1}H(x_1)\prod_{j=2}^{k+1}(G(x_j))^{n_j-n_{j-1}-\epsilon_j}\left(\frac{(n_j-n_{j-1})H(x_j)}{x_j^\alpha}\right)^{\epsilon_j},$$
$$S[\{(1,0,\ldots,0)\}] = \frac{n_1}{x_1^\alpha}(G(x_1))^{n_1-1}H(x_1)\,\Psi\!\left(\frac{y_0}{x_1}\right)\Psi\!\left(\frac{x_1}{x_{k+2}}\right)\prod_{j=2}^{k+1}(G(x_j))^{n_j-n_{j-1}},$$
$$S[\{(0,\ldots,0)\}] = \Psi\!\left(\frac{y_0}{x_{k+2}}\right)(G(x_1))^{n_1}\prod_{j=2}^{k+1}(G(x_j))^{n_j-n_{j-1}}.$$
Observe that for any sequence (0, ε2, ..., ε_{k+1}) ∈ A^0_{k+1} we have (ε̃1, ε̃2, ..., ε̃_{s_1}) = (η̃1, η̃2, ..., η̃_{s_2}) with s_1 = s_2, which implies that
$$\Psi\!\left(\frac{y_0}{x_{\tilde\eta_1}}\right)\Psi\!\left(\frac{x_{\tilde\eta_{s_2}}}{x_{k+2}}\right)\prod_{i=1}^{s_2-1}\Psi\!\left(\frac{x_{\tilde\eta_i}}{x_{\tilde\eta_{i+1}}}\right) = \Psi\!\left(\frac{y_0}{x_{\tilde\epsilon_1}}\right)\Psi\!\left(\frac{x_{\tilde\epsilon_{s_1}}}{x_{k+2}}\right)\prod_{i=1}^{s_1-1}\Psi\!\left(\frac{x_{\tilde\epsilon_i}}{x_{\tilde\epsilon_{i+1}}}\right).$$
Moreover, since ε1 = 0 and n0 = 0,
$$(G(x_1))^{n_1}\prod_{j=2}^{k+1}(G(x_j))^{n_j-n_{j-1}-\epsilon_j}\left(\frac{(n_j-n_{j-1})H(x_j)}{x_j^\alpha}\right)^{\epsilon_j} = \prod_{j=1}^{k+1}(G(x_j))^{n_j-n_{j-1}-\epsilon_j}\left(\frac{(n_j-n_{j-1})H(x_j)}{x_j^\alpha}\right)^{\epsilon_j}.$$
Hence
$$S[\mathcal{A}^0_{k+1}] = \sum_{(0,\epsilon_2,\ldots,\epsilon_{k+1})\in\mathcal{A}^0_{k+1}}\Psi\!\left(\frac{y_0}{x_{\tilde\epsilon_1}}\right)\Psi\!\left(\frac{x_{\tilde\epsilon_{s_1}}}{x_{k+2}}\right)\prod_{i=1}^{s_1-1}\Psi\!\left(\frac{x_{\tilde\epsilon_i}}{x_{\tilde\epsilon_{i+1}}}\right)\prod_{j=1}^{k+1}(G(x_j))^{n_j-n_{j-1}-\epsilon_j}\left(\frac{(n_j-n_{j-1})H(x_j)}{x_j^\alpha}\right)^{\epsilon_j}. \qquad (23)$$
Analogously, for any sequence (1, ε2, ..., ε_{k+1}) ∈ A^1_{k+1} we have (ε̃1, ε̃2, ..., ε̃_{s_1}) = (1, η̃1, η̃2, ..., η̃_{s_2}) with s_1 = s_2 + 1, which implies that
$$\Psi\!\left(\frac{y_0}{x_1}\right)\Psi\!\left(\frac{x_1}{x_{\tilde\eta_1}}\right)\Psi\!\left(\frac{x_{\tilde\eta_{s_2}}}{x_{k+2}}\right)\prod_{i=1}^{s_2-1}\Psi\!\left(\frac{x_{\tilde\eta_i}}{x_{\tilde\eta_{i+1}}}\right) = \Psi\!\left(\frac{y_0}{x_{\tilde\epsilon_1}}\right)\Psi\!\left(\frac{x_{\tilde\epsilon_{s_1}}}{x_{k+2}}\right)\prod_{i=1}^{s_1-1}\Psi\!\left(\frac{x_{\tilde\epsilon_i}}{x_{\tilde\epsilon_{i+1}}}\right), \qquad (24)$$
where (24) is a consequence of
$$\Psi\!\left(\frac{x_1}{x_{\tilde\eta_1}}\right)\prod_{i=1}^{s_1-2}\Psi\!\left(\frac{x_{\tilde\epsilon_{i+1}}}{x_{\tilde\epsilon_{i+2}}}\right) = \prod_{i=1}^{s_1-1}\Psi\!\left(\frac{x_{\tilde\epsilon_i}}{x_{\tilde\epsilon_{i+1}}}\right).$$
Moreover,
$$\frac{n_1}{x_1^\alpha}(G(x_1))^{n_1-1}H(x_1)\prod_{j=2}^{k+1}(G(x_j))^{n_j-n_{j-1}-\epsilon_j}\left(\frac{(n_j-n_{j-1})H(x_j)}{x_j^\alpha}\right)^{\epsilon_j} = \prod_{j=1}^{k+1}(G(x_j))^{n_j-n_{j-1}-\epsilon_j}\left(\frac{(n_j-n_{j-1})H(x_j)}{x_j^\alpha}\right)^{\epsilon_j}.$$
Hence
$$S[\mathcal{A}^1_{k+1}] = \sum_{(1,\epsilon_2,\ldots,\epsilon_{k+1})\in\mathcal{A}^1_{k+1}}\Psi\!\left(\frac{y_0}{x_{\tilde\epsilon_1}}\right)\Psi\!\left(\frac{x_{\tilde\epsilon_{s_1}}}{x_{k+2}}\right)\prod_{i=1}^{s_1-1}\Psi\!\left(\frac{x_{\tilde\epsilon_i}}{x_{\tilde\epsilon_{i+1}}}\right)\prod_{j=1}^{k+1}(G(x_j))^{n_j-n_{j-1}-\epsilon_j}\left(\frac{(n_j-n_{j-1})H(x_j)}{x_j^\alpha}\right)^{\epsilon_j}. \qquad (25)$$
Additionally, observe that for (ε1, ..., ε_{k+1}) = (1, 0, ..., 0) we have s_1 = 1, ε̃1 = 1, and
$$S[\{(1,0,\ldots,0)\}] = \Psi\!\left(\frac{y_0}{x_{\tilde\epsilon_1}}\right)\Psi\!\left(\frac{x_{\tilde\epsilon_{s_1}}}{x_{k+2}}\right)\frac{n_1 H(x_1)}{x_1^\alpha}\prod_{j=1}^{k+1}(G(x_j))^{n_j-n_{j-1}-\epsilon_j}, \qquad (26)$$
and, due to the fact that n0 = 0, we have
$$S[\{(0,\ldots,0)\}] = \Psi\!\left(\frac{y_0}{x_{k+2}}\right)\prod_{j=1}^{k+1}(G(x_j))^{n_j-n_{j-1}}. \qquad (27)$$
Finally, by combining (23), (25), (26), and (27) with (22) we obtain
$$\int_0^{x_1}\!\cdots\!\int_0^{x_{k+1}}\Psi\!\left(\frac{y_{k+1}}{x_{k+2}}\right)P_{n_{k+1}-n_k}(y_k,dy_{k+1})\cdots P_{n_1}(y_0,dy_1)$$
$$= \sum_{(\epsilon_1,\ldots,\epsilon_{k+1})\in\mathcal{A}_{k+1}}\Psi\!\left(\frac{y_0}{x_{\tilde\epsilon_1}}\right)\Psi\!\left(\frac{x_{\tilde\epsilon_{s_1}}}{x_{k+2}}\right)\prod_{i=1}^{s_1-1}\Psi\!\left(\frac{x_{\tilde\epsilon_i}}{x_{\tilde\epsilon_{i+1}}}\right)\prod_{j=1}^{k+1}(G(x_j))^{n_j-n_{j-1}-\epsilon_j}\left(\frac{(n_j-n_{j-1})H(x_j)}{x_j^\alpha}\right)^{\epsilon_j} + \Psi\!\left(\frac{y_0}{x_{k+2}}\right)\prod_{j=1}^{k+1}(G(x_j))^{n_j-n_{j-1}}.$$
This completes the induction argument and the proof.

5.4 Proof of Theorem 4.1

Due to Proposition 3.1, for any n ≥ 2, we have
$$F_n(x) = \left(F(x) - \frac{H(x)}{x^\alpha}\right)^{n-1}\left(F(x) + (n-1)\frac{H(x)}{x^\alpha}\right)$$
$$= (F(x))^{n} + n\sum_{k=1}^{n-1}(-1)^{k-1}\binom{n-1}{k-1}\frac{k-1}{k}\left(\frac{H(x)}{x^\alpha}\right)^{k}(F(x))^{n-k} + (-1)^{n-1}(n-1)\left(\frac{H(x)}{x^\alpha}\right)^{n}, \qquad (28)$$
where (28) follows from the observation that, for any a ≥ 0 and n ≥ 2, we have
$$(1-a)^{n-1}(1+a(n-1)) = 1 + n\sum_{k=1}^{n-1}(-1)^{k-1}\binom{n-1}{k-1}\frac{k-1}{k}\,a^{k} + (-1)^{n-1}(n-1)\,a^{n}.$$
Thus
$$\overline{F}_n(x) = I_1 + I_2,$$
where
$$I_1 = 1 - (F(x))^{n} = n\,\overline{F}(x)(1+o(1))$$
as x → ∞, and
$$I_2 = n\sum_{k=1}^{n-1}(-1)^{k}\binom{n-1}{k-1}\frac{k-1}{k}\left(\frac{H(x)}{x^\alpha}\right)^{k}(F(x))^{n-k} + (-1)^{n}(n-1)\left(\frac{H(x)}{x^\alpha}\right)^{n}.$$
Note that $\lim_{x\to\infty}H(x)x^{-\alpha} = 0$ for any measure ν ∈ P+; the k = 1 term of I_2 vanishes, the k = 2 term equals $\frac{1}{2}n(n-1)(H(x)x^{-\alpha})^{2}(1+o(1))$, and
$$n\sum_{k=3}^{n-1}(-1)^{k}\binom{n-1}{k-1}\frac{k-1}{k}\left(\frac{H(x)}{x^\alpha}\right)^{k}(F(x))^{n-k} + (-1)^{n}(n-1)\left(\frac{H(x)}{x^\alpha}\right)^{n} = o\!\left(\frac{1}{2}n(n-1)\left(\frac{H(x)}{x^\alpha}\right)^{2}\right)$$
as x → ∞. This completes the proof.

5.5 Proof of Proposition 4.5

The following lemma plays a crucial role in further analysis.

Lemma 5.1 Let H∈RVθwith 0< θ < α. Then, there exists a sequence {an}such that

H(an)

(an)α=1

n(1 + o(1))

as n→ ∞.

Proof. First, observe that W(x) := xα/H(x)is regularly varying function with parameter α−θas x→ ∞.

Then, due to Theorem 1.5.12. in [6], there exists an increasing function V(x)such that W(V(x)) = x(1 +

o(1)),as x→ ∞. Now, in order to complete the proof it sufﬁces to take an=V(n).

Proof of Proposition 4.5. Using (4) and (6), the Williamson transform of a−1

nXnis given by

hGan

xin=Fan

x−xα

aα

n

Han

xn

.(29)

In order to prove (i), observe that under the assumption of finiteness of $m_{\nu}^{(\alpha)}$ we have

$$
\lim_{n \to \infty} H\!\left(\frac{n^{1/\alpha}}{x}\right) = m_{\nu}^{(\alpha)} \quad \text{and} \quad \lim_{n \to \infty} F\!\left(\frac{n^{1/\alpha}}{x}\right) = 1,
$$


which, by (29), yields

$$
\lim_{n \to \infty} \left[G\!\left(\frac{n^{1/\alpha}}{x}\right)\right]^{n} = e^{-m_{\nu}^{(\alpha)} x^{\alpha}}.
$$

Due to Proposition 3.1 (i), there exists a uniquely determined random variable $X$ with cdf (8) and pdf (9) such that $e^{-m_{\nu}^{(\alpha)} x^{\alpha}}$ is its Williamson transform. This completes the proof of case (i).
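A concrete numerical illustration (our own example, not from the paper): take $\nu$ uniform on $[0,1]$, so that $F(t) = \min(t,1)$, $H(t) = \min(t,1)^{\alpha+1}/(\alpha+1)$, and $m_{\nu}^{(\alpha)} = 1/(\alpha+1)$. With $a_n = n^{1/\alpha}$, the right-hand side of (29) indeed approaches $e^{-m_{\nu}^{(\alpha)} x^{\alpha}}$:

```python
import math

alpha = 2.0
m = 1.0 / (alpha + 1.0)              # m_nu^(alpha) for nu = Uniform(0, 1)
x = 1.7

def G_power(n):
    t = n ** (1.0 / alpha) / x       # argument a_n / x with a_n = n^(1/alpha)
    F = min(t, 1.0)                  # cdf of Uniform(0, 1)
    H = min(t, 1.0) ** (alpha + 1) / (alpha + 1)   # truncated alpha-moment
    return (F - (x ** alpha / n) * H) ** n         # right-hand side of (29), a_n^alpha = n

limit = math.exp(-m * x ** alpha)
assert abs(G_power(10 ** 6) - limit) < 1e-3
```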

In order to prove (ii), notice that, due to Theorem 1.5.8 in [6], $\bar{F} \in RV_{\theta-\alpha}$ implies that $H \in RV_{\theta}$. Hence, for any $x > 0$, we have

$$
H\!\left(\frac{a_n}{x}\right) = x^{-\theta} H(a_n)(1 + o(1)).
$$

Moreover, by Lemma 5.1 we can choose a sequence $\{a_n\}$ such that

$$
\frac{H(a_n)}{a_n^{\alpha}} = \frac{1}{n}(1 + o(1))
$$

as $n \to \infty$. Thus,

$$
\lim_{n \to \infty} \left[G\!\left(\frac{a_n}{x}\right)\right]^{n} = \lim_{n \to \infty} \left(F\!\left(\frac{a_n}{x}\right) - \frac{x^{\alpha}}{a_n^{\alpha}}\, H\!\left(\frac{a_n}{x}\right)\right)^{n} = e^{-x^{\alpha-\theta}}.
$$

Due to Proposition 3.1 (i), there exists a random variable $X$ with cdf (10) and pdf (11) such that $e^{-x^{\alpha-\theta}}$ is its Williamson transform. This completes the proof.

5.6 Proof of Theorem 4.6

Proof. Let $0 =: t_0 \leq t_1 \leq t_2 \leq \cdots \leq t_k$, where $k \in \mathbb{N}$. By Theorem 3.4, the distribution of $(Z_n(t_1), Z_n(t_2), \dots, Z_n(t_k))$ is given by

$$
P(Z_n(t_1) \leq z_1, Z_n(t_2) \leq z_2, \dots, Z_n(t_k) \leq z_k)
= P\!\left(X_{[nt_1]} \leq a_n z_1,\, X_{[nt_2]} \leq a_n z_2,\, \dots,\, X_{[nt_k]} \leq a_n z_k\right)
$$
$$
= \sum_{(\epsilon_1, \epsilon_2, \dots, \epsilon_k) \in \{0,1\}^{k}} \prod_{i=1}^{s-1} \Psi\!\left(\frac{\tilde{z}_{\epsilon_i}}{\tilde{z}_{\epsilon_{i+1}}}\right) \prod_{j=1}^{k} \left(G(a_n z_j)\right)^{[nt_j]-[nt_{j-1}]-\epsilon_j} \left(\frac{([nt_j]-[nt_{j-1}])\, H(a_n z_j)}{a_n^{\alpha} z_j^{\alpha}}\right)^{\epsilon_j}, \tag{30}
$$

where $\prod_{i=1}^{s-1} \Psi\!\left(\frac{\tilde{z}_{\epsilon_i}}{\tilde{z}_{\epsilon_{i+1}}}\right) = 1$ for $s \in \{0,1\}$.

In an analogous way to the proof of Proposition 4.5 (i), we obtain

$$
\lim_{n \to \infty} \left(G(a_n z_j)\right)^{[nt_j]-[nt_{j-1}]-\epsilon_j} = \lim_{n \to \infty} \left(F(a_n z_j) - \frac{H(a_n z_j)}{(a_n z_j)^{\alpha}}\right)^{[nt_j]-[nt_{j-1}]-\epsilon_j} = \exp\!\left\{-m_{\nu}^{(\alpha)} (t_j - t_{j-1})\, z_j^{-\alpha}\right\} \tag{31}
$$


and

$$
\lim_{n \to \infty} \left(\frac{([nt_j]-[nt_{j-1}])\, H(a_n z_j)}{a_n^{\alpha} z_j^{\alpha}}\right)^{\epsilon_j} = \left((t_j - t_{j-1})\, z_j^{-\alpha}\, m_{\nu}^{(\alpha)}\right)^{\epsilon_j}. \tag{32}
$$

In order to complete the proof of (i), it suffices to let $n \to \infty$ in (30) and apply (31) and (32).
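The limits (31) and (32) can be observed numerically for a concrete choice of $\nu$ (our own illustration, not from the paper): take $\nu$ uniform on $[0,1]$, so $F(t) = \min(t,1)$, $H(t) = \min(t,1)^{\alpha+1}/(\alpha+1)$, $m_{\nu}^{(\alpha)} = 1/(\alpha+1)$, and $a_n = n^{1/\alpha}$:

```python
import math

alpha = 2.0
m = 1.0 / (alpha + 1.0)                # m_nu^(alpha) for nu = Uniform(0, 1)
t_prev, t_cur, z = 0.5, 2.0, 1.3       # one time increment t_{j-1} < t_j and a level z_j

def factors(n, eps):
    a_n = n ** (1.0 / alpha)
    steps = math.floor(n * t_cur) - math.floor(n * t_prev)
    F = min(a_n * z, 1.0)                                 # cdf of Uniform(0, 1)
    H = min(a_n * z, 1.0) ** (alpha + 1) / (alpha + 1)    # truncated alpha-moment
    g = (F - H / (a_n * z) ** alpha) ** (steps - eps)     # factor appearing in (31)
    h = (steps * H / (a_n ** alpha * z ** alpha)) ** eps  # factor appearing in (32)
    return g, h

g, h = factors(10 ** 6, 1)
assert abs(g - math.exp(-m * (t_cur - t_prev) * z ** (-alpha))) < 1e-3
assert abs(h - (t_cur - t_prev) * z ** (-alpha) * m) < 1e-3
```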

In order to prove (ii), notice that, similarly to the proof of Proposition 4.5 (ii), for any $t_j, z_j > 0$ and $1 \leq j \leq k$, we obtain

$$
\lim_{n \to \infty} \left(G(a_n z_j)\right)^{[nt_j]-[nt_{j-1}]-\epsilon_j} = \lim_{n \to \infty} \left(F(a_n z_j) - \frac{H(a_n z_j)}{(a_n z_j)^{\alpha}}\right)^{[nt_j]-[nt_{j-1}]-\epsilon_j} = \exp\!\left\{-(t_j - t_{j-1})\, z_j^{\theta-\alpha}\right\} \tag{33}
$$

and

$$
\lim_{n \to \infty} \left(\frac{([nt_j]-[nt_{j-1}])\, H(a_n z_j)}{a_n^{\alpha} z_j^{\alpha}}\right)^{\epsilon_j} = \left((t_j - t_{j-1})\, z_j^{\theta-\alpha}\right)^{\epsilon_j}. \tag{34}
$$

In order to complete the proof, it suffices to let $n \to \infty$ in (30) and apply (33) and (34).

Acknowledgements. B. Jasiulis-Gołdyn and E. Omey were supported by the project "First order Kendall maximal autoregressive processes and their applications", carried out within the POWROTY/REINTEGRATION programme of the Foundation for Polish Science, co-financed by the European Union under the European Regional Development Fund.

References

[1] M.T. Alpuim, N.A. Catkan, J. Hüsler, Extremes and clustering of non-stationary max-AR(1) sequences. Stoch. Proc. Appl., 56, 171–184, 1995.

[2] B.C. Arnold, Pareto Processes. Stochastic Processes: Theory and Methods, Handbook of Statistics, 19, 1–33, 2001.

[3] B.C. Arnold, Pareto Distributions. Monographs on Statistics and Applied Probability, 140, Taylor & Francis Group, 2015.

[4] N.H. Bingham, Factorization theory and domains of attraction for generalized convolution algebras. Proc. London Math. Soc., 23(4), 16–30, 1971.

[5] N.H. Bingham, On a theorem of Kłosowska about generalized convolutions. Coll. Math., 48(1), 117–125, 1984.

[6] N.H. Bingham, C.M. Goldie, J.L. Teugels, Regular Variation. Cambridge University Press, Cambridge, 1987.

[7] M. Borowiecka-Olszewska, B.H. Jasiulis-Gołdyn, J.K. Misiewicz, J. Rosiński, Lévy processes and stochastic integral in the sense of generalized convolution. Bernoulli, 21(4), 2513–2551, 2015.

[8] P. Embrechts, C. Klüppelberg, T. Mikosch, Modelling Extremal Events for Insurance and Finance. Springer, Berlin, 1997.

[9] M. Ferreira, On the extremal behavior of a Pareto process: an alternative for ARMAX modeling. Kybernetika, 48(1), 31–49, 2012.

[10] M. Ferreira, L. Canto e Castro, Modeling rare events through a pRARMAX process. Journal of Statistical Planning and Inference, 140, 3552–3566, 2010.

[11] C. Genest, J. Nešlehová, L.-P. Rivest, The class of multivariate max-id copulas with l1-norm symmetric exponent measure. Bernoulli, 24(4B), 3751–3790, 2018.

[12] J. Gilewski, Generalized convolutions and delphic semigroups. Coll. Math., 25, 281–289, 1972.

[13] J. Gilewski, K. Urbanik, Generalized convolutions and generating functions. Bull. Acad. Sci. Polon. Ser. Math. Astr. Phys., 16, 481–487, 1968.

[14] B.H. Jasiulis-Gołdyn, Kendall random walks. Probab. Math. Stat., 36(1), 165–185, 2016.

[15] B.H. Jasiulis-Gołdyn, A. Kula, The Urbanik generalized convolutions in the non-commutative probability and a forgotten method of constructing generalized convolution. Proceedings - Math. Sci., 122(3), 437–458, 2012.

[16] B.H. Jasiulis-Gołdyn, J.K. Misiewicz, On the Uniqueness of the Kendall Generalized Convolution. J. Theor. Probab., 24(3), 746–755, 2011.

[17] B.H. Jasiulis-Gołdyn, J.K. Misiewicz, Classical definitions of the Poisson process do not coincide in the case of weak generalized convolution. Lith. Math. J., 55(4), 518–542, 2015.

[18] B.H. Jasiulis-Gołdyn, J.K. Misiewicz, Kendall random walk, Williamson transform and the corresponding Wiener-Hopf factorization. Lith. Math. J., 57(4), 479–489, 2017.

[19] B.H. Jasiulis-Gołdyn, K. Naskręt, J.K. Misiewicz, E. Omey, Renewal theory for extremal Markov sequences of the Kendall type. To appear in: Stoch. Proc. Appl., 2019.

[20] D.G. Kendall, Delphic semi-groups, infinitely divisible regenerative phenomena, and the arithmetic of p-functions. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 9(3), 163–195, 1968.

[21] J.F.C. Kingman, Random Walks with Spherical Symmetry. Acta Math., 109(1), 11–53, 1963.

[22] M. Larsson, J. Nešlehová, Extremal behavior of Archimedean copulas. Adv. Appl. Probab., 43, 195–216, 2011.

[23] P.A.W. Lewis, Ed McKenzie, Minification Processes and Their Transformations. Journal of Applied Probability, 28(1), 45–57, 1991.

[24] J. Lopez-Diaz, M. Angeles Gil, P. Grzegorzewski, O. Hryniewicz, J. Lawry, Soft Methodology and Random Information Systems. Advances in Intelligent and Soft Computing, Springer, 2004.

[25] A.J. McNeil, J. Nešlehová, From Archimedean to Liouville Copulas. J. Multivariate Anal., 101(8), 1771–1790, 2010.

[26] A.J. McNeil, J. Nešlehová, Multivariate Archimedean Copulas, d-monotone Functions and l1-norm Symmetric Distributions. Ann. Statist., 37(5B), 3059–3097, 2009.

[27] J. Misiewicz, Generalized convolutions and the Levi-Civita functional equation. Aequationes Mathematicae, 92(5), 911–933, 2018.

[28] J. Misiewicz, V. Volkovich, Symmetric weakly-stable random vector is pseudo-isotropic. To appear in: J. Math. Anal. Appl., 2019.

[29] K. Urbanik, Generalized convolutions I–V. Studia Math., 23 (1964), 217–245; 45 (1973), 57–70; 80 (1984), 167–189; 83 (1986), 57–95; 91 (1988), 153–178.

[30] R.E. Williamson, Multiply monotone functions and their Laplace transforms. Duke Math. J., 23, 189–207, 1956.

[31] H.Ch. Yeh, B.C. Arnold, C.A. Robertson, Pareto Processes. Journal of Applied Probability, 25(2), 291–301, 1988.