# Simulation of Weakly Self-Similar Stationary Increment $\mathrm{Sub}_{\varphi}(\Omega)$-Processes: A Series Expansion Approach

Authors: Yuriy Kozachenko, Tommi Sottinen, Olga Vasylyk

## Abstract

We consider simulation of $\mathrm{Sub}_{\varphi}(\Omega)$-processes that are weakly self-similar with stationary increments in the sense that they have the covariance function $$R(t,s) = \frac{1}{2}\left(t^{2H} + s^{2H} - |t-s|^{2H}\right)$$ for some $H \in (0,1)$. This means that the second order structure of the processes is that of the fractional Brownian motion. Also, if $H > \frac{1}{2}$ then the process is long-range dependent. The simulation is based on a series expansion of the fractional Brownian motion due to Dzhaparidze and van Zanten. We prove an estimate of the accuracy of the simulation in the space $C([0,1])$ of continuous functions equipped with the usual sup-norm. The result holds also for the fractional Brownian motion, which may be considered as a special case of a $\mathrm{Sub}_{x^2/2}(\Omega)$-process.
YURIY KOZACHENKO yvk@univ.kiev.ua
Mechanics and Mathematics Faculty, Department of Probability Theory and Math. Statistics, Taras Shevchenko Kyiv National University, Volodymyrska 64, Kyiv, Ukraine
TOMMI SOTTINEN tommi.sottinen@helsinki.fi
Department of Mathematics and Statistics, University of Helsinki, P.O. Box 68 (Gustaf Hällströmin katu 2b), 00014 Helsinki, Finland
OLGA VASYLYK vasylyk@univ.kiev.ua
Mechanics and Mathematics Faculty, Department of Probability Theory and Math. Statistics, Taras Shevchenko Kyiv National University, Volodymyrska 64, Kyiv, Ukraine
Received November 10, 2004; Revised October 5, 2005; Accepted July 6, 2005
Keywords: fractional Brownian motion, $\varphi$-sub-Gaussian processes, long-range dependence, self-similarity, series expansions, simulation
AMS 2000 Subject Classification: 60G18, 60G15, 68U20, 33C10
1. Introduction
We consider simulation of centred second order processes defined on the interval [0, 1] whose covariance function is
$$R(t,s) = \frac{1}{2}\left(t^{2H} + s^{2H} - |t-s|^{2H}\right),$$
and which belong to the space $\mathrm{Sub}_{\varphi}(\Omega)$. This space is defined later in Section 2. The parameter $H$ takes values in the interval $(0,1)$; the other cases are either uninteresting or impossible.
Methodology and Computing in Applied Probability, 7, 379–400, 2005
© 2005 Springer Science + Business Media, Inc. Manufactured in The Netherlands.
The motivation to study processes with the second order structure given by $R$ comes from the notions of statistical self-similarity and long-range dependence. A square integrable process with stationary increments is long-range dependent if its autocorrelation function is not summable. A process $Z$ is self-similar with index $H$ if it satisfies the scaling property
$$(Z_t)_{t\ge 0} \;\stackrel{d}{=}\; \bigl(a^{-H}Z_{at}\bigr)_{t\ge 0}$$
for all $a > 0$. Here $\stackrel{d}{=}$ means equality in distribution. The self-similarity parameter $H \in (0,1)$, or Hurst index, also has the following role. If $H \neq \frac{1}{2}$ then $Z$ is a process with dependent increments. There are self-similar processes with independent increments and index $H \neq \frac{1}{2}$, but these are not square integrable processes. If $H > \frac{1}{2}$ then the increments of the process $Z$ are long-range dependent. The case $H < \frac{1}{2}$ corresponds to short-range dependence.
These properties, self-similarity and long-range dependence, have been shown to be characteristic in, e.g., teletraffic and financial time series. See (Beran, 1994; Doukhan et al., 2003; Embrechts and Maejima, 2002; Samorodnitsky and Taqqu, 1994) for references to these studies and for self-similarity and long-range dependence in general.
Assume now that a process $Z$ is $H$-self-similar, has stationary increments, and is centred and square integrable. Then it is easy to see that $Z$ has $R$ as its covariance function. So, if a process has the covariance function $R$ we say that it is weakly self-similar with stationary increments, or second order self-similar with stationary increments. In the Gaussian case weak self-similarity and the proper one coincide. In this case $Z$ is called the fractional Brownian motion, and, in particular, the Brownian motion if $H = \frac{1}{2}$. The fractional Brownian motion was originally defined and studied by Kolmogorov (1940) under the name "Wiener helix". The name "fractional Brownian motion" comes from Mandelbrot and Van Ness (1968).
Recently Dzhaparidze and van Zanten (2004) proved a series representation for the fractional Brownian motion $B$:
$$B_t = \sum_{n=1}^{\infty}\frac{\sin(x_n t)}{x_n}\,X_n + \sum_{n=1}^{\infty}\frac{1-\cos(y_n t)}{y_n}\,Y_n. \qquad (1.1)$$
Here the $X_n$'s and the $Y_n$'s are independent zero mean Gaussian random variables with certain variances depending on $H$ and $n$. The $x_n$'s are the positive real zeros of the Bessel function $J_{-H}$ of the first kind, and the $y_n$'s are the positive real zeros of the Bessel function $J_{1-H}$. The series in (1.1) converge in mean square as well as uniformly on [0, 1] with probability 1. Details of representation (1.1) are recalled later in Section 3.
In this paper we study the use of the expansion (1.1) in simulating processes with the covariance function $R$. In particular, we study processes of the form (1.1) where the $X_n$'s and $Y_n$'s are replaced by independent random variables from the space $\mathrm{Sub}_{\varphi}(\Omega)$. The fractional Brownian motion may be obtained as a special case with $\varphi(x) = x^2/2$.
Let us end this introduction by saying a few words about the pros and cons of using the series expansion (1.1). The Hurst parameter $H$ is roughly the Hölder index of the process. This means that, especially in the case of small $H$, the sample paths of the process are very erratic. However, the coefficient functions in (1.1) are smooth. So, in order to have good accuracy in simulation one needs a large truncation point in the expansion. This is the bad news. The good news is that once the coefficient functions are calculated we are in no way restricted to any pregiven time grid. Indeed, unlike in some traditional simulation methods, to calculate the value of the sample path at a new time point one does not have to condition on the already calculated time points. The computational effort of adding a new time point is always constant.
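The constant per-point cost can be sketched as follows. This is only a structural illustration, not the paper's algorithm: the zeros and coefficients below are the closed-form $H=\tfrac12$ values standing in as placeholders (for other $H$ they must be computed numerically, see Section 3).

```python
import math
import random

rng = random.Random(0)
N = 500  # truncation point of the series

# Placeholder zeros and coefficients; for H = 1/2 these closed forms happen
# to be exact, for other H they come from Bessel-function zeros (Section 3).
x = [(n - 0.5) * math.pi for n in range(1, N + 1)]
y = [n * math.pi for n in range(1, N + 1)]
c = [1.0 / xn for xn in x]
d = [1.0 / yn for yn in y]

# The random coefficients are drawn once and shared by every time point.
xi = [rng.gauss(0.0, 1.0) for _ in range(N)]
eta = [rng.gauss(0.0, 1.0) for _ in range(N)]

def Z(t):
    """Evaluate the truncated series at an arbitrary t in [0, 1]; the cost is
    O(N) regardless of how many points were evaluated before, and no
    conditioning on previously simulated points is needed."""
    return sum(c[n] * math.sin(x[n] * t) * xi[n]
               + d[n] * (1.0 - math.cos(y[n] * t)) * eta[n]
               for n in range(N))

# Time points may be added in any order.
values = [Z(t) for t in (0.3, 0.71, 0.05)]
```
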
2. The Space $\mathrm{Sub}_{\varphi}(\Omega)$
We recall some basic facts about the space $\mathrm{Sub}_{\varphi}(\Omega)$ of $\varphi$-sub-Gaussian (or generalised sub-Gaussian) random variables. For details and proofs we refer to Buldygin and Kozachenko (2000) and Krasnoselskii and Rutitskii (1958).
DEFINITION 2.1 (Krasnoselskii and Rutitskii, 1958) A continuous even convex function $u = \{u(x),\ x\in\mathbb{R}\}$ is an Orlicz N-function if it is strictly increasing for $x > 0$, $u(0) = 0$,
$$\frac{u(x)}{x}\to 0 \ \text{ as } x\to 0 \qquad\text{and}\qquad \frac{u(x)}{x}\to\infty \ \text{ as } x\to\infty.$$
PROPOSITION 2.2 (Krasnoselskii and Rutitskii, 1958) The function $u$ is an Orlicz N-function if and only if
$$u(x) = \int_0^{|x|} l(v)\,dv, \qquad x\in\mathbb{R},$$
where the density function $l$ is nondecreasing, right continuous, $l(v) > 0$ for $v > 0$, $l(0) = 0$ and $l(v)\to\infty$ as $v\to\infty$.
DEFINITION 2.3 Let $u$ be an Orlicz N-function. The even function $u^* = \{u^*(x),\ x\in\mathbb{R}\}$ defined by the formula
$$u^*(x) = \sup_{y>0}\bigl(xy - u(y)\bigr), \qquad x\ge 0,$$
is the Young–Fenchel transformation of the function $u$.
PROPOSITION 2.4 (Krasnoselskii and Rutitskii, 1958) The function $u^*$ is an Orlicz N-function and for $x > 0$
$$u^*(x) = xy_0 - u(y_0) \qquad\text{if } y_0 = l^{(-1)}(x).$$
Here $l^{(-1)}$ is the generalised inverse function of $l$, i.e.,
$$l^{(-1)}(x) := \sup\{v\ge 0 : l(v)\le x\}.$$
DEFINITION 2.5 The assumption Q holds for an Orlicz N-function $\varphi$ if it is quadratic around the origin, i.e., there exist constants $x_0 > 0$ and $C > 0$ such that $\varphi(x) = Cx^2$ for $|x|\le x_0$.
EXAMPLE 2.6 The assumption Q holds for the following Orlicz N-functions:
$$\varphi(x) = \begin{cases}\dfrac{|x|^p}{p}, & |x| > 1,\\[4pt] \dfrac{x^2}{p}, & |x|\le 1,\end{cases}\qquad p > 1;$$
$$\varphi(x) = \begin{cases} e^{2/\alpha}\Bigl(\dfrac{\alpha}{2}\Bigr)^{2/\alpha}x^2, & |x|\le \Bigl(\dfrac{2}{\alpha}\Bigr)^{1/\alpha},\\[4pt] \exp\{|x|^{\alpha}\}, & |x| > \Bigl(\dfrac{2}{\alpha}\Bigr)^{1/\alpha},\end{cases}\qquad 0 < \alpha < 1.$$
Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a standard probability space.
DEFINITION 2.7 Let $\varphi$ be an Orlicz N-function satisfying the assumption Q. A zero mean random variable $\xi$ belongs to the space $\mathrm{Sub}_{\varphi}(\Omega)$, the space of $\varphi$-sub-Gaussian random variables, if there exists a positive and finite constant $a$ such that the inequality
$$\mathbb{E}\exp\{\lambda\xi\} \le \exp\{\varphi(a\lambda)\}$$
holds for all $\lambda\in\mathbb{R}$.
REMARK 2.8 Note that, like Gaussian variables, the $\varphi$-sub-Gaussian random variables have light tails. In particular, they have moments of all orders.
PROPOSITION 2.9 (Buldygin and Kozachenko, 2000) The space $\mathrm{Sub}_{\varphi}(\Omega)$ is a Banach space with respect to the norm
$$\tau_{\varphi}(\xi) = \inf\bigl\{a\ge 0 :\ \mathbb{E}\exp\{\lambda\xi\}\le\exp\{\varphi(a\lambda)\},\ \lambda\in\mathbb{R}\bigr\}.$$
Moreover, for any $\lambda\in\mathbb{R}$ we have
$$\mathbb{E}\exp\{\lambda\xi\} \le \exp\bigl\{\varphi\bigl(\lambda\,\tau_{\varphi}(\xi)\bigr)\bigr\}, \qquad \bigl(\mathbb{E}\xi^2\bigr)^{1/2} \le (2C)^{1/2}\,\tau_{\varphi}(\xi),$$
where $C$ is the constant from the assumption Q.
DEFINITION 2.10 A stochastic process $X = (X_t)_{t\in[0,1]}$ is a $\mathrm{Sub}_{\varphi}(\Omega)$-process if it is a bounded family of $\mathrm{Sub}_{\varphi}(\Omega)$ random variables: $X_t\in\mathrm{Sub}_{\varphi}(\Omega)$ for all $t\in[0,1]$ and
$$\sup_{t\in[0,1]}\tau_{\varphi}(X_t) < \infty.$$
The properties of random variables from the spaces $\mathrm{Sub}_{\varphi}(\Omega)$ were studied in the book (Buldygin and Kozachenko, 2000).
REMARK 2.11 When $\varphi(x) = \frac{x^2}{2}$ the space $\mathrm{Sub}_{\varphi}(\Omega)$ is called the space of sub-Gaussian random variables and is denoted by $\mathrm{Sub}(\Omega)$. Centred Gaussian random variables belong to the space $\mathrm{Sub}(\Omega)$, and in this case $\tau(\xi)$ is just the standard deviation $(\mathbb{E}\xi^2)^{1/2}$. Also, if $\xi$ is bounded, i.e., $|\xi|\le c$ a.s., then $\xi\in\mathrm{Sub}(\Omega)$ and $\tau(\xi)\le c$.
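The last claim is easy to check numerically in a concrete case (our own illustration, not from the paper): for a symmetric $\pm c$ random variable, $\mathbb{E}\exp\{\lambda\xi\} = \cosh(c\lambda) \le \exp\{c^2\lambda^2/2\}$, so $a = c$ is admissible in Definition 2.7 and hence $\tau(\xi)\le c$.

```python
import math

def mgf_pm_c(lam, c):
    """Moment generating function of a symmetric +/-c random variable."""
    return math.cosh(c * lam)

def subgaussian_bound(lam, a):
    """exp(phi(a * lam)) for phi(x) = x^2 / 2."""
    return math.exp((a * lam) ** 2 / 2.0)

# Verify E exp(lam * xi) <= exp(phi(c * lam)) over a grid of lambdas,
# i.e. that a = c is admissible in Definition 2.7.
c = 1.7
ok = all(mgf_pm_c(lam, c) <= subgaussian_bound(lam, c)
         for lam in [k / 10.0 for k in range(-50, 51)])
```

The inequality $\cosh x \le e^{x^2/2}$ actually holds for all $x$ (compare the power series term by term), so the grid check always passes.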
PROPOSITION 2.12 Let $\varphi$ be an Orlicz N-function satisfying the assumption Q. Assume further that the function $x\mapsto\varphi(\sqrt{x})$ is convex. Let $\xi_1, \xi_2, \ldots, \xi_n$ be independent random variables from the space $\mathrm{Sub}_{\varphi}(\Omega)$. Then
$$\tau_{\varphi}^2\Bigl(\sum_{i=1}^{n}\xi_i\Bigr) \le \sum_{i=1}^{n}\tau_{\varphi}^2(\xi_i).$$
3. Series Representation
Let us now recall the Dzhaparidze–van Zanten series representation (1.1) in detail. Let $J_{\nu}$ be the Bessel function of the first kind of order $\nu$, i.e.,
$$J_{\nu}(x) = \sum_{n=0}^{\infty}\frac{(-1)^n(x/2)^{\nu+2n}}{\Gamma(n+1)\,\Gamma(\nu+n+1)}.$$
Here $x > 0$, $\nu\neq -1, -2, \ldots$ and $\Gamma$ denotes the Euler Gamma function
$$\Gamma(z) = \int_0^{\infty}t^{z-1}e^{-t}\,dt.$$
It is well known that for $\nu > -1$ the Bessel function $J_{\nu}$ has a countable number of real positive zeros tending to infinity. Denote by $x_n$ the $n$th positive real zero of the Bessel function $J_{-H}$; $y_n$ is the $n$th positive real zero of $J_{1-H}$.
Let $B$ be the fractional Brownian motion with index $H$. Then it may be represented as the mean square convergent series
$$B_t = \sum_{n=1}^{\infty}c_n\sin(x_n t)\,\widetilde{X}_n + \sum_{n=1}^{\infty}d_n\bigl(1-\cos(y_n t)\bigr)\,\widetilde{Y}_n.$$
Here $\widetilde{X}_n, \widetilde{Y}_n$, $n = 1, 2, \ldots$, are independent and identically distributed zero mean Gaussian random variables with $\mathbb{E}\widetilde{X}_n^2 = \mathbb{E}\widetilde{Y}_n^2 = 1$ and
$$c_n = \frac{\pi^{H}\sqrt{2c}}{x_n^{H+1}\,J_{1-H}(x_n)}, \qquad n = 1, 2, \ldots, \qquad (3.1)$$
$$d_n = \frac{\pi^{H}\sqrt{2c}}{y_n^{H+1}\,J_{-H}(y_n)}, \qquad n = 1, 2, \ldots, \qquad (3.2)$$
$$c = \frac{\Gamma(2H+1)\sin(\pi H)}{\pi^{2H+1}}. \qquad (3.3)$$
We shall generalise the setting above in the following way. Define a process $Z = (Z_t)_{t\in[0,1]}$ by the expansion
$$Z_t = \sum_{n=1}^{\infty}c_n\sin(x_n t)\,\xi_n + \sum_{n=1}^{\infty}d_n\bigl(1-\cos(y_n t)\bigr)\,\eta_n, \qquad (3.4)$$
where $c_n$ and $d_n$ are given by (3.1) and (3.2), and $\xi_n, \eta_n$, $n = 1, 2, \ldots$, are independent identically distributed centred random variables from the space $\mathrm{Sub}_{\varphi}(\Omega)$ with $\mathbb{E}\xi_n^2 = \mathbb{E}\eta_n^2 = 1$. Furthermore, we shall assume that the function $x\mapsto\varphi(\sqrt{x})$ is convex.
Since $\varphi$-sub-Gaussian random variables are square integrable, we have the following direct consequence of the series representation (1.1) for fractional Brownian motion.
PROPOSITION 3.1 The series (3.4) converges in mean square and the covariance function of the process $Z$ is $R$.
Besides $L^2$-convergence, the spaces $\mathrm{Sub}_{\varphi}(\Omega)$ are nice enough to allow uniform $\omega$-by-$\omega$ convergence.
THEOREM 3.2 The series (3.4) converges uniformly with probability one and the process $Z$ is almost surely continuous on [0, 1]. Moreover, if $Z$ is strongly self-similar with stationary increments then it is $\beta$-Hölder continuous with any index $\beta < H$.
The continuity in Theorem 3.2 follows by using Hunt's theorem (Hunt, 1951). The Hölder continuity comes from Kolmogorov's criterion (Embrechts and Maejima, 2002, Lemma 4.1.1). Let us also note that from the case of fractional Brownian motion we know that in general we cannot have Hölder continuity with index $\beta = H$, cf. (Decreusefond and Üstünel, 1999).
For the reader's convenience we now recite a modification of Hunt's theorem as a lemma (cf. Buldygin and Kozachenko, 2000, Example 3.5.2).
LEMMA 3.3 Suppose that $(\xi_n)_{n\ge1}$ is a sequence of independent centred random variables with $\mathbb{E}\xi_n^2 = 1$, $n = 1, 2, \ldots$. Let $(\lambda_n)_{n\ge1}$ be a sequence such that $\lambda_n\le\lambda_{n+1}$ and $\lambda_n\to\infty$ as $n\to\infty$. If
$$\sum_{n=1}^{\infty}a_n^2\bigl(\ln(1+\lambda_n)\bigr)^{1+\varepsilon} < \infty$$
for some $\varepsilon > 0$ then the series
$$\sum_{n=1}^{\infty}a_n\cos(\lambda_n t)\,\xi_n \qquad\text{and}\qquad \sum_{n=1}^{\infty}a_n\sin(\lambda_n t)\,\xi_n$$
converge uniformly on [0, 1] with probability one.
Proof of Theorem 3.2: Let us consider the almost sure uniform convergence first. From Watson (1944), p. 506, we have $x_n\sim y_n\sim\pi n$ as $n\to\infty$. Also from (Watson, 1944), p. 200, we have the following asymptotic relation for the Bessel function $J_{\nu}$, $\nu > -1$:
$$J_{\nu}^2(x) + J_{\nu+1}^2(x) \sim \frac{2}{\pi x}$$
for large $|x|$. Since the zeros $x_n$ of $J_{-H}$ and $y_n$ of $J_{1-H}$ tend to infinity, this yields
$$J_{1-H}^2(x_n) \sim \frac{2}{\pi x_n} \qquad\text{and}\qquad J_{-H}^2(y_n) \sim \frac{2}{\pi y_n}$$
as $n\to\infty$. Therefore,
$$c_n^2 \sim \frac{c}{n^{2H+1}} \qquad\text{and}\qquad d_n^2 \sim \frac{c}{n^{2H+1}} \qquad (3.5)$$
(see (3.1)–(3.3)). Consequently, the series
$$\sum_{n=1}^{\infty}c_n^2\bigl(\ln(1+x_n)\bigr)^{1+\varepsilon} \qquad\text{and}\qquad \sum_{n=1}^{\infty}d_n^2\bigl(\ln(1+y_n)\bigr)^{1+\varepsilon}$$
converge for all $\varepsilon > 0$. The almost sure uniform convergence and the continuity of the process now follow from Hunt's theorem (Lemma 3.3), since almost sure uniform convergence of the series $\sum_{n=1}^{\infty}d_n\cos(y_n t)\,\eta_n$ and almost sure convergence of the series $\sum_{n=1}^{\infty}d_n\,\eta_n$ imply that $\sum_{n=1}^{\infty}d_n\bigl(1-\cos(y_n t)\bigr)\eta_n$ also converges uniformly on [0, 1] with probability one.
To see the Hölder continuity of $Z$, just use strong self-similarity and the stationarity of the increments together with the fact that $Z$ has all moments. Indeed, for all $n\in\mathbb{N}$ we have
$$\mathbb{E}|Z_t - Z_s|^n = \mathbb{E}|Z_{t-s}|^n = |t-s|^{Hn}\,\mathbb{E}|Z_1|^n,$$
and the claim follows from Kolmogorov's criterion. □
4. Simulation, Accuracy and Reliability
We want to construct a model $\widetilde{Z}$ of the process $Z$ such that $\widetilde{Z}$ approximates $Z$ with given reliability and accuracy in the norm of some Banach space. In this paper we consider the space $C([0,1])$ equipped with the usual sup-norm.
DEFINITION 4.1 The model $\widetilde{Z}$ approximates the process $Z$ with given reliability $1-\delta$, $0 < \delta < 1$, and accuracy $\varepsilon > 0$ in $C([0,1])$ if
$$\mathbb{P}\Bigl(\sup_{t\in[0,1]}\bigl|Z_t - \widetilde{Z}_t\bigr| > \varepsilon\Bigr) \le \delta.$$
A natural model for $Z$, defined by the expansion (3.4), would be the truncated series
$$\sum_{n=1}^{N}\bigl(c_n\sin(x_n t)\,\xi_n + d_n(1-\cos(y_n t))\,\eta_n\bigr).$$
However, it is realistic to assume that the constants $c_n$ and $d_n$ and the zeros $x_n, y_n$ are only calculated approximately, especially since there are fast-to-compute asymptotic formulas for the zeros $x_n$ and $y_n$ (cf. Watson, 1944, p. 506). Note that the constants $c_n$ and $d_n$ depend on the zeros.
Let $\widetilde{c}_n$ and $\widetilde{d}_n$ be the approximated values of the $c_n$ and $d_n$, respectively. Let
$$|\widetilde{c}_n - c_n| \le \Delta_n^c, \qquad |\widetilde{d}_n - d_n| \le \Delta_n^d,$$
$n = 1, \ldots, N$. The errors $\Delta_n^c$ and $\Delta_n^d$ are assumed to be known. Let $\widetilde{x}_n$ and $\widetilde{y}_n$ be approximations of the corresponding zeros $x_n$ and $y_n$ with error bounds
$$|\widetilde{x}_n - x_n| \le \Delta_n^x, \qquad |\widetilde{y}_n - y_n| \le \Delta_n^y.$$
The error bounds $\Delta_n^x$ and $\Delta_n^y$ are also assumed to be known.
Then, the model of the process $Z$ is
$$\widetilde{Z}_t = \sum_{n=1}^{N}\bigl(\widetilde{c}_n\sin(\widetilde{x}_n t)\,\xi_n + \widetilde{d}_n(1-\cos(\widetilde{y}_n t))\,\eta_n\bigr). \qquad (4.1)$$
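A direct implementation of the model (4.1) is short; the sketch below is ours, with the closed-form $H=\tfrac12$ zeros standing in for the approximate values and Rademacher innovations chosen to illustrate the $\mathrm{Sub}_{\varphi}(\Omega)$ generality (any iid centred unit-variance $\varphi$-sub-Gaussian family would do, Gaussian included).

```python
import math
import random

def model_path(ts, c_til, x_til, d_til, y_til, xi, eta):
    """The truncated model (4.1) evaluated at the time points ts.

    c_til, x_til, d_til, y_til: (approximately) computed coefficients and
    Bessel-function zeros; xi, eta: iid centred unit-variance innovations.
    """
    N = len(c_til)
    return [sum(c_til[n] * math.sin(x_til[n] * t) * xi[n]
                + d_til[n] * (1.0 - math.cos(y_til[n] * t)) * eta[n]
                for n in range(N))
            for t in ts]

rng = random.Random(42)
N = 200
# H = 1/2 stand-ins, where zeros and coefficients are exact in closed form.
x_til = [(n - 0.5) * math.pi for n in range(1, N + 1)]
y_til = [n * math.pi for n in range(1, N + 1)]
c_til = [1.0 / v for v in x_til]
d_til = [1.0 / v for v in y_til]

# Rademacher innovations: bounded, hence sub-Gaussian, with unit variance.
xi = [rng.choice([-1.0, 1.0]) for _ in range(N)]
eta = [rng.choice([-1.0, 1.0]) for _ in range(N)]

ts = [k / 100.0 for k in range(101)]
path = model_path(ts, c_til, x_til, d_til, y_til, xi, eta)
```
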
The error in the simulation (model) is
$$\Delta_t := Z_t - \widetilde{Z}_t = \sum_{n=1}^{N}\Bigl\{\bigl(c_n\sin(x_n t)-\widetilde{c}_n\sin(\widetilde{x}_n t)\bigr)\xi_n + \bigl(d_n(1-\cos(y_n t))-\widetilde{d}_n(1-\cos(\widetilde{y}_n t))\bigr)\eta_n\Bigr\}$$
$$\qquad\qquad + \sum_{n=N+1}^{\infty}\bigl\{c_n\sin(x_n t)\,\xi_n + d_n(1-\cos(y_n t))\,\eta_n\bigr\} =: \Delta_t^{\mathrm{appr}} + \Delta_t^{\mathrm{cut}}.$$
In order to bound the error $\Delta$ in $C([0,1])$ we need estimates for $\tau_{\varphi}(\Delta_t)$ and $\tau_{\varphi}(\Delta_t-\Delta_s)$ for all $s, t\in[0,1]$. The estimates are given in the following proposition.
PROPOSITION 4.2 Denote $a := \tau_{\varphi}(\xi_n) = \tau_{\varphi}(\eta_n)$ and
$$\Delta_{\mathrm{cut}} := a^2\sum_{n=N+1}^{\infty}\bigl(c_n^2 + 4d_n^2\bigr),$$
$$\Delta_{\mathrm{appr}} := a^2\sum_{n=1}^{N}\Bigl\{\bigl(c_n\Delta_n^x + \Delta_n^c\bigr)^2 + \bigl(d_n\Delta_n^y + 2\Delta_n^d\bigr)^2\Bigr\}.$$
Let $\alpha\in(0,H)$ and denote
$$\Delta_{\mathrm{cut}}^{\alpha} := 2^{2-2\alpha}a^2\sum_{n=N+1}^{\infty}\bigl(c_n^2x_n^{2\alpha} + d_n^2y_n^{2\alpha}\bigr),$$
$$\Delta_{\mathrm{appr}}^{\alpha} := 2^{3-2\alpha}a^2\sum_{n=1}^{N}\Biggl\{x_n^{2\alpha}\bigl(\Delta_n^c\bigr)^2 + y_n^{2\alpha}\bigl(\Delta_n^d\bigr)^2 + 2^{3-2\alpha}\Biggl(\widetilde{c}_n^{\,2}\bigl(\Delta_n^x\bigr)^2\Bigl(\frac{(x_n+\widetilde{x}_n)^{2\alpha}}{2^{2\alpha}}+1\Bigr) + \widetilde{d}_n^{\,2}\bigl(\Delta_n^y\bigr)^2\Bigl(\frac{(y_n+\widetilde{y}_n)^{2\alpha}}{2^{2\alpha}}+1\Bigr)\Biggr)\Biggr\}.$$
Then we have for all $s, t\in[0,1]$
$$\tau_{\varphi}^2(\Delta_t) \le \Delta_{\mathrm{appr}} + \Delta_{\mathrm{cut}}, \qquad (4.2)$$
$$\tau_{\varphi}^2(\Delta_t-\Delta_s) \le \bigl(\Delta_{\mathrm{appr}}^{\alpha} + \Delta_{\mathrm{cut}}^{\alpha}\bigr)|t-s|^{2\alpha}. \qquad (4.3)$$
Proof: By Proposition 2.12 we know that
$$\tau_{\varphi}^2(\Delta_t) \le \tau_{\varphi}^2\bigl(\Delta_t^{\mathrm{appr}}\bigr) + \tau_{\varphi}^2\bigl(\Delta_t^{\mathrm{cut}}\bigr).$$
For $\tau_{\varphi}^2(\Delta_t^{\mathrm{appr}})$ we obtain, by using Proposition 2.12 and the mean value theorem, that
$$\tau_{\varphi}^2\bigl(\Delta_t^{\mathrm{appr}}\bigr) \le \sum_{n=1}^{N}\bigl(c_n\sin(x_n t)-\widetilde{c}_n\sin(\widetilde{x}_n t)\bigr)^2\tau_{\varphi}^2(\xi_n) + \sum_{n=1}^{N}\bigl(d_n(1-\cos(y_n t))-\widetilde{d}_n(1-\cos(\widetilde{y}_n t))\bigr)^2\tau_{\varphi}^2(\eta_n)$$
$$= a^2\Biggl\{\sum_{n=1}^{N}\bigl(c_n(\sin(x_n t)-\sin(\widetilde{x}_n t)) + (c_n-\widetilde{c}_n)\sin(\widetilde{x}_n t)\bigr)^2 + \sum_{n=1}^{N}\bigl(d_n(\cos(\widetilde{y}_n t)-\cos(y_n t)) + (d_n-\widetilde{d}_n)(1-\cos(\widetilde{y}_n t))\bigr)^2\Biggr\}$$
$$\le a^2\Biggl\{\sum_{n=1}^{N}\bigl(c_n|x_n-\widetilde{x}_n|\,t + |c_n-\widetilde{c}_n|\,|\sin(\widetilde{x}_n t)|\bigr)^2 + \sum_{n=1}^{N}\bigl(d_n|\widetilde{y}_n-y_n|\,t + |d_n-\widetilde{d}_n|(1-\cos(\widetilde{y}_n t))\bigr)^2\Biggr\}$$
$$\le a^2\sum_{n=1}^{N}\Bigl\{\bigl(c_n\Delta_n^x + \Delta_n^c\bigr)^2 + \bigl(d_n\Delta_n^y + 2\Delta_n^d\bigr)^2\Bigr\} = \Delta_{\mathrm{appr}}.$$
Similarly we obtain
$$\tau_{\varphi}^2\bigl(\Delta_t^{\mathrm{cut}}\bigr) \le a^2\sum_{n=N+1}^{\infty}\bigl(c_n^2 + 4d_n^2\bigr) = \Delta_{\mathrm{cut}}. \qquad (4.4)$$
Recall the asymptotics (3.5) of $c_n^2$ and $d_n^2$ to see that the series in (4.4) converges. The estimate (4.2) follows.
Now we shall estimate the incremental error $\tau_{\varphi}^2(\Delta_t-\Delta_s)$. For the "cut-off" part we have
$$\tau_{\varphi}^2\bigl(\Delta_t^{\mathrm{cut}}-\Delta_s^{\mathrm{cut}}\bigr) = \tau_{\varphi}^2\Biggl(\sum_{n=N+1}^{\infty}c_n\bigl(\sin(x_n t)-\sin(x_n s)\bigr)\xi_n + \sum_{n=N+1}^{\infty}d_n\bigl(\cos(y_n s)-\cos(y_n t)\bigr)\eta_n\Biggr)$$
$$\le 2^{2(1-\alpha)}a^2\sum_{n=N+1}^{\infty}\Bigl(c_n^2\bigl(x_n|t-s|\bigr)^{2\alpha} + d_n^2\bigl(y_n|t-s|\bigr)^{2\alpha}\Bigr)$$
$$= 2^{2(1-\alpha)}a^2\sum_{n=N+1}^{\infty}\bigl(c_n^2x_n^{2\alpha} + d_n^2y_n^{2\alpha}\bigr)|t-s|^{2\alpha} = \Delta_{\mathrm{cut}}^{\alpha}|t-s|^{2\alpha}. \qquad (4.5)$$
Due to the asymptotics (3.5) and $x_n\sim y_n\sim\pi n$, the series in (4.5) converges if $\alpha < H$.
For the "approximating" part we have
$$\tau_{\varphi}^2\bigl(\Delta_t^{\mathrm{appr}}-\Delta_s^{\mathrm{appr}}\bigr) \le a^2\Biggl\{\sum_{n=1}^{N}\bigl(c_n(\sin(x_n t)-\sin(x_n s)) - \widetilde{c}_n(\sin(\widetilde{x}_n t)-\sin(\widetilde{x}_n s))\bigr)^2$$
$$\qquad\qquad + \sum_{n=1}^{N}\bigl(d_n(\cos(y_n s)-\cos(y_n t)) - \widetilde{d}_n(\cos(\widetilde{y}_n s)-\cos(\widetilde{y}_n t))\bigr)^2\Biggr\}. \qquad (4.6)$$
By using the relations
$$|ab - cd| \le |a||b-d| + |a-c||d|, \qquad (a+b)^2 \le 2a^2 + 2b^2,$$
$$\sin x - \sin y = 2\cos\frac{x+y}{2}\sin\frac{x-y}{2}, \qquad \cos x - \cos y = 2\sin\frac{x+y}{2}\sin\frac{y-x}{2},$$
we obtain for the summand in (4.6) that
$$\bigl|c_n(\sin(x_n t)-\sin(x_n s)) - \widetilde{c}_n(\sin(\widetilde{x}_n t)-\sin(\widetilde{x}_n s))\bigr| \le |\sin(x_n t)-\sin(x_n s)|\,|c_n-\widetilde{c}_n| + \bigl|\sin(x_n t)-\sin(x_n s)-\sin(\widetilde{x}_n t)+\sin(\widetilde{x}_n s)\bigr|\,|\widetilde{c}_n|,$$
where
$$|\sin(x_n t)-\sin(x_n s)| = 2\Bigl|\sin\frac{x_n(t-s)}{2}\Bigr|\,\Bigl|\cos\frac{x_n(t+s)}{2}\Bigr| \le 2\Bigl(\frac{x_n|t-s|}{2}\Bigr)^{\alpha}$$
and
$$\bigl|\sin(x_n t)-\sin(x_n s)-\sin(\widetilde{x}_n t)+\sin(\widetilde{x}_n s)\bigr| = \bigl|(\sin(x_n t)-\sin(\widetilde{x}_n t)) - (\sin(x_n s)-\sin(\widetilde{x}_n s))\bigr|$$
$$= \Bigl|2\sin\frac{(x_n-\widetilde{x}_n)t}{2}\cos\frac{(x_n+\widetilde{x}_n)t}{2} - 2\sin\frac{(x_n-\widetilde{x}_n)s}{2}\cos\frac{(x_n+\widetilde{x}_n)s}{2}\Bigr|$$
$$\le 2\Bigl|\sin\frac{(x_n-\widetilde{x}_n)t}{2}\Bigr|\,\Bigl|\cos\frac{(x_n+\widetilde{x}_n)t}{2}-\cos\frac{(x_n+\widetilde{x}_n)s}{2}\Bigr| + 2\Bigl|\sin\frac{(x_n-\widetilde{x}_n)t}{2}-\sin\frac{(x_n-\widetilde{x}_n)s}{2}\Bigr|\,\Bigl|\cos\frac{(x_n+\widetilde{x}_n)s}{2}\Bigr|$$
$$= 4\Bigl|\sin\frac{(x_n-\widetilde{x}_n)t}{2}\Bigr|\,\Bigl|\sin\frac{(x_n+\widetilde{x}_n)(t+s)}{4}\Bigr|\,\Bigl|\sin\frac{(x_n+\widetilde{x}_n)(s-t)}{4}\Bigr| + 4\Bigl|\cos\frac{(x_n-\widetilde{x}_n)(t+s)}{4}\Bigr|\,\Bigl|\sin\frac{(x_n-\widetilde{x}_n)(t-s)}{4}\Bigr|\,\Bigl|\cos\frac{(x_n+\widetilde{x}_n)s}{2}\Bigr|$$
$$\le 4\Biggl(\frac{|x_n-\widetilde{x}_n|\,t}{2}\Bigl(\frac{(x_n+\widetilde{x}_n)|s-t|}{4}\Bigr)^{\alpha} + \frac{|x_n-\widetilde{x}_n|\,|t-s|}{4}\Biggr)$$
$$\le 2^{2-2\alpha}\,\Delta_n^x\Bigl(\Bigl(\frac{x_n+\widetilde{x}_n}{2}\Bigr)^{\alpha}+1\Bigr)|t-s|^{\alpha}.$$
Hence
$$\bigl|c_n(\sin(x_n t)-\sin(x_n s)) - \widetilde{c}_n(\sin(\widetilde{x}_n t)-\sin(\widetilde{x}_n s))\bigr|^2 \le \Biggl(2\Bigl(\frac{x_n|t-s|}{2}\Bigr)^{\alpha}\Delta_n^c + 2^{2-2\alpha}\Delta_n^x\Bigl(\Bigl(\frac{x_n+\widetilde{x}_n}{2}\Bigr)^{\alpha}+1\Bigr)|t-s|^{\alpha}\,|\widetilde{c}_n|\Biggr)^2$$
$$\le 2\cdot2^2\Bigl(\frac{x_n|t-s|}{2}\Bigr)^{2\alpha}\bigl(\Delta_n^c\bigr)^2 + 2\cdot2^{2(2-2\alpha)}\,\widetilde{c}_n^{\,2}\bigl(\Delta_n^x\bigr)^2\Bigl(\Bigl(\frac{x_n+\widetilde{x}_n}{2}\Bigr)^{\alpha}+1\Bigr)^2|t-s|^{2\alpha}$$
$$\le |t-s|^{2\alpha}\Biggl(2^{3-2\alpha}x_n^{2\alpha}\bigl(\Delta_n^c\bigr)^2 + 2^{6-4\alpha}\,\widetilde{c}_n^{\,2}\bigl(\Delta_n^x\bigr)^2\Bigl(\frac{(x_n+\widetilde{x}_n)^{2\alpha}}{2^{2\alpha}}+1\Bigr)\Biggr)$$
$$= 2^{3-2\alpha}|t-s|^{2\alpha}\Biggl(x_n^{2\alpha}\bigl(\Delta_n^c\bigr)^2 + 2^{3-2\alpha}\,\widetilde{c}_n^{\,2}\bigl(\Delta_n^x\bigr)^2\Bigl(\frac{(x_n+\widetilde{x}_n)^{2\alpha}}{2^{2\alpha}}+1\Bigr)\Biggr).$$
In the same way,
$$\bigl|d_n(\cos(y_n s)-\cos(y_n t)) - \widetilde{d}_n(\cos(\widetilde{y}_n s)-\cos(\widetilde{y}_n t))\bigr|^2$$
$$\le \Bigl(|\cos(y_n s)-\cos(y_n t)|\,|d_n-\widetilde{d}_n| + \bigl|\cos(y_n s)-\cos(y_n t)-\cos(\widetilde{y}_n s)+\cos(\widetilde{y}_n t)\bigr|\,|\widetilde{d}_n|\Bigr)^2$$
$$\le \Biggl(2^{1-\alpha}\bigl(y_n|t-s|\bigr)^{\alpha}\Delta_n^d + 2^{2-2\alpha}\Delta_n^y\Bigl(\Bigl(\frac{y_n+\widetilde{y}_n}{2}\Bigr)^{\alpha}+1\Bigr)|t-s|^{\alpha}\,|\widetilde{d}_n|\Biggr)^2$$
$$\le 2\cdot2^{2(1-\alpha)}\bigl(y_n|t-s|\bigr)^{2\alpha}\bigl(\Delta_n^d\bigr)^2 + 2\cdot2^{2(2-2\alpha)}\,\widetilde{d}_n^{\,2}\bigl(\Delta_n^y\bigr)^2\Bigl(\Bigl(\frac{y_n+\widetilde{y}_n}{2}\Bigr)^{\alpha}+1\Bigr)^2|t-s|^{2\alpha}$$
$$\le 2^{3-2\alpha}|t-s|^{2\alpha}\Biggl(y_n^{2\alpha}\bigl(\Delta_n^d\bigr)^2 + 2^{3-2\alpha}\,\widetilde{d}_n^{\,2}\bigl(\Delta_n^y\bigr)^2\Bigl(\frac{(y_n+\widetilde{y}_n)^{2\alpha}}{2^{2\alpha}}+1\Bigr)\Biggr).$$
Thus, we obtain the following estimate:
$$\tau_{\varphi}^2\bigl(\Delta_t^{\mathrm{appr}}-\Delta_s^{\mathrm{appr}}\bigr) \le a^2\Biggl\{\sum_{n=1}^{N}2^{3-2\alpha}|t-s|^{2\alpha}\Bigl(x_n^{2\alpha}(\Delta_n^c)^2 + 2^{3-2\alpha}\widetilde{c}_n^{\,2}(\Delta_n^x)^2\Bigl(\frac{(x_n+\widetilde{x}_n)^{2\alpha}}{2^{2\alpha}}+1\Bigr)\Bigr)$$
$$\qquad\qquad + \sum_{n=1}^{N}2^{3-2\alpha}|t-s|^{2\alpha}\Bigl(y_n^{2\alpha}(\Delta_n^d)^2 + 2^{3-2\alpha}\widetilde{d}_n^{\,2}(\Delta_n^y)^2\Bigl(\frac{(y_n+\widetilde{y}_n)^{2\alpha}}{2^{2\alpha}}+1\Bigr)\Bigr)\Biggr\}$$
$$= 2^{3-2\alpha}a^2|t-s|^{2\alpha}\sum_{n=1}^{N}\Biggl\{x_n^{2\alpha}(\Delta_n^c)^2 + y_n^{2\alpha}(\Delta_n^d)^2 + 2^{3-2\alpha}\Biggl(\widetilde{c}_n^{\,2}(\Delta_n^x)^2\Bigl(\frac{(x_n+\widetilde{x}_n)^{2\alpha}}{2^{2\alpha}}+1\Bigr) + \widetilde{d}_n^{\,2}(\Delta_n^y)^2\Bigl(\frac{(y_n+\widetilde{y}_n)^{2\alpha}}{2^{2\alpha}}+1\Bigr)\Biggr)\Biggr\}$$
$$= \Delta_{\mathrm{appr}}^{\alpha}|t-s|^{2\alpha}. \qquad (4.7)$$
Estimate (4.3) follows now by collecting the estimates above and using Proposition 2.12. □
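Before stating the main result, note that the quantities of Proposition 4.2 are cheap to evaluate numerically. A small illustration of ours (with hypothetical choices: $H=\tfrac12$ closed-form zeros, no approximation error so $\Delta_{\mathrm{appr}} = 0$, $a = 1$, and the infinite tail truncated at a large index):

```python
import math

H, a, N = 0.5, 1.0, 100
M = 200_000  # large cut-off standing in for the infinite tail

# Delta_cut = a^2 * sum_{n > N} (c_n^2 + 4 d_n^2) with the H = 1/2 closed
# forms x_n = (n - 1/2) pi, y_n = n pi, c_n = 1/x_n, d_n = 1/y_n.
delta_cut = 0.0
for n in range(N + 1, M):
    x_n = (n - 0.5) * math.pi
    y_n = n * math.pi
    c_n, d_n = 1.0 / x_n, 1.0 / y_n
    delta_cut += c_n * c_n + 4.0 * d_n * d_n
delta_cut *= a * a

delta_0 = math.sqrt(delta_cut)  # the quantity delta_0 of Theorem 4.3 below
# For comparison: the integral bound 5 c a^2 / (2 H N^{2H}) with c = 1/pi^2.
bound = 5.0 / (math.pi ** 2 * N ** (2 * H)) / (2 * H)
```

The partial sum stays below the integral-test bound used later in Corollary 4.7, and the two agree to a few digits.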
Now we are ready to state, although not yet to prove, our main result.
THEOREM 4.3 Let $b$ and $\alpha$ be such that $0 < b < \alpha < H$. Let $\Delta_{\mathrm{appr}}$, $\Delta_{\mathrm{appr}}^{\alpha}$, $\Delta_{\mathrm{cut}}$ and $\Delta_{\mathrm{cut}}^{\alpha}$ be as in Proposition 4.2. Denote
$$\delta_0 = \sqrt{\Delta_{\mathrm{appr}} + \Delta_{\mathrm{cut}}}, \qquad \delta_{\alpha} = \sqrt{\Delta_{\mathrm{appr}}^{\alpha} + \Delta_{\mathrm{cut}}^{\alpha}}, \qquad \gamma = \min\Bigl\{\delta_0, \frac{\delta_{\alpha}}{2}\Bigr\}.$$
Let $l$ be the density of $\varphi$.
The model $\widetilde{Z}$, defined by (4.1), approximates the process $Z$, defined by (3.4), with given reliability $1-\delta$, $0 < \delta < 1$, and accuracy $\varepsilon > 0$ in $C([0,1])$ if the following three inequalities are satisfied:
$$\delta_0 < \varepsilon, \qquad (4.8)$$
$$\frac{\gamma\delta_0}{\varepsilon} < \frac{\delta_{\alpha}}{\bigl(2(\exp\{1\}-1)\bigr)^{\alpha}}, \qquad (4.9)$$
$$2\exp\Bigl\{-\varphi^*\Bigl(\frac{\varepsilon}{\delta_0}-1\Bigr)\Bigr\}\Biggl(\frac{1}{2^{b}\bigl(1-\frac{b}{\alpha}\bigr)}\Bigl(\frac{\varepsilon\,\delta_{\alpha}}{\gamma\,\delta_0}\Bigr)^{\frac{b}{\alpha}}\,l^{(-1)}\Bigl(\frac{\varepsilon}{\delta_0}-1\Bigr)+1\Biggr)^{\frac{2}{b}} \le \delta. \qquad (4.10)$$
The following lemma is our main tool for proving Theorem 4.3. For its proof we refer to Kozachenko and Vasilik (1998), Lemma 3.3.
LEMMA 4.4 Let $X = (X_t)_{t\in[0,1]}$ be a separable random process from the space $\mathrm{Sub}_{\varphi}(\Omega)$. Let $\sigma\colon\mathbb{R}_+\to\mathbb{R}_+$ be a strictly increasing continuous function such that $\sigma(h)\to 0$ as $h\to 0$ and
$$\sup_{|t-s|\le h}\tau_{\varphi}(X_t-X_s) \le \sigma(h).$$
Denote $\varepsilon_0 = \sup_{t\in[0,1]}\tau_{\varphi}(X_t)$ and let $\gamma$ be a number such that $0 < \gamma\le\varepsilon_0$. Let $r\colon[1,\infty)\to\mathbb{R}_+$ be a nondecreasing continuous function such that $r(1) = 0$ and the mapping $u\mapsto r(e^u)$ is convex. Suppose that
$$\int_0^{\gamma}\psi(u)\,du < \infty,$$
where
$$\psi(u) = \psi(\varphi,\sigma,r;u) = \frac{r\bigl(N(\sigma^{(-1)}(u))\bigr)}{\ln N\bigl(\sigma^{(-1)}(u)\bigr)},$$
and $N(\varepsilon)$ is the minimum number of closed intervals of radius $\varepsilon$ needed to cover the interval [0,1] (note that $N(\varepsilon)\le\frac{1}{2\varepsilon}+1$).
Then for all $\lambda > 0$ and $p\in(0,1)$ we have
$$\mathbb{E}\exp\Bigl\{\lambda\sup_{t\in[0,1]}|X_t|\Bigr\} \le 2\exp\Bigl\{(1-p)\,\varphi\Bigl(\frac{\lambda\varepsilon_0}{1-p}\Bigr) + p\,\varphi\Bigl(\frac{\lambda\gamma}{1-p}\Bigr)\Bigr\}\Biggl(r^{(-1)}\Biggl(\psi(\varepsilon_0 p) + \frac{\lambda\varepsilon_0}{(1-p)p\gamma}\int_0^{\gamma p/2}\psi(u)\,du\Biggr)\Biggr)^2. \qquad (4.11)$$
REMARK 4.5 (Buldygin and Kozachenko, 2000) A random process $X$ is called separable on $(T,\rho)$, or $\rho$-separable, if there exist a countable set $S\subset T$, everywhere dense with respect to the pseudometric (metric) $\rho$, and a set $\Omega_0\subset\Omega$ with $\mathbb{P}(\Omega_0) = 0$, such that for any open set $U\subset T$ and any closed set $D\subset\mathbb{R}$ we have
$$\Bigl(\bigcap_{s\in S\cap U}\{X_s(\omega)\in D\}\Bigr)\setminus\Bigl(\bigcap_{s\in U}\{X_s(\omega)\in D\}\Bigr) \subset \Omega_0.$$
For any open set $U\subset T$ the following relationships then hold:
$$\Bigl\{\sup_{t\in U}X_t(\omega) \neq \sup_{t\in S\cap U}X_t(\omega)\Bigr\}\subset\Omega_0, \qquad \Bigl\{\inf_{t\in U}X_t(\omega) \neq \inf_{t\in S\cap U}X_t(\omega)\Bigr\}\subset\Omega_0,$$
where $\mathbb{P}(\Omega_0) = 0$. If a random process $X$ is continuous with probability one then it is separable (Buldygin and Kozachenko, 2000).
We have proved that the process $Z$ is almost surely continuous on [0, 1]. Hence the process $\Delta_t = Z_t - \widetilde{Z}_t$ is almost surely continuous on [0, 1] too. Therefore it is a separable process, and we can apply Lemma 4.4 to $\Delta_t$.
Let us now reformulate Lemma 4.4 for our case.
LEMMA 4.6 Let $\delta_0$, $\delta_{\alpha}$ and $\gamma$ be as in Theorem 4.3, and let $r$ and $\psi$ be as in Lemma 4.4. Then for all $\lambda > 0$ and $p\in(0,1)$ we have
$$\mathbb{P}\Bigl(\sup_{t\in[0,1]}|\Delta_t| > \varepsilon\Bigr) \le 2\exp\Bigl\{-\lambda\varepsilon + \varphi\Bigl(\frac{\lambda\delta_0}{1-p}\Bigr)\Bigr\}\Biggl(r^{(-1)}\Biggl(\frac{\lambda\delta_0}{\gamma p(1-p)}\int_0^{\gamma p}\psi(u)\,du\Biggr)\Biggr)^2. \qquad (4.12)$$
Proof: From Proposition 4.2 it follows that for the error process $\Delta$ we may take
$$\varepsilon_0 = \delta_0 = \sqrt{\Delta_{\mathrm{appr}} + \Delta_{\mathrm{cut}}}$$
and
$$\sigma(h) = \delta_{\alpha}h^{\alpha} = \sqrt{\Delta_{\mathrm{appr}}^{\alpha} + \Delta_{\mathrm{cut}}^{\alpha}}\;h^{\alpha}.$$
In the inequality (4.11) we put $\gamma = \min\{\delta_0, \frac{\delta_{\alpha}}{2}\}$. Since $\gamma\le\delta_0$ we have
$$(1-p)\,\varphi\Bigl(\frac{\lambda\delta_0}{1-p}\Bigr) + p\,\varphi\Bigl(\frac{\lambda\gamma}{1-p}\Bigr) \le \varphi\Bigl(\frac{\lambda\delta_0}{1-p}\Bigr).$$
So, it follows from the Chebyshev inequality and from (4.11) that for any $\varepsilon > 0$ we have
$$\mathbb{P}\Bigl(\sup_{t\in[0,1]}|\Delta_t| > \varepsilon\Bigr) \le \exp\Bigl\{-\lambda\varepsilon + \varphi\Bigl(\frac{\lambda\delta_0}{1-p}\Bigr)\Bigr\}\,2I_r^2,$$
where we have used the notation
$$I_r = r^{(-1)}\Biggl(\psi(\delta_0 p) + \frac{\lambda\delta_0}{(1-p)p\gamma}\int_0^{\gamma p/2}\psi(u)\,du\Biggr).$$
Since the function $t\mapsto r(e^t)$ is an Orlicz N-function, the ratio $\frac{r(e^t)}{t}$ increases in $t\ge 0$ (cf. Krasnoselskii and Rutitskii, 1958). Therefore $\frac{r(N)}{\ln N}$ increases in $N\ge 1$, and since $N(\sigma^{(-1)}(u))$ decreases in $u$, the function $\psi$ is decreasing. Thus,
$$\psi(\gamma p) \le \frac{2}{\gamma p}\int_{\gamma p/2}^{\gamma p}\psi(u)\,du \qquad\text{and}\qquad \psi(\delta_0 p) \le \frac{2}{\gamma p}\int_{\gamma p/2}^{\gamma p}\psi(u)\,du.$$
Since $\gamma\le\delta_0$ we have
$$\psi(\delta_0 p) + \frac{\lambda\delta_0}{(1-p)p\gamma}\int_0^{\gamma p/2}\psi(u)\,du \le \frac{\lambda\delta_0}{\gamma p(1-p)}\int_0^{\gamma p}\psi(u)\,du.$$
The claim now follows from Lemma 4.4. □
Theorem 4.3 now follows by using the Young–Fenchel transformation and then choosing suitable $\lambda$, $p$ and $r$ in the inequality (4.12).
Proof of Theorem 4.3: By Proposition 2.4 we know that $xy = \varphi(x) + \varphi^*(y)$ when $x = l^{(-1)}(y)$, where $l^{(-1)}$ is the generalised inverse function of the density $l$ of $\varphi$. Since
$$\lambda\varepsilon - \varphi\Bigl(\frac{\lambda\delta_0}{1-p}\Bigr) = \frac{\lambda\delta_0}{1-p}\cdot\frac{(1-p)\varepsilon}{\delta_0} - \varphi\Bigl(\frac{\lambda\delta_0}{1-p}\Bigr),$$
we have the equality
$$\lambda\varepsilon - \varphi\Bigl(\frac{\lambda\delta_0}{1-p}\Bigr) = \varphi^*\Bigl(\frac{(1-p)\varepsilon}{\delta_0}\Bigr)$$
when
$$\frac{\lambda\delta_0}{1-p} = l^{(-1)}\Bigl(\frac{(1-p)\varepsilon}{\delta_0}\Bigr).$$
So, we choose the following $\lambda$:
$$\lambda = \frac{1-p}{\delta_0}\,l^{(-1)}\Bigl(\frac{(1-p)\varepsilon}{\delta_0}\Bigr).$$
Setting this in the inequality (4.12) we obtain
$$\mathbb{P}\Bigl(\sup_{t\in[0,1]}|\Delta_t| > \varepsilon\Bigr) \le 2\exp\Bigl\{-\varphi^*\Bigl(\frac{(1-p)\varepsilon}{\delta_0}\Bigr)\Bigr\}\Biggl(r^{(-1)}\Biggl(\frac{\lambda\delta_0}{\gamma p(1-p)}\int_0^{\gamma p}\psi(u)\,du\Biggr)\Biggr)^2$$
$$= 2\exp\Bigl\{-\varphi^*\Bigl(\frac{(1-p)\varepsilon}{\delta_0}\Bigr)\Bigr\}\Biggl(r^{(-1)}\Biggl(\frac{1}{\gamma p}\,l^{(-1)}\Bigl(\frac{(1-p)\varepsilon}{\delta_0}\Bigr)\int_0^{\gamma p}\psi(u)\,du\Biggr)\Biggr)^2.$$
Let us now consider the integral term above. In our case we have
$$\int_0^{\gamma p}\psi(u)\,du = \int_0^{\gamma p}\frac{r\bigl(N(\sigma^{(-1)}(u))\bigr)}{\ln N\bigl(\sigma^{(-1)}(u)\bigr)}\,du \le \int_0^{\gamma p}\frac{r\Bigl(\frac{1}{2}\bigl(\frac{\delta_{\alpha}}{u}\bigr)^{1/\alpha}+1\Bigr)}{\ln\Bigl(\frac{1}{2}\bigl(\frac{\delta_{\alpha}}{u}\bigr)^{1/\alpha}+1\Bigr)}\,du,$$
since $\sigma^{(-1)}(u) = (u/\delta_{\alpha})^{1/\alpha}$. Now, if the denominator satisfies
$$\ln\Bigl(\frac{1}{2}\Bigl(\frac{\delta_{\alpha}}{u}\Bigr)^{1/\alpha}+1\Bigr) \ge 1$$
for $u\le\gamma p$, that is, if
$$p \le \frac{\delta_{\alpha}}{\gamma\bigl(2(\exp\{1\}-1)\bigr)^{\alpha}}, \qquad (4.13)$$
then we have
$$\int_0^{\gamma p}\psi(u)\,du \le \int_0^{\gamma p}r\Bigl(\frac{1}{2}\Bigl(\frac{\delta_{\alpha}}{u}\Bigr)^{1/\alpha}+1\Bigr)\,du.$$
Let us choose $r(x) = x^b - 1$, where $0 < b < \alpha$. Then, by using the estimate above and the fact that $(x+1)^b - x^b\le 1$, we obtain
$$\int_0^{\gamma p}\psi(u)\,du \le \int_0^{\gamma p}\Bigl(\frac{1}{2}\Bigl(\frac{\delta_{\alpha}}{u}\Bigr)^{1/\alpha}\Bigr)^{b}du = \frac{\delta_{\alpha}^{b/\alpha}\,(\gamma p)^{1-b/\alpha}}{2^{b}\bigl(1-\frac{b}{\alpha}\bigr)}.$$
Thus, we have obtained the estimate
$$\mathbb{P}\Bigl(\sup_{t\in[0,1]}|\Delta_t| > \varepsilon\Bigr) \le 2\exp\Bigl\{-\varphi^*\Bigl(\frac{(1-p)\varepsilon}{\delta_0}\Bigr)\Bigr\}\Biggl(r^{(-1)}\Biggl(\frac{1}{\gamma p}\,l^{(-1)}\Bigl(\frac{(1-p)\varepsilon}{\delta_0}\Bigr)\frac{\delta_{\alpha}^{b/\alpha}(\gamma p)^{1-b/\alpha}}{2^{b}(1-\frac{b}{\alpha})}\Biggr)\Biggr)^2$$
$$= 2\exp\Bigl\{-\varphi^*\Bigl(\frac{(1-p)\varepsilon}{\delta_0}\Bigr)\Bigr\}\Biggl(r^{(-1)}\Biggl(\frac{1}{2^{b}(1-\frac{b}{\alpha})}\Bigl(\frac{\delta_{\alpha}}{\gamma p}\Bigr)^{b/\alpha}l^{(-1)}\Bigl(\frac{(1-p)\varepsilon}{\delta_0}\Bigr)\Biggr)\Biggr)^2.$$
For $p$ we choose
$$p = \frac{\delta_0}{\varepsilon} \qquad (4.14)$$
(recall that $\delta_0 < \varepsilon$) and we obtain the inequality
$$\mathbb{P}\Bigl(\sup_{t\in[0,1]}|\Delta_t| > \varepsilon\Bigr) \le 2\exp\Bigl\{-\varphi^*\Bigl(\frac{\varepsilon}{\delta_0}-1\Bigr)\Bigr\}\Biggl(r^{(-1)}\Biggl(\frac{1}{2^{b}(1-\frac{b}{\alpha})}\Bigl(\frac{\varepsilon\,\delta_{\alpha}}{\gamma\,\delta_0}\Bigr)^{b/\alpha}l^{(-1)}\Bigl(\frac{\varepsilon}{\delta_0}-1\Bigr)\Biggr)\Biggr)^2$$
$$= 2\exp\Bigl\{-\varphi^*\Bigl(\frac{\varepsilon}{\delta_0}-1\Bigr)\Bigr\}\Biggl(\frac{1}{2^{b}(1-\frac{b}{\alpha})}\Bigl(\frac{\varepsilon\,\delta_{\alpha}}{\gamma\,\delta_0}\Bigr)^{b/\alpha}l^{(-1)}\Bigl(\frac{\varepsilon}{\delta_0}-1\Bigr)+1\Biggr)^{2/b}. \qquad (4.15)$$
The claim follows from the inequalities (4.13)–(4.15) and Lemma 4.6. □
Let us now assume that the constants $c_n$ and $d_n$ and the zeros $x_n$ and $y_n$ are actually calculated correctly.
COROLLARY 4.7 Suppose that there is no approximation error, i.e., $\Delta_n^c = \Delta_n^d = \Delta_n^x = \Delta_n^y = 0$. Then the conditions (4.8)–(4.10) of Theorem 4.3 are satisfied if
$$N \ge \max\Biggl\{\Bigl(\frac{A_0}{\varepsilon}\Bigr)^{1/H}+1,\ \Bigl(\frac{2A_0^2\bigl(2(\exp\{1\}-1)\bigr)^{\alpha}}{\varepsilon A_{\alpha}}\Bigr)^{\frac{1}{H+\alpha}}+1,\ \Bigl(\frac{2A_0}{A_{\alpha}}\Bigr)^{1/\alpha}\Biggr\} \qquad (4.16)$$
and
$$2\exp\Bigl\{-\varphi^*\Bigl(\frac{\varepsilon N^H}{A_0}-1\Bigr)\Bigr\}\Biggl(\frac{(\varepsilon A_{\alpha})^{\frac{b}{\alpha}}(N+1)^{\frac{2Hb}{\alpha}}}{2^{b}\bigl(1-\frac{b}{\alpha}\bigr)A_0^{\frac{2b}{\alpha}}N^{\frac{(H-\alpha)b}{\alpha}}}\,l^{(-1)}\Bigl(\frac{\varepsilon(N+1)^H}{A_0}-1\Bigr)+1\Biggr)^{2/b} \le \delta, \qquad (4.17)$$
where
$$A_0 = a\sqrt{\frac{5c}{2H}} \qquad\text{and}\qquad A_{\alpha} = 2^{1-\alpha}\pi^{\alpha}a\sqrt{\frac{c}{H-\alpha}}.$$
Proof: Note that now $\Delta_{\mathrm{appr}} = \Delta_{\mathrm{appr}}^{\alpha} = 0$. We shall use the asymptotics $x_n\sim y_n\sim\pi n$ and $c_n^2\sim d_n^2\sim\frac{c}{n^{2H+1}}$ in the expressions for $\Delta_{\mathrm{cut}}$ and $\Delta_{\mathrm{cut}}^{\alpha}$.
For $\Delta_{\mathrm{cut}}$ we get the upper bound
$$\Delta_{\mathrm{cut}} = a^2\sum_{n=N+1}^{\infty}\bigl(c_n^2+4d_n^2\bigr) \le a^2\sum_{n=N+1}^{\infty}\frac{5c}{n^{2H+1}} \le 5ca^2\sum_{n=N}^{\infty}\int_n^{n+1}\frac{dx}{x^{2H+1}} = \frac{5ca^2}{2HN^{2H}}.$$
For $\Delta_{\mathrm{cut}}^{\alpha}$ we obtain
$$\Delta_{\mathrm{cut}}^{\alpha} = 2^{2-2\alpha}a^2\sum_{n=N+1}^{\infty}\bigl(c_n^2x_n^{2\alpha}+d_n^2y_n^{2\alpha}\bigr) \le 2^{2-2\alpha}a^2\sum_{n=N+1}^{\infty}\Bigl(\frac{(\pi n)^{2\alpha}c}{n^{2H+1}}+\frac{(\pi n)^{2\alpha}c}{n^{2H+1}}\Bigr)$$
$$\le 2^{2-2\alpha}a^2\,2c\pi^{2\alpha}\sum_{n=N}^{\infty}\int_n^{n+1}\frac{dx}{x^{2(H-\alpha)+1}} = \frac{2^{2-2\alpha}a^2c\,\pi^{2\alpha}}{(H-\alpha)N^{2(H-\alpha)}}.$$
In the same way we get the lower bounds
$$\Delta_{\mathrm{cut}} \ge \frac{5ca^2}{2H(N+1)^{2H}}, \qquad \Delta_{\mathrm{cut}}^{\alpha} \ge \frac{2^{2-2\alpha}a^2c\,\pi^{2\alpha}}{(H-\alpha)(N+1)^{2(H-\alpha)}}.$$
Therefore, we have the following asymptotic bounds for $\delta_0$ and $\delta_{\alpha}$ of Theorem 4.3:
$$\frac{A_0}{(N+1)^H} \le \delta_0 \le \frac{A_0}{N^H}, \qquad \frac{A_{\alpha}}{(N+1)^{H-\alpha}} \le \delta_{\alpha} \le \frac{A_{\alpha}}{N^{H-\alpha}}.$$
If
$$N \ge \Bigl(\frac{2A_0}{A_{\alpha}}\Bigr)^{1/\alpha}$$
then in Theorem 4.3 we have $\gamma = \delta_0$. Now we see that the condition (4.8) is satisfied if
$$N \ge \Bigl(\frac{A_0}{\varepsilon}\Bigr)^{1/H}+1.$$
Similarly, (4.9) is satisfied if
$$N \ge \Bigl(\frac{2A_0^2\bigl(2(\exp\{1\}-1)\bigr)^{\alpha}}{\varepsilon A_{\alpha}}\Bigr)^{\frac{1}{H+\alpha}}+1.$$
Finally, we see that the condition (4.10) is satisfied if (4.17) holds. □
Theorem 4.3 and Corollary 4.7 are still rather general and not readily useful in practice. Indeed, there are still the parameters $\alpha$ and $b$ one has to optimise. If we choose a specific form for the function $\varphi$ we are able to give an applicable version of Corollary 4.7. The next corollary deals with the sub-Gaussian case, i.e., $\varphi(x) = x^2/2$.
COROLLARY 4.8 If the process $Z$ is sub-Gaussian then the conditions (4.16) and (4.17) of Corollary 4.7 are satisfied if
$$N \ge \max\Biggl\{\Bigl(\frac{a}{\varepsilon}\sqrt{\frac{5c}{2H}}\Bigr)^{1/H}+1,\ 2^{2+\frac{4}{H}}\,5^{\frac{1}{H}}\Biggr\} \qquad (4.18)$$
and
$$2\exp\Biggl\{-\frac{1}{2}\Biggl(\frac{\varepsilon N^H}{a\sqrt{\frac{5c}{2H}}}-1\Biggr)^2\Biggr\}\,\kappa\,N^{14} \le \delta, \qquad (4.19)$$
where
$$\kappa = 2^{2-\frac{2}{H}}\,4^{5-\frac{8}{H}}\Bigl(\frac{H}{c}\Bigr)^{\frac{6}{H}}\Bigl(\frac{a}{\varepsilon}\Bigr)^{\frac{12}{H}}.$$
Proof: In the sub-Gaussian case we have $\varphi(x) = \frac{x^2}{2}$. So,
$$\varphi^*(x) = \frac{x^2}{2} \qquad\text{and}\qquad l(x) = \varphi'(x) = x = l^{(-1)}(x).$$
Thus, the conditions (4.16) and (4.17) take the form
$$N \ge \max\Biggl\{\Bigl(\frac{A_0}{\varepsilon}\Bigr)^{1/H}+1,\ \Bigl(\frac{2A_0^2\bigl(2(\exp\{1\}-1)\bigr)^{\alpha}}{\varepsilon A_{\alpha}}\Bigr)^{\frac{1}{H+\alpha}}+1,\ \Bigl(\frac{2A_0}{A_{\alpha}}\Bigr)^{1/\alpha}\Biggr\} \qquad (4.20)$$
and
$$2\exp\Biggl\{-\frac{1}{2}\Bigl(\frac{\varepsilon N^H}{A_0}-1\Bigr)^2\Biggr\}\Biggl(\frac{(\varepsilon A_{\alpha})^{\frac{b}{\alpha}}(N+1)^{\frac{2Hb}{\alpha}}}{2^{b}\bigl(1-\frac{b}{\alpha}\bigr)A_0^{\frac{2b}{\alpha}}N^{\frac{(H-\alpha)b}{\alpha}}}\Bigl(\frac{\varepsilon(N+1)^H}{A_0}-1\Bigr)+1\Biggr)^{2/b} \le \delta. \qquad (4.21)$$
Let us take $\alpha = \frac{H}{2}$ and $b = \frac{H}{4}$. In this case
$$A_0 = a\sqrt{\frac{5c}{2H}}, \qquad A_{\alpha} = A_{H/2} = 2^{1-\frac{H}{2}}\pi^{\frac{H}{2}}a\sqrt{\frac{2c}{H}},$$
and from the inequality (4.20) we get
$$N \ge \max\Biggl\{\Bigl(\frac{a}{\varepsilon}\sqrt{\frac{5c}{2H}}\Bigr)^{1/H}+1,\ 2^{2+\frac{4}{H}}\,5^{\frac{1}{H}}\Biggr\}.$$
Since $N$ is large, in (4.21) we have
$$\Biggl(\frac{(\varepsilon A_{\alpha})^{\frac{b}{\alpha}}(N+1)^{\frac{2Hb}{\alpha}}}{2^{b}\bigl(1-\frac{b}{\alpha}\bigr)A_0^{\frac{2b}{\alpha}}N^{\frac{(H-\alpha)b}{\alpha}}}\Bigl(\frac{\varepsilon(N+1)^H}{A_0}-1\Bigr)+1\Biggr)^{2/b} \le \kappa N^{14}.$$
The claim follows. □
REMARK 4.9 In Corollary 4.8 the condition (4.18) for $N$ is in closed form. Condition (4.19) is still implicit, but it may be solved easily using numerical methods. Corollary 4.8 is readily applicable to the fractional Brownian motion; indeed, in this case $a = 1$.
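For instance (our own illustration, not from the paper), the smallest $N$ satisfying a condition of the type (4.19) can be found by a direct scan, since the left-hand side eventually decreases below any fixed $\delta$ once the exponential factor dominates the $N^{14}$ growth. All parameter values below are hypothetical.

```python
import math

def lhs(N, H, a, c, eps, kappa):
    """Left-hand side of a (4.19)-type condition; parameter values are
    illustrative, not taken from the paper."""
    A0 = a * math.sqrt(5.0 * c / (2.0 * H))
    return 2.0 * math.exp(-0.5 * (eps * N ** H / A0 - 1.0) ** 2) * kappa * N ** 14

def smallest_N(H, a, c, eps, kappa, delta, N_start=2):
    """Smallest integer N >= N_start with lhs(N) <= delta."""
    N = N_start
    while lhs(N, H, a, c, eps, kappa) > delta:
        N += 1
    return N

# Hypothetical parameters: H = 0.7, a = 1 (the Gaussian case), accuracy
# eps = 0.05, reliability 1 - delta = 0.99, and kappa set to 1 for simplicity.
N = smallest_N(H=0.7, a=1.0, c=0.05, eps=0.05, kappa=1.0, delta=0.01)
```
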
EXAMPLE 4.10 Let
$$\varphi(x) = \begin{cases}\dfrac{|x|^p}{p}, & |x| > 1,\\[4pt] \dfrac{x^2}{p}, & |x|\le 1,\end{cases}\qquad p > 2.$$
In this case we have
$$\varphi^*(x) = \frac{p\,x^2}{4}, \qquad l(x) = \varphi'(x) = \frac{2x}{p}, \qquad l^{(-1)}(x) = \frac{p\,x}{2}$$
for $x\in[0,\frac{2}{p}]$, and
$$\varphi^*(x) = \frac{x^q}{q}\ \Bigl(\frac{1}{p}+\frac{1}{q}=1\Bigr), \qquad l(x) = \varphi'(x) = x^{p-1}, \qquad l^{(-1)}(x) = x^{\frac{1}{p-1}}$$
for $x > 1$.
Then for $0 < \frac{\varepsilon}{\delta_0}-1 \le \frac{2}{p}$ the condition (4.10) of Theorem 4.3 takes the form
$$2\exp\Bigl\{-\frac{p}{4}\Bigl(\frac{\varepsilon}{\delta_0}-1\Bigr)^2\Bigr\}\Biggl(\frac{1}{2^{b}(1-\frac{b}{\alpha})}\Bigl(\frac{\varepsilon\,\delta_{\alpha}}{\gamma\,\delta_0}\Bigr)^{\frac{b}{\alpha}}\frac{p}{2}\Bigl(\frac{\varepsilon}{\delta_0}-1\Bigr)+1\Biggr)^{\frac{2}{b}} \le \delta,$$
and for $\frac{\varepsilon}{\delta_0}-1 > 1$ we have
$$2\exp\Bigl\{-\frac{1}{q}\Bigl(\frac{\varepsilon}{\delta_0}-1\Bigr)^{q}\Bigr\}\Biggl(\frac{1}{2^{b}(1-\frac{b}{\alpha})}\Bigl(\frac{\varepsilon\,\delta_{\alpha}}{\gamma\,\delta_0}\Bigr)^{\frac{b}{\alpha}}\Bigl(\frac{\varepsilon}{\delta_0}-1\Bigr)^{\frac{1}{p-1}}+1\Biggr)^{\frac{2}{b}} \le \delta.$$
Acknowledgments
The authors wish to thank Kacha Dzhaparidze, Harry van Zanten, and Esko Valkeila for fruitful discussions. Sottinen was financed by the European Commission research training network DYNSTOCH. The authors were partially supported by the European Union project Tempus Tacis NP 22012-2001 and the NATO Grant PST.CLG.980408. We also thank the referee for useful comments and for pointing out a serious mistake in the original manuscript.
References
J. Beran, Statistics for Long-Memory Processes, Chapman and Hall: New York, 1994.
V. V. Buldygin and Y. V. Kozachenko, Metric Characterization of Random Variables and Random Processes, American Mathematical Society: Providence, RI, 2000.
L. Decreusefond and A. S. Üstünel, "Stochastic analysis of the fractional Brownian motion," Potential Analysis vol. 10(2) pp. 177–214, 1999.
P. Doukhan, G. Oppenheim, and M. Taqqu (eds.), Theory and Applications of Long-Range Dependence, Birkhäuser Boston, Inc.: Boston, MA, 2003.
K. O. Dzhaparidze and J. H. van Zanten, "A series expansion of fractional Brownian motion," Probability Theory and Related Fields vol. 130 pp. 39–55, 2004.
P. Embrechts and M. Maejima, Selfsimilar Processes, Princeton University Press: Princeton, 2002.
G. A. Hunt, "Random Fourier transforms," Transactions of the American Mathematical Society vol. 71 pp. 38–69, 1951.
A. N. Kolmogorov, "Wienersche Spiralen und einige andere interessante Kurven im Hilbertschen Raum," Comptes Rendus (Doklady) Acad. Sci. USSR (N.S.) vol. 26 pp. 115–118, 1940.
Y. V. Kozachenko and O. I. Vasilik, "On the distribution of suprema of Sub_φ(Ω) random processes," Theory of Stochastic Processes vol. 4(20), no. 1–2, pp. 147–160, 1998.
M. A. Krasnoselskii and Y. B. Rutitskii, Convex Functions in the Orlicz Spaces, Fizmatgiz: Moscow, 1958.
B. Mandelbrot and J. Van Ness, "Fractional Brownian motions, fractional noises and applications," SIAM Review vol. 10 pp. 422–437, 1968.
G. Samorodnitsky and M. Taqqu, Stable Non-Gaussian Random Processes, Chapman and Hall: New York, 1994.
G. N. Watson, A Treatise on the Theory of Bessel Functions, Cambridge University Press: Cambridge, England, 1944.
We consider stochastic processes $Y(t)$ which can be represented as $Y(t)=(X(t))^s, s \in \mathbb{N},$ where $X(t)$ is a stationary strictly sub-Gaussian process and build a wavelet-based model that simulates $Y(t)$ with given accuracy and reliability in $L_p([0,T])$. A model for simulation with given accuracy and reliability in $L_p([0,T])$ is also built for processes $Z(t)$ which can be represented as $Z(t)=X_1(t) X_2(t)$, where $X_1(t)$ and $X_2(t)$ are independent stationary strictly sub-Gaussian processes.
Article
The paper is devoted to one possible way of the model construction for the stationary Gaussian process with given accuracy and reliability in functional space C ⁢ ( [ 0 , T ] ) .
Article
In the paper, we consider the problem of simulation of a strictly φ-sub-Gaussian generalized fracti-onal Brownian motion. Simulation of random processes and fields is used in many areas of natural and social sciences. A special place is occupied by methods of simulation of the Wiener process and fractional Brownian motion, as these processes are widely used in financial and actuarial mathematics, queueing theory etc. We study some specific class of processes of generalized fractional Brownian motion and derive conditions, under which the model based on a series representation approximates a strictly φ-sub-Gaussian generalized fractional Brownian motion with given reliability and accuracy in the space C([0; 1]) in the case, when φ(x) = exp{|x|} − |x| − 1, x ∈ R. In order to obtain these results, we use some results from the theory of φ-sub-Gaussian random processes. Necessary simulation parameters are calculated and models of sample pathes of corresponding processes are constructed for various values of the Hurst parameter H and for given reliability and accuracy using the R programming environment.
Article
Full-text available
The organization of modern cloud services is based on theoretical results in logistics, operations research, supply chains, information transmission (transportation) networks, and on the practical achievements of the novel information and communication technologies. As all the inhabitants of the planet become regular users and at the same time creators of such services, the issues of decentralized decision making are becoming everyday problems. The paper presents the setup for the problem of such solutions by suppliers (providers) of cloud services and suggests a mathematical formulation of the corresponding optimization problem with resource constraints. It is a starting point for further mathematical elaboration of the new everyday problems.
Conference Paper
Full-text available
The difference between buying commodities today and buying commodity futures is more complicated because: i) the payment is delayed, and therefore a buyer can receive the interest on her money; ii) there is no need to store commodities, and therefore a buyer of futures contract saves her costs.
Article
In the paper, we consider the problem of simulation of a strictly φ-sub-Gaussian generalized fractional Brownian motion. Simulation of random processes and fields is used in many areas of natural and social sciences. A special place is occupied by methods of simulation of the Wiener process and fractional Brownian motion, as these processes are widely used in financial and actuarial mathematics, queueing theory etc. We study some specific class of processes of generalized fractional Brownian motion and derive conditions, under which the model based on a series representation approximates a strictly φ-sub-Gaussian generalized fractional Brownian motion with given reliability and accuracy in the space C([0; 1]) in the case, when φ(x) = (|x|^p)/p, |x| ≥ 1, p > 1. In order to obtain these results, we use some results from the theory of φ-sub-Gaussian random processes. Necessary simulation parameters are calculated and models of sample pathes of corresponding processes are constructed for various values of the Hurst parameter H and for given reliability and accuracy using the R programming environment.
Article
The paper is devoted to the model construction for input stochastic processes of a time-invariant linear system with a real-valued square-integrable impulse response function. The processes are considered as Gaussian stochastic processes with discrete spectrum. The response on the system is supposed to be an output process. We obtain the conditions under which the constructed model approximates a Gaussian stochastic process with given accuracy and reliability in the Banach space {C([0,1])} , taking into account the response of the system. For this purpose, the methods and properties of square-Gaussian processes are used.