
Efficient method to approximately solve retrial systems with impatience

Jose Manuel Gimenez-Guzman∗, Ma Jose Domenech-Benlloch†, Vicent Pla†, Jorge Martinez-Bauset†, Vicente Casares-Giner†

∗Dept. Automatica, Universidad de Alcalá, Alcalá de Henares, Madrid, Spain
†Dept. Comunicaciones, Universidad Politecnica de Valencia, Valencia, Spain
josem.gimenez@uah.es, mdoben@doctor.upv.es, {vpla,jmartinez,vcasares}@dcom.upv.es

Abstract

In this paper we present a novel technique to solve multiserver retrial systems with impatience. Unfortunately, these systems do not admit an exact analytical solution, so it is mandatory to resort to approximate techniques. The novel technique does not rely on the numerical solution of the steady-state Kolmogorov equations of the Continuous Time Markov Chain, as is common for this kind of system, but instead considers the system in its Markov Decision Process setting. This technique, known as value extrapolation, truncates the infinite state space and uses a polynomial extrapolation method to approximate the relative values of the states outside the truncated state space. A numerical evaluation is carried out to assess this technique and to compare its performance with previous techniques. The results show that value extrapolation greatly outperforms the approaches that have previously appeared in the literature, not only in terms of accuracy but also in terms of computational cost.

1 Introduction

A common assumption when evaluating the performance of communication systems is that

users that do not obtain an immediate service leave the system without retrying. However,

due to the increasing number of customers and network complexity, the customer behavior

in general, and the retrial phenomenon in particular, may have a nonnegligible impact on

the system performance. For example, in mobile cellular networks the importance of the

retrial phenomenon has been stressed in [1–3]. An extensive bibliography on retrial queues

can be found in [4]. The modeling of repeated attempts has been the subject of numerous investigations, because these systems have a non-homogeneous and infinite state space. However, it is known that the classical theory [5] is developed for random walks on the semi-strip {0, ..., C} × Z+ (C being the number of servers) with infinitesimal transitions subject to conditions of space-homogeneity.

When the space-homogeneity condition does not hold, e.g. in the case of retrial queues, the problem of calculating the equilibrium distribution has not been solved beyond approximate techniques when the number of servers is higher than two [6]. In particular, Marsan et al. [7] propose a well-known approximate technique for its analysis. In [8] a generalization of the approximate technique in [7] was proposed, showing a substantial improvement in accuracy at the expense of a marginal increase in computational cost. Those approximations are based on the reduction of an infinite state space to a finite one by aggregating states. Other solutions maintain the infinite state space but homogenize it beyond a given level in order to solve the system. These latter models are known as generalized truncated models [6], and usually have the advantage of providing much better accuracy than the finite methodologies [9]. In this category we find

the models proposed by Falin [10], by Neuts and Rao [11] and by Artalejo and Pozo [6].

All these approaches rely on the numerical solution of the steady-state Kolmogorov equations of the Continuous Time Markov Chain (CTMC) that describes the system under

consideration.

Very recently, however, an alternative approach for evaluating infinite state space

Markov processes has been introduced by Leino et al. [12–14]. The new technique, named

value extrapolation, does not rely on solving the global balance equations. This technique

considers the system in its MDP (Markov Decision Process) setting and solves for the expected value from the Howard equations written for a truncated state space. Instead of a simple truncation, the relative values of states just outside the truncated state space are estimated using a polynomial extrapolation based on the states inside, obtaining a closed system. Therefore we can compute any performance parameter as long as we can express it as the expected value of a random variable that is a function of the system state.

So far the value extrapolation technique has been applied to multiclass single server

queues, showing very promising results. It must be noted that a key aspect in the application of value extrapolation lies in the choice of the extrapolating function for the relative state values. Indeed, in [14] the authors show that by selecting an appropriate

polynomial function the technique yields exact results for the moments of the queue length

in a multiclass Discriminatory Processor-Sharing (DPS) system. Unfortunately, the appropriateness of the functional form of the extrapolation depends on the system and also

on the revenue function, i.e., the performance parameter we are interested in. Hence there

is no universal good choice for the extrapolating function. In this paper we address the

application of the value extrapolation technique to an important class of queueing systems, namely retrial queues, which are essentially different from the type of queues to which

this technique has been applied. A potential drawback of value extrapolation compared


[Figure 1: Retrial model under study. C servers with service rate µ and k users in service; Poisson arrivals of rate λ; a retrial orbit holding m users, each retrying at rate µr; after an unsuccessful retrial a user abandons with probability Pi or returns to the orbit with probability 1 − Pi.]

to conventional state space truncation methods is that, since the stationary state probabilities are not obtained, if one wants to compute several performance parameters the technique has to be applied once for each of them. We apply well-known linear algebra algorithms to compute several performance parameters simultaneously, and through a series of numerical examples we show that, at least for the type of system that we are studying, the relative impact in terms of computational cost is marginal.

To date, the value extrapolation technique has only been applied to problems in which relative state values are expected to follow a polynomial tendency. In this

paper we develop the value extrapolation technique to solve a multiserver retrial system,

addressing also the drawback of computing only a single performance parameter every

time the technique is used.

In the first part of the paper, we develop the analytical part of the technique, defining the associated Howard equations of the model and the revenue functions. In the second part, we compare our technique with other previously proposed techniques in terms of accuracy and computational cost. Results show that the proposed technique clearly outperforms the rest of the studied techniques in terms of computational cost, and the improvement is even greater in terms of accuracy.

The rest of the paper is structured as follows. Section 2 describes the system under

study, while Section 3 introduces the solving technique used. In Section 4 the numerical

analysis is carried out, evaluating the value extrapolation technique and comparing it

with other previous solving techniques proposed in the literature. Final remarks and a

summary of results are provided in Section 5.

2 System Model

The system under study is a generic retrial system including user impatience, i.e., users leave the system with a certain probability after an unsuccessful retrial. As shown in Fig. 1, an infinite population of users arriving according to a Poisson process with rate λ contend for access to a system with C servers, requesting an exponentially distributed service


[Figure 2: Transition diagram of the CTMC. From state (k, m): arrivals at rate λ (to (k+1, m) if k < C, to (C, m+1) if k = C), service completions at rate kµ (to (k−1, m)), successful retrials at rate mµr (to (k+1, m−1)) when k < C, and impatient abandonments at rate mµrPi (to (C, m−1)) when k = C.]

time with rate µ. Without loss of generality, we consider that each user occupies one resource unit. When a new request finds all servers occupied, it joins the retrial orbit with probability 1. After an exponentially distributed time with rate µr the session retries, the retrial being successful if it finds a free server. Otherwise, the user leaves the system with probability Pi or returns to the retrial orbit with probability (1 − Pi), starting the retrial procedure again. Note that we consider an infinite capacity for the retrial orbit.

The model considered can be represented as a bidimensional CTMC, S(t) = {K(t), M(t)}, where K(t) is the number of sessions being served and M(t) is the number of users in the retrial orbit at time t. The state space of the process is defined by

S := { s = (k, m) : k ≤ C; m ∈ Z+ }.

Figure 2 shows the transition diagram of this system, which exhibits two important properties in the dimension corresponding to the number of users in the retrial orbit: on the one hand, its infinite cardinality and, on the other hand, its space-heterogeneity, produced by the fact that the retrial rate depends on the number of customers in the retrial orbit.
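As a complement to the analytical treatment that follows, the dynamics just described can also be checked by direct simulation. The sketch below is our own illustration, not part of the original study (the function name simulate_pb is ours); it estimates the blocking probability by counting blocked Poisson arrivals, which converges to Prob{K(t) = C} by the PASTA property.

```python
import random

def simulate_pb(C, lam, mu, mu_r, Pi, events=200000, seed=1):
    """Crude Monte Carlo estimate of the blocking probability of the
    retrial model, simulating the embedded jumps of the CTMC."""
    rng = random.Random(seed)
    k = m = 0                      # busy servers, users in the orbit
    blocked = arrivals = 0
    for _ in range(events):
        # State-changing rates: arrivals, services, and retrials.
        # At k = C an unsuccessful retrial only changes the state when
        # the user abandons, i.e. with effective rate m * mu_r * Pi.
        rates = [lam, k * mu, m * mu_r if k < C else m * mu_r * Pi]
        u = rng.random() * sum(rates)
        if u < rates[0]:                       # new arrival
            arrivals += 1
            if k < C:
                k += 1
            else:
                blocked += 1
                m += 1                         # joins the retrial orbit
        elif u < rates[0] + rates[1]:          # service completion
            k -= 1
        elif k < C:                            # successful retrial
            k += 1
            m -= 1
        else:                                  # impatient abandonment
            m -= 1
    return blocked / arrivals
```

For instance, simulate_pb(5, 2.0, 0.5, 1.0, 0.2) gives a rough estimate that can later be compared against the approximate analytical values.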

3 Solving technique

In this section we develop the value extrapolation technique for the scenario presented in

Section 2. Additionally, we present some particularities that should be taken into account

when using this technique.


3.1 MDP settings

As has been aforementioned, the problem under interest has no closed-form solution when C > 2 [6], so approximation techniques are mandatory. To the best of our knowledge, all the approximate techniques that have appeared in the literature compute the steady-state probabilities π(s) from the balance equations in order to obtain the desired performance parameters, i.e., they solve the linear system of equations

π(s) Σ_{s'} q_{ss'} = Σ_{s'} π(s') q_{s's}    ∀s,

along with the normalization condition Σ_s π(s) = 1, where q_{ss'} represents the transition rate from state s to s'.

Notwithstanding, value extrapolation is not based on the probability of being in a

certain state, but on a new metric called relative state values. Relative state values

appear when we consider the system in the setting of an MDP. Formally, an MDP can

be defined as a tuple {S,A,P,R}, where S is a set of states, A is a set of actions, P is

a state transition function and R is a revenue function. The state of the system can be

controlled by choosing actions a from A, influencing in this way the state transitions. The

transition function P : S × S × A → R+ specifies the transition rate to other states when a certain action is taken at a given state. The first requirement of the value extrapolation technique is the definition of a revenue function r(s) that must be a function of the system state. From the definition of the revenue function for every state, we also obtain a mean revenue rate of the entire process, r, which will be the performance metric we want to compute.

Once we have defined the MDP framework as well as the revenue function, we are in a position to define the relative state values. It is obvious that after performing an action in state s the system will collect a revenue for that action, r(s), but, as the number of transitions increases, the average revenue collected converges to r. The relative state value v(s) quantifies the difference between the total revenue incurred when the system starts at state s and the total revenue incurred in a system for which the revenue rate at all states is r. If we denote by tn the time instants at which there is a change in the system state, then

v(s) = E[ Σ_{n=0}^{∞} ( r(S(tn)) − r ) | S(t0) = s ].

The equations that relate revenues, relative state values and transition rates are the Howard equations, defined by

r(s) − r + Σ_{s'} q_{ss'} ( v(s') − v(s) ) = 0    ∀s.

The Howard equations represent the policy evaluation phase of the well-known policy iteration algorithm, the most widespread dynamic programming technique, proposed in [15]. There are as many Howard equations as states, |S|. The unknowns are the |S| relative state values plus the expected revenue r, i.e., |S| + 1 unknowns. However, as only differences of relative values appear in the Howard equations, we can set v(0) = 0, so we obtain a solvable linear system with the same number of equations as unknowns.
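To make the structure of these equations concrete, the following minimal sketch (our own illustration; the two-state chain and its numeric rates are invented for the example) solves the Howard equations of a toy two-state CTMC with v(0) = 0 and cross-checks the resulting mean revenue against the stationary distribution:

```python
import numpy as np

# Toy two-state CTMC: transition rates q01 = a, q10 = b,
# revenue rates r(0) = 2, r(1) = 5.
a, b = 1.0, 3.0
r = np.array([2.0, 5.0])

# Howard equations with v(0) fixed to 0; unknowns are (rbar, v(1)):
#   state 0:  r(0) - rbar + a * (v(1) - v(0)) = 0
#   state 1:  r(1) - rbar + b * (v(0) - v(1)) = 0
M = np.array([[-1.0,  a],
              [-1.0, -b]])
rbar, v1 = np.linalg.solve(M, -r)

# Cross-check: rbar must equal the stationary mean revenue pi . r,
# with pi = (b, a) / (a + b) for this two-state chain.
pi = np.array([b, a]) / (a + b)
assert abs(rbar - pi @ r) < 1e-12
```

With these rates the solver returns rbar = 2.75 and v(1) = 0.75, in agreement with the stationary mean revenue.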

The Howard equations that correspond to the system under study are as follows.

For k < C:

r(k,m) − r + λ [ v(k+1, m) − v(k,m) ] + kµ [ v(k−1, m) − v(k,m) ] + mµr [ v(k+1, m−1) − v(k,m) ] = 0.

For k = C:

r(C,m) − r + λ [ v(C, m+1) − v(C,m) ] + Cµ [ v(C−1, m) − v(C,m) ] + mµrPi [ v(C, m−1) − v(C,m) ] = 0.

As we can observe, the number of states is infinite because m can take any value in Z+, so we need to truncate the state space to Ŝ. In our case, the truncated state space is defined by

Ŝ := { s = (k, m) : k ≤ C; m ≤ Q }.

In general, Q is known as the truncation level. As we choose a higher value of Q we can expect a higher accuracy, as the system is more similar to the original one, but we will also have a higher computational cost. Therefore, the objective will be to achieve a certain accuracy with the minimum value of Q.

3.2 Polynomial fitting

The traditional truncation sets q_{ss'} = 0 ∀s' ∉ Ŝ, but value extrapolation performs a more efficient truncation. Basically, value extrapolation considers the relative state values outside Ŝ that appear in the Howard equations as an extrapolation of some relative state values inside Ŝ. As we truncate the retrial orbit dimension beyond a value Q, the value extrapolation technique uses the state values of some states in Ŝ to approximate v(C, Q+1), which is expected to improve the accuracy significantly, as it is better than ignoring these relative state values. Note that if the relative values outside Ŝ were correctly extrapolated, the results obtained by solving the truncated model would be exact. Also note that including value extrapolation increases neither the computational cost nor the number of Howard equations, which remains |Ŝ| = (C + 1) × (Q + 1).

Summarizing, the objective of value extrapolation is to find an extrapolation function that fits some points in Ŝ so that it also approximates points outside Ŝ. It is

important to choose a fitting function that keeps the Howard equations a closed system of linear equations. The most common fitting functions that accomplish this are polynomials. We can use all the states in Ŝ in the fitting procedure (global fitting) or, as is more common, only a subset Sf of them (local fitting).

For the sake of simplicity, in the following description we will assume there exists a mapping W from the two-dimensional set of states into a single-dimensional set, e.g. the real numbers: W : Sf → R. Hence, below we deal with states as if they were real values given as w = W(s). The specific mapping used for the model under study is specified later on.

The choice of W depends strongly on the state whose relative value we want to extrapolate. Note also that the function f(w) and the set W need to be chosen so that the parameters have unambiguous values; i.e., when choosing a polynomial as the fitting function, the number of different points in W has to be equal to or greater than the number of coefficients of the polynomial. In general, the procedure to compute the coefficients ai of the fitting polynomial consists in minimizing the squared error

E = Σ_{w∈W} [ f(w) − v(w) ]².

Optimal parameters can be computed by solving the equations

∂E/∂ai = 0    ∀i.

In our case, we use as many points as there are parameters in the fitting polynomial, so the fitting procedure is an ordinary polynomial interpolation and E = 0, i.e., the polynomial passes through all the considered points. In this case, the problem can be formulated as follows. Given a set of n = |W| points (w0, v(w0)), ..., (w_{n−1}, v(w_{n−1})), where no two wi are identical, we can determine an (n−1)-th degree polynomial such that f(wi) = v(wi), with

f(w) = a0 + a1 w + a2 w² + ... + a_{n−1} w^{n−1}.

The interpolating polynomial satisfies the following n linear equations:

a0 + a1 wi + a2 wi² + ... + a_{n−1} wi^{n−1} = v(wi)    ∀i,

that in matrix form is

Aa =
[ 1   w0        w0²         ...   w0^{n−1}       ] [ a0      ]   [ v(w0)      ]
[ 1   w1        w1²         ...   w1^{n−1}       ] [ a1      ] = [ v(w1)      ] = b.
[ ...                             ...            ] [ ...     ]   [ ...        ]
[ 1   w_{n−1}   w_{n−1}²    ...   w_{n−1}^{n−1}  ] [ a_{n−1} ]   [ v(w_{n−1}) ]


[Figure 3: Truncated model and the state (C, Q+1) that appears in the Howard equations outside the truncated model Ŝ; the diagram mirrors Fig. 2 with the orbit dimension cut at m = Q.]

The matrix of coefficients of this system, A, is a Vandermonde matrix, whose determinant is nonvanishing and which is therefore invertible. Thus, there always exists a unique solution to the considered linear system of equations or, equivalently, there exists a unique polynomial that goes through all the n points. However, Vandermonde matrices are often badly conditioned, especially if some wi are very close, so the procedure to compute the fitting polynomial through them is also badly conditioned. It is important to note that the uniqueness of the fitting polynomial does not mean that it cannot be written in a basis different from the standard monomial basis. More concretely, in this work we have used the Lagrange basis.
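The ill-conditioning of the monomial (Vandermonde) approach is easy to observe numerically. The sketch below is our own check, not from the paper; it evaluates the condition number of the Vandermonde matrix built on equally spaced abscissas wi = Q − i, of the kind used later for the extrapolation:

```python
import numpy as np

# Condition number of the Vandermonde matrix on the equally spaced
# abscissas w_i = Q - i, for a growing number of points n.
Q = 50
conds = []
for n in (3, 6, 9, 12):
    w = np.array([Q - i for i in range(n)], dtype=float)
    A = np.vander(w, increasing=True)  # row i: 1, w_i, w_i^2, ..., w_i^(n-1)
    conds.append(np.linalg.cond(A))

# The conditioning degrades by many orders of magnitude as n grows,
# which is why working in the Lagrange basis is preferred here.
```

The rapid growth of these condition numbers motivates the Lagrange form developed next.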

For the considered interpolation problem, the polynomial in its Lagrange form is a linear combination

L(w) = Σ_{j=0}^{n−1} v(wj) ℓj(w)

of Lagrange basis polynomials

ℓj(w) = Π_{i=0, i≠j}^{n−1} (w − wi)/(wj − wi)
      = [(w − w0)/(wj − w0)] ··· [(w − w_{j−1})/(wj − w_{j−1})] · [(w − w_{j+1})/(wj − w_{j+1})] ··· [(w − w_{n−1})/(wj − w_{n−1})].

For the truncated problem of interest, as shown in Fig. 3, we will have a Howard equation in which v(C, Q+1) appears, i.e., the value of a state that does not belong to Ŝ. Therefore, we must approximate the value v(C, Q+1) by using some relative state values of states belonging to Ŝ. It is important to emphasize that for the extrapolation of v(C, Q+1) we only use states from the last row of the model shown in Fig. 3, i.e., s = (C, m). With this choice, the mapping is described by W((C, m)) = m. We use an (n−1)-th degree polynomial that interpolates the n points in Sf := { si = (C, Q−i) | i = 0, ..., n−1 }, i.e., W := { wi = Q−i | i = 0, ..., n−1 }:

w0 = Q → v(w0) = v(C, Q),
w1 = Q − 1 → v(w1) = v(C, Q − 1),
...
wj = Q − j → v(wj) = v(C, Q − j),
...
w_{n−1} = Q − (n − 1) → v(w_{n−1}) = v(C, Q − (n − 1)).

This way, the general form of the extrapolated value when using an (n−1)-th degree polynomial is:

v^(n)(C, Q+1) = L^(n)(Q+1) = Σ_{j=0}^{n−1} v(C, Q−j) ℓj(Q+1).

For example, in the case of linear extrapolation (n = 2) we use (Q, v(C,Q)) and (Q−1, v(C,Q−1)), having:

v^(2)(C, Q+1) = L^(2)(Q+1) = v(C,Q) ℓ0(Q+1) + v(C,Q−1) ℓ1(Q+1)
  = v(C,Q) [(Q+1) − (Q−1)]/[Q − (Q−1)] + v(C,Q−1) [(Q+1) − Q]/[(Q−1) − Q]
  = 2v(C,Q) − v(C,Q−1).

Following a similar procedure we can obtain the following relationships for n = 3 and n = 4:

v^(3)(C, Q+1) = 3v(C,Q) − 3v(C,Q−1) + v(C,Q−2),
v^(4)(C, Q+1) = 4v(C,Q) − 6v(C,Q−1) + 4v(C,Q−2) − v(C,Q−3).

For (n−1)-th degree polynomials, using the Lagrange basis to reduce the complexity of the procedure, we obtain a simple closed-form expression for the extrapolated value:

v^(n)(C, Q+1) = Σ_{k=0}^{n−1} (−1)^k binom(n, k+1) v(C, Q−k),

where binom(·,·) denotes the binomial coefficient and n is the number of points taken for the Lagrange polynomials.
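This closed form can be verified numerically. The sketch below is our own check (the helper name is ours, not from the paper); it evaluates the Lagrange basis polynomials at Q + 1 and compares the resulting coefficients with the binomial expression:

```python
from math import comb

def lagrange_extrapolation_coeffs(n, Q=50):
    """Coefficient of v(C, Q - j) in the Lagrange extrapolation of
    v(C, Q + 1) from the abscissas w_j = Q - j, j = 0 .. n-1."""
    w = [Q - j for j in range(n)]
    target = Q + 1
    coeffs = []
    for j in range(n):
        c = 1.0
        for i in range(n):
            if i != j:
                c *= (target - w[i]) / (w[j] - w[i])  # l_j(Q + 1)
        coeffs.append(round(c))
    return coeffs

# Cross-check against the closed form (-1)^k * binom(n, k+1):
for n in range(2, 10):
    closed = [(-1) ** k * comb(n, k + 1) for k in range(n)]
    assert lagrange_extrapolation_coeffs(n) == closed
```

For n = 2, 3, 4 this reproduces the coefficient sets (2, −1), (3, −3, 1) and (4, −6, 4, −1) given above.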

3.3 Revenue function

As performance parameters are not computed from the steady-state probabilities as usual, it is important to explain more carefully how they are computed. By definition, r(s) is the expected immediate revenue obtained when the system is in state s. Therefore, we must define the revenue according to the performance parameter we want to compute. The effect


Table 1: Revenue function definition.

Blocking probability Pb:                      r(k,m) = 1 for k = C, ∀m;    r(k,m) = 0 otherwise
Non-service probability Pns:                  r(k,m) = mµrPi/λ for k = C, ∀m;    r(k,m) = 0 otherwise
Mean number of users retrying Nret:           r(k,m) = m, ∀k, ∀m
Probability of being in state (K,M), π(K,M):  r(k,m) = 1 for k = K, m = M;    r(k,m) = 0 otherwise
Probability of having K busy servers, B(K):   r(k,m) = 1 for k = K, ∀m;    r(k,m) = 0 otherwise

of that action is that the computed r will be the performance parameter we are looking for. Additionally, the inputs r(s) in the Howard equations must be set accordingly. Table 1 gives several examples of how r(s) can be set in order to obtain performance parameters such as: the blocking probability Pb = Prob{K = C}, the mean number of users in the retrial orbit Nret = E[M], the non-service probability Pns (the probability of a user leaving the system due to impatience without obtaining service), the probability of being in a certain state, π(K,M), and the probability of having K busy servers, B(K).

As an example, we focus on the blocking probability and define the revenue function to be 1 in those states in which an attempt is blocked, i.e., r(C,m) = 1, ∀m, and 0 in the rest of the states, r(k,m) = 0 for k ≠ C, ∀m.

3.4 Effect of value extrapolation on the Howard equations

In our problem, as mentioned above, we only have to replace v(C, Q+1) by its approximate value in the Howard equation that corresponds to the state (C, Q). As an example, if we use linear extrapolation (n = 2) that equation becomes:

r(C,Q) − r + (−λ − Cµ − QPiµr) v(C,Q) + λ v(C,Q+1) + Cµ v(C−1,Q) + QPiµr v(C,Q−1)
  = r(C,Q) − r + (λ − Cµ − QPiµr) v(C,Q) + Cµ v(C−1,Q) + (QPiµr − λ) v(C,Q−1) = 0.

As v(C, Q+1) no longer appears in the Howard equations, the linear system of equations consists of (C + 1) × (Q + 1) equations with the same number of unknowns. This system can be expressed in matrix form for simplicity. Therefore the system can be seen as xT = b, where x is a vector with the (C + 1) × (Q + 1) unknowns (r and the relative state values v(s)) and b contains the negative expected immediate revenues for the different states:

x = [ r   v(0,1)   ...   v(0,Q)   v(1,0)   ...   v(C,Q) ],
b = [ −r(0,0)   −r(0,1)   ...   −r(C,Q) ].

Matrix T represents the matrix of coefficients and can be constructed by making all the elements in the first row of matrix T0 equal to −1. Meanwhile, matrix T0 is the block tridiagonal matrix

T0 =
[ A^0_1      A^1_2                                      ]
[ A^0_0      A^1_1      A^2_2                           ]
[            A^1_0      ...         ...                 ]
[                       ...         A^{C−1}_1    A^C_2  ]
[                                   A^{C−1}_0    A^C_1  ],

where each sub-matrix is defined as:

A^k_0 = (k + 1)µ I,    for 0 ≤ k ≤ C − 1,

A^k_2 =
[ λ    µr                         ]
[      λ     2µr                  ]
[            ...    ...           ]
[                   λ      Qµr    ]
[                          λ      ],    for 1 ≤ k ≤ C,

A^k_1 =
[ α                                         ]
[      α − µr                               ]
[               α − 2µr                     ]
[                          ...              ]
[                                 α − Qµr   ],    for 0 ≤ k ≤ C − 1, where α = −λ − kµ.

For k = C, when using linear (n = 2) and quadratic (n = 3) extrapolation, we have, respectively:

A^C_1 =
[ β     Piµr                                                        ]
[ λ     β − Piµr    2Piµr                                           ]
[       λ           β − 2Piµr    ...                                ]
[                   ...          ...               ...              ]
[                                λ    β − (Q−1)Piµr     QPiµr − λ   ]
[                                     λ                 λ − Cµ − QPiµr ],

A^C_1 =
[ β     Piµr                                                        ]
[ λ     β − Piµr    2Piµr                                           ]
[       λ           ...          ...                                ]
[                   ...    β − (Q−2)Piµr    (Q−1)Piµr    λ          ]
[                          λ                β − (Q−1)Piµr    QPiµr − 3λ ]
[                                           λ                2λ − Cµ − QPiµr ],

where β = −λ − Cµ.

Note that the size of matrix T does not depend on the order of the polynomial used to perform the extrapolation; only matrix A^C_1 depends on the polynomial adjustment. This characteristic has the advantage that there is no difference in the computational cost when using higher order extrapolations.

The main drawback of the value extrapolation technique is that it is only able to compute one performance parameter each time the system is solved. Notwithstanding, we can overcome this drawback in the following way. In general, the solution of the system xT = b can be obtained using the inverse of T, x = bT−1. Note also that choosing a different performance parameter only affects the values in b. Therefore, computing a second performance parameter only increases the computation expenses by the cost of the product bT−1, as the rest of the process (specifically the computation of the inverse matrix T−1) is carried out only once. Similarly, we can compute several performance parameters with a marginal increase in the computation cost using LU factorization, as the first part of the procedure (the factorization, which represents the most computationally expensive part) is done only once for the matrix T. This characteristic of the value extrapolation technique can be observed in Fig. 4, where we show that the computation time¹ is only marginally increased when we compute additional performance parameters.

¹Results have been obtained using Matlab running on an Intel Pentium IV at 3 GHz.
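Putting the pieces together, the following sketch is our own minimal implementation with linear (n = 2) extrapolation; the function name solve_ve and its interface are our assumptions, not the authors' code. It builds the truncated Howard equations directly from the transition rates of Section 2, substitutes v(C, Q+1) = 2v(C,Q) − v(C,Q−1), and solves for several revenue functions at once with a single factorization, as discussed above.

```python
import numpy as np

def solve_ve(C, Q, lam, mu, mu_r, Pi, revenues):
    """Mean revenue rate rbar of the retrial model for each revenue
    function r(k, m) in `revenues`, using the truncated Howard
    equations with linear (n = 2) value extrapolation."""
    N = (C + 1) * (Q + 1)
    idx = lambda k, m: k * (Q + 1) + m          # state (k, m) -> index

    T = np.zeros((N, N))                        # one equation per state
    for k in range(C + 1):
        for m in range(Q + 1):
            i = idx(k, m)
            out = 0.0                           # total outgoing rate

            def add(rate, kt, mt):
                nonlocal out
                out += rate
                if mt == Q + 1:
                    # v(C, Q+1) ~= 2 v(C, Q) - v(C, Q-1)
                    T[i, idx(C, Q)] += 2 * rate
                    T[i, idx(C, Q - 1)] -= rate
                else:
                    T[i, idx(kt, mt)] += rate

            if k < C:
                add(lam, k + 1, m)              # accepted arrival
                if m > 0:
                    add(m * mu_r, k + 1, m - 1)       # successful retrial
            else:
                add(lam, C, m + 1)              # blocked arrival joins orbit
                if m > 0:
                    add(m * mu_r * Pi, C, m - 1)      # impatient departure
            if k > 0:
                add(k * mu, k - 1, m)           # service completion

            T[i, i] -= out

    # Fix v(0,0) = 0 and reuse its slot for the unknown rbar: the
    # column of v(0,0) is replaced by the coefficient -1 of rbar.
    T[:, 0] = -1.0
    B = np.column_stack([[-r(k, m) for k in range(C + 1)
                                   for m in range(Q + 1)]
                         for r in revenues])
    X = np.linalg.solve(T, B)   # one factorization, several right-hand sides
    return X[0, :]              # first unknown is rbar, one per revenue
```

A single call with several revenue functions then reuses one factorization, e.g. passing the indicator revenues of Table 1 for Pb, Nret and B(K) simultaneously.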


[Figure 4: Computation cost, shown as time (s) versus the truncation level Q, when solving p = 1, 2, 3 performance parameters simultaneously.]

4 Results

In order to evaluate and compare the proposed technique we have studied its performance in several scenarios. Letting ρ = λ/(Cµ), we have studied different system loads by modifying λ while keeping C = 50 resource units and µ^−1 = 180 s. The retrial phenomenon has been configured with µr^−1 = 100 s and Pi = 0.2. Although only one configuration of the retrial orbit has been chosen, the working points are fairly different, as the system load is varied widely.

To obtain the results, we have used the relative error of different performance parameters, defined for a generic performance parameter Ψ by εΨ = |Ψapprox − Ψexact|/Ψexact. In order to obtain an estimate of Ψ accurate enough to be used as Ψexact, we ran all techniques with increasing and sufficiently high values of Q so that the value of Ψ stabilized up to the 14th decimal digit. As expected, all techniques converged to the same value for the performance parameters under study, Ψ ∈ {Pb, Pns, Nret}.

4.1 Value extrapolation evaluation

Table 2 shows the minimum value of Q needed to obtain a relative error lower than 10^−8 for different performance parameters and loads (columns) and for different orders of the extrapolation polynomial (rows). Note that VEx denotes the use of an extrapolation polynomial of order x = n − 1. The number in bold indicates the lowest truncation level of all the polynomials studied. Finally, the last row of Table 2 shows the exact value of the studied performance parameter for each scenario.

From Table 2 we conclude that there is not a clear choice for the order of the best polynomial. In general, neither the lowest nor the highest order polynomials are recommendable, so we recommend using the intermediate cases. Furthermore, the fact that using VEx forces us to use a model with Q ≥ x (see Section 3.2) must be considered in the choice of the polynomial. For that reason we can conclude that, for the problem and scenario of interest and for the relative accuracy we want to achieve, VE8 represents a good tradeoff between accuracy and the value of Q needed. Therefore, hereafter we will use the polynomial of order 8 (VE8) and will simply denote it as VE.

Table 2: Minimum value of Q to obtain relative errors (ε) lower than 10^−8.

               εPb < 10^−8            εPns < 10^−8           εNret < 10^−8
ρ             0.5    0.7    0.9      0.5    0.7    0.9      0.5    0.7    0.9
VE1            20     32     61       25     41     64       22     37     57
VE2            14     31     53       21     35     58       17     32     54
VE3            15     18     48       19     31     53       16     26     50
VE4            12     25     47       17     30     48       14     26     47
VE5            12     24     44       12     24     44        9     18     43
VE6            10     20     41       14     26     44       11     22     39
VE7             7     21     39       11     24     42        8     21     40
VE8             8     17     39       11     23     36        8     19     39
VE9             9     19     38       10     22     39        9     13     34
VE10           10     16     35       10     21     39       10     17     35
VE11           11     18     31       11     16     37       11     18     37
VE12           12     15     40       12     20     42       12     17     42
VE13           13     14     43       14     19     43       13     18     43
VE14           14     23     48       26     25     48       14     24     48
VE15           15     25     56       15     29     56       15     25     56
VE16           16     27     56       18     29     57       28     27     57
Exact value   3.89·10^−6  0.0045  0.1353   6.05·10^−8  1.34·10^−4  0.0110   5.74·10^−5  0.0981  4.4789

4.2 Comparison with other techniques

In this section we compare the performance of value extrapolation with other techniques based on the traditional approach of solving the steady-state probabilities using the balance equations and then computing the performance parameters of interest. Although other approaches exist, we have chosen the technique proposed in [8], referred to hereafter as FM, and the one proposed by Neuts and Rao in [11], referred to as NR. Note that we have not compared the results with the technique proposed by Artalejo and Pozo [6], as this last technique does not include the impatience phenomenon, so it is not directly applicable. A similar reasoning applies to the technique proposed by Falin [10].

Table 3: Minimum value of Q to obtain relative errors (ε) lower than 10^−8.

            εPb < 10^−8            εPns < 10^−8           εNret < 10^−8
ρ          0.5    0.7    0.9      0.5    0.7    0.9      0.5    0.7    0.9
FM          23     39     68       29     46     70       25     42     53
NR          20     31     61       25     41     64       22     38     65
VE8          8     17     39       11     23     36        8     19     39

In Table 3 we show the minimum values of Q needed to obtain a relative error lower than 10^−8 for different performance parameters and for the aforementioned techniques. Results show that value extrapolation clearly outperforms the classical techniques, as it needs a much lower value of Q to achieve a certain accuracy in all the scenarios and for all the parameters studied. Similarly, in Figs. 5-7 we plot the relative error for Pb, Pns and Nret, respectively, when ρ = 0.7 and for the different techniques deployed. Results show that, for the same value of Q, VE obtains lower relative errors than NR and FM. The difference in the relative errors is around 4 to 5 orders of magnitude, which represents a very clear improvement.

[Figure 5: Relative error in Pb versus Q for the FM, NR and VE techniques (ρ = 0.7); the error axis spans 10^−14 to 10^0 on a logarithmic scale.]
