Photonic Network Communications (2006) 11:211–227
DOI 10.1007/s11107-005-6024-x
ORIGINAL ARTICLE
CS-POSA: A high performance scheduling
algorithm for WDM star networks
P. G. Sarigiannidis · G. I. Papadimitriou · A. S. Pomportsis
Received: 5 August 2004 / Revised: 12 July 2005 / Accepted: 20 July 2005
© Springer Science + Business Media, Inc. 2006
Abstract In this paper a new packet scheduling algorithm
for WDM star networks is introduced. The adopted protocol
is pre-transmission coordination-based, and packet collisions
are eliminated because the timeslots in which each node
transmits are predetermined in a demand matrix. The requests
of the transmitted packets are predicted by means of Markov
chains in order to reduce the calculation time of the final
scheduling matrix; this is accomplished by pipelining the
schedule computation. The innovation that this algorithm
introduces is a modification of the service sequence of the
nodes. The proposed algorithm is studied via extensive
simulation results, which show that changing the sequence in
which nodes transmit, from the node with the largest number
of requests to the node with the fewest, increases the
throughput of the network at a minimal (almost zero) cost in
mean delay and delay variance.
Keywords Optical WDM networks · Star topology · Reservation · Scheduling · Traffic prediction
Introduction
The ever-increasing demand for high speeds in audio, image,
and video transmission within local area networks (LANs),
metropolitan area networks (MANs), and wide area networks
(WANs) is met by the enormous bandwidth of optical fiber
technology. However, optical communication requires careful
handling for the efficient utilization of its capabilities. The
crucial problem of this utilization is the co-operation of
optical fibers with electronic circuits [1].

G. I. Papadimitriou (B) · P. G. Sarigiannidis · A. S. Pomportsis
Department of Informatics, Aristotle University,
Box 888, 54124 Thessaloniki, Greece
e-mail: gp@csd.auth.gr

Wavelength division multiplexing (WDM) technology within a
single optical fiber [2–4] may deliver gigabit-per-second data
rates on independent channels that simultaneously carry data
flows to single or multiple users. Multiplexing as well as
demultiplexing of the different channels takes place at the
optical level, without the interference of electronic circuits,
and thus increases the capabilities of optical technology in
terms of performance, reliability, and control.
Broadcast-and-Select networks [5] comprise a number of
nodes and a passive star coupler in order to broadcast from
all inputs to all outputs. Every node can select at a given time
among the channels available to perform transmission.
In general, if we wish to categorize Broadcast-and-Select
architectures based on passive star couplers, we can refer to
four possible configurations [6, 7]:
(a) Fixed optical transmitters and fixed optical receivers (FT-FR),
(b) Tunable optical transmitters and tunable optical receivers (TT-TR),
(c) Fixed optical transmitters and tunable optical receivers (FT-TR), and
(d) Tunable optical transmitters and fixed optical receivers (TT-FR).
This paper focuses on the Broadcast-and-Select star local
area network with one tunable transmitter and one fixed
receiver (TT-FR) per node (Fig. 1). The network comprises
N nodes and W wavelengths.
As a general principle of the protocol, data transmission
occurs sequentially in transmission frames, which are divided
into timeslots. In each frame, the algorithm examines the
transmission requests of the nodes of the network and
performs a scheduling process in order to
Fig. 1 Broadcast-and-Select star network with a tunable transmitter and
a fixed receiver per node. The network has N nodes and W channels
define the order of the data transmission of each node on the
desired transmission channel. There is no restriction on the
number of nodes that can transmit in each frame: every node
can send its requests and have its transmission assigned to a
timeslot. In addition, every node has access to every
transmission channel. However, two things cannot happen.
First, a node cannot transmit simultaneously on two channels,
and second, two or more nodes cannot transmit simultaneously
on the same channel. In the FatMAC protocol, which is
considered the ancestor of this philosophy [8], the
transmission of the requests occurs during the reservation
phase, while the computation of the schedule occurs during
the data phase. Since then, there have been improvements on
FatMAC [8], such as HRP/TSA [9, 10], Online Interval-based
Scheduling (OIS) [11], and the Predictive Online Scheduling
Algorithm (POSA) [12].
The present paper presents a new algorithm, the check and
sort predictive online scheduling algorithm (CS-POSA), which
improves the performance of the above algorithms. Its goal is
to eliminate the schedule-computation time that delays the
data transmission, by predicting the requests of the nodes. At
the same time, it considers each node individually and serves
it according to its workload. Moreover, since the algorithm
adopts prediction, it must be trained for a certain amount of
time before it starts fully functioning. The most powerful
element of the algorithm is that it significantly reduces the
unused timeslots that inflate the size of the frame, and
consequently increases the performance of the network. This
is achieved in a simple way, namely by changing the order in
which the nodes are examined and served while the scheduling
matrix is formed. All of the above is realized while
maintaining the advantage of predicting the requests of the
nodes with a minimal time delay.
The improvement that CS-POSA brings is presented through a
detailed series of figures that show simulation results for
both the channel utilization and the throughput of the entire
network. The question of whether the algorithm introduces
extra delay is answered through throughput-delay and
throughput-delay-jitter figures, while the performance of the
algorithm under different network workloads is examined
through throughput-load figures.
The paper is organized as follows: section 'Network
background' analyses the architecture and the structural
elements of the network, while section 'OIS and POSA
protocols' analyses the two protocols (OIS, POSA) on which
the present work builds. Section 'CS-POSA' presents the new
algorithm, CS-POSA, and is followed by figures and detailed
comparisons of the performance of the two algorithms, POSA
and CS-POSA, in section 'Detailed Performance Analysis'.
Finally, concluding remarks are given in the last section.
Network background
This section describes the structure of the network, providing
information on the material used as well as the principles
adopted for the operation of the scheduling algorithms.
Network structure
The network comprises N nodes and W channels. Each node has
an array of tunable transmitters, which provides the
transmission of data on the appropriate channels. Moreover,
each node has a fixed receiver, which allows it to receive
data on the particular channel dedicated to it, also known as
its home channel. Thus, the network supports both multicasting
and unicasting. The interconnection of the channels is
accomplished through a passive device that allows transparent
and immediate transfer of data from the transmitters to the
receivers.
It is apparent that the effectiveness of such a TT-FR system
strongly depends on the relation between nodes and channels.
In general, we can discern four cases. First, the number of
nodes is smaller than the number of available channels
(N < W). In this case, W − N channels remain idle, since the
transmission paths are ample for the communication of the N
nodes; this case therefore has no practical meaning. Second,
the number of nodes and channels is equal (N = W). In such a
case, the number of paths practically equals the number needed
for immediate communication to occur. If we ignore the case
where two or more nodes attempt to transmit concurrently and
designate the same node as the receiver, then we could argue
that this is the ideal case for direct communication, since
each node transmits its data immediately and without delay on
the transmission channel that is unique to it. However, this
case is rare for practical and cost reasons, so the number of
channels is reduced in relation to the number of nodes, at the
cost of losses in throughput and in mean delay. In the third
case (N > W), a group of N/W nodes shares each home channel.
That is, if we have 10 nodes and two channels, five nodes
share the same home channel. Finally, it is worth mentioning
the fourth case, where the network has a great number of nodes
and a very small number of channels (N ≫ W). It is apparent
that a very efficient algorithm is needed in this case, so as
not to burden the network with huge time losses and a small
throughput.
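The N > W case above can be sketched in a few lines of Python. The round-robin mapping (node n receives on channel n mod W) is an assumed, illustrative assignment; the text only states that N/W nodes share each home channel, without prescribing a particular mapping.

```python
# Sketch of the N > W case: N/W nodes share each home channel.
# The modulo mapping below is an assumption for illustration only.

def home_channel(node: int, num_channels: int) -> int:
    return node % num_channels

N, W = 10, 2  # the example from the text: 10 nodes, two channels
sharing = {}
for n in range(N):
    sharing.setdefault(home_channel(n, W), []).append(n)

for w, nodes in sharing.items():
    print(f"channel w{w} is home to nodes {nodes}")  # five nodes per channel
```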
Categorization of MAC protocols
In a medium access control (MAC) protocol, a series of rules
is applied to a network so that it can provide communication
services between the nodes of the network. Furthermore, in
WDM networks, where the communication medium is split into
channels, the protocol provides appropriate channel
management, so that the delay of information transport between
the nodes is reduced. At the same time, the channel
utilization, which leads to high network throughput, is kept
at a high level.
The objective of MAC protocols is to coordinate the nodes
that want to transmit and receive data. This coordination is
combined with channel availability. Additionally, the
scheduling of nodes and transmission channels is mapped onto
time units, so that conclusions can be drawn about the overall
network performance. Indeed, in order to draw conclusions
about the functionality of a MAC protocol, we should refer to
the kinds of collisions that appear in the WDM
Broadcast-and-Select networks under examination. There are two
kinds of collisions under investigation [13]: (a) channel
collisions and (b) receiver collisions. The first kind appears
when two or more nodes try to transmit on the same wavelength
simultaneously. The protocol must prevent possible channel
collisions; otherwise packets have to be retransmitted, since
the information is destroyed. The second kind of collision
appears when two or more nodes try to transmit data
simultaneously to the same node on different wavelengths.
Again, in such a case packet retransmission is needed.
Naturally, this case is possible only in architectures with
tunable receivers.
A basic distinction among MAC protocols is the existence of a
control channel (Fig. 2). If the network has at least one
channel dedicated to the coordination of the channels and
their transmission times, the protocol is pre-transmission
coordination-based. Otherwise, i.e., if the network does not
use a separate channel for node transmission control, the
protocol is pre-allocation based.

Fig. 2 Categorization of MAC protocols (pre-allocation vs. pre-transmission;
with vs. without receiver collisions; offline vs. online)

Undoubtedly, in many instances protocols do not have a
separate control channel but exert control through control
packets.
The fixed objective of MAC protocols is to eliminate the two
kinds of collisions mentioned above. In the pre-transmission
case, there are protocols that eliminate the collisions that
arise when two or more nodes try to transmit data
simultaneously, on different wavelengths, towards the same
node; other protocols do not succeed in eliminating these
collisions. Representative MAC protocols that allow receiver
collisions are the family of Aloha [14], slotted Aloha [14],
Delayed Slotted Aloha [14], Aloha CSMA [14], Aloha/slotted
CSMA [15], DTWDMA [16], the Quadro mechanism [17], and
N-DT-WDMA [18]. In all the aforementioned protocols, the
mechanism initiated at the moment of a packet collision leads
to retransmission, a fact that reduces the network performance
with apparent negative consequences for bandwidth and delay.
The other category of pre-transmission coordination-based
protocols foresees the possibility of a receiver collision and
allows only one of the participating nodes to transmit at the
specific time to the common receiver node; the other scheduled
nodes safely transmit their data at different moments. The
assurance of successful and safe transmission of data is
achieved through a scheduling algorithm based on the
transmitting nodes' requirements. In other words, the
algorithm initially accepts the requirements of all nodes and
organizes them in a transmission frame, described by a traffic
demand matrix D = [di,j]. Time is divided into timeslots.
Usually, transmission is organized in frames, where each frame
is composed of a reservation phase followed by a data phase.
The matrix then stores, for every node, the number of
timeslots required for transmission on each specific channel.
The nodes then transmit the requested data during the current
frame at different moments.
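As a rough illustration, the traffic demand matrix D = [di,j] can be represented as follows; the function name is ours, not part of the protocol specification.

```python
# Minimal, illustrative representation of the demand matrix D = [d_i,j]:
# entry d[i][j] is the number of timeslots node i requests for
# transmission on channel j in the coming frame.

N, W = 3, 3
demand = [[0] * W for _ in range(N)]

def request(d, node, channel, slots):
    """Record a reservation-phase request of `slots` timeslots."""
    d[node][channel] = slots

request(demand, 0, 1, 2)  # node n0 asks for two timeslots on channel w1
print(demand)  # [[0, 2, 0], [0, 0, 0], [0, 0, 0]]
```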
Another important parameter is the amount of information that
the scheduling algorithm needs in order to function. If the
algorithm requires knowledge of the entire demand matrix D in
order to compute the schedule, i.e., it requires the requests
of every node, then it is characterized as offline. On the
other hand, if the algorithm can work with partial control
information, it can start to operate with the sole knowledge
of the requests of the first of the N nodes. In this case, the
algorithm is characterized as online. It is apparent that,
having knowledge of the entire matrix, we get a better overall
view of the requests and can consequently act more
efficiently. This means that offline algorithms excel in
efficiency, whereas online algorithms excel in terms of the
time delay between the reservation phase and the transmission
phase, because scheduling time is saved by starting the
computation as soon as a part of the nodes' requests arrives.
A further drawback of offline algorithms is their high
complexity, which varies from O(M²C) to O(M³C³) [11]. Such
high complexity does not conform to optical fiber technology
and leads to correspondingly long time delays. Besides, high
complexity implies demanding hardware implementations, a
problem in optical technology, where the development of
hardware devices is slow and immature. Some examples of
offline algorithms are SS/TDMA [19], MULTI-FIT [20], SRA [21],
and TAA [21].
OIS and POSA protocols
This section analyses the two protocols (OIS, POSA) that
constitute the previous work on which our improvement builds.
A brief reference to OIS
OIS [11] is a typical online algorithm, which exploits the
advantage of algorithms that do not need the whole demand
matrix but only a part of it. Its online identity stems from
the fact that it starts computing the schedule as soon as the
first node sends its set of requests. The algorithm thus
examines the nodes and their requests one after the other and
incrementally constructs the scheduling matrix. So, on the one
hand it saves time, since it starts without delay from the
first node; on the other hand, the same procedure is applied
to every node and in every frame regardless of the workload of
each node.

With a network of N nodes and W channels, the algorithm
functions as follows. The moment the set of requests of the
next node n is known, the algorithm examines the availability
of the channels for the t1 timeslots of transmission that node
n requires. If an available channel w is located in the time
gap between timeslot t and timeslot t + (t1 − 1), then the
next step is to examine any potential collisions; in other
words, the algorithm checks whether in the timeslots t until
t + (t1 − 1) node n is already scheduled to transmit on
another channel w1 (w1 ≠ w). If the registration is
accomplished, the timeslots t to t + (t1 − 1) are bound to
node n on the wth channel. Thereafter, the lists are updated
and the requests of the remaining N − n nodes are examined.
Consequently, the request table (scheduling matrix) of OIS
contains, for every timeslot, the nodes that transmit at that
moment and the corresponding transmission channel.
The function of the algorithm is better understood through an
example with a given demand matrix. Let us consider the
following 3 × 3 demand matrix D, with three nodes (n0, n1, n2)
and three transmission channels (w0, w1, w2):

        | 1  2  2 |
    D = | 3  3  1 |
        | 5  4  3 |
According to matrix D, node n0 requests one timeslot for
channel w0, two timeslots for channel w1, and two timeslots
for channel w2. Respectively, node n1 requests three timeslots
for channel w0, three timeslots for channel w1, and one
timeslot for channel w2. Finally, node n2 requests five
timeslots for channel w0, four timeslots for channel w1, and
three timeslots for channel w2. The algorithm does not need to
know the rows of all three nodes in order to function, but
only that of node n0, [1, 2, 2] (row 1). So, having received
the requests of node n0, the algorithm finds the time gaps of
the channels w0, w1, and w2, and one third of the scheduling
matrix starts to form (Fig. 3). It is worth mentioning that
the completion of the table also takes into account the tuning
time of the transmitter, which for simplicity is considered
equal to one timeslot. Next, the intervals available to
fulfill the requests of node n1 are determined, so the second
third of the scheduling matrix forms (Fig. 4). Finally, the
intervals available to fulfill the row of node n2 are
determined and the scheduling matrix is complete. In Fig. 5,
it can be observed that the examined frame lasts for nineteen
timeslots. The number of timeslots in which the channels
remain idle determines the performance of the algorithm.
Channel w0 carries data (or tunes to a new channel) for ten
continuous timeslots, while it remains idle for nine
timeslots. Channel w1 transmits (or tunes) for twelve
timeslots and remains idle for the rest. Finally, channel w2
transmits for only nine timeslots and remains idle for the
rest.
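The first-fit search described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; for brevity it ignores the one-slot tuning time, so it produces a shorter frame than the 19-slot one of Figs. 3–5.

```python
# First-fit sketch of the OIS scheduling step: for each node, taken in
# fixed order n0, n1, ..., and for each channel, find the earliest run of
# free timeslots during which the node is not transmitting elsewhere.
# Tuning timeslots are ignored here, unlike in Figs. 3-5.

def ois_schedule(demand, frame_len=64):
    n_nodes, n_channels = len(demand), len(demand[0])
    grid = [[None] * frame_len for _ in range(n_channels)]  # grid[w][t] = node
    busy = [[False] * frame_len for _ in range(n_nodes)]    # node active anywhere

    for n in range(n_nodes):              # online: one node's row at a time
        for w in range(n_channels):
            need = demand[n][w]
            if need == 0:
                continue
            for t in range(frame_len - need + 1):
                span = range(t, t + need)
                # channel free and node idle over the whole candidate span?
                if all(grid[w][s] is None and not busy[n][s] for s in span):
                    for s in span:
                        grid[w][s] = n
                        busy[n][s] = True
                    break
    return grid

# The 3x3 demand matrix D of the example above:
D = [[1, 2, 2], [3, 3, 1], [5, 4, 3]]
grid = ois_schedule(D)
frame = max(t for row in grid for t, v in enumerate(row) if v is not None) + 1
print("frame length without tuning overhead:", frame)
```

Note that the resulting schedule satisfies both constraints from the text: no two nodes share a channel in the same timeslot, and no node transmits on two channels at once.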
Description of POSA
POSA [12] is a variation of OIS with the added element of
request prediction. The main aim of POSA is to decrease the
time needed to compute the scheduling matrix with the help of
a hidden Markov chain. With this method, the algorithm
succeeds in predicting the requests of the nodes for the
subsequent frame based on the requests of the nodes in the
previous frames. In this way, time is saved, since the
Fig. 3 Scheduling matrix after receiving the requests of node n0
(channels w0, w1, w2; timeslots 0–18; T denotes a tuning timeslot)

Fig. 4 Scheduling matrix after receiving the requests of node n1

Fig. 5 Final scheduling matrix after receiving all requests
algorithm does not wait for the nodes to send their requests
before constructing the scheduling matrix. Having predicted
the requests of the nodes, the scheduling is pipelined with
the actual transmission of the packets. This parallel
processing leads to an important, if not complete, reduction
of the schedule computation time. From the way the algorithm
functions, it is understood that it works at 100% only when
the following relation holds:

Tp + Ts ≤ Tr + Td,

where Tp is the prediction time, Ts the scheduling operation
time, Tr the reservation time, and Td the actual data transfer
time. Apart from this relation, which has to hold for the
algorithm not to lose performance, POSA possesses another
point that deserves special attention. The most crucial
element of the algorithm is obviously its predictor. This
predictor, after completing a period of training during which
it only learns without predicting, starts predicting based on
all the real requests of the nodes. Nevertheless, it is
natural for the predictor to make some errors to a small or
large extent. Simulations [12] show that 70% of the
predictions have an error rate of less than 20%. This fact
allows the use of POSA's method of prediction without dramatic
consequences for its performance. POSA's basic operation
occurs in three phases: the learning phase, the switching
phase, and the prediction phase. In the first phase, the
predictor learns, building its history queue, while the
reservation, the scheduling, and the transmission of the
packets operate just as in OIS. In the next phase, the
algorithm switches from learning to prediction, while in the
final phase the algorithm no longer constructs the scheduling
matrix from real requests, but from its own predictions for
the next frame. Thus, two things occur simultaneously: the
prediction of the scheduling matrix for the next frame and the
transmission of the packets of the current frame. The
algorithm continues this alternation (prediction-transmission),
while at the same time it builds its history queue so that
the predictor is informed of the changes in the traffic of the
network.
Before briefly reviewing the operation of the predictor, it is
important to mention the three primary assumptions that must
hold so that the algorithm is able to predict. First, there
must be a predictable underlying pattern to be detected within
the N × W matrix. Second, the numbers of requests at the
different nodes must be independent, and third, there must be
an upper bound K on the requests of every node.
As already mentioned, the predictor is the most important
element of the algorithm. Assuming that there are N nodes and
W channels, its aim is to build the N × W traffic matrix,
whose entries take values with a lower bound of 0 and an upper
bound of K, i.e., K + 1 different states. Figure 6 shows the
form of the traffic matrix after the operation of the
predictor. The value pi,j (0 ≤ i ≤ N − 1, 0 ≤ j ≤ W − 1) lies
in the range from zero to K, where K is a constant, so that it
is possible to construct a probabilistic deterministic
predictor. Consequently, the value p0,0 is the number of
timeslots that the predictor predicted for node n0 to transmit
through channel w0 in the following frame. Respectively, the
value p0,1 is the number of timeslots that the predictor
predicted for node n0 to transmit through channel w1 in the
next frame, and so on. These values range from zero to K. The
predictor operates likewise for the rest of the table entries,
so it can be argued that there are N × W different independent
predictors that predict the requests of the nodes for the
following frame and operate both simultaneously and
independently, since there is no exchange of information
between them.
Fig. 6 The traffic matrix after the operation of the predictor: rows
correspond to nodes, columns to channels, with entries p0,0 … p(N−1),(W−1)

Each individual predictor pn,w from the N × W matrix maintains
its current state (i.e., the number of timeslots requested by
node n on channel w in the previous frame)
in a table with the K + 1 different states that may occur at
any time. For each of the K + 1 different states, i.e., for
each of the entries in the table, the predictor maintains
K + 1 models that consist of the state transition tables for
each possible state, together with a history queue that
records the changes of state of pi,j from one state to
another. Consequently, the predictor pn,w predicts the most
probable of the K + 1 states based on the K + 1 models that it
maintains. Of course, in each frame the predictor feeds the
models with the real states that the nodes send, so that it is
informed of the changes in the workload of each node. The
history queue is maintained for two reasons: first, to provide
the predictor with a clear picture of the current traffic of
each pn,w, and second, to resolve ties in the prediction. It
achieves the first aim by enqueueing the most recent state
transition at the tail of the queue and dequeueing the oldest
recorded transition from the head. It achieves the second aim
by traversing the history queue from the tail until one of the
state transitions within the tie is found; this state is then
recorded in the traffic matrix for the following frame.
Learning algorithm
The predictor uses two different algorithms, the learning
algorithm and the prediction algorithm. During each frame
of data, the predictor first runs the learning algorithm and
then the prediction algorithm. The former is responsible for
maintaining and updating the data of the history queue, while
the latter is responsible for predicting the demand matrix as
accurately as possible.
The learning algorithm is implemented in three steps:
1. At the beginning, the predictor pn,w[f − 1], i.e., the
predictor of node n that transmits on channel w during data
frame f (Fig. 7), holds the real number of requests of frame
f − 1, i.e., the number of slots that node n requested for
channel w. Within the state transition table corresponding to
the current state pn,w[f], the algorithm increments the
element representing the state into which the predictor
actually changed during the previous frame (f − 1).
Fig. 7 The sequence of the frames: frame f − 1, frame f (current), frame f + 1

For example, let us assume that during frame 1980, node 12
requested three timeslots for channel 2 (Fig. 8); thus, the
predictor p12,2[1980] = 3. Consequently, it will increment
(Fig. 9), in the transition table of state p12,2[1979] = 4
(i.e., the actual number of slots requested in frame 1979),
the entry for a change to state 3 for the predictor of node 12
and channel 2.
2. The head of the history queue, at position f − 1 − V,
holding the oldest transition recorded by the individual
predictor pn,w, is dequeued, and the element corresponding to
that transition of pn,w is decremented.

For example, let us assume that at position 1980 − 1000
(assuming a queue of length 1000), the queue shows that state
p12,2[1979] = 4 changed to state 0; then the count of the
transition from state 4 to state 0 is decremented (Fig. 10).

3. The state of the predictor changes to the state that
represents the real number of slots requested by node n on
channel w during data frame f.

For example, the predictor p12,2[1980] now shows state 3,
which means that the real number of slots requested by node
12 on channel 2 during frame 1980 is three.
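The three learning steps can be sketched as follows, assuming an upper bound K on requests and a history queue of length V; the class and variable names are illustrative, not from the paper.

```python
from collections import deque

# Sketch of one individual predictor p_{n,w} carrying out the three
# learning steps above. K and V are illustrative values.

K, V = 4, 1000

class Predictor:
    def __init__(self):
        self.state = 0  # slots requested in the previous frame
        # counts[s1][s2]: number of observed transitions from state s1 to s2
        self.counts = [[0] * (K + 1) for _ in range(K + 1)]
        self.history = deque()  # (from_state, to_state); oldest at the head

    def learn(self, actual: int) -> None:
        """Record the transition from the current state to `actual`."""
        self.counts[self.state][actual] += 1       # step 1: count it
        self.history.append((self.state, actual))
        if len(self.history) > V:                  # step 2: forget the oldest
            old_from, old_to = self.history.popleft()
            self.counts[old_from][old_to] -= 1
        self.state = actual                        # step 3: move to new state

p = Predictor()
p.learn(4)  # frame 1979: node requested 4 slots
p.learn(3)  # frame 1980: 3 slots, so the transition 4 -> 3 is recorded
print(p.state, p.counts[4][3])  # prints: 3 1
```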
Prediction algorithm
The prediction algorithm is implemented in two steps:
Fig. 8 Operation of the learning algorithm: for node 12 and channel 2,
frames 1979, 1980, and 1981 requested 4, 3, and 1 timeslots respectively;
frame 1980 is the current frame

Fig. 9 State transition table for state 4 after the update for frame 1979:
the entry for the transition to state 3 is incremented

Fig. 10 State transition table for state 4 after the update for the
transition at frame f − 1 − V: the corresponding entry is decremented

(1) Being in data frame f, the algorithm determines the most
probable state for the current state to change to, and outputs
it as the prediction for the following frame.
For example, the predictor p12,2[1980] equals three timeslots.
So, in this phase, the predictor should predict the number of
slots most likely to be requested by node 12 on channel 2 for
data frame 1981. It examines the transition count of each
state and selects the greatest one (Fig. 11). Thus, for frame
1981, state 1 will be selected as the predicted state.
(2) If there is more than one state with the same highest
transition count, then the tie is resolved by traversing the
history queue from the tail. The first instance of one of the
tied transitions encountered within the history queue is the
output of the specific predictor.

For example, being in frame 1981, the predictor shows state 1.
Consequently, the algorithm examines which state is the most
probable for the following frame, i.e., which state has the
highest transition count. However, in the figure, three states
(states 1–3) can be detected having the same transition count
for the specific predictor (Fig. 12). In a case like this, the
algorithm must select the state that is most likely to be
chosen for the following transmission frame. So, the algorithm
traverses the queue from the tail to find the most recent
transition to either
Fig. 11 State transition table for state 3, with transition counts 22, 35,
33, 23, and 11 to states 0–4. The predictor outputs state 1, which has the
highest state transition count
Fig. 12 State transition table for state 1, with transition counts 23, 30,
30, 30, and 17 to states 0–4. The predictor must determine to which of the
three tied states (1, 2, or 3) state 1 most recently changed
Fig. 13 History queue for state 1 (entries ordered from HEAD to TAIL).
Traversing the queue from the tail, the predictor outputs state 1 as the
most recent transition
state 1, 2, or 3. If a transition to state 1 is the first of
these encountered from the tail, as in the figure, then state
1 is chosen (Fig. 13).
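The two prediction steps, including the tie-break by history-queue traversal, can be sketched as follows. The transition counts mirror Fig. 12, while the history-queue contents are invented for illustration.

```python
from collections import deque

# Sketch of the two prediction steps: select the state with the highest
# transition count, breaking ties with the most recent matching transition
# found while walking the history queue from the tail.

def predict(current, counts, history):
    row = counts[current]
    best = max(row)
    tied = [s for s, c in enumerate(row) if c == best]
    if len(tied) == 1:
        return tied[0]                      # step 1: a unique winner
    for from_s, to_s in reversed(history):  # step 2: tail-first traversal
        if from_s == current and to_s in tied:
            return to_s
    return tied[0]  # fallback when no tied transition is in the queue

counts = {1: [23, 30, 30, 30, 17]}  # transitions from state 1, as in Fig. 12
history = deque([(1, 2), (0, 3), (1, 3), (2, 1), (1, 1)])  # tail is newest
print(predict(1, counts, history))  # 1: the most recent of the tied states
```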
CS-POSA
The newly proposed algorithm is called CS-POSA. It is based on
the two protocols presented in section 'OIS and POSA
protocols', being their continuation and improvement. Its aim
is to extend POSA while maintaining the pipelining of the
schedule computation and the full operation of the predictor.
The extension of CS-POSA is based on shifting the order of the
schedule computation of the nodes, or in other words, on
guiding the order in which the nodes are checked and
programmed. The shifting is based on the workload of each
node, which means that the new protocol takes into account not
only the general traffic of the network but also the specific
workload of each node. This is accomplished through analytical
examination of the requests of each node and their subsequent
comparison, so that the ranking of the workloads of the nodes,
as it emerges from all the queues, is known.
Phases of CS-POSA
CS-POSA operates in three phases (Fig. 14):

1. Learning phase: In this phase, CS-POSA learns the workload
of the network and builds the history queues. It must be
pointed out that in this phase CS-POSA does not yet examine
the workload of each node individually.
2. Switching phase: Here the algorithm switches from learning
to prediction.
3. Prediction phase: In this phase, CS-POSA predicts the
requests of the nodes for the following frame. The innovation
introduced here is the way the predictions are processed. POSA
ignores the variation of the traffic among the nodes, building
the transmission scheduling matrix starting from the predicted
requests of the first node, then the second one, and so on
until the last one; this is because POSA uses OIS to construct
the scheduling matrix, examining the requests from the first
to the last node in order. CS-POSA, on the contrary, does not
blindly follow the same service order from first to last. It
examines the summative workload, i.e., the sum of the requests
of each node to all destinations, and based on it processes
the nodes in descending order.
Structural comparison among OIS, POSA and CS-POSA
The structures of the three algorithms are presented comparatively in
Fig. 15. It is assumed that real time flows from top to bottom. There
are three main phases: receiving requests from the nodes, computing
the scheduling matrix, and transmission. The contrast between OIS and
the other two algorithms is obvious, since OIS does not use parallel
processing, i.e., the method of pipelining, to minimize the
computation time of the schedule, as the other two algorithms do. On
the one side, POSA carries out simultaneously both the transmission
of the data, based on the scheduling matrix that the predictor
constructed in the previous transmission frame, and the prediction of
the scheduling matrix for the following frame. At the same time, the
learning algorithm operates, receiving the real requests of the nodes
and renewing the history queue of each predictor. After completing
the prediction phase, POSA constructs the scheduling matrix for the
following frame. On the other side, CS-POSA follows the same order
and parallelism as POSA, but before it constructs the final
scheduling matrix it shifts the processing order of the nodes,
starting from the one with the highest count of requests and
finishing with the one with the smallest count. In this way it
maintains the parallel processing of the schedule computation phase
with the transmission phase, i.e., the actual transmission of the
data to the nodes.
Predictor of CS-POSA
Assuming that there are N nodes and W channels, each node possesses a
set of W queues and each queue can store 0 to K actual requested
numbers of slots. The aim of the predictor is to construct the N × W
demand matrix D with N rows
Fig. 14 Phases of CS-POSA: the learning phase spans the first frames
(frame 1 to frame V), the switching phase follows, and the prediction
phase covers the remaining frames (frame V + 1 onward).

Fig. 15 Structural comparison among OIS, POSA, and CS-POSA. OIS
serially repeats getting requests, computing the schedule, and
transmission in each frame; POSA overlaps transmission with
prediction and schedule computation while getting requests and
filling the history queues; CS-POSA additionally performs shifting
and computes the schedule based on the shifted order.
(nodes) and W columns (channels). Each value in the table can vary
between 0 and K. Thus, the total predictor looks like N × W separate
and independent predictors, whose aim is to predict accurately the
expected next number of slots for node n, which transmits on
channel w.
The predictor needs the following environment in order to operate:
1. Probability of prediction through a traffic model.
2. Independent and uninfluenced operation of each of the N × W
different predictors.
3. The demand matrix which will be constructed cannot have a value
higher than K, where K is a known constant value.
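Given this environment, the construction of the demand matrix can be sketched as follows; `build_demand_matrix` and the stand-in predictor are hypothetical names used only for illustration, with every entry clamped to the known constant K:

```python
def build_demand_matrix(predict, N, W, K):
    """Build the N x W demand matrix D.

    predict(n, w) stands in for the (n, w)-th of the N*W independent
    predictors and returns the expected number of slots that node n
    will request on channel w; every entry is clamped to [0, K].
    """
    return [[min(max(predict(n, w), 0), K) for w in range(W)]
            for n in range(N)]

# Toy stand-in predictor (the real ones are the history-queue
# predictors described above); it just returns a deterministic value.
N, W, K = 3, 3, 5
D = build_demand_matrix(lambda n, w: (2 * n + 3 * w) % 7, N, W, K)
assert len(D) == N and all(len(row) == W for row in D)
assert all(0 <= D[n][w] <= K for n in range(N) for w in range(W))
```

Each call to `predict` is independent of the others, mirroring requirement 2 above.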
There are four goals that are set for CS-POSA in order for it to
operate effectively. These are:
1. Accuracy of prediction: A high level of accuracy is desired in
the predictions. In POSA and CS-POSA, this is achieved through
empirical results [12].
2. Real time learning: The predictor should be capable of responding
dynamically to changes in network activity. Moreover, CS-POSA
comprehends the network activity even better, since it receives the
requests of each node separately.
3. Asymptotic time complexity: The CS-POSA algorithm must not have a
high level of complexity. POSA has an asymptotic time complexity
less than O(MC2K). The same goal is achieved by CS-POSA, too.
4. Scalability: In order to be useful, the algorithm must be
scalable with values such as the N nodes, the W channels, an upper
bound on K, and the size of the table V. This goal is achieved both
by POSA and by CS-POSA.
Scheduling shifting
In order to better understand the need for studying and co-estimating
the individual workload of each node separately, a specific example
is examined.
The following traffic matrix has been constructed by nine individual
predictors:

D = | 1  2  2 |
    | 3  3  1 |
    | 5  4  3 |
It is clear that the predictor p0,0 predicted one timeslot for node
n0 on channel w0, the predictor p0,1 predicted two timeslots for node
n0 on channel w1, and so on. Thus, in total there are:

p0,0 = 1  (node n0, channel w0),
p0,1 = 2  (node n0, channel w1),
p0,2 = 2  (node n0, channel w2),
p1,0 = 3  (node n1, channel w0),
p1,1 = 3  (node n1, channel w1),
p1,2 = 1  (node n1, channel w2),
p2,0 = 5  (node n2, channel w0),
p2,1 = 4  (node n2, channel w1),
p2,2 = 3  (node n2, channel w2).
POSA, operating like OIS, will construct the schedule matrix shown in
Fig. 16. As can be seen in Fig. 16, it is assumed that one timeslot
is devoted to tuning latency. Before CS-POSA constructs the schedule
matrix, it takes the two following steps:
Step 1. Add up each row of the traffic matrix D into a new table S
that registers the total amount of requests by each node:

D = | 1  2  2 |        | 5  |
    | 3  3  1 |,   S = | 7  |
    | 5  4  3 |        | 12 |

So, table S consists of the total amount of the requests of the three
nodes for the three transmission channels. Table S is a mirror of the
activity of each node.
Step 2. Sort table S in declining order. In case two nodes are found
with the same total number of requests, the selection between them is
random. In this way, vector S changes into the ordered vector S′:

S′ = | 12 |
     | 7  |
     | 5  |
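The two steps above can be sketched in Python; the function name `service_order` is illustrative, and ties here are broken by node index rather than randomly:

```python
def service_order(D):
    """Steps 1 and 2 of CS-POSA's shifting.

    Step 1: sum each row of the demand matrix D into a vector S of
    per-node totals.  Step 2: sort the node indices by their totals in
    declining order to obtain the service order.
    """
    S = [sum(row) for row in D]                                      # Step 1
    order = sorted(range(len(S)), key=lambda n: S[n], reverse=True)  # Step 2
    return S, order

# The demand matrix of the example in the text:
D = [[1, 2, 2],
     [3, 3, 1],
     [5, 4, 3]]
S, order = service_order(D)
print(S)      # [5, 7, 12]
print(order)  # [2, 1, 0]: node n2 is served first, then n1, then n0
```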
This denotes that the requests of node n2 will be examined first,
then those of node n1, and finally those of node n0. For the same
demand matrix, CS-POSA will construct the scheduling matrix shown in
Fig. 17.
It is clear from Figs. 16 and 17 that the scheduling matrix of POSA
spends a total of 19 timeslots, in which 26 out of 57 slot positions,
i.e., a percentage of 43%, are wasted. On the other side, the
scheduling matrix of CS-POSA spends a total of 15 timeslots, in which
14 out of 42 slot positions, i.e., a percentage of 33.33%, are
wasted. Of course, the case that has been examined is quite
specialized. In section ‘Detailed Performance Analysis’, the two
algorithms are compared and contrasted, giving results for the
utilization of the channels, the throughput of the network, and the
relations between load and throughput, load and delay jitter,
throughput and delay, and throughput and delay jitter. The two
algorithms have been measured over 10,000 transmission frames, of
which the first 1000 belong to the learning phase.
Algorithm complexity
A substantial element in the function of the algorithm is its time
complexity. This factor is examined because the algorithm operates in
an optical network, where the speed is at a maximum. So, the
algorithm must keep up with this speed and, at the same time, it must
allow the network some changes in terms of nodes and channels without
dramatic consequences for its performance. Moreover, the time
complexity of the algorithm must not influence the function of the
predictor, so that the predictor correctly predicts each change of
the nodes and the channels of the network.
It is significant that in POSA, the time complexity of the overall
predictor is given by:

O((K + 1 + V)(NW)).
Fig. 16 Scheduling matrix constructed by POSA: 19 timeslots over
channels w0, w1, w2 (T denotes a tuning timeslot).

Fig. 17 Scheduling matrix constructed by CS-POSA: 15 timeslots over
channels w0, w1, w2 (T denotes a tuning timeslot).
Using P processors that run the algorithm at the same time, if the
factor P is considered to be at the level of NW (i.e., P = NW/p),
where p is a constant, then:

O((K + 1 + V)(NW)/P).
CS-POSA, on the other side, maintains each individual predictor at
the same value of time complexity:

O(K + 1 + V).

In the case that there are enough processors so that the predictors
can function simultaneously, the overall time complexity is equal to:

O(K + 1 + V).
Thus, CS-POSA does not bring any extra complexity to the function of
the predictor, maintaining at the same time the scalability of the
system. However, the algorithm also performs a shifting based on the
load of each node, which, of course, has nothing to do with the
function of the predictor. The sorting that is introduced by the
algorithm occurs outside the context of the prediction and does not
influence it at all. So, considering that the shifting has the
following complexity:

O(N log N)

and co-estimating the fact that the algorithm works with P different
processors at the level of NW, the extra complexity of CS-POSA is:

O((log N)/W).

Of course, the focus is on local area networks, where the value of N
is at low levels. In any case, it must be stressed that the extra
complexity of the algorithm is minimal and neither delays nor
influences the function of the algorithm, since it is also possible
for each individual predictor to predict both the workload and the
position of the node in table S′ without any classification,
maintaining the complexity at the same levels as that of POSA.
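The extra-complexity figure can be derived in one line from the stated assumption P = NW/p, with p a constant:

```latex
\[
\frac{O(N \log N)}{P}
  = \frac{O(N \log N)}{NW/p}
  = O\!\left(\frac{p \log N}{W}\right)
  = O\!\left(\frac{\log N}{W}\right),
\]
```

since the constant p is absorbed into the O-notation.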
Detailed performance analysis
This section presents the performance analysis results. The two
algorithms, POSA and CS-POSA, have been studied and analyzed in the
context of utilization and throughput, under uniform traffic. Also,
the behavior of the two algorithms is presented when the workload of
the network is increased, in the context of throughput vs. delay,
throughput vs. delay jitter, throughput vs. load, and delay jitter
vs. load. In the results of the simulation, it is assumed that N is
the number of nodes, W is the number of channels, and K is the
maximum value over all entries in the traffic matrix. Also, it should
be mentioned that the tuning latency time is considered to be equal
to zero timeslots for simplicity reasons.
The simulation took place in a C environment. Its duration was 10,000
frames, of which the first 1000 belong to the learning phase of the
algorithms. A random number generator was used to provide values to
the traffic matrix. The values range between 0 and K and, in order
for the goal of scalability to be achieved, the value of K is not
constant in the following experiments but is each time equal to:

K = NW/5.
Measures and measurements that have been studied
In the analysis of the two algorithms, common measures and
measurements have been used; they are presented below:
(A) Schedule length is symbolized by L and denotes the number of
slots in the data phase, as determined by the scheduling algorithm.
(B) Total slots requested by all nodes is symbolized by R and denotes
the total number of timeslots that were requested by all the nodes of
the network.
(C) Schedule or channel utilization is symbolized by U and denotes
the fraction of slots actually utilized for packet transmission in a
scheduling matrix. Schedule utilization is defined as:

U = (actual slots)/((total slots) × (channels)) or U = R/(LW).

(D) Throughput is symbolized by Θ and denotes the average number of
bits transmitted per transmission frame per channel. It is measured
in Megabits per second. So:

Θ = lR/(W(C + Ll/S)),

where l denotes the packet length in bits, C the computation time in
microseconds, and S the transmission rate in Mbps. Since the two
algorithms that are examined do not waste computation delay, due to
the pipelining, the relation finally becomes:

Θ = (R/(LW))S or Θ = US.

(E) Delay is symbolized by D and denotes the mean time delay of the
transmitted data in timeslots. It equals the number of timeslots that
pass from the moment that a data packet is produced in the queues
until the moment it is transmitted. If, for example, a data packet
has been produced at time t1 and in the scheduling matrix it has been
set to be transmitted at time t2, where t2 − t1 = t timeslots, then
D = t.
(F) Delay jitter is symbolized by σ and denotes the variation of the
delays with which packets traveling on a network connection reach
their destination [22].
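As an illustration of measures (C) and (D), the two quantities can be computed for a small set of hypothetical values; the numbers below are not taken from the paper's experiments:

```python
def utilization(R, L, W):
    """Schedule utilization U = R / (L * W): requested slots over the
    schedule length times the number of channels."""
    return R / (L * W)

def throughput(R, L, W, S):
    """Throughput = U * S, valid when computation time is hidden by
    pipelining; S is the line rate in Mbps."""
    return utilization(R, L, W) * S

# Illustrative numbers: R = 24 requested slots, schedule length
# L = 14 slots, W = 3 channels, line rate S = 2400 Mbps.
R, L, W, S = 24, 14, 3, 2400
print(round(utilization(R, L, W), 4))     # -> 0.5714
print(round(throughput(R, L, W, S), 1))   # -> 1371.4 (Mbps)
```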
Scheduling utilization results
The results from the comparison between the two algorithms are shown
in Fig. 18. It is clear that CS-POSA clearly improves on POSA. This
improvement persists for all numbers of nodes from 6 to 60 that were
simulated, both for 8 and for 12 channels. The difference between the
two algorithms is smaller for eight channels. The biggest difference
for eight channels appears to be on the level of 3.7% for 12 nodes,
while the smallest reaches 0.5% for 60 nodes. The biggest difference
for 12 channels appears to be on the level of 4.45% for 24 nodes,
while the smallest reaches 0.9% for 60 nodes. The most important
conclusion from the comparison between the two algorithms when
measuring the schedule utilization is that CS-POSA remains constantly
better than POSA for each number of nodes, either for 8 or for 12
channels.
Throughput results
The results from the comparison between the two algorithms on the
issue of throughput are presented in the two following figures for
two different line speeds, i.e., for 1.2 Gbps (Fig. 19) and 2.4 Gbps
(Fig. 20). For 1.2 Gbps the maximum throughput that POSA provides is
11.5 Gbps in 12 channels, while CS-POSA provides 12 Gbps in 12
channels. For 2.4 Gbps the maximum throughput that POSA provides is
23 Gbps, while CS-POSA provides 23.6 Gbps.
The greatest difference that the two algorithms present is in 12
channels and reaches 1.5 Gbps. It is worth mentioning that, again,
CS-POSA is constantly better than POSA, both for 1.2 Gbps and for
2.4 Gbps, regardless of the number of nodes in the network.
Throughput vs. delay results
The results from the comparison between the two algorithms are
presented in the two figures that follow (Figs. 21 and 22). The
behavior of the two algorithms is presented, i.e., the relation
between throughput and delay, altering the values of the workload of
the network, i.e., of K. The values of K that are tested are: 2, 4,
6, 8, 10, 20, 30, 40, 50, 100, and 200. The number of nodes is 24,
while the available channels are 8 in the first case and 12 in the
second case. The line speed has been set to 2.4 Gbps.
In the first graph (Fig. 21), it is obvious that there is a constant
difference between the algorithms in the context of throughput as the
time delay is increased. In other words, it can be observed that for
each value of K, both algorithms have almost the same time delay,
while CS-POSA is improved in the context of throughput. It is typical
that when the two algorithms show a time delay of almost 100
timeslots (98.83 for POSA, 100.70 for CS-POSA), their equivalent
difference in throughput, in favor of CS-POSA, reaches 500 Mbps.
Fig. 18 Channel utilization (%) vs. number of nodes (6 to 60) for
POSA and CS-POSA, with W = 8 and W = 12.

Fig. 19 Throughput in Gbps (1.2 Gbps line speed) vs. number of nodes
(6 to 60) for POSA and CS-POSA, with W = 8 and W = 12.
In the second graph (Fig. 22), the two algorithms are compared for
the same K values, with 24 nodes but with 12 channels. The results do
not differ greatly from the first graph, since for each value of the
workload, CS-POSA precedes POSA without a significant time delay. It
is typical that their greatest difference in mean time delay is eight
timeslots (425 for POSA and 433 for CS-POSA) when the value of K is
200, i.e., when every position of the traffic matrix can receive
values from 0 to 200 timeslots. Of course, with a difference of eight
timeslots, the equivalent difference reaches 660 Mbps. Moreover, when
K receives the value 6, the difference becomes greatest and reaches
1620 Mbps, while the equivalent difference in mean time delay does
not surpass 0.7 timeslots (34.02 for POSA and 34.71 for CS-POSA). So,
it can be concluded that CS-POSA does not lag significantly in time
delay, which means that the improvement it brings to the network in
the context of throughput is stable and comes without extra time
burden for the network.
Throughput vs. load results
The results from the comparison of the two algorithms are shown in
the figure that follows (Fig. 23). The figure presents the behavior
of the algorithms, i.e., the relation between throughput and load,
changing the values of the workload of the network, i.e., of K. The
values of K that are tested are: 2, 4, 6, 8, 10, 20, 30, 40, 50, 100,
and 200. The number of nodes is 24, while the available channels are
12. The speed of the line has been defined at 2.4 Gbps.
It must be mentioned that while the workload of the network is
increased, the throughput is decreased. For example, when K equals 2,
the throughput equals 22.21 Gbps. When K equals 10, the throughput
equals approximately 21.7 Gbps. Finally, when K equals 200, the
throughput is decreased, reaching 19.13 Gbps. This phenomenon is not
often met in the category of networks examined. Nevertheless, it
appears in both algorithms examined, OIS [11] and POSA [12],
Fig. 20 Throughput in Gbps (2.4 Gbps line speed) vs. number of nodes
(6 to 60) for POSA and CS-POSA, with W = 8 and W = 12.

Fig. 21 Throughput vs. delay (2.4 Gbps), POSA and CS-POSA, W = 8.

Fig. 22 Throughput vs. delay (2.4 Gbps), POSA and CS-POSA, W = 12.

Fig. 23 Throughput vs. load (Max K) at 2.4 Gbps, POSA and CS-POSA,
N = 24, W = 12.

Fig. 24 Throughput vs. delay jitter (2.4 Gbps), POSA and CS-POSA,
W = 12.

Fig. 25 Delay jitter vs. load (Max K) at 2.4 Gbps, POSA and CS-POSA,
N = 24, W = 12.
owing to the architecture of the protocols. When the workload is
increased, it means that the sizes of the packets that arrive at the
nodes in order to be transmitted actually increase. This is denoted
by the increase of the maximum value of K. When K is increased, it is
difficult for the scheduling algorithm to find open space in the
constructed scheduling matrix. If there were an open space of nine
slots in the constructed scheduling matrix and the arriving packet
had a size-duration of transmission of 10 timeslots, then the
algorithm could not break it into pieces. It would then place it at
the end of the matrix, where there would be available space for a
packet of 10 timeslots. This leads to a decrease of the channel
utilization, as the unused timeslots increase and the throughput
decreases.
Throughput vs. delay jitter results
The results from the comparison between the two algorithms are
presented in Fig. 24. The behavior of the two algorithms is
presented, i.e., the relation between throughput and delay jitter,
altering the values of the workload of the network, i.e., of K. The
values of K that are tested are: 2, 4, 6, 8, 10, 20, 30, 40, 50, 100,
and 200. The number of nodes is 24, while the available channels are
12. The line speed has been set to 2.4 Gbps.
In the graph, it is obvious that there is a constant difference
between the algorithms in the context of throughput as the time delay
is increased. In other words, it can be observed that for each value
of K, both algorithms have almost the same delay jitter, while
CS-POSA is improved in the context of throughput. The maximum
difference in delay jitter between the two algorithms is observed
when K ranges from 0 to 100 and is approximately equal to 2.57
timeslots, in favor of POSA, whereas the difference in throughput is
917 Mbps, in favor of CS-POSA. The minimum difference is observed
when K ranges from 0 to 2 and is approximately equal to 0.08
timeslots, in favor of POSA, whereas the difference in throughput is
752 Mbps, in favor of CS-POSA. So, it can be concluded that CS-POSA
does not lag significantly in delay jitter, which means that the
improvement that it brings to the network in the context of
throughput is stable.
Delay jitter vs. load results
The results from the comparison of the two algorithms are shown in
Fig. 25. The figure presents the behavior of the algorithms, i.e.,
the relation between delay jitter and load, changing the values of
the workload of the network, i.e., of K. The values of K that are
tested are: 2, 4, 6, 8, 10, 20, 30, 40, 50, 100, and 200. The number
of nodes is 24, while the available channels are 12. The speed of the
line has been defined at 2.4 Gbps. It can be observed that for each
value of K, both algorithms have almost the same delay jitter. The
two algorithms do not differ greatly, since for each value of the
workload, CS-POSA precedes POSA without significant delay jitter. The
maximum difference in delay jitter between the two algorithms is
observed when K ranges from 0 to 100 and is approximately equal to
2.57 timeslots, in favor of POSA. The minimum difference is observed
when K ranges from 0 to 2 and is approximately equal to 0.08
timeslots, in favor of POSA.
Conclusions
This paper has presented an improved protocol that belongs in the
Broadcast-and-Select category, with a star-coupled WDM architecture.
The protocol is collision-free and pretransmission
coordination-based. It uses a learning algorithm, CS-POSA, to predict
the requests of the nodes for the following data frame. CS-POSA has
the advantage of being programmed to comprehend the workload of each
individual node separately, so that it can construct a more effective
scheduling matrix, based on which it transmits the data to the
destination nodes. It gets from the predictor the constructed
scheduling matrix for the following data frame and simultaneously
shifts the order of control and service of the nodes, starting from
the one with the additively most requests and finishing with the one
with the fewest. All the above occur at the same time as the
transmission of the packets of the current transmission frame.
Conclusively, CS-POSA maintains all the positive and remarkable
aspects of the OIS and POSA algorithms and, at the same time, it uses
a different policy for the evaluation and service of the requests of
the nodes. In this way it improves not only the schedule utilization
and the throughput of the network, but also the mean time delay in
relation to the throughput. So, it is a reliable solution in the
context of network throughput, without extra time burden or extra
hardware implementation.
References
1. Papadimitriou, G.I., Papazoglou, Ch., Pomportsis, A.S.: Optical
switching: switch fabrics, techniques, and architectures. IEEE/OSA
J. Lightwave Technol. 21(2), 384–405 (2003)
2. Brackett, C.A.: Dense wavelength division multiplexing networks:
principles and applications. IEEE J. Select. Areas Commun. 8, 948–
964 (1990)
3. Stern, T.E., Bala, K.: Multiwavelength Optical Networks, Addison-
Wesley, Reading, MA, (1999)
4. Green, P.: Progress in optical networking. IEEE Commun. Mag.
39(1), 54–61 (2001)
5. Tsukada, M., Keating, A.J.: Broadcast and select switching system
based on optical time-division multiplexing (OTDM) technology.
IEICE Trans. Commun. E82-B(2), 335–343 (1999)
6. Papadimitriou, G.I., Miliou, A.N., Pomportsis, A.S.: OCON: an
optically controlled optical network. Computer Commun. 22, 811–
824 (1998)
7. Papadimitriou, G.I., Miliou, A.N., Pomportsis, A.S.: Optical logic
circuits: a new approach to the control of fibre optic LANs. In:
Proceedings IEEE 23rd Annual Conference on Local Computer Networks
(LCN‘98), Boston, Massachusetts, pp. 326–335 (1998)
8. Sivalingam, K.M., Dowd, P.W.: A multi-level WDM access
protocol for an optically interconnected multiprocessor system.
IEEE/OSA J. Lightwave Technol. 13(11), 2152–2167 (1995)
9. Sivalingam, K.M., Wang, J.: Media access protocols for WDM net-
works with on-line scheduling. IEEE/OSA J. Lightwave Technol.
14(6), 1278–1286 (1996)
10. Sivalingam, K.M., Wang, J., Wu, X., Mishra, M.: Improved on-line
scheduling algorithms for optical WDM networks, DIMACS Work-
shop on Multichannel Optical Networks, New Brunswick, NJ, pp.
43–61 (1998)
11. Sivalingam, K.M., Wang, J., Wu, J., Mishra, M.: An interval-based
scheduling algorithm for optical WDM star networks. Photonic
Netw. Commun. 4(1), 73–87 (2002)
12. Johnson, E., Mishra, M., Sivalingam, K.M.: Scheduling in optical
WDM networks using hidden Markov chain based traffic predic-
tion. Photonic Netw. Commun. 3(3), 271–286 (2001)
13. Papadimitriou, G.I., Tsimoulas, P.A., Obaidat, M.S., Pomportsis,
A.S.: Multiwavelength Optical LANs, Wiley, New York, (2003)
14. Habbab, I.M.I., Kavehrad, M., Sundberg, C.W.: Protocols for very
high-speed optical fibre local area networks using a passive star
topology. IEEE/OSA J. Lightwave Technol. LT-5(12), 1782–1794
(1987)
15. Shi, H., Kavehrad, M.: Aloha/slotted-CSMA protocol for a very
high-speed optical fibre local area network using passive star topol-
ogy. Proceedings IEEE INFOCOM‘91, Bal Harbour, Florida, USA,
vol. 3. pp. 1510–1515, (1991)
16. Chen, M.S., Dono, N.R., Ramaswami, R.: A media-access pro-
tocol for packet-switched wavelength-division metropolitan area
networks. IEEE J. Select. Areas Commun. 8(6), 1048–1057 (1990)
17. Chlamtac, I., Fumagalli, A.: QUADRO-Star: high performance
optical WDM star networks. Proceedings IEEE Globecom‘91,
Phoenix, Arizona, USA, vol. 42. pp. 2582–2590 (1991)
18. Humblet, P.A., Ramaswami, R., Sivarajan, K.N.: An efficient com-
munication protocol for high-speed packet switched multichannel
networks. IEEE J. Select. Areas Commun. 11(4), 568–578 (1993)
19. Ito, Y., Urano, Y., Muratani, T., Yamaguchi, M.: Analysis of a
switch matrix for an SS/TDMA system. Proceedings of the IEEE 65(3),
411–419 (1977)
20. Borella, M.S., Mukherjee, B.: Efficient scheduling of nonuniform
packet traffic in a WDM/TDM local lightwave network with arbi-
trary transceiver tuning latencies. IEEE J. Select. Areas Commun.
14(5), 923–934 (1996)
21. Azizoglou, M., Barry, R.A., Mikhtar, A.: Impact of tuning delay on
the performance of bandwidth-limited optical broadcast networks
with uniform traffic. IEEE J. Select. Areas Commun. 14(5), 935–
944 (1996)
22. Ferrari, D.: Distributed delay jitter control in packet-switching
internetworks. J. Internetwork Res. Exp. 4(1), 1–20 (1993)
Panagiotis G. Sarigiannidis received the B.S.
degree in computer science from the Depart-
ment of Informatics of Aristotle University,
Thessaloniki, Greece, in 2001. He is currently
working toward the Ph.D. degree in optical
networks at the same university. His research
interests include optical networks and optical
switching.
Georgios I. Papadimitriou received the
Diploma and Ph.D. degrees in Computer Engi-
neering from the University of Patras, Greece
in 1989 and 1994, respectively. From 1989 to
1994 he was a Teaching Assistant at the Depart-
ment of Computer Engineering of the Univer-
sity of Patras and a Research Scientist at the
Computer Technology Institute, Patras, Greece.
From 1994 to 1996 he was a Postdoctorate Re-
search Associate at the Computer Technology Institute. From 1997 to
2001, he was a Lecturer at the Department of Informatics, Aristotle Uni-
versity of Thessaloniki, Greece. Since 2001 he is an Assistant Professor
at the Department of Informatics, Aristotle University of Thessaloniki,
Greece. His research interests include optical networks, wireless net-
works, high speed LANs and learning automata. Prof. Papadimitriou is
Associate Editor of six scholarly journals, including the IEEE Transac-
tions on Systems, Man and Cybernetics-Part C, the IEEE Transactions
on Broadcasting, the IEEE Communications Magazine and the IEEE
Sensors Journal. He is co-author of the books “Multiwavelength
Optical LANs” (Wiley, 2003) and “Wireless Networks” (Wiley, 2003) and
co-editor of the book “Applied System Simulation” (Kluwer, 2003). He
is the author of more than 120 refereed journal and conference
papers.
He is a Senior Member of IEEE.
Andreas S. Pomportsis received the B.S. de-
gree in physics and the M.S. degree in electron-
ics and communications, both from the Univer-
sity of Thessaloniki, Thessaloniki, Greece, and
the Diploma in electrical engineering from the
Technical University of Thessaloniki, Thessaloniki, Greece. In 1987,
he received the Ph.D. degree in computer science from the University
of
Thessaloniki. Currently, he is a Professor at the
Department of Informatics, Aristotle University, Thessaloniki, Greece.
He is co-author of the books Wireless Networks (New York: Wiley,
2003) and Multiwavelength Optical LANs (New York: Wiley, 2003).
His research interests include computer networks, learning automata,
computer architecture, parallel and distributed computer systems, and
multimedia systems.