An Implementation of an Overlay Network Architecture Scheme for Streaming
Media Distribution
Ch. Z. Patrikakis, Y. Despotopoulos, A. M. Rompotis, N. Minogiannis, A. L. Lambiris, A. D. Salis
Telecommunications Laboratory of the National Technical University of Athens
Heroon Politechniou 9, Zographou, Greece 15773, tel: +30 210 7721513, fax: +30 210 7722534
Abstract
In this paper we introduce the implementation of a streaming video distribution scheme based on client relay modules. The purpose is the formation and maintenance of an overlay network architecture responsible for the distribution of streaming traffic to end-clients. This architecture is based on modular system components that can accommodate the integration of existing commercial solutions for media reproduction, such as video players (used in the implementation as black-box components). The result is a system design capable of managing and sustaining a media distribution scheme based on an overlay network infrastructure. The presented implementation has been developed in the context of the EU IST OLYMPIC project and is part of a larger network architecture supporting personalised multimedia distribution covering major athletic events [1].
1. Introduction
For over a decade, researchers have spent
considerable effort on the design of protocols to support
broadcasting for efficient many-to-many
communication. IP Multicasting [2] was introduced in the late 1980s as an extension to the dominant IP protocol in order to meet the increasing bandwidth requirements of specific applications such as real-time/video-on-demand
services. To overcome fundamental problems related to
IP multicasting “global” deployment, research has
turned to other solutions based on application layer data
forwarding and group communication services [3]. As a consequence, overlay architectures have been proposed for supporting data distribution and have, in some cases, been used for serving streaming applications.
The rest of the paper is organized in three parts as
follows: First, the reason and motivation for deploying
overlay network architectures for supporting media
distribution is provided. Next, the integration and
combination of such solutions for providing an end to
end distribution scheme is presented.
Following these introductory sections, the paper
continues with the presentation of the architectural
model, presenting the framework on which the
implementation was based. The next section provides the details of the architecture at both the functional and implementation levels.
Finally, the last two sections present the trials performed so far, together with the results, conclusions and future work that will be integrated in the proposed architecture.
2. Necessity and motivation of overlay networks
The IP Multicast service was proposed as an
extension to the Internet architecture to support efficient
multi-point packet delivery at the network level. With
IP Multicast, a single packet transmitted from a specific
source is delivered to an arbitrary number of receivers
by replicating the packet within the network at fan-out
points (routers) along a distribution tree rooted at the
traffic's source. Although IP Multicast uses network resources quite efficiently, its deployment has been slowed by issues related to scalable inter-domain routing protocols, charging models, and robust congestion control schemes [4]. Therefore, the existing multicast model mainly targets the communication needs of large groups and is usually limited to areas covered by the same provider [5][6].
Given the above, it is clear that other solutions are needed that succeed in transmitting the same data to multiple users while overcoming these problems. Because of the problems encountered during the deployment of a network-level multicast service, many recent research proposals have converged on an application-layer multicast service built over unicast, and have described solutions for such a service and its applications [7][8][9][13].
The alternative solution proposed is based on
moving the data replication and distribution schemes to
the network periphery, using application based
multicast models, supported by unicast transmission
towards the peripheral distribution points [10][12]. This methodology results in a virtual layer built above the network infrastructure, in which each edge corresponds to a unicast path between two end systems in the underlying Internet.
The general notion is that applications are self-
organized into a logical overlay network, and transfer
data along the edges of the overlay network using
unicast transport services. The overlay network is built
as a graph with properties so that spanning trees can be
easily embedded without the need for a routing
protocol, e.g., as a hypercube [8]. Application-layer
multicast has a number of appealing features:
§ There is no requirement for multicast support in the network layer infrastructure, nor for allocation of a global group identifier (such as an IP multicast address).
§ Since packets flow over the virtual layer via unicast, the flow control, congestion control, and reliable delivery services available for unicast transmission can be exploited.
§ Adaptability: the overlay network can be dynamically optimized.
§ Robustness: increased control and the adaptable nature of the overlay make it more robust.
§ Customization: the design and construction criteria of the overlay network can be based on the requirements of the application.
However, application layer multicast has some
significant drawbacks. Since data is forwarded between
end-systems, end-to-end latencies can be large. In
addition, if multiple edges of the overlay architecture
are mapped to the same network link, multiple copies of
the same data may be transmitted over this link,
resulting in an inefficient use of bandwidth [8][9]. Thus, overlay network topologies deploying application-layer multicast should be evaluated against performance metrics designed to reduce end-to-end latencies and optimize bandwidth allocation.
The majority of these proposed solutions typically have the members of a multicast group organize into an essentially random application-level mesh topology, over which a traditional multicast routing algorithm such as DVMRP (Distance Vector Multicast Routing Protocol) is used to construct distribution trees (exceptions are the TBCP and YOID algorithms). Routing
algorithms require every node to periodically announce
its estimated distance from every possible destination to
its local neighbours; hence every node maintains state
for every other node in the topology. Further, in the
case of a change in the topology, every node must be
informed about this change and update its routing table
if required [10].
Overlay multicast solutions make the deployment of
broadcast functionality easier as they implement their
functionality entirely at IP hosts and require no
modifications at the core network routing technology.
With this wide range of broadcast-capable solutions, we
have ended up with the scenario where a number of
diverse and, in some cases, non-interoperable protocols co-exist in the Internet. No single protocol has been deployed globally; in fact, the goal is not the dominance of one specific overlay protocol across the Internet. This is due to a variety of reasons, including technical shortcomings in the protocols and their implementations, the fact that some protocols are geared towards specific applications, and a range of business model concerns. In practice, the Internet landscape is likely to be fragmented into potentially overlapping clouds of broadcasting/multicasting connectivity, with no interoperation across these clouds.
3. Integrating and combining overlay solutions for end to end media distribution
An important advantage of deploying an overlay architecture (through an Application Layer Multicast mechanism) for distributing streaming media is how naturally it fits into Content Delivery Networks. The concept behind such a network model is to push the content to the edge of the network and deliver it with extensive intelligence and manageability. Since abundant network bandwidth is usually available in the network periphery, we can relay the streaming distribution to the edge of the network. The purpose is to serve all the clients attached to the same local network or domain at the edge, rather than leaving the content distribution to take place at the core.
A CDN is a representative overlay network over the Internet, built specifically for the high-performance delivery of common web objects, static data and rich multimedia content. A CDN's functionality is based on layer-4 switching, forming an overlay network architecture capable of providing dedicated services such as streaming delivery [19]. The aforementioned overlay solutions can be integrated in CDN implementations as long as they remain transparent to the end-users and the network infrastructure. The interoperability issue between overlays and CDNs (a representative overlay mechanism) must be confronted when a provider decides to combine these techniques; but as long as the overlay network and the CDN operate independently, a simple solution of central overlay node administration and management can probably be adopted.
The most attractive feature of deploying overlay
networks in serving streaming media applications is the
fact that these architectures do not require any
modifications at the network layer infrastructure
shifting the burden of transported traffic to higher
levels. Consequently, application layer architectures, acting complementarily to Content Delivery Networks, can be deployed at the edges, providing a further expansion of the overall content distribution network [13]. Following the adage "keep the core simple and move the complexity to the edge", the content delivery procedure takes place at access points without affecting the backbone network, thus relieving it of the increased streaming traffic. In the next section we
provide the description of the overlay architecture
mentioning as well its adaptability to various
networking conditions and its interoperability with
commercial CDN implementations.
4. The overlay network distribution model
The implemented platform provides an overlay
solution for streaming media distribution by using a
relay scheme based on peripheral reflectors or simple
clients. Such schemes have been proposed in the past, based on several architectures for overlay network creation and management [7][8][12]. However, the presented implementation does
not rely only on the existence of dedicated relay nodes,
but makes use of the media receiving clients as possible
relay points.
The whole implementation has focussed on
presenting a solution that can be integrated seamlessly
with existing opensource and commercial solutions for
both media server and media clients. In this context, the
media source is considered a black box, having as
unique requirement the compliance to the standards of
RTP protocol and distributed media formats. The same
applies for the media client, in which several
commercial solutions have been tested, in combination
with new software located on the client host in order to
provide full client relay capabilities. The result is the
provision of an overlay distribution scheme that can be
used over every existing scheme (including other
overlay solutions such as Content Delivery Networks)
and in which the participating nodes may decide
whether or not to make use of the client relay
functionality, as it is presented in Figure 1.
Figure 1 : Media production and distribution
This provides a flexible architecture that can be
offered in two different flavours: As a complete end to
end scheme in which a central Media Server is
distributing information to clients forming an
independent overlay distribution architecture, or as an
overlay distribution scheme suitable for distribution of
media over access networks, in combination with other
overlay solutions such as CDNs. In this case, the Media
Server (or even several Media Servers) is considered
the connecting point between the two complementary
overlay implementations (egress point for the CDN).
The next figure presents the aforementioned
interoperation between CDN and application layer
forwarding mechanism. The CDN is applied in the core, while the application layer forwarding solution is applied in the access network.
This leads to the media distribution scheme
presented in Figure 2. In this figure, the rectangles
represent a Media Server (MS) or a media Relay Server
(RS), while the circles represent media clients (C).
Figure 2 : Media distribution scheme (CDN at the core, application layer forwarding at the periphery)
Note that we have three types of clients:
1. Plain media clients that operate in the normal way, receiving the media stream from a designated MS (or RS). These clients are represented by white circles.
2. Media clients that, though they make use of the client relay scheme for receiving the media, do not participate as relay nodes for further distribution to other clients. These are represented by grey circles.
3. Media clients that not only make use of the client relay scheme for receiving the media, but also actively participate as relay nodes for other clients. These are represented by grey circles surrounded by a ring.
The implementation allows a client to choose not only whether it wishes to participate in the process of media relaying at the peripheral level, but also its level of involvement in this architecture: passive, when it only uses the architecture for receiving media, or active, when it participates in the architecture as a relay node.
5. Functional description and
implementation details
Before proceeding in the description of the
participating nodes, we present the philosophy behind
the implemented architecture. The idea is based on the
use of client hosts that deploy both a media player (for
local playback of the received stream) and a media
server (for relaying the media to other clients) as
depicted in Figure 3.
Based on this implementation, each client, instead of using a direct connection to a media server as it would in a normal data distribution architecture, receives the distributed data through the use of a media server
located on the same host, which in turn may be used to
transmit this data to other clients. This distribution
scheme is depicted in the data distribution plane of
Figure 3, in which two types of clients are represented: a client operating in the normal way, connected directly to the server, and two clients using the relay scheme.
Figure 3 : Implementation components (data distribution and overlay configuration planes)
Each of these clients has a local media server acting
as RS, which is the entity responsible for contacting the
media server. The media players in each client get the
requested information using the local servers as relay
nodes. We must note that a major requirement was to treat the players as black boxes; therefore, no modifications to the player applications were made. Evaluation was performed with both open-source
and commercial players. This ensures that the provided
solution is open and can accommodate non custom
software for media reproduction on the clients.
5.1. Functional description
5.1.1. Relay Mediator (RM). To provide the means for
diverting the initial client request for a specific media
file from a specific location on the network to the local
relay server and to instruct the server to retrieve this
data and relay it to the requesting client, a relay
mediator is used. Its goal is to capture the initial request
provided by the user who is unaware of the underlying
topology, and to trigger the relay mechanism in order to
receive (and retransmit upon request) the requested
stream. Based on the decision that the solution should be compatible with all existing player implementations, a special scheme for requesting the media file was devised. In this scheme, the user provides information about the requested media to the media player software through a front end that, instead of transforming the user request for a "MediaFile" from a "MediaServer" into the standard form rtsp://MediaServerAddress/MediaFile, provides the player with the following request: rtsp://localhost/MediaFile?MediaServerAddress
Using this format, the media player is directing the
request for the media file to the media server located on
the same host (localhost). However, since the media is
not already on the local media server, the latter needs to
know the address of the server from which it will get
the requested media, before relaying it to the player. For
this, the relay mediator module is used to translate the last part, "?MediaServerAddress", which passes transparently inside the media player's request, in order to determine the source of the media. At this point the
architecture is able to provide the means for receiving
the media, through a local relay. However, the target is
to receive the media through a different relay point by
using the relay mechanism. In order to do so, the relay mediator does not directly request the media file from the address resolved from the MediaServerAddress information, but provides this information to the
Overlay Configuration Management (OCM) module,
which is responsible for the configuration and
maintenance of the overlay architecture.
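The request-diversion scheme described above can be sketched in a few lines. This is a minimal illustration, not the actual RM code: the function name and return shape are assumptions, and only the URL convention itself comes from the paper.

```python
from urllib.parse import urlparse

def parse_enhanced_request(url):
    """Split an RTSP request URL into (media_file, media_server_address).

    Illustrative helper: in the enhanced format the player is pointed at
    localhost, while the real media server address rides along after the
    '?' and passes transparently through the player to the relay mediator.
    """
    parsed = urlparse(url)
    media_file = parsed.path.lstrip("/")
    # An empty query means the standard format: no translation is needed.
    media_server = parsed.query or None
    return media_file, media_server

# Enhanced format: the relay mediator recovers the true media server address.
print(parse_enhanced_request("rtsp://localhost/highlights.mp4?ms.example.org"))
# -> ('highlights.mp4', 'ms.example.org')
```

In the real system this translation happens inside the QTSS module rather than in a standalone function, but the parsing logic is the same.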
5.1.2. Overlay Configuration Management (OCM).
To provide the necessary functionality for overlay
architecture management, each node in the architecture
uses a special module responsible for the configuration
and maintenance of the overlay architecture. Two types of this module can be distinguished.
The relay OCM module located in each client host
is responsible for receiving the connection requests
from the Relay Mediator and performing the necessary
actions in order to determine the best suitable
distribution node. This may either be a relay node or the
original server from which the media was requested. In
order to perform these actions, the Relay OCM module
needs to communicate with other OCM modules and
exchange information related to the creation of the
distribution scheme. The procedures and messages used
are described in the next section. Apart from the active
role in the selection of the most suitable distribution
node, the relay OCM module may have a passive role in cases where it is contacted by another relay OCM module. In
such a case, the relay OCM will simply send to the
requesting relay OCM the necessary information in
order to assist it in discovering the most suitable
distribution server.
The server OCM module is usually located on the
server and is responsible for administrating the overlay
distribution scheme. This module is the central point
from which the formation of the overall architecture is
monitored, and to which all media requests are
addressed. In order to keep this module independent,
the implementation has led to a module separate from
the media distribution server. This means that the module can in fact be placed at a different location from the media server.
5.2. Implementation details
5.2.1. RM Implementation. The implementation of the RM was based on the QuickTime Streaming Server (QTSS) [14][15]. QTSS was chosen since it is an open-source, standards-based streaming server. It is also extensible, in that it offers a programming interface for
creating modules, which allow developers to
supplement or enhance the server’s functionality. Using
this API, modules may register to receive notification
for certain events such as an incoming RTSP request.
Upon reception of an event, a module may either handle
it directly and/or pass it back to the server to allow the
default processing to take place. Furthermore, QTSS
supports the notion of relay nodes providing the perfect
platform for building a node that relays a media stream
from a source such as a media server to a client such as
a QuickTime player. In this scenario QTSS acts as a
client in relation to the source and as a source in
relation to the client.
It is worth noting though, that QTSS lacks built-in
functionality to set up a relay session on demand. It is
not possible for a client to request a stream through a
particular relay. Any nodes that should operate as relays
must be appropriately configured through QTSS’s
administrative interface before the client request.
Dynamic relay-session set-up is handled by the
Relay Mediator which is implemented as a QTSS
module. The Relay Mediator monitors every incoming
RTSP message. Upon reception of an RTSP
DESCRIBE message for a particular stream a check is
performed on the message’s command line in order to
determine whether it is in the standard format "rtsp://mediaserveraddress/mediafile" or in the enhanced format "rtsp://localhost/mediafile?mediaserveraddress". In the
first case the request is passed back to the local server
and the default processing takes place. If the local
server is configured as a relay node for the MS then
streaming of the mediafile will commence and the client
will receive the requested stream through the server. In
the latter case the command line is parsed and the
mediaserveraddress along with the mediafile
parameters are passed to the Relay OCM as it will be
explained in the next paragraph. Consequently the
Relay OCM returns the address of the ‘closest’ node
able to relay the requested stream. Based on this address
the RM module configures the local QTSS to start
relaying the stream from the given node. Furthermore, a message is sent to the Relay OCM indicating that this node is now an active relay node for the particular stream.
5.2.2. OCM Implementation. The OCM modules are
also implemented as QTSS modules. The following
diagram (Figure 4) presents the messages exchanged
during the request for a media file.
Figure 4 : OCM message exchange diagram (participants: Client 1, Client 2 (reflector), Client 3 (reflector), Video Server; messages: connect request, list of proposed reflectors, probe messages, best SAS connection calculation procedure, request for data)
As we can see, the request for the media file from the client is directed to the local RM, which in turn passes this request to the relay OCM module attached to it. This
module will now initiate the relay node discovery
procedure by sending to the server OCM module of the
MS a connect request. The address of the MS is
extracted from the enhanced media request format
described earlier. Upon reception of this request, the server OCM module checks the list of available reflectors against a predefined set of criteria in order to perform a first-level filtering of the nodes. The basic criterion is a special flag that indicates the intention of each client to participate actively in the media redistribution process. If the user does not wish to offer his client workstation as a possible reflector, the flag is cleared, informing the server that this node should not pass the first-level filtering described above.
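The first-level filtering step amounts to selecting only those nodes whose participation flag is set. A minimal sketch (the record fields are illustrative assumptions, not the actual message format):

```python
# Each candidate reflector as known to the server OCM (fields are
# illustrative assumptions, not the actual on-the-wire format).
reflectors = [
    {"addr": "10.0.0.5", "participate": True},
    {"addr": "10.0.0.9", "participate": False},  # flag cleared by the user
    {"addr": "10.0.1.2", "participate": True},
]

# First-level filtering: only nodes with the participation flag set are
# proposed to the requesting client as possible relay nodes.
proposed = [r["addr"] for r in reflectors if r["participate"]]
print(proposed)  # -> ['10.0.0.5', '10.0.1.2']
```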
The nodes that match the criteria are proposed to the requesting client through a response message containing the addresses of these RSs as
possible relay nodes. Upon reception of this message
the relay OCM will initiate a probing procedure to each
of the addresses contained in the reply from the MS.
This probing procedure is based on the transmission of
a “ping” like message to all RSs (including the MS).
Once this message is received by the OCM module in each node, a reply is sent back containing the following information:
§ The roundtrip time (RTT) between the probing node
and the RS. This time is used as a first metric of the
distance between the two nodes.
§ The number of clients connected to the RS. This is used as a second metric, in order to equalize the distribution of clients across relay nodes.
§ The relay level. This is used along with the previous
metric to control the uniform expansion of the tree
in depth and width.
§ The processing power of the RS. This is used as a further metric, indicating the ability of each RS to act as a relay node.
Upon reception of the replies, the OCM on the
requesting client initiates the selection procedure that
takes into account the information received. Once the
selection is done, the address of the selected RS is
provided to the RS of the requesting client, and a
connection between the requesting client RS and the
selected RS is established.
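The selection procedure can be sketched as a weighted score over the probe replies. The weights, field names and combining function below are illustrative assumptions; the paper does not specify how the four metrics are combined:

```python
def select_reflector(replies, w_rtt=1.0, w_load=0.5, w_depth=0.3, w_cpu=0.2):
    """Pick the 'closest' reflector from probe replies.

    Each reply carries the four metrics described above: round-trip time,
    number of connected clients, relay level (tree depth), and processing
    power. Lower score is better; the weights are hypothetical.
    """
    def score(r):
        return (w_rtt * r["rtt_ms"]
                + w_load * r["clients"]
                + w_depth * r["relay_level"]
                - w_cpu * r["cpu_power"])  # more CPU power lowers the score
    return min(replies, key=score)["addr"]

replies = [
    {"addr": "10.0.0.5", "rtt_ms": 12.0, "clients": 3, "relay_level": 1, "cpu_power": 5},
    {"addr": "10.0.1.2", "rtt_ms": 40.0, "clients": 0, "relay_level": 2, "cpu_power": 8},
    {"addr": "ms.example.org", "rtt_ms": 80.0, "clients": 10, "relay_level": 0, "cpu_power": 20},
]
print(select_reflector(replies))  # -> 10.0.0.5
```

Weighting the client count and relay level against the raw RTT is what lets the tree grow uniformly in depth and width rather than everyone attaching to the nearest node.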
After the establishment of the connection, the client's player is able to receive the requested media stream via the local RS, transparently to the user. At the same time, the OCM module sends the following information to the server OCM (in the MS) that is administrating the media distribution process:
§ Its IP address and the address of the RS through
which it receives the media stream.
§ The QoS measured at application level. In terms of
media, this information is provided from the RS and
is the frame loss rate.
§ An indication of the intention of the client to participate (or not) actively as a relay node for other clients.
§ A measurement of the processor occupancy of the client host.
This information is also sent periodically (once every 10 seconds) to the MS in order to keep it updated about the status of the distribution scheme. Based on this information, the MS forms a distribution tree in which each node is annotated with the metrics presented earlier in the paper.
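The periodic status report can be sketched as follows. The field names and the JSON encoding are assumptions for illustration only; the actual message format is internal to the implementation:

```python
import json

def make_status_report(own_ip, upstream_rs, frame_loss_rate, will_relay, cpu_occupancy):
    """Build the status message a client OCM sends to the server OCM every
    10 seconds, carrying the four pieces of information listed above."""
    return json.dumps({
        "ip": own_ip,                        # this client's address
        "upstream_rs": upstream_rs,          # RS it receives the stream from
        "frame_loss_rate": frame_loss_rate,  # application-level QoS metric
        "will_relay": will_relay,            # intention to act as relay node
        "cpu_occupancy": cpu_occupancy,      # processor load of the host
    })

report = make_status_report("10.0.0.7", "10.0.0.5", 0.01, True, 0.35)
print(report)
```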
5.3. Deployment of multicast capabilities at
local level
The process described so far is based on the use of
unicast connections between the participating nodes of
the architecture. However, in cases where multicast can
be deployed at local level, this scheme is used for
distributing the media stream from a RS to other clients
within the same "neighbourhood" cluster. In such a case we can distinguish two scenarios, according to the number of hops between the transmitter (MS or RS) and the receiver (client).
The first scenario covers the case where the
distance between the two nodes is greater than one hop.
This is normally the case where the nodes are located in
different LANs. In this case the intermediate routers
need to be set up so as to support multicasting between the two endpoints. Such a communication scheme must be preconfigured statically, making this scenario appealing for solutions oriented towards the media source (MS-RS communication) or for solutions that may serve large domains based on tree structures that span more than one level (super-domains with many sub-domains).
The second scenario covers the case where the two
nodes are located in the same LAN (TTL distance equal
to 1). In this case, multicast distribution of the
information can be supported without any prerequisites
on local router configuration. This scenario is ideal for
supporting distribution between nodes that are located
in the same LAN such as a small office, a department of
a company or a laboratory. This second scenario, being of more interest for our implementation, has been integrated and tested through the inherent capability of the Darwin Streaming Server to support multicast.
In Figure 5, the process for enabling multicast
distribution of the stream based on the second scenario
is presented.
Figure 5 : Enabling multicast distribution of the stream (Media Server, Relay Server and clients on a laboratory hub, with a router for further interconnection with the rest of the IP infrastructure)
A multicast stream is sent to a group address. This
means several client computers can tune in to the
same stream.
With a reflected multicast, the server receives a
multicast stream, and then sends it to each client that
tunes in to the stream.
Upon setup, the relay node (the RS in our architecture) listens for an incoming broadcast (either unicast or multicast) and forwards, or relays, the stream to one or more destination addresses. The destination addresses may be unicast or multicast, and the server can be configured to relay multiple broadcasts at the same time using internal configuration files [15].
5.4. Recovery mechanism for responding to
interruptions in the distribution chain
When a reflector starts, it registers itself with the
OCM Server. The two modules retain a permanent TCP
connection between them in order to avoid connection
setup time and allow quick communication when
needed. The cost of keeping the TCP connection alive is minimal, as the traffic carried is very sparse and the messages exchanged are usually less than a hundred bytes.
This permanent TCP connection is also an effective
mechanism to detect failures. As soon as this
connection is closed, either because of a network
timeout or a module failure, the server is immediately
informed and can take appropriate action.
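This failure-detection behaviour can be illustrated with plain sockets. A minimal sketch using a local socket pair to stand in for the server-reflector link (the real implementation keeps one such TCP connection per registered reflector):

```python
import socket

# A persistent TCP connection doubles as a failure detector: when the peer
# end closes (module failure) or times out, recv() on the server side
# returns b'' and the server can react immediately.
server_side, reflector_side = socket.socketpair()

reflector_side.close()  # simulate the reflector dying

data = server_side.recv(1024)
if data == b"":
    print("reflector lost; removing it from the relay trees")
server_side.close()
```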
In the presence of a failure, as described above,
there are many ways for the server to respond
depending on the configuration. The simplest is to remove the reflector from all the relay trees in which it participates, together with all the reflectors that are children of the offending node, without taking any further action. This approach leaves recovery to the clients.
When users depending on the failed node discover that
they no longer receive the video stream, they will try to
reconnect, essentially repeating the procedure followed
when they first joined the distribution tree. Since the
server has already updated the relay trees, they will be
redirected to a valid reflector. An enhancement of this technique is for the server to forcibly require all affected nodes to re-connect. Thus, no intervention from the end-users is required and the tree is re-constructed automatically.
This solution introduces a problem when a significant number of clients/reflectors are under a failed node, as the re-connect procedure is initiated synchronously on all dependent nodes. Imagine a tree with 25 nodes where a reflector with 20 children dies. Those 20 children were probably distributed under a number of sub-trees, possibly forming an optimized structure. As all of them were removed from the tree when the root failed, they will try re-connecting to the remaining 5 nodes, forming an un-optimized structure.
To counter this, the server sends RECONNECT messages gradually, starting with the immediate children of the failed node and allowing a small time period for the new connections to take place. Then the children one level deeper in the failed sub-tree are asked to re-connect, and this repeats down to the lowest level. In this way, the disconnected nodes will probably form a structure similar to the one before the failure. Care must be taken regarding the time intervals between the RECONNECT messages, so that the lowest nodes do not experience great delay.
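The gradual re-connect amounts to a level-by-level walk of the failed sub-tree with a pause between levels. A minimal sketch (the tree representation, callback and delay value are illustrative assumptions):

```python
import time

def staged_reconnect(children, failed_node, send_reconnect, pause_s=0.0):
    """Send RECONNECT level by level below the failed node, pausing between
    levels so each layer can re-attach before the next one tries."""
    level = children.get(failed_node, [])
    while level:
        for node in level:
            send_reconnect(node)
        time.sleep(pause_s)  # allow this layer's new connections to settle
        level = [c for n in level for c in children.get(n, [])]

# Hypothetical sub-tree under a failed reflector "R":
children = {"R": ["a", "b"], "a": ["c", "d"], "b": ["e"]}
order = []
staged_reconnect(children, "R", order.append)
print(order)  # -> ['a', 'b', 'c', 'd', 'e']
```

Tuning `pause_s` is exactly the trade-off noted above: long enough for a layer to re-attach, short enough that the deepest nodes are not left without the stream for long.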
The implemented solution incorporates all three methods described, so as to accommodate the capabilities of the different underlying video distribution systems.
6. Trials and tests performed
The implementation of the architecture proposed in
this paper has resulted in a system that can be tested as
a stand-alone platform for end-to-end streaming media
distribution, or can provide a solution deployed in
parallel with existing overlay implementations such as
CDNs. Up to now, the trials performed have focused on
the first scenario, while a full-scale trial of the second
scenario is scheduled in the context of the OLYMPIC
project.
The tests performed include the distribution of
streaming video to a limited number of clients that act
as relay nodes. The distributed video is MPEG-4,
streamed at rates between 150 and 500 kbps. The scope
of the trials is to test:
The ability to integrate different media players
(both commercial and open-source). For this, three
media players supporting MPEG-4 video playback were
used: the Apple QuickTime player [16], the MPEG4IP
player [17] and the PHILIPS Platform4 Player [18].
The results were quite satisfactory, since all players
were able to reproduce the transmitted stream.
However, in cases where the user is able to control
reception parameters, such as timers controlling the
response time from the server, it was necessary to
increase the related value, since the time for discovering
the most suitable relay point and for setting up a
connection to the related RS is added to the setup time
for media distribution.
The stability of the architecture in cases of
relay node failure. For this, the tests included the
disconnection of a relay node which served (directly or
indirectly) several other nodes. This action was
performed “by force”, in the sense that the rest of the
nodes were not informed about the imminent node
disconnection. The results were quite satisfactory in
terms of architecture reconstruction, since after a period
of a few seconds all nodes reconnected and received the
transmitted stream.
The degradation in quality of the streaming
media information. The tests have demonstrated that,
apart from the problems imposed by the network due to
transmission errors, other parameters such as processing
power, memory usage and the number of connected users
on a relay node affect the quality of the media stream,
sometimes even more severely. For this reason, a limit on
the distribution chain depth, in combination with
measurements of the processing power and memory
available on each relay node, must be used together with
measurements of the network characteristics before
deciding on the best relay node for a client.
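One way to combine these criteria is a bounded-depth, weighted cost over candidate relays. The weights, depth limit and normalisation below are illustrative assumptions introduced here, not values from the trials:

```python
# Sketch of the combined relay-selection criterion suggested above:
# first bound the distribution-chain depth, then rank the remaining
# candidates by a weighted mix of host load and network distance.

def select_relay(candidates, max_depth=4, w_load=0.5, w_net=0.5):
    """candidates: list of dicts with keys
       depth (hops below the media server), cpu_load and mem_load
       in [0, 1], and rtt_ms (measured distance to the client).
    Returns the cheapest candidate, or None if all exceed max_depth."""
    eligible = [c for c in candidates if c["depth"] <= max_depth]
    if not eligible:
        return None

    def cost(c):
        host_load = (c["cpu_load"] + c["mem_load"]) / 2.0
        net = c["rtt_ms"] / 100.0        # rough normalisation
        return w_load * host_load + w_net * net

    return min(eligible, key=cost)
```

The depth bound captures the observation that each additional relay hop adds to setup time and degradation, while the cost term trades host load against network proximity.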
The tests were assisted by the OCM monitor tool, a
small application built to graphically monitor and
administer the overall architecture. The monitor resides
on top of the server OCM module of the MS. A sample
screenshot is presented in Figure 6.
Figure 6. OCM monitoring tool
In the instance presented, the MS is sending a media
stream to five clients, based on two levels of relays
(MS → Level 1: 2 RS → Level 2: 3 RS).
The trials performed have shown that the implementation
of the presented architecture can lead to a stable platform
on which several commercial players may be used. The
next step in the trial procedure, to be performed
during the full-scale trials, will test the ability of the
implementation to be integrated with other overlay
network solutions, and the effect it will have on
network performance.
7. Extensions to the architecture and future work
The presented solution has been tested as a stand-
alone solution in a laboratory environment, not including
any network segment with real users (e.g. a service
provider network). However, since the implementation is
provided in the context of an EU IST project, the
presented solution is planned to be integrated as part of
a larger testbed in the project trials. In this testbed, the
overlay solution will be integrated with other overlay
network implementations for real-time media streaming,
including a CDN deployed over a service provider
network. During the trial phase, the interoperability of
the proposed architecture and its performance in real-
life situations will be tested.
Apart from the issue of testing the architecture in a
real life scenario, there are also other issues that need to
be investigated, regarding enhancements of the existing
platform. Work towards these enhancements is
underway, but the results are not yet available.
The first is the provision of a light version of the
client, which will not incorporate the capability for
media relaying. In this version, the Relay Server part
will be substituted by an RTSP proxy module
responsible for acting as the mediator between the
media player application and the media distribution
point, in order to exploit the capability of relay selection.
This version targets clients with reduced processing
power and clients with network connections of
limited bandwidth (dial-up users).
Another enhancement of the proposed solution is
the provision of a handover mechanism between relay
nodes. This mechanism will give a relay node the ability
to delegate the role of media relaying to other nodes
whenever its user wishes to leave, while keeping the
handover process transparent to the clients connected to
the leaving node.
Finally, an enhanced Relay Server version that can
be used as a relay point for large numbers of served
clients has also been designed. This version will be able
to incorporate different overlay network discovery and
maintenance mechanisms utilising different protocols,
such as [7] or [10], and could be deployed in specific
parts of the access network, forming distribution islands
that can be dynamically set up and adapt to changes in
network parameters.
8. Acknowledgements
The work presented in the paper has been performed
in the context of the EU IST OLYMPIC project.
9. References
[1] Ch. Z. Patrikakis, Y. Despotopoulos, A. M. Rompotis,
C. Boukouvalas, G. Pediaditis, A. L. Lambiris,
OLYMPIC: Using the Internet for real time coverage of
major athletic events, International Conference on
Cross Media Service Delivery, May 30-31, 2003,
Santorini, Greece
[2] S. Deering, D. Cheriton, Multicast Routing in Datagram
Internetworks and Extended LANs, ACM Transactions
on Computer Systems, May 1990
[3] Ayman El-Sayed, Vincent Roca, Laurent Mathy, A
Survey of Proposals for an Alternative Group
Communication Service, IEEE Network, Jan 2003
[4] C. Diot, B. Levine, B. Lyles, H. Kassem, D. Balensiefen,
Deployment Issues for the IP Multicast Service and
Architecture, IEEE Network, Jan. /Feb. 2000
[5] S. Bhattacharyya, C. Diot, L. Giuliano, R. Rockell, J.
Meylor, D. Meyer, G. Shepherd, and B. Haberman, An
Overview of Source-Specific Multicast (SSM)
Deployment, Internet Engineering Task Force, March
1999, work in progress, Internet Draft, ietf-ssm-
[6] Kumar S. et al. The MASC/BGMP Architecture for
Inter-Domain Multicast Routing, SIGCOMM ’98, 1998
[7] J. Liebeherr and M. Nahas, Application-layer Multicast
with Delaunay Triangulations, IEEE Globecom,
November 2001
[8] J. Liebeherr, T.K. Beam, Hypercast: A protocol for
Maintaining Multicast Group Members in a Logical
Hypercube Topology, Lecture Notes in Computer
Science Vol. 1736, 1999, pp. 72-89
[9] Dimitrios Pendarakis, Sherlia Shi, Dinesh Verma, Marcel
Waldvogel, ALMI: An Application Level Multicast
Infrastructure, in Proc. 3rd USENIX Symposium on
Internet Technologies & Systems, March 2001
[10] Yatin Chawathe, Scattercast: An Architecture for
Internet Broadcast Distribution as an Infrastructure
Service, PhD thesis, University of California, Berkeley,
Dec. 2000.
[11] J. Park, Seok Joo Koh, Shin Gak Kang, Dae Yang Kim,
Multicast Delivery based on Unicast and Subnet
Multicast, IEEE Communications Letters, Vol. 5, No. 4,
April 2001
[12] Y. Chu, S. Rao, H. Zhang, A case for end system
multicast, Proceedings of ACM Sigmetrics, June 2000
[13] Ch. Z. Patrikakis, Y. Despotopoulos, A. M. Rompotis, A.
L. Lambiris, PERIPHLEX: Multicast delivery using
core unicast distribution with peripheral multicast
reflectors, Poster Session, Twelfth International World
Wide Web Conference (WWW2003) 20-24 May 2003,
Budapest, Hungary
[14] QuickTime Streaming Server Modules, Apple Computer,
Inc., 1999-2002
[15] About Darwin Streaming Server, Apple Computer, Inc.,
[19] Mathew Liste, Content Delivery Networks (CDNs) A
Reference Guide, Cisco 2000
Recently, application-layer multicast has emerged as an attempt to support group applications without the need for a network-layer multicast protocol, such as IP multicast. In application-layer multicast, applications arrange themselves as a logical overlay network and transfer data within the overlay network. In this paper, Delaunay triangulations are investigated as an overlay network topology for application-layer multicast. An advantage of Delaunay triangulations is that each application can locally derive next-hop routing information without the need for a routing protocol in the overlay. A disadvantage of a Delaunay triangulation as an overlay topology is that the mapping of the overlay to the network-layer infrastructure may be suboptimal. It is shown that this disadvantage can be partially addressed with a hierarchical organization of Delaunay triangulations. Using network topology generators, the Delaunay triangulation is compared to other proposed overlay topologies for application-layer multicast