Conference Paper

Quality of service management in GMPLS-based grid OBS networks.

DOI: 10.1145/1529282.1529295 Conference: Proceedings of the 2009 ACM Symposium on Applied Computing (SAC), Honolulu, Hawaii, USA, March 9-12, 2009
Source: DBLP


This paper proposes an architecture for establishing routes with absolute QoS constraints in optical burst-switched (OBS) grid networks. The model uses GMPLS traffic engineering to build LSPs that match the performance required by a grid user/application request. Results show that the proposal can enforce QoS by reducing the loss experienced by burst classes while allowing better utilization of the computing resources.
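To make the admission idea concrete, the sketch below is a minimal Python illustration, not the paper's algorithm: the LSP names, per-link loss estimates, and the per-hop independence assumption are all hypothetical. It simply admits a request on the first candidate LSP whose estimated end-to-end burst-loss probability stays within the class's absolute bound; in the GMPLS setting described above, the selected route would then be signaled as an LSP for that burst class.

```python
# A minimal sketch of absolute-QoS route admission, not the paper's algorithm.
# LSP names, per-link loss figures, and the independence assumption are hypothetical.

def end_to_end_loss(link_losses):
    """Estimate end-to-end burst-loss probability, assuming independent per-hop losses."""
    survive = 1.0
    for p in link_losses:
        survive *= (1.0 - p)
    return 1.0 - survive

def select_lsp(candidate_routes, loss_bound):
    """Return the first candidate LSP whose estimated loss meets the absolute bound.

    candidate_routes: list of (lsp_id, [per-link burst-loss estimates]) pairs.
    loss_bound: maximum tolerable end-to-end loss for the requesting burst class.
    """
    for lsp_id, link_losses in candidate_routes:
        if end_to_end_loss(link_losses) <= loss_bound:
            return lsp_id
    return None  # no LSP satisfies the request; it must be rejected or renegotiated

if __name__ == "__main__":
    candidates = [("lsp-a", [0.02, 0.03, 0.01]),   # hypothetical loss estimates
                  ("lsp-b", [0.005, 0.004])]
    print(select_lsp(candidates, loss_bound=0.01))  # -> lsp-b
```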

  • ABSTRACT: Grids offer a uniform interface to a distributed collection of heterogeneous computational, storage and network resources. Most current operational grids are dedicated to a limited set of computationally and/or data-intensive scientific problems. The de facto modus operandi is one where users submit job requests to a grid portal, acting as an interface to the grid's management system, which in turn negotiates with 'dumb' resources (computing and storage elements, network links) and arranges for the jobs to be executed. In this paper, we present a new grid architecture featuring generic application support, direct user access (grid-to-the-home) and decentralized scheduling intelligence in the network. We show how optical burst switching (OBS) enables these features while offering the necessary network flexibility demanded by future grid applications.
    Global Telecommunications Conference Workshops, 2004. GlobeCom Workshops 2004. IEEE; 01/2004
  • ABSTRACT: Optical burst switching (OBS) has been proposed as the next-generation optical network for grid computing. In this paper, we envision a heterogeneous grid served by an optical burst switching framework, where grid traffic co-exists with IP and/or 10 GE-based traffic to achieve economy of scale. This paper addresses the latency that grid jobs experience in OBS networks. The injection of jumbo-size grid jobs can potentially affect the latency experienced by IP/10GE traffic. Simulation results have shown that in grids served by an optical burst switch, grid jobs consistently have lower latency than co-existing IP/10GE traffic, with a slightly elevated latency of IP/10GE traffic when the size of grid jobs increases. We conclude that, given that OBS can efficiently handle the enormous amount of bandwidth made available by DWDM technology, grid over optical burst switching is a cost-effective way to provide grid services, even for latency-sensitive grid computing applications.
    High Performance Computing and Communications, Third International Conference, HPCC 2007, Houston, USA, September 26-28, 2007, Proceedings; 01/2007
  • ABSTRACT: One of the key components in the design of optical burst-switched nodes is the development of channel scheduling algorithms that can efficiently handle data burst contentions. Traditional scheduling techniques use approaches such as wavelength conversion and buffering to resolve burst contention. In this paper, we propose nonpreemptive scheduling algorithms that use burst segmentation to resolve burst contentions. We propose two segmentation-based scheduling algorithms, namely, nonpreemptive minimum overlapping channel (NP-MOC) and NP-MOC with void filling (NP-MOC-VF), which can significantly reduce the loss experienced in an optical burst-switched network. We further reduce packet loss by combining burst segmentation and fiber delay lines (FDLs) to resolve contentions during channel scheduling. We propose two types of scheduling algorithms that are classified based on the placement of the FDL buffers in the optical burst-switched node. These algorithms are referred to as delay-first or segment-first algorithms. The scheduling algorithms with burst segmentation and FDLs are investigated through extensive simulations. The simulation results show that the proposed algorithms can effectively reduce the packet-loss probability compared to existing scheduling techniques. The delay-first algorithms are suitable for applications that have higher delay tolerance and strict loss constraints, while the segment-first algorithms are suitable for applications with higher loss tolerance and strict delay constraints.
    Journal of Lightwave Technology 11/2005; 23(10):3125-3137. DOI:10.1109/JLT.2005.856265 (a toy sketch of the segmentation idea follows this list)
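The last reference above centers on contention resolution by burst segmentation. The toy Python sketch below illustrates that general idea under assumed inputs (per-channel busy horizons, a head-dropping policy, and a minimum-overlap channel choice); it is not the NP-MOC or NP-MOC-VF implementation from that paper.

```python
# Illustrative sketch of segmentation-based contention resolution; assumed inputs,
# not the NP-MOC/NP-MOC-VF code from the cited paper.

def overlap(channel_busy_until, arrival):
    """Portion of an incoming burst that would collide on this channel."""
    return max(0.0, channel_busy_until - arrival)

def schedule_with_segmentation(channels, arrival, length):
    """Pick the channel with the least overlap and transmit only the segment that fits.

    channels: mutable list of times until which each data channel is busy.
    Returns (channel index, transmitted length), or (None, 0.0) if the burst is lost.
    """
    best = min(range(len(channels)), key=lambda c: overlap(channels[c], arrival))
    lost = overlap(channels[best], arrival)
    if lost >= length:
        return None, 0.0                      # burst entirely dropped
    start = arrival + lost                    # overlapping head segment is dropped
    channels[best] = start + (length - lost)  # channel busy until the tail segment ends
    return best, length - lost

if __name__ == "__main__":
    busy_until = [5.0, 2.0, 9.0]              # hypothetical per-channel horizons
    print(schedule_with_segmentation(busy_until, arrival=1.0, length=4.0))
    # -> (1, 3.0): channel 1 has the smallest overlap; 1 time unit of the burst is segmented off
```

Choosing the least-overlapping channel keeps the dropped segment as small as possible, which matches the intuition behind the loss reduction reported in that reference.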