Conference Paper

On the Design of Overlay Networks for IP Links Fault Verification

DOI: 10.1109/GLOCOM.2008.ECP.468
Conference: IEEE Global Telecommunications Conference (GLOBECOM 2008)
Source: IEEE Xplore

ABSTRACT Accurate fault detection and localization is essential to the efficient and economical operation of ISP networks, and it directly affects the performance of Internet applications such as VoIP and online gaming. Fault detection algorithms typically rely on spatial correlation to produce a set of fault hypotheses, whose size grows with the number of lost and spurious symptoms and with the overlap among network paths. The network administrator is then left with the tedious and time-consuming task of accurately locating and verifying these fault scenarios. In this paper, we formulate the problem of designing infrastructure overlay networks for verifying the location of IP link faults, taking into account the cost of the debugging paths and the stress on the underlying IP links. We map the problem to an integer generalized flow problem and prove its NP-hardness. We then relax the link-stress constraint and formulate the resulting problem as a minimum cost circulation, which can be solved in polynomial time. We evaluate the fault verification and IP link coverage capabilities of various overlay network sizes and topologies using real-life Internet topologies. Finally, we identify some interesting open research problems in this context.
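The relaxed formulation above reduces to a minimum cost circulation, which is solvable in polynomial time. As a rough illustration of the kind of solver involved — not the paper's actual formulation — the following is a minimal successive-shortest-path min-cost flow in pure Python; the node numbers, capacities, and costs are made up for the example.

```python
def min_cost_flow(n, edges, s, t, need):
    """Send `need` units from s to t at minimum total cost.
    edges: list of (u, v, capacity, cost) arcs on nodes 0..n-1."""
    graph = [[] for _ in range(n)]
    for u, v, cap, cost in edges:
        graph[u].append([v, cap, cost, len(graph[v])])     # forward arc
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])  # residual arc
    total_cost, flow = 0, 0
    while flow < need:
        # Bellman-Ford over the residual graph (residual costs can be negative).
        INF = float("inf")
        dist = [INF] * n
        dist[s] = 0
        prev = [None] * n  # (node, edge index) on the cheapest path
        for _ in range(n):
            updated = False
            for u in range(n):
                if dist[u] == INF:
                    continue
                for i, (v, cap, cost, _rev) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        prev[v] = (u, i)
                        updated = True
            if not updated:
                break
        if dist[t] == INF:
            raise ValueError("demand cannot be satisfied")
        # Find the bottleneck along the cheapest augmenting path, then push.
        push, v = need - flow, t
        while v != s:
            u, i = prev[v]
            push = min(push, graph[u][i][1])
            v = u
        v = t
        while v != s:
            u, i = prev[v]
            graph[u][i][1] -= push
            graph[v][graph[u][i][3]][1] += push
            v = u
        flow += push
        total_cost += push * dist[t]
    return total_cost

# Hypothetical instance: route two probe units from vantage node 0 to
# target node 3; capacities model a cap on per-link stress.
arcs = [(0, 1, 1, 1), (0, 2, 2, 2), (1, 3, 1, 1), (2, 3, 2, 1)]
print(min_cost_flow(4, arcs, 0, 3, 2))  # 5 (one unit via 0-1-3, one via 0-2-3)
```

Production solvers use the same augmenting-path idea with better shortest-path machinery (potentials plus Dijkstra), but the sketch shows why the relaxed problem is polynomial where the integer generalized flow version is not.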

  •
    ABSTRACT: Among all topics covered in operations research, network flow theory offers the best context in which to illustrate the basic concepts of optimization. This book provides an integrative view of the theory, algorithms, and applications of network flows. To make the presentation more intuitive and accessible to a wider audience, the authors adopt a network (graphical) viewpoint rather than relying on a linear programming approach. The material can supplement either an upper-level undergraduate or a graduate course. The main features of the book are well outlined in the authors' preface:
      • In-depth and self-contained treatment of shortest path, maximum flow, and minimum cost flow problems, including descriptions of new and novel polynomial-time algorithms for these core models.
      • Emphasis on powerful algorithmic strategies and analysis tools, such as data scaling, geometric improvement arguments, and potential function arguments.
      • An easy-to-understand description of several important data structures, including d-heaps, Fibonacci heaps, and dynamic trees.
      • Treatment of other important topics in network optimization and of practical solution techniques such as Lagrangian relaxation.
      • Each new topic introduced by a set of applications, with an entire chapter devoted to applications.
      • A special chapter devoted to conducting empirical testing of algorithms.
      • Over 150 applications of network flows to a variety of engineering, management, and scientific domains.
      • Over 800 exercises that vary in difficulty, including many that develop extensions of material covered in the text.
      • Approximately 400 figures that illustrate the material presented in the text.
      • Extensive reference notes that provide readers with historical contexts and with guides to the literature.
    In addition to the in-depth analysis of shortest path, maximum flow, and minimum cost flow problems, the authors devote several chapters to more advanced topics such as assignments and matchings, minimum spanning trees, convex cost flows, generalized flows, and multicommodity flows. Emphasis is also placed on the design, analysis, and computational testing of algorithms. Finally, pseudocode for several algorithms is provided for readers with a basic knowledge of computer science.
    Prentice Hall, 01/1993. ISBN 978-0-13-617549-0
  •
    ABSTRACT: In this paper we consider the problem of inferring link-level loss rates from end-to-end multicast measurements taken from a collection of trees. We give conditions under which loss rates are identifiable on a specified set of links. Two algorithms are presented to perform the link-level inferences for those links on which losses can be identified. The first, the minimum variance weighted average (MVWA) algorithm, treats the trees separately and then averages the results. The second, based on expectation-maximization (EM), merges all of the measurements into one computation. Simulations show that EM is slightly more accurate than MVWA, most likely due to its more efficient use of the measurements. We also describe extensions to the inference of link-level delay, inference from end-to-end unicast measurements, and inference when some measurements are missing.
    ACM SIGMETRICS Performance Evaluation Review, 05/2002.
  •
    ABSTRACT: Fault localization, a central aspect of network fault management, is the process of deducing the exact source of a failure from a set of observed failure indications. It has been a focus of research activity since the advent of modern communication systems and has produced numerous fault localization techniques. However, as communication systems have evolved, becoming more complex and offering new capabilities, the requirements imposed on fault localization techniques have changed as well. It is fair to say that, despite this research effort, fault localization in complex communication systems remains an open research problem. This paper discusses the challenges of fault localization in complex communication systems and presents an overview of solutions proposed over the last ten years, discussing their advantages and shortcomings. The survey is followed by a presentation of potential directions for future research in this area.
    Science of Computer Programming, 01/2004.
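The shortest-path material highlighted in the first entry above (label-setting algorithms, d-heaps) can be illustrated with a minimal heap-based Dijkstra. Python's `heapq` stands in for the d-heaps discussed in the book, and the graph is made up for the example.

```python
import heapq

def dijkstra(adj, src):
    """adj: {node: [(neighbor, nonnegative_weight), ...]}.
    Returns shortest distances from src to every reachable node."""
    dist = {src: 0}
    pq = [(0, src)]  # (tentative distance, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry; a shorter label was already set
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

adj = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
print(dijkstra(adj, "a"))  # {'a': 0, 'b': 2, 'c': 3}
```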
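The MVWA algorithm described in the multicast inference abstract above combines per-tree estimates by inverse-variance weighting. A minimal sketch, assuming each tree yields a loss-rate estimate for the same link together with a known variance; the numbers are illustrative, and the full estimator in the paper involves more machinery.

```python
def mvwa(estimates, variances):
    """Minimum variance weighted average: combine per-tree loss-rate
    estimates, weighting each by the inverse of its variance so that
    lower-variance (more reliable) measurements count more."""
    weights = [1.0 / v for v in variances]
    return sum(w * x for w, x in zip(weights, estimates)) / sum(weights)

# Three trees observe the same link with differing confidence; the noisy
# third estimate is largely discounted by its large variance.
est = mvwa([0.10, 0.12, 0.30], [0.01, 0.02, 0.50])
print(round(est, 3))  # ≈ 0.109
```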
