Nicolas Liebau

Technical University Darmstadt, Darmstadt, Hesse, Germany

Publications (38) · 5.96 Total impact

  • Source
    ABSTRACT: The peer-to-peer (P2P) paradigm has gained considerable traction in recent years, driven by the continuous growth of device capabilities such as CPU power, storage space and bandwidth. At the same time, the demand for services and resources keeps increasing, even though the peers themselves hold a variety of unused resources. In this paper we present the idea of a P2P system that acts as a service provider, using the resources of participating peers and giving guarantees on the quality of the service it provides. To fulfill these service level agreements, the peers confederate into a distributed supervisor of peer resources (DISPRO), which monitors the network, predicts trends in resource availability and decides on resource allocation strategies. This paper discusses the challenges behind DISPRO and sketches a solution.
    Full-text · Article · May 2013
  • Nicolas C. Liebau · Andreas U. Mauthe · Ralf Steinmetz
    ABSTRACT: Trustworthy applications in fully decentralized systems require a trust anchor; in P2P systems, however, no central trust anchor exists. This paper describes how a distributed trust anchor can be implemented efficiently in such an environment. Several alternatives have been proposed in the literature, most of them distributing the trust anchor through a quorum decision and assuming that a random quorum ensures trustworthiness. This raises major questions: can it be assumed that a quorum was actually chosen at random? How can it be verified that a signature was created by a truly random quorum? This paper presents a solution based on the token-based accounting scheme, using specific mechanisms that ensure a distributed trust anchor in P2P systems, together with its evaluation.
    No preview · Article · May 2011 · PIK - Praxis der Informationsverarbeitung und Kommunikation
  • Source
    ABSTRACT: Trustworthy applications in fully decentralized systems require a trust anchor. This paper describes how such an anchor can be implemented efficiently in P2P systems. The basic concept is to use threshold cryptography so that messages are signed by a quorum of peers. The focus is on advanced mechanisms that secure the shares of the secret key over time using proactive secret sharing. This mechanism was researched in the context of the token-based accounting scheme.
    Preview · Conference Paper · Jan 2011
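The quorum idea behind this paper can be illustrated with threshold secret sharing: a signing key is split so that any t of n peers can reconstruct it, but fewer cannot. The following toy sketch uses Shamir secret sharing over a prime field; it is an illustration of the underlying cryptographic primitive, not the paper's actual implementation, and the field size and parameters are arbitrary choices for the example.

```python
# Toy sketch of (t, n) threshold secret sharing (Shamir's scheme):
# any t shares reconstruct the secret, fewer reveal nothing.
import random

PRIME = 2**127 - 1  # field modulus (a Mersenne prime, toy-sized)

def make_shares(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def poly(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = 123456789
shares = make_shares(key, t=3, n=5)
assert reconstruct(shares[:3]) == key               # any 3 shares suffice
assert reconstruct(random.sample(shares, 3)) == key
```

In a threshold signature scheme, as used by the paper, the key is never reassembled at one peer; each quorum member signs with its share and the partial signatures are combined, which is what makes the distributed trust anchor possible.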
  • Source
    ABSTRACT: Reliable communication systems are one of the key factors for a successful first response mission. Current crisis response communication systems suffer from damaged or destroyed infrastructure, or are simply overstressed in the case of a large-scale disaster. We outline a distributed communication approach that fulfills the requirements of first responders. It is based on a layered network topology and on current technology from research projects and established products. In addition, we propose a testing framework for evaluating a crisis response communication system.
    Full-text · Article · Apr 2009
  • Source
    ABSTRACT: This demo paper presents a prototype implementation of a decentralized, distributed approach to spatial queries. The main focus is location-based search for all objects or information in a particular geographical area.
    Full-text · Conference Paper · Jan 2009
  •
    ABSTRACT: Multimedia creation and consumption is highly resource-intensive and makes up the majority of Internet traffic nowadays. End-users share their digital content with each other and build communities based on interests, which often differ drastically by location. Distributing this media via a central server can be quite expensive for a content provider, whereas distributed (peer-to-peer-like) systems share the costs evenly among participants; distributed multimedia systems will therefore become more important in the future. The global distribution of end-users, however, complicates high-quality delivery of multimedia content. In this paper we argue that geographical location-awareness greatly helps distributed multimedia communication: it increases the quality of content delivery and at the same time satisfies the growing need for more personalized, location-based services. As a proof of concept, we introduce a location-aware overlay structure for distributed multimedia (and similar) systems that uses the locations of its nodes to optimize node-to-node communication for performance and delay, while also enabling location-based services.
    No preview · Article · Feb 2008 · Proceedings of the IEEE
  •
    ABSTRACT: Information Lifecycle Management (ILM) stores files according to their value, so assigning values to files is one of ILM's main tasks. In this paper we examine how the value of a file can be determined. While the known methods compute the value as a decimal number, we present a method that determines the value of a file as the "probability of future access". We demonstrate the applicability of this method by simulation, comparing the new method against an optimal method that serves as a benchmark but works only under laboratory conditions.
    No preview · Conference Paper · Jan 2008
  •
    ABSTRACT: Information Lifecycle Management (ILM) is a strategic concept for storing information and documents. ILM is based on the idea that in an enterprise, information has different values; information of different value is stored on different storage hierarchies. ILM offers significant potential cost savings through tiered storage, and 90% of decision makers consider implementing it (Linden 2006). Nonetheless, there are too few experience reports, and experimenting and researching in real systems is too expensive. This paper addresses this issue and contributes to supporting and assisting IT managers in their decision-making process. ILM automation needs migration rules. Alongside the well-known static, heuristic migration rules, we present a new dynamic migration rule for ILM. These migration rules are implemented in an ILM simulator, and we compare the performance of the new dynamic rule with the heuristics. The simulative approach has two advantages: it offers predictions about the dynamic behaviour of an ILM migration rule, and it dispenses with real storage hardware. Simulation leads to decisions under certainty, where the major problem is determining the trade-off among different objectives; cost-benefit analysis can be used for this purpose. A decision matrix is laid out in which rows represent choices and columns represent states of nature. The simulated results support the choice of migration rules, help to avoid mismanagement and poor investments in advance, and raise awareness of the best alternative.
    No preview · Conference Paper · Jan 2008
  • Source
    ABSTRACT: Peer-to-peer and mobile networks have gained significant attention from both the research community and industry. Applying the peer-to-peer paradigm in mobile networks, however, leads to several problems arising from the bandwidth demand of peer-to-peer networks: time-critical messages are delayed and delivered unacceptably slowly, while scarce bandwidth is wasted on messages of lower priority. The focus of this paper is therefore on bandwidth management at the overlay layer and how these issues can be solved. We present HiPNOS.KOM, a priority-based scheduling and active queue management system that guarantees better QoS for higher-prioritized messages in the upper network layers of peer-to-peer systems. Evaluation with the peer-to-peer simulator PeerfactSim.KOM shows that HiPNOS.KOM brings significant improvement in Kademlia compared to FIFO and Drop-Tail, the strategies used on each peer today: user-initiated lookups in Kademlia complete with a 24% shorter operation duration when using HiPNOS.KOM.
    Full-text · Conference Paper · Nov 2007
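The scheduling idea described above can be sketched as a bounded queue that serves high-priority messages first and, when full, drops the lowest-priority message rather than the newest one (as Drop-Tail would). This is a minimal illustration of priority scheduling with active queue management; the class and message names are invented for the example and are not HiPNOS.KOM's API.

```python
# Bounded priority queue for overlay messages: highest priority dequeued
# first; on overflow the lowest-priority entry is dropped, not the newest.
import heapq
import itertools

class PriorityMessageQueue:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.heap = []  # (-priority, seq, msg): higher priority served first
        self.seq = itertools.count()  # tie-break keeps FIFO order per priority

    def enqueue(self, msg: str, priority: int) -> None:
        heapq.heappush(self.heap, (-priority, next(self.seq), msg))
        if len(self.heap) > self.capacity:
            # Drop the entry with the lowest priority (largest -priority).
            self.heap.remove(max(self.heap))
            heapq.heapify(self.heap)

    def dequeue(self) -> str:
        return heapq.heappop(self.heap)[2]

q = PriorityMessageQueue(capacity=3)
q.enqueue("maintenance ping", priority=1)
q.enqueue("user lookup", priority=5)
q.enqueue("replication", priority=2)
q.enqueue("user store", priority=4)   # queue full: the ping is dropped
assert q.dequeue() == "user lookup"   # highest priority served first
assert q.dequeue() == "user store"
assert q.dequeue() == "replication"
```

The key contrast with FIFO/Drop-Tail is visible in the overflow step: under load, low-value maintenance traffic is sacrificed so that time-critical user operations keep their place in the queue.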
  • Source
    Aleksandra Kovacevic · Nicolas Liebau · Ralf Steinmetz
    ABSTRACT: Location-based services are becoming increasingly popular as devices that determine geographical position become more available to end users. The main problem with existing solutions for location-based search is that keeping information up to date requires centralized maintenance at specific times; retrieved results therefore do not include all objects that exist in reality. A peer-to-peer (P2P) approach can easily overcome this issue, as the peers themselves are responsible for the information users are searching for. Unfortunately, current state-of-the-art overlays cannot fulfill the requirements for efficient and fully retrievable location-based search. In this paper we present Globase.KOM, a hierarchical tree-based P2P overlay that enables fully retrievable location-based overlay operations and proved to be highly efficient and logarithmically scalable.
    Full-text · Conference Paper · Oct 2007
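The tree idea behind fully retrievable area search can be sketched with a quadtree: space is split recursively into regions, and an area query descends only into regions that intersect the query box, so every matching object is found without scanning the whole index. This is a centralized toy of the spatial-partitioning principle, not Globase.KOM's distributed overlay; all names are illustrative.

```python
# Quadtree sketch: recursive spatial partitioning with rectangular
# range queries that visit only intersecting regions.
class QuadTree:
    def __init__(self, x0, y0, x1, y1, cap=2):
        self.box = (x0, y0, x1, y1)
        self.cap, self.points, self.children = cap, [], None

    def _intersects(self, box):
        ax0, ay0, ax1, ay1 = self.box
        bx0, by0, bx1, by1 = box
        return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

    def insert(self, x, y, obj):
        if self.children is not None:
            for c in self.children:
                cx0, cy0, cx1, cy1 = c.box
                if cx0 <= x <= cx1 and cy0 <= y <= cy1:
                    return c.insert(x, y, obj)
        self.points.append((x, y, obj))
        if len(self.points) > self.cap and self.children is None:
            # Region over capacity: split into four quadrants and push down.
            x0, y0, x1, y1 = self.box
            mx, my = (x0 + x1) / 2, (y0 + y1) / 2
            self.children = [QuadTree(x0, y0, mx, my), QuadTree(mx, y0, x1, my),
                             QuadTree(x0, my, mx, y1), QuadTree(mx, my, x1, y1)]
            pts, self.points = self.points, []
            for px, py, pobj in pts:
                self.insert(px, py, pobj)

    def query(self, box):
        """All objects inside `box`; prunes non-intersecting subtrees."""
        if not self._intersects(box):
            return []
        x0, y0, x1, y1 = box
        hits = [o for (px, py, o) in self.points
                if x0 <= px <= x1 and y0 <= py <= y1]
        for c in self.children or []:
            hits += c.query(box)
        return hits

qt = QuadTree(0, 0, 100, 100)
for i, (x, y) in enumerate([(10, 10), (12, 14), (80, 80), (15, 11), (90, 5)]):
    qt.insert(x, y, f"obj{i}")
assert sorted(qt.query((5, 5, 20, 20))) == ["obj0", "obj1", "obj3"]
```

In a P2P setting like the one the paper describes, each tree region would be the responsibility of a peer, and the query descends through the overlay instead of through in-memory nodes.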
  • Source

    Full-text · Article · Sep 2007 · it - Information Technology
  • Ralf Steinmetz · Nicolas Liebau · Klaus Wehrle

    No preview · Article · Sep 2007 · it - Information Technology

  • No preview · Article · Sep 2007 · it - Information Technology
  • Lars Arne Turczyk · Nicolas Liebau
    ABSTRACT: Information Lifecycle Management (ILM) is a strategic concept for storing information and documents. ILM is based on the idea that in an enterprise, different information has different values. Valuable information is stored on systems with a high quality of service (QoS); since this value changes over time, information must be migrated to cheaper storage systems with a lower QoS. Automated migration makes ILM dynamic. Such automation requires storage systems to understand which files are important at which time, so that the right policies can be applied. Here ILM currently lacks methods and tools. In this paper we describe the modeling of a simulator for ILM: the objectives are stated verbally, the related assumptions are specified, and the model is implemented in a simulator that allows dynamic ILM considerations. 90% of decision makers consider implementing ILM (Linden 2006), but there are too few experience reports, and experimenting and researching in real systems is too expensive. This paper addresses this issue and contributes to supporting and assisting IT managers in their decision-making process. The simulative approach has two advantages: first, it offers predictions about the long-term dynamic behaviour of an ILM scenario; second, it dispenses with real storage hardware, which allows architectural design alternatives to be compared. The first simulation results focus on a reasonable number of hierarchies in ILM scenarios. They raise awareness of the required number of hierarchies and of the choice of storage technologies, and help to avoid mismanagement and poor investments in advance. This shows that the simulator is a useful tool for the design of ILM solutions; together with other tools such as TCO calculators, it supports the decision process of IT managers.
    No preview · Conference Paper · Jan 2007
  •
    ABSTRACT: ILM is based on the idea that in an enterprise, different information has different values. Valuable information is stored on systems with a high quality of service (QoS); since this value changes over time, information must be migrated to cheaper storage systems with a lower QoS. Automated migration makes ILM dynamic. Such automation requires storage systems to understand which files are important at which time, so that the right policies can be applied. Here ILM currently lacks information valuation methods. This paper looks at how the value of a file can be measured. Unlike traditional methods, which use metadata and lead to a classical decimal value, we show how the value can be derived with a probabilistic method: the value of a file is calculated from usage information and expressed as a "probability of further use". This new method allows valuation based on the future importance of a file. The feasibility of the method is verified by generating file migration rules for ILM.
    No preview · Conference Paper · Jan 2007
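One simple way to turn usage information into a "probability of further use" is to weight past accesses by recency and fold the result into a probability. The decay model and half-life below are illustrative assumptions of this sketch, not the valuation method from the paper.

```python
# Toy file valuation: recent accesses raise the estimate of future use,
# and the value decays as the file goes unused.
import math

def access_probability(access_times, now, half_life=30.0):
    """Estimate P(file is accessed again soon) from its access history.

    Each past access contributes a weight that halves every `half_life`
    days; the weights are folded into a probability via 1 - exp(-rate).
    """
    rate = sum(0.5 ** ((now - t) / half_life) for t in access_times)
    return 1.0 - math.exp(-rate)

# A file read often and recently scores near 1; a stale file near 0.
hot  = access_probability([95, 97, 99], now=100)
cold = access_probability([1, 3], now=100)
assert hot > 0.9
assert cold < 0.2
```

A migration rule of the kind the paper generates could then move a file to a cheaper tier once its estimated probability falls below a threshold.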
  •
    ABSTRACT: Information Lifecycle Management (ILM) is a strategic concept for storing information and documents. ILM is based on the idea that in an enterprise, information has different values; information of different value is stored on different storage hierarchies. ILM offers significant potential cost savings through tiered storage, and 90% of decision makers consider implementing it (Linden 2006). Nonetheless, there are too few experience reports, and experimenting and researching in real systems is too expensive. In addition, 66% of IT managers do not have time to put together a basic cost model or a data value model for ILM projects (Foskett 2006). This paper addresses these issues and contributes to supporting and assisting IT managers in their decision-making process. We present a cost model for ILM in an enterprise, which is used for ILM simulations. The simulative approach has two advantages: it offers predictions about the dynamic behaviour of an ILM scenario, and it dispenses with real storage hardware. The simulation results lead to design guidelines for ILM environments that help to avoid mismanagement and poor investments in advance, and raise awareness of the required number of hierarchies and the choice of storage technologies.
    No preview · Conference Paper · Jan 2007
  • Source

    Full-text · Conference Paper · Jan 2007
  •
    ABSTRACT: Structured overlay networks for peer-to-peer systems (e.g. those based on Distributed Hash Tables) use proactive mechanisms to provide efficient indexing of advertised resources. Most proposed systems (e.g. Chord, Pastry) provide upper bounds (logarithmic in the size of the graph representing the network) on the communication cost in worst-case scenarios, and their performance is superior to unstructured alternatives. However, in (empirically observed) scenarios where the popularity of the advertised resources deviates considerably from a uniform distribution, structured P2P networks may perform worse than well-designed unstructured P2P networks that effectively exploit the resource popularity distribution. To address this issue, this paper suggests a very simple caching mechanism that preserves the theoretical superiority of structured overlay networks regardless of the popularity of the advertised resources. The churn effect observed in peer-to-peer systems is also considered. The proposed mechanism is evaluated using simulation experiments.
    No preview · Chapter · Dec 2006
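The caching idea can be sketched as a small per-peer cache of recently resolved keys: lookups for popular (Zipf-distributed) resources are then answered locally instead of being routed through the DHT, while a TTL bounds staleness under churn. Class names and parameters below are illustrative, not taken from the chapter.

```python
# Per-peer lookup cache: LRU eviction bounds memory, TTL bounds
# staleness when peers join and leave (churn).
from collections import OrderedDict

class LookupCache:
    def __init__(self, capacity=128, ttl=60.0):
        self.capacity, self.ttl = capacity, ttl
        self.entries = OrderedDict()  # key -> (value, expiry time)

    def get(self, key, now):
        if key in self.entries:
            value, expiry = self.entries[key]
            if now < expiry:
                self.entries.move_to_end(key)  # LRU: mark as recently used
                return value
            del self.entries[key]  # expired under churn: re-route via DHT
        return None

    def put(self, key, value, now):
        self.entries[key] = (value, now + self.ttl)
        self.entries.move_to_end(key)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used

cache = LookupCache(capacity=2, ttl=60.0)
cache.put("songA", "peer:9001", now=0.0)
cache.put("songB", "peer:9002", now=1.0)
assert cache.get("songA", now=2.0) == "peer:9001"   # hit, no DHT routing
cache.put("songC", "peer:9003", now=3.0)            # evicts LRU entry songB
assert cache.get("songB", now=4.0) is None
assert cache.get("songA", now=100.0) is None        # TTL expired (churn)
```

Because popularity is heavily skewed, even a tiny cache absorbs a large fraction of lookups, which is what lets the structured overlay keep its worst-case guarantees while matching unstructured networks on popular keys.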
  •
    ABSTRACT: Today, peer-to-peer applications are predominant on the Internet in terms of traffic consumption. Apart from Skype, however, their commercial success is still very limited. This is due to the difficulties faced when trying to implement crucial functionality such as accounting and charging without violating the peer-to-peer paradigm. A fully decentralized accounting scheme based on tokens was presented by the authors last year. In this paper we analyse the interactions between token-based accounting and charging in order to enable peers to charge for their services. We present three charging schemes that use tokens as (1) pure receipts, (2) micropayments, and (3) bills of exchange, and evaluate them by the security they provide and the overhead traffic they introduce into a peer-to-peer system.
    No preview · Chapter · Jun 2006
  •
    ABSTRACT: The intrinsic properties of the graphs employed in designing peer-to-peer overlay networks are crucial for the performance of the deployed systems. Several structured topologies have been proposed based on meshes, enhanced rings, redundant tree structures, etc. Among them, de Bruijn graphs are promising alternatives, since they provide crucial asymptotically optimal characteristics. In this paper we discuss the algorithms and protocol messages needed to efficiently implement the routing procedure of Omicron, a hybrid overlay network based on de Bruijn graphs enriched with clustering and role specialization mechanisms. Enhancements of the original de Bruijn structure are proposed to cope with the intrinsic issue of unevenly distributed routing workload. The developed system is evaluated and compared with Chord, which serves as the reference point. The superiority of de Bruijn-based overlay networks with respect to scalability is demonstrated quantitatively using simulation experiments, and the ability of the two systems to exploit the underlying network is investigated.
    No preview · Conference Paper · May 2006