Article · PDF Available

Early Detection of Message Forwarding Faults

Abstract and Figures

In most communication networks, pairs of processors communicate by sending messages over a path connecting them. We present communication-efficient protocols that quickly detect and locate any failure along the path. Whenever there is excessive delay in forwarding messages along the path, the protocols detect a failure (even when the delay is caused by maliciously programmed processors). The protocols ensure optimal time for either message delivery or failure detection. We observe that the actual delivery time δ of a message over a link is usually much smaller than the a priori known upper bound D on that delivery time. The main contribution of the paper is the way to model and take advantage of this observation. We introduce the notion of asynchronously early detecting protocols, as well as protocols that are asynchronously early terminating, i.e., time optimal in both worst and typical cases. More precisely, we present a time complexity measure according to which one evaluates protocols in terms of both D and δ. We observe that asynchronously early termination is a form of competitiveness. The protocols presented here are asynchronously early terminating, since they are time optimal both in terms of D and of δ. Previous communication-efficient solutions were slow in the case where δ ≪ D; we observe that this is the most typical case. Preliminary reports of parts of the work reported here appeared in the proceedings of the ICCC '88.
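The gap between the a priori bound D and the typical delay δ can be made concrete with a small sketch. The parameters and the random delay model here are purely illustrative (the paper's protocols are distributed, not simulated like this); the point is only that a timeout tuned to D alone is far slower than actual delivery.

```python
import random

def typical_vs_worst_case(n=10, D=100.0, seed=1):
    """Compare typical delivery time over an n-link path with the
    naive fault-detection timeout derived from the a priori bound D.
    Link delays are randomly drawn stand-ins for the actual
    (unknown) delays delta << D."""
    rng = random.Random(seed)
    delays = [rng.uniform(0.5, 2.0) for _ in range(n)]
    delivery = sum(delays)   # when the message actually arrives
    naive_timeout = n * D    # earliest moment a D-only protocol dares declare a fault
    return delivery, naive_timeout
```

With these numbers, delivery completes in tens of time units while the D-based timeout fires only after a thousand; an early-terminating protocol aims to detect (or deliver) in time proportional to δ rather than D.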
... The generalized partial synchrony model, stipulating the existence of a fixed but unknown post-GST message delay bound δ, is due to [21]. The time complexity measure combining δ with an a priori known conservative message delay bound Δ ≥ δ was first introduced in [33,34]. This work, however, assumed a stronger variant of partial synchrony in which the bound δ is unknown but holds throughout the entire execution, rather than eventually [27]. ...
... as needed. Finally, plugging (35) into (34) and using the fact that ⌈lg 2δ⌉ ≤ lg 2δ + 1, we get ...
Article
Full-text available
Partially synchronous Byzantine consensus protocols typically structure their execution into a sequence of views, each with a designated leader process. The key to guaranteeing liveness in these protocols is to ensure that all correct processes eventually overlap in a view with a correct leader for long enough to reach a decision. We propose a simple view synchronizer abstraction that encapsulates the corresponding functionality for Byzantine consensus protocols, thus simplifying their design. We present a formal specification of a view synchronizer and its implementation under partial synchrony, which runs in bounded space despite tolerating message loss during asynchronous periods. We show that our synchronizer specification is strong enough to guarantee liveness for single-shot versions of several well-known Byzantine consensus protocols, including PBFT and HotStuff. We furthermore give precise latency bounds for these protocols when using our synchronizer. By factoring out the functionality of view synchronization we are able to specify and analyze the protocols in a uniform framework, which allows comparing them and highlights trade-offs.
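One simple way to get the overlap the abstract describes is to make each view's timer grow so that, after GST, some view eventually lasts long enough for a correct leader to drive a decision. The doubling rule below is a common textbook strategy, not the paper's synchronizer (which additionally runs in bounded space and tolerates message loss); it is only a sketch of the timing intuition.

```python
def view_entry_times(base_timeout=1.0, num_views=5):
    """Entry time of each view when view r lasts base_timeout * 2**r.
    Because durations grow without bound, some post-GST view is
    eventually long enough for a correct leader to finish a decision."""
    t, entries = 0.0, []
    for r in range(num_views):
        entries.append(t)
        t += base_timeout * (2 ** r)
    return entries
```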
... Due to the unique nature of DTNs, that is, intermittent connectivity, asymmetric data rates and long delays, DTNs face many challenges. Traditional routing and mitigation algorithms for VANETs [23][24][25][26], mobile ad-hoc networks (MANETs) [27][28][29][30], Underwater Wireless Sensor Networks (UWSNs) [31][32][33][34], Wireless Sensor Networks (WSNs) [35][36][37] and TCP/IP-based networks [38][39][40][41][42] are not directly applicable to DTNs. The following are a few of the serious challenges in DTNs. ...
Thesis
Full-text available
Delay/Disruption Tolerant Networks (DTNs) are infrastructure-less networks in which no end-to-end route exists. Partitions (disconnections) occur frequently in DTNs. DTNs were primarily developed for Interplanetary Networks (IPNs) but are also used for various other applications; DakNet, ZebraNet, KioskNet and WiderNet are a few specific examples. DTNs use the bundle protocol to route messages. The bundle protocol can also transfer messages reliably by using custody transfer. Due to their unique nature, intermittent connectivity and long delays, DTNs face many challenges, including routing, key management, privacy, fragmentation and misbehaving nodes. Misbehaving nodes are one of the key challenges in DTNs. Malicious and selfish nodes launch various attacks, including flood, packet-drop and fake-packet attacks, and overuse the scarce resources of DTNs, namely buffer and bandwidth. This thesis presents a comprehensive survey of security challenges in DTNs with a focus on misbehaving nodes. Misbehaving nodes are divided into two broad categories, malicious and selfish, and further into different classes according to their strategies. The thesis also categorizes detection techniques into preventive and detective schemes. The related work on misbehaving nodes and detection techniques is summarized and rigorously analysed against the proposed parameters. The thesis launches four different flood attacks and shows their impact on the packet delivery and packet loss ratios, and proposes an algorithm to mitigate flood attacks as well as three further algorithms to mitigate misbehaving nodes. Finally, the detection techniques are evaluated mathematically, and different performance metrics are presented.
... Many of these weaknesses arose from the absence of a formal specification, a weak threat model and an excessive requirement for per-router state (bounded only by the total size of the network). Herzberg and Kutten [20] present an abstract model for Byzantine detection of compromised routers based on timeouts and acknowledgments sent to the source from the destination and possibly from some of the intermediate routers. The requirement of information from intermediate routers offers a trade-off between fault-detection time and message communication overhead. ...
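The trade-off between detection time and overhead can be sketched with a hypothetical checkpointing rule: suppose every k-th router on an n-hop path acknowledges back to the source, and per-hop delay is bounded by D. The function and its formulas below are illustrative assumptions, not the model of [20], which is more general.

```python
import math

def ack_tradeoff(n, k, D):
    """Every k-th router on an n-hop path acknowledges to the source.
    Returns a rough worst-case wait before a missing checkpoint ack
    exposes a fault (~ round trip to the next checkpoint past it),
    and the number of acknowledging routers (message overhead)."""
    checkpoints = math.ceil(n / k)
    detection_wait = 2 * k * D
    return detection_wait, checkpoints
```

Dense checkpoints (k = 1) detect fastest but cost an ack per hop; a single end-to-end ack (k = n) is cheapest but slowest.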
... Specifically, we model the various data-forwarding exploits listed above and then propose a matching algorithm for the detection of each exploit. The existing literature on the subject of compromised routers can be summarized as follows: Perlman [9] described robust flooding algorithms for delivering the key state when network "holes" are created due to compromised routers, while detecting inconsistencies between route updates and testing authentication have been addressed in [1,2,3,6,7,8,9]. None of the existing literature has taken an approach similar to ours. ...
Thesis
Full-text available
Abstract: We consider various forms of malicious data-forwarding attacks on compromised routers. We evaluate the impact of such attacks on network performance and develop techniques for detecting their presence. Our research proceeds via the following constructive steps: we identify, categorize and model the various forms of malicious forwarding threats on routers; we identify the operational performance metrics for networks as they evolve in time, and set the boundaries associated with satisfactory operation; we study the effect of each router attack on these metrics; finally, we develop techniques for monitoring and detecting changes in the network operational performance metrics under the adopted adversary models.
Chapter
Full-text available
Technical Report Abstract: The objective of this research effort was to exploit the C2 Wind Tunnel as an open experimental platform and use it to conduct computational experiments investigating the resilience of C2 architectures to cyber and physical attacks. In addition, the concept of Integrated Command and Control was investigated, with a focus on collaborative mission analysis and Course of Action (COA) development. Three spirals of increasing complexity were conducted to investigate resilience in a contested cyber environment. In the third spiral, two levels were considered: the development of integrated COAs at the staff level, when multiple components are involved, and at the planning level, when multiple Operations Centers are involved. Multiple modeling approaches were used, including BPMN to model mission analysis and COA development, Colored Petri Nets to create executable models of these processes, Social Network Analysis to model the Operations Centers, and agent-based modeling to describe their dynamic interactions when collaborating.
Article
Routers are among the most attractive targets in any infrastructure, because almost all traffic flows through them and a compromised router enables drastic actions. The packet-drop attack is one such attack, in which a router discards the data packets flowing through it. It comes in two flavors that create critical situations: a black hole drops all traffic, whereas a gray hole drops selected data packets, which makes it more difficult to detect. In this paper, we propose an algorithm to detect these attacks using the flow-conservation property of any network device, based on packet counts taken on the interfaces of the router.
Article
Full-text available
End-to-end protocols in computer networks in which the topology changes with time are investigated. A protocol that delivers all packets ordered, without duplication, and which uses a window is presented. Using a precise model of the network, the correctness of the protocol is proven. The use of the window for flow control is also addressed.
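The receiver side of such a window protocol can be sketched in a few lines: buffer out-of-order packets, deliver the longest in-sequence prefix, and suppress duplicates. This is a generic sliding-window receiver for illustration, not the specific protocol of the abstract.

```python
class OrderedReceiver:
    """Delivers packets ordered and without duplication, buffering
    packets that arrive ahead of sequence."""
    def __init__(self):
        self.next_seq = 0   # lowest sequence number not yet delivered
        self.buffer = {}    # out-of-order packets awaiting delivery

    def receive(self, seq, data):
        if seq >= self.next_seq:
            self.buffer.setdefault(seq, data)  # ignore duplicates
        delivered = []
        while self.next_seq in self.buffer:    # release in-sequence prefix
            delivered.append(self.buffer.pop(self.next_seq))
            self.next_seq += 1
        return delivered
```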
Article
Full-text available
We consider the basic task of end-to-end communication in dynamic networks, that is, delivery in finite time of data items generated on-line by a sender, to a receiver, in order and without duplication or omission. A dynamic communication network is one in which links may repeatedly fail and recover. In such a network, though it is impossible to establish a communication path consisting of non-failed links, reliable communication is possible if there is no cut of permanently failed links between the sender and receiver. This paper presents the first polynomial-complexity end-to-end communication protocol in dynamic networks. In the worst case the protocol sends O(n²m) messages per data item delivered, where n and m are the number of processors and the number of links in the network, respectively. The centerpiece of our solution is the novel slide protocol, a simple and efficient method for delivering tokens across an unreliable network. Slide is the basis for several self-stabilizing protocols and load-balancing algorithms for dynamic networks that have subsequently appeared in the literature. We use our end-to-end protocol to derive a file-transfer protocol for sufficiently large files. The bit communication complexity of this protocol is O(nD) bits, where D is the size in bits of the file. This file-transfer protocol yields an O(n) amortized message complexity end-to-end protocol.
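The intuition behind slide is that tokens flow "downhill" along a gradient of buffer heights. The round below is a drastically simplified, centralized caricature of that rule (the real slide protocol uses bounded per-link buffers and careful height comparisons); it only shows tokens migrating from full stacks toward empty ones.

```python
def slide_round(buffers, links):
    """One round of a simplified slide-like rule: a token crosses a
    link when the sender's stack is higher than the receiver's by
    more than one (which guarantees the round sequence terminates).
    Returns True if any token moved."""
    moved = False
    for u, v in links:
        if len(buffers[u]) > len(buffers[v]) + 1:
            buffers[v].append(buffers[u].pop())
            moved = True
    return moved
```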
Article
Broadcast is the task of delivering copies of a packet to all nodes in a communication network. A broadcast is called reliable if all the packets are accepted by all the nodes in finite time and in the correct order. This paper presents a class of reliable broadcast protocols for unreliable networks. One of these protocols achieves reliable broadcast with minimum broadcast cost, assuming that the network allows reliable broadcast at all. In case that the network's topology is stable, the minimum broadcast delay is achieved. No existing broadcast protocol achieves this goal. As a by-product, we achieve a new reliable routing protocol.
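The building block underneath any such protocol is flooding with duplicate suppression: each node forwards a new packet to its neighbours once and ignores repeats. The sketch below is that generic primitive only; the reliability and ordering machinery of the paper's protocols sits on top of it.

```python
def flood(adj, source):
    """Flood a packet from `source` over the graph `adj`
    (node -> list of neighbours), suppressing duplicates.
    Returns the nodes in the order they first accept the packet."""
    seen = {source}
    frontier = [source]
    order = [source]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in seen:   # accept and forward only once
                    seen.add(v)
                    order.append(v)
                    nxt.append(v)
        frontier = nxt
    return order
```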
Article
The paper introduces a reliable distributed procedure for establishing and cancelling routes in a circuit-switched data network that uses local path identifiers (LPIDs). The procedure ensures that the route is set up properly unless a failure is encountered, data messages are delivered to their destinations unless they encounter a cancellation process and the route is cancelled, and all LPID entries are released after a failure or session completion. Copyright © 1986 by The Institute of Electrical and Electronics Engineers, Inc.
Article
The new ARPANET routing algorithm is an improvement over the old procedure in that it uses fewer network resources, operates on more realistic estimates of network conditions, reacts faster to important network changes, and does not suffer from long-term loops or oscillations. In the new procedure, each node in the network maintains a database describing the complete network topology and the delays on all lines, and uses the database describing the network to generate a tree representing the minimum delay paths from a given root node to every other network node. Because the traffic in the network can be quite variable, each node periodically measures the delays along its outgoing lines and forwards this information to all other nodes. The delay information propagates quickly through the network so that all nodes can update their databases and continue to route traffic in a consistent and efficient manner. An extensive series of tests were conducted on the ARPANET, showing that line overhead and CPU overhead are both less than two percent, most nodes learn of an update within 100 ms, and the algorithm detects congestion and routes packets around congested areas.
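The per-node computation described above, building a minimum-delay tree from the topology-and-delay database, is a shortest-path-first calculation. The sketch below shows it with Dijkstra's algorithm over a toy delay map; the data format is an assumption for illustration.

```python
import heapq

def min_delay_tree(delays, root):
    """Minimum-delay tree rooted at `root`, in the spirit of the SPF
    computation each node runs over its delay database.
    `delays` maps a node to a list of (neighbour, delay) pairs."""
    dist = {root: 0.0}
    parent = {root: None}
    pq = [(0.0, root)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry
        for v, w in delays.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(pq, (nd, v))
    return parent, dist
```

When a node floods fresh delay measurements, every other node reruns this computation, so all nodes converge on consistent routes.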
Article
Considering the urgency of the need for standards which would allow constitution of heterogeneous computer networks, ISO created a new subcommittee for "Open Systems Interconnection" (ISO/TC97/SC16) in 1977. The first priority of subcommittee 16 was to develop an architecture for open systems interconnection which could serve as a framework for the definition of standard protocols. As a result of 18 months of studies and discussions, SC16 adopted a layered architecture comprising seven layers (Physical, Data Link, Network, Transport, Session, Presentation, and Application). In July 1979 the specifications of this architecture, established by SC16, were passed under the name of "OSI Reference Model" to Technical Committee 97 "Data Processing", along with recommendations to start officially, on this basis, a set of protocol standardization projects to cover the most urgent needs. These recommendations were adopted by TC97 at the end of 1979 as the basis for the following development of standards for Open Systems Interconnection within ISO. The OSI Reference Model was also recognized by the CCITT Rapporteur's Group on "Layered Model for Public Data Network Services." This paper presents the model of architecture for Open Systems Interconnection developed by SC16. Some indications are also given on the initial set of protocols which will likely be developed in this OSI Reference Model.
Article
In this paper a new class of network synchronization procedures, called Resynch Procedures, is described. A resynch procedure is a mechanism for effectively bringing all nodes of a distributed network to a known state simultaneously, despite arbitrary finite delays between nodes. The procedures presented have the interesting property that no time-outs are required. One use of a resynch procedure is to implement a network protocol that can guarantee that no packets will be lost and no duplicate packets will be inadvertently received, despite arbitrary node and link failures. This appears to be the first demonstration that such fail-safe protocols exist.
Article
We study the basic problem of constructing a spanning tree distributively in an asynchronous general network, in the presence of faults that occurred prior to the execution of the construction algorithm. Failures of this type are encountered, for example, during recovery from a crash in the network. In case the network has been partitioned, we construct a spanning forest. This problem is fundamental in computer communication networks, for example for routing, and as a subroutine for other distributed algorithms. Since we do not assume that a node in the network has any global knowledge about the network, no fault-resilient algorithm can guarantee termination detection. We present, for the first time, an optimal (in the order of message complexity) fault-resilient spanning-forest construction algorithm for general networks. The algorithm eventually constructs a spanning tree in every component of the network that remained connected and in which at least one node initiated the algorithm. Although our algorithm is fault-resilient, the order of the number of messages it uses is the same as that required by a non-resilient algorithm. For a network with m communication lines and n processors, k of which initiate the algorithm spontaneously, the algorithm we present uses at most O(n log k + m) messages. Another major contribution of this paper is the approach we took in order to modify an existing tree-construction algorithm for fault-free networks to make it fault-resilient.
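The outcome the abstract guarantees, a tree in every connected component that contains an initiator, can be sketched sequentially with per-initiator BFS growth. This is only a centralized picture of the result; the paper's algorithm is distributed, asynchronous and fault-resilient, which this sketch does not capture.

```python
from collections import deque

def spanning_forest(adj, initiators):
    """Grow a BFS tree from each initiator; nodes already claimed by
    an earlier tree are skipped, so the result is a forest spanning
    every component containing at least one initiator.
    `adj` maps node -> list of neighbours; returns node -> parent."""
    parent = {}
    for s in initiators:
        if s in parent:
            continue          # this component already has a tree
        parent[s] = None      # s becomes a root
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj.get(u, ()):
                if v not in parent:
                    parent[v] = u
                    q.append(v)
    return parent
```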