Roy Friedman

Technion - Israel Institute of Technology, Haifa, Israel

Publications (115) · 29.32 Total Impact Points

  • Source
    ABSTRACT: Over the last thirty years, numerous consistency conditions for replicated data have been proposed and implemented. Popular examples include linearizability (or atomicity), sequential consistency, causal consistency, and eventual consistency. These conditions are usually defined independently of the computing entities (nodes) that manipulate the replicated data; i.e., they do not take into account how computing entities are linked to one another or geographically distributed. To address this gap, as a first contribution, this paper introduces the notion of a proximity graph between computing nodes. If two nodes are connected in this graph, their operations must satisfy a strong consistency condition, while operations invoked by other nodes may satisfy a weaker condition. The second contribution is the use of such a graph as a generic approach to hybridizing data consistency conditions within the same system. We illustrate this approach on sequential consistency and causal consistency, and present a model in which all data operations are causally consistent, while operations by neighboring processes in the proximity graph are sequentially consistent. The third contribution is the design and proof of a distributed algorithm based on this proximity graph, which combines sequential consistency and causal consistency (the resulting condition is called fisheye consistency). In doing so, the paper not only extends the domain of consistency conditions, but also provides a generic, provably correct solution of direct relevance to modern geo-replicated systems.
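    The sketch below (Python, illustrative only) shows the core idea of the proximity graph described in the abstract above: operations by processes that are neighbors in the graph must satisfy the strong condition (sequential consistency), while all other operations need only satisfy the weaker one (causal consistency). The class and method names are assumptions for illustration, not the paper's algorithm.

      class ProximityGraph:
          def __init__(self, edges):
              # edges: iterable of (process_a, process_b) pairs
              self.neighbors = {}
              for a, b in edges:
                  self.neighbors.setdefault(a, set()).add(b)
                  self.neighbors.setdefault(b, set()).add(a)

          def required_consistency(self, p, q):
              """Strong condition for proximity-graph neighbors, weak condition otherwise."""
              if q in self.neighbors.get(p, set()):
                  return "sequential"
              return "causal"

      # Example: p1 and p2 are neighbors (e.g., co-located replicas), p3 is remote.
      g = ProximityGraph([("p1", "p2")])
      assert g.required_consistency("p1", "p2") == "sequential"
      assert g.required_consistency("p1", "p3") == "causal"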
  • Roy Friedman, Anna Kaplun Shulman
    ABSTRACT: This paper studies a density-driven, virtual-topography-based publish/subscribe service for mobile ad hoc networks. Two variants of the service are presented and evaluated by extensive simulations; the first is very frugal in communication, while the second trades higher message overhead for lower latency. Both variants are also compared to two other representative approaches for publish/subscribe in ad hoc networks: a dissemination-tree-based scheme and an efficient flooding-based scheme. It is shown that the density-driven approach outperforms the others in most tested scenarios.
    Ad Hoc Networks 01/2013; 11(1):522–540. DOI:10.1016/j.adhoc.2012.07.010 · 1.94 Impact Factor
  • Source
    V. Drabkin, R. Friedman, G. Kliot, M. Segal
    ABSTRACT: Reliable broadcast is a basic service for many collaborative applications, as it provides reliable dissemination of the same information to many recipients. This paper studies three common approaches for achieving scalable reliable broadcast in ad hoc networks, namely probabilistic flooding, counter-based broadcast, and lazy gossip. The strengths and weaknesses of each scheme are analyzed, and a new protocol that combines these three techniques, called RAPID, is developed. Specifically, the analysis in this paper focuses on the trade-offs between reliability (the percentage of nodes that receive each message), latency, and the message overhead of the protocol. Each of these methods excels in some of these parameters, but no single method wins in all of them. This motivates the need for a combined protocol that benefits from all of these methods and allows trading between them smoothly. Interestingly, since the RAPID protocol relies only on local computations and probability, it is highly resilient to mobility, failures, and even selfish behavior. By adding authentication, it can even be made tolerant to malicious behavior. Additionally, the paper includes a detailed performance evaluation by simulation. The simulations confirm that RAPID obtains higher reliability with low latency and good communication overhead compared with each of the individual methods.
    IEEE Transactions on Dependable and Secure Computing 01/2012; 8(6-8):866 - 882. DOI:10.1109/TDSC.2010.54 · 1.14 Impact Factor
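    As a concrete illustration of how two of the techniques named in the abstract above can be combined, the hedged Python sketch below makes a per-message forwarding decision that mixes probabilistic flooding with counter-based suppression: a first reception schedules a rebroadcast with probability p, and the rebroadcast is cancelled if the message is overheard at least counter_threshold times before the assessment delay expires. The lazy-gossip recovery phase is omitted, and all names and parameter values are assumptions, not the published RAPID protocol.

      import random

      class BroadcastState:
          def __init__(self, p=0.7, counter_threshold=3):
              self.p = p
              self.counter_threshold = counter_threshold
              self.seen = {}  # message id -> number of times overheard

          def on_receive(self, msg_id):
              """Return True if a rebroadcast should be scheduled for this message."""
              first_time = msg_id not in self.seen
              self.seen[msg_id] = self.seen.get(msg_id, 0) + 1
              # Only the first reception may trigger a (probabilistic) rebroadcast.
              return first_time and random.random() < self.p

          def should_still_forward(self, msg_id):
              # Called when the assessment delay expires: suppress the rebroadcast
              # if enough duplicates were overheard in the meantime (counter-based).
              return self.seen.get(msg_id, 0) < self.counter_threshold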
  • Roy Friedman, Alex Kogan
    ABSTRACT: This paper investigates a novel, efficient approach to utilizing multiple radio interfaces for enhancing the performance of reliable multicast from a single sender to a group of receivers. In the proposed scheme, one radio channel (and interface) is dedicated exclusively to recovery information transmissions. We apply this concept to both ARQ and hybrid ARQ+FEC protocols, formally analyzing the number of packets each receiver needs to process in both our approach and the common single-channel approach. We also present a corresponding efficient protocol and study its performance by simulation. Both the formal analysis and the simulations demonstrate the benefits of our scheme.
    Reliable Distributed Systems (SRDS), 2012 IEEE 31st Symposium on; 01/2012
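    The sketch below illustrates only the channel split described in the abstract above: data packets arrive on one radio channel, while all recovery traffic (NACKs and retransmissions) is confined to a second, dedicated channel/interface. It is a hedged illustration of the concept, not the analyzed ARQ or hybrid ARQ+FEC protocols; the class and callback names are assumptions.

      class MulticastReceiver:
          def __init__(self, send_nack_on_recovery_channel):
              self.expected = 0                 # next in-order sequence number
              self.received = set()
              self.nack = send_nack_on_recovery_channel  # bound to the recovery radio

          def on_data_packet(self, seqno):
              """Handle a packet arriving on the data channel."""
              self.received.add(seqno)
              # Ask for every missing packet below the highest one seen so far;
              # these requests (and the repairs) never use the data channel.
              missing = [s for s in range(self.expected, seqno) if s not in self.received]
              if missing:
                  self.nack(missing)
              while self.expected in self.received:
                  self.expected += 1

          def on_repair_packet(self, seqno):
              """Handle a retransmission arriving on the recovery channel."""
              self.on_data_packet(seqno)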
  • ABSTRACT: Data dissemination is an important service in mobile ad hoc networks (MANETs). The main objective of this paper is to present a dissemination protocol, called locBcast, which utilizes positioning information to obtain efficient dissemination trees with low control overhead. This paper includes an extensive simulation study that compares locBcast with selfP, dominantP, flooding, and a couple of probabilistic- and counter-based protocols. It is shown that locBcast performs comparably to or better than those protocols and is especially useful in the following challenging environments: the message sizes are large, the network is dense, and nodes are highly mobile.
    01/2011; 2011. DOI:10.1155/2011/680936
  • ABSTRACT: This paper describes a combined power and throughput performance study of WiFi and Bluetooth usage in smartphones. The study reveals several interesting phenomena and tradeoffs. The conclusions from this study suggest preferred usage patterns, as well as operative suggestions for researchers and smartphone developers. I. INTRODUCTION: Smartphones are quickly becoming the main computing and (data) communication platform. These days, smartphones are all equipped with Bluetooth and WiFi, which complement their cellular communication capabilities. Bluetooth was originally placed in mobile phones for personal-area communication, such as wireless earphones, synchronization with a nearby PC, and tethering. WiFi was added more recently in order to improve users' Web surfing experience whenever a WiFi access point is available. In fact, new market research predicts that between 2012 and 2014, depending on the source, WiFi-equipped smartphones will outnumber all other WiFi-enabled devices combined (laptops, tablets, WiFi-enabled TVs, etc.). When examining the anticipated usage patterns of WiFi on smartphones, it appears that in addition to fast and possibly free Internet access (including VoIP and video), direct communication between nearby devices is of growing interest. Obvious examples include media streaming either between smartphones or between a phone and another nearby wireless device (TV or computer) in a home or office setting. This is exemplified in the plethora of new WiFi-based streaming solutions, as well as being one of the main motivations behind the WiGig initiative. Another scenario includes ad-hoc social networking and communication, such as iPhone's iGroups, Nokia's Instant Community, Mobiluck, and WiPeer, to name a few. Such local communication can potentially be performed either over WiFi or over Bluetooth. Some could be performed while relying on a mutual nearby access point (AP), while others might be more natural for WiFi ad-hoc mode. When considering these alternatives, important considerations include the obtainable throughput and power consumption of each. As Bluetooth was planned for personal area communication, its…
    INFOCOM 2011. 30th IEEE International Conference on Computer Communications, Joint Conference of the IEEE Computer and Communications Societies, 10-15 April 2011, Shanghai, China; 01/2011
  • Source
    ABSTRACT: This paper investigates proactive data dissemination and storage schemes for wireless sensor networks (WSNs) with mobile sinks. The focus is on schemes that do not impose any restrictions on the sink's mobility pattern. The goal is to enable the sink to collect a representative view of the network's sensed data by visiting any set of x out of n nodes, where x ≪ n. The question is how to achieve this while maintaining a good trade-off between the communication overhead of the scheme, the storage space requirements on the nodes, and the ratio between the number of visited nodes x and the representativeness of the gathered data. To answer this question, we propose the density-based proactivE data dissEmination Protocol (DEEP), which combines probabilistic flooding with a probabilistic storing scheme. The DEEP protocol is formally analyzed, and its performance is studied in simulations using different network densities and compared with a scheme based on random walks, called RaWMS.
    Computer Communications 05/2010; DOI:10.1016/j.comcom.2010.01.003 · 1.35 Impact Factor
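    A minimal sketch of the two probabilistic decisions described in the DEEP abstract above: on receiving a disseminated reading, a node re-forwards it with some probability and stores it locally with some probability, so that a sink visiting only x ≪ n nodes still collects a representative sample. The function names and probability values are placeholders, not the paper's analyzed parameters.

      import random

      def handle_reading(reading, local_store, forward, p_forward=0.5, p_store=0.1):
          """Per-node handling of a reading received during dissemination."""
          if random.random() < p_forward:
              forward(reading)             # probabilistic flooding step
          if random.random() < p_store:
              local_store.append(reading)  # probabilistic storing step

      def collect(visited_stores):
          """A mobile sink simply unions the local stores of the x nodes it visits."""
          return [reading for store in visited_stores for reading in store]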
  • Source
    Roy Friedman, Alex Kogan
    ABSTRACT: This paper considers the problem of calculating dominating sets in bounded-degree networks. In these networks, the maximal degree of any node is bounded by δ, which is usually significantly smaller than n, the total number of nodes in the system. Such networks arise in various settings of wireless and peer-to-peer communication. The trivial approach of including all nodes in the dominating set yields an algorithm with an approximation ratio of δ + 1. We show that any deterministic algorithm with a non-trivial approximation ratio requires Ω(log* n) rounds, meaning effectively that no local o(δ)-approximation deterministic algorithm may ever exist. On the positive side, we show two deterministic algorithms that achieve log δ- and 2 log δ-approximation in O(δ³ + log* n) and O(δ² log δ + log* n) time, respectively. These algorithms rely on coloring rather than node IDs to break symmetry.
    Proceedings of the 29th Annual ACM Symposium on Principles of Distributed Computing, PODC 2010, Zurich, Switzerland, July 25-28, 2010; 01/2010
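    For context on the approximation ratios quoted above: the trivial dominating set is V itself (a (δ + 1)-approximation when degrees are bounded by δ), while the classical centralized greedy below repeatedly picks the node that covers the most still-uncovered nodes. This greedy is only a standard baseline for comparison; the paper's algorithms are distributed and coloring-based, which is not shown here.

      def greedy_dominating_set(adj):
          """adj: dict mapping node -> set of neighbors (undirected graph)."""
          uncovered = set(adj)
          dominating = set()
          while uncovered:
              # Pick the node dominating the most uncovered nodes (itself plus neighbors).
              best = max(adj, key=lambda v: len(({v} | adj[v]) & uncovered))
              dominating.add(best)
              uncovered -= {best} | adj[best]
          return dominating

      # Example: the path a-b-c is dominated by {b} alone.
      adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
      assert greedy_dominating_set(adj) == {"b"}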
  • Roy Friedman, Alex Kogan
    ABSTRACT: Short-range wireless communication capabilities enable the creation of ad hoc networks between devices such as smartphones or sensors, spanning, e.g., an entire high school or a small university campus. This paper is motivated by the proliferation of devices equipped with multiple such capabilities, e.g., Bluetooth (BT) and WiFi for smartphones, or ZigBee and WiFi for sensors. Yet each of these interfaces has significantly different, and to a large extent complementary, characteristics in terms of energy efficiency, transmission range, and bandwidth. Consequently, a viable ad hoc network composed of such devices must be able to utilize the combination of these capabilities in a clever way. For example, BT is an order of magnitude more power efficient than WiFi, but its transmission range is also an order of magnitude shorter. Hence, one would want to shut down as many WiFi transmitters as possible, while still ensuring overall network connectivity. Moreover, for latency and network-capacity reasons, in addition to pure connectivity, a desired property of such a solution is to keep the number of BT hops traversed by each transmission below a given threshold k. This paper addresses this issue by introducing the novel k-Weighted Connected Dominating Set (kWCDS) problem and providing a formal definition for it. A distributed algorithm with a proven approximation ratio is presented, followed by a heuristic protocol. While the heuristic protocol has no formally proven approximation ratio, it behaves better than the first protocol in many practical network densities. Beyond that, a tradeoff between communication overhead and the quality of the resulting kWCDS emerges. The paper includes simulation results that explore the performance of the two protocols.
    12/2009: pages 159-173;
  • Source
    Roy Friedman, Galya Tcharny
    ABSTRACT: Purpose – Mobile ad-hoc networks (MANETs) are networks formed in an ad-hoc manner by collections of devices equipped with wireless communication capabilities, such as the popular WiFi (IEEE 802.11b) standard. As the hardware technology and networking protocols for MANETs become mature and ubiquitous, the main barrier for MANETs to become widely used is applications. As in other areas of distributed computing, in order to expedite the development of applications, there is a need for middleware services that support these applications. Failure detection has been identified as a basic component for many reliable distributed middleware services and applications. This paper aims to investigate this issue. Design/methodology/approach – This paper presents an adaptation of a gossip-based failure detection protocol to MANETs, and an evaluation, by extensive simulations, of this protocol's performance in such networks. Findings – The results can be viewed as a feasibility check for implementing failure detection in MANETs, and the conclusions drawn from them can be used to motivate and improve future implementations of both a failure detection component and of applications and middleware services relying on such a component. Originality/value – This paper presents an adaptation of a gossip-based failure detection protocol to MANET environments, and presents an extensive simulation-based performance study of this protocol in MANETs with various parameters.
    International Journal of Pervasive Computing and Communications 11/2009; 5:476-496. DOI:10.1108/17427370911008857
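    The sketch below shows a generic gossip-style heartbeat failure detector of the kind the abstract above adapts to MANETs: each node periodically increments its own heartbeat, gossips its table to a randomly chosen neighbor, merges incoming tables, and suspects any node whose heartbeat has not advanced within a timeout. It is a hedged, generic sketch; the concrete parameters and MANET adaptations are those of the paper.

      import random, time

      class GossipFailureDetector:
          def __init__(self, my_id, fail_timeout=10.0):
              self.my_id = my_id
              self.fail_timeout = fail_timeout
              self.heartbeats = {my_id: 0}             # node id -> latest heartbeat counter
              self.last_update = {my_id: time.time()}  # node id -> local time of last advance

          def tick_and_gossip(self, neighbors, send):
              """Called periodically: bump own heartbeat and gossip the table."""
              self.heartbeats[self.my_id] += 1
              self.last_update[self.my_id] = time.time()
              if neighbors:
                  send(random.choice(neighbors), dict(self.heartbeats))

          def on_gossip(self, remote_heartbeats):
              """Merge a received heartbeat table, keeping the freshest entries."""
              now = time.time()
              for node, hb in remote_heartbeats.items():
                  if hb > self.heartbeats.get(node, -1):
                      self.heartbeats[node] = hb
                      self.last_update[node] = now

          def suspected(self):
              now = time.time()
              return {n for n, t in self.last_update.items()
                      if n != self.my_id and now - t > self.fail_timeout}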
  • Source
    Roy Friedman, Ari Shotland, Gwendal Simon
    ABSTRACT: Hybrid networks are formed by a combination of access points and mobile nodes such that the mobile nodes can communicate both through the access points and using ad-hoc networking among themselves. This work deals with providing efficient routing between mobile devices in hybrid networks. Specifically, we assume the existence of a spanning tree from each access point to all mobile devices within the transitive transmission range of the access point. We utilize this spanning tree to design a family of efficient point-to-point routing protocols for communication between the mobile devices themselves. The protocols utilize the tree structure in order to avoid expensive flooding of the entire network. The paper includes a detailed simulation study of several representative communication patterns, which compares our approaches to DSR.
    Ad Hoc Networks 08/2009; DOI:10.1016/j.adhoc.2008.09.008 · 1.94 Impact Factor
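    As a rough illustration of the tree-based idea in the abstract above: if every mobile node knows its parent on the spanning tree rooted at an access point, a message can follow the tree up to the lowest common ancestor of source and destination and then down again, instead of flooding the network. This is only a sketch of the routing principle under that assumption, not any specific protocol variant from the paper.

      def path_to_root(node, parent):
          """parent maps each node to its tree parent; the root (access point) maps to None."""
          path = [node]
          while parent[path[-1]] is not None:
              path.append(parent[path[-1]])
          return path

      def tree_route(src, dst, parent):
          up = path_to_root(src, parent)    # src ... root
          down = path_to_root(dst, parent)  # dst ... root
          ancestors = set(up)
          i = 0
          while down[i] not in ancestors:   # climb from dst until we hit src's branch
              i += 1
          lca = down[i]
          return up[:up.index(lca) + 1] + list(reversed(down[:i]))

      # Example tree rooted at the access point "AP": AP-(a, b), a-(c), b-(d).
      parent = {"AP": None, "a": "AP", "b": "AP", "c": "a", "d": "b"}
      assert tree_route("c", "d", parent) == ["c", "a", "AP", "b", "d"]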
  • Aline Viana, Artur Ziviani, Roy Friedman
    IEEE Communications Letters 04/2009; 13(3):178-180. DOI:10.1109/LCOMM.2009.081990 · 1.46 Impact Factor
  • Roy Friedman, Alex Kogan
    ABSTRACT: Modern mobile phones and laptops are equipped with multiple wireless communication interfaces, such as WiFi and Bluetooth (BT), enabling the creation of ad-hoc networks. These interfaces significantly differ from one another in their power requirements, transmission range, bandwidth, etc. For example, BT is an order of magnitude more power efficient than WiFi, but its transmission range is an order of magnitude shorter. This paper introduces a management middleware that establishes a power-efficient overlay for such ad-hoc networks, in which most devices can shut down their long-range, power-hungry wireless interface (e.g., WiFi). Yet the resulting overlay is fully connected, and for capacity and latency needs, no message ever travels more than 2k short-range (e.g., BT) hops, where k is an arbitrary parameter. The paper describes the architecture of the solution and the management protocol, as well as a detailed simulation-based performance study. The simulations largely validate the ability of the management infrastructure to obtain considerable power savings while keeping the network connected and maintaining reasonable latency. The performance study covers both static and mobile networks.
    Middleware 2009, ACM/IFIP/USENIX, 10th International Middleware Conference, Urbana, IL, USA, November 30 - December 4, 2009. Proceedings; 01/2009
  • Source
    Roy Friedman, Noam Mori
    ABSTRACT: Finding data items is one of the most basic services of any distributed system. It is particularly challenging in ad-hoc networks, due to their inherently decentralized nature and lack of infrastructure. A data location service (DLS) provides this capability. This paper presents 3DLS, a novel density-driven data location service. 3DLS is based on performing biased walks over a density-based virtual topography. 3DLS also includes an autonomic dynamic configuration mechanism for adapting the lengths of the walks, in order to ensure good performance under varying circumstances and loads. This is achieved without any explicit knowledge of the network characteristics, such as size, mobility speed, etc. Moreover, 3DLS does not rely on geographical knowledge, its decisions are based only on local information, it does not invoke multi-hop routing, and it avoids flooding the network. The paper includes a detailed performance study of 3DLS, carried out by simulations, which compares 3DLS to other known approaches. The simulation results validate the viability of 3DLS.
    Proceedings of the 10th ACM Interational Symposium on Mobile Ad Hoc Networking and Computing, MobiHoc 2009, New Orleans, LA, USA, May 18-21, 2009; 01/2009
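    A small sketch of a density-biased walk in the spirit of the 3DLS abstract above: at each step the walker moves to a neighbor chosen with probability proportional to that neighbor's local density value, so walks drift toward dense regions of the virtual topography. Purely illustrative; the actual walk rules and the adaptive walk lengths are defined in the paper.

      import random

      def biased_walk(start, steps, neighbors, density):
          """neighbors: node -> list of neighbor ids; density: node -> positive weight."""
          node, visited = start, [start]
          for _ in range(steps):
              nbrs = neighbors[node]
              if not nbrs:
                  break
              # Bias the next hop toward neighbors with higher density values.
              node = random.choices(nbrs, weights=[density[n] for n in nbrs], k=1)[0]
              visited.append(node)
          return visited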
  • Source
    ExtremeCom Workshop; 01/2009
  • Roy Friedman, Alex Kogan
    ABSTRACT: A wireless ad hoc network is composed of devices that are capable of communicating directly with their neighbors (roughly speaking, nodes that are nearby). Many such devices are battery-operated, e.g., laptops, smartphones, and PDAs. Thus, their operational lifetime before the battery must be recharged or replaced is limited. Among all subsystems operating inside these devices, wireless communication accounts for the major share of power consumption [1,2]. Additionally, platforms enabled with multiple wireless communication interfaces are becoming quite common. This makes the problem of efficient power usage by the wireless communication subsystem even more acute.
    Distributed Computing, 23rd International Symposium, DISC 2009, Elche, Spain, September 23-25, 2009. Proceedings; 01/2009
  • Source
    ABSTRACT: Gossip-based networking has emerged as a viable approach to disseminating information reliably and efficiently in large-scale systems. Initially introduced for database replication [222], the approach now extends much further. For example, it has been applied to data aggregation [415], peer sampling [416], and publish/subscribe systems [845]. Gossip-based protocols rely on a periodic peer-wise exchange of information in wired systems. By changing the way each peer is selected for the gossip communication, and which data are exchanged and processed [451], gossip systems can be used to perform different distributed tasks, among others: overlay maintenance, distributed computation, and information dissemination (a collection of papers on gossip can be found in [451]). In a wired setting, the peer sampling service, allowing for a random or specific peer selection, is often provided as an independent service, able to operate independently from other gossip-based services [416].
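    The sketch below shows a generic push-pull gossip exchange of the kind described above: in each round a node picks one peer (e.g., via a peer sampling service), sends its state, receives the peer's state, and both merge. It is shown for a simple "highest version wins" key-value state; real gossip systems plug in task-specific selection and merge functions, as the chapter explains.

      def merge(state_a, state_b):
          """Keep, per key, the entry with the highest version number."""
          merged = dict(state_a)
          for key, (version, value) in state_b.items():
              if key not in merged or version > merged[key][0]:
                  merged[key] = (version, value)
          return merged

      def push_pull_round(node_state, peer_state):
          # After the exchange, both sides hold the union of the freshest entries.
          new_state = merge(node_state, peer_state)
          return new_state, dict(new_state)

      # Example: after one exchange both replicas know the newer x and the key y.
      a = {"x": (2, "new")}
      b = {"x": (1, "old"), "y": (7, "hello")}
      a, b = push_pull_round(a, b)
      assert a == b == {"x": (2, "new"), "y": (7, "hello")}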
  • Source
    Vadim Drabkin, Roy Friedman, Gabriel Kliot
    ABSTRACT: Developers often use TCP connections to realize reliable point-to-point communication in distributed systems. A common issue in such systems' design is whether a middleware or an application can rely solely on TCP, or whether a higher-level reliable mechanism should be implemented above it. A related question is whether developers can use the breakage of TCP for failure detection. The famous end-to-end argument answers the first question. Yet common wisdom suggests that TCP breakage always results from the failure of a process or machine on either end of the connection, or from a severe networking problem. Consequently, some designers might be tempted to avoid implementing a higher-level reliable delivery mechanism when designing systems for LAN environments. Others might rely on TCP breakage as a definite indication of a failure or a network partition. Here, we highlight the dangers of relying solely on TCP for reliability without any additional message-recovery mechanism at the application level (or at least inside a middleware in the same address space as the application). Also, TCP breakage can occur in a perfectly functioning LAN, so it can't be relied on for failure detection either.
    IEEE Distributed Systems Online 09/2008; DOI:10.1109/MDSO.2008.22
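    The sketch below illustrates the kind of application-level (or middleware-level) message-recovery layer the article argues for: messages carry sequence numbers, are buffered until the peer acknowledges them, and everything unacknowledged is replayed after a TCP connection is re-established. The transport is abstracted as a callable so the example stays self-contained; the class and callback names are assumptions, not an API from the article.

      class ReliableSender:
          def __init__(self, transport_send):
              self.send = transport_send  # callable(seqno, payload); may raise ConnectionError
              self.next_seq = 0
              self.outbox = {}            # unacknowledged messages: seqno -> payload

          def submit(self, payload):
              seqno = self.next_seq
              self.next_seq += 1
              self.outbox[seqno] = payload     # buffer before handing the data to TCP
              try:
                  self.send(seqno, payload)
              except ConnectionError:
                  pass                         # will be replayed on reconnect
              return seqno

          def on_ack(self, seqno):
              self.outbox.pop(seqno, None)     # forget a message only once the peer acked it

          def on_reconnect(self, transport_send):
              self.send = transport_send
              for seqno in sorted(self.outbox):
                  # The receiver deduplicates by sequence number.
                  self.send(seqno, self.outbox[seqno])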
  • Source
    Roy Friedman, Gabriel Kliot, Chen Avin
    ABSTRACT: Quorums are a basic construct for solving many fundamental distributed computing problems. One of the known ways of making quorums scalable and efficient is by weakening their intersection guarantee to a probabilistic one. This paper explores several access strategies for implementing probabilistic quorums in ad hoc networks. In particular, we present the first detailed study of asymmetric probabilistic bi-quorum systems and show their advantages in ad hoc networks. The paper includes a formal analysis of these approaches, backed by a simulation-based study. In particular, we show that one of the strategies, based on random walks, exhibits the smallest communication overhead.
    Dependable Systems and Networks With FTCS and DCC, 2008. DSN 2008. IEEE International Conference on; 07/2008
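    The short calculation below illustrates the standard intersection argument behind probabilistic (bi-)quorums mentioned above: if a write quorum of size q_w and a read quorum of size q_r are drawn uniformly at random from n nodes, the probability that they fail to intersect is C(n − q_w, q_r) / C(n, q_r). It is shown only to make the asymmetric trade-off (q_w ≠ q_r) concrete; it is not the paper's analysis of the different access strategies.

      from math import comb

      def miss_probability(n, q_w, q_r):
          """Probability that a random read quorum misses all write-quorum nodes."""
          return comb(n - q_w, q_r) / comb(n, q_r)

      # Asymmetric example: with 400 nodes, a larger write quorum (60) lets read
      # quorums stay small (40) while keeping the non-intersection probability near 0.001.
      print(miss_probability(400, 60, 40))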
  • Adnan Agbaria, Roy Friedman
    ABSTRACT: A large number of distributed checkpointing protocols have appeared in the literature. However, to make informed decisions about which protocol performs best for a given environment, one must use an objective measure for comparing them. Obviously, a distributed checkpointing protocol could be the best in one environment but not in another. This paper presents an objective measure, called the overhead ratio, for evaluating distributed checkpointing protocols. This measure extends previous evaluation schemes by incorporating several additional parameters that are inherent in distributed environments. In particular, we take into account the rollback propagation of the protocol, which impacts the length of the recovery process, and therefore the expected program run time in executions that involve failures and recoveries. Using this objective measure as an evaluation technique, the paper also analyses several known protocols and compares their overhead ratios.
    Performance Evaluation 05/2008; DOI:10.1016/j.peva.2007.09.001 · 0.89 Impact Factor

Publication Stats

2k Citations
29.32 Total Impact Points

Institutions

  • 1997–2013
    • Technion - Israel Institute of Technology
      • Electrical Engineering Group
      Haifa, Israel
  • 1997–2006
    • Cornell University
      • Department of Computer Science
      Ithaca, NY, United States
  • 2004
    • Université de Rennes 1
      Rennes, Brittany, France
  • 2003
    • University of Illinois, Urbana-Champaign
      • Coordinated Science Laboratory
      Urbana, Illinois, United States