Roy Friedman

Technion - Israel Institute of Technology, Haifa, Israel

Publications (129)

  • Source
    Davide Frey · Roy Friedman · Achour Mostéfaoui · [...] · François Taïani
    ABSTRACT: In large-scale systems such as the Internet, replicating data is an essential feature for providing availability and fault tolerance. Attiya and Welch proved that using strong consistency criteria such as atomicity is costly, as each operation may need an execution time linear in the latency of the communication network. Weaker consistency criteria like causal consistency and PRAM consistency do not ensure convergence: the different replicas are not guaranteed to converge towards a unique state. Eventual consistency guarantees that all replicas eventually converge once the participants stop updating. However, it fails to fully specify the semantics of the operations on shared objects and requires additional non-intuitive and error-prone distributed specification techniques. In addition, existing consistency conditions are usually defined independently from the computing entities (nodes) that manipulate the replicated data; i.e., they do not take into account how computing entities might be linked to one another, or geographically distributed. In this deliverable, we address these issues with two novel contributions. The first contribution proposes a notion of a proximity graph between computing nodes. If two nodes are connected in this graph, their operations must satisfy a strong consistency condition, while the operations invoked by other nodes are allowed to satisfy a weaker condition. We use this graph to provide a generic approach to the hybridization of data consistency conditions within the same system, and on this basis we design a distributed algorithm over the proximity graph that combines sequential consistency and causal consistency (the resulting condition is called fisheye consistency). The second contribution of this deliverable focuses on overcoming the limitations of eventual consistency.
    To this end, we formalize a new consistency criterion, called update consistency, that requires the state of a replicated object to be consistent with a linearization of all the updates. In other words, whereas atomicity imposes a linearization of all of the operations, this criterion imposes it only on updates. Consequently, some read operations may return outdated values. Update consistency is stronger than eventual consistency, so we can replace eventually consistent objects with update consistent ones in any program. Finally, we prove that update consistency is universal, in the sense that any object can be implemented under this criterion in a distributed system where any number of nodes may crash.
    Full-text available · Article · Nov 2015
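The universality claim above can be illustrated with a small sketch (my own illustration, not the paper's algorithm): each replica timestamps its updates with a Lamport-style clock, replicas exchange their update sets, and every read replays all known updates in one total order, so replicas that have seen the same updates return the same state.

```python
class UpdateConsistentObject:
    """One replica of an update-consistent object (illustrative sketch).

    Updates are totally ordered by (clock, replica_id); reads replay
    all known updates in that order over the initial state.
    """

    def __init__(self, replica_id, initial, apply_fn):
        self.replica_id = replica_id
        self.initial = initial
        self.apply_fn = apply_fn  # (state, op) -> new state
        self.clock = 0
        self.updates = set()      # entries: (clock, replica_id, op)

    def update(self, op):
        # Record a local update with a fresh timestamp.
        self.clock += 1
        self.updates.add((self.clock, self.replica_id, op))

    def merge(self, remote_updates):
        # Anti-entropy step: absorb updates learned from another replica.
        self.updates |= remote_updates
        self.clock = max([self.clock] + [c for (c, _, _) in self.updates])

    def read(self):
        # Replay every known update in the agreed total order.
        state = self.initial
        for _, _, op in sorted(self.updates):
            state = self.apply_fn(state, op)
        return state
```

Once two replicas have merged each other's update sets, their reads agree, which is exactly the convergence eventual consistency promises, here with the added guarantee that the common state is a linearization of all updates.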
  • Source
    ABSTRACT: Modern mobile handsets and the myriad of wearable devices connected to them offer a wide range of sensing capabilities. The ubiquity of such sensing devices offers the potential to realise novel applications based on collaborative sensing, in which application logic makes use of sensor input from a number of handsets, typically distributed across a defined physical area. Such applications will be enabled by mobile cloud computing, with the devices transferring raw or pre-processed sensed data to application logic hosted in the cloud. This results in a trade-off between the quality of the sensed data received by applications and the energy required to transfer data from the mobile handsets. We address this trade-off by considering a scheme in which a collaborative sensing middleware mediates between multiple applications requiring sensed data and the mobile handsets located within a particular physical area. We present and evaluate an algorithm which seeks to maximise the degree to which sensed data transferred from a given mobile device can be served to more than one application. We show that this algorithm leads to better overall performance in terms of energy used than an algorithm which does not aggregate sensed information between applications.
    Full-text available · Article · Oct 2015 · Sustainable Computing: Informatics and Systems
  • Roy Friedman · Michel Raynal · François Taïani
    ABSTRACT: Over the last thirty years, numerous consistency conditions for replicated data have been proposed and implemented. Popular examples of such conditions include linearizability (or atomicity), sequential consistency, causal consistency, and eventual consistency. These consistency conditions are usually defined independently from the computing entities (nodes) that manipulate the replicated data; i.e., they do not take into account how computing entities might be linked to one another, or geographically distributed. To address this lack, as a first contribution, this paper introduces the notion of proximity graph between computing nodes. If two nodes are connected in this graph, their operations must satisfy a strong consistency condition, while the operations invoked by other nodes are allowed to satisfy a weaker condition. The second contribution is the use of such a graph to provide a generic approach to the hybridization of data consistency conditions into the same system. We illustrate this approach on sequential consistency and causal consistency, and present a model in which all data operations are causally consistent, while operations by neighboring processes in the proximity graph are sequentially consistent. The third contribution of the paper is the design and the proof of a distributed algorithm based on this proximity graph, which combines sequential consistency and causal consistency (the resulting condition is called fisheye consistency). In doing so the paper not only extends the domain of consistency conditions, but provides a generic provably correct solution of direct relevance to modern geo-replicated systems.
    Article · Nov 2014
  • Source
    ABSTRACT: The proliferation of smart mobile devices, having multiple sensing capabilities and significant computing power, enables their inclusion into mobile sensing systems. Sensor-driven mobile applications are drastically altering various sectors like healthcare, social networks and environmental monitoring. However, using a large number of mobile devices has an impact on the viability of techniques involved in sensing systems. Moreover, continuous sensing affects the battery performance of the mobile device. This motivates a need for energy-efficient sensor-data collection with a minimum number of mobile devices. This paper presents an algorithm that trades off energy consumption against the number of involved mobile devices in the sensing environment, subject to satisfying the sensing needs of multiple applications. We compare our algorithm with an energy-efficient solution for sensor allocation. Results show that our algorithm incurs less than a 6% difference in battery loss while reducing the number of involved mobile devices by more than 30%.
    Full-text available · Conference Paper · Apr 2014
  • Roy Friedman · Gabriel Kliot · Alex Kogan
    ABSTRACT: Inspired by the proliferation of cloud-based services, this paper studies consensus, one of the most fundamental distributed computing problems, in a hybrid model of computation. In this model, processes (or nodes) exchange information by passing messages or by accessing a reliable and highly-available register hosted in the cloud. The paper presents a formal definition of the model and problem, and studies performance tradeoffs related to using such a register. Specifically, it proves a lower bound on the number of register accesses in deterministic protocols, and gives a simple deterministic protocol that meets this bound when the register is compare-and-swap (CAS). In addition, two efficient protocols are presented; the first one is probabilistic and solves consensus with a single CAS register access in expectation, while the second one is deterministic and requires a single CAS register access when some favorable network conditions occur. A benefit of those protocols is that they can ensure both liveness and safety, and only their efficiency is affected by the probabilistic and timing assumptions.
    Chapter · Dec 2013
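The single-CAS idea can be sketched as follows (a minimal illustration under my own simplifying assumptions, not the paper's protocol): if the cloud-hosted register's compare-and-swap returns the register's previous value, every process can decide with exactly one register access, because the first successful CAS fixes the decision for everyone.

```python
import threading


class CASRegister:
    """Stand-in for the reliable, highly available cloud-hosted register."""

    def __init__(self):
        self._lock = threading.Lock()
        self._value = None

    def compare_and_swap(self, expected, new):
        # Atomically: if value == expected, install new; always return
        # the value held before the operation.
        with self._lock:
            old = self._value
            if old == expected:
                self._value = new
            return old


def propose(register, value):
    """Reach consensus with a single register access per process."""
    old = register.compare_and_swap(None, value)
    # If the register was empty, our value was installed and decides;
    # otherwise the previously installed value is the decision.
    return value if old is None else old
```

The `threading.Lock` here only models the register's atomicity on a single machine; in the paper's setting the register lives in the cloud and processes reach it over the network.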
  • Roy Friedman · Amit Portnoy
    ABSTRACT: This paper describes TRUSTPACK, a decentralized trust management framework that provides trust management as a generic service. TRUSTPACK is unique in that it does not provide a central service. Instead, it is run by many autonomous services. This design enables TRUSTPACK to alleviate privacy concerns, as well as potentially provide better personalization and scalability when compared with current centralized solutions. A major component of TRUSTPACK is a generic decentralized graph query processing framework called GRAPHPACK, which was also developed as part of this work. GRAPHPACK consists of a decentralized graph processing language as well as an execution engine, as elaborated in this paper. The paper also presents several examples and a case study showing how TRUSTPACK can be used to handle various trust management scenarios, as well as its incorporation in an existing third party P2P file sharing application. Prototypes of TRUSTPACK and GRAPHPACK are available as open source projects at http://code.google.com/p/trustpack/ and http://code.google.com/p/graphpack/, respectively.
    Article · Sep 2013 · Software Practice and Experience
  • Source
    Lei Jiao · Xiaoming Fu · Roy Friedman · [...] · Hannes Tschofenig
    ABSTRACT: Mobile cloud computing is a new, rapidly growing field. In addition to the conventional model, in which mobile clients access cloud services in the well-known client/server fashion, existing work has proposed exploring cloud functionality from another perspective: offloading part of the mobile code to the cloud for remote execution in order to optimize the application performance and energy efficiency of the mobile device. In this position paper, we investigate the state of the art of code offloading for mobile devices, highlight the significant challenges on the way to a more efficient cloud-based offloading framework, and point out how existing technologies provide opportunities to facilitate the framework's implementation.
    Full-text available · Conference Paper · Jul 2013
  • Roy Friedman · Anna Kaplun Shulman
    ABSTRACT: This paper studies a density-driven, virtual-topography-based publish/subscribe service for mobile ad hoc networks. Two variants of the service are presented and evaluated by extensive simulations; the first is very frugal in communication, while the second trades a higher message overhead for lower latency. Both variants are also compared to two other representative approaches for publish/subscribe in ad hoc networks, a dissemination-tree-based scheme and an efficient flooding-based scheme. It is shown that the density-driven approach outperforms the others in most tested scenarios.
    Article · Jan 2013 · Ad Hoc Networks
  • Roy Friedman · Alex Kogan
    ABSTRACT: This paper investigates a novel efficient approach to utilize multiple radio interfaces for enhancing the performance of reliable multicasts from a single sender to a group of receivers. In the proposed scheme, one radio channel (and interface) is dedicated only for recovery information transmissions. We apply this concept to both ARQ and hybrid ARQ+FEC protocols, formally analyzing the number of packets each receiver needs to process in both our approach and in the common single channel approach. We also present a corresponding efficient protocol, and study its performance by simulation. Both the formal analysis and the simulations demonstrate the benefits of our scheme.
    Conference Paper · Oct 2012
  • Source
    Vadim Drabkin · Roy Friedman · Gabriel Kliot · Marc Segal
    ABSTRACT: Reliable broadcast is a basic service for many collaborative applications, as it provides reliable dissemination of the same information to many recipients. This paper studies three common approaches for achieving scalable reliable broadcast in ad hoc networks, namely probabilistic flooding, counter-based broadcast, and lazy gossip. The strengths and weaknesses of each scheme are analyzed, and a new protocol that combines these three techniques, called RAPID, is developed. Specifically, the analysis in this paper focuses on the trade-offs between reliability (the percentage of nodes that receive each message), latency, and the message overhead of the protocol. Each of these methods excels in some of these parameters, but no single method wins in all of them. This motivates the need for a combined protocol that benefits from all of these methods and allows trading among them smoothly. Interestingly, since the RAPID protocol relies only on local computations and probability, it is highly resilient to mobility, failures, and even selfish behavior. By adding authentication, it can be made tolerant to malicious behavior as well. Additionally, the paper includes a detailed performance evaluation by simulation. The simulations confirm that RAPID obtains higher reliability with low latency and good communication overhead compared with each of the individual methods.
    Full-text available · Article · Jan 2012 · IEEE Transactions on Dependable and Secure Computing
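The first of the three building blocks can be illustrated with a short simulation (my own sketch, not the RAPID protocol itself): in probabilistic flooding, every node other than the source forwards a newly received message with a fixed probability p, trading reliability against message overhead.

```python
import random


def probabilistic_flood(adj, source, p, rng=random):
    """Simulate one probabilistic-flooding broadcast on an undirected graph.

    adj: dict mapping node -> iterable of neighbors.
    The source always forwards; every other node forwards the message
    once, on first receipt, with probability p.
    Returns the set of nodes that received the message.
    """
    received = {source}
    frontier = [source]
    while frontier:
        nxt = []
        for u in frontier:
            if u != source and rng.random() >= p:
                continue  # this node stays silent
            for v in adj[u]:
                if v not in received:
                    received.add(v)
                    nxt.append(v)
        frontier = nxt
    return received
```

With p = 1 this degenerates to plain flooding (full reliability, maximal overhead); lowering p reduces overhead at the cost of some nodes missing the message, which is precisely the trade-off the abstract describes.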
  • Roy Friedman · Alex Kogan · Yevgeny Krivolapov
    ABSTRACT: This paper describes a combined power and throughput performance study of WiFi and Bluetooth usage in smartphones. The work measures the obtained throughput in various settings while employing each of these technologies, and the power consumption level associated with them. In addition, the power requirements of Bluetooth and WiFi in their respective noncommunicating modes are also compared. The study reveals several interesting phenomena and tradeoffs. In particular, the paper identifies many situations in which WiFi is superior to Bluetooth, countering previous reports. The study also identifies a couple of scenarios that are better handled by Bluetooth. The conclusions from this study suggest preferred usage patterns, as well as operative suggestions for researchers and smartphone developers. This includes a cross-layer optimization for TCP/IP that could greatly improve the throughput to power ratio whenever the transmitter is more capable than the receiver.
    Conference Paper · Apr 2011
  • Source
    Adnan Agbaria · Muhamad Hugerat · Roy Friedman
    ABSTRACT: Data dissemination is an important service in mobile ad hoc networks (MANETs). The main objective of this paper is to present a dissemination protocol, called locBcast, which utilizes positioning information to obtain efficient dissemination trees with low control overhead. This paper includes an extensive simulation study that compares locBcast with selfP, dominantP, flooding, and a couple of probabilistic- and counter-based protocols. It is shown that locBcast performs similarly to or better than those protocols and is especially useful in the following challenging environments: the message sizes are large, the network is dense, and nodes are highly mobile.
    Full-text available · Article · Jan 2011 · Journal of Computer Networks and Communications
  • Source
    ABSTRACT: This paper investigates proactive data dissemination and storage schemes for wireless sensor networks (WSNs) with mobile sinks. The focus is on schemes that do not impose any restrictions on the sink's mobility pattern. The goal is to enable the sink to collect a representative view of the network's sensed data by visiting any set of x out of n nodes, where x≪n. The question is how to achieve this while maintaining a good trade-off between the communication overhead of the scheme, the storage space requirements on the nodes, and the ratio between the number of visited nodes x and the representativeness of the gathered data. To answer this question, we propose the density-based proactivE data dissEmination Protocol (DEEP), which combines probabilistic flooding with a probabilistic storing scheme. The DEEP protocol is formally analyzed, and its performance is studied in simulations using different network densities and compared with a scheme based on random walks, called RaWMS.
    Full-text available · Article · May 2010 · Computer Communications
  • Roy Friedman · Alex Kogan
    ABSTRACT: This paper considers the problem of calculating dominating sets in networks with bounded degree. In these networks, the maximal degree of any node is bounded by Δ, which is usually significantly smaller than n, the total number of nodes in the system. Such networks arise in various settings of wireless and peer-to-peer communication. A trivial approach of choosing all nodes into the dominating set yields an algorithm with an approximation ratio of Δ + 1. We show that any deterministic algorithm with a non-trivial approximation ratio requires Ω(log* n) rounds, meaning effectively that no o(Δ)-approximation deterministic algorithm with a running time independent of the size of the system may ever exist. On the positive side, we show two deterministic algorithms that achieve log Δ- and 2 log Δ-approximation in O(Δ³ + log* n) and O(Δ² log Δ + log* n) time, respectively. These algorithms rely on coloring rather than node IDs to break symmetry.
    Conference Paper · Jan 2010
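For contrast with the distributed, coloring-based algorithms described above, the classic centralized greedy heuristic for dominating sets is easy to state (my own illustration, not one of the paper's algorithms): repeatedly add the node that dominates the most not-yet-dominated vertices.

```python
def greedy_dominating_set(adj):
    """Centralized greedy heuristic for a dominating set.

    adj: dict mapping node -> set of neighbors (undirected graph).
    Repeatedly pick the node covering the most still-uncovered
    vertices (itself plus its neighbors) until every vertex is
    dominated.
    """
    uncovered = set(adj)
    dominating = set()
    while uncovered:
        best = max(adj, key=lambda v: len(({v} | adj[v]) & uncovered))
        dominating.add(best)
        uncovered -= adj[best] | {best}
    return dominating
```

This greedy rule achieves the well-known O(ln Δ)-factor approximation in the centralized setting; the paper's contribution is obtaining comparable ratios with distributed algorithms whose running time depends on Δ and log* n rather than on n.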
  • Adnan Agbaria · Roy Friedman
    ABSTRACT: Data dissemination is an important service in ad hoc networks. This work presents a novel approach that utilizes the positioning information and the velocity vector of the immediate neighbors at every node to extrapolate the geographic location of the surrounding nodes at any time, in order to obtain efficient dissemination trees with low control overhead. The paper includes an extensive simulation study that compares our approach to other representative overlay and probabilistic forwarding approaches. It is shown that our approach performs similarly to or better than the other approaches, and is especially useful in the following challenging environments: the message sizes are large, the network is dense, and nodes are highly mobile. Therefore, we believe that this protocol is appropriate for opportunistic networks.
    Article · Jan 2010
  • Roy Friedman · Alex Kogan
    ABSTRACT: Short-range wireless communication capabilities enable the creation of ad hoc networks between devices such as smartphones or sensors, spanning, e.g., an entire high school or a small university campus. This paper is motivated by the proliferation of devices equipped with multiple such capabilities, e.g., Bluetooth (BT) and WiFi for smartphones, or ZigBee and WiFi for sensors. Yet, each of these interfaces has significantly different, and to a large extent complementary, characteristics in terms of energy efficiency, transmission range, and bandwidth. Consequently, a viable ad hoc network composed of such devices must be able to utilize the combination of these capabilities in a clever way. For example, BT is an order of magnitude more power efficient than WiFi, but its transmission range is also an order of magnitude shorter. Hence, one would want to shut down as many WiFi transmitters as possible, while still ensuring overall network connectivity. Moreover, for latency and network capacity reasons, in addition to pure connectivity, a desired property of such a solution is to keep the number of BT hops traversed by each transmission below a given threshold k. This paper addresses this issue by introducing the novel k-Weighted Connected Dominating Set (kWCDS) problem and providing a formal definition for it. A distributed algorithm with a proven approximation ratio is presented, followed by a heuristic protocol. While the heuristic protocol has no formally proven approximation ratio, it behaves better than the first protocol at many practical network densities. Beyond that, a tradeoff between communication overhead and the quality of the resulting kWCDS emerges. The paper includes simulation results that explore the performance of the two protocols.
    Chapter · Dec 2009
  • Roy Friedman · Alex Kogan
    ABSTRACT: Modern mobile phones and laptops are equipped with multiple wireless communication interfaces, such as WiFi and Bluetooth (BT), enabling the creation of ad hoc networks. These interfaces significantly differ from one another in their power requirements, transmission range, bandwidth, etc. For example, BT is an order of magnitude more power efficient than WiFi, but its transmission range is an order of magnitude shorter. This paper introduces a management middleware that establishes a power-efficient overlay for such ad hoc networks, in which most devices can shut down their long-range, power-hungry wireless interface (e.g., WiFi). Yet, the resulting overlay is fully connected, and for capacity and latency needs, no message ever travels more than 2k short-range (e.g., BT) hops, where k is an arbitrary parameter. The paper describes the architecture of the solution and the management protocol, as well as a detailed simulation-based performance study. The simulations largely validate the ability of the management infrastructure to obtain considerable power savings while keeping the network connected and maintaining reasonable latency. The performance study covers both static and mobile networks.
    Conference Paper · Nov 2009
  • Roy Friedman · Galya Tcharny
    ABSTRACT: Purpose – Mobile ad-hoc networks (MANETs) are networks that are formed in an ad-hoc manner by collections of devices that are equipped with wireless communication capabilities, such as the popular WiFi (IEEE 802.11b) standard. As the hardware technology and networking protocols for MANETs become mature and ubiquitous, the main barrier for MANETs to become widely used is applications. Like in other areas of distributed computing, in order to expedite the development of applications, there is a need for middleware services that support these applications. Failure detection has been identified as a basic component for many reliable distributed middleware services and applications. This paper aims to investigate this issue. Design/methodology/approach – This paper presents an adaptation of a gossip-based failure detection protocol to MANETs, and an evaluation by extensive simulations of this protocol's performance in such networks. Findings – The results can be viewed as a feasibility check for implementing failure detection in MANETs, and the conclusions drawn from them can be used to motivate and improve future implementations of both a failure detection component and of applications and middleware services relying on such a component. Originality/value – This paper presents an adaptation of a gossip-based failure detection protocol to MANET environments, and presents an extensive simulation-based performance study of this protocol in MANETs with various parameters.
    Article · Nov 2009 · International Journal of Pervasive Computing and Communications
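The gossip-based failure detection scheme the paper adapts can be sketched roughly as follows (a simplified illustration under my own assumptions, not the paper's exact protocol): each node periodically increments its own heartbeat counter, nodes merge heartbeat vectors when they gossip, and a node suspects any peer whose counter has not advanced within a timeout.

```python
class GossipFailureDetector:
    """One node's local view in a heartbeat-gossip failure detector."""

    def __init__(self, node_id, all_ids, fail_timeout):
        self.id = node_id
        self.fail_timeout = fail_timeout
        self.heartbeat = {i: 0 for i in all_ids}
        self.last_advance = {i: 0 for i in all_ids}

    def beat(self, now):
        # Called periodically by the local node itself.
        self.heartbeat[self.id] += 1
        self.last_advance[self.id] = now

    def gossip_from(self, remote_heartbeat, now):
        # Merge a heartbeat vector received from a (random) peer:
        # keep the larger counter and note when it last advanced.
        for i, hb in remote_heartbeat.items():
            if hb > self.heartbeat[i]:
                self.heartbeat[i] = hb
                self.last_advance[i] = now

    def suspected(self, now):
        # Suspect every peer whose heartbeat has been stale too long.
        return {i for i in self.heartbeat
                if i != self.id
                and now - self.last_advance[i] > self.fail_timeout}
```

The fail_timeout parameter captures the accuracy/completeness trade-off the simulations explore: a short timeout detects crashes quickly but wrongly suspects slow or poorly connected nodes, which matters especially in MANETs where gossip paths change with mobility.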
  • Roy Friedman · Alex Kogan
    ABSTRACT: A wireless ad hoc network is composed of devices that are capable of communicating directly with their neighbors (roughly speaking, nodes that are nearby). Many such devices are battery-operated, e.g., laptops, smartphones and PDAs. Thus, their operational lifetime before the battery must be recharged or replaced is limited. Among all subsystems operating inside these devices, wireless communication accounts for the major share of power consumption [1,2]. Additionally, platforms equipped with multiple wireless communication interfaces are becoming quite common, which makes the problem of efficient power usage by the wireless communication subsystem even more acute.
    Conference Paper · Sep 2009
  • Source
    Full-text available · Conference Paper · Aug 2009

Publication Stats

2k Citations

Institutions

  • 1997-2013
    • Technion - Israel Institute of Technology
      • Electrical Engineering Group
      Haifa, Israel
  • 2009
    • National Institute for Research in Computer Science and Control
      Le Chesnay, Île-de-France, France
  • 1997-2006
    • Cornell University
      • Department of Computer Science
      Ithaca, NY, United States
  • 2003
    • University of California, Santa Barbara
      • Department of Computer Science
      Santa Barbara, California, United States
  • 2001
    • Hebrew University of Jerusalem
      Jerusalem, Israel