Roberto Tamassia

Brown University, Providence, Rhode Island, United States

Publications (318)

  • Esha Ghosh, Olga Ohrimenko, Roberto Tamassia
    ABSTRACT: We introduce a formal model for order queries on lists in zero knowledge in the traditional authenticated data structure model. We call this model Privacy-Preserving Authenticated List (PPAL). In this model, queries are performed on a list stored in the (untrusted) cloud, where data integrity and privacy have to be maintained. To realize an efficient authenticated data structure, we first adapt the consistent data query model. To this end, we introduce a formal model called the Zero-Knowledge List (ZKL) scheme, which generalizes consistent membership queries in zero knowledge to consistent membership and order queries on a totally ordered set in zero knowledge. We present a construction of ZKL based on zero-knowledge sets and a homomorphic integer commitment scheme. We then discuss why this construction is not as efficient as desired in cloud applications and present an efficient construction of PPAL based on bilinear accumulators and bilinear maps that is provably secure and zero-knowledge.
    08/2014;
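The PPAL entry above describes a three-party model (owner, cloud server, client) without fixing an API. The sketch below is a hypothetical interface shape for that model; all names, types, and signatures are assumptions made for illustration, and the paper's actual construction from bilinear accumulators and bilinear maps is not implemented here.

```python
# Hypothetical interface shape for the PPAL model described above.
# All names, types, and signatures are illustrative assumptions, not the
# paper's API; the bilinear-accumulator construction is not implemented.
from dataclasses import dataclass
from typing import List, Tuple

Digest = bytes        # public digest published by the list owner
ProverState = bytes   # authentication information handed to the cloud server
Proof = bytes         # proof accompanying each query answer


@dataclass
class OrderAnswer:
    first: str         # the queried element that comes earlier in the list
    second: str        # the queried element that comes later
    proof: Proof       # convinces the client without leaking other elements


class PPAL:
    """Privacy-Preserving Authenticated List: interface shape only."""

    def setup(self, elements: List[str]) -> Tuple[Digest, ProverState]:
        """Owner: commit to the list and hand ProverState to the server."""
        raise NotImplementedError

    def query_order(self, state: ProverState, x: str, y: str) -> OrderAnswer:
        """Server: answer 'does x precede y?' together with a proof."""
        raise NotImplementedError

    def verify(self, digest: Digest, answer: OrderAnswer) -> bool:
        """Client: check the answer and proof against the owner's digest."""
        raise NotImplementedError
```

In the model above, integrity means verify accepts only answers consistent with the owner's list, while zero knowledge means the proof reveals nothing beyond the order of the two queried elements.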
  • Esha Ghosh, Olga Ohrimenko, Roberto Tamassia
    ABSTRACT: We introduce a formal model for membership and order queries on privacy-preserving authenticated lists. In this model, the queries are performed on the list stored in the cloud where data integrity and privacy have to be maintained. We then present an efficient construction of privacy-preserving authenticated lists based on bilinear accumulators and bilinear maps, analyze the performance, and prove the integrity and privacy of this construction under widely accepted assumptions.
    05/2014;
  • ABSTRACT: We present a simple, efficient, and secure data-oblivious randomized shuffle algorithm. This is the first secure data-oblivious shuffle that is not based on sorting. Our method can be used to improve previous oblivious storage solutions for network-based outsourcing of data.
    02/2014;
  • Joshua Brown, Olga Ohrimenko, Roberto Tamassia
    ABSTRACT: We consider traffic-update mobile applications that let users learn traffic conditions based on reports from other users. These applications are becoming increasingly popular (e.g., Waze reported 30 million users in 2013) since they aggregate real-time road traffic updates from actual users traveling on the roads. However, the providers of these mobile services have access to such sensitive information as the timestamped locations and movements of their users. In this paper, we describe Haze, a protocol for traffic-update applications that supports the creation of traffic statistics from user reports while protecting the privacy of the users. Haze relies on a small subset of users to jointly aggregate encrypted speed and alert data and report the result to the service provider. We use jury-voting protocols based on a threshold cryptosystem and differential privacy techniques to hide user data from anyone participating in the protocol, while allowing only aggregate information to be extracted and sent to the service provider. We show that Haze is effective in practice by developing a prototype implementation and performing experiments on a real-world dataset of car trajectories.
    09/2013;
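The Haze abstract above combines a threshold cryptosystem with differential privacy so that the provider sees only noisy aggregates. The sketch below illustrates only the differential-privacy step, a Laplace-noised average speed for one road segment; the function name and the parameters epsilon and speed_cap are illustrative assumptions, and the jury-voting and threshold-decryption machinery is omitted.

```python
# Sketch of the differential-privacy step only: release a noisy average of
# clipped speed reports for one road segment. The function name and the
# parameters epsilon and speed_cap are illustrative assumptions; the
# jury-voting / threshold-encryption aggregation is not shown.
import random


def noisy_average_speed(speeds_kmh, epsilon=0.5, speed_cap=150.0):
    """Return a differentially private average speed for one road segment."""
    n = len(speeds_kmh)
    if n == 0:
        return None
    # Clip each report so that a single user's influence is bounded.
    clipped = [min(max(s, 0.0), speed_cap) for s in speeds_kmh]
    true_avg = sum(clipped) / n
    # Changing one clipped report moves the average by at most speed_cap / n,
    # so Laplace noise with scale speed_cap / (n * epsilon) gives epsilon-DP.
    scale = speed_cap / (n * epsilon)
    # The difference of two Exp(1) samples is a standard Laplace(0, 1) draw.
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_avg + noise
```

With only a handful of reports the noise dominates (the scale here is speed_cap / (n * epsilon)); meaningful accuracy requires aggregating over many users, which is exactly the regime such a protocol targets.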
  • Charalampos Papamanthou, Elaine Shi, Roberto Tamassia
    ABSTRACT: We introduce Signatures of Correct Computation (SCC), a new model for verifying dynamic computations in cloud settings. In the SCC model, a trusted source outsources a function f to an untrusted server, along with a public key for that function (to be used during verification). The server can then produce a succinct signature σ vouching for the correctness of the computation of f, i.e., that some result v is indeed the correct outcome of the function f evaluated on some point a. There are two crucial performance properties that we want to guarantee in an SCC construction: (1) verifying the signature should take asymptotically less time than evaluating the function f; and (2) the public key should be efficiently updated whenever the function changes. We construct SCC schemes (satisfying the above two properties) supporting expressive manipulations over multivariate polynomials, such as polynomial evaluation and differentiation. Our constructions are adaptively secure in the random oracle model and achieve optimal updates, i.e., the function's public key can be updated in time proportional to the number of updated coefficients, without performing a linear-time computation (in the size of the polynomial). We also show that signatures of correct computation imply Publicly Verifiable Computation (PVC), a model recently introduced in several concurrent and independent works. Roughly speaking, in the SCC model, any client can verify the signature σ and be convinced of some computation result, whereas in the PVC model only the client that issued a query (or anyone who trusts this client) can verify that the server returned a valid signature (proof) for the answer to the query. Our techniques can be readily adapted to construct PVC schemes with adaptive security, efficient updates and without the random oracle model.
    Proceedings of the 10th theory of cryptography conference on Theory of Cryptography; 03/2013
  • Michael T. Goodrich, Olga Ohrimenko, Roberto Tamassia
    ABSTRACT: We study graph drawing in a cloud-computing context where data is stored externally and processed using a small local working storage. We show that a number of classic graph drawing algorithms can be efficiently implemented in such a framework where the client can maintain privacy while constructing a drawing of her graph.
    Proceedings of the 20th international conference on Graph Drawing; 09/2012
  • ABSTRACT: Searching accounts for one of the most frequently performed computations over the Internet as well as one of the most important applications of outsourced computing, producing results that critically affect users' decision-making behaviors. As such, verifying the integrity of Internet-based searches over vast amounts of web contents is essential. We provide the first solution to this general security problem. We introduce the concept of an authenticated web crawler and present the design and prototype implementation of this new concept. An authenticated web crawler is a trusted program that computes a special "signature" s of a collection of web contents it visits. Subject to this signature, web searches can be verified to be correct with respect to the integrity of their produced results. This signature also allows the verification of complicated queries on web pages, such as conjunctive keyword searches. In our solution, along with the web pages that satisfy any given search query, the search engine also returns a cryptographic proof. This proof, together with the signature s, enables any user to efficiently verify that no legitimate web pages are omitted from the result computed by the search engine, and that no pages that are non-conforming with the query are included in the result. An important property of our solution is that the proof size and the verification time both depend solely on the sizes of the query description and the query result, but not on the number or sizes of the web pages over which the search is performed. Our authentication protocols are based on standard Merkle trees and the more involved bilinear-map accumulators. As we experimentally demonstrate, the prototype implementation of our system gives a low communication overhead between the search engine and the user, and allows for fast verification of the returned results on the user side.
    04/2012;
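The abstract above states that its authentication protocols are based on standard Merkle trees together with bilinear-map accumulators. The sketch below shows just the Merkle-tree half: a single root hash over crawled page digests supports logarithmic-size membership proofs. It is a generic illustration, not the paper's scheme, and the page contents are made up.

```python
# Generic Merkle-tree sketch: one root hash over crawled page digests
# supports logarithmic-size membership proofs. Illustrative only; the paper
# combines Merkle trees with bilinear-map accumulators, which are not shown.
import hashlib


def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def build_tree(leaves):
    """Return all tree levels bottom-up; levels[-1][0] is the root hash."""
    level = [h(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2 == 1:          # duplicate the last node if odd
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels


def prove(levels, index):
    """Collect sibling hashes on the path from leaf `index` to the root."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2 == 1:
            level = level + [level[-1]]
        proof.append((index % 2, level[index ^ 1]))  # (node is right child?, sibling)
        index //= 2
    return proof


def verify(root, leaf, proof):
    node = h(leaf)
    for node_is_right, sibling in proof:
        node = h(sibling + node) if node_is_right else h(node + sibling)
    return node == root


pages = [b"page0", b"page1", b"page2", b"page3", b"page4"]
levels = build_tree(pages)
root = levels[-1][0]                     # the crawler's compact "signature"
assert verify(root, b"page2", prove(levels, 2))
```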
  • ABSTRACT: We study oblivious storage (OS), a natural way to model privacy-preserving data outsourcing where a client, Alice, stores sensitive data at an honest-but-curious server, Bob. We show that Alice can hide both the content of her data and the pattern in which she accesses her data, with high probability, using a method that achieves O(1) amortized rounds of communication between her and Bob for each data access. We assume that Alice and Bob exchange small messages, of size O(N^{1/c}), for some constant c ≥ 2, in a single round, where N is the size of the data set that Alice is storing with Bob. We also assume that Alice has a private memory of size 2N^{1/c}. These assumptions model real-world cloud storage scenarios, where trade-offs occur between latency, bandwidth, and the size of the client's private memory.
    01/2012;
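To make the bounds in the entry above concrete, the toy calculation below plugs in an arbitrary data-set size N and a few values of the constant c; the numbers are illustrative only.

```python
# Toy calculation of the parameters stated above: per-round message size
# O(N^(1/c)) and private client memory 2*N^(1/c). N and c are arbitrary here.
N = 1_000_000                            # number of outsourced items
for c in (2, 3, 4):
    msg = N ** (1 / c)                   # items per message, up to constants
    print(f"c={c}: message ~{msg:,.0f} items, "
          f"client memory ~{2 * msg:,.0f} items, O(1) rounds per access")
```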
  • ABSTRACT: This paper addresses the problem of energy efficiency balanced with tracking accuracy in wireless sensor networks (WSNs). Specifically, we focus on the issues related to selecting tracking principals, i.e., the nodes with two special tasks: 1) coordinating the activities among the sensors that are detecting the tracked object's locations in time and 2) selecting a node to which the tasks of coordination and data fusion will be handed off when the tracked object exits the sensing area of the current principal. Extending existing results, which base their principal-selection algorithms on the assumption that the target's trajectory is approximated by straight line segments, we consider more general settings in which the direction of the moving target may change continuously. We developed an approach based on particle filters to estimate the target's angular deflection at the time of a handoff, and we considered the tradeoffs between the expensive in-node computations incurred by the particle filters and the imprecision tolerance when selecting subsequent tracking principals. Our experiments demonstrate that the proposed approach yields significant savings in the number of handoffs and the number of unsuccessful transfers in comparison with previous approaches.
    IEEE Transactions on Vehicular Technology 01/2012; 61(7):3240-3254. · 2.06 Impact Factor
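The abstract above uses particle filters to estimate the target's angular deflection at handoff time. The sketch below is a generic bootstrap particle filter over the target's heading, driven by noisy position fixes; the noise parameters and the function name are assumptions, and it is not the paper's in-node algorithm.

```python
# Generic bootstrap particle filter over a target's heading, driven by noisy
# position fixes. Parameter values and the function name are made-up
# assumptions; this is not the paper's in-node algorithm.
import math
import random


def wrap(angle):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(angle), math.cos(angle))


def heading_particle_filter(positions, n_particles=200,
                            process_std=0.15, obs_std=0.35):
    """Return a filtered heading estimate after each new position fix."""
    particles = [random.uniform(-math.pi, math.pi) for _ in range(n_particles)]
    estimates = []
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        observed = math.atan2(y1 - y0, x1 - x0)   # noisy heading measurement
        # Predict: each particle's heading drifts as a random walk.
        particles = [wrap(p + random.gauss(0.0, process_std)) for p in particles]
        # Update: weight particles by a Gaussian likelihood of the angular error.
        weights = [math.exp(-0.5 * (wrap(observed - p) / obs_std) ** 2)
                   for p in particles]
        total = sum(weights)
        if total == 0.0:                          # degenerate weights: reset
            weights, total = [1.0] * n_particles, float(n_particles)
        weights = [w / total for w in weights]
        # Resample so that particles concentrate on likely headings.
        particles = random.choices(particles, weights=weights, k=n_particles)
        # Estimate: circular mean of the particle cloud.
        estimates.append(math.atan2(sum(math.sin(p) for p in particles),
                                    sum(math.cos(p) for p in particles)))
    return estimates
```

The change between successive heading estimates approximates the angular deflection that the abstract uses when choosing the next tracking principal.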
  • ABSTRACT: We address the problem of efficient detection of destination-related motion trends in Wireless Sensor Networks (WSN), where tracking is done in a collaborative manner among the sensor nodes participating in location detection. In addition to determining a single location, applications may need to detect whether certain properties hold for (a portion of) the entire trajectory. Transmitting the sequence of (location, time) values to a dedicated sink and relying on the sink to detect the validity of the desired properties is a brute-force approach that generates substantial communication overhead. We present an in-network distributed algorithm for efficient detection of the Continuously Moving Towards predicate with respect to a given destination that is either a point or a region with a polygonal boundary. Our experiments demonstrate that the proposed approach yields substantial savings compared to the brute-force one.
    Mobile Data Management (MDM), 2012 IEEE 13th International Conference on; 01/2012
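The predicate named in the entry above can be stated very simply for a point destination: the distance from successive reported locations to the destination should not increase. The sketch below is a centralized, illustrative check of that condition; the paper's contribution is the distributed in-network evaluation, and it also handles polygonal destination regions, neither of which is shown.

```python
# Illustrative, centralized check of a "continuously moving towards"
# predicate for a point destination: the distance from each reported
# location to the destination must not increase (beyond a small tolerance).
import math


def moving_towards(locations, destination, tolerance=0.0):
    """True if the trajectory's distance to `destination` never increases."""
    dx, dy = destination
    dists = [math.hypot(x - dx, y - dy) for x, y in locations]
    return all(b <= a + tolerance for a, b in zip(dists, dists[1:]))


# Example: a track heading toward (10, 10), then one drifting away.
print(moving_towards([(0, 0), (3, 2), (6, 5), (8, 8)], (10, 10)))   # True
print(moving_towards([(0, 0), (3, 2), (2, 1)], (10, 10)))           # False
```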
  • ABSTRACT: Oblivious RAM simulation is a method for achieving confidentiality and privacy in cloud computing environments. It involves obscuring the access pattern to remote storage so that the manager of that storage cannot infer information about its contents. Existing solutions typically achieve this goal with small amortized overheads, but can nevertheless exhibit large variations in access times, depending on when accesses occur. In this paper, we show how to de-amortize oblivious RAM simulations, so that each access takes a worst-case bounded amount of time.
    07/2011;
  • ABSTRACT: We study the problem of providing privacy-preserving access to an outsourced honest-but-curious data repository for a group of trusted users. We show that such privacy-preserving data access is possible using a combination of probabilistic encryption, which directly hides data values, and stateless oblivious RAM simulation, which hides the pattern of data accesses. We give simulations that have only an O(log n) amortized time overhead for simulating a RAM algorithm, A, that has a memory of size n, using a scheme that is data-oblivious with very high probability assuming the simulation has access to a private workspace of size O(n^ν), for any given fixed constant ν > 0. This simulation makes use of pseudorandom hash functions and is based on a novel hierarchy of cuckoo hash tables that all share a common stash. We also provide results from an experimental simulation of this scheme, showing its practicality. In addition, in a result that may be of some theoretical interest, we also show that one can eliminate the dependence on pseudorandom hash functions in our simulation while having the overhead rise to be O(log² n).
    Computing Research Repository - CORR. 05/2011;
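The abstract above builds on a hierarchy of cuckoo hash tables that share a common stash. The sketch below is a single, generic cuckoo table with a small stash, meant only to show the lookup and eviction pattern; the constants and hash choices are assumptions, and none of the ORAM machinery is included.

```python
# Generic cuckoo hash table with a small stash, the building block named
# above (the paper arranges many such tables in a hierarchy sharing one
# stash). Constants and hash choices are illustrative assumptions.
import random


class CuckooTable:
    def __init__(self, capacity, stash_size=4, max_kicks=32):
        self.capacity = capacity
        self.slots1 = [None] * capacity   # positions addressed by h1
        self.slots2 = [None] * capacity   # positions addressed by h2
        self.stash = []                   # overflow items that could not be placed
        self.stash_size = stash_size
        self.max_kicks = max_kicks
        self.seed1, self.seed2 = random.random(), random.random()

    def _h1(self, key):
        return hash((self.seed1, key)) % self.capacity

    def _h2(self, key):
        return hash((self.seed2, key)) % self.capacity

    def get(self, key):
        """Probe both candidate slots and the whole stash, then answer."""
        found = None
        for slots, idx in ((self.slots1, self._h1(key)),
                           (self.slots2, self._h2(key))):
            entry = slots[idx]
            if entry is not None and entry[0] == key:
                found = entry[1]
        for k, v in self.stash:
            if k == key:
                found = v
        return found

    def put(self, key, value):
        """Insert by alternating evictions; spill into the stash if needed."""
        item = (key, value)               # no duplicate-key handling: a sketch
        for _ in range(self.max_kicks):
            i1 = self._h1(item[0])
            if self.slots1[i1] is None:
                self.slots1[i1] = item
                return True
            item, self.slots1[i1] = self.slots1[i1], item   # evict occupant
            i2 = self._h2(item[0])
            if self.slots2[i2] is None:
                self.slots2[i2] = item
                return True
            item, self.slots2[i2] = self.slots2[i2], item   # evict again
        if len(self.stash) < self.stash_size:
            self.stash.append(item)       # the displaced item goes to the stash
            return True
        return False                      # would trigger a rebuild in practice
```

A lookup always reads both candidate slots and scans the entire stash, so the probed locations depend only on the key's hashes and not on where (or whether) the item is stored, which is what makes cuckoo tables attractive inside oblivious simulations.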
  • Charalampos Papamanthou, Elaine Shi, Roberto Tamassia
    IACR Cryptology ePrint Archive. 01/2011; 2011:587.
  • ABSTRACT: This work addresses the problem of geographic routing in the presence of holes or voids in wireless sensor networks. We postulate that, once the boundary of the hole has been established, relying on the existing algorithms for bypassing it may cause severe depletion of the energy reserves among the nodes at (or near) that boundary. This, in turn, may soon render some of those nodes useless for any routing (and/or sensing) purposes, thereby effectively enlarging the pre-existing hole. To extend the lifetime of the nodes along the boundary of a given hole, we propose two heuristic approaches that aim to relieve some of the routing load of the boundary nodes. To that end, some of the routes that would otherwise bypass the hole along its boundary instead start to deviate from their original path farther from the hole. Our experiments demonstrate that the proposed approaches not only increase the lifetime of the nodes along the boundary of a given hole, but also yield a more uniform depletion of the energy reserves in its vicinity.
    Proceedings of the Global Communications Conference, GLOBECOM 2011, 5-9 December 2011, Houston, Texas, USA; 01/2011
  • ABSTRACT: We study the design of protocols for set-operation verification, namely the problem of cryptographically checking the correctness of outsourced set operations performed by an untrusted server over a dynamic collection of sets that are owned (and updated) by a trusted source. We present new authenticated data structures that allow any entity to publicly verify a proof attesting to the correctness of primitive set operations such as intersection, union, subset, and set difference. Based on a novel extension of the security properties of bilinear-map accumulators as well as on a primitive called an accumulation tree, our protocols achieve optimal verification and proof complexity (i.e., only proportional to the size of the query parameters and the answer), as well as optimal update complexity (i.e., constant), while incurring no extra asymptotic space overhead. The proof construction is also efficient, adding a logarithmic overhead to the computation of the answer of a set-operation query. In contrast, existing schemes entail high communication and verification costs or high storage costs. Applications of interest include efficient verification of keyword search and database queries. The security of our protocols is based on the bilinear q-strong Diffie-Hellman assumption.
    Advances in Cryptology - CRYPTO 2011 - 31st Annual Cryptology Conference, Santa Barbara, CA, USA, August 14-18, 2011. Proceedings; 01/2011
  • ABSTRACT: This article addresses the problem of performing Nearest Neighbor (NN) queries on uncertain trajectories. Even for trajectories known with certainty, the answer to an NN query is time-parameterized due to the continuous nature of the motion. As a consequence of uncertainty, there may be several objects that have a non-zero probability of being a nearest neighbor to a given querying object, and the continuous nature further complicates the semantics of the answer. We capture the impact that the uncertainty of the trajectories has on the semantics of the answer to continuous NN queries, and we propose a tree structure for representing the answers, along with efficient algorithms to compute them. We also address the issue of performing NN queries when the motion of the objects is restricted to road networks. Finally, we formally define and show how to efficiently execute several variants of continuous NN queries. Our experiments demonstrate that the proposed algorithms yield significant performance improvements when compared with the corresponding naïve approaches.
    The VLDB Journal 01/2011; 20:767-791. · 1.40 Impact Factor
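The entry above notes that even without uncertainty the answer to a continuous NN query is time-parameterized. The sketch below illustrates why for the simplest case of linear motion: the squared distance to each candidate is a quadratic in t, so the nearest neighbor can change during the query interval. All motion data is made up, and the paper's uncertain-trajectory machinery is not modeled.

```python
# Why the answer to a continuous NN query is time-parameterized, in the
# simplest exact-trajectory case: with linear motion the squared distance to
# each candidate is a quadratic in t, so the nearest neighbor can change
# within the query interval. All motion data below is made up.

def squared_distance_poly(q_start, q_vel, c_start, c_vel):
    """Coefficients (a, b, c) of |q(t) - cand(t)|^2 = a*t^2 + b*t + c."""
    dx, dy = c_start[0] - q_start[0], c_start[1] - q_start[1]
    vx, vy = c_vel[0] - q_vel[0], c_vel[1] - q_vel[1]
    return (vx * vx + vy * vy, 2 * (dx * vx + dy * vy), dx * dx + dy * dy)


def nearest_over_time(q_start, q_vel, candidates, t_max, steps=100):
    """Sample [0, t_max] and report intervals with the same nearest neighbor."""
    polys = {name: squared_distance_poly(q_start, q_vel, start, vel)
             for name, (start, vel) in candidates.items()}
    timeline, current = [], None
    for i in range(steps + 1):
        t = t_max * i / steps
        nearest = min(polys,
                      key=lambda n: polys[n][0] * t * t + polys[n][1] * t + polys[n][2])
        if nearest != current:
            timeline.append((round(t, 2), nearest))  # the NN changes here
            current = nearest
    return timeline


# Query object moves right; A is close at first, B converges later.
cands = {"A": ((1.0, 0.0), (0.0, 1.0)), "B": ((10.0, 0.0), (-1.5, 0.0))}
print(nearest_over_time((0.0, 0.0), (1.0, 0.0), cands, t_max=6.0))
# Expected shape: [(0.0, 'A'), (2.76, 'B')] -- the answer partitions time.
```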
  • Charalampos Papamanthou, Roberto Tamassia
    IACR Cryptology ePrint Archive. 01/2011; 2011:102.

Publication Stats

7k Citations
46.13 Total Impact Points

Institutions

  • 1970–2014
    • Brown University
      • Department of Computer Science
      Providence, Rhode Island, United States
  • 1999–2003
    • University of California, Irvine
      • Department of Computer Science
      Irvine, CA, United States
    • Technion - Israel Institute of Technology
      Haifa, Haifa District, Israel
  • 2000
    • University of Waterloo
      Waterloo, Ontario, Canada
  • 1993
    • Johns Hopkins University
      • Department of Computer Science
      Baltimore, Maryland, United States
  • 1991
    • University of Texas at Dallas
      • Department of Computer Science
      Richardson, TX, United States
  • 1990
    • Università degli Studi Roma Tre
      Roma, Latium, Italy
  • 1989
    • Sapienza University of Rome
      • Department of Computer Science
      Roma, Latium, Italy
  • 1983–1989
    • The American University of Rome
      Roma, Latium, Italy
  • 1985–1988
    • University of Illinois, Urbana-Champaign
      • Coordinated Science Laboratory
      Urbana, IL, United States
  • 1984
    • University of Rome Tor Vergata
      Roma, Latium, Italy