Roberto Tamassia

Brown University, Providence, Rhode Island, United States

Publications (330) · 78.76 Total Impact


  • Article · Feb 2015 · Proceedings of the VLDB Endowment
  • ABSTRACT: Suppose a client stores n elements in a hash table that is outsourced to an untrusted server. We address the problem of authenticating the hash table operations, where the goal is to design protocols capable of verifying the correctness of queries and updates performed by the server, thus ensuring the integrity of the remotely stored data across its entire update history. Solutions to this authentication problem allow the client to gain trust in the operations performed by a faulty or even malicious server that lies outside the administrative control of the client. We present two novel schemes that implement an authenticated hash table. An authenticated hash table exports the basic hash-table functionality for maintaining a dynamic set of elements, coupled with the ability to provide short cryptographic proofs that a given element is or is not a member of the current set. By employing efficient algorithmic constructs and cryptographic accumulators as the core security primitive, our schemes provide constant proof size, constant verification time and sublinear query or update time, strictly improving upon previous approaches. Specifically, in our first scheme, which is based on the RSA accumulator, the server is able to construct a (non-)membership proof in constant time and perform updates in O(n^ε) time for any fixed constant ε with 0 < ε < 1. A variation of this scheme achieves a different trade-off, offering constant update time and O(n^ε) query time. Our second scheme uses an accumulator based on bilinear pairings to achieve O(n^ε) update time at the server while keeping all other complexities constant. A variation of this scheme achieves O(n^ε) time for queries and constant update time. An experimental evaluation of both solutions shows their practicality.
    Article · Jan 2015 · Algorithmica
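
A minimal sketch of the RSA-accumulator mechanics behind the first scheme: the set digest is one group element, and a membership witness for x is the accumulation of every other element, so verification is a single exponentiation. The parameters here are toy values for illustration; a real scheme uses a large RSA modulus of unknown factorization and a collision-resistant hash-to-prime mapping of elements.

```python
# Toy RSA accumulator: constant-size membership proofs for a dynamic set.
# Illustrative only: tiny modulus, elements already mapped to distinct primes.

N = 101 * 103    # toy RSA modulus (factors must be unknown/discarded in practice)
g = 3            # public base

def accumulate(primes):
    """Digest of the set: g raised to the product of all element primes."""
    acc = g
    for p in primes:
        acc = pow(acc, p, N)
    return acc

def membership_witness(primes, x):
    """Witness for x: the accumulation of every element except x."""
    return accumulate([p for p in primes if p != x])

def verify(acc, witness, x):
    """Constant-time check: witness^x must equal the set digest."""
    return pow(witness, x, N) == acc

elements = [3, 5, 11, 17]            # elements mapped to distinct primes
acc = accumulate(elements)
w = membership_witness(elements, 11)
assert verify(acc, w, 11)            # 11 is in the set
assert not verify(acc, w, 7)         # a non-member fails the check
```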
  • Technical Report: Accountable Storage
    ABSTRACT: We introduce Accountable Storage (AS), a framework allowing a client with small local space to outsource n file blocks to an untrusted server and be able (at any point in time after outsourcing) to provably compute how many bits have been discarded by the server. Such protocols offer "provable storage insurance" to a client: in case of a data loss, the client can be compensated with a dollar amount proportional to the damage that has occurred, forcing the server to be more "accountable" for its behavior. The insurance can be captured in the SLA between the client and the server. Although applying existing techniques (e.g., proof-of-storage protocols) could address the AS problem, the related costs of such approaches are prohibitive. Instead, our protocols can provably compute the damage that has occurred through an efficient recovery process of the lost or corrupted file blocks, which requires only sublinear O(δ log n) communication, computation and local space, where δ is the maximum number of corrupted file blocks that can be tolerated. Our technique is based on an extension of invertible Bloom filters, a data structure used to quickly compute the difference between two sets. Finally, we show how our AS protocol can be integrated with Bitcoin to support automatic compensations proportional to the number of corrupted bits at the server. We also build and evaluate our protocols, showing that they perform well in practice.
    Technical Report · Dec 2014 (also listed as Article · Dec 2014)
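
A compact sketch of the invertible-Bloom-filter idea the recovery process builds on: each element is XORed into a few cells, two filters can be subtracted cell-wise, and the symmetric difference is recovered by repeatedly "peeling" pure cells. The cell layout and parameters here are illustrative assumptions; the paper's construction adds authentication on top of this primitive.

```python
import hashlib

K = 3    # hash functions per element
M = 40   # number of cells; must comfortably exceed the tolerated difference

def positions(x):
    return [int.from_bytes(hashlib.sha256(f"{i}:{x}".encode()).digest(), "big") % M
            for i in range(K)]

def checksum(x):
    return int.from_bytes(hashlib.sha256(f"chk:{x}".encode()).digest(), "big")

def encode(items):
    """Each cell stores (count, XOR of keys, XOR of key checksums)."""
    cells = [[0, 0, 0] for _ in range(M)]
    for x in items:
        for j in positions(x):
            cells[j][0] += 1
            cells[j][1] ^= x
            cells[j][2] ^= checksum(x)
    return cells

def subtract(a, b):
    return [[ca - cb, ka ^ kb, sa ^ sb]
            for (ca, ka, sa), (cb, kb, sb) in zip(a, b)]

def peel(cells):
    """Recover the symmetric difference by removing 'pure' cells until done."""
    only_a, only_b = set(), set()
    changed = True
    while changed:
        changed = False
        for c, k, s in cells:
            if c in (1, -1) and s == checksum(k):      # pure cell: holds one key
                (only_a if c == 1 else only_b).add(k)
                for j in positions(k):                  # remove k everywhere
                    cells[j][0] -= c
                    cells[j][1] ^= k
                    cells[j][2] ^= checksum(k)
                changed = True
    return only_a, only_b

client = encode({1, 2, 3, 4, 5, 6})
server = encode({1, 2, 3, 4})            # server lost blocks 5 and 6
lost, extra = peel(subtract(client, server))
print(lost, extra)                        # {5, 6} set()
```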
  • Esha Ghosh · Olga Ohrimenko · Roberto Tamassia
    ABSTRACT: We introduce a formal model for order queries on lists in zero knowledge in the traditional authenticated data structure model. We call this model Privacy-Preserving Authenticated List (PPAL). In this model, the queries are performed on the list stored in the (untrusted) cloud, where data integrity and privacy have to be maintained. To realize an efficient authenticated data structure, we first adapt the consistent data query model. To this end, we introduce a formal model called the Zero-Knowledge List (ZKL) scheme, which generalizes consistent membership queries in zero knowledge to consistent membership and order queries on a totally ordered set. We present a construction of ZKL based on zero-knowledge sets and a homomorphic integer commitment scheme. We then discuss why this construction is not as efficient as desired in cloud applications and present an efficient PPAL construction based on bilinear accumulators and bilinear maps that is provably secure and zero-knowledge.
    Article · Aug 2014
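
The ZKL construction combines zero-knowledge sets with a homomorphic integer commitment. Here is a minimal sketch of just that commitment ingredient, in Pedersen style over a toy safe-prime group (all parameters are illustrative and far too small to be secure): commitments hide the committed values, yet multiplying two commitments yields a commitment to the sum, which is the kind of algebraic structure that lets one compare committed list positions without revealing them.

```python
import secrets

# Toy Pedersen-style commitment: hiding (via randomizer r) and additively
# homomorphic. p = 2q + 1 is a safe prime; g generates the order-q subgroup.
p, q = 2039, 1019
g = 4                                   # a quadratic residue, hence of order q
h = pow(g, secrets.randbelow(q), p)     # second base; its dlog must be unknown
                                        # to the committer in a real setup

def commit(m, r):
    return pow(g, m, p) * pow(h, r, p) % p

m1, r1 = 10, secrets.randbelow(q)
m2, r2 = 32, secrets.randbelow(q)
c1, c2 = commit(m1, r1), commit(m2, r2)

# Homomorphism: product of commitments == commitment to the sum of messages.
assert c1 * c2 % p == commit(m1 + m2, (r1 + r2) % q)
```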
  • Esha Ghosh · Olga Ohrimenko · Roberto Tamassia
    ABSTRACT: We introduce a formal model for membership and order queries on privacy-preserving authenticated lists. In this model, the queries are performed on the list stored in the cloud where data integrity and privacy have to be maintained. We then present an efficient construction of privacy-preserving authenticated lists based on bilinear accumulators and bilinear maps, analyze the performance, and prove the integrity and privacy of this construction under widely accepted assumptions.
    Article · May 2014 (also listed as Article · Apr 2014)
  • Olga Ohrimenko · Michael T. Goodrich · Roberto Tamassia · Eli Upfal
    ABSTRACT: We present a simple, efficient, and secure data-oblivious randomized shuffle algorithm. This is the first secure data-oblivious shuffle that is not based on sorting. Our method can be used to improve previous oblivious storage solutions for network-based outsourcing of data.
    Article · Feb 2014
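
Since the paper's contribution is the first oblivious shuffle *not* based on sorting, a useful contrast is the classic sorting-based baseline it improves upon: tag each item with a random key and sort with a data-oblivious sorting network, whose compare-exchange sequence depends only on the array length, never on the data. A minimal sketch of that baseline (bitonic network, power-of-two sizes):

```python
import secrets

def oblivious_shuffle(items):
    """Sorting-based oblivious shuffle: random tags + a bitonic network.
    The sequence of compared positions is fixed by len(items) alone, so an
    observer of memory accesses learns nothing about the resulting permutation.
    Requires a power-of-two length (pad with dummies otherwise)."""
    tagged = [(secrets.randbits(64), x) for x in items]   # random sort keys

    def merge(lo, n, up):
        if n > 1:
            m = n // 2
            for i in range(lo, lo + m):                   # fixed compare pattern
                if (tagged[i] > tagged[i + m]) == up:
                    tagged[i], tagged[i + m] = tagged[i + m], tagged[i]
            merge(lo, m, up)
            merge(lo + m, m, up)

    def sort(lo, n, up):
        if n > 1:
            m = n // 2
            sort(lo, m, True)
            sort(lo + m, m, False)
            merge(lo, n, up)

    sort(0, len(tagged), True)
    return [x for _, x in tagged]

print(oblivious_shuffle(list("abcdefgh")))
```

The O(n log² n) comparisons of this network are exactly the overhead that a sort-free shuffle avoids.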
  • Joshua Brown · Olga Ohrimenko · Roberto Tamassia
    ABSTRACT: We consider traffic-update mobile applications that let users learn traffic conditions based on reports from other users. These applications are becoming increasingly popular (e.g., Waze reported 30 million users in 2013) since they aggregate real-time road traffic updates from actual users traveling on the roads. However, the providers of these mobile services have access to sensitive information such as the timestamped locations and movements of their users. In this paper, we describe Haze, a protocol for traffic-update applications that supports the creation of traffic statistics from user reports while protecting the privacy of the users. Haze relies on a small subset of users to jointly aggregate encrypted speed and alert data and report the result to the service provider. We use jury-voting protocols based on a threshold cryptosystem and differential privacy techniques to hide user data from anyone participating in the protocol while allowing only aggregate information to be extracted and sent to the service provider. We show that Haze is effective in practice by developing a prototype implementation and performing experiments on a real-world dataset of car trajectories.
    Article · Sep 2013
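
A toy sketch of the two ingredients such a protocol combines (not Haze's actual construction, which uses a threshold cryptosystem): users additively secret-share their reports across several aggregators so that no single party sees an individual value, and Laplace noise makes the released aggregate differentially private. All constants and names below are illustrative assumptions.

```python
import math, random, secrets

Q = 2**61 - 1       # modulus for additive secret sharing
AGGREGATORS = 3
EPSILON = 0.5       # differential-privacy budget
SENSITIVITY = 100   # assumed max influence of one report (speed capped at 100)

def share(value, n=AGGREGATORS):
    """Additive secret sharing: any n-1 shares look uniformly random."""
    parts = [secrets.randbelow(Q) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % Q)
    return parts

def laplace(scale):
    """Sample Laplace(0, scale) noise via the inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

reports = [62, 55, 70, 58, 65]                  # sensitive per-user speeds
shares = [share(r) for r in reports]

# Aggregator k sums the k-th share of every user; no single aggregator
# (or the provider) ever sees an individual report.
partial_sums = [sum(s[k] for s in shares) % Q for k in range(AGGREGATORS)]
total = sum(partial_sums) % Q                   # combining reveals only the sum

noisy_avg = total / len(reports) + laplace(SENSITIVITY / (EPSILON * len(reports)))
print(round(noisy_avg, 1))                      # aggregate sent to the provider
```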
  • Charalampos Papamanthou · Elaine Shi · Roberto Tamassia
    ABSTRACT: We introduce Signatures of Correct Computation (SCC), a new model for verifying dynamic computations in cloud settings. In the SCC model, a trusted source outsources a function f to an untrusted server, along with a public key for that function (to be used during verification). The server can then produce a succinct signature σ vouching for the correctness of the computation of f, i.e., that some result v is indeed the correct outcome of the function f evaluated on some point a. There are two crucial performance properties that we want to guarantee in an SCC construction: (1) verifying the signature should take asymptotically less time than evaluating the function f; and (2) the public key should be efficiently updated whenever the function changes. We construct SCC schemes (satisfying the above two properties) supporting expressive manipulations over multivariate polynomials, such as polynomial evaluation and differentiation. Our constructions are adaptively secure in the random oracle model and achieve optimal updates, i.e., the function’s public key can be updated in time proportional to the number of updated coefficients, without performing a linear-time computation (in the size of the polynomial). We also show that signatures of correct computation imply Publicly Verifiable Computation (PVC), a model recently introduced in several concurrent and independent works. Roughly speaking, in the SCC model, any client can verify the signature σ and be convinced of some computation result, whereas in the PVC model only the client that issued a query (or anyone who trusts this client) can verify that the server returned a valid signature (proof) for the answer to the query. Our techniques can be readily adapted to construct PVC schemes with adaptive security, efficient updates and without the random oracle model.
    Conference Paper · Mar 2013
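
A sketch of the algebraic identity that polynomial-evaluation constructions of this kind rest on: v = f(a) exactly when (x - a) divides f(x) - v, so a quotient polynomial w with f(x) - v = w(x)(x - a) certifies the claimed result. Below the identity is checked at a random field point (Schwartz-Zippel); in the actual scheme the same check happens succinctly on commitments, in the exponent.

```python
import secrets

P = 2**61 - 1                     # prime field modulus (a Mersenne prime)

def poly_eval(coeffs, x):
    """Horner evaluation over GF(P); coeffs[i] is the coefficient of x^i."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def witness(coeffs, a):
    """Synthetic division: quotient w(x) with f(x) - f(a) = w(x) * (x - a)."""
    d = len(coeffs) - 1
    w = [0] * d
    w[d - 1] = coeffs[d]
    for i in range(d - 1, 0, -1):
        w[i - 1] = (coeffs[i] + a * w[i]) % P
    return w

f = [5, 0, 3, 2]                  # f(x) = 2x^3 + 3x^2 + 5
a = 7
v = poly_eval(f, a)               # claimed result: f(7) = 838
w = witness(f, a)                 # server-side witness polynomial

r = secrets.randbelow(P)          # verifier's random challenge point
assert (poly_eval(f, r) - v) % P == poly_eval(w, r) * (r - a) % P
```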
  • Charalampos Papamanthou · Elaine Shi · Roberto Tamassia · Ke Yi
    ABSTRACT: We consider the problem of streaming verifiable computation, where both a verifier and a prover observe a stream of n elements x_1, x_2, …, x_n and the verifier can later delegate some computation over the stream to the prover. The prover must return the output of the computation, along with a cryptographic proof to be used for verifying the correctness of the output. Due to the nature of the streaming setting, the verifier can only keep small local state (e.g., logarithmic), which must be updatable in a streaming manner and with no interaction with the prover. Such constraints make the problem particularly challenging and rule out applying existing verifiable computation schemes. We propose streaming authenticated data structures, a model that enables efficient verification of data structure queries on a stream. Compared to previous work, we achieve an exponential improvement in the prover's running time: while previous solutions have linear prover complexity (in the size of the stream), even for queries executing in sublinear time (e.g., set membership), we propose a scheme with O(log M log n) prover complexity, where n is the size of the stream and M is the size of the universe of elements. Our schemes support a series of expressive queries, such as (non-)membership, successor, range search and frequency queries, over an ordered universe and even in higher dimensions. The central idea of our construction is a new authentication tree, called a generalized hash tree. We instantiate our generalized hash tree with a hash function based on lattice assumptions, showing that it enjoys suitable algebraic properties that traditional Merkle trees lack. We exploit such properties to achieve our results.
    Article · Jan 2013
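
A toy sketch of the lattice-style hash that gives the generalized hash tree its algebraic power: H(x) = A·x mod q is linear, so a digest can be updated from the digest of a small delta without rehashing the whole input, which a SHA-based Merkle node cannot do. Dimensions and modulus here are illustrative, far below secure SIS parameters.

```python
import secrets

q = 257              # toy modulus; secure SIS parameters are far larger
n, m = 4, 16         # digest dimension n, input dimension m

A = [[secrets.randbelow(q) for _ in range(m)] for _ in range(n)]  # public matrix

def H(x):
    """Linear hash H(x) = A·x mod q; collision resistance for short inputs
    rests on the Short Integer Solution (SIS) problem."""
    return tuple(sum(a * xi for a, xi in zip(row, x)) % q for row in A)

x = [secrets.randbelow(2) for _ in range(m)]   # current input (short vector)
delta = [0] * m
delta[3] = 1                                    # a one-coordinate update

updated = [(a + b) % q for a, b in zip(x, delta)]

# Linearity: the new digest is the old digest plus H(delta), so a streaming
# verifier can maintain its digest incrementally, with no interaction.
assert H(updated) == tuple((u + v) % q for u, v in zip(H(x), H(delta)))
```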
  • Olga Ohrimenko · Hobart Reynolds · Roberto Tamassia
    ABSTRACT: Alice uses a web mail service and searches through her emails by keywords and dates. How can Alice be sure that the search results she gets contain all the relevant emails she received in the past? We consider this problem and provide a solution where Alice sends to the server authentication information for every new email. In response to a query, the server augments the results with a cryptographic proof computed using the authentication information. Alice uses the proof and a locally-stored cryptographic digest to verify the correctness of the result. Our method adds only a small overhead to the usual interaction between the email client and server.
    Chapter · Jan 2013
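
A minimal Merkle-tree sketch of the pattern in the abstract: the client keeps only a short digest, the server returns results together with authentication paths, and the client verifies locally. The tree shape and helper names are assumptions for illustration, not the chapter's exact construction.

```python
import hashlib

def h(b):
    return hashlib.sha256(b).digest()

def build(leaves):
    """All tree levels, leaves first (leaf count padded to a power of two)."""
    level = [h(x) for x in leaves]
    while len(level) & (len(level) - 1):
        level.append(h(b""))                          # pad with dummy leaves
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, i):
    """Authentication path for leaf i: one sibling hash per level."""
    path = []
    for level in levels[:-1]:
        path.append(level[i ^ 1])
        i //= 2
    return path

def verify(root, leaf, i, path):
    node = h(leaf)
    for sibling in path:
        node = h(node + sibling) if i % 2 == 0 else h(sibling + node)
        i //= 2
    return node == root

emails = [b"mon: lunch?", b"tue: invoice", b"wed: re: lunch", b"thu: report"]
levels = build(emails)
root = levels[-1][0]                   # the only state the client stores

proof = prove(levels, 2)               # server's proof for a search hit
assert verify(root, b"wed: re: lunch", 2, proof)
assert not verify(root, b"forged email", 2, proof)
```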
  • Michael T. Goodrich · Olga Ohrimenko · Roberto Tamassia
    ABSTRACT: We study graph drawing in a cloud-computing context where data is stored externally and processed using a small local working storage. We show that a number of classic graph drawing algorithms can be efficiently implemented in such a framework where the client can maintain privacy while constructing a drawing of her graph.
    Conference Paper · Sep 2012 (also listed as Article · Sep 2012)
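
For flavor, here is one classic drawing algorithm of the kind that could be staged in such a client/cloud framework; this particular in-order tree layout is an illustrative pick, not necessarily among the paper's algorithms. It assigns each node x = its in-order rank and y = its depth.

```python
def layout(tree, node=0, depth=0, pos=None, counter=None):
    """Classic binary-tree drawing: x = in-order rank, y = depth.
    `tree` maps a node to its (left, right) children; absent children are None."""
    if pos is None:
        pos, counter = {}, [0]
    left, right = tree.get(node, (None, None))
    if left is not None:
        layout(tree, left, depth + 1, pos, counter)
    pos[node] = (counter[0], depth)    # visit node in in-order position
    counter[0] += 1
    if right is not None:
        layout(tree, right, depth + 1, pos, counter)
    return pos

#        0
#      /   \
#     1     2
#    / \     \
#   3   4     5
tree = {0: (1, 2), 1: (3, 4), 2: (None, 5)}
print(layout(tree))
# {3: (0, 2), 1: (1, 1), 4: (2, 2), 0: (3, 0), 2: (4, 1), 5: (5, 2)}
```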
  • ABSTRACT: This paper addresses the problem of balancing energy efficiency against tracking accuracy in wireless sensor networks (WSNs). Specifically, we focus on the issues related to selecting tracking principals, i.e., the nodes with two special tasks: 1) coordinating the activities among the sensors that are detecting the tracked object's locations in time; and 2) selecting a node to which the tasks of coordination and data fusion will be handed off when the tracked object exits the sensing area of the current principal. Extending existing results, which base their principal selection algorithms on the assumption that the target's trajectory is approximated by straight line segments, we consider the more general setting of (possibly) continuous changes in the direction of the moving target. We develop an approach based on particle filters to estimate the target's angular deflection at the time of a handoff, and we consider the trade-offs between the expensive in-node computations incurred by the particle filters and the imprecision tolerance when selecting subsequent tracking principals. Our experiments demonstrate that the proposed approach yields significant savings in the number of handoffs and the number of unsuccessful transfers in comparison with previous approaches.
    Article · Sep 2012 · IEEE Transactions on Vehicular Technology
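
A minimal bootstrap particle filter in the spirit of the deflection estimator described above: particles track the target's heading, are propagated with random turn noise, weighted by a noisy bearing observation, and resampled. All models and constants below are illustrative assumptions, not the paper's tuned estimator.

```python
import math, random

N = 500              # number of particles
TURN_STD = 0.15      # assumed process noise on heading (radians)
OBS_STD = 0.30       # assumed bearing-sensor noise (radians)

particles = [random.uniform(-math.pi, math.pi) for _ in range(N)]

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + math.pi) % (2 * math.pi) - math.pi

def step(observed_heading):
    """One predict-weight-resample cycle; returns the heading estimate."""
    global particles
    # Predict: each particle turns by a random amount.
    particles = [p + random.gauss(0, TURN_STD) for p in particles]
    # Weight: Gaussian likelihood of the observed bearing, wrap-aware.
    weights = [math.exp(-wrap(observed_heading - p) ** 2 / (2 * OBS_STD ** 2))
               for p in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample proportionally to weight (multinomial resampling).
    particles = random.choices(particles, weights, k=N)
    # Estimate: circular mean of the particle headings.
    s = sum(math.sin(p) for p in particles)
    c = sum(math.cos(p) for p in particles)
    return math.atan2(s, c)

true_heading, est = 0.8, 0.0
for t in range(20):                       # target gradually turning
    true_heading += 0.05
    est = step(true_heading + random.gauss(0, OBS_STD))
print(round(est, 2), round(true_heading, 2))   # estimate tracks the turn
```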
  • ABSTRACT: We consider the problem of verifying the correctness and completeness of the result of a keyword search. We introduce the concept of an authenticated web crawler and present its design and prototype implementation. An authenticated web crawler is a trusted program that computes a specially-crafted signature over the web contents it visits. This signature enables (i) the verification of common Internet queries on web pages, such as conjunctive keyword searches, guaranteeing that the output of a conjunctive keyword search is correct and complete; and (ii) the verification of the content returned by such Internet queries, guaranteeing that web data is authentic and has not been maliciously altered since the computation of the signature by the crawler. In our solution, the search engine returns a cryptographic proof of the query result. Both the proof size and the verification time are proportional only to the sizes of the query description and the query result, and do not depend on the number or sizes of the web pages over which the search is performed. As we experimentally demonstrate, the prototype implementation of our system provides low communication overhead between the search engine and the user, and fast verification of the returned results by the user.
    Article · Jun 2012 · Proceedings of the VLDB Endowment
  • ABSTRACT: Searching accounts for one of the most frequently performed computations over the Internet as well as one of the most important applications of outsourced computing, producing results that critically affect users' decision-making behaviors. As such, verifying the integrity of Internet-based searches over vast amounts of web contents is essential. We provide the first solution to this general security problem. We introduce the concept of an authenticated web crawler and present the design and prototype implementation of this new concept. An authenticated web crawler is a trusted program that computes a special "signature" s of a collection of web contents it visits. Subject to this signature, web searches can be verified to be correct with respect to the integrity of their produced results. This signature also allows the verification of complicated queries on web pages, such as conjunctive keyword searches. In our solution, along with the web pages that satisfy any given search query, the search engine also returns a cryptographic proof. This proof, together with the signature s, enables any user to efficiently verify that no legitimate web pages are omitted from the result computed by the search engine, and that no pages that are non-conforming with the query are included in the result. An important property of our solution is that the proof size and the verification time both depend solely on the sizes of the query description and the query result, but not on the number or sizes of the web pages over which the search is performed. Our authentication protocols are based on standard Merkle trees and the more involved bilinear-map accumulators. As we experimentally demonstrate, the prototype implementation of our system gives a low communication overhead between the search engine and the user, and allows for fast verification of the returned results on the user side.
    Article · Apr 2012
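
A toy illustration of the completeness idea for single-keyword queries: the crawler authenticates each keyword's complete, sorted result list as one unit, so a user who recomputes the digest over the returned pages detects any omission or injection. This uses an HMAC stand-in where the real system uses publicly verifiable signatures and bilinear-map accumulators (needed for conjunctive queries); the key and index below are hypothetical.

```python
import hashlib, hmac

CRAWLER_KEY = b"demo-signing-key"   # stand-in for a real signing key

def digest(keyword, urls):
    """Crawler: authenticator over a keyword and its complete posting list,
    sorted so the value is independent of the order pages are returned in."""
    msg = keyword.encode() + b"\x00" + b"\x00".join(u.encode() for u in sorted(urls))
    return hmac.new(CRAWLER_KEY, msg, hashlib.sha256).hexdigest()

# The crawler publishes one authenticator per keyword as it crawls.
index = {"merkle": ["a.org", "b.edu", "c.com"], "pairing": ["b.edu"]}
published = {kw: digest(kw, urls) for kw, urls in index.items()}

# The search engine answers a query; the user recomputes the digest to check
# that no page was dropped from, or injected into, the result.
result = ["c.com", "a.org", "b.edu"]
assert digest("merkle", result) == published["merkle"]        # complete
assert digest("merkle", result[:-1]) != published["merkle"]   # omission caught
```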
  • ABSTRACT: We address the problem of efficient detection of destination-related motion trends in wireless sensor networks (WSNs), where tracking is done in a collaborative manner among the sensor nodes participating in location detection. In addition to determining a single location, applications may need to detect whether certain properties hold for (a portion of) the entire trajectory. Transmitting the sequence of (location, time) values to a dedicated sink and relying on the sink to check the validity of the desired properties is a brute-force approach that generates substantial communication overhead. We present an in-network distributed algorithm for efficiently detecting the Continuously Moving Towards predicate with respect to a given destination that is either a point or a region with a polygonal boundary. Our experiments demonstrate that the proposed approaches yield substantial savings when compared to the brute-force one.
    Conference Paper · Jan 2012
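
A small sketch of the predicate itself, under the simplest reading: over a sampled window, the target is continuously moving towards a point destination if its distance to the destination decreases at every step (a polygonal region would use distance to the region instead; the sampling and thresholds here are assumptions).

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def moving_towards(trajectory, destination):
    """True iff the distance to `destination` strictly decreases along the
    sampled trajectory, i.e., the Continuously Moving Towards predicate holds."""
    d = [dist(p, destination) for p in trajectory]
    return all(a > b for a, b in zip(d, d[1:]))

track = [(0, 0), (1, 1), (2, 1), (3, 2)]
print(moving_towards(track, (5, 3)))    # True: every step gets closer
print(moving_towards(track, (0, 5)))    # False: the target veers away
```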
  • ABSTRACT: We study oblivious storage (OS), a natural way to model privacy-preserving data outsourcing where a client, Alice, stores sensitive data at an honest-but-curious server, Bob. We show that Alice can hide both the content of her data and the pattern in which she accesses her data, with high probability, using a method that achieves O(1) amortized rounds of communication between her and Bob for each data access. We assume that Alice and Bob exchange small messages, of size O(N^{1/c}), for some constant c ≥ 2, in a single round, where N is the size of the data set that Alice is storing with Bob. We also assume that Alice has a private memory of size 2N^{1/c}. These assumptions model real-world cloud storage scenarios, where trade-offs occur between latency, bandwidth, and the size of the client's private memory.
    Article · Jan 2012 (also listed as Article · Oct 2011)
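
A sketch of the classic square-root-style technique that this line of work refines: keep a small private stash, probe a dummy cell when the item is already cached, and reshuffle periodically, so within each epoch the server sees one probe per access to distinct, random-looking cells. This is the textbook baseline, simulated locally, not the paper's O(1)-round scheme.

```python
import random

class ToyObliviousStore:
    """Square-root-style oblivious storage sketch: a private stash, dummy
    probes for repeated items, and a reshuffle every ~sqrt(N) accesses."""

    def __init__(self, data):
        self.n = len(data)
        self.epoch_len = max(2, int(self.n ** 0.5))      # stash/epoch ~ sqrt(N)
        self._reshuffle(dict(enumerate(data)))

    def _reshuffle(self, contents):
        perm = list(range(self.n + self.epoch_len))
        random.shuffle(perm)                              # client-side secret permutation
        self.pos = {i: perm[i] for i in range(self.n)}    # secret position map
        self.server = {self.pos[i]: v for i, v in contents.items()}  # "remote" array
        self.dummies = iter(perm[self.n:])                # spare cells for dummy probes
        self.stash, self.accesses = {}, 0

    def read(self, i):
        if i in self.stash:
            self.server.get(next(self.dummies))           # dummy probe hides the repeat
            value = self.stash[i]
        else:
            value = self.server[self.pos[i]]              # one real probe
            self.stash[i] = value
        self.accesses += 1
        if self.accesses == self.epoch_len:               # epoch over: reshuffle
            merged = {j: self.stash.get(j, self.server[self.pos[j]])
                      for j in range(self.n)}
            self._reshuffle(merged)
        return value

store = ToyObliviousStore(["a", "b", "c", "d", "e", "f", "g", "h", "i"])
print([store.read(i) for i in [0, 0, 0, 5, 5, 8, 2, 0]])  # repeats are hidden
```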

Publication Stats

10k Citations
78.76 Total Impact Points

Institutions

  • 1970-2015
    • Brown University
      • Department of Computer Science
      Providence, Rhode Island, United States
  • 1999-2003
    • University of California, Irvine
      • Department of Computer Science
      Irvine, CA, United States
    • University of California, Riverside
      • Department of Computer Science and Engineering
      Riverside, California, United States
    • Università degli Studi Roma Tre
      Roma, Latium, Italy
    • Technion - Israel Institute of Technology
      Haifa, Israel
    • Johns Hopkins University
      • Department of Computer Science
      Baltimore, Maryland, United States
  • 2001
    • University of Newcastle
      • Department of Computer Science
      Newcastle, New South Wales, Australia
  • 1986-1996
    • University of Illinois, Urbana-Champaign
      • Coordinated Science Laboratory
      Urbana, Illinois, United States
    • Sapienza University of Rome
      • Department of Computer Science
      Roma, Latium, Italy
  • 1995
    • Newcastle University
      Newcastle-on-Tyne, England, United Kingdom
  • 1991
    • National Research Council
      Roma, Latium, Italy
  • 1987
    • University of Rome Tor Vergata
      Roma, Latium, Italy
  • 1983
    • The American University of Rome
      Roma, Latium, Italy