Roberto Tamassia

Brown University, Providence, Rhode Island, United States

Publications (325) · 71.52 Total Impact Points

  • Proceedings of the VLDB Endowment 02/2015; 8(7):750-761. DOI:10.14778/2752939.2752944
  • Algorithmica 01/2015; DOI:10.1007/s00453-014-9968-3 · 0.57 Impact Factor
  • Technical Report: Accountable Storage
    ABSTRACT: We introduce Accountable Storage (AS), a framework allowing a client with small local space to outsource n file blocks to an untrusted server and to provably compute, at any point in time after outsourcing, how many bits have been discarded by the server. Such protocols offer "provable storage insurance" to a client: in case of a data loss, the client can be compensated with a dollar amount proportional to the damage that has occurred, forcing the server to be more "accountable" for its behavior. The insurance can be captured in the SLA between the client and the server. Although applying existing techniques (e.g., proof-of-storage protocols) could address the AS problem, the related costs of such approaches are prohibitive. Instead, our protocols can provably compute the damage that has occurred through an efficient recovery process for the lost or corrupted file blocks, which requires only sublinear O(δ log n) communication, computation, and local space, where δ is the maximum number of corrupted file blocks that can be tolerated. Our technique is based on an extension of invertible Bloom filters, a data structure used to quickly compute the difference between two sets. Finally, we show how our AS protocol can be integrated with Bitcoin to support automatic compensation proportional to the number of corrupted bits at the server. We also build and evaluate our protocols, showing that they perform well in practice.
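    The recovery process builds on invertible Bloom filters (IBFs). Below is a minimal sketch of that underlying data structure with a peeling decoder, following the standard IBF literature; the parameters M and K are illustrative, and this is not the paper's extended construction. The client keeps an IBF of its blocks, the server returns an IBF of the blocks it still holds, and subtracting the two filters recovers the (up to δ) missing blocks with communication proportional to M, not n.

    import hashlib

    M, K = 64, 3  # cells and hashes per item; M should be ~1.5x the tolerated loss

    def cells(key):
        # K pseudo-random cell indices for a key
        return [int(hashlib.sha256(f"{i}:{key}".encode()).hexdigest(), 16) % M
                for i in range(K)]

    def chk(key):
        # Per-key checksum, used to recognize "pure" (single-item) cells
        return int(hashlib.sha256(f"c:{key}".encode()).hexdigest(), 16)

    class IBF:
        def __init__(self):
            self.count = [0] * M
            self.idsum = [0] * M
            self.chksum = [0] * M

        def insert(self, key, sign=1):
            for c in cells(key):
                self.count[c] += sign
                self.idsum[c] ^= key
                self.chksum[c] ^= chk(key)

        def subtract(self, other):
            d = IBF()
            d.count = [a - b for a, b in zip(self.count, other.count)]
            d.idsum = [a ^ b for a, b in zip(self.idsum, other.idsum)]
            d.chksum = [a ^ b for a, b in zip(self.chksum, other.chksum)]
            return d

        def decode(self):
            # Repeatedly peel pure cells to list the keys in the set difference.
            diff, progress = set(), True
            while progress:
                progress = False
                for c in range(M):
                    if abs(self.count[c]) == 1 and self.chksum[c] == chk(self.idsum[c]):
                        key, sign = self.idsum[c], self.count[c]
                        diff.add(key)
                        self.insert(key, -sign)  # zero out the recovered key
                        progress = True
            return diff

    # Client's view of all blocks vs. the server's (block 17 was dropped):
    client, server = IBF(), IBF()
    for b in range(1, 33):
        client.insert(b)
        if b != 17:
            server.insert(b)
    print(client.subtract(server).decode())  # {17}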
  • Esha Ghosh, Olga Ohrimenko, Roberto Tamassia
    ABSTRACT: We introduce a formal model for order queries on lists in zero knowledge in the traditional authenticated data structure model. We call this model Privacy-Preserving Authenticated List (PPAL). In this model, the queries are performed on a list stored in the (untrusted) cloud, where data integrity and privacy have to be maintained. To realize an efficient authenticated data structure, we first adapt a consistent data query model. To this end, we introduce a formal model called the Zero-Knowledge List (ZKL) scheme, which generalizes consistent membership queries in zero knowledge to consistent membership and order queries on a totally ordered set in zero knowledge. We present a construction of ZKL based on a zero-knowledge set and a homomorphic integer commitment scheme. We then discuss why this construction is not as efficient as desired in cloud applications, and present an efficient construction of PPAL based on bilinear accumulators and bilinear maps that is provably secure and zero-knowledge.
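    A minimal sketch of the homomorphic integer commitment ingredient of the ZKL construction is shown below, in the style of a Pedersen commitment. The group parameters are tiny and insecure, chosen only to make the homomorphic property visible; the paper's efficient PPAL construction instead uses bilinear accumulators, which are omitted here.

    import random

    p = 1019        # toy prime with p = 2q + 1 (INSECURE; real schemes use ~2048+ bits)
    q = 509         # prime order of the subgroup
    g, h = 4, 9     # generators of the order-q subgroup (assumed trusted setup)

    def commit(m, r):
        # C(m, r) = g^m * h^r mod p: hiding (via r) and binding (via DL hardness)
        return (pow(g, m, p) * pow(h, r, p)) % p

    m1, m2 = 7, 5   # e.g., the list ranks of two committed elements
    r1, r2 = random.randrange(q), random.randrange(q)

    # Homomorphism: C(m1, r1) * C(m2, r2) = C(m1 + m2, r1 + r2), which lets a
    # prover work with commitments to sums/differences of hidden ranks (the
    # basis for proving order without revealing positions).
    assert commit(m1, r1) * commit(m2, r2) % p == commit(m1 + m2, r1 + r2)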
  • Esha Ghosh, Olga Ohrimenko, Roberto Tamassia
    ABSTRACT: We introduce a formal model for membership and order queries on privacy-preserving authenticated lists. In this model, the queries are performed on the list stored in the cloud where data integrity and privacy have to be maintained. We then present an efficient construction of privacy-preserving authenticated lists based on bilinear accumulators and bilinear maps, analyze the performance, and prove the integrity and privacy of this construction under widely accepted assumptions.
  • ABSTRACT: We present a simple, efficient, and secure data-oblivious randomized shuffle algorithm. This is the first secure data-oblivious shuffle that is not based on sorting. Our method can be used to improve previous oblivious storage solutions for network-based outsourcing of data.
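    For contrast with the paper's contribution, the sketch below shows the classic sorting-based approach that previous data-oblivious shuffles relied on: tag each item with a random key and run a fixed sorting network (here, bitonic sort) whose compare-exchange sequence is independent of the data. This is the baseline the paper's non-sorting algorithm improves on, not the paper's method.

    import secrets

    def bitonic_sort(a):
        # In-place bitonic sorting network; len(a) must be a power of two.
        # The sequence of compared positions is fixed, hence data-oblivious.
        n = len(a)
        k = 2
        while k <= n:
            j = k // 2
            while j >= 1:
                for i in range(n):
                    l = i ^ j
                    if l > i:
                        ascending = (i & k) == 0
                        if (a[i][0] > a[l][0]) == ascending:
                            a[i], a[l] = a[l], a[i]
                j //= 2
            k *= 2
        return a

    def oblivious_shuffle(items):
        # 64-bit random tags; ties are negligibly likely for modest inputs
        tagged = [(secrets.randbits(64), x) for x in items]
        return [x for _, x in bitonic_sort(tagged)]

    print(oblivious_shuffle(list(range(8))))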
  • Joshua Brown, Olga Ohrimenko, Roberto Tamassia
    ABSTRACT: We consider traffic-update mobile applications that let users learn traffic conditions based on reports from other users. These applications are becoming increasingly popular (e.g., Waze reported 30 million users in 2013) since they aggregate real-time road traffic updates from actual users traveling on the roads. However, the providers of these mobile services have access to sensitive information such as the timestamped locations and movements of their users. In this paper, we describe Haze, a protocol for traffic-update applications that supports the creation of traffic statistics from user reports while protecting the privacy of the users. Haze relies on a small subset of users to jointly aggregate encrypted speed and alert data and report the result to the service provider. We use jury-voting protocols based on a threshold cryptosystem and differential privacy techniques to hide user data from anyone participating in the protocol, while allowing only aggregate information to be extracted and sent to the service provider. We show that Haze is effective in practice by developing a prototype implementation and performing experiments on a real-world dataset of car trajectories.
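    A minimal sketch of the differential-privacy half of this design is shown below: clipped speed reports are aggregated and only a Laplace-noised sum is released. All parameter values are illustrative, and the threshold-cryptosystem layer that hides individual reports from the aggregating users themselves is omitted.

    import math, random

    def laplace(scale):
        # Sample Laplace(0, scale) noise via the inverse CDF
        u = random.random() - 0.5
        return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

    def noisy_average_speed(speeds, epsilon=0.5, max_speed=120.0):
        # Clipping bounds each user's influence on the sum by max_speed,
        # so Laplace(max_speed / epsilon) noise yields epsilon-DP for the sum.
        clipped = [min(max(s, 0.0), max_speed) for s in speeds]
        noisy_sum = sum(clipped) + laplace(max_speed / epsilon)
        return noisy_sum / len(speeds)   # the report count is assumed public

    print(noisy_average_speed([55.0, 60.0, 48.0, 72.0, 65.0]))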
  • Charalampos Papamanthou, Elaine Shi, Roberto Tamassia
    ABSTRACT: We introduce Signatures of Correct Computation (SCC), a new model for verifying dynamic computations in cloud settings. In the SCC model, a trusted source outsources a function f to an untrusted server, along with a public key for that function (to be used during verification). The server can then produce a succinct signature σ vouching for the correctness of the computation of f, i.e., that some result v is indeed the correct outcome of the function f evaluated on some point a. There are two crucial performance properties that we want to guarantee in an SCC construction: (1) verifying the signature should take asymptotically less time than evaluating the function f; and (2) the public key should be efficiently updated whenever the function changes. We construct SCC schemes (satisfying the above two properties) supporting expressive manipulations over multivariate polynomials, such as polynomial evaluation and differentiation. Our constructions are adaptively secure in the random oracle model and achieve optimal updates, i.e., the function's public key can be updated in time proportional to the number of updated coefficients, without performing a linear-time computation (in the size of the polynomial). We also show that signatures of correct computation imply Publicly Verifiable Computation (PVC), a model recently introduced in several concurrent and independent works. Roughly speaking, in the SCC model, any client can verify the signature σ and be convinced of some computation result, whereas in the PVC model only the client that issued a query (or anyone who trusts this client) can verify that the server returned a valid signature (proof) for the answer to the query. Our techniques can be readily adapted to construct PVC schemes with adaptive security, efficient updates and without the random oracle model.
    Proceedings of the 10th Theory of Cryptography Conference (TCC); 03/2013
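    The polynomial constructions rest on the witness identity f(x) - f(a) = (x - a) * q(x). The sketch below checks a claimed evaluation v = f(a) using this identity at a random point; the bilinear-group machinery that makes the check succinct and publicly verifiable in the actual SCC scheme is omitted, so here the verifier evaluates everything in the clear over a prime field.

    import random

    P = 2**61 - 1  # Mersenne prime modulus for exact field arithmetic

    def poly_eval(coeffs, x):
        # Horner evaluation; coeffs[i] is the coefficient of x^i
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % P
        return acc

    def divide_by_linear(coeffs, a):
        # Synthetic division: f(x) = (x - a) * q(x) + r, where r = f(a).
        n = len(coeffs) - 1
        quot, r = [0] * n, coeffs[n]
        for i in range(n - 1, -1, -1):
            quot[i] = r
            r = (coeffs[i] + a * r) % P
        return quot, r

    # Prover: f(x) = 3x^2 + 2x + 5, claims v = f(7) and sends witness q(x)
    f, a = [5, 2, 3], 7
    v = poly_eval(f, a)
    q, r = divide_by_linear(f, a)
    assert r == v

    # Verifier: spot-check f(z) - v == (z - a) * q(z) at a random z
    # (Schwartz-Zippel: a wrong v passes with probability <= deg(f)/P)
    z = random.randrange(P)
    assert (poly_eval(f, z) - v) % P == ((z - a) * poly_eval(q, z)) % P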
  • Michael T. Goodrich, Olga Ohrimenko, Roberto Tamassia
    ABSTRACT: We study graph drawing in a cloud-computing context where data is stored externally and processed using a small local working storage. We show that a number of classic graph drawing algorithms can be efficiently implemented in such a framework where the client can maintain privacy while constructing a drawing of her graph.
    Proceedings of the 20th international conference on Graph Drawing; 09/2012
  • ABSTRACT: This paper addresses the problem of balancing energy efficiency with tracking accuracy in wireless sensor networks (WSNs). Specifically, we focus on the issues related to selecting tracking principals, i.e., the nodes with two special tasks: 1) coordinating the activities among the sensors that are detecting the tracked object's locations over time and 2) selecting a node to which the tasks of coordination and data fusion will be handed off when the tracked object exits the sensing area of the current principal. Extending existing results, which based their principal selection algorithms on the assumption that the target's trajectory is approximated with straight line segments, we consider the more general setting of (possibly) continuous changes in the direction of the moving target. We developed an approach based on particle filters to estimate the target's angular deflection at the time of a handoff, and we considered the tradeoffs between the expensive in-node computations incurred by the particle filters and the imprecision tolerance when selecting subsequent tracking principals. Our experiments demonstrate that the proposed approach yields significant savings in the number of handoffs and the number of unsuccessful transfers in comparison with previous approaches.
    IEEE Transactions on Vehicular Technology 09/2012; 61(7):3240-3254. DOI:10.1109/TVT.2012.2201188 · 2.64 Impact Factor
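    A minimal bootstrap particle filter for estimating a target's heading from noisy bearing readings is sketched below; the particle count, noise levels, and the bearing-only measurement model are illustrative assumptions, not the authors' exact filter.

    import math, random

    N = 500            # particles
    TURN_STD = 0.05    # process noise on heading per step (rad), assumed
    OBS_STD = 0.10     # bearing measurement noise (rad), assumed

    particles = [random.uniform(-math.pi, math.pi) for _ in range(N)]

    def step(observed_bearing):
        global particles
        # Predict: each hypothesized heading drifts by Gaussian process noise.
        particles = [h + random.gauss(0.0, TURN_STD) for h in particles]
        # Weight: Gaussian likelihood of the observation under each particle.
        w = [math.exp(-((observed_bearing - h) ** 2) / (2 * OBS_STD ** 2))
             for h in particles]
        # Resample: concentrate particles on likely headings.
        particles = random.choices(particles, weights=w, k=N)

    for z in [0.00, 0.02, 0.05, 0.10, 0.12]:   # noisy bearing readings
        step(z)

    # Point estimate of the angular deflection: circular mean of the particles.
    est = math.atan2(sum(math.sin(h) for h in particles) / N,
                     sum(math.cos(h) for h in particles) / N)
    print(f"estimated heading: {est:.3f} rad")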
  • ABSTRACT: We consider the problem of verifying the correctness and completeness of the result of a keyword search. We introduce the concept of an authenticated web crawler and present its design and prototype implementation. An authenticated web crawler is a trusted program that computes a specially-crafted signature over the web contents it visits. This signature enables (i) the verification of common Internet queries on web pages, such as conjunctive keyword searches—this guarantees that the output of a conjunctive keyword search is correct and complete; (ii) the verification of the content returned by such Internet queries—this guarantees that web data is authentic and has not been maliciously altered since the computation of the signature by the crawler. In our solution, the search engine returns a cryptographic proof of the query result. Both the proof size and the verification time are proportional only to the sizes of the query description and the query result, but do not depend on the number or sizes of the web pages over which the search is performed. As we experimentally demonstrate, the prototype implementation of our system provides a low communication overhead between the search engine and the user, and fast verification of the returned results by the user.
    Proceedings of the VLDB Endowment 06/2012; 5(10). DOI:10.14778/2336664.2336666
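    Of the two building blocks used in this line of work, the sketch below illustrates the standard Merkle tree half: a logarithmic-size membership proof for one crawled page against a published root hash (the bilinear-map accumulator used for conjunctive queries is omitted).

    import hashlib

    def H(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def build(leaves):
        # Returns all tree levels, from hashed leaves up to the root.
        level = [H(x) for x in leaves]
        levels = [level]
        while len(level) > 1:
            if len(level) % 2:                 # duplicate last node on odd levels
                level = level + [level[-1]]
            level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            levels.append(level)
        return levels

    def prove(levels, index):
        # Sibling hashes (with side flags) along the path from leaf to root.
        path = []
        for level in levels[:-1]:
            if len(level) % 2:
                level = level + [level[-1]]
            sibling = index ^ 1
            path.append((level[sibling], sibling < index))
            index //= 2
        return path

    def verify(root, leaf, path):
        node = H(leaf)
        for sibling, sibling_is_left in path:
            node = H(sibling + node) if sibling_is_left else H(node + sibling)
        return node == root

    pages = [b"page-a", b"page-b", b"page-c", b"page-d"]
    levels = build(pages)
    proof = prove(levels, 2)                         # prove membership of page-c
    print(verify(levels[-1][0], b"page-c", proof))   # True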
  • ABSTRACT: Searching accounts for one of the most frequently performed computations over the Internet as well as one of the most important applications of outsourced computing, producing results that critically affect users' decision-making behaviors. As such, verifying the integrity of Internet-based searches over vast amounts of web contents is essential. We provide the first solution to this general security problem. We introduce the concept of an authenticated web crawler and present the design and prototype implementation of this new concept. An authenticated web crawler is a trusted program that computes a special "signature" s of a collection of web contents it visits. Subject to this signature, web searches can be verified to be correct with respect to the integrity of their produced results. This signature also allows the verification of complicated queries on web pages, such as conjunctive keyword searches. In our solution, along with the web pages that satisfy any given search query, the search engine also returns a cryptographic proof. This proof, together with the signature s, enables any user to efficiently verify that no legitimate web pages are omitted from the result computed by the search engine, and that no pages that are non-conforming with the query are included in the result. An important property of our solution is that the proof size and the verification time both depend solely on the sizes of the query description and the query result, but not on the number or sizes of the web pages over which the search is performed. Our authentication protocols are based on standard Merkle trees and the more involved bilinear-map accumulators. As we experimentally demonstrate, the prototype implementation of our system gives a low communication overhead between the search engine and the user, and allows for fast verification of the returned results on the user side.
  • ABSTRACT: We study oblivious storage (OS), a natural way to model privacy-preserving data outsourcing where a client, Alice, stores sensitive data at an honest-but-curious server, Bob. We show that Alice can hide both the content of her data and the pattern in which she accesses her data, with high probability, using a method that achieves O(1) amortized rounds of communication between her and Bob for each data access. We assume that Alice and Bob exchange small messages, of size O(N^{1/c}), for some constant c ≥ 2, in a single round, where N is the size of the data set that Alice is storing with Bob. We also assume that Alice has a private memory of size 2N^{1/c}. These assumptions model real-world cloud storage scenarios, where trade-offs occur between latency, bandwidth, and the size of the client's private memory.
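    As a worked example of these parameters (with an assumed c = 2 and N = 2^20 outsourced blocks), the per-round message size and the private-memory bound come out to about a thousand blocks each:

    N, c = 2**20, 2
    message_size = round(N ** (1 / c))    # O(N^(1/c)) blocks per round-trip message
    client_memory = 2 * message_size      # 2 * N^(1/c) blocks of private memory
    print(message_size, client_memory)    # 1024 2048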
  • ABSTRACT: We address the problem of efficient detection of destination-related motion trends in Wireless Sensor Networks (WSNs), where tracking is done in a collaborative manner among the sensor nodes participating in location detection. In addition to determining a single location, applications may need to detect whether certain properties hold for (a portion of) an entire trajectory. Transmitting the sequence of (location, time) values to a dedicated sink and relying on the sink to detect the validity of the desired properties is a brute-force approach that generates a lot of communication overhead. We present an in-network distributed algorithm for efficiently detecting the Continuously Moving Towards predicate with respect to a given destination that is either a point or a region with a polygonal boundary. Our experiments demonstrate that the proposed approaches yield substantial savings when compared to the brute-force one.
    2012 IEEE 13th International Conference on Mobile Data Management (MDM); 01/2012
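    A minimal centralized version of the predicate for a point destination is sketched below: the distance to the destination must not increase across consecutive location samples. The names and the slack parameter are illustrative; the paper's contribution is evaluating such a test in-network, among the sensor nodes, rather than at the sink.

    import math

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def moving_towards(samples, dest, eps=0.0):
        # samples: (x, y) fixes in time order; eps > 0 demands strict progress
        return all(dist(b, dest) <= dist(a, dest) - eps
                   for a, b in zip(samples, samples[1:]))

    track = [(10.0, 10.0), (8.5, 9.0), (7.0, 7.5), (5.0, 6.0)]
    print(moving_towards(track, dest=(0.0, 0.0)))   # True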
  • ABSTRACT: This article addresses the problem of performing Nearest Neighbor (NN) queries on uncertain trajectories. Even for certain trajectories, the answer to an NN query is time-parameterized due to the continuous nature of the motion. As a consequence of uncertainty, there may be several objects that have a non-zero probability of being a nearest neighbor to a given querying object, and the continuous nature of the motion further complicates the semantics of the answer. We capture the impact that the uncertainty of the trajectories has on the semantics of the answer to continuous NN queries, and we propose a tree structure for representing the answers, along with efficient algorithms to compute them. We also address the issue of performing NN queries when the motion of the objects is restricted to road networks. Finally, we formally define and show how to efficiently execute several variants of continuous NN queries. Our experiments demonstrate that the proposed algorithms yield significant performance improvements when compared with the corresponding naïve approaches.
    The VLDB Journal 10/2011; 20:767-791. DOI:10.1007/s00778-011-0249-3 · 1.70 Impact Factor
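    The semantics can be illustrated with a small Monte Carlo experiment: if each object's location at a query time is known only up to an uncertainty disk, several objects end up with non-zero probability of being the nearest neighbor. The uniform-disk uncertainty model and the coordinates below are assumptions for illustration, not the paper's algorithms.

    import math, random

    def sample_in_disk(center, r):
        # Uniform sample over a disk of radius r around center
        a = random.uniform(0.0, 2.0 * math.pi)
        d = r * math.sqrt(random.random())
        return (center[0] + d * math.cos(a), center[1] + d * math.sin(a))

    def nn_probabilities(query, centers, r, trials=20000):
        wins = [0] * len(centers)
        for _ in range(trials):
            pts = [sample_in_disk(c, r) for c in centers]
            d = [math.hypot(p[0] - query[0], p[1] - query[1]) for p in pts]
            wins[d.index(min(d))] += 1
        return [w / trials for w in wins]

    # Two uncertain objects near the query: both have non-zero NN probability.
    print(nn_probabilities(query=(0, 0), centers=[(3.0, 0.0), (3.5, 0.5)], r=1.0))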
  • ABSTRACT: Oblivious RAM simulation is a method for achieving confidentiality and privacy in cloud computing environments. It involves obscuring the access patterns to a remote storage so that the manager of that storage cannot infer information about its contents. Existing solutions typically achieve this goal with small amortized overheads, but can nevertheless exhibit huge variations in access times, depending on when they occur. In this paper, we show how to de-amortize oblivious RAM simulations, so that each access takes a worst-case bounded amount of time.
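    The generic idea behind de-amortization is incremental (global) rebuilding: instead of letting one unlucky operation pay for an entire O(n) reorganization, a constant amount of rebuilding work is folded into every operation. The sketch below applies that idea to a growable array, only to illustrate the worst-case-versus-amortized distinction; it is not the paper's ORAM-specific construction.

    class DeamortizedVector:
        # Append-only vector with worst-case O(1) appends, vs. the usual
        # amortized-O(1) doubling array that occasionally does an O(n) copy.
        def __init__(self, cap=4):
            self.buf = [None] * cap       # active fixed-capacity buffer
            self.n = 0
            self.next_buf = None          # future buffer, filled incrementally
            self.migrated = 0

        def append(self, x):
            # Start rebuilding once the active buffer is half full.
            if self.next_buf is None and self.n >= len(self.buf) // 2:
                self.next_buf = [None] * (2 * len(self.buf))
                self.migrated = 0
            self.buf[self.n] = x
            if self.next_buf is not None:
                self.next_buf[self.n] = x          # mirror the new element
                for _ in range(2):                 # constant migration work
                    if self.migrated < self.n:
                        self.next_buf[self.migrated] = self.buf[self.migrated]
                        self.migrated += 1
            self.n += 1
            if self.n == len(self.buf):            # rebuild finished: swap
                self.buf, self.next_buf = self.next_buf, None

        def __getitem__(self, i):
            return self.buf[i]

    v = DeamortizedVector()
    for i in range(100):
        v.append(i)
    print(v[0], v[99])   # 0 99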

Publication Stats

9k Citations
71.52 Total Impact Points

Institutions

  • 1988–2015
    • Brown University
      • Department of Computer Science
      Providence, Rhode Island, United States
  • 1999–2003
    • University of California, Irvine
      • Department of Computer Science
      Irvine, CA, United States
    • Johns Hopkins University
      Baltimore, Maryland, United States
  • 2001
    • University of Newcastle
      • Department of Computer Science
      Newcastle, New South Wales, Australia
  • 1986–1996
    • University of Illinois, Urbana-Champaign
      • Coordinated Science Laboratory
      Urbana, Illinois, United States
  • 1989
    • Sapienza University of Rome
      • Department of Computer Science
      Roma, Latium, Italy
  • 1983
    • The American University of Rome
      Roma, Latium, Italy