Publications (330) · 78.76 Total Impact

ABSTRACT: Suppose a client stores n elements in a hash table that is outsourced to an untrusted server. We address the problem of authenticating the hash table operations, where the goal is to design protocols capable of verifying the correctness of queries and updates performed by the server, thus ensuring the integrity of the remotely stored data across its entire update history. Solutions to this authentication problem allow the client to gain trust in the operations performed by a faulty or even malicious server that lies outside the administrative control of the client. We present two novel schemes that implement an authenticated hash table. An authenticated hash table exports the basic hash-table functionality for maintaining a dynamic set of elements, coupled with the ability to provide short cryptographic proofs that a given element is or is not a member of the current set. By employing efficient algorithmic constructs and cryptographic accumulators as the core security primitive, our schemes provide constant proof size, constant verification time, and sublinear query or update time, strictly improving upon previous approaches. Specifically, in our first scheme, which is based on the RSA accumulator, the server is able to construct a (non)membership proof in constant time and perform updates in O(n^ε log n) time for any fixed constant 0 < ε < 1. A variation of this scheme achieves a different trade-off, offering constant update time and O(n^ε) query time. Our second scheme uses an accumulator based on bilinear pairings to achieve O(n^ε) update time at the server while keeping all other complexities constant. A variation of this scheme achieves O(n^ε log n) time for queries and constant update time. An experimental evaluation of both solutions shows their practicality.
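The accumulator machinery can be sketched with a toy RSA accumulator in Python. The modulus, base, and prime-valued elements below are illustrative stand-ins: a real scheme uses a large modulus of unknown factorization and maps set elements to prime representatives.

```python
# Toy RSA accumulator: constant-size membership witnesses.
# WARNING: parameters are illustrative only; in practice the modulus must be
# a large RSA modulus whose factorization nobody knows.
from math import prod

N = 3233 * 5987          # toy modulus (factorization is public here)
g = 2                    # public base

elements = [3, 5, 11]    # elements assumed already mapped to primes

# Accumulator value: g raised to the product of all elements, mod N.
acc = pow(g, prod(elements), N)

def witness(x):
    # Membership witness for x: g raised to the product of the OTHER elements.
    return pow(g, prod(e for e in elements if e != x), N)

def verify(x, w):
    # Verification is a single modular exponentiation: w^x == acc (mod N),
    # so proof size and verification time are constant in the set size.
    return pow(w, x, N) == acc

w = witness(11)
print(verify(11, w))   # True
print(verify(7, w))    # False
```

Updates are where the schemes differ: adding or removing an element changes acc with one exponentiation, but refreshing all precomputed witnesses is what costs the sublinear O(n^ε)-type time in the paper.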
Technical Report: Accountable Storage
ABSTRACT: We introduce Accountable Storage (AS), a framework allowing a client with small local space to outsource n file blocks to an untrusted server and be able (at any point in time after outsourcing) to provably compute how many bits have been discarded by the server. Such protocols offer "provable storage insurance" to a client: in case of a data loss, the client can be compensated with a dollar amount proportional to the damage that has occurred, forcing the server to be more "accountable" for his behavior. The insurance can be captured in the SLA between the client and the server. Although applying existing techniques (e.g., proof-of-storage protocols) could address the AS problem, the related costs of such approaches are prohibitive. Instead, our protocols can provably compute the damage that has occurred through an efficient recovery process for the lost or corrupted file blocks, which requires only sublinear O(δ log n) communication, computation, and local space, where δ is the maximum number of corrupted file blocks that can be tolerated. Our technique is based on an extension of invertible Bloom filters, a data structure used to quickly compute the difference between two sets. Finally, we show how our AS protocol can be integrated with Bitcoin to support automatic compensations proportional to the number of corrupted bits at the server. We also build and evaluate our protocols, showing that they perform well in practice.
Article: Accountable Storage
ABSTRACT: We introduce Accountable Storage (AS), a framework allowing a client with small local space to outsource n file blocks to an untrusted server and be able (at any point in time after outsourcing) to provably compute how many bits have been discarded by the server. Such protocols offer "provable storage insurance" to a client: in case of a data loss, the client can be compensated with a dollar amount proportional to the damage that has occurred, forcing the server to be more "accountable" for his behavior. The insurance can be captured in the SLA between the client and the server. Although applying existing techniques (e.g., proof-of-storage protocols) could address the AS problem, the related costs of such approaches are prohibitive. Instead, our protocols can provably compute the damage that has occurred through an efficient recovery process for the lost or corrupted file blocks, which requires only sublinear O(δ log n) communication, computation, and local space, where δ is the maximum number of corrupted file blocks that can be tolerated. Our technique is based on an extension of invertible Bloom filters, a data structure used to quickly compute the difference between two sets. Finally, we show how our AS protocol can be integrated with Bitcoin to support automatic compensations proportional to the number of corrupted bits at the server. We also build and evaluate our protocols, showing that they perform well in practice.
ABSTRACT: We introduce a formal model for order queries on lists in zero knowledge in the traditional authenticated data structure model. We call this model Privacy-Preserving Authenticated List (PPAL). In this model, the queries are performed on the list stored in the (untrusted) cloud, where data integrity and privacy have to be maintained. To realize an efficient authenticated data structure, we first adapt the consistent data query model. To this end, we introduce a formal model called the Zero-Knowledge List (ZKL) scheme, which generalizes consistent membership queries in zero knowledge to consistent membership and order queries on a totally ordered set in zero knowledge. We present a construction of ZKL based on zero-knowledge sets and a homomorphic integer commitment scheme. We then discuss why this construction is not as efficient as desired in cloud applications and present an efficient construction of PPAL based on bilinear accumulators and bilinear maps, which is provably secure and zero-knowledge.
ABSTRACT: We introduce a formal model for membership and order queries on privacy-preserving authenticated lists. In this model, the queries are performed on the list stored in the cloud, where data integrity and privacy have to be maintained. We then present an efficient construction of privacy-preserving authenticated lists based on bilinear accumulators and bilinear maps, analyze its performance, and prove the integrity and privacy of this construction under widely accepted assumptions.
ABSTRACT: We present a simple, efficient, and secure data-oblivious randomized shuffle algorithm. This is the first secure data-oblivious shuffle that is not based on sorting. Our method can be used to improve previous oblivious storage solutions for network-based outsourcing of data.
ABSTRACT: We consider traffic-update mobile applications that let users learn traffic conditions based on reports from other users. These applications are becoming increasingly popular (e.g., Waze reported 30 million users in 2013) since they aggregate real-time road traffic updates from actual users traveling on the roads. However, the providers of these mobile services have access to such sensitive information as the timestamped locations and movements of their users. In this paper, we describe Haze, a protocol for traffic-update applications that supports the creation of traffic statistics from user reports while protecting the privacy of the users. Haze relies on a small subset of users to jointly aggregate encrypted speed and alert data and report the result to the service provider. We use jury-voting protocols based on a threshold cryptosystem and differential privacy techniques to hide user data from anyone participating in the protocol, while allowing only aggregate information to be extracted and sent to the service provider. We show that Haze is effective in practice by developing a prototype implementation and performing experiments on a real-world dataset of car trajectories.
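The differential-privacy step can be illustrated with the Laplace mechanism applied to an aggregate of speed reports. The threshold decryption performed by the jury is elided here, and the speed bound and privacy budget are illustrative values, not those of the Haze protocol.

```python
# Sketch: release a differentially private sum of clamped speed reports.
import math
import random

SPEED_MAX = 130.0   # km/h clamp; bounds each user's contribution (sensitivity)
EPSILON = 0.5       # privacy budget (illustrative)

def laplace(scale, rng):
    # Inverse-CDF sampling of the Laplace distribution.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_total(speeds, rng):
    # Clamping caps the sensitivity of the sum at SPEED_MAX, so Laplace
    # noise of scale SPEED_MAX / EPSILON gives epsilon-differential privacy.
    clamped = [min(max(s, 0.0), SPEED_MAX) for s in speeds]
    return sum(clamped) + laplace(SPEED_MAX / EPSILON, rng)

rng = random.Random(7)
reports = [92.0, 101.5, 88.0, 120.0]
noisy = private_total(reports, rng)
```

The provider sees only `noisy` (and the report count), from which an average speed can be derived without exposing any single user's report.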
Conference Paper: Signatures of Correct Computation
ABSTRACT: We introduce Signatures of Correct Computation (SCC), a new model for verifying dynamic computations in cloud settings. In the SCC model, a trusted source outsources a function f to an untrusted server, along with a public key for that function (to be used during verification). The server can then produce a succinct signature σ vouching for the correctness of the computation of f, i.e., that some result v is indeed the correct outcome of the function f evaluated on some point a. There are two crucial performance properties that we want to guarantee in an SCC construction: (1) verifying the signature should take asymptotically less time than evaluating the function f; and (2) the public key should be efficiently updatable whenever the function changes. We construct SCC schemes (satisfying the above two properties) supporting expressive manipulations over multivariate polynomials, such as polynomial evaluation and differentiation. Our constructions are adaptively secure in the random oracle model and achieve optimal updates, i.e., the function's public key can be updated in time proportional to the number of updated coefficients, without performing a linear-time computation (in the size of the polynomial). We also show that signatures of correct computation imply Publicly Verifiable Computation (PVC), a model recently introduced in several concurrent and independent works. Roughly speaking, in the SCC model, any client can verify the signature σ and be convinced of some computation result, whereas in the PVC model only the client that issued a query (or anyone who trusts this client) can verify that the server returned a valid signature (proof) for the answer to the query. Our techniques can be readily adapted to construct PVC schemes with adaptive security, efficient updates, and without the random oracle model.
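The polynomial-evaluation case rests on the identity f(x) - f(a) = (x - a) * q(x): the server supplies the quotient q as a witness, and the verifier checks this identity at a secret point hidden in the exponent of a bilinear group. The sketch below checks the same identity directly over the integers, with all of the pairing machinery elided.

```python
# Verifying v = f(a) via the quotient-polynomial identity
# f(x) - v = (x - a) * q(x), which holds iff v is the correct evaluation.

def poly_eval(coeffs, x):
    # Horner evaluation; coeffs[i] is the coefficient of x**i.
    v = 0
    for c in reversed(coeffs):
        v = v * x + c
    return v

def quotient(coeffs, a):
    # Synthetic division of f(x) by (x - a); returns the coefficients of q.
    q, carry = [], 0
    for c in reversed(coeffs):
        carry = carry * a + c
        q.append(carry)
    q.pop()                      # the final carry is the remainder f(a)
    return list(reversed(q))

f = [5, 0, 2, 1]                 # f(x) = x^3 + 2x^2 + 5
a = 3
v = poly_eval(f, a)              # claimed result, 50
q = quotient(f, a)               # server's witness
s = 11                           # stand-in for the secret evaluation point
assert poly_eval(f, s) - v == (s - a) * poly_eval(q, s)
```

In the actual scheme the verifier never evaluates f itself; it only checks the identity at the hidden point using precomputed group elements, which is what makes verification faster than recomputation.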
ABSTRACT: We consider the problem of streaming verifiable computation, where both a verifier and a prover observe a stream of n elements x1, x2, …, xn and the verifier can later delegate some computation over the stream to the prover. The prover must return the output of the computation, along with a cryptographic proof to be used for verifying the correctness of the output. Due to the nature of the streaming setting, the verifier can only keep small local state (e.g., logarithmic), which must be updatable in a streaming manner and with no interaction with the prover. Such constraints make the problem particularly challenging and rule out applying existing verifiable computation schemes. We propose streaming authenticated data structures, a model that enables efficient verification of data structure queries on a stream. Compared to previous work, we achieve an exponential improvement in the prover's running time: while previous solutions have linear prover complexity (in the size of the stream), even for queries executing in sublinear time (e.g., set membership), we propose a scheme with O(log M log n) prover complexity, where n is the size of the stream and M is the size of the universe of elements. Our schemes support a series of expressive queries, such as (non)membership, successor, range search, and frequency queries, over an ordered universe and even in higher dimensions. The central idea of our construction is a new authentication tree, called a generalized hash tree. We instantiate our generalized hash tree with a hash function based on lattice assumptions, showing that it enjoys suitable algebraic properties that traditional Merkle trees lack. We exploit such properties to achieve our results.
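The algebraic property the construction exploits can be shown with a toy lattice-style hash of the form h(x) = A·x mod q on small-norm vectors, which is additively homomorphic, unlike the SHA-style hashes in ordinary Merkle trees. The dimensions and modulus below are illustrative, far below cryptographic size.

```python
# Toy Ajtai-style lattice hash: h(x) = A @ x mod q.
# Its additive homomorphism h(x + y) = h(x) + h(y) mod q lets tree digests
# be updated algebraically; a SHA-based Merkle hash has no such structure.
import random

q, rows, cols = 97, 3, 8
rng = random.Random(1)
A = [[rng.randrange(q) for _ in range(cols)] for _ in range(rows)]

def H(x):
    # Matrix-vector product mod q, row by row.
    return tuple(sum(a * xi for a, xi in zip(row, x)) % q for row in A)

x = [1, 0, 2, 1, 0, 1, 1, 0]      # "short" (small-norm) input vectors
y = [0, 1, 1, 0, 2, 0, 1, 1]
xy = [a + b for a, b in zip(x, y)]
assert H(xy) == tuple((a + b) % q for a, b in zip(H(x), H(y)))
```

Collision resistance of such hashes (for short inputs) reduces to worst-case lattice problems; the homomorphism is what allows the streaming prover to avoid recomputing whole paths.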
Chapter: Authenticating Email Search Results
ABSTRACT: Alice uses a web mail service and searches through her emails by keywords and dates. How can Alice be sure that the search results she gets contain all the relevant emails she received in the past? We consider this problem and provide a solution where Alice sends to the server authentication information for every new email. In response to a query, the server augments the results with a cryptographic proof computed using the authentication information. Alice uses the proof and a locally stored cryptographic digest to verify the correctness of the result. Our method adds a small overhead to the usual interaction between the email client and server.
Conference Paper: Graph Drawing in the Cloud: Privately Visualizing Relational Data Using Small Working Storage
ABSTRACT: We study graph drawing in a cloud-computing context where data is stored externally and processed using a small local working storage. We show that a number of classic graph drawing algorithms can be efficiently implemented in such a framework, where the client can maintain privacy while constructing a drawing of her graph.
ABSTRACT: This paper addresses the problem of balancing energy efficiency with tracking accuracy in wireless sensor networks (WSNs). Specifically, we focus on the issues related to selecting tracking principals, i.e., the nodes with two special tasks: 1) coordinating the activities among the sensors that are detecting the tracked object's locations over time, and 2) selecting a node to which the tasks of coordination and data fusion will be handed off when the tracked object exits the sensing area of the current principal. Extending existing results, which base principal selection on the assumption that the target's trajectory is approximated with straight line segments, we consider the more general setting of (possibly) continuous changes in the direction of the moving target. We develop an approach based on particle filters to estimate the target's angular deflection at the time of a handoff, and we consider the trade-offs between the expensive in-node computations incurred by the particle filters and the imprecision tolerance when selecting subsequent tracking principals. Our experiments demonstrate that the proposed approach yields significant savings in the number of handoffs and the number of unsuccessful transfers in comparison with previous approaches.
ABSTRACT: We consider the problem of verifying the correctness and completeness of the result of a keyword search. We introduce the concept of an authenticated web crawler and present its design and prototype implementation. An authenticated web crawler is a trusted program that computes a specially crafted signature over the web contents it visits. This signature enables (i) the verification of common Internet queries on web pages, such as conjunctive keyword searches—this guarantees that the output of a conjunctive keyword search is correct and complete; and (ii) the verification of the content returned by such Internet queries—this guarantees that web data is authentic and has not been maliciously altered since the computation of the signature by the crawler. In our solution, the search engine returns a cryptographic proof of the query result. Both the proof size and the verification time are proportional only to the sizes of the query description and the query result, and do not depend on the number or sizes of the web pages over which the search is performed. As we experimentally demonstrate, the prototype implementation of our system provides a low communication overhead between the search engine and the user, and fast verification of the returned results by the user.
ABSTRACT: Searching accounts for one of the most frequently performed computations over the Internet as well as one of the most important applications of outsourced computing, producing results that critically affect users' decision-making behaviors. As such, verifying the integrity of Internet-based searches over vast amounts of web contents is essential. We provide the first solution to this general security problem. We introduce the concept of an authenticated web crawler and present the design and prototype implementation of this new concept. An authenticated web crawler is a trusted program that computes a special "signature" s of a collection of web contents it visits. Subject to this signature, web searches can be verified to be correct with respect to the integrity of their produced results. This signature also allows the verification of complicated queries on web pages, such as conjunctive keyword searches. In our solution, along with the web pages that satisfy any given search query, the search engine also returns a cryptographic proof. This proof, together with the signature s, enables any user to efficiently verify that no legitimate web pages are omitted from the result computed by the search engine, and that no pages that are non-conforming with the query are included in the result. An important property of our solution is that the proof size and the verification time both depend solely on the sizes of the query description and the query result, but not on the number or sizes of the web pages over which the search is performed. Our authentication protocols are based on standard Merkle trees and the more involved bilinear-map accumulators. As we experimentally demonstrate, the prototype implementation of our system gives a low communication overhead between the search engine and the user, and allows for fast verification of the returned results on the user side.
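The Merkle-tree half of the protocol can be sketched as follows: the crawler signs the root of a hash tree over the indexed pages, and a logarithmic-size path then authenticates any single page. (The bilinear-map accumulators, which are what make proof sizes independent of the corpus, are not shown.)

```python
# Merkle tree over a power-of-two number of "pages", with path proofs.
import hashlib

def h(b):
    return hashlib.sha256(b).digest()

def build(leaves):
    # Returns all tree levels, leaf level first; assumes len is a power of two.
    level = [h(x) for x in leaves]
    tree = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree

def prove(tree, i):
    # Sibling hashes along the path from leaf i to the root.
    path = []
    for level in tree[:-1]:
        sib = i ^ 1
        path.append((level[sib], sib < i))   # flag: sibling is on the left
        i //= 2
    return path

def verify(root, leaf, path):
    cur = h(leaf)
    for sib, sib_is_left in path:
        cur = h(sib + cur) if sib_is_left else h(cur + sib)
    return cur == root

pages = [b"page0", b"page1", b"page2", b"page3"]
tree = build(pages)
root = tree[-1][0]                # this root is what the crawler would sign
assert verify(root, b"page2", prove(tree, 2))
assert not verify(root, b"tampered", prove(tree, 2))
```

A proof here is O(log n) hashes per page; the accumulator layer in the paper is what compresses completeness proofs for whole result sets.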
Conference Paper: Motion Trends Detection in Wireless Sensor Networks
ABSTRACT: We address the problem of efficient detection of destination-related motion trends in Wireless Sensor Networks (WSNs), where tracking is done in a collaborative manner among the sensor nodes participating in location detection. In addition to determining a single location, applications may need to detect whether certain properties hold for (a portion of) the entire trajectory. Transmitting the sequence of (location, time) values to a dedicated sink and relying on the sink to check the validity of the desired properties is a brute-force approach that generates a lot of communication overhead. We present an in-network distributed algorithm for efficient detection of the Continuously Moving Towards predicate with respect to a given destination that is either a point or a region with a polygonal boundary. Our experiments demonstrate that the proposed approaches yield substantial savings when compared to the brute-force one.
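For a point destination, the predicate amounts to checking that successive distances to the destination are non-increasing, which each node can evaluate locally instead of forwarding raw (location, time) pairs to the sink. The sketch below illustrates that local check; for a polygonal region, the distance to the nearest boundary point would be used instead, and the tolerance parameter is an illustrative addition.

```python
# Local check of a "Continuously Moving Towards" predicate for a point
# destination: the distance to the destination must never increase
# (up to an optional tolerance eps for sensing noise).
import math

def moving_towards(track, dest, eps=0.0):
    d = [math.dist(p, dest) for p in track]
    return all(d[i + 1] <= d[i] + eps for i in range(len(d) - 1))

dest = (0.0, 0.0)
approach = [(9.0, 0.0), (6.0, 1.0), (3.0, 0.5), (1.0, 0.0)]
detour   = [(9.0, 0.0), (6.0, 1.0), (8.0, 4.0)]
print(moving_towards(approach, dest))  # True
print(moving_towards(detour, dest))    # False
```

In the in-network setting, only a predicate violation (or a final confirmation) needs to be reported, which is the source of the communication savings.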
Article: Practical oblivious storage
ABSTRACT: We study oblivious storage (OS), a natural way to model privacy-preserving data outsourcing where a client, Alice, stores sensitive data at an honest-but-curious server, Bob. We show that Alice can hide both the content of her data and the pattern in which she accesses her data, with high probability, using a method that achieves O(1) amortized rounds of communication between her and Bob for each data access. We assume that Alice and Bob exchange small messages, of size O(N^{1/c}) for some constant c ≥ 2, in a single round, where N is the size of the data set that Alice is storing with Bob. We also assume that Alice has a private memory of size 2N^{1/c}. These assumptions model real-world cloud storage scenarios, where trade-offs occur between latency, bandwidth, and the size of the client's private memory.
Publication Stats
10k Citations
78.76 Total Impact Points
Top Journals
Institutions

1970–2015

Brown University
 Department of Computer Science
Providence, Rhode Island, United States


1999–2003

University of California, Irvine
 Department of Computer Science
Irvine, California, United States 
University of California, Riverside
 Department of Computer Science and Engineering
Riverside, California, United States 
Università degli Studi Roma Tre
Roma, Latium, Italy 
Technion  Israel Institute of Technology
Haifa, Israel 
Johns Hopkins University
 Department of Computer Science
Baltimore, Maryland, United States


2001

University of Newcastle
 Department of Computer Science
Newcastle, New South Wales, Australia


1986–1996

University of Illinois at Urbana-Champaign
 Coordinated Science Laboratory
Urbana, Illinois, United States 
Sapienza University of Rome
 Department of Computer Science
Roma, Latium, Italy


1995

Newcastle University
Newcastle upon Tyne, England, United Kingdom


1991

National Research Council
Roma, Latium, Italy


1987

University of Rome Tor Vergata
Roma, Latium, Italy


1983

The American University of Rome
Roma, Latium, Italy
