Moni Naor’s research while affiliated with Weizmann Institute of Science and other places


Publications (311)


Adversarially Robust Bloom Filters: Monotonicity and Betting
  • Article
  • Full-text available

April 2025 · 6 Reads · Chen Lotan · Moni Naor

A Bloom filter is a probabilistic data structure designed to provide a compact representation of a set S of elements from a large universe U. The trade-off for this succinctness is allowing some errors. The Bloom filter efficiently answers membership queries: given any query x, if x is in S, it must answer 'Yes'; if x is not in S, it should answer 'Yes' only with a small probability (at most ε). Traditionally, the error probability of the Bloom filter is analyzed under the assumption that the query is independent of its internal randomness. However, Naor and Yogev (Crypto 2015) focused on the behavior of this data structure in adversarial settings, where the adversary may choose the queries adaptively. One particular challenge in this direction is to rigorously define the robustness of Bloom filters in this model. In this work, we continue investigating the definitions of success of the adaptive adversary. Specifically, we focus on two notions proposed by Naor and Oved (TCC 2022) and examine the relationships between them. In particular, we highlight the notion of Bet-or-Pass as being stronger than others, such as Monotone-Test Resilience.
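The membership-query behavior described in the abstract can be illustrated with a minimal sketch. This is a generic textbook Bloom filter, not any of the constructions or robustness notions analyzed in the paper; the class name and the parameters m=1024, k=4 are arbitrary choices for the example:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter sketch (illustrative only)."""

    def __init__(self, m: int, k: int):
        self.m = m                  # number of bits
        self.k = k                  # number of hash functions
        self.bits = [False] * m

    def _hashes(self, x: str):
        # Derive k indices from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{x}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, x: str):
        for idx in self._hashes(x):
            self.bits[idx] = True

    def query(self, x: str) -> bool:
        # Answer 'Yes' iff all k bits are set; never errs on members of S.
        return all(self.bits[idx] for idx in self._hashes(x))

bf = BloomFilter(m=1024, k=4)
for word in ["apple", "banana", "cherry"]:
    bf.add(word)

assert bf.query("banana")    # members of S always answer 'Yes'
print(bf.query("durian"))    # usually False; True only with small probability
```

Note how the one-sided error in the abstract shows up directly: `add` only sets bits, so a member can never be rejected, while a non-member is accepted only if all k of its bits happen to collide with set bits.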


Figure 2: Relationship between A, A′, B and C. A red arrow indicates the direction of the edge if the two vertices have an edge in G; a blue arrow, the direction if there is no edge in G; a black arrow, the direction regardless of G. For A and A′, we have a_i → a′_j if i = j, and otherwise a′_j → a_i.
Figure 3: Relationship between x_1, x_2 and the remaining sets.
From Donkeys to Kings in Tournaments

October 2024 · 23 Reads

A tournament is an orientation of a complete graph. A vertex that can reach every other vertex within two steps is called a "king". We study the complexity of finding k kings in a tournament graph. We show that the randomized query complexity of finding k ≤ 3 kings is O(n), and for the deterministic case it takes the same amount of queries (up to a constant) as finding a single king (the best known deterministic algorithm makes O(n^{3/2}) queries). On the other hand, we show that finding k ≥ 4 kings requires Ω(n^2) queries, even in the randomized case. We consider the RAM model for k ≥ 4. We show an algorithm that finds k kings in time O(kn^2), which is optimal for constant values of k. Alternatively, one can also find k ≥ 4 kings in time n^ω (the time for matrix multiplication). We provide evidence that this is optimal for large k by suggesting a fine-grained reduction from a variant of the triangle detection problem.
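The king concept from the abstract can be demonstrated with the classic max-out-degree argument: in any tournament, a vertex of maximum out-degree is a king. The baseline below makes O(n^2) comparisons and is not one of the query-efficient algorithms the paper studies; `find_king` and the `beats` oracle are names invented for this sketch:

```python
import itertools
import random

def find_king(n: int, beats) -> int:
    """Return a king of an n-vertex tournament via max out-degree.

    `beats(u, v)` answers whether the edge is oriented u -> v.
    This baseline makes O(n^2) queries; the paper's query-efficient
    algorithms are not reproduced here.
    """
    outdeg = [0] * n
    for u, v in itertools.combinations(range(n), 2):
        if beats(u, v):
            outdeg[u] += 1
        else:
            outdeg[v] += 1
    # A vertex of maximum out-degree reaches every other vertex in <= 2 steps.
    return max(range(n), key=lambda v: outdeg[v])

# Example: a random tournament on 6 vertices.
rng = random.Random(0)
n = 6
edge = {(u, v): rng.random() < 0.5 for u, v in itertools.combinations(range(n), 2)}
beats = lambda u, v: edge[(u, v)] if (u, v) in edge else not edge[(v, u)]

king = find_king(n, beats)
# Verify the king property: every other vertex is reached in at most two steps.
for v in range(n):
    if v != king and not beats(king, v):
        assert any(beats(king, w) and beats(w, v)
                   for w in range(n) if w not in (king, v))
print(king)
```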

Adjacency Sketches in Adversarial Environments

September 2023 · 10 Reads

An adjacency sketching or implicit labeling scheme for a family F of graphs is a method that defines for any n-vertex graph G ∈ F an assignment of labels to each vertex in G, so that the labels of two vertices tell you whether or not they are adjacent. The goal is to come up with labeling schemes that use as few bits as possible to represent the labels. By using randomness when assigning labels, it is sometimes possible to produce adjacency sketches with much smaller label sizes, but this comes at the cost of introducing some probability of error. Both deterministic and randomized labeling schemes have been extensively studied, as they have applications for distributed data structures and deeper connections to universal graphs and communication complexity. The main question of interest is which graph families have schemes using short labels, usually O(log n) in the deterministic case or constant for randomized sketches. In this work we consider the resilience of probabilistic adjacency sketches against an adversary making adaptive queries to the labels. This differs from the previously analyzed probabilistic setting, which is "one shot". We show that in the adaptive adversarial case the size of the labels is tightly related to the maximal degree of the graphs in F. This results in a stronger characterization compared to what is known in the non-adversarial setting. In more detail, we construct sketches that fail with probability ε for graphs with maximal degree d using 2d·log(1/ε)-bit labels, and show that this is roughly the best that can be done for any specific graph of maximal degree d, e.g., a d-ary tree.
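The idea of a randomized adjacency sketch can be illustrated with a toy construction: give every vertex a random fingerprint and let its label store the fingerprints of its neighbors, so label size scales with the degree. This is only a sketch of the general concept, not the paper's 2d·log(1/ε) construction or its adversarial analysis; `make_sketches` and `adjacent` are names invented for the example:

```python
import random

def make_sketches(adj, bits_per_fp, seed=0):
    """Toy randomized adjacency sketch (illustrative only).

    Each vertex gets a random fingerprint; its label stores its own
    fingerprint plus the fingerprints of its neighbors. False positives
    occur only on fingerprint collisions, with probability roughly
    d * 2**(-bits_per_fp) for maximal degree d.
    """
    rng = random.Random(seed)
    fp = {v: rng.getrandbits(bits_per_fp) for v in adj}
    return {v: (fp[v], {fp[u] for u in adj[v]}) for v in adj}

def adjacent(label_u, label_v):
    # Declare adjacency if either fingerprint appears in the other's label.
    fp_u, nbrs_u = label_u
    fp_v, nbrs_v = label_v
    return fp_v in nbrs_u or fp_u in nbrs_v

# A path 0 - 1 - 2, with maximal degree 2.
adj = {0: [1], 1: [0, 2], 2: [1]}
labels = make_sketches(adj, bits_per_fp=16)

assert adjacent(labels[0], labels[1])   # true edges are always reported
print(adjacent(labels[0], labels[2]))   # false positive only on a collision
```

The one-sided error mirrors the abstract's setting: edges are never missed, while non-edges are misreported only when fingerprints collide, and each label holds one fingerprint per neighbor, consistent with the dependence on maximal degree.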



Private Everlasting Prediction

May 2023 · 10 Reads

A private learner is trained on a sample of labeled points and generates a hypothesis that can be used for predicting the labels of newly sampled points while protecting the privacy of the training set [Kasiviswanathan et al., FOCS 2008]. Research has uncovered that private learners may need to exhibit significantly higher sample complexity than non-private learners, as is the case with, e.g., learning of one-dimensional threshold functions [Bun et al., FOCS 2015, Alon et al., STOC 2019]. We explore prediction as an alternative to learning. Instead of putting forward a hypothesis, a predictor answers a stream of classification queries. Earlier work has considered a private prediction model with just a single classification query [Dwork and Feldman, COLT 2018]. We observe that when answering a stream of queries, a predictor must modify the hypothesis it uses over time, and, furthermore, that it must use the queries for this modification, hence introducing potential privacy risks with respect to the queries themselves. We introduce private everlasting prediction, taking into account the privacy of both the training set and the (adaptively chosen) queries made to the predictor. We then present a generic construction of private everlasting predictors in the PAC model. The sample complexity of the initial training sample in our construction is quadratic (up to polylog factors) in the VC dimension of the concept class. Our construction allows prediction for all concept classes with finite VC dimension, and in particular threshold functions with a constant-size initial training sample, even when considered over infinite domains, whereas it is known that the sample complexity of privately learning threshold functions must grow as a function of the domain size and hence is impossible for infinite domains.


New Algorithms and Applications for Risk-Limiting Audits

May 2023 · 11 Reads

Risk-limiting audits (RLAs) are a significant tool in increasing confidence in the accuracy of elections. They consist of randomized algorithms which check that an election's vote tally, as reported by a vote tabulation system, corresponds to the correct candidates winning. If an initial vote count leads to the wrong election winner, an RLA guarantees to identify the error with high probability over its own randomness. These audits operate by sequentially sampling and examining ballots until they can either confirm the reported winner or identify the true winner. The first part of this work suggests a new generic method, called "Batchcomp", for converting classical (ballot-level) RLAs into ones that operate on batches. As a concrete application of the suggested method, we develop the first ballot-level RLA for the Israeli Knesset elections, and convert it to one which operates on batches. We ran the suggested "Batchcomp" procedure on the results of the 22nd, 23rd and 24th Knesset elections, both with and without errors. The second part of this work suggests a new use-case for RLAs: verifying that a population census leads to the correct allocation of political power to a nation's districts or federal states. We present an adaptation of ALPHA, an existing RLA method, to a method which applies to censuses. Our census-RLA is applicable in nations where parliament seats are allocated to geographical regions in proportion to their population according to a certain class of functions (highest averages). It relies on data from both the census and from an additional procedure which is already conducted in many countries today, called a post-enumeration survey.
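The sequential sample-until-confident behavior of a ballot-level RLA can be sketched with a toy two-candidate ballot-polling test in the spirit of BRAVO. This is not the Batchcomp conversion or the census-RLA from the work above; the function name, the likelihood-ratio form, and the risk limit of 0.05 are example choices:

```python
import math
import random

def ballot_polling_audit(ballots, reported_winner, risk_limit=0.05, seed=0):
    """Toy two-candidate ballot-polling audit (BRAVO-style sketch).

    Sequentially samples ballots without replacement and accumulates a
    log-likelihood ratio against the null hypothesis that the reported
    winner's true share is 1/2 (i.e., the outcome is wrong or tied).
    Confirms the reported winner once the risk drops below `risk_limit`;
    otherwise, falls back to a full recount.
    """
    rng = random.Random(seed)
    order = ballots[:]
    rng.shuffle(order)
    # Alternative hypothesis: the winner's share equals the reported share p.
    p = sum(b == reported_winner for b in ballots) / len(ballots)
    log_ratio = 0.0
    for i, b in enumerate(order, 1):
        if b == reported_winner:
            log_ratio += math.log(2 * p)        # evidence for the reported outcome
        else:
            log_ratio += math.log(2 * (1 - p))  # evidence against it
        if log_ratio >= math.log(1 / risk_limit):
            return ("confirmed", i)  # outcome accepted after i sampled ballots
    return ("full recount", len(order))

ballots = ["A"] * 700 + ["B"] * 300
print(ballot_polling_audit(ballots, "A"))
```

With a wide reported margin, the test typically confirms after examining only a small fraction of the ballots, which is the efficiency that makes RLAs practical; a narrow margin forces more samples, and in the worst case a full recount.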


Citations (74)


... As a result, the adversary can generate valid signatures of the victim user. Recent studies, such as those in [7][8][9], have explored applying post-quantum cryptographic schemes. Although such schemes enable users to protect their digital assets against quantum attacks, users may fail or simply forget to migrate their classical digital signature-protected assets to systems secured by post-quantum cryptographic schemes. ...

Reference:

On the Proof of Ownership of Digital Wallets
That’s Not My Signature! Fail-Stop Signatures for a Post-quantum World
  • Citing Chapter
  • August 2024

... They came up with a notion (later called Always-Bet) and constructions satisfying it, as well as impossibility results showing the need for computational assumptions if the number of queries is unbounded. Naor and Oved [NO22] refined the resiliency notion and came up with two notions of robustness and explored the relationships between them. The robustness test is defined as a game with an adversary, the adversary chooses the set S and adaptively queries the Bloom filter. ...

Bet-or-Pass: Adversarially Robust Bloom Filters
  • Citing Chapter
  • January 2023

Lecture Notes in Computer Science

... Their protocol works in the single-server model, where each user can securely communicate with an honest-but-curious powerful server, and provides strong cryptographic guarantees in the presence of an adversary simultaneously corrupting the server and a small fraction of the users. This is a natural threat model for distributed data analysis tasks where a powerful but untrusted server is tasked with analyzing data distributed across a large number of less powerful devices, and consequently has received significant attention from both theoretical and practical perspectives [1,11,14,33,40,41,44]. ...

MPC for Tech Giants (GMPC): Enabling Gulliver and the Lilliputians to Cooperate Amicably
  • Citing Preprint
  • July 2022

... Kamara et al. [4] proposed the concept of dynamic searchable encryption, making searchable encryption no longer limited to static operations. Although subsequent research efforts focused on effectiveness [17,28,29], dynamics [11,30,31], localization [32,33], security [34][35][36], and complex functions [37][38][39], they still suffer from leaking some important information. Attackers can use these leakages to attack and recover data and cause more serious information leakages [5][6][7][8]. ...

Searchable Symmetric Encryption: Optimal Locality in Linear Space via Two-Dimensional Balanced Allocations
  • Citing Article
  • September 2021

SIAM Journal on Computing

... Multi-factor authentication (MFA) [Ometov et al. 2018] was then introduced to add a new layer of security to the process. Out-of-band authentication [Naor et al. 2020], including SMS and email-based one-time passwords (OTPs), biometrics, physical keys, and digital signatures, are also used as authentication factors. ...

The Security of Lazy Users in Out-of-Band Authentication
  • Citing Article
  • April 2020

ACM Transactions on Privacy and Security

... These definitions can be easily extended to a more general setting [9], where the number of interaction rounds between Arthur and Merlin is constant but not fixed to only one round per player. This model was introduced in [19] and further studied in [9,12,25]. For instance, a dMAM protocol involves three rounds: Merlin provides certificates to Arthur, then Arthur (the nodes) challenges Merlin by sending random strings. ...

The Power of Distributed Verifiers in Interactive Proofs
  • Citing Chapter
  • December 2020

... This problem is addressed by the concept of "heavy hitters". T -heavy hitters allow computing the T most popular responses (for a given threshold T ) among clients' inputs and have a broad range of applications: from finding popular websites that users visit or malicious URLs that cause browsers to crash [10,30], to discovering commonly used passwords [39], learning new words typed by users and identifying frequently used emojis [27], to name a few. Private heavy-hitters allow computing these results while also preserving client privacy. ...

How to (not) Share a Password: Privacy Preserving Protocols for Finding Heavy Hitters with Adversarial Behavior
  • Citing Conference Paper
  • November 2019

... In this process, the black box testing stands out for its independent focus on the functional interface, which is committed to ensuring the smoothness and accuracy of system operation from the user's perspective. The white box testing explores the system's internal structure in depth and verifies the correctness of its internal logic with a rigorous attitude [35][36][37]. These two testing methods complement each other and jointly build a solid quality defense line. ...

White-Box vs. Black-Box Complexity of Search Problems: Ramsey and Graph Property Testing
  • Citing Article
  • July 2019

Journal of the ACM

... Related Work. There have been many recent efforts to rigorously define a game-based [3,6,21,22] and simulator-based [11,12,26] adversarial model for probabilistic data structures such as the Bloom Filter and the Learned Bloom Filter. Our work builds upon these insights by applying them to LSM stores and designing countermeasures tailored to real-world storage systems. ...

Bloom Filters in Adversarial Environments
  • Citing Article
  • June 2019

ACM Transactions on Algorithms