Ronald L. Rivest’s research while affiliated with Distributed Artificial Intelligence Laboratory and other places

What is this page?


This page lists works of an author who doesn't have a ResearchGate profile or hasn't added the works to their profile yet. It is automatically generated from public (personal) data to further our legitimate goal of comprehensive and accurate scientific recordkeeping. If you are this author and want this page removed, please let us know.

Publications (127)


Scan, Shuffle, Rescan: Two-Prover Election Audits With Untrusted Scanners
  • Chapter

February 2025 · 2 Reads

Douglas W. Jones · Sunoo Park · Ronald L. Rivest · Adam Sealfon

Figure previews (belonging to the article "A simple model for assessing climate control trade-offs and responding to unanticipated climate outcomes" below):
  • Schematic of the causal chain from GHG emissions to climate damages, showing where the four climate controls enter: emissions Mitigation, carbon dioxide Removal (CDR), Geoengineering by solar radiation management (SRM), and Adaptation; control benefits (avoided damages) are balanced against deployment costs.
  • Baseline versus optimally cost-beneficial CO$_{2e}$ emissions, concentrations, radiative forcing, and temperature change, with colored wedges decomposing the contribution of each control.
  • Results of the cost-benefit analysis and their sensitivity to the discount rate ρ: optimized control deployments, discounted costs and benefits relative to the no-climate-policy baseline, and time-mean deployments with adaptive temperatures in 2100.
  • Sensitivity of optimally cost-effective SRM deployment to the discount rate ρ and the SRM scaling cost $\mathcal{C}_{G}$; at the relatively conservative default values, SRM provides roughly 25% of the required cooling.
  • Storyline A: a decision-maker re-optimizes mitigation and CDR prescriptions every 10 years in response to realized control shortfalls; with a 60% shortfall, the 1.5 °C goal is overshot by about 0.4 °C before temperatures eventually stabilize.

A simple model for assessing climate control trade-offs and responding to unanticipated climate outcomes
  • Article
  • Full-text available

October 2021 · 154 Reads · 8 Citations

Persistent greenhouse gas (GHG) emissions threaten global climate goals and have prompted consideration of climate controls supplementary to emissions mitigation. We present MARGO, an idealized model of optimally-controlled climate change, which is complementary to both simpler conceptual models and more complicated Integrated Assessment Models. The four methods of controlling climate damage—mitigation, carbon dioxide removal (CDR), adaptation, and solar radiation modification (SRM)—are not interchangeable, as they enter at different stages of the causal chain that connects GHG emissions to climate damages. Early and aggressive mitigation is necessary to stabilize GHG concentrations below a tolerable level. While the most cost-beneficial and cost-effective pathways to reducing climate suffering include deployments of all four controls, the quantitative trade-offs between the different controls are sensitive to value-driven parameters and poorly-known future costs and damages. Static policy optimization assumes perfect foresight and obscures the active role decision-makers have in shaping a climate trajectory. We propose an explicit policy response process wherein climate control policies are re-adjusted over time in response to unanticipated outcomes. We illustrate this process in two ‘storyline’ scenarios: (a) near-term increases in mitigation and CDR are deficient, such that climate goals are expected to slip out of reach; (b) SRM is abruptly terminated after 40 years of successful deployment, causing an extremely rapid warming which is amplified by an excess of GHGs due to deterred mitigation. In both cases, an optimized policy response yields substantial benefits relative to continuing the original policy. The MARGO model is intentionally designed to be as simple, transparent, customizable, and accessible as possible, addressing concerns about previous climate-economic modelling approaches and enabling a more diverse set of stakeholders to engage with these essential and timely topics.
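To make the staging of the four controls concrete, the following minimal Python sketch runs a toy version of such a controlled causal chain. It is not the authors' MARGO implementation (which is a separate, more careful model); every functional form, constant, and control trajectory below is an illustrative assumption.

import numpy as np

# Toy causal chain: emissions -> concentrations -> forcing -> temperature -> damages.
# Mitigation (M) and removal (R) act on emissions, geoengineering/SRM (G) on forcing,
# and adaptation (A) on damages only, mirroring the ordering described in the abstract.
years = np.arange(2020, 2101)
q = np.full(years.size, 10.0)              # assumed baseline emissions (GtCO2e / yr)
M = np.linspace(0.0, 0.8, years.size)      # assumed mitigation ramp (fraction)
R = np.full(years.size, 1.0)               # assumed CDR deployment (GtCO2e / yr)
G = 0.1                                    # assumed SRM forcing offset (fraction)
A = 0.2                                    # assumed adaptation (fraction of damages avoided)

c = 460.0 + np.cumsum(0.5 * (q * (1.0 - M) - R))   # concentrations (ppm, toy conversion)
F = 5.0 * np.log(c / 280.0) * (1.0 - G)            # radiative forcing (toy)
T = 0.8 * F                                        # warming relative to preindustrial (toy)
D = 0.02 * T**2 * (1.0 - A)                        # damages as a fraction of GWP (toy)

print(f"warming in 2100: {T[-1]:.2f} C; damages: {D[-1]:.1%} of GWP")

The point of the sketch is only the ordering: dropping the SRM term changes realized temperatures, whereas dropping the adaptation term changes damages but not temperatures, which is why the controls are not interchangeable.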


Going from bad to worse: from Internet voting to blockchain voting

February 2021 · 412 Reads · 161 Citations

Journal of Cybersecurity

Voters are understandably concerned about election security. News reports of possible election interference by foreign powers, of unauthorized voting, of voter disenfranchisement, and of technological failures call into question the integrity of elections worldwide. This article examines the suggestions that "voting over the Internet" or "voting on the blockchain" would increase election security, and finds such claims to be wanting and misleading. While current election systems are far from perfect, Internet- and blockchain-based voting would greatly increase the risk of undetectable, nation-scale election failures. Online voting may seem appealing: voting from a computer or smartphone may seem convenient and accessible. However, the research to date is inconclusive: online voting may have little to no effect on turnout in practice, and it may even increase disenfranchisement. More importantly, given the current state of computer security, any turnout increase derived from Internet- or blockchain-based voting would come at the cost of losing meaningful assurance that votes have been counted as they were cast, and not undetectably altered or discarded. This state of affairs will continue as long as standard tactics such as malware, zero-day exploits, and denial-of-service attacks continue to be effective. This article analyzes and systematizes prior research on the security risks of online and electronic voting, and shows that not only do these risks persist in blockchain-based voting systems, but blockchains may introduce additional problems for voting systems. Finally, we suggest questions for critically assessing security risks of new voting system proposals.


Fig. 1: Laptop experiment, outdoor setup.
Fig. 2: Laptop experiment results: range measurement vs truth. 1 ft = 30 cm.
Fig. 3: Prototype BLE/Ultrasonic Protocol.
Fig. 4: Prototype BLE/Ultrasonic Protocol Message Timeline.
SonicPACT: An Ultrasonic Ranging Method for the Private Automated Contact Tracing (PACT) Protocol

December 2020 · 58 Reads · 1 Citation

Michael Specter · Michael Wentz · [...]

Throughout the course of the COVID-19 pandemic, several countries have developed and released contact tracing and exposure notification smartphone applications (apps) to help slow the spread of the disease. To support such apps, Apple and Google have released Exposure Notification Application Programming Interfaces (APIs) to infer device (user) proximity using Bluetooth Low Energy (BLE) beacons. The Private Automated Contact Tracing (PACT) team has shown that accurately estimating the distance between devices using only BLE radio signals is challenging. This paper describes the design and implementation of the SonicPACT protocol to use near-ultrasonic signals on commodity iOS and Android smartphones to estimate distances using time-of-flight measurements. The protocol allows Android and iOS devices to interoperate, augmenting and improving the current exposure notification APIs. Our initial experimental results are promising, suggesting that SonicPACT should be considered for implementation by Apple and Google.
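The core ranging idea is time-of-flight: sound travels at roughly 343 m/s, so measured propagation delays convert directly into distances. Below is a minimal sketch of that conversion; the two-way exchange, timings, and names are illustrative assumptions, not the published SonicPACT message format.

SPEED_OF_SOUND_M_S = 343.0   # near 20 C; the true value drifts with temperature

def one_way_distance(t_emit_s: float, t_receive_s: float) -> float:
    """Distance from a one-way chirp, assuming the two clocks are synchronized."""
    return (t_receive_s - t_emit_s) * SPEED_OF_SOUND_M_S

def two_way_distance(round_trip_s: float, responder_turnaround_s: float) -> float:
    """Distance from a request/reply exchange; no clock synchronization needed,
    provided the responder's turnaround delay is known and subtracted."""
    return 0.5 * (round_trip_s - responder_turnaround_s) * SPEED_OF_SOUND_M_S

# Example: a 14 ms round trip with a 2 ms turnaround corresponds to about 2.06 m.
print(two_way_distance(round_trip_s=0.014, responder_turnaround_s=0.002))

Since 1 ms of timing error already corresponds to about 34 cm, the practical challenge the paper addresses is measuring these delays precisely enough on commodity phone hardware.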


A multi-control climate policy process for a trusted decision maker

May 2020 · 22 Reads

Persistent greenhouse gas (GHG) emissions threaten global climate goals and have prompted consideration of climate controls supplementary to emissions mitigation. We present an idealized model of optimally-controlled climate change, which is complementary to simpler analytical models and more comprehensive Integrated Assessment Models. We show that the four methods of controlling climate damage (mitigation, carbon dioxide removal, adaptation, and solar radiation modification) are not interchangeable, as they enter at different stages of the causal chain that connects GHG emissions to climate damages. Early and aggressive mitigation is always necessary to stabilize GHG concentrations at a tolerable level. The most cost-effective way of keeping warming below 2 degrees Celsius is a combination of all four controls; omitting solar radiation modification, a particularly contentious climate control, increases net control costs by 31%. At low discount rates, near-term mitigation and carbon dioxide removal are used to permanently reduce the warming effect of GHGs. At high discount rates, however, GHG concentrations increase rapidly and future generations are required to use solar radiation modification to offset a large greenhouse effect. We propose a policy response process wherein climate policy decision-makers re-adjust their policy prescriptions over time based on evolving climate outcomes and revised model assumptions. We demonstrate the utility of the process by applying it to three hypothetical scenarios in which model biases in 1) baseline emissions, 2) geoengineering (CDR and SRM) costs, and 3) climate feedbacks are revealed over time and control policies are re-adjusted accordingly.
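The sensitivity to the discount rate comes from how future costs and benefits are weighted in the objective. A minimal sketch of that weighting (net present value) follows; the yearly net-benefit numbers are invented for illustration and are not from the paper.

# Discounted sum of yearly net benefits (avoided damages minus control costs).
def net_present_value(net_benefits, rho):
    return sum(b / (1.0 + rho) ** t for t, b in enumerate(net_benefits))

# Assumed profile: control costs are paid early, avoided damages arrive late
# (units: trillions of USD per year, illustrative only).
net_benefits = [-0.5, -0.4, -0.2, 0.1, 0.5, 1.0, 1.5, 2.0]

for rho in (0.01, 0.02, 0.05):
    print(f"rho = {rho:.0%}: NPV = {net_present_value(net_benefits, rho):+.2f}")

A low discount rate keeps the late benefits of near-term mitigation and CDR valuable; a high rate shrinks them, which is why the optimization then defers the problem to future solar radiation modification.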


Consistent Sampling with Replacement

August 2018 · 10 Reads

We describe a very simple method for "consistent sampling" that allows for sampling with replacement. The method extends previous approaches to consistent sampling, which assign a pseudorandom real number to each element, and sample those with the smallest associated numbers. When sampling with replacement, our extension gives the item sampled a new, larger, associated pseudorandom number, and returns it to the pool of items being sampled.
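The abstract describes the procedure closely enough to sketch. Below is a small Python illustration under assumed details: pseudorandom numbers come from hashing a shared seed with the item name and a draw counter, and the "new, larger" number is obtained by mapping a fresh draw into the interval above the old value. The paper's exact construction may differ.

import hashlib

def pseudorandom(seed: str, item: str, draw: int) -> float:
    """Deterministic pseudorandom value in [0, 1) for (seed, item, draw)."""
    digest = hashlib.sha256(f"{seed}|{item}|{draw}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def consistent_sample_with_replacement(items, seed, k):
    # state[item] = (current associated number, number of times drawn so far)
    state = {item: (pseudorandom(seed, item, 0), 0) for item in items}
    chosen = []
    for _ in range(k):
        item = min(state, key=lambda it: state[it][0])   # smallest associated number wins
        chosen.append(item)
        value, draws = state[item]
        fresh = pseudorandom(seed, item, draws + 1)
        # Give the drawn item a larger number in (value, 1) and return it to the pool.
        state[item] = (value + (1.0 - value) * fresh, draws + 1)
    return chosen

print(consistent_sample_with_replacement(["a", "b", "c", "d"], seed="audit-2018", k=6))

Because every number depends only on the seed, the item, and its draw count, two parties holding the same seed reproduce exactly the same sample, which is what makes the sampling consistent.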


Towards secure quadratic voting

July 2017 · 56 Reads · 17 Citations

Public Choice

We provide an overview of some of the security issues involved in securely implementing Lalley and Weyl’s “Quadratic Voting” (Lalley and Weyl, Quadratic voting, 2016), and suggest some possible implementation architectures. Our proposals blend end-to-end verifiable voting methods with anonymous payments. We also consider new refund rules for quadratic voting, such as a “lottery” method.
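The mechanism being secured is simple to state: casting v votes on an issue costs v² credits, so expressing an opinion twice as strongly costs four times as much. The sketch below shows only that pricing rule with a naive pro-rata refund; the ballots are invented, and the paper's proposed refund rules (including the "lottery" method) are more careful than this.

def quadratic_cost(votes: int) -> int:
    """Quadratic voting's pricing rule: v votes cost v**2 credits."""
    return votes * votes

ballots = {"alice": 3, "bob": -2, "carol": 1}   # signed vote counts (illustrative)

payments = {voter: quadratic_cost(abs(v)) for voter, v in ballots.items()}
tally = sum(ballots.values())
equal_refund = sum(payments.values()) / len(ballots)   # naive equal rebate, not the paper's rule

print("tally:", tally)            # 3 - 2 + 1 = 2
print("payments:", payments)      # {'alice': 9, 'bob': 4, 'carol': 1}
print("refund per voter:", round(equal_refund, 2))     # 14 / 3 = 4.67

The security problem the paper addresses is implementing this end to end: voters must be able to verify that their quadratically priced votes were counted while payments and refunds remain anonymous.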


Time-Space Trade-offs in Population Protocols

January 2017 · 21 Reads · 124 Citations

In this paper, we explore the trade-off between the space (number of states) available to each agent and the achievable time complexity in population protocols, and provide new upper and lower bounds for majority and leader election. First, we prove a unified lower bound, which relates the space available per node to the time complexity achievable by a protocol: for instance, our result implies that any protocol solving either of these tasks for n agents using O(log log n) states must take Ω(n/polylog n) expected time. This is the first result to characterize time complexity for protocols which employ a super-constant number of states per node, and it proves that fast, poly-logarithmic running times require protocols to have relatively large space costs.
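For readers unfamiliar with the model, the following toy simulation shows what a population protocol is: anonymous agents with a few states each, updated by uniformly random pairwise interactions. The protocol simulated is a folklore four-state exact-majority protocol, included only to illustrate the state/time trade-off being bounded; it is not one of the paper's protocols, and ties are left unhandled.

import random

def step(u, v):
    """New states for an interacting pair (strong states 'A', 'B'; weak states 'a', 'b')."""
    if {u, v} == {"A", "B"}:
        return "a", "b"                  # opposing strong agents cancel out
    if u == "A" and v == "b":
        return "A", "a"                  # a strong agent recruits a weak one to its side
    if u == "B" and v == "a":
        return "B", "b"
    if u == "b" and v == "A":
        return "a", "A"
    if u == "a" and v == "B":
        return "b", "B"
    return u, v                          # every other pair leaves both agents unchanged

def simulate(n_a=60, n_b=40, seed=0):
    rng = random.Random(seed)
    agents = ["A"] * n_a + ["B"] * n_b
    interactions = 0
    while len({s.lower() for s in agents}) > 1:    # run until every agent is on one side
        i, j = rng.sample(range(len(agents)), 2)   # uniformly random interacting pair
        agents[i], agents[j] = step(agents[i], agents[j])
        interactions += 1
    return agents[0].lower(), interactions

print(simulate())   # majority side ('a' here) and the number of interactions it took

With only four states per agent, stabilization is slow on large populations; the paper's unified lower bound quantifies how much per-agent space is needed to do better.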


The Optimality of Correlated Sampling

December 2016 · 22 Reads · 12 Citations

In the "correlated sampling" problem, two players, say Alice and Bob, are given two distributions, say P and Q respectively, over the same universe and access to shared randomness. The two players are required to output two elements, without any interaction, sampled according to their respective distributions, while trying to minimize the probability that their outputs disagree. A well-known protocol due to Holenstein, with close variants (for similar problems) due to Broder, and to Kleinberg and Tardos, solves this task with disagreement probability at most 2δ/(1+δ)2 \delta/(1+\delta), where δ\delta is the total variation distance between P and Q. This protocol has been used in several different contexts including sketching algorithms, approximation algorithms based on rounding linear programming relaxations, the study of parallel repetition and cryptography. In this note, we give a surprisingly simple proof that this protocol is in fact tight. Specifically, for every δ(0,1)\delta \in (0,1), we show that any correlated sampling scheme should have disagreement probability at least 2δ/(1+δ)2\delta/(1+\delta). This partially answers a recent question of Rivest. Our proof is based on studying a new problem we call "constrained agreement". Here, Alice is given a subset A[n]A \subseteq [n] and is required to output an element iAi \in A, Bob is given a subset B[n]B \subseteq [n] and is required to output an element jBj \in B, and the goal is to minimize the probability that iji \neq j. We prove tight bounds on this question, which turn out to imply tight bounds for correlated sampling. Though we settle basic questions about the two problems, our formulation also leads to several questions that remain open.



Citations (71)


... The ocean has also served as the dominant reservoir for heat produced by the earth's energy imbalance resulting from anthropogenic changes in atmospheric composition. Since 1971, observations indicate that 90% of this heat has been absorbed by the ocean (with approximately half of this anthropogenic heat residing below 700 m depth; von Schuckmann et al., 2023), significantly slowing transient global warming on land (Drake et al., 2021). The deep ocean accounts for 95% of Earth's habitable space and supports a plethora of unique ecosystems (Ramirez-Llodra et al., 2010) including those likely to have hosted the development of life on the planet (Baross and Hoffman, 1985;Martin et al., 2008). ...

Reference:

Future directions for deep ocean climate science and evidence-based decision making
A simple model for assessing climate control trade-offs and responding to unanticipated climate outcomes

... Key challenges include, first, the penetration of fake and duplicate digital identities (a.k.a. sybils), and second, the perils of large-scale online voting, which is considered to be untenable by some leading experts (Park et al. 2021). Federated assemblies can be viewed as a step in an effort to address these challenges. ...

Going from bad to worse: from Internet voting to blockchain voting

Journal of Cybersecurity

... The number of hierarchy elements determines the complexity of specifying a segmentation. Lower complexity is advantageous by the minimum description length (MDL) principle, which minimizes a cost composed of the description cost and the approximation cost, and relies on statistical justifications [12][13][14][15][16]. Moreover, representation by a small number of elements opens possibilities for a new type of segmentation algorithms that are based on search, for example, in contrast to the greedy current algorithms. ...

Inferring decision trees using the minimum description length principle
  • Citing Article
  • March 1989

Information and Computation

... Rule lists [22], and more in general rule-based models such as decision trees, are among the best-known and easily interpretable models. A rule list is a sequence of rules, and the prediction from such a model is obtained by applying the first rule in the list whose condition is satisfied for the given input. ...

Learning Decision Lists
  • Citing Article
  • November 1987

Machine Learning
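The snippet above describes rule-list prediction procedurally: scan the rules in order and apply the first one whose condition holds, falling back to a default. A minimal sketch, with invented toy rules and an invented record, is below.

from typing import Callable, List, Tuple

Rule = Tuple[Callable[[dict], bool], str]   # (condition, predicted label)

def predict(rules: List[Rule], record: dict, default: str = "negative") -> str:
    for condition, label in rules:
        if condition(record):
            return label                    # the first satisfied rule decides the prediction
    return default

rules: List[Rule] = [
    (lambda r: r["age"] < 18, "minor"),
    (lambda r: r["income"] > 100_000, "high-income"),
]

print(predict(rules, {"age": 35, "income": 120_000}))   # -> "high-income"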

... Formally, a protocol is a five-tuple P = (Q, s₀, Y, δ, π), where Q is the set of agent states, s₀ is the initial state (unused in self-stabilizing protocols), Y is the set of output symbols, δ : Q × Q → Q × Q is the transition function, and π : Q → Y is the output function. A global state of the population (or configuration) is a function C : A → Q that represents the current state of each agent. A configuration C is output-stable (or simply stable) if no agent ever changes its output in any subsequent execution from C. ...

Time-Space Trade-offs in Population Protocols
  • Citing Conference Paper
  • January 2017

... Sampling methods based on common randomness offer a convenient solution, and have been shown to achieve matching probabilities close to that of the maximal coupling despite their relative simplicity [9]. In particular, if Alice and Bob both sample from p_X and q_Y by applying the Gumbel-max trick to shared random numbers, it is possible to achieve Pr[X = Y] ≥ (1 − d_TV(p_X, q_Y))/(1 + d_TV(p_X, q_Y)), which is a lower bound in the communication-free setting [2,9]. ...

The Optimality of Correlated Sampling
  • Citing Article
  • December 2016
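The coupling described in the snippet above is easy to illustrate: both parties add the same Gumbel noise to their own log-probabilities and output the argmax. The toy distributions below are invented for the illustration.

import math
import random

def gumbel_max_sample(log_probs, shared_gumbels):
    """Argmax of log-probabilities perturbed by shared Gumbel(0, 1) noise."""
    scores = [lp + g for lp, g in zip(log_probs, shared_gumbels)]
    return max(range(len(scores)), key=scores.__getitem__)

p = [0.5, 0.3, 0.2]   # Alice's distribution (toy)
q = [0.4, 0.4, 0.2]   # Bob's distribution (toy)
rng = random.Random(7)

trials = 20_000
agree = 0
for _ in range(trials):
    shared = [-math.log(-math.log(rng.random())) for _ in p]   # the same Gumbel draws for both
    x = gumbel_max_sample([math.log(v) for v in p], shared)
    y = gumbel_max_sample([math.log(v) for v in q], shared)
    agree += (x == y)

print("empirical Pr[X = Y]:", agree / trials)   # TV distance is 0.1, so at least 0.9/1.1 ≈ 0.82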

... In further work [4], Alistarh and Gelashvili studied the relevant upper bounds including a new leader election protocol stabilising in time O(log³ n) assuming O(log³ n) states per agent. Later, Alistarh et al. [1] considered more general trade-offs between the number of states and the time complexity of stabilisation. In particular, they proposed a separation argument distinguishing between slowly stabilising population protocols which utilise o(log log n) states and rapidly stabilising protocols relying on O(log n) states per agent. ...

Time-Space Trade-offs in Population Protocols
  • Citing Article
  • February 2016

... Active tags have a stronger signal and are more reliable than passive tags as they can conduct a session with an RFID reader [47, 50]. They operate generally at higher frequencies. The signals are captured by the reader over a longer distance. ...

Security and privacy aspects of low-cost Radio Frequency Identification systems
  • Citing Conference Paper
  • January 2004

Lecture Notes in Computer Science

... For this reason, chaos-based systems are known as deterministic systems. Their nature of randomness, sensitivity to original conditions, and ergodicity are unique characteristics (Stallings 2006; Chuang et al. 2011; Al-Najjar 2012; Banthia and Tiwari 2013; Rivest 1990; Matthews 1989; Wheeler and Matthews 1991; Chen and Liao 2005; Masood et al. 2020a, 2020b, 2021; Ahmad et al. 2020; Hanouti et al. 2020; Butt et al. 2020; Munir et al. 2020). These characteristics lead to a reliable cryptosystem, while chaotic maps and dynamical systems help to generate long-term chaotic sequences. ...

Cryptography
  • Citing Chapter
  • December 1990
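For readers unfamiliar with the chaotic maps the snippet above refers to, the logistic map is the standard toy example; the short sketch below only demonstrates sensitivity to initial conditions (two nearby starting points diverge), and is in no way a cryptosystem.

def logistic_sequence(x0, r=3.99, n=30):
    """Iterate the logistic map x -> r * x * (1 - x) for n steps."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

a = logistic_sequence(0.400000)
b = logistic_sequence(0.400001)   # a perturbation of one part in a million
for i, (u, v) in enumerate(zip(a, b), start=1):
    print(i, f"{abs(u - v):.6f}")  # the gap grows roughly exponentially before saturating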