February 2025 · 2 Reads
October 2021 · 154 Reads · 8 Citations
Persistent greenhouse gas (GHG) emissions threaten global climate goals and have prompted consideration of climate controls supplementary to emissions mitigation. We present MARGO, an idealized model of optimally-controlled climate change, which is complementary to both simpler conceptual models and more complicated Integrated Assessment Models. The four methods of controlling climate damage—mitigation, carbon dioxide removal (CDR), adaptation, and solar radiation modification (SRM)—are not interchangeable, as they enter at different stages of the causal chain that connects GHG emissions to climate damages. Early and aggressive mitigation is necessary to stabilize GHG concentrations below a tolerable level. While the most cost-beneficial and cost-effective pathways to reducing climate suffering include deployments of all four controls, the quantitative trade-offs between the different controls are sensitive to value-driven parameters and poorly-known future costs and damages. Static policy optimization assumes perfect foresight and obscures the active role decision-makers have in shaping a climate trajectory. We propose an explicit policy response process wherein climate control policies are re-adjusted over time in response to unanticipated outcomes. We illustrate this process in two ‘storyline’ scenarios: (a) near-term increases in mitigation and CDR are deficient, such that climate goals are expected to slip out of reach; (b) SRM is abruptly terminated after 40 years of successful deployment, causing an extremely rapid warming which is amplified by an excess of GHGs due to deterred mitigation. In both cases, an optimized policy response yields substantial benefits relative to continuing the original policy. The MARGO model is intentionally designed to be as simple, transparent, customizable, and accessible as possible, addressing concerns about previous climate-economic modelling approaches and enabling a more diverse set of stakeholders to engage with these essential and timely topics.
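The causal-chain structure described above (emissions → concentrations → forcing → temperature → damages, with each control intervening at a different link) can be made concrete in a few lines. The sketch below is a hand-rolled Python illustration with invented coefficients and simplified functional forms; it is not the published model's calibration or code:

```python
import numpy as np

# Illustrative constants only; NOT the published model's calibration.
years = np.arange(2020, 2101)
q = np.full(years.shape, 10.0)   # baseline emissions (arbitrary units/yr)
c0 = 460.0                       # initial GHG concentration (CO2e ppm)
a, k = 5.0, 1.1                  # forcing coefficient, climate response

def damages(M, R, G, A):
    """Each control is an array in [0, 1], one value per year:
    M = mitigation, R = carbon dioxide removal, G = SRM, A = adaptation."""
    c = c0 + np.cumsum(q * (1.0 - M) - q[0] * R)  # M cuts emissions; R removes stock
    F = a * np.log(c / c0) * (1.0 - G)            # G offsets radiative forcing
    T = k * F                                     # temperature (no ocean inertia here)
    return 0.05 * T**2 * (1.0 - A)                # quadratic damages, trimmed by A

none = np.zeros(years.shape)
some = np.full(years.shape, 0.3)
print(damages(none, none, none, none).sum())  # uncontrolled damages
print(damages(some, some, some, some).sum())  # all four controls at 30%
```

Because each control acts at a different stage, their effects compose rather than substitute: mitigation and CDR act on the carbon stock, SRM on forcing, and adaptation only on realized damages, which is why the abstract stresses that they are not interchangeable.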
February 2021 · 412 Reads · 161 Citations · Journal of Cybersecurity
Voters are understandably concerned about election security. News reports of possible election interference by foreign powers, of unauthorized voting, of voter disenfranchisement, and of technological failures call into question the integrity of elections worldwide. This article examines the suggestions that “voting over the Internet” or “voting on the blockchain” would increase election security, and finds such claims to be wanting and misleading. While current election systems are far from perfect, Internet- and blockchain-based voting would greatly increase the risk of undetectable, nation-scale election failures. Online voting may seem appealing: voting from a computer or smartphone may seem convenient and accessible. However, studies have been inconclusive, showing that online voting may have little to no effect on turnout in practice, and it may even increase disenfranchisement. More importantly, given the current state of computer security, any turnout increase derived from Internet- or blockchain-based voting would come at the cost of losing meaningful assurance that votes have been counted as they were cast, and not undetectably altered or discarded. This state of affairs will continue as long as standard tactics such as malware, zero-day, and denial-of-service attacks continue to be effective. This article analyzes and systematizes prior research on the security risks of online and electronic voting, and shows that not only do these risks persist in blockchain-based voting systems, but blockchains may introduce ‘additional’ problems for voting systems. Finally, we suggest questions for critically assessing security risks of new voting system proposals.
December 2020 · 58 Reads · 1 Citation
Throughout the course of the COVID-19 pandemic, several countries have developed and released contact tracing and exposure notification smartphone applications (apps) to help slow the spread of the disease. To support such apps, Apple and Google have released Exposure Notification Application Programming Interfaces (APIs) to infer device (user) proximity using Bluetooth Low Energy (BLE) beacons. The Private Automated Contact Tracing (PACT) team has shown that accurately estimating the distance between devices using only BLE radio signals is challenging. This paper describes the design and implementation of the SonicPACT protocol to use near-ultrasonic signals on commodity iOS and Android smartphones to estimate distances using time-of-flight measurements. The protocol allows Android and iOS devices to interoperate, augmenting and improving the current exposure notification APIs. Our initial experimental results are promising, suggesting that SonicPACT should be considered for implementation by Apple and Google.
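The distance-from-timing step the abstract refers to comes down to round-trip time-of-flight arithmetic: in a two-way exchange each device subtracts only its own timestamps, so no clock synchronization between phones is needed. The snippet below is a generic illustration of that arithmetic with invented timestamps; it is not the SonicPACT codebase or API:

```python
# Two-way time-of-flight ranging, the standard arithmetic that acoustic
# ranging protocols of this kind rely on. Constants are illustrative.

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C

def two_way_tof_distance(t_send_a, t_recv_b, t_send_b, t_recv_a):
    """Device A chirps at t_send_a (A's clock) and hears B's reply at
    t_recv_a; B hears A at t_recv_b (B's clock) and replies at t_send_b.
    Each device subtracts only timestamps from its own clock, so no
    cross-device synchronization is required."""
    round_trip = t_recv_a - t_send_a   # measured entirely on A's clock
    turnaround = t_send_b - t_recv_b   # measured entirely on B's clock
    one_way = (round_trip - turnaround) / 2.0
    return SPEED_OF_SOUND * one_way

# Example: ~2 m separation implies ~5.83 ms one-way flight time.
print(two_way_tof_distance(0.0, 0.00583, 0.10583, 0.11166))  # ~2.0 m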
May 2020 · 22 Reads
Persistent greenhouse gas (GHG) emissions threaten global climate goals and have prompted consideration of climate controls supplementary to emissions mitigation. We present an idealized model of optimally-controlled climate change, which is complementary to simpler analytical models and more comprehensive Integrated Assessment Models. We show that the four methods of controlling climate damage (mitigation, carbon dioxide removal, adaptation, and solar radiation modification) are not interchangeable, as they enter at different stages of the causal chain that connects GHG emissions to climate damages. Early and aggressive mitigation is always necessary to stabilize GHG concentrations at a tolerable level. The most cost-effective way of keeping warming below 2 degrees Celsius is a combination of all four controls; omitting solar radiation modification, a particularly contentious climate control, increases net control costs by 31%. At low discount rates, near-term mitigation and carbon dioxide removal are used to permanently reduce the warming effect of GHGs. At high discount rates, however, GHG concentrations increase rapidly and future generations are required to use solar radiation modification to offset a large greenhouse effect. We propose a policy response process wherein climate policy decision-makers re-adjust their policy prescriptions over time based on evolving climate outcomes and revised model assumptions. We demonstrate the utility of the process by applying it to three hypothetical scenarios in which model biases in 1) baseline emissions, 2) geoengineering (CDR and SRM) costs, and 3) climate feedbacks are revealed over time and control policies are re-adjusted accordingly.
August 2018 · 10 Reads
We describe a very simple method for "consistent sampling" that allows for sampling with replacement. The method extends previous approaches to consistent sampling, which assign a pseudorandom real number to each element, and sample those with the smallest associated numbers. When sampling with replacement, our extension gives the item sampled a new, larger, associated pseudorandom number, and returns it to the pool of items being sampled.
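A minimal sketch of the scheme as the abstract describes it: tickets come from a hash, the minimum wins, and a drawn item re-enters the pool with a fresh, strictly larger ticket. The hash construction and the particular rule for generating the larger replacement value (rescaling a fresh hash into the interval above the old ticket) are illustrative choices, not necessarily the paper's:

```python
import hashlib

def pseudorandom(seed: str, element: str, counter: int) -> float:
    """Deterministic 'random' value in [0, 1) derived from a SHA-256 hash."""
    digest = hashlib.sha256(f"{seed}|{element}|{counter}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def sample_with_replacement(seed: str, elements: list, k: int) -> list:
    ticket = {e: pseudorandom(seed, e, 0) for e in elements}
    draws = {e: 0 for e in elements}
    out = []
    for _ in range(k):
        winner = min(ticket, key=ticket.get)  # smallest ticket is sampled
        out.append(winner)
        draws[winner] += 1
        u = ticket[winner]
        # Replacement ticket lands strictly above the old one, in (u, 1).
        ticket[winner] = u + (1.0 - u) * pseudorandom(seed, winner, draws[winner])
    return out

# Same seed and pool always yield the same sample sequence:
print(sample_with_replacement("seed42", ["a", "b", "c", "d"], 6))
```

Because every value is derived from the seed and the item's own draw count, two parties running this independently over the same pool obtain identical sample sequences, which is the point of consistency.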
July 2017 · 56 Reads · 17 Citations · Public Choice
We provide an overview of some of the security issues involved in securely implementing Lalley and Weyl’s “Quadratic Voting” (Lalley and Weyl, Quadratic voting, 2016), and suggest some possible implementation architectures. Our proposals blend end-to-end verifiable voting methods with anonymous payments. We also consider new refund rules for quadratic voting, such as a “lottery” method.
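For readers unfamiliar with the underlying mechanism: in quadratic voting, casting v votes costs v² voice credits, so influence grows only as the square root of expenditure. The toy example below illustrates the pricing rule and one simple refund rule (an equal per-voter rebate); the lottery method and the end-to-end verifiable machinery the paper discusses are not modeled here:

```python
# Quadratic voting pricing: v votes cost v**2 credits.
# The equal per-voter rebate is one illustrative refund rule.

def qv_cost(votes: int) -> int:
    return votes ** 2

ballots = {"alice": 3, "bob": -2, "carol": 1}    # signed vote counts
charges = {who: qv_cost(abs(v)) for who, v in ballots.items()}
outcome = sum(ballots.values())                  # net votes decide the issue
rebate = sum(charges.values()) / len(ballots)    # equal split of the pot

print(charges)   # {'alice': 9, 'bob': 4, 'carol': 1}
print(outcome)   # 2 -> measure passes
print(rebate)    # ~4.67 credits returned to each voter
```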
January 2017 · 21 Reads · 124 Citations
In this paper, we explore the trade-off between the space available per node and the achievable time complexity, and provide new upper and lower bounds for majority and leader election. First, we prove a unified lower bound, which relates the space available per node with the time complexity achievable by a protocol: for instance, our result implies that any protocol solving either of these tasks for n agents using O(log log n) states must take Ω(n / polylog n) expected time. This is the first result to characterize time complexity for protocols which employ a super-constant number of states per node, and it proves that fast, poly-logarithmic running times require protocols to have relatively large space costs.
December 2016 · 22 Reads · 12 Citations
In the "correlated sampling" problem, two players, say Alice and Bob, are given two distributions, say P and Q respectively, over the same universe and access to shared randomness. The two players are required to output two elements, without any interaction, sampled according to their respective distributions, while trying to minimize the probability that their outputs disagree. A well-known protocol due to Holenstein, with close variants (for similar problems) due to Broder, and to Kleinberg and Tardos, solves this task with disagreement probability at most , where is the total variation distance between P and Q. This protocol has been used in several different contexts including sketching algorithms, approximation algorithms based on rounding linear programming relaxations, the study of parallel repetition and cryptography. In this note, we give a surprisingly simple proof that this protocol is in fact tight. Specifically, for every , we show that any correlated sampling scheme should have disagreement probability at least . This partially answers a recent question of Rivest. Our proof is based on studying a new problem we call "constrained agreement". Here, Alice is given a subset and is required to output an element , Bob is given a subset and is required to output an element , and the goal is to minimize the probability that . We prove tight bounds on this question, which turn out to imply tight bounds for correlated sampling. Though we settle basic questions about the two problems, our formulation also leads to several questions that remain open.
March 2016 · 16 Reads
This paper presents a new crypto scheme whose title promises it to be so boring that no one will bother reading past the abstract. Because of this, the remainder of the paper is left blank.
... The ocean has also served as the dominant reservoir for heat produced by the Earth's energy imbalance resulting from anthropogenic changes in atmospheric composition. Since 1971, observations indicate that 90% of this heat has been absorbed by the ocean (with approximately half of this anthropogenic heat residing below 700 m depth; von Schuckmann et al., 2023), significantly slowing transient global warming on land (Drake et al., 2021). The deep ocean accounts for 95% of Earth's habitable space and supports a plethora of unique ecosystems (Ramirez-Llodra et al., 2010), including those likely to have hosted the development of life on the planet (Baross and Hoffman, 1985; Martin et al., 2008). ...
October 2021
... Key challenges include, first, the penetration of fake and duplicate digital identities (a.k.a. sybils), and second, the perils of large-scale online voting, which is considered to be untenable by some leading experts (Park et al. 2021). Federated assemblies can be viewed as a step in an effort to address these challenges. ...
Reference: Federated Assemblies
February 2021 · Journal of Cybersecurity
... The number of hierarchy elements determines the complexity of specifying a segmentation. Lower complexity is advantageous by the minimum description length (MDL) principle, which minimizes a cost composed of the description cost and the approximation cost, and relies on statistical justifications [12][13][14][15][16]. Moreover, representation by a small number of elements opens possibilities for a new type of segmentation algorithm based on search, for example, in contrast to the current greedy algorithms. ...
March 1989 · Information and Computation
... Rule lists [22] and, more generally, rule-based models such as decision trees are among the best-known and most easily interpretable models. A rule list is a sequence of rules, and the prediction from such a model is obtained by applying the first rule in the list whose condition is satisfied for the given input. ...
November 1987 · Machine Learning
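Since the snippet above spells out the prediction rule, here is that rule directly in code: scan the list in order and fire the first matching rule. The rules and default below are invented for illustration:

```python
# Rule-list prediction: first rule whose condition matches wins,
# with a default outcome if nothing matches.

def predict(rule_list, default, x):
    for condition, outcome in rule_list:
        if condition(x):
            return outcome
    return default

rules = [
    (lambda x: x["age"] < 18, "deny"),
    (lambda x: x["income"] > 50_000, "approve"),
]
print(predict(rules, "review", {"age": 30, "income": 60_000}))  # approve
```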
... The main way to avoid such manipulations is to ensure complete voting secrecy, in the sense that no one can prove how they voted. The most popular solution is E2E voting, which is rigorously discussed and applied to QV in (Park and Rivest 2017). ...
July 2017 · Public Choice
... Formally, a protocol is a five-tuple P = (Q, s_0, Y, δ, π), where Q is the set of agent states, s_0 is the initial state (unused in self-stabilizing protocols), Y is the set of output symbols, δ : Q × Q → Q × Q is the transition function, and π : Q → Y is the output function. A global state of the population (or configuration) is a function C : A → Q that represents the current state of each agent. A configuration C is output-stable (or simply stable) if no agent ever changes its output in any subsequent execution from C. ...
January 2017
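To make the five-tuple definition in the snippet above concrete, here is a toy instance: the classic two-state leader-election protocol, in which a meeting of two leaders demotes one of them, run under a uniformly random pairwise scheduler. This illustrates the formalism only; it is not a protocol from the cited paper:

```python
import random

Q = {"L", "F"}             # agent states: leader or follower
s0 = "L"                   # every agent starts as a leader
Y = {True, False}          # output: "am I the leader?"

def delta(a, b):           # transition function on an interaction (a, b)
    if a == "L" and b == "L":
        return "L", "F"    # one of the two leaders demotes itself
    return a, b

def pi(state):             # output function
    return state == "L"

def run(n=100, seed=0):
    rng = random.Random(seed)
    config = [s0] * n      # configuration C: agents -> Q
    interactions = 0
    while sum(pi(s) for s in config) > 1:
        i, j = rng.sample(range(n), 2)   # uniform random scheduler
        config[i], config[j] = delta(config[i], config[j])
        interactions += 1
    return interactions

print(run())   # ~n^2 interactions, i.e. ~n parallel time, to one leader
```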
... Sampling methods based on common randomness offer a convenient solution, and have been shown to achieve matching probabilities close to that of the maximal coupling despite their relative simplicity [9]. In particular, if Alice and Bob both sample from p_X and q_Y by applying the Gumbel-max trick to shared random numbers, it is possible to achieve Pr[X = Y] ≥ (1 − d_TV(p_X, q_Y))/(1 + d_TV(p_X, q_Y)), which is a lower bound in the communication-free setting [2,9]. ...
December 2016
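The Gumbel-max coupling described in the snippet above is short enough to demonstrate directly: both players add the same shared Gumbel noise to their own log-probabilities and each take an argmax. A small simulation under invented distributions:

```python
import math
import random

def gumbel_max_sample(dist, gumbels):
    # argmax over log p(x) + G_x, with the noise G_x shared between players
    return max(dist, key=lambda x: math.log(dist[x]) + gumbels[x])

P = {"a": 0.5, "b": 0.3, "c": 0.2}
Q = {"a": 0.4, "b": 0.4, "c": 0.2}   # d_TV(P, Q) = 0.1

rng = random.Random(1)
trials, agree = 10_000, 0
for _ in range(trials):
    # Shared Gumbel(0, 1) noise, one draw per universe element.
    g = {x: -math.log(-math.log(rng.random())) for x in P}
    agree += gumbel_max_sample(P, g) == gumbel_max_sample(Q, g)

# The quoted bound promises Pr[X = Y] >= (1 - 0.1)/(1 + 0.1) ~ 0.82 here.
print(agree / trials)
```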
... In further work [4], Alistarh and Gelashvili studied the relevant upper bounds, including a new leader election protocol stabilising in time O(log^3 n) assuming O(log^3 n) states per agent. Later, Alistarh et al. [1] considered more general trade-offs between the number of states and the time complexity of stabilisation. In particular, they proposed a separation argument distinguishing between slowly stabilising population protocols which utilise o(log log n) states and rapidly stabilising protocols relying on O(log n) states per agent. ...
February 2016
... Active tags have a stronger signal and are more reliable than passive tags, as they can conduct a session with an RFID reader. 47,50 They generally operate at higher frequencies, and their signals can be captured by the reader over longer distances. ...
January 2004 · Lecture Notes in Computer Science
... For this reason, chaos-based systems are known as deterministic systems. Their nature of randomness, sensitivity to initial conditions, and ergodicity are unique characteristics (Stallings 2006; Chuang et al. 2011; Al-Najjar 2012; Banthia and Tiwari 2013; Rivest 1990; Matthews 1989; Wheeler and Matthews 1991; Chen and Liao 2005; Masood et al. 2020a, 2020b, 2021; Ahmad et al. 2020; Hanouti et al. 2020; Butt et al. 2020; Munir et al. 2020). These characteristics lead to a reliable cryptosystem, while chaotic maps and dynamical systems help to generate long-term chaotic sequences. ...
December 1990