Fred B. Schneider's research while affiliated with Cornell University and other places
What is this page?
This page lists the scientific contributions of an author, who either does not have a ResearchGate profile, or has not yet added these contributions to their profile.
It was automatically created by ResearchGate to create a record of this author's body of work. We create such pages to advance our goal of creating and maintaining the most comprehensive scientific repository possible. In doing so, we process publicly available (personal) data relating to the author as a member of the scientific community.
Publications (225)
How to avoid insider cyber-attacks by creating a corporate culture that infuses trust.
Widespread deployment of Intelligent Infrastructure and the Internet of Things creates vast troves of passively-generated data. These data enable new ubiquitous computing applications---such as location-based services---while posing new privacy threats. In this work, we identify challenges that arise in applying use-based privacy to passively-gener...
A reactive information flow (RIF) automaton for a value v specifies (i) restrictions on uses for v and (ii) the RIF automaton for any value that might be derived from v. RIF automata thus specify how transforming a value alters restrictions for the result. As labels, RIF automata are both expressive and intuitive vehicles for describing allowed inf...
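To make the idea concrete, here is a minimal sketch of how a RIF automaton might be represented; this is a toy encoding, and the class name RIFAutomaton and the permits/derive operations are illustrative assumptions, not taken from the paper.

```python
# Minimal illustrative sketch of a reactive information flow (RIF) automaton:
# each state carries a set of allowed uses, and applying an operation to a
# value moves its label to a successor state. All names here are hypothetical.

class RIFAutomaton:
    def __init__(self, states, allowed, delta, start):
        self.states = states          # set of state names
        self.allowed = allowed        # state -> set of permitted uses
        self.delta = delta            # (state, operation) -> successor state
        self.state = start            # current state labels the current value

    def permits(self, use):
        """Is this use permitted for a value carrying the current label?"""
        return use in self.allowed[self.state]

    def derive(self, operation):
        """Label for a value derived by applying `operation`."""
        succ = self.delta.get((self.state, operation), self.state)
        return RIFAutomaton(self.states, self.allowed, self.delta, succ)

# Example: raw location traces may only be used for "routing"; once they are
# aggregated, the derived result may also be used for "analytics".
raw = RIFAutomaton(
    states={"raw", "aggregated"},
    allowed={"raw": {"routing"}, "aggregated": {"routing", "analytics"}},
    delta={("raw", "aggregate"): "aggregated"},
    start="raw",
)
assert raw.permits("routing") and not raw.permits("analytics")
agg = raw.derive("aggregate")
assert agg.permits("analytics")
```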
Use-based privacy restricts how information may be used, making it well-suited for data collection and data analysis applications in networked information systems. This work investigates the feasibility of enforcing use-based privacy in distributed systems with adversarial service providers. Three architectures that use Intel-SGX are explored: sour...
Proposing a stronger foundation for an engineering discipline to support the design of secure systems.
A call for discussion of governmental investment and intervention in support of cybersecurity.
Theft of secrets is nothing new. Nor is it new to publicize stolen secrets with hopes of influencing (or instigating) leadership changes in government. So the theft of confidential information being stored by the Democratic National Committee (DNC) is part of a long tradition, albeit perpetrated in a new venue: cyberspace.
The omni-kernel architecture is designed around pervasive monitoring and scheduling. Motivated by new requirements in virtualized environments, this architecture ensures that all resource consumption is measured, that resource consumption resulting from a scheduling decision is attributable to an activity, and that scheduling decisions are fine-g...
This paper proposes a mechanism for expressing and enforcing security policies for shared data. Security policies are expressed as stateful meta-code operations; meta-code can express a broad class of policies, including access-based policies, use-based policies, obligations, and sticky policies with declassification. The meta-code is interposed in...
In a method for improving the efficiency of a search engine in accessing, searching and retrieving information in the form of documents stored in document or content repositories, the search engine comprises an array of search nodes hosted on one or more servers. An index of the stored documents is created. The search engine processes a user search...
Only recently have approaches to quantitative information flow started to challenge the presumption that all leaks involving a given number of bits are equally harmful. This paper proposes a framework to capture the semantics of information, making quantification of leakage independent of the syntactic representation of secrets. Secrets are defined...
Paxos, Viewstamped Replication, and Zab are replication protocols that ensure high-availability in asynchronous environments with crash failures. Various claims have been made about similarities and differences between these protocols. But how does one determine whether two protocols are the same, and if not, how significant the differences are? We...
Identity management systems store attributes associated with users and employ these attributes to facilitate authorization. The authors analyze existing systems and describe a privacy-driven taxonomy of design choices, which can help technical experts consulting on public policy relating to identity management. The US National Strategy for Trusted...
An educated computer security workforce is essential to building trustworthy systems. Yet, issues about what should be taught and how are being ignored by many of the university faculty who teach cybersecurity courses--a problematic situation. Author Fred Schneider explores the issues.
The "Cloud" is a wonderfully expansive phrase used to denote computation and data storage centralized in a large datacenter and elastically accessed across a network. The concept is not new; web sites and business servers have run in datacenters for ...
Great research, by definition, will have valuable impacts. But just because an activity is undertaken by a researcher and has valuable impacts does not make it great research—or even research.
Multi-verifier signatures generalize public-key signatures to a secret-key setting. Just like public-key signatures, these signatures are both transferable and secure under arbitrary (unbounded) adaptive chosen-message attacks. In contrast to public-key signature schemes, however, we exhibit practical constructions of multi-verifier signature schem...
This paper describes the design and implementation of a new operating system authorization architecture to support trustworthy computing. Called logical attestation, this architecture provides a sound framework for reasoning about run time behavior of applications. Logical attestation is based on attributable, unforgeable statements about program p...
This paper presents the design and implementation of NetQuery, a knowledge plane for federated networks such as the Internet. In such networks, not all administrative domains will generate information that an application can trust and many administrative domains may have restrictive policies on disclosing network information. Thus, both the trustwo...
A succession of doctrines for enhancing cybersecurity has been advocated in the past, including prevention, risk management, and deterrence through accountability. None has proved effective. Proposals that are now being made view cybersecurity as a public good and adopt mechanisms inspired by those used for public health. This essay discusses the f...
Coordination in a distributed system is facilitated if there is a unique process, the leader, to manage the other processes. The leader creates edicts and sends them to other processes for execution or forwarding to other processes. The leader may fail, and when this occurs a leader election protocol selects a replacement. This paper describes Neri...
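As a rough illustration of the leader-election idea only (not the protocol the paper describes), the sketch below promotes the highest-numbered process that is not suspected of failure, in the style of a bully election.

```python
# Not the paper's protocol; a minimal bully-style illustration of leader
# election: when the current leader is suspected to have failed, the live
# process with the highest identifier becomes the new leader.

def elect_leader(process_ids, suspected_failed):
    """Return the highest-id process not suspected of failure, or None."""
    live = [p for p in process_ids if p not in suspected_failed]
    return max(live) if live else None

processes = {1, 2, 3, 4, 5}
assert elect_leader(processes, suspected_failed=set()) == 5
assert elect_leader(processes, suspected_failed={5}) == 4   # leader replaced
```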
Depending on their configuration, administration, and provisioning, networks provide drastically different features. For instance, some networks provide little failure resilience while others provision failover capacity and deploy middleboxes to protect against denial of service attacks [1, 2]. Yet the standard IP interface masks these differences;...
Policy proposals are best made relative to a cybersecurity doctrine rather than suggested piecemeal as is being done today. A doctrine of deterrence through accountability, for example, would be a basis for rationalizing proposals that equate attacks with crimes and focus on network-wide authentication and identification mechanisms. A new doctrine...
Nexus Authorization Logic (NAL) provides a principled basis for specifying and reasoning about credentials and authorization policies. It extends prior access control logics that are based on “says” and “speaks for” operators. NAL enables authorization of access requests to depend on (i) the source or pedigree of the requester, (ii) the outcome of...
The formal methods, fault-tolerance, and cyber-security research communities explore models that differ from each other. The differences frustrate efforts at cross-community collaboration. Moreover, ignorance about these differences means the status quo is likely to persist. This paper discusses two of the key differences: (i) the trace-based seman...
Trace properties, which have long been used for reasoning about systems, are sets of execution traces. Hyperproperties, introduced here, are sets of trace properties. Hyperproperties can express security policies, such as secure information flow and service level agreements, that trace properties cannot. Safety and liveness are generalized to hyperp...
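A small illustration of the distinction, under toy assumptions: a trace property can be checked one trace at a time, whereas noninterference relates pairs of traces and so must be evaluated over the system's whole set of traces.

```python
# Illustrative only: a trace property is checked per trace, while a
# hyperproperty is a predicate on the *set* of traces a system can produce.
# Here a trace is a tuple (low_input, high_input, low_output).

def satisfies_trace_property(traces, pred):
    """Trace property: every individual trace must satisfy pred."""
    return all(pred(t) for t in traces)

def noninterference(traces):
    """A hyperproperty: traces agreeing on low inputs agree on low outputs.
    This relates *pairs* of traces, so no per-trace predicate expresses it."""
    return all(t1[2] == t2[2]
               for t1 in traces for t2 in traces
               if t1[0] == t2[0])

leaky  = {("lo", 0, "out0"), ("lo", 1, "out1")}   # low output reveals high input
secure = {("lo", 0, "out"),  ("lo", 1, "out")}
assert satisfies_trace_property(leaky, lambda t: t[2].startswith("out"))
assert not noninterference(leaky)
assert noninterference(secure)
```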
With more than 4 billion cell phones in the world, with growth that exceeds that of desktops and laptops, and with a total cost of ownership that makes cell phones affordable to more of the world than a PC will ever be, the market is responding. Yet the cell phone industry, governments, and researchers seemingly have little interest in cell phone s...
Two kinds of integrity measures—contamination and suppression—are introduced. Contamination measures how much untrusted information reaches trusted outputs; it is the dual of information-flow confidentiality. Suppression measures how much information is lost from outputs; it does not have a confidentiality dual. Two forms of suppression are conside...
Proactive obfuscation is a new method for creating server replicas that are likely to have fewer shared vulnerabilities. It uses semantics-preserving code transformations to generate diverse executables, periodically restarting servers with these fresh versions. The periodic restarts help bound the number of compromised replicas that a service ev...
Cyber-security today is focused largely on defending against known attacks. We learn about the latest attack and find a hack to defend against it. So our defenses improve only after they have been successfully penetrated. This is a recipe to ensure some attackers succeed---not a recipe for achieving system trustworthiness. We must move beyond react...
Trustworthy services that result from combining replication with threshold cryptography, for use in environments satisfying weak assumptions, are investigated. A trustworthy service must tolerate attacks as well as failures. Two general kinds of components are involved in building trustworthy services: processors and channels. Processors...
Using exams to create labels for our workforce might sound like a way to get more trustworthy systems, but it's not. If it walks like a duck, quacks like a duck, and looks like a duck, then there's good reason to believe that it's a duck. But you don't get a duck just by calling something a duck, and you don't get trustworthy systems simply by intr...
To reason about information flow, a new model is developed that describes how attacker beliefs change due to the attacker's observation of the execution of a probabilistic (or deterministic) program. The model enables compositional reasoning about information flow from attacks involving sequences of interactions. The model also supports a new met...
Program committees (PCs) for top systems conferences such as SOSP, OSDI, and NSDI are adapting their review and selection processes in response to changes and challenges in the research culture. The sheer volume of submissions to these conferences keeps growing along with the number of researchers in...
This paper is a tutorial on the state machine approach. It describes the approach and its implementation for two representative environments. Small examples suffice to illustrate the points. However, the approach has been successfully applied to larger examples; some of these are mentioned in Section 9. Section 2 describes how a system can be viewed in ter...
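A minimal sketch of the core idea, not taken from the tutorial: deterministic replicas that apply the same commands in the same order necessarily end in the same state.

```python
# Toy illustration of the state machine approach: replicas stay consistent
# because each is deterministic and applies the same commands in the same
# agreed-upon order.

class CounterStateMachine:
    def __init__(self):
        self.value = 0
    def apply(self, command):
        op, arg = command
        if op == "add":
            self.value += arg
        elif op == "set":
            self.value = arg
        return self.value

replicas = [CounterStateMachine() for _ in range(3)]
agreed_order = [("add", 5), ("set", 2), ("add", 1)]   # e.g. fixed by consensus
for cmd in agreed_order:
    for r in replicas:
        r.apply(cmd)
assert len({r.value for r in replicas}) == 1          # all replicas agree
```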
Conventional wisdom holds that software monocultures are exceptionally vulnerable to malware outbreaks. The authors argue that this oversimplifies and misleads. An analysis based on attacker reactions suggests that deploying a monoculture in conjunction with automated diversity is indeed a very sensible defense.
Accountability could play a much bigger role in getting software producers to implement systems that are more secure and as an alternative to prevention for defending against attacks. Accountability, however, requires attribution of action. Current system development processes are weak here, as are our system designs. In both settings, forensics is...
An IT monoculture occurs when a large fraction of the computers in a computational ecosystem run the same software. By automatically creating diversity, the risks of deploying an IT monoculture are reduced so that it becomes difficult for a single malware vector to wreak havoc. Two of the articles in this collection discuss techniques to create this...
A workshop of experts was convened on October 30, 2007 to consider the trade-offs associated with platform homogeneity in complex distributed systems. There were 18 speakers from industry and academia, and an additional 7 observers from AFCIO and AFOSR. The conclusion was that deploying a monoculture would be an effective way to defend against con...
Network Neutrality requirements are being proposed to promote investment and innovation for the Internet. However, these requirements will likely affect the Internet's trustworthiness too, and there is little discussion about this. Trustworthiness experts must start contributing to the debate, drawing on their expertise about how to build systems that resist a...
Properties, which have long been used for reasoning about systems, are sets of traces. Hyperproperties, introduced here, are sets of properties. Hyperproperties can express security policies, such as secure information flow, that properties cannot. Safety and liveness are generalized to hyperproperties, and every hyperproperty is shown to be the in...
Consensus is an important building block for building replicated systems, and many consensus protocols have been proposed. In this paper, we investigate the building blocks of consensus protocols and use these building blocks to assemble a skeleton that can be configured to produce, among others, three well-known consensus protocols: Paxos, Chandra...
Device drivers typically execute in supervisor mode and thus must be fully trusted. This paper describes how to move them out of the trusted computing base, by running them without supervisor privileges and constraining their interactions with hardware devices. An implementation of this approach in the Nexus operating system executes drivers in use...
IEEE Security & Privacy's associate editor in chief discusses technology's role in identity fraud and identity theft.
Over the last decade, programming language techniques have been applied in non-obvious ways to building secure systems. This talk will not only survey that work in language-based security but show that the theoretical underpinnings of programming languages are a good place to start for developing a much needed foundation for software system securit...
Nexus is a new operating system that runs on computers equipped with tamperproof secure co-processors; it is designed to support the construction of trustworthy applications---applications where actions can be attributed with some measure of confidence and where trust assumptions are explicit.
The state machine approach is a general method for achieving fault tolerance and implementing decentralized control in distributed systems. This paper reviews the approach and identifies abstractions needed for coordinating ensembles of state machines. Implementations of these abstractions for two different failure models—Byzantine and fail-stop—ar...
MOBILE is an extension of the .NET Common Intermediate Language that permits certified In-Lined Reference Monitoring on Microsoft .NET architectures. MOBILE programs have the useful property that if they are well-typed with respect to a declared security policy, then they are guaranteed not to violate that security policy when executed. Thus,...
Techniques for reasoning about safety properties of concurrent programs are discussed and implications for program design noted. The relationship between interference freedom and synchronization mechanisms is described. The methods are illustrated with a number of examples, including partial correctness of a bank simulation, and mutual exclusion, n...
We use the appeal of simplicity and an aversion to complexity in selecting a method for handling partial functions in logic. We conclude that avoiding the undefined by using underspecification is the preferred choice.
A logic for reasoning about timing properties of concurrent programs is presented. The logic is based on proof outlines and can handle maximal parallelism as well as resource-constrained execution environments. The correctness proof for a mutual exclusion protocol that uses execution timings in a subtle way illustrates the logic in action.
This project investigated language-based approaches for enforcing security policies and proactive approaches for implementing trustworthy distributed services. One avenue of language-based work produced Cyclone, a type-safe variant of C. The Cyclone language retains the familiar syntax and semantics of C code, but provides the strong security guara...
A precise characterization of those security policies enforceable by program rewriting is given. This characterization exposes and rectifies problems in prior work on execution monitoring, yielding a more precise characterization of those security policies enforceable by execution monitors and a taxonomy of enforceable security policies. Some but n...
Tamper-proof coprocessors for secure computing are poised to become a standard hardware feature on future computers. Such hardware provides the primitives necessary to support trustworthy computing applications, that is, applications that can provide strong guarantees about their run time behavior.
APSS, a proactive secret sharing (PSS) protocol for asynchronous systems, is explained and proved correct. The protocol enables a set of secret shares to be periodically refreshed with a new, independent set, thereby thwarting mobile-adversary attacks. Protocols for asynchronous systems are inherently less vulnerable to denial-of-service attacks, w...
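The sketch below illustrates only the underlying refresh idea using plain Shamir sharing, not the APSS protocol itself: adding a fresh sharing of zero yields new, independent shares of the same secret, so shares from before and after a refresh cannot be usefully combined by a mobile adversary.

```python
# Hedged sketch of proactive refresh with Shamir secret sharing (toy field,
# no verification, not APSS): shares are refreshed by adding shares of a
# fresh random polynomial whose constant term is zero.

import random

P = 2**61 - 1  # prime modulus for toy arithmetic

def share(secret, threshold, n):
    """Split `secret` into n shares, any `threshold` of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    return {x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            for x in range(1, n + 1)}

def reconstruct(shares):
    """Lagrange interpolation at x = 0."""
    secret = 0
    for xi, yi in shares.items():
        num, den = 1, 1
        for xj in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

n, t = 5, 3
old_shares = share(1234, t, n)
zero_shares = share(0, t, n)                       # fresh sharing of zero
new_shares = {x: (old_shares[x] + zero_shares[x]) % P for x in old_shares}
subset = dict(list(new_shares.items())[:t])
assert reconstruct(subset) == 1234                 # secret preserved by refresh
```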
Information leakage traditionally has been defined to occur when uncertainty about secret data is reduced. This uncertainty-based approach is inadequate for measuring information flow when an attacker is making assumptions about secret inputs and these assumptions might be incorrect; such attacker beliefs are an unavoidable aspect of any satisfacto...
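A toy example in the spirit of the belief-based view (not the paper's formal model): the attacker's belief is a distribution over secrets, and observing an output revises it by Bayesian conditioning on the secrets consistent with that output.

```python
# Toy belief revision: an attacker holds a prior over a secret, observes a
# program's output, and keeps (renormalized) only the secrets consistent
# with the observation.

def revise(belief, program, observed_output):
    """belief: dict secret -> probability; program: secret -> output."""
    consistent = {s: p for s, p in belief.items()
                  if program(s) == observed_output}
    total = sum(consistent.values())
    return {s: p / total for s, p in consistent.items()}

# Secret is a 2-bit value; the program leaks its parity.
prior = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}
parity = lambda s: s % 2
posterior = revise(prior, parity, observed_output=1)
assert posterior == {1: 0.5, 3: 0.5}   # belief concentrates on odd secrets
```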
A protocol is given to take an ElGamal ciphertext encrypted under the key of one distributed service and produce the corresponding ciphertext encrypted under the key of another distributed service, but without the plaintext ever becoming available. Each distributed service comprises a set of servers and employs threshold cryptography to maintain it...
Successful attacks on computing infrastructures often involve failures of type safety. A major contribution of this grant has been the creation of type systems and type-checking algorithms for low-level languages in use today. In addition, "certifying compilation" was developed to eliminate the need to trust correctness of high-level language implem...
CorSSO is a distributed service for authentication in networks. It allows application servers to delegate client identity checking to combinations of authentication servers potentially residing in separate administrative domains. In CorSSO, authentication policies enable the system to tolerate expected classes of attacks and failures. A novel parti...
A method for making aspects of a computational model explicit in the formulas of a programming logic is given. The method is based on a new notion of environment—an environment augments the state transitions defined by a program's atomic actions rather than being interleaved with them. Two simple semantic principles are presented for extending a pr...
CorSSO is a distributed service for authentication in networks. It allows application servers to delegate client identity checking to combinations of authentication servers potentially residing in separate administrative domains. In CorSSO, authentication policies enable the system to tolerate expected classes of attacks and failures. A novel p...
Fault-tolerance and attack-tolerance are crucial for implementing a trustworthy service. An emerging thread of research investigates interactions between fault-tolerance and attack-tolerance---specifically, the coupling of replication with threshold cryptography for use in environments satisfying weak assumptions. This coupling yields a new paradig...
Much about our computing systems has changed since reference monitors were first introduced, 30 years ago. Reference monitors haven't---at least, until recently---but new forms of execution monitoring are now possible, largely due to research done in the formal methods and programming languages communities. This talk will discuss these new approach...
Chain replication is a new approach to coordinating clusters of fail-stop storage servers. The approach is intended for supporting large-scale storage services that exhibit high throughput and availability without sacrificing strong consistency guarantees. Besides outlining the chain replication protocols themselves, simulation experiments ex...
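An illustrative sketch of chain replication's data path, with assumed names and no failure handling: updates enter at the head and are forwarded toward the tail, and reads are answered by the tail, so a read reflects only updates that every replica already stores.

```python
# Toy chain-replication data path: updates propagate head -> tail; reads are
# served by the tail. Failure handling and reconfiguration are omitted.

class Replica:
    def __init__(self):
        self.store = {}
        self.next = None            # successor in the chain, if any
    def update(self, key, value):
        self.store[key] = value
        if self.next:               # forward toward the tail
            self.next.update(key, value)
    def read(self, key):
        return self.store.get(key)

head, middle, tail = Replica(), Replica(), Replica()
head.next, middle.next = middle, tail

head.update("x", 1)                 # clients send updates to the head
assert tail.read("x") == 1          # clients send reads to the tail
```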
A method for automated analysis of fault-tolerance of distributed systems is presented. It is based on a stream (or data-flow) model of distributed computation. Temporal (ordering) relationships between messages received by a component on different channels are not captured by this model. This makes the analysis more efficient and forces the use of conser...
"f) Least privilege: Every program and every user of the system should operate using the least set of privileges necessary to complete the job. Primarily, this principle limits the damage that can result from an accident or error. It also reduces the number of potential interactions among privileged programs to the minimum for correct operation, so...
SASI (Security Automata SFI Implementation) enforces security policies by modifying object code for a target system before that system is executed. The approach has been prototyped for two rather different machine architectures: Intel x86 and Java JVML. Details of these prototypes and some generalizations about the SASI approach are discussed. 1.
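To illustrate the kind of policy a security automaton enforces (this is a hand-written monitor, not SASI's object-code rewriting), consider a policy forbidding a network send after a file has been read.

```python
# Minimal security-automaton monitor in the spirit of execution monitoring:
# the monitor tracks a state and halts the target the moment the next event
# has no allowed transition. Policy: no send after a file read.

TRANSITIONS = {
    ("start", "read_file"): "has_read",
    ("start", "send"): "start",
    ("has_read", "read_file"): "has_read",
    # ("has_read", "send") is absent: that event violates the policy.
}

def monitor(events):
    state = "start"
    for e in events:
        if (state, e) not in TRANSITIONS:
            raise RuntimeError(f"policy violation on event {e!r}")
        state = TRANSITIONS[(state, e)]
    return state

monitor(["send", "read_file"])             # allowed prefix
try:
    monitor(["read_file", "send"])          # read then send: rejected
except RuntimeError:
    pass
```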
NAP, a detection and recovery based scheme for implementing fault-tolerant itinerant computations, is presented. We give the semantics for the scheme and describe a protocol that implements NAP in TACOMA.
COCA is a fault-tolerant and secure online certification authority that has been built and deployed both in a local area network and in the Internet. Extremely weak assumptions characterize environments in which COCA's protocols execute correctly: no assumption is made about execution speed and message delivery delays; channels are expected to exhi...
A new class of gossip protocols to diffuse updates securely is presented. The protocols rely on annotating updates with the path along which they travel. To avoid a combinatorial explosion in the number of annotated updates, rules are employed to choose which updates to keep. Different sets of rules lead to different protocols. Results of simulated...
The Internet is seeing a rapid increase in on-line newspapers and advertising for new products and sales. Yet only primitive mechanisms are available to help users discover and obtain that subset of these news items likely to be of interest. Current search engines are really only a first step. For locating news providers, word-of-mouth and mass maili...
APSS, a proactive secret sharing (PSS) protocol for asynchronous systems, is derived and proved correct. A PSS protocol enables a set of secret shares to be periodically refreshed with a new, independent set, thereby thwarting so-called mobile adversary attacks. APSS tolerates certain attacks that PSS protocols for synchronous systems cannot, becau...
COCA is a fault-tolerant and secure on-line certification authority that has been built and deployed both in a local area network and in the Internet. Replication is used to achieve availability; proactive recovery with threshold cryptography is used for digitally signing certificates in a way that defends against mobile adversaries which attack, c...
For seven years, the Tacoma project has investigated the design and implementation of software support for mobile agents. A series of prototypes has been developed, with experiences in distributed applications driving the effort. This paper describes the evolution of these Tacoma prototypes, what primitives each supports, and how the primitives are...
The views, opinions, and/or findings contained in this report are those of the authors and should not be construed as an official Department of Defense position, policy, or decision.
Part of the Advanced Automation System (AAS) for air-traffic control is a protocol to permit flight hand-off from one air-traffic controller to another. The protocol must be fault-tolerant and, therefore, is subtle---an ideal candidate for the application of formal methods. This paper describes a formal method for deriving fault-tolerant protocols...
A logic for reasoning about timing properties of concurrent programs is presented. The logic is based on Hoare-style proof outlines and can handle maximal parallelism as well as certain resource-constrained execution environments. The correctness proof for a mutual exclusion protocol that uses execution timings in a subtle way illustrates the logic...
Inference rule "substitution of equals for equals" has been formalized in terms of simple substitution (which performs a replacement even though a free occurrence of a variable is captured), contextual substitution (which prevents such capture), and function application. We show that in connection with pure first-order predicate calculus, the funct...
The use of weakest-precondition predicate transformers in the derivation of sequential, process-control software is discussed.
A rigorous, automated approach to analyzing fault-tolerance of distributed systems is presented. The method is based on a stream model of computation that incorporates approximation mechanisms. One application is described: a protocol for fault-tolerant moving agents.
This paper gives a method for proving that a program satisfies a temporal property that has been specified in terms of Büchi automata. The method permits extraction of proof obligations for a property formulated as the Boolean combination of properties, each of which is specified by a deterministic Büchi automaton, directly from the individual auto...
This paper reports experiences in addressing the network-software installation-problem for TACOMA. However, we believe that the techniques employed have utility in other situations as well. The next section gives an overview of TACOMA and the applications it currently supports. Section 3 describes a WWW-based scheme for avoiding software installati...
This paper discusses one aspect of the problem---implementing fault-tolerance without specialized hardware
Most trace-based proof systems for networks of processes are known to be incomplete. Extensions to achieve completeness are generally complicated and cumbersome. In this paper, a simple trace logic is defined and two examples are presented to show its inherent incompleteness. Surprisingly, both examples consist of only one process, indicating that...
Programming a computer system that is subject to failures is a difficult task. A malfunctioning processor might perform arbitrary and spontaneous state transformations, instead of the transformations specified by the programs it executes. Thus, even a correct program cannot be counted on to implement a desired input-output relation whe...
One way to implement a fault-tolerant service is the primary-backup or primary-copy approach [1]. With this approach, a service is implemented by a collection of servers. One server is designated as the primary; the others are called backups. Clients send requests to the primary and any responses to requests come from the primary. If th...
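A minimal sketch of the primary-backup structure just described (assumed names, no failure detection or failover shown): clients talk only to the primary, which applies each update and forwards it to the backups before replying.

```python
# Toy primary-backup sketch: the client sends requests to the primary, which
# applies each update locally and forwards it to every backup before
# responding. Failover to a backup is omitted.

class Server:
    def __init__(self):
        self.store = {}
    def apply(self, key, value):
        self.store[key] = value

class Primary(Server):
    def __init__(self, backups):
        super().__init__()
        self.backups = backups
    def handle_request(self, key, value):
        self.apply(key, value)
        for b in self.backups:      # keep backups up to date
            b.apply(key, value)
        return "ok"                 # responses come from the primary

backups = [Server(), Server()]
primary = Primary(backups)
assert primary.handle_request("x", 42) == "ok"
assert all(b.store["x"] == 42 for b in backups)
```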
An Inexact Agreement protocol allows processors that each have a value approximating v to compute new values that are closer to each other and close to v. Two fault-tolerant protocols for Inexact Agreement are described. As long as fewer than 1/3 of the processors are faulty, the protocols give the required convergence; they also permit iteration a...
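A sketch of one averaging round in the spirit of inexact agreement (not the paper's protocols): trimming the f smallest and f largest reported values bounds the influence of up to f faulty processors on the new value.

```python
# One fault-masking averaging round: each processor collects all reported
# values, discards the f lowest and f highest, and averages the remainder.

def converge_round(values, f):
    trimmed = sorted(values)[f:len(values) - f]
    return sum(trimmed) / len(trimmed)

# Four correct processors near v = 10.0 plus one faulty report; with f = 1
# the faulty value cannot pull the result far from v.
reported = [9.9, 10.0, 10.1, 10.2, 1000.0]
new_value = converge_round(reported, f=1)
assert 9.9 <= new_value <= 10.2
```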
A method for verifying hybrid systems is given. Such systems involve state components whose values are changed by continuous (physical) processes. The verification method is based on proving that only those executions that satisfy constraints imposed by an environment also satisfy the property of interest. A suitably expressive logic then allows th...
Processes that roam a network---agents---present new technical challenges. Two are discussed here. The first problem, which arises in connection with implementing fault-tolerant agents, concerns how a voter authenticates the agents comprising its electorate. The second is to characterize security policies that are enforceable as well as approaches...
Citations
... For ∀∃ our automata formulation (in Sect. 7) shows that to prove a ∀∃-spec R ∃ ≈> S, instead of explicitly constructing a positive witness of the existential [11,38] one can instead filter out the non-witnesses on the right, so what's left satisfies the corresponding ∀∀-spec R ≈> S. All for some! In practice, many verification tools work directly with verification conditions, although some interactive tools are directly based on a HL (e.g., [16]). ...
... Here, as in most frameworks regarding insider threat, it is the user, their psychology and their actions which are the focus of attention rather than the technology -all the surveys referenced here with the exception of Salem et al. [31] consider psychological drivers for at least malicious behaviour, and for some work [18,42] it is the primary focus. Recently, work [19] has also looked at various methods of ingraining mitigation strategies into corporate culture, such as double-checking work and pair programming, but this obviously comes at a larger cost in human resources. ...
... Pucella and Schneider [174] investigate the effectiveness of defenses based on software diversity, in the context of memory safety. Their main result is to characterize such defenses as probabilistically equivalent to strong typing, which would guarantee memory safety for buffers, thus reducing the security of the defense mechanism to the strength of strong typing. ...
... To precisely describe a dynamic policy, RIF [36], [35] uses a reclassification relation to associate label changes with program outputs. While this approach is highly expressive, writing down the correct relation with regard to numerous possible outputs is arguably a time-consuming and error-prone task. ...
... To protect against this, more data can be abstracted, i.e., the AP locations can be anonymized as well (while retaining category, floor, and relative information). Yet, it still needs to be established who has the privileges to query for information and what the queries can be [68]. Moreover, campuses can adapt existing access policies for student records to protect student collocation patterns. ...
... To support label introspection securely, our calculi protect each label with the label itself. Kozyri et al. (2019) generalizes this mechanism to chains of labels of arbitrary length (where each label defines the sensitivity of its predecessor) and study the tradeoffs between permissiveness and storage. ...
... It has several mechanisms, such as access control and information flow control, to prevent untrusted nodes from violating integrity and confidentiality. Other language-based information flow control approaches [56,86,108] have also been proposed. ...
... Such carbon footprint inflation can also be achieved by violating the integrity of the sustainability metrics (e.g., code or data) [19], [52] or by manipulating the system traces and logs-the evidence trail of carbon consumption [45] by the compromised VMs or malicious processes in data centers. Similarly, compromised data center providers may report false carbon footprints to the regulators [14] to evade high CO2 taxes or regulations. ...
Reference: Verifiable Sustainability in Data Centers
... As a brand-new data storage, distribution, and management mechanism, blockchain allows users to participate in the calculation and storage of data and verify the authenticity of data with each other, to achieve reliable transfer of data and value in a decentralized and untrusted manner. With the continuous exploration and application of blockchain technology in various industries, its application model has become more and more mature, and a typical blockchain technology application architecture composed of storage layer, protocol layer, extension layer, and application layer has gradually formed consensus, as shown in Fig. 1 [6,7]. ...
... Computer scientist Fred Schneider describes the practical, political, and economic reasons why available state of the art technologies to defend against cyberattacks have not been, and are unlikely to be fully deployed. 4 He raises the very interesting question of trade-offs between the state's interest in protecting citizens and corporations against cyberattacks and the state's interest in surveillance of the same citizens, as well as others for national security reasons. 5 We (Johns and Riles), professors of law and Far East legal studies respectively, describe the limits of two key modalities of thinking about cybersecurity, which we term the "bunker" and the "vaccine." ...