Conference Paper

Iago attacks: Why the system call API is a bad untrusted RPC interface

Authors: Stephen Checkoway, Hovav Shacham

Abstract

In recent years, researchers have proposed systems for running trusted code on an untrusted operating system. Protection mechanisms deployed by such systems keep a malicious kernel from directly manipulating a trusted application's state. Under such systems, the application and kernel are, conceptually, peers, and the system call API defines an RPC interface between them. We introduce Iago attacks, attacks that a malicious kernel can mount in this model. We show how a carefully chosen sequence of integer return values to Linux system calls can lead a supposedly protected process to act against its interests, and even to undertake arbitrary computation at the malicious kernel's behest. Iago attacks are evidence that protecting applications from malicious kernels is more difficult than previously realized.

... Checkoway et al. [4] first demonstrated that an untrusted OS can perform so-called Iago attacks to compromise legacy applications by supplying maliciously crafted pointers or lengths as the return value of a traditionally trusted system call like malloc(). These attacks are closely related to a small subset of the vulnerabilities described in this work, specifically attack vector #9, which exploits the fact that pointers or buffer sizes returned by untrusted ocalls may not be properly sanitized (cf. ...
... At this point, all input buffer pointers are validated to fall completely outside the enclave, before being copied ② from untrusted shared memory to a sufficiently sized shadow buffer allocated on the enclave heap. Finally, the edger8r bridge transfers control ③ to the code written by the application developer, which can now safely operate ④ on the cloned buffer in enclave memory. A symmetrical path is followed when returning or performing ocalls to the untrusted code outside the enclave. ...
... Attack vector #9 (Iago): Pointers or sizes returned through ocalls should be scrutinized [4]. ▷ Understood, but still prevalent in research libOSs that shield system calls; one instance in a production SDK. ...
Conference Paper
Full-text available
This paper analyzes the vulnerability space arising in Trusted Execution Environments (TEEs) when interfacing a trusted enclave application with untrusted, potentially malicious code. Considerable research and industry effort has gone into developing TEE runtime libraries with the purpose of transparently shielding enclave application code from an adversarial environment. However, our analysis reveals that shielding requirements are generally not well-understood in real-world TEE runtime implementations. We expose several sanitization vulnerabilities at the level of the Application Binary Interface (ABI) and the Application Programming Interface (API) that can lead to exploitable memory safety and side-channel vulnerabilities in the compiled enclave. Mitigation of these vulnerabilities is not as simple as ensuring that pointers are outside enclave memory. In fact, we demonstrate that state-of-the-art mitigation techniques such as Intel's edger8r, Microsoft's "deep copy marshalling", or even memory-safe languages like Rust fail to fully eliminate this attack surface. Our analysis reveals 35 enclave interface sanitization vulnerabilities in 8 major open-source shielding frameworks for Intel SGX, RISC-V, and Sancus TEEs. We practically exploit these vulnerabilities in several attack scenarios to leak secret keys from the enclave or enable remote code reuse. We have responsibly disclosed our findings, leading to 5 designated CVE records and numerous security patches in the vulnerable open-source projects, including the Intel SGX-SDK, Microsoft Open Enclave, Google Asylo, and the Rust compiler.
... Contribution. Much prior attention has been devoted to safe data and control exchange at the enclave-OS interface (e.g., for Iago attacks [36]). Our main contribution is to highlight a third missing defense primitive at the enclave-OS interface: ensuring atomicity in re-entrant enclave code. ...
... Since EEXIT and EENTER do not take care of the register state, the enclave code has to save the enclave CPU context on its private stack and restore it when returning from the ocall later. The ocall interface has been the subject of much scrutiny in prior work, largely due to the risk of Iago attacks [36,44,58,61]. Asynchronous Entry/Exits. ...
... Since the introduction of Intel SGX, there has been abundant work on the security of the synchronous interfaces between untrusted software and SGX enclaves. Previous work has discovered that an attacker can compromise the confidentiality and integrity of an enclave by providing malicious system call return values, referred to as Iago attacks [36]. Eliminating such threats requires enclave software to carefully scrutinize system call return values passed into an enclave [30,57,60], with the aid of formal verification [58] or software testing techniques [40]. ...
Preprint
Exceptions are a commodity hardware functionality which is central to multi-tasking OSes as well as event-driven user applications. Normally, the OS assists the user application by lifting the semantics of exceptions received from hardware to program-friendly user signals and exception handling interfaces. However, can exception handlers work securely in user enclaves, such as those enabled by Intel SGX, where the OS is not trusted by the enclave code? In this paper, we introduce a new attack called SmashEx which exploits the OS-enclave interface for asynchronous exceptions in SGX. It demonstrates the importance of a fundamental property of safe atomic execution that is required on this interface. In the absence of atomicity, we show that asynchronous exception handling in SGX enclaves is complicated and prone to re-entrancy vulnerabilities. Our attacks do not assume any memory errors in the enclave code, side channels, or application-specific logic flaws. We concretely demonstrate exploits that cause arbitrary disclosure of enclave private memory and code-reuse (ROP) attacks in the enclave. We show reliable exploits on two widely-used SGX runtimes, Intel SGX SDK and Microsoft Open Enclave, running OpenSSL and cURL libraries respectively. We tested a total of 14 frameworks, including Intel SGX SDK and Microsoft Open Enclave, 10 of which are vulnerable. We discuss how the vulnerability manifests on both SGX1-based and SGX2-based platforms. We present potential mitigation and long-term defenses for SmashEx.
... Memory arbiter. We add 15 registers to the memory arbiter: one for each of the 13 enclaves, one for the SM, and one for the firmware. Each register defines the memory region assigned to each execution context. ...
... CURE provides cryptographic primitives in the user-space enclaves to encrypt and integrity-protect data shared with the OS. However, using OS services over syscalls always carries a residual risk of leaking metadata [2,77] or of receiving malicious return values from the OS [13]. In user-space enclaves, these attacks must be mitigated at the application level inside the enclave, e.g., by using data-oblivious algorithms [2,68] or by verifying the return values [13]. ...
... However, using OS services over syscalls always carries a residual risk of leaking metadata [2,77] or of receiving malicious return values from the OS [13]. In user-space enclaves, these attacks must be mitigated at the application level inside the enclave, e.g., by using data-oblivious algorithms [2,68] or by verifying the return values [13]. None of these attacks poses a threat to kernel-space enclaves, since all resources are handled by the enclave RT. ...
Preprint
Security architectures providing Trusted Execution Environments (TEEs) have been an appealing research subject for a wide range of computer systems, from low-end embedded devices to powerful cloud servers. The goal of these architectures is to protect sensitive services in isolated execution contexts, called enclaves. Unfortunately, existing TEE solutions suffer from significant design shortcomings. First, they follow a one-size-fits-all approach offering only a single enclave type, however, different services need flexible enclaves that can adjust to their demands. Second, they cannot efficiently support emerging applications (e.g., Machine Learning as a Service), which require secure channels to peripherals (e.g., accelerators), or the computational power of multiple cores. Third, their protection against cache side-channel attacks is either an afterthought or impractical, i.e., no fine-grained mapping between cache resources and individual enclaves is provided. In this work, we propose CURE, the first security architecture, which tackles these design challenges by providing different types of enclaves: (i) sub-space enclaves provide vertical isolation at all execution privilege levels, (ii) user-space enclaves provide isolated execution to unprivileged applications, and (iii) self-contained enclaves allow isolated execution environments that span multiple privilege levels. Moreover, CURE enables the exclusive assignment of system resources, e.g., peripherals, CPU cores, or cache resources to single enclaves. CURE requires minimal hardware changes while significantly improving the state of the art of hardware-assisted security architectures. We implemented CURE on a RISC-V-based SoC and thoroughly evaluated our prototype in terms of hardware and performance overhead. CURE imposes a geometric mean performance overhead of 15.33% on standard benchmarks.
... These trade-offs are orthogonal to security concerns pointed out in prior works (cf. Iago attacks [29], side channels [88]). We observe that these trade-offs are somewhat fundamental and rooted in 5 specific restrictions imposed by the SGX design, which create sweeping incompatibility with multi-threading, memory mapping, synchronization, signal-handling, shared memory, and other commodity OS abstractions. ...
... We refer to this as a two-copy mechanism. Thus, R1 breaks functionality (e.g., system calls, signal handling, futex), introduces non-transparency (e.g., explicitly synchronizing both copies), and introduces security gaps (e.g., TOCTOU attacks [29,40]). R2. ...
... Building an end-to-end secure sandbox on top of Ratel requires additional security mechanisms, which are common to other systems and are previously known. These mechanisms include encryption/decryption of external file or I/O content [15,46,69,77], sanitization of OS inputs to prevent Iago attacks [29,48,74,79], defenses against known side-channel attacks [21,45,63,70,71], additional attestation or integrity of dynamically loaded/generated code [41][42][43]83], and so on. These are important but largely orthogonal to our focus. ...
Preprint
Enclaves, such as those enabled by Intel SGX, offer a hardware primitive for shielding user-level applications from the OS. While enclaves are a useful starting point, code running in the enclave requires additional checks whenever control or data is transferred to/from the untrusted OS. The enclave-OS interface on SGX, however, can be extremely large if we wish to run existing unmodified binaries inside enclaves. This paper presents Ratel, a dynamic binary translation engine running inside SGX enclaves on Linux. Ratel offers complete interposition, the ability to interpose on all executed instructions in the enclave and monitor all interactions with the OS. Instruction-level interposition offers a general foundation for implementing a large variety of inline security monitors in the future. We take a principled approach in explaining why complete interposition on SGX is challenging. We draw attention to 5 design decisions in SGX that create fundamental trade-offs between performance and ensuring complete interposition, and we explain how to resolve them in favor of complete interposition. To illustrate the utility of the Ratel framework, we present the first attempt to offer binary compatibility with existing software on SGX. We report that Ratel offers binary compatibility with over 200 programs we tested, including micro-benchmarks and real applications such as Linux shell utilities. Runtimes for two programming languages, namely Python and R, tested with standard benchmarks work out-of-the-box on Ratel without any specialized handling.
... We refer to this as a two-copy mechanism. Thus, R1 breaks functionality (e.g., system calls, signal handling, futex), introduces nontransparency (e.g., explicitly synchronizing both copies), and introduces security gaps (e.g., TOCTOU attacks [18], [29]). ...
... Many challenges are common between the design of RA-TEL presented here and other systems. These include encryption/decryption of external file or I/O content [3], [35], [61], [69], sanitization of OS inputs to prevent Iago attacks [18], [37], [66], [71], defenses against known sidechannel attacks [8], [34], [56], [62], [63], additional attestation or integrity of dynamically loaded/generated code [30]- [32], [72], and so on. These are important but largely orthogonal to the focus of this work. ...
... As in RATEL, other approaches to SGX compatibility eventually have to use OCALLs, ECALLs, and syscalls to exchange information between the enclave and the untrusted software. This interface is known to be vulnerable [18], [71]. Several shielding systems for file [15], [66] and network IO [5] provide specific mechanisms to safeguard the OS interface against these attacks. ...
Preprint
Enclaves, such as those enabled by Intel SGX, offer a powerful hardware isolation primitive for application partitioning. To become universally usable on future commodity OSes, enclave designs should offer compatibility with existing software. In this paper, we draw attention to 5 design decisions in SGX that create incompatibility with existing software. These represent concrete starting points, we hope, for improvements in future TEEs. Further, while many prior works have offered partial forms of compatibility, we present the first attempt to offer binary compatibility with existing software on SGX. We present Ratel, a system that enables a dynamic binary translation engine inside SGX enclaves on Linux. Through the lens of Ratel, we expose the fundamental trade-offs between performance and complete mediation on the OS-enclave interface, which are rooted in the aforementioned 5 SGX design restrictions. We report on an extensive evaluation of Ratel on over 200 programs, including micro-benchmarks and real applications such as Linux utilities.
... When trusted application code in a TEE computes over results produced by an untrusted kernel and hypervisor [1,2], it is difficult at best to reason about the secrecy and integrity properties achieved by the overall ensemble: to establish, despite the wide breadth of the Linux system call interface, that in-enclave code is immune to Iago attacks [3]. In this paper, we argue that an attractive use case for TEEs is tamper-proof audit: the TEE executes a trusted observer (TO) that allows efficient offline validation that application code running outside the TEE has executed as expected. ...
... First, it is difficult to know in practice whether application code that consumes results from untrusted code (e.g., system call return values, as determined by the untrusted kernel and hypervisor) will produce correct results. Checkoway and Shacham catalog a broad range of Iago attacks that untrusted kernels can mount on trusted application code running above them; avenues of attack include system calls that manipulate virtual memory, conduct I/O, and provide access to hardware-generated entropy and time [3]. Some of the more tantalizing uses of TEEs specifically target placing legacy application code in enclaves [1,2]. ...
... Finally, we trust the development environment, compilation toolchain, Intel attestation infrastructure, and SGX libraries. Under this threat model, an adversary can mount Iago attacks [3]. Iago attacks exploit an application's trust in the OS's interface to subvert the application's execution. ...
Conference Paper
When trusted application code in a TEE computes over results produced by an untrusted kernel and hypervisor [1, 2], it is difficult at best to reason about the secrecy and integrity properties achieved by the overall ensemble: to establish, despite the wide breadth of the Linux system call interface, that in-enclave code is immune to Iago attacks [3]. In this paper, we argue that an attractive use case for TEEs is tamper-proof audit: the TEE executes a trusted observer (TO) that allows efficient offline validation that application code running outside the TEE has executed as expected. We describe a TO design that inherently does not require any trust of system call results (and thus of the kernel or hypervisor), and DOG, a prototype TO implementation for Intel SGX that upholds application execution integrity, even for applications that do not fit within today's SGX virtual memory limits, and incurs modest execution overhead.
... In our design, data pages are shared between the invisible code and its untrusted counterpart, which may open our system to Iago attacks [15] or Blind ROP [12]. The untrusted OS could maliciously alter code pointers stored in data pages and attempt to jump to arbitrary locations in the invisible code. ...
... Iago attacks. As part of these types of attacks an untrusted OS maliciously alters its reply to the trusted OS in order to affect its security [15]. Memory mappings could be maliciously altered by the untrusted OS by introducing a double mapping between data and a page containing invisible code. ...
... Out of scope: We only consider restricting the host-enclave interactions to specified interfaces. High-level attacks, such as the Iago attack [12], that exploit the specified interfaces are out of scope, because we regard our work as a first step towards establishing mutual distrust between the enclave and its host application. ...
... High-level attacks. SGXCapsule focuses on eliminating the enclave-host asymmetries so that host-enclave interaction can be restricted to the specified interfaces. High-level attacks [12,21] based on the specified interaction interfaces (e.g., ECALL/OCALL in the Intel SGX SDK [3]) are not considered. Nevertheless, SGXCapsule is the foundation for defense mechanisms that target such high-level attacks. ...
Preprint
Since its debut, SGX has been used in many applications, e.g., secure data processing. However, previous systems usually assume a trusted enclave and ignore the security issues caused by an untrusted enclave. For instance, a vulnerable (or even malicious) third-party enclave can be exploited to attack the host application and the rest of the system. In this paper, we propose an efficient mechanism to confine an untrusted enclave's behaviors. The threats of an untrusted enclave come from the enclave-host asymmetries. They can be abused to access arbitrary memory regions of the host application, jump to any code location after leaving the enclave, and forge the stack register to manipulate the saved context. Our solution breaks such asymmetries and establishes mutual distrust between the host application and the enclave. It leverages Intel MPK for efficient memory isolation and the x86 single-step debugging mechanism to capture the event when an enclave is exiting. It then performs an integrity check on the jump target and the stack pointer. We have solved two practical challenges and implemented a prototype system. The evaluation with multiple micro-benchmarks and representative real-world applications demonstrated the efficiency of our system, with less than 4% performance overhead.
... Seccomp-bpf [44] interposes on system call requests by filtering system calls by ID and arguments. Beyond filtering, a malicious operating system can manipulate the return values of system calls to leak the application's secrets or break its execution integrity, an attack known as the Iago attack [40]. To prevent Iago attacks, return values also need to be validated [56,62]. ...
... STOCKADE's HW-based communication effectively reduces communication cost compared to SW-based approaches. As a consequence, Stockade is up to 1.38× faster in reads and 1.16× faster in writes compared to Baseline when the chunk size ...
Preprint
Full-text available
The widening availability of hardware-based trusted execution environments (TEEs) has been accelerating the adoption of new applications using TEEs. Recent studies showed that a cloud application consists of multiple distributed software modules provided by mutually distrustful parties. The applications use multiple TEEs (enclaves) communicating through software-encrypted memory channels. Such an execution model requires bi-directional protection: protecting the rest of the system from the enclave module with sandboxing, and protecting the enclave module from third-party modules and the operating system. However, the current TEE model, such as Intel SGX, cannot efficiently represent such distributed sandbox applications. To overcome the lack of hardware support for sandboxed TEEs, this paper proposes an extended enclave model called Stockade, which supports distributed sandboxes hardened by hardware. Stockade proposes three new key techniques. First, it extends the hardware-based memory isolation in SGX to confine a user software module only within its enclave. Second, it proposes a trusted monitor enclave that filters and validates system calls from enclaves. Finally, it allows hardware-protected memory sharing between a pair of enclaves for efficient protected communication without software-based encryption. Using an emulated SGX platform with the proposed extensions, this paper shows that distributed sandbox applications can be effectively supported with small changes to SGX hardware.
... With the developing capabilities of adversaries and side-channel attacks, vulnerabilities of secure processors keep getting exploited, leaking the processor's private digital state (including private keys). A recent survey [14] has shown that Intel SGX has been susceptible to a wide range of attacks in recent years [15]-[43]. We may conclude that hardware isolation as implemented today for executing enclave code cannot guarantee privacy. ...
... While theoretically and technically intact, these hardware isolation primitives are not sufficient to prevent private data from leaking through side channels, i.e., other observable sources of information that a malicious adversary can use to extract sensitive information. In fact, in recent years it has been shown that Intel SGX is vulnerable to a wide range of side-channel attacks [15]-[43]. ...
Preprint
Full-text available
Recent years have witnessed a trend of secure processor design in both academia and industry. Secure processors with hardware-enforced isolation can be a solid foundation of cloud computation in the future. However, due to recent side-channel attacks, the commercial secure processors failed to deliver the promises of a secure isolated execution environment. Sensitive information inside the secure execution environment always gets leaked via side channels. This work considers the most powerful software-based side-channel attackers, i.e., an All Digital State Observing (ADSO) adversary who can observe all digital states, including all digital states in secure enclaves. Traditional signature schemes are not secure in ADSO adversarial model. We introduce a new cryptographic primitive called One-Time Signature with Secret Key Exposure (OTS-SKE), which ensures no one can forge a valid signature of a new message or nonce even if all secret session keys are leaked. OTS-SKE enables us to sign attestation reports securely under the ADSO adversary. We also minimize the trusted computing base by introducing a secure co-processor into the system, and the interaction between the secure co-processor and the attestation processor is unidirectional. That is, the co-processor takes no inputs from the processor and only generates secret keys for the processor to fetch. Our experimental results show that the signing of OTS-SKE is faster than that of Elliptic Curve Digital Signature Algorithm (ECDSA) used in Intel SGX.
... She can install a malicious kernel module to dump the /dev/mem device. Moreover, she can perform syscall parameter tampering (e.g., the Iago attack [17]) to modify the arguments of OS functions. She can also get access to sensitive data contained in the L1/L2/L3 cache via side-channel attacks [18], e.g., by exploiting timing and page faults [19]. ...
Article
Protecting data-in-use from privileged attackers is challenging. New CPU extensions (notably: Intel SGX) and cryptographic techniques (specifically: Homomorphic Encryption) can guarantee privacy even in untrusted third-party systems. HE allows sensitive processing on ciphered data. However, it is affected by i) dramatic ciphertext expansion, making HE unusable when bandwidth is narrow, and ii) unverifiable conditional variables, requiring off-premises support. Intel SGX allows sensitive processing in a secure enclave. Unfortunately, it is i) strictly bound to the hosting server, making SGX unusable when live migration of cloud VMs/containers is desirable, and ii) limited in terms of usable memory, which is in contrast with resource-consuming data processing. In this article, we propose the VIrtual Secure Enclave (VISE), an approach that effectively combines the two aforementioned techniques to overcome their limitations and ultimately make them usable in a typical cloud setup. VISE moves the execution of sensitive HE primitives (e.g., encryption) to the cloud in a remotely attested SGX enclave, and then performs sensitive processing on HE data, outside the enclave, leveraging all the memory resources available. We demonstrate that VISE meets the challenging security and performance requirements of a substantial application in the Industrial Control Systems domain. Our experiments prove the practicability of the proposed solution.
... To port legacy software into SGX, developers have to re-design the trust boundary in their code, make sure the security-sensitive part is adequately isolated, and define the ECALL and OCALL interfaces to bridge the trusted and untrusted parts of the software. If the partitioning is decided carelessly, the enclaves can still be breached by outside malicious entities through Iago attacks [8]. This makes developing and porting SGX programs a non-trivial software engineering challenge. ...
Preprint
The big data industry is facing new challenges as concerns about privacy leakage soar. One of the remedies to privacy breach incidents is to encapsulate computations over sensitive data within hardware-assisted Trusted Execution Environments (TEEs). Such TEE-powered software is called a secure enclave. Secure enclaves hold various advantages over competing privacy-preserving computation solutions. However, enclaves are much more challenging to build than ordinary software. The reason is that the development of TEE software must follow a restrictive programming model to make effective use of the strong memory encryption and segregation enforced by hardware. These constraints transitively apply to all third-party dependencies of the software. If these dependencies do not officially support TEE hardware, TEE developers have to spend additional engineering effort porting them. High development and maintenance cost is one of the major obstacles to adopting TEE-based privacy protection solutions in production. In this paper, we present our experience and achievements with regard to constructing and continuously maintaining a third-party library supply chain for TEE developers. In particular, we port a large collection of Rust third-party libraries to Intel SGX, one of the most mature trusted computing platforms. Our supply chain accepts upstream patches in a timely manner with SGX-specific security auditing. We have been able to maintain the SGX ports of 159 open-source Rust libraries with reasonable operational costs. Our work can effectively reduce the engineering cost of developing SGX enclaves for privacy-preserving data processing and exchange.
... Furthermore, while our current performance evaluation gives a first insight into best-and worst-case scenarios, everyday applications would provide a more realistic picture. Eventually, the resilience of SEVGuard's HCI should be examined against malicious hosts which falsify responses to the guest (e.g., via Iago attacks [43]). ...
Thesis
This thesis investigates novel anti-forensic techniques for hiding malicious activity and proposes counter strategies for conducting robust digital analysis through virtualization technology. We begin by surveying the current landscape of memory acquisition, a technique extensively used during forensic investigations. In order to evade analysis, malware nowadays incorporates sophisticated anti-forensics, which hinder the analysis process. We present advances in anti-forensics by introducing new methods for hiding memory from analysis tools to expand existing knowledge. The final part demonstrates analysis techniques that provide resilience against anti-forensics. First, we define a universal taxonomy of different methods for acquiring a system’s memory, many of which have proven to be vital against modern malware threats. Then, based on this taxonomy, we comprehensively survey the field of modern memory acquisition, abstracting from both Operating Systems (OSs) and specific hardware architectures. Finally, we unveil the limitations of today’s acquisition techniques and conclude that most approaches are prone to anti-forensics, enabling malware to subvert the analysis process and escape the investigation. In the second part, we introduce new approaches that hide memory from forensic applications, preventing analysts from accessing the content of specific regions. On the one hand, we manipulate the memory management subsystem of different OSs to alter the memory view of live forensic tools. In addition, we demonstrate different strategies to detect these subversion techniques, providing a possibility to improve respective tools. With Styx, on the other hand, we present a powerful rootkit technique that leverages hardware-based virtualization to counter even robust acquisition methods. Styx subverts the highly privileged hypervisor layer to take complete control over a system without introducing detectable modifications. 
Furthermore, to prevent acquisition software from noticing the rootkit's memory footprint, Styx locates itself in memory regions reserved for device mappings. As these regions are not always entirely consumed by devices, the resulting offcuts serve our rootkit as a perfect hiding spot. Furthermore, by simulating invalid address ranges which are not accessible to a processor, Styx deceives forensic tools with a tampered view of these leftovers. Finally, we demonstrate the design of anti-forensic-resilient systems which enable a forensically robust analysis through virtualization. We first present SEVGuard, which protects (forensic) applications from malicious threats operating at the highest privilege levels. Based on virtualization and encryption features of modern processor architectures, SEVGuard provides a secure execution environment that enforces confidentiality and integrity of existing applications by encrypting their memory and processor state. Instead of protecting an application, StealthProbes hides the analysis component from the examined target, giving analysts the chance to inspect its functionality without risking that the sample notices the investigation. Our system stealthily instruments an application's memory and hides these modifications, leveraging the latest virtualization features and exploiting cache incoherencies that arise from memory address translations. Furthermore, StealthProbes integrates a transparent function-level tracer for enabling deep insight into an application's runtime behavior. As a result, even programs that thwart analysis by enforcing code integrity are stealthily dissectable. To achieve a forensically sound investigation, the actual deployment or execution of a forensic method must not alter the state of the analyzed system.
With HyperLeech, we present a minimally invasive approach which uses Direct Memory Access (DMA) to stealthily deploy a forensic hypervisor through external Peripheral Component Interconnect Express (PCIe) hardware. The hypervisor transparently virtualizes the running target system, serving analysts as a stealthy and privileged execution layer for all kinds of forensic tasks. Without causing a notable impact on the target’s state, HyperLeech enables forensic methods to execute without the risk of destroying evidence or alerting malware.
... The shields apply at each location where an application would usually trust the operating system, such as when using sockets or writing files to disk. The shields perform sanity checks on data passed from the operating system to the enclave to prevent Iago attacks [24]. More specifically, these checks include bounds checks and checks for manipulated pointers. ...
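The pointer checks these shielding systems describe can be sketched as a small C helper that verifies an untrusted buffer lies entirely outside the enclave before it is cloned inside. The enclave range constants and the function name below are illustrative, not taken from any of the cited systems:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical enclave address range, fixed here for illustration only. */
#define ENCLAVE_BASE 0x10000000UL
#define ENCLAVE_SIZE 0x00100000UL

/* Returns 1 iff [ptr, ptr+len) lies entirely outside the enclave range.
 * The wrap-around check matters: a malicious OS can pick ptr and len so
 * that ptr + len overflows and appears to end below the enclave. */
static int outside_enclave(uintptr_t ptr, size_t len)
{
    if (len == 0)
        return 1;
    if (ptr + len < ptr)            /* pointer arithmetic wrapped */
        return 0;
    uintptr_t end = ptr + len;
    return end <= ENCLAVE_BASE || ptr >= ENCLAVE_BASE + ENCLAVE_SIZE;
}
```

Only after this predicate holds would a bridge copy the buffer into a shadow allocation on the enclave heap and operate on the clone.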
Preprint
Full-text available
Data-driven intelligent applications in modern online services have become ubiquitous. These applications are usually hosted in untrusted cloud computing infrastructure. This poses significant security risks since these applications rely on applying machine learning algorithms to large datasets which may contain private and sensitive information. To tackle this challenge, we designed secureTF, a distributed secure machine learning framework based on TensorFlow for the untrusted cloud infrastructure. secureTF is a generic platform to support unmodified TensorFlow applications, while providing end-to-end security for the input data, ML model, and application code. secureTF is built from the ground up on the security properties provided by Trusted Execution Environments (TEEs). However, it extends the trust of a volatile memory region (or secure enclave) provided by the single-node TEE to secure a distributed infrastructure required for supporting unmodified stateful machine learning applications running in the cloud. The paper reports on our experiences with the system design choices and the system deployment in production use cases. We conclude with the lessons learned from the limitations of our commercially available platform, and discuss open research problems for future work.
... Although side-channel attacks are out of scope of this work, it is worth mentioning that the underlying SCONE platform can protect against L1-based side-channel attacks [57] and is hardened against Iago attacks [17]. To mitigate the various variants of Spectre [42], we can use LLVM extensions, e.g., speculative load hardening [13], that prevent exploitable speculation. ...
Preprint
Full-text available
Trust is arguably the most important challenge for critical services both deployed as well as accessed remotely over the network. These systems are exposed to a wide diversity of threats, ranging from bugs to exploits, active attacks, rogue operators, or simply careless administrators. To protect such applications, one needs to guarantee that they are properly configured and securely provisioned with the "secrets" (e.g., encryption keys) necessary to preserve not only the confidentiality, integrity and freshness of their data but also their code. Furthermore, these secrets should not be kept under the control of a single stakeholder - which might be compromised and would represent a single point of failure - and they must be protected across software versions in the sense that attackers cannot get access to them via malicious updates. Traditional approaches for solving these challenges often use ad hoc techniques and ultimately rely on a hardware security module (HSM) as root of trust. We propose a more powerful and generic approach to trust management that instead relies on trusted execution environments (TEEs) and a set of stakeholders as root of trust. Our system, PALAEMON, can operate as a managed service deployed in an untrusted environment, i.e., one can delegate its operations to an untrusted cloud provider with the guarantee that data will remain confidential despite not trusting any individual human (even with root access) nor system software. PALAEMON addresses in a secure, efficient and cost-effective way five main challenges faced when developing trusted networked applications and services. Our evaluation on a range of benchmarks and real applications shows that PALAEMON performs efficiently and can protect secrets of services without any change to their source code.
... SGX-based Systems. Haven [56], Graphene [57,58] and Panoply [59] provide library OSes for SGX, which enable easier application porting and prevent Iago attacks [60]. OpenSGX [61] provides an open research framework for running SGX applications. ...
... By not protecting insensitive data, Ginseng reduces the overhead of protection. Using registers, Ginseng also protects sensitive data against Iago attacks [13], which compromise an application through manipulated system call return values. ...
... Iago attacks. SGX is vulnerable to Iago attacks [89] since it relies on the OS for ring-0 operations. CHANCEL prevents filesystem-related Iago attacks by providing in-memory filesystem functionality at runtime. ...
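One class of Iago-style return-value lies mentioned in these snippets — a length returned by an untrusted call that exceeds the buffer the caller supplied — can be rejected by a small sanitizing wrapper. This is a minimal sketch; the wrapper name is hypothetical and not from any cited system:

```c
#include <stddef.h>

/* Checked wrapper around the return value of an untrusted read-like
 * ocall or system call. A naive caller that trusts `ret` and passes it
 * to memcpy turns the kernel's lie into a buffer overflow; here any
 * length larger than the buffer we supplied is treated as an error. */
static long checked_read_result(long ret, size_t buf_len)
{
    if (ret < 0)
        return -1;                      /* propagate the error */
    if ((unsigned long)ret > buf_len)
        return -1;                      /* impossible length: the OS lied */
    return ret;
}
```

The same shape of check applies to returned pointers, offsets, and counts: validate against what the trusted side already knows before using the value.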
... Note that the rest of the toolchain (Coccinelle included) is not part of the TCB as the code includes compile-time checks that are able to detect invalid transformations. Finally, we must also assume that interfaces correctly check arguments and are free of confused deputy/Iago [14] situations. This is not an unreasonable assumption within the core FlexOS codebase. ...
Preprint
Full-text available
At design time, modern operating systems are locked in a specific safety and isolation strategy that mixes one or more hardware/software protection mechanisms (e.g. user/kernel separation); revisiting these choices after deployment requires a major refactoring effort. This rigid approach shows its limits given the wide variety of modern applications' safety/performance requirements, when new hardware isolation mechanisms are rolled out, or when existing ones break. We present FlexOS, a novel OS allowing users to easily specialize the safety and isolation strategy of an OS at compilation/deployment time instead of design time. This modular LibOS is composed of fine-grained components that can be isolated via a range of hardware protection mechanisms with various data sharing strategies and additional software hardening. The OS ships with an exploration technique helping the user navigate the vast safety/performance design space it unlocks. We implement a prototype of the system and demonstrate, for several applications (Redis/Nginx/SQLite), FlexOS' vast configuration space as well as the efficiency of the exploration technique: we evaluate 80 FlexOS configurations for Redis and show how that space can be probabilistically subset to the 5 safest ones under a given performance budget. We also show that, under equivalent configurations, FlexOS performs similarly or better than several baselines/competitors.
... In the current implementation, generic hypercalls such as domctl and sysctl consider only the first parameter used for specifying a subdivided operation. To prevent the Iago attack [48], we need to extend our implementation so as to consider more parameters. Also, it is necessary for one hypercall automaton to accept hypercall sequences issued by a group of multiple processes. ...
Article
Full-text available
In Infrastructure-as-a-Service (IaaS) clouds, remote users access provided virtual machines (VMs) via the management server. The management server is managed by cloud operators, but not all the cloud operators are trusted in semi-trusted clouds. They can execute arbitrary management commands to users’ VMs and redirect users’ commands to malicious VMs. We call the latter attack the VM redirection attack. The root cause is that the binding of remote users to their VMs is weak. In other words, it is difficult to enforce the execution of only users’ management commands to their VMs. In this paper, we propose UVBond for strongly binding users to their VMs to address this issue. UVBond boots user’s VM by decrypting its encrypted disk inside the trusted hypervisor. Then it issues a VM descriptor to securely identify that VM. To bridge the semantic gap between high-level management commands and low-level hypercalls, UVBond uses hypercall automata, which accept the sequences of hypercalls issued by commands. We have implemented UVBond in Xen and created hypercall automata for various management commands. Using UVBond, we confirmed that a VM descriptor and hypercall automata prevented insider attacks and that the overhead was not large in remote VM management.
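A hypercall automaton of the kind UVBond describes is essentially a deterministic automaton over the hypercall sequence a management command is allowed to issue. The sketch below invents a three-hypercall sequence for a hypothetical command; the hypercall IDs do not reflect Xen's real numbering:

```c
/* Illustrative hypercall IDs; real Xen hypercall numbers differ. */
enum hc { HC_OPEN = 1, HC_MAP = 2, HC_CLOSE = 3 };

/* Accepts exactly the sequence HC_OPEN, HC_MAP, HC_CLOSE that our
 * hypothetical "attach-disk" command would issue; any deviation,
 * reordering, or truncation is rejected. */
static int accepts(const int *seq, int n)
{
    int state = 0;
    for (int i = 0; i < n; i++) {
        switch (state) {
        case 0: state = (seq[i] == HC_OPEN)  ? 1 : -1; break;
        case 1: state = (seq[i] == HC_MAP)   ? 2 : -1; break;
        case 2: state = (seq[i] == HC_CLOSE) ? 3 : -1; break;
        default: state = -1; break;
        }
        if (state < 0)
            return 0;
    }
    return state == 3;   /* must end in the accepting state */
}
```

In a real deployment the automaton would also need to inspect hypercall parameters (as the snippet above notes for domctl/sysctl), not just the hypercall numbers.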
... In this paper, we refer to these attacks as cyber-physical attacks, which exhibit the following unique characteristics: (1) the attack surface via sensors is the link between cyber and physical domains; (2) the attackers target the transition process between physical and cyber domains; and (3) different from conventional cyber-domain attacks, these two types of attacks exploit both physical and cyber domains. Therefore, we do not include the following topics in this paper: (1) attacking other cyber components within the CPS, such as the CAN bus [15] [16] [17], electronic control unit (ECU) [18] [19], networks [17] [20] [21], realtime operating systems [22], and firmware [23] [24]; (2) using sensor properties to fingerprint devices [25] [26] [27] or physical objects [28] [29] [30]. ...
Article
With the emergence of low-cost smart and connected IoT devices, the area of cyber-physical security is becoming increasingly important. Past research has demonstrated new threat vectors targeting the transition process between the cyber and physical domains, where the attacker exploits the sensing system as an attack surface for signal injection or extraction of private information. Recently, there have been attempts to characterize an abstracted model for signal injection, but they primarily focus on the path of signal processing. This paper aims to systematize the existing research on security and privacy problems arising from the interaction of the cyber and physical worlds, in the context of broad CPS applications. The primary goals of the systematization are to (1) reveal the attack patterns and extract a general attack model of existing work, (2) understand possible new attacks, and (3) motivate development of defenses against the emerging cyber-physical threats.
... Compromised OS kernels can also contribute to such attacks (access to userspace vulnerabilities). The authors of [28] show that applications can be subverted by an OS kernel through forged system call return values. Other flaws, such as misconfigurations and additive dependencies, may be exploited through unprotected interfaces. ...
Thesis
In this thesis, we propose an approach for software-defined security in distributed clouds. More specifically, we show to what extent this programmability can contribute to the protection of distributed cloud services through the generation of secured unikernel images. These are instantiated as lightweight virtual machines whose attack surface is limited and whose security is driven by a security orchestrator. The contributions of this thesis are threefold. First, we present a logical architecture supporting the programmability of security mechanisms in a multi-cloud and multi-tenant context. It makes it possible to align and parameterize these mechanisms for cloud services whose resources are spread over several providers and tenants. Second, we introduce a method for generating secured unikernel images on the fly. It yields specific and constrained resources that integrate security mechanisms from the image generation phase onward. These images may be built in a reactive or proactive manner in order to address elasticity requirements. Third, we propose to extend the TOSCA orchestration language so that secured resources can be generated automatically, according to different security levels consistent with the orchestration. Finally, we detail a prototype and an extensive series of experiments used to evaluate the benefits and limits of the proposed approach.
... A secure enclave is a reverse sandbox: it protects the user-level software from being compromised by the environment: the operating system, the virtual machine manager, the BIOS (via SMM), and the hardware surrounding the CPU chip. Any of these may be malicious (like a hostile OS mounting Iago attacks [58], or hardware cold boot attacks [110]) or compromised (via an OS, VMM, or SMM vulnerability [75,283]). SGX allows clients to securely run software on untrusted servers maintained by a third party such as Amazon cloud computing, Microsoft Azure, or other cloud computing providers. ...
Thesis
A plethora of major security incidents---in which personal identifiers belonging to hundreds of millions of users were stolen---demonstrate the importance of improving the security of cloud systems. To increase security in the cloud environment, where resource sharing is the norm, we need to rethink existing approaches from the ground up. This thesis analyzes the feasibility and security of trusted execution technologies as the cornerstone of secure software systems, to better protect users' data and privacy. Trusted Execution Environments (TEEs), such as Intel SGX, have the potential to minimize the Trusted Computing Base (TCB), but they also introduce many challenges for adoption. Among these challenges are a TEE's significant impact on applications' performance and the non-trivial effort required to migrate legacy systems to run on these secure execution technologies. Other challenges include managing a trustworthy state across a distributed system and ensuring these individual machines are resilient to micro-architectural attacks. In this thesis, I first characterize the performance bottlenecks imposed by SGX and suggest optimization strategies. I then address two main adoption challenges for existing applications: managing permissions across a distributed system and scaling SGX's mechanism for proving authenticity and integrity. I then analyze the resilience of trusted execution technologies to speculative-execution micro-architectural attacks, which put cloud infrastructure at risk. This analysis revealed a devastating security flaw in Intel's processors which is known as Foreshadow/L1TF. Finally, I propose a new architectural design for out-of-order processors which defeats all known speculative execution attacks.
Article
Virtualization methods and techniques play an important role in the development of cloud infrastructures and their services. They enable the decoupling of virtualized resources from the underlying hardware, and facilitate their sharing amongst multiple users. They contribute to the building of elaborated cloud services that are based on the instantiation and composition of these resources. Different models may support such a virtualization, including virtualization based on type-I and type-II hypervisors, OS-level virtualization, and unikernel virtualization. These virtualization models pose a large variety of security issues, but also offer new opportunities for the protection of cloud services. In this article, we describe and compare these virtualization models, in order to establish a reference architecture of cloud infrastructure. We then analyze the security issues related to these models from the reference architecture, by considering related vulnerabilities and attacks. Finally, we point out different recommendations with respect to the exploitation of these models for supporting cloud protection.
Article
Applications that process sensitive data can be carefully designed and validated to be difficult to attack, but they are usually run on monolithic, commodity operating systems, which may be less secure. An OS compromise gives the attacker complete access to all of an application's data, regardless of how well the application is built. We propose a new system, Virtual Ghost, that protects applications from a compromised or even hostile OS. Virtual Ghost is the first system to do so by combining compiler instrumentation and run-time checks on operating system code, which it uses to create ghost memory that the operating system cannot read or write. Virtual Ghost interposes a thin hardware abstraction layer between the kernel and the hardware that provides a set of operations that the kernel must use to manipulate hardware, and provides a few trusted services for secure applications such as ghost memory management, encryption and signing services, and key management. Unlike previous solutions, Virtual Ghost does not use a higher privilege level than the kernel. Virtual Ghost performs well compared to previous approaches; it outperforms InkTag on five out of seven of the LMBench microbenchmarks with improvements between 1.3x and 14.3x. For network downloads, Virtual Ghost experiences a 45% reduction in bandwidth at most for small files and nearly no reduction in bandwidth for large files and web traffic. An application we modified to use ghost memory shows a maximum additional overhead of 5% due to the Virtual Ghost protections. We also demonstrate Virtual Ghost's efficacy by showing how it defeats sophisticated rootkit attacks.
Article
Full-text available
The Bitcoin network has offered a new way of securely performing financial transactions over the insecure network. Nevertheless, this ability comes with the cost of storing a large (distributed) ledger, which has become unsuitable for personal devices of any kind. Although the simplified payment verification (SPV) clients can address this storage issue, a Bitcoin SPV client has to rely on other Bitcoin nodes to obtain its transaction history and the current approaches offer no privacy guarantees to the SPV clients. This work presents T³, a trusted hardware-secured Bitcoin full client that supports efficient oblivious search/update for Bitcoin SPV clients without sacrificing the privacy of the clients. In this design, we leverage the trusted execution and attestation capabilities of a trusted execution environment (TEE) and the ability to hide access patterns of oblivious random access machine (ORAM) to protect SPV clients’ requests from potentially malicious nodes. The key novelty of T³ lies in the optimizations introduced to conventional ORAM, tailored for expected SPV client usages. In particular, by making a natural assumption about the access patterns of SPV clients, we are able to propose a two-tree ORAM construction that overcomes the concurrency limitation associated with traditional ORAMs. We have implemented and tested our system using the current Bitcoin Unspent Transaction Output (UTXO) Set. Our experiment shows that T³ is feasible to be deployed in practice while providing strong privacy and security guarantees to Bitcoin SPV clients.
Chapter
We present SEVGuard, a minimal virtual execution environment that protects the confidentiality of applications based on AMD’s Secure Encrypted Virtualization (SEV). Although SEV was primarily designed for the protection of VMs, we found a way to overcome this limitation and exclusively protect user mode applications. Therefore, we migrate the application into a hardware-accelerated VM and encrypt both its memory and register state. To avoid the overhead of a typical hypervisor, we built our solution on top of the plain Linux Kernel Virtual Machine (KVM) API. With the help of an advanced trapping mechanism, we fully support system and library calls from within the encrypted guest. Furthermore, we allow unmodified code to be transparently virtualized and encrypted by appropriate memory mappings. The memory needed for our minimal VM can be directly allocated within SEVGuard’s address space. We evaluated our execution environment regarding correctness and performance, confirming that SEVGuard can be practically used to protect existing legacy applications.
Article
InkTag is a virtualization-based architecture that gives strong safety guarantees to high-assurance processes even in the presence of a malicious operating system. InkTag advances the state of the art in untrusted operating systems in both the design of its hypervisor and in the ability to run useful applications without trusting the operating system. We introduce paraverification, a technique that simplifies the InkTag hypervisor by forcing the untrusted operating system to participate in its own verification. Attribute-based access control allows trusted applications to create decentralized access control policies. InkTag is also the first system of its kind to ensure consistency between secure data and metadata, ensuring recoverability in the face of system crashes.
Article
Full-text available
A secure voting machine design must withstand new attacks devised throughout its multi-decade service lifetime. In this paper, we give a case study of the long-term security of a voting machine, the Sequoia AVC Advantage, whose design dates back to the early 80s. The AVC Advantage was designed with promising security features: its software is stored entirely in read-only memory and the hardware refuses to execute instructions fetched from RAM. Nevertheless, we demonstrate that an attacker can induce the AVC Advantage to misbehave in arbitrary ways — including changing the outcome of an election — by means of a memory cartridge containing a specially-formatted payload. Our attack makes essential use of a recently-invented exploitation technique called return-oriented programming, adapted here to the Z80 processor. In return-oriented programming, short snippets of benign code already present in the system are combined to yield malicious behavior. Our results demonstrate the relevance of recent ideas from systems security to voting machine research, and vice versa. We had no access either to source code or documentation beyond that available on Sequoia's web site. We have created a complete vote-stealing demonstration exploit and verified that it works correctly on the actual hardware.
Conference Paper
Full-text available
We present Flicker, an infrastructure for executing security-sensitive code in complete isolation while trusting as few as 250 lines of additional code. Flicker can also provide meaningful, fine-grained attestation of the code executed (as well as its inputs and outputs) to a remote party. Flicker guarantees these properties even if the BIOS, OS and DMA-enabled devices are all malicious. Flicker leverages new commodity processors from AMD and Intel and does not require a new OS or VMM. We demonstrate a full implementation of Flicker on an AMD platform and describe our development environment for simplifying the construction of Flicker-enabled code.
Article
Full-text available
We introduce a system that eliminates the need to run programs in privileged process contexts. Using our system, programs run unprivileged but may execute certain operations with elevated privileges as determined by a configurable policy, eliminating the need for suid or sgid binaries. We present the design and analysis of the "Systrace" facility which supports fine-grained process confinement, intrusion detection, auditing and privilege elevation. It also facilitates the often difficult process of policy generation. With Systrace, it is possible to generate policies automatically in a training session or generate them interactively during program execution. The policies describe the desired behavior of services or user applications on a system call level and are enforced to prevent operations that are not explicitly permitted. We show that Systrace is efficient and does not impose significant performance penalties.
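The core of a Systrace-style policy check is a default-deny lookup: a system call is allowed only if the policy explicitly lists it. A minimal sketch in C, with an illustrative policy that is not taken from any real Systrace configuration:

```c
#include <string.h>

/* Illustrative allow-list; a real Systrace policy also constrains
 * syscall arguments (paths, sockets), not just syscall names. */
static const char *policy[] = { "read", "write", "exit", NULL };

/* Default deny: a syscall is permitted only if explicitly listed. */
static int permitted(const char *syscall_name)
{
    for (int i = 0; policy[i] != NULL; i++)
        if (strcmp(policy[i], syscall_name) == 0)
            return 1;
    return 0;
}
```

Default deny is the important design choice here: any system call the policy author forgot is blocked rather than silently allowed.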
Article
Full-text available
Many popular programs, such as Netscape, use untrusted helper applications to process data from the network. Unfortunately, the unauthenticated network data they interpret could well have been created by an adversary, and the helper applications are usually too complex to be bug-free. This raises significant security concerns. Therefore, it is desirable to create a secure environment to contain untrusted helper applications. We propose to reduce the risk of a security breach by restricting the program's access to the operating system. In particular, we intercept and filter dangerous system calls via the Solaris process tracing facility. This enabled us to build a simple, clean, user-mode implementation of a secure environment for untrusted helper applications. Our implementation has negligible performance impact, and can protect pre-existing applications.
Article
Commodity operating systems entrusted with securing sensitive data are remarkably large and complex, and consequently, frequently prone to compromise. To address this limitation, we introduce a virtual-machine-based system called Overshadow that protects the privacy and integrity of application data, even in the event of a total OS compromise. Overshadow presents an application with a normal view of its resources, but the OS with an encrypted view. This allows the operating system to carry out the complex task of managing an application's resources, without allowing it to read or modify them. Thus, Overshadow offers a last line of defense for application data. Overshadow builds on multi-shadowing, a novel mechanism that presents different views of "physical" memory, depending on the context performing the access. This primitive offers an additional dimension of protection beyond the hierarchical protection domains implemented by traditional operating systems and processor architectures. We present the design and implementation of Overshadow and show how its new protection semantics can be integrated with existing systems. Our design has been fully implemented and used to protect a wide range of unmodified legacy applications running on an unmodified Linux operating system. We evaluate the performance of our implementation, demonstrating that this approach is practical.
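Multi-shadowing's context-dependent views can be illustrated with a toy sketch: the same page yields plaintext to the application and ciphertext to the OS. XOR with a fixed key stands in for Overshadow's real memory encryption; all names here are invented for illustration:

```c
#include <stdint.h>
#include <stddef.h>

/* Who is performing the access. */
enum ctx { CTX_APP, CTX_OS };

/* Toy stand-in for real memory encryption -- NOT cryptographically
 * meaningful; it only shows the two-views idea. */
static const uint8_t KEY = 0x5A;

/* Produce the view of a page that a given context observes: the
 * application sees plaintext, the OS sees an encrypted shadow. */
static void view_page(const uint8_t *plain, uint8_t *out,
                      size_t n, enum ctx who)
{
    for (size_t i = 0; i < n; i++)
        out[i] = (who == CTX_APP) ? plain[i]
                                  : (uint8_t)(plain[i] ^ KEY);
}
```

The point of the primitive is that the mapping from "physical" page to contents is no longer a function of the address alone, but of the address plus the accessing context.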
Article
We introduce return-oriented programming, a technique by which an attacker can induce arbitrary behavior in a program whose control flow he has diverted, without injecting any code. A return-oriented program chains together short instruction sequences already present in a program’s address space, each of which ends in a “return” instruction. Return-oriented programming defeats the W⊕X protections recently deployed by Microsoft, Intel, and AMD; in this context, it can be seen as a generalization of traditional return-into-libc attacks. But the threat is more general. Return-oriented programming is readily exploitable on multiple architectures and systems. It also bypasses an entire category of security measures---those that seek to prevent malicious computation by preventing the execution of malicious code. To demonstrate the wide applicability of return-oriented programming, we construct a Turing-complete set of building blocks called gadgets using the standard C libraries of two very different architectures: Linux/x86 and Solaris/SPARC. To demonstrate the power of return-oriented programming, we present a high-level, general-purpose language for describing return-oriented exploits and a compiler that translates it to gadgets.
Article
Traditional operating systems limit the performance, flexibility, and functionality of applications by fixing the interface and implementation of operating system abstractions such as interprocess communication and virtual memory. The exokernel operating system architecture addresses this problem by providing application-level management of physical resources. In the exokernel architecture, a small kernel securely exports all hardware resources through a low-level interface to untrusted library operating systems. Library operating systems use this interface to implement system objects and policies. This separation of resource protection from management allows application-specific customization of traditional operating system abstractions by extending, specializing, or even replacing libraries. We have implemented a prototype exokernel operating system. Measurements show that most primitive kernel operations (such as exception handling and protected control transfer) are ten to 100 times faster than in Ultrix, a mature monolithic UNIX operating system. In addition, we demonstrate that an exokernel allows applications to control machine resources in ways not possible in traditional operating systems. For instance, virtual memory and interprocess communication abstractions are implemented entirely within an application-level library. Measurements show that application-level virtual memory and interprocess communication primitives are five to 40 times faster than Ultrix's kernel primitives. Compared to state-of-the-art implementations from the literature, the prototype exokernel system is at least five times faster on operations such as exception dispatching and interprocess communication.
Conference Paper
Random number generators (RNGs) are consistently a weak link in the secure use of cryptography. Routine cryptographic operations such as encryption and signing can fail spectacularly given predictable or repeated randomness, even when using good long-lived key material. This has proved problematic in prior settings when RNG implementation bugs, poor design, or low-entropy sources have resulted in predictable randomness. We investigate a new way in which RNGs fail due to reuse of virtual machine (VM) snapshots. We exhibit such VM reset vulnerabilities in widely-used TLS clients and servers: the attacker takes advantage of (or forces) snapshot replay to compromise sessions or even expose a server's DSA signing key. Our next contribution is a backwards-compatible framework for hedging routine cryptographic operations against bad randomness, thereby mitigating the damage due to randomness failures. We apply our framework to the OpenSSL library and experimentally confirm that it has little overhead.
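The hedging idea can be sketched in a few lines. This is an illustrative simplification, not the paper's exact construction: the RNG output is folded together with the operation's own inputs through a hash, so that even a repeated RNG state (e.g., after a VM snapshot reset) yields distinct coins whenever the inputs differ.

```python
import hashlib

def hedged_random(rng_output: bytes, nbytes: int, *context: bytes) -> bytes:
    # Hedging sketch: mix possibly-bad system randomness with the
    # inputs of the cryptographic operation (message, keys, nonces)
    # so repeated randomness with different inputs still diverges.
    h = hashlib.sha256(rng_output)
    for item in context:
        h.update(item)
    return h.digest()[:nbytes]

# Same (replayed) RNG output, different messages -> different coins.
replayed = b"\x00" * 32
c1 = hedged_random(replayed, 16, b"message-1")
c2 = hedged_random(replayed, 16, b"message-2")
print(c1 != c2)  # → True
```

A production scheme would use a proper KDF or PRF keyed appropriately; the sketch only shows why hedging bounds the damage of snapshot replay.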
Conference Paper
The prevalence of malware such as keyloggers and screen scrapers has made the prospect of providing sensitive information via web pages disconcerting for security-conscious users. We present Bumpy, a system to exclude the legacy operating system and applications from the trusted computing base for sensitive input, without requiring a hypervisor or VMM. Bumpy allows the user to specify strings of input as sensitive when she enters them, and ensures that these inputs reach the desired endpoint in a protected state. The inputs are processed in an isolated code module on the user's system, where they can be encrypted or otherwise processed for a remote webserver. We present a prototype implementation of Bumpy.
Conference Paper
We report on the aftermath of the discovery of a severe vulnerability in the Debian Linux version of OpenSSL. Systems affected by the bug generated predictable random numbers, most importantly public/private keypairs. To study user response to this vulnerability, we collected a novel dataset of daily remote scans of over 50,000 SSL/TLS-enabled Web servers, of which 751 displayed vulnerable certificates. We report three primary results. First, as expected from previous work, we find an extremely slow rate of fixing, with 30% of the hosts that were vulnerable when we began our survey on day 4 after disclosure still vulnerable almost six months later. However, unlike conventional vulnerabilities, which typically show a short, fast fixing phase, we observe a much flatter curve with
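The core of the Debian bug is easy to demonstrate with a toy model. The Debian patch removed most entropy inputs from OpenSSL's RNG, leaving essentially only the process ID, which on Linux ranges over at most 32768 values; the sketch below (the key-derivation function is invented for illustration) shows why the resulting key space is trivially enumerable.

```python
import hashlib

def keypair_from_pid(pid: int) -> str:
    # Toy stand-in for key generation whose only entropy is the PID,
    # modeling the Debian OpenSSL bug; not OpenSSL's actual algorithm.
    return hashlib.sha256(pid.to_bytes(2, "big")).hexdigest()

# Every key any affected system could ever generate:
all_keys = {keypair_from_pid(pid) for pid in range(32768)}
print(len(all_keys))  # → 32768, the entire space an attacker must search
```

Enumerating 32768 candidates is trivial, which is why blacklists of all possible weak keys could be (and were) published after disclosure.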
Conference Paper
Heap-based attacks depend on a combination of memory management error and an exploitable memory allocator. Many allocators include ad hoc countermeasures against particular exploits but their effectiveness against future exploits has been uncertain. This paper presents the first formal treatment of the impact of allocator design on security. It analyzes a range of widely-deployed memory allocators, including those used by Windows, Linux, FreeBSD and OpenBSD, and shows that they remain vulnerable to attack. It then presents DieHarder, a new allocator whose design was guided by this analysis. DieHarder provides the highest degree of security from heap-based attacks of any practical allocator of which we are aware while imposing modest performance overhead. In particular, the Firefox web browser runs as fast with DieHarder as with the Linux allocator.
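One idea behind randomized allocators of this kind can be sketched briefly. This is an illustrative simplification rather than DieHarder's actual design: the heap is over-provisioned and each allocation lands in a randomly chosen free slot, so an attacker cannot predict which object a heap overflow will corrupt.

```python
import random

class RandomizedAllocator:
    # Sketch of allocation-placement randomization (hypothetical,
    # simplified): pick a uniformly random free slot per allocation.
    def __init__(self, slots: int, seed: int = 0):
        self.free = list(range(slots))
        self.rng = random.Random(seed)

    def alloc(self) -> int:
        # swap a random free slot to the end, then pop it: O(1) removal
        i = self.rng.randrange(len(self.free))
        self.free[i], self.free[-1] = self.free[-1], self.free[i]
        return self.free.pop()

    def release(self, slot: int) -> None:
        self.free.append(slot)
```

Because consecutive allocations are no longer adjacent with high probability, exploits that rely on deterministic heap layout (e.g., overflowing into a neighboring chunk's metadata) become probabilistic rather than reliable.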
Conference Paper
Complexity in commodity operating systems makes compromises inevitable. Consequently, a great deal of work has examined how to protect security-critical portions of applications from the OS through mechanisms such as microkernels, virtual machine monitors, and new processor architectures. Unfortunately, most work has focused on CPU and memory isolation and neglected OS semantics. Thus, while much is known about how to prevent OS and application processes from modifying each other, far less is understood about how different OS components can undermine application security if they turn malicious. We consider this problem in the context of our work on Overshadow, a virtual-machine-based system for retrofitting protection in commodity operating systems. We explore how malicious behavior in each major OS subsystem can undermine application security, and present potential mitigations. While our discussion is presented in terms of Overshadow and Linux, many of the problems and solutions are applicable to other systems where trusted applications rely on untrusted, potentially malicious OS components.
Conference Paper
We explore the extent to which newly available CPU-based security technology can reduce the Trusted Computing Base (TCB) for security-sensitive applications. We find that although this new technology represents a step in the right direction, significant performance issues remain. We offer several suggestions that leverage existing processor technology, retain security, and improve performance. Implementing these recommendations will finally allow application developers to focus exclusively on the security of their own code, enabling it to execute in isolation from the numerous vulnerabilities in the underlying layers of legacy code.
Conference Paper
Commodity operating systems entrusted with securing sensitive data are remarkably large and complex, and consequently, frequently prone to compromise. To address this limitation, we introduce a virtual-machine-based system called Overshadow that protects the privacy and integrity of application data, even in the event of a total OS compromise. Overshadow presents an application with a normal view of its resources, but the OS with an encrypted view. This allows the operating system to carry out the complex task of managing an application's resources, without allowing it to read or modify them. Thus, Overshadow offers a last line of defense for application data. Overshadow builds on multi-shadowing, a novel mechanism that presents different views of "physical" memory, depending on the context performing the access. This primitive offers an additional dimension of protection beyond the hierarchical protection domains implemented by traditional operating systems and processor architectures. We present the design and implementation of Overshadow and show how its new protection semantics can be integrated with existing systems. Our design has been fully implemented and used to protect a wide range of unmodified legacy applications running on an unmodified Linux operating system. We evaluate the performance of our implementation, demonstrating that this approach is practical.
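The multi-shadowing primitive can be modeled in a few lines. This is a toy illustration, not Overshadow's implementation: XOR with a keyed hash stream stands in for real authenticated encryption, and the class names are invented. The same "physical" page yields a cleartext view in the application's context and only ciphertext in the OS's context.

```python
from hashlib import sha256

class MultiShadowedPage:
    # Toy model of multi-shadowing: one backing page, two views,
    # selected by the accessing context. Illustrative only.
    def __init__(self, app_key: bytes):
        self.key = app_key
        self.cipherbytes = b""

    def _stream(self, n: int) -> bytes:
        # stand-in keystream; a real system would use an AEAD cipher
        return (sha256(self.key).digest() * (n // 32 + 1))[:n]

    def app_write(self, data: bytes) -> None:
        # data is stored encrypted in "physical" memory
        self.cipherbytes = bytes(a ^ b for a, b in zip(data, self._stream(len(data))))

    def app_read(self) -> bytes:
        # application context: transparent plaintext view
        return bytes(a ^ b for a, b in zip(self.cipherbytes, self._stream(len(self.cipherbytes))))

    def os_read(self) -> bytes:
        # OS context: encrypted view only; it can copy/swap but not read
        return self.cipherbytes
```

The design point this captures is that the OS keeps full resource-management ability (it can move `os_read`'s bytes around) while losing the ability to interpret them; integrity protection would additionally require authentication, which the toy omits.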
Conference Paper
We propose an architecture that allows code to execute in complete isolation from other software while trusting only a tiny software base that is orders of magnitude smaller than even minimalist virtual machine monitors. Our technique also enables more meaningful attestation than previous proposals, since only measurements of the security-sensitive portions of an application need to be included. We achieve these guarantees by leveraging hardware support provided by commodity processors from AMD and Intel that are shipping today.
Article
Application sandboxes provide restricted execution environments that limit an application's access to sensitive OS resources. These systems are an increasingly popular method for limiting the impact of a compromise. While a variety of mechanisms for building these systems have been proposed, the most thoroughly implemented and studied are based on system call interposition. Current interposition-based architectures offer a wide variety of properties that make them an attractive approach for building sandboxing systems. Unfortunately, these architectures also possess several critical properties that make their implementation error prone and limit their functionality.
Article
We have implemented a default memory manager for the Mach 3.0 kernel that resides entirely in user space. The default memory manager uses a small set of kernel privileges to lock itself into memory, preventing deadlocks against other Mach system services. An extension to the Mach boot sequence loads both the kernel and user program images at system startup time. The resulting system allows the default memory manager to be built and run in a standard user-level environment, but still operates with the high reliability required by the Mach kernel. The default memory manager is bundled with another component of the Mach 3.0 system: the bootstrap service. This service starts the initial set of system servers that make up a complete operating system based on the Mach 3.0 kernel. Since the real file system may be one of these servers, the bootstrap service needs its own copy of a subset of the file system. This is shared with the default memory manager. Placing these two components outside ...
Article
System call interposition is a powerful method for regulating and monitoring application behavior. In recent years, a wide variety of security tools have been developed that use this technique. This approach brings with it a host of pitfalls for the unwary implementer that if overlooked can allow his tool to be easily circumvented. To shed light on these problems, we present the lessons we learned in the course of several design and implementation cycles with our own system call interposition-based sandboxing tool. We first present some of the problems and pitfalls we encountered, including incorrectly replicating OS semantics, overlooking indirect paths to resources, race conditions, incorrectly subsetting a complex interface, and side effects of denying system calls. We then present some practical solutions to these problems, and provide general principles for avoiding the difficulties we encountered.
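One of the pitfalls named above, race conditions, is concretely a time-of-check-to-time-of-use (TOCTOU) gap: the monitor validates a pathname argument, but the binding of name to object can change before the kernel acts on it. The sketch below models this with dictionaries standing in for the filesystem; all paths and contents are invented for illustration.

```python
# TOCTOU race sketch: filesystem state modeled as dicts.
fs = {"/tmp/safe": "harmless", "/etc/passwd": "secrets"}
links = {"/tmp/link": "/tmp/safe"}   # a symlink under attacker control

def monitor_allows(path: str) -> bool:
    # check phase: the interposition monitor resolves and approves
    return links.get(path, path) == "/tmp/safe"

def kernel_open(path: str) -> str:
    # use phase: the kernel resolves the same name again, later in time
    return fs[links.get(path, path)]

path = "/tmp/link"
assert monitor_allows(path)          # monitor sees /tmp/safe and approves
links["/tmp/link"] = "/etc/passwd"   # attacker retargets the link in the gap
print(kernel_open(path))             # the kernel now opens /etc/passwd
```

Real mitigations close the gap rather than narrow it, e.g., by copying and canonicalizing arguments at check time or by performing the check and the operation atomically in the kernel.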
Jon Oberheide. The stack is back. Presented at Infiltrate 2012.
Adam Barth, Collin Jackson, Charles Reis, and The Google Chrome Team. The security architecture of the Chromium browser. Online: http://seclab
David Lie, Chandramohan A. Thekkath, and Mark Horowitz. Implementing an untrusted operating system on trusted hardware.
Alexander Sotirov and Mark Dowd. Bypassing browser memory protections in Windows Vista. Presented at Black Hat. Online: http://www.phreedom.org/research/bypassing-browser-memory-protections/bypassing-browser-memory-protections