ABSTRACT: Cloud computing promises to significantly change the way we use computers and access and store our personal and business information. With these new computing and communications paradigms arise new data security challenges. Existing data protection mechanisms such as encryption have failed in preventing data theft attacks, especially those perpetrated by an insider to the cloud provider. We propose a different approach for securing data in the cloud using offensive decoy technology. We monitor data access in the cloud and detect abnormal data access patterns. When unauthorized access is suspected and then verified using challenge questions, we launch a disinformation attack by returning large amounts of decoy information to the attacker. This protects against the misuse of the user's real data. Experiments conducted in a local file setting provide evidence that this approach may provide unprecedented levels of user data security in a Cloud environment.
Workshop on Research for Insider Threat (WRIT); 05/2012
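The monitor-then-deceive loop described in the abstract above can be sketched in a few lines. Everything here is an illustrative assumption: the class name, the simple rate threshold standing in for the paper's anomaly detector and challenge questions, and the decoy format.

```python
import time
from collections import defaultdict, deque

class DecoyGuard:
    """Toy sketch: track per-user access timestamps in a sliding window
    and switch to serving decoy documents once a burst looks anomalous."""

    def __init__(self, max_accesses=5, window=60.0):
        self.max_accesses = max_accesses
        self.window = window
        self.log = defaultdict(deque)  # user -> recent access timestamps

    def fetch(self, user, doc_id, store, now=None):
        now = time.time() if now is None else now
        q = self.log[user]
        q.append(now)
        # Drop accesses that fell out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) > self.max_accesses:
            # Suspected theft: flood the requester with plausible decoys
            # instead of the real document.
            return [f"DECOY-{doc_id}-{i}" for i in range(100)]
        return [store[doc_id]]
```

A real deployment would verify the suspicion with challenge questions before deceiving, and generate believable decoy content rather than tagged strings.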
ABSTRACT: The wide adoption of non-executable page protections in recent versions of popular operating systems has given rise to attacks that employ return-oriented programming (ROP) to achieve arbitrary code execution without the injection of any code. Existing defenses against ROP exploits either require source code or symbolic debugging information, or impose a significant runtime overhead, which limits their applicability for the protection of third-party applications. In this paper we present in-place code randomization, a practical mitigation technique against ROP attacks that can be applied directly on third-party software. Our method uses various narrow-scope code transformations that can be applied statically, without changing the location of basic blocks, allowing the safe randomization of stripped binaries even with partial disassembly coverage. These transformations effectively eliminate about 10%, and probabilistically break about 80% of the useful instruction sequences found in a large set of PE files. Since no additional code is inserted, in-place code randomization does not incur any measurable runtime overhead, enabling it to be easily used in tandem with existing exploit mitigations such as address space layout randomization. Our evaluation using publicly available ROP exploits and two ROP code generation toolkits demonstrates that our technique prevents the exploitation of the tested vulnerable Windows 7 applications, including Adobe Reader, as well as the automated construction of alternative ROP payloads that aim to circumvent in-place code randomization using solely any remaining unaffected instruction sequences.
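A drastically simplified sketch of the idea: substitute semantically equivalent, same-length instruction encodings in place, so the code layout is untouched but the gadget bytes change. The byte-pair table and the naive byte scan below are illustrative assumptions; the real system operates on correctly disassembled instructions and applies several more kinds of transformations.

```python
# Pairs of same-length, semantically equivalent x86 encodings (a tiny
# illustrative subset; flag effects can differ slightly, e.g. AF).
EQUIV = {
    b"\x31\xc0": b"\x29\xc0",  # xor eax,eax <-> sub eax,eax (both zero eax)
    b"\x29\xc0": b"\x31\xc0",
    b"\x89\xc8": b"\x8b\xc1",  # mov eax,ecx <-> same mov, swapped ModRM form
    b"\x8b\xc1": b"\x89\xc8",
}

def randomize_in_place(code, rng):
    """Substitute equivalent encodings without moving a single byte:
    basic-block boundaries and all offsets stay fixed, but the bytes a
    ROP gadget relies on probabilistically change."""
    out = bytearray(code)
    i = 0
    while i < len(out) - 1:
        pair = bytes(out[i:i + 2])
        if pair in EQUIV and rng.random() < 0.5:
            out[i:i + 2] = EQUIV[pair]
            i += 2  # skip past the substituted instruction
        else:
            i += 1
    return bytes(out)
```

In practice `rng` would be `random.Random()` seeded per randomized copy; the scan would follow instruction boundaries from a disassembler rather than sliding over raw bytes.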
ABSTRACT: Computer security research frequently entails studying real computer systems and their users; studying deployed systems is critical to understanding real-world problems, as is having would-be users test a proposed solution. In this paper we focus on three key concepts in regard to ethics: risks, benefits, and informed consent. Many researchers are required by law to obtain the approval of an ethics committee for research with human subjects, a process which includes addressing the three concepts focused on in this paper. Computer security researchers who conduct human subjects research should be concerned with these aspects of their methodology regardless of whether they are required to by law; it is our ethical responsibility as professionals in this field. We augment previous discourse on the ethics of computer security research by sparking the discussion of how the nature of security research may complicate determining how to treat human subjects ethically. We conclude by suggesting ways the community can move forward.
ABSTRACT: Instruction-set randomization (ISR) obfuscates the “language” understood by a system to protect against code-injection attacks by presenting an ever-changing target. ISR was originally motivated by code injection through buffer overflow vulnerabilities. However, Stuxnet demonstrated that attackers can exploit other vectors to place malicious binaries into a victim’s filesystem and successfully launch them, bypassing most mechanisms proposed to counter buffer overflows. We propose the holistic adoption of ISR across the software stack, preventing the execution of unauthorized binaries and scripts regardless of their origin. Our approach requires that programs be randomized with different keys during a user-controlled installation, effectively combining the benefits of code whitelisting/signing and runtime program integrity. We discuss how an ISR-enabled environment for binaries can be implemented with little overhead in hardware, and show that higher-overhead software-only alternatives are possible. We use Perl and SQL to demonstrate the application of ISR in scripting environments with negligible overhead.
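The install-time randomization scheme above can be illustrated with a toy XOR-based ISR. The three-opcode "ISA", the function names, and the XOR keystream are all assumptions for illustration; the paper's hardware design decrypts transparently at instruction fetch.

```python
import os

VALID_OPCODES = {0x01, 0x02, 0x03}  # toy instruction set (assumption)

def install(program, key=None):
    """Randomize a program under a fresh per-install key.
    Returns (randomized_program, key)."""
    key = os.urandom(len(program)) if key is None else key
    return bytes(p ^ k for p, k in zip(program, key)), key

def execute(randomized, key):
    """Derandomize at fetch time. A foreign binary that was never
    randomized decodes to garbage opcodes and is trapped -- this is
    what blocks unauthorized code regardless of how it got there."""
    decoded = bytes(c ^ k for c, k in zip(randomized, key))
    for op in decoded:
        if op not in VALID_OPCODES:
            raise RuntimeError("unauthorized code detected")
    return decoded
```

Because each installation uses a different key, a binary copied from another machine (or dropped by malware) fails to decode, which is the whitelisting-like property the abstract describes.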
ABSTRACT: We present a practical tool for inserting security features against low-level software attacks into third-party, proprietary, or otherwise binary-only software. We are motivated by the inability of software users to select and use low-overhead protection schemes when source code is unavailable to them, by the lack of information as to what (if any) security mechanisms software producers have used in their toolchains, and by the high overhead and inaccuracy of solutions that treat software as a black box. Our approach is based on SecondWrite, an advanced binary rewriter that operates without the need for debugging information or other assistance. Using SecondWrite, we insert a variety of defenses into program binaries. Although the defenses are generally well known, they have rarely been used together because they are implemented by different (non-integrated) tools. We are also the first to demonstrate the use of such mechanisms in the absence of source code. We experimentally evaluate the effectiveness and performance impact of our approach. We show that it stops all variants of low-level software attacks at a very low performance overhead, without impacting original program functionality.
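One of the well-known defenses a binary rewriter can insert is a stack canary check. The simulation below models that idea only; the frame layout, names, and fixed-width canary are assumptions for illustration, not SecondWrite's actual transformations.

```python
def make_frame(buf_size, canary):
    # Modeled stack frame layout: [ local buffer | canary | saved return address ]
    return bytearray(buf_size) + bytearray(canary) + bytearray(8)

def unchecked_write(frame, data):
    # Models a vulnerable copy into the local buffer (no bounds check).
    frame[0:len(data)] = data

def check_canary(frame, buf_size, canary):
    # Models the rewriter-inserted epilogue check: any overflow that
    # reached the saved return address must have clobbered the canary.
    if bytes(frame[buf_size:buf_size + len(canary)]) != canary:
        raise RuntimeError("stack smashing detected")
```

In a real scheme the canary is a per-process secret (e.g. drawn from `os.urandom`), so an attacker cannot overwrite it with its expected value.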
ABSTRACT: In previous work, we introduced a bait-injection system designed to delude and detect crimeware by forcing it to reveal itself during the exploitation of captured information. Although effective as a technique, our original system was practically limited, as it was implemented in a personal VM environment. In this paper, we investigate how to extend our system by applying it to personal workstation environments. Adapting our system to such a different environment reveals a number of challenging issues, such as scalability, portability, and choice of physical communication means. We provide implementation details and evaluate the effectiveness of our new architecture.
ABSTRACT: We put forth the notion of efficient dual receiver cryptosystems and implement it based on bilinear pairings over certain elliptic curve groups. The cryptosystem is simple and efficient yet powerful, as it helps to solve two problems of practical importance whose solutions had proven elusive until now: (1) A provably secure "combined" public-key cryptosystem (with a single secret key per user) where the key is used for both decryption and signing and where encryption can be escrowed and recovered, while the signature capability never leaves its owner. This is an open problem posed by the work of Haber and Pinkas. (2) A puzzle is a method for rate-limiting remote users by forcing them to solve a computational task (the puzzle). Puzzles have been based on cryptographic challenges in the past, but the successful design of embedding a useful cryptographic task inside a puzzle, originally posed by Dwork and Naor, had remained problematic. We model and present "useful security puzzles" applicable to an online transaction server (such as a webserver).
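For contrast with the "useful" puzzles the abstract targets, the classic hash-based client puzzle (whose work is wasted rather than useful) fits in a few lines. The function names, 8-byte nonce encoding, and leading-zero-bits criterion are assumptions of this sketch.

```python
import hashlib
from itertools import count

def solve(challenge, bits):
    """Brute-force a nonce so that SHA-256(challenge || nonce) starts
    with `bits` zero bits; expected work ~2**bits hash evaluations."""
    for nonce in count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - bits) == 0:
            return nonce

def verify(challenge, nonce, bits):
    """Verification costs one hash -- the asymmetry that rate-limits clients."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - bits) == 0
```

The paper's contribution is replacing this wasted brute-force work with a cryptographic task that is itself useful to the server; the sketch shows only the baseline construction being improved upon.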
ABSTRACT: The prevalence of code injection attacks has led to the wide adoption of exploit mitigations based on non-executable memory pages. In turn, attackers are increasingly relying on return-oriented programming (ROP) to bypass these protections. At the same time, existing detection techniques based on shellcode identification are oblivious to this new breed of exploits, since attack vectors may not contain binary code anymore. In this paper, we present a detection method for the identification of ROP payloads in arbitrary data such as network traffic or process memory buffers. Our technique speculatively drives the execution of code that already exists in the address space of a targeted process according to the scanned input data, and identifies the execution of valid ROP code at runtime. Our experimental evaluation demonstrates that our prototype implementation can detect a broad range of ROP exploits against Windows applications without false positives, while it can be easily integrated into existing defenses based on shellcode detection.
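A much weaker static heuristic than the speculative-execution approach above conveys the flavor of scanning arbitrary data for ROP payloads: look for runs of little-endian words that all land on known gadget addresses. Names, the word size, and the chain-length threshold are assumptions of this sketch.

```python
def looks_like_rop(data, gadget_addrs, min_chain=4, word=4):
    """Flag a buffer if it contains `min_chain` consecutive words that
    all point at known gadget addresses -- the signature of a stacked
    ROP chain. (The paper instead *executes* the candidate chain
    against the target's real address space, which resists evasion.)"""
    run = 0
    for off in range(0, len(data) - word + 1, word):
        ptr = int.from_bytes(data[off:off + word], "little")
        run = run + 1 if ptr in gadget_addrs else 0
        if run >= min_chain:
            return True
    return False
```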
ABSTRACT: On May 5, 2010 the last step of the DNSSEC deployment on the 13 root servers was completed. DNSSEC is a set of security extensions to the traditional DNS protocol that aim to prevent attacks against the authenticity and integrity of the messages. Although the transition was completed without major faults, it is not clear whether problems of smaller scale occurred. In this paper we try to quantify the effects of that transition, using as many vantage points as possible. In order to achieve that, we deployed a distributed DNS monitoring infrastructure over PlanetLab and gathered periodic DNS lookups, performed from each of the roughly 300 nodes, during the DNSSEC deployment on the last root name server. In addition, in order to broaden our view, we also collected data using the Tor anonymity network. After analyzing all the gathered data, we observed that around 4% of the monitored networks exhibited an interesting DNS query failure pattern, which, to the best of our knowledge, was due to the transition.
Advances in Computing and Communications - First International Conference, ACC 2011, Kochi, India, July 22-24, 2011, Proceedings, Part III; 01/2011
ABSTRACT: Consent-based networking, which requires senders to have permission to send traffic, can protect against multiple attacks on the network. Highly dynamic networks like Mobile Ad-hoc Networks (MANETs) require destination-based consent networking, where consent needs to be given to send to a destination over any path. These networks are susceptible to multipath misuses by misbehaving nodes.

In this paper, we identify the misuses in destination-based consent networking, and provide solutions for detecting and recovering from them. Our solution is based on our previously introduced DIPLOMA architecture. DIPLOMA is a deny-by-default distributed policy enforcement architecture that can protect end-host services and network bandwidth. DIPLOMA uses capabilities to provide consent for sending traffic. In this paper, we identify how senders and receivers can misuse capabilities by using them in multiple paths, and provide distributed solutions for detecting those misuses. To that end, we modify the capabilities to aid in misuse detection and provide protocols for exchanging information for distributed detection. We also provide efficient algorithms for misuse detection, and protocols for providing proof of misuse. Our solutions handle the privacy issues associated with the exchange of information for misuse detection. We have implemented the misuse detection and recovery in DIPLOMA systems running on Linux, and conducted an extensive experimental evaluation of the system on the Orbit MANET testbed. The results show our system is effective in detecting and containing multipath misuses.
Applied Cryptography and Network Security - 9th International Conference, ACNS 2011, Nerja, Spain, June 7-10, 2011. Proceedings; 01/2011
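At its core, the multipath-misuse check in the abstract above reduces to remembering which path each capability was first used on. This toy, centralized version is an assumption-laden simplification: DIPLOMA's detection is distributed among the nodes on the paths, and real capabilities carry cryptographic bindings rather than bare identifiers.

```python
class MisuseDetector:
    """Flag a capability that reappears on a different path to the same
    destination -- the multipath misuse the paper detects."""

    def __init__(self):
        self.seen = {}  # capability id -> path it was first observed on

    def observe(self, cap_id, path):
        path = tuple(path)
        first = self.seen.setdefault(cap_id, path)
        # True means misuse: the same capability is being replayed on a
        # second path, violating its per-path binding.
        return first != path
```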
ABSTRACT: This paper describes the SPARCHS project at Columbia and Princeton Universities. Drawing inspiration from biological defenses, this project aims to enhance security with clean-slate design of hardware. The ideas to be explored in the project and current status are described.
ABSTRACT: We present MINESTRONE, a novel architecture that integrates static analysis, dynamic confinement, and code diversification techniques to enable the identification, mitigation and containment of a large class of software vulnerabilities in third-party software. Our initial focus is on software written in C and C++; however, many of our techniques are equally applicable to binary-only environments (but are not always as efficient or as effective) and for vulnerabilities that are not specific to these languages. Our system seeks to enable the immediate deployment of new software (e.g., a new release of an open-source project) and the protection of already deployed (legacy) software by transparently inserting extensive security instrumentation, while leveraging concurrent program analysis, potentially aided by runtime data gleaned from profiling actual use of the software, to gradually reduce the performance cost of the instrumentation by allowing selective removal or refinement. Artificial diversification techniques are used both as confinement mechanisms and for fault-tolerance purposes. To minimize the performance impact, we are leveraging multicore hardware or (when unavailable) remote servers that enable quick identification of likely compromise. To cover the widest possible range of systems, we require no specific hardware or operating system features, although we intend to take advantage of such features where available to improve both runtime performance and vulnerability coverage.
ABSTRACT: Data loss incidents, where data of sensitive nature are exposed to the public, have become too frequent and have caused damage amounting to millions of dollars for companies and other organizations. Repeatedly, information leaks occur over the Internet, and half of the time they are accidental, caused by user negligence, misconfiguration of software, or inadequate understanding of an application's functionality. This paper presents iLeak, a lightweight, modular system for detecting inadvertent information leaks. Unlike previous solutions, iLeak builds on components already present in modern computers. In particular, we employ system tracing facilities and data indexing services, and combine them in a novel way to detect data leaks. Our design consists of three components: uaudits are responsible for capturing the information that exits the system, while Inspectors use the indexing service to identify if the transmitted data belong to files that contain potentially sensitive information. The Trail Gateway handles the communication and synchronization of uaudits and Inspectors. We implemented iLeak on Mac OS X using DTrace and the Spotlight indexing service. Finally, we show that iLeak is indeed lightweight, since it only incurs 4% overhead on protected applications.
Computer Network Defense (EC2ND), 2010 European Conference on; 11/2010
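The Inspector role described above can be approximated with an inverted index from content tokens to files. iLeak itself reuses Spotlight for indexing and DTrace-based uaudits for capturing outbound data; this toy in-memory index, its tokenization, and all names are assumptions of the sketch.

```python
class Inspector:
    """Match data leaving the host against an index of sensitive file
    contents, reporting which files the outbound data appears to leak.
    (A naive token match over-triggers on common words; the real system
    leans on a full-text indexing service and sensitivity metadata.)"""

    def __init__(self):
        self.index = {}  # token -> set of file paths containing it

    def add_file(self, path, text):
        for token in set(text.lower().split()):
            self.index.setdefault(token, set()).add(path)

    def inspect(self, outgoing):
        hits = set()
        for token in outgoing.lower().split():
            hits |= self.index.get(token, set())
        return hits  # files whose content appears in the outbound data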
ABSTRACT: Over the past few years we have seen the use of Internet worms, i.e., malicious self-replicating programs, as a mechanism to rapidly invade and compromise large numbers of remote computers. Although the first worms released on the Internet were large-scale, easy-to-spot, massive security incidents [6, 19, 20, 26], also known as flash worms, it is currently envisioned (and we already see signs in the wild) that future worms will be increasingly difficult to detect, and will be known as stealth worms. This may be partly because the motives of early worm developers are thought to have centered around the self-gratification brought by compromising large numbers of remote computers, while the motives of recent worm and malware developers have progressed to more mundane (and sinister) financial and political gains.
ABSTRACT: In this chapter, we propose a design for an insider threat detection system that combines an array of complementary techniques aimed at detecting evasive adversaries. We are motivated by real-world incidents and our experience with building isolated detectors: such standalone mechanisms are often easily identified and avoided by malefactors. Our work in progress combines host-based user-event monitoring sensors with trap-based decoys and remote network detectors to track and correlate insider activity. We introduce and formalize a number of properties of decoys as a guide to designing trap-based defenses that increase the likelihood of detecting an insider attack. We identify several challenges in scaling up, deploying, and validating our architecture in real environments.
ABSTRACT: We demonstrate the general applicability of the elastic block cipher method by constructing examples from existing block ciphers: AES, Camellia, MISTY1 and RC6. An elastic block cipher is a variable-length block cipher created from an existing fixed-length block cipher. The elastic version supports any block size between one and two times that of the original block size. We compare the performance of the elastic versions to that of the original versions and evaluate the elastic versions using statistical tests measuring the randomness of the ciphertext. The benefit, in terms of an increased rate of encryption, of using an elastic block cipher varies based on the specific block cipher and implementation. In most cases, there is an advantage to using an elastic block cipher to encrypt blocks that are a few bytes longer than the original block length. The statistical test results indicate no obvious flaws in the method for constructing elastic block ciphers. We also use our examples to demonstrate the concept of a generic key schedule for block ciphers. In addition, we present ideas for new modes of encryption using the elastic block cipher construction.
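The length-preserving round structure can be sketched with a toy underlying cipher: encrypt the first B bytes, then XOR the y leftover bytes into part of the output and swap them into the state each round. This XOR-and-swap step is a simplification of the actual elastic construction, and the stand-in 8-byte "cipher" is of course not secure; every name and parameter here is an assumption for illustration.

```python
B = 8  # fixed block size of the underlying toy cipher (bytes)

def _toy_E(block, key):
    """Stand-in fixed-length 'cipher': XOR with key, rotate left one byte.
    A permutation of 8-byte blocks, but NOT secure."""
    x = bytes(b ^ k for b, k in zip(block, key))
    return x[1:] + x[:1]

def _toy_D(block, key):
    x = block[-1:] + block[:-1]
    return bytes(b ^ k for b, k in zip(x, key))

def elastic_encrypt(block, key, rounds=4):
    """Elastic layer sketch for any block length between B and 2*B."""
    y = len(block) - B
    assert 0 <= y <= B
    head, tail = block[:B], block[B:]
    for _ in range(rounds):
        c = _toy_E(head, key)
        if y:
            # XOR the y leftover bytes into the cipher output, then swap.
            mixed = bytes(t ^ s for t, s in zip(tail, c[B - y:]))
            head, tail = c[:B - y] + mixed, c[B - y:]
        else:
            head = c
    return head + tail

def elastic_decrypt(block, key, rounds=4):
    y = len(block) - B
    head, tail = block[:B], block[B:]
    for _ in range(rounds):
        if y:
            c = head[:B - y] + tail            # undo the swap
            tail = bytes(h ^ s for h, s in zip(head[B - y:], tail))
            head = _toy_D(c, key)
        else:
            head = _toy_D(head, key)
    return head + tail
```

The point of the sketch is the structural property the abstract highlights: ciphertext length equals plaintext length for any size from B up to 2*B bytes, with the leftover bytes participating in every round.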