Edit automata: Enforcement mechanisms for run-time security policies

International Journal of Information Security (Impact Factor: 0.96). 01/2005; 4(1):2-16. DOI: 10.1007/s10207-004-0046-8
Source: CiteSeer


We analyze the space of security policies that can be enforced by monitoring and modifying programs at run time. Our program monitors, called edit automata, are abstract machines that examine the sequence of application program actions and transform the sequence when it deviates from a specified policy. Edit automata have a rich set of transformational powers: they may terminate an application, thereby truncating the program action stream; they may suppress undesired or dangerous actions without necessarily terminating the program; and they may also insert additional actions into the event stream. After providing a formal definition of edit automata, we develop a rigorous framework for reasoning about them and their cousins: truncation automata (which can only terminate applications), suppression automata (which can terminate applications and suppress individual actions), and insertion automata (which can terminate and insert). We give a set-theoretic characterization of the policies each sort of automaton can enforce, and we provide examples of policies that can be enforced by one sort of automaton but not another.
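To make these transformational powers concrete, here is a small illustrative sketch (Python) of an edit automaton acting as a run-time monitor over an action stream. The policy (log before send, drop "noise", halt on a forbidden action) and the helpers `EditAutomaton` and `step` are invented for illustration; this is not the paper's formal construction.

```python
# Illustrative sketch of an edit automaton as a run-time monitor.
# The policy and the action names below are invented examples,
# not the formal definition or examples from the paper.

HALT = object()   # sentinel meaning "truncate the action stream here"


class EditAutomaton:
    def __init__(self, start_state, step):
        # step(state, action) -> (next_state, actions_to_emit) or (state, HALT)
        self.state = start_state
        self.step = step

    def run(self, actions):
        """Transform an input action sequence into a policy-compliant output."""
        output = []
        for action in actions:
            self.state, edit = self.step(self.state, action)
            if edit is HALT:          # truncation: terminate the application
                break
            output.extend(edit)       # [] suppresses, extra items insert
        return output


def step(logged, action):
    if action == "forbidden":
        return logged, HALT               # truncate on a disallowed action
    if action == "send" and not logged:
        return True, ["log", "send"]      # insertion: emit a "log" event first
    if action == "noise":
        return logged, []                 # suppression: silently drop the action
    return logged, [action]               # otherwise pass the action through


monitor = EditAutomaton(start_state=False, step=step)
print(monitor.run(["noise", "send", "send", "forbidden", "send"]))
# -> ['log', 'send', 'send']
```

In this reading, a truncation automaton is restricted to the HALT branch, a suppression automaton to HALT plus empty edits, and an insertion automaton to HALT plus inserted actions, which mirrors the taxonomy in the abstract.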

    • "Several enforcement techniques have been proposed, for timed or non-timed systems. The technique based on an edit automaton, introduced in [8], has been selected in this work because it is quite suitable to solve the issue that has been pinpointed in the previous section and can be relatively easily applied when the specification model is a Mealy machine, as it will be shown in the next section. In this approach (Figure 3), an enforcement monitor receives a sequence of events σ that may or not satisfy a given property φ (notation σ |= φ?) and generates an output sequence o that satisfies this property (notation o |= φ). "
    ABSTRACT: Validation of the behavior of a Programmable Logic Controller (PLC) by comparison of observed I/O sequences to sequences built from a formal specification model requires that the consequences of the PLC I/O scanning cycle be considered. This paper proposes a method based on an enforcement technique to interpret observed I/O sequences so that the result of this comparison is meaningful.
    18th IEEE Conference on Emerging Technologies and Factory Automation (ETFA 2013), Cagliari (Italy); 09/2013
    • "The algorithms for both the verification and the synthesis are general enough so that they apply to four opacity properties: current-state opacity, initial-state opacity, language-based opacity, and initial-and-final-state opacity. Other works in the computer science literature have also used insertion functions to enforce security properties; see e.g., [8], [14]. However, the class of security policies considered does not include opacity. "
    ABSTRACT: Opacity is a confidentiality property that arises in the analysis of security properties in networked systems. It characterizes whether a “secret” of a system can be inferred by an outside observer called an “intruder.” We consider the problem of enforcing opacity in partially-observed discrete event systems modeled as automata. We propose a novel enforcement mechanism based on the use of insertion functions. An insertion function is a monitoring interface at the output of the system that changes the system's output behavior by inserting additional observable events. The insertion function must respond to the full system's output behavior. Also, the insertion function should not create new observed behavior but only replicate existing observable strings. We define the property of “i-enforceability,” which holds when there exists an insertion function that renders a non-opaque system opaque. To synthesize insertion functions that ensure opacity, we define and construct a new structure called the “All Insertion Structure” (AIS). The AIS can be used to verify whether a given opacity property is i-enforceable. The AIS enumerates all i-enforcing insertion functions in a compact state transition structure. If a given opacity property has been verified to be i-enforceable, we show how to use the AIS to synthesize an i-enforcing insertion function.
    2012 IEEE 51st Annual Conference on Decision and Control (CDC); 12/2012
    • "First, it always produces some performance overhead during the application run time. Second , it complicates to recognize implicit leaks [13] [9]. "
    ABSTRACT: We report on applying techniques for static information flow analysis to identify privacy leaks in Android applications. We have crafted a framework which checks with the help of a security type system whether the Dalvik bytecode implementation of an Android app conforms to a given privacy policy. We have carefully analyzed the Android API for possible sources and sinks of private data and identified exemplary privacy policies based on this. We demonstrate the applicability of our framework on two case studies showing detection of privacy leaks.
    27th Annual ACM Symposium on Applied Computing; 03/2012
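
As a rough, hypothetical illustration of the insertion-function idea described in the opacity abstract above (not the AIS construction or the synthesis procedure from that paper), the sketch below inserts observable events so that every string the intruder sees replicates an existing, non-revealing observation. The event alphabet and the GENUINE/REVEALING sets are made up for this toy example.

```python
# Toy sketch of an insertion function for opacity enforcement.
# The alphabet, the genuine observations, and the secret-revealing
# observations are all invented for illustration.

GENUINE = {("a",), ("b",), ("a", "b")}    # observable strings the system can actually produce
REVEALING = {("b",)}                      # observations from which an intruder infers the secret


def insertion_function(observed):
    """Insert observable events so the intruder only sees genuine, non-revealing strings.

    `observed` is assumed to be a genuine observation of the system.
    """
    out = []
    for event in observed:
        if tuple(out) + (event,) in REVEALING:
            # Insert an extra event so the output replicates an existing
            # observation that does not reveal the secret.
            out.append("a")
        out.append(event)
    assert tuple(out) in GENUINE - REVEALING   # no new behaviour, and opaque
    return out


print(insertion_function(["b"]))        # -> ['a', 'b'] : masked to look like a non-secret run
print(insertion_function(["a", "b"]))   # -> ['a', 'b'] : already safe, left unchanged
```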