Conference Paper

Enforcing Non-safety Security Policies with Program Monitors.

DOI: 10.1007/11555827_21
Conference: Computer Security - ESORICS 2005, 10th European Symposium on Research in Computer Security, Milan, Italy, September 12-14, 2005, Proceedings
Source: DBLP

ABSTRACT We consider the enforcement powers of program monitors, which intercept security-sensitive actions of a target application at run time and take remedial steps whenever the target attempts to execute a potentially dangerous action. A common belief in the security community is that program monitors, regardless of the remedial steps available to them when detecting violations, can only enforce safety properties. We formally analyze the properties enforceable by various program monitors and find that although this belief is correct when considering monitors with simple remedial options, it is incorrect for more powerful monitors that can be modeled by edit automata. We define an interesting set of properties called infinite renewal properties and demonstrate how, when given any reasonable infinite renewal property, to construct an edit automaton that provably enforces that property. We analyze the set of infinite renewal properties and show that it includes every safety property, some liveness properties, and some properties that are neither safety nor liveness.
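To make the suppress-and-reinsert idea behind edit automata concrete, the Python sketch below buffers (suppresses) intercepted actions and releases the buffered suffix only once the extended output would again satisfy the property. This is only an illustration of the strategy the abstract describes, not the paper's formal edit-automaton construction; the function make_edit_monitor, the predicate satisfies, and the "balanced open/close" example property are all hypothetical names introduced here.

# A minimal sketch of the suppress-then-commit strategy described above.
# The monitor holds back intercepted actions and releases the buffered
# suffix only when the extended output satisfies the property again.
# `satisfies` stands in for a decidable check of the property on finite
# action sequences.

def make_edit_monitor(satisfies):
    emitted = []  # actions already released to the environment
    buffer = []   # suppressed actions awaiting a property-satisfying point

    def step(action):
        buffer.append(action)
        if satisfies(emitted + buffer):
            released = buffer[:]      # commit the whole suppressed suffix
            emitted.extend(buffer)
            buffer.clear()
            return released
        return []                     # keep suppressing for now

    return step

# Hypothetical renewal-style property for illustration: a finite sequence
# is valid when every "open" has been matched by a later "close".
def balanced(seq):
    depth = 0
    for a in seq:
        depth += 1 if a == "open" else (-1 if a == "close" else 0)
        if depth < 0:
            return False
    return depth == 0

monitor = make_edit_monitor(balanced)
for a in ["open", "write", "close", "open", "write"]:
    print(a, "->", monitor(a))
# "open" and "write" are held back until the matching "close" arrives;
# the trailing unmatched "open"/"write" are never released.

The reason a strategy of this shape can enforce renewal properties is the defining feature of the class: every valid infinite execution has infinitely many valid prefixes, so the monitor reaches a commit point infinitely often and never withholds a valid run forever.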

  • ABSTRACT: The aim of this project is to develop a feasible design for a Model Development Environment that supports the development of net-centric systems with guarantees of user-specified safety and information assurance policies. This paper focuses on (1) a policy language for expressing system safety and information assurance constraints, (2) analysis mechanisms for detecting policy applicability in a model, and (3) enforcement mechanisms and associated assurance arguments and evidence. An overarching objective is to lower the cost of producing certified software that is ready for service in a SOA. This entails lowering the cost of initial certification and the cost of recertification after modifications.
  • ABSTRACT: We are interested in the validation of opacity. Opacity models the impossibility for an attacker to retrieve the value of a secret in a system of interest. Roughly speaking, ensuring opacity guarantees that a secret of the system does not leak to an attacker. More specifically, we study how we can model-check, verify, and enforce several levels of opacity at system runtime. Besides existing notions of opacity, we also introduce K-step strong opacity, a more practical notion that provides a stronger level of confidentiality.
    Discrete Event Dynamic Systems 01/2014; DOI:10.1007/s10626-014-0196-4 · 0.67 Impact Factor
  • ABSTRACT: We study the enforcement of K-step opacity at runtime. In K-step opacity, knowledge of the secret is of interest to the attacker only within K steps after the secret occurs and becomes obsolete afterwards. We introduce a runtime enforcement mechanism that is placed between the output of the system and the attacker and enforces opacity using delays. If an output event from the system violates K-step opacity, the enforcer stores the event in memory for the minimal number of system steps until the secret is no longer of interest to the attacker (that is, until K-step opacity holds again); a simplified sketch of this delay mechanism follows the list.
    2013 IEEE 52nd Annual Conference on Decision and Control (CDC); 12/2013
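
As a rough illustration of the delay-based enforcement idea in the last abstract, the sketch below holds output events in a buffer while the secret is still of interest and flushes them once K system steps have elapsed. It makes strong simplifying assumptions: the enforcer is told directly when the secret occurs (a real enforcer would infer this from a model of the system), and the class name, event names, and step interface are hypothetical rather than taken from the cited paper.

# A minimal sketch of delay-based enforcement of K-step opacity, under
# the simplifying assumption that the enforcer directly observes when the
# secret occurs.

from collections import deque

class KStepDelayEnforcer:
    def __init__(self, k):
        self.k = k
        self.steps_since_secret = None   # None: no pending secret
        self.buffer = deque()            # delayed output events

    def step(self, event, secret_occurred=False):
        """Process one system step; return the events released to the attacker."""
        if secret_occurred:
            self.steps_since_secret = 0
        elif self.steps_since_secret is not None:
            self.steps_since_secret += 1

        released = []
        if self.steps_since_secret is not None and self.steps_since_secret < self.k:
            # Releasing now could let the attacker locate the secret within
            # K steps, so hold the event in memory.
            self.buffer.append(event)
        else:
            # The secret is no longer of interest: flush any delayed events,
            # then pass the current one through.
            self.steps_since_secret = None
            released.extend(self.buffer)
            self.buffer.clear()
            released.append(event)
        return released

enforcer = KStepDelayEnforcer(k=2)
trace = [("a", False), ("s_out", True), ("b", False), ("c", False), ("d", False)]
for ev, secret in trace:
    print(ev, "->", enforcer.step(ev, secret_occurred=secret))
# Events observed within 2 steps of the secret are delayed and released
# together once the 2-step window has passed.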
