ABSTRACT: Dependency analysis of critical infrastructures is a computationally intensive problem when dealing with large-scale, cross-sectoral, cascading and common-cause failures. The problem intensifies when attempting a dynamic, time-based dependency analysis. This paper extends a previous graph-based risk analysis methodology to dynamically assess the evolution of cascading failures over time. Various growth models are employed to capture slow, linear and rapidly evolving effects, but instead of using static projections, the evolution of each dependency is “objectified” by a fuzzy system that also considers the effects of nearby dependencies. To achieve this, the impact (and, eventually, risk) of each dependency is quantified on the time axis into a form of many-valued logic. In addition, the methodology is extended to analyze major failures triggered by concurrent common-cause cascading events. A critical infrastructure dependency analysis tool, CIDA, that implements the extended risk-based methodology is described. CIDA is designed to assist decision makers in proactively analyzing dynamic and complex dependency risk paths in two ways: (i) identifying potentially underestimated low risk dependencies and reclassifying them to a higher risk category before they are realized; and (ii) simulating the effectiveness of alternative mitigation controls with different reaction times. Thus, the CIDA tool can be used to evaluate alternative defense strategies for complex, large-scale and multi-sectoral dependency scenarios and to assess their resilience in a cost-effective manner.
Full-text · Article · Dec 2015 · International Journal of Critical Infrastructure Protection
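The time-based chain-risk idea in the abstract above can be illustrated with a minimal sketch: a dependency chain's risk is the product of its link likelihoods times an impact that grows over time under a slow, linear or rapid model. All function names, growth formulas and rates here are illustrative assumptions, not the paper's actual equations.

```python
import math

def impact_at(t, initial_impact, growth="linear", rate=0.1):
    """Project a dependency's impact at time t under a growth model.
    The slow/linear/rapid curves mimic the evolution models described in
    the abstract; the exact formulas below are illustrative assumptions."""
    if growth == "slow":
        return min(1.0, initial_impact * (1 + rate * math.log1p(t)))
    if growth == "linear":
        return min(1.0, initial_impact * (1 + rate * t))
    if growth == "rapid":
        return min(1.0, initial_impact * (1 + rate) ** t)
    raise ValueError("unknown growth model: %s" % growth)

def chain_risk(t, likelihoods, initial_impact, growth="linear"):
    """Risk of an n-th order dependency chain at time t: the product of the
    cascade-link likelihoods times the projected impact of the final node."""
    p = 1.0
    for likelihood in likelihoods:
        p *= likelihood
    return p * impact_at(t, initial_impact, growth)
```

A dependency that looks low risk at t = 0 can cross into a higher risk category as t grows, which is the kind of reclassification the CIDA tool is said to support.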
ABSTRACT: Browsers enable the user to surf the Internet and access web sites that may include social media, email services, etc. However, such activity exposes the user to various web threats (e.g. tracking, malicious content) that may imperil the user’s data and any sensitive information involved. Therefore, web browsers offer pre-installed security controls to protect users from these threats. Third-party browser software (i.e. add-ons) is also available that enhances these pre-installed security controls or substitutes for them. In this paper, we examine the security controls available in modern browsers to reveal any gaps in the offered protection. We also study the available security and privacy add-ons and observe whether the above-mentioned gaps (i.e. when a security control is unavailable) are covered or need to be revisited.
ABSTRACT: Recent advances in static and dynamic program analysis have resulted in tools capable of detecting various types of security bugs in Applications under Test (AUTs). However, any such analysis is designed for a priori specified types of bugs and is characterized by some rate of false positives or even false negatives, as well as certain scalability limitations. We present a new analysis and source code classification technique, and a prototype tool, aimed at aiding code reviews in the detection of general information-flow-dependent bugs. Our approach classifies the criticality of likely exploits in the source code using two measuring functions, namely Severity and Vulnerability. For an AUT, we analyze every pair of input vector and program sink in an execution path, which we call an Information Block (IB). A classification technique is introduced for quantifying the Severity (danger level) of an IB by static analysis and computation of its Entropy Loss. An IB's Vulnerability is quantified using a tainted object propagation analysis along with a Fuzzy Logic system. Possible exploits are then characterized with respect to their Risk by combining the computed Severity and Vulnerability measurements through an aggregation operation over two fuzzy sets. An IB is characterized as high risk when both its Severity and Vulnerability rankings are above the low zone. In this case, a detected code exploit is reported by our prototype tool, called Entroine. The effectiveness of our approach has been tested by analyzing 45 Java programs from NIST's Juliet Test Suite, which implement three different common weakness exploits. All existing code exploits were detected without any false positives.
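The risk-aggregation step described above can be sketched as a simple fuzzy conjunction: combine the Severity and Vulnerability rankings and flag an IB only when both clear the low zone. The minimum t-norm and the 0.33 threshold are assumptions made for illustration; the paper's actual fuzzy sets and aggregation operator may differ.

```python
def classify_risk(severity, vulnerability, low_zone=0.33):
    """Aggregate the two rankings (both in [0, 1]) with the minimum t-norm,
    a common fuzzy-AND. An exploit is flagged only when BOTH rankings lie
    above the low zone, as the abstract describes.
    The threshold value and choice of t-norm are illustrative assumptions."""
    risk = min(severity, vulnerability)
    flagged = severity > low_zone and vulnerability > low_zone
    return risk, flagged
```

A high-Severity IB with negligible Vulnerability (or vice versa) is thus not reported, which keeps the false-positive rate down.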
ABSTRACT: Dependency risk graphs have been proposed as a tool for analyzing cascading failures due to critical infrastructure dependency chains. However, dependency chain analysis is not by itself adequate to develop an efficient risk mitigation strategy – one that specifies which critical infrastructures should have high priority for applying mitigation controls in order to achieve an optimal reduction in the overall risk. This paper extends previous dependency risk analysis research to implement efficient risk mitigation. This is accomplished by exploring the relation between dependency risk paths and graph centrality characteristics. Graph centrality metrics are applied to design and evaluate the effectiveness of alternative risk mitigation strategies. The experimental evaluations are based on random graphs that simulate common critical infrastructure dependency characteristics as identified by recent empirical studies. The experimental results are used to specify an algorithm that prioritizes critical infrastructure nodes for applying controls in order to achieve efficient risk mitigation.
Full-text · Article · May 2015 · International Journal of Critical Infrastructure Protection
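The centrality-driven prioritization described in the abstract above can be illustrated with a minimal sketch that ranks infrastructure nodes by degree centrality and selects the top-k as candidates for mitigation controls. Degree centrality is used here only for brevity; the paper evaluates several centrality metrics, and the graph encoding is an assumption.

```python
from collections import defaultdict

def degree_centrality(graph):
    """graph: dict mapping an infrastructure node to the set of nodes that
    depend on it (directed dependency edges). Centrality here counts both
    outgoing and incoming dependencies per node."""
    deg = defaultdict(int)
    for node, dependents in graph.items():
        deg[node] += len(dependents)
        for d in dependents:
            deg[d] += 1
    return dict(deg)

def mitigation_priority(graph, k=3):
    """Return the k most central nodes: the candidates to receive
    mitigation controls first (ties broken alphabetically)."""
    deg = degree_centrality(graph)
    return sorted(deg, key=lambda n: (-deg[n], n))[:k]
```

For example, in a small cross-sectoral graph where a power node feeds telecom and water, the power node surfaces at the top of the priority list.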
ABSTRACT: URL blacklists are used by the majority of modern web browsers as a means to protect users from rogue web sites, i.e. those serving malware and/or hosting phishing scams. There is a plethora of URL blacklist/reputation services, of which Google’s Safe Browsing and Microsoft’s SmartScreen stand out as the two most commonly used. Frequently, such lists are the only safeguard web browsers implement against these threats. In this paper, we examine the level of protection offered by popular web browsers on iOS, Android and desktop (Windows) platforms against a large set of phishing and malicious URLs. The results reveal that most browsers – especially those for mobile devices – offer limited protection against such threats. As a result, we propose and evaluate a countermeasure which can be used to significantly improve the level of protection offered to users, regardless of the web browser or platform they are using.
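The basic mechanism behind a browser-independent countermeasure can be sketched as a lookup against the union of several blacklist feeds, so that a URL is blocked if any feed flags it. The feed representation (local sets of hostnames) and the function names are assumptions for illustration; the paper's actual countermeasure design is not reproduced here.

```python
from urllib.parse import urlparse

def is_blocked(url, blacklists):
    """Return True if any blacklist flags the URL's host.
    `blacklists` is an iterable of hostname sets, e.g. locally cached
    snapshots of several reputation feeds (a hypothetical encoding).
    Consulting the union of feeds, rather than whichever single feed a
    given browser ships with, narrows the per-browser protection gaps
    the abstract reports."""
    host = urlparse(url).hostname
    return any(host in bl for bl in blacklists)
```

A proxy or local service applying this check protects mobile browsers that implement no blacklist lookup of their own.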
ABSTRACT: A method for predicting software failures in critical information infrastructures is presented in this paper. Software failures in critical infrastructures can stem from logical errors in the source code that manipulates controllers handling machinery, i.e. Remote Terminal Units and Programmable Logic Controllers in SCADA systems. Since these controllers are often responsible for handling hardware in critical infrastructures, detecting such logical errors in the software controlling their functionality implies detecting possible failures in the machinery itself and, consequently, predicting single or cascading infrastructure failures. Our method may also be tweaked to provide estimates of the impact and likelihood of each detected error. An existing source code analysis method is adjusted to analyze code able to send commands to SCADA systems. A practical implementation of the method is presented and discussed. Examples are given using open-source SCADA operating interfaces.
ABSTRACT: Spam over Internet Telephony (SPIT) is a potential source of disruption in Voice over IP (VoIP) systems. The use of anti-SPIT mechanisms, such as filters and audio CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart), can prevent unsolicited calls and lead to less unwanted traffic. In this paper, we present a game-theoretic model in which the game is played between SPIT senders and Internet telephony users. The game includes call filters and audio CAPTCHAs, so as to classify incoming calls as legitimate or malicious. We show how the resulting model can be used to decide upon the trade-offs present in this problem and help predict the SPIT sender's behavior. We also highlight the advantages, in terms of SPIT call reduction, of merely introducing CAPTCHAs, and provide experimental verification of our results.
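The core of such a game-theoretic argument can be sketched as an expected-payoff calculation for the SPIT sender: a call pays off only if it evades both the filter and the audio CAPTCHA, and rational SPIT sending stops once the expected payoff turns negative. The payoff form and all parameter names are illustrative assumptions, not the paper's actual game.

```python
def spit_expected_payoff(p_filter, p_captcha, gain, cost):
    """Expected payoff of one SPIT call: it earns `gain` only if the call
    evades both the filter (blocked with prob. p_filter) and the audio
    CAPTCHA (blocked with prob. p_captcha), and it always incurs `cost`.
    Independence of the two defenses is an assumption for illustration."""
    p_through = (1 - p_filter) * (1 - p_captcha)
    return p_through * gain - cost

def spit_is_rational(p_filter, p_captcha, gain, cost):
    """A rational SPIT sender keeps calling only while the payoff is positive."""
    return spit_expected_payoff(p_filter, p_captcha, gain, cost) > 0
```

Even a moderately effective CAPTCHA multiplies the filter's blocking probability, which is one way to read the abstract's point that merely introducing CAPTCHAs reduces SPIT calls.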