Book

Security in Highly Connected IT Systems: Results of the Bavarian Research Alliance FORSEC

Abstract

This book reports the final results of the Bavarian Research Alliance FORSEC – Security in Highly Connected IT Systems. FORSEC is a joint research alliance of four Bavarian universities (University of Regensburg, University of Passau, Technical University of Munich and Friedrich-Alexander-University of Erlangen-Nuremberg) and has been generously funded by the Bavarian State Ministry of Education, Science and Arts. The research alliance FORSEC would not have been possible without the work of our participating colleagues, including the Principal Investigators, doctoral students, and student workers, all of whom devoted much time to collaborative research, writing publications, and organizing and attending workshops and conferences over a period of more than four years. We would like to thank them all for making FORSEC a successful research alliance. As a research alliance of four universities, ten research groups, and eleven research projects, FORSEC has gone beyond what can be achieved by a set of individual research projects that are unconnected to each other. The collaborative nature of the endeavor has been realized through overall guiding research questions, the organization of the research projects into three overarching research clusters, several joint workshops, and the joint publication of results across research projects and in cooperation between senior researchers and doctoral students.

In the first part of this book, we present the overall research goals and questions as well as the organizational structure of FORSEC. In the second part, we illustrate the three research clusters of FORSEC, namely PreSTA, STAR and CLOUD, in more detail. In the third and most comprehensive part of this book, we provide a description of all eleven research projects, including their publications with abstract, citation, and the URL where the full article can be retrieved.
In the final reference section, we list all publications of FORSEC in alphabetical order by first author. We hope that this report and the set of more than 100 FORSEC publications will stimulate further research on IT security, which we believe will remain one of the most challenging areas in future research on information and communication technologies. We would like to thank Eva Weishäupl and Dr. Christian Richthammer for their great editorial support.
Article
In the last few years, research has been motivated to provide a categorization and classification of security concerns accompanying the growing adoption of Infrastructure as a Service (IaaS) clouds. Studies have been motivated by the risks, threats and vulnerabilities imposed by the components within the environment and have provided general classifications of related attacks, as well as the respective detection and mitigation mechanisms. Virtual Machine Introspection (VMI) has been proven to be an effective tool for malware detection and analysis in virtualized environments. In this paper, we classify attacks in IaaS clouds that can be investigated using VMI-based mechanisms. This implies a special focus on attacks that directly involve Virtual Machines (VMs) deployed in an IaaS cloud. Our classification methodology takes into consideration the source, target, and direction of the attacks. As each actor in a cloud environment can be both source and target of attacks, the classification provides any cloud actor the necessary knowledge of the different attacks by which it can threaten or be threatened, and consequently deploy adapted VMI-based monitoring architectures. To highlight the relevance of attacks, we provide a statistical analysis of the reported vulnerabilities exploited by the classified attacks and their financial impact on actual business processes.
Conference Paper
Performing triage of malicious samples is a critical step in security analysis and mitigation development. Unfortunately, the obfuscation and outright removal of information contained in samples makes this a monumentally challenging task. However, the widely used Portable Executable file format (PE32), a data structure used by the Windows OS to handle executable code, contains hidden information that can provide a security analyst with an upper hand. In this paper, we perform the first accurate assessment of the hidden PE32 field known as the Rich Header and describe how to extract the data that it clandestinely contains. We study 964,816 malware samples and demonstrate how the information contained in the Rich Header can be leveraged to perform rapid triage across millions of samples, including packed and obfuscated binaries. We first show how to quickly identify post-modified and obfuscated binaries through anomalies in the header. Next, we exhibit the Rich Header’s utility in triage by presenting a proof-of-concept similarity matching algorithm which is solely based on the contents of the Rich Header. With our algorithm we demonstrate how the contents of the Rich Header can be used to identify similar malware, different versions of malware, and malware built under different build environments, revealing potentially distinct actors. Furthermore, we are able to perform these operations in near real-time, in less than 6.73 ms on commodity hardware across our studied samples. In conclusion, we establish that this little-studied header in the PE32 format is a valuable asset for security analysts and has a breadth of future potential.
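The decoding the abstract alludes to can be illustrated in a few lines. The following is a minimal, hypothetical sketch (not the authors' tool): the Rich Header sits between the DOS stub and the PE header, ends with the ASCII marker `Rich` followed by a 4-byte XOR key, and is decoded by XORing backwards until the `DanS` start marker appears.

```python
import struct

def parse_rich_header(data: bytes):
    """Decode the XOR-obfuscated Rich Header of a PE32 file.

    Returns a list of {prod_id, build, count} entries, or None if
    no Rich Header is present.
    """
    # e_lfanew (the offset of the PE header) is stored at offset 0x3C.
    pe_offset = struct.unpack_from("<I", data, 0x3C)[0]
    end = data.find(b"Rich", 0, pe_offset)
    if end == -1:
        return None  # no Rich Header present
    key = data[end + 4:end + 8]  # 4-byte XOR key follows the marker

    # XOR-decode backwards in 4-byte steps until the "DanS" start marker.
    decoded = []
    pos = end - 4
    while pos >= 0:
        dword = bytes(b ^ k for b, k in zip(data[pos:pos + 4], key))
        if dword == b"DanS":
            break
        decoded.append(struct.unpack("<I", dword)[0])
        pos -= 4
    decoded.reverse()

    # Strip the zeroed padding dwords that follow DanS.
    while decoded and decoded[0] == 0:
        decoded.pop(0)

    # Entries are (comp.id, use-count) pairs; comp.id encodes the
    # product id and build number of the tool that produced the objects.
    return [{"prod_id": comp_id >> 16,
             "build": comp_id & 0xFFFF,
             "count": count}
            for comp_id, count in zip(decoded[0::2], decoded[1::2])]
```

Triage heuristics such as the paper's similarity matching would then operate on the decoded `(prod_id, build, count)` tuples rather than on raw bytes.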
Conference Paper
Recommender systems are pivotal components of modern Internet platforms and constitute a well-established research field. By now, research has resulted in highly sophisticated recommender algorithms whose further optimization often yields only marginal improvements. This paper goes beyond the commonly dominating focus on optimizing algorithms and instead follows the idea of enhancing recommender systems with reputation data. Since the concept of reputation-enhanced recommender systems has attracted considerable attention in recent years, the main aim of the paper is to provide a comprehensive survey of the approaches proposed so far. To this end, existing works are identified by means of a systematic literature review and classified according to carefully considered dimensions. In addition, the resulting structured analysis of the state of the art serves as a basis for the deduction of future research directions.
Article
We study the problem of evidence collection in environments where abstraction layers are used to organize data storage. Based on a formal model, the problem of evidence collection is defined as the task of reconstructing high-level storage from low-level storage. We investigate the conditions under which different levels of evidence collection can be performed and show that abstraction layers, in general, make it harder to acquire evidence. We illustrate our findings by describing several practical scenarios from file systems, memory management, and disk volume management.
Conference Paper
Even though today's recommender algorithms are highly sophisticated, they can hardly take into account the users' situational needs. An obvious way to address this is to initially inquire about the users' momentary preferences, but the users' inability to accurately state them upfront may lead to the loss of several good alternatives. Hence, this paper suggests generating the recommendations without such additional input data from the users and letting them interactively explore the recommended items on their own. To support this explorative analysis, a novel visualization tool based on treemaps is developed. The analysis of the prototype demonstrates that the interactive treemap visualization facilitates the users' comprehension of the big picture of available alternatives and the reasoning behind the recommendations. This helps the users get clear about their situational needs, inspect the most relevant recommendations in detail, and finally arrive at informed decisions.
Article
Reputation systems are an essential part of electronic marketplaces that provide a valuable method to identify honest sellers and punish malicious actors. Due to the continuous improvement of the computation models applied, advanced reputation systems have become non-transparent and incomprehensible to the end user. As a consequence, users become skeptical and lose trust in the reputation system. In this work, we take a step toward increasing the transparency of reputation systems by providing interactive visual representations of seller reputation profiles. To this end, we propose TRIVIA - a visual analytics tool to evaluate seller reputation. Besides enhancing transparency, our results show that by incorporating the visual-cognitive capabilities of a human analyst and the computing power of a machine in TRIVIA, malicious sellers can be reliably identified. In this way we provide a new perspective on how the problem of robustness could be addressed.
Conference Paper
This paper documents some experiences and lessons learned during the development of an IoT security application for the EU-funded project RERUM. The application provides sensor data with end-to-end integrity protection through elliptic curve digital signatures (ECDSA). Here, our focus is on the cost in terms of hardware, runtime and power consumption in a real-world trial scenario. We show that providing signed sensor data has little impact on the overall power consumption. We present the experiences that we made with different ECDSA implementations. Hardware-accelerated signing can further reduce the costs in terms of runtime; however, the differences were not significant. The relevant aspect in terms of hardware is memory: experiences made with MSP430 and ARM Cortex M3 based hardware platforms revealed that the limiting factor is RAM capacity. Our experiences made during the trials show that problems typical for low-power and lossy networks can be addressed by the chosen network stack of CoAP, UDP, 6LoWPAN and 802.15.4, while still being lightweight enough to drive the application on the constrained devices investigated.
Article
Due to compliance and IT security requirements, company-wide identity and access management within organizations has gained significant importance in research and practice over the last years. Companies aim at standardizing user management policies in order to reduce administrative overhead and strengthen IT security. These policies provide the foundation for every identity and access management system, whether implemented in IT systems or existing only in the minds of the responsible identity and access management (IAM) engineers. Despite its relevance, hardly any supportive means for the automated detection, refinement, and management of policies are available. As a result, policies outdate over time, leading to security vulnerabilities and inefficiencies. Existing research mainly focuses on policy detection and enforcement without providing the required guidance for policy management or the necessary instruments to enable policy adaptability for today’s dynamic IAM. This paper closes the existing gap by proposing a dynamic policy management process which structures the activities required for policy management in identity and access management environments. In contrast to current approaches, it utilizes contextual user management data and key performance indicators for policy detection and refinement and offers result visualization techniques that foster human understanding. In order to underline its applicability, this paper provides an evaluation based on real-life data from a large industrial company.
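To make the idea of automated policy detection concrete, here is a minimal, hypothetical sketch (not the paper's system; data shapes and the support threshold are illustrative): candidate rules of the form "attribute value grants permission" are mined from user management data whenever enough of the users sharing an attribute value actually hold the permission.

```python
from collections import defaultdict

def mine_policies(users, min_support=0.9):
    """Derive candidate assignment rules 'attribute value -> permission'
    from user data. A rule is proposed when at least min_support of the
    users sharing an attribute value hold the permission. This is a toy
    version of the policy-detection step; real IAM policy mining would
    also weigh contextual data and key performance indicators."""
    groups = defaultdict(list)
    for u in users:
        for attr, value in u["attrs"].items():
            groups[(attr, value)].append(u)

    rules = []
    for (attr, value), members in groups.items():
        # Candidate permissions: everything held by at least one member.
        perms = set().union(*(m["perms"] for m in members))
        for p in perms:
            support = sum(p in m["perms"] for m in members) / len(members)
            if support >= min_support:
                rules.append((attr, value, p, round(support, 2)))
    return sorted(rules)
```

A refinement step, as described in the paper, would then review the mined rules against outliers instead of treating them as final policy.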
Article
Mobile devices, like tablets and smartphones, are commonplace in everyday life. Thus, the degree of security these devices can provide against digital forensics is of particular interest. A common method to access arbitrary data in main memory is the cold boot attack. The cold boot attack exploits the remanence effect, which causes data in DRAM modules not to lose its content immediately after a power cut-off. This makes it possible to restart a device and extract the data in main memory. In this paper, we present a novel framework for cold boot-based data acquisition with a minimal bare metal application on a mobile device. In contrast to other cold boot approaches, our forensics tool overwrites only a minimal amount of data in main memory. This tool requires no more than three kilobytes of constant data in the kernel code section. We hence sustain all of the data relevant for the analysis of the previously running system. This makes it possible to analyze the memory with data acquisition tools. For this purpose, we extend the memory forensics tool Volatility in order to request parts of the main memory dynamically from our bare metal application. We show the feasibility of our approach on the Samsung Galaxy S4 and Nexus 5 mobile devices along with an extensive evaluation. First, we compare our framework to a traditional memory dump-based analysis. In the next step, we show the potential of our framework by acquiring sensitive user data.
Conference Paper
The vast amount of computation techniques for reputation systems proposed in the past has resulted in a need for a global online trust repository with reusable components. In order to increase the practical usability of such a repository, we propose a software framework that supports the user in selecting appropriate components and automatically combines them into a fully functional computation engine. On the one hand, this lets developers experiment with different concepts and move away from one single static computation engine. On the other hand, our software framework also enables an explorative trust evaluation through user interaction. In this way, we notably increase the transparency of reputation systems. To demonstrate the practical applicability of our proposal, we present realistic use cases and describe how it would be employed in these scenarios.
Conference Paper
Nowadays, many applications by default use encryption of network traffic to achieve a higher level of privacy and confidentiality. One of the most frequently applied cryptographic protocols is Transport Layer Security (TLS). However, adversaries also make use of TLS encryption in order to hide attacks or command & control communication. For detecting and analyzing such threats, making the contents of encrypted communication available to security tools becomes essential. The ideal solution for this problem should offer efficient and stealthy decryption without having a negative impact on overall security. This paper presents TLSkex (TLS Key EXtractor), an approach to extract the master key of a TLS connection at runtime from the virtual machine’s main memory using virtual machine introspection techniques. Afterwards, the master key is used to decrypt the TLS session. In contrast to other solutions, TLSkex neither manipulates the network connection nor the communicating application. Thus, our approach is applicable for malware analysis and intrusion detection in scenarios where applications cannot be modified. Moreover, TLSkex is also able to decrypt TLS sessions that use perfect forward secrecy key exchange algorithms. In this paper, we define a generic approach for TLS key extraction based on virtual machine introspection, present our TLSkex prototype implementation of this approach, and evaluate the prototype.
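One building block of such key extraction can be illustrated with a toy memory scan. This is a hedged sketch, not TLSkex itself: it only performs an entropy pass over a dump to flag 48-byte windows (the length of a TLS master secret) that look like key material, whereas the real system narrows candidates further, e.g. by locating the TLS library's session structures via introspection.

```python
import math

MASTER_SECRET_LEN = 48  # TLS master secrets are 48 bytes (RFC 5246)

def entropy(window: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte."""
    counts = {}
    for b in window:
        counts[b] = counts.get(b, 0) + 1
    n = len(window)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def candidate_secrets(dump: bytes, threshold=4.5, step=8):
    """Slide over a memory dump and collect 48-byte windows whose
    entropy suggests key material. Threshold and stride are
    illustrative tuning knobs, not values from the paper."""
    found = []
    for off in range(0, len(dump) - MASTER_SECRET_LEN + 1, step):
        window = dump[off:off + MASTER_SECRET_LEN]
        if entropy(window) >= threshold:
            found.append((off, window))
    return found
```

Each surviving candidate would then be tested by trying to decrypt captured TLS records with keys derived from it, which weeds out false positives cheaply.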
Conference Paper
In this paper, we demonstrate that Android malware can bypass all automated analysis systems, including AV solutions, mobile sandboxes, and the Google Bouncer. We propose a tool called Sand-Finger for the fingerprinting of Android-based analysis systems. By analyzing the fingerprints of ten unique analysis environments from different vendors, we were able to find characteristics in which all tested environments differ from actual hardware. Depending on the availability of an analysis system, malware can either behave benignly or load malicious code at runtime. We classify this group of malware as Divide-and-Conquer attacks that are efficiently obfuscated by a combination of fingerprinting and dynamic code loading. In this group, we aggregate attacks that work against dynamic as well as static analysis. To demonstrate our approach, we create proof-of-concept malware that evades up-to-date malware scanners for Android. We also prove that known malware samples can enter the Google Play Store by modifying them only slightly. Due to Android's lack of an API for malware scanning at runtime, it is impossible for AV solutions to secure Android devices against these attacks.
Conference Paper
Due to compliance and IT security requirements, company-wide Identity and Access Management within organizations has gained significant importance in research and practice over the last years. Companies aim at standardizing user management policies in order to reduce administrative overhead and strengthen IT security. Despite its relevance, hardly any supportive means for the automated detection, refinement, and management of policies are available. As a result, policies outdate over time, leading to security vulnerabilities and inefficiencies. Existing research mainly focuses on policy detection without providing the required guidance for policy management. This paper closes the existing gap by proposing a Dynamic Policy Management Process which structures the activities required for policy management in Identity and Access Management environments. In contrast to current approaches, it fosters the consideration of contextual user management data for policy detection and refinement and offers result visualization techniques that foster human understanding. In order to underline its applicability, this paper provides a naturalistic evaluation based on real-life data from a large industrial company.
Conference Paper
Recent data breaches caused by highly-privileged insiders (e.g. the NSA/Snowden case) as well as the proliferation of mobile and cloud applications in enterprises impose new challenges for Identity Management. To cope with these challenges, business analysts have predicted a variety of trends for enterprise Identity Management. In this paper, we conduct a thorough literature analysis to examine to which extent the scientific community seizes upon these trends and identify major research areas therein. Results show that despite the analysts' predictions, research stagnates for attribute-based access control and privileged user management, while for cloud-based IdM and bring your own device it corresponds to the analysts' forecast.
Conference Paper
In the recent past, the application of role-based access control for streamlining Identity and Access Management in organizations has gained significant importance in research and practice. After the initial setup of a role model, the central challenge is its operative management and strategic maintenance. In practice, organizations typically struggle with a high number of potentially outdated and erroneous role definitions leading to security vulnerabilities and compliance violations. Applying a process-oriented approach for assessing and optimizing role definitions is mandatory to keep a role model usable and up to date. Existing research on role system maintenance only provides a limited technical perspective without focusing on the required guidance and applicability in practice. This paper closes the existing gap by proposing ROPM, a structured Role Optimization Process Model for improving the quality of existing role definitions. Based on comprehensive tool support, it automates role optimization activities and integrates both a technical and a business-oriented perspective. It is based on the iterative application of role cleansing and role model extension activities in order to reduce erroneous role definitions and remodel roles according to organizational requirements. In order to underline applicability, this paper provides a naturalistic evaluation based on real-life data.
Conference Paper
In recent years, virtual machine introspection has become a valuable technique for developing security applications for virtualized environments. With the increasing popularity of the ARM architecture and the recent addition of hardware virtualization extensions there is a growing need for porting existing tools to this new platform. Porting these applications requires proper hypervisor support, which we have been exploring and developing for the upcoming Xen 4.6 release. In this paper we explore using ARM's two-stage paging mechanisms with Xen to enable stealthy, efficient tracing of guest operating systems for security purposes.
Conference Paper
We report the results of a field experiment in which we sent over 1,200 university students an email or a Facebook message with a link to (non-existing) party pictures from a non-existing person, and later asked them about the reasons for their link-clicking behavior. We registered a significant difference in clicking rates: 20% of email versus 42.5% of Facebook recipients clicked. The most frequently reported reason for clicking was curiosity (34%), followed by the explanation that the message fit the recipient’s expectations (27%). Moreover, 16% thought that they might know the sender. These results show that people’s decisional heuristics are relatively easy to misuse in a targeted attack, making defense especially challenging.
Conference Paper
The protection of assets, including IT resources, intellectual property and business processes, against security attacks has become a challenging task for organizations. From an economic perspective, firms need to minimize the probability of a successful security incident or attack while staying within the boundaries of their information security budget in order to optimize their investment strategy. In this paper, an optimization model to support information security investment decision-making in organizations is proposed that considers two conflicting objectives: minimizing the costs of countermeasures while maximizing the security level. Decision models that support firms' decisions considering the trade-off between the security level and the investment allocation are beneficial for organizations to facilitate and justify security investment choices.
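The cost/security trade-off can be illustrated with a small enumeration. The countermeasure figures and the independence assumption below are hypothetical, not taken from the paper: every portfolio within budget gets a cost and a security level, and dominated portfolios are discarded to obtain the Pareto frontier a decision-maker would choose from.

```python
from itertools import combinations

# Hypothetical countermeasures: (name, cost, risk reduction).
COUNTERMEASURES = [
    ("firewall", 30, 0.40),
    ("ids", 50, 0.55),
    ("training", 20, 0.25),
    ("backup", 25, 0.30),
]

def portfolios(measures, budget):
    """Enumerate all countermeasure subsets within the budget and return
    {portfolio: (cost, security level)}. The security level is modeled
    as 1 minus the product of residual risks, i.e. countermeasures fail
    independently -- a simplifying assumption for illustration."""
    result = {}
    for r in range(len(measures) + 1):
        for combo in combinations(measures, r):
            cost = sum(c for _, c, _ in combo)
            if cost > budget:
                continue
            residual = 1.0
            for _, _, reduction in combo:
                residual *= (1.0 - reduction)
            result[tuple(n for n, _, _ in combo)] = (cost, 1.0 - residual)
    return result

def pareto_front(candidates):
    """Keep portfolios that no other portfolio beats in both objectives
    (lower or equal cost AND higher or equal security, strictly better
    in at least one)."""
    items = list(candidates.items())
    front = []
    for name, (cost, sec) in items:
        dominated = any(c2 <= cost and s2 >= sec and (c2, s2) != (cost, sec)
                        for _, (c2, s2) in items)
        if not dominated:
            front.append((name, cost, round(sec, 3)))
    return sorted(front, key=lambda t: t[1])
```

Real models replace brute-force enumeration with mathematical programming, but the two-objective structure is the same.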
Conference Paper
The collection of monitoring data in distributed systems can serve many different purposes, such as system status monitoring, performance evaluation, and optimization. There are many well-established approaches for data collection and visualization in these areas. For objectives such as debugging complex distributed applications, in-depth analysis of malicious attacks, and forensic investigations, the joint analysis and visualization of a large variety of data gathered at different layers of the system is of great value. The utilization of heavy-weight monitoring techniques requires a cost-aware on-demand activation of such monitoring. We present an architecture for an interactive and cost-aware visualization of monitoring data combined from multiple sources in distributed systems. We introduce two distinguishing properties: the possibility of reconfiguring data collection and a cost prediction mechanism that supports the user in a cost-aware, dynamic activation of monitoring components in an interactive in-depth analysis. We illustrate the use of such cost prediction for monitoring using VMI-based mechanisms.
Conference Paper
Virtual machine introspection (VMI) is a technology with many possible applications, such as malware analysis and intrusion detection. However, this technique is resource intensive, as inspecting program behavior includes recording a high number of events caused by the analyzed binary and related processes. In this paper we present an architecture that leverages cloud resources for virtual machine-based malware analysis in order to train a classifier for detecting cloud-specific malware. This architecture is designed with the resource consumption of applying VMI-based technology in production systems in mind, in particular the overhead of tracing a large set of system calls. In order to minimize the data acquisition overhead, we use a data-driven approach from the area of resource-aware machine learning. This approach enables us to optimize the trade-off between malware detection performance and the overhead of our VMI-based tracing system.
Conference Paper
A honeypot provides information about new attack and exploitation methods and allows analyzing an adversary's activities during or after exploitation. One way for an adversary to communicate with a server is via secure shell (SSH). SSH provides secure login, file transfer, X11 forwarding, and TCP/IP connections over untrusted networks. SSH is a preferred target for attacks, as it is frequently used with password-based authentication, and weak passwords are easily exploited using brute-force attacks. In this paper, we introduce a Virtual Machine Introspection based SSH honeypot. We discuss the design of the system and how to extract valuable information such as the credentials used by the attacker and the entered commands. Our experiments show that the system is able to detect the adversary's activities during and after exploitation, and it has advantages compared to currently used SSH honeypot approaches.
Conference Paper
Multi-cloud architectures enable the design of resilient distributed service applications. Such applications can benefit from a combination of intrusion-tolerant replication across clouds with intrusion detection and analysis mechanisms. Such mechanisms enable the detection of attacks that affect multiple replicas and thus exceed the intrusion masking capability, and in addition support fast reaction and recovery from local intrusions. In this work-in-progress paper we present a security analysis on which an intrusion detection and analysis service can be based. We sketch the architecture of such a cross-cloud intrusion detection architecture that combines a set of well-known mechanisms. The goal of our approach is obtaining a resource-efficient service with optimal resilience against malicious attacks.
Conference Paper
Activity recognition using sensors of mobile devices is a topic of interest in many research efforts. It has been established that user-specific training gives good accuracy in accelerometer-based activity recognition. In this paper we test a different approach: offline user-independent activity recognition based on pretrained neural networks with Dropout. Apart from satisfactory recognition accuracy that we prove in our tests, we foresee possible advantages in removing the need for users to provide labeled data and also in the security of the system. These advantages can be the reason for applying this approach in practice, not only in mobile phones but also in other embedded devices.
Conference Paper
Today, a Web browser is a user's gateway to a multitude of Web applications, each with its own balance between confidentiality and integrity versus cross-application content sharing. Modern Web browsers apply the same permissive security policy to all content regardless of its demand for security -- a behavior that enables attacks such as cross-site request forgery (CSRF) or sidejacking. To defend against such attacks, existing countermeasures enforce overly strict policies, which expose incompatibilities with real-world Web applications. As a consequence, users get annoyed by malfunctions. In this paper, we show how browser behavior can be adapted based on the user's authentication status. The browser can enforce enhanced security policies, if necessary, and permit modern communication features, if possible. Our approach mitigates CSRF, session hijacking, sidejacking, and session fixation attacks. We present the implementation as a browser extension, named LogSec, that passively detects the user's authentication status without server-side support and is transparent for the user.
Conference Paper
Redactable signature schemes (RSS) allow the redaction of parts of signed data. Updatable RSS additionally enable the signatory to add new elements, while signatures can be merged by third parties under certain conditions. We propose a framework for two new real-life application scenarios and implement it using an RSS with sufficient functionality on three different platforms, ranging from a potent cloud to a very resource-constrained Android device. Our evaluation shows impractical runtimes, especially on the IoT device, for the existing construction that was proven secure in the standard model. Thus, we provide an adjusted scheme with far better performance, which we prove secure in the random oracle model. Furthermore, we show how to increase performance using parallelization and several optimizations.
Conference Paper
With over one billion devices sold, representing 80% market share, Android remains the most popular platform for mobile devices. Application piracy on this platform is a major concern and a cause of significant losses: about 97% of the top 100 paid apps were found to be hacked in terms of repackaging or the distribution of clones. Therefore, new and stronger methods aiming to increase the burden of reverse engineering and modification of proprietary mobile software are required. In this paper, we propose an application of the Android native code component to implement strong software self-protection for apps. Within this scope, we present three dynamic obfuscation techniques, namely dynamic code loading, dynamic re-encryption, and tamper proofing. We provide a practical evaluation of this approach, assessing both the cost and efficiency of its achieved protection level. Our results indicate that with the proposed methods one can reach significant complication of the reverse-engineering process, while remaining affordable in terms of execution time and application size.
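The dynamic re-encryption idea can be caricatured in a few lines. This is a toy illustration, not the paper's native-code implementation: XOR with a random one-time pad stands in for a real cipher purely for brevity, and the point shown is only the control flow, namely that the payload is re-encrypted under a fresh key after every use, so no stable ciphertext exists for a patcher to target.

```python
import os

class SelfProtectingBlob:
    """Keep a payload encrypted at rest and re-encrypt it under a
    fresh random key immediately after each decryption."""

    def __init__(self, payload: bytes):
        self._key = os.urandom(len(payload))
        self._blob = bytes(p ^ k for p, k in zip(payload, self._key))

    def use(self) -> bytes:
        # Decrypt just-in-time for use...
        plain = bytes(b ^ k for b, k in zip(self._blob, self._key))
        # ...then immediately re-encrypt under a fresh key, so the
        # in-memory ciphertext changes on every invocation.
        self._key = os.urandom(len(plain))
        self._blob = bytes(p ^ k for p, k in zip(plain, self._key))
        return plain
```

In the paper's setting the analogous logic lives in native code and protects DEX bytecode rather than a byte string.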
Article
The efficiency of network virtualization depends on the appropriate assignment of resources. The underlying problem, called virtual network embedding, has been much discussed in the literature, and many algorithms have been proposed, attempting to optimize the resource assignment in various respects. Evaluation of those algorithms requires a large number of randomly generated embedding scenarios. This paper presents a novel scenario generation approach and demonstrates how to produce scenarios with a guaranteed exact solution, thereby facilitating better evaluation of embedding algorithms.
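The "planted solution" idea behind such generators can be sketched as follows. This is a simplified, node-capacity-only toy under stated assumptions (the paper's generator also handles links and guarantees an exact optimum, which this sketch does not): virtual node demands are carved out of the capacities of randomly chosen substrate nodes, so the planted mapping is feasible by construction.

```python
import random

def generate_scenario(n_substrate=8, n_virtual=3, seed=42):
    """Generate a toy VNE scenario with a planted feasible solution.

    Each virtual node's demand is a slice of the capacity of the
    substrate node chosen to host it, so mapping virtual node i onto
    hosts[i] is feasible by construction."""
    rng = random.Random(seed)
    capacity = {v: rng.randint(50, 100) for v in range(n_substrate)}
    hosts = rng.sample(range(n_substrate), n_virtual)  # distinct hosts
    demand = {i: rng.randint(1, capacity[h]) for i, h in enumerate(hosts)}
    planted_mapping = dict(enumerate(hosts))
    return capacity, demand, planted_mapping

def is_feasible(capacity, demand, mapping):
    """Check node-capacity feasibility of a virtual-to-substrate mapping."""
    used = {}
    for vn, sn in mapping.items():
        used[sn] = used.get(sn, 0) + demand[vn]
    return all(used[sn] <= capacity[sn] for sn in used)
```

An embedding algorithm under test can then be scored against a scenario whose feasibility (and, in the paper's full construction, optimality) is known in advance.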
Conference Paper
Automatic malware classification is an essential improvement over the widely-deployed detection procedures using manual signatures or heuristics. Although there exists an abundance of methods for collecting static and behavioral malware data, there is a lack of adequate tools for analysis based on these collected features. Machine learning is a statistical solution to the automatic classification of malware variants based on heterogeneous information gathered by investigating malware code and behavioral traces. However, the recent increase in the variety of malware instances requires further development of effective and scalable automation for malware classification and analysis processes. In this paper, we investigate topic modeling approaches as semantics-aware solutions to the classification of malware based on logs from dynamic malware analysis. We combine results of static and dynamic analysis to increase the reliability of inferred class labels. We utilize a semi-supervised learning architecture to make use of unlabeled data in classification. Using a nonparametric machine learning approach to topic modeling we design and implement a scalable solution while maintaining advantages of semantics-aware analysis. The outcomes of our experiments reveal that our approach brings a new and improved solution to the recurring problems in malware classification and analysis.
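The semi-supervised step can be sketched compactly. As a hedged, stdlib-only stand-in (normalized syscall frequencies replace the LDA topic distributions of the paper, and nearest-centroid assignment replaces the full learning architecture; the trace data is invented):

```python
from collections import Counter
import math

def tf_vector(trace, vocab):
    """Normalized syscall-frequency vector over a fixed vocabulary --
    a crude stand-in for a per-document topic distribution."""
    counts = Counter(t for t in trace if t in vocab)
    total = sum(counts.values()) or 1
    return [counts[w] / total for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def classify(labeled, unlabeled, vocab):
    """Semi-supervised step: average the vectors of labeled traces per
    malware family, then assign each unlabeled trace to the family
    whose centroid is closest in cosine similarity."""
    centroids = {}
    for family, traces in labeled.items():
        vecs = [tf_vector(t, vocab) for t in traces]
        centroids[family] = [sum(col) / len(vecs) for col in zip(*vecs)]
    return [max(centroids,
                key=lambda f: cosine(tf_vector(t, vocab), centroids[f]))
            for t in unlabeled]
```

Replacing `tf_vector` with an actual topic model's inferred distributions preserves the same pipeline shape while adding the semantics-awareness the paper argues for.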
Conference Paper
Reputation systems in current electronic marketplaces can easily be manipulated by malicious sellers in order to appear more reputable than appropriate. We conducted a controlled experiment with 40 UK and 41 German participants on their ability to detect malicious behavior by means of an eBay-like feedback profile versus a novel interface involving an interactive visualization of reputation data. The results show that participants using the new interface could better detect and understand malicious behavior in three out of four attacks (overall detection accuracy: 77% with the new interface vs. 56% with the old). Moreover, with the new interface, only 7% of the users decided to buy from the malicious seller (the options being to buy from one of the available sellers or to abstain from buying), as opposed to 30% in the old interface condition.
Conference Paper
In the Smart Grid, a customer's privacy is threatened by the fact that an attacker could deduce personal habits from detailed consumption data. We analysed the publications in this field of research and found that privacy does not seem to be their main focus. To verify this impression, we analysed the literature using the technique of directed graphs. The analysis indicates that privacy is not yet sufficiently investigated in the Smart Grid context. Hence, we suggest a decentralised IDS based on NILM technology to protect customers' privacy, and we would like to initiate a discussion about this idea.
Chapter
In modern industrial solutions, Ethernet-based communication networks have been replacing bus technologies. Ethernet is no longer found only in inter-controller or manufacturing execution systems, but has penetrated into the real-time-sensitive automation process (i.e., close to the machines and sensors). Ethernet adds many advantages to industrial environments, where digitalization also means more data-driven IT services interacting with the machines. However, in order to cater to the needs of both new and more automation-related communication, the network and its resources need to be restructured among multi-tenant systems. Various Industrial Ethernet (IE) standards already allow some localized separation of application flows with the help of Quality of Service (QoS) mechanisms. These technologies, however, require planning or engineering of the system, which takes place by estimating worst-case scenarios of the traffic generated by all assumed applications. This approach lacks the flexibility to add new services or to extend the system participants on the fly without a major redesign and reconfiguration of the whole network. Network virtualization and segmentation are used to satisfy these requirements for more dynamic scenarios while keeping and protecting time-critical production traffic. Network virtualization allows slicing the real physical network connecting a set of applications and end devices into logically separated portions, or Slices. A set of resource demands and constraints is defined at the Slice, or Virtual Network, level. Slice links are then mapped onto physical paths, starting from end devices and passing through forwarding devices that can guarantee these demands and constraints. In this chapter, the modeling of virtual industrial network constraints is addressed with a focus on communication delay.
For evaluation purposes, the modeled network and mapping criteria are implemented in the Virtual Network Embedding (VNE) traffic-engineering platform ALEVIN [1].
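The delay constraint described above can be illustrated with a simplified additive model: a candidate substrate path is feasible for a Slice link only if the sum of its per-hop delays stays within the link's delay bound. This is a minimal sketch under that assumption; the chapter's full model in ALEVIN covers further constraint types, and the function and variable names here are hypothetical.

```python
def path_delay(path, link_delay):
    """Sum the per-hop delays along a candidate substrate path."""
    return sum(link_delay[(a, b)] for a, b in zip(path, path[1:]))

def feasible_paths(paths, link_delay, max_delay):
    """Keep only the substrate paths whose end-to-end delay satisfies
    the Slice link's delay constraint (additive delay model)."""
    return [p for p in paths if path_delay(p, link_delay) <= max_delay]

# Toy substrate: per-link delays in milliseconds.
link_delay = {("A", "B"): 2.0, ("B", "C"): 3.0, ("A", "C"): 7.0}
paths = [["A", "B", "C"], ["A", "C"]]
feasible_paths(paths, link_delay, max_delay=6.0)  # [["A", "B", "C"]]
```

The two-hop path (5 ms total) meets a 6 ms bound, while the direct 7 ms link does not, so the embedding would map the Slice link onto the longer but faster path.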
Conference Paper
With about 80% of the market share, Android is currently the clearly dominant platform for mobile devices. Application theft and repackaging remain a major threat and a cause of significant losses, affecting as much as 97% of popular paid apps. The ease of decompiling and reverse engineering high-level bytecode, in contrast to native binary code, is considered one of the main reasons for the high piracy rate. In this paper, we address this problem by proposing four static obfuscation techniques: native opaque predicates, native control-flow flattening, native function indirection, and native field access indirection. These techniques provide a simple yet effective way of reducing the task of bytecode reverse engineering to the much harder task of reverse engineering native code. For this purpose, native function calls are injected into an app's bytecode, introducing artificial dependencies between the two execution domains. The adversary is forced to analyze the native code in order to comprehend the overall app's functionality and to successfully launch static and dynamic analyses. Our evaluation of the proposed protection methods shows an acceptable cost in terms of execution time and application size, while significantly complicating the reverse-engineering process.
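The general idea of an opaque predicate, the first of the four techniques, can be sketched as follows. An opaque predicate is an expression whose value is fixed and known to the obfuscator but hard for an analyzer to determine statically; the branch it guards carries the real code, while the other branch is unreachable decoy code. The sketch below is in Python purely for illustration (the paper evaluates the predicate in injected native code, not in bytecode or Python).

```python
def opaque_true(x: int) -> bool:
    """Opaquely true predicate: x * (x + 1) is the product of two
    consecutive integers and is therefore always even, so this
    always returns True, though that fact is not locally obvious."""
    return (x * (x + 1)) % 2 == 0

def protected_call(x: int) -> int:
    # The real computation is guarded by the opaque predicate;
    # the else branch is dead decoy code meant to mislead analysis.
    if opaque_true(x):
        return x + 1   # real path, always taken
    else:
        return x - 1   # bogus path, never executed

protected_call(41)  # 42
```

Because a decompiler cannot trivially prove which branch is live, the control-flow graph it recovers contains spurious paths that inflate the reverse-engineering effort.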
Conference Paper
We investigate the problem of creating complex software obfuscation for mobile applications. We construct complex software obfuscation by sequentially applying simple software obfuscation methods. We define several desirable and undesirable properties of such transformations, including idempotency and monotonicity. We empirically evaluate a set of seven obfuscation methods on 240 Android packages (APKs). We show that many obfuscation methods are idempotent or monotonic.
Conference Paper
In this work, we present the first statistical results on users' understanding, usage and acceptance of a privacy-enhancing technology (PET) called "attribute-based credentials", or Privacy-ABCs. We identify some shortcomings of previous technology acceptance models when they are applied to PETs. In particular, the fact that privacy-enhancing technologies usually assist both the primary and the secondary goals of the users was not addressed before. We present some interesting relationships between the acceptance factors. For example, understanding of the Privacy-ABC technology is correlated with the perceived usefulness of Privacy-ABCs. Moreover, perceived ease of use is correlated with the intention to use the technology. This confirms the conventional wisdom that understanding and usability of technology play important roles in user adoption of PETs.
Conference Paper
Psychology and neuroscience literature shows the existence of upper bounds on the human capacity for executing cognitive tasks and for information processing. These bounds are where, demonstrably, people start experiencing cognitive strain and consequently committing errors in task execution. We argue that the usable security discipline should scientifically understand such bounds in order to have realistic expectations about what people can or cannot attain when coping with security tasks. This may shed light on whether Johnny will ever be able to encrypt. We propose a conceptual framework for the evaluation of human capacities in security that also assigns systems to complexity categories according to their security and usability. With what we have initiated in this paper, we ultimately aim at providing designers of security mechanisms and policies with the ability to say: "This feature of security mechanism X or this element of security policy Y is inappropriate, because the evidence shows that it is beyond the capacity of its target community".
Conference Paper
Modern web applications frequently implement complex control flows, which require the users to perform actions in a given order. Users interact with a web application by sending HTTP requests with parameters and in response receive web pages with hyperlinks that indicate the expected next actions. If a web application takes for granted that the user sends only those expected requests and parameters, malicious users can exploit this assumption by crafting harmful requests. We analyze recent attacks on web applications with respect to user-defined requests and identify their root cause in the missing enforcement of allowed next user requests. Based on this result, we present our approach, named Ghostrail, a control-flow monitor that is applicable to legacy as well as newly developed web applications. It observes incoming requests and lets only those pass that were provided as next steps in the last web page. Ghostrail protects the web application against race condition exploits, the manipulation of HTTP parameters, unsolicited request sequences, and forceful browsing. We evaluate the approach and show that it neither needs a training phase nor a manual policy definition, while it is suitable for a broad range of web technologies.
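The core enforcement idea can be sketched in a few lines: after each response, record the hyperlinks the page offers, and accept a subsequent request only if it was one of those offered next steps. This is a minimal illustration of the principle, not Ghostrail's implementation; the class and method names are hypothetical, and a real monitor would also track forms, parameters, and per-session state.

```python
import re

class ControlFlowMonitor:
    """Toy control-flow monitor: only requests that were offered as
    hyperlinks in the previously served page are allowed to pass."""

    def __init__(self):
        self.allowed = set()   # next steps extracted from the last response

    def observe_response(self, html: str) -> None:
        # Record the hyperlink targets of the outgoing page; these
        # become the only legitimate next requests for this session.
        self.allowed = set(re.findall(r'href="([^"]+)"', html))

    def check_request(self, url: str) -> bool:
        # Deny anything that was not offered as a next step
        # (e.g. forceful browsing or a crafted request sequence).
        return url in self.allowed

monitor = ControlFlowMonitor()
monitor.observe_response('<a href="/cart">Cart</a> <a href="/checkout">Checkout</a>')
monitor.check_request("/checkout")  # True: offered in the last page
monitor.check_request("/admin")     # False: forceful browsing attempt
```

Because the allowed set is derived from the pages the application itself serves, no training phase or manually written policy is needed, matching the property claimed in the abstract.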
Conference Paper
Efficient and secure management of access to resources is a crucial challenge in today's corporate IT environments. During the last years, introducing company-wide Identity and Access Management (IAM) infrastructures building on the Role-based Access Control (RBAC) paradigm has become the de facto standard for granting and revoking access to resources. Due to its static nature, however, the management of role-based IAM structures leads to increased administrative effort and cannot model dynamic business structures. As a result, introducing dynamic attribute-based provisioning and revocation of access privileges is currently seen as the next maturity level of IAM. Nevertheless, up to now no structured process for incorporating Attribute-based Access Control (ABAC) policies into static IAM has been proposed. This paper closes the existing research gap by introducing a novel migration guide for extending static IAM systems with dynamic ABAC policies. By means of structured and tool-supported attribute and policy management activities, the migration guide supports organizations in distributing privilege assignments in an application-independent and flexible manner. In order to show its feasibility, we provide a naturalistic evaluation based on two real-world industry use cases.
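To make the contrast with static role assignments concrete, an ABAC decision evaluates attributes of the subject, resource, action, and context at request time. The following is a minimal sketch of such a decision function with a hypothetical rule format invented for illustration; it is not the paper's migration guide or any standard policy language (a production system would typically use XACML or similar).

```python
def abac_permit(policy, subject, resource, action, context) -> bool:
    """Permit the request if any rule's attribute conditions all match.

    Each rule is a dict of required subject/resource/context attribute
    values plus a set of permitted actions (hypothetical format)."""
    for rule in policy:
        if (all(subject.get(k) == v for k, v in rule.get("subject", {}).items())
                and all(resource.get(k) == v for k, v in rule.get("resource", {}).items())
                and action in rule["actions"]
                and all(context.get(k) == v for k, v in rule.get("context", {}).items())):
            return True
    return False

# One dynamic rule replaces many static role-permission assignments:
policy = [{"subject":  {"department": "finance"},
           "resource": {"type": "invoice"},
           "actions":  {"read", "approve"},
           "context":  {"on_premises": True}}]

abac_permit(policy, {"department": "finance"}, {"type": "invoice"},
            "approve", {"on_premises": True})  # True
```

Note how the same rule automatically denies the request when a context attribute changes (e.g. off-premises access), which a static role assignment cannot express.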
Conference Paper
Reputation systems have been extensively explored in various disciplines and application areas. A problem in this context is that the computation engines of most available reputation systems are designed from scratch and rarely consider well-established concepts and achievements made by others. Thus, approved models and promising approaches may get lost in the shuffle. In this work, we aim to foster reuse with respect to trust and reputation systems by providing a hierarchical component taxonomy of computation engines, which serves as a natural framework for the design of new reputation systems. In order to assist the design process, we furthermore provide a component repository that contains design knowledge on both a conceptual and an implementation level.
Conference Paper
In this paper, we propose a new approach for the static detection of Android malware by means of machine learning that is based on software complexity metrics, such as McCabe's Cyclomatic Complexity and the Chidamber and Kemerer Metrics Suite. The practical evaluation of our approach, involving 20,703 benign and 11,444 malicious apps, demonstrates a high classification quality of the proposed method, and we assess its resilience against common obfuscation transformations. With respect to our large-scale test set of more than 32,000 apps, we show a true positive rate of up to 93% and a false positive rate of 0.5% for unobfuscated malware samples. For obfuscated malware samples, however, we register a significant drop in the true positive rate, whereas permission-based classification schemes are immune to such program transformations. Based on these results, we consider our new method a useful detector for samples within a malware family sharing functionality and source code. Our approach is more conservative than permission-based classification, and might hence be more suitable for an automated weighting of Android apps, e.g., by the Google Bouncer.
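As a reminder of what such a feature looks like, McCabe's cyclomatic complexity can be approximated as one plus the number of decision points in a routine. The sketch below computes this over a Python AST purely to illustrate the feature extraction; the paper computes its metrics on Android app code, and the node selection here is a simplified assumption rather than the paper's exact counting rules.

```python
import ast

# Node types treated as decision points (simplified selection).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.IfExp,
                ast.ExceptHandler, ast.And, ast.Or)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe's metric as 1 + number of decision points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

code = """
def classify(x):
    if x > 0:
        return "pos"
    elif x < 0:
        return "neg"
    return "zero"
"""
cyclomatic_complexity(code)  # 3: two decision points plus one
```

A vector of such metrics per app (or per class/method) then serves as the input to the machine-learning classifier; obfuscation that restructures control flow perturbs exactly these values, which is consistent with the drop in detection rate the abstract reports for obfuscated samples.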
Conference Paper
The live migration of Virtual Machines (VMs) is a key technology in the server virtualization solutions used to deploy Infrastructure-as-a-Service (IaaS) clouds. On the one hand, this process increases the elasticity, fault tolerance, and maintainability of the virtual environment. On the other hand, it increases the security challenges in cloud environments, especially when the migration is performed between different data centers. Secure live migration mechanisms are required to keep the security requirements of both cloud customers and providers satisfied. These mechanisms are known to increase the migration downtime of the VMs, which plays a significant role in compliance with Service Level Agreements (SLAs). This paper discusses the main threats caused by live migration and the main approaches for securing the migration. The requirements of a comprehensive Quality of Service (QoS)-aware secure live migration solution that keeps both security and QoS requirements satisfied are defined.
Conference Paper
The Minimal-Hitting-Set attack (HS-attack) is a well-known, provably optimal exact attack against the anonymity provided by Chaumian Mixes (Threshold Mixes). This attack allows an attacker to identify the fixed set of communication partners of a given user by observing all messages sent and received by a Chaum Mix. In contrast to this, the Statistical Disclosure attack (SDA) provides a guess of that user's contacts, based on statistical analyses of the observed message exchanges. We contribute the first closed formula that shows the influence of traffic distributions on the least number of observations of the Mix needed to complete the HS-attack. This measures when the Mix fails to hide a user's partners, such that the user cannot plausibly deny the identified contacts. It reveals that the HS-attack requires asymptotically fewer observations to identify a user's partners than the SDA, which guesses them with a given bias. This number of observations is \(O(\frac{1}{p})\) for the HS-attack and \(O(\frac{1}{p^2})\) for the SDA, where \(p\) is the probability that the attacked user contacts his least frequent partner.
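The combinatorial core of the HS-attack can be illustrated with a brute-force hitting-set computation: each observed mix round in which the target sent a message yields the set of possible recipients, and the smallest set intersecting every observation is the candidate set of the user's partners. This is a naive exponential sketch for intuition only (the names and example data are invented); it is not the paper's optimized attack, and minimal hitting set is NP-hard in general.

```python
from itertools import combinations

def minimal_hitting_set(observations, universe):
    """Return a smallest subset of `universe` that intersects every
    observation set (brute force over candidate sizes)."""
    for size in range(1, len(universe) + 1):
        for cand in combinations(sorted(universe), size):
            if all(set(cand) & obs for obs in observations):
                return set(cand)
    return set(universe)

# Each set lists the possible recipients in one mix round where the
# target user sent a message (toy data).
obs = [{"bob", "carol"}, {"bob", "dave"}, {"bob", "eve"}]
minimal_hitting_set(obs, {"bob", "carol", "dave", "eve"})  # {"bob"}
```

Intuitively, rounds involving the least frequent partner (probability \(p\)) are the bottleneck: the HS-attack only needs each partner to appear, giving the \(O(\frac{1}{p})\) behavior, while the SDA must additionally let statistical averages converge.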
Conference Paper
Due to the proliferation of cloud computing, cloud-based systems are becoming an increasingly attractive target for malware. In an Infrastructure-as-a-Service (IaaS) cloud, malware located in a customer's virtual machine (VM) affects not only this customer, but may also attack the cloud infrastructure and other co-hosted customers directly. This paper presents CloudIDEA, an architecture that provides a security service for malware defense in cloud environments. It combines lightweight intrusion monitoring with on-demand isolation, evidence collection, and in-depth analysis of VMs on dedicated analysis hosts. A dynamic decision engine makes on-demand decisions on how to handle suspicious events, considering cost-efficiency and quality-of-service constraints.