IEEE Transactions on Dependable and Secure Computing

Published by Institute of Electrical and Electronics Engineers
Online ISSN: 1545-5971
Publications
Article
The classic problem of determining the diagnosability of a given network has been studied extensively. Under the PMC model, this paper addresses the problem of determining the diagnosability of a class of networks called (1,2)-Matching Composition Networks, each of which is constructed by connecting two graphs via one or two perfect matchings. By applying our results to multiprocessor systems, we can determine the diagnosability of hypercubes, twisted cubes, locally twisted cubes, generalized twisted cubes, recursive circulants G(2^n, 4) for odd n, folded hypercubes, augmented cubes, crossed cubes, Möbius cubes, and hyper-Petersen networks, all of which belong to the class of (1,2)-matching composition networks.
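As a concrete illustration of the construction itself (not of the diagnosability analysis), the sketch below joins two copies of a graph by a perfect matching and inspects the result with networkx; joining two (n-1)-dimensional hypercubes by the identity matching yields the n-dimensional hypercube, one member of the (1,2)-MCN class. The helper name matching_composition is ours, not the paper's notation.

```python
# Illustrative sketch (not from the paper): build a matching composition network
# by joining two copies of a graph with a perfect matching, then inspect its
# connectivity with networkx. Joining two (n-1)-cubes by the identity matching
# yields the n-cube, one member of the (1,2)-MCN class discussed above.
import networkx as nx

def matching_composition(G1, G2, matching):
    """Disjoint union of G1 and G2 plus the edges of a perfect matching
    between their vertex sets (matching: list of (u_in_G1, v_in_G2))."""
    H = nx.union(nx.relabel_nodes(G1, lambda v: ("A", v)),
                 nx.relabel_nodes(G2, lambda v: ("B", v)))
    H.add_edges_from((("A", u), ("B", v)) for u, v in matching)
    return H

n = 4
Q = nx.hypercube_graph(n - 1)                      # Q_{n-1}
identity_matching = [(v, v) for v in Q.nodes()]    # match equal labels
H = matching_composition(Q, Q.copy(), identity_matching)

print(H.number_of_nodes(), H.number_of_edges())    # 16 nodes, 32 edges: Q_4
print(nx.node_connectivity(H))                     # 4, the connectivity of Q_4
```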
 
Article
The JPEG 2000 image compression standard is designed for a broad range of data compression applications. The new standard is based on wavelet technology and layered coding in order to provide a feature-rich compressed image stream. Implementations of the JPEG 2000 codec are susceptible to computer-induced soft errors. One situation requiring fault tolerance is remote-sensing satellites, where high-energy particles and radiation produce single event upsets that corrupt the highly susceptible data compression operations. This paper develops fault-tolerant error-detecting capabilities for the major subsystems that constitute a JPEG 2000 codec. The nature of each subsystem dictates the realistic fault model: some parts have numerical error impacts, whereas others are properly modeled using bit-level variables. The critical operations of subunits such as the discrete wavelet transform (DWT) and quantization are protected against numerical errors. Concurrent error detection techniques are applied to accommodate the data type and numerical operations in each processing unit. On the other hand, the embedded block coding with optimal truncation (EBCOT) system and the bitstream formation unit are protected against soft-error effects using binary decision variables and cyclic redundancy check (CRC) parity values, respectively. The techniques achieve excellent error-detecting capability at only a slight increase in complexity. The design strategies have been tested using MATLAB programs, and simulation results are presented.
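The CRC protection of the bitstream formation unit lends itself to a small illustration. The sketch below is only a schematic of the idea, not the paper's implementation: it attaches a CRC-32 parity word to each codestream segment and re-checks it to flag a simulated single event upset.

```python
# Minimal sketch (not the paper's implementation): attach a CRC parity word to
# each bitstream segment at the encoder and re-check it at the decoder, so a
# soft-error-induced bit flip in the formed codestream is detected.
import zlib

def protect(segments):
    """Pair every codestream segment with its CRC-32 parity value."""
    return [(seg, zlib.crc32(seg)) for seg in segments]

def check(protected):
    """Return indices of segments whose recomputed CRC disagrees with parity."""
    return [i for i, (seg, crc) in enumerate(protected)
            if zlib.crc32(seg) != crc]

segments = [b"\x93\x1c\x00\x7f" * 8, b"\x42\x42\x10" * 10]
protected = protect(segments)

# Simulate a single-event upset: flip one bit in the second segment.
corrupted = bytearray(protected[1][0]); corrupted[5] ^= 0x08
protected[1] = (bytes(corrupted), protected[1][1])

print(check(protected))   # -> [1]
```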
 
Article
Specifying and managing access control policies is a challenging problem. We propose to develop formal verification techniques for access control policies to improve the current state of the art of policy specification and management. In this paper, we formalize classes of security analysis problems in the context of role-based access control. We show that, in general, these problems are PSPACE-complete. We also study the factors that contribute to the computational complexity by considering a lattice of subcases of the problem with different restrictions. We show that several subcases remain PSPACE-complete and that several further restricted subcases are NP-complete, and we identify two subcases that are solvable in polynomial time. We also discuss our experiences and findings from experiments that use existing formal method tools, such as model checking and logic programming, for addressing these problems.
 
Article
The generalized temporal role-based access control (GTRBAC) model provides a comprehensive set of temporal constraint expressions which can facilitate the specification of fine-grained time-based access control policies. However, the expressiveness and usability of this model have not been previously investigated. In this paper, we present an analysis of the expressiveness of the constructs provided by this model and illustrate that its constraint set is not minimal. We show that there is a subset of GTRBAC constraints that is sufficient to express all the access constraints that can be expressed using the full set. We also illustrate that a nonminimal GTRBAC constraint set can provide better flexibility and lower complexity of constraint representation. Based on our analysis, a set of design guidelines for the development of GTRBAC-based security administration is presented.
 
Article
In this paper, we propose a role-based access control (RBAC) method for grid database services in the open grid services architecture-data access and integration (OGSA-DAI) framework. OGSA-DAI is an efficient grid-enabled middleware implementation of interfaces and services to access and control data sources and sinks. However, in OGSA-DAI, access control causes substantial administration overhead for resource providers in virtual organizations (VOs) because each of them has to manage a role-map file containing authorization information for individual grid users. To solve this problem, we use the community authorization service (CAS) provided by the Globus Toolkit to support RBAC within the OGSA-DAI framework. The CAS grants users membership in VO roles. The resource providers then need to maintain only the mapping information from VO roles to local database roles in the role-map files, so the number of entries in the role-map file is reduced dramatically. Furthermore, the resource providers control the granting of access privileges to the local roles. Thus, our access control method provides increased manageability for a large number of users and reduces the day-to-day administration tasks of the resource providers, while they maintain the ultimate authority over their resources. Performance analysis shows that our method adds very little overhead to the existing security infrastructure of OGSA-DAI.
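The role-map reduction is easy to picture with a toy mapping. In the hedged sketch below, the provider keeps only a VO-role-to-database-role table; the role names and the local_role_for helper are illustrative and do not reflect OGSA-DAI or CAS syntax.

```python
# Hedged sketch of the role-map idea described above: instead of mapping every
# grid user to a local database role, the provider only maps VO roles (granted
# by CAS) to local roles; names below are illustrative, not OGSA-DAI syntax.
ROLE_MAP = {                       # VO role      -> local database role
    "vo:analyst":  "db_reader",
    "vo:curator":  "db_writer",
    "vo:admin":    "db_owner",
}

def local_role_for(cas_assertion):
    """Resolve the local DB role from a CAS-signed assertion of VO membership.
    Verification of the CAS signature is assumed to have happened already."""
    vo_role = cas_assertion["role"]
    try:
        return ROLE_MAP[vo_role]
    except KeyError:
        raise PermissionError(f"VO role {vo_role!r} not authorized here")

print(local_role_for({"subject": "alice", "role": "vo:analyst"}))  # db_reader
```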
 
Schematized System Architecture.  
Schematization of the (Normal) Behavior of the Compared Protocols (Filled Boxes Represent Eager Disk Accesses).  
Article
In this paper, we address reliability issues in three-tier systems with stateless application servers. For these systems, a framework called e-Transaction has recently been proposed, which specifies a set of desirable end-to-end reliability guarantees. In this article, we propose an innovative distributed protocol providing e-Transaction guarantees in the general case of multiple, autonomous back-end databases (typical of scenarios with multiple parties involved within the same business process). Differently from existing proposals coping with the e-Transaction framework, our protocol does not rely on any assumption about the accuracy of failure detection. Hence, it is suited to a wider class of distributed systems. To achieve this, our protocol exploits an innovative scheme for distributed transaction management (based on ad hoc demarcation and concurrency control mechanisms), which we introduce in this paper. Beyond providing the proof of protocol correctness, we also discuss hints on integrating the protocol with conventional systems (e.g., database systems) and show the minimal overhead imposed by the protocol.
 
Article
The increasing use of the Internet in a variety of distributed multiparty interactions and transactions with strong real-time requirements has pushed the search for solutions to the problem of attribute-based digital interactions. A promising solution today is represented by automated trust negotiation systems. Trust negotiation systems allow subjects in different security domains to securely exchange protected resources and services. These trust negotiation systems, however, by their nature, may represent a threat to privacy in that credentials, exchanged during negotiations, often contain sensitive personal information that may need to be selectively released. In this paper, we address the problem of preserving privacy in trust negotiations. We introduce the notion of a privacy-preserving disclosure, that is, a set that does not include attributes or credentials, or combinations of these, that may compromise privacy. To obtain privacy-preserving disclosure sets, we propose two techniques based on the notions of substitution and generalization. We argue that formulating the trust negotiation requirements in terms of disclosure policies is often restrictive. To solve this problem, we show how trust negotiation requirements can be expressed as property-based policies that list the properties needed to obtain a given resource. To better address this issue, we introduce the notion of a reference ontology and formalize the notion of a trust requirement. Additionally, we develop an approach to derive disclosure policies from trust requirements and formally state some semantic relationships (i.e., equivalence, stronger-than) that may hold between policies. These relationships can be used by a credential requestor to reason about which disclosure policies he/she should use in a trust negotiation.
 
Enhanced RAPID pseudocode (lines modified with respect to Fig. 3 are boxed; lines B18-B27 were added).
Simulation setup.
Message delivery ratio when all nodes are mobile (comparing RAPID for different parameter values).
Network load in terms of total number of transmissions when all nodes are mobile (comparing RAPID for different parameter values).
Average latency to deliver a message to (the last node among) 98 percent of the nodes when all nodes are mobile, for varying parameter values (with 100 broadcasting nodes).
Article
Reliable broadcast is a basic service for many collaborative applications as it provides reliable dissemination of the same information to many recipients. This paper studies three common approaches for achieving scalable reliable broadcast in ad hoc networks, namely probabilistic flooding, counter-based broadcast, and lazy gossip. The strengths and weaknesses of each scheme are analyzed, and a new protocol that combines these three techniques, called RAPID, is developed. Specifically, the analysis in this paper focuses on the trade-offs between reliability (percentage of nodes that receive each message), latency, and the message overhead of the protocol. Each of these methods excels in some of these parameters, but no single method wins in all of them. This motivates the need for a combined protocol that benefits from all of these methods and allows trading between them smoothly. Interestingly, since the RAPID protocol relies only on local computations and probability, it is highly resilient to mobility, failures, and even selfish behavior. By adding authentication, it can even be made tolerant to malicious behavior. Additionally, the paper includes a detailed performance evaluation by simulation. The simulations confirm that RAPID obtains higher reliability with low latency and good communication overhead compared with each of the individual methods.
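A hedged sketch of how the three techniques can be combined in a single per-node forwarding rule is given below; the forwarding probability, the counter threshold, and the gossip digest are illustrative placeholders rather than the actual RAPID parameters.

```python
# Illustrative sketch of how the three techniques named above can be combined
# in a per-node forwarding rule; thresholds and the gossip fallback are
# hypothetical parameters, not the exact RAPID constants.
import random

class RapidLikeNode:
    def __init__(self, forward_prob=0.6, counter_threshold=3):
        self.forward_prob = forward_prob            # probabilistic flooding
        self.counter_threshold = counter_threshold  # counter-based suppression
        self.copies_heard = {}                      # msg_id -> times overheard
        self.known = set()                          # msg_ids for lazy gossip digests

    def on_receive(self, msg_id):
        self.known.add(msg_id)
        self.copies_heard[msg_id] = self.copies_heard.get(msg_id, 0) + 1
        # Suppress rebroadcast if enough neighbors already covered the message.
        if self.copies_heard[msg_id] > self.counter_threshold:
            return "drop"
        # Otherwise rebroadcast with fixed probability.
        return "broadcast" if random.random() < self.forward_prob else "drop"

    def gossip_digest(self):
        # Lazy gossip: periodically advertise known ids so neighbors can
        # request anything the probabilistic phase failed to deliver.
        return set(self.known)

node = RapidLikeNode()
print([node.on_receive("m1") for _ in range(5)])
```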
 
Article
This paper studies key management, a fundamental problem in securing mobile ad hoc networks (MANETs). We present IKM, an ID-based key management scheme that is a novel combination of ID-based and threshold cryptography. IKM is a certificateless solution in that public keys of mobile nodes are directly derivable from their known IDs plus some common information. It thus eliminates the need for certificate-based authenticated public-key distribution, which is indispensable in conventional public-key management schemes. IKM features a novel construction method for ID-based public/private keys, which not only ensures high-level tolerance to node compromise, but also enables efficient network-wide key update via a single broadcast message. We also provide general guidelines on how to choose the secret-sharing parameters used with threshold cryptography to meet desirable levels of security and robustness. The advantages of IKM over conventional certificate-based solutions are justified through extensive simulations. Since most MANET security mechanisms thus far involve the heavy use of certificates, we believe that our findings open a new avenue towards more effective and efficient security design for MANETs.
 
JOIN Protocol. Node 1 joins the tier by conducting the JOIN protocol with node 2 that is already in the tier.
Article
To ensure fair and secure communication in Mobile Ad hoc Networks (MANETs), the applications running in these networks must be regulated by proper communication policies. However, enforcing policies in MANETs is challenging because they lack the infrastructure and trusted entities encountered in traditional distributed systems. This paper presents the design and implementation of a policy enforcing mechanism based on Satem, a kernel-level trusted execution monitor built on top of the Trusted Platform Module. Under this mechanism, each application or protocol has an associated policy. Two instances of an application running on different nodes may engage in communication only if these nodes enforce the same set of policies for both the application and the underlying protocols used by the application. In this way, nodes can form trusted application-centric networks. Before allowing a node to join such a network, Satem verifies its trustworthiness of enforcing the required set of policies. Furthermore, Satem protects the policies and the software enforcing these policies from being tampered with. If any of them is compromised, Satem disconnects the node from the network. We demonstrate the correctness of our solution through security analysis, and its low overhead through performance evaluation of two MANET applications.
 
Article
Network survivability is the ability of a network to stay connected under failures and attacks, a fundamental issue in the design and performance evaluation of wireless ad hoc networks. In this paper, we focus on the analysis of network survivability in the presence of node misbehaviors and failures. First, we propose a novel semi-Markov process model to characterize the evolution of node behaviors. As an immediate application of the proposed model, we investigate the problem of node isolation where the effects of denial-of-service (DoS) attacks are considered. Then, we present the derivation of network survivability and obtain the lower and upper bounds on the topological survivability for k-connected networks. We find that network survivability degrades very quickly with the increasing likelihood of node misbehaviors, depending on the requirements of disjoint outgoing paths or network connectivity. Moreover, DoS attacks have a significant impact on network survivability, especially in dense networks. Finally, we validate the proposed model and analytical results by simulations and numerical analysis, showing the effects of node misbehaviors on both topological survivability and network performance.
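To make the modeling idea concrete, the following toy simulation runs a semi-Markov chain of node behaviors and estimates the long-run fraction of time spent in each state; the state set, transition probabilities, and sojourn-time distributions are hypothetical placeholders, not the ones derived in the paper.

```python
# Toy semi-Markov simulation in the spirit of the node-behavior model above.
# The state set, transition probabilities, and sojourn-time distributions are
# hypothetical placeholders, not the ones derived in the paper.
import random

STATES = ["cooperative", "selfish", "malicious", "failed"]
P = {   # embedded transition probabilities (rows sum to 1)
    "cooperative": {"cooperative": 0.0, "selfish": 0.6, "malicious": 0.2, "failed": 0.2},
    "selfish":     {"cooperative": 0.7, "selfish": 0.0, "malicious": 0.2, "failed": 0.1},
    "malicious":   {"cooperative": 0.3, "selfish": 0.2, "malicious": 0.0, "failed": 0.5},
    "failed":      {"cooperative": 1.0, "selfish": 0.0, "malicious": 0.0, "failed": 0.0},
}
SOJOURN = {  # mean sojourn time per state (hours), exponential for simplicity
    "cooperative": 20.0, "selfish": 5.0, "malicious": 2.0, "failed": 8.0,
}

def simulate(horizon=1e5, seed=1):
    random.seed(seed)
    t, state = 0.0, "cooperative"
    time_in = dict.fromkeys(STATES, 0.0)
    while t < horizon:
        dwell = random.expovariate(1.0 / SOJOURN[state])
        time_in[state] += min(dwell, horizon - t)
        t += dwell
        state = random.choices(STATES, weights=[P[state][s] for s in STATES])[0]
    return {s: time_in[s] / horizon for s in STATES}

print(simulate())   # long-run fraction of time a node spends in each state
```

The long-run fraction of misbehaving or failed neighbors is the kind of quantity that feeds a node-isolation or k-connectivity argument.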
 
Article
A zone-based anonymous positioning routing protocol for ad hoc networks, enabling anonymity of both source and destination, is proposed and analyzed. According to the proposed algorithm, a source sends data to an anonymity zone, where the destination node and a number of other nodes are located. The data is then flooded within the anonymity zone so that a tracer is not able to determine the actual destination node. Source anonymity is also enabled because the positioning routing algorithms do not require the source ID or its position for the correct routing. We develop anonymity protocols for both routeless and route-based data delivery algorithms. To evaluate anonymity, we propose a "measure of anonymity," and we develop an analytical model to evaluate it. By using this model, we perform an extensive analysis of the anonymity protocols to determine the parameters that most impact the anonymity level.
 
Article
The uniqueness of security vulnerabilities in ad hoc networks has given rise to the need for designing novel intrusion detection algorithms, different from those present in conventional networks. In this work, we propose an autonomous host-based intrusion detection system for detecting malicious sinking behavior. The proposed detection system maximizes detection accuracy by using cross-layer features to define a routing behavior. For learning and adaptation to new attack scenarios and network environments, two machine learning techniques are utilized. Support Vector Machines (SVMs) and Fisher Discriminant Analysis (FDA) are used together to exploit the better accuracy of SVM and the faster speed of FDA. Instead of using all cross-layer features, features from the MAC layer are associated/correlated with features from other layers, thereby reducing the feature set without reducing the information content. Various experiments are conducted with varying network conditions and malicious node behavior. The effects of factors such as mobility, traffic density, and the packet drop ratios of the malicious nodes are analyzed. Experiments based on simulation show that the proposed cross-layer approach aided by a combination of SVM and FDA performs significantly better than other existing approaches.
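The SVM/FDA pairing can be sketched with scikit-learn, using LinearDiscriminantAnalysis as a stand-in for Fisher Discriminant Analysis; the "cross-layer" feature semantics and class distributions below are invented for illustration only.

```python
# Sketch of the SVM/FDA combination described above, using scikit-learn's
# LinearDiscriminantAnalysis as a stand-in for Fisher Discriminant Analysis and
# synthetic "cross-layer" features; feature semantics here are invented.
import numpy as np
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical features: [MAC retransmissions, route-error rate, forwarding ratio]
normal = rng.normal([2.0, 0.05, 0.95], [1.0, 0.02, 0.03], size=(n, 3))
sinker = rng.normal([4.0, 0.20, 0.40], [1.5, 0.05, 0.10], size=(n, 3))
X = np.vstack([normal, sinker])
y = np.r_[np.zeros(n), np.ones(n)]
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

fda = LinearDiscriminantAnalysis().fit(Xtr, ytr)       # fast, used for screening
svm = SVC(kernel="rbf", gamma="scale").fit(Xtr, ytr)   # slower, more accurate

print("FDA accuracy:", fda.score(Xte, yte))
print("SVM accuracy:", svm.score(Xte, yte))
```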
 
Article
A new deadlock-free routing scheme for meshes is proposed based on a new virtual network partitioning scheme, called channel overlapping, in which two virtual networks can share some common virtual channels. The deadlock-free adaptive routing method is then extended to deadlock-free adaptive fault-tolerant routing in 3D meshes, still with two virtual channels. A few faulty nodes can make a higher-dimensional mesh unsafe for fault-tolerant routing methods based on the block fault model, where the whole system (n-dimensional space) forms a fault block. Planar safety information in meshes is proposed to guide fault-tolerant routing and to classify fault-free nodes inside 2D planes. Many nodes globally marked as unsafe in the whole system become locally enabled inside 2D planes. This fault-tolerant deadlock-free adaptive routing algorithm is also extended to n-dimensional meshes with two virtual channels. Extensive simulation results are presented and compared to previous methods.
 
Article
Data sensing and retrieval in wireless sensor systems have widespread applications in areas such as security and surveillance monitoring, and command and control in battlefields. In query-based wireless sensor systems, a user issues a query and expects a response to be returned within a deadline. While the use of fault tolerance mechanisms through redundancy improves query reliability in the presence of unreliable wireless communication and sensor faults, it can cause the energy of the system to be quickly depleted. Therefore, there is an inherent trade-off between query reliability and energy consumption in query-based wireless sensor systems. In this paper, we develop adaptive fault-tolerant quality of service (QoS) control algorithms based on hop-by-hop data delivery utilizing “source” and “path” redundancy, with the goal of satisfying application QoS requirements while prolonging the lifetime of the sensor system. We develop a mathematical model for the lifetime of the sensor system as a function of system parameters, including the “source” and “path” redundancy levels utilized. We discover that there exist optimal “source” and “path” redundancy levels under which the lifetime of the system is maximized while satisfying application QoS requirements. Numerical data are presented and validated through extensive simulation, with physical interpretations given, to demonstrate the feasibility of our algorithm design.
 
Article
The capability of dynamically adapting to distinct runtime conditions is an important issue when designing distributed systems where negotiated quality of service (QoS) cannot always be delivered between processes. Providing fault tolerance for such dynamic environments is a challenging task. Considering such a context, this paper proposes an adaptive programming model for fault-tolerant distributed computing, which provides upper-layer applications with process state information according to the current system synchrony (or QoS). The underlying system model is hybrid, composed of a synchronous part (where there are time bounds on processing speed and message delay) and an asynchronous part (where there is no time bound). However, such a composition can vary over time, and, in particular, the system may become totally asynchronous (e.g., when the underlying system QoS degrades) or totally synchronous. Moreover, processes are not required to share the same view of the system synchrony at a given time. To illustrate what can be done in this programming model and how to use it, the consensus problem is taken as a benchmark problem. This paper also presents an implementation of the model that relies on a negotiated QoS for communication channels.
 
The states and the transitions corresponding to the propositional variables in the 3-SAT formula. (Except for transitions marked as fault, all are program transitions. Also, note that the program has no long transitions that originate from a_i and no short transitions that originate from c_i.)
The partial structure of the fault-tolerant program.  
Fault-intolerant program in the BT model derived using the above reduction.
Article
In this paper, we investigate the effect of the representation of safety specifications on the complexity of adding masking fault tolerance to programs, where, in the presence of faults, the program 1) recovers to states from where it satisfies its (safety and liveness) specification and 2) preserves its safety specification during recovery. Specifically, we concentrate on two approaches for modeling safety specifications: 1) the bad transition (BT) model, where safety is modeled as a set of bad transitions that should not be executed by the program, and 2) the bad pair (BP) model, where safety is modeled as a set of finite sequences consisting of at most two successive transitions. If the safety specification is specified in the BT model, then it is known that the complexity of automatic addition of masking fault tolerance to high atomicity programs (where processes can read/write all program variables in an atomic step) is polynomial in the state space of the program. However, for the case where one uses the BP model to specify the safety specification, we show that the problem of adding masking fault tolerance to high atomicity programs is NP-complete. Therefore, we argue that automated synthesis of fault-tolerant programs is likely to be more successful if one focuses on problems where safety can be represented in the BT model.
 
Article
In this paper, we survey the adoption of the platform for privacy preferences protocol (P3P) on Internet Web sites to determine whether P3P is a growing or stagnant technology. We conducted a pilot survey in February 2005 and our full survey in November 2005. We compare the results from these two surveys and the previous (July 2003) survey of P3P adoption. In general, we find that P3P adoption is stagnant, and errors in P3P documents are a regular occurrence. In addition, very little maintenance of P3P policies is apparent. These observations call into question P3P's viability as an online privacy-enhancing technology. Our survey goes beyond previous surveys in both the depth of its statistical analysis and its scope; our February pilot survey analyzed more than 23,000 unique Web sites, and our full survey in November 2005 analyzed more than 100,000 unique Web sites.
 
Article
Fast and accurate generation of worm signatures is essential to contain zero-day worms at the Internet scale. Recent work has shown that signature generation can be automated by analyzing the repetition of worm substrings (that is, fingerprints) and their address dispersion. However, at the early stage of a worm outbreak, individual edge networks are often short of enough worm exploits for generating accurate signatures. This paper presents both theoretical and experimental results on a collaborative worm signature generation system (WormShield) that employs distributed fingerprint filtering and aggregation over multiple edge networks. By analyzing real-life Internet traces, we discovered that fingerprints in background traffic exhibit a Zipf-like distribution. Due to this property, a distributed fingerprint filtering reduces the amount of aggregation traffic significantly. WormShield monitors utilize a new distributed aggregation tree (DAT) to compute global fingerprint statistics in a scalable and load-balanced fashion. We simulated a spectrum of scanning worms including CodeRed and Slammer by using realistic Internet configurations of about 100,000 edge networks. On average, 256 collaborative monitors generate the signature of CodeRed I-v2 135 times faster than using the same number of isolated monitors. In addition to speed gains, we observed less than 100 false signatures out of 18.7-Gbyte Internet traces, yielding a very low false-positive rate. Each monitor only generates about 0.6 kilobit per second of aggregation traffic, which is 0.003 percent of the 18 megabits per second link traffic sniffed. These results demonstrate that the WormShield system offers distinct advantages in speed gains, signature accuracy, and scalability for large-scale worm containment.
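The fingerprint-filtering step can be pictured with a toy in-memory version: count how often each fixed-length substring recurs and how many distinct source and destination addresses it spans, and promote only substrings that clear both thresholds (the Zipf-like distribution of benign fingerprints is what makes such local filtering effective before global aggregation). The window size and thresholds below are arbitrary, and real systems use Rabin fingerprints rather than raw substrings.

```python
# Hedged sketch of the content-fingerprinting idea: count how often each
# fixed-length substring (fingerprint) recurs and how many distinct source and
# destination addresses it spans; substrings that exceed both thresholds become
# signature candidates. Window size and thresholds below are arbitrary.
from collections import defaultdict

WINDOW = 8
REPEAT_THRESHOLD = 3
DISPERSION_THRESHOLD = 2

def update(stats, payload, src, dst):
    for i in range(len(payload) - WINDOW + 1):
        entry = stats[payload[i:i + WINDOW]]
        entry["count"] += 1
        entry["srcs"].add(src)
        entry["dsts"].add(dst)

def candidates(stats):
    return [fp for fp, e in stats.items()
            if e["count"] >= REPEAT_THRESHOLD
            and len(e["srcs"]) >= DISPERSION_THRESHOLD
            and len(e["dsts"]) >= DISPERSION_THRESHOLD]

stats = defaultdict(lambda: {"count": 0, "srcs": set(), "dsts": set()})
worm = b"GET /default.ida?XXXXEXPLOIT"
update(stats, worm, "10.0.0.1", "10.0.1.9")
update(stats, worm, "10.0.2.7", "10.0.3.3")
update(stats, b"GET /index.html", "10.0.0.1", "10.0.1.9")
update(stats, worm, "10.0.4.4", "10.0.5.5")
print(len(candidates(stats)))   # substrings shared by the repeated worm payload
```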
 
Article
Alert aggregation is an important subtask of intrusion detection. The goal is to identify and to cluster different alerts (produced by low-level intrusion detection systems, firewalls, etc.) belonging to a specific attack instance which has been initiated by an attacker at a certain point in time. Thus, meta-alerts can be generated for the clusters that contain all the relevant information whereas the amount of data (i.e., alerts) can be reduced substantially. Meta-alerts may then be the basis for reporting to security experts or for communication within a distributed intrusion detection system. We propose a novel technique for online alert aggregation which is based on a dynamic, probabilistic model of the current attack situation. Basically, it can be regarded as a data stream version of a maximum likelihood approach for the estimation of the model parameters. With three benchmark data sets, we demonstrate that it is possible to achieve reduction rates of up to 99.96 percent while the number of missing meta-alerts is extremely low. In addition, meta-alerts are generated with a delay of typically only a few seconds after observing the first alert belonging to a new attack instance.
 
Article
Many conference key agreement protocols have been suggested to secure computer network conferences. Most of them operate only when all conferees are honest, but do not work when some conferees are malicious and attempt to delay or disrupt the conference. Recently, Tzeng proposed a conference key agreement protocol with fault tolerance, in the sense that a common secret conference key among honest conferees can be established even if malicious conferees exist. In the case where a conferee can broadcast different messages in different subnetworks, Tzeng's protocol is vulnerable to a "different key attack" from malicious conferees. In addition, Tzeng's protocol requires each conferee to broadcast to the rest of the group and receive n - 1 messages in a single round (where n stands for the number of conferees). Moreover, it has to handle n simultaneous broadcasts in one round. In this paper, we propose a fault-tolerant conference key agreement protocol in which each conferee only needs to send one message to a "semitrusted" conference bridge and receive one broadcast message. Our protocol is an identity-based key agreement built on elliptic curve cryptography. It is resistant to the different key attack from malicious conferees and needs less communication cost than Tzeng's protocol.
 
Correlation process overview. 
Alert attributes. 
Article
Alert correlation is a process that analyzes the alerts produced by one or more intrusion detection systems and provides a more succinct and high-level view of occurring or attempted intrusions. Even though the correlation process is often presented as a single step, the analysis is actually carried out by a number of components, each of which has a specific goal. Unfortunately, most approaches to correlation concentrate on just a few components of the process, providing formalisms and techniques that address only specific correlation issues. This paper presents a general correlation model that includes a comprehensive set of components and a framework based on this model. A tool using the framework has been applied to a number of well-known intrusion detection data sets to identify how each component contributes to the overall goals of correlation. The results of these experiments show that the correlation components are effective in achieving alert reduction and abstraction. They also show that the effectiveness of a component depends heavily on the nature of the data set analyzed.
 
Article
In this paper, we consider two kinds of sequential checkpoint placement problems with infinite/finite time horizons. For these problems, we apply approximation methods based on the variational principle and develop computation algorithms to derive the optimal checkpoint sequence approximately. Next, we focus on the situation where the knowledge of system failure is incomplete, i.e., the system failure time distribution is unknown. We develop so-called min-max checkpoint placement methods to determine the optimal checkpoint sequence under uncertainty about the system failure time distribution. In numerical examples, we investigate the proposed distribution-free checkpoint placement methods quantitatively and discuss their potential applicability in practice.
 
Article
In this paper, we propose a security model to capture active attacks against multipath key establishment (MPKE) in sensor networks. Our model strengthens previous models to capture more attacks and achieve essential security goals for multipath key establishment. In this model, we can apply protocols for perfectly secure message transmission to solve the multipath key establishment problem. We propose a simple new protocol for optimal one-round perfectly secure message transmission based on Reed-Solomon codes. Then, we use this protocol to obtain two new multipath key establishment schemes that can be applied provided that fewer than one-third of the paths are controlled by the adversary. Finally, we describe another MPKE scheme that tolerates a higher fraction (less than half) of paths controlled by the adversary. This scheme is based on a new protocol for a weakened version of message transmission, which is very simple and efficient. Our multipath key establishment schemes achieve improved security and lower communication complexity, as compared to previous schemes.
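The role of coding in multipath key establishment can be illustrated with a toy Shamir-style sharing over node-disjoint paths and a brute-force error-tolerant reconstruction; the paper's protocols use proper Reed-Solomon decoding, so the small-parameter subset search below is only meant to show why fewer than one-third corrupted paths can be tolerated.

```python
# Illustrative sketch of the multipath idea above using Shamir-style sharing
# over node-disjoint paths with a brute-force error-tolerant reconstruction.
# The paper's actual protocols use Reed-Solomon decoding; this small-parameter
# subset search is only meant to show why t < n/3 corrupted paths can be fixed.
from itertools import combinations
import random

P = 2**31 - 1   # prime modulus

def eval_poly(coeffs, x):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def share(secret, n_paths, t):
    """Degree-t polynomial with constant term = secret; one share per path."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(x, eval_poly(coeffs, x)) for x in range(1, n_paths + 1)]

def lagrange_at(points, x0):
    """Evaluate the unique polynomial through `points` at x0 (mod P)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x0 - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def reconstruct(shares, t):
    """Recover the secret although up to t shares were tampered with."""
    n = len(shares)
    for subset in combinations(shares, t + 1):
        agree = sum(1 for (x, y) in shares if lagrange_at(subset, x) == y)
        if agree >= n - t:                      # enough shares are consistent
            return lagrange_at(subset, 0)
    raise ValueError("too many corrupted paths")

key = 0xC0FFEE
shares = share(key, n_paths=4, t=1)
shares[2] = (shares[2][0], (shares[2][1] + 12345) % P)   # adversary on path 3
print(hex(reconstruct(shares, t=1)))                     # -> 0xc0ffee
```

With t < n/3, any degree-t polynomial consistent with at least n - t shares must agree with the true one on more than t points, which forces the two to coincide; that is the counting argument this sketch exercises.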
 
Article
In this paper, a general model of multibit Differential Power Analysis (DPA) attacks to precharged buses is discussed, with emphasis on symmetric-key cryptographic algorithms. Analysis provides a deeper insight into the dependence of the DPA effectiveness (i.e., the vulnerability of cryptographic chips) on the parameters that define the attack, the algorithm, and the processor architecture in which the latter is implemented. To this aim, the main parameters that are of interest in practical DPA attacks are analytically derived under appropriate approximations, and a novel figure of merit to measure the DPA effectiveness of multibit attacks is proposed. This figure of merit allows for identifying conditions that maximize the effectiveness of DPA attacks, i.e., conditions under which a cryptographic chip should be tested to assess its robustness. Several interesting properties of DPA attacks are derived, and suggestions to design algorithms and circuits with higher robustness against DPA are given. The proposed model is validated in the case of DES and AES algorithms with both simulations on an MIPS32 architecture and measurements on an FPGA-based implementation of AES. The model accuracy is shown to be adequate, as the resulting error is always lower than 10 percent and typically of a few percentage points.
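A toy correlation-style experiment conveys the flavor of such an attack (the paper's analytical model of multibit DPA on precharged buses is considerably more detailed). Leakage is simulated as the Hamming weight of an 8-bit key-dependent intermediate plus Gaussian noise, and each key guess is ranked by how well its predicted leakage correlates with the measured traces; the leakage model and noise level are arbitrary choices, not the paper's bus model.

```python
# Toy correlation-style DPA experiment in the spirit of the multibit analysis
# above: leakage is modeled as the Hamming weight of an 8-bit key-dependent
# intermediate plus Gaussian noise, and each key guess is ranked by how well
# its predicted Hamming weights correlate with the measured traces.
import numpy as np

rng = np.random.default_rng(7)
SECRET_KEY = 0x5A
N_TRACES = 5000

hw = np.array([bin(v).count("1") for v in range(256)])      # Hamming weights
plaintexts = rng.integers(0, 256, N_TRACES)
traces = hw[plaintexts ^ SECRET_KEY] + rng.normal(0.0, 2.0, N_TRACES)

def guess_score(key_guess):
    """Correlation between predicted and measured leakage for one guess."""
    prediction = hw[plaintexts ^ key_guess]
    return np.corrcoef(prediction, traces)[0, 1]

scores = [guess_score(k) for k in range(256)]
print("recovered key: 0x%02X" % int(np.argmax(scores)))      # should print 0x5A
```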
 
Article
A discrete optimization model is proposed to allocate redundancy to critical IT functions for disaster recovery planning. The objective is to maximize the overall survivability of an organization's IT functions by selecting their appropriate redundancy levels. A solution procedure based on probabilistic dynamic programming is presented along with two examples.
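A minimal sketch of the allocation problem follows, assuming invented cost and survival-probability figures and a simple product-of-probabilities objective; the paper's probabilistic dynamic programming formulation, with explicit disaster scenarios, is richer than this.

```python
# Small illustrative dynamic program for the redundancy-allocation idea above:
# choose one redundancy level per IT function, subject to a budget, so that the
# product of the functions' survival probabilities is maximized. The cost and
# probability figures are invented placeholders.
FUNCTIONS = {
    # function     [(cost, survival probability) per redundancy level 0, 1, 2]
    "email":       [(0, 0.70), (2, 0.85), (5, 0.95)],
    "payments":    [(0, 0.60), (3, 0.90), (7, 0.99)],
    "directory":   [(0, 0.80), (1, 0.90), (4, 0.97)],
}
BUDGET = 8

def best_allocation(functions, budget):
    # dp[b] = (best overall survivability using budget b, chosen levels)
    dp = {0: (1.0, [])}
    for name in functions:
        new_dp = {}
        for spent, (surv, levels) in dp.items():
            for level, (cost, p) in enumerate(functions[name]):
                b = spent + cost
                if b > budget:
                    continue
                cand = (surv * p, levels + [(name, level)])
                if b not in new_dp or cand[0] > new_dp[b][0]:
                    new_dp[b] = cand
        dp = new_dp
    return max(dp.values(), key=lambda v: v[0])

surv, plan = best_allocation(FUNCTIONS, BUDGET)
print(round(surv, 3), plan)
```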
 
A snapshot of instantaneous traffic passing through: (a) router 1 and (b) router 2 (10,000 samples each, taken in June 2007 and February 2007, respectively). Average traffic is 30.42 Mbps in (a) and 366.87 Kbps in (b).  
Article
This paper proposes a novel method to detect anomalies in network traffic, based on a nonrestricted α-stable first-order model and statistical hypothesis testing. To this end, we give statistical evidence that the marginal distribution of real traffic is adequately modeled with α-stable functions and classify traffic patterns by means of a Generalized Likelihood Ratio Test (GLRT). The method automatically chooses the traffic windows used as a reference, against which the traffic window under test is compared, with no expert intervention needed to that end. We focus on detecting two anomaly types, namely floods and flash crowds, which have been frequently studied in the literature. The performance of our detection method has been measured through Receiver Operating Characteristic (ROC) curves, and the results indicate that our method outperforms a closely related state-of-the-art contribution. All experiments use traffic data collected from two routers at our university, a 25,000-student institution, which provide two different levels of traffic aggregation for our tests (traffic at a particular school and at the whole university). In addition, the traffic model is tested with publicly available traffic traces. Due to the complexity of α-stable distributions, care has been taken in designing appropriate numerical algorithms to deal with the model.
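For reference, the generic shape of the GLRT decision used for window classification can be written as below; the paper instantiates the densities f_θ with α-stable distributions whose parameters are estimated from the reference window (null hypothesis) and from the window under test.

```latex
% Generic GLRT sketch; the paper instantiates f_\theta with \alpha-stable
% densities whose parameters are estimated from the reference window
% (H_0: normal traffic) and from the window under test.
\Lambda(x_1,\dots,x_N) =
  \frac{\sup_{\theta \in \Theta_0} \prod_{i=1}^{N} f_\theta(x_i)}
       {\sup_{\theta \in \Theta}   \prod_{i=1}^{N} f_\theta(x_i)},
\qquad
\text{declare an anomaly if } -2\log \Lambda > \eta .
```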
 
Article
The goals of the present contribution are twofold. First, we propose the use of a non-Gaussian long-range dependent process to model Internet traffic aggregated time series. We give the definitions and intuition behind the use of this model. We detail numerical procedures that can be used to synthesize artificial traffic exactly following the model prescription. We also propose original and practically effective procedures to estimate the corresponding parameters from empirical data. We show that this empirical model relevantly describes a large variety of Internet traffic, including both regular traffic obtained from public reference repositories and traffic containing legitimate (flash crowd) or illegitimate (DDoS attack) anomalies. We observe that the proposed model accurately fits the data for a wide range of aggregation levels. The model provides us with meaningful multiresolution (i.e., aggregation-level-dependent) statistics to characterize the traffic: the evolution of the estimated parameters with respect to the aggregation level. This opens the way to the second goal of the paper: anomaly detection. We propose the use of a quadratic distance computed on these statistics to detect the occurrence of DDoS attacks and study the statistical performance of these detection procedures. Traffic with anomalies was produced and collected by us so as to create a controlled and reproducible database, allowing for a relevant assessment of the statistical performance of the proposed (modeling and detection) procedures.
 
Article
This paper reports the design principles and evaluation results of a new experimental hybrid intrusion detection system (HIDS). This hybrid system combines the advantages of the low false-positive rate of a signature-based intrusion detection system (IDS) and the ability of an anomaly detection system (ADS) to detect novel unknown attacks. By mining anomalous traffic episodes from Internet connections, we build an ADS that detects anomalies beyond the capabilities of signature-based SNORT or Bro systems. A weighted signature generation scheme is developed to integrate ADS with SNORT by extracting signatures from detected anomalies. HIDS extracts signatures from the output of ADS and adds them into the SNORT signature database for fast and accurate intrusion detection. By testing our HIDS scheme over real-life Internet trace data mixed with 10 days of the Massachusetts Institute of Technology/Lincoln Laboratory (MIT/LL) attack data set, our experimental results show a 60 percent detection rate for the HIDS, compared with 30 percent and 22 percent when using the SNORT and Bro systems, respectively. This sharp increase in detection rate is obtained with less than 3 percent false alarms. The signatures generated by ADS improve SNORT performance by 33 percent. The HIDS approach demonstrates the viability of detecting intrusions and anomalies simultaneously by automated data mining and signature generation over Internet connection episodes.
 
Article
Anonymizing networks such as Tor allow users to access Internet services privately by using a series of routers to hide the client's IP address from the server. The success of such networks, however, has been limited by users employing this anonymity for abusive purposes such as defacing popular Web sites. Web site administrators routinely rely on IP-address blocking for disabling access to misbehaving users, but blocking IP addresses is not practical if the abuser routes through an anonymizing network. As a result, administrators block all known exit nodes of anonymizing networks, denying anonymous access to misbehaving and behaving users alike. To address this problem, we present Nymble, a system in which servers can “blacklist” misbehaving users, thereby blocking users without compromising their anonymity. Our system is thus agnostic to different servers' definitions of misbehavior-servers can blacklist users for whatever reason, and the privacy of blacklisted users is maintained.
 
Article
Most of the existing privacy-preserving techniques, such as k-anonymity methods, are designed for static data sets. As such, they cannot be applied to streaming data which are continuous, transient, and usually unbounded. Moreover, in streaming applications, there is a need to offer strong guarantees on the maximum allowed delay between incoming data and the corresponding anonymized output. To cope with these requirements, in this paper, we present Continuously Anonymizing STreaming data via adaptive cLustEring (CASTLE), a cluster-based scheme that anonymizes data streams on-the-fly and, at the same time, ensures the freshness of the anonymized data by satisfying specified delay constraints. We further show how CASTLE can be easily extended to handle ℓ-diversity. Our extensive performance study shows that CASTLE is efficient and effective w.r.t. the quality of the output data.
 
Original Dataset
Suppressed Data with k = 2
Generalized Data with k = 2
Article
Suppose Alice owns a k-anonymous database and needs to determine whether her database, when inserted with a tuple owned by Bob, is still k-anonymous. Also, suppose that access to the database is strictly controlled, because, for example, data are used for certain experiments that need to be kept confidential. Clearly, allowing Alice to directly read the contents of the tuple breaks the privacy of Bob (e.g., a patient's medical record); on the other hand, the confidentiality of the database managed by Alice is violated once Bob has access to the contents of the database. Thus, the problem is to check whether the database inserted with the tuple is still k-anonymous, without letting Alice and Bob know the contents of the tuple and the database, respectively. In this paper, we propose two protocols solving this problem on suppression-based and generalization-based k-anonymous and confidential databases. The protocols rely on well-known cryptographic assumptions, and we provide theoretical analyses to prove their soundness and experimental results to illustrate their efficiency.
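A non-private reference version of the test being computed is easy to state: would inserting Bob's tuple leave every quasi-identifier group with at least k rows? The sketch below performs exactly this check in the clear (the paper's contribution is to run it cryptographically, without revealing the tuple or the table); attribute names and values are illustrative.

```python
# Non-private reference check for the functionality described above: would
# inserting Bob's tuple keep Alice's table k-anonymous with respect to its
# quasi-identifier attributes? The actual protocols in the paper perform this
# test cryptographically, without revealing the tuple or the table.
from collections import Counter

QUASI_IDENTIFIERS = ("zip", "age_range")   # illustrative QI attributes

def is_k_anonymous(rows, k):
    groups = Counter(tuple(r[a] for a in QUASI_IDENTIFIERS) for r in rows)
    return all(count >= k for count in groups.values())

table = [
    {"zip": "479**", "age_range": "2*", "disease": "flu"},
    {"zip": "479**", "age_range": "2*", "disease": "cold"},
    {"zip": "130**", "age_range": "3*", "disease": "asthma"},
    {"zip": "130**", "age_range": "3*", "disease": "flu"},
]
bob = {"zip": "479**", "age_range": "2*", "disease": "ulcer"}
mallory = {"zip": "946**", "age_range": "5*", "disease": "flu"}

print(is_k_anonymous(table, k=2))              # True
print(is_k_anonymous(table + [bob], k=2))      # True: Bob joins a group of >= 2
print(is_k_anonymous(table + [mallory], k=2))  # False: creates a group of size 1
```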
 
Illustration of Discretionary Access Control. A subject has an arbitrary number of permissions (authorizations) which relate operations (access modes) to objects. A permission relates one operation to one object but an operation or an object can be used in multiple permissions.  
Illustration of Role-Based Access Control. A subject is assigned a role (or multiple roles). Each role contains permissions that relate a particular operation to a particular object. Roles can contain other roles, which establishes role hierarchies. The cardinality constraints indicate that a user can have multiple roles, which in turn can have several super- or sub-roles. Every role can contain several permissions, each of which relates one operation to one object. One object or operation can be used in several permissions.
Article
As organizations increase their reliance on, possibly distributed, information systems for daily business, they become more vulnerable to security breaches even as they gain productivity and efficiency advantages. Though a number of techniques, such as encryption and electronic signatures, are currently available to protect data when transmitted across sites, a truly comprehensive approach for data protection must also include mechanisms for enforcing access control policies based on data contents, subject qualifications and characteristics, and other relevant contextual information, such as time. It is well understood today that the semantics of data must be taken into account in order to specify effective access control policies. Also, techniques for data integrity and availability specifically tailored to database systems must be adopted. In this respect, over the years, the database security community has developed a number of different techniques and approaches to assure data confidentiality, integrity, and availability. However, despite such advances, the database security area faces several new challenges. Factors such as the evolution of security concerns, the "disintermediation" of access to data, new computing paradigms and applications, such as grid-based computing and on-demand business, have introduced both new security requirements and new contexts in which to apply and possibly extend current approaches. In this paper, we first survey the most relevant concepts underlying the notion of database security and summarize the most well-known techniques. We focus on access control systems, on which a large body of research has been devoted, and describe the key access control models, namely, the discretionary and mandatory access control models, and the role-based access control (RBAC) model. We also discuss security for advanced data management systems, and cover topics such as access control for XML. We then discuss current challenges for database security and some preliminary approaches that address some of these challenges.
 
Article
We investigate the hardness of malicious attacks on multiple-tree topologies of push-based Peer-to-Peer streaming systems. In particular, we study the optimization problem of finding a minimum set of target nodes to achieve a certain damage objective. For this, we differentiate between three natural and increasingly complex damage types: global packet loss, service loss when using Multiple Description Coding, and service loss when using Forward Error Correction. We show that each of these attack problems is NP-hard, even for an idealized attacker with global knowledge about the topology. Although tree-based topologies seem susceptible to such attacks, we can even prove that (under strong assumptions about NP) there is no polynomial-time attacker capable of guaranteeing a general solution quality within factors of c1 log(n) and c2 2^(log^(1-δ) n) (with n topology nodes, δ = 1/log log^d n for d < 1/2, and constants c1, c2), respectively. To our knowledge, these are the first lower bounds on the quality of polynomial-time attacks on P2P streaming topologies. The results naturally apply to major real-world DoS attackers and show hard limits for their possibilities. In addition, they demonstrate the superior stability of Forward Error Correction systems compared to Multiple Description Coding and give a theoretical foundation to properties of stable topologies.
 
Article
Software security has become profoundly important, since most attacks on software systems are based on vulnerabilities caused by poorly designed and developed software. Furthermore, enforcing security in software systems at the design phase can reduce the high cost and effort associated with introducing security during implementation. For this purpose, security patterns that offer security at the architectural level have been proposed in analogy to the well-known design patterns. The main goal of this paper is to perform risk analysis of software systems based on the security patterns that they contain. The first step is to determine to what extent specific security patterns shield from known attacks. This information is fed to a mathematical model based on fuzzy-set theory and fuzzy fault trees in order to compute the risk for each category of attacks. The whole process has been automated using a methodology that extracts the risk of a software system by reading the class diagram of the system under study.
 
Conference Paper
Code injection attacks, despite being well researched, continue to be a problem today. Modern architectural solutions such as the NX bit and PaX have been useful in limiting the attacks; however, they enforce program layout restrictions and can often still be circumvented by a determined attacker. We propose a change to the memory architecture of modern processors that addresses the code injection problem at its very root by virtually splitting memory into code memory and data memory such that a processor will never be able to fetch injected code for execution. This virtual split-memory system can be implemented as a software-only patch to an operating system and can be used to supplement existing schemes for improved protection. Our experimental results show the system is effective in preventing a wide range of code injection attacks while incurring acceptable overhead.
 
The active NIPS splitter architecture.
RPC protocol rule, formulated as a string matching problem.
Performance improvement (reduction in user time) as a function of locality buffer size.
Sensor processing cost (time to process all packets in a trace), with user and system time breakdown.
Article
State-of-the-art high-speed network intrusion detection and prevention systems are often designed using multiple intrusion detection sensors operating in parallel coupled with a suitable front-end load-balancing traffic splitter. In this paper, we argue that, rather than just passively providing generic load distribution, traffic splitters should implement more active operations on the traffic stream, with the goal of reducing the load on the sensors. We present an active splitter architecture and three methods for improving performance. The first is early filtering/forwarding, where a fraction of the packets is processed on the splitter instead of the sensors. The second is the use of locality buffering, where the splitter reorders packets in a way that improves memory access locality on the sensors. The third is the use of cumulative acknowledgments, a method that optimizes the coordination between the traffic splitter and the sensors. Our experiments suggest that early filtering reduces the number of packets to be processed by 32 percent, giving an 8 percent increase in sensor performance, locality buffers improve sensor performance by 10-18 percent, while cumulative acknowledgments improve performance by 50-90 percent. We have also developed a prototype active splitter on an IXP1200 network processor and show that the cost of the proposed approach is reasonable.
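The locality-buffering idea can be sketched as a small splitter-side data structure that accumulates packets per flow and releases them to a sensor in per-flow batches; the buffer limit, flush policy, and packet representation below are arbitrary choices, not the IXP1200 implementation.

```python
# Sketch of the locality-buffering idea described above: the splitter holds a
# small buffer per flow (keyed by the 5-tuple) and releases packets to a sensor
# in per-flow batches, improving memory-access locality at the sensor.
from collections import defaultdict

BUFFER_LIMIT = 4

class LocalityBuffer:
    def __init__(self, dispatch):
        self.buffers = defaultdict(list)
        self.dispatch = dispatch           # callable delivering a batch to a sensor

    def on_packet(self, pkt):
        key = (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])
        buf = self.buffers[key]
        buf.append(pkt)
        if len(buf) >= BUFFER_LIMIT:       # flush one flow as a contiguous batch
            self.dispatch(self.buffers.pop(key))

    def flush_all(self):                   # e.g., on a timer, to bound latency
        for key in list(self.buffers):
            self.dispatch(self.buffers.pop(key))

batches = []
lb = LocalityBuffer(batches.append)
for i in range(9):
    lb.on_packet({"src": "10.0.0.1", "dst": "10.0.0.2",
                  "sport": 1000 + (i % 2), "dport": 80, "proto": "tcp"})
lb.flush_all()
print([len(b) for b in batches])   # per-flow batches instead of interleaved packets
```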
 
Article
With the growing size and complexity of software applications, research in the area of architecture-based software reliability analysis has gained prominence. The purpose of this paper is to provide an overview of the existing research in this area, critically examine its limitations, and suggest ways to address the identified limitations.
 
Article
Inference has been a longstanding issue in database security, and inference control, aiming to curb inference, provides an extra line of defense to the confidentiality of databases by complementing access control. However, in traditional inference control architecture, database server is a crucial bottleneck, as it enforces highly computation-intensive auditing for all users who query the protected database. As a result, most auditing methods, though rigorously studied, are not practical for protecting large-scale real-world database systems. In this paper, we shift this paradigm by proposing a new inference control architecture, entrusting inference control to each user's platform that is equipped with trusted computing technology. The trusted computing technology is designed to attest the state of a user's platform to the database server, so as to assure the server that inference control could be enforced as prescribed. A generic protocol is proposed to formalize the interactions between the user's platform and database server. The authentication property of the protocol is formally proven. Since inference control is enforced in a distributed manner, our solution avoids the bottleneck in the traditional architecture, thus can potentially support a large number of users making queries.
 
Article
Integrated architectures in the automotive and avionic domain promise improved resource utilization and enable a better coordination of application subsystems compared to federated systems. An integrated architecture shares the system's communication resources by using a single physical network for exchanging messages of multiple application subsystems. Similarly, the computational resources (for example, memory and CPU time) of each node computer are available to multiple software components. In order to support a seamless system integration without unintended side effects in such an integrated architecture, it is important to ensure that the software components do not interfere through the use of these shared resources. For this reason, the DECOS integrated architecture encapsulates application subsystems and their constituting software components. At the level of the communication system, virtual networks on top of an underlying time-triggered physical network exhibit predefined temporal properties (that is, bandwidth, latency, and latency jitter). Due to encapsulation, the temporal properties of messages sent by a software component are independent from the behavior of other software components, in particular from those within other application subsystems. This paper presents the mechanisms for the temporal partitioning of communication resources in the dependable embedded components and systems (DECOS) integrated architecture. Furthermore, experimental evidence is provided in order to demonstrate that the messages sent by one software component do not affect the temporal properties of messages exchanged by other software components. Rigid temporal partitioning is achievable while at the same time meeting the performance requirements imposed by present-day automotive applications and those envisioned for the future (for example, X-by-wire). For this purpose, we use an experimental framework with an implementation of virtual networks on top of a time division multiple access (TDMA)-controlled Ethernet network.
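The temporal-partitioning principle can be pictured with a static TDMA slot table: each virtual network owns fixed slots in the communication round, so the transmission behavior of one software component cannot delay another's messages. The slot layout and virtual network names below are hypothetical, not the DECOS configuration.

```python
# Illustrative TDMA round for the temporal-partitioning idea above: each
# virtual network owns fixed slots in the round, so traffic from one software
# component cannot delay messages of another. Slot layout is hypothetical.
ROUND_LENGTH_US = 1000
SLOT_TABLE = [                 # (start_us, length_us, virtual network)
    (0,   250, "VN-powertrain"),
    (250, 250, "VN-body"),
    (500, 250, "VN-infotainment"),
    (750, 250, "VN-diagnostics"),
]

def owner(t_us):
    """Return which virtual network may transmit at time t (within a round)."""
    t = t_us % ROUND_LENGTH_US
    for start, length, vn in SLOT_TABLE:
        if start <= t < start + length:
            return vn
    raise ValueError("uncovered instant in the TDMA round")

def can_send(vn, t_us):
    return owner(t_us) == vn

print(owner(1300))                       # VN-body
print(can_send("VN-infotainment", 600))  # True
print(can_send("VN-infotainment", 100))  # False: outside its slots
```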
 
Article
Group communication systems are high-availability distributed systems providing reliable and ordered message delivery, as well as a membership service, to group-oriented applications. Many such systems are built using a distributed client-server architecture where a relatively small set of servers provide service to numerous clients. In this work, we show how group communication systems can be enhanced with security services without sacrificing robustness and performance. More specifically, we propose several integrated security architectures for distributed client-server group communication systems. In an integrated architecture, security services are implemented in servers, in contrast to a layered architecture, where the same services are implemented in clients. We discuss performance and accompanying trust issues of each proposed architecture and present experimental results that demonstrate the superior scalability of an integrated architecture.
 
Article
The advent of diminutive technology feature sizes has led to escalating transistor densities. Burgeoning transistor counts are casting a dark shadow on modern chip design: global interconnect delays are dominating gate delays and affecting overall system performance. Networks-on-Chip (NoC) are viewed as a viable solution to this problem because of their scalability and optimized electrical properties. However, on-chip routers are susceptible to another artifact of deep submicron technology, Process Variation (PV). PV is a consequence of manufacturing imperfections, which may lead to degraded performance and even erroneous behavior. In this work, we present the first comprehensive evaluation of NoC susceptibility to PV effects, and we propose an array of architectural improvements in the form of a new router design-called SturdiSwitch-to increase resiliency to these effects. Through extensive reengineering of critical components, SturdiSwitch provides increased immunity to PV while improving performance and increasing area and power efficiency.
 
Article
The web is a complicated graph, with millions of websites interlinked together. In this paper, we propose to use this web sitegraph structure to mitigate flooding attacks on a website, using a new web referral architecture for privileged service (“WRAPS”). WRAPS allows a legitimate client to obtain a privilege URL through a simple click on a referral hyperlink, from a website trusted by the target website. Using that URL, the client can get privileged access to the target website in a manner that is far less vulnerable to a distributed denial-of-service (DDoS) flooding attack than normal access would be. WRAPS does not require changes to web client software and is extremely lightweight for referrer websites, which makes its deployment easy. The massive scale of the web sitegraph could deter attempts to isolate a website through blocking all referrers. We present the design of WRAPS, and the implementation of a prototype system used to evaluate our proposal. Our empirical study demonstrates that WRAPS enables legitimate clients to connect to a website smoothly in spite of a very intensive flooding attack, at the cost of small overheads on the website's ISP's edge routers. We discuss the security properties of WRAPS and a simple approach to encourage many small websites to help protect an important site during DoS attacks.
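One way to picture the privilege URL is as a short-lived authenticated token embedded in the referral hyperlink, which edge routers or the target site can verify cheaply before admitting the request to the privileged channel. The hedged sketch below uses an HMAC token; the token format, fields, and shared-secret provisioning are illustrative assumptions, not the WRAPS wire format.

```python
# Hedged sketch of the privilege-URL idea above: the referrer (or the target
# site) embeds a short-lived HMAC token in the referral hyperlink, and edge
# routers / the site can cheaply verify it before granting privileged service.
import hashlib, hmac, time

SECRET = b"shared-with-edge-routers"      # assumption: provisioned out of band
TOKEN_LIFETIME = 300                      # seconds

def make_privilege_url(base_url, client_ip, referrer):
    expiry = int(time.time()) + TOKEN_LIFETIME
    msg = f"{client_ip}|{referrer}|{expiry}".encode()
    tag = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:32]
    return f"{base_url}?exp={expiry}&ref={referrer}&tok={tag}"

def verify(client_ip, referrer, expiry, tag):
    if int(expiry) < time.time():
        return False
    msg = f"{client_ip}|{referrer}|{expiry}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:32]
    return hmac.compare_digest(expected, tag)

url = make_privilege_url("https://target.example/priv", "198.51.100.7", "portal.example")
print(url)
exp, tok = url.split("exp=")[1].split("&")[0], url.split("tok=")[1]
print(verify("198.51.100.7", "portal.example", exp, tok))   # True
```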
 
Sequence diagram for the recovery of an aborted transaction.
A three-tier banking application running on top of our fault tolerance infrastructure.
Article
In this paper, we describe a software infrastructure that unifies transactions and replication in three-tier architectures and provides data consistency and high availability for enterprise applications. The infrastructure uses transactions based on the CORBA object transaction service to protect the application data in databases on stable storage, using a roll-backward recovery strategy, and replication based on the fault tolerant CORBA standard to protect the middle-tier servers, using a roll-forward recovery strategy. The infrastructure replicates the middle-tier servers to protect the application business logic processing. In addition, it replicates the transaction coordinator, which renders the two-phase commit protocol nonblocking and, thus, avoids potentially long service disruptions caused by failure of the coordinator. The infrastructure handles the interactions between the replicated middle-tier servers and the database servers through replicated gateways that prevent duplicate requests from reaching the database servers. It implements automatic client-side failover mechanisms, which guarantee that clients know the outcome of the requests that they have made, and retries aborted transactions automatically on behalf of the clients.
 
PROCESSOR MODELS WITH VARIOUS COMPLEXITIES 
Node structure. Ci is the core, Ri the router. The figure also shows the test bits BN, BE, BS, and BW, which are updated by the diagnosis process and control the inhibition of communications (see text).  
Node structure. Ci is the core, Ri the router. Ti is the tester of the router Ri. TDG and TED are the test data generator and the test data detector for interconnects (see text).  
SBST STATISTICS FOR VARIOUS PROCESSOR MODELS 
Article
We study chip self-organization and fault tolerance at the architectural level to improve dependable continuous operation of multicore arrays in massively defective nanotechnologies. Architectural self-organization results from the conjunction of self-diagnosis and self-disconnection mechanisms (to identify and isolate most permanently faulty or inaccessible cores and routers), plus self-discovery of routes to maintain the communication in the array. In the methodology presented in this work, chip self-diagnosis is performed in three steps, following an ascending order of complexity: interconnects are tested first, then routers through mutual test, and cores in the last step. The mutual testing of routers is especially important as faulty routers are disconnected by good ones with no assumption on the behavior of defective elements. Moreover, the disconnection of faulty routers is not physical (“hard”) but logical (“soft”) in that a good router simply stops communicating with any adjacent router diagnosed as defective. There is no physical reconfiguration in the chip and no need for spare elements. Ultimately, the multicore array may be viewed as a black box, which incorporates protection mechanisms and self-organizes, while the external control reduces to a simple chip validation test which, in the simplest cases, reduces to counting the number of valid and accessible cores.
 
Article
To achieve high reliability despite hard faults that occur during operation and to achieve high yield despite defects introduced at fabrication, a microprocessor must be able to tolerate hard faults. In this paper, we present a framework for autonomic self-repair of the array structures in microprocessors (e.g., reorder buffer, instruction window, etc.). The framework consists of three aspects: 1) detecting/diagnosing the fault, 2) recovering from the resultant error, and 3) mapping out the faulty portion of the array. For each aspect, we present design options. Based on this framework, we develop two particular schemes for self-repairing array structures (SRAS). Simulation results show that one of our SRAS schemes adds some performance overhead in the fault-free case, but that both of them mask hard faults 1) with less hardware overhead cost than higher-level redundancy (e.g., IBM mainframes) and 2) without the per-error performance penalty of existing low-cost techniques that combine error detection with pipeline flushes for backward error recovery (BER). When hard faults are present in arrays, due to operational faults or fabrication defects, SRAS schemes outperform BER due to not having to frequently flush the pipeline.
 
Top-cited authors
Brian Randell
  • Newcastle University
Carl Landwehr
  • Le Moyne College
Algirdas Avizienis
  • University of California, Los Angeles
Elisa Bertino
  • Purdue University
Haining Wang
  • University of Delaware