ACM Transactions on Autonomous and Adaptive Systems

Published by Association for Computing Machinery
Print ISSN: 1556-4665
Theoretical modeling of computer virus/worm epidemic dynamics is an important problem that has attracted many studies. However, most existing models are adapted from biological epidemic ones. Although biological epidemic models can certainly be adapted to capture some computer virus spreading scenarios (especially when the so-called homogeneity assumption holds), computer virus spreading is not yet well understood because it has many important perspectives that are not necessarily accommodated in biological epidemic models. In this paper we initiate the study of one such perspective, namely that of adaptive defense against epidemic spreading in arbitrary networks. More specifically, we investigate a non-homogeneous Susceptible-Infectious-Susceptible (SIS) model where the model parameters may vary with respect to time. In particular, we focus on two scenarios we call semi-adaptive defense and fully-adaptive defense, which accommodate implicit and explicit dependency relationships between the model parameters, respectively. In the semi-adaptive defense scenario, the model's input parameters are given; the defense is semi-adaptive because the adjustment is implicitly dependent upon the outcome of virus spreading. For this scenario, we present a set of sufficient conditions (some more general or succinct than others) under which the virus spreading will die out; such sufficient conditions are also known as epidemic thresholds in the literature. In the fully-adaptive defense scenario, some input parameters are not known (i.e., the aforementioned sufficient conditions are not applicable) but the defender can observe the outcome of virus spreading. For this scenario, we present adaptive control strategies under which the virus spreading will die out or will be contained to a desired level.
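The non-homogeneous SIS dynamics described above can be sketched with a small mean-field simulation. This is an illustrative model, not the paper's exact formulation: `beta[v]` and `delta[v]` are per-node infection and cure rates that the defender may change between rounds, and the die-out behavior under a strong defense mirrors the epidemic-threshold idea.

```python
# Mean-field discrete-time SIS on an arbitrary graph (a sketch, not the
# paper's exact model).  p[v] is the probability that node v is infectious;
# beta[v] and delta[v] are per-node infection and cure rates, which may be
# changed between rounds to mimic time-varying (adaptive) parameters.

def sis_step(adj, p, beta, delta):
    """One synchronous mean-field update of infection probabilities."""
    new_p = []
    for v in range(len(p)):
        # Probability v escapes infection from every infectious neighbor.
        escape = 1.0
        for u in adj[v]:
            escape *= 1.0 - beta[v] * p[u]
        infected = (1.0 - p[v]) * (1.0 - escape)   # newly infected
        survives = p[v] * (1.0 - delta[v])         # infected, not yet cured
        new_p.append(infected + survives)
    return new_p

def simulate(adj, p, beta, delta, rounds):
    for _ in range(rounds):
        p = sis_step(adj, p, beta, delta)
    return p

# A 4-cycle with strong defense (high cure rate): the epidemic dies out.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
p = simulate(cycle, [0.5] * 4, beta=[0.05] * 4, delta=[0.8] * 4, rounds=50)
print(max(p))  # effectively zero
```

Raising `beta` or lowering `delta` past the threshold makes the same loop converge to a nonzero endemic level instead.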
 
This paper addresses the consensus problem in asynchronous systems prone to process crashes, where additionally the processes are anonymous (they cannot be distinguished from one another: they have no names and execute the same code). To circumvent these three computational adversaries (asynchrony, failures, and anonymity), each process is provided with a failure detector of a class denoted ψ, which gives it an upper bound on the number of processes that are currently alive (in a non-anonymous system, the class ψ and the class P of perfect failure detectors are equivalent).
 
We consider distributed systems made of weak mobile robots, that is, mobile devices, equipped with sensors, that are anonymous, autonomous, disoriented, and oblivious. The Circle Formation Problem (CFP) consists of the design of a protocol ensuring that, starting from an arbitrary initial configuration where no two robots occupy the same position, all the robots eventually form a regular n-gon: the robots position themselves on the circumference of a circle C with equal spacing between any two adjacent robots on C. CFP is known to be unsolvable by arranging the robots evenly along the circumference of a circle C without leaving C, that is, starting from a configuration where the robots are on the boundary of C. We circumvent this impossibility result by designing a scheme based on concentric circles. This is the first scheme that deterministically solves CFP. We present our method with two different implementations working in the semi-synchronous model (SSM) for any number n ≥ 5 of robots.
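The target configuration of CFP, a regular n-gon on a circle C, is easy to state concretely. The sketch below (illustrative only; the protocol itself is the hard part the abstract describes) computes the target positions and verifies the equal-spacing property.

```python
import math

def regular_ngon(n, radius=1.0, phase=0.0):
    """Target positions of a regular n-gon on a circle of the given radius."""
    return [(radius * math.cos(phase + 2 * math.pi * k / n),
             radius * math.sin(phase + 2 * math.pi * k / n))
            for k in range(n)]

def chord(a, b):
    """Euclidean distance between two positions in the plane."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

pts = regular_ngon(5)
# Every pair of adjacent robots is separated by the same chord length.
gaps = [chord(pts[k], pts[(k + 1) % 5]) for k in range(5)]
print(round(max(gaps) - min(gaps), 12))  # 0.0
```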
 
We generalize the classic dining philosophers problem to separate the conflict and communication neighbors of each process. Communication neighbors may directly exchange information, while conflict neighbors compete for access to an exclusive critical section of code. This generalization is motivated by a number of practical problems in distributed systems, including problems in wireless sensor networks. We present a self-stabilizing deterministic algorithm, KDP, that solves a restricted version of the generalized problem where the conflict set of each process is limited to its k-hop neighborhood. Our algorithm is terminating. We formally prove KDP correct and evaluate its performance. We then extend KDP to handle the fully generalized problem. We further extend it to handle a similarly generalized drinking philosophers problem. We describe how KDP can be implemented in wireless sensor networks and demonstrate that this implementation does not jeopardize its correctness or termination properties.
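The k-hop conflict set at the heart of the restricted problem can be computed with a bounded breadth-first search. A minimal sketch (the graph representation is an assumption; KDP itself is distributed and self-stabilizing, which this centralized helper is not):

```python
from collections import deque

def k_hop_neighbors(adj, v, k):
    """All processes within k hops of v (excluding v itself), via BFS.
    In the generalized dining philosophers problem, this set would be
    v's conflict set."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] == k:        # frontier reached: do not expand further
            continue
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return {u for u in dist if u != v}

# Path graph 0-1-2-3-4: the 2-hop conflict set of node 2 is everyone else.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(sorted(k_hop_neighbors(path, 2, 2)))  # [0, 1, 3, 4]
```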
 
Social Media involve many shared items, such as photos, that may concern more than one user. The first challenge we address in this paper is to develop a way for the users of such items to decide with whom to share them. This is not an easy problem, as users' privacy preferences for the same item may conflict, so an approach that simply merges the users' privacy preferences in some way may produce unsatisfactory results. We propose a negotiation mechanism for users to agree on a compromise for the conflicts found. The second challenge we address in this paper relates to the exponential complexity of such a negotiation mechanism, which could make it too slow to be used in practice in a Social Media infrastructure. To address this, we propose heuristics that reduce the complexity of the negotiation mechanism and show how substantial benefits can be derived from the use of these heuristics through extensive experimental evaluation that compares the performance of the negotiation mechanism with and without these heuristics. Moreover, we show that one such heuristic makes the negotiation mechanism produce results fast enough to be used in actual Social Media infrastructures with near-optimal results.
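The exponential complexity mentioned above comes from searching over complete action vectors. A toy model (the utility functions and grant/deny action space are illustrative assumptions, not the paper's mechanism) makes the blow-up concrete: brute force explores 2^n vectors for n target agents.

```python
import itertools

# Toy negotiation search: each target agent can be granted (1) or denied (0)
# access, each negotiating user assigns a utility to every complete action
# vector, and the mechanism picks the vector maximizing the utility product.

def best_action_vector(n_targets, utility_fns):
    best, best_score = None, -1.0
    count = 0
    for vector in itertools.product((0, 1), repeat=n_targets):  # 2**n vectors
        count += 1
        score = 1.0
        for u in utility_fns:
            score *= u(vector)
        if score > best_score:
            best, best_score = vector, score
    return best, best_score, count

# User A prefers sharing widely; user B insists on hiding the first target.
u_a = lambda v: 0.5 + 0.5 * (sum(v) / len(v))
u_b = lambda v: 1.0 if v[0] == 0 else 0.4
vec, score, explored = best_action_vector(3, [u_a, u_b])
print(vec, explored)  # explored == 2**3 == 8
```

The heuristics evaluated in the paper exist precisely to avoid enumerating this exponentially growing space.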
 
 
Autonomic communication and computing is a new paradigm for dynamic service integration over a network. An autonomic network crosses organizational and management boundaries and is provided by entities that see each other simply as partners. For many services, no autonomic partner can guess a priori what clients will send, nor do clients know a priori what credentials are required to access a service. To address this problem we propose a new interactive access control: servers interact with clients, asking for the missing credentials necessary to grant access, while clients may supply or decline the requested credentials. Servers evaluate their policies and interact with clients until a decision to grant or deny is taken. This proposal is grounded in a formal model of policy-based access control. It identifies the formal reasoning services of deduction, abduction, and consistency checking. Based on them, the work proposes a comprehensive access control framework for autonomic systems. An implementation of the interactive model is given, followed by a system performance evaluation.
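The interactive grant/ask loop can be sketched in a few lines. The policy format, service name, and credential names below are illustrative assumptions (the paper works with logic-based policies and genuine abduction); here a policy simply maps a service to alternative sufficient credential sets, and the server asks for the smallest missing set.

```python
# Illustrative policy: each set of credentials listed for a service is
# sufficient on its own to grant access.
POLICY = {"print": [{"employee", "badge"}, {"guest", "sponsor"}]}

def evaluate(service, presented):
    """Return ('grant', None), or ('ask', missing-credentials) so the
    client can supply or decline them in the next interaction round."""
    missing_options = []
    for sufficient in POLICY[service]:
        missing = sufficient - presented
        if not missing:
            return "grant", None
        missing_options.append(missing)
    # Abduction-style step: ask for the smallest set that would suffice.
    return "ask", min(missing_options, key=len)

decision, missing = evaluate("print", {"guest"})
print(decision, sorted(missing))  # ask ['sponsor']
decision, _ = evaluate("print", {"guest", "sponsor"})
print(decision)  # grant
```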
 
Autonomic communications seek to improve the ability of networks and services to cope with unpredicted change, including changes in topology, load, task, the physical and logical characteristics of the networks that can be accessed, and so forth. Broad-ranging autonomic solutions require designers to account for a range of end-to-end issues affecting programming models, network and contextual modeling and reasoning, decentralised algorithms, and trust acquisition and maintenance: issues whose solutions may draw on approaches and results from a surprisingly broad range of disciplines. We survey the current state of autonomic communications research and identify significant emerging trends and techniques.
 
Ubiquitous and pervasive computing deals with the design of autonomous and adaptive systems and services that interact with their immediate environment, enhanced by context awareness and emergence functionalities. In this article, we investigate the relationships between the environment, the actions (services), and a selection algorithm that is guaranteed to take the system to a state that suits a stochastically changing environment. Making the assumption that peering relationships between potential actions can be specified by an affinity network, the action selection mechanism is translated into an iterative algorithm that lets each activity update its strength until it converges to a solution. In pervasive environments, where services and devices interfere with each other, the proposed action selection approach prevents unexpected and undesirable behaviors or oscillating loops in such a dynamic environment.
 
The minimum-energy multicast tree problem aims to construct a multicast tree rooted at the source node and spanning all the destination nodes such that the sum of transmission power at non-leaf nodes is minimized. However, aggressive power assignment at non-leaf nodes, although conserving more energy, results in multicast trees that suffer from higher hop count and jeopardizes delay-sensitive applications, signifying a clear tradeoff between energy efficiency and delay. This article formulates these issues as a constrained Steiner tree problem, and describes a distributed constrained Steiner tree algorithm, which jointly conserves energy and bounds delay for multicast routing in ad hoc networks. In particular, the proposed algorithm concurrently constructs a constrained Steiner tree, performs transmission power assignment at non-leaf nodes, and strives to minimize the sum of transmission power of non-leaf nodes, subject to the given maximum hop count constraint. Simulation results validate the effectiveness and reveal the characteristics of the proposed algorithm.
 
Publish/Subscribe (P/S) is a communication paradigm of growing popularity for information dissemination in large-scale distributed systems. The weak coupling between information producers and consumers in P/S systems is attractive for loosely coupled and dynamic network infrastructures such as ad hoc networks. However, achieving end-to-end timeliness and reliability properties when P/S events are causally dependent is an open problem in ad hoc networks. In this article, we present, evaluate benefits of, and compare with past work an architecture design that can effectively support timely and reliable delivery of events and causally related events in ad hoc environments, and especially in mobile ad hoc networks (MANETs). With observations from both a realistic application model and simulation experiments, we reveal causal dependencies among events and their significance in a typical notional system. We also examine and propose engineering methodologies to further tailor an event-based system to facilitate its self-reorganizing capability and self-reconfiguration. Our design features a two-layer structure, including novel distributed algorithms and mechanisms for P/S tree construction and maintenance. The trace-based experimental simulation studies illustrate our design's effectiveness in cases both with and without causal dependencies.
 
One of the major challenges in the research of mobile ad hoc networks is designing dynamic, scalable, and low-cost (in terms of utilization of resources) routing protocols usable in real-world applications. Routing in ad hoc networks has been explored to a large extent over the past decade and different protocols have been proposed. Most are based on a two-dimensional view of the ad hoc network's geographical region, which is not always realistic. In this article, we propose a bird flight-inspired, highly scalable, dynamic, energy-efficient, and position-based routing protocol called Bird Flight-Inspired Routing Protocol (BFIRP). The proposed protocol is inspired by the navigation of birds over long distances following the great circle arc, the shortest arc connecting two points on the surface of a sphere. This sheds light on how birds save their energy while navigating over thousands of miles. The proposed algorithm can be readily applied in many real-world applications, as it is designed with a realistic three-dimensional view of the network's geographic region. In the proposed algorithm, each node obtains its location coordinates (X, Y, Z) and speed from the GPS (Global Positioning System), whereas the destination's location coordinates (X, Y, Z) and speed are obtained from any other distributed localization service. Based on the location information, the source and each intermediate node choose as the next hop the immediate neighbor with the maximum priority. The priority is calculated by taking into consideration the energy of the node, the distance between the node and the destination, and the degree of closeness of the node to the trajectory of the great circle arc between the current node and the destination. The proposed algorithm is simulated in J-SIM and compared with the Ad Hoc On Demand Distance Vector (AODV) and Most Forward Within Distance R (MFR) routing protocols.
The results of the simulations show that the proposed BFIRP algorithm is highly scalable and has low end-to-end delay compared to AODV. The algorithm is also simulated in various scenarios, and the results demonstrate that BFIRP outperforms AODV in energy efficiency and throughput by 20% and 15%, respectively. It also shows satisfactory improvement over MFR in terms of throughput and routing overhead.
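The three-factor priority described above can be sketched with unit vectors on a sphere. The weighting below is an illustrative assumption (the abstract does not give BFIRP's exact formula): reward residual energy, and penalize both the angular distance to the destination and the cross-track distance from the great-circle arc.

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    m = math.sqrt(dot(a, a))
    return tuple(x / m for x in a)

def arc_distance(a, b):
    """Great-circle (angular) distance between two unit vectors."""
    return math.acos(max(-1.0, min(1.0, dot(a, b))))

def cross_track(p, src, dst):
    """Angular distance of p from the great-circle arc src -> dst."""
    return abs(math.asin(max(-1.0, min(1.0, dot(p, norm(cross(src, dst)))))))

def priority(neighbor_pos, energy, src, dst, w=(1.0, 1.0, 1.0)):
    """Hypothetical BFIRP-style next-hop priority (weights are assumptions)."""
    return (w[0] * energy
            - w[1] * arc_distance(neighbor_pos, dst)
            - w[2] * cross_track(neighbor_pos, src, dst))

src, dst = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
on_arc = norm((1.0, 1.0, 0.0))    # exactly on the great-circle arc
off_arc = norm((1.0, 1.0, 0.5))   # similar progress, but off the arc
print(priority(on_arc, 0.8, src, dst) > priority(off_arc, 0.8, src, dst))  # True
```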
 
Anycasting is a network service that selects the best of the service providers in an anycast group as the destination. While anycasting offers better service flexibility in mobile ad hoc networks (MANETs), it also incurs new problems. In MANETs, every node can move arbitrarily, and the routes from mobile nodes to their service providers vary. Therefore, anycast service discovery in MANETs usually relies on network-layer message broadcasting, which leads to large traffic overhead on the scarce bandwidth of MANETs. In this work, we present a traffic-control scheme for anycast service discovery in MANETs. Our scheme can reduce the volume of both query messages and reply messages. In addition to basic anycasting, our scheme also supports a k-anycast service that requests k anycast service providers in each service instance. With the k-anycast service, the fault tolerance and service flexibility of our scheme are improved. Experimental results demonstrate that our scheme is efficient and feasible for MANETs.
 
To tackle real-world problems, a self-organized MAS should exhibit complex adaptive organizations. In this respect the holonic paradigm provides a solution for modelling complex organizational structures. Holons are defined as self-similar entities that are neither parts nor wholes. The organizational structure produced by holons is called a holarchy. A Holonic MAS (HMAS) considers agents as holons which are grouped according to holarchies. The goal of this paper is to introduce an architecture that allows holons to adapt to their environment. The metaphor used is based upon the immune system and considers stimulations/requests as antigens and selected antibodies as reactions/answers. Each antibody is activated by specific antigens and stimulated and/or inhibited by other antibodies. The immune system rewards (resp. penalizes) selected antibodies that constitute good (resp. wrong) answers to a request. This mechanism allows an agent to choose, out of a set of possible behaviors, the one that seems to fit best in a specific context. Each holon, atomic or composed, encapsulates an immune system in order to select a behavior. For composed holons, each sub-holon is represented by the selected antibody of its immune system. The super-holon's immune system therefore contains one antibody per sub-holon. This recursive architecture corresponds to the recursive nature of the holarchy. The architecture is presented with an example of simulated robot soccer. Through experiments under different conditions we show that this architecture has interesting properties.
 
Predicting future calls can be the next advanced feature of next-generation telecommunication networks as service providers look to offer new services to their customers. Call prediction can be useful to many applications such as planning daily schedules, avoiding unwanted communications (e.g., voice spam), and resource planning in call centers. Predicting calls is a very challenging task. We believe that this is an emerging area of research in ambient intelligence, where electronic devices are sensitive and responsive to people's needs and behavior. In particular, we believe that the results of this research will lead to higher productivity and quality of life. In this article, we present a Call Predictor (CP) that offers two new advanced features for next-generation phones, namely "Incoming Call Forecast" and "Intelligent Address Book." For the Incoming Call Forecast, the CP makes the next-24-hour incoming call prediction based on the caller's recent behavior and reciprocity. For the Intelligent Address Book, the CP generates a list of the contacts/numbers most likely to be dialed at any given time based on the user's behavior and reciprocity. The CP consists of two major components: the Probability Estimator (PE) and the Trend Detector (TD). The PE computes the probability of receiving/initiating a call based on the caller/user's calling behavior and reciprocity. We show that the recent trend of the caller/user's calling pattern has higher correlation with the future pattern than the pattern derived from the entire historical data. The TD detects the recent trend of the caller/user's calling pattern and computes the adequacy of historical data in terms of reversed time (time that runs towards the past) based on a trace distance. The recent behavior detection mechanism allows the CP to adapt its computation in response to new calling behaviors. Therefore, the CP is adaptive to recent behavior.
For our analysis, we use the real-life call logs of 94 mobile phone users over nine months, which were collected by the Reality Mining Project group at MIT. The performance of the CP is validated over two months based on seven months of training data. The experimental results show that the CP performs reasonably well as an incoming call predictor (Incoming Call Forecast) with a false positive rate of 8%, a false negative rate of 1%, and an error rate of 9%, and as an outgoing call predictor (Intelligent Address Book) with an accuracy of 70% when the list has five entries. The functionality of the CP can be useful in assisting its user in carrying out everyday life activities, such as scheduling daily plans by using the Incoming Call Forecast, and saving the time spent searching for a phone number in a typically lengthy contact book by using the Intelligent Address Book. Furthermore, we describe other useful applications of the CP beyond its own aforementioned features, including Call Firewall and Call Reminder.
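A toy version of the Probability Estimator idea can be sketched as follows. This is an illustrative assumption, not the paper's estimator (which also models reciprocity and uses the Trend Detector): estimate the probability of a call in each hour of the day from a call log, weighting recent days more heavily so recent trends dominate.

```python
from collections import defaultdict

def hourly_call_probability(call_log, decay=0.9):
    """call_log: list of (day_index, hour) events, newest day = highest index.
    Returns {hour: recency-weighted fraction of days with a call in that hour}."""
    if not call_log:
        return {}
    last_day = max(day for day, _ in call_log)
    # Older days contribute geometrically less weight.
    total_weight = sum(decay ** (last_day - d) for d in range(last_day + 1))
    weight_per_hour = defaultdict(float)
    seen = set()
    for day, hour in call_log:
        if (day, hour) not in seen:          # count each day-hour slot once
            seen.add((day, hour))
            weight_per_hour[hour] += decay ** (last_day - day)
    return {h: w / total_weight for h, w in weight_per_hour.items()}

# A caller who phones at 9am most days and once at 2pm long ago.
log = [(0, 14), (1, 9), (2, 9), (3, 9), (4, 9)]
probs = hourly_call_probability(log)
print(probs[9] > probs[14])  # True: 9am is far more likely than 2pm
```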
 
This article presents a hierarchical approach for detecting faults in wireless sensor networks (WSNs) after they have been deployed. The developers of WSNs can specify "invariants" that must be satisfied by the WSNs. We present a framework, Hierarchical SEnsor Network Debugging (H-SEND), for lightweight checking of invariants. H-SEND is able to detect a large class of faults in data-gathering WSNs, and leverages the existing message flow in the network by buffering and piggybacking messages. H-SEND checks as closely to the source of a fault as possible, pinpointing the fault quickly and efficiently in terms of additional network traffic. Therefore, H-SEND is suited to bandwidth- or communication-energy-constrained networks. A specification expression is provided for specifying invariants so that a protocol developer can write behavioral-level invariants. We hypothesize that data from sensor nodes does not change dramatically, but rather changes gradually over time. We extend our framework with invariants that include values determined at run-time in order to detect data trends. The value range can be based on information local to a single node or on the surrounding nodes' values. Using our system, developers can write invariants to detect data trends without prior knowledge of correct values. Automatic value detection can be used to detect anomalies that cannot be detected in existing WSNs. To demonstrate the benefits of run-time range detection and fault checking, we construct a prototype WSN using CO2 and temperature sensors coupled to Mica2 motes. We show that our method can detect sudden changes in the environment with little overhead in communication, computation, and storage.
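The run-time range-detection idea above can be sketched in a few lines. The windowing and tolerance are illustrative assumptions (H-SEND's invariants are developer-specified and checked in-network): a reading is flagged when it jumps farther from a running average of recent values than gradual drift would allow.

```python
def detect_anomalies(readings, window=5, tolerance=3.0):
    """Flag indices whose value deviates from the recent running mean
    by more than `tolerance` (a sketch of run-time range detection)."""
    anomalies = []
    history = []
    for i, value in enumerate(readings):
        if len(history) == window:
            mean = sum(history) / window
            if abs(value - mean) > tolerance:
                anomalies.append(i)
                continue  # keep anomalous readings out of the window
        history.append(value)
        if len(history) > window:
            history.pop(0)
    return anomalies

# CO2-like readings drifting gradually, then a sudden spike at index 8.
co2 = [400, 401, 402, 402, 403, 404, 405, 406, 450, 407]
print(detect_anomalies(co2))  # [8]
```

Gradual drift passes the check, so no prior knowledge of the "correct" absolute value is needed, matching the framework's motivation.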
 
Autonomic communications aim at easing the burden of managing complex and dynamic networks, and at designing adaptive, self-tuning, and self-stabilizing networks to provide much-needed flexibility and functional scalability. With the ever-increasing number of multicast applications developed recently, considerable effort has been focused on the design of adaptive flow control schemes for autonomic multicast services. The main difficulties in designing an adaptive flow controller for an autonomic multicast service are caused by heterogeneous multicast receivers, especially those with large propagation delays, since the feedback arriving at the source is somewhat outdated and can be harmful to the control operations. To tackle this problem, this article describes a novel, adaptive, and autonomic multicast scheme, the so-called Proportional, Integrative, Derivative plus Neural Network (PIDNN) predictive technique, which consists of two components: the Proportional Integrative plus Derivative (PID) controller and the Back-Propagation Neural Network (BPNN). In this integrated scheme, the PID controllers are located at the next upstream main branch nodes of the multicast receivers, and use explicit rate algorithms to regulate the receiving rates of the receivers, while the BPNN is located at the multicast source and predicts the available bandwidth of the longer-delay receivers to compute their expected rates. The ultimate sending rate of the multicast source is the maximum of the aforesaid receiving rates that can be accommodated by its participating branches. This network-assisted property differs from existing control schemes in that the PIDNN controller can relieve the unresponsiveness of a multicast flow caused by long propagation delays from the receivers. By using the BPNN, this active scheme makes the control more responsive to receivers with longer propagation delays.
Thus the rate adaptation can be performed in a timely manner, for the sender to respond to network congestion quickly. We analyze the theoretical aspects of the proposed algorithm, show how the control mechanism can be used to design a controller to support multirate multicast transmission based on feedback of explicit rates, and verify this matching using simulations. Simulation results demonstrate that the proposed PIDNN controller avoids overflow of multicast traffic, and performs better than the existing scheme PNN [Tan et al. 2005] and the multicast schemes based on control theory. Moreover, it also performs well in the sense that it achieves high link utilization, quick response, good scalability, high unitary throughput, intra-session fairness and inter-session fairness.
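The PID half of the scheme is the textbook discrete controller. The sketch below is illustrative (gains, setpoint, and the trivial plant are assumptions; PIDNN couples such a controller with the BPNN bandwidth predictor): it steers a rate toward a target via proportional, integral, and derivative terms.

```python
class PID:
    """Discrete PID controller: output = kp*e + ki*sum(e) + kd*de/dt."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured, dt=1.0):
        error = self.setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a rate toward a 50 (say, pkt/s) target with an assumed unit plant.
pid = PID(kp=0.5, ki=0.1, kd=0.0, setpoint=50.0)
rate = 0.0
for _ in range(100):
    rate += pid.update(rate)
print(round(rate, 1))  # 50.0
```

With these gains the closed loop is a damped oscillation that settles on the setpoint; the integral term removes the steady-state error that a pure proportional controller would leave.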
 
We present, and evaluate benefits of, a design methodology for translating natural phenomena represented as mathematical models into novel, self-adaptive, peer-to-peer (p2p) distributed computing algorithms ("protocols"). Concretely, our first contribution is a set of techniques to translate discrete "sequence equations" (also known as difference equations) into new p2p protocols called "sequence protocols". Sequence protocols are self-adaptive, scalable, and fault-tolerant, with applicability in p2p settings like Grids. A sequence protocol is a set of probabilistic local and message-passing actions for each process. These actions are translated from terms in a set of source sequence equations. Individual processes do not simulate the source sequence equations completely. Instead, each process executes probabilistic local and message-passing actions, so that the emergent round-to-round behavior of the sequence protocol in a p2p system can be probabilistically predicted by the source sequence equations. The paper's second contribution is the design and evaluation of a set of sequence protocols for detection of two global triggers in a distributed system: threshold detection and interval detection. The paper's third contribution is a new self-adaptive Grid computing protocol called "HoneyAdapt". HoneyAdapt is derived from sequence equations modeling adaptive bee foraging behavior in nature. HoneyAdapt is intended for Grid applications that allow Grid clients, at run-time, a choice of algorithms for executing chunks of the application's dataset. HoneyAdapt tells each Grid client how to adaptively select at run-time, for each chunk it receives, a "good" algorithm for computing the chunk; this selection is based on continuous feedback from other clients. Finally, we design a variant of HoneyAdapt, called "HoneySort", for application to Grid parallelized sorting settings using the master-worker paradigm.
Our evaluation of the above contributions consists of mathematical analysis, large-scale trace-based simulation results, and experimental results from a HoneySort deployment.
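The core translation idea can be illustrated with a toy sequence equation (chosen for illustration, not taken from the paper): x_{t+1} = x_t + r·x_t·(1 − x_t). Each process takes only a local probabilistic action, yet the population-level fraction of "alerted" processes tracks the equation round by round in expectation.

```python
import random

def sequence_equation(x, r=0.5):
    """Source sequence (difference) equation the protocol should emulate."""
    return x + r * x * (1 - x)

def protocol_round(alerted, n, r=0.5):
    """Local action per process: sample the current alerted fraction and
    become alerted with probability r * fraction.  No process simulates
    the equation itself; the fraction emerges from the population."""
    frac = len(alerted) / n
    for p in range(n):
        if p not in alerted and random.random() < r * frac:
            alerted.add(p)
    return alerted

random.seed(7)
n = 5000
alerted = set(range(n // 20))      # 5% initially alerted
x = len(alerted) / n
for _ in range(10):
    x = sequence_equation(x)       # deterministic prediction
    alerted = protocol_round(alerted, n)   # emergent stochastic behavior
emergent = len(alerted) / n
print(abs(emergent - x) < 0.1)  # the protocol tracks the equation
```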
 
In the Ambient Intelligence (AmI) vision, people should be able to seamlessly and unobtrusively use and configure the intelligent devices and systems in their ubiquitous computing environments without being cognitively and physically overloaded. In other words, the user should not have to program each device or connect them together to achieve the required functionality. However, although it is possible for a human operator to specify an active space configuration explicitly, the size, sophistication, and dynamic requirements of modern living environments demand that they have autonomous intelligence satisfying the needs of inhabitants without human intervention. This work presents a proposal for AmI fuzzy computing that exploits multiagent systems and fuzzy theory to realize a long-life learning strategy able to generate context-aware fuzzy services and actualize them through abstraction techniques in order to maximize user comfort and hardware interoperability. Experimental results show that the proposed approach is capable of anticipating users' requirements by automatically generating the most suitable collection of interoperable fuzzy services.
 
The ability to retrieve software in an easy and efficient way confers competitive advantage on computer users in general and, even more especially, on users of wireless devices (like some laptops, PDAs, etc.). In this article, we present a software retrieval service that allows users to select and retrieve software in an easy and efficient way, anywhere and anytime. Two relevant components of this service are: 1) a software ontology (software catalog) which provides users with a semantic description of software elements, hiding the location and access method of various software repositories, and 2) a set of specialist agents that allow browsing of the software catalog (automatically customized for each user), and an efficient retrieval method for the selected software. These agents automatically adapt their behavior to different users and situations by considering the profile and preferences of the users and the network status. In summary, our software-obtaining process based on an ontology and autonomous and adaptive agents presents a qualitative advance with respect to existing solutions: our approach adapts to the features of users, relieving them from knowing the technical features of their devices and the location and access method of various remote software repositories.
 
Recent advances in wireless communications and networking, distributed sensing and control, and real-time and embedded systems have led to some new computing paradigms, such as ubiquitous and pervasive computing, wireless sensor networks, and cyber-physical systems. Many novel and attractive applications are made possible, such as advanced automotive systems, environmental control, critical infrastructure control, high confidence medical devices and systems, etc. In these applications, a large-scale wireless networking system that consists of massive numbers of connected processing elements, sensors, and actuators often plays a key role. The high complexity of such large-scale heterogeneous systems raises new challenges to the system design: We expect the system to be context-aware and self-adaptive to internal and external environments to achieve reliable and robust performance; to be self-organizing and scalable so that managing and maintaining costs could be minimized; to have programmable architecture and protocols for achieving optimal performance in different situations/scenarios, etc. These challenges need to be addressed.
 
An intrusion detection system (IDS) is a security layer that detects ongoing intrusive activities in computer systems and networks. Current IDS have two main problems: the first is that typically so many alarms are generated as to overwhelm the system operator, many of them false alarms; the second is that continuous tuning of the intrusion detection model is required in order to maintain sufficient performance, due to the dynamically changing nature of the monitored system. This manual tuning process relies on the system operators to work out the updated tuning solution and to integrate it into the detection model. In this article, we present an automatically tuning intrusion detection system, which controls the number of alarms output to the system operator and tunes the detection model on the fly according to feedback provided by the system operator when false predictions are identified. This system adapts its behavior (i) by throttling the volume of alarms output to the operator in response to the ability of the operator to respond to these alarms, and (ii) by deciding how aggressively the detection model should be tuned based on the accuracy of earlier predictions. We evaluated our system using the KDDCup'99 intrusion detection dataset. Our results show that an adaptive, automatically tuning intrusion detection system is both practical and efficient.
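The two adaptive behaviors (i) and (ii) can be sketched together. The class name, constants, and update rules below are illustrative assumptions, not the paper's design: the alarm budget moves toward what the operator actually handled, and the tuning step grows when recent predictions were less accurate.

```python
class AdaptiveTuner:
    """Toy sketch of the two feedback loops described in the abstract."""
    def __init__(self, alarm_budget=10, step=0.1):
        self.alarm_budget = alarm_budget   # alarms shown per window
        self.step = step                   # how aggressively to retune

    def end_of_window(self, alarms_handled, correct, total):
        # (i) Throttle: move the budget halfway toward what was handled.
        self.alarm_budget = max(1, (self.alarm_budget + alarms_handled) // 2)
        # (ii) Tune more aggressively when the model was wrong more often.
        accuracy = correct / total if total else 1.0
        self.step = min(1.0, self.step * (2.0 - accuracy))

tuner = AdaptiveTuner()
# Operator handled 4 of the 10 shown alarms; model was right 6 times in 10.
tuner.end_of_window(alarms_handled=4, correct=6, total=10)
print(tuner.alarm_budget, round(tuner.step, 3))  # 7 0.14
```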
 
Ubiquitous and Pervasive Computing (UPC) are recent paradigms with a goal of providing computing and communication services anytime and everywhere. In UPC, automatic service composition requires dealing with four major research issues: service matching and selection, coordination and management, scalability and fault tolerance, and adaptiveness to users’ contexts and network conditions. The articles in this special issue cover some of these topics and constitute a representative sample of the latest developments in adaptive service discovery and composition in UPC.
 
In grid workflow systems, a checkpoint selection strategy is responsible for selecting checkpoints for conducting temporal verification at the run-time execution stage. Existing representative checkpoint selection strategies often select some unnecessary checkpoints and omit some necessary ones because they cannot adapt to the dynamics and uncertainty of run-time activity completion duration. In this paper, based on the dynamics and uncertainty of run-time activity completion duration, we develop a novel checkpoint selection strategy that can adaptively select not only necessary but also sufficient checkpoints. Specifically, we introduce a new concept of minimum time redundancy as a key reference parameter for checkpoint selection. An important feature of minimum time redundancy is that it can adapt to the dynamics and uncertainty of run-time activity completion duration. We develop a method for dynamically computing minimum time redundancy along grid workflow execution, and investigate its relationships with temporal consistency. Based on the method and the relationships, we present our strategy and rigorously prove its necessity and sufficiency. The simulation evaluation further demonstrates this necessity and sufficiency experimentally, together with a significant improvement in checkpoint selection over other representative strategies.
 
Network security has become a vital issue that calls for accurate and efficient solutions capable of effectively defending network systems and the valuable information traveling through them. In this article, a distributed multiagent intrusion detection system (IDS) architecture is proposed, which attempts to provide an accurate and lightweight solution to network intrusion detection by tackling issues associated with the design of a distributed multiagent system, such as poor system scalability and the requirements of excessive processing power and memory storage. The proposed IDS architecture consists of (i) the Host layer with lightweight host agents that perform anomaly detection in network connections to their respective hosts, and (ii) the Classification layer whose main functions are to perform misuse detection for the host agents, detect distributed attacks, and disseminate network security status information to the whole network. The intrusion detection task is achieved through the employment of the lightweight Adaptive Sub-Eigenspace Modeling (ASEM)-based anomaly and misuse detection schemes. Promising experimental results indicate that ASEM-based schemes outperform the KNN and LOF algorithms, with high detection rates and low false alarm rates in the anomaly detection task, and outperform several well-known supervised classification methods such as C4.5 Decision Tree, SVM, NN, KNN, Logistic, and Decision Table (DT) in the misuse detection task. To assess the performance in a real-world scenario, the Relative Assumption Model, feature extraction techniques, and common network attack generation tools are employed to generate normal and anomalous traffic in a private LAN testbed.
Furthermore, the scalability performance of the proposed IDS architecture is investigated through the simulation of the proposed agent communication scheme, and satisfactory linear relationships for both degradation of system response time and agent communication generated network traffic overhead are achieved.
 
This article presents Agilla, a mobile agent middleware designed to support self-adaptive applications in wireless sensor networks. Agilla provides a programming model in which applications consist of evolving communities of agents that share a wireless sensor network. Coordination among the agents and access to physical resources are supported by a tuple space abstraction. Agents can dynamically enter and exit a network and can autonomously clone and migrate themselves in response to environmental changes. Agilla's ability to support self-adaptive applications in wireless sensor networks has been demonstrated in the context of several applications, including fire detection and tracking, monitoring cargo containers, and robot navigation. Agilla, the first mobile agent system to operate in resource-constrained wireless sensor platforms, was implemented on top of TinyOS. Agilla's feasibility and efficiency was demonstrated by experimental evaluation on two physical testbeds consisting of Mica2 and TelosB nodes.
 
Smart networks have grown out of the need for stable, reliable, and predictable networks that will guarantee packet delivery under Quality of Service (QoS) constraints. In this article we present a measurement-based admission control algorithm that helps control traffic congestion and guarantee QoS throughout the lifetime of a connection. When a new user requests to enter the network, probe packets are sent from the source to the destination to estimate the impact that the new connection will have on the QoS of both the new and the existing users. The algorithm uses a novel algebra of QoS metrics, inspired by Warshall's algorithm, to look for a path with acceptable QoS values to accommodate the new flow. We describe the underlying mathematical principles and present experimental results obtained by evaluating the method in a large laboratory test-bed operating the Cognitive Packet Network (CPN) protocol.
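As a rough illustration of how a Warshall-style algebra can drive admission control, the sketch below computes minimum end-to-end delays over a (min, +) algebra and admits a new flow only if some path meets its delay bound. The function names, the delay-only metric, and the acceptance test are illustrative assumptions; the article's actual algebra combines several QoS metrics and works from probe-packet measurements.

```python
import math

def min_delay_paths(n, delay):
    """Floyd-Warshall over a (min, +) algebra: dist[i][j] is the
    smallest end-to-end delay from node i to node j. `delay` maps
    directed edges (i, j) to measured per-link delays."""
    dist = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        dist[i][i] = 0.0
    for (i, j), d in delay.items():
        dist[i][j] = min(dist[i][j], d)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

def admit(n, delay, src, dst, max_delay):
    """Admit the new flow only if some path meets the QoS bound."""
    return min_delay_paths(n, delay)[src][dst] <= max_delay
```

Other QoS metrics fit the same skeleton by swapping the semiring: max-min for bandwidth, or product for loss probability.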
 
Various articles focusing on current research in spatial computing are compiled. The article 'Gabriel graphs in arbitrary metric space and their cellular automaton for many grids,' by Maignan and Gruau, addresses the challenge of modeling a spatial system. The article 'Detecting locally distributed predicates,' by de Rosa et al., addresses the issue of creating spatially-oriented programming languages that allow the representation and detection of distributed properties in sparse-topology systems. 'Spatial coordination of pervasive services through chemical-inspired tuple spaces,' by Viroli et al., presents a self-organizing distributed architecture for pervasive computing that supports situatedness, adaptivity, and diversity. 'Infrastructureless spatial storage algorithms,' by Fernandez-Marquez et al., defines and analyses a collection of fault-tolerant algorithms for persistent storage of data at specific geographical zones exploiting the memory of mobile devices located in these areas.
 
To achieve artificial sociability, machines require the ability to perceive and adapt to affect. Most autonomous systems use Automated Facial Expression Classification (AFEC) and Automated Affect Interpretation (AAI) to achieve sociability. Varying lighting conditions, occlusion, and control over physiognomy can influence the real-life performance of vision-based AFEC systems. Physiological signals provide complementary information for AFEC and AAI. We employed transient facial thermal features for AFEC and AAI. Infrared thermal images were captured with participants' normal expression and intentional expressions of happiness, sadness, disgust, and fear. Facial points that undergo significant thermal changes with a change in expression, termed Facial Thermal Feature Points (FTFPs), were identified. Discriminant analysis was invoked on principal components derived from the Thermal Intensity Values (TIVs) recorded at the FTFPs. Cross-validation and person-independent classification resulted in success rates of 66.28% and 56.0%, respectively. Classification significance tests suggest that (1) like other physiological cues, facial skin temperature provides useful information about affective states and their facial expression; (2) patterns of facial skin temperature variation can complement other cues for AFEC and AAI; and (3) infrared thermal imaging may help achieve artificial sociability in robots and autonomous systems.
 
The taxonomy between elements of the HISENE framework  
Changes in the ontology of the agent E
Time-costs (in seconds) of S-PAM, T1 and T2
Forming groups of agents is an important task in many agent-based applications, for example when determining a coalition of buyers in an e-commerce community or organizing different Web services in a Web services' composition. A key issue in this context is that of generating groups of agents such that the communication among agents of the same group is not subjected to comprehension problems. To this purpose, several approaches have been proposed in the past in order to form groups of agents based on some similarity measures among agents. Such similarity measures are mainly based on lexical and/or structural similarities among agent ontologies. However, the necessity of taking into account a semantic component of the similarity value arises, for example by considering the context in which a term is used in an agent ontology. Therefore we propose a clustering technique based on the HISENE semantic negotiation protocol, using a similarity value that has lexical, structural and semantic components. Moreover, we introduce a suitable multiagent architecture that allows computing agent similarities by means of an efficient distributed approach.
 
An example of a highway with traffic cameras  
Today's distributed applications such as sensor networks, mobile multimedia applications, and intelligent transportation systems pose huge engineering challenges. Such systems often comprise different components that interact with each other as peers, as such forming a decentralized system. The system components and collaborations change over time, often in unanticipated ways. Multiagent systems belong to a class of decentralized systems that are known for realizing qualities such as adaptability, robustness, and scalability in such environments. A typical way to structure and manage interactions among agents is by means of organizations. Existing approaches usually endow agents with a dual responsibility: on the one hand agents have to play roles providing the associated functionality in the organization, on the other hand agents are responsible for setting up organizations and managing organization dynamics. Engineering realistic multiagent systems in which agents encapsulate this dual responsibility is a complex task. In this article, we present an organization model for context-driven dynamic agent organizations. The model defines abstractions that support application developers to describe dynamic organizations. The organization model is part of an integrated approach, called MACODO: Middleware Architecture for COntext-driven Dynamic agent Organizations. The complementary part of the MACODO approach is a middleware platform that supports the distributed execution of dynamic organizations specified using the abstractions, as described in Weyns et al. [2009]. In the model, the life-cycle management of dynamic organizations is separated from the agents: organizations are first-class citizens, and their dynamics are governed by laws. The laws specify how changes in the system (e.g., an agent joins an organization) and changes in the context (e.g., information observed in the environment) lead to dynamic reorganizations. 
As such, the model makes it easier to understand and specify dynamic organizations in multiagent systems, and promotes reusing the life-cycle management of dynamic organizations. The organization model is formally described to specify the semantics of the abstractions, and ensure its type safety. We apply the organization model to specify dynamic organizations for a traffic monitoring application.
 
MAS interaction is specified in terms of a list of header fields to define the particular AIPS protocols used and other fields such as sender, receiver, etc. 
Example of 1st-order modal logic formula to define the semantics of a request type communicative act message.
Syntax for exchanging CA type messages.
Multi-Agent Systems (MAS) represent a powerful distributed computing model, enabling agents to cooperate and compete with each other and to exchange both semantic content and a semantic context to interpret that content more automatically and accurately. Many types of individual agent and MAS models have been proposed since the mid-1980s, but the majority of these have led to single-developer, homogeneous MAS systems. For over a decade, the FIPA standards activity has worked to produce public MAS specifications, acting as a key enabler to support interoperability, open service interaction, and heterogeneous development. The main characteristics of the FIPA model for MAS are presented, together with an analysis of the design choices and features of the model. In addition, a comparison of the FIPA model for system interoperability with those of other standards bodies is presented, along with a discussion of the current status of FIPA and future directions.
 
One of the major challenges in engineering distributed multiagent systems is the coordination necessary to align the behavior of different agents. Decentralization of control implies a style of coordination in which the agents cooperate as peers with respect to each other and no agent has global control over the system, or global knowledge about the system. The dynamic interactions and collaborations among agents are usually structured and managed by means of roles and organizations. In existing approaches agents typically have a dual responsibility: on the one hand playing roles within the organization, on the other hand managing the life-cycle of the organization itself, for example, setting up the organization and managing organization dynamics. Engineering realistic multiagent systems in which agents encapsulate this dual responsibility is a complex task. In this article, we present a middleware for context-driven dynamic agent organizations. The middleware is part of an integrated approach, called MACODO: Middleware Architecture for COntext-driven Dynamic agent Organizations. The complementary part of the MACODO approach is an organization model that defines abstractions to support application developers in describing dynamic organizations, as described in Weyns et al. [2010]. The MACODO middleware offers the life-cycle management of dynamic organizations as a reusable service separated from the agents, which makes it easier to understand, design, and manage dynamic organizations in multiagent systems. We give a detailed description of the software architecture of the MACODO middleware. The software architecture describes the essential building blocks of a distributed middleware platform that supports the MACODO organization model. We used the middleware architecture to develop a prototype middleware platform for a traffic monitoring application.
We evaluate the MACODO middleware architecture by assessing the adaptability, scalability, and robustness of the prototype platform.
 
A deployment diagram for the different analyzed middleware solutions for the CUE architecture. 
Collaborative Ubiquitous Environments (CUEs) are environments that support collaboration among persons in a context of ubiquitous computing. This article shows how results of the research in the Multi-Agent System (MAS) area, and in particular on MAS environments, can be used to model, design and engineer CUEs, with specific reference to the management of context-awareness information. After a description of the reference scenario, the Multilayered Multi-Agent Situated System model will be introduced and applied to represent and to manage several types of awareness information (both physical and logical contextual information). Finally, three different approaches to the design and engineering of CUEs will then be introduced and evaluated.
 
Many classes of distributed applications, including e-business, e-government, and ambient intelligence, consist of networking infrastructures, where the nodes (peers)—be they software components, human actors or organizational units—cooperate with each other to achieve shared goals. The multi-agent system metaphor fits very well such settings because it is founded on intentional and social concepts and mechanisms. Not surprisingly, many agent-oriented software development methods have been proposed, including GAIA, PASSI, and Tropos. This paper extends the Tropos methodology, enhancing its ability to support high variability design through the explicit modelling of alternatives, it adopts an extended notion of agent capability and proposes a refined Tropos design process. The paper also presents an implemented software development environment for Tropos, founded on the Model-Driven Architecture (MDA) framework and standards. The extended Tropos development process is illustrated through a case study involving an e-commerce application.
 
This article defines and analyzes a collection of algorithms for persistent storage of data at specific geographical zones, exploiting the memory of mobile devices located in these areas. Contrary to other approaches for data dissemination, our approach uses a viral programming model. Data plays an active role in the storage process: it acts as a virus or a mobile agent, finding its own storage and relocating when necessary. We consider geographical areas of any shape and size. Simulation results show that our algorithms are scalable and converge quickly, even though none of them outperforms the others for all performance metrics considered.
 
In this paper, we provide convergence results for an Ant Routing (ARA) Algorithm for wireline, packet-switched communication networks that are acyclic. Such algorithms are inspired by the foraging behavior of ants in nature. We consider an ARA algorithm proposed by Bean and Costa [2]. The algorithm has the virtues of being adaptive and distributed, and can provide a multipath routing solution. We consider a scenario where there are multiple incoming data traffic streams that are to be routed to their destinations via the network. Ant packets, which are nothing but probe packets, are used to estimate the path delays in the network. The node routing tables, which consist of routing probabilities for the outgoing links, are updated based on these delay estimates. In contrast to the available analytical studies in the literature, the link delays in our model are stochastic, time-varying, and dependent on the link traffic. The evolution of the delay estimates and the routing probabilities are described by a set of stochastic iterative equations. In doing so, we take into account the distributed and asynchronous nature of the algorithm operation. Using methods from the theory of stochastic approximations, we show that the evolution of the delay estimates can be closely tracked by a deterministic ODE (Ordinary Differential Equation) system, when the step-size of the delay estimation scheme is small. We study the equilibrium behavior of the ODE in order to obtain the equilibrium behavior of the algorithm. We also provide illustrative simulation results.
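The delay-estimation and routing-probability updates described above can be sketched as follows. The constant step-size and the inverse-delay bias are assumptions made for illustration; they are not the exact update rule of Bean and Costa.

```python
def update_estimate(X, link, sample, eps=0.05):
    """Stochastic-approximation update of the delay estimate for one
    outgoing link, driven by the path delay sampled by a returning
    ant packet. Small eps gives the slow-adaptation regime in which
    the estimates track the deterministic ODE."""
    X[link] += eps * (sample - X[link])

def routing_probabilities(X, beta=1.0):
    """Map delay estimates to routing probabilities: outgoing links
    with lower estimated delay receive proportionally more traffic
    (inverse-delay power-law bias, an illustrative assumption)."""
    w = {link: (1.0 / x) ** beta for link, x in X.items()}
    total = sum(w.values())
    return {link: wi / total for link, wi in w.items()}
```

Each node runs these updates asynchronously on its own table, which is what makes the scheme distributed.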
 
This paper presents So-Grid, a set of bio-inspired algorithms tailored to the decentralized construction of a Grid information system which features adaptive and self-organization characteristics. Such algorithms exploit the properties of swarm systems, in which a number of entities/agents perform simple operations at the local level, but together engender an advanced form of "swarm intelligence" at the global level. In particular, So-Grid provides two main functionalities: logical reorganization of resources, inspired by the behavior of some species of ants and termites which move and collect items within their environment, and resource discovery, inspired by the mechanisms through which ants searching for food sources are able to follow the pheromone traces left by other ants. These functionalities are correlated, since an intelligent dissemination can facilitate discovery. In the Grid environment, a number of ant-like agents autonomously travel the Grid through P2P interconnections and use biased probability functions to: (i) replicate resource descriptors in order to favor resource discovery; (ii) collect resource descriptors with similar characteristics in nearby Grid hosts; (iii) foster the dissemination of descriptors corresponding to fresh (recently updated) resources and to resources having high Quality of Service (QoS) characteristics. Simulation analysis shows that the So-Grid replication algorithm is capable of reducing the entropy of the system and efficiently disseminating content. Moreover, as descriptors are progressively reorganized and replicated, the So-Grid discovery algorithm allows users to reach Grid hosts that store information about a larger number of useful resources in a shorter amount of time. The proposed approach features interesting characteristics, i.e., self-organization, scalability and adaptivity, which make it useful for a dynamic and partially unreliable distributed system.
 
Distributed real-time and embedded (DRE) systems can be composed of hundreds of software components running across tens or hundreds of networked processors that are physically separated from one another. A key concern in DRE systems is determining the spatial deployment topology, which is how the software components map to the underlying hardware components. Optimizations, such as placing software components with high-frequency communications on processors that are closer together, can yield a number of important benefits, such as reduced power consumption due to decreased wireless transmission power required to communicate between the processing nodes. Determining a spatial deployment plan across a series of processors that will minimize power consumption is hard since the spatial deployment plan must respect a combination of real-time scheduling, fault-tolerance, resource, and other complex constraints. This article presents a hybrid heuristic/evolutionary algorithm, called ScatterD, for automatically generating spatial deployment plans that minimize power consumption. This work provides the following contributions to the study of spatial deployment optimization for power consumption minimization: (1) it combines heuristic bin-packing with an evolutionary algorithm to produce a hybrid algorithm with excellent deployment derivation capabilities and scalability, (2) it shows how a unique representation of the spatial deployment solution space integrates the heuristic and evolutionary algorithms, and (3) it analyzes the results of experiments performed with data derived from a large-scale avionics system that compares ScatterD with other automated deployment techniques. These results show that ScatterD reduces power consumption by between 6% and 240% more than standard bin-packing, genetic, and particle swarm optimization algorithms.
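A minimal sketch of the hybrid idea, assuming utilization-style resource constraints and a caller-supplied power-cost function: a first-fit heuristic seeds the search, and a (1+1)-style evolutionary loop mutates the deployment while rejecting infeasible or costlier plans. ScatterD's real operators, constraint set (scheduling, fault-tolerance), and cost model are considerably richer.

```python
import random

def first_fit(tasks, capacity, n_procs):
    """Heuristic seed: first-fit bin packing of task utilizations
    onto processors of uniform capacity."""
    load = [0.0] * n_procs
    plan = []
    for u in tasks:
        for p in range(n_procs):
            if load[p] + u <= capacity:
                load[p] += u
                plan.append(p)
                break
        else:
            raise ValueError("infeasible task set")
    return plan

def evolve(tasks, capacity, n_procs, cost, gens=200, seed=0):
    """Evolutionary refinement: mutate one task's placement per
    generation and keep any feasible mutant with lower cost."""
    rng = random.Random(seed)
    plan = first_fit(tasks, capacity, n_procs)
    best = cost(plan)
    for _ in range(gens):
        cand = plan[:]
        cand[rng.randrange(len(cand))] = rng.randrange(n_procs)
        load = [0.0] * n_procs
        for u, p in zip(tasks, cand):
            load[p] += u
        if max(load) <= capacity and cost(cand) < best:
            plan, best = cand, cost(cand)
    return plan, best
```

With a cost function counting powered-on processors, the loop tends to pack components onto fewer nodes, a crude proxy for the power objective.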
 
PE logic operation. 
Two stages of the BFT with 16 PEs and 4 channels up to the next stage. 
Distribution of node arities (input-arity plus output-arity) for the baseline case and the decomposed case for ConceptNet-default. 
Comparison between placement for locality and load balancing ignoring locality for ConceptNet-default. Both cases use the static and decomposition optimizations.
Area in Virtex 6 slices of a PE for each application, benchmark graph, and optimization. 
How do we develop programs that are easy to express, easy to reason about, and able to achieve high performance on massively parallel machines? To address this problem, we introduce GraphStep, a domain-specific compute model that captures algorithms that act on static, irregular, sparse graphs. In GraphStep, algorithms are expressed directly without requiring the programmer to explicitly manage parallel synchronization, operation ordering, placement, or scheduling details. Problems in the sparse graph domain are usually highly concurrent and communicate along graph edges. Exposing concurrency and communication structure allows scheduling of parallel operations and management of communication that is necessary for performance on a spatial computer. We study the performance of a semantic network application, a shortest-path application, and a max-flow/min-cut application. We introduce a language syntax for GraphStep applications. The total speedup over sequential versions of the applications studied ranges from a factor of 19 to a factor of 15,000. Spatially-aware graph optimizations (e.g., node decomposition, placement and route scheduling) delivered speedups from 3 to 30 times over a spatially-oblivious mapping.
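The send/reduce flavor of a graph-step iteration can be sketched with single-source shortest paths: in each step, active nodes send their distances along out-edges and receiving nodes reduce incoming messages with min. This is a sequential simulation of the compute model, not the actual GraphStep language or its hardware mapping.

```python
import math

def graph_step_sssp(n, edges, source):
    """Bellman-Ford expressed as synchronized graph steps. `edges`
    maps each node to a list of (neighbor, weight) pairs. Each
    iteration is one global step: a send phase along out-edges and
    a min-reduce update phase; only nodes whose distance improved
    stay active, so the programmer never schedules anything."""
    dist = [math.inf] * n
    dist[source] = 0.0
    active = {source}
    while active:
        inbox = {}
        for u in active:                      # send phase
            for v, w in edges.get(u, []):
                m = dist[u] + w
                inbox[v] = min(inbox.get(v, math.inf), m)
        active = set()                        # reduce/update phase
        for v, m in inbox.items():
            if m < dist[v]:
                dist[v] = m
                active.add(v)
    return dist
```

On a spatial computer the send phase becomes physical routing along placed edges, which is why placement and route scheduling matter for performance.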
 
This article presents a novel intelligent embedded agent approach for reducing the number of associations and interconnections between the various agents operating within ad hoc multiagent societies of an Ambient Intelligent Environment (AIE), in order to reduce processing latency and overheads. The main goals of the proposed fuzzy-based intelligent embedded agents (F-IAS) include learning the overall network configuration and adapting the system functionality to personalize themselves to the user's needs, based on monitoring the user in a lifelong, nonintrusive mode. In addition, the F-IAS agents aim to reduce the agent interconnections to the most relevant set of agents in order to reduce processing overheads and thus implicitly improve the system's overall efficiency. We employ embedded ambassador agents, namely embassadors, which are designated F-IAS agents endowed with additional novel characteristics: they not only act as gateways filtering the number of messages multicast across societies, but also discover, recommend, and establish associations between agents residing in separate societies. To validate the efficiency of the proposed methods for multiagent and society-based intelligent association discovery and learning of F-IAS agents/embassadors, we present two sets of experiments. The first describes the results obtained within the intelligent Dormitory (iDorm), a real-world testbed for AIE research. Here we specifically demonstrate the utilization of the F-IAS agents and show that, by optimizing the set of associations, the agents increase efficiency and performance. The second set of experiments is based on emulating an iDorm-like large-scale multi-society-based AIE environment. The results illustrate how embassadors discover strongly correlated agent pairs and cause them to form associations, so that relevant agents of separate societies can start interacting with each other.
 
We present schemes for anonymous transactions that preserve privacy and anonymity, providing anonymous user authentication in distributed networks such as the Internet. We first present a practical scheme for anonymous transactions in which transaction resolution is assisted by a Trusted Authority. This practical scheme is extended to a theoretical scheme in which a Trusted Authority is not involved in the transaction resolution. Given an authority that generates hard-to-produce evidence EVID (e.g., a problem instance with or without a solution) for each player, the identity of a user U is defined by the ability to prove possession of said evidence. We use Zero-Knowledge proof techniques to repeatedly identify U by providing a proof that U has evidence EVID, without revealing EVID, therefore avoiding identity theft. In both schemes the authority provides each user with a unique random string. A player U may produce a unique user name and password for each other player S using a one-way function over the random string and the IP address of S. The player does not have to maintain any information in order to reproduce the user name and password used for accessing a player S. Moreover, the player U may execute transactions with a group of players S_U in two phases: in the first phase the player interacts with each server without revealing information concerning its identity and without allowing linkability among the servers in S_U. In the second phase the player allows linkability and therefore transaction commitment with all servers in S_U, while preserving anonymity (for future transactions).
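The stateless per-server credential derivation can be sketched as follows, using SHA-256 as an illustrative choice of one-way function (the schemes only require some one-way function over the authority-issued random string and the server's IP; the labels and output lengths here are assumptions):

```python
import hashlib

def derive_credentials(master_secret, server_ip):
    """Derive a per-server user name and password from the
    authority-issued random string and the server's IP address via
    a one-way hash. Nothing is stored per server: the same inputs
    always reproduce the same credentials, while different servers
    see unrelated, unlinkable identities."""
    user = hashlib.sha256(
        master_secret + b"|user|" + server_ip.encode()).hexdigest()[:16]
    pwd = hashlib.sha256(
        master_secret + b"|pwd|" + server_ip.encode()).hexdigest()[:32]
    return user, pwd
```

Because the credentials are recomputed on demand, the player carries only the single random string yet presents a distinct identity to each server.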
 
Set-up of the experiments. The nest is indicated by a light in the centre. The robots in the nest are resting and not active. The other robots are searching the environment, and one in each picture has found and is retrieving a prey.
Performance of different set-ups in simulation. The x and y axes are in logarithmic scale. The value plotted is the number of retrieved prey. For the meaning of the boxes and the other symbols, see Fig. 4 and Fig. 5. See text for the discussion.
Division of labour in the s-bots. Each group of four columns refers to different environments with increasing prey density. Each bar refers to a group size (see the legend). Each bar is divided into three parts whose height is proportional to the ratio of robots belonging to the following groups: foragers (P_l > 0.043) on the top, loafers (P_l < 0.007) at the bottom, and undecided (0.007 ≤ P_l ≤ 0.043) in between. For example, if the top part is 25% of the total height of the bar for a group of 8 robots, it means that on average 2 robots were foragers.
In this article, we analyze the behavior of a group of robots involved in an object retrieval task. The robots' control system is inspired by a model of ants' foraging. This model emphasizes the role of learning in the individual. Individuals adapt to the environment using only locally available information. We show that a simple parameter adaptation is an effective way to improve the efficiency of the group and that it brings forth division of labor between the members of the group. Moreover, robots that are best at retrieving have a higher probability of becoming active retrievers. This selection of the best members does not use any explicit representation of individual capabilities. We analyze this system and point out its strengths and its weaknesses.
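The kind of individual parameter adaptation described above can be sketched as a bounded update of each robot's probability of leaving the nest; the increment and the bounds below are illustrative assumptions, not the article's actual values.

```python
def adapt(p, success, delta=0.005, p_min=0.0015, p_max=0.05):
    """One adaptation step for a single robot: raise the probability
    of leaving the nest after a successful retrieval, lower it after
    a failure, clamped to fixed bounds. Robots that retrieve well
    drift toward foraging and unsuccessful ones toward resting,
    yielding division of labour with no explicit comparison between
    individuals."""
    p = p + delta if success else p - delta
    return min(p_max, max(p_min, p))
```

Run per robot after each trip, this local rule is all that is needed; the group-level specialization emerges from the differing success histories.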
 
Gabriel graphs are subgraphs of Delaunay graphs that are used in many domains such as sensor networks and computer graphics. Although very useful in their original form, their definition is bound to applications involving Euclidean spaces only, yet their principles seem applicable to a much wider range of applications. In this article, we generalize this construct and define metric Gabriel graphs, which transport the principles of Gabriel graphs to arbitrary metric spaces, allowing their use in domains like cellular automata and amorphous computing, or any other domain where a non-Euclidean metric is used. We study global/local properties of metric Gabriel graphs and use them to design a cellular automaton that draws the metric Gabriel graph of its input. This cellular automaton uses only seven states to achieve this goal and has been tested on hexagonal grids, 4-connected, and 8-connected square grids.
 
Most construction of artificial, multicomponent structures is based upon an external entity that directs the assembly process, usually following a script/blueprint under centralized control. In contrast, recent research has focused increasingly on an alternative paradigm, inspired largely by the nest building behavior of social insects, in which components “self-assemble” into a given target structure. Adapting such a nature-inspired approach to precisely self-assemble artificial structures (bridge, building, etc.) presents a formidable challenge: one must create a set of local control rules to direct the behavior of the individual components/agents during the self-assembly process. In recent work, we developed a fully automated procedure that generates such rules, allowing a given structure to successfully self-assemble in a simulated environment having constrained, continuous motion; however, the resulting rule sets were typically quite large. In this article, we present a more effective methodology for automatic rule generation, which makes an attempt to parsimoniously capture both the repeating patterns that exist within a structure, and the behaviors necessary for appropriate coordination. We then empirically show that the procedure developed here generates sets of rules that are not only correct, but significantly reduced in size, relative to our earlier approach. Such rule sets allow for simpler agents that are nonetheless still capable of performing complex tasks, and therefore demonstrate the problem-solving potential of self-organized systems.
 
This paper illustrates the methods and results of two sets of experiments in which a group of mobile robots, called s-bots, are required to physically connect to each other—i.e., to self-assemble—to cope with environmental conditions that prevent them from carrying out their task individually. The first set of experiments is a pioneering study on the utility of self-assembling robots in relatively complex scenarios, such as cooperative object transport. The results of our work suggest that the s-bots possess hardware characteristics which facilitate the design of control mechanisms for autonomous self-assembly. The second set of experiments is an attempt to integrate decision-making mechanisms into the behavioural repertoire of an s-bot, allowing the robot to autonomously decide whether or not environmental contingencies require self-assembly. The results show that it is possible to synthesise, using evolutionary computation techniques, artificial neural networks that integrate both the sensory-motor coordination and the decision-making mechanisms required by the robots in the context of self-assembly.
 
[Figures: The BEYOND Simulator. (1) A GUI to configure a simulated network. (2) A GUI to configure the iNet EE facility.]
Network applications are increasingly required to be autonomous, scalable, adaptive to dynamic changes in the network, and survivable against partial system failures. Based on the observation that various biological systems have already satisfied these requirements, this article proposes and evaluates a biologically inspired framework that makes network applications autonomous, scalable, adaptive, and survivable. With the proposed framework, called iNet, each network application is designed as a decentralized group of software agents, analogous to a bee colony (the application) consisting of multiple bees (the agents). Each agent provides a particular functionality of a network application and implements biological behaviors such as reproduction, migration, energy exchange, and death. iNet is modeled after the mechanisms by which the immune system detects antigens (e.g., viruses) and produces specific antibodies to eliminate them. It models a set of environment conditions (e.g., network traffic and resource availability) as an antigen and an agent behavior (e.g., migration) as an antibody. iNet allows each agent to autonomously sense its surrounding environment conditions (an antigen), evaluate whether it adapts well to the sensed environment, and, if it does not, adaptively perform a behavior (an antibody) suitable for those conditions. In iNet, a configuration of antibodies is encoded as a set of genes, and antibodies evolve via genetic operations such as crossover and mutation. Empirical measurements show that iNet is sufficiently lightweight. Simulation results show that agents adapt to dynamic and heterogeneous network environments by evolving their antibodies across generations. The results also show that iNet allows agents to scale to workload volume and network size and to survive partial link failures in the network.
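The antigen/antibody matching and the genetic operations described above can be sketched in a few lines. Everything here is hypothetical for illustration (the condition patterns, affinity function, behavior names, and operator details are assumptions, not iNet's actual encoding): an antigen is a vector of sensed conditions, an antibody pairs a condition pattern with a behavior, and a genome is a list of antibodies evolved by crossover and mutation.

```python
import random

random.seed(7)

BEHAVIORS = ["migrate", "reproduce", "exchange_energy", "die"]

def affinity(pattern, antigen):
    # Higher when the antibody's condition pattern is closer to what is sensed.
    return -sum((p - a) ** 2 for p, a in zip(pattern, antigen))

def select_behavior(genome, antigen):
    # An agent performs the behavior of its best-matching antibody.
    pattern, behavior = max(genome, key=lambda ab: affinity(ab[0], antigen))
    return behavior

def crossover(g1, g2):
    # One-point crossover over two antibody genomes of equal length.
    cut = random.randrange(1, len(g1))
    return g1[:cut] + g2[cut:]

def mutate(genome, rate=0.1):
    # Occasionally swap an antibody's behavior for a random one.
    return [(p, random.choice(BEHAVIORS) if random.random() < rate else b)
            for p, b in genome]

genome = [((0.9, 0.1), "migrate"),    # high traffic, scarce resources -> move away
          ((0.1, 0.9), "reproduce")]  # low traffic, ample resources -> spawn
print(select_behavior(genome, antigen=(0.8, 0.2)))  # closest to the first pattern
```

Across generations, selection would favor genomes whose antibodies trigger behaviors that improve the agent's fitness in the sensed environment.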
 
In service-oriented environments and distributed systems, service composition allows simple services to be dynamically combined into new, more complex services. Service composition techniques are usually designed as an extension to service discovery. Traditional techniques try to match a user’s often complex requirements against the available services. However, one-to-one matching is inefficient; it is preferable to satisfy the request from the available services even when no single basic service matches it. Separating composition from discovery has also led to inefficiency, especially in highly dynamic environments. Given the heterogeneity of networks, users, and applications with multiple sources, constructing service-specific overlays in large distributed networks is challenging. In this article, we propose a new service composition algorithm to deal with the problem of composing multiple autonomic elements to achieve system-wide goals. Using a self-organizing approach, autonomic entities are dynamically and seamlessly composed into service-specific overlay networks. The algorithm combines composition and service discovery into one step, thereby achieving greater efficiency and lower latency. The decentralized and self-organizing nature of the algorithm allows it to respond rapidly to system changes. Extensive simulation results validate the effectiveness of the approach when compared to other solutions.
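The core intuition, that a request can be met by a chain of simple services even when no single service matches it, can be sketched as a search over service types. This is a minimal centralized illustration under assumed names, not the article's decentralized overlay algorithm: the registry, service names, and type labels are all hypothetical.

```python
from collections import deque

# Hypothetical registry: service name -> (input type, output type).
services = {
    "geocode": ("address", "coordinates"),
    "weather": ("coordinates", "forecast"),
    "render":  ("forecast", "webpage"),
}

def compose(registry, want_in, want_out):
    """Find a chain of services turning `want_in` into `want_out` via BFS.

    Discovery and composition happen in one pass: each hop both discovers
    a usable service and extends the composite chain.
    """
    queue = deque([(want_in, [])])
    seen = {want_in}
    while queue:
        t, chain = queue.popleft()
        if t == want_out:
            return chain
        for name, (i, o) in registry.items():
            if i == t and o not in seen:
                seen.add(o)
                queue.append((o, chain + [name]))
    return None  # no composition of the available services satisfies the request

print(compose(services, "address", "webpage"))  # ['geocode', 'weather', 'render']
```

No registered service maps an address directly to a webpage, yet the request is still met; in the article's setting this search would instead emerge from the self-organizing overlay rather than a central registry.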
 
[Figure: Components of an autonomic system.]
[Figure: A 3-tier distributed application infrastructure. Tier 2, the Web Host, runs the Web and Application servers and has a CPU and a disk as queuing centers. The Data Host runs the database and handles queued requests for its CPU and disk resources. The infrastructure runs two applications; the system is modeled as a queuing network model, with each application modeled as a class. Table 1 shows the resource demands.]
In an autonomic computing system, an autonomic manager makes tuning, load balancing, or provisioning decisions based on a predictive model of the system. This article investigates performance analysis techniques used by the autonomic manager. It looks at the complexity of the workloads and presents algorithms for computing the bounds of performance metrics for distributed systems under asymptotic and nonasymptotic conditions, that is, with saturated and nonsaturated resources. The techniques used are hybrid in nature, making use of performance evaluation and linear and nonlinear programming models. The workloads are characterized by the workload intensity, which represents the total number of users in the system, and by the workload mixes, which depict the number of users in each class of service. The results presented in this article can be applied to distributed transactional systems. Such systems serve a large number of users with many classes of services and can thus be considered as representative of a large class of autonomic computing systems.
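For intuition about such performance bounds, the classical single-class asymptotic bounds from operational analysis are a simpler special case than the multiclass linear-programming bounds the article develops: throughput is capped both by the bottleneck resource and by the population, while response time is bounded below by the total demand and, once the bottleneck saturates, grows linearly with the number of users. The demand values below are invented for illustration.

```python
def asymptotic_bounds(demands, n_users, think_time):
    """Classical asymptotic bounds for a closed, single-class queuing network.

    demands    -- service demand D_k (seconds) at each resource k
    n_users    -- number of users N in the closed system
    think_time -- average think time Z (seconds)
    Returns (throughput upper bound, response-time lower bound).
    """
    d_total = sum(demands)
    d_max = max(demands)
    # X(N) <= min(1/D_max, N/(D + Z)): bottleneck cap and population cap.
    x_upper = min(1.0 / d_max, n_users / (d_total + think_time))
    # R(N) >= max(D, N*D_max - Z): no queuing vs. saturated bottleneck.
    r_lower = max(d_total, n_users * d_max - think_time)
    return x_upper, r_lower

# Example: CPU demand 0.2 s, disk demand 0.5 s (the bottleneck),
# 20 users, 4 s think time.
x, r = asymptotic_bounds([0.2, 0.5], 20, 4.0)
print(x, r)  # throughput <= 2.0 req/s, response time >= 6.0 s
```

The multiclass case replaces the scalar bottleneck argument with per-resource utilization constraints over all classes, which is what turns the bound computation into the linear (and nonlinear) programs discussed in the article.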
 
Top-cited authors
Mazeiar Salehie
  • University of Limerick
Danny Weyns
  • KU Leuven
Erol Gelenbe
  • Polish Academy of Sciences
Marco Dorigo
  • Université Libre de Bruxelles
Fabrice Saffre
  • VTT Technical Research Centre of Finland