Simulation Modelling Practice and Theory

Published by Elsevier
Print ISSN: 1569-190X
This paper investigates modeling and control issues associated with worm-gear driven systems. In the modeling part, both static and dynamic analyses are conducted to investigate the characteristics of the worm gear. The static analysis reveals not only the non-backdrivability but also the dependency of the break-in torques on the loading torque, the direction of motion, and crucial system parameters. The dynamic analysis generates four linear equations of motion, of which only one applies at any particular instant; the applicable equation depends on the direction of motion and on the relative magnitude of the input and loading torques. In the control part, a sliding controller is designed based on the modeling results. The controller provides robustness against variations of system parameters caused by the speed-dependent nature of the coefficient of friction. Because the dependency of the dynamic equation on the operating condition may render the controller ill-defined in some scenarios, a lemma is proved that can be used to select control parameters guaranteeing the well-definedness of the controller. The proposed control scheme is applied to a worm-gear driven positioning platform. Experimental results indicate that the proposed control system achieves 20% of the tracking error of a conventional PID controller.
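As a rough illustration of the control idea, the sketch below runs a generic sliding-mode tracking controller with a boundary layer on a double-integrator plant. This is not the paper's worm-gear model; the plant, gains, and reference are invented for demonstration.

```python
import math

# Illustrative sliding-mode tracking controller on a generic second-order
# plant x'' = u (NOT the paper's worm-gear dynamics; all parameters here
# are assumptions chosen for the demo).
def simulate(lam=5.0, K=20.0, phi=0.05, dt=1e-3, T=2.0):
    x, v = 0.0, 0.0                       # state: position, velocity
    for k in range(int(T / dt)):
        t = k * dt
        r, rd, rdd = math.sin(t), math.cos(t), -math.sin(t)  # reference
        e, ed = x - r, v - rd
        s = ed + lam * e                  # sliding surface
        sat = max(-1.0, min(1.0, s / phi))  # boundary layer vs. chattering
        u = rdd - lam * ed - K * sat      # equivalent + switching control
        v += u * dt                       # explicit Euler integration
        x += v * dt
    return abs(x - math.sin(T))           # final tracking error

print(simulate())
```

Inside the boundary layer the surface dynamics reduce to s' = -(K/phi)s, so the error decays with time constant 1/lam once the surface is reached.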
 
The square root characteristic commonly used to model the flow through hydraulic orifices may cause numerical problems because the derivative of the flow with respect to the pressure drop tends to infinity as the pressure drop approaches zero. Moreover, for small pressure drops it is more reasonable to assume that the flow depends linearly on the pressure drop. The paper starts from an approximation, given by Merritt, of the measured characteristic of the discharge coefficient versus the square root of the Reynolds number, and proposes a single empirical flow formula that provides a linear relation for small pressure differences and the conventional square root law for turbulent conditions. The transition from the laminar to the turbulent region is smooth. Since the slope of the characteristic is finite at zero pressure difference, numerical difficulties are avoided. The formula comprises terms and parameters with a physical meaning. The proposed orifice model has been used in a bond graph model of a sample hydraulic circuit, and simulation results have proved to be accurate. The orifice model is easily implemented as a library model in a modern modeling language, and can ultimately be adapted to approximate pipe flow losses as well.
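One simple way to get the behaviour described (linear near zero, square-root law for large pressure drops, finite slope at the origin) is the regularization sketched below. This is a hedged stand-in, not the paper's exact formula; K and the transition pressure p_c are invented parameters.

```python
import math

# Smooth orifice characteristic (illustrative, not the paper's formula):
#   Q = K * dp / (dp^2 + p_c^2)^(1/4)
# For |dp| >> p_c this tends to K*sign(dp)*sqrt(|dp|) (square-root law);
# for |dp| << p_c it is linear with finite slope K/sqrt(p_c).
def orifice_flow(dp, K=1.0, p_c=1e4):
    return K * dp / (dp * dp + p_c * p_c) ** 0.25

# Large-dp limit approaches the conventional square-root law:
print(orifice_flow(1e8) / math.sqrt(1e8))   # close to 1
# Finite slope at zero avoids the infinite-derivative problem:
print(orifice_flow(1e-6) / 1e-6)            # close to K/sqrt(p_c) = 0.01
```

Because the slope at dp = 0 is finite, an ODE solver no longer sees an unbounded Jacobian entry at small pressure differences.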
 
Allocating submeshes to jobs in mesh-connected multicomputers in an FCFS fashion can lead to poor system performance (e.g., long job waiting delays) because the job at the head of the waiting queue can prevent the allocation of free submeshes to other waiting jobs with smaller submesh requirements. However, serving jobs aggressively out of order can lead to excessive waiting delays for jobs with large allocation requests. In this paper, we propose a scheduling scheme that uses a window of consecutive jobs from which it selects jobs for allocation and execution. This window starts with the current oldest waiting job and corresponds to the lookahead of the scheduler. The performance of the proposed window-based scheme has been compared to that of FCFS and other previous job scheduling schemes. Extensive simulation results based on synthetic workloads and real workload traces indicate that the new scheduling strategy exhibits good performance when the scheduling window size is large. In particular, it is substantially superior to FCFS in terms of system utilization, average job turnaround times, and maximum waiting delays under medium to heavy system loads. It is also superior to aggressive out-of-order scheduling in terms of maximum job waiting delays. Window-based job scheduling can thus improve both overall system performance and fairness (i.e., maximum job waiting delays) by adopting large lookahead scheduling windows.
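The core window idea can be sketched in a few lines. The code below is a deliberate simplification (assumed: a single pool of free processors rather than 2-D submesh allocation) just to show how a window generalizes FCFS.

```python
# Toy sketch of window-based job scheduling. A job is (size, name).
# window=1 degenerates to FCFS; larger windows let small jobs bypass
# a large blocked job at the queue head.
def schedule_step(queue, free, window):
    """Scan the first `window` waiting jobs; start every one that fits."""
    started = []
    for job in queue[:window]:           # iterate over a snapshot
        size, _ = job
        if size <= free:
            queue.remove(job)
            started.append(job)
            free -= size
    return started, free

jobs = [(8, 'A'), (3, 'B'), (2, 'C'), (5, 'D')]
# FCFS (window=1): job A (size 8) blocks everything on 6 free processors.
print(schedule_step(list(jobs), 6, 1))   # ([], 6)
# Window of 4: B and C can bypass A and run.
print(schedule_step(list(jobs), 6, 4))   # ([(3, 'B'), (2, 'C')], 1)
```

Bounding the lookahead window is what keeps large jobs from starving: job A cannot be bypassed indefinitely once the window no longer contains smaller jobs that fit.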
 
This paper presents an application of Bond Graphs in physiological modelling. In this work, a Bond Graph model is utilised as a boundary condition for a detailed model of an idealized mitral valve. Applications of this type fit within the framework described by the “Virtual Physiological Human” initiative, which supports the integration of physical, mechanical and biochemical models encompassing a range of different length and time scales to obtain predictive models of the human body. Because detailed 3D modelling and simulation is computationally intensive, a 3D computational model of a whole biological system is, by today’s standards, impossible to achieve. Thanks to their inherently coherent multi-physics nature, Bond Graphs are particularly suited to biological applications and can be coupled to both 3D models and lumped parameter models. A specific application in cardiovascular modelling is demonstrated by focusing on a specific example: a 3D model of the mitral valve coupled to a lumped parameter model of the left ventricle.
 
Although simulation is performed in a wide range of disciplines, there has been almost no debate about the practice of simulation across these domains of application. This paper concentrates on two domains of practice, business and military simulation, and identifies three modes of practice: simulation as software engineering, simulation as a process of organisational change, and simulation as facilitation. The facets of each of these modes of practice are described, and the predominant usage of the modes in business and the military is identified. The implications for simulation software suppliers, practitioners, researchers, educators and users are discussed.
 
Originally designed to deal with the hidden node problem, the Request-to-Send/Clear-to-Send (RTS/CTS) exchange is turned off in most infrastructure-based 802.11 networks in the belief that the benefit it brings may not even offset its transmission overhead. While this is often true for networks using a fixed transmission rate, our investigation leads to the opposite conclusion when multiple transmission rates are exploited in WLANs. Through extensive simulations using realistic channel propagation and reception models, we found that in a heavily loaded multi-rate WLAN, a situation that we call rate avalanche often arises if RTS/CTS is turned off: high collision rates not only lead to retransmissions but also drive the nodes to switch to lower data rates; the retransmissions and the longer channel occupation caused by the lower rates further intensify the channel contention, which yields more collisions. This vicious circle can significantly degrade network performance even when no hidden node is present. Our investigation also reveals that, in the absence of effective and practical loss differentiation mechanisms, simply turning on RTS/CTS can effectively suppress the rate avalanche effect. Various scenarios and conditions are examined to study the impact of RTS/CTS on network performance. Our study provides important insights into using the RTS/CTS exchange in multi-rate 802.11 WLANs.
 
We propose a new method to recognize a user’s activities of daily living with accelerometers and an RFID sensor. Two wireless accelerometers are used to classify five human body states using a decision tree, and detection of RFID-tagged objects during hand movements provides additional instrumental activity information. In addition, we apply our activity recognition module to a health monitoring system. We derive linear regressions for each activity by finding the correlations between the readings of the attached accelerometers and the expended calories calculated from a gas exchange analyzer under different activities. As a result, we can predict the expended calories more efficiently with only the accelerometer sensors, depending on the recognized activity. We implement the proposed health monitoring module on smart phones for practical use.
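The per-activity regression step can be illustrated with a minimal least-squares fit. All numbers below are invented for illustration; the paper's actual coefficients come from gas-exchange measurements.

```python
# Hedged sketch: one linear regression per recognized activity, mapping a
# mean accelerometer magnitude to expended calories. The training data and
# activity names are fabricated for the demo.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return b, my - b * mx            # (slope, intercept)

# One regression per activity recognized by the decision tree.
models = {
    'walking': fit_line([1.0, 1.5, 2.0], [3.0, 4.0, 5.0]),
    'running': fit_line([2.0, 3.0, 4.0], [8.0, 11.0, 14.0]),
}

def predict_calories(activity, accel_mag):
    b, a = models[activity]
    return b * accel_mag + a

print(predict_calories('walking', 1.5))   # 4.0
```

Selecting the regression by the recognized activity is what lets a single accelerometer feature stand in for the full gas-exchange measurement.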
 
Using system dynamics models and methods, in this paper we suggest a feedback representation of the ecological theory of organizational inertia and change. The paper pursues two main objectives related to the representation and specification of organizational theories. The first is to identify and specify dynamic elements that are left implicit in the original theoretical narrative. The second is to explore conceptual connections between core features of ecological and evolutionary theories of organizations that are typically believed to lead to incommensurable empirical models. We perform a series of simple simulation experiments to explore the behavioral consequences of our representations and identify issues that future research on the dynamics of organizations may help to clarify. The main insight offered by our model-based exploration is that organizational inertia (the tendency of formal organizations to resist change) and organizational capabilities (the ability of organizations to innovate and reconfigure their internal resources) should be represented as paired concepts, each understandable only in terms of the other.
 
In-memory (transactional) data stores are recognized as a first-class data management technology for cloud platforms, thanks to their ability to match the elasticity requirements imposed by the pay-as-you-go cost model. On the other hand, determining the appropriate number of cache servers to deploy, and the degree of in-memory replication of slices of data, in order to optimize reliability/availability and performance tradeoffs, is far from trivial. Yet it is an essential aspect of the provisioning process of cloud platforms, given that it affects how well cloud resources are actually exploited. To cope with the issue of determining optimized configurations of cloud in-memory data stores, in this article we present a flexible simulation framework offering skeleton simulation models that can be easily specialized to capture the dynamics of diverse data grid systems, such as those related to the specific protocol used to provide data consistency and/or transactional guarantees. Beyond its flexibility, another peculiar aspect of the framework is that it integrates simulation and machine-learning (black-box) techniques, the latter being used to capture the dynamics of the data-exchange layer (e.g. the message passing layer) across the cache servers. This is relevant because the actual data-transport/networking infrastructure on top of which the data grid is deployed might be unknown, and hence cannot be modeled via white-box (namely, purely simulative) approaches. We also provide an extended experimental study aimed at validating instances of simulation models supported by our framework against the execution dynamics of real data grid systems deployed on top of either private or public cloud infrastructures.
 
This paper presents a review of simulation models developed for a wireless data acquisition system. The system reads analogue information provided by two sensors used for medical purposes. The real data, recorded by pH and pressure sensors used in diagnosing conditions of the esophagus, are employed to examine the system performance. The created model contains four main simulated units built in SIMULINK. The first unit encodes the sensor output signal into a digital signal by adapting one of the pulse code modulation (PCM) algorithms. The second unit simulates the processor function responsible for framing, mixing and compressing the incoming bit streams from both sensors. The third unit, where the digital data are modulated and sent through different noisy channels, represents an efficient FSK transmitter/receiver model. At the receiver end, the signal is demodulated and processed inversely to extract the original analogue signal read by the two sensors. In this work, the performance of the system using different PCM methods is studied comparatively in order to control the transmission and reduce the number of data frames sent, leading to a significant reduction in power consumption. In addition, the performance of the RF unit over an additive white Gaussian noise (AWGN) channel is examined by estimating the average bit error rate (BER) for different carrier frequencies. The effects of multipath fading, in-band/out-of-band interference, and adjacent channel power ratio (ACPR) have also been investigated during system assessment.
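BER estimation over an AWGN channel is typically done by Monte-Carlo counting of bit errors, as in the sketch below. For simplicity this uses BPSK rather than the FSK scheme of the abstract, and the Eb/N0 value and bit count are arbitrary choices.

```python
import random, math

# Monte-Carlo BER estimate for BPSK over AWGN (an illustrative stand-in
# for the FSK link described in the abstract).
def estimate_ber(ebn0_db=0.0, nbits=50000, seed=1):
    rng = random.Random(seed)
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = math.sqrt(1 / (2 * ebn0))   # noise std for unit-energy bits
    errors = 0
    for _ in range(nbits):
        bit = rng.randint(0, 1)
        tx = 1.0 if bit else -1.0       # antipodal mapping
        rx = tx + rng.gauss(0.0, sigma) # AWGN channel
        if (rx > 0) != (bit == 1):      # threshold detector
            errors += 1
    return errors / nbits

print(estimate_ber())   # theory for 0 dB: Q(sqrt(2)) ~ 0.079
```

Sweeping `ebn0_db` (or, with a carrier model, the carrier frequency) and plotting the returned BER reproduces the kind of link-quality curves used in the system assessment.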
 
A sliding-mode controller, developed to drive a two-dimensional, 10-segment human locomotor system model to track kinematic gait samples, is described. Motion tracking by the controller is performed to an accuracy typically within one standard deviation of the mean for the 10 data samples tested, with the driving moments typically within two standard deviations of the mean calculated for the sampled gait trajectories. The sliding-mode approach also generates antagonistic muscle activity at periods when joint angle is particularly sensitive to joint moment, producing results comparable to sampled electromyogram readings.
 
This article describes an application of discrete-event simulation to study logistics activities in a chemical plant. Most chemical production involves continuous flow of materials, such as liquid, gas or solid through the manufacturing and logistics processes. Some simulation issues in this area are conceptualizing production operations for simulation, discretization of continuous processes and building adequate level of detail in the models. The purpose of this study is to determine the required capacity of logistics operations to allow continuous operations of a chemical manufacturing plant. The application has been used to provide critical decision support. The value of the simulation study is not only the simulation model itself but also the process of building it.
 
This paper describes a multi-agent architecture, based on the actors computational model, for the distributed simulation of discrete event systems whose entities have a complex dynamic behaviour. Complexity is dealt with by exploiting statechart-based actors, which constitute the basic building blocks of a model. Actors are lightweight reactive autonomous agents that communicate with one another by asynchronous message passing. The thread-less character of actors saves memory space and fosters efficient execution. The behaviour of actors is specified through “distilled statecharts” that enable hierarchical and modular specifications. Distributed simulation is achieved by partitioning a system model among a set of logical processes (theatres). Timing management and inter-theatre communications rest, in this case, on High Level Architecture services. The paper illustrates the practical application of the proposed modelling and simulation methodology by specifying and analysing a complex manufacturing system.
 
In this paper, an active vibration control (AVC) scheme incorporating a piezoelectric actuator and self-learning control for a flexible plate structure is presented. The flexible plate system is first modelled and simulated via a finite difference (FD) method. The validity of the obtained model is then investigated by comparing the plate natural frequencies predicted by the model with values reported in the literature. After validating the model, a proportional (P-type) iterative learning (IL) algorithm combined with a feedback controller is applied to the plate dynamics via the FD simulation platform. The algorithms were coded in MATLAB to evaluate the performance of the control system. An optimized value of the learning parameter and an appropriate stopping criterion for the IL algorithm are also proposed. Different types of disturbances were employed to excite the plate system at different excitation points, and the ability of the controller to attenuate the vibration at the observation point was investigated. The simulation results clearly demonstrate the effective vibration suppression capability achieved using a piezoelectric actuator with the incorporated self-learning feedback controller.
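The P-type iterative learning update is simply u_{k+1}(t) = u_k(t) + gamma * e_k(t), repeated over trials. The sketch below applies it to a trivial static plant y = G*u rather than the flexible-plate FD model; G, gamma, and the reference are assumptions for the demo.

```python
# Minimal P-type iterative learning control sketch on a generic static
# plant y = G*u (NOT the paper's flexible-plate model).
def ilc(G=0.5, gamma=1.0, iters=25):
    r = [0.0, 1.0, 0.5, -0.5]           # reference trajectory (4 samples)
    u = [0.0] * len(r)                  # initial input trial
    for _ in range(iters):
        y = [G * ui for ui in u]        # run one trial of the plant
        e = [ri - yi for ri, yi in zip(r, y)]
        # P-type learning update: u_{k+1}(t) = u_k(t) + gamma * e_k(t)
        u = [ui + gamma * ei for ui, ei in zip(u, e)]
    y = [G * ui for ui in u]
    return max(abs(ri - yi) for ri, yi in zip(r, y))

print(ilc())   # error contracts by |1 - gamma*G| = 0.5 per iteration
```

The convergence condition |1 - gamma*G| < 1 is what motivates optimizing the learning parameter: too small a gamma learns slowly, too large a gamma diverges.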
 
This paper addresses the vibration control of an elastic plate, clamped along one side and excited by an impulsive transversal force applied at a free corner. The plate is equipped with three pairs of piezoelectric patches, used as sensors and actuators. A modal model of the coupled electromechanical structure, obtained by employing a suitable finite-element formulation together with a modal reduction, is used in the controller design. Different H2 control laws have been designed and compared by simulation, in order to evaluate the performance obtained using different combinations of sensors and actuators together with models taking into account an increasing number of structural eigenmodes.
 
Ad hoc networks are self-organizing wireless systems formed by cooperating neighboring nodes that build networks with variable topology. Analyzing these networks is a complex task due to their dynamic and irregular nature. Cellular Automata (CA), a very popular technique for studying self-organizing systems, can be used to model and simulate ad hoc networks, as the modeling technique resembles the system being modeled. Cell-DEVS was proposed as an extension to CA in which each cell in the system is considered as a DEVS model. The approach permits defining models with asynchronous behavior, and executing them with high efficiency. We show how these techniques can be used to model mobile wireless ad hoc networks, easing model definition, analysis and visualization of the results. The use of Cell-DEVS permitted us to easily develop new experiments, which allowed us to extend routing techniques for inter-networking and multicast routing, while permitting seamless integration with traditional networking models.
 
The work studies the properties of a coordination game in which agents repeatedly compete to be in the population minority. The game reflects some essential features of those economic situations in which positive rewards are assigned to individuals who behave in opposition to the modal behavior in a population. Here we model a group of heterogeneous agents who adaptively learn, and we investigate the transient and long-run aggregate properties of the system in terms of both allocative and informational efficiency. Our results show that, first, the system's long-run properties strongly depend on the behavioral learning rules adopted, and, second, adding noise at the individual decision level, and hence increasing heterogeneity in the population, substantially improves aggregate welfare, although at the expense of a longer adjustment phase. In fact, the system thereby achieves a higher level of efficiency than is attainable by perfectly rational and completely informed agents.
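The basic setup is a minority game: agents repeatedly pick one of two sides, and those on the less-crowded side win. The sketch below uses a deliberately crude probability-nudging rule (not the learning rules studied in the paper) just to make the game mechanics concrete.

```python
import random

# Bare-bones minority game: N agents pick side 0 or 1 each round; the
# minority side wins. Each agent nudges its choice probability toward the
# last winning side (a crude stand-in for the paper's learning rules).
def minority_game(n_agents=101, rounds=200, lr=0.1, seed=7):
    rng = random.Random(seed)
    p = [0.5] * n_agents                 # probability of choosing side 1
    minority_sizes = []
    for _ in range(rounds):
        choices = [1 if rng.random() < pi else 0 for pi in p]
        ones = sum(choices)
        winner = 1 if ones < n_agents - ones else 0
        minority_sizes.append(min(ones, n_agents - ones))
        p = [min(1.0, pi + lr) if winner else max(0.0, pi - lr)
             for pi in p]
    return minority_sizes

sizes = minority_game()
print(sum(sizes) / len(sizes))   # average minority size (at most N//2)
```

With this naive herding rule the population oscillates; allocative efficiency is usually measured by how close the average minority size stays to N/2, which is where heterogeneity and individual-level noise help.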
 
Fault-tolerance in a communication network is defined as the ability of the network to effectively utilize its redundancy in the presence of faulty components (i.e., nodes or links). New technologies of integration now enable the design of computing systems with hundreds and even thousands of independent processing elements which can cooperate on the solution of the same problem for a corresponding improvement in the execution time. However, as the number of processing units increases, concerns for reliability and continued operation of the system in the presence of failures must be addressed. Adaptive routing algorithms have been frequently suggested as a means of improving communication performance in large-scale massively parallel computers, Multiprocessor Systems-on-Chip (MP-SoCs), and peer-to-peer communication networks. Before such schemes can be successfully incorporated in networks, it is necessary to have a clear understanding of the factors which affect their performance potential. This paper proposes a novel analytical model to investigate the performance of five prominent adaptive routing algorithms in wormhole-switched 2-D tori fortified with an effective scheme suggested by Chalasani and Boppana [S. Chalasani, R.V. Boppana, Adaptive wormhole routing in tori with faults, IEE Proc. Comput. Digit. Tech. 42(6) (1995) 386–394], as an instance of a fault-tolerant method widely used in the literature to achieve high adaptivity and support inter-processor communications in parallel computers. Analytical approximations of the model are confirmed by comparing them with results obtained through simulation experiments.
 
This research presents a teleonomic simulation approach to virtual plants, integrating intelligent agent technology with knowledge of plant physiology and morphology. The plant is represented as individual metamer and root agents with both functional and geometrical structure. Plant development is achieved through the flush growth of metamer and root agents, controlled by their internal physiological status and the external environment. Simulation results for eggplant show that simple rules and actions executed by the agents (internal carbon allocation among organs, dynamic carbon reserve/mobilization, carbon transport in parallel using a discrete pressure-flow paradigm, child agent position choosing for maximum light interception, etc.) can produce complex adaptive behaviors at the whole-plant level: carbon partitioning among metamers and roots, carbon reserve dynamics, architecture and biomass adaptation to environmental heterogeneity, phototropism, etc. This demonstrates that the virtual plant simulated with the presented approach can be viewed as a complex adaptive system.
 
Resource allocation between exploration of emerging technological possibilities and exploitation of known technological possibilities involves a delicate trade-off. We develop a model to represent this trade-off under the time-pressing situation where the firm’s existing basis of survival is constantly challenged by competitors’ innovation and imitation. We examine how the employment of an adaptive rule improves the balance between exploration and exploitation. Simulation experiments show that an adaptively rational decision rule, i.e., a step-by-step exploration of unknown opportunities based on feedback on returns, is more likely to increase firm survival under diverse conditions than an all-or-nothing approach to the unknown opportunities. Furthermore, our study suggests that the adaptively rational rule protects the firm from excessive loss, while its potential pay-off remains unbounded above.
 
Ambient intelligence refers to environments that are sensitive and responsive to the presence of people thanks to the integration of computer systems. A particular aim of this kind of system is to enhance the everyday experience of people moving inside the related physical environment according to the narrative description of a designer’s desiderata. In this kind of situation computer simulation represents a useful way to envision the behaviour of the responsive environments that are being designed, without actually bringing them into existence in the real world, in order to evaluate their adherence to the designer’s specification. This paper describes two different approaches, respectively based on cellular automata and autonomous agents, to the realization of a self-organization model for an adaptive illumination facility, a physical environment endowed with a set of sensors that perceive the presence of humans (or other entities such as dogs, bicycles, cars) and interact with a set of actuators (lights) that coordinate their state to adapt the ambient illumination to the presence and behaviours of its users. Computer simulation is employed to evaluate the adequacy and feasibility of the approaches in the above scenario.
 
Mobile units face many constraints on reliable communication in today’s mobile environments. Unlike in wired networks, mobility in mobile networks induces frequent route changes. Energy conservation is an important issue that has to be taken into account for mobile devices. Every infrastructureless network must be adaptively self-configured, particularly in terms of energy, connectivity, resource allocation and memory. Efficient utilization of battery power is important for wireless users because, due to their movements, their energy fluctuates at different levels during operation. Traffic plays a major role in energy consumption because of the unpredictable nature of incoming flows. Generally, the minimum transmission power required to keep the network connected achieves the optimal throughput performance in a network with dynamically changing topology. In this paper an adaptive traffic-based control method for energy conservation is described and examined, which bounds an asynchronous operation in which each node evaluates dissimilar sleep-wake schedules/states based on its incoming traffic history. A simulation study is carried out to evaluate the throughput, traffic characterization against performance, and energy conservation of the proposed model, taking into account a number of metrics and estimating the effects of increasing the sleep time duration to conserve energy. The proposed method can be applied to infrastructureless networks with any underlying routing protocol to provide independence, portability, and “fair” collaboration between the energy conservation mechanism and the routing protocol.
 
Modeling and simulation of biochemical systems are important tasks because they can provide insights into complicated systems where traditional experimentation is expensive or impossible. Stochastic Hybrid Systems (SHS) are an ideal modeling paradigm for biochemical systems because they combine continuous and discrete dynamics in a stochastic framework. In this work we develop an advanced simulation method for SHS that explicitly considers switching and reflective boundaries and uses probabilistic crossing detection methods to improve accuracy. We also develop an adaptive time stepping algorithm for SHS to improve efficiency. We present case studies for a water/electrolyte balance system in humans and a biodiesel production model. Simulation results are presented to demonstrate the accuracy and efficiency of the improved simulation techniques.
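The adaptive time stepping idea can be illustrated with step-doubling error control on a simple deterministic ODE. This is only a skeleton of the continuous-dynamics part; the paper's SHS simulator additionally handles stochastic transitions and switching/reflective boundaries.

```python
import math

# Step-doubling adaptive Euler sketch for dy/dt = -y (illustrative only).
# The local error is estimated by comparing one step of size dt against
# two steps of size dt/2; dt shrinks on rejection and grows when cheap.
def adaptive_euler(y0=1.0, t_end=1.0, dt=0.1, tol=1e-4):
    f = lambda y: -y
    t, y = 0.0, y0
    while t < t_end:
        dt = min(dt, t_end - t)
        y_full = y + dt * f(y)                  # one step of size dt
        y_half = y + dt / 2 * f(y)              # two steps of size dt/2
        y_half = y_half + dt / 2 * f(y_half)
        err = abs(y_full - y_half)              # local error estimate
        if err > tol:
            dt /= 2                             # reject step: shrink
            continue
        t, y = t + dt, y_half                   # accept finer solution
        if err < tol / 4:
            dt *= 2                             # step was cheap: grow
    return y

print(abs(adaptive_euler() - math.exp(-1.0)))  # small global error
```

In an SHS setting the same accept/reject loop is combined with event detection, so that a boundary crossing inside a step forces the step to be cut back to the crossing time.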
 
The specification of the Multicast Address Dynamic Client Allocation Protocol (MADCAP) includes a mechanism to assign multicast addresses to hosts in the Internet environment. MADCAP servers provide multicast address allocation services. A MADCAP client (that is, a host requesting multicast address allocation services via MADCAP) identifies a suitable MADCAP server and sends appropriate messages to it in order to request a multicast address. The mechanism enables the MADCAP server to reuse multicast addresses through the application of the lease-time concept. Estimating the performance measures of such a system accurately and reliably is very important for efficient use of resources and for achieving QoS (Quality of Service) requirements.
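The lease-time concept that enables address reuse can be sketched as follows. The class and method names are invented for illustration; the actual protocol messages and semantics are specified in RFC 2730.

```python
# Toy sketch of lease-based multicast address allocation in the spirit of
# MADCAP (invented API; real message formats are defined by the protocol).
class LeasePool:
    def __init__(self, addresses):
        self.free = list(addresses)
        self.leases = {}                     # address -> expiry time

    def allocate(self, client, now, lease_time):
        self._reclaim(now)                   # reuse expired addresses
        if not self.free:
            return None
        addr = self.free.pop(0)
        self.leases[addr] = now + lease_time
        return addr

    def _reclaim(self, now):
        for addr, expiry in list(self.leases.items()):
            if expiry <= now:                # lease ran out: recycle
                del self.leases[addr]
                self.free.append(addr)

pool = LeasePool(['239.0.0.1', '239.0.0.2'])
pool.allocate('c1', now=0, lease_time=10)
pool.allocate('c2', now=0, lease_time=10)
print(pool.allocate('c3', now=5, lease_time=10))   # None: pool exhausted
print(pool.allocate('c3', now=11, lease_time=10))  # a lease expired: reuse
```

Performance measures such as blocking probability then follow from how request arrivals interact with the lease-time distribution and the pool size.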
 
This paper addresses the problem of verifying the discrete control logic that is typically implemented by programmable controllers. Not only are the logical properties of the controller studied during verification, the behaviour of the overall controlled system is also examined. An approach that combines the calculation of the safety-oriented interlock controllers in terms of supervisory control theory (SCT), the corresponding calculation of the admissible behaviour of the system, and the specification of the desired system operation by Petri nets is proposed. A potential deadlock in the controlled system is then verified by taking the admissible-behaviour model as a process model. The analysis of the simultaneously operated supervisory-control-based interlock controller and the Petri-net-based sequential controller is performed with a C-reachability graph. The paper focuses on the calculation of the graph, and the approach is illustrated with an example of a simple manufacturing cell.
 
In this work the human factor is explored by means of agent-based simulation and analyzed in the framework of a reputation management system (RMS) within a peer-to-peer (P2P) network. Reputation is about evaluating an agent’s actions and other agents’ opinions about those actions, reporting on those actions and opinions, and reacting to that report, thus creating a feedback loop. This social mechanism has been successfully used to classify agents within normative systems. Such systems rely on the feedback given by the members of the social network in which the RMS operates. Reputation can thus be seen as an endogenous and self-produced indicator, created by the users for the users’ benefit. This implies that user participation and collaboration is a key factor for the effectiveness of an RMS.
 
Networks are exploding in scale while their costs are plummeting, and as new applications are rapidly deployed to consume these vast new networking resources, security is becoming of paramount importance. Policy-based Internet management approaches are moving closer to maturity. Configuring a large number of routers, bridges, or servers using generic rules instead of individual configurations is less complex, less error-prone and more flexible. This paper describes the design and modeling of network security agents based on a policy-based framework, which has some inherent merits. The need arises for systems to coordinate with one another in order to manage the range of malicious attacks that can occur across networks at any time. We modeled network components that include an intrusion detection system, a firewall, single sign-on technology, and the policy-based framework itself. Authentication for network system access is achieved using single sign-on technology. We present a modeling methodology for network security agents, which are identified as a component of policy-based network management. Each component is implemented as a hybrid design utilizing modeling concepts from the Discrete EVent system Specification (DEVS) formalism and problem-solving concepts from the BlackBoard Architecture (BBA) of Artificial Intelligence (AI).
 
Agent-based simulation models can effectively represent decentralized systems. However, many supply-chains are order-driven, and agent modeling cannot effectively represent the order life-cycle. We present a conceptual architecture that combines simulation formalisms, allowing an agent representation of the supply-chain infrastructure while enabling a process-oriented approach to representing orders. This architecture allows for a natural, realistic representation of different supply-chain constructs and subsystems while following a consistent overall viewpoint. Our approach provides for excellent representation of supply-chain operations, allows for very detailed operational data to be gathered, and provides efficient representation of concurrent supply-chain activities in a manner that avoids preemption.
 
Our research concerns multi-agent and object-oriented modeling and simulation of complex systems. We are particularly interested in systems in which spatial and temporal components make up a large part of the system to be modeled (for example, ecosystems or production systems). Within the framework of this article, we focus on flexible production systems. The simulation of complex systems generally requires the integration and coupling of heterogeneous models (multi-agent, mathematical, and so on). This heterogeneity is a consequence of the diversity of the disciplines and abilities of the designers. The approach that we develop consists in the development of “virtual laboratories”. Our platform, the “virtual laboratory environment” (VLE), enables us to specify, simulate and analyze spatial complex systems. VLE is based on the concepts of reactive agents, objects, and spatial and temporal multi-scale systems.
 
Using multi-agent models to study social systems has attracted criticism because of the challenges involved in their validation. Common criticisms that we have encountered are described, and for each one we attempt to give a balanced perspective. A model of intra-state conflict is used to help demonstrate these points. We conclude that multi-agent models of social systems are most useful when (1) the connection between micro-behaviors and macro-behaviors is not well understood and (2) data collection from the real-world system is prohibitively expensive in terms of time or money, or puts human lives at risk.
 
The aim of the research presented in this paper is to develop a simulation testbed to support businesses in their decision-making about the potential use of electronic matching mechanisms. Matching mechanisms are used to match supply and demand of independent selling and buying organizations, each having their own goals, requirements and interests. The autonomous characteristics of agent-based systems and process orientation of discrete-event simulation are combined in our agent-based simulation testbed. In this way both the autonomous trading behavior of organizations and their business processes can be simulated. Several pre-defined components containing the behavior of various kinds of matching mechanism have been developed. These components can be further customized to model an empirical situation more closely. The approach is illustrated with a case study in the computer market.
 
In a previous paper we generated animated agents and their behavior using a combination of XML and images. The behavior of agents was specified as a finite state machine (FSM) in XML. We used images to determine properties of the world that agents react to. While this approach is very flexible, it can be made much faster by using the power available in modern GPUs. In this paper we implement FSMs as fragment shaders using three kinds of images: world space images, agent space images and FSM table images. We show a simple example and compare performance of CPU and GPU implementations. Then we examine a more complex example involving more maps and two types of agents (predator–prey). Furthermore we explore how to render agents in 3D more efficiently by using a variation on pseudoinstancing.
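The table-driven FSM idea can be sketched on the CPU as a plain array lookup; the GPU version performs the same lookup as a texture fetch in a fragment shader. The state names and transitions below are illustrative assumptions in the spirit of the predator–prey example, not the paper's actual tables:

```python
# Illustrative CPU version of a table-driven agent FSM.
# fsm_table[state][percept] -> next state; the percept index is what the
# agent would compute by sampling the world-space image.

WANDER, CHASE, FLEE = 0, 1, 2
SEES_NOTHING, SEES_PREY, SEES_PREDATOR = 0, 1, 2

fsm_table = [
    # SEES_NOTHING, SEES_PREY, SEES_PREDATOR
    [WANDER, CHASE, FLEE],   # from WANDER
    [WANDER, CHASE, FLEE],   # from CHASE
    [WANDER, FLEE,  FLEE],   # from FLEE
]

def step(state, percept):
    """One FSM transition: exactly the table lookup a shader would do."""
    return fsm_table[state][percept]
```

On the GPU, `fsm_table` becomes the FSM table image and `state`/`percept` become texture coordinates, so every agent transitions in parallel per fragment.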
 
This paper describes a multi-agent based simulation (MABS) framework to construct an artificial electric power market populated with learning agents. The artificial market, named TEMMAS (The Electricity Market Multi-Agent Simulator), explores the integration of two design constructs: (i) the specification of the environmental physical market properties and (ii) the specification of the decision-making (deliberative) and reactive agents. TEMMAS is materialized in an experimental setup involving distinct power generator companies that operate in the market and search for the trading strategies that best exploit their generating units’ resources. The experimental results show a coherent market behavior that emerges from the overall simulated environment.
 
[Figures: logical structure of a SIM_AGENT cycle; object class publications and subscriptions in the Tileworld Federation; a screen shot of SIM_TILEWORLD; an example FOM for SIM_TILEWORLD.]
Over the past decade, there has been a growing interest in utilising intelligent agents in computer games and virtual environments. At the same time, computer game research and development has increasingly drawn on technologies and techniques originally developed in the large scale distributed simulation community, such as the HLA IEEE standard for simulator interoperability. In this paper, we address a central issue for HLA-based games, namely the development of HLA-compliant game agents. We present hla_agent, an HLA-compliant version of the sim_agent toolkit for building cognitively rich agents. We outline the changes necessary to the sim_agent toolkit to allow integration with the HLA, and show that, given certain reasonable assumptions, all necessary code can be generated automatically from the FOM and the object class publications and subscriptions. The integration is transparent in the sense that the existing sim_agent code runs unmodified and the agents are unaware that other parts of the system are running remotely. We present some preliminary performance results, which indicate that the overhead introduced by the HLA extension is modest even for lightweight agents with limited computational requirements.
 
In trading networks many elements determine the success of the network. They can be economic, social, personal, structural, environmental, etc. Many simulation frameworks consider only one of these elements. However, we argue that it is exactly the interaction between the different types of elements that is interesting when considering the mediation of business processes. Whether a mediator has a right to exist depends not just on the quality of his service, but also on the social structure between suppliers and users, the communication infrastructure, etc. In this paper we propose an agent-based simulation framework in which this type of situation can be studied, and we show an example of its use in a simulation of the housing market.
 
Stochastic events, such as rush orders, stock-out events, and local failures, have an important impact on the performance of distributed production, but they are difficult to anticipate and account for when scheduling production activities. Process statistics and artificial intelligence techniques can provide this knowledge to effectively time synchronization events among the simulation and scheduling federates of the same distributed architecture. Measurable benefits include reduced communication delays and thus improved responsiveness of the system to changes in production and new scheduling needs as they arise. Comparative results on the productivity of actual industrial systems are presented and discussed in the paper.
 
Heart auscultation (the interpretation by a physician of heart sounds) is a fundamental component of cardiac diagnosis. It is, however, a difficult skill to acquire. In this work, we develop a simple model for the production of heart sounds, and demonstrate its utility in identifying features useful in diagnosis. We then present a prototype system intended to aid in heart sound analysis. Based on a wavelet decomposition of the sounds and a neural network-based classifier, heart sounds are associated with likely underlying pathologies. Preliminary results promise a system that is both accurate and robust, while remaining simple enough to be implemented at low cost.
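The simplest instance of the kind of wavelet decomposition such a classifier builds on is a single Haar step, which splits a signal into low-frequency (approximation) and high-frequency (detail) coefficients. The sketch below is illustrative only; the abstract does not specify which wavelet family the system actually uses:

```python
# One level of a Haar wavelet decomposition (illustrative feature extraction).

def haar_step(signal):
    """Split an even-length signal into approximation and detail coefficients."""
    approx = [(signal[2*i] + signal[2*i + 1]) / 2 for i in range(len(signal) // 2)]
    detail = [(signal[2*i] - signal[2*i + 1]) / 2 for i in range(len(signal) // 2)]
    return approx, detail
```

Applying `haar_step` recursively to the approximation gives a multi-level decomposition whose coefficients can serve as inputs to a neural-network classifier.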
 
Bulk maize cobs were stored in cribs in Barisal, Bangladesh in 1998 with an initial moisture content of 38% (db) and in 1999 with an initial moisture content of 40% (db), and were dried by natural air to 16.10% and 18.32% respectively. The observed air temperature and grain temperature were found to be almost the same. A mathematical model was developed to simulate the drying of maize in the crib; the model consists of three sets of partial differential equations: a mass balance equation, a drying rate equation and an energy balance equation. The equations were solved by numerical techniques with respect to time and position, the width of the crib being treated as a series of thin layers. The model was validated against the experimental data. Good agreement was found between the simulated and measured temperature and moisture content in both 1998 and 1999.
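A drastically simplified stand-in for the drying-rate component, a single thin layer following the classical Lewis equation dM/dt = -k(M - Me), can be integrated numerically as follows. The values of k and Me here are assumed for illustration, not the fitted parameters of the study:

```python
import math

# Thin-layer drying sketch (Lewis model dM/dt = -k(M - Me)); a simplified
# stand-in for the paper's coupled PDE system, not the actual model.

def dry(m0, me, k, t_end, dt=0.1):
    """Explicit-Euler integration of the thin-layer drying equation."""
    m, t = m0, 0.0
    while t < t_end:
        m += -k * (m - me) * dt
        t += dt
    return m

def dry_exact(m0, me, k, t):
    """Analytic solution M(t) = Me + (M0 - Me) * exp(-k t), for comparison."""
    return me + (m0 - me) * math.exp(-k * t)
```

In a crib model, one such layer would be solved per position across the crib width, with the exhaust air of one layer becoming the inlet air of the next.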
 
Dynamic models of the heating, ventilation and air-conditioning (HVAC) systems in a building are very useful for controller design, commissioning, and fault detection and diagnosis. Different applications place different requirements on the models, and different modeling approaches can be applied. Mathematical modeling with two different approaches, block-wise Simulink and bond graph, is discussed. The advantages and disadvantages of both approaches are presented. It is shown that combining the two approaches to realize complicated models of building HVAC systems for model-based fault detection and diagnosis is a good solution.
 
Commercial air transport has been growing steadily over the last decades, and demand often exceeds the capacity of the air traffic system. Simulation facilities are required in order to improve the performance and usability of assistance systems aimed at air traffic optimisation and to study the interaction of human operators with their working environment. Unlike training simulators, simulation environments for research and development purposes must comply with continuously changing requirements. The simulator must be flexible and scalable to permit adaptation to the specific goals of a simulation. To guarantee successful work in the future, simulation equipment must be capable of participating in distributed simulations. Since the Apron and Tower Simulator of the DLR Institute of Flight Guidance became operational in 1998, a number of different projects have demanded this flexibility.
 
In this article, (max,+) spectral theory results are applied to solve the problem of sizing a real-time constrained plant. The process to be controlled is a discrete event dynamic system without conflict. It can therefore be modeled by a timed event graph, a class of Petri net whose behavior can be described by linear equations in the (max,+) algebra. First, the sizing of the process without constraints is solved. Then we design a simulation model of the plant to validate the sizing of the process.
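The (max,+)-linear description of a timed event graph takes the form x(k) = A ⊗ x(k-1), where x(k) collects the k-th firing times of the transitions and A holds the place holding times. A minimal sketch of this recursion follows (matrix values are illustrative, not taken from the paper's plant):

```python
# Max-plus linear recursion x(k) = A (x) x(k-1) for a timed event graph.
NEG_INF = float("-inf")  # the (max,+) "zero" element

def maxplus_matvec(A, x):
    """y_i = max_j (A_ij + x_j): the (max,+) matrix-vector product."""
    return [max(a + b for a, b in zip(row, x)) for row in A]

# A_ij = holding time on the place from transition j to transition i
A = [[3, 7],
     [2, 4]]
x = [0, 0]                 # initial firing times
for _ in range(3):         # three successive firing epochs
    x = maxplus_matvec(A, x)
```

The asymptotic growth rate of x(k) is the (max,+) eigenvalue of A, i.e. the cycle time of the graph, which is the quantity spectral theory provides for sizing.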
 
With the overall goal of optimizing the design and operation of steam boilers, a model for optimizing dynamic performance has been developed. The model consists of three sub-models that are integrated into an overall model of the complete boiler. Each of the sub-models consists of a number of differential equations and algebraic equations, a so-called DAE system. Two of the DAE systems are of index 1 and can be solved by means of standard DAE solvers. For the present application, the equation systems are integrated by MATLAB's solver ode23t, which handles moderately stiff ODEs and index-1 DAEs using the trapezoidal rule. The last sub-model, which models the boiler's steam drum, consists of two differential and three algebraic equations. The index of this model is greater than 1, which means that ode23t cannot integrate this equation system. In this paper, it is shown how the equation system can, by means of an index reduction methodology, be reduced to a system of ordinary differential equations (ODEs).
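The index-reduction idea can be shown on a textbook example (this is not the boiler model). The index-2 DAE x'(t) = y(t), 0 = x(t) - sin(t) is not solvable by an index-1 integrator such as ode23t; differentiating the algebraic constraint once gives y(t) = cos(t), leaving the plain ODE x'(t) = cos(t) with x(0) = 0, which any standard integrator handles:

```python
import math

# Illustrative index reduction: the index-2 DAE
#   x'(t) = y(t),   0 = x(t) - sin(t)
# becomes, after differentiating the constraint once,
#   x'(t) = cos(t),  x(0) = 0,   with  y(t) = cos(t) recovered afterwards.

def solve_reduced(t_end, dt=1e-3):
    """Explicit-Euler integration of the reduced ODE x' = cos(t)."""
    x, t = 0.0, 0.0
    while t < t_end:
        x += math.cos(t) * dt
        t += dt
    return x
```

The exact solution is x(t) = sin(t), so the reduced system reproduces the constrained trajectory while the algebraic variable y is recovered from the differentiated constraint.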
 
This paper considers a multi-objective genetic algorithm (GA) coupled with discrete event simulation to solve redundancy allocation problems in systems subject to imperfect repairs. In the multi-objective formulation, system availability and cost may be maximized and minimized, respectively; the failure-repair processes of system components are modeled by Generalized Renewal Processes. The presented methodology provides a set of compromise solutions that incorporate not only system configurations, but also the number of maintenance teams. The multi-objective GA is validated via examples with analytical solutions and shows its superior performance when compared to a multi-objective Ant Colony algorithm. Moreover, an application example is presented and a return of investment analysis is suggested to aid the decision maker in choosing a solution of the obtained set.
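At the core of any multi-objective GA selection step is a Pareto-dominance test over the objective vectors. The sketch below uses the abstract's two objectives, availability (maximized) and cost (minimized); it is a generic building block, not the paper's specific GA:

```python
# Pareto dominance for (availability, cost): maximize the first, minimize
# the second. Illustrative building block for multi-objective selection.

def dominates(a, b):
    """a dominates b if a is no worse in both objectives and strictly
    better in at least one."""
    no_worse = a[0] >= b[0] and a[1] <= b[1]
    strictly_better = a[0] > b[0] or a[1] < b[1]
    return no_worse and strictly_better

def pareto_front(solutions):
    """Return the non-dominated subset of a list of objective tuples."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]
```

The GA would rank individuals by such non-dominated sorting and return the final front as the set of compromise solutions offered to the decision maker.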
 
Ordinal optimization has emerged as an efficient technique for simulation and optimization, converging exponentially in many cases. In this paper, we present a new computing budget allocation approach that further enhances the efficiency of ordinal optimization. Our approach intelligently determines the best allocation of simulation trials or samples necessary to maximize the probability of identifying the optimal ordinal solution. We illustrate the approach’s benefits and ease of use by applying it to two electronic circuit design problems. Numerical results indicate the approach yields significant savings in computation time above and beyond the use of ordinal optimization.
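A well-known allocation rule in this spirit is the OCBA rule of Chen et al., which gives each non-best design a share proportional to (σ_i/δ_i)², where δ_i is its gap to the current best, and sizes the best design's share from the others. The sketch below shows that classic rule as an example of intelligent budget allocation; it is not necessarily the exact rule of this paper, and it assumes no exact ties with the best design:

```python
import math

# OCBA-style split of a simulation budget across designs (minimization).
# means/stds are the current sample means and standard deviations.

def ocba_fractions(means, stds):
    """Return the fraction of the remaining budget each design should get."""
    b = min(range(len(means)), key=lambda i: means[i])   # current best design
    ratios = [0.0] * len(means)
    for i in range(len(means)):
        if i != b:
            delta = means[i] - means[b]                  # gap to the best
            ratios[i] = (stds[i] / delta) ** 2
    # N_b = sigma_b * sqrt(sum_{i != b} N_i^2 / sigma_i^2)
    ratios[b] = stds[b] * math.sqrt(
        sum((r / stds[i]) ** 2 for i, r in enumerate(ratios) if i != b))
    total = sum(ratios)
    return [r / total for r in ratios]
```

Designs close to the best (small δ) or noisy (large σ) receive more replications, which is what concentrates the budget where it most improves the probability of correct selection.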
 
In this paper, multi-agent simulation is applied to explore how people organize themselves when they have to perform a task. The multi-agent model that we used is based on the formalization of psychological and organizational theories. Three experiments are presented in which multi-agent simulation is being used to study processes of self-organization. This article is structured as follows. First, we describe how expertise differences and coordination time affect the duration of the task allocation process. Second, we demonstrate how task variety and coordination time are related. Finally, we depict the relation between boredom, performance and task allocation.
 
A resource allocation strategy must offer a robust way to make the best assignment of resources to the network while at the same time enabling an equal share of network resources. This work proposes an active network conceptual model suited to the network resource allocation problem. This constrained active network allocates network resources to each demand, performing capacity allocation and bandwidth reservation. Swarm-based active packets continuously communicate with active nodes using the Split Agent Routing Technique (SART) and inform active nodes about available resources on previously visited nodes and links. This apparatus enables adaptation of the system to new conditions (bandwidth reservation/capacity allocation). It also enables additional information, embodied in the transmitted packets, to be passed to neighboring nodes. A thorough examination is made of the performance of the proposed scheme and the QoS it offers, taking into account a number of performance metrics. Simulation results show that this swarm-based active network scheme offers decentralized control and an efficient way to increase overall performance while enabling optimized throughput.
 
The paper investigates the benefits that SMA behaviour can introduce into the dynamic response of a structural system. The influence of SMA tendons contributing to the overall strength of a simple elastic–plastic structural model undergoing horizontal shaking and subject to vertical loads is first analysed, showing that coupling super-elastic members with elastic–plastic structures yields excellent attenuation of both the P–Δ effect and the final residual deformation. In the second investigation, this system is used as an isolation device and introduced into a m.d.o.f. elastic–plastic structural model; its ability to suppress plastic deformations in the super-structure is demonstrated.
 
The authors describe the development of a modular software suite for application in the nuclear power industry. The approach incorporates nuclear data and fuel cycle architecture into a flexible and dynamic modelling environment. The software suite comprising MEEMS (mass, environment, economical modelling and simulation) and MPR (MEEMS pseudo reactor) was developed specifically to enable investigation of novel fuel cycle operations, such as partitioning and transmutation scenarios. The system has been used to model various nuclear fuel cycle scenarios and preliminary results indicate that the approach is a viable method with good prospects for industrial application.
 
This paper explores the coevolution of social networks and behavioral norms. Previous research has investigated the long-term behavior of feedback systems of attraction and influence, particularly the tendency toward homogenization in arbitrary cultural fields. This paper extends those models by allowing that norms diffuse not only by simple contagion but through intentional sanctioning behavior among peers. Further, the model allows for negative relations, where actors differentiate themselves from enemies while seeking to align themselves with friends. Sociometric maps reveal non-trivial system dynamics—structural bifurcation, discrimination between factions, and cycles of deviance and solidarity—emerging from a few elementary agent-level assumptions.
 
Hybrid analytic and simulation models are used in solving complex problems in a variety of domains, but are less commonly used in production system design. This paper reviews hybrid approaches and their applications, proposes a new hybrid modeling class, and illustrates a cost function for selecting analytic or simulation modeling approaches via a problem solving process. To illustrate the new class, a case study is presented, in which a hybrid analytic and simulation modeling approach is used in designing a multi-stage, multi-buffer electronic device assembly line. Development of a robust integrated modeling support environment is proposed as a future direction.
 