The Journal of Defense Modeling & Simulation

Published by SAGE Publications
Online ISSN: 1548-5129
Article
We reproduce apparently complex cellular automaton behaviour with the simple partial differential equations developed in (Keane 09). Our PDE model easily explains behaviour observed in selected scenarios of the cellular automaton wargame ISAAC without resorting to anthropomorphisation of autonomous 'agents'. The insinuation that agents have a reasoning and planning ability is replaced with a deterministic numerical approximation which encapsulates basic motivational factors and demonstrates a variety of spatial behaviours approximating the mean behaviour of the ISAAC scenarios. All scenarios presented here highlight the dangers of attributing intelligent reasoning to the behaviour shown, when it can be explained quite simply through the effects of the terms in our equations. A continuum of forces is able to behave in a manner similar to a collection of individual autonomous agents, and shows decentralised self-organisation and adaptation of tactics to suit a variety of combat situations.
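A minimal sketch of the kind of continuum model the abstract describes: two force densities on a one-dimensional grid that diffuse, drift up the opponent's concentration gradient, and suffer attrition where the forces mix. The form, boundary conditions and all parameters below are illustrative assumptions, not the equations of (Keane 09).

```python
# Toy 1D "red vs. blue" continuum model: diffusion, drift toward the enemy,
# and attrition where the densities overlap (all coefficients assumed).
import numpy as np

N, dx, dt = 200, 1.0, 0.1
D, kappa, gamma = 0.5, 0.2, 0.01           # diffusion, attraction, attrition
x = np.arange(N) * dx
red = np.exp(-((x - 50.0) / 10.0) ** 2)    # initial red force density
blue = np.exp(-((x - 150.0) / 10.0) ** 2)  # initial blue force density

def step(u, v):
    """One explicit time step for density u against opponent density v."""
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx ** 2       # diffusion
    grad_v = (np.roll(v, -1) - np.roll(v, 1)) / (2 * dx)
    flux = kappa * u * grad_v                        # drift up enemy gradient
    div = (np.roll(flux, -1) - np.roll(flux, 1)) / (2 * dx)
    return u + dt * (D * lap - div - gamma * u * v)  # attrition where mixed

for _ in range(1000):
    red, blue = step(red, blue), step(blue, red)     # simultaneous update
```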
 
Article
A Dense Plasma Focus (DPF) is a pulsed-power machine that electromagnetically accelerates and cylindrically compresses a shocked plasma in a Z-pinch. The pinch results in a brief (about 100 nanosecond) pulse of X-rays and, for some working gases, also a pulse of neutrons. A great deal of experimental research has been done into the physics of DPF reactions, and there exist mathematical models describing their behavior during the different time phases of the reaction. Two of the phases, known as the inverse pinch and the rundown, are approximately governed by magnetohydrodynamics, and there are a number of well-established codes for simulating these phases in two dimensions, or in three dimensions under the assumption of axial symmetry. There has been little success, however, in developing fully three-dimensional simulations. In this work we present three-dimensional simulations of DPF reactions and demonstrate that 3D simulations predict behavior that differs qualitatively and quantitatively from that of their 2D counterparts. One of the most important quantities to predict is the time duration between the formation of the gas shock and the Z-pinch; the 3D simulations more faithfully reproduce experimental results for this duration and are essential for accurate prediction of future experiments.
 
Conference Paper
This paper presents a new DEVS/NS-2 modeling and simulation environment which supports both high and low levels of abstraction for network modeling and simulation. DEVS (Discrete Event System Specification) is a well-defined mathematical formalism for specifying the structure and behavior of dynamic systems. NS-2 is a discrete event network simulator whose primary use is to build and run detailed network models and protocols such as TCP/IP, satellite links, and wireless networks. By combining the two powerful modeling and simulation systems, the interoperable simulation of DEVS and NS-2 attains significant benefits: reduced cost, increased high- and low-level modeling power, and enhanced reusability. To integrate the systems seamlessly, two major challenges are addressed. The first is to synchronize the different ways in which the two simulation systems handle event schedules; this paper illustrates how the simulation time advances of DEVS and NS-2 are synchronized with each other. The second is to assign the appropriate level of model structure and behavior within the combined system. The details of the low-level network, with protocol and component descriptions, are modeled by NS-2, while DEVS serves as controller by modeling the high-level behavior (e.g. a use case scenario builder) of the target network models and the interaction of the associated actors. We present two examples of wireless sensor networks: the first describes our approach to the development process for modeling and simulation in the DEVS/NS-2 environment, and the second shows how the DEVS/NS-2 environment is effective for military system applications. The second example is extended to demonstrate an effective way to decide on the appropriate level of a sensor node's behavior, leading to a discussion of tradeoffs between energy efficiency and effectiveness of decision making for the sensor network. The advantages and disadvantages are discussed in the last part of the paper by comparing the DEVS/NS-2 environment with related work such as DEVS BUS and OPNET.
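A conceptual sketch of the synchronization challenge, assuming nothing about either tool's internals: two event schedulers stay consistent if the combined system always processes the globally earliest pending event, so neither simulator's clock runs ahead of the other. The event lists below are invented for illustration.

```python
# Toy illustration of conservative time synchronization between two event
# schedulers (stand-ins for DEVS and NS-2): always pop the earliest event.
import heapq

events = [(0.5, "DEVS: scenario step"), (1.0, "NS-2: packet send"),
          (1.2, "NS-2: packet recv"), (2.0, "DEVS: actor decision")]
heapq.heapify(events)

clock = 0.0
while events:
    clock, label = heapq.heappop(events)   # both clocks advance together
    print(f"t={clock:.1f}  {label}")
```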
 
Article
Radio path-loss prediction is an important but computationally expensive component of wireless communications simulation. Models may require significant computation to reach a solution, or require that information about the environment between transceivers be collected as model inputs, which may also be computationally expensive. Despite the complexity of the underlying model that generates a path-loss solution, the resulting function is not necessarily complex, and there may be ample opportunity for compression. We introduce a method for rapidly estimating radio path loss with Feed-Forward Neural Networks (FFNNs), in which not only path-loss models but also map topology are implicitly encoded in the network. Since FFNN simulation is amenable to Single Instruction Multiple Data architectures, additional performance can be gained by implementing a trained model in a parallel manner on a Graphical Processing Unit (GPU), such as those found on modern video cards. We first describe the properties of the training data used, which are either taken from measurements of the continental United States or generated with random processes. Second, we discuss the model selection process and the training algorithm used on all candidate networks. Third, we show accuracy evaluations of a number of FFNNs trained to estimate both commercial and public-domain path-loss solution sets. Finally, we describe the approach used to implement trained networks on a GPU, and provide performance evaluations versus conventional path-loss models running on a Central Processing Unit.
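A minimal sketch of the compression idea, assuming a synthetic log-distance data set in place of the measured and commercial solution sets the paper uses; the network size, features and noise model are illustrative.

```python
# Train a small feed-forward network to emulate a path-loss model.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
d = rng.uniform(10.0, 5000.0, size=(20000, 1))        # Tx-Rx distance (m)
loss_db = 40.0 + 30.0 * np.log10(d[:, 0] / 10.0)      # log-distance model
loss_db += rng.normal(0.0, 4.0, size=loss_db.shape)   # shadow fading

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
net.fit(np.log10(d), loss_db)                         # log-distance feature

print(net.predict(np.log10([[1000.0]])))              # estimated loss at 1 km
```

Once trained, the forward pass is a handful of dense matrix multiplies, which is what makes the GPU batch evaluation described in the abstract attractive.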
 
Article
The Generic Methodology for Verification and Validation (GM-VV) is a generic and comprehensive methodology for structuring, organizing and managing the verification and validation (V&V) of modelling and simulation (M&S) assets. The GM-VV is an emerging recommended practice within the Simulation Interoperability Standards Organization (SISO). The GM-VV provides a technical framework to efficiently develop arguments to justify why M&S assets are acceptable or unacceptable for a specific intended use. This argumentation supports M&S stakeholders in their acceptance decision-making process regarding the development, application and reuse of such M&S assets. The GM-VV technical framework assures that during the execution of the V&V work the decisions, actions, information and evidence underlying such acceptance arguments will be traceable, reproducible, transparent and documented. Since the GM-VV is a generic (i.e. abstract) methodology, it must be tailored to fit the specific V&V needs of an M&S organization, project or application domain. Therefore, V&V practitioners must incorporate specific V&V techniques within the generic architectural template offered by the GM-VV in order to properly assess the M&S assets under review. The first part of this paper provides an introductory overview of the GM-VV basic principles, concepts, methodology components and their interrelationships. The second part of the paper focuses on how the GM-VV may be tailored for a specific simulation application. This effort is illustrated with some results and lessons learned from several technology demonstration programs of the Dutch Ministry of Defence.
 
Article
On large Linux clusters, scalability is the ability of a program to utilize additional processors in a way that provides a near-linear increase in computational capacity for each node employed. Without scalability, the cluster may cease to be useful after adding a very small number of nodes. The Joint Forces Command (JFCOM) Experimentation Directorate (J9) has recently been engaged in Joint Urban Operations (JUO) experiments and counter-mortar analyses. Both required scalable codes to simulate over 1 million SAF clutter entities using hundreds of CPUs. The JSAF application suite, utilizing the redesigned RTI-s communications system, provides the ability to run distributed simulations with sites located across the United States, from Norfolk, Virginia, to Maui, Hawaii. Interest-aware routers are essential for scalable communications in large, distributed environments, and the RTI-s framework, currently in use by JFCOM, provides such routers connected in a basic tree topology. This approach is successful for small to medium-sized simulations, but faces a number of limitations that preclude very large simulations.
 
Color-coded threat level system (source: Homeland Security Advisory System)
Article
Using a differential equation modeling approach, this paper explores the issue of public response to, and confidence in, anti-threat warnings. The effects of anti-threat warnings and their associated public confidence levels are modeled as a group of nonlinear differential equations. The analytical solutions of these nonlinear differential equations are derived to show how warning frequency and the duration of a warning affect public confidence, and how the effects of anti-threat warnings are constrained by the degree of public concern as the threat level changes. Phase plane analysis suggests that the number of warnings for a particular type of threat has a threshold level. Below this threshold, increasing the number of reliable warnings can improve the credibility and effectiveness of the warning system. However, once the number of warnings exceeds the threshold, the greater the number of warnings issued the less the public responds and the lower public confidence becomes. The resulting graphical representation gives authorities an easy-to-understand method for issuing advisory warnings while maintaining the public's confidence in the system.
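The threshold behaviour described can be illustrated with a toy confidence model; the system below is an assumption-laden stand-in for the authors' equations, not a reproduction of them. With warning frequency $w$ and public confidence $C$,

```latex
\frac{dC}{dt} = \alpha\,(1 - C) - \beta\, w\, C
\quad\Longrightarrow\quad
C^{*}(w) = \frac{\alpha}{\alpha + \beta w}.
```

If effective response is modeled as $E(w) = w\,C^{*}(w)^{2}$, then $dE/dw = \alpha^{2}(\alpha - \beta w)/(\alpha + \beta w)^{3}$, which vanishes at $w^{*} = \alpha/\beta$: below this threshold additional warnings increase response, above it they suppress it.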
 
Article
This study illustrates a new approach to conducting capabilities-based analysis by assessing the requirements and capabilities of Army aeromedical evacuation units. We conducted a DOTMLPF (doctrine, organization, training, materiel, leadership, personnel, facilities) assessment to determine gaps in the current force structure and solutions for future force design. Specifically, this study tackles the following research questions. RQ1: What are the gaps in medical evacuation mission execution for current operations and operations involving geographically dispersed units? RQ2: What capabilities might mitigate these gaps, based on an examination of the DOTMLPF design characteristics? Our research design involved primary collection of data from senior aviation and medical aviation leaders using structured and unstructured survey questions. Using a mixed-method approach, we addressed RQ1 with quantitative methods and RQ2 through qualitative analysis. The results of our study identify the current organizational problems within the Army aeromedical evacuation unit, which can be leveraged for future joint force design for vertical lift. Our evaluation of medical evacuation DOTMLPF considerations provides a baseline for assessing future Army materiel solutions.
 
Article
This paper presents an approach to understanding network-enabled operations using agent-based simulations. We describe the newly created agent-based software ABSNEC (Agent-Based System for Network Enabled Capabilities), highlighting some of its salient features: the ability to represent human factors in the analysis of battle outcomes in network operations, and the ability to represent realistic force structures with tiered C2 architectures. We report positive results to date from three validation techniques applied to the model. Finally, we demonstrate the use of ABSNEC to acquire meaningful insights through two examples: a study of the interrelationship between fratricide, human factors, and situation awareness; and the generation of alternative combat strategies for a military engagement.
 
Article
The foundational concept of Network Enabled Capability relies on effective, timely information sharing. This information is used in analysis, trade and scenario studies, and ultimately decision making. In this paper, the concept of visual analytics is explored as an enabler to facilitate rapid, defensible and superior decision making. By coupling analytical reasoning with the exceptional human capability to rapidly internalize and understand visual data, visual analytics allows individual and collaborative decision making to occur in the face of vast and disparate data, time pressures and uncertainty. An example visual analytics framework is presented in the form of a decision-making environment centered on the Lockheed C-5A and C-5M aircraft. This environment allows rapid trade studies to be conducted on design, logistics and capability within the aircraft’s operational roles. Through this example, the use of a visual analytics decision-making environment within a military environment is demonstrated.
 
Article
In large-scale distributed defense modeling and simulation, Data Distribution Management (DDM) controls and limits the data exchanged, reducing the processing requirements of federates. In this paper, we present a comparative study of a recently proposed DDM algorithm, called the P-Pruning algorithm, against three other known techniques: the region-matching, fixed-grid, and dynamic-grid DDM algorithms. By first populating the multicast groups only on the basis of X-axis information from the routing space, and then pruning the non-overlapping subscriber regions within multicast groups in successive steps, the P-Pruning algorithm avoids the computational overheads of the other algorithms. From the simulation study, we found that the P-Pruning algorithm is faster than the other three DDM algorithms. The performance evaluation results also show that the P-Pruning DDM algorithm uses run-time memory more efficiently and requires fewer multicast groups than the other three algorithms. We also present the design and implementation of a memory-efficient, scalable enhancement to the P-Pruning algorithm.
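A toy rendering of the pruning idea as the abstract describes it, with assumed region and data layouts; the real algorithm operates on HLA routing spaces and is considerably more involved.

```python
# Build candidate multicast groups from X-axis overlap only, then prune
# subscribers that fail overlap on the remaining axes (layout assumed).
def overlap(a, b, axis):
    return a[axis][0] < b[axis][1] and b[axis][0] < a[axis][1]

def p_pruning(publishers, subscribers, axes=("x", "y")):
    groups = {}
    for pid, pub in publishers.items():
        # Step 1: candidate members from X-axis information alone.
        cand = [sid for sid, sub in subscribers.items() if overlap(pub, sub, "x")]
        # Step 2: prune candidates axis by axis in successive steps.
        for axis in axes[1:]:
            cand = [sid for sid in cand if overlap(pub, subscribers[sid], axis)]
        groups[pid] = cand
    return groups

pubs = {"P1": {"x": (0, 5), "y": (0, 5)}}
subs = {"S1": {"x": (3, 8), "y": (4, 9)}, "S2": {"x": (3, 8), "y": (6, 9)}}
print(p_pruning(pubs, subs))   # S1 kept, S2 pruned on the y axis
```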
 
Article
We develop two risk-analytic approaches for allocating operating funding among defence organization activities. In one, termed the priority method, activities are put in rank order and as many high-priority activities as possible are undertaken while ensuring that the budget holder’s probability of overspending his budget is acceptably small. In the second, termed the knapsack method, there are two kinds of activities: must-do activities and optional activities. Optional activities are selected using a nonlinear integer program that maximizes the value of the optional activities while keeping the probability of overspending sufficiently low. Both approaches are applied in the context of the Department of National Defence in Canada.
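A brute-force sketch of the knapsack method's chance constraint, assuming independent normally distributed activity costs; the paper solves this with a nonlinear integer program rather than enumeration, and all numbers below are invented.

```python
# Pick optional activities maximizing value while keeping the probability of
# overspending the budget below alpha (costs assumed independent normals).
import math
from itertools import combinations

def p_overspend(mu, var, budget):
    # P(total cost > budget) for cost ~ Normal(mu, var)
    return 0.5 * (1 - math.erf((budget - mu) / math.sqrt(2 * var)))

def choose(must, optional, budget, alpha=0.05):
    mu0 = sum(a["mu"] for a in must)
    var0 = sum(a["var"] for a in must)
    best, best_val = (), 0.0
    for r in range(len(optional) + 1):
        for subset in combinations(optional, r):
            mu = mu0 + sum(a["mu"] for a in subset)
            var = var0 + sum(a["var"] for a in subset)
            val = sum(a["value"] for a in subset)
            if p_overspend(mu, var, budget) <= alpha and val > best_val:
                best, best_val = subset, val
    return best, best_val

must = [{"mu": 60.0, "var": 16.0}]
opts = [{"mu": 20.0, "var": 9.0, "value": 5.0},
        {"mu": 15.0, "var": 4.0, "value": 3.0}]
print(choose(must, opts, budget=100.0))   # both options together overspend
```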
 
Article
In this paper we outline statistical methods used to analyze the behavior signatures that are hidden deep within data on terrorist attacks. These methods have the potential to allow military commanders to identify trends in attacks and to make informed decisions about how best to prevent future attacks. While this work focuses primarily on terrorist attacks that have occurred during Operation Iraqi Freedom, the methodology can be expanded and applied to a variety of areas. With the terrorists’ advantage of the element of surprise, analyzing their behavior may appear to be a very daunting task; however, it is demonstrated that even a collection of largely random data can often lead to insightful inferences. This paper provides a discussion of the project as a whole, the challenges faced in data collection and analysis, the methodology used, and the results and implications of the study.
 
Article
This research involves simulating remote sensing conditions using previously collected hyperspectral imagery (HSI) data. The Reed–Xiaoli (RX) anomaly detector is well known for its unsupervised ability to detect anomalies in hyperspectral images. However, the RX detector assumes uncorrelated and homogeneous data, neither of which is characteristic of HSI data. To address this difficulty, we propose a new method termed linear RX (LRX). Whereas RX places a test pixel at the center of a moving window, LRX employs a line of pixels above and below the test pixel. In this paper, we contrast the performance of LRX, a variant of LRX called iterative linear RX (ILRX), the recently introduced iterative RX (IRX) algorithm, and the support vector data description (SVDD) algorithm, a promising new HSI anomaly detector. Through experimentation, the line of pixels used by ILRX shows an advantage over RX and IRX in that it appears to mitigate the deleterious effects of correlation due to the spatial proximity of the pixels, while the iterative adaptation taken from IRX simultaneously eliminates outliers, giving ILRX an advantage over LRX. These innovations to the basic RX algorithm reduce bias and error in the estimation of the mean vector and covariance matrix, thus accounting for a portion of the spatial correlation inherent in HSI data.
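A sketch of the LRX scoring step as the abstract describes it: the standard RX Mahalanobis distance, with background statistics estimated from a line of pixels above and below the test pixel rather than a square window. The window length and regularization term are assumptions.

```python
# RX-style anomaly score with a vertical-line background window ("LRX").
import numpy as np

def lrx_score(cube, row, col, half_len=8):
    """cube: (rows, cols, bands) hyperspectral image; returns Mahalanobis d^2."""
    rows = [r for r in range(row - half_len, row + half_len + 1)
            if 0 <= r < cube.shape[0] and r != row]
    bg = cube[rows, col, :]                       # line of pixels above/below
    mu = bg.mean(axis=0)
    cov = np.cov(bg, rowvar=False) + 1e-6 * np.eye(cube.shape[2])  # regularize
    d = cube[row, col, :] - mu
    return float(d @ np.linalg.solve(cov, d))

cube = np.random.default_rng(0).normal(size=(64, 64, 10))
print(lrx_score(cube, 32, 32))
```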
 
– DODAF SV-1 of the Embedded Client internal interfaces view
– DODAF SV-1 of the Server internal interfaces view

The incorporation of TDB changes is performed by a multi-agent blackboard component called the TDB Change Detector and Update Generator. Its main components, shown in Figure 5 below, are the Task Panel, Data Panel, Controller, Agents, Geospatial Framework Scratch Pad, Test Factory and Transformation Factory. Although not described further in this article, the multi-agent blackboard architecture includes a Control Agent, a Factory Agent and a number of Cell Agents and Data Element Agents. The Data Panel and Task Panel are used for communication between the agents and coordination of the modifications to the Geospatial Framework Scratch Pad (Geo-Scratch Pad for short). The Data Panel stores the data and correlation sub-goals, which drive the correlation tasks. The Task Panel is used by the agents to propose potential solutions to the correlation sub-goals. The Geo-Scratch Pad holds the internal TDB Repository of TDB cells and data elements which have been affected by the changes and need to be updated. The Geo-Scratch Pad consists of separate representations for all the affected target formats, each one stored on a separate layer. These Geo-Scratch Pad layers overlap a common geographical area and contain data elements that describe the geographical source data features and target format components.
– DODAF SV-1 system diagram used to implement the Bi-directional Ontology-Driven TDB Generation Architecture. Experimentally tested flows are highlighted in red and green.
UML class diagram for the OTF Client (PartialTdbGenClient class)
Article
A great deal has changed since 1997, when Schiavone first described the generic pipeline generation process for building an environmental database for military models and simulations. Environment representations have gone from cartoonish graphics to near real-life renderings possessing properties of physics and intelligence where appropriate. Within this article we discuss several contemporary and past terrain database generation processes, as well as the challenges faced in generating terrain databases for military models and simulations today. These challenges include decentralized, distributed modification; frequent and incremental updates; and the need for a shared level of validity among users with heterogeneous needs. We propose a novel approach to network-centric terrain database generation that provides distributed, bi-directional, incremental updates, addressing many of these needs. We also developed, tested, and analyzed a prototype architecture and two types of feature modification applied to terrain within the network; the results are presented, along with conclusions and future research recommendations.
 
Article
This paper presents in brief the formulation of the rigid body separation dynamics that are useful for the design and analysis of satellite separation systems using the helical compression spring mechanism. In the satellite launch vehicle chronology, safe injection of the satellite into the desired orbit is an important final task. The body rate of the satellite during separation should be within the satellite control capabilities. The Taguchi method is employed to understand the parameters influencing the separation process. Statistical analysis has been carried out to identify the probability distribution function for reliability and safety assessment of the satellite separation of a launch vehicle. Using the response surface method (RSM), an empirical relation is obtained for the body rate of the satellite in terms of the 28 physical parameters. The RSM results are compared with the Taguchi simulation results. Reliability and safety assessments are made on the satellite separation of a typical launch vehicle.
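The RSM step can be illustrated with a least-squares quadratic surface over two coded factors; the paper fits 28 physical parameters, and the data below are synthetic.

```python
# Fit a quadratic response surface to simulated "body rate" outputs.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))                     # coded design factors
y = (1.0 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 0] * X[:, 1]
     + rng.normal(0, 0.05, 200))                          # synthetic response

# Quadratic model matrix: [1, x1, x2, x1^2, x2^2, x1*x2]
M = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(M, y, rcond=None)
print(np.round(coef, 2))    # recovers the assumed surface coefficients
```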
 
Article
Tunnels are a challenging environment for radio communications. In this paper we consider the use of autonomous mobile radio nodes (AMRs) to provide wireless tethering between a base station and a leader in a tunnel exploration scenario. We propose a tethering algorithm for AMR motion control based on a consensus variable protocol. Using radio signal strength measurements, the AMRs autonomously space themselves so as to achieve equal radio distance between each entity in the chain from the base station to the leader. Experimental results show the feasibility of our ideas.
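A one-dimensional sketch of the consensus idea: each relay repeatedly moves toward the midpoint of its chain neighbours, which equalizes spacing along the chain. Geometric distance stands in for the RSSI-derived radio distance used in the paper, and the positions and gain are invented.

```python
# Consensus-style spacing of relay nodes between a base station and a leader.
base, leader = 0.0, 100.0
relays = [10.0, 20.0, 30.0]          # initial AMR positions (assumed)
gain, steps = 0.5, 200

for _ in range(steps):
    chain = [base] + relays + [leader]
    # Each relay moves toward the midpoint of its two chain neighbours.
    relays = [x + gain * ((chain[i] + chain[i + 2]) / 2 - x)
              for i, x in enumerate(relays)]

print(relays)                        # -> approximately [25.0, 50.0, 75.0]
```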
 
Article
Representation of search and target acquisition (STA) in military models and simulations arguably abstracts the most critical aspects of combat. This research focuses on the search aspect of STA for the unaided human eye. It is intuitive that an individual's environmental characteristics and interpretation of the environment in the context of all comprehended information, commonly summarized as situational awareness (SA), influence attention and search. Current simulation models use a primitive sweeping search method that devotes an unbiased amount of time to every area in an entity's field of regard and neglects the effects of SA. The goal of this research is to provide empirical results and recommend modeling approaches that improve the representation of unaided search in military models and simulations. The major contributions towards this goal include novel empirical results from two incremental eye-tracking experiments; analysis and modeling of the eye-tracking data to illustrate the effect of the environment and SA on search; and a recommended model of unaided search for high-fidelity combat simulation models. The results of this work support soldier search models driven by metrics that summarize the threat based on environmental characteristics and contextual information.
 
Article
While long considered an important aspect of strategic and theater planning, situational awareness (SA) is the linchpin of both cyber planning and execution. As stated in Joint doctrine, before military activities in the information environment can be accurately and effectively planned, the “state” of the environment must be understood. At its core, cyber situational awareness requires understanding the environment in terms of how information, events, and actions will impact goals and objectives, both now and in the near future. Joint Information Operations (IO) doctrine defines three layers of information inherent to this: physical, informational, and cognitive. While a fair amount of time and effort has been focused on the physical and informational aspects of cyber situational awareness, very little emphasis has been placed on the cognitive layer as it relates to cyberspace and how best to model and analyze it. This research examines aspects of the cognitive level by defining a cyber-based behavioral model contingent on the activities a user performs while on the Internet. We believe this is foundational to completely defining a cyber situational awareness model, thus providing commanders and decision makers a more comprehensive and real-time view of the environment in which they are operating.
 
Article
In this work, the resistance and deformation characteristics of a brittle material against rain erosion are examined using the non-linear, explicit software LS-DYNA. A water jet with varying speeds impinges at 90° on silica float glass plates of different thicknesses. In the simulations, the Arbitrary Lagrangian–Eulerian method is used to model the water. To analyse the deformations of the brittle material, the Johnson–Holmquist ceramics model (JH-2) is used as the material model. The minimum plate thickness (for constant water jet speed) and the maximum water speed (for constant plate thickness) which do not cause any damage to the target are determined depending on the geometry, boundary conditions and assumed failure strain value for erosion. The results are compared with the water-hammer pressure.
 
Article
The North Atlantic Treaty Organization (NATO) Network Enabled Capability (NNEC) addresses technical and cognitive abilities of NATO, requiring technical and operational interoperability standards and targets for adaptation. Net-enabled Modeling and Simulation (M&S) can provide support in all life-cycle phases. The paper evaluates the contribution of four example technical activities of NATO conducted by different bodies of the Research and Technology Organization: net-enabled M&S by the Modeling and Simulation Group, semantic interoperability by the Information Systems Technology panel, new command and control concepts by the System Analysis & Studies panel, and human factors for NNEC by the Human Factors & Medicine panel. The results call for a framework that includes technical as well as operational aspects. The technical challenges can be supported by the Levels of Conceptual Interoperability Model, which is extended to an Interoperability Maturity Model. The operational challenges can be supported by the NNEC Command and Control Maturity Model. Both models are aligned to provide the necessary support to address NATO's technical and cognitive abilities through M&S services.
 
Article
Defense Experimentation (DE) using modeling and simulation (M&S) is increasingly being adopted as a means to better understand complex defense capability problems. This is being given added impetus by the amplified focus on Network Enabled Capability (NEC) and the rising use of advanced information and communications technology within military operations. This paper presents analysis of observed data trends from a broad range of DE experiments performed within capability development programs for the United Kingdom and Australian Governments over the period of 2001–2010. A range of variables were tracked concerning the experiment’s nature, the DE method employed, M&S technology utilized and human resources used across the experiment life cycle. Time and effort results are presented here, broken down by DE method and life-cycle phase. The paper also analyses where reuse took place in the experiment life cycle, and how time and effort were affected by the number of problem-owner and provider stakeholders involved. The insights yielded are expected to help DE planners improve estimation and scheduling of human resources. In turn, this is intended to facilitate delivery of more effective NEC concept development and experimentation.
 
Article
A chaff cloud is an electronic countermeasure against radio frequency emitters. The cloud immerses a protected entity in a multitude of false targets by reradiating incident electromagnetic energy from millions of thin aluminized fibers, foil strips, or elements printed with conductive ink. The elements are cut to form resonant structures matched to the principal threat frequencies, making them effective reradiators. Empirically testing chaff clouds is technically challenging, costly, and time consuming. With an increased emphasis on chaff research comes the need to characterize cloud radar cross-section performance to expand existing knowledge and explore development opportunities for future technologies. This paper describes the computational methodology and results for analyzing standard dipole chaff clouds and Koch snowflakes as a possible new chaff element.
 
CEE participant cells. Photo source: reproduced with permission from The MITRE Corporation.
Data exchanged between experiment participants, systems, and simulations.
Transfer solutions.
Access solutions.
Multilevel environments.
Article
Successful development of the Next Generation Air Transportation System (NextGen) requires the coordinated efforts of government agencies and organizations that must work together to respond to aviation-related crises. Interagency experimentation is a powerful technique for achieving this coordination while refining NextGen concepts. To maximize the benefits of this experimentation, the participants must “play” from their preferred research laboratories and at the appropriate classification level. Cross-domain solutions, used for many years in the military and intelligence communities, offer possibilities for connecting the unclassified and classified laboratories in aviation experiments. A MITRE Innovation Program initiative entitled the NextGen Interagency Experimentation Hub is exploring the application of these cross-domain technologies. The project team is working with candidate cross-domain solutions in a multi-lab test environment while gathering lessons learned about deployed solutions. However, cross-domain solutions are costly due to the use of rigorous practices for developing secure software. Further, each cross-domain solution must pass through an approval process that often takes over a year and incurs additional costs. This paper is a compilation of selected results from the MITRE initiative that (1) presents the possibilities cross-domain solutions offer, (2) promotes understanding of the challenges involved in leveraging these solutions, and (3) provides recommendations and guidance for using them effectively in aviation-related interagency experiments.
 
Article
In this paper, the effects of using various aluminium materials as the liner material in shaped charges for the perforation of concrete slabs are examined with numerical simulations. Using AUTODYN-2D software, the formation of the shaped charge jets for seven different aluminium materials is modelled first, and these jets are then directed at concrete slabs of 35 MPa compressive strength. These analyses are performed for a constant liner thickness corresponding to 8% of the shaped charge diameter. Furthermore, the effect of standoff on the penetration of concrete slabs is examined by using shaped charges with the same geometry and liner material (7075-T6). The cone angle of the shaped charge is taken as 100°.
 
Article
We model insurgency and counter-insurgency (COIN) operations with a large-scale system of differential equations and a dynamically changing coalition network. We use these structures to analyze the components of leadership, promotion, recruitment, financial resources, operational techniques, network communications, coalition cooperation, logistics, security, intelligence, infrastructure development, humanitarian aid, and psychological warfare, with the goal of informing today’s decision makers of the options available in COIN tactics, operations, and strategy. In modern conflicts, techniques of asymmetric warfare wreak havoc on the inflexible, regardless of technological or numerical advantage. In order to be more effective, the US military must improve its COIN capabilities and flexibility to match the adaptability and rapid time-scales of insurgent networks and terror cells. Our simulation model combines elements of traditional differential equation force-on-force modeling with modern social science modeling of networks, PSYOPs, and coalition cooperation to build a framework that can inform various levels of military decision makers in order to understand and improve COIN strategy. We show a test scenario of eight stages of COIN operation to demonstrate how the model behaves and how it could be used to decide on effective COIN resources and strategies.
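An illustrative force-on-force core of the kind such models build on; the paper's actual system is far larger, spanning leadership, finance, logistics and networks. With insurgent strength $I$, COIN force $C$, popular support $P$ and aid spending $A$, a minimal assumed form is

```latex
\frac{dI}{dt} = \rho\,P\,I - \kappa\,C\,I, \qquad
\frac{dC}{dt} = \sigma - \mu\,I\,C, \qquad
\frac{dP}{dt} = \eta\,A - \theta\,\kappa\,C\,I,
```

with recruitment driven by popular support ($\rho P I$), attrition from COIN pressure ($\kappa C I$), coalition reinforcement ($\sigma$) and losses ($\mu I C$), and support raised by aid but eroded by collateral damage ($\theta\,\kappa C I$).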
 
Article
Understanding those factors critical to predicting public response is crucial to our ability to model the consequences of a terrorist strike in an urban area. Sixteen hypothetical damage scenarios were systematically varied according to: non-terrorism vs. terrorism; explosions vs. infectious disease releases; terrorists' motives as demands to release prisoners vs. solely to instill fear; non-terrorists' motives as non-intentional vs. intentional (criminal); victims as government officials vs. tourists; non-terrorist incidents as involving no negligence vs. negligence; terrorist acts as non-suicidal vs. suicidal; and number of casualties (0, 15, 495). The setting was a local theme park. Students at a university in San Diego County were randomly assigned to different scenario conditions. For these scenarios, they were asked to address a number of questions regarding their perceptions and likely behaviors during and following an accident or terrorist strike. Results from regression modeling and Multivariate Analysis of Variance (MANOVA) indicated that terrorism and the mechanism used were most influential, followed by the presence of suicide or negligence, motive, and victim. Number of casualties made little difference once these other factors were accounted for. To forecast community response, a system dynamics model was introduced that incorporated the study's survey findings. This model simulated the immediate and mid-term diffusion of fear in a community for different types of accidental and terrorist events. These findings should prove useful to those wishing to predict public response to a variety of different contingencies involving terrorism.
 
Article
Drawing appropriate conclusions from simulation results requires a correct understanding of the accuracy and context of those results. Simulation communities often assess simulation results without fully considering uncertainties that might impact their accuracy and context. This creates the potential for inappropriate conclusions from simulation results. Much useful work has been done in uncertainty quantification, but most of those efforts have addressed uncertainty in particular parameters and areas; they have not addressed all areas of potential uncertainty that might impact simulation results. A paradigm exists that facilitates consideration of all potential sources of simulation uncertainty. This paper examines simulation uncertainties using that paradigm and indicates the potential magnitude of uncertainties for simulation results in various areas. A comprehensive approach to simulation uncertainty not only reduces the likelihood of drawing inappropriate conclusions from simulation results, but also provides information that can help determine where it is most useful to invest verification and validation resources to reduce uncertainty in simulation results (i.e., to improve their accuracy). Comprehensive assessment of simulation uncertainty may have drawbacks. When addressed comprehensively, simulation uncertainty tends to be larger than desired, and those announcing it run the risk of being bearers of bad news. Realistic appreciation of the uncertainty associated with simulation results can also decrease the weight of those results in decision processes. On the positive side, such realistic and comprehensive appreciation of simulation uncertainty provides a solid factual and logical basis for how to proceed, whether by improving simulation capabilities or by developing alternative approaches to support decision processes. A perspective from comprehensive consideration of simulation uncertainty helps to ensure a proper context for simulation results.
 
Article
While the current routing and congestion control algorithms in use today are often adequate for networks with relatively static topology and relatively lax quality of service (QoS) requirements, these algorithms may not be sufficient for military networks, where a strict level of QoS is required in order to achieve mission objectives. Current technology limits a network's ability to adapt to changes and interactions, and this often results in sub-optimal performance. This article develops a network controller that uses predictions of outbound router queue sizes to optimize computer networks. These predictions are made possible through the use of Kalman filters to detect network congestion. The premise is that intelligent agents can use such predictions to form context-aware, cognitive processes for managing network communication. The system shows great promise when modeled and simulated using the NS-2 network simulation platform.
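A minimal sketch of the filtering step, assuming a random-walk queue model observed in noise; the process and measurement variances below are invented, and the paper's controller would act on the predicted state rather than just printing it.

```python
# Scalar Kalman filter tracking an outbound router queue length.
import random

q_proc, r_meas = 4.0, 25.0       # process / measurement noise variances
x_hat, p = 0.0, 100.0            # state estimate and its variance

def kalman_step(z):
    global x_hat, p
    p_pred = p + q_proc                      # predict (random-walk model)
    k = p_pred / (p_pred + r_meas)           # Kalman gain
    x_hat = x_hat + k * (z - x_hat)          # update with measurement z
    p = (1 - k) * p_pred
    return x_hat

true_q = 0.0
for t in range(50):
    true_q = max(0.0, true_q + random.gauss(0.5, 2.0))   # queue drifts upward
    est = kalman_step(true_q + random.gauss(0.0, 5.0))   # noisy observation
print(round(est, 1), round(true_q, 1))
```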
 
Article
Visual cues are an essential part of helicopter flight simulators. The cue requirements for hover are particularly demanding, owing to the closeness to the ground and the small movements involved. However, research on low-altitude helicopter flight is limited. In this research, the density and height of the three-dimensional (3D) objects in the scene are analysed to find their effect on hovering and low-altitude helicopter flight. An experiment is conducted using a personal computer-based flight simulator with 10 professional military pilots. The results revealed that 3D object density and 3D object height affect both horizontal and vertical hovering performance. In hover and low-altitude flight, altitude control is positively affected by smaller object height. Paradoxically, the pilots preferred scenes composed of tall and mixed-height objects. Pilot distance estimation was significantly affected by knowledge of both object density and object height, but these factors did not individually improve distance estimation.
 
Confusion Matrix for Random Forests. 
Relative Importance for Top 12 Predictors. 
Confusion Matrix for naive Bayes. 
Confusion Matrix for Multinomial Logistic Regression. 
Article
Recently, researchers have become interested in the issue of assessing culpability for terrorist attacks when no group claims, or multiple groups claim, responsibility. Several new methods have been put forward for predicting culpability, traditionally assessed by intelligence analysts, using both machine learning and statistical classification models. These models have had varying degrees of success, with new ensemble classification models generally performing better than traditional statistical techniques. This paper applies a relatively new methodology, Random Forests, to the problem of predicting culpability and compares it with some of the more frequently used statistical classification techniques, including multinomial logistic regression and naïve Bayesian classification. Though it generally outperforms the other techniques, Random Forests struggles with unbalanced data, performing worse than either of the other models on the class with the least information. Overall, however, this evaluation of Random Forests for the assessment of terrorism culpability is positive. Implications of the model and comparisons with the other models are discussed, and ways forward are suggested.
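A sketch of the comparison on a synthetic, deliberately unbalanced stand-in for the incident data; the features and class weights are invented, and GaussianNB here is only a stand-in for whichever naive Bayes variant the paper used.

```python
# Compare Random Forests, naive Bayes and logistic regression on
# unbalanced synthetic data (70/25/5 class split).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=3000, n_features=12, n_informative=6,
                           n_classes=3, weights=[0.70, 0.25, 0.05],
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

for model in (RandomForestClassifier(n_estimators=300, random_state=0),
              GaussianNB(),
              LogisticRegression(max_iter=1000)):
    preds = model.fit(Xtr, ytr).predict(Xte)
    print(type(model).__name__)
    print(classification_report(yte, preds))   # note recall on the rare class
```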
 
Article
The US Department of Defense (DoD) requires all models and simulations that it manages, develops, and/or uses to be verified, validated, and accredited. Critical to irregular warfare (IW) modeling are interactions between combatants and the indigenous population. Representation of these interactions (human behavior representation (HBR)) requires expertise from several of the many fields of social science. As such, the verification, validation, and accreditation (VVA) of these representations will require adaptation and, in some cases, enhancement of traditional DoD VVA techniques. This paper suggests validation best practices for the DoD modeling community to address new challenges of modeling IW.
 
Article
In this paper we introduce an approach to predicting emplacements of improvised explosive devices (IEDs). After a brief review of studies and technology in related areas, the paper identifies and analyzes various factors in a great number of IED/terrorist attacks, and then categorizes these factors/features by location and time. Combining the results of this analysis with other significant factors, such as casualties, numbers injured, population density, and international impact, the paper proposes an approach to predicting IED emplacements with Bayesian inference. The proposed approach has been implemented, and the results of testing are consistent with a group of actual incidents.
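A minimal sketch of the Bayesian update over candidate emplacement sites: a prior from historical incident counts is combined with likelihood factors for observed features. The sites, priors and likelihood values below are purely illustrative, not the paper's data.

```python
# Posterior over candidate IED emplacement sites via Bayes' rule.
priors = {"route_A": 0.5, "route_B": 0.3, "market": 0.2}   # historical counts
likelihood = {"route_A": 0.02, "route_B": 0.10, "market": 0.05}
# likelihood[s] = P(observed features | emplacement at site s)

evidence = sum(priors[s] * likelihood[s] for s in priors)
posterior = {s: priors[s] * likelihood[s] / evidence for s in priors}
print(max(posterior, key=posterior.get), posterior)
```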
 
Article
Mobility management is a key aspect of designing and evaluating protocols for Mobile Ad Hoc Networks (MANETs). The high mobility of nodes in a MANET constantly causes the network topology to change. Mobility patterns of nodes have a direct effect on fundamental network characteristics, such as path length, neighborhood size, and link stability. Consequently, the network performance is strongly affected by the nature of mobility patterns. While evaluating protocols for a specific MANET application, it becomes imperative to use a mobility model that is able to capture the movement of nodes in an accurate manner. The objective of this work is to produce mobility models that are able to describe tactical mobility in military applications of MANETs. We provide models of four tactical scenarios, show that these models are accurate compared to synthetic traces, and that when used to evaluate network protocols, they provide different conclusions than when using generic mobility models.
 
Dendritic Diagram Linking Commander's Intent and Operational Plan Objectives to Unit Activities. 
Predicted Changes to MOEs for a Selected COA. 
Example of Visual Information Presented to Game for Peace Participants Depicting Changes to MOEs. 
Article
We present a modeling and simulation approach that clearly increases the efficacy of training and education efforts for peace support operations. Our discussion involves how a computer simulation, the Peace Support Operations Model, is integrated into a training and education venue in Kyrgyzstan for a “Game for Peace.” On September 12–23, 2011, members of NATO's Partnership for Peace Training and Education Centers collaborated to instruct a United Nations Peacekeeping Operations course at the Kyrgyz Separate Rifle Battalion in Bujum, Kyrgyzstan. Phase II of the course was conducted on October 17–21, 2011, for members of the Peacekeeping Brigade of the Kazakhstan Army (KAZBRIG) in Almaty, Kazakhstan. Although such courses are a mainstay of NATO support in preparing member nations for peace support operations, the application of a computer simulation is unique. We relate the decision to use a computer simulation to support the training event and provide an overview of the methodology for planning and executing the game. Insights from the game about training and educating future peacekeepers, and lessons for using computer simulations, are instructive for future efforts and mark the way to leverage the advantages of computer simulations.
 
Sim-PETEK architecture
SimulationExecutionRequest and SimulationExecutionResults message schema
Sample XSLT for aggregating the results
Article
In this paper we introduce a framework for parallel and distributed execution of simulations (Sim-PETEK), a middleware for minimizing the total run time of batch runs and Monte Carlo trials. Sim-PETEK proposes a generic solution for applications in the simulation domain, which improves on our previous work done to parallelize simulation runs in a single node, multiple central processing unit (CPU) setting. Our new framework aims at managing a heterogeneous computational resource pool consisting of multiple CPU nodes distributed on a potentially geographically dispersed network, through a service-oriented middleware layer that is compliant to Web Services Resource Framework standard, thereby providing a scalable and flexible architecture for simulation software developers. What differentiates Sim-PETEK from a general-purpose, Grid-based job-distribution middleware is a number of simulation-specific aspects regarding the specification, distribution, monitoring, result collection and aggregation of simulation runs. These aspects are prevalent in the structure of the messages and in the protocol of interaction both among the constituent services of the framework and within the interfaces exposed to the external clients.
 
Article
‘Data farming’ is based on the idea that simulation models run thousands of times can provide insights into the possible consequences of different options. However, the validity of the models used for data farming, especially in the context of HSCB (human, social, cultural and behavioural) modelling for decision-making and future studies, is at least questionable. This paper first reflects on the epistemological aspects of this predicament in order to illustrate its fundamental severity. Then, a possible solution is presented that is based on the notion of ‘bad models’, the concept of plausibility, and the method of simulation-based weak point analysis. The approach can be complemented by interactive war gaming. Such a systematic approach appears more defendable than most attempts to use HSCB models for affirmative purposes, and is methodologically easier to implement since it solely requires focusing on the validation of empirically amenable micro-processes.
 
Article
While physical fitness is generally accepted to influence the outcome on the battlefield, it is currently not incorporated into tactical infantry simulations. Infantry soldiers are modeled with equal physical capabilities representing the average of soldiers on the field. However, humans have varying physical capabilities. This research asked the question ‘Does modeling human physical capabilities have an impact upon the tactical success of operations in a simulation?’ Physical fitness data and rushing times were collected, as rushing is a battlefield task influenced by physical fitness. Two scenarios, a helicopter extraction of a squad and rushing for cover in an attempt to throw a grenade, were implemented in agent-based simulations to demonstrate the effect of rushing speed upon the outcome of a tactical infantry scenario. These scenarios used experimentally obtained rushing velocities as input and were compared to real world scenarios to ensure plausibility. In both simulations rushing speed significantly affected the probability of survival of the individual soldier and the probability of success for the mission (i.e. scenario). Therefore, individual rushing speed should be included as a viable input parameter for infantry simulations, as it can affect the outcome of tactical simulation scenarios.
 
Simulation Framework
Reference Models Application
Scenario Editor Application
Article
Using Tactical Environment Simulations as part of simulation systems or real systems increases the reality and quality of the applications, reduces development time, and improves the interoperability and reusability of systems. Tactical Environment Simulations are utilized by integrating them into applications which can have a wide variety of requirements, so they are generally customized to meet the specific requirements of those applications. Unlike most Commercial-Off-The-Shelf (COTS) tools, which are used directly, Tactical Environment Simulations are often provided as Application Frameworks which contain readily available simulation models, middleware, extension points and a tool set to ease customization and integration. In this paper, the requirements and a design overview for an easily extensible, customizable and integrable, High Level Architecture (HLA)-based Tactical Environment Application Framework are provided. After defining the requirements and giving an overview of the design, an Application Framework that realizes the defined system is also explained.
 
Article
In the contemporary military environment, making decisions on how to best utilize resources to accomplish a mission with a set of specified constraints is difficult. A Cordon and Search of a village (a.k.a. village search) is an example of such a mission. Leaders must plan the mission, assigning assets (e.g. soldiers, robots, unmanned aerial vehicles, military working dogs) to accomplish the given task in accordance with orders from higher headquarters. Computer tools can assist these leaders in making decisions, and do so in a manner that will ensure the chosen solution is within mission constraints and is robust against uncertainty in environmental parameters. Currently, no such tools exist at the tactical or operational level to assist decision makers in their planning process and, as a result, individual experience and simplistic data tables are the only tools available. Using robustness concepts, this paper proposes a methodology, a mathematical model, and resource allocation heuristics for static planning of village searches that result in a decision-making tool for military leaders.
 
Article
Key aspects of the verification performed on the US Army Dugway Proving Ground (DPG) WeatherServer (WXS) are described. WXS is a Test and Training Enabling Architecture (TENA)-based modeling and simulation application that distributes three-dimensional meteorological data over time to participants in a distributed test event or joint exercise environment. The verification features an iterative process as an effective measure to reduce time and costs while still allowing a comprehensive review. In addition, the unique role that WXS serves in efficiently providing real-time meteorological data to support Test and Evaluation (T&E) exercises is discussed. The verification process demonstrated the fidelity of the WXS output and the correctness of the coded technical algorithms (e.g., altitude-to-pressure conversions). The iterative process involved cycles of testing and improved software builds that continued until a build of WXS was developed that functioned as designed. Insights obtained during the iterative process applied to the WXS verification included realizing the full benefits of (1) periodic communications and issue resolution among the developer, verification team, and sponsor; (2) allocating sufficient budget for the software developer's time to support the verification process; and (3) strategically customizing the verification process to allow for an efficient testing and review process following applicable standards.
 
Article
United States ports must be prepared for the threat of a small-vessel attack using weapons of mass destruction (WMD). To reduce the risk of such an attack, modeling was conducted at the Savannah River National Laboratory (SRNL) in Aiken, South Carolina, to develop options for redeployment of existing maritime law enforcement resources, deployment of new resources, and optimal use of geographic terrain. Agent-based modeling (ABM) implemented by the Automated Vulnerability Evaluation for Risks of Terrorism (AVERT®) software was used to conduct computer-based simulation modeling. The port-specific models provided estimates of the probability of encountering an adversary based on allocated resources under varying environmental conditions and traffic flow rates. Defensive resources include patrol and response platforms, some of which may be more appropriate in particular environmental conditions. A diverse range of potential adversary and attack scenarios was assessed for a large-area port and also for a port with a narrow inlet, thereby identifying vulnerable pathways. For chokepoint operations, the probability of encountering an adversary was estimated for various configurations and operational tempos. As traffic flow increased, the probability of encountering an adversary decreased, because the adversary could assimilate into traffic while security forces were preoccupied inspecting pleasure craft. However, there was a significant increase in the probability of encountering an adversary (P(Encounter)) when additional patrols were added, although a decreasing marginal benefit of additional patrols was noted at low traffic levels. In open water, the use of helicopters on patrol substantially increased P(Encounter) by directing on-water security to target vessels, a capability owed to the far-reaching vision and speed of helicopters. As a result of ABM, more effective countermeasures can be deployed with available resources to reduce the risk of a small-vessel attack using WMD. The models can be expanded to all ports in the United States using generic models, similar to those presented herein, that can be matched to any port based on its size and shape.
 
Article
Risk analysts historically have employed several approaches when examining the risk of terrorism to critical infrastructure. However, frequently these approaches rely on statistical methods, which suffer from a lack of appropriate historical data to produce distributions and do not integrate epistemic uncertainty. Other concerns include locating appropriate subject matter experts who can provide judgment and undertaking an associated validation of these judgments. Given the current body of knowledge, the modeling and simulation community can assist risk analysts to develop more robust methods for the analysis of the risk of terrorism in a critical infrastructure. This paper proposes that risk should be modeled and evaluated as a function of the volume of a risk polyhedron with the key characteristics of the risk of terrorism represented as nodes and the relationship between the key characteristics forming the edges. Specific attention is given to development of the threat component of the risk equation. The values of the key characteristics, instantiated as the length of the edges, are defaulted to absolute uncertainty, the state where there is no information for, or against, a particular causal factor. By adjusting an edge length, the polyhedron’s volume either increases or decreases, depending on the informed state of the modeler. Furthermore, individual threat scenarios can be joined, or nested, to inform risk understanding and management.
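A geometric sketch of the polyhedron idea under invented assumptions: characteristics are encoded as axes of a box whose edge lengths default to 1.0 ("absolute uncertainty") and shrink or stretch as the modeler becomes informed; the hull volume then serves as the risk proxy. The paper's actual node-and-edge construction is richer than this.

```python
# Risk proxy as the volume of a convex hull scaled by per-characteristic
# edge lengths (encoding and geometry illustrative only).
import numpy as np
from scipy.spatial import ConvexHull

edges = np.array([1.0, 0.6, 1.0])    # e.g. intent, capability, access (toy)
unit_box = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                     [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
vertices = unit_box * edges          # shrink the "capability" axis to 0.6
print(ConvexHull(vertices).volume)   # risk proxy: 0.6
```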
 
General model of Tiltrotor UAV.  
Trim state transitions of Tiltrotor UAV.  
Article
Unmanned aerial vehicles (UAVs) are remotely piloted or self-piloted aircraft that can carry cameras, sensors, communication equipment and other payloads. Tiltrotor UAVs provide a unique platform that fulfills the needs of ever-changing mission requirements by combining the desired features of hovering like a helicopter and reaching high forward speeds like an airplane, which might be a force multiplier on the battlefield. In this paper, the conceptual design and aerodynamic model of a realizable small-sized Tiltrotor UAV are presented, and linearized state-space models are obtained around the trim points for airplane, helicopter and conversion modes. Controllers are designed using an optimal tracking control method, and gain scheduling is employed to obtain a simulation for the whole flight envelope. An interactive software infrastructure is established for the design, analysis and simulation phases, based on the theoretical concepts.
 
Article
This paper describes distributed simulation of the MIL-STD-1553B Serial Data Bus interface and protocol based on the Data Distribution Service (DDS) middleware standard. The 1553 bus connects avionics system components and transports information among them in an aircraft. It is important for system designers to be able to evaluate and verify their component interfaces early in the design phase. The 1553 data bus requires specialized hardware and wiring to operate; thus it is complicated and expensive to verify component interfaces. Therefore, modeling the bus on commonly available hardware and networking infrastructure is desirable for early evaluation and verification of designed interfaces. The DDS middleware provides publish/subscribe-based communications and facilitates the implementation of distributed systems by providing an abstraction layer over the networking interfaces of the operating systems. This work takes advantage of the DDS middleware to implement an extensible 1553 serial data bus simulation tool. The tool is verified using a case study involving a scenario based on the MIL-STD-1760 standard.
 
Article
The same authors reported a review of game development processes and methodologies in an earlier paper. That paper treated game design elements, such as players, story, rules, objectives, procedures, conflict, and challenge, and their effect on gameplay. Its technical components included the render engine and rendering pipeline (fixed-function pipeline and flexible pipeline), the physics engine and physics-related techniques (collision detection, ray-casting, etc.), game code (for game mechanics, artificial intelligence, scenario creation, and management) and artwork content (game levels, three-dimensional (3D) models, two-dimensional maps for shaders, skeletal animation, and audio assets) that form a game. In the present paper, all of these technical and artwork components, along with their development processes and approaches, are applied to a case problem and summarized. The case problem considered is a small part of the Gallipoli (Dardanelles) campaign in Ottoman history, realized as a real-time 3D strategy game.
 
Article
This article examines the use of a weighting scheme with a multi-attribute decision-making (MADM) process to improve the CARVER center-of-gravity analysis currently used by Special Operations Forces. We employ Saaty's Analytical Hierarchy Process (AHP) pairwise comparison method to obtain initial decision-maker weights for the decision-making process, using a generalizable mission as an example. First, we present the current standard CARVER method, as outlined in FM 34-36. Next, we apply several MADM methods using our suggested AHP weighting scheme to obtain rankings of the alternatives. We compare the results and provide sensitivity analysis to examine the robustness of each MADM analysis. We conclude that any CARVER decision methodology that includes a weighting scheme for each decision maker is better than using no weighting scheme at all.
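A sketch of the AHP weighting step: criterion weights are taken from the principal eigenvector of a pairwise-comparison matrix. A 3x3 toy matrix is shown (CARVER scores six criteria), and the judgments below are invented.

```python
# AHP criterion weights from a pairwise-comparison matrix.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],        # pairwise judgments (illustrative)
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

vals, vecs = np.linalg.eig(A)
idx = np.argmax(vals.real)                       # principal eigenvalue
w = np.abs(vecs[:, idx].real)
w /= w.sum()                                     # normalized criterion weights
ci = (vals.real[idx] - len(A)) / (len(A) - 1)    # Saaty's consistency index
print(np.round(w, 3), "CI =", round(ci, 3))
```

In practice the consistency index is checked against Saaty's random-index thresholds before the weights are passed to the MADM ranking step.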
 
AICA reference architecture: the key components of the agent
Article
The North Atlantic Treaty Organization (NATO) Research Task Group IST-152 developed a concept and a reference architecture for intelligent software agents performing active, largely autonomous cyber-defense actions on military assets. The group released a detailed report, briefly reviewed in this article, where such an agent is referred to as an Autonomous Intelligent Cyber-defense Agent (AICA). In a conflict with a technically sophisticated adversary, NATO military networks will operate in a heavily contested battlefield. Enemy malware will likely infiltrate and attack friendly networks and systems. Today’s reliance on human cyber defenders will be untenable on the future battlefield. Instead, artificially intelligent agents, such as AICAs, will be necessary to defeat the enemy malware in an environment of potentially disrupted communications where human intervention may not be possible. The IST-152 group identified specific capabilities of AICA. For example, AICA will have to be capable of autonomous planning and execution of complex multi-step activities for defeating or degrading sophisticated adversary malware, with the anticipation and minimization of resulting side effects. It will have to be capable of adversarial reasoning to battle against a thinking, adaptive malware. Crucially, AICA will have to keep itself and its actions as undetectable as possible, and will have to use deceptions and camouflage. The report identifies the key functions and components and their interactions for a potential reference architecture of such an agent, as well as a tentative roadmap toward the capabilities of AICA.
 
Respiration rate: means and standard deviations for the 22-inch and 73-inch monitor-size conditions (N = 34).
Article
Developing a stress-management training (SMT) system and protocol for soldiers can help them cope better with stress experienced in theatre operations. Using 3D horror games in virtual reality (VR) can present an attractive simulation method for soldiers. This study was conducted to find out whether it is possible to stress soldiers moderately using VR and which technology is more efficient at doing so. A total of 47 soldiers returning from Afghanistan played two 3D first-person shooter (FPS)/horror games (Killing Floor and Left 4 Dead) on three different types of immersive technologies (a 22-inch stereoscopic monitor, a 73-inch stereoscopic TV and a CAVE™). As a control and reference comparison of induced stress, participants were exposed to the Trier Social Stress Test (TSST), a standardized stress-inducing procedure. The results supported our aim of devising an effective, low-cost, high-buy-in approach to assist in teaching and practicing stress-management skills. Repeated-measures analyses of variance (ANOVAs) revealed statistically significant increases in the soldiers' respiration rates and heart rates while playing the 3D games and during the TSSTs. No significant interactions were found. Increases in physiological arousal among the soldiers were significant when comparing the baseline to the immersion and to the TSST, but not when comparing the two stressors. Immersion in 3D games is proposed as a practical and cost-effective option for creating a context that allows practicing SMT.
 
Article
In this work, a multiple-user deep neural network-based non-orthogonal multiple access (NOMA) receiver is investigated in the presence of channel estimation error. Symbol decoding in a NOMA system follows a sequential order, and decoding accuracy depends on the detection of the previous user. Without estimating the throughput, a deep neural network-based NOMA orthogonal frequency division multiplexing (OFDM) system is proposed to decode the symbols from the users. The deep neural network is first trained, and the trained model is then tested on data for various users. The performance of the deep neural network is investigated for various values of signal-to-noise ratio, and the bit error rate (BER) is calculated on a per-subcarrier basis. The simulation results show that the deep neural network is more robust to symbol distortion due to inter-symbol interference and obtains knowledge of the channel state information through data testing.
 
Top-cited authors
Paul K. Davis
  • RAND Corporation
Tag Gon Kim
  • Korea Advanced Institute of Science and Technology
Kyung-Min Seo
  • Korea University of Technology and Education
Michael D. Proctor
  • University of Central Florida
James J. Nutaro
  • Oak Ridge National Laboratory