Making adversary decision modeling tractable with intent inference and information fusion

Dept. of Computer Science & Engineering, UTEB, University of Connecticut, U-155, Storrs, CT 06269-3155


Military and domestic security analysts and planners are facing threats whose asymmetric nature will sharply increase the challenges of establishing an adversary's intent. This complex environment will severely limit the capabilities of the classic doctrinal approach to diagnosing adversary activity. Instead, a more dynamic approach is required: adversary decision modeling (ADM), which, while a critical capability, poses a range of daunting technological challenges. We are developing methodologies and tools that represent a tractable approach to ADM using intelligent software-based analysis of adversarial intent. In this paper we present work performed by our team (University of Connecticut, Lockheed Martin Advanced Technology Laboratories, and the Air Force Research Laboratory Human Effectiveness Directorate) toward a preliminary composite theory of adversary intent and its descriptive models, which provide a coherent conceptual foundation for addressing adversary decision processes, tasks, and functions. We then introduce notional computational models that, given own system-of-systems actions (movements and activities) and observations of an adversary's actions and reactions, automatically generate hypotheses about the adversary's intent. We present a preliminary software architecture that implements the model with: (1) intelligent mobile agents that rapidly and autonomously collect information, (2) information fusion technologies that generate higher-level evidence, and (3) our Intent Inference engine, which models interests, preferences, and context.
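The three-stage architecture described above (mobile-agent collection, fusion into higher-level evidence, and intent inference) can be sketched as a minimal pipeline. All names, reliability weights, and the toy intent model below are illustrative assumptions, not the paper's implementation:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Observation:
    source: str    # which collection agent reported it
    action: str    # observed adversary action
    weight: float  # reliability of the reporting agent

def collect(reports):
    """Stage 1 (mobile agents): package raw reports as observations."""
    return [Observation(*r) for r in reports]

def fuse(observations):
    """Stage 2 (information fusion): aggregate reliability-weighted
    evidence for each observed action into a single score."""
    evidence = Counter()
    for obs in observations:
        evidence[obs.action] += obs.weight
    return dict(evidence)

def infer_intent(evidence, intent_model):
    """Stage 3 (intent inference): score each hypothesized intent by
    the fused evidence for the actions it predicts."""
    scores = {intent: sum(evidence.get(a, 0.0) for a in actions)
              for intent, actions in intent_model.items()}
    return max(scores, key=scores.get), scores

# Toy model: each hypothesized intent maps to the actions it predicts.
INTENTS = {
    "flank_attack": ["move_west", "mass_armor"],
    "withdrawal": ["move_east", "destroy_bridges"],
}

reports = [("uav_1", "move_west", 0.9),
           ("sigint_2", "mass_armor", 0.7),
           ("uav_3", "move_east", 0.2)]
best, scores = infer_intent(fuse(collect(reports)), INTENTS)
print(best)  # -> flank_attack
```

The point of the sketch is the separation of concerns: collection agents know nothing about intent, and the inference stage sees only fused evidence, never raw reports.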



Available from: Eugene Santos, Dec 18, 2013
  • Source
    • "The problem is not so much collecting intelligence as translating it into actionable intelligence, which is difficult because current adversary tactics and doctrine change rapidly. We are collecting unprecedented amounts of Joint Directors of Laboratories level 0 and level 1 intelligence, but technological limitations have inhibited transforming it into actionable levels 2 and 3 intelligence that includes meaning, such as adversarial intent [4][5]. As Figure 1 highlights, the goal of the Fused Intent System (FIS) is to leverage computational modeling in conjunction with simulation to support this transformation. "
    ABSTRACT: Understanding the intent of today's enemy necessitates changes in intelligence collection, processing, and dissemination. Unlike cold war antagonists, today's enemies operate in small, agile, and distributed cells whose tactics do not map well to established doctrine. This has necessitated a proliferation of advanced sensor and intelligence-gathering techniques at level 0 and level 1 of the Joint Directors of Laboratories fusion model. The challenge is in leveraging modeling and simulation to transform the vast amounts of level 0 and level 1 data into actionable intelligence at levels 2 and 3 that includes adversarial intent. Currently, warfighters are flooded with information (facts/observables) regarding what the enemy is presently doing, but are provided inadequate explanations of adversarial intent and cannot simulate 'what-if' scenarios to increase their predictive situational awareness. The Fused Intent System (FIS) aims to address these deficiencies by providing an environment that answers 'what' the adversary is doing, 'why' they are doing it, and 'how' they will react to coalition actions. In this paper, we describe our approach to FIS, which includes adversarial 'soft factors' such as goals, rationale, and beliefs within a computational model that infers adversarial intent and allows the insertion of assumptions to be used in conjunction with current battlefield state to perform what-if analysis. Our approach combines ontological modeling for classification with Bayesian-based abductive reasoning for explanation and has broad applicability to the operational, training, and commercial gaming domains.
    Full-text · Article · Apr 2008 · Proceedings of SPIE - The International Society for Optical Engineering
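The Bayesian abductive reasoning that the FIS abstract describes can be illustrated with a toy example: given a set of observables, rank intent hypotheses by posterior probability. The priors, likelihood tables, and hypothesis names below are invented for illustration and are not from FIS:

```python
import math

# Invented priors and P(observable | intent) tables, for illustration.
PRIOR = {"ambush": 0.4, "retreat": 0.6}
LIKELIHOOD = {
    "ambush":  {"radio_silence": 0.8, "dispersal": 0.6},
    "retreat": {"radio_silence": 0.3, "dispersal": 0.7},
}

def abduce(observed):
    """Rank intent hypotheses by posterior probability:
    P(intent | obs) is proportional to P(intent) * prod P(obs_i | intent)."""
    log_post = {}
    for intent, prior in PRIOR.items():
        lp = math.log(prior)
        for obs in observed:
            lp += math.log(LIKELIHOOD[intent][obs])
        log_post[intent] = lp
    shift = max(log_post.values())  # stabilize before exponentiating
    weights = {h: math.exp(lp - shift) for h, lp in log_post.items()}
    total = sum(weights.values())
    return sorted(((h, w / total) for h, w in weights.items()),
                  key=lambda hw: -hw[1])

ranking = abduce(["radio_silence", "dispersal"])
print(ranking[0][0])  # -> ambush (0.192 vs 0.126 before normalizing)
```

Working in log space and normalizing at the end is the standard way to keep long products of small probabilities from underflowing, which matters once the observable set grows beyond a toy example.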
  • Source
    • "Systems that rely on symbolic reasoning, e.g. [3][4][5], have had success in developing models of adversarial plans. However, [3] uses a rule base to reason about enemy intent, and rule bases are error-prone and time-consuming to both construct and maintain. "
    ABSTRACT: Intent inferencing is the ability to predict an opposing force's (OPFOR) high level goals. This is accomplished by the interpretation of the OPFOR's disposition, movements, and actions within the context of known OPFOR doctrine and knowledge of the environment. For example, given likely OPFOR force size, composition, disposition, observations of recent activity, obstacles in the terrain, cultural features such as bridges, roads, and key terrain, intent inferencing will be able to predict the opposing force's high level goal and likely behavior for achieving it. This paper describes an algorithm for intent inferencing on an enemy force with track data, recent movements by OPFOR forces across terrain, terrain from a GIS database, and OPFOR doctrine as input. This algorithm uses artificial potential fields to discover field parameters of paths that best relate sensed track data from the movements of individual enemy aggregates to hypothesized goals. Hypothesized goals for individual aggregates are then combined with enemy doctrine to discover the intent of several aggregates acting in concert.
    Full-text · Conference Paper · Aug 2005
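The potential-field approach in the abstract above can be sketched in a simplified form. The paper fits field parameters to sensed track data; the sketch below reduces a purely attractive field to measuring how well each observed movement step aligns with the direction toward a hypothesized goal. All coordinates and goal names are invented:

```python
import math

def unit(v):
    """Normalize a 2-D vector; the zero vector maps to (0, 0)."""
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n) if n else (0.0, 0.0)

def goal_score(track, goal):
    """Score a hypothesized goal by how well each observed movement
    step aligns with the attractive-field pull (straight toward the
    goal). Returns mean cosine similarity over steps; 1.0 = perfect."""
    total, steps = 0.0, 0
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        step = unit((x1 - x0, y1 - y0))
        pull = unit((goal[0] - x0, goal[1] - y0))
        total += step[0] * pull[0] + step[1] * pull[1]
        steps += 1
    return total / steps if steps else 0.0

def infer_goal(track, goals):
    """Return the hypothesized goal that best explains the track."""
    return max(goals, key=lambda g: goal_score(track, g))

# Track heading roughly northeast; candidate goals: bridge vs. town.
track = [(0, 0), (1, 1), (2, 2), (3, 2.5)]
bridge, town = (5, 5), (5, -5)
print(infer_goal(track, [bridge, town]))  # -> (5, 5), the bridge
```

A real implementation would also add repulsive terms for terrain obstacles and fit field strengths to the data, but the hypothesis-ranking step works the same way: the goal whose induced field best explains the sensed movements wins.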
  • Source
    • "Understanding information assurance as a uniform, holistic system is extremely difficult. It depends on a great many interactions between different cyber warfare processes and is determined by the dynamic character of these processes and of the different components of computer systems (Bell and Santos 2002). This is especially true as the Internet evolves into a free, decentralized, distributed environment in which a huge number of cooperating and antagonistic software components (agents) exchange large volumes of information and services among themselves and with people (Information Dynamics and Emergent Behavior 2001; Kephart et al. 1998). "
    ABSTRACT: The paper considers an approach to modeling and simulation of cyber wars on the Internet between teams of software agents. Each team is a community of agents cloned on various network hosts. The approach is illustrated with an example of modeling and simulation of Distributed Denial of Service (DDoS) attacks and protection against them. Agents of different teams compete to achieve antagonistic intentions; agents of the same team cooperate to realize joint intentions. Ontologies of DDoS attacks and of mechanisms for protecting against them are described. Variants of agent team structures, mechanisms of their interaction and coordination, and specifications of the hierarchy of action plans, as well as the developed software prototypes, are presented.
    Full-text · Article · Jan 2005
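The competing agent teams in the abstract above can be caricatured in a few lines: attack agents generate flooding traffic, defense agents rate-limit it, and the server survives a tick only if the admitted traffic stays within its capacity. The classes, rates, and capacities below are illustrative assumptions, not the paper's prototypes:

```python
class AttackAgent:
    """Member of the attacking team: floods the target each tick."""
    def __init__(self, rate):
        self.rate = rate  # requests generated per tick

    def act(self):
        return self.rate

class DefenseAgent:
    """Member of the defending team: rate-limits incoming traffic."""
    def __init__(self, limit):
        self.limit = limit  # max requests admitted per tick

    def filter(self, traffic):
        return min(traffic, self.limit)

def simulate(attackers, defenders, capacity, ticks):
    """Count the ticks the server stays up: it fails on any tick where
    traffic admitted past the defenders exceeds its capacity."""
    up = 0
    for _ in range(ticks):
        traffic = sum(a.act() for a in attackers)
        for d in defenders:
            traffic = d.filter(traffic)
        if traffic <= capacity:
            up += 1
    return up

attackers = [AttackAgent(rate=40) for _ in range(5)]  # 200 req/tick
undefended = simulate(attackers, [], capacity=100, ticks=10)
defended = simulate(attackers, [DefenseAgent(limit=90)],
                    capacity=100, ticks=10)
print(undefended, defended)  # -> 0 10
```

Even this caricature shows the adversarial dynamic the paper studies: each team's payoff depends entirely on the other team's parameters, so realistic simulations must co-evolve attacker rates and defender limits rather than fix either side.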