Article

Making adversary decision modeling tractable with intent inference and information fusion

Dept. of Computer Science & Engineering, UTEB, University of Connecticut, U-155, Storrs, CT 06269-3155
04/2002

ABSTRACT Military and domestic security analysts and planners are facing threats whose asymmetric nature will sharply increase the challenges of establishing an adversary's intent. This complex environment will severely limit the capabilities of the classic doctrinal approach to diagnose adversary activity. Instead, a more dynamic approach is required: adversary decision modeling (ADM), which, while a critical capability, poses a range of daunting technological challenges. We are developing methodologies and tools that represent a tractable approach to ADM using intelligent software-based analysis of adversarial intent. In this paper we present work being performed by our team (University of Connecticut, Lockheed Martin Advanced Technology Laboratories, and the Air Force Research Laboratory Human Effectiveness Directorate) toward a preliminary composite theory of adversary intent and its descriptive models, to provide a coherent conceptual foundation for addressing adversary decision processes, tasks, and functions. We then introduce notional computational models that, given own system-of-systems actions (movements and activities) and observations of an adversary's actions and reactions, automatically generate hypotheses about the adversary's intent. We present a preliminary software architecture that implements the model with: (1) intelligent mobile agents to rapidly and autonomously collect information, (2) information fusion technologies to generate higher-level evidence, and (3) our Intent Inference engine that models interests, preferences, and context.
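The abstract describes an engine that, given observations of an adversary's actions, automatically generates hypotheses about intent. The paper does not specify its internals, but the idea of ranking intent hypotheses against observed actions can be sketched as a simple Bayesian update; all intents, observations, and probabilities below are invented for illustration, and the actual Intent Inference engine models interests, preferences, and context with far richer structures:

```python
# Hypothetical sketch: score candidate adversary intents by updating a
# prior with per-observation likelihoods. Every name and number here is
# an illustrative assumption, not the paper's model.

# Prior belief over candidate adversary intents (analyst-supplied).
priors = {"defend_bridge": 0.5, "flank_west": 0.3, "withdraw": 0.2}

# P(observed action | intent): likelihoods, also invented.
likelihoods = {
    "move_armor_north": {"defend_bridge": 0.7, "flank_west": 0.2, "withdraw": 0.1},
    "jam_comms":        {"defend_bridge": 0.4, "flank_west": 0.5, "withdraw": 0.1},
}

def infer_intent(observations):
    """Return a normalized posterior over intents after the observations."""
    posterior = dict(priors)
    for obs in observations:
        for intent in posterior:
            posterior[intent] *= likelihoods[obs][intent]
    total = sum(posterior.values())
    return {intent: p / total for intent, p in posterior.items()}

ranked = infer_intent(["move_armor_north", "jam_comms"])
print(max(ranked, key=ranked.get))  # the highest-posterior intent hypothesis
```

Each new observation multiplies into the running score, so hypotheses that explain the whole observed sequence rise to the top.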

Available from: Eugene Santos, Dec 18, 2013
    • "The problem is not as much collecting intelligence but translating it into actionable intelligence, which is difficult because current adversary tactics and doctrine change rapidly. We are collecting unprecedented amounts of Joint Directors of Laboratories level 0 and level 1 intelligence but technological limitations have inhibited transforming it into actionable levels 2 and 3 intelligence that includes meaning, such as adversarial intent [4][5]. As Figure 1 highlights, the goal of the Fused Intent System (FIS) is to leverage computational modeling in conjunction with simulation to support this transformation. "
    ABSTRACT: Understanding the intent of today's enemy necessitates changes in intelligence collection, processing, and dissemination. Unlike cold war antagonists, today's enemies operate in small, agile, and distributed cells whose tactics do not map well to established doctrine. This has necessitated a proliferation of advanced sensor and intelligence gathering techniques at level 0 and level 1 of the Joint Directors of Laboratories fusion model. The challenge is in leveraging modeling and simulation to transform the vast amounts of level 0 and level 1 data into actionable intelligence at levels 2 and 3 that includes adversarial intent. Currently, warfighters are flooded with information (facts/observables) regarding what the enemy is presently doing, but are provided inadequate explanations of adversarial intent, and they cannot simulate 'what-if' scenarios to increase their predictive situational awareness. The Fused Intent System (FIS) aims to address these deficiencies by providing an environment that answers 'what' the adversary is doing, 'why' they are doing it, and 'how' they will react to coalition actions. In this paper, we describe our approach to FIS, which includes adversarial 'soft factors' such as goals, rationale, and beliefs within a computational model that infers adversarial intent and allows the insertion of assumptions to be used in conjunction with current battlefield state to perform what-if analysis. Our approach combines ontological modeling for classification and Bayesian-based abductive reasoning for explanation, and has broad applicability to the operational, training, and commercial gaming domains.
    Proceedings of SPIE - The International Society for Optical Engineering 01/2008; DOI:10.1117/12.782203 · 0.20 Impact Factor
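The FIS abstract above combines Bayesian abductive reasoning with analyst-inserted assumptions for what-if analysis. A toy version of that idea: rank adversary goals by how well they explain the evidence, and let an inserted assumption clamp part of the battlefield state. The goals, observables, and numbers are all hypothetical, not taken from FIS:

```python
# Toy abductive what-if sketch (illustrative values only).
goals = {
    # goal: (prior, {observable: P(observable | goal)})
    "ambush":  (0.4, {"roadblock": 0.8, "crowd_dispersal": 0.6}),
    "protest": (0.6, {"roadblock": 0.3, "crowd_dispersal": 0.1}),
}

def best_explanation(evidence, assumptions=None):
    """Rank goals by posterior score; assumptions override observed evidence."""
    state = dict(evidence)
    state.update(assumptions or {})  # what-if: analyst-inserted assumptions
    scores = {}
    for goal, (prior, cond) in goals.items():
        score = prior
        for obs, present in state.items():
            p = cond[obs]
            score *= p if present else (1.0 - p)
        scores[goal] = score
    return max(scores, key=scores.get)

evidence = {"roadblock": True, "crowd_dispersal": True}
print(best_explanation(evidence))
# What-if: assume the crowd-dispersal report is false.
print(best_explanation(evidence, assumptions={"crowd_dispersal": False}))
```

Inserting the assumption can flip which goal best explains the remaining evidence, which is the kind of predictive what-if exploration the abstract describes.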
    • "Systems that rely on symbolic reasoning e.g. [3] [4] [5] have had success in developing models of adversarial plans. However, [3] uses a rule-base to reason about enemy intent and rule-bases are error prone and time consuming to both construct and maintain. "
    ABSTRACT: Intent inferencing is the ability to predict an opposing force's (OPFOR) high level goals. This is accomplished by the interpretation of the OPFOR's disposition, movements, and actions within the context of known OPFOR doctrine and knowledge of the environment. For example, given likely OPFOR force size, composition, disposition, observations of recent activity, obstacles in the terrain, cultural features such as bridges, roads, and key terrain, intent inferencing will be able to predict the opposing force's high level goal and likely behavior for achieving it. This paper describes an algorithm for intent inferencing on an enemy force with track data, recent movements by OPFOR forces across terrain, terrain from a GIS database, and OPFOR doctrine as input. This algorithm uses artificial potential fields to discover field parameters of paths that best relate sensed track data from the movements of individual enemy aggregates to hypothesized goals. Hypothesized goals for individual aggregates are then combined with enemy doctrine to discover the intent of several aggregates acting in concert.
    Information Fusion, 2005 8th International Conference on; 08/2005
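The abstract above scores hypothesized goals by fitting potential-field parameters to sensed track data. A minimal sketch of that idea, assuming an attractive potential toward each candidate goal, repulsion from obstacles, and a crude score counting how many observed track steps move "downhill" in the field; the coordinates, weights, and scoring rule are all invented:

```python
import math

def potential(pos, goal, obstacles, k_rep=1.0):
    """Attractive potential toward the goal plus repulsion from obstacles."""
    x, y = pos
    gx, gy = goal
    u = math.hypot(x - gx, y - gy)      # attraction: distance to goal
    for ox, oy in obstacles:
        d = math.hypot(x - ox, y - oy)
        u += k_rep / max(d, 1e-6)       # repulsion grows near obstacles
    return u

def goal_score(track, goal, obstacles):
    """Fraction of track steps that decrease the potential (move downhill)."""
    downhill = sum(
        potential(b, goal, obstacles) < potential(a, goal, obstacles)
        for a, b in zip(track, track[1:])
    )
    return downhill / (len(track) - 1)

track = [(0, 0), (1, 0), (2, 1), (3, 1)]       # sensed positions over time
obstacles = [(2, 3)]                           # e.g. impassable terrain
candidates = {"bridge": (5, 1), "town": (0, 5)}
best = max(candidates, key=lambda g: goal_score(track, candidates[g], obstacles))
print(best)  # the goal hypothesis most consistent with the track
```

The paper's algorithm goes further, fitting field parameters per aggregate and combining per-aggregate hypotheses with doctrine, but the core relation between tracks, fields, and goal hypotheses is the same shape.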
    • "Lockheed Martin Advanced Technology Laboratories (ATL) is engaged in internal research and development work to assess adversary COAs and thus enable the generation of actionable information for decision makers. ATL is leveraging this work into an Air Force Research Laboratories (AFRL) Information Institute Research Project, 'Adversary Intent Inferencing for Predictive Battlespace Awareness' [1]."
    ABSTRACT: In order to combat the present and future asymmetric threats to national and international security, information fusion developments must progress beyond current Level 1 (Object Refinement) paradigms. By focusing on the challenges of Continuous Intelligence Preparation of the Battlespace (CIPB), Lockheed Martin Advanced Technology Laboratories (ATL) has begun to elicit an infrastructure and enabling technologies for information fusion at Level 2 (Situation Refinement) and Level 3 (Threat Refinement). Our approach to Level 2 is to perform spatial and temporal processing on tracks produced by Level 1 multi-sensor, multi-target fusion supplemented with intelligence information from both structured data sources such as databases, and from unstructured data sources such as text documents. The output of Level 2 is referred to as a factlet. Factlet creation is driven both by the evolution of events in the battlespace and by the top-down information needs of the CIPB analysts working at Level 3. Our approach to Level 3 is to drive existing and newly formulated models of threat behavior, viewed from multiple perspectives, such as political, economic, and tactical, with factlets derived in Level 2, to support the determination of possible enemy courses of action. We leverage existing algorithmic lessons learned from Level 1 Classification Fusion, where each sensor represented track classification from a different, complementary perspective, for factlet or evidence fusion. In addition, an approach to automated threat information discovery is discussed, which is initiated by the need for supporting evidence to further refine the Level 3 inferencing processes. Our approach to Level 2 Situation Refinement and Level 3 Threat Refinement will be demonstrated via an Air Force CIPB scenario.
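The abstract above fuses factlets from complementary perspectives (political, economic, tactical), borrowing from Level 1 classification fusion. One very simple reading of that idea is a weighted combination of per-perspective confidences in each candidate course of action; the `Factlet` structure, perspectives, weights, and scores below are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Factlet:
    perspective: str    # e.g. "tactical" (invented label)
    coa: str            # hypothesized enemy course of action
    confidence: float   # 0..1 belief from this perspective
    weight: float       # analyst trust in this perspective

def fuse(factlets):
    """Weighted-average confidence per course of action across perspectives."""
    totals, weights = {}, {}
    for f in factlets:
        totals[f.coa] = totals.get(f.coa, 0.0) + f.weight * f.confidence
        weights[f.coa] = weights.get(f.coa, 0.0) + f.weight
    return {coa: totals[coa] / weights[coa] for coa in totals}

factlets = [
    Factlet("tactical",  "river_crossing", 0.8, 1.0),
    Factlet("political", "river_crossing", 0.4, 0.5),
    Factlet("tactical",  "feint_south",    0.3, 1.0),
]
fused = fuse(factlets)
print(max(fused, key=fused.get))  # most strongly supported course of action
```

As in the classification-fusion analogy, each perspective plays the role a sensor played at Level 1: an independent, partial view whose evidence is combined rather than trusted alone.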