Lars Niklasson

University of Skövde, Skövde, Västra Götaland, Sweden

Publications (90) · 5.21 Total Impact

  • Tina Erlandsson, Lars Niklasson
    ABSTRACT: Highlights:
    • A survivability model is suggested that describes the risks of flying a route.
    • The model captures the dependency between getting tracked and getting hit.
    • An expected cost measure for route evaluation is suggested.
    • Simulations show the influence of uncertainties regarding the enemy's locations.
    Information Fusion 01/2014; 20:88–98. · 3.47 Impact Factor
  • T. Erlandsson, L. Niklasson
    ABSTRACT: A fighter pilot flying an air mission within hostile territory is exposed to the risk of getting hit by enemy weapons. This paper presents a survivability model that can be used for calculating this risk and assessing the threat to the mission. The model consists of two components: a tracking model and a fire model. The tracking model calculates the probabilities that the enemy is tracking the aircraft and has identified it as hostile. The enemy's decision to fire a weapon depends on this probability, but also on the enemy's intention to hit the aircraft. The fire model estimates the aircraft's threat value, which describes how much threat the aircraft poses to the enemy's assets and influences the probability that a weapon is fired. Furthermore, the weapons' opportunities to hit the aircraft depend on the relative geometry between the aircraft's velocity vector and the weapon launch positions. A reconnaissance mission is used to illustrate the model, and the simulations show that the model enables a deeper analysis of the mission than previous approaches.
    Information Fusion (FUSION), 2013 16th International Conference on; 01/2013
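The two-component structure described in the abstract (tracking probability feeding a fire model, with a hit probability per threat) can be caricatured as a product of per-threat survival probabilities. The function name, the independence assumption between threats, and all numbers below are illustrative stand-ins, not the paper's actual formulation:

```python
# Hypothetical sketch of the survivability idea: each threat contributes a
# chance of a hit via track -> fire -> hit, and the segment survivability
# is the complement, assuming (for illustration only) independent threats.

def segment_survivability(threats):
    """threats: list of (p_tracked, p_fire_given_tracked, p_hit_given_fired).

    Returns the probability of not being hit on this route segment.
    """
    p_survive = 1.0
    for p_track, p_fire, p_hit in threats:
        p_survive *= 1.0 - p_track * p_fire * p_hit
    return p_survive

# Example: two SAM sites with different tracking/fire characteristics.
risk_free = segment_survivability([])  # no threats -> survivability 1.0
guarded = segment_survivability([(0.8, 0.5, 0.6), (0.3, 0.9, 0.4)])
```

Comparing such values across candidate routes is the kind of mission-level analysis the model is meant to enable.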
  • ABSTRACT: Military organizations have a long history of using simulations, role-play, and games for training. This also encompasses good practices concerning how instructors utilize games and gaming behavior. Unfortunately, the work of instructors is rarely described explicitly in research relating to serious gaming. Decision makers also tend to have overconfidence in the pedagogical power of games and simulations, particularly where the instructor is taken out of the gaming loop. The authors propose a framework, the coaching cycle, that focuses on the roles of instructors. The roles include instructors acting as game players. Having instructors take a more active part in all training activities will further improve learning. The coaching cycle integrates theories of experiential learning (where action precedes theory) and deliberate practice (where the trainee's skill is constantly challenged by a coach). Incorporating a coaching-by-gaming perspective complicates, but also strengthens, the player-centered design approach to game development in that we need to take into account two different types of players: trainees and instructors. Furthermore, the authors argue that the coaching cycle allows for a shift of focus to a more thorough debriefing, because it implies that learning of theoretical material before simulation/game playing is kept to a minimum. This shift will increase the transfer of knowledge.
    Simulation & Gaming 10/2012; 43(5):648-672.
  • T. Erlandsson, L. Niklasson
    ABSTRACT: The aim of situation analysis is to assess the relevant objects in the surroundings and interpret their relations and their impact in order for a decision maker to achieve situation awareness and be able to make suitable decisions. However, the information regarding the relevant objects is typically uncertain, which will induce uncertainty in the result of the situation analysis. If the kinematic states of the objects are estimated with a tracking filter, the estimates can be considered as random variables. Furthermore, the situation analysis algorithm is a function of these estimates, entailing that the result of the situation analysis is a random variable. This paper studies the fighter aircraft domain and a situation analysis algorithm that calculates the combat survivability, i.e., the probability that the aircraft can fly a route inside hostile territory without getting hit by enemy fire. The survivability of different routes can be compared in order to decide where to fly. However, the uncertainties regarding the threats' positions imply that the survivability is uncertain and can be described as a random variable with a distribution. The unscented transform (UT) is here used for calculating the mean and standard deviation (std) of the survivability in a few scenarios with threats located on the ground. Simulations show that the position uncertainties affect both the mean and std of the survivability and that UT gives estimates similar to those of a Monte Carlo (MC) approach. UT therefore seems to be a promising approach for calculating the uncertainty in the survivability, as it is more computationally efficient than MC.
    Information Fusion (FUSION), 2012 15th International Conference on; 01/2012
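The UT-versus-MC comparison in the abstract can be illustrated in one dimension: propagate an uncertain threat position through a nonlinear function and compare the two estimates of mean and std. The toy "survivability" function `g`, the sigma-point parameter `kappa`, and all numbers are inventions for the sketch; the paper's actual survivability computation is far richer:

```python
import math
import random

def unscented_mean_std(mean, std, g, kappa=2.0):
    """Classic symmetric sigma-point set for a scalar Gaussian input."""
    n = 1
    spread = math.sqrt(n + kappa) * std
    points = [mean, mean + spread, mean - spread]
    weights = [kappa / (n + kappa), 1.0 / (2 * (n + kappa)), 1.0 / (2 * (n + kappa))]
    ys = [g(x) for x in points]
    m = sum(w * y for w, y in zip(weights, ys))
    var = sum(w * (y - m) ** 2 for w, y in zip(weights, ys))
    return m, math.sqrt(var)

def monte_carlo_mean_std(mean, std, g, n=50_000, seed=1):
    """Brute-force reference: sample, transform, and take empirical moments."""
    rng = random.Random(seed)
    ys = [g(rng.gauss(mean, std)) for _ in range(n)]
    m = sum(ys) / n
    var = sum((y - m) ** 2 for y in ys) / n
    return m, math.sqrt(var)

# Toy risk curve: being farther from the threat (larger |x|) is safer.
g = lambda x: 1.0 - math.exp(-abs(x) / 10.0)

ut = unscented_mean_std(5.0, 2.0, g)   # 3 function evaluations
mc = monte_carlo_mean_std(5.0, 2.0, g)  # 50,000 function evaluations
```

The point of the comparison is the cost asymmetry: UT needs only a handful of evaluations of the survivability function, while MC needs thousands, yet the two moment estimates stay close for smooth functions.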
  • Source
    ABSTRACT: A fighter aircraft flying a mission is often exposed to ground-based threats such as surface-to-air missile (SAM) sites. The fighter pilot needs to take actions to minimize the risk of being shot down, but at the same time be able to accomplish the mission. In this paper we propose a survivability model, which describes the probability that the aircraft will be able to fly a given route without being hit by incoming missiles. Input to this model can consist of sensor measurements collected during flight as well as intelligence data gathered before the mission. This input is by nature uncertain and we therefore investigate the influence of uncertainty in the input to the model. Finally we propose a number of decision support functions that can be developed based on the suggested model such as countermeasure management, mission planning and sensor management.
    Information Fusion (FUSION), 2011 Proceedings of the 14th International Conference on; 08/2011
  • Source
    ABSTRACT: Military missions in the 21st century are characterized by combinations of traditional symmetric conventional warfare, irregular warfare, and operations other than war. The inherent uncertainty in an actual mission and the variety of potential organizations (e.g. multi-agency, non-governmental, private volunteer, international, international corporations) from several countries that support the mission make collaboration and co-ordination a key capability for command and control. The ability to communicate and automatically process intent and effects is vital for a commander to cooperate with other organizations and agencies and lead subordinates in such a way that the overall mission is completed in the best possible way, including the exploitation of fleeting opportunities, i.e. enabling self-synchronization amongst teams and allowing for subordinate initiatives. However, intent and effects are often absent in current and forthcoming digitalized information models, and where they are present, the representations are typically free-text fields based on natural language. Such messages are very difficult to disambiguate, particularly for automated machine systems. The overall objective of the Operations Intent and Effects Model is to support operational and simulated systems with a conceptual intent and effects model and a formalism that is both human and machine interpretable.
    The Journal of Defense Modeling and Simulation: Applications, Methodology, Technology. 01/2011; 8(1):37-59.
  • ABSTRACT: Military fighter pilots have to make suitable decisions quickly in an environment that provides continuously increasing flows of information from sensors, team members and databases. The huge amounts of data are not the only factor aggravating the pilots' decision making: time pressure, uncertain data and high workload can also worsen performance. In this paper, initial ideas on how to support the pilots in accomplishing their tasks are presented. Results from interviews with two fighter pilots are described, as well as a discussion of how these results can guide the design of a military fighter pilot decision support system, with focus on team cooperation.
    Proc SPIE 10/2010;
  • Source
    ABSTRACT: We extend the State-Based Anomaly Detection approach by introducing precise and imprecise anomaly detectors using the Bayesian and credal combination operators, where evidences over time are combined into a joint evidence. We use imprecision in order to represent the sensitivity of the classification regarding an object being normal or anomalous. We evaluate the detectors on a real-world maritime dataset containing recorded AIS data and show that the anomaly detectors outperform previously proposed detectors based on Gaussian mixture models and kernel density estimators. We also show that our introduced anomaly detectors perform slightly better than the State-Based Anomaly Detection approach with a sliding window.
    Information Fusion (FUSION), 2010 13th Conference on; 08/2010
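The precise (Bayesian) combination of evidence over time that the abstract mentions boils down to repeated application of Bayes' rule. The sketch below, with invented likelihood values, shows the core update in odds form; the paper's detector operates on state-based features of real AIS tracks, and its credal (imprecise) variant propagates sets of such posteriors:

```python
# Minimal sketch of fusing per-timestep evidence with Bayes' rule,
# assuming conditional independence of observations over time.

def combine_evidence(prior_anom, likelihood_pairs):
    """likelihood_pairs: per-timestep (p(obs|anomalous), p(obs|normal)).

    Returns the posterior probability that the object is anomalous
    after all observations have been fused.
    """
    odds = prior_anom / (1.0 - prior_anom)
    for l_anom, l_norm in likelihood_pairs:
        odds *= l_anom / l_norm  # multiply in each likelihood ratio
    return odds / (1.0 + odds)

# Three observations, each twice as likely under the anomalous model:
# a 10% prior climbs to roughly 47%.
posterior = combine_evidence(0.1, [(0.4, 0.2)] * 3)
```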
  • Source
    ABSTRACT: In the military aviation domain, the decision maker, i.e. the pilot, often has to process huge amounts of information in order to make correct decisions. This is further aggravated by factors such as time-pressure, high workload and the presence of uncertain information. A support system that aids the pilot to achieve his/her goals has long been considered vital for performance progress in military aviation. Research programs within the domain have studied such support systems, though focus has not been on team collaboration. Based on identified challenges of assessing team situation awareness we suggest an approach to future military aviation support systems based on information fusion. In contrast to most previous work in this area, focus is on supporting team situation awareness, including team threat evaluation. To deal with these challenges, we propose the development of a situational adapting system, which presents information and recommendations based on the current situation.
    Information Fusion (FUSION), 2010 13th Conference on; 08/2010
  • Source
    ABSTRACT: This paper presents a novel hybrid method combining genetic programming and decision tree learning. The method starts by estimating a benchmark level of reasonable accuracy, based on decision tree performance on bootstrap samples of the training set. Next, a normal GP evolution is started with the aim of producing an accurate GP. At even intervals, the best GP in the population is evaluated against the accuracy benchmark. If the GP has higher accuracy than the benchmark, the evolution continues normally until the maximum number of generations is reached. If the accuracy is lower than the benchmark, two things happen. First, the fitness function is modified to allow larger GPs, able to represent more complex models. Second, a decision tree with increased size and trained on a bootstrap of the training data is injected into the population. The experiments show that the hybrid solution of injecting decision trees into a GP population gives synergistic effects, producing results that are better than using either technique separately. The results, from 18 UCI data sets, show that the proposed method clearly outperforms normal GP, and is significantly better than the standard decision tree algorithm.
    Evolutionary Computation (CEC), 2010 IEEE Congress on; 08/2010
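The control loop described in the abstract (evolve, periodically check against the benchmark, and on a shortfall relax the size limit and inject a tree) can be sketched schematically. Every callable passed in below is a stand-in; the real system evolves actual decision trees with genetic programming on real training data:

```python
# Schematic of the hybrid GP / decision-tree control loop. All functional
# arguments are stubs standing in for real GP machinery.

def hybrid_evolve(population, evolve_step, best_accuracy, benchmark,
                  make_injected_tree, generations=50, check_every=10):
    size_limit = 10  # illustrative size cap used by the fitness function
    for gen in range(1, generations + 1):
        population = evolve_step(population, size_limit)
        if gen % check_every == 0 and best_accuracy(population) < benchmark:
            size_limit *= 2                          # allow more complex GPs
            population.append(make_injected_tree(size_limit))
    return population

# Toy run: individuals are represented by their accuracy alone. The stalled
# population (best accuracy 0.5) gets one injected tree at the first check.
pool = hybrid_evolve(
    population=[0.5],
    evolve_step=lambda pop, limit: pop,       # stub: no real evolution
    best_accuracy=max,
    benchmark=0.9,
    make_injected_tree=lambda limit: 0.95,    # stub: an accurate tree
)
```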
  • Source
    Anders Dahlbom, Lars Niklasson, Göran Falkman
    ABSTRACT: Situation recognition is an important problem within the surveillance domain. It addresses the problem of recognizing a priori defined patterns of interesting situations, which may be of a concurrent and temporal nature, possibly occurring in the present flow of data and information. There may be many viable approaches, with different properties, for addressing this problem; however, they must all offer good efficiency and high performance. In order to determine whether a potential solution has these properties, access to test and development environments is a necessity. In this paper we present DESIRER, a development environment for working with situation recognition, and for evaluating and comparing different approaches.
    01/2010;
  • Source
    Rikard König, Ulf Johansson, Lars Niklasson
    01/2010;
  • Source
    Anders Dahlbom, Lars Niklasson, Göran Falkman
    Proceedings of the 2010 International Conference on Genetic and Evolutionary Methods, GEM 2010, July 12-15, 2010, Las Vegas Nevada, USA; 01/2010
  • Source
    Ulf Johansson, Rikard König, Lars Niklasson
    ABSTRACT: Most highly accurate predictive modeling techniques produce opaque models. When comprehensible models are required, rule extraction is sometimes used to generate a transparent model based on the opaque one. Naturally, the extracted model should be as similar as possible to the opaque one. This criterion, called fidelity, is therefore a key part of the optimization function in most rule extraction algorithms. To the best of our knowledge, all existing rule extraction algorithms targeting fidelity use 0/1 fidelity, i.e., maximize the number of identical classifications. In this paper, we suggest and evaluate a rule extraction algorithm utilizing a more informed fidelity criterion. More specifically, the novel algorithm, which is based on genetic programming, minimizes the difference in probability estimates between the extracted and the opaque models by using the generalized Brier score as fitness function. Experimental results from 26 UCI data sets show that the suggested algorithm obtained considerably higher accuracy and significantly better AUC than both the exact same rule extraction algorithm maximizing 0/1 fidelity and the standard tree inducer J48. Somewhat surprisingly, rule extraction using the more informed fidelity metric normally resulted in less complex models, ensuring that the improved predictive performance was not achieved at the expense of comprehensibility.
    Genetic and Evolutionary Computation Conference, GECCO 2010, Proceedings, Portland, Oregon, USA, July 7-11, 2010; 01/2010
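The contrast between the two fidelity criteria can be made concrete: 0/1 fidelity counts identical classifications, while a Brier-style fidelity compares full probability vectors. The tiny example data is invented, and the functions are simplified stand-ins for the paper's fitness function:

```python
# Two fidelity criteria for rule extraction, sketched on toy data.

def fidelity_01(extracted_labels, opaque_labels):
    """Fraction of instances classified identically by the two models."""
    n = len(extracted_labels)
    return sum(a == b for a, b in zip(extracted_labels, opaque_labels)) / n

def brier_fidelity(extracted_probs, opaque_probs):
    """Mean squared difference between per-class probability vectors;
    lower means the extracted model better mimics the opaque one."""
    total, n = 0.0, len(extracted_probs)
    for p, q in zip(extracted_probs, opaque_probs):
        total += sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    return total / n

# Both candidates predict class 0, so 0/1 fidelity cannot separate them,
# but the Brier criterion prefers the one whose probabilities match.
opaque = [[0.9, 0.1]]
sharp, hedged = [[0.95, 0.05]], [[0.55, 0.45]]
```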
  • Source
    ABSTRACT: When predictive modeling requires comprehensible models, most data miners will use specialized techniques producing rule sets or decision trees. This study, however, shows that genetically evolved decision trees may very well outperform the more specialized techniques. The proposed approach evolves a number of decision trees and then uses one of several suggested selection strategies to pick one specific tree from that pool. The inherent inconsistency of evolution makes it possible to evolve each tree using all data, and still obtain somewhat different models. The main idea is to use these quite accurate and slightly diverse trees to form an imaginary ensemble, which is then used as a guide when selecting one specific tree. Simply put, the tree classifying the largest number of instances identically to the ensemble is chosen. In the experimentation, using 25 UCI data sets, two selection strategies obtained significantly higher accuracy than the standard rule inducer J48.
    Genetic Programming, 13th European Conference, EuroGP 2010, Istanbul, Turkey, April 7-9, 2010. Proceedings; 01/2010
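The selection idea described above (pick the single tree agreeing most with the imaginary ensemble's majority vote) is simple enough to sketch directly. Trees are represented here only by their prediction vectors; the actual method evolves real decision trees:

```python
from collections import Counter

# Sketch of imaginary-ensemble selection: build the ensemble's majority
# vote per instance, then return the index of the most agreeing tree.

def select_tree(prediction_vectors):
    n_instances = len(prediction_vectors[0])
    ensemble = [
        Counter(tree[i] for tree in prediction_vectors).most_common(1)[0][0]
        for i in range(n_instances)
    ]
    def agreement(tree):
        return sum(p == e for p, e in zip(tree, ensemble))
    return max(range(len(prediction_vectors)),
               key=lambda k: agreement(prediction_vectors[k]))

trees = [
    [0, 1, 1, 0],   # agrees with the majority vote on all four instances
    [0, 1, 0, 0],
    [1, 1, 1, 0],
]
best = select_tree(trees)   # index of the most ensemble-like tree
```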
  • Source
    Ulf Johansson, Rikard König, Lars Niklasson
    ABSTRACT: Both theory and a wealth of empirical studies have established that ensembles are more accurate than single predictive models. For the ensemble approach to work, base classifiers must not only be accurate but also diverse, i.e., they should commit their errors on different instances. Instance-based learners are, however, very robust with respect to variations of a data set, so standard resampling methods will normally produce only limited diversity. Because of this, instance-based learners are rarely used as base classifiers in ensembles. In this chapter, we introduce a method where genetic programming is used to generate kNN base classifiers with optimized k-values and feature weights. Due to the inherent inconsistency in genetic programming (i.e., different runs using identical data and parameters will still produce different solutions) a group of independently evolved base classifiers tend to be not only accurate but also diverse. In the experimentation, using 30 data sets from the UCI repository, two slightly different versions of kNN ensembles are shown to significantly outperform both the corresponding base classifiers and standard kNN with optimized k-values, with respect to accuracy and AUC.
    12/2009: pages 299-313;
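A kNN base classifier with per-feature weights and an optimized k, the kind of individual the chapter evolves with genetic programming, might look as follows. The training points, weights, and parameter values are invented for illustration:

```python
import math
from collections import Counter

# Hypothetical kNN base classifier with a weighted Euclidean distance.

def weighted_knn(train, query, k, weights):
    """train: list of (feature_vector, label). Majority vote over the
    k nearest neighbors under a feature-weighted Euclidean distance."""
    def dist(x):
        return math.sqrt(sum(w * (a - b) ** 2
                             for w, a, b in zip(weights, x, query)))
    nearest = sorted(train, key=lambda pair: dist(pair[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [([0.0, 0.0], "a"), ([0.1, 5.0], "a"),
         ([1.0, 0.1], "b"), ([1.1, 4.9], "b")]
# Weighting feature 0 heavily makes the first coordinate decide the class.
label = weighted_knn(train, [0.05, 2.0], k=3, weights=[10.0, 0.1])
```

Because different GP runs evolve different (k, weights) combinations, a group of such classifiers tends to be diverse, which is exactly what resampling fails to deliver for instance-based learners.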
  • Source
    C. Brax, L. Niklasson, R. Laxhammar
    ABSTRACT: The increased societal need for surveillance and the decrease in cost of sensors have led to a number of new challenges. The problem is not to collect data but to use it effectively for decision support. Manual interpretation of huge amounts of data in real-time is not feasible; the operator of a surveillance system needs support to analyze and understand all incoming data. In this paper an approach to intelligent video surveillance is presented, with emphasis on finding behavioural anomalies. Two different anomaly detection methods are compared and combined. The results show that total detection performance is best increased by combining two different anomaly detectors rather than employing either one independently.
    Information Fusion, 2009. FUSION '09. 12th International Conference on; 08/2009
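The idea of combining detectors rather than using either alone can be shown with a minimal fusion rule. Both the detectors and the rule (flag if either score exceeds its threshold) are illustrative; the paper evaluates concrete detectors on surveillance data:

```python
# Toy fusion of two anomaly detectors: flag if either one fires.

def combined_detector(score_a, score_b, thresh_a, thresh_b):
    """Returns True if either detector's anomaly score exceeds its threshold."""
    return score_a > thresh_a or score_b > thresh_b

# Detector A misses this observation but detector B catches it.
flagged = combined_detector(score_a=0.4, score_b=0.9,
                            thresh_a=0.5, thresh_b=0.7)
```

Such a disjunctive rule raises detection rate at the cost of more false alarms; the trade-off is what the paper's comparison quantifies.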
  • Source
    U. Johansson, L. Niklasson
    ABSTRACT: Some data mining problems require predictive models to be not only accurate but also comprehensible. Comprehensibility enables human inspection and understanding of the model, making it possible to trace why individual predictions are made. Since most high-accuracy techniques produce opaque models, accuracy is, in practice, regularly sacrificed for comprehensibility. One frequently studied technique, often able to reduce this accuracy vs. comprehensibility tradeoff, is rule extraction, i.e., the activity where another, transparent, model is generated from the opaque one. In this paper, it is argued that techniques producing transparent models, either directly from the dataset or from an opaque model, could benefit from using an oracle guide. In the experiments, genetic programming is used to evolve decision trees, and a neural network ensemble is used as the oracle guide. More specifically, the datasets used by the genetic programming when evolving the decision trees consist of several different combinations of the original training data and "oracle data", i.e., training or test data instances, together with corresponding predictions from the oracle. In total, seven different ways of combining regular training data with oracle data were evaluated, and the results, obtained on 26 UCI datasets, clearly show that the use of an oracle guide improved the performance. As a matter of fact, trees evolved using training data only had the worst test set accuracy of all setups evaluated. Furthermore, statistical tests show that two setups, both using the oracle guide, produced significantly more accurate trees, compared to the setup using training data only.
    Computational Intelligence and Data Mining, 2009. CIDM '09. IEEE Symposium on; 05/2009
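The "oracle data" construction amounts to augmenting the training set with instances labeled by the opaque model. The helper below is a hypothetical stand-in for one of the seven evaluated combinations; names and data are invented:

```python
# Sketch of building an oracle-guided dataset: original (x, y) pairs plus
# extra instances labeled by the opaque oracle model (here a stub lambda).

def oracle_guided_dataset(train, extra_instances, oracle_predict):
    """train: list of (x, y). Returns train plus oracle-labeled instances."""
    return train + [(x, oracle_predict(x)) for x in extra_instances]

oracle = lambda x: "pos" if x >= 0 else "neg"   # stand-in for an ensemble
combined = oracle_guided_dataset([(-2, "neg"), (3, "pos")], [1, -1], oracle)
```

The transparent model is then induced on `combined` instead of `train`, which is how the oracle's knowledge of the unlabeled instances guides the evolution.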
  • Anders Dahlbom, Lars Niklasson, Göran Falkman
    ABSTRACT: Research on information fusion and situation management within the military domain is often focused on data-driven approaches for aiding decision makers in achieving situation awareness. We have, in a companion paper, identified situation recognition as an important topic for further studies on knowledge-driven approaches. When developing new algorithms it is of utmost importance to have data for studying the problem at hand (as well as for evaluation purposes). This often becomes a problem within the military domain, as the high level of secrecy results in a lack of data, and one often needs to resort to artificial data instead. Many tools and simulation environments can be used for constructing scenarios in virtual worlds. Most of these, however, are data-centered: their purpose is to simulate the real world as accurately as possible, in contrast to simulating complex scenarios. In high-level information fusion we can, however, often assume that lower-level problems have already been solved - thus the separation of abstraction - and we should instead focus on solving problems concerning complex relationships, i.e. situations and threats. In this paper we discuss requirements that research on situation recognition puts on simulation tools. Based on these requirements we present a component-based simulator for quickly adapting the simulation environment to the needs of the research problem at hand. This is achieved by defining new components that define behaviors of entities in the simulated world. © 2009 SPIE, The International Society for Optical Engineering.
    04/2009;
  • ABSTRACT: Although Genetic Programming (GP) is a very general technique, it is also quite powerful. As a matter of fact, GP has often been shown to outperform more specialized techniques on a variety of tasks. In data mining, GP has successfully been applied to most major tasks, e.g. classification, regression and clustering. In this chapter, we introduce, describe and evaluate a straightforward novel algorithm for post-processing genetically evolved decision trees. The algorithm works by iteratively, one node at a time, searching for possible modifications that will result in higher accuracy. More specifically, for each interior test, the algorithm evaluates every possible split for the current attribute and chooses the best. With this design, the post-processing algorithm can only increase training accuracy, never decrease it. In the experiments, the suggested algorithm is applied to GP decision trees, either induced directly from datasets or extracted from neural network ensembles. The experimentation, using 22 UCI datasets, shows that the suggested post-processing technique results in higher test set accuracies on a large majority of the datasets. As a matter of fact, the increase in test accuracy is statistically significant for one of the four evaluated setups, and substantial on two out of the other three.
    04/2009: pages 149-164;
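The single-node step of the post-processing idea (for one interior test on a numeric attribute, try every candidate threshold and keep the accuracy-maximizing one) can be sketched as below. A full implementation would sweep all interior nodes iteratively; this stub handles one node with a fixed left/right labeling, purely for illustration:

```python
# Sketch of evaluating every possible split for one numeric attribute and
# choosing the threshold that maximizes training accuracy.

def best_threshold(values, labels, left_label, right_label):
    """Try each observed value as a threshold (v <= t goes left) and
    return the one with the highest training accuracy."""
    candidates = sorted(set(values))
    def accuracy(t):
        return sum((left_label if v <= t else right_label) == y
                   for v, y in zip(values, labels)) / len(values)
    return max(candidates, key=accuracy)

values = [1.0, 2.0, 3.0, 4.0]
labels = ["a", "a", "b", "b"]
t = best_threshold(values, labels, "a", "b")   # 2.0 separates perfectly
```

Since the original threshold is always among the candidates, the chosen split can never reduce training accuracy, which is the guarantee the abstract points out.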