Conference Paper

Evolutionary particle filter: re-sampling from the genetic algorithm perspective

ARC Centre of Excellence for Autonomous Systems, University of Technology, Sydney, NSW, Australia
DOI: 10.1109/IROS.2005.1545119 Conference: 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005)
Source: IEEE Xplore

ABSTRACT The sample impoverishment problem in particle filters is investigated from the perspective of genetic algorithms. The contribution of this paper is a hybrid technique that mitigates sample impoverishment, so that the number of particles required, and hence the computational complexity, is reduced. The number of particles required is studied through the Chebyshev inequality. The relationship between the number of particles and the time to impoverishment is examined by considering the takeover phenomenon found in genetic algorithms. It is shown that sample impoverishment is caused by the resampling scheme when the particle filter is implemented with a finite number of particles; the use of uniform or roulette-wheel sampling also contributes to the problem. Crossover operators from genetic algorithms are adopted to tackle the finite-particle problem by re-defining or re-supplying impoverished particles during filter iterations. The effectiveness of the proposed approach is demonstrated by simulations of a monobot simultaneous localization and mapping application.
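The idea of re-defining duplicated particles with a crossover operator can be illustrated with a minimal sketch. This is an assumption-laden toy example, not the paper's exact operator: it pairs roulette-wheel (multinomial) resampling with an arithmetic crossover step, and the `crossover_rate` parameter and convex-blend operator are illustrative choices.

```python
import random

def roulette_wheel_resample(particles, weights):
    """Standard multinomial (roulette-wheel) resampling: draws particles
    with probability proportional to weight. With a finite population,
    repeated draws of a few high-weight particles lead to impoverishment."""
    return random.choices(particles, weights=weights, k=len(particles))

def crossover_resample(particles, weights, crossover_rate=0.5):
    """Resampling followed by a GA-style crossover step (illustrative).
    Particles surviving resampling are occasionally blended with a mate,
    re-defining duplicates and restoring diversity in the population."""
    resampled = roulette_wheel_resample(particles, weights)
    out = []
    for p in resampled:
        if random.random() < crossover_rate:
            mate = random.choice(resampled)
            alpha = random.random()
            # arithmetic crossover: child is a convex combination of parents
            p = [alpha * a + (1 - alpha) * b for a, b in zip(p, mate)]
        out.append(p)
    return out

# toy 1-D state particles with one dominant weight
pts = [[x] for x in [0.0, 1.0, 2.0, 3.0]]
w = [0.7, 0.1, 0.1, 0.1]
new_pts = crossover_resample(pts, w)
print(len(new_pts))  # population size is preserved: 4
```

Because the crossover child is a convex combination of two surviving particles, it always lies inside the span of the resampled population, unlike a mutation step, which could move particles outside it.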

  • ABSTRACT: When tracking moving targets in video image sequences with the existing particle filter, tracking performance is often unsatisfactory due to particle degradation and loss of particle diversity. In this paper, we propose a novel particle filtering algorithm in which a multi-agent co-evolutionary mechanism is introduced into the particle re-sampling process: by redefining the particle agent and its local living environment, each particle becomes an agent capable of local perception, competitive selection, and self-learning. Re-sampling is accomplished through co-evolutionary behaviours among particles such as competition, crossover, mutation, and self-learning; this not only ensures particle validity but also increases particle diversity. Experimental results show that the proposed algorithm achieves better performance when tracking objects in complex video scenes.
    2010 International Conference on Computer Application and System Modeling (ICCASM); 11/2010
  • ABSTRACT: We describe a novel learning scheme for hidden dependencies in video streams. The proposed scheme transforms a given sequential stream into a dependency structure of particle populations, where each particle population summarizes an associated segment. Its novelty is that both dependency learning and segment summarization are performed in an unsupervised online manner without assuming priors. The scheme operates in two stages. In the first stage, a segment corresponding to a common dominant image is estimated using evolutionary particle filtering: each dominant image is depicted by combinations of image descriptors, the prevailing features of a dominant image are selected through evolution, and genetic operators introduce the diversity essential to prevent sample impoverishment. In the second stage, transitional probabilities between the estimated segments are computed and stored. The scheme is applied to extract dependencies in an episode of a TV drama, and performance is demonstrated by comparison with human estimations.
    2012 IEEE Congress on Evolutionary Computation (CEC); 01/2012
  • ABSTRACT: During the last two decades there has been a growing interest in particle filtering (PF). However, PF suffers from two long-standing problems referred to as sample degeneracy and sample impoverishment. We investigate methods that are particularly efficient at particle distribution optimization (PDO) for fighting sample degeneracy and impoverishment, with an emphasis on intelligent choices. These methods draw on Markov chain Monte Carlo methods, mean-shift algorithms, artificial-intelligence algorithms (e.g., particle swarm optimization, genetic algorithms, and ant colony optimization), machine-learning approaches (e.g., clustering, splitting, and merging), and their hybrids, forming a coherent standpoint for enhancing the particle filter. The working mechanisms, interrelationships, pros, and cons of these approaches are provided, and approaches that are effective for dealing with high dimensionality are also reviewed. While improving filter performance in terms of accuracy, robustness, and convergence, the advanced techniques employed in PF often incur additional computational requirements that can in turn offset the improvement obtained in real-life filtering. This fact, hidden in pure simulations, deserves the attention of the users and designers of new filters.
    Expert Systems with Applications 01/2014; 41(8):3944–3954. · 1.85 Impact Factor
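The sample degeneracy discussed across these abstracts is commonly diagnosed with the effective sample size, N_eff = 1 / Σ w_i². This is the standard textbook diagnostic, not a formula taken from any of the papers above; a minimal sketch:

```python
def effective_sample_size(weights):
    """N_eff = 1 / sum(w_i^2) over normalized weights.
    N_eff close to N means weights are evenly spread;
    N_eff close to 1 signals degeneracy, i.e. a few particles
    carry nearly all the weight and resampling is warranted."""
    total = sum(weights)
    norm = [w / total for w in weights]
    return 1.0 / sum(w * w for w in norm)

print(effective_sample_size([0.25, 0.25, 0.25, 0.25]))  # 4.0 (uniform weights)
print(effective_sample_size([0.97, 0.01, 0.01, 0.01]))  # ≈ 1.06 (degenerate)
```

A typical filter resamples only when N_eff drops below a threshold such as N/2, which limits how often resampling itself can cause impoverishment.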
