Conference Paper

A Genetic Algorithm with Variable Range of Local Search for Tracking Changing Environments

DOI: 10.1007/3-540-61723-X_1002
Conference: Parallel Problem Solving from Nature - PPSN IV, International Conference on Evolutionary Computation, the 4th International Conference on Parallel Problem Solving from Nature, Berlin, Germany, September 22-26, 1996, Proceedings
Source: DBLP

ABSTRACT: In this paper we examine a modification to the genetic algorithm: a new adaptive operator developed for two industrial applications that use genetic-algorithm-based on-line control systems. The aim is to enable the control systems to track the optima of a time-varying dynamic system without impairing their ability to provide sound results in stationary environments. When compared with the hypermutation operator, the new operator matches the level of diversity introduced into the population to the "degree" of the environmental change more closely, because it increases population diversity only gradually. Although the new technique was developed for the control application domain, where real-valued variables are mostly used, a possible generalisation of the method is also suggested. It is believed that the technique has the potential to be a further contribution towards making genetic-algorithm-based techniques more readily usable in industrial control applications.

The genetic algorithm is a proven search/optimisation technique (1) based on the adaptive mechanisms of biological systems. The motivating context of Holland's initial work on genetic algorithms (GAs) was the design and implementation of robust adaptive systems, in contrast to mere function optimisers (2). Understanding GAs in this broader adaptive-system context is a necessary prerequisite for understanding their potential application to any problem domain and their relevant strengths and limitations, as argued in the paper cited above. One important factor limiting the use of the GA in real-time applications, common to many real-world problems whose models are not stationary, is the need to repeatedly re-initialise the GA from a random starting point in the search space in order to track optima in such changing/dynamic environments. The use of a repetitive learning cycle has obvious implications for the quality of the solutions available, which limits the use of genetic techniques in dynamic environments such as on-line industrial control. In this paper we present preliminary results of our research into techniques for genetic-algorithm-based robust systems that will continually evolve an optimal solution in a changing environment.
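The abstract describes the operator only at a high level. As a purely illustrative reading, the Python sketch below shows a real-valued mutation whose local-search range starts small and widens gradually while the search fails to keep up with the environment, shrinking back once progress resumes. All names and parameter values here (VariableRangeMutation, sigma_min, sigma_max, growth) are assumptions for illustration, not the paper's actual operator.

```python
import random


class VariableRangeMutation:
    """Illustrative real-valued mutation whose local-search range starts
    small and widens only gradually while no improvement is found.

    The class name and all parameter values are assumptions made for
    this sketch, not the operator defined in the paper.
    """

    def __init__(self, sigma_min=0.01, sigma_max=1.0, growth=2.0):
        self.sigma_min = sigma_min   # smallest local-search range
        self.sigma_max = sigma_max   # cap on the range
        self.growth = growth         # how quickly the range widens
        self.sigma = sigma_min       # current range

    def on_generation(self, improvement_found):
        """Widen the range while the population is stuck (e.g. after an
        environmental change); reset it once tracking succeeds."""
        if improvement_found:
            self.sigma = self.sigma_min
        else:
            self.sigma = min(self.sigma * self.growth, self.sigma_max)

    def mutate(self, genome, lower, upper, rate=0.1):
        """Perturb each real-valued gene within the current range,
        clipping the result to the variable bounds."""
        child = []
        for x in genome:
            if random.random() < rate:
                x += random.uniform(-self.sigma, self.sigma) * (upper - lower)
            child.append(min(max(x, lower), upper))
        return child
```

In this reading, the gradual growth of `sigma` is what matches the amount of diversity injected to the severity of the change: small disturbances are absorbed by narrow local search, while larger ones progressively widen the search range instead of immediately randomising the population as hypermutation-style schemes do.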

  • Evolutionary Computation for Dynamic Optimization Problems, edited by Trung Thanh Nguyen, Shengxiang Yang, Juergen Branke, Xin Yao, chapter 2, pages 39-64, Springer-Verlag Berlin Heidelberg, 05/2013. ISBN: 978-3-642-38415-8
  • ABSTRACT: One approach integrated with genetic algorithms (GAs) to address dynamic optimization problems (DOPs) is to maintain population diversity by introducing immigrants. Many immigrant schemes have been proposed that differ in the way new individuals are generated, e.g., mutating the best individual of the previous environment to generate elitism-based immigrants. This paper examines the performance of the elitism-based immigrants GA (EIGA) with different immigrant mutation probabilities and proposes an adaptive mechanism that tends to improve performance on DOPs. Our experimental study shows that the proposed adaptive immigrants GA outperforms EIGA in almost all dynamic test cases and avoids the tedious work of fine-tuning the immigrant mutation probability parameter. (An illustrative sketch of the elitism-based immigrants step appears after this list.)
    Evolutionary Computation (CEC), 2013 IEEE Congress on; 01/2013
  • ABSTRACT: In recent years, there has been a growing interest in applying genetic algorithms to dynamic optimization problems. In this study, we present an extensive performance evaluation and comparison of 13 leading evolutionary algorithms with different characteristics on a common platform, using the moving peaks benchmark and varying a set of problem parameters including shift length, change frequency, correlation value, and number of peaks in the landscape. To compare solution quality and the efficiency of the algorithms, results are reported in terms of both the offline error metric and the dissimilarity factor, a novel comparison metric presented in this paper that is based on signal similarity. The computational effort of each algorithm is reported as the average number of fitness evaluations and the average execution time. Our experimental evaluation indicates that the hybrid methods outperform the related work with respect to solution quality for various parameters of the given benchmark problem. Specifically, hybrid methods provide up to 24% improvement in offline error and up to 30% improvement in dissimilarity factor, while requiring more computational effort than the other methods.
    Applied Intelligence 07/2012; 37(1).
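For the elitism-based immigrants scheme described in the second item above, the following Python sketch is a minimal, hypothetical illustration: immigrants are created by mutating the elite of the previous generation and replace the worst individuals. The function name, the fixed mutation probability p_m, and the replacement ratio are assumptions for this sketch, and the adaptive mechanism proposed in that paper is not reproduced here.

```python
import random


def elitism_based_immigrants(population, fitness, elite, p_m=0.05, ratio=0.2):
    """Sketch of one elitism-based immigrants step for a binary-coded GA.

    `elite` is the best individual of the previous generation; immigrants
    are created by bit-flip mutation of the elite with per-gene probability
    p_m and replace the worst individuals (higher fitness assumed better).
    The fixed p_m is a placeholder; an adaptive variant would adjust it.
    """
    n_immigrants = max(1, int(ratio * len(population)))

    # Generate immigrants by mutating the elite individual.
    immigrants = [
        [1 - g if random.random() < p_m else g for g in elite]
        for _ in range(n_immigrants)
    ]

    # Replace the worst individuals in the current population.
    order = sorted(range(len(population)), key=lambda i: fitness[i])
    for idx, imm in zip(order[:n_immigrants], immigrants):
        population[idx] = imm
    return population
```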
