Effects of learning rate on the performance of the population-based incremental learning algorithm.
ABSTRACT The effect of the learning rate (LR) on the performance of a recently introduced evolutionary algorithm called population-based incremental learning (PBIL) is investigated in this paper. PBIL is a technique that combines a simple genetic algorithm (GA) with competitive learning (CL). Although CL is usually studied in the context of artificial neural networks (ANNs), it plays a vital role in PBIL: the idea of building a prototype vector, as in learning vector quantization (LVQ), is central to PBIL. In PBIL, the crossover operator of GAs is abstracted away and the role of the population is redefined. PBIL maintains a real-valued probability vector (PV), or prototype vector, from which solutions are generated. The probability vector governs the random bitstrings generated by PBIL and is updated through learning to create new individuals. The setting of the LR can greatly affect the performance of PBIL, yet its effect is not fully understood. In this paper, PBIL is used to design power system stabilizers (PSSs) for a multi-machine power system. Four case studies with different learning rate patterns are investigated: a fixed LR; a purely adaptive LR; a fixed LR followed by an adaptive LR; and an adaptive LR followed by a fixed LR. It is shown that a smaller LR makes the algorithm more exploratory, introducing more diversity into the population at the cost of slower convergence. A higher LR, on the other hand, makes the algorithm more exploitative and, in the fixed-LR case, can lead to premature convergence. Setting the LR therefore requires a trade-off between exploitation and exploration.
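The core PBIL mechanism described above can be illustrated with a minimal sketch. The function names, parameter defaults, and the one-max fitness below are illustrative assumptions, not the paper's actual experimental setup; the sketch only shows how the learning rate `lr` moves the probability vector toward the best sampled bitstring each generation.

```python
import random

def pbil(fitness, n_bits, pop_size=50, lr=0.1, generations=100):
    """Minimal PBIL loop: sample bitstrings from a probability vector (PV),
    then nudge the PV toward the best sample at rate `lr` (competitive-
    learning style update, as in LVQ prototype adjustment)."""
    pv = [0.5] * n_bits  # start unbiased: every bit equally likely 0 or 1
    for _ in range(generations):
        population = [[1 if random.random() < p else 0 for p in pv]
                      for _ in range(pop_size)]
        best = max(population, key=fitness)
        # LR-weighted update: small lr = slow, exploratory drift;
        # large lr = fast, exploitative convergence (risk: premature).
        pv = [(1 - lr) * p + lr * b for p, b in zip(pv, best)]
    return pv

# One-max example: the PV should drift toward all ones.
random.seed(0)
final_pv = pbil(fitness=sum, n_bits=8, lr=0.1)
```

With a higher `lr` the PV saturates in far fewer generations but may lock onto an early winner, which is the premature-convergence effect discussed in the abstract.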
Available from: kfupm.edu.sa
ABSTRACT: This paper demonstrates the use of genetic algorithms for the simultaneous stabilization of multimachine power systems over a wide range of operating conditions via single-setting power system stabilizers. The power system operating at various conditions is treated as a finite set of plants. The problem of selecting the parameters of power system stabilizers which simultaneously stabilize this set of plants is converted to a simple optimization problem which is solved by a genetic algorithm with an eigenvalue-based objective function. Two objective functions are presented, allowing the selection of the stabilizer parameters to shift some of the closed-loop eigenvalues to the left-hand side of a vertical line in the complex s-plane, or to a wedge-shaped sector in the complex s-plane. The effectiveness of the suggested technique in damping local and inter-area modes of oscillations in multimachine power systems is verified through eigenvalue analysis and simulation results. IEEE Transactions on Power Systems, 12/1999.
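The two eigenvalue-placement goals named in this abstract can be sketched as penalty functions. The exact objective functions are not reproduced in the abstract, so the function names and the thresholds `sigma0` (the vertical line) and `zeta0` (the minimum damping ratio defining the wedge sector) below are hypothetical illustrations of the idea, assuming the closed-loop eigenvalues have already been computed.

```python
def vertical_line_objective(eigs, sigma0=-1.0):
    """Penalize closed-loop eigenvalues lying to the right of Re(s) = sigma0."""
    return sum(max(0.0, e.real - sigma0) ** 2 for e in eigs)

def wedge_objective(eigs, zeta0=0.2):
    """Penalize eigenvalues whose damping ratio falls below zeta0,
    i.e. those outside a wedge-shaped sector about the negative real axis."""
    penalty = 0.0
    for e in eigs:
        mag = abs(e)
        zeta = -e.real / mag if mag > 0 else 1.0  # damping ratio of the mode
        penalty += max(0.0, zeta0 - zeta) ** 2
    return penalty
```

A GA (or PBIL, as in the main paper) would minimize one of these penalties over the stabilizer parameters; a zero value means all eigenvalues satisfy the placement constraint.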
Conference Proceeding: Approaching evolutionary robotics through population-based incremental learning
ABSTRACT: Population-based incremental learning (PBIL) is a recently developed evolutionary computing technique based on concepts found in genetic algorithms and competitive learning-based artificial neural networks. PBIL and a traditional genetic algorithm are compared on the task of evolving a neural network-based controller for a simulated robotic agent. In particular, this paper examines the performance of FP-PBIL, a variant of PBIL developed for this task that works with floating-point representations rather than bit-strings. Results are presented showing the superior performance of FP-PBIL. This advantage, combined with lower memory and processing requirements, indicates that the technique is well-suited to developing online, evolutionary controllers for autonomous robotic agents. Systems, Man, and Cybernetics, 1999 (IEEE SMC '99), IEEE International Conference on; 02/1999.
Conference Proceeding: Improved versions of learning vector quantization
ABSTRACT: The author introduces a variant of (supervised) learning vector quantization (LVQ) and discusses practical problems associated with the application of the algorithms. The LVQ algorithms work explicitly in the input domain of the primary observation vectors, and their purpose is to approximate the theoretical Bayes decision borders using piecewise linear decision surfaces. This is done by purported optimal placement of the class codebook vectors in signal space. As the classification decision is based on the nearest-neighbor selection among the codebook vectors, its computation is very fast. It has turned out that the differences between the presented algorithms in regard to the remaining discretization error are not significant, and thus the choice of the algorithm may be based on secondary arguments, such as stability in learning, in which respect the variant introduced (LVQ2.1) seems to be superior to the others. A comparative study of several methods applied to speech recognition is included. Neural Networks, 1990 IJCNN International Joint Conference on; 07/1990.