Multiobjective Hybrid Optimization and Training of Recurrent Neural Networks

Department of Computer Science and Artificial Intelligence, University of Granada, Granada, Spain.
IEEE TRANSACTIONS ON CYBERNETICS (Impact Factor: 6.22). 05/2008; 38(2):381-403. DOI: 10.1109/TSMCB.2007.912937
Source: PubMed


Applying neural networks to a problem involves computationally expensive tasks before a suitable network is found, chiefly the selection of the network topology and the training step. The network structure is usually selected by a trial-and-error procedure, and the network is then trained. In the case of recurrent neural networks (RNNs), the lack of suitable training algorithms sometimes hampers these procedures because of vanishing-gradient problems. This paper addresses the simultaneous training and topology optimization of RNNs using multiobjective hybrid procedures. The proposal builds hybrid methods from the SPEA2 and NSGA2 algorithms using the Baldwinian hybridization strategy. We also study how the choice of objectives, crossover, and mutation affects diversity during evolution. The proposals are tested in the experimental section by training and optimizing networks on the Competition on Artificial Time Series (CATS) benchmark.
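The Baldwinian hybridization strategy named above can be illustrated with a minimal, hypothetical sketch (not the authors' implementation): each individual's fitness is computed after a short local search on its weights, but the refined weights are never written back to the genotype, unlike the Lamarckian variant. The toy quadratic error surface and the `lr`/`steps` parameters below are illustrative assumptions:

```python
def local_search(w, grad, lr=0.1, steps=5):
    """Refine a weight by a few gradient-descent steps (the 'learning' phase)."""
    for _ in range(steps):
        w = w - lr * grad(w)
    return w

def baldwinian_fitness(genotype, loss, grad):
    """Baldwinian evaluation: fitness reflects what the individual can learn,
    but the genotype itself stays unchanged (no Lamarckian write-back)."""
    refined = local_search(genotype, grad)
    return loss(refined)

# Toy error surface with its minimum at w = 2 (illustrative, not from the paper).
loss = lambda w: (w - 2.0) ** 2
grad = lambda w: 2.0 * (w - 2.0)

raw = loss(0.0)                                # fitness without learning
learned = baldwinian_fitness(0.0, loss, grad)  # fitness after learning
```

Selection then operates on `learned` rather than `raw`, so individuals near good basins are rewarded even before their inherited weights are themselves good.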

    • "Also, a general framework using GA for designing neural network ensembles was presented in [14]. The authors in [5] proposed a hybrid MOGA method based on the SPEA2 and NSGA2 algorithms to simultaneously optimize the training and the topology of recurrent neural networks (RNNs) in time-series prediction problems. Another approach, presented in [15] and applied to generalized multi-layer perceptrons (MLPs), improved the performance of the evolutionary model."
    ABSTRACT: Evolutionary Algorithms (EAs) are population-based algorithms that allow simultaneous exploration of different parts of the Pareto-optimal set. This paper presents a Memetic Elitist Pareto Evolutionary Algorithm for Three-Term Backpropagation networks for classification problems. The algorithm, called METBP, evolves Three-Term Backpropagation (TBP) networks that are optimal with respect to connection weights, error rates, and architecture complexity simultaneously. METBP is based on NSGA-II and benefits from a local search algorithm used to enhance the individuals in the population. The numerical results show the advantage of incorporating the local search: METBP obtains TBP networks with better classification accuracy and simpler structure than a multiobjective-genetic-algorithm-based TBP network (MOGATBP) and some methods found in the literature. The results indicate that the proposed method is a potentially useful classifier.
    International Journal of Advances in Soft Computing and its Applications 11/2014; 6(3):2074-8523.
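The multiobjective selection that NSGA-II-style methods such as METBP rely on reduces to a Pareto-dominance test and extraction of the non-dominated front. A minimal sketch, with hypothetical candidate networks described by (training error, hidden-unit count) pairs; all names and values below are illustrative, not taken from the paper:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_front(population, objectives):
    """Return the non-dominated members of `population`."""
    vals = [objectives(p) for p in population]
    return [population[i] for i, vi in enumerate(vals)
            if not any(dominates(vals[j], vi) for j in range(len(vals)) if j != i)]

# Hypothetical candidates: lower error and fewer hidden units are both better.
nets = [{"err": 0.10, "hidden": 5},
        {"err": 0.05, "hidden": 8},
        {"err": 0.20, "hidden": 8},   # dominated by the second network
        {"err": 0.30, "hidden": 3}]

front = first_front(nets, lambda n: (n["err"], n["hidden"]))
```

Full NSGA-II additionally ranks the remaining fronts and breaks ties by crowding distance; the dominance test above is the primitive everything else builds on.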
    • "Another method, applied to generalized multi-layer perceptrons, also improved the performance of the evolutionary model. Delgado et al. (2008) proposed a hybrid MOGA method based on the Strength Pareto Evolutionary Algorithm 2 (SPEA2) and NSGA-II algorithms to simultaneously optimize the training and the topology of recurrent neural networks. Jin et al. (2005) used MOGA and focused on multiobjective optimization of feed-forward ANNs as a solution to regularization problems in the network's complexity."

    • "A number of evolutionary algorithms (EAs) have been developed to solve multiobjective optimization problems, such as micro-GA [13] and NSGA-II [14]-[16]. As a result, multiobjective optimization (MOO) techniques have been applied to the design of fuzzy models exhibiting high accuracy and significant interpretability [17], [18]. Nevertheless, when dealing with fuzzy models, previous studies lack an optimization vehicle that considers not only the solution space being explored but also the techniques of MOO."
    ABSTRACT: We introduce a new category of fuzzy inference systems with the aid of a multiobjective opposition-based space search algorithm (MOSSA). The proposed MOSSA is essentially a multiobjective space search algorithm improved by opposition-based learning, which employs a so-called opposite-numbers mechanism to speed up the convergence of the optimization. In the identification of fuzzy inference systems, MOSSA is exploited to carry out both the parametric and the structural identification of the fuzzy model. Experimental results demonstrate the effectiveness of the proposed fuzzy models.
    Mathematical Problems in Engineering 04/2013; 2013. DOI:10.1155/2013/725017 · 0.76 Impact Factor
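The opposition-based learning that MOSSA employs can be sketched in a few lines: for a candidate x on a search interval [lo, hi], the opposite number lo + hi - x is also evaluated, and the better of the pair is kept, which tends to accelerate convergence early in the search. A minimal single-objective illustration under assumed values (the interval and fitness function below are hypothetical, not from the paper):

```python
def opposite(x, lo, hi):
    """Opposite number of x on the interval [lo, hi]."""
    return lo + hi - x

def obl_step(pop, fitness, lo, hi):
    """Keep the better of each candidate and its opposite (minimization)."""
    return [min(x, opposite(x, lo, hi), key=fitness) for x in pop]

# Hypothetical fitness with its minimum at x = 4 on the interval [0, 5].
f = lambda x: (x - 4.0) ** 2
pop = [1.0, 4.5]
pop = obl_step(pop, f, 0.0, 5.0)   # 1.0 is replaced by its opposite 4.0
```

In the multiobjective setting, "better" would be decided by Pareto dominance rather than a scalar comparison, but the opposite-number construction is the same.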