Multiobjective hybrid optimization and training of recurrent neural networks.
ABSTRACT: Applying neural networks to a problem involves computationally expensive tasks before a suitable network is found, chiefly the selection of the network topology and the training step. The network structure is usually chosen by trial and error, and the network is then trained. In the case of recurrent neural networks (RNNs), the lack of suitable training algorithms sometimes hampers these procedures because of vanishing-gradient problems. This paper addresses the simultaneous training and topology optimization of RNNs using multiobjective hybrid procedures. The proposal builds hybrid methods from the SPEA2 and NSGA2 algorithms using a Baldwinian hybridization strategy. We also study how the choice of objectives, crossover, and mutation affects diversity during evolution. In the experimental section, the proposals are tested by training and optimizing networks on the Competition on Artificial Time Series (CATS) benchmark.
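The Baldwinian hybridization mentioned in this abstract can be illustrated with a minimal sketch: each genotype is refined by a short burst of lifetime learning before fitness is measured, but the refined weights are never written back into the genotype. Everything here is a stand-in, not the paper's implementation: a quadratic surface plays the role of RNN training error, and genome length plays the role of topology complexity.

```python
import random

def dominates(fa, fb):
    """True if objective tuple fa Pareto-dominates fb (both objectives minimized)."""
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def local_refine(weights, steps=5, lr=0.1):
    # Baldwinian lifetime learning: refine the phenotype with a few
    # gradient steps (here on a toy quadratic error surface).  The
    # refined weights affect fitness only; the genotype is unchanged.
    for _ in range(steps):
        weights = [w - lr * 2.0 * w for w in weights]  # gradient of sum(w^2)
    return weights

def evaluate(genotype):
    refined = local_refine(list(genotype))
    error = sum(w * w for w in refined)   # objective 1: training-error proxy
    complexity = len(genotype)            # objective 2: network-size proxy
    return (error, complexity)

def pareto_front(population):
    # Keep every individual whose fitness no other individual dominates;
    # dominates(f, f) is False, so self-comparison is harmless.
    fits = [evaluate(g) for g in population]
    return [g for g, f in zip(population, fits)
            if not any(dominates(other, f) for other in fits)]

random.seed(1)
population = [[random.uniform(-1.0, 1.0) for _ in range(random.randint(2, 6))]
              for _ in range(20)]
front = pareto_front(population)
```

A full SPEA2 or NSGA2 loop would add fitness assignment, crowding or archive truncation, crossover, and mutation around this evaluation step; the sketch isolates only the Baldwinian evaluation and the Pareto filter.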
- ABSTRACT: Ensembles have been shown to provide better generalization performance than single models. However, the creation, selection, and combination of the individual predictors are critical to the success of an ensemble, as each individual model needs to be both accurate and diverse. In this paper we present a hybrid multi-objective evolutionary algorithm that trains and optimizes the structure of recurrent neural networks for time series prediction. We then present methods of selecting individual prediction models from the Pareto set of solutions: the first selects all individuals below a threshold on the Pareto front; the second is based on the training error; the third selects individuals near the knee point of the Pareto front; and the final method selects individuals based on the diversity of the individual predictors. Results on two time series data sets, Mackey-Glass and Sunspot, show that the training algorithm is competitive with other algorithms and that the final two selection methods outperform selecting all individuals below a given threshold or selecting by training error.
  Neurocomputing 11/2014; 143:302–311 · 2.01 Impact Factor
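Of the selection methods listed, knee-point selection is the easiest to illustrate. A minimal sketch, assuming a 2-D Pareto front sorted by the first objective, uses a common knee heuristic (not necessarily this paper's exact criterion): pick the point farthest from the straight line joining the front's two extremes.

```python
def knee_point(front):
    """Return the knee of a 2-D Pareto front (both objectives minimized).

    front: list of (f1, f2) tuples sorted by f1.  The knee is taken as the
    point with maximum perpendicular distance to the line joining the two
    extreme points of the front, i.e. the sharpest trade-off.
    """
    (x1, y1), (x2, y2) = front[0], front[-1]

    def dist(point):
        # Unnormalized point-to-line distance; the normalizing constant is
        # the same for every point, so it can be dropped under max().
        x0, y0 = point
        return abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)

    return max(front, key=dist)

# Hypothetical front: (0.1, 0.35) bulges farthest toward the origin.
front = [(0.0, 1.0), (0.1, 0.35), (0.3, 0.2), (1.0, 0.0)]
knee = knee_point(front)  # (0.1, 0.35)
```

The appeal of the knee is that moving away from it in either direction trades a large loss in one objective for a small gain in the other, which is why knee individuals are natural ensemble members.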
- ABSTRACT: This study introduces an information-granulation-based fuzzy radial basis function neural network (FRBFNN) designed with multiobjective optimization and weighted least squares (WLS). An improved multiobjective space search algorithm (IMSSA) is proposed to optimize the FRBFNN. In the design of the FRBFNN, the premise part of the rules is constructed with the aid of Fuzzy C-Means (FCM) clustering, while the consequent part of the fuzzy rules is developed using four types of polynomials: constant, linear, quadratic, and modified quadratic. Information granulation realized with C-Means clustering helps determine the initial values of the apex parameters of the membership functions of the fuzzy neural network. To enhance the flexibility of the neural network, WLS learning is used to estimate the coefficients of the polynomials. In comparison with the ordinary least squares commonly used in the design of fuzzy radial basis function neural networks, WLS can produce a different local model in each rule of the FRBFNN. Since the performance of the FRBFNN model is directly affected by parameters such as the fuzzification coefficient used in FCM, the number of rules, and the orders of the polynomials in the consequent parts of the rules, both structural and parametric optimization of the network are carried out. The proposed IMSSA, which aims at the simultaneous minimization of complexity and maximization of accuracy, is exploited to optimize the parameters of the model. Experimental results illustrate that the proposed neural network outperforms some existing neurofuzzy models reported in the literature.
  Journal of Electrical Engineering and Technology 07/2012; 7(4) · 0.52 Impact Factor
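The role WLS plays in fitting per-rule local models can be sketched for the simplest consequent, a linear polynomial y ≈ a + b·x, where each sample's weight is its fuzzy membership degree in the rule. The Gaussian membership function and rule centres below are illustrative stand-ins, not this paper's configuration.

```python
import math

def wls_linear(xs, ys, ws):
    """Closed-form weighted least-squares fit of y ≈ a + b*x with weights ws."""
    Sw  = sum(ws)
    Sx  = sum(w * x for w, x in zip(ws, xs))
    Sy  = sum(w * y for w, y in zip(ws, ys))
    Sxx = sum(w * x * x for w, x in zip(ws, xs))
    Sxy = sum(w * x * y for w, x, y in zip(ws, xs, ys))
    b = (Sw * Sxy - Sx * Sy) / (Sw * Sxx - Sx * Sx)
    a = (Sy - b * Sx) / Sw
    return a, b

# Toy data from y = x^2, fitted by two fuzzy rules with Gaussian memberships.
xs = [i * 0.25 for i in range(13)]          # samples on 0.0 .. 3.0
ys = [x * x for x in xs]

def membership(x, centre, sigma=0.6):
    return math.exp(-((x - centre) ** 2) / (2.0 * sigma ** 2))

local_models = {}
for centre in (0.5, 2.5):
    ws = [membership(x, centre) for x in xs]
    local_models[centre] = wls_linear(xs, ys, ws)
    # Each rule gets its own local model: the fitted slope b tracks
    # dy/dx = 2x near the rule centre, unlike a single unweighted global fit.
```

With ordinary least squares every rule would see the same uniform weights and hence the same fit; the membership-derived weights are what let each rule specialize to its own region, which is the flexibility the abstract attributes to WLS.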
- Life Science Journal 12/2013 · 0.17 Impact Factor