Article

Three-phase strategy for the OSD learning method in RBF neural networks

School of Engineering, Tarbiat Modares University, P.O. Box 14115-179, Tehran, Iran
Neurocomputing (Impact Factor: 2.01). 03/2009; 72(7-9):1797-1802. DOI: 10.1016/j.neucom.2008.05.011
Source: DBLP

ABSTRACT This paper presents a novel approach to the learning algorithms commonly used for training radial basis function (RBF) neural networks, aimed at applications that require real-time retraining of RBF networks. The proposed method is a three-phase learning algorithm that optimizes the functionality of the Optimum Steepest Descent (OSD) learning method. It focuses on attaining greater precision in initializing the centers and widths of the RBF units, since a network whose RBF units are well adjusted during training responds more accurately. The goal is better performance for RBF neural networks in fewer training iterations, which is the critical issue in real-time applications. A comparison of results obtained with different learning strategies illustrates the benefits of the proposed approach.
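The three phases named in the abstract (initialize centers, initialize widths, then train the output weights by steepest descent with an optimal step size) can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the initializers `init_centers` and `init_widths` are common heuristics assumed here, and the optimal step size is the exact line search for a quadratic loss, which is one standard reading of the OSD idea.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_centers(X, n_units):
    """Phase 1: pick centers, here by random sampling of training points."""
    idx = rng.choice(len(X), size=n_units, replace=False)
    return X[idx]

def init_widths(centers):
    """Phase 2: set each unit's width from the distance to its nearest
    neighboring center (a common heuristic, assumed here)."""
    d = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1) + 1e-8

def design_matrix(X, centers, widths):
    """Gaussian activations phi_j(x) = exp(-||x - c_j||^2 / (2 s_j^2))."""
    d2 = ((X[:, None] - centers[None, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * widths ** 2))

def train_weights(X, y, centers, widths, iters=100):
    """Phase 3: steepest descent on the output weights, with the step size
    chosen by exact line search on the quadratic squared-error loss."""
    Phi = design_matrix(X, centers, widths)
    w = np.zeros(Phi.shape[1])
    for _ in range(iters):
        g = Phi.T @ (Phi @ w - y)          # gradient of 0.5*||Phi w - y||^2
        Hg = Phi.T @ (Phi @ g)             # Hessian (Phi^T Phi) applied to g
        eta = (g @ g) / (g @ Hg + 1e-12)   # optimal step along -g
        w -= eta * g
    return w

# Toy usage: fit y = sin(x) with 10 RBF units.
X = np.linspace(0, 2 * np.pi, 50)[:, None]
y = np.sin(X).ravel()
c = init_centers(X, 10)
s = init_widths(c)
w = train_weights(X, y, c, s)
pred = design_matrix(X, c, s) @ w
```

Because the output weights enter linearly, the loss is exactly quadratic in `w`, so the per-iteration step size can be computed in closed form rather than hand-tuned, which is what makes this style of training attractive when retraining time is limited.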

Available from: Reza Sabzevari, Aug 31, 2015
  • Source
    • "Radial basis function network (RBFN), known as a candidate of neural networks, has considerable advantages among which are simplicity of its structure, capability of fast learning and approximation to arbitrary smooth nonlinear functions [18] [19] [20] [21] [22]. In comparison with MLP, the RBFN results in the nonlinear maps in which the connection weights occur linearly. "
    ABSTRACT: Parallel robotic manipulators have a complicated dynamic model due to the presence of multiple closed-loop chains and singularities, which makes their control a challenging and difficult task. In this paper, a novel adaptive tracking controller is proposed for parallel robotic manipulators based on fully tuned radial basis function networks (RBFNs). To develop the controller, a dynamic model of a general parallel manipulator is derived from D'Alembert's principle and the principle of virtual work. RBFNs are utilized to adaptively compensate for the modeling uncertainties, frictional terms and external disturbances of the control system. The adaptation laws for the RBFNs are derived to adjust on-line the output weights as well as the centers and variances of the Gaussian functions. The stability of the closed-loop system is ensured by the Lyapunov method. Finally, a simulation example is conducted for a 2-degree-of-freedom (DOF) parallel manipulator to illustrate the effectiveness of the proposed controller.
    Neurocomputing 08/2014; 137:12–23. DOI:10.1016/j.neucom.2013.04.056 · 2.01 Impact Factor
  • Source
    • "The training of RBF networks is accomplished through the estimation of three kinds of parameters, namely the centers and the widths of the basis functions and, finally, the neuron connection weights [13]. According to the different applications of the RBFNN, a wide variety of learning strategies has been proposed in the literature for changing the parameters of the RBFNN in the training process [14]. Therefore, using a conventional learning algorithm while employing an RBFNN for real-time applications will not satisfy the desired speed and performance in the training process."
    ABSTRACT: Due to environmental concerns and the growing cost of fossil fuel, high levels of distributed generation (DG) units have been installed in power distribution systems. However, with the installation of DG units in a distribution system, many problems may arise, such as changes in short-circuit levels, false tripping of protective devices and protection blinding. This paper presents an automated and accurate fault location method for identifying the exact faulty line in a test distribution network with a high penetration level of DG units by using the Radial Basis Function Neural Network with Optimum Steepest Descent (RBFNN-OSD) learning algorithm. In the proposed method, two RBFNN-OSDs are developed to determine the fault location for various fault types. The first RBFNN-OSD is used for predicting the fault distance from the source and all DG units, while the second RBFNN is used for identifying the exact faulty line. Several case studies have been simulated to verify the accuracy of the proposed method. Furthermore, the results of RBFNN-OSD and RBFNN with the conventional steepest descent algorithm are also compared. The results show that the proposed RBFNN-OSD can accurately determine the location of faults in a given test distribution system with several DG units.
    Measurement 11/2013; 46(9-9):253-67. DOI:10.1016/j.measurement.2013.05.002 · 1.53 Impact Factor
  •
    ABSTRACT: Training a classifier with good generalization capability is a major issue for pattern classification problems. A novel training objective function for Radial Basis Function (RBF) networks using a localized generalization error model (L-GEM) is proposed in this paper. The localized generalization error model provides a generalization error bound for unseen samples located within a neighborhood that contains all training samples. The assumption of the same width for all dimensions of a hidden neuron in L-GEM is relaxed in this work. The parameters of the RBF network are selected via minimization of the proposed objective function, thereby minimizing its localized generalization error bound. The characteristics of the proposed objective function are compared with those of regularization methods. For weight selection, RBF networks trained by minimizing the proposed objective function consistently outperform RBF networks trained by minimizing the training error, Tikhonov Regularization, Weight Decay or Locality Regularization. The proposed objective function is also applied to select the center, width and weight of the RBF network simultaneously. RBF networks trained by minimizing the proposed objective function yield better testing accuracies than those that minimize the training error only.
    Information Sciences 09/2009; 179(19-179):3199-3217. DOI:10.1016/j.ins.2009.06.001 · 3.89 Impact Factor
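The last abstract relaxes the usual assumption that a hidden neuron has a single scalar width. With one width per input dimension, the Gaussian unit becomes an axis-aligned (diagonal-covariance) basis function. A minimal sketch, assuming an illustrative function name (`rbf_diag` is not from the paper):

```python
import numpy as np

def rbf_diag(x, center, widths):
    """Gaussian RBF unit with a per-dimension width vector:
    phi(x) = exp(-sum_d (x_d - c_d)^2 / (2 s_d^2))."""
    z = (x - center) / widths
    return np.exp(-0.5 * np.dot(z, z))

x = np.array([1.0, 2.0])
c = np.array([0.0, 2.0])
s = np.array([1.0, 0.5])   # wide in the first dimension, narrow in the second
val = rbf_diag(x, c, s)    # only the first dimension contributes here
```

Allowing the widths to differ per dimension lets each unit stretch along input directions that matter less, at the cost of more parameters to select during training.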