A fast and compact fuzzy neural network for online extraction of fuzzy rules
ABSTRACT A novel paradigm termed the fast and compact fuzzy neural network (FCFNN), which incorporates a pruning strategy into a set of growing criteria, is proposed for online extraction of fuzzy rules. The proposed growing criteria not only speed up the online learning process but also yield a parsimonious fuzzy neural network, while the combined growing-and-pruning mechanism maintains comparable performance and accuracy. The FCFNN starts with no hidden neurons and parsimoniously generates new hidden units according to the growing criteria as learning proceeds. In the second learning phase, all free parameters of the hidden units are updated by the extended Kalman filter (EKF) method. The performance of the FCFNN algorithm is compared with that of other popular algorithms such as ANFIS, GDFNN, and SOFNN on nonlinear function approximation. Simulation results demonstrate that the proposed FCFNN learns faster and produces a more compact network structure while achieving comparable generalization performance and accuracy; moreover, it is capable of extracting fuzzy rules online.
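As a concrete illustration of this growing-and-pruning style of sequential learning, the sketch below grows a Gaussian unit when the prediction error is large and the sample lies far from every existing center, and prunes units whose running activity decays to nothing. All thresholds, the Gaussian width, and the simple LMS weight update are assumptions made for illustration; the FCFNN's actual growing criteria and its EKF parameter update are not reproduced here.

```python
import math


class GrowingPruningNet:
    """Minimal sketch of a sequential growing-and-pruning network in the
    spirit of the abstract. Every threshold and the LMS update rule are
    illustrative assumptions, not the paper's FCFNN criteria."""

    def __init__(self, err_thresh=0.1, dist_thresh=0.5,
                 act_thresh=1e-3, width=0.4, lr=0.5):
        self.centers, self.weights, self.activity = [], [], []
        self.err_thresh, self.dist_thresh = err_thresh, dist_thresh
        self.act_thresh, self.width, self.lr = act_thresh, width, lr

    def _phi(self, c, x):
        # Gaussian radial basis activation of the unit centered at c.
        return math.exp(-((x - c) ** 2) / (2 * self.width ** 2))

    def predict(self, x):
        return sum(w * self._phi(c, x)
                   for c, w in zip(self.centers, self.weights))

    def observe(self, x, y):
        err = y - self.predict(x)
        nearest = min((abs(x - c) for c in self.centers),
                      default=float("inf"))
        if abs(err) > self.err_thresh and nearest > self.dist_thresh:
            # Growing criterion met: allocate a new unit at the sample.
            self.centers.append(x)
            self.weights.append(err)
            self.activity.append(1.0)
        else:
            # Otherwise adapt the weights with a plain LMS step
            # (the paper updates all free parameters with the EKF).
            for i, c in enumerate(self.centers):
                self.weights[i] += self.lr * err * self._phi(c, x)
        # Pruning: drop units whose running activity stays negligible.
        for i, c in enumerate(self.centers):
            self.activity[i] = 0.99 * self.activity[i] + 0.01 * self._phi(c, x)
        keep = [i for i, a in enumerate(self.activity) if a > self.act_thresh]
        self.centers = [self.centers[i] for i in keep]
        self.weights = [self.weights[i] for i in keep]
        self.activity = [self.activity[i] for i in keep]
```

Fed samples of a smooth target such as sin(x) one at a time, the network allocates only as many units as the distance criterion permits, so the hidden layer stays compact.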
- Available from: Julio Ortega
ABSTRACT: This paper proposes a framework for constructing and training a radial basis function (RBF) neural network. For this purpose, a sequential learning algorithm is presented that adapts the structure of the network, making it possible to create a new hidden unit and also to detect and remove inactive units. The structure of the Gaussian functions is modified using a pseudo-Gaussian function (PG) in which two scaling parameters σ are introduced; this eliminates the symmetry restriction and gives the neurons in the hidden layer greater flexibility for function approximation. Two further characteristics of the proposed neural system are that the activation of the hidden neurons is normalized, which, as reported in the literature, performs better than unnormalized activation, and that instead of a single parameter, the output weights are functions of the input variables, which leads to a significant reduction in the number of hidden units compared with the classical RBF network. Finally, we examine the result of applying the proposed algorithm to time-series prediction. Neurocomputing. 01/2002; 42:267-285.
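The three ingredients this abstract describes, an asymmetric pseudo-Gaussian activation with two widths, normalization of the hidden-layer activations, and output weights that are linear functions of the input, can be sketched as follows. Function and parameter names are illustrative, not taken from the paper.

```python
import math


def pseudo_gaussian(x, center, sigma_left, sigma_right):
    """Asymmetric pseudo-Gaussian (PG) activation: a separate scaling
    parameter sigma on each side of the center removes the symmetry
    restriction of the ordinary Gaussian. Names are illustrative."""
    sigma = sigma_left if x < center else sigma_right
    return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))


def normalized_activations(x, units):
    """Normalize the hidden-unit activations so they sum to one,
    the normalization the abstract reports as performing better."""
    acts = [pseudo_gaussian(x, c, sl, sr) for c, sl, sr in units]
    total = sum(acts) or 1.0  # guard against an all-zero layer
    return [a / total for a in acts]


def network_output(x, units, lin_weights):
    """Each output 'weight' is a linear function a_i + b_i * x of the
    input rather than a single constant."""
    acts = normalized_activations(x, units)
    return sum(h * (a + b * x) for h, (a, b) in zip(acts, lin_weights))
```

Because each unit's consequent is a full linear function rather than a constant, one unit can cover a region where the target varies linearly, which is where the reduction in hidden-unit count comes from.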
ABSTRACT: A new learning strategy for time-series prediction using radial basis function (RBF) networks is introduced. Its potential is examined in the particular case of the resource-allocating network (RAN) model, although the same ideas could be applied to extend any other procedure. In the early stages of learning, the addition of successive new groups of RBFs provides an increased rate of convergence. At the same time, the optimum lag structure is determined using orthogonal techniques such as QR factorization and singular value decomposition (SVD). We claim that the same techniques can be applied to the pruning problem, and thus they are a useful tool for compacting information. Our comparison with the original RAN algorithm shows a comparable error measure but much smaller networks. The extra effort required by QR and SVD is balanced by the simplicity of using only least mean squares for the iterative parameter adaptation. Neurocomputing. 01/2001;
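The idea of using orthogonal decompositions to find redundant units can be sketched as follows: collect the hidden-unit activations over the training set into a matrix Phi (rows = samples, columns = units), estimate its effective rank from the singular values, and keep only a spanning subset of columns. The greedy column selection below is a stand-in for column-pivoted QR; all names and the tolerance are assumptions for illustration, not the paper's procedure.

```python
import numpy as np


def effective_rank(Phi, tol=1e-8):
    """Estimate how many hidden units are really needed from the
    singular values of the activation matrix Phi."""
    s = np.linalg.svd(Phi, compute_uv=False)
    return int(np.sum(s > tol * s[0]))


def select_columns(Phi, k):
    """Greedily pick k columns (units) that span Phi, deflating the
    chosen direction each round -- a stand-in for pivoted QR."""
    R = np.array(Phi, dtype=float)
    chosen = []
    for _ in range(k):
        norms = np.linalg.norm(R, axis=0)
        j = int(np.argmax(norms))
        chosen.append(j)
        q = R[:, j] / norms[j]
        R -= np.outer(q, q @ R)  # remove that direction from all columns
    return chosen


def prune_order(Phi, tol=1e-8):
    """Return (keep, drop) index lists over the hidden units."""
    keep = select_columns(Phi, effective_rank(Phi, tol))
    drop = [j for j in range(Phi.shape[1]) if j not in keep]
    return keep, drop
```

A unit whose activation column is (nearly) a linear combination of the others contributes no new direction to Phi, so it ends up in the drop list, which is exactly the kind of compaction the abstract attributes to QR and SVD.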