A New Learning Algorithm for Function Approximation By Incorporating A Priori Information Into Feedforward Neural Networks
ABSTRACT In this paper, a new learning algorithm that encodes a priori information into feedforward neural networks is proposed for the function approximation problem. The algorithm incorporates two kinds of constraints, derived from a priori information about the approximation problem, into single-hidden-layer feedforward neural networks: architectural constraints and connection weight constraints. On the one hand, the activation functions of the hidden neurons are a class of specific polynomial functions based on a priori information from Taylor series expansions of the approximated functions. On the other hand, the connection weight constraints are obtained from the first-order derivatives of the approximated functions. Theoretical justifications show that the new learning algorithm achieves better generalization performance and a faster convergence rate than other algorithms. Finally, several experimental results are given to verify the efficiency and effectiveness of our proposed learning algorithm.
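As a rough illustration of the two kinds of constraints described above, the following sketch (in NumPy) trains a single-hidden-layer network whose hidden units use polynomial activations, echoing the Taylor-series-based architectural constraint, and adds a penalty that pulls the network's first-order derivative toward the known derivative of the approximated function. The target function, network sizes, penalty weight, and update rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):            # example function to approximate
    return np.sin(x)

def df(x):           # its known first-order derivative (the a priori information)
    return np.cos(x)

K = 6                                    # number of hidden (polynomial) units
x = np.linspace(-1.0, 1.0, 200)          # training inputs
y, dy = f(x), df(x)

w = rng.normal(scale=0.5, size=K)        # input-to-hidden weights (kept fixed here for brevity)
v = np.zeros(K)                          # trainable hidden-to-output weights
lam, lr = 0.1, 0.05                      # derivative-penalty weight and learning rate

powers = np.arange(1, K + 1)
z = np.outer(x, w)                       # hidden pre-activations, shape (N, K)
h = z ** powers                          # polynomial activations z, z^2, ..., z^K (Taylor-like terms)
dh = (powers * z ** (powers - 1)) * w    # d h / d x, used for the derivative-based weight constraint

for _ in range(5000):
    err = h @ v - y                      # fit error
    derr = dh @ v - dy                   # first-order-derivative constraint error
    # gradient of 0.5*mean(err^2) + 0.5*lam*mean(derr^2) with respect to v
    grad_v = (h.T @ err + lam * dh.T @ derr) / len(x)
    v -= lr * grad_v

print("final MSE:", np.mean((h @ v - y) ** 2))
```

The penalty term plays the role of the connection weight constraint: minimizing the combined cost forces the fitted polynomial expansion to respect the known derivative information rather than only the sampled function values.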
- ABSTRACT: This letter proposes a novel neural root finder based on the root moment method (RMM) to find the arbitrary roots (including complex ones) of arbitrary polynomials. This neural root finder (NRF) was designed based on feedforward neural networks (FNN) and trained with a constrained learning algorithm (CLA). Specifically, we have incorporated the a priori information about the root moments of polynomials into the conventional backpropagation algorithm (BPA) to construct a new CLA. The resulting NRF is shown to be able to rapidly estimate the distributions of roots of polynomials. We study and compare the advantage of the RMM-based NRF over the previous root-coefficient-method-based NRF, the traditional Muller and Laguerre methods, and the Mathematica Roots function, as well as the behaviors, the accuracies, and the training speeds of two specific structures corresponding to this FNN root finder: the log6 and the 6−5 FNN. We also analyze the effects of the three controlling parameters {δP0, θp, η} of the CLA on the two NRFs theoretically and experimentally. Finally, we present computer simulation results to support our claims. Neural Computation, 01/2004; 16:1721-1762.
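The root moment method in this entry rests on the fact that the root moments s_k = Σ_i r_i^k of a monic polynomial can be computed directly from its coefficients via Newton's identities, without knowing the roots. The sketch below illustrates only that relation (my own Python illustration, not the constrained learning algorithm itself) and checks the result against numerically computed roots.

```python
import numpy as np

def root_moments(coeffs, m):
    """Root moments s_k = sum_i r_i**k, k = 1..m, for the monic polynomial
    x^n + a1*x^(n-1) + ... + an with coeffs = [a1, ..., an], obtained from
    Newton's identities without ever computing the roots."""
    a = list(coeffs)
    n = len(a)
    s = []
    for k in range(1, m + 1):
        if k <= n:
            # s_k = -(k*a_k + a_1*s_{k-1} + ... + a_{k-1}*s_1)
            sk = -(k * a[k - 1] + sum(a[j - 1] * s[k - j - 1] for j in range(1, k)))
        else:
            # for k > n: s_k = -(a_1*s_{k-1} + ... + a_n*s_{k-n})
            sk = -sum(a[j - 1] * s[k - j - 1] for j in range(1, n + 1))
        s.append(sk)
    return s

# sanity check: (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6, roots 1, 2, 3
coeffs = [-6.0, 11.0, -6.0]
roots = np.roots([1.0] + coeffs)
print(root_moments(coeffs, 4))                            # [6, 14, 36, 98]
print([float(np.real(np.sum(roots ** k))) for k in range(1, 5)])
```

A constrained learning algorithm of the kind described above can then use such coefficient-derived moments as prior constraints on the network's root estimates during backpropagation.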
- ABSTRACT: In this paper, a new modified hybrid learning algorithm for feedforward neural networks is proposed to obtain better generalization performance. To penalize both the input-to-output mapping sensitivity and the high-frequency components in the training data, the first additional cost term is selected based on the first-order derivatives of the neural activations at the hidden layers, and the second on the second-order derivatives of the neural activation at the output layer. Finally, theoretical justifications and simulation results are given to verify the efficiency and effectiveness of our proposed learning algorithm. 05/2005: pages 413-440.
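The two additional cost terms described in this entry can be pictured as penalties built from activation-function derivatives. The sketch below is a hedged illustration of such an augmented cost for a sigmoid network; the function names, layer sizes, and penalty weights lam1 and lam2 are my own assumptions, not the authors' exact terms.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def d_sigmoid(z):                    # first derivative of the sigmoid
    s = sigmoid(z)
    return s * (1.0 - s)

def d2_sigmoid(z):                   # second derivative of the sigmoid
    s = sigmoid(z)
    return s * (1.0 - s) * (1.0 - 2.0 * s)

def augmented_cost(W1, W2, X, y, lam1=1e-3, lam2=1e-3):
    net_h = X @ W1                                # hidden pre-activations
    h = sigmoid(net_h)
    net_o = h @ W2                                # output pre-activations
    yhat = sigmoid(net_o)
    e0 = 0.5 * np.mean((yhat - y) ** 2)           # ordinary sum-of-squares error
    e1 = np.mean(d_sigmoid(net_h) ** 2)           # sensitivity penalty (hidden-layer 1st derivatives)
    e2 = np.mean(d2_sigmoid(net_o) ** 2)          # smoothness penalty (output-layer 2nd derivatives)
    return e0 + lam1 * e1 + lam2 * e2

rng = np.random.default_rng(1)
X = rng.normal(size=(32, 4))
y = rng.normal(size=(32, 1))
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 1))
print(augmented_cost(W1, W2, X, y))
```

Training would then backpropagate through all three terms, so the weights are pushed both to fit the data and to keep the mapping smooth.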
- ABSTRACT: This paper presents two novel approaches, backpropagation (BP) with magnified gradient function (MGFPROP) and deterministic weight modification (DWM), to speed up the convergence rate and improve the global convergence capability of the standard BP learning algorithm. The purpose of MGFPROP is to increase the convergence rate by magnifying the gradient function of the activation function, while the main objective of DWM is to reduce the system error by changing the weights of a multilayered feedforward neural network in a deterministic way. Simulation results show that the performance of these two approaches is better than that of BP and other modified BP algorithms for a number of learning problems. Moreover, the integration of the two approaches, forming a new algorithm called MDPROP, can further improve the performance of MGFPROP and DWM. In our simulations, MDPROP always outperforms BP and other modified BP algorithms in terms of convergence rate and global convergence capability. IEEE Transactions on Neural Networks, 12/2004; 15(6):1411-1423.
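The MGFPROP part of this entry addresses the vanishing sigmoid gradient factor o(1−o), which goes to zero when an output unit saturates. The sketch below illustrates the magnification idea on a single output unit; the specific magnification rule (raising the factor to a power 1/S with S > 1) and the parameter values are illustrative assumptions rather than the paper's exact definitions, and DWM is not sketched.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def output_delta(target, output, S=1.0):
    """Error signal for a sigmoid output unit; S = 1 recovers standard BP,
    S > 1 magnifies the gradient factor so updates survive saturation."""
    grad = output * (1.0 - output)          # ordinary sigmoid gradient factor
    return (target - output) * grad ** (1.0 / S)

o = sigmoid(np.array([-6.0, 0.0, 6.0]))     # saturated-low, mid-range, saturated-high outputs
t = np.array([1.0, 1.0, 0.0])
print("standard BP deltas :", output_delta(t, o, S=1.0))
print("magnified deltas   :", output_delta(t, o, S=4.0))
```

Running this shows the saturated units producing near-zero deltas under standard BP but noticeably larger ones after magnification, which is the mechanism behind the faster convergence claimed above.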