An efficient hidden layer training method for the multilayer perceptron

FastVDO LLC, Columbia, MD 21046, USA; Department of Electrical Engineering, University of Texas at Arlington, Arlington, TX 76019, USA; Department of Radiology, Clinical Center, National Institutes of Health, Bethesda, MD 20892, USA
Neurocomputing (Impact Factor: 1.63). 01/2006; 70:525-535. DOI: 10.1016/j.neucom.2005.11.008

ABSTRACT The output-weight-optimization and hidden-weight-optimization (OWO–HWO) training algorithm for the multilayer perceptron alternately solves linear equations for the output weights and minimizes a separate hidden-layer error function with respect to the hidden-layer weights. Here, three major improvements are made to OWO–HWO. First, a desired net function is derived. Second, starting from the classical mean square error, a weighted hidden-layer error function is derived that de-emphasizes net-function errors corresponding to saturated activation values. Third, an adaptive learning factor based on the local shape of the error surface is used in hidden-layer training. Faster learning convergence is verified experimentally on three training data sets.
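The weighted hidden-layer update described above can be sketched as follows. This is an illustrative reconstruction, not the authors' exact formulation: the sigmoid activation, the variable names, and the way the desired net-function change is supplied are all assumptions.

```python
import numpy as np

def sigmoid(n):
    return 1.0 / (1.0 + np.exp(-n))

def hwo_step(X, W_h, delta_net, lr=0.1):
    """One hidden-weight-optimization step, in the OWO-HWO spirit.

    X         : (P, N+1) augmented input patterns
    W_h       : (Nh, N+1) hidden weights
    delta_net : (P, Nh) desired change in each unit's net function,
                assumed to be derived from the output error
    """
    net = X @ W_h.T                      # (P, Nh) net functions
    s = sigmoid(net)
    act_deriv = s * (1.0 - s)            # near 0 where the unit saturates
    # Weighted hidden-layer error: net-function errors at saturated
    # activations are de-emphasized by the activation derivative.
    weighted_delta = delta_net * act_deriv
    # Solve a least-squares problem for the weight change that best
    # realizes the desired (weighted) net-function change.
    dW, *_ = np.linalg.lstsq(X, weighted_delta, rcond=None)
    return W_h + lr * dW.T
```

In this sketch the fixed learning factor `lr` stands in for the paper's adaptive learning factor, which would be set from the local shape of the error surface.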

    ABSTRACT: In this paper, a novel pruning algorithm, named the novel-pruning feed-forward neural network (NP-FNN), is proposed for self-organizing a feed-forward neural network based on sensitivity analysis. The number of hidden neurons is determined by the output's sensitivity to the hidden nodes: the relevance of each hidden node is assessed by analyzing the Fourier decomposition of the output variance, which assigns each node a contribution ratio. The connection weights of hidden nodes with small ratios are set to zero, so the computational cost of training is reduced significantly. The algorithm thus minimizes the complexity of the final feed-forward neural network. Computer simulations demonstrate the effectiveness of the proposed algorithm.
    International Joint Conference on Neural Networks, IJCNN 2009, Atlanta, Georgia, USA, 14-19 June 2009; 01/2009
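The pruning step above can be sketched as follows. A simple variance-based contribution ratio stands in for the paper's Fourier decomposition of the variance, and all names and the threshold are illustrative assumptions.

```python
import numpy as np

def prune_hidden_nodes(W_in, W_out, H, threshold=0.05):
    """Zero the connections of hidden nodes with small contribution ratios.

    W_in  : (Nh, N) input-to-hidden weights
    W_out : (M, Nh) hidden-to-output weights
    H     : (P, Nh) hidden activations over P training patterns
    """
    # Proxy sensitivity: each node's activation variance, scaled by the
    # squared outgoing weights that propagate it to the outputs.
    var_contrib = np.var(H, axis=0) * np.sum(W_out**2, axis=0)
    ratio = var_contrib / var_contrib.sum()   # contribution ratio per node
    keep = ratio >= threshold
    # As described, low-ratio nodes have their connected weights set to
    # zero rather than being deleted outright.
    return W_in * keep[:, None], W_out * keep[None, :], ratio
```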
    ABSTRACT: In this paper, we developed a wavelet neural network (WNN) algorithm for electroencephalogram (EEG) artifact removal. The algorithm combines the universal approximation capability of neural networks with the time/frequency localization of the wavelet transform; the neural network was trained on a simulated dataset with known ground truths. The contribution of this paper is two-fold. First, many EEG artifact removal algorithms, including regression-based methods, require reference EOG signals, which are not always available. The WNN algorithm learns the characteristics of EOG from training data and, once trained, does not need EOG recordings for artifact removal. Second, the proposed method is computationally efficient, making it suitable for real-time use. We compared the proposed algorithm to the independent component analysis (ICA) technique and an adaptive wavelet thresholding method on both simulated and real EEG datasets. Experimental results show that the WNN algorithm removes EEG artifacts effectively without diminishing useful EEG information, even on very noisy datasets.
    Neurocomputing 01/2012; 97(1):374-389.
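The decompose-correct-reconstruct pipeline can be sketched with a single-level Haar transform, with a placeholder `correct` mapping standing in for the trained wavelet neural network; the wavelet choice, the single decomposition level, and the band being corrected are all assumptions, not the paper's actual configuration.

```python
import numpy as np

def haar_dwt(x):
    """Single-level Haar wavelet transform (x must have even length)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_idwt(a, d):
    """Inverse single-level Haar transform."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def remove_artifact(x, correct):
    """Decompose, apply a learned coefficient mapping `correct` to the
    low-frequency band, and reconstruct the cleaned signal."""
    a, d = haar_dwt(np.asarray(x, dtype=float))
    return haar_idwt(correct(a), d)
```

With `correct` set to the identity mapping the pipeline reconstructs the input exactly, which is a convenient sanity check before plugging in a trained network.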
    ABSTRACT: Functional near infrared spectroscopy (fNIRS) was used to explore hemodynamic responses in the human frontal cortex to noxious thermal stimulation over the right temporomandibular joint (TMJ). fNIRS experiments were performed on nine healthy volunteers under both low-pain stimulation (LPS) and high-pain stimulation (HPS), using a temperature-controlled thermal stimulator. By analyzing the temporal profiles of changes in oxy-hemoglobin concentration (HbO) using cluster-based statistical tests, several regions of interest in the prefrontal cortex, such as the dorsolateral prefrontal cortex and the anterior prefrontal cortex, were identified where HbO responses to LPS and HPS differed significantly (p < .05). To classify the two levels of pain, a neural network-based classification algorithm was utilized; with leave-one-out cross-validation, the two levels of pain were distinguished with a mean accuracy of 99%. Furthermore, the "internal mentation hypothesis" and the default-mode network were introduced to explain the contrasting trend, as well as the rise and fall, of HbO responses to HPS and LPS.
    Journal of Applied Biobehavioral Research 09/2013; 18(3):134-155.
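The leave-one-out cross-validation protocol used above can be sketched as follows, with a nearest-centroid classifier standing in for the paper's neural-network classifier (an assumed simplification; all function names are illustrative).

```python
import numpy as np

def loo_accuracy(X, y, fit, predict):
    """Leave-one-out cross-validation: each sample is held out once,
    the model is fit on the rest, and the held-out sample is scored."""
    n = len(y)
    hits = 0
    for i in range(n):
        mask = np.arange(n) != i
        model = fit(X[mask], y[mask])
        hits += int(predict(model, X[i]) == y[i])
    return hits / n

# Nearest-centroid classifier: one mean vector per class.
def fit_centroids(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_centroid(model, x):
    return min(model, key=lambda c: np.linalg.norm(x - model[c]))
```

On well-separated classes this loop returns accuracy 1.0; in the paper's setting each `X[i]` would be a subject's HbO feature vector and `fit` would train the neural network.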
