Article

An efficient hidden layer training method for the multilayer perceptron

Department of Electrical Engineering, University of Texas at Arlington, Arlington, Texas, United States
Neurocomputing, 12/2006; 70:525-535. DOI: 10.1016/j.neucom.2005.11.008

ABSTRACT The output-weight-optimization and hidden-weight-optimization (OWO–HWO) training algorithm for the multilayer perceptron alternately solves linear equations for the output weights and reduces a separate hidden layer error function with respect to the hidden layer weights. Here, three major improvements are made to OWO–HWO. First, a desired net function is derived. Second, starting from the classical mean square error, a weighted hidden layer error function is derived which de-emphasizes net function errors that correspond to saturated activation function values. Third, an adaptive learning factor based on the local shape of the error surface is used in hidden layer training. Faster learning convergence is experimentally verified using three training data sets.
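To make the procedure concrete, the following is a minimal sketch of one OWO-HWO epoch, assuming a single sigmoid hidden layer and ordinary least squares for both stages; the fixed factor z stands in for the paper's adaptive learning factor, and all names are illustrative rather than the authors' code. The squared activation-derivative weighting shows how net-function errors at saturated units are de-emphasized.

    import numpy as np

    def sigmoid(net):
        return 1.0 / (1.0 + np.exp(-net))

    def owo_hwo_epoch(X, T, W_h, z=0.1):
        # X: (Np, Nin) inputs (a constant-1 column assumed for thresholds)
        # T: (Np, Nout) desired outputs; W_h: (Nh, Nin) hidden weights
        net = X @ W_h.T                       # hidden net functions
        O = sigmoid(net)                      # hidden activations
        # OWO: solve linear equations for the output weights.
        W_o = np.linalg.lstsq(O, T, rcond=None)[0].T
        Y = O @ W_o.T                         # network outputs
        # HWO: desired net-function changes from the delta rule.
        fprime = O * (1.0 - O)                # sigmoid derivative at each net
        delta = fprime * ((T - Y) @ W_o)      # hidden-layer deltas
        # Weighted hidden-layer error: minimizing
        #   sum_p fprime^2 * (z*delta/fprime - x_p . d)^2
        # de-emphasizes net errors where the activation is saturated.
        for j in range(W_h.shape[0]):
            A = X * fprime[:, j:j+1]          # rows scaled by the weighting
            b = z * delta[:, j]
            d, *_ = np.linalg.lstsq(A, b, rcond=None)
            W_h[j] += d                       # update unit j's input weights
        return W_h, W_o

A faithful implementation would also include the paper's desired net function and its curvature-based learning factor in place of the fixed z used here.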

  • ABSTRACT: A system is proposed for recognizing four types of defects present in silicon wafer images. After preprocessing, the system applies four segmentation algorithms, one per defect type. Approximate posterior probabilities from a multilayer perceptron classifier aid in fusing the segmentors and making the final defect classification (a hedged fusion sketch appears after this list). Numerical results confirm the feasibility of the approach.
    The 2013 International Joint Conference on Neural Networks (IJCNN); 01/2013
  • ABSTRACT: A batch training algorithm for the multilayer perceptron is developed that optimizes validation error with respect to two parameters. At the end of each training epoch, the method temporarily prunes the network and calculates the validation error versus number of hidden units curve in one pass through the validation data. Since pruning is done at each epoch and the best networks are saved, validation error is optimized over the number of hidden units and the number of epochs simultaneously (see the validation-curve sketch after this list). The number of multiplies the algorithm requires is analyzed. In simulations, the method compares favorably with competing approaches.
    The 2013 International Joint Conference on Neural Networks (IJCNN); 01/2013
  • ABSTRACT: When we apply MLPs (multilayer perceptrons) to pattern classification problems, we generally allocate one output node to each class, and the index of the winning output node denotes the class. In this paper, by contrast, we propose increasing the number of output nodes per class to improve MLP performance (see the output-decision sketch after this list). As theoretical background, we derive the misclassification probability in two-class problems with additional outputs, under the assumptions that the two classes are equally probable and that the outputs are uniformly distributed within each class. Simulations on a 50-word isolated-word recognition task also show the effectiveness of the method.
    01/2009; 9(1). DOI:10.5392/JKCA.2009.9.1.123
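
For the wafer-defect abstract above, the fusion step might look like the following sketch; the weighting rule, array shapes, and function name are assumptions read from the abstract, not the authors' actual method.

    import numpy as np

    def fuse_segmentors(posteriors, segment_maps):
        # posteriors: (4,) approximate class posteriors from the MLP classifier
        # segment_maps: (4, H, W) binary outputs, one per defect-type segmentor
        # Illustrative rule: scale each map by its class posterior, then
        # label every pixel with the defect type of the largest score.
        scores = posteriors[:, None, None] * segment_maps
        return scores.argmax(axis=0)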
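
For the pruning abstract above, the validation-error-versus-hidden-units curve could be sketched as below. The paper computes the curve in a single pass through the validation data; for clarity this sketch simply re-solves a least-squares output layer for each network size, and the unit ordering is assumed given.

    import numpy as np

    def validation_curve(O_val, T_val, order):
        # O_val: (Nv, Nh) hidden-unit activations on the validation set
        # T_val: (Nv, Nout) desired validation outputs
        # order: hidden units sorted from most to least useful (assumed given)
        errors = []
        for k in range(1, len(order) + 1):
            Ok = O_val[:, order[:k]]                       # keep the best k units
            W, *_ = np.linalg.lstsq(Ok, T_val, rcond=None)
            errors.append(np.mean((T_val - Ok @ W) ** 2))  # validation MSE
        return errors  # minimize over k here, and over epochs outside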
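
For the last abstract, the decision rule with r output nodes per class might be as follows; averaging the outputs of each class's group is an assumption consistent with the abstract, not necessarily the authors' exact rule.

    import numpy as np

    def classify(y, n_classes, r):
        # y: (n_classes * r,) MLP output vector, r nodes allocated per class
        scores = y.reshape(n_classes, r).mean(axis=1)  # one score per class
        return int(scores.argmax())                    # index of winning class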