Article

Dynamic tunneling technique for efficient training of multilayer perceptrons

Defence Terrain Res. Lab., Delhi
IEEE Transactions on Neural Networks (Impact Factor: 2.95). 02/1999; DOI: 10.1109/72.737492
Source: IEEE Xplore

ABSTRACT: A new, efficient computational technique for training multilayer
feedforward neural networks is proposed. The algorithm consists of two
learning phases. The first phase is a local search implementing gradient
descent; the second is a direct search scheme implementing dynamic
tunneling in weight space, which escapes the local trap and thereby
generates the point of next descent. Repeated alternate application of
these two phases forms a new training procedure that reaches a global
minimum point from any arbitrary initial choice in weight space.
Simulation results are provided for five test examples to demonstrate the
efficiency of the proposed method, which overcomes the problems of
initialization and local minima in multilayer perceptrons.
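
To fix ideas, here is a minimal Python sketch of the alternating two-phase
loop. It is not the paper's exact formulation: the names
(train_with_tunneling, loss, grad), the hyperparameters, and the crude Euler
integration are illustrative assumptions; the tunneling dynamics
dw/dt = rho*(w - w*)^(1/3), which repel the trajectory from the last minimum
w*, follow a form common in the dynamic tunneling literature.

import numpy as np

def train_with_tunneling(loss, grad, w0, lr=0.01, rho=1.0, dt=0.01,
                         n_descent=500, n_tunnel=50, tol=1e-6,
                         n_cycles=20, seed=0):
    # loss and grad map a flat weight vector to the training error and
    # its gradient; every hyperparameter name here is illustrative.
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float).copy()
    best_w, best_E = w.copy(), loss(w)
    for _ in range(n_cycles):
        # Phase 1: local search by plain gradient descent.
        for _ in range(n_descent):
            w = w - lr * grad(w)
        E = loss(w)
        if E < best_E:
            best_w, best_E = w.copy(), E
        if best_E < tol:
            break
        # Phase 2: dynamic tunneling from a small perturbation of the last
        # minimum; Euler steps of dw/dt = rho * (w - w*)^(1/3) repel the
        # trajectory from best_w until a lower-error point is found.
        w = best_w + 1e-2 * rng.standard_normal(best_w.shape)
        for _ in range(n_tunnel):
            w = w + dt * rho * np.cbrt(w - best_w)
            if loss(w) < best_E:      # point of next descent found
                break
    return best_w, best_E

Each tunneling phase stops as soon as it finds a point with lower error than
the best minimum so far, and that point seeds the next descent phase.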

Related publications:
    ABSTRACT: Many problems that arise in machine learning and data mining domains deal with nonlinearity and quite often demand users to obtain global optimal solutions rather than local optimal ones. Several algorithms have been proposed in the optimization literature and inherited by the machine learning community. In what is popularly known as the initialization problem, the quality of the parameters obtained depends significantly on the initial values given by the user. In this paper, we propose stability-region-based methods for systematically exploring the subspace of the parameters to obtain the neighborhood local optimal solutions. The proposed algorithm takes advantage of TRUST-TECH (TRansformation Under STability-reTaining Equilibria CHaracterization) to compute neighborhood local optimal solutions on the nonlinear surface in a systematic manner using stability regions. Our method explores the dynamic and geometric characteristics of stability boundaries of a nonlinear dynamical system corresponding to the nonlinear function of interest. Essentially, our method combines the advantages of traditional local optimizers with the dynamic and geometric characteristics of the stability regions of the nonlinear dynamical system corresponding to the log-likelihood function. Two phases, namely the local phase and the stability region phase, are repeated alternately in the parameter space to achieve improvements in the quality of the solutions. The local phase obtains the local maximum of the nonlinear function, and the stability region phase helps to escape from the local maximum by moving toward the neighboring stability regions. The stability-region-based algorithms are applied to three important machine learning problems: (1) unsupervised learning - model-based clustering, (2) pattern discovery - the motif finding problem, and (3) supervised learning - training artificial neural networks. Our algorithms were tested on both synthetic and real datasets, and the advantages of using this stability-region-based framework are clearly manifested. This framework not only reduces the sensitivity to initialization, but also allows practitioners the flexibility to use various global and local methods that work well for a particular problem of interest.
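
As a rough illustration of the stability-region phase, the following toy
exit-point search (written for minimization; trust_tech_neighbors, the ray
parameters, and the use of scipy.optimize.minimize as the local solver are
assumptions, not the authors' TRUST-TECH implementation) marches along random
rays from a local minimum until the function value begins to fall, then
descends into the neighboring basin:

import numpy as np
from scipy.optimize import minimize

def trust_tech_neighbors(f, x_star, n_dirs=8, step=0.05, max_steps=200,
                         seed=0):
    # From a local minimum x_star of f, march along random rays until f
    # starts to decrease (a crude proxy for crossing the stability
    # boundary at an exit point), then run a local solver in the new basin.
    rng = np.random.default_rng(seed)
    neighbors = []
    for _ in range(n_dirs):
        d = rng.standard_normal(x_star.shape)
        d /= np.linalg.norm(d)
        prev = f(x_star)
        for k in range(1, max_steps + 1):
            x = x_star + k * step * d
            val = f(x)
            if val < prev:              # past the exit point
                res = minimize(f, x)    # local phase in the next basin
                neighbors.append((res.x, res.fun))
                break
            prev = val
    return neighbors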
    ABSTRACT: We propose a modified learning process for a generalized neural network, based on the learning algorithm of Liu et al. (2001). We examine the effect of initial weights on training results and learning errors under the modified learning process. We employ an incremental training procedure in which training patterns are learned systematically. Our algorithm starts with a single training pattern and a single hidden-layer neuron. During training, we attempt to escape from local minima using a weight-scaling technique, and we allow the network to grow by adding a hidden-layer neuron only after several consecutive failed escape attempts. Our optimization procedure tends to make the network reach the error tolerance with little or no further training after a hidden-layer neuron is added. Simulation results with suitable initial weights indicate that the present constructive algorithm can obtain neural networks very close to minimal structures and that convergence to a solution in neural network training can be guaranteed. We tested these algorithms extensively with small training sets.
    Korean Journal of Applied Statistics. 01/2013; 26(5).
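
A self-contained sketch of such a constructive loop on a tiny tanh network
appears below. The stall test, the scaling factor beta, and every function
name (forward, sgd_epochs, add_neuron, constructive_train) are illustrative
assumptions, not the authors' algorithm:

import numpy as np

rng = np.random.default_rng(0)

def forward(p, X):
    H = np.tanh(X @ p["W1"] + p["b1"])        # hidden activations
    return H, H @ p["W2"] + p["b2"]           # hidden, output

def loss(p, X, y):
    return float(np.mean((forward(p, X)[1] - y) ** 2))

def sgd_epochs(p, X, y, lr=0.05, epochs=200):
    # Plain backpropagation for one tanh hidden layer and linear output.
    for _ in range(epochs):
        H, yhat = forward(p, X)
        d = 2.0 * (yhat - y) / len(X)          # dL/dyhat
        p["W2"] -= lr * H.T @ d
        p["b2"] -= lr * d.sum(0)
        dZ = (d @ p["W2"].T) * (1.0 - H ** 2)  # back through tanh
        p["W1"] -= lr * X.T @ dZ
        p["b1"] -= lr * dZ.sum(0)

def add_neuron(p):
    # Grow the hidden layer by one unit with small random weights.
    n_in, n_out = p["W1"].shape[0], p["W2"].shape[1]
    p["W1"] = np.hstack([p["W1"], 0.1 * rng.standard_normal((n_in, 1))])
    p["b1"] = np.append(p["b1"], 0.0)
    p["W2"] = np.vstack([p["W2"], 0.1 * rng.standard_normal((1, n_out))])

def constructive_train(X, y, tol=1e-3, beta=0.7, max_failures=3, rounds=50):
    n_in, n_out = X.shape[1], y.shape[1]
    p = {"W1": 0.1 * rng.standard_normal((n_in, 1)), "b1": np.zeros(1),
         "W2": 0.1 * rng.standard_normal((1, n_out)), "b2": np.zeros(n_out)}
    failures = 0
    for _ in range(rounds):
        before = loss(p, X, y)
        sgd_epochs(p, X, y)
        after = loss(p, X, y)
        if after < tol:
            break
        if before - after < 1e-5:              # stalled: try weight scaling
            p["W1"] *= beta
            p["W2"] *= beta
            failures += 1
            if failures >= max_failures:       # grow after repeated failures
                add_neuron(p)
                failures = 0
        else:
            failures = 0
    return p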
    ABSTRACT: This paper proposes a complex-valued neural network trained by the simultaneous perturbation method with a dynamic tunneling technique. A comparison is made among the conventional complex-valued backpropagation algorithm, a complex-valued network using the simultaneous perturbation method, a complex-valued network with the dynamic tunneling technique, and the proposed method, and results for all four are reported. For simulation, we tested the benchmark complex XOR problem with binary inputs and two real-valued problems, namely geometric figure rotation and the similarity transformation of scaling. The comparison shows that the conventional complex-valued backpropagation method performs much better than the other three methods.
    Proceedings of the Second International Conference on Computational Science, Engineering and Information Technology. 10/2012;
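
For reference, a single simultaneous-perturbation update fits in a few
lines. This is a real-valued sketch (the paper's networks are
complex-valued), and spsa_step with its defaults is an assumption:

import numpy as np

def spsa_step(loss, w, a=0.01, c=0.01, rng=None):
    # Estimate the whole gradient from two loss evaluations along a random
    # +/-1 direction delta; since delta_i = +/-1, 1/delta_i equals delta_i,
    # so multiplying by delta gives the per-coordinate estimate.
    rng = np.random.default_rng() if rng is None else rng
    delta = rng.choice([-1.0, 1.0], size=w.shape)
    g_hat = (loss(w + c * delta) - loss(w - c * delta)) / (2.0 * c) * delta
    return w - a * g_hat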
