Article

A Self-Organising Neural Network for Processing Data from Multiple Sensors

CoRR 12/2010; abs/1012.4173.
Source: DBLP

ABSTRACT This paper shows how a folded Markov chain network can be applied to the problem of processing data from multiple sensors, with an emphasis on the special case of two sensors. The network must be designed so that it can transform a high-dimensional input vector into a posterior probability, for which purpose the partitioned mixture distribution network is ideally suited. The underlying theory is presented in detail, and a simple numerical simulation is given that shows the emergence of ocular dominance stripes.
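The simulation itself is not reproduced in this listing, so below is a minimal, hypothetical sketch of the kind of two-sensor experiment the abstract describes: a one-dimensional self-organising map trained on mostly-monocular inputs, whose nodes segregate into contiguous left/right dominance blocks. The map size, neighbourhood width, learning rate, and input statistics are illustrative assumptions, not values taken from the paper (Python/NumPy):

    import numpy as np

    rng = np.random.default_rng(0)

    n_nodes, n_steps = 64, 20000
    sigma, eta = 3.0, 0.05                 # assumed neighbourhood width and learning rate

    # One weight per sensor ("eye") for each map node; start near-binocular.
    w = rng.normal(0.5, 0.01, size=(n_nodes, 2))
    pos = np.arange(n_nodes)

    for _ in range(n_steps):
        eye = rng.integers(2)              # mostly-monocular input: one sensor strong
        x = np.where(np.arange(2) == eye, 1.0, 0.1)
        win = np.argmin(((w - x) ** 2).sum(axis=1))    # best-matching node
        h = np.exp(-0.5 * ((pos - win) / sigma) ** 2)  # Gaussian neighbourhood
        w += eta * h[:, None] * (x - w)                # self-organising map update

    dom = w[:, 0] - w[:, 1]                # >0: sensor 0 dominates, <0: sensor 1
    print("".join("L" if d > 0 else "R" for d in dom))

Run long enough, this tends to print contiguous runs of L's and R's rather than a random interleaving, which is the stripe-like segregation of the two sensors that the abstract refers to.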

References

  • S. P. Luttrell, "Partitioned mixture distribution: an adaptive Bayesian network for low-level image processing", IEE Proceedings - Vision, Image and Signal Processing, 09/1994.
    ABSTRACT: Bayesian methods are used to analyse the problem of training a model to make predictions about the probability distribution of data that has yet to be received. Mixture distributions emerge naturally from this framework, but are not ideally matched to the density estimation problems that arise in image processing. An extension, called a partitioned mixture distribution, is presented, which is essentially a set of overlapping mixture distributions. An expectation-maximisation training algorithm is derived for optimising partitioned mixture distributions according to the maximum likelihood criterion. Finally, the results of some numerical simulations are presented, which demonstrate that lateral inhibition arises naturally in partitioned mixture distributions, and that the nodes in a partitioned mixture distribution network co-operate in such a way that each mixture distribution in the network receives its necessary complement of computing machinery.
    (A toy sketch of the partitioned-mixture posterior computation follows this list.)
  • S. P. Luttrell, "A Bayesian analysis of self-organizing maps", Neural Computation, 01/1994; 6:767-794.
    ABSTRACT: In this paper Bayesian methods are used to analyze some of the properties of a special type of Markov chain. The forward transitions through the chain are followed by inverse transitions (using Bayes' theorem) backward through a copy of the same chain; this will be called a folded Markov chain. If an appropriately defined Euclidean error (between the original input and its reconstruction via Bayes' theorem) is minimized with respect to the choice of Markov chain transition probabilities, then the familiar theories of both vector quantizers and self-organizing maps emerge. This approach is also used to derive the theory of self-supervision, in which the higher layers of a multilayer network supervise the lower layers, even though overall there is no external teacher.
    (A toy vector-quantizer reading of the folded Markov chain follows this list.)
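The first entry above describes overlapping mixture distributions trained by expectation-maximisation. As a reading aid, here is a minimal, hypothetical Python sketch of the posterior (E-step) computation in such a network; the Gaussian components, window size, and wrap-around topology are assumptions made for illustration, not the paper's specification:

    import numpy as np

    def pmd_posteriors(x, means, var=0.1, half_width=4):
        # Likelihoods p(x | i) for isotropic Gaussian components, equal priors.
        like = np.exp(-0.5 * ((x - means) ** 2).sum(axis=1) / var)
        n = len(means)
        post = np.empty(n)
        for i in range(n):
            # Node i is normalised only over its own window of neighbours, so
            # the network behaves as a set of overlapping mixture distributions.
            idx = np.arange(i - half_width, i + half_width + 1) % n  # wrap-around
            post[i] = like[i] / like[idx].sum()
        return post  # within-window normalisation acts as lateral inhibition

    rng = np.random.default_rng(1)
    means = rng.uniform(0.0, 1.0, size=(32, 2))
    print(pmd_posteriors(np.array([0.3, 0.7]), means).round(3))

An EM-style M-step would then re-estimate each component mean as a posterior-weighted average of the training inputs, which is where the co-operation between neighbouring nodes mentioned in the abstract comes from.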
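For the second entry, the abstract states that minimising the Euclidean reconstruction error of a folded Markov chain recovers the vector quantizer (and, with noisy transitions, the self-organizing map). Below is a hypothetical sketch of the deterministic special case; the data distribution, codebook size, and learning rate are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(size=(5000, 2))       # toy input distribution
    codes = rng.normal(size=(16, 2))        # one reference vector per chain state
    eta = 0.05

    for x in data:
        # Forward transition: encode x as its nearest code (deterministic chain).
        y = np.argmin(((codes - x) ** 2).sum(axis=1))
        # The folded (inverse) transition reconstructs x as codes[y]; stepping
        # down the Euclidean error ||x - codes[y]||^2 is the standard VQ update.
        codes[y] += eta * (x - codes[y])

    # Mean squared reconstruction error over the data after one training pass:
    near = np.argmin(((data[:, None, :] - codes[None, :, :]) ** 2).sum(-1), axis=1)
    print(float(((data - codes[near]) ** 2).sum(axis=1).mean()))

Replacing the deterministic encoder with a noisy transition that can land on neighbouring chain states spreads each update over a neighbourhood, which is how the self-organizing map emerges from the same error criterion.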
