Article
A Self-Organising Neural Network for Processing Data from Multiple Sensors
CoRR 01/2010; abs/1012.4173.
Source: DBLP
Citations (2)

Article: Partitioned mixture distribution: an adaptive Bayesian network for low-level image processing
ABSTRACT: Bayesian methods are used to analyse the problem of training a model to make predictions about the probability distribution of data that has yet to be received. Mixture distributions emerge naturally from this framework, but are not ideally matched to the density estimation problems that arise in image processing. An extension, called a partitioned mixture distribution, is presented, which is essentially a set of overlapping mixture distributions. An expectation-maximisation training algorithm is derived for optimising partitioned mixture distributions according to the maximum likelihood criterion. Finally, the results of some numerical simulations are presented, which demonstrate that lateral inhibition arises naturally in partitioned mixture distributions, and that the nodes in a partitioned mixture distribution network cooperate in such a way that each mixture distribution receives its necessary complement of computing machinery.
IEE Proceedings - Vision Image and Signal Processing 09/1994.
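The expectation-maximisation training the abstract refers to can be illustrated on an ordinary (non-partitioned) mixture; a minimal 1-D Gaussian-mixture sketch, with all function and variable names invented for illustration (the partitioned variant would additionally restrict each mixture to an overlapping window of nodes):

```python
import math

def em_gaussian_mixture(data, k=2, iters=50):
    """Fit a 1-D Gaussian mixture by expectation-maximisation."""
    lo, hi = min(data), max(data)
    # Deterministic initialisation: spread the means across the data range.
    means = [lo + (i + 0.5) * (hi - lo) / k for i in range(k)]
    variances = [1.0] * k
    weights = [1.0 / k] * k
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in data:
            p = [w * math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
                 for w, m, v in zip(weights, means, variances)]
            s = sum(p)
            resp.append([pi / s for pi in p])
        # M-step: re-estimate each component from its responsibilities.
        for j in range(k):
            nj = sum(r[j] for r in resp)
            means[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            variances[j] = max(1e-6, sum(r[j] * (x - means[j]) ** 2
                                         for r, x in zip(resp, data)) / nj)
            weights[j] = nj / len(data)
    return weights, means, variances
```

Each iteration alternates the two steps: the E-step softly assigns data to components, and the M-step refits each component to its weighted share of the data, which is guaranteed not to decrease the likelihood.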
ABSTRACT: In this paper Bayesian methods are used to analyze some of the properties of a special type of Markov chain. The forward transitions through the chain are followed by inverse transitions (using Bayes' theorem) backward through a copy of the same chain; this will be called a folded Markov chain. If an appropriately defined Euclidean error (between the original input and its reconstruction via Bayes' theorem) is minimized with respect to the choice of Markov chain transition probabilities, then the familiar theories of both vector quantizers and self-organizing maps emerge. This approach is also used to derive the theory of self-supervision, in which the higher layers of a multilayer network supervise the lower layers, even though overall there is no external teacher.
Neural Computation 01/1994; 6:767-794.