Sensor fusion by neural networks using spatially represented information

Sektion Neurophysiologie, Universität Ulm, Germany.
Biological Cybernetics (Impact Factor: 1.71). 12/2001; 85(5):371-85. DOI: 10.1007/s004220100271
Source: PubMed


A neural network model based on a lateral-inhibition-type feedback layer is analyzed with regard to its capabilities to fuse signals from two different sensors reporting the same event ("multisensory convergence"). The model consists of two processing stages. The input stage holds spatial representations of the sensor signals and transmits them to the second stage where they are fused. If the input signals differ, the model exhibits two different processing modes: with small differences it produces a weighted average of the input signals, whereas with large differences it enters a decision mode where one of the two signals is suppressed. The dynamics of the network can be described by a series of two first-order low-pass filters, whose bandwidth depends nonlinearly on the level of concordance of the input signals. The network reduces sensor noise by means of both its averaging and filtering properties. Hence noise suppression, too, depends on the level of concordance of the inputs. When the network's neurons have internal noise, sensor noise suppression is reduced but still effective as long as the input signals do not differ strongly. The possibility of extending the scheme to three and more inputs is discussed.
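The two processing modes described in the abstract can be illustrated with a minimal one-dimensional sketch: place-coded input bumps feed a feedback layer with local recurrent excitation and global inhibition. All parameter values below (kernel widths, gains, the reduced amplitude of the second input) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def bump(center, amp=1.0, n=100, width=5.0):
    """Place-coded (spatial) representation of one sensor's reading."""
    x = np.arange(n)
    return amp * np.exp(-0.5 * ((x - center) / width) ** 2)

def fuse(c1, c2, amp2=0.7, n=100, steps=800, dt=0.1):
    """Feedback stage: local recurrent excitation plus global inhibition.
    Parameters are illustrative, not the paper's."""
    x = np.arange(n)
    inp = bump(c1, 1.0, n) + bump(c2, amp2, n)    # input stage: summed spatial signals
    exc = 0.12 * np.exp(-0.5 * ((x[:, None] - x[None, :]) / 5.0) ** 2)
    a = np.zeros(n)
    for _ in range(steps):
        drive = inp + exc @ a - 0.08 * a.sum()    # lateral-inhibition-type feedback
        a = np.clip(a + dt * (-a + np.maximum(drive, 0.0)), 0.0, 1.0)
    return a, float((x * a).sum() / a.sum())      # activity and fused position

# Small discrepancy: averaging mode (fused estimate lies between the inputs)
_, pos_avg = fuse(45, 55)
# Large discrepancy: decision mode (the weaker signal is largely suppressed)
act, pos_dec = fuse(20, 80)
```

With nearby inputs the layer settles on a single bump whose center lies between the two cues; with widely separated inputs the stronger bump wins and activity at the weaker input's location is suppressed.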

  • Source
    • "The information from these sensors is then fused, and the final decision to react, or not to react is made. Sensor fusion technologies, such as using neural networks, have been developed (Boß et al. 2001) and are examples of the theory of evidence (Fabre, Appriou, and Briottet 2001). "
    ABSTRACT: This paper discusses the integration of a geographical information system (GIS) with a simulation model of the sensors (active and passive) used as components of a detection system on US Navy ships. The simulation model is a tool developed to improve threat recognition, undersea tactical awareness, countermeasure emissions, and counter-weapon fire control, enabling surface ships to survive a salvo of torpedo attacks. The model was implemented (2005-2006) in Java using AnyLogic™ (by XJ Technologies). A commercial GIS application provides data visualization, query, analysis, and integration capabilities along with the ability to create and edit geographic data. The simulation model runs and seamlessly obtains geographical information from ArcGIS (by ESRI corporation) in order to make decisions such as preventing a ship from running aground. Statistics and animations are controlled by the simulation software, while the maps and the movements of the environment objects on the map are handled by ArcGIS.
    Full-text · Conference Paper · Jan 2007
  • Source
    • "However, more elegant solutions can be envisaged based on networks using place codes. Spatially coding fusion mechanisms have been suggested, for example, by Ernst and Banks (2002) and Boß et al. (2001). Unfortunately however, the notion of averaging by means of spatially coding networks is at variance with the known physiology of the vestibular and optokinetic systems. "
    ABSTRACT: We ask how vestibular and optokinetic information is combined ("fused") when human subjects who are being passively rotated while viewing a stationary optokinetic pattern try to tell when they have reached a previously instructed angular displacement ("targeting task"). Inevitably such a task entices subjects to also draw on cognitive mechanisms such as past experience and contextual expectations. Specifically, because we used rotations of constant angular velocity, we suspected that they would resort, consciously or unconsciously, to extrapolation strategies even though they had no explicit knowledge of this fact. To study these issues, we presented the following six conditions to subjects standing on a rotatable platform inside an optokinetic drum: V, pure vestibular (passive rotation in darkness); O, pure optokinetic (observer motionless, drum rotating); VO, combined (passive rotation while viewing stationary drum); Oe, optokinetic extrapolation (similar to O, but drum visible only during first 90° of rotation; thereafter subjects extrapolate the further course in their minds); VOe, combined extrapolation (similar to VO, but drum visible only during first 90°); AI, auditory imagination (rotation presented only metaphorically; observers imagine a drum rotation using the rising pitch of a tone as cue). In all conditions, angular velocities (v_C) of 15, 30, or 60°/s were used (randomized presentation), and observers were to indicate when angular displacement (of the self in space or relative to the drum) had reached the instructed magnitude ("desired displacement", D_D; range 90–900°). Performance was analyzed in terms of the targeting gain (G_T = physical displacement at time of subjects' indication / D_D) and variability (%E_R = percentage absolute deviation from a subject's mean gain). In all six conditions, the global mean of G_T (across v_C and D_D) was remarkably close to veracity, ranging from 0.95 (V) to 1.06 (O). 
A more detailed analysis of the gain revealed a trend of G_T to be larger with fast than with slow rotations, reflecting an underestimation of fast and an overestimation of slow rotation. This effect varied significantly between conditions: it was smallest in VO, had intermediate values with the monomodal conditions V and O, and also with VOe, and was largest in Oe and AI. Variability was similar for all velocities, but depended significantly on the condition: it was smallest in VO, of intermediate magnitude in O, VOe, Oe, and largest in V and AI. Additional experiments with conditions V, O, and VO in which subjects repetitively indicated displacement increments of 90°, up to a subjective displacement of 1080°, yielded similar results and suggest, in addition, that the displacement perceptions measured at the beginning and during later phases of the rotation are correlated. With respect to the displacement perception during optokinetic stimulation, they also show that the gain and its variability are similar whether subjects feel stationary and see a rotating pattern, or feel rotated and see a stationary pattern (circular vection). We conclude that the vestibular and optokinetic information guiding the subjects' navigation toward an instructed target is not fused by straightforward averaging. Rather the subjects' internal velocity representation (which ultimately determines G_T) appears to be a weighted average of (1) whatever sensory information is available and of (2) a cognitive default value reflecting the subjects' experiences and expectations. The less secure the sensory information (only one source as in V or O, additional degrading as in Oe or AI), the larger the weight of the default value. Vice versa, the better the information (e.g., two independent sources as in VO), the more the actual velocity and not the default value determines displacement perception. 
Moreover, we suggest that subjects intuitively proceeded from the notion of a constant velocity rotation, and therefore tended to carry on the perception built up during the beginning of a rotation or, in the case of vestibular navigation, to compensate for the decaying vestibular cue by means of an internal recovery mechanism.
    Preview · Article · Aug 2003 · Experimental Brain Research
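The fusion scheme this abstract argues for, a weighted average of the available sensory cues and a cognitive default value, can be sketched in a few lines. The cue values, weights, and default below are illustrative assumptions, not fitted data from the study.

```python
def perceived_velocity(cues, weights, default, default_weight):
    """Weighted average of sensory velocity cues and a cognitive default.
    All numbers used with this function are illustrative assumptions."""
    num = default_weight * default + sum(w * v for v, w in zip(cues, weights))
    return num / (default_weight + sum(weights))

true_v = 60.0      # fast rotation (deg/s)
default_v = 40.0   # hypothetical "typical rotation" from past experience

# One cue (condition V or O): the default pulls the estimate strongly
v_mono = perceived_velocity([true_v], [2.0], default_v, 1.0)        # 53.3
# Two cues (condition VO): the default's relative weight shrinks
v_bi = perceived_velocity([true_v, true_v], [2.0, 2.0], default_v, 1.0)  # 56.0
```

This reproduces the qualitative pattern reported above: with a single degraded cue the estimate regresses toward the default (fast rotations underestimated), while with two independent cues the actual velocity dominates.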
  • Source
    ABSTRACT: Breast cancer is a common and dreadful disease in women. The surface temperature and the vascularization pattern of the breast could indicate breast diseases. Establishing the surface isotherm pattern of the breast and the normal range of cyclic variations of temperature distribution can assist in identifying the abnormal infrared images of diseased breasts. This paper investigates the cyclic variation of temperature and vascularization of normal breast thermograms under a controlled environment. More than 50 Asian women were examined, and some of them were examined continuously for two months. Altogether, no fewer than 800 thermograms were obtained. Before these thermograms can be analysed objectively via a computer algorithm, they must be digitized and segmented. The authors present a method to segment thermograms and extract the useful region from the background. After the image processing, these thermograms can be analysed and then the best time to perform an examination can be chosen. All these results are important for establishing a data bank of normal breast thermography, for choosing the best time for an examination, and as a systematic methodology for evaluating and analysing abnormal breast thermography in the future.
    Full-text · Article · Jan 2001 · Journal of Medical Engineering & Technology
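The abstract does not specify which segmentation algorithm separates the useful region from the background, so the sketch below uses a standard stand-in, Otsu's global threshold, on a synthetic two-population "thermogram"; the image data and parameters are invented for illustration only.

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Global threshold maximizing between-class variance (Otsu's method)."""
    hist, edges = np.histogram(img.ravel(), bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    hist = hist.astype(float)
    w0 = hist.cumsum()[:-1]               # background weight at each split
    w1 = hist.sum() - w0                  # foreground weight
    mu0 = (hist * centers).cumsum()[:-1]  # background intensity mass
    mu_t = (hist * centers).sum()
    valid = (w0 > 0) & (w1 > 0)
    m0 = np.where(valid, mu0 / np.maximum(w0, 1e-12), 0.0)
    m1 = np.where(valid, (mu_t - mu0) / np.maximum(w1, 1e-12), 0.0)
    between = np.where(valid, w0 * w1 * (m0 - m1) ** 2, 0.0)
    return centers[np.argmax(between)]

# Synthetic data: cool background pixels vs. warm body-region pixels
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(0.2, 0.05, 500),   # background
                      rng.normal(0.7, 0.05, 500)])  # warm region
t = otsu_threshold(img)
mask = img > t   # extracted "useful region"
```

In practice the published method may differ; this only illustrates the foreground/background split that must precede the objective analysis the abstract describes.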