Sensor fusion by neural networks using spatially represented information.

Sektion Neurophysiologie, Universität Ulm, Germany.
Biological Cybernetics (Impact Factor: 2.07). 12/2001; 85(5):371-85. DOI: 10.1007/s004220100271
Source: PubMed

ABSTRACT A neural network model based on a lateral-inhibition-type feedback layer is analyzed with regard to its capabilities to fuse signals from two different sensors reporting the same event ("multisensory convergence"). The model consists of two processing stages. The input stage holds spatial representations of the sensor signals and transmits them to the second stage where they are fused. If the input signals differ, the model exhibits two different processing modes: with small differences it produces a weighted average of the input signals, whereas with large differences it enters a decision mode where one of the two signals is suppressed. The dynamics of the network can be described by a series of two first-order low-pass filters, whose bandwidth depends nonlinearly on the level of concordance of the input signals. The network reduces sensor noise by means of both its averaging and filtering properties. Hence noise suppression, too, depends on the level of concordance of the inputs. When the network's neurons have internal noise, sensor noise suppression is reduced but still effective as long as the input signals do not differ strongly. The possibility of extending the scheme to three and more inputs is discussed.
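The two processing modes described in the abstract (weighted averaging of concordant inputs, suppression of one signal for strongly discordant inputs) can be illustrated with a minimal rate-model sketch. This is not the paper's model; it is a generic feedback layer with local self-excitation, global lateral inhibition, and a population-vector readout, and all parameter values are chosen purely for illustration:

```python
import numpy as np

def gaussian_bump(center, n=100, width=5.0):
    """Spatial (place-coded) representation of a sensor reading."""
    x = np.arange(n)
    return np.exp(-0.5 * ((x - center) / width) ** 2)

def fuse(input_a, input_b, steps=400, dt=0.1, w_exc=0.5, w_inh=0.02):
    """Relax a feedback layer with local self-excitation and global
    (lateral) inhibition, then read out the fused position as the
    centre of mass of the resulting activity profile."""
    u = np.zeros_like(input_a)
    for _ in range(steps):
        r = np.maximum(u, 0.0)             # rectified firing rates
        inhibition = w_inh * r.sum()       # global lateral inhibition
        u += dt * (-u + input_a + input_b + w_exc * r - inhibition)
    r = np.maximum(u, 0.0)
    return float((np.arange(len(r)) * r).sum() / r.sum())
```

When the two input bumps overlap, the readout lies near their mean (averaging mode); when they are far apart and of unequal strength, the shared inhibition pulls the readout toward the stronger input, a crude analogue of the decision mode.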

  • Source
    ABSTRACT: We ask how vestibular and optokinetic information is combined ("fused") when human subjects who are passively rotated while viewing a stationary optokinetic pattern try to tell when they have reached a previously instructed angular displacement ("targeting task"). Inevitably, such a task entices subjects to also draw on cognitive mechanisms such as past experience and contextual expectations. Specifically, because we used rotations of constant angular velocity, we suspected that they would resort, consciously or unconsciously, to extrapolation strategies even though they had no explicit knowledge of this fact. To study these issues, we presented the following six conditions to subjects standing on a rotatable platform inside an optokinetic drum: V, pure vestibular (passive rotation in darkness); O, pure optokinetic (observer motionless, drum rotating); VO, combined (passive rotation while viewing the stationary drum); Oe, optokinetic extrapolation (similar to O, but with the drum visible only during the first 90 degrees of rotation; thereafter subjects extrapolate its further course in their minds); VOe, combined extrapolation (similar to VO, but with the drum visible only during the first 90 degrees); AI, auditory imagination (rotation presented only metaphorically; observers imagine a drum rotation using the rising pitch of a tone as a cue). In all conditions, angular velocities (v(C)) of 15, 30, or 60 degrees/s were used (randomized presentation), and observers were to indicate when the angular displacement (of the self in space or relative to the drum) had reached the instructed magnitude ("desired displacement", D(D); range 90-900 degrees). Performance was analyzed in terms of the targeting gain (G(T) = physical displacement at the time of the subject's indication / D(D)) and variability (%E(R) = percentage absolute deviation from a subject's mean gain). 
In all six conditions, the global mean of G(T) (across v(C) and D(D)) was remarkably close to veracity, ranging from 0.95 (V) to 1.06 (O). A more detailed analysis of the gain revealed a trend of G(T) to be larger with fast than with slow rotations, reflecting an underestimation of fast and an overestimation of slow rotation. This effect varied significantly between conditions: it was smallest in VO, had intermediate values with the monomodal conditions V and O, and also with VOe, and was largest in Oe and AI. Variability was similar for all velocities, but depended significantly on the condition: it was smallest in VO, of intermediate magnitude in O, VOe, Oe, and largest in V and AI. Additional experiments with conditions V, O, and VO in which subjects repetitively indicated displacement increments of 90 degrees, up to a subjective displacement of 1080 degrees, yielded similar results and suggest, in addition, that the displacement perceptions measured at the beginning and during later phases of the rotation are correlated. With respect to the displacement perception during optokinetic stimulation, they also show that the gain and its variability are similar whether subjects feel stationary and see a rotating pattern, or feel rotated and see a stationary pattern (circular vection). We conclude that the vestibular and optokinetic information guiding the subjects' navigation toward an instructed target is not fused by straightforward averaging. Rather the subjects' internal velocity representation (which ultimately determines G(T)) appears to be a weighted average of (1) whatever sensory information is available and of (2) a cognitive default value reflecting the subjects' experiences and expectations. The less secure the sensory information (only one source as in V or O, additional degrading as in Oe or AI), the larger the weight of the default value. 
Vice versa, the better the information (e.g., two independent sources as in VO), the more the actual velocity, and not the default value, determines displacement perception. Moreover, we suggest that subjects intuitively proceeded from the notion of a constant-velocity rotation and therefore tended to carry on the perception built up during the beginning of a rotation or, in the case of vestibular navigation, to compensate for the decaying vestibular cue by means of an internal recovery mechanism.
    Experimental Brain Research 08/2003; 151(1):90-107. · 2.22 Impact Factor
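The weighted-average account in the abstract above can be made concrete with a toy calculation. If the subject responds when the time integral of an internally perceived velocity reaches the desired displacement D(D), the targeting gain reduces to G(T) = actual velocity / perceived velocity. The weight and default velocity below are illustrative assumptions, not values fitted to the study:

```python
def perceived_velocity(v_actual, v_default, weight):
    """Weighted average of the sensory velocity estimate and a
    cognitive default value; `weight` stands for the reliability
    of the available sensory information (illustrative)."""
    return weight * v_actual + (1 - weight) * v_default

def predicted_gain(v_actual, v_default, weight):
    """Subjects respond at t = D(D) / v_perceived, so the physical
    displacement is v_actual * t and G(T) = v_actual / v_perceived."""
    return v_actual / perceived_velocity(v_actual, v_default, weight)
```

With a default of 30 degrees/s, fast rotations (60 degrees/s) yield G(T) > 1 (underestimation of fast rotation) and slow ones (15 degrees/s) yield G(T) < 1, and raising the sensory weight (two concordant sources, as in VO) pulls both gains toward 1, matching the qualitative pattern reported.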
  • Source
    ABSTRACT: This paper discusses the integration of a geographical information system (GIS) with a simulation model of the sensors (active and passive) used as components of a detection system on US Navy ships. The simulation model is a tool developed to improve threat recognition, undersea tactical awareness, countermeasure emissions, and counter-weapon fire control, enabling surface ships to survive a salvo of torpedo attacks. The model was implemented (2005-2006) in Java using AnyLogic™ (by XJ Technologies). A commercial GIS application provides data visualization, query, analysis, and integration capabilities, along with the ability to create and edit geographic data. The simulation model runs and seamlessly obtains geographical information from ArcGIS (by ESRI) in order to make decisions such as preventing a ship from running aground. Statistics and animations are controlled by the simulation software, while the maps and the movement of environment objects on the map are handled by ArcGIS.
    Proceedings of the Winter Simulation Conference, WSC 2007, Washington, DC, USA, December 9-12, 2007; 01/2007
  • Source
    ABSTRACT: The “applied” nature distinguishes applied sciences from theoretical sciences. To emphasize this distinction, we begin with a general, meta-level overview of the scientific endeavor. We introduce the notion of knowledge spectrum and four interconnected modalities of knowledge. In addition to the traditional differentiation between implicit and explicit knowledge, we outline the concepts of general and individual knowledge. We connect general knowledge with the “frame problem,” a fundamental issue of artificial intelligence, and individual knowledge with another important paradigm of artificial intelligence, case-based reasoning, a method of individual knowledge processing that aims at solving new problems based on the solutions to similar past problems.
    07/2005: pages 59-59;
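The case-based reasoning paradigm mentioned in the last abstract (solving new problems by reusing solutions to similar past ones) reduces, in its simplest retrieve-and-reuse form, to a nearest-neighbour lookup over stored cases. A minimal sketch, in which the case structure and the distance measure are assumptions made for illustration:

```python
import math

def retrieve(case_base, problem):
    """Retrieve step: find the stored case whose problem description
    is most similar to the new problem (Euclidean distance here)."""
    return min(case_base, key=lambda case: math.dist(case["problem"], problem))

def solve(case_base, problem):
    """Reuse step: adopt the retrieved case's solution unchanged
    (a full CBR cycle would also revise the solution and retain
    the new case)."""
    return retrieve(case_base, problem)["solution"]
```

This directly exercises individual knowledge: no general rule is ever formulated, so the "frame problem" of specifying everything a rule must hold fixed is sidestepped at the cost of depending on the coverage of the case base.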