Determining when, if, and how information from separate sensory channels has been combined is a fundamental goal of research on multisensory processing in the brain. This can be a particular challenge in psychophysical data, as there is no direct recording of neural output. The most common way to characterize multisensory interactions in behavioral data is to compare responses to multisensory stimulation with the race model, a model of parallel, independent processing constructed from the probability of responses to the two unisensory stimuli which make up the multisensory stimulus. If observed multisensory reaction times are faster than those predicted by the model, it is inferred that information from the two channels is being combined rather than processed independently. Recently, behavioral research has been published employing capacity analyses, where comparisons between two conditions are carried out at the level of the integrated hazard function. Capacity analyses seem to be a particularly appealing technique for evaluating multisensory functioning, as they describe relationships between conditions across the entire distribution curve and are relatively easy and intuitive to interpret. The current paper presents a capacity analysis of a behavioral data set previously analyzed using the race model. While applications of capacity analyses are still somewhat limited due to their novelty, it is hoped that this exploration of capacity and race model analyses will encourage the use of this promising new technique both in multisensory research and in other applicable fields.
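As a concrete illustration of the race model test described above, the sketch below compares an observed multisensory reaction-time distribution against Miller's race model bound, F_AV(t) <= F_A(t) + F_V(t). This is a generic illustration with made-up function names, not the analysis code used in the paper.

```python
import numpy as np

def ecdf(samples, t):
    """Empirical CDF of reaction times, evaluated at each time in t."""
    samples = np.asarray(samples, dtype=float)
    return np.mean(samples[:, None] <= np.atleast_1d(t), axis=0)

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Difference between the observed multisensory CDF and Miller's bound.

    The race model predicts F_AV(t) <= F_A(t) + F_V(t); positive return
    values mark times where observed multisensory responses are faster
    than independent parallel processing allows, suggesting integration.
    """
    bound = np.minimum(1.0, ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid))
    return ecdf(rt_av, t_grid) - bound
```

For instance, a multisensory response recorded before any unisensory response yields a positive violation at that time, whereas multisensory responses slower than both unisensory distributions never do.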
Record linkage is the task of identifying records from disparate data sources that refer to the same entity. It is an integral component of data processing in distributed settings, where the integration of information from multiple sources can prevent duplication and enrich overall data quality, thus enabling more detailed and correct analysis. Privacy-preserving record linkage (PPRL) is a variant of the task in which data owners wish to perform linkage without revealing identifiers associated with the records. This task is desirable in various domains, including healthcare, where it may not be possible to reveal patient identity due to confidentiality requirements, and in business, where it could be disadvantageous to divulge customers' identities. To perform PPRL, it is necessary to apply string comparators that function in the privacy-preserving space. A number of privacy-preserving string comparators (PPSCs) have been proposed, but little research has compared them in the context of a real record linkage application. This paper performs a principled and comprehensive evaluation of six PPSCs in terms of three key properties: 1) correctness of record linkage predictions, 2) computational complexity, and 3) security. We utilize a real publicly-available dataset, derived from the North Carolina voter registration database, to evaluate the tradeoffs between the aforementioned properties. Among our results, we find that PPSCs that partition, encode, and compare strings yield highly accurate record linkage results. However, as a tradeoff, we observe that such PPSCs are less secure than those that map and compare strings in a reduced dimensional space.
With the increasing burden of chronic diseases on the health care system, Markov-type models are becoming popular to predict the long-term outcomes of early intervention and to guide disease management. However, statisticians have not been actively involved in the development of these models. Typically, the models are developed by using secondary data analysis to find a single "best" study to estimate each transition in the model. However, due to the nature of secondary data analysis, there frequently are discrepancies between the theoretical model and the design of the studies being used. This paper illustrates a likelihood approach to correctly model the design of clinical studies under the conditions where 1) the theoretical model may include an instantaneous state of distinct interest to the researchers, and 2) the study design may be such that study data cannot be used to estimate a single parameter in the theoretical model of interest. For example, a study may ignore intermediary stages of disease. Using our approach, not only can we accommodate the two conditions above, but more than one study may be used to estimate model parameters. In the spirit of "If life gives you lemons, make lemonade", we call this method the "Lemonade Method". Simulation studies are carried out to evaluate the finite sample properties of this method. In addition, the method is demonstrated through application to a model of heart disease in diabetes.
The classification of time series is the topic of this paper. In particular, we discuss the combination of multiple classifier outputs with decision templates. The decision templates are calculated over a set of feature vectors which are extracted in local time windows. To learn characteristic classifier outputs of time series, a set of decision templates is determined for the individual classes. We present algorithms to calculate multiple decision templates, and demonstrate the behaviour of this new approach on a real-world data set from the field of bioacoustics.
It is known that the error correcting output code (ECOC) technique, when applied to multi-class learning problems, can improve generalisation performance. One reason for the improvement is its ability to decompose the original problem into complementary two-class problems. Binary classifiers trained on the sub-problems are diverse and can benefit from being combined using a simple distance-based strategy. However, there is some discussion about why ECOC performs as well as it does, particularly with respect to the significance of the coding/decoding strategy. In this paper we consider the binary (0,1) code matrix conditions necessary for reduction of error in the ECOC framework, and demonstrate the desirability of equidistant codes. It is shown that equidistant codes can be generated by using properties related to the number of 1’s in each row and between any pair of rows. Experimental results on synthetic data and a few popular benchmark problems show how performance deteriorates as code length is reduced for six decoding strategies.
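To make the equidistant-code idea concrete, here is a small sketch: a hypothetical (0,1) code matrix in which every pair of rows lies at the same Hamming distance, together with the simple distance-based decoding the abstract refers to. The matrix and function names are illustrative, not taken from the paper.

```python
import numpy as np

# Hypothetical (0,1) code matrix: rows = classes, columns = binary sub-problems.
# This code is equidistant: every pair of rows is at Hamming distance 4.
CODE = np.array([
    [1, 1, 1, 0, 0, 0],
    [1, 0, 0, 1, 1, 0],
    [0, 1, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1],
])

def pairwise_hamming(code):
    """Hamming distance between every pair of code rows."""
    return np.array([[np.sum(r1 != r2) for r2 in code] for r1 in code])

def ecoc_decode(bit_predictions, code=CODE):
    """Assign the class whose codeword is nearest (in Hamming distance)
    to the concatenated outputs of the binary classifiers."""
    distances = np.sum(code != np.asarray(bit_predictions), axis=1)
    return int(np.argmin(distances))
```

Because the rows are well separated, decoding tolerates individual classifier errors: flipping one bit of a codeword still decodes to the original class.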
Multi-sensor management concerns the control of environment perception activities by managing or coordinating the usage of multiple sensor resources. It is an emerging research area, which has become increasingly important in research and development of modern multi-sensor systems. This paper presents a comprehensive review of multi-sensor management in relation to multi-sensor information fusion, describing its place and role in the larger context, generalizing main problems from existing application needs, and highlighting problem solving methodologies.
In this article we propose a calibration algorithm and three low-level data fusion algorithms for a parallel 2D/3D-camera system. A parallel 2D/3D-camera is a hardware setup of a range camera and a high-resolution gray-value camera spatially related to each other by a fixed translation. The proposed calibration algorithm utilizes the fact that for known calibration patterns the range reconstruction accuracy of the gray-value camera is significantly higher than that of the range sensor. Using the calibrated 2D/3D-camera we identify the range pixels within the gray-value image for each pair of acquired camera images. We present three low-level data fusion approaches assigning range information to each gray-value pixel based on different neighborhood relations to the identified range pixels: one-nearest neighbor, nearest neighbors in the surrounding Delaunay-triangle, and nearest neighbors constrained by a gray-value image segmentation. We demonstrate the applicability, efficiency and accuracy of our calibration and fusion algorithms on real and synthetic data. Our real experiments are performed on a 2D/3D-camera comprising a Siemens 64 × 8-pixel time-of-flight range camera developed within the European project PReVENT (UseRCams) and a common gray-value camera.
We consider uncertain data whose uncertainty is represented by belief functions and that must be combined. The result of the combination of the belief functions can be partially conflictual. Initially Shafer proposed Dempster’s rule of combination, where the conflict is reallocated proportionally among the other masses. Then Zadeh presented an example where Dempster’s rule of combination produces unsatisfactory results. Several solutions were proposed: the TBM solution where masses are not renormalized and conflict is stored in the mass given to the empty set, Yager’s solution where the conflict is transferred to the universe and Dubois and Prade’s solution where the masses resulting from pairs of conflictual focal elements are transferred to the union of these subsets. Many other suggestions have then been made, creating a ‘jungle’ of combination rules. We discuss the nature of the combinations (conjunctive versus disjunctive, revision versus updating, static versus dynamic data fusion), argue about the need for a normalization, examine the possible origins of the conflicts, determine if a combination is justified and analyze many of the proposed solutions.
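Dempster's rule, as described above, can be sketched in a few lines: masses on intersecting focal elements are multiplied, and the mass that falls on empty intersections (the conflict) is renormalized away. Representing focal elements as frozensets is an implementation choice for illustration, not from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: conjunctive combination of two mass functions
    (dicts mapping frozenset focal elements to masses), with the
    conflicting mass reallocated proportionally via renormalization."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: combination undefined")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}
```

Run on Zadeh's example (one source almost certain of A, the other almost certain of C, both giving mass 0.01 to B), the rule assigns all mass to B: the counterintuitive outcome that motivated the alternative rules surveyed in the paper.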
The aim of this article is to develop a multisensor estimation method to identify the 3D structure and motion of an object. The method relies on a feature-based description of the object, and the solution uses an extended Kalman filter (EKF) which fuses information from each sensor. The filter tracks the features through the data sequences and estimates the 3D position and affine motion parameters. The originality of this work lies in a 3D modelling of this problem to jointly estimate the 3D structure and motion. This estimation is made possible by the use of an active sensor (range camera).
State-of-the-art airborne SAR sensors provide high-resolution ground mapping data. This offers the opportunity of using this technology for the analysis of built-up areas. However, especially at building locations, different SAR-specific phenomena like layover, shadow, and multipath propagation complicate the interpretation of the SAR imagery even for experts. In order to consider such effects in the analysis and the geocoding of SAR data, high-resolution information about the 3D structure of the urban scene is required. Three-dimensional elevation data from a GIS can be provided in different representations, like DEM (raster data) or city models (vector data). In this paper, the benefits of GIS data for SAR mission planning and for the analysis of acquired SAR data are discussed. For the first task, the SAR acquisition parameters are optimized a priori with respect to the best mapping of a local area or a certain object class. For this optimization, a large number of simulations with systematically varying aspect and viewing angles are carried out. For the second task, the 2D and 3D context information is fused with the acquired SAR imagery to support the interpretation for a change detection task. Additionally, simulated features of man-made objects are offered to the image interpreter for comparison with the data. The feasibility of different kinds of GIS data for these purposes is discussed.
We explore the relationship between diversity measures and ensemble performance, for binary classification with simple majority voting, within a problem domain characterized by asymmetric misclassification costs. Extending the work of Kuncheva and Whitaker [Machine Learning 51(2) (2003) 181], we compare a set of diversity measures within two different data representations. The first is a direct representation, which explicitly allows for consideration of asymmetric costs by indicating the specific values of the predictions––which in turn allows for a distinction between more costly misclassifications in this domain (i.e., actual 0 predicted as 1) and less costly ones (i.e., actual 1 predicted as 0). The second is an oracle representation, which indicates predictions as either correct or incorrect, and therefore does not allow for asymmetric costs. Within these representations we identified and manipulated certain situational factors, including the percentage of target group members in the population and the designed accuracy and sensitivity of each constituent model. Based on a neural network comparison of diversity measures and ensemble performance, we found that (1) diversity measure association with ensemble performance is contingent on the data representation, with Yule's Q-statistic and the coincident failure measure (CFD) as the best indicators in the direct representation and CFD alone as best indicator in the oracle representation, and (2) diversity measure association with ensemble performance varies as situational factors are manipulated; that is, diversity measures are differentially effective at different factor levels. Thus, the choice of a diversity measure in assessing ensemble classification performance requires an examination of both the nature of the task domain and the specific factors that comprise the domain.
This paper examines the problem of distributed intrusion detection in Mobile Ad-Hoc Networks (MANETs), utilizing ensemble methods. A three-level hierarchical system for data collection, processing and transmission is described. Local IDSs (intrusion detection systems) are attached to each node of the MANET, collecting raw data of network operation, and computing a local anomaly index measuring the mismatch between the current node operation and a baseline of normal operation. Anomaly indexes from nodes belonging to a cluster are periodically transmitted to a cluster head, which averages the node indexes, producing a cluster-level anomaly index. Cluster heads periodically transmit these cluster-level anomaly indexes to a manager which averages them. On the theoretical side, we show that averaging improves detection rates under very mild conditions concerning the distributions of the anomaly indexes of the normal class and the anomalous class. On the practical side, the paper describes clustering algorithms to update cluster centers and machine learning algorithms for computing the local anomaly indexes. The complete suite of algorithms was implemented and tested, under two types of MANET routing protocols and two types of attacks against the routing infrastructure. Performance evaluation was performed by determining the receiver operating characteristic (ROC) curves and the corresponding area under the ROC curve (AUC) metrics for various operational conditions. The overall results confirm the theoretical developments related to the benefits of averaging, with detection accuracy improving as we move up in the node–cluster–manager hierarchy.
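The claim that averaging node-level anomaly indexes improves detection can be illustrated with a toy Monte Carlo simulation. The Gaussian index model, the mean shift, the cluster size and all other parameters below are assumptions chosen for illustration, not the paper's setup.

```python
import random

def auc(normal_scores, anomalous_scores):
    """Probability that a random anomalous score exceeds a random normal
    score (equivalently, the area under the ROC curve)."""
    wins = sum((a > n) + 0.5 * (a == n)
               for a in anomalous_scores for n in normal_scores)
    return wins / (len(anomalous_scores) * len(normal_scores))

random.seed(0)

def node_index(anomalous):
    # Noisy per-node anomaly index: slight mean shift for the anomalous class.
    return random.gauss(1.0 if anomalous else 0.0, 2.0)

def cluster_index(anomalous, n_nodes=10):
    # The cluster head averages the indexes reported by its nodes.
    return sum(node_index(anomalous) for _ in range(n_nodes)) / n_nodes

normals = [node_index(False) for _ in range(500)]
anoms = [node_index(True) for _ in range(500)]
c_normals = [cluster_index(False) for _ in range(500)]
c_anoms = [cluster_index(True) for _ in range(500)]
```

Averaging shrinks the noise of the index while preserving the class separation, so the cluster-level AUC exceeds the node-level AUC, mirroring the improvement up the node-cluster-manager hierarchy.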
Data processing applications for sensor streams have to deal with multiple continuous data streams with inputs arriving at highly variable and unpredictable rates from various sources. These applications perform various operations (e.g. filter, aggregate, join) on incoming data streams in real-time according to predefined queries or rules. Since the data rate and data distribution fluctuate over time, an appropriate join tree for processing join queries must be adaptively maintained in response to dynamic changes to prevent rapid degradation of the system performance. In this paper, we address the problem of finding an optimal join tree that maximizes throughput for sliding-window based multi-join queries over continuous data streams and prove its NP-hardness. We present a dynamic programming algorithm, OptDP, which produces the optimal tree but runs in time exponential in the number of input streams. We then present a polynomial time greedy algorithm, XGreedyJoin. We tested these algorithms in ARES, an adaptively re-optimizing engine for stream queries, which we developed by extending Jess (a popular RETE-based, forward chaining rule engine written in Java). For almost all instances, trees from XGreedyJoin perform close to the optimal trees from OptDP, and significantly better than common heuristics-based XJoin algorithms.
Applications and services are increasingly dependent on networks of smart sensors embedded in the environment to constantly sense and react to events. In a typical sensor network application, information is collected from a large number of distributed and heterogeneous sensor nodes. Information fusion in such applications is a challenging research issue due to the dynamicity, heterogeneity, and resource limitations of sensor networks. We present MidFusion, an adaptive middleware architecture to facilitate information fusion in sensor network applications. MidFusion discovers and selects the best set of sensors or sensor agents on behalf of applications (transparently), depending on the quality of service (QoS) guarantees and the cost of information acquisition. We also provide the theoretical foundation for MidFusion to select the best set of sensors using the principles of Bayesian and Decision theories. A sensor selection algorithm (SSA) for selecting the best set of sensors is presented in this paper. Our theoretical findings are validated through simulation of the SSA algorithm on an example scenario.
A major challenge in real-world object detection for Video Surveillance (VS) is the dynamic nature of environmental conditions with respect to illumination, visibility, weather change, etc. With the increasing availability of cameras and other sensor modalities beyond the visible spectrum at lower cost, multi-modal VS systems involving visible and thermal infrared cameras are seen as a promising solution for reliable and robust operation in unfavorable environmental conditions, such as at night or in dark situations. However, there are several research challenges to actually utilize the combined benefits of using different modalities. This paper addresses the uncertainty problem in the fusion of the information provided by complementary modalities like visible spectrum and thermal infrared video in a generic framework using evidence theory. A belief model is developed to determine the validity of a foreground region detected by each source for tracking. Fuzzy logic modeling is used to generate the belief mass function from the sensor information using two measurement features. A novel algorithm is developed for dynamic assessment of individual sensor reliability within the belief model. A generic approach to re-assigning the conflicting mass is adopted so that belief fusion is done in a weighted manner depending on the context. Finally, the confirmed objects are tracked and the sensor measurements of their position, size, etc., are fused using a weighted Kalman filter fusion method. The approach is evaluated using a pair of visible and thermal infrared sensors in real-world challenging scenarios.
In the present article, we develop a linguistic recognition system for the identification of some possible genes mediating the development of human lung adenocarcinoma. The methodology involves dimensionality reduction, classifying the genes through incorporation of the notion of the linguistic fuzzy sets low, medium and high, and finally selection of some possible genes obtained by a rule generation/grouping technique. The system has been successfully applied on two microarray gene expression data sets. The results are appropriately validated by some earlier investigations, gene expression profiles and the t-test. The proposed methodology has been able to find more true positives than an existing one in identifying responsible genes. Moreover, we have found some new genes that may have a role in mediating the development of lung adenocarcinoma.
Large-scale situation management applications such as disaster recovery and network-centric battle management are characterized by distributed heterogeneous agent platforms with dynamic agent populations, highly variable network connectivity and bandwidth, and localized situation knowledge and event collection. We describe a new agent model and an integrated peer-to-peer architecture that address these requirements. We present an extension of the BDI agent model which allows it to be used in highly reactive applications. We describe the use of multi-hop peer-to-peer overlays which provide highly scalable coupling of distributed agent platforms. Finally, we describe a two-phase semantic discovery mechanism which serves as a basis for agents to share events and situations across the overlay.
Many exciting, emerging applications require that a group of agents share a coherent view of the world given spatial distribution, incomplete and uncertain sensors, and communication constraints. This article describes an analysis and design methodology for distributed algorithms that coordinate the exchange of information for extremely large groups of agents maintaining a coherent belief in some environment property. The design methodology uses the tools of statistical mechanics to create a probability distribution which relates groupings of agents, called Sharing Groups, to pair-wise agent divergences and social temperature. Social temperature is a decision parameter, the same for all agents, that agents use probabilistically to decide when to join a Sharing Group. We show empirically, as well as via Monte Carlo simulations, that for a critical value of social temperature the Sharing Groups formed result in bandwidth efficiency and divergence from ground truth that are simultaneously optimal, independent of the method of information exchange.
Coordination of actions and plans that must be achieved by multiple agents is one of the most difficult tasks in the multi-agent domain. In order to work together and achieve a common goal, agents need to coordinate their plans in a way that guarantees, if possible, the success of each individual agent plan. In this work, we propose a temporal fusion mechanism that allows a set of agents to fuse their plans and generate a global coordinated plan. First, we define a temporal plan as a set of temporally constrained actions. The fusion of several temporal plans is a temporal plan, which can be executed by several agents. The proposed framework is applied to a Combat Search and Rescue application.
Agent-based software systems and applications are constructed by integrating diverse sets of components that are intelligent, heterogeneous, distributed, and concurrent. This paper describes a multi-agent system to assure operational efficiency and reliability in the data fusion and management of a set of networked distributive sensors (NDS). We discuss the general concept and architecture of a Hierarchical Collective Agent Network (HCAN) and its functional components for learning and adaptive control of the NDS. The sophistication of an HCAN control environment and an anatomy of the agent modules for enabling intelligent data fusion and management are presented. An exemplar HCAN is configured to support dynamic data fusion and automated sensor management in a simulated distributive and collaborative military sensor network for a Global Missile Defense (GMD) application.
Information fusion can assist in the development of sensor network applications by merging capabilities, raw data and decisions from multiple sensors through distributed and collaborative integration algorithms. In this paper, we introduce a multi-layered, middleware-driven, multi-agent, interoperable architecture for distributed sensor networks that bridges the gap between the programmable application layer consisting of software agents and the physical layer consisting of sensor nodes. We adopt an energy-efficient, fault-tolerant approach for collaborative information processing among multiple sensor nodes using a mobile-agent-based computing model. In this model the sink/base-station deploys mobile agents that migrate from node to node following a certain itinerary, either pre-determined or determined on-the-fly, and fuse the information/data locally at each node. This way, the intelligence is distributed throughout the network edge and communication cost is reduced to make the sensor network energy-efficient. We evaluate the performance of our mobile-agent-based approach as well as that of the traditional client/server-based computing model, vis-à-vis energy consumption and execution time, through both analytical study and simulation. We draw important conclusions based on our findings. Finally, we consider a collaborative target classification application, supported by our architectural framework, to illustrate the efficacy of the mobile-agent-based computing model.
In real world applications robots and software agents often have to be equipped with higher level cognitive functions that enable them to reason, act and perceive in changing, incompletely known and unpredictable environments. One of the major tasks in such circumstances is to fuse information from various data sources. There are many levels of information fusion, ranging from the fusing of low level sensor signals to the fusing of high level, complex knowledge structures. In a dynamically changing environment even a single agent may have varying abilities to perceive its environment which are dependent on particular conditions. The situation becomes even more complex when different agents have different perceptual capabilities and need to communicate with each other. In this paper, we propose a framework that provides agents with the ability to fuse both low and high level approximate knowledge in the context of dynamically changing environments while taking account of heterogeneous and contextually limited perceptual capabilities. To model limitations on an agent’s perceptual capabilities we introduce the idea of partial tolerance spaces. We assume that each agent has one or more approximate databases where approximate relations are represented using lower and upper approximations on sets. Approximate relations are generalizations of rough sets. It is shown how sensory and other limitations can be taken into account when constructing and querying approximate databases for each respective agent. Complex relations inherit the approximativeness of primitive relations used in their definitions. Agents then query these databases and receive answers through the filters of their perceptual limitations as represented by (partial) tolerance spaces and approximate queries. The techniques used are all tractable.
The main use of intrusion detection systems (IDS) is to detect attacks against information systems and networks. Normal use of the network and its functioning can also be monitored with an IDS. It can be used to control, for example, the use of management and signaling protocols, or the network traffic related to some less critical aspects of system policies. These complementary usages can generate large numbers of alerts, but still, in an operational environment, the collection of such data may be mandated by the security policy. Processing this type of alert presents a different problem than correlating alerts directly related to attacks or filtering incorrectly issued alerts. We aggregate individual alerts to alert flows, and then process the flows instead of individual alerts for two reasons. First, this is necessary to cope with the large quantity of alerts – a common problem among all alert correlation approaches. Second, an individual alert’s relevancy is often indeterminable, but irrelevant alerts and interesting phenomena can be identified at the flow level. This is the particularity of the alerts created by the complementary uses of IDSes. Flows consisting of alerts related to normal system behavior can contain strong regularities. We propose to model these regularities using non-stationary autoregressive models. Once modeled, the regularities can be filtered out to relieve the security operator from manual analysis of true, but low impact alerts. We present experimental results using these models to process voluminous alert flows from an operational network.
The adoption of standards for exchanging information across the Web presents both new opportunities and important challenges for data integration and aggregation. Although Web Services simplify the discovery and access of information sources, the problem of semantic heterogeneity remains: how to find semantic correspondences across the data being integrated. In this paper, we explore these issues in the context of Web Services, and propose OATS, a novel algorithm for schema matching that is specifically suited to Web Service data aggregation. We show how probing Web Services with a small set of related queries results in semantically correlated data instances, which greatly simplifies the matching process, and demonstrate that the use of an ensemble of string distance metrics in matching data instances performs better than individual metrics. We also show how the choice of probe queries has a dramatic effect on matching accuracy. Motivated by this observation, we describe and evaluate a machine learning approach to selecting probes to maximise accuracy while minimising cost.
In this paper we focus on the aggregation of IDS alerts, an important component of the alert fusion process. We exploit fuzzy measures and fuzzy sets to design simple and robust alert aggregation algorithms. Exploiting fuzzy sets, we are able to robustly state whether or not two alerts are “close in time”, dealing with noisy and delayed detections. A performance metric for the evaluation of fusion systems is also proposed. Finally, we evaluate the fusion method with alert streams from anomaly-based IDS.
Sensors-to-sink data in wireless sensor networks (WSNs) are typically characterized by correlation along the spatial, semantic, and/or temporal dimensions. Exploiting such correlation when performing data aggregation can result in considerable improvements in the bandwidth and energy performance of WSNs. In this paper, we first identify that most of the existing upstream routing approaches in WSNs can be translated to a correlation-unaware data aggregation structure – the shortest-path tree. Although by using a shortest-path tree, some implicit benefits due to correlation are possible, we show that explicitly constructing a correlation-aware structure can result in considerable performance improvement. Toward this end, we present a simple, scalable and distributed correlation-aware aggregation structure that addresses the practical challenges in the context of aggregation in WSNs. Through simulations and analysis, we evaluate the performance of the proposed approach with centralized and distributed correlation-aware and -unaware structures.
The astonishingly large variety of multi-sensor scene signatures that need to be considered for building a robust machine-based scene analysis system requires that one explore the possibility of deriving multi-sensor representations that are at the least invariant to scale, translation, and rotation with respect to the observer. The work presented in this paper is a probability density function (PDF)-based technique that uses the theory of invariant algebra to derive algebraic expressions that remain constant under an object's joint geometrical and surface material changes. This generalization of the geometrical invariants to cover both geometrical as well as physical changes in the images of a scene is a significant contribution to the state-of-the-art. Two cases of similar and dissimilar sensor types are considered. The fused invariants for the case of similar sensor types are unchanged under material as well as under affine transformations. For the case of dissimilar sensors, the approach leads to the derivation of multi-sensor algebraic expressions that remain unchanged under scale, translation and two-dimensional rotation with respect to the observer. An analysis of the computational complexities of the presented techniques and their comparison with a typical non-invariant approach illustrates the noticeable advantages of the invariant method.
Common estimation algorithms, such as least squares estimation or the Kalman
filter, operate on a state in a state space S that is represented as a
real-valued vector. However, for many quantities, most notably orientations in
3D, S is not a vector space, but a so-called manifold, i.e. it behaves like a
vector space locally but has a more complex global topological structure. For
integrating these quantities, several ad-hoc approaches have been proposed.
Here, we present a principled solution to this problem where the structure of
the manifold S is encapsulated by two operators, state displacement [+]:S x R^n
--> S and its inverse [-]: S x S --> R^n. These operators provide a local
vector-space view δ --> x [+] δ around a given state x. Generic
estimation algorithms can then work on the manifold S mainly by replacing +/-
with [+]/[-] where appropriate. We analyze these operators axiomatically, and
demonstrate their use in least-squares estimation and the Unscented Kalman
Filter. Moreover, we exploit the idea of encapsulation from a software
engineering perspective in the Manifold Toolkit, where the [+]/[-] operators
mediate between a "flat-vector" view for the generic algorithm and a
"named-members" view for the problem specific functions.
The paper presents two algorithms for decentralized Bayesian information fusion and information-theoretic decision making. The algorithms are stated in terms of operations on a general probability density function representing a single feature of the environment. Several specific density representations are then considered—Gaussian, discrete, Certainty Grid, and hybrid. Well-known algorithms for these representations are shown to fit the general pattern. Stating the algorithms in Bayesian terms has the practical advantage of allowing a generic software implementation. The algorithms are described in the context of the active sensor network architecture—a modular framework for decentralized cooperative information fusion and decision making. An example of decentralized target tracking is provided. The algorithms and the framework implementation are illustrated with the results of two indoor deployment scenarios.
A new quantitative metric is proposed to objectively evaluate the quality of fused imagery. The measured value of the proposed metric is used as feedback to a fusion algorithm such that the image quality of the fused image can potentially be improved. This new metric, called the ratio of spatial frequency error (rSFe), is derived from the definition of a previous measure termed “spatial frequency” (SF) that reflects local intensity variation. In this work, (1) the concept of SF is first extended by adding two diagonal SFs, then, (2) a reference SF (SFR) is computed from the input images, and finally, (3) the error SF (SFE) (subtracting the fusion SF from the reference SF), or the ratio of SF error (rSFe = SFE/SFR), is used as a fusion quality metric. The rSFe (which can be positive or negative) indicates the direction of fusion error—over-fused (if rSFe > 0) or under-fused (if rSFe < 0). Thus, the rSFe value can be back propagated to the fusion algorithm (BP fusion), thereby directing further parameter adjustments in order to achieve a better-fused image. The accuracy of the rSFe is verified with other quantitative measurements such as the root mean square error (RMSE) and the image quality index (IQI), as well as with a qualitative perceptual evaluation based on a standard psychophysical paradigm. An advanced wavelet transform (aDWT) method that incorporates principal component analysis (PCA) and morphological processing into a regular DWT fusion algorithm is implemented with two adjustable parameters—the number of levels of DWT decompositions and the length of the selected wavelet. Results with aDWT were compared to those with a regular DWT and with a Laplacian pyramid. After analyzing several inhomogeneous image groups, experimental results showed that the proposed metric, rSFe, is consistent with RMSE and IQI, and is especially powerful and efficient for realizing the iterative BP fusion in order to achieve a better image quality. 
Human perceptual assessment was measured and found to strongly support the assertion that the aDWT offers a significant improvement over the DWT and pyramid methods.
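The SF and rSFe quantities described above can be sketched compactly. In this hedged sketch (our own compact reading of the abstract, not the paper's exact code), the reference SF is approximated by taking, per pixel and direction, the input gradient of largest magnitude, and the sign convention rSFe > 0 ~ over-fused is assumed:

```python
import numpy as np

def directional_grads(img):
    """First differences along row, column, and the two diagonals."""
    img = np.asarray(img, dtype=float)
    rf = img[:, 1:] - img[:, :-1]                     # row (horizontal)
    cf = img[1:, :] - img[:-1, :]                     # column (vertical)
    mdf = (img[1:, 1:] - img[:-1, :-1]) / np.sqrt(2)  # main diagonal
    sdf = (img[1:, :-1] - img[:-1, 1:]) / np.sqrt(2)  # secondary diagonal
    return rf, cf, mdf, sdf

def spatial_frequency(img):
    """Overall SF: root of the summed mean-square directional gradients."""
    return np.sqrt(sum(np.mean(g ** 2) for g in directional_grads(img)))

def rsfe(fused, inputs):
    """Ratio of SF error; negative values indicate under-fusion."""
    ref_grads = []
    for d in range(4):
        # Per pixel, keep the input gradient of largest magnitude.
        stack = np.stack([directional_grads(im)[d] for im in inputs])
        idx = np.abs(stack).argmax(axis=0)
        ref_grads.append(np.take_along_axis(stack, idx[None], axis=0)[0])
    sf_ref = np.sqrt(sum(np.mean(g ** 2) for g in ref_grads))
    return (spatial_frequency(fused) - sf_ref) / sf_ref
```

A fused image whose gradients exactly match the strongest input gradients gives rSFe near zero, and the sign of a nonzero value tells the BP-fusion loop in which direction to adjust its parameters.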
In this paper, we investigate several fusion techniques for designing a composite classifier to improve the performance (probability of correct classification) of forward-looking infrared (FLIR) automatic target recognition (ATR). The motivation behind the fusion of ATR algorithms is that if each contributing technique in a fusion algorithm (composite classifier) emphasizes learning at least some features of the targets that are not learned by the other contributing techniques, the fusion of ATR algorithms may improve the overall probability of correct classification of the composite classifier. In this research, we propose to use four ATR algorithms for fusion. The individual performance of the four contributing algorithms ranges from 73.5% to about 77% probability of correct classification on the testing set. The set of targets correctly classified by each contributing algorithm usually has a substantial overlap with the sets correctly identified by the other algorithms (over 50% for the four algorithms used in this research). There is also a significant part of the set of correctly identified targets that is not shared by all contributing algorithms. The size of this subset of correctly identified targets generally determines the extent of the potential improvement that may result from the fusion of the ATR algorithms. We propose to use a Bayes classifier, a committee of experts, stacked generalization, winner-takes-all, and ranking-based fusion techniques for designing the composite classifiers. The experimental results show an improvement of more than 6.5% over the best individual performance.
In this paper, we propose a classification system based on a multiple-classifier architecture, which is aimed at updating land-cover maps by using multisensor and/or multisource remote-sensing images. The proposed system is composed of an ensemble of classifiers that, once trained in a supervised way on a specific image of a given area, can be retrained in an unsupervised way to classify a new image of the considered site. In this context, two techniques are presented for the unsupervised updating of the parameters of a maximum-likelihood classifier and a radial basis function neural-network classifier, on the basis of the distribution of the new image to be classified. Experimental results carried out on a multitemporal and multisource remote-sensing data set confirm the effectiveness of the proposed system.
As the number of elderly people affected by Alzheimer's disease (AD) rises rapidly, the need to find an accurate, inexpensive and non-intrusive diagnostic procedure that can be made available to community healthcare providers is becoming an increasingly urgent public health concern. Several recent studies have analyzed electroencephalogram (EEG) signals through the use of wavelets and neural networks. While showing great promise, the final outcomes of these studies have been largely inconclusive. This is mostly due to the inherent difficulty of the problem, but also – perhaps – due to inefficient use of the available information, as many of these studies have used a single EEG channel for the analysis. In this contribution, we describe an ensemble-of-classifiers-based data fusion approach to combine information from two or more sources, believed to contain complementary information, for early diagnosis of Alzheimer's disease. Our emphasis is on sequentially generating an ensemble of classifiers that explicitly seek the most discriminating information from each data source. Specifically, we use the event-related potentials recorded from the Pz, Cz, and Fz electrodes of the EEG, decomposed into different frequency bands using multiresolution wavelet analysis. The proposed data fusion approach includes generating multiple classifiers trained with strategically selected subsets of the training data from each source, which are then combined through a modified weighted majority voting procedure. The implementation details and the promising outcomes of this approach are presented.
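The weighted-majority combination step can be sketched as follows. This is an illustrative sketch in the spirit of (but not identical to) the modified procedure described above; each classifier's weight is assumed to have been measured beforehand on training data:

```python
import numpy as np

def weighted_majority_vote(predictions, weights, n_classes):
    """Fuse an ensemble: predictions is a (n_classifiers, n_samples)
    array of integer labels; weights holds one weight per classifier."""
    n_samples = predictions.shape[1]
    votes = np.zeros((n_samples, n_classes))
    for pred, w in zip(predictions, weights):
        votes[np.arange(n_samples), pred] += w  # add this expert's weight
    return votes.argmax(axis=1)                 # class with most weighted votes

# Three classifiers (e.g. trained on Pz, Cz, Fz channels) voting on 2 samples:
preds = np.array([[0, 1], [0, 0], [1, 1]])
print(weighted_majority_vote(preds, [0.5, 0.2, 0.4], 2))  # -> [0 1]
```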
Automatic pronunciation of words from their spelling alone is a hard computational problem, especially for languages like English and French where there is only a partially consistent mapping from letters to sound. Currently, the best known approach uses an inferential process of analogy with other words listed in a dictionary of spellings and corresponding pronunciations. However, the process produces multiple candidate pronunciations and little or no theory exists to guide the choice among them. Rather than committing to one specific heuristic scoring method, it may be preferable to use multiple strategies (i.e., soft experts) and then employ information fusion techniques to combine them to give a final result. In this paper, we compare four different fusion schemes, using three different dictionaries (with different codings for specifying the pronunciations) as the knowledge base for analogical reasoning. The four schemes are: fusion of raw scores; rank fusion using Borda counting; rank fusion using non-uniform values; and rank fusion using non-uniform values weighted by a measure of prior performance of the experts. All possible combinations of five different expert strategies are studied. Although all four fusion schemes outperformed the single best strategy, results show clear superiority of rank fusion over the other methods.
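A minimal sketch of rank fusion by Borda counting, one of the four schemes compared above (our own compact version: each expert ranks the candidates best-first, a candidate earns n - rank points per expert, and candidates are re-ranked by total score):

```python
def borda_fuse(rankings):
    """rankings: list of candidate lists, each ordered best-first.
    Returns all candidates re-ranked by total Borda score."""
    candidates = set().union(*rankings)
    n = len(candidates)
    scores = {c: 0 for c in candidates}
    for ranking in rankings:
        for rank, cand in enumerate(ranking):
            scores[cand] += n - rank  # best-ranked candidate earns n points
    return sorted(scores, key=scores.get, reverse=True)

# Three experts ranking three candidate pronunciations:
print(borda_fuse([["a", "b", "c"], ["b", "a", "c"], ["b", "c", "a"]]))
# -> ['b', 'a', 'c']  (b: 8 points, a: 6, c: 4)
```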
Clustering categorical data is an integral part of data mining and has attracted much attention recently. In this paper, we present k-ANMI, a new efficient algorithm for clustering categorical data. The k-ANMI algorithm works in a way that is similar to the popular k-means algorithm, and the goodness of clustering in each step is evaluated using a mutual-information-based criterion (namely, average normalized mutual information – ANMI) borrowed from the cluster ensemble literature. The algorithm is easy to implement, requiring multiple hash tables as the only major data structure. Experimental results on real datasets show that the k-ANMI algorithm is competitive with state-of-the-art categorical data clustering algorithms with respect to clustering accuracy.
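The ANMI criterion can be sketched as the average normalized mutual information between a candidate clustering and each categorical attribute's partition. This is our own compact implementation, assuming natural-log NMI with the geometric-mean normalization (other normalizations exist):

```python
import math
from collections import Counter

def nmi(a, b):
    """Normalized mutual information between two labellings of equal length."""
    n = len(a)
    ca, cb, cab = Counter(a), Counter(b), Counter(zip(a, b))
    mi = sum(c / n * math.log(c * n / (ca[x] * cb[y]))
             for (x, y), c in cab.items())
    ha = -sum(c / n * math.log(c / n) for c in ca.values())  # entropy of a
    hb = -sum(c / n * math.log(c / n) for c in cb.values())  # entropy of b
    return mi / math.sqrt(ha * hb) if ha and hb else 0.0

def anmi(labels, partitions):
    """Average NMI of a candidate clustering against each attribute partition."""
    return sum(nmi(labels, p) for p in partitions) / len(partitions)
```

A k-ANMI-style search would then move objects between clusters, keeping moves that increase this average.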
Dendritic cells are antigen-presenting cells that provide a vital link between the innate and adaptive immune systems, performing the initial detection of pathogenic invaders. Research into this family of cells has revealed that they
perform information fusion which directs immune responses. We have derived a Dendritic Cell Algorithm based on
the functionality of these cells, by modelling the biological signals and differentiation pathways to build a control mechanism for an artificial immune system. We present algorithmic details together with experimental results from applying the algorithm to anomaly detection, specifically the detection of port scans. The results show that the Dendritic Cell Algorithm is successful at detecting port scans.
We present the sensor-fusion results obtained from measurements within the European research project ground explosive ordinance detection (GEODE) system that strives for the realisation of a vehicle-mounted, multi-sensor, anti-personnel landmine-detection system for humanitarian de-mining. The system has three sensor types: a metal detector (MD), an infrared camera (IR), and a ground penetrating radar (GPR). The output of the sensors is processed to produce confidence levels on a grid covering the test-bed. A confidence level expresses a confidence or belief in a landmine detection on a certain position. The grid with confidence levels is the input for the decision-level sensor-fusion and provides a co-registration of the sensors. The applied fusion methods are naive Bayes' approaches, Dempster–Shafer theory, fuzzy probabilities, a rule-based method, and voting techniques. To compare fusion methods and to analyse the capacity of a method to separate landmines from the background on the basis of the output of different sensors, we provide an analysis of the different methods by viewing them as discriminant functions in the sensor confidence space. The results of experiments on real sensor data are evaluated with the leave-one-out method.
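Of the fusion methods listed above, Dempster–Shafer combination admits a particularly compact sketch. This minimal version (our own illustration; the two-hypothesis frame {"mine", "bg"} and the mass values are assumptions, not data from the GEODE system) combines two sensors' mass functions over sets of hypotheses:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions given as {frozenset: mass} dicts
    using Dempster's rule of combination."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:                     # consistent evidence
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:                         # conflicting evidence
                conflict += ma * mb
    # Renormalize by the non-conflicting mass.
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

md = {frozenset({"mine"}): 0.6, frozenset({"mine", "bg"}): 0.4}   # metal detector
gpr = {frozenset({"mine"}): 0.5, frozenset({"mine", "bg"}): 0.5}  # radar
fused = dempster_combine(md, gpr)
# Agreeing sensors concentrate mass on {"mine"}: roughly 0.8 after fusion.
```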
Target tracking using delayed, out-of-sequence measurements is a problem of growing importance due to an increased reliance on networked sensors interconnected via complex communication network architectures. In such systems, it is often the case that measurements are received out of time order at the fusion center. This paper presents a Bayesian solution to this problem and provides approximate, implementable algorithms for both cluttered and non-cluttered scenarios involving single and multiple time-delayed measurements. Such an approach leads to a solution involving the joint probability density of current and past target states. In contrast, existing solutions in the literature modify the sensor measurement equation to account for the time delay and explicitly deal with the resulting correlations between the process noise and the current target state. In the Bayesian solution proposed in this paper, such cross correlations are treated implicitly. Under linear Gaussian assumptions, the Bayesian solution reduces to an augmented-state Kalman filter (AS-KF) for scenarios devoid of clutter and an augmented-state probabilistic data association filter (AS-PDA) for scenarios involving clutter. Computationally efficient versions of AS-KF and AS-PDA are considered in this paper. Simulations are presented to evaluate the performance of these solutions.
Every commander's dream is to have a graphic picture of the unfolding battlespace showing the locations and movements of all entities along with supplementary information. The DoD command concepts have evolved to yield the common operational picture (COP) and a four-stage hierarchy of information fusion. We explore an architecture for refining the fusion to build a more accurate picture. It uses a central processing center to fuse tracks from multiple tracking centers with a cognitive approach that associates local tracks with central tracks and refines estimates via our new fuzzy clustering algorithm. Target identification at the central tracker is refined on the basis of the local track IDs, which resolves conflicting identities in local tracks of the same target. Situation assessment (SA) and force threat assessment (TA) are approached using our fuzzy classifier with built-in fuzzy clustering, but these are not fully developed here due to their complexity. We also propose a dual distributed-centralized tracker that establishes central tracks with both fuzzy clustering and an adaptive α–β filter and fuses the resulting tracks.
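The α–β filter mentioned above is, in its basic non-adaptive form, a fixed-gain position/velocity tracker. A minimal sketch (the gain values are illustrative assumptions, not taken from the paper):

```python
def alpha_beta_step(x, v, z, dt, alpha=0.85, beta=0.005):
    """One predict/correct cycle of a basic alpha-beta tracking filter."""
    x_pred = x + v * dt           # constant-velocity prediction
    r = z - x_pred                # innovation (measurement residual)
    return x_pred + alpha * r, v + (beta / dt) * r

# A target moving at exactly the assumed velocity yields zero innovation,
# so the estimate tracks it exactly.
x, v = 0.0, 1.0
for z in (1.0, 2.0, 3.0):
    x, v = alpha_beta_step(x, v, z, dt=1.0)
print(x, v)  # -> 3.0 1.0
```

The adaptive variant in the paper would additionally adjust the gains online; central tracks then fuse the filtered local tracks.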
In this paper, a framework for constructing flexible, robust and efficient software applications for multisensor fusion systems (MFSs) is described. A three-tier architecture is exploited so that the whole software system can be divided into three parts: the man/machine interface, the logic part, and the database. The design of the logic part according to the requirements of an MFS is emphasized in this paper, using the component object model (COM). The result is a COM-based software architecture consisting of four levels, from bottom to top: the sensor driver level (SDL), logical sensor level (LSL), fusion unit level (FUL) and task unit level (TUL). Each level is composed of components with different functions. An intelligent robot system based on this software framework has been designed and developed in our lab, and its advantages are demonstrated.
This paper presents a new architecture that integrates a library of feature extraction, data-mining, and fusion techniques to automatically and optimally configure a classification solution for a given labeled set of training patterns. The most expensive and scarce resource in any detection problem (feature selection/classification) tends to be the acquisition of labeled training patterns from which to design the system. The objective of this paper is to present a new data-mining architecture that includes conventional data-mining algorithms, feature selection methods and algorithmic fusion techniques to best exploit the set of labeled training patterns and thus improve the design of the overall classification system. The paper describes how feature selection and data-mining algorithms are combined through a genetic algorithm, using single-source data, and how multi-source data are combined through several best-suited fusion techniques by employing a genetic algorithm for optimal fusion. A simplified version of the overall system is tested on the detection of volcanoes in the Magellan SAR database of Venus.
In this paper, we introduce and overview advances in the field of Web information fusion and integration. As it is such a broad and diverse topic that is researched in many different fields, we choose to provide a unified view by focusing on selected survey articles that extensively cover earlier research contributions. Given the important role that ontologies are playing in Web information fusion and the emergence and fast development of the Semantic Web and Web 3.0 technologies, a separate section is devoted to the topic of ontology research and the Semantic Web. Then, in the section on Web-based support systems, several applications that are enabled as the result of advances in Web information fusion are discussed.
Medical image fusion is the process of registering and combining multiple
images from single or multiple imaging modalities to improve the imaging
quality and reduce randomness and redundancy in order to increase the clinical
applicability of medical images for diagnosis and assessment of medical
problems. Multi-modal medical image fusion algorithms and devices have shown
notable achievements in improving clinical accuracy of decisions based on
medical images. This review article provides a factual listing of methods and
summarizes the broad scientific challenges faced in the field of medical image
fusion. We characterize the medical image fusion research based on (1) the
widely used image fusion methods, (2) imaging modalities, and (3) imaging of
organs that are under study. This review concludes that even though
several open-ended technological and scientific challenges remain, the fusion
of medical images has proved to be useful for advancing the clinical
reliability of using medical imaging for medical diagnostics and analysis, and
is a scientific discipline that has the potential to significantly grow in the coming years.
This paper describes the development of neural network models for automatic incident detection on arterial roads, using simulated data derived from inductive loop detectors and probe vehicles. The work reported in this paper extends previous research by comparing the performance of various data fusion neural network architectures and assessing model performance for various probe vehicle penetration rates and loop detector configurations. Data from 108 incidents was collected from loop detectors and probe vehicles using a calibrated and validated traffic simulation model. The best performance was obtained for detector configurations found on most existing road networks, with a detection rate of 86%, false alarm rate of 0.36% and probe vehicle penetration rate of 20%. Fusion of speed data further improved performance, resulting in an incident detection rate of 90% and a false alarm rate of 0.5%. The results reported in this paper demonstrate the feasibility of developing advanced data fusion neural network architectures for detection of incidents on urban arterials using data from existing loop detector configurations and probe vehicles.
Recently, the human immune system has aroused researchers' interest due to its useful mechanisms, which can be used and exploited for information processing in a complex cognition system. The scope of this research is not to reproduce any immune phenomenon accurately, but rather to show that immune concepts can be applied to develop powerful computational tools for data processing. From this viewpoint, an improved artificial immune algorithm is presented and applied to the problems associated with image registration and the configuration of multiple-sensor systems. Simulation results show that the immune algorithm can successfully obtain the global optimum with less computational cost than other traditional algorithms. Therefore, this method has potential application in other optimization problems.
The diversity of an ensemble of classifiers is known to be an important factor in determining its generalization error. We present a new method for generating ensembles, Decorate (Diverse Ensemble Creation by Oppositional Relabeling of Artificial Training Examples), that directly constructs diverse hypotheses using additional artificially constructed training examples. The technique is a simple, general meta-learner that can use any strong learner as a base classifier to build diverse committees. Experimental results using decision-tree induction as a base learner demonstrate that this approach consistently achieves higher predictive accuracy than the base classifier, Bagging and Random Forests. Decorate also obtains higher accuracy than Boosting on small training sets, and achieves comparable performance on larger training sets.
Parallel distributed detection schemes for M-ary hypothesis testing often assume that for each observation the local detector transmits at least log2M bits to a data fusion center (DFC). However, it is possible for less than log2M bits to be available, and in this study we consider 1-bit local detectors with M>2. We develop conditions for asymptotic detection of the correct hypothesis by the DFC, formulate the optimal decision rules for the DFC, and derive expressions for the performance of the system. Local detector design is demonstrated in examples, using genetic algorithm search for local decision thresholds. We also provide an intuitive geometric interpretation for the partitioning of the observations into decision regions. The interpretation is presented in terms of the joint probability of the local decisions and the hypotheses.
The aim of this article is to develop a GPS/IMU multisensor fusion algorithm that takes context into consideration. Contextual variables are introduced to define fuzzy validity domains for each sensor. The algorithm increases the reliability of the position information. A simulation of this algorithm is then made by fusing GPS and IMU data coming from real tests on a land vehicle. Bad data delivered by the GPS sensor are detected and rejected using contextual information, thus increasing reliability. Moreover, because the GPS signal lacks credibility in some cases and because of the drift of the INS, the GPS/INS association is not satisfactory at the moment. To avoid this problem, the authors propose to feed the fusion process, based on a multisensor Kalman filter, directly with the acceleration provided by the IMU. In addition, the filter developed here makes it possible to easily add other sensors in order to achieve the required performance.
The combining of visible light and infrared visual representations occurs naturally in some creatures, including the rattlesnake. This process, and the widespread use of multi-spectral multi-sensor systems, has influenced research into image fusion methods. Recent advances in image fusion techniques have necessitated novel ways of assessing fused images, which have previously focused on the use of subjective quality ratings combined with computational metric assessment. Previous work has shown the need to apply a task to the assessment process; the current work continues this approach by extending the novel use of scanpath analysis. In our experiments, participants were shown two video sequences, one in high luminance (HL) and one in low luminance (LL), both featuring a group of people walking around a clearing of trees. Each participant was shown the visible and infrared (IR) inputs alone; side-by-side (SBS); and in average (AVE) fused, discrete wavelet transform (DWT) fused, and dual-tree complex wavelet transform (DT-CWT) fused displays. Participants were asked to track one individual in each video sequence, as well as responding by key press when other individuals carried out secondary actions. Results showed that the SBS display led to much poorer accuracy than the other displays, while reaction times in carrying out the secondary task favoured AVE in the HL sequence and DWT in the LL sequence. Results are discussed in relation to previous findings regarding item saliency and task demands, and the potential for comparative experiments evaluating human performance when viewing fused sequences against naturally occurring fusion processes such as the rattlesnake's is highlighted.
We develop an integrated multi-phase approach to middle and high level data fusion with an application to situation and threat assessments. The method first builds a feature vector for each detected ground target that includes time, position and target class in a particular rectangular geographical area of the battlespace. It then clusters the feature vectors by position using a new robust clustering algorithm and makes an inventory of each cluster as to target classes, counts and posture parameters. Situation assessment is done next via a three-tiered cascaded process of case-based reasoning on cluster attribute records to infer the unit types, sizes, and purposes. These are then fed into our fuzzy belief network that performs inferencing via heuristic belief propagation for threat assessment, that is, it infers the actions and intentions of the enemy. A simple synthetic example demonstrates the process.