Article

Complex Evidence Theory for Multisource Data Fusion

Abstract

Data fusion is a prevalent technique for assembling imperfect raw data coming from multiple sources to capture reliable and accurate information. Dempster–Shafer evidence theory is a useful methodology in the fusion of uncertain multisource information. The existing literature lacks a thorough and comprehensive review of the recent advances of Dempster–Shafer evidence theory for data fusion. Therefore, the state of the art has to be surveyed to gain insight into how Dempster–Shafer evidence theory is beneficial for data fusion and how it evolved over time. In this paper, we first provide a comprehensive review of data fusion methods based on Dempster–Shafer evidence theory and its extensions, collectively referred to as classical evidence theory, from the three aspects of uncertainty modeling, fusion, and decision making. Next, we study and explore complex evidence theory for data fusion in both closed-world and open-world contexts, which benefits from the frame of complex plane modeling. We then present classical and complex evidence theory framework-based multisource data fusion algorithms, which are applied to pattern classification to compare and demonstrate their applicability. The research results indicate that the complex evidence theory framework can enhance the capabilities of uncertainty modeling and reasoning by generating constructive interference through the fusion of appropriate complex basic belief assignment functions modeled by complex numbers. Through analysis and comparison, we finally propose several challenges and identify open future research directions in evidence theory-based data fusion.
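For readers new to the classical framework surveyed here, the sketch below shows Dempster's rule of combination, the basic fusion step that both classical and complex evidence theory build on. It is a minimal illustration in Python with focal elements represented as frozensets; the example masses are illustrative, not taken from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic belief assignments (BBAs) with Dempster's rule.

    m1, m2: dicts mapping focal elements (frozensets) to masses summing to 1.
    Returns the normalized conjunctive combination; raises on total conflict.
    """
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:                      # non-empty intersection keeps the mass
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:                          # empty intersection counts as conflict
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Illustrative BBAs over the frame {x, y, z}
m1 = {frozenset("x"): 0.6, frozenset("xy"): 0.3, frozenset("xyz"): 0.1}
m2 = {frozenset("y"): 0.5, frozenset("xy"): 0.4, frozenset("xyz"): 0.1}
print(dempster_combine(m1, m2))
```

Complex evidence theory replaces these real-valued masses with complex basic belief assignments, which is how the constructive interference described in the abstract can arise when compatible sources are fused.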

... This theory was originally proposed by Dempster and later improved by Shafer. Due to this advantage, it has been widely applied in fields such as fault diagnosis [9][10][11], information fusion [12,13], multi-attribute decision analysis [14,15], complex evidence theory [16], decision-making [17,18], and medical image analysis [19]. ...
... Step 1: Utilize Eq (12) to determine the similarity between two pieces of evidence, m_j and m_k. Then the similarity measure matrix SMM_{n×n} can be represented as follows: ...
... The results are shown in Table 4. Based on the mean and standard deviation of the training samples in each category, the models for each attribute are constructed using the Student's t-distribution for each category. The resulting models for the four attributes are shown in Figs 10, 11, 12, and 13. These models are generated by fitting the Student's t-distribution to the data, reflecting the distinct nature of each category's distribution on the respective attributes. ...
Article
Full-text available
Evidence Theory (ET) is widely applied to handle uncertainty issues in fault diagnosis. However, when dealing with highly conflicting evidence, the use of Dempster’s rule may result in outcomes that contradict reality. To address this issue, this paper proposes a fault diagnosis decision-making method. The method is primarily divided into two parts. First, a similarity measurement method is introduced to solve the conflict management problem. This method combines the belief and plausibility functions within ET. It not only considers the numerical similarity between pieces of evidence but also takes into account directional similarity, better capturing the differences between different pieces of evidence. The effectiveness of this method is validated through several complex numerical examples. Next, based on this measurement method, we propose a conflict management method, which is validated through comparative experiments. Then, considering the inherent uncertainty in real-world sensor data, we propose a basic belief assignment (BBA) generation method based on Student’s t-distribution and fuzzy membership functions. Finally, by combining the proposed conflict management method based on similarity measurement with the BBA generation method, we derive the final fault diagnosis decision, and its effectiveness is demonstrated through an application.
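As a rough illustration of the BBA generation step described above, the sketch below fits a Student's t-distribution to each class's training samples for one attribute and turns the densities at a query value into a BBA. The normalization scheme and the assignment of residual mass to the whole frame are simplifying assumptions, not the paper's exact fuzzy-membership construction, and the class names and data are made up.

```python
import numpy as np
from scipy.stats import t as student_t

def fit_class_models(samples_by_class):
    """Fit a Student's t-distribution (df, loc, scale) to each class's
    training samples for a single attribute."""
    return {c: student_t.fit(x) for c, x in samples_by_class.items()}

def bba_from_models(value, models):
    """Turn the class densities at `value` into a BBA: normalized densities
    go to the singletons, and the leftover uncertainty (1 - max membership)
    is assigned to the whole frame of discernment."""
    dens = {c: student_t.pdf(value, *params) for c, params in models.items()}
    total = sum(dens.values())
    memberships = {c: d / total for c, d in dens.items()}
    ignorance = 1.0 - max(memberships.values())
    theta = frozenset(models)                       # whole frame
    bba = {frozenset([c]): m * (1.0 - ignorance) for c, m in memberships.items()}
    bba[theta] = bba.get(theta, 0.0) + ignorance
    return bba

models = fit_class_models({
    "normal": np.random.normal(0.0, 1.0, 50),
    "faulty": np.random.normal(3.0, 1.2, 50),
})
print(bba_from_models(1.2, models))
```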
... Evidence fusion is one of the effective methods for addressing the above problem [11,12]. In recent years, various improvements based on evidence fusion theory have been proposed, including time series complexity measurement-based methods [13], fractal confidence-based methods [14,15], and multi-source data fusion-based methods [16]. However, the above methods ignore the correlation between attributes in the evidence fusion process. ...
... Suppose the number of pieces of evidence is 6 and the reduction results are [12,16,13,17,1,14], [17,12,13,1,9,14], [12,13,17,1,14,9,11], [12,17,13,1,9,14,11], [17,13,12,1,9,14,4,0], and [12,17,1,14,13,9,11,0,16,4]; according to the above formula, their accuracies are 0.73333, 0.7467, 0.7155, 0.7435, 0.7570, and 0.7311, respectively. The result of fusion through the evidence fusion theory is [12,17,1,14,13], and its classification accuracy is 0.7800. ...
Article
Full-text available
With the development of intelligent technology, data in practical applications show exponential growth in quantity and scale. Extracting the most distinguished attributes from complex datasets becomes a crucial problem. The existing attribute reduction approaches focus on the correlation between attributes and labels without considering the redundancy. To address the above problem, we propose an ensemble approach based on an incremental information level and improved evidence theory for attribute reduction (IILE). Firstly, the incremental information level reduction measure comprehensively assesses attributes based on reduction capability and redundancy level. Then, an improved evidence theory and approximate reduction methods are employed to fuse multiple reduction results, thereby obtaining an approximately globally optimal and a most representative subset of attributes. Eventually, using different metrics, experimental comparisons are performed on eight datasets to confirm that our proposal achieved better than other methods. The results show that our proposal can obtain more relevant attribute sets by using the incremental information level and improved evidence theory.
... In the classical literature [30], several techniques for data fusion can be found, such as Bayesian probabilistic methods, filters (e.g., Kalman filter), fuzzy logic and rule-based systems, and machine learning approaches. In recent years, driven by the proliferation of artificial intelligence, new data fusion methods have emerged, including deep learning [31] through deep neural networks and Dempster-Shafer evidence theory [32,33], an extension of traditional probabilistic methods. However, these recent approaches involve higher computational complexity, are more challenging to interpret and explain, and complicate the appropriate parameter tuning process. ...
Article
Full-text available
In response to the limitations of current infrared and visible light image fusion algorithms—namely insufficient feature extraction, loss of detailed texture information, underutilization of differential and shared information, and the high number of model parameters—this paper proposes a novel multi-scale infrared and visible image fusion method with two-branch feature interaction. The proposed method introduces a lightweight multi-scale group convolution, based on GS convolution, which enhances multi-scale information interaction while reducing network parameters by incorporating group convolution and stacking multiple small convolutional kernels. Furthermore, the multi-level attention module is improved by integrating edge-enhanced branches and depthwise separable convolutions to preserve detailed texture information. Additionally, a lightweight cross-attention fusion module is introduced, optimizing the use of differential and shared features while minimizing computational complexity. Lastly, the efficiency of local attention is enhanced by adding a multi-dimensional fusion branch, which bolsters the interaction of information across multiple dimensions and facilitates comprehensive spatial information extraction from multimodal images. The proposed algorithm, along with seven others, was tested extensively on public datasets such as TNO and Roadscene. The experimental results demonstrate that the proposed method outperforms other algorithms in both subjective and objective evaluation results. Additionally, it demonstrates good performance in terms of operational efficiency. Moreover, target detection performance experiments conducted on the M3FD dataset confirm the superior performance of the proposed algorithm.
Article
In multisource information fusion (MSIF), Dempster–Shafer evidence (DSE) theory offers a useful framework for reasoning under uncertainty. However, measuring the divergence between belief functions within this theory remains an unresolved challenge, particularly in managing conflicts in MSIF, which is crucial for enhancing the decision-making level. In this paper, several divergence and distance functions are proposed to quantitatively measure discrimination between belief functions in DSE theory, including the reverse evidential Kullback–Leibler (REKL) divergence, evidential Jeffrey's (EJ) divergence, evidential Jensen–Shannon (EJS) divergence, evidential χ² (Eχ²) divergence, evidential symmetric χ² (ESχ²) divergence, evidential triangular (ET) discrimination, evidential Hellinger (EH) distance, and evidential total variation (ETV) distance. On this basis, a generalized f-divergence, also called the evidential f-divergence (Ef divergence), is proposed. Depending on different kernel functions, the Ef divergence degrades into several specific classes: EKL, REKL, EJ, EJS, Eχ² and ESχ² divergences, ET discrimination, and EH and ETV distances. Notably, when basic belief assignments (BBAs) are transformed into probability distributions, these classes of Ef divergence revert to their classical counterparts in statistics and information theory. In addition, several Ef-MSIF algorithms are proposed for pattern classification based on the classes of Ef divergence. These Ef-MSIF algorithms are evaluated on real-world datasets to demonstrate their practical effectiveness in solving classification problems. In summary, this work represents the first attempt to extend the classical f-divergence within the DSE framework, capitalizing on the distinct properties of BBA functions. Experimental results show that the proposed Ef-MSIF algorithms improve classification accuracy, with the best-performing Ef-MSIF algorithm achieving an overall performance difference approximately 1.22 times smaller than the suboptimal method and 14.12 times smaller than the worst-performing method.
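Among the divergences listed above, the evidential Jensen–Shannon form is the simplest to state; the sketch below computes a Jensen–Shannon-style divergence directly on the BBA masses. This is a hedged illustration of the general idea only, not the paper's kernel-based Ef definitions, and the example BBAs are made up.

```python
import math

def ejs_divergence(m1, m2):
    """Jensen-Shannon-style divergence computed directly on BBA masses
    (dicts: focal element -> mass); 0 iff the two BBAs coincide."""
    focal = set(m1) | set(m2)
    avg = {a: 0.5 * (m1.get(a, 0.0) + m2.get(a, 0.0)) for a in focal}
    def kl(p):
        return sum(p.get(a, 0.0) * math.log2(p.get(a, 0.0) / avg[a])
                   for a in focal if p.get(a, 0.0) > 0)
    return 0.5 * kl(m1) + 0.5 * kl(m2)

m1 = {frozenset("a"): 0.7, frozenset("ab"): 0.3}
m2 = {frozenset("b"): 0.6, frozenset("ab"): 0.4}
print(ejs_divergence(m1, m2))
```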
Article
In the era of complex data environments, accurately measuring uncertainty is crucial for effective decision making. Complex evidence theory (CET) provides a framework for handling uncertainty reasoning in the complex plane. Within CET, complex basic belief assignment (CBBA) aims to tackle the uncertainty and imprecision inherent in data coinciding with phase or periodic changes. However, measuring the uncertainty of CBBA over time remains an open issue. This study introduces a novel entropy model, the complex belief (CB) entropy, within the framework of CET, designed to tackle the inherent uncertainty and imprecision in data with phase or periodic changes. The model is developed by integrating concepts of interference and fractal theory to extend the understanding of uncertainty over time. Methodologically, the CB entropy is constructed to include discord, nonspecificity, and an interaction term for focal elements, defined as interference. In addition, thanks to the concept of the fractal, the model is further generalized to time fractal-based CB (TFCB) entropy for forecasting future uncertainties. We furthermore analyze the properties of the entropy models. Findings demonstrate that the proposed entropy models provide a more comprehensive measure of uncertainty in complex scenarios. Finally, a decision-making method based on the proposed entropy is proposed.
Article
Full-text available
Considering the tractability of the OGM (Occupancy Grid Map) and its wide use in the dynamic environment representation of mobile robotics, the extraction of motion information from successive OGMs is very important for many tasks, such as SLAM (Simultaneous Localization And Mapping) and DATMO (Detection And Tracking of Moving Objects). In this paper, we propose a novel motion extraction method based on a signal transform, called S-KST (Spatial Keystone Transform), for motion detection and estimation from successive noisy OGMs. It extends the KST used in radar imaging and motion compensation to the 1D spatial case (1DS-KST) and the 2D spatial case (2DS-KST), combined with multiple hypotheses about the possible directions of moving obstacles. Meanwhile, a fast algorithm for the 2DS-KST based on the Chirp Z-Transform (CZT) is also given, which includes five steps, i.e., spatial FFT, directional filtering, CZT, spatial IFFT, and Maximal Power Detector (MPD) merging, and whose computational complexity is proportional to that of the 2D-FFT. Simulation results for point objects and extended objects show that S-KST performs well in extracting sub-pixel motions in very noisy environments, especially for slowly moving obstacles.
Article
Full-text available
Maneuvering target tracking is widely used in unmanned vehicles, missile navigation, underwater ships, etc. Due to the uncertainty of the moving characteristics of maneuvering targets and the low measurement accuracy of sensors, trajectory tracking has always been an open and challenging research problem. This paper proposes a trajectory estimation method based on an LSTM neural network for uncertain motion characteristics. The network consists of two LSTM networks with a stacked serial relationship: one is used to predict the movement dynamics, and the other is used to update the track's state. Compared with the classical Kalman filter based on a maneuver model, the method proposed here does not need to model the motion characteristics or the sensor characteristics. It can achieve high-performance tracking by learning the dynamics of historical data and the sensor characteristics. Experimental results show that this method can effectively improve the trajectory estimation performance when the target motion is unknown and uncertain.
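A minimal sketch of the two serially stacked LSTMs described above is given below in PyTorch; the measurement and state dimensions, the hidden size, and the way the two networks are chained are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class TwoStageLSTMTracker(nn.Module):
    """Two serially stacked LSTMs: the first models the motion dynamics
    from raw measurements, the second refines (updates) the track state."""
    def __init__(self, meas_dim=2, state_dim=4, hidden=64):
        super().__init__()
        self.dynamics_lstm = nn.LSTM(meas_dim, hidden, batch_first=True)
        self.update_lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.state_head = nn.Linear(hidden, state_dim)

    def forward(self, measurements):               # (batch, time, meas_dim)
        dyn, _ = self.dynamics_lstm(measurements)  # motion dynamics features
        upd, _ = self.update_lstm(dyn)             # state-update features
        return self.state_head(upd)                # (batch, time, state_dim)

model = TwoStageLSTMTracker()
estimate = model(torch.randn(8, 50, 2))            # 8 noisy 2-D tracks, 50 steps
print(estimate.shape)                              # torch.Size([8, 50, 4])
```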
Article
Full-text available
In many circumstances, decisions are based on subjective experience. However, some views can be vague, meaning that policymakers do not know exactly how they should express their opinions. Therefore, it is necessary for researchers to provide scientific decision frameworks, among which the multi-criteria decision making (MCDM) method in the linguistic environment is gradually favored by scholars. A large body of literature reports relevant approaches with regard to linguistic term sets, but existing approaches are insufficient to express the subjective thoughts of policymakers in a complex and uncertain environment. In this paper, we address this problem by introducing the concept of evidential linguistic term set (ELTS). ELTS generalizes many other uncertainty representations under linguistic context, such as fuzzy sets, probabilities, or possibility distributions. Measures on ELTS, such as uncertainty measure, dissimilarity measure and expectation function, provide general frameworks to handle uncertain information. Modeling and reasoning of information expressed by ELTSs are realized by the proposed aggregation operators. Subsequently, this paper presents a novel MCDM approach called evidential linguistic ELECTRE method, and applies it to the case of selection of emergency shelter sites. The findings demonstrate the effectiveness of the proposed method for MCDM problems under linguistic context and highlight the significance of the developed ELTS.
Article
Full-text available
How to handle conflict in Dempster-Shafer evidence theory is an open issue. Many approaches have been proposed to solve this problem. The existing approaches can be divided into two kinds: the first is to improve the combination rule, and the second is to modify the data model. A typical way to improve the combination rule is to assign the conflict to the total ignorance set Θ. However, this does not make full use of the conflict information. A novel combination rule is proposed in this paper, which assigns the conflicting mass to the power set (ACTP). Compared with modifying the data model, the advantage of the proposed method is sequential fusion, which greatly decreases computational complexity. To demonstrate the efficacy of the proposed method, some numerical examples are given. Due to the lower information loss, the proposed method is better than other methods in terms of identifying the correct evidence, the speed of convergence, and computational complexity.
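For context, the sketch below implements the "typical" fix mentioned above, which moves all conflicting mass to the total ignorance set Θ; the proposed ACTP rule would instead redistribute that mass over the power set, and its exact allocation scheme is not reproduced here. The example BBAs are made up.

```python
from itertools import product

def conflict_to_ignorance_combine(m1, m2, frame):
    """Conjunctive combination that moves all conflicting mass to the total
    ignorance set Theta (the 'typical' fix referenced in the abstract).
    The ACTP rule would instead spread this mass over the power set."""
    theta = frozenset(frame)
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    combined[theta] = combined.get(theta, 0.0) + conflict
    return combined

m1 = {frozenset("a"): 0.9, frozenset("b"): 0.1}
m2 = {frozenset("a"): 0.1, frozenset("b"): 0.9}   # highly conflicting sources
print(conflict_to_ignorance_combine(m1, m2, "ab"))
```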
Article
Full-text available
Open set recognition (OSR) aims to correctly recognize the known classes and reject the unknown classes to increase the reliability of the recognition system. A distance-based loss is often employed in deep neural network-based OSR methods to constrain the latent representation of known classes. However, the optimization is usually conducted using the nondirectional Euclidean distance in a single feature space without considering the potential impact of the spatial distribution. To address this problem, we propose orientational distribution learning (ODL) with hierarchical spatial attention for OSR. In ODL, the spatial distribution of the feature representation is optimized orientationally to increase the discriminability of decision boundaries for open set recognition. Then, a hierarchical spatial attention mechanism is proposed to assist ODL in capturing the global distribution dependencies in the feature space based on spatial relationships. Moreover, a composite feature space is constructed to integrate the features from different layers and different mapping approaches, which enriches the representation information. Finally, a decision-level fusion method is developed to combine the composite feature space and the naive feature space to produce a more comprehensive classification result. The effectiveness of ODL has been demonstrated on various benchmark datasets, and ODL achieves state-of-the-art performance.
Article
Full-text available
With the development of quantum decision making, how to bridge classical theory with the quantum framework has gained much attention in the past few years. Recently, complex evidence theory (CET), a generalized Dempster–Shafer evidence theory, was presented to handle uncertainty on the complex plane. However, CET focuses on a closed world, where the frame of discernment is complete with exhaustive elements. To address this limitation, in this paper we generalize CET to the quantum framework of Hilbert space in an open world and propose a generalized quantum evidence theory (GQET). On the basis of GQET, a quantum multisource information fusion algorithm is proposed to handle uncertainty in an open world. To verify its effectiveness, we apply the proposed quantum multisource information fusion algorithm to a practical classification fusion task.
Article
Full-text available
It is still a challenging problem to characterize uncertainty and imprecision between specific (singleton) clusters with arbitrary shapes and sizes. To solve this problem, we propose a belief shift clustering (BSC) method for dealing with object data. The BSC method can be considered the evidential version of mean shift or mode seeking under the theory of belief functions. First, a new notion, called belief shift, is provided to preliminarily assign each query object as a noise, precise, or imprecise one. Second, a new evidential clustering rule is designed for partial credal redistribution of each imprecise object. To avoid the “uniform effect” and useless calculations, a specific dynamic framework with simulated cluster centers is established to reassign each imprecise object to a singleton cluster or a related meta-cluster. Once an object is assigned to a meta-cluster, this object may lie in the overlapping or intermediate areas of different singleton clusters. Consequently, the BSC can reasonably characterize the uncertainty and imprecision between singleton clusters. Its effectiveness has been verified on several artificial, natural, and image segmentation/classification datasets by comparison with other related methods.
Article
Full-text available
Uncertainty is of great concern in information fusion and artificial intelligence. Dempster-Shafer theory is a popular tool for dealing with uncertainty, but it cannot effectively represent and fuse uncertain information involving non-exclusiveness and incompleteness. To solve this problem, the idea of D number theory (DNT) has been proposed. In this paper, the basic theory of DNT for the fusion of non-exclusive and incomplete information is studied to strengthen its theoretical foundation, including concept formalization, uncertainty representation, and information modelling and fusion. First, the non-exclusiveness in DNT is defined formally and its basic properties are discussed. Second, new measures of belief and plausibility for D numbers are developed. Third, the combination rule for D numbers is studied by extending the exclusive conflict redistribution rule. Fourth, a method to combine information-incomplete D numbers is proposed. The proposed concepts, definitions, and methods are analyzed mathematically and exemplified through illustrative examples.
Article
Full-text available
The problem of multiscale ship detection in synthetic aperture radar (SAR) images has received much attention with the development of deep convolutional neural networks (DCNNs). However, existing DCNN-based multiscale SAR ship detection methods often lead to a time-consuming detection process due to the massive number of parameters therein. To address this issue, a lightweight center-based detector with a multilevel auxiliary supervision (MLAS) structure is proposed in this article. First, an extremely lightweight backbone network is designed to improve computational efficiency and extract SAR image features in a bottom-up manner. Then, a feature fusion network containing three multiscale feature fusion modules is introduced to combine semantic features at different levels. Finally, a novel MLAS-based framework is proposed to train our DCNN with multilevel auxiliary detection subnets. MLAS improves the performance of multiscale ship detection by benefiting from the guidance of multilevel attention. Experimental results on the open SAR image dataset SSDD show that our proposed detector achieves a similar average precision for the problem of multiscale SAR ship detection but significantly reduces the computational burden of state-of-the-art methods. The required number of floating-point operations of our method is only 21.70%, 19.30%, and 4.81% of those of CenterNet, YOLOv3, and RetinaNet, respectively, and the number of learnable weights in our method is only 0.68 million, which is 5.63%, 1.10%, and 2.98% of those of the aforementioned three existing methods, respectively.
Article
Full-text available
A network intrusion detection model that fuses a convolutional neural network and a gated recurrent unit is proposed to address the low accuracy of existing intrusion detection models in the multiple classification of intrusions and the low accuracy of class-imbalanced data detection. In this model, a hybrid sampling algorithm combining Adaptive Synthetic Sampling (ADASYN) and Repeated Edited Nearest Neighbors (RENN) is used for sample processing to solve the problem of positive and negative sample imbalance in the original dataset. Feature selection is carried out by combining the Random Forest algorithm and Pearson correlation analysis to solve the problem of feature redundancy. Then, the spatial features are extracted by a convolutional neural network and further refined by fusing average pooling and max pooling, with an attention mechanism used to assign different weights to the features, thus reducing the overhead and improving the model performance. At the same time, a Gated Recurrent Unit (GRU) is used to extract long-distance dependency features to achieve comprehensive and effective feature learning. Finally, a softmax function is used for classification. The proposed intrusion detection model is evaluated on the UNSW_NB15, NSL-KDD, and CIC-IDS2017 datasets, and the experimental results show that the classification accuracy reaches 86.25%, 99.69%, and 99.65%, respectively, which is 1.95%, 0.47%, and 0.12% higher than that of the comparable CNN-GRU model, and that the model can address the problems of low classification accuracy and class imbalance well.
Article
Full-text available
Data with missing values, or incomplete information, brings some challenges to the development of classification, as the incompleteness may significantly affect the performance of classifiers. In this paper, we handle missing values in both training and test sets with uncertainty and imprecision reasoning by proposing a new belief combination of classifiers (BCC) method based on evidence theory. The proposed BCC method aims to improve the classification performance on incomplete data by characterizing the uncertainty and imprecision brought by incompleteness. In BCC, different attributes are regarded as independent sources, and the collection of each attribute is considered as a subset. Then, multiple classifiers are trained with each subset independently, allowing each observed attribute to provide a sub-classification result for the query pattern. Finally, these sub-classification results with different weights (discounting factors) are used to provide supplementary information to jointly determine the final classes of the query patterns. The weights consist of two aspects: global and local. The global weight, calculated by an optimization function, is employed to represent the reliability of each classifier, and the local weight, obtained by mining attribute distribution characteristics, is used to quantify the importance of the observed attributes to the pattern classification. Abundant comparative experiments with seven methods on twelve datasets are executed, demonstrating the outperformance of BCC over all baseline methods in terms of accuracy, precision, recall, and F1 measure, along with the associated computational costs.
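The weighting of sub-classification results described above feeds into the classical discounting operation of evidence theory; a minimal sketch is given below, where the reliability factor alpha stands in for the combined global and local weights computed by BCC, and the example BBA is made up.

```python
def discount(bba, alpha, frame):
    """Shafer's discounting: scale a BBA by the reliability factor alpha and
    move the remaining 1 - alpha of the mass to the whole frame Theta."""
    theta = frozenset(frame)
    out = {a: alpha * v for a, v in bba.items()}
    out[theta] = out.get(theta, 0.0) + (1.0 - alpha)
    return out

sub_result = {frozenset("x"): 0.8, frozenset("xy"): 0.2}   # one attribute's output
print(discount(sub_result, 0.7, "xyz"))
# {x}: 0.56, {x, y}: 0.14, {x, y, z}: 0.30
```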
Article
Full-text available
For exploring the meaning of the power set in evidence theory, a possible explanation of the power set is proposed from the view of Pascal's triangle and combinatorial numbers. Here comes the question: what would happen if the combinatorial number were replaced by the permutation number? To address this issue, a new kind of set, named the random permutation set (RPS), is proposed in this paper, which consists of the permutation event space (PES) and the permutation mass function (PMF). The PES of a certain set considers all the permutations of that set. The elements of the PES are called permutation events. The PMF describes the chance that a certain permutation event would happen. Based on the PES and PMF, an RPS can be viewed as a permutation-based generalization of a random finite set. Besides, the right intersection (RI) and left intersection (LI) of permutation events are presented. Based on RI and LI, the right orthogonal sum (ROS) and left orthogonal sum (LOS) of PMFs are proposed. In addition, numerical examples are shown to illustrate the proposed concepts. The comparisons of probability theory, evidence theory, and RPS are discussed and summarized. Moreover, an RPS-based data fusion algorithm is proposed and applied to threat assessment. The experimental results show that the proposed RPS-based algorithm can reasonably and efficiently deal with uncertainty in threat assessment with respect to threat ranking and reliability ranking.
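To make the permutation event space concrete, the sketch below enumerates every ordered arrangement of every non-empty subset of a small frame; whether the empty arrangement is also counted follows the paper's convention and is omitted here.

```python
from itertools import permutations

def permutation_event_space(frame):
    """Enumerate the permutation event space (PES): every ordered arrangement
    of every non-empty subset of the frame (the empty event is left out here)."""
    elements = sorted(frame)
    events = []
    for r in range(1, len(elements) + 1):
        events.extend(permutations(elements, r))
    return events

pes = permutation_event_space({"a", "b", "c"})
print(len(pes))   # 3 + 6 + 6 = 15 ordered events for a 3-element frame
```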
Article
In artificial intelligence, it is crucial for pattern recognition systems to process data with uncertain information, necessitating uncertainty reasoning approaches such as evidence theory. As an orderable extension of evidence theory, random permutation set (RPS) theory has received increasing attention. However, RPS theory lacks a suitable generation method for the element order of permutation mass function (PMF) and an efficient determination method for the fusion order of permutation orthogonal sum (POS). To solve these two issues, this paper proposes a reasoning model for RPS theory, called random permutation set reasoning (RPSR). RPSR consists of three techniques, including RPS generation method (RPSGM), RPSR rule of combination, and ordered probability transformation (OPT). Specifically, RPSGM can construct RPS based on Gaussian discriminant model and weight analysis; RPSR rule incorporates POS with reliability vector, which can combine RPS sources with reliability in fusion order; OPT is used to convert RPS into a probability distribution for the final decision. Besides, numerical examples are provided to illustrate the proposed RPSR. Moreover, the proposed RPSR is applied to classification problems. An RPSR-based classification algorithm (RPSRCA) and its hyperparameter tuning method are presented. The results demonstrate the efficiency and stability of RPSRCA compared to existing classifiers.
Article
The rapid advancement of Internet technology, driven by social media and e-commerce platforms, has facilitated the generation and sharing of multimodal data, leading to increased interest in efficient cross-modal retrieval systems. Cross-modal image-text retrieval, encompassing tasks such as image query text (IqT) retrieval and text query image (TqI) retrieval, plays a crucial role in semantic searches across modalities. This paper presents a comprehensive survey of cross-modal image-text retrieval, addressing the limitations of previous studies that focused on single perspectives such as subspace learning or deep learning models. We categorize existing models into single-tower, dual-tower, real-value representation, and binary representation models based on their structure and feature representation. Additionally, we explore the impact of multimodal Large Language Models (MLLMs) on cross-modal retrieval. Our study also provides a detailed overview of common datasets, evaluation metrics, and performance comparisons of representative methods. Finally, we identify current challenges and propose future research directions to advance the field of cross-modal image-text retrieval.
Article
Bias estimation of sensors is an essential prerequisite for accurate data fusion. Neglect of temporal bias in general real systems prevents the existing algorithms from being applied successfully. In this paper, both spatial and temporal biases in asynchronous multisensor systems are investigated, and two novel methods for simultaneous spatiotemporal bias compensation and data fusion are presented. The general situation in which the sensors sample at different times with different and varying periods is explored, and unknown time delays may exist between the time stamps and the true measurement times. Due to the time delays, the time stamp interval of the measurements from different sensors may differ from their true measurement interval, and the unknown difference between them is considered the temporal bias and augmented into the state vector to be estimated. Multisensor measurements are collected in batch processing or sequential processing schemes to estimate the augmented state vector, resulting in two spatiotemporal bias compensation methods. In both processing schemes, the measurements are formulated as functions of both the target states and the spatiotemporal biases according to the time difference between the measurements and the states to be estimated. The Unscented Kalman Filter is used to handle the nonlinearity of the measurements and produce spatiotemporal bias and target state estimates simultaneously. The posterior Cramér-Rao lower bound (PCRLB) for spatiotemporal bias and state estimation is presented, and simulations are conducted to demonstrate the effectiveness of the proposed methods.
Article
In response to the practical requirements of infrared and visible video fusion, which often involves the collaborative fusion of difference-feature information, and to the fact that existing models cannot dynamically adjust the fusion strategy according to the differences between videos, resulting in poor fusion performance, a mimic fusion algorithm for infrared and visible videos based on possibility distribution synthesis theory is proposed. First, the various difference features and their attributes in the region of interest of each frame of the dual-channel video sequence are quantitatively described, and the main difference features corresponding to each frame are selected. Second, the Pearson correlation coefficient is used to measure the correlation between any two features and obtain the feature correlation matrix. Then, based on the similarity measure, the fusion effectiveness distribution of each layer's variables for the different difference features is constructed, and the difference feature distributions are correlated and synthesized based on possibility distribution synthesis theory. Finally, the selection of mimic variables is optimized to achieve mimic fusion of the infrared and visible videos. The experimental results show that the proposed method achieves significant fusion results in preserving targets and details and is significantly superior to other single fusion methods in subjective evaluation and objective analysis.
Article
The unrestricted development and utilization of marine resources have resulted in a series of practical problems, such as the destruction of marine ecology. The wide application of radar, satellites, and other detection equipment has gradually led to a large variety of large-capacity marine spatiotemporal trajectory data from a vast number of sources. In the field of marine domain awareness, there is an urgent need to use relevant information technology to control and monitor ships and to accurately classify and identify ship behavior patterns through multisource data fusion analysis. In addition, the increase in the type and quantity of trajectory data has produced a corresponding increase in the complexity and difficulty of data processing that cannot be adequately addressed by traditional data mining algorithms. Therefore, this paper provides a deep learning-based algorithm for the recognition of four main motion types of ships from automatic identification system (AIS) data: anchoring, mooring, sailing, and fishing. A new method for classifying patterns is presented that combines the computer vision and time series domains. Experiments are carried out on a dataset constructed from the open AIS data of ships in the coastal waters of the United States, which show that the method proposed in this paper achieves more than 95% recognition accuracy. The experimental results confirm that the method proposed in this paper is effective in classifying ship trajectories using AIS data and that it can provide efficient technical support for marine supervision departments.
Article
Uncertainty modeling and reasoning in intelligent systems are crucial for effective decision-making, such as complex evidence theory (CET) being particularly promising in dynamic information processing. Within CET, the complex basic belief assignment (CBBA) can model uncertainty accurately, while the complex rule of combination can effectively reason uncertainty with multiple sources of information, reaching a consensus. However, determining CBBA, as the key component of CET, remains an open issue. To mitigate this issue, we propose a novel method for generating CBBA using high-level features extracted from Box–Cox transformation and discrete Fourier transform (DFT). Specifically, our method deploys complex Gaussian fuzzy number (CGFN) to generate CBBA, which provides a more accurate representation of information. The proposed method is applied to pattern classification tasks through a multisource information fusion algorithm, and it is compared with several well-known methods to demonstrate its effectiveness. Experimental results indicate that our proposed CGFN-based method outperforms existing methods, by achieving the highest average classification rate in multisource information fusion for pattern classification tasks. We found the Box–Cox transformation contributes significantly to CGFN by formatting data in a normal distribution, and DFT can effectively extract high-level features. Our method offers a practical approach for generating CBBA in CET, precisely representing uncertainty and enhancing decision-making in uncertain scenarios.
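The feature-extraction step named above (a Box–Cox transformation followed by a discrete Fourier transform) can be sketched as below; the choice of the dominant non-DC bin as the "high-level feature" is an illustrative assumption, and the subsequent CGFN and CBBA construction is not reproduced.

```python
import numpy as np
from scipy.stats import boxcox

def high_level_features(samples):
    """Box-Cox transform (requires strictly positive data) followed by a
    discrete Fourier transform; the magnitude and phase of the dominant
    non-DC bin serve as simple 'high-level' features."""
    transformed, _lmbda = boxcox(np.asarray(samples, dtype=float))
    spectrum = np.fft.fft(transformed)
    k = 1 + int(np.argmax(np.abs(spectrum[1:])))   # dominant non-DC component
    return np.abs(spectrum[k]), np.angle(spectrum[k])

magnitude, phase = high_level_features(np.random.rand(64) + 0.1)
print(magnitude, phase)
```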
Article
Time series data contain a great deal of information reflecting the development process and state of a subject. In particular, complexity is a valuable factor for illustrating the features of a time series. However, it is still an open issue to measure the complexity of sophisticated time series due to their uncertainty. In this study, based on the belief Rényi divergence, a novel time series complexity measurement algorithm, called belief Rényi divergence of divergence (BRéDOD), is proposed. Specifically, the BRéDOD algorithm takes the boundaries of the time series values into account. Moreover, according to Dempster-Shafer (D-S) evidence theory, the time series is converted to basic probability assignments (BPAs), and the divergence of a divergence sequence is measured. Then, the secondary divergence of the time series is computed to represent the complexity of the time series. In addition, the BRéDOD algorithm is applied to sets of cardiac inter-beat interval time series, which shows the superiority of the proposed method over classical machine learning methods and recent well-known works.
Article
Multisource information fusion is a comprehensive and interdisciplinary subject. Dempster-Shafer (D-S) evidence theory copes with uncertain information effectively. Pattern classification is the core research content of pattern recognition, and multisource information fusion based on D-S evidence theory can be effectively applied to pattern classification problems. However, in D-S evidence theory, highly conflicting evidence may cause counterintuitive fusion results. Belief divergence theory is one of the theories proposed to address the problems of highly conflicting evidence. Although belief divergence can deal with conflict between evidence, none of the existing belief divergence methods has considered how to effectively measure the discrepancy between two pieces of evidence with time evolution. In this study, a novel fractal belief Rényi (FBR) divergence is proposed to handle this problem. To our knowledge, it is the first divergence that extends the concept of the fractal to the Rényi divergence. Its advantage is measuring the discrepancy between two pieces of evidence with time evolution, which satisfies several properties and is flexible and practical in various circumstances. Furthermore, a novel algorithm for multisource information fusion based on the FBR divergence, namely FBReD-based weighted multisource information fusion, is developed. Ultimately, the proposed multisource information fusion algorithm is applied to a series of experiments for pattern classification based on real datasets, where our proposed algorithm achieved superior performance.
Article
Remote sensing image semantic segmentation (RSISS) remains challenging due to the scarcity of labeled data. Semi-supervised learning can leverage pseudo-labels to enhance the model’s ability to learn from unlabeled data. However, accurately generating pseudo-labels for RSISS remains a significant challenge that severely affects the model’s performance, especially for the edges of different classes. In order to overcome these issues, we propose a semi-supervised semantic segmentation framework for remote sensing images based on edge-aware class activation enhancement (ECAE). Firstly, the baseline network is constructed based on the average teacher model, which separates the training of labeled and unlabeled data using student and teacher networks. Secondly, considering local continuity and global discreteness of object distribution in remote sensing images, the class activation mapping enhancement (CAME) network is designed to predict local areas more remarkably. Finally, the edge-aware network (EAN) is proposed to improve the performance of edge segmentation in remote sensing images. The combination of the CAME with the EAN further heightens the generation of high-confidence pseudo-labels. Experiments were performed on two publicly available remote sensing semantic segmentation datasets, Potsdam and ISPRS Vaihingen, which verify the superiorities of the proposed ECAE model.
Article
Information can be quantified and expressed by uncertainty, and improving the decision level of uncertain information is vital in modeling and processing uncertain information. Dempster-Shafer evidence theory can model and process uncertain information effectively. However, the Dempster combination rule may provide counterintuitive results when dealing with highly conflicting information, leading to a decline in the decision level. Thus, measuring conflict is significant for improving the decision level. Motivated by this issue, this paper proposes a novel method to measure the discrepancy between bodies of evidence. First, the model of dynamic fractal probability transformation is proposed to effectively obtain more information about the non-specificity of basic belief assignments (BBAs). Then, we propose the higher-order fractal belief Rényi divergence (HOFBReD). HOFBReD can effectively measure the discrepancy between BBAs. Moreover, it is the first belief Rényi divergence that can measure the discrepancy between BBAs with dynamic fractal probability transformation. HOFBReD has several properties in terms of probability transformation as well as measurement. When the dynamic fractal probability transformation ends, HOFBReD is equivalent to measuring the Rényi divergence between the pignistic probability transformations of the BBAs. When the BBAs degenerate to probability distributions, HOFBReD also degenerates to, or is related to, several well-known divergences. In addition, based on HOFBReD, a novel multisource information fusion algorithm is proposed. A pattern classification experiment with real-world datasets is presented to compare the proposed algorithm with other methods. The experimental results indicate that the proposed algorithm has a higher average pattern recognition accuracy on all datasets than the other methods. The proposed discrepancy measurement method and multisource information fusion algorithm contribute to improving the decision level.
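The limiting case mentioned above, in which the divergence reduces to a Rényi divergence between pignistic probability transformations, can be sketched as follows; the order alpha and the example BBAs are illustrative, and the dynamic fractal transformation itself is not reproduced here.

```python
import math

def pignistic(bba):
    """Pignistic transformation: BetP(x) = sum over focal sets A containing x
    of m(A) / |A| (assumes a normal BBA with no mass on the empty set)."""
    betp = {}
    for focal, mass in bba.items():
        for x in focal:
            betp[x] = betp.get(x, 0.0) + mass / len(focal)
    return betp

def renyi_divergence(p, q, alpha=2.0):
    """Classical Renyi divergence of order alpha between two distributions
    (assumes q(x) > 0 wherever p(x) > 0)."""
    s = sum((px ** alpha) / (q[x] ** (alpha - 1)) for x, px in p.items() if px > 0)
    return math.log(s) / (alpha - 1)

m1 = {frozenset("a"): 0.6, frozenset("ab"): 0.4}
m2 = {frozenset("b"): 0.5, frozenset("ab"): 0.5}
print(renyi_divergence(pignistic(m1), pignistic(m2)))
```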
Article
This article investigates how to make use of imperfect data gathered from different sources for inference and decision making. Based on Bayesian inference and the principle of likelihood, a likelihood analysis method is proposed for acquisition of evidence from imperfect data to enable likelihood inference within the framework of the evidential reasoning (ER). The nature of this inference process is underpinned by the new necessary and sufficient conditions that when a piece of evidence is acquired from a data source it should be represented as a normalized likelihood distribution to capture the essential evidential meanings of data. While the explanation of sufficiency of the conditions is straightforward based on the principle of likelihood, their necessity needs to be established by following the principle of Bayesian inference. It is also revealed that the inference process enabled by the ER rule under the new conditions constitutes a likelihood inference process, which becomes equivalent to Bayesian inference when there is no ambiguity in data and a prior distribution can be obtained as a piece of independent evidence. Two examples in decision analysis under uncertainty and a case study about fault diagnosis for railway track maintenance management are examined to demonstrate the steps of implementation and potential applications of the likelihood inference process.
Article
Evidence theory provides an effective framework for representing and handling uncertain information. However, the quantification of the uncertainty of a mass function in this theory is still an unsolved problem. For the two types of uncertainty involved in evidence theory, conflict and nonspecificity, many measurement methods have been proposed on the basis of axiomatic requirements. However, these existing methods for measuring the uncertainty of a mass function suffer from deficiencies to varying degrees, such as low sensitivity, counterintuitive results, disputes over maximum entropy, and so on. To overcome these defects, a total uncertainty measure based on the plausibility function, named plausibility entropy, is proposed in this article, which provides a new solution to measuring the uncertainty of a mass function. By embodying the plausibility function and the plausibility transformation of every singleton in the frame of discernment, the new measure enlarges the Shannon entropy of the equivalent probability mass function obtained using the plausibility transformation, and it establishes a quantitative relationship between the uncertainty measure and Dempster's rule of combination. Compared with existing uncertainty measures, the proposed plausibility entropy is more sensitive to changes in a mass function. It also satisfies many desirable axiomatic properties, including non-negativity, maximum entropy, probability consistency, monotonicity, and so on. Moreover, the relationships among the axiomatic properties of uncertainty measures are also discussed in this article. Numerical examples and comparisons are provided to illustrate the effectiveness and rationality of the proposed plausibility entropy.
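The building blocks referenced above, the plausibility transformation of singletons followed by Shannon's entropy of the resulting probability distribution, can be sketched as below; the full plausibility entropy adds further terms that are not reproduced here, and the example mass function is made up.

```python
import math

def plausibility_transform(bba):
    """Probability from singleton plausibilities:
    PnPl(x) = Pl({x}) / sum_y Pl({y}), with Pl({x}) the total mass of all
    focal elements containing x."""
    pl = {}
    for focal, mass in bba.items():
        for x in focal:
            pl[x] = pl.get(x, 0.0) + mass
    total = sum(pl.values())
    return {x: v / total for x, v in pl.items()}

def shannon_entropy(prob):
    return -sum(p * math.log2(p) for p in prob.values() if p > 0)

m = {frozenset("a"): 0.5, frozenset("ab"): 0.3, frozenset("abc"): 0.2}
prob = plausibility_transform(m)
print(prob, shannon_entropy(prob))
```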
Article
Complex evidence theory (CET), an extension of Dempster–Shafer evidence theory, is a generalized evidence theory based on complex values, which has been widely used for decision making with uncertain information. To build on previous research, we set out to connect CET with other uncertainty theories such as possibility theory, modal logic, fuzzy set theory, and probability theory. Under the restricted condition of a special generalized consonant belief function, it is found that the possibility and necessity measures can be regarded as generalized plausibility and belief measures in possibility theory, and the standard interpretation of fuzzy sets in complex evidence theory is obtained by building on previous work and modifying the upper bound of fuzzy sets. In addition, we establish the relationship between the generalized plausibility and belief functions and modal logic, and elaborate on how to explain the complex basic belief distribution with the general semantics of modern modal logic. Finally, a transformation algorithm between complex basic belief assignments and probability distributions is proposed by using the uncertainty invariance principle, and some of its properties are derived.
Article
Complex evidence theory, as a generalization of Dempster-Shafer evidence theory, has the ability to express uncertainty and perform uncertainty reasoning. One of the key issues in complex evidence theory is the complex basic belief assignment (CBBA) generation method. However, how to model uncertain information in complex evidence theory is still an open issue. In this paper, therefore, we propose a CBBA generation method that takes advantage of the triangular fuzzy number. Moreover, an algorithm for decision making is devised based on the proposed CBBA generation method. Finally, the decision-making algorithm is applied to classification to verify its effectiveness. In summary, the proposed method can handle uncertainty modeling and reasoning in both the real number domain and the complex number domain, which provides a promising way forward in decision-making theory.
Article
As a key technology to perform sustainable development in transportation, electric vehicles have been largely welcomed due to their advantages in energy savings and low carbon emission. The main step in promoting these vehicles is the selection of the appropriate electric vehicle charging station (EVCS) site. EVCS site selection is a laborious task because it involves a series of conflicting quantitative and qualitative criteria from several dimensions. The quantitative criteria are usually expressed by numerical data, while qualitative criteria are commonly represented by linguistic terms. Furthermore, the linguistic terms generated by different decision makers are usually defined on multi-granular linguistic term sets. In this paper, we present a new decision-making framework to select sustainable EVCS sites within the context of heterogeneous information and multi-granular linguistic terms. First, three information transformation mechanisms are defined to unify the heterogeneous information and multi-granular linguistic terms into interval-valued belief structures. Afterwards, shadowed sets theory is utilized to reflect the personalized individual semantics of linguistic terms. Then, with the aid of the evidential reasoning algorithm, a new information fusion method is proposed to generate the interval-valued expected utilities of alternatives. Subsequently, an improved minimax regret approach is developed to compare and rank the interval-valued expected utilities. The proposed decision-making framework is then implemented to solve a case study for EVCS site selection. Further analysis and comparisons with other methods are also conducted to show the applicability and feasibility of the current proposal.
Article
Multisource information fusion (MSIF) technologies play an important role in various fields and practical applications. As a useful methodology to represent and handle uncertain information, the Dempster–Shafer evidence theory has been broadly used in many fields of MSIF. In evidence theory, however, Dempster’s combination rule (DCR) may result in counterintuitive results when fusing highly conflicting evidence. To address this issue, in this article, a novel MSIF method based on a newly defined generalized evidential divergence measure among multiple sources of evidence is proposed for decision making. Specifically, we first design a new generalized evidential Jensen–Shannon (GEJS) divergence to measure the conflict and discrepancy among multiple sources of evidence. The proposed GEJS divergence has three main characteristics, which are beneficial for evidential divergence measurement: 1) it measures the divergence among multiple sources of evidence, not just two, and thus provides a more generalized framework; 2) it considers different weights of multiple sources of evidence to measure divergence among them, which is desirable for application requirements; and 3) it considers the cardinality of propositions of multiple sources of evidence to measure divergence by considering features of evidential subsets. Given these advantages of GEJS, an appropriate weight for each source of evidence can be obtained. Next, according to the corresponding weights, we amend the bodies of evidence to generate a weighted average evidence. DCR is then used to fuse the weighted average evidence, and the final result is used to support decision making. Finally, we present a case study of fault diagnosis and a sensitivity analysis to demonstrate that the proposed MSIF is effective and robust for addressing conflicting situations compared to related works. Additionally, an application of classification validates the practicability of the proposed MSIF with a good decision level.
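The fusion pipeline described above (divergence measurement, weight derivation, weighted average evidence, and fusion with Dempster's combination rule) can be sketched end to end as below; the Jensen–Shannon-style divergence and the reciprocal-of-divergence weighting are simplified stand-ins for the paper's GEJS divergence and weighting scheme, and the example BBAs are made up.

```python
import math
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: conjunctive combination with conflict normalization."""
    out, k = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        if a & b:
            out[a & b] = out.get(a & b, 0.0) + ma * mb
        else:
            k += ma * mb
    return {a: v / (1.0 - k) for a, v in out.items()}

def ejs_divergence(m1, m2):
    """Jensen-Shannon-style divergence on BBA masses (stand-in for GEJS)."""
    focal = set(m1) | set(m2)
    avg = {a: 0.5 * (m1.get(a, 0.0) + m2.get(a, 0.0)) for a in focal}
    kl = lambda p: sum(p.get(a, 0.0) * math.log2(p.get(a, 0.0) / avg[a])
                       for a in focal if p.get(a, 0.0) > 0)
    return 0.5 * kl(m1) + 0.5 * kl(m2)

def weighted_msif(bbas, divergence=ejs_divergence):
    """Divergence -> credibility weights -> weighted average evidence
    -> (n - 1)-fold fusion with Dempster's rule."""
    n = len(bbas)
    avg_div = [sum(divergence(bbas[i], bbas[j]) for j in range(n) if j != i) / (n - 1)
               for i in range(n)]
    cred = [1.0 / (d + 1e-12) for d in avg_div]      # low divergence -> high weight
    weights = [c / sum(cred) for c in cred]
    focal = {a for m in bbas for a in m}
    avg_bba = {a: sum(w * m.get(a, 0.0) for w, m in zip(weights, bbas)) for a in focal}
    fused = avg_bba
    for _ in range(n - 1):
        fused = dempster_combine(fused, avg_bba)
    return fused

m1 = {frozenset("a"): 0.7, frozenset("ab"): 0.3}
m2 = {frozenset("a"): 0.6, frozenset("b"): 0.1, frozenset("ab"): 0.3}
m3 = {frozenset("b"): 0.8, frozenset("ab"): 0.2}     # conflicting source
print(weighted_msif([m1, m2, m3]))
```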
Article
Complex evidence theory (CET) is an effective method for uncertainty reasoning in knowledge-based systems with good interpretability and has recently attracted much attention. However, how to improve the performance of uncertainty reasoning in CET-based expert systems remains an open issue. One key to performance improvement is the adequate management of conflict from multisource information. In this paper, a generalized correlation coefficient, namely, the complex evidential correlation coefficient (CECC), is proposed for complex mass functions or complex basic belief assignments (CBBAs) in CET. On this basis, a complex conflict coefficient is proposed to measure the conflict between CBBAs; when CBBAs turn into classic BBAs, the complex correlation and conflict coefficients degrade into the traditional coefficients. The complex conflict coefficient satisfies the nonnegativity, symmetry, boundedness, extreme consistency, and insensitivity-to-refinement properties, which are desirable for conflict measurement. Several numerical examples validate through comparisons the superiority of the complex conflict coefficient. In this context, a weighted discounting multisource information fusion algorithm, called CECC-WDMSIF, is designed based on the CECC to improve the performance of CET-based expert systems. By applying the CECC-WDMSIF method to the pattern classification of diverse real-world datasets, it is demonstrated that the proposed CECC-WDMSIF outperforms well-known related approaches with higher classification accuracy and robustness.
Article
Understanding the uncertainty involved in a mass function is a central issue in Dempster-Shafer evidence theory for uncertain information fusion. Recent advances suggest interpreting the mass function from the view of quantum theory. However, existing studies do not truly implement the quantization of evidence. To solve this problem, a usable quantization scheme for the mass function is studied in this paper. First, a novel quantum model of the mass function is proposed, which effectively embodies the principle of quantum superposition. Then, a quantum averaging operator is designed to obtain the quantum average of evidence, which not only retains many basic properties required by a rational approach for uncertain information fusion, for example, idempotency, commutativity, and quasi-associativity, but also yields some new characteristics, namely nonlinearity and globality, caused by the quantization of mass functions. Finally, based on the quantum averaging operator, a new rule called the quantum average combination rule is developed for the fusion of multiple pieces of evidence, which is compared with other representative average-based combination methods to show its performance. Numerical examples and applications to classification tasks are provided to demonstrate the effectiveness of the proposed quantum model, averaging operator, and combination rule.
Article
Emergency decision making and disposal are significant challenges faced by the international community. To minimize emergency casualties and reduce probable secondary disasters, it is necessary to immediately dispatch rescuers for emergency rescue in calamity-prone areas. Owing to the abruptness, destructiveness, and uncertainty of emergencies, the rescue team often faces the challenges of pressing time, scattered calamity locations, and diverse tasks. This necessitates the effective organization of rescuers for their swift dispatch to the areas requiring rescue. How to group and dispatch rescuers reasonably and effectively, according to the actual needs of the emergency rescue task and the situation, so as to achieve the best rescue effect is a valuable research problem. This study establishes a dispatch model for rescuers across multiple disaster areas and rescue points. First, this paper combines Dempster-Shafer theory (DST) and linguistic term sets to propose the concept of an evidential linguistic term set (ELTS), which can flexibly and accurately describe the subjective judgment of emergency decision-makers. It not only lays a theoretical foundation for establishing the rescuer dispatch model, but also aids in expressing information in the uncertain linguistic environments of decision making and evaluation. Second, to determine the weights of the ability-based rescuer evaluation criteria, this study adopts the evidential best-worst method, combining it with DST to compensate for the limitations of traditional weight calculation methods in expressing uncertainty. Third, to effectively dispatch rescuers to multiple disaster areas, a model is built based on the above methods to maximize the competence of rescuers and the satisfaction of rescue time, and the best scheme for the allocation of rescuers is determined by solving the model. Finally, the advantages of the constructed model in emergency multitasking group decision making are demonstrated through an empirical analysis.
Article
The extended belief-rule-based (EBRB) system is a representative rule-based system that has attracted much attention due to its capability of solving the combinatorial explosion and time-consuming optimization incurred by belief-rule-based systems. Despite these advantages, EBRB systems suffer from several shortcomings, such as the unreasonable calculation of similarity between input and antecedent belief distributions (BDs), inaccurate calculation of individual matching degrees and rule weights, and inefficient determination of activated rules. To address these shortcomings, this article proposes a new EBRB system in which an existing similarity measure between two BDs is adopted to accurately calculate individual matching degrees and rule weights. Furthermore, the activation weight of a rule is calculated efficiently within an activation group generated by the affinity propagation algorithm, which is used to determine whether the rule is activated. The activated rules are then integrated using their weights and the evidential reasoning algorithm to generate the inference result. To demonstrate its accuracy and efficiency, the proposed EBRB system is employed to help diagnose thyroid nodules based on historical examination reports collected from a tertiary hospital in Hefei, Anhui, China. The findings are highlighted by comparing the proposed EBRB system with existing EBRB systems on the historical reports and on datasets from the University of California at Irvine.
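The abstract does not spell out which similarity measure between belief distributions is adopted, so the sketch below uses one simple stand-in (one minus half of the L1 distance) for the individual matching degree and an EBRB-style product form for activation weights; both choices, and all numbers, are assumptions for illustration rather than the paper's exact calculations.

```python
import numpy as np

def bd_similarity(b1, b2):
    """Similarity between two belief distributions (vectors over the same referential
    values): 1 minus half of the L1 distance, giving a value in [0, 1]."""
    b1, b2 = np.asarray(b1, float), np.asarray(b2, float)
    return 1.0 - 0.5 * np.abs(b1 - b2).sum()

def activation_weights(input_bds, rules, attr_weights):
    """Illustrative EBRB-style activation: each rule's matching degree is the
    attribute-weighted product of per-attribute similarities, scaled by the rule
    weight and normalized over all rules."""
    raw = []
    for rule in rules:
        sims = [bd_similarity(input_bds[i], rule["antecedents"][i]) ** attr_weights[i]
                for i in range(len(input_bds))]
        raw.append(rule["weight"] * np.prod(sims))
    total = sum(raw)
    return [w / total if total > 0 else 0.0 for w in raw]

# Two attributes, two rules (hypothetical numbers)
input_bds = [[0.7, 0.3, 0.0], [0.2, 0.5, 0.3]]
rules = [
    {"weight": 0.9, "antecedents": [[0.8, 0.2, 0.0], [0.1, 0.6, 0.3]]},
    {"weight": 0.6, "antecedents": [[0.1, 0.4, 0.5], [0.3, 0.3, 0.4]]},
]
print(activation_weights(input_bds, rules, attr_weights=[1.0, 0.8]))
```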
Article
In decision-making systems, how to address uncertainty plays an important role in improving system performance in uncertainty reasoning. Dempster-Shafer evidence (DSE) theory is an effective method for addressing uncertainty in decision-making problems by means of basic belief assignments (BBAs) and Dempster's combination rule. In DSE theory, the divergence measure between BBAs, which is beneficial for managing conflicting information in decision making, remains an open issue. In this paper, several generalized evidential divergences (EDs) are proposed and studied to measure the difference and discrepancy between BBAs in DSE theory, with broad applicability in decision theory. On this basis, a uniform BJS-divergence-based decision-making algorithm is devised to improve the decision level. Furthermore, extensions of the weighted BJS divergence to decision-making algorithms are discussed by considering not only subjective weights but also objective weights. Notably, this is the first work to propose the weighted BJS divergence in DSE theory, providing a promising way to analyze decision-making problems from different perspectives. Experiments demonstrate the effectiveness and superiority of the proposed methods. Finally, the proposed BJS-based decision-making algorithm is applied to pattern classification. The results validate that the proposed decision-making algorithm is beneficial for diverse real-world datasets, outperforming several well-known related works with higher classification accuracy and robustness.
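The base quantity behind these generalizations is the belief Jensen-Shannon (BJS) divergence, which replaces the probability distributions in the ordinary Jensen-Shannon divergence with mass functions over focal elements. The sketch below implements this commonly cited form; the generalized and weighted variants proposed in the paper are omitted.

```python
import numpy as np

def bjs_divergence(m1, m2):
    """Belief Jensen-Shannon divergence between two BBAs given as dicts
    {frozenset: mass}: the average of the two KL-style terms against the
    midpoint mass function, with 0*log(0) treated as 0."""
    focal = set(m1) | set(m2)
    def kl(p, q):
        s = 0.0
        for A in focal:
            pa, qa = p.get(A, 0.0), q.get(A, 0.0)
            if pa > 0:
                s += pa * np.log2(pa / qa)
        return s
    mid = {A: 0.5 * (m1.get(A, 0.0) + m2.get(A, 0.0)) for A in focal}
    return 0.5 * kl(m1, mid) + 0.5 * kl(m2, mid)

a, b, ab = frozenset('a'), frozenset('b'), frozenset('ab')
m1 = {a: 0.6, b: 0.1, ab: 0.3}
m2 = {a: 0.2, b: 0.5, ab: 0.3}
print(bjs_divergence(m1, m2))   # symmetric, 0 iff the two BBAs coincide
```

With base-2 logarithms the divergence is symmetric, vanishes only when the two BBAs coincide, and stays within [0, 1], which is what makes it convenient for weighting evidence before combination.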
Article
Fractals play an important role in nonlinear science. The most important parameter when modeling a fractal is the fractal dimension. The existing information dimension can be computed for a probability distribution; however, calculating the fractal dimension of a mass function, which is the generalization of a probability distribution, is still an open problem of immense interest. The main contribution of this work is to propose an information fractal dimension of the mass function. Numerical examples are given to show the effectiveness of the proposed dimension. We discover an important property: the dimension of the mass function with maximum Deng entropy is log 3/log 2, which is the well-known fractal dimension of the Sierpinski triangle. An application to the complexity analysis of time series illustrates the effectiveness of the method.
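The information fractal dimension itself is defined in the paper; what can be reproduced safely here is the quantity it builds on, Deng entropy, together with the mass function that maximizes it, in which every non-empty focal element A receives mass proportional to 2^|A| - 1. The sketch below computes both; relating this maximizing mass function to the log 3/log 2 dimension is the paper's contribution and is not rederived here.

```python
from itertools import combinations
import numpy as np

def deng_entropy(m):
    """Deng entropy of a BBA {frozenset: mass}:
    E_d(m) = -sum_A m(A) * log2( m(A) / (2^|A| - 1) )."""
    return -sum(v * np.log2(v / (2 ** len(A) - 1)) for A, v in m.items() if v > 0)

def max_deng_entropy_bba(frame):
    """Mass function maximizing Deng entropy: m(A) proportional to 2^|A| - 1
    over all non-empty subsets A of the frame of discernment."""
    subsets = [frozenset(c) for r in range(1, len(frame) + 1)
               for c in combinations(frame, r)]
    weights = {A: 2 ** len(A) - 1 for A in subsets}
    total = sum(weights.values())
    return {A: w / total for A, w in weights.items()}

m_star = max_deng_entropy_bba({'a', 'b', 'c'})
print(deng_entropy(m_star))   # maximum Deng entropy for a 3-element frame (= log2(19))
```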
Article
This article proposes a new multiattribute group decision-making (MAGDM) method with probabilistic linguistic information that addresses three aspects: the allocation of ignorance information, the realization of group consensus, and the aggregation of assessments. To allocate ignorance information, an optimization model based on minimizing the distances among experts is developed. To measure the consensus degree, a consensus index that considers the information granules of linguistic terms (LTs) is defined. On this basis, a suitable optimization model is established to reach group consensus adaptively by optimizing the allocation of the information granules of LTs with the particle swarm optimization (PSO) algorithm. To reduce information loss during the aggregation phase, comprehensive assessments of the alternatives are generated with the evidential reasoning (ER) algorithm. A new method is thus developed based on the adaptive consensus reaching (ACR) model and the ER algorithm. Finally, the applicability of the proposed method is demonstrated by solving a selection problem for a financial technology company, and comparative analyses show the advantages of the proposed method.
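The aggregation step relies on the evidential reasoning (ER) algorithm; as a hedged stand-in for that step, the sketch below assumes each expert's assessment is a belief distribution over a shared set of grades, discounts it by the expert's weight (the remaining mass going to global ignorance), and combines the discounted pieces with Dempster's rule. This captures the discount-and-combine core that ER refines, but it is not the full ER analytical formulation used in the paper; the grades, weights, and distributions are hypothetical.

```python
def discount(beliefs, weight, grades):
    """Turn a belief distribution over grades into a BBA discounted by an expert
    weight; unassigned mass goes to the whole grade set (global ignorance)."""
    bba = {frozenset([g]): weight * b for g, b in zip(grades, beliefs)}
    bba[frozenset(grades)] = 1.0 - weight * sum(beliefs)
    return bba

def dempster(m1, m2):
    """Dempster's rule of combination for BBAs given as dicts {frozenset: mass}."""
    combined, conflict = {}, 0.0
    for A, v1 in m1.items():
        for B, v2 in m2.items():
            C = A & B
            if C:
                combined[C] = combined.get(C, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    return {C: v / (1.0 - conflict) for C, v in combined.items()}

grades = ("poor", "average", "good")
experts = [([0.1, 0.3, 0.6], 0.5),   # (belief distribution, expert weight)
           ([0.0, 0.5, 0.5], 0.3),
           ([0.2, 0.2, 0.6], 0.2)]

result = None
for beliefs, w in experts:
    bba = discount(beliefs, w, grades)
    result = bba if result is None else dempster(result, bba)
print(result)
```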
Article
In artificial intelligence systems, how to express uncertainty in knowledge remains an open issue. The negation scheme provides a new perspective on this issue. In this paper, we study quantum decisions from the negation perspective. Specifically, complex evidence theory (CET) is effective for expressing and handling uncertain information in a complex plane, so we first express CET in the quantum framework of Hilbert space. On this basis, a generalized negation method, called QBBA negation, is proposed for quantum basic belief assignments (QBBAs). In addition, a QBBA entropy is revisited to study the QBBA negation process and reveal the variation tendency across negation iterations. The properties of the QBBA negation function are analyzed and discussed along with special cases. Then, several multisource quantum information fusion (MSQIF) algorithms are designed to support decision making. Finally, these MSQIF algorithms are applied to pattern classification to demonstrate their effectiveness. This is the first work to design MSQIF algorithms that support quantum decision making from the negation perspective, providing promising solutions for the knowledge representation, uncertainty measurement, and fusion of quantum information.
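The QBBA negation is defined inside the paper's quantum framework; as background for the negation idea itself, the sketch below shows the classical starting point, Yager's negation of a probability distribution (each probability replaced by the normalized mass of the others), together with a straightforward BBA analogue over focal elements. The BBA variant is an illustrative assumption, not the paper's QBBA construction.

```python
def yager_negation(p):
    """Yager's negation of a probability distribution: p_i -> (1 - p_i) / (n - 1)."""
    n = len(p)
    return [(1.0 - pi) / (n - 1) for pi in p]

def bba_negation(m):
    """An illustrative negation for a BBA {frozenset: mass}, applying the same
    normalized-complement idea over its focal elements (assumed form);
    requires at least two focal elements."""
    k = len(m)
    return {A: (1.0 - v) / (k - 1) for A, v in m.items()}

p = [0.7, 0.2, 0.1]
for _ in range(3):                # iterating the negation drives p toward uniformity
    p = yager_negation(p)
    print(p)
```

Iterating the negation pushes the distribution toward the uniform one, which is the kind of variation tendency the revisited QBBA entropy is used to track across negation iterations.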