Article

A novel quantum Dempster's rule of combination for pattern classification

... The integration of evidence theory into real-time applications requires effective solutions. Recently, [124] presented a novel quantum Dempster's rule of combination, which constructs quantum circuits using quantum logic gates, significantly reducing the computational complexity of Dempster's rule of combination without information loss. It is believed that [124] provides a promising way to handle such complexity and real-time problems, making it worthy of further investigation. ...
Article
Data fusion is a prevalent technique for assembling imperfect raw data from multiple sources to capture reliable and accurate information. Dempster–Shafer evidence theory is a useful methodology for the fusion of uncertain multisource information. The existing literature lacks a thorough and comprehensive review of recent advances in Dempster–Shafer evidence theory for data fusion. Therefore, the state of the art has to be surveyed to gain insight into how Dempster–Shafer evidence theory benefits data fusion and how it has evolved over time. In this paper, we first provide a comprehensive review of data fusion methods based on Dempster–Shafer evidence theory and its extensions, collectively referred to as classical evidence theory, from the three aspects of uncertainty modeling, fusion, and decision making. Next, we study and explore complex evidence theory for data fusion in both closed-world and open-world contexts, which benefits from modeling on the complex plane. We then present multisource data fusion algorithms based on the classical and complex evidence theory frameworks, which are applied to pattern classification to compare and demonstrate their applicability. The research results indicate that the complex evidence theory framework can enhance the capabilities of uncertainty modeling and reasoning by generating constructive interference through the fusion of appropriate complex basic belief assignment functions modeled by complex numbers. Through analysis and comparison, we finally propose several challenges and identify open future research directions in evidence theory-based data fusion.
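As background, a minimal sketch of the classical Dempster's rule of combination that these fusion methods build on is given below; the function name and the small frame {'a', 'b', 'c'} are illustrative, not taken from the paper.

```python
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two mass functions (dicts mapping frozenset focal
    elements to masses) with the classical Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # product mass lost to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two sources over the frame {'a', 'b', 'c'}
m1 = {frozenset('a'): 0.6, frozenset('ab'): 0.4}
m2 = {frozenset('b'): 0.5, frozenset('abc'): 0.5}
print(dempster_combine(m1, m2))
```

The normalization by 1 - K, where K is the conflict mass, is exactly the step that the complex and quantum extensions surveyed here revisit.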
... In the classical literature [30], several techniques for data fusion can be found, such as Bayesian probabilistic methods, filters (e.g., Kalman filter), fuzzy logic and rule-based systems, and machine learning approaches. In recent years, driven by the proliferation of artificial intelligence, new data fusion methods have emerged, including deep learning [31] through deep neural networks and Dempster-Shafer evidence theory [32,33], an extension of traditional probabilistic methods. However, these recent approaches involve higher computational complexity, are more challenging to interpret and explain, and complicate the appropriate parameter tuning process. ...
... In contrast to probability theory, DSET utilizes mass functions to assign beliefs to subsets of a power set, rather than to individual outcomes in the sample space, allowing for a more flexible approach to defining belief assignments. Moreover, DSET has been extended to complex evidence theory and quantum evidence theory (Xiao, 2023a; 2023b), which are applied to various fields, such as time series analysis (Cui et al., 2022; Qiang et al., 2022; Contreras-Reyes and Kharazmi, 2023; Kharazmi and Contreras-Reyes, 2024; Zhang and Xiao, 2024), quantum Dempster's rule of combination (He and Xiao, 2024), software risk assessment, and so on. However, DSET struggles to handle ordered information in certain real-world problems. ...
Article
Full-text available
Random walk is an explainable approach for modeling natural processes at the molecular level. The random permutation set theory (RPST) serves as a framework for uncertainty reasoning, extending the applicability of Dempster–Shafer theory. Recent explorations indicate a promising link between RPST and random walk. In this study, we conduct an analysis and construct a random walk model based on the properties of RPST, with Monte Carlo simulations of such random walk. Our findings reveal that the random walk generated through RPST exhibits characteristics similar to those of a Gaussian random walk and can be transformed into a Wiener process through a specific limiting scaling procedure. This investigation establishes a novel connection between RPST and random walk theory, thereby not only expanding the applicability of RPST but also demonstrating the potential for combining the strengths of both approaches to improve problem-solving abilities.
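The limiting scaling procedure mentioned in the abstract can be illustrated with a generic Monte Carlo sketch; this is a plain ±1 walk under diffusive scaling, not the paper's RPST-based construction, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def scaled_walks(n_steps: int, n_paths: int) -> np.ndarray:
    """Simulate n_paths random walks with +/-1 steps and apply the
    diffusive scaling S_k / sqrt(n), under which the walk converges
    to a Wiener process as n grows."""
    steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))
    return np.cumsum(steps, axis=1) / np.sqrt(n_steps)

paths = scaled_walks(n_steps=10_000, n_paths=5_000)
# The scaled end points should be approximately N(0, 1), matching W(1).
print(paths[:, -1].mean(), paths[:, -1].var())
```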
... In addition to the established MSIF methodologies, recent studies have introduced innovative fusion techniques that significantly enhance the management of uncertainty and conflicting evidence within MSIF frameworks. Notably, the development of a novel Quantum Dempster's Rule of Combination (QDRC) [95] integrates quantum computing principles into Dempster-Shafer evidence theory. This approach addresses the exponential increase in computational complexity typical of traditional Dempster's rule as the number of elements in the identification framework grows. ...
Article
In multisource information fusion (MSIF), Dempster–Shafer evidence (DSE) theory offers a useful framework for reasoning under uncertainty. However, measuring the divergence between belief functions within this theory remains an unresolved challenge, particularly in managing conflicts in MSIF, which is crucial for enhancing the decision-making level. In this paper, several divergence and distance functions are proposed to quantitatively measure discrimination between belief functions in DSE theory, including the reverse evidential Kullback–Leibler (REKL) divergence, evidential Jeffrey's (EJ) divergence, evidential Jensen–Shannon (EJS) divergence, evidential $\chi^2$ (E$\chi^2$) divergence, evidential symmetric $\chi^2$ (ES$\chi^2$) divergence, evidential triangular (ET) discrimination, evidential Hellinger (EH) distance, and evidential total variation (ETV) distance. On this basis, a generalized f-divergence, also called the evidential f-divergence (Ef divergence), is proposed. Depending on different kernel functions, the Ef divergence degrades into several specific classes: EKL, REKL, EJ, EJS, E$\chi^2$, and ES$\chi^2$ divergences, ET discrimination, and EH and ETV distances. Notably, when basic belief assignments (BBAs) are transformed into probability distributions, these classes of Ef divergence revert to their classical counterparts in statistics and information theory. In addition, several Ef-MSIF algorithms are proposed for pattern classification based on the classes of Ef divergence. These Ef-MSIF algorithms are evaluated on real-world datasets to demonstrate their practical effectiveness in solving classification problems. In summary, this work represents the first attempt to extend the classical f-divergence within the DSE framework, capitalizing on the distinct properties of BBA functions. Experimental results show that the proposed Ef-MSIF algorithms improve classification accuracy, with the best-performing Ef-MSIF algorithm achieving an overall performance difference approximately 1.22 times smaller than the suboptimal method and 14.12 times smaller than the worst-performing method.
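For reference, the classical f-divergence to which these evidential measures revert takes the standard form

```latex
D_f(P \,\|\, Q) = \sum_{x} Q(x)\, f\!\left(\frac{P(x)}{Q(x)}\right),
\qquad f \text{ convex},\ f(1) = 0 .
```

The kernel choices below are the textbook ones, not quoted from the paper: $f(t) = t\log t$ gives KL, $f(t) = -\log t$ gives reverse KL, $f(t) = (t-1)^2$ gives $\chi^2$, $f(t) = (\sqrt{t}-1)^2$ gives a squared-Hellinger-type distance, and $f(t) = \tfrac{1}{2}|t-1|$ gives total variation.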
Article
In the era of complex data environments, accurately measuring uncertainty is crucial for effective decision making. Complex evidence theory (CET) provides a framework for handling uncertainty reasoning in the complex plane. Within CET, complex basic belief assignment (CBBA) aims to tackle the uncertainty and imprecision inherent in data coinciding with phase or periodic changes. However, measuring the uncertainty of CBBA over time remains an open issue. This study introduces a novel entropy model, the complex belief (CB) entropy, within the framework of CET, designed to tackle the inherent uncertainty and imprecision in data with phase or periodic changes. The model is developed by integrating concepts of interference and fractal theory to extend the understanding of uncertainty over time. Methodologically, the CB entropy is constructed to include discord, nonspecificity, and an interaction term for focal elements, defined as interference. In addition, thanks to the concept of the fractal, the model is further generalized to time fractal-based CB (TFCB) entropy for forecasting future uncertainties. We furthermore analyze the properties of the entropy models. Findings demonstrate that the proposed entropy models provide a more comprehensive measure of uncertainty in complex scenarios. Finally, a decision-making method based on the proposed entropy is proposed.
Article
Complex evidence theory (CET), an extension of the traditional D-S evidence theory, has garnered academic interest for its capacity to articulate uncertainty through complex basic belief assignment (CBBA) and to perform uncertainty reasoning using complex combination rules. Nonetheless, quantifying uncertainty within CET remains a subject of ongoing research. To enhance decision making, a method for complex pignistic belief transformation (CPBT) has been introduced, which allocates CBBAs of multielement focal elements to subsets. CPBT’s core lies in the fractal-inspired redistribution of the complex mass function. This article presents an experimental simulation and analysis of CPBT’s generation process along the temporal dimension, rooted in fractal theory. Subsequently, a novel fractal-based complex belief (FCB) entropy is proposed to gauge the uncertainty of CBBA. The properties of FCB entropy are examined, and its efficacy is demonstrated through various numerical examples and practical application.
Article
Uncertainty is a crucial aspect in real-life scenarios, especially when dealing with ambiguous information from multimodal data sources. Complex evidence theory (CET), a generalized form of Dempster–Shafer evidence theory that uses complex numbers to describe uncertainty, offers a more promising framework for dealing with uncertainty in many fields. Focusing on conflict management in the CET framework, new divergence measures, namely, complex belief Kullback–Leibler (CBKL) divergence and complex belief Jensen–Shannon (CBJS) divergence, are proposed in this article to quantify the discrepancy between complex evidence bodies. Additionally, novel multisource information fusion methods based on CBKL divergence and CBJS divergence for decision making are proposed. To validate the effectiveness of the proposed methods, classification tasks are performed. The results demonstrate the feasibility and accuracy of the proposed methods in handling complex evidence fusion and decision-making tasks.
Article
Full-text available
With the development of quantum decision making, how to bridge classical theory with the quantum framework has gained much attention in the past few years. Recently, a complex evidence theory (CET) was presented as a generalized Dempster–Shafer evidence theory to handle uncertainty on the complex plane. However, CET focuses on a closed world, where the frame of discernment is complete with exhaustive elements. To address this limitation, in this paper, we generalize CET to the quantum framework of Hilbert space in an open world and propose a generalized quantum evidence theory (GQET). On the basis of GQET, a quantum multisource information fusion algorithm is proposed to handle uncertainty in an open world. To verify its effectiveness, we apply the proposed quantum multisource information fusion algorithm to a practical classification fusion.
Article
Full-text available
It is still a challenging problem to characterize uncertainty and imprecision between specific (singleton) clusters with arbitrary shapes and sizes. In order to solve such a problem, we propose a belief shift clustering (BSC) method for dealing with object data. The BSC method is considered the evidential version of mean shift or mode seeking under the theory of belief functions. First, a new notion, called belief shift, is provided to preliminarily assign each query object as a noisy, precise, or imprecise one. Second, a new evidential clustering rule is designed for partial credal redistribution of each imprecise object. To avoid the "uniform effect" and useless calculations, a specific dynamic framework with simulated cluster centers is established to reassign each imprecise object to a singleton cluster or a related meta-cluster. Once an object is assigned to a meta-cluster, this object may be in the overlapping or intermediate areas of different singleton clusters. Consequently, the BSC can reasonably characterize the uncertainty and imprecision between singleton clusters. The effectiveness has been verified on several artificial, natural, and image segmentation/classification datasets by comparison with other related methods.
Article
Full-text available
Dempster's rule of combination is a powerful combination tool. It has been widely used in many fields, such as information fusion and decision making. However, the computational complexity of Dempster's rule of combination increases exponentially with the size of the frame of discernment. To address this issue, leveraging the parallelism of quantum computing, we present a quantum algorithm for Dempster's rule of combination. The new method includes four steps. First, the quantum superposition states corresponding to arbitrary mass functions are prepared. Next, the superposition states corresponding to the two mass functions are combined by the tensor product. Effective qubits are then measured. Finally, the measurement results are normalized to obtain the combined results. The new method not only realizes most of the functions of Dempster's rule of combination, but also effectively reduces its computational complexity on a quantum computer. Finally, we carry out simulation experiments on the IBM quantum cloud platform, and the experimental results show that the new method is reasonable. Compared with the traditional combination rule, this method effectively reduces the computational complexity, and its running-time advantage grows as the frame of discernment becomes larger.
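A classical NumPy re-enactment of the four steps makes the pipeline concrete. This is a sketch under simplifying assumptions, not the authors' circuit: the frame is restricted to singleton hypotheses, so that keeping only the outcomes where both registers agree reduces Dempster's rule to a normalized elementwise product.

```python
import numpy as np

# Frame of discernment restricted to singleton hypotheses.
frame = ['a', 'b', 'c']
m1 = np.array([0.5, 0.3, 0.2])
m2 = np.array([0.2, 0.6, 0.2])

# Step 1: encode each mass function as a quantum-style state whose
# amplitudes are the square roots of the masses.
psi1, psi2 = np.sqrt(m1), np.sqrt(m2)

# Step 2: combine the two registers with a tensor product.
joint = np.outer(psi1, psi2)

# Step 3: "measure" agreement by keeping only the diagonal outcomes,
# where both registers report the same hypothesis; squaring the
# amplitudes (Born rule) recovers the products of masses.
agree = np.diag(joint) ** 2

# Step 4: renormalize to obtain the combined mass function.
combined = agree / agree.sum()
print(dict(zip(frame, combined)))
```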
Article
Full-text available
Recently, a new type of set, called random permutation set (RPS), was proposed by considering all the permutations of elements in a certain set. To measure the uncertainty of RPS, the entropy of RPS was presented. However, the maximum entropy principle of RPS entropy has not been discussed. To address this issue, this paper presents the maximum entropy of RPS. The analytical solution of maximum RPS entropy and its PMF condition are proven and discussed. In addition, numerical examples are used to illustrate the maximum RPS entropy. The results show that the maximum RPS entropy is compatible with the maximum Deng entropy and the maximum Shannon entropy. Moreover, in order to further apply RPS entropy and maximum RPS entropy in practical fields, a comparative analysis of the choice among Shannon entropy, Deng entropy, and RPS entropy is also carried out.
Article
Full-text available
To explore the meaning of the power set in evidence theory, a possible explanation of the power set is proposed from the view of Pascal's triangle and combinatorial numbers. Here comes the question: what would happen if the combinatorial number were replaced by the permutation number? To address this issue, a new kind of set, named random permutation set (RPS), is proposed in this paper, which consists of a permutation event space (PES) and a permutation mass function (PMF). The PES of a certain set considers all the permutations of that set. The elements of PES are called permutation events. The PMF describes the chance that a certain permutation event will happen. Based on PES and PMF, RPS can be viewed as a permutation-based generalization of the random finite set. Besides, the right intersection (RI) and left intersection (LI) of permutation events are presented. Based on RI and LI, the right orthogonal sum (ROS) and left orthogonal sum (LOS) of PMFs are proposed. In addition, numerical examples are shown to illustrate the proposed concepts. The comparisons of probability theory, evidence theory, and RPS are discussed and summarized. Moreover, an RPS-based data fusion algorithm is proposed and applied in threat assessment. The experimental results show that the proposed RPS-based algorithm can reasonably and efficiently deal with uncertainty in threat assessment with respect to threat ranking and reliability ranking.
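The permutation event space is easy to enumerate for a small frame, as in the following sketch (function name and frame are illustrative); the count confirms that |PES| equals the sum over i of n!/(n-i)!.

```python
from itertools import combinations, permutations

def permutation_event_space(theta):
    """Enumerate the PES: every ordered arrangement of every subset
    of theta, including the empty event."""
    pes = [()]
    for size in range(1, len(theta) + 1):
        for subset in combinations(theta, size):
            pes.extend(permutations(subset))
    return pes

pes = permutation_event_space(('a', 'b', 'c'))
print(len(pes))  # 1 + 3 + 6 + 6 = 16 = sum of 3!/(3-i)! for i = 0..3
```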
Article
Full-text available
A transparent digital twin (DT) is designed for output control using the belief rule base (BRB), namely, DT-BRB. The goal of the transparent DT-BRB is not only to model the complex relationships between the system inputs and output but also to conduct output control by identifying and optimizing the key parameters in the model inputs. The proposed DT-BRB approach is composed of three major steps. First, BRB is adopted to model the relationships between the inputs and output of the physical system. Second, an analytical procedure is proposed to identify only the key parameters in the system inputs with the highest contribution to the output. Consistent with the inferencing, integration, and unification procedures of BRB, there are also three parts in the contribution calculation in this step. Finally, data-driven optimization is performed to control the system output. A practical case study on the Wuhan Metro System is conducted for reducing the building tilt rate (BTR) in tunnel construction. By comparing the results following different standards, the 80% contribution standard is proved to have the highest marginal contribution: it identifies only 43.5% of parameters as the key parameters but can reduce the BTR by 73.73%. Moreover, it is also observed that the proposed DT-BRB approach is so effective that iterative optimizations are not necessary.
Article
Full-text available
Given a probability distribution, its corresponding information volume is the Shannon entropy. However, how to determine the information volume of a given mass function is still an open issue. Based on Deng entropy, the information volume of a mass function is presented in this paper. Given a mass function, the corresponding information volume is larger than its uncertainty measured by Deng entropy. In addition, when the cardinality of the frame of discernment is identical, both the total-uncertainty case and the BPA distribution of the maximum Deng entropy have the same information volume. Some numerical examples are illustrated to show the efficiency of the proposed information volume of a mass function.
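For context, Deng entropy, on which this information volume builds, is usually written as

```latex
E_d(m) = -\sum_{A \subseteq X,\; m(A) > 0} m(A) \log_2 \frac{m(A)}{2^{|A|} - 1},
```

where $2^{|A|} - 1$ counts the nonempty subsets of the focal element $A$; when all focal elements are singletons, $E_d$ reduces to Shannon entropy.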
Article
Full-text available
Dempster–Shafer evidence theory has been widely used in various fields of application because of its flexibility and effectiveness in modeling uncertainties without prior information. However, the existing evidence theory is insufficient for situations where it has no capability to express the fluctuations of data at a given phase of time during their execution, and where the uncertainty and imprecision that are inevitably involved in the data occur concurrently with changes to the phase or periodicity of the data. In this paper, therefore, a generalized Dempster–Shafer evidence theory is proposed. To be specific, a mass function in the generalized Dempster–Shafer evidence theory is modeled by a complex number, called a complex basic belief assignment, which has a more powerful ability to express uncertain information. Based on that, a generalized Dempster's combination rule is exploited. In contrast to the classical Dempster's combination rule, the condition on the conflict coefficient between the evidences is relaxed in the generalized Dempster's combination rule. Hence, it is more general and applicable than the classical Dempster's combination rule. When the complex mass function is degenerated from complex numbers to real numbers, the generalized Dempster's combination rule degenerates to the classical evidence theory under the condition that the conflict coefficient between the evidences is less than 1. In short, this generalized Dempster–Shafer evidence theory provides a promising way to model and handle more uncertain information. Thanks to this advantage, an algorithm for decision making is devised based on the generalized Dempster–Shafer evidence theory. Finally, an application in medical diagnosis illustrates the efficiency and practicability of the proposed algorithm.
Article
Full-text available
In belief function-related fields, the distance measure is an important concept, which represents the degree of dissimilarity between bodies of evidence. Various distance measures of evidence have been proposed and widely used in diverse belief function-related applications, especially in performance evaluation. Existing definitions of strict and nonstrict distance measures of evidence have their own pros and cons. In this paper, we propose two new strict distance measures of evidence (Euclidean and Chebyshev forms) between two basic belief assignments, based on the Wasserstein distance between the belief intervals of focal elements. Illustrative examples, simulations, applications, and related analyses are provided to show the rationality and efficiency of our proposed distance measures of evidence.
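The building block here is the Wasserstein distance between two belief intervals $[Bel_1(A), Pl_1(A)]$ and $[Bel_2(A), Pl_2(A)]$. Treating an interval as a uniform distribution, one standard closed form (stated as background, not quoted from the paper) is

```latex
d_W\big([a_1,b_1],[a_2,b_2]\big)
= \sqrt{\left(\frac{a_1+b_1}{2}-\frac{a_2+b_2}{2}\right)^{2}
+ \frac{1}{3}\left(\frac{b_1-a_1}{2}-\frac{b_2-a_2}{2}\right)^{2}},
```

a midpoint term plus a weighted half-width term; the per-focal-element distances are then aggregated in Euclidean or Chebyshev form.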
Article
Multisource information fusion is a comprehensive and interdisciplinary subject. Dempster-Shafer (D-S) evidence theory copes with uncertain information effectively. Pattern classification is the core research content of pattern recognition, and multisource information fusion based on D-S evidence theory can be effectively applied to pattern classification problems. However, in D-S evidence theory, highly conflicting evidence may cause counterintuitive fusion results. Belief divergence theory is one of the theories proposed to address problems of highly conflicting evidence. Although belief divergence can deal with conflict between evidence, none of the existing belief divergence methods has considered how to effectively measure the discrepancy between two pieces of evidence with time evolution. In this study, a novel fractal belief Rényi (FBR) divergence is proposed to handle this problem. To our knowledge, it is the first divergence that extends the concept of the fractal to the Rényi divergence. The advantage is measuring the discrepancy between two pieces of evidence with time evolution, which satisfies several properties and is flexible and practical in various circumstances. Furthermore, a novel algorithm for multisource information fusion based on FBR divergence, namely FBReD-based weighted multisource information fusion, is developed. Ultimately, the proposed multisource information fusion algorithm is applied to a series of experiments for pattern classification based on real datasets, where our proposed algorithm achieved superior performance.
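As background, the classical Rényi divergence that the FBR divergence extends is

```latex
D_{\alpha}(P \,\|\, Q) = \frac{1}{\alpha-1}
\log \sum_{x} P(x)^{\alpha}\, Q(x)^{1-\alpha},
\qquad \alpha > 0,\ \alpha \neq 1,
```

which recovers the Kullback–Leibler divergence in the limit $\alpha \to 1$; the fractal component refers to the time-evolving redistribution of belief masses described in the abstract.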
Article
Remote sensing image semantic segmentation (RSISS) remains challenging due to the scarcity of labeled data. Semi-supervised learning can leverage pseudo-labels to enhance the model's ability to learn from unlabeled data. However, accurately generating pseudo-labels for RSISS remains a significant challenge that severely affects model performance, especially at the edges between different classes. To overcome these issues, we propose a semi-supervised semantic segmentation framework for remote sensing images based on edge-aware class activation enhancement (ECAE). First, the baseline network is constructed on the mean teacher model, which separates the training of labeled and unlabeled data using student and teacher networks. Second, considering the local continuity and global discreteness of object distribution in remote sensing images, the class activation mapping enhancement (CAME) network is designed to predict local areas more distinctly. Finally, the edge-aware network (EAN) is proposed to improve the performance of edge segmentation in remote sensing images. The combination of the CAME with the EAN further enhances the generation of high-confidence pseudo-labels. Experiments were performed on two publicly available remote sensing semantic segmentation datasets, Potsdam and ISPRS Vaihingen, which verify the superiority of the proposed ECAE model.
Article
Information can be quantified and expressed by uncertainty, and improving the decision level of uncertain information is vital in modeling and processing uncertain information. Dempster-Shafer evidence theory can model and process uncertain information effectively. However, the Dempster combination rule may provide counterintuitive results when dealing with highly conflicting information, leading to a decline in decision level. Thus, measuring conflict is significant for the improvement of decision level. Motivated by this issue, this paper proposes a novel method to measure the discrepancy between bodies of evidence. First, the model of dynamic fractal probability transformation is proposed to effectively obtain more information about the non-specificity of basic belief assignments (BBAs). Then, we propose the higher-order fractal belief Rényi divergence (HOFBReD). HOFBReD can effectively measure the discrepancy between BBAs. Moreover, it is the first belief Rényi divergence that can measure the discrepancy between BBAs with dynamic fractal probability transformation. HOFBReD has several properties in terms of probability transformation as well as measurement. When the dynamic fractal probability transformation ends, HOFBReD is equivalent to measuring the Rényi divergence between the pignistic probability transformations of the BBAs. When the BBAs degenerate to probability distributions, HOFBReD also degenerates to or is related to several well-known divergences. In addition, based on HOFBReD, a novel multisource information fusion algorithm is proposed. A pattern classification experiment with real-world datasets is presented to compare the proposed algorithm with other methods. The experimental results indicate that the proposed algorithm has a higher average pattern recognition accuracy on all datasets than the other methods. The proposed discrepancy measurement method and multisource information fusion algorithm contribute to the improvement of decision level.
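The pignistic probability transformation referenced above has the compact form $BetP(x) = \sum_{A \ni x} m(A)/|A|$; a minimal sketch follows (names illustrative).

```python
def pignistic(m: dict) -> dict:
    """Pignistic probability transformation: spread the mass of each
    focal element uniformly over its singleton members."""
    betp = {}
    for focal, mass in m.items():
        share = mass / len(focal)
        for x in focal:
            betp[x] = betp.get(x, 0.0) + share
    return betp

m = {frozenset('a'): 0.5, frozenset('ab'): 0.3, frozenset('abc'): 0.2}
print(pignistic(m))  # a: ~0.717, b: ~0.217, c: ~0.067
```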
Article
In fuzzy large group decision making methods, an effective clustering method can greatly reduce the complexity of decision making, and it is an important ingredient for reaching a group consensus. In this paper, a novel fuzzy large group decision making method is established using three-way clustering and an adaptive exit-delegation mechanism. Traditional clustering approaches group together individuals (isolated points) that deviate from the whole. The individuals (edge points) may exist and wander in between two or more classes. Both circumstances can lead to unstable and unreasonable clustering results. To overcome both setbacks, we propose a three-way clustering method based on the k-means clustering algorithm. The method first applies k-means clustering to perform an initial division of the universe of decision-makers. Then, in the spirit of three-way clustering, the edge points and outliers are separated from the clustering results by resorting to the three-way relationships between individuals and classes. The final clustering stems from an adaptive exit-delegation mechanism, and a consensus measure-based model determines the intra-group individual weight and inter-individual trust weight. Finally, the feasibility and effectiveness of the methodology that arises from the model designed in this paper are verified by comparative analyses.
Article
Recently, a new kind of set, named Random Permutation Set (RPS), has been presented. RPS takes the permutation of a certain set into consideration, which can be regarded as an ordered extension of evidence theory. Uncertainty is an important feature of RPS. A straightforward question is how to measure the uncertainty of RPS. To address this issue, the entropy of RPS (RPS entropy) is presented in this article. The proposed RPS entropy is compatible with Deng entropy and Shannon entropy. In addition, RPS entropy meets probability consistency, additivity, and subadditivity. Numerical examples are designed to illustrate the efficiency of the proposed RPS entropy. Besides, a comparative analysis of the choice of applying RPS entropy, Deng entropy, and Shannon entropy is also carried out.
Article
Negation is an important operation in evidence theory; its idea is to consider the opposite of events, which makes it possible to deal with some problems with uncertainties from the opposite side and to obtain the information behind a probability distribution. In classical D-S theory (Dempster-Shafer theory), many negation methods already exist on the real number field, and many of their properties have been discovered. However, in complex evidence theory, which is based on the complex number field, negation is still an open problem. In order to deal with problems like those in D-S theory, a new negation method for CBBA (Complex Basic Belief Assignment) should be proposed. In this paper, a new negation method called CBBA exponential negation is presented, which can be seen as a generalization from BBA (Basic Belief Assignment) to CBBA. The proposed negation transforms a CBBA into another one while simultaneously increasing the entropy. Some properties of this negation are discussed, such as invariance, convergence, fixed point, distribution of the Pascal triangle, convergence speed, and impact on negation convergence, and most of them are strictly proved in this paper. Furthermore, a new entropy for CBBA and some numerical examples are presented, and we study the proposed negation from the view of entropy. Finally, an application of CBBA exponential negation is shown at the end.
Article
As a general form of intuitionistic fuzzy preference relations (IFPRs) and Pythagorean fuzzy preference relations (PFPRs), q-rung orthopair fuzzy preference relations (q-ROFPRs) provide a more flexible information representation for decision makers (DMs) to express their vagueness and uncertainty. However, there have been only a few studies conducted on q-ROFPRs. Therefore, in the context of multi-attribute decision-making (MADM), a decision framework for MADM with q-ROFPRs is proposed. First, a novel score function is proposed to compare two different q-rung orthopair fuzzy numbers (q-ROFNs). Subsequently, an algorithm is developed to check and improve the multiplicative consistency of q-ROFPRs. Moreover, to consider the rationality of the threshold determination, an objective method for determining the threshold of q-ROFPRs is developed considering the number of alternatives and rung q. Finally, a new method for determining the weights of attributes is discussed. In addition, an illustrative example involving the brand evaluation of new energy vehicles is used to verify the applicability of the above methods. The rationality and superiority of the proposed methods are highlighted by a comparative analysis with existing studies.
Article
Multi-source information fusion has attracted considerable attention in the past few years and plays a great role in real applications. However, uncertainty or conflict can make fusion results unreasonable. Furthermore, information may be collected in the form of complex numbers, which cannot be processed by existing methods. In this article, to handle the above issues, the complex evidence theory (CET) is exploited. CET is the generalization of Dempster–Shafer evidence theory, where the mass function is modeled by complex numbers and called a complex mass function (CMF). In order to deal with multi-source information fusion from the perspective of the complex plane, a new complex Jensen–Shannon divergence (CJS divergence) is presented in this article. The proposed CJS divergence can effectively measure the conflict between two CMFs, and it satisfies the properties of boundedness, symmetry, and nondegeneracy. In addition, for a better combination result, we adjust the complex Dempster's rule of combination, yielding the reinforced complex evidence combination rule (RCECR). Then an algorithm for multi-source information fusion is proposed based on the CJS divergence and the RCECR. Some numerical examples and two applications in target identification and medical diagnosis illustrate the effectiveness of the new approach.
Article
There is a rising need for healthcare as a result of growing public awareness of health issues. Electronic medical records contain extremely confidential and sensitive information, and blockchain technology can enable the safe exchange of these records across various medical organizations. With the advent of quantum computers, however, the current blockchain system is susceptible to quantum computer attacks. This research designs a novel distributed quantum electronic medical record system and proposes a new private quantum blockchain network based on security considerations. The blocks in this quantum blockchain's data structure are linked via entangled states. The time stamp is automatically formed by connecting quantum blocks with controlled operations, which lowers the amount of storage space needed. Each block's hash value is recorded using just one qubit. The quantum information processing is detailed in depth in the quantum electronic medical record protocol. Every medical record can be traced, and the security and privacy of electronic medical records in Internet of Medical Things systems can be guaranteed. The protocol also replaces the traditional encryption and digital signature algorithms with a quantum authentication scheme. According to the mathematical analysis, the quantum blockchain network has strong security against attacks from quantum computers, since it can withstand external attacks, intercept-measure-repeat attacks, and entanglement-measure attacks. The quantum circuit diagram for deriving the hash value is provided, along with the correctness and traceability analysis of the quantum block. A comparison between the proposed quantum blockchain model and several other quantum blockchain models is also included.
Article
As a typical multi-sensor fusion technology, the evidential reasoning (ER) rule has been widely used in evaluation, decision, and classification tasks. Current research on the ER rule tends to fuse objects of the same type, such as unquantized analog quantities. However, the fusion of unquantized analog quantities and quantized digital quantities is more common in engineering, yet has received minimal attention. Given the characteristics of the ER rule, the biggest challenge imposed by this fusion is to reasonably consider the reliability of the digital quantity. In this paper, a new fusion approach for digital and analog quantities based on the ER rule is proposed. In order to improve the fusion accuracy, the combination of quantization error and external noise is adopted to measure the reliability of the digital quantity. On this basis, the digital and analog quantities are fused together according to the ER rule. To further explore the intrinsic mechanism of the proposed approach, a detailed performance analysis is conducted to study the variation law of evidence reliability and fusion results. Finally, a numerical example and a case study demonstrate the effectiveness of the proposed approach.
Article
Emergency decision making and disposal are significant challenges faced by the international community. To minimize emergency casualties and reduce probable secondary disasters, it is necessary to immediately dispatch rescuers for emergency rescue in calamity-prone areas. Owing to the abruptness, destructiveness, and uncertainty of emergencies, the rescue team often faces the challenges of pressing time, scattered calamity locations, and diverse tasks. This necessitates the effective organization of rescuers for their swift dispatch to the areas requiring rescue. A valuable research problem is how to group and dispatch rescuers reasonably and effectively, according to the actual needs of the emergency rescue task and the situation, to achieve the best rescue effect. This study establishes a dispatch model for rescuers across multiple disaster areas and rescue points. First, this paper combines the Dempster-Shafer theory (DST) and the linguistic term set to propose the concept of an evidential linguistic term set (ELTS), which can flexibly and accurately describe the subjective judgments of emergency decision-makers. It not only lays a theoretical foundation for establishing the rescuer dispatch model, but also aids in expressing information in the uncertain linguistic environments of decision making and evaluation. Second, to determine the weights of ability-based rescuer evaluation criteria, this study adopts the evidential best-worst method, combining it with DST to compensate for the limitations of the traditional weight calculation method in expressing uncertainty. Third, to effectively dispatch rescuers to multiple disaster areas, a model is built on the above methods to maximize the competence of rescuers and the satisfaction of rescue time, and the best rescuer allocation scheme is determined by solving the model. Finally, the advantages of the constructed model in emergency multitasking group decision making are demonstrated through an empirical analysis.
Article
The concept of a Z-number has attracted plenty of interest for its ability to represent uncertain and partially reliable information. Z-numbers are also widely used in decision making because they can describe real-world information and human cognition more flexibly. However, the classical arithmetic complexity of Z-numbers is a burden in real applications, especially on large data sets. How to both retain the inherent meaning of Z-numbers and reduce the calculation complexity is a critical issue in real Z-number-based applications. Limited theoretical progress has been made so far. To balance the gap between the arithmetic complexity and the inherent meaning of Z-numbers, we propose an approximate calculation method for Z-numbers (Z-ACM) based on kernel density estimation. The main ideas are as follows: first, kernel density estimation is used to partition/group Z-numbers according to their total utility; second, the representative Z-number in each partitioned interval is aggregated using the classical arithmetic framework of Z-numbers. Based on the proposed Z-ACM, a fast decision model (FDM) is designed to deal with the issue of multi-criteria decision making. Some examples with comparative analysis and rationality analysis illustrate the effectiveness of the proposed methodology.
Article
In recent years, the frequent occurrence of natural hazards has caused huge economic and human losses, as well as seriously impacting the sustainable development of society. The effective management of emergency responses to natural hazards has become an important research topic worldwide. The demand prediction of emergency materials is the premise and basis for the optimal allocation of emergency resources, which is of great significance in improving the efficiency of disaster-related emergency responses. Using case-based reasoning (CBR) and the Dempster-Shafer theory, we investigated methods of predicting emergency materials demand. First, to address the problems of missing feature values, feature heterogeneity, and inter-correlations among features in CBR, we proposed a case retrieval strategy based on Dempster-Shafer theory that not only lays a theoretical foundation for subsequent research, but also improves the case retrieval strategy used in CBR. Second, inspired by the 4R principle in CBR, we proposed a scenario-matching method for natural hazards, which uses historical cases in the absence of effective decision data for natural hazard-related loss predictions. Third, assuming that the impact of natural hazards changes with time, we further constructed a dynamic prediction model of emergency material demand based on the prediction results of natural hazard losses. In this paper, typhoon and earthquake disasters are used as case studies to demonstrate the application of the proposed materials demand prediction model, and the effectiveness of the method is demonstrated through empirical analysis.
Article
The mining of important nodes in complex networks is a topic of immense interest due to its wide applications across many disciplines. In this paper, a Local Structure Entropy (LSE) approach is proposed based on the Tsallis entropy, by removing nodes and considering the information of the first-order and second-order neighboring nodes, in order to explore the impact of node removal on the network structure. With this method, the degree and betweenness of the first-order and second-order adjacent nodes are combined via Tsallis entropy, and influential nodes are measured by the structural characteristics of the network after node removal. To verify the effectiveness of LSE, we compare our method with five existing methods and perform experiments on seven real-world networks. The experimental results indicate that the influential nodes identified by LSE are better than those of the existing methods in terms of the range of information dissemination and robustness. Moreover, LSE is negatively correlated with closeness centrality and the PageRank algorithm.
Article
Fractals play an important role in nonlinear science. The most important parameter when modeling a fractal is the fractal dimension. The existing information dimension can calculate the dimension of a probability distribution. However, calculating the fractal dimension of a given mass function, which is the generalization of probability, is still an open problem of immense interest. The main contribution of this work is to propose an information fractal dimension of mass function. Numerical examples are given to show the effectiveness of our proposed dimension. We discover an important property in that the dimension of the mass function with the maximum Deng entropy is $\log_2 3 \approx 1.585$, which is the well-known fractal dimension of the Sierpinski triangle. An application to the complexity analysis of time series illustrates the effectiveness of our method.
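The cited value follows the standard similarity-dimension calculation (given here as background, not as the paper's derivation): the Sierpinski triangle consists of N = 3 self-similar copies, each scaled down by a factor of s = 2, so

```latex
D = \frac{\log N}{\log s} = \frac{\log 3}{\log 2} = \log_2 3 \approx 1.585 .
```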
Article
A multidisciplinary team is beneficial for selecting an appropriate treatment plan for a patient with lung cancer, where the collected cognitive evaluation information may be uncertain and incomplete. This study is dedicated to dealing with a treatment plan selection problem of lung cancer through multi-criteria analysis with generalised probabilistic linguistic term sets (GPLTSs), which are powerful in describing the uncertainty and incompleteness of subjective evaluations. The existing generalised probabilistic linguistic information aggregation method is based on the Dempster-Shafer combination rule, but the combined results may be counterintuitive. In addition, the GPLTS may not meet the conditions for applying the Dempster-Shafer combination rule. To make up for these gaps, a new combination rule based on Dempster-Shafer theory is introduced. Then, a multi-criteria decision-making (MCDM) process with generalised probabilistic linguistic information based on the proposed combination rule is formed and applied to select treatment plans for lung cancer in association with a multidisciplinary team. Through sensitivity analysis and comparative analysis, the advantages of the proposed method are highlighted.
Article
Human-centered systems of systems, such as social networks, the Internet of Things, or healthcare systems, are increasingly becoming significant facets of modern life. Realistic models of human behavior in such systems play an essential role in their accurate modeling and prediction. Nevertheless, human behavior under uncertainty often violates the predictions of conventional probabilistic models. Recently, quantum-like decision theories have shown considerable potential to explain the contradictions in human behavior by applying quantum probabilities. But providing a quantum-like decision theory that could predict, rather than describe, the current state of human behavior is still one of the unsolved challenges. The fundamental contribution of this work is introducing the concept of entanglement from quantum information theory into Bayesian networks (BNs). This concept leads to an entangled quantum-like BN (QBN), in which each human is a part of the entire society. Accordingly, society's effect on the dynamic evolution of the decision-making process, which is less often considered in decision theories, is modeled by entanglement measures. To reach this aim, we introduce a quantum-like witness and find the relationship between this witness and the well-known concurrence entanglement measure. The proposed predictive entangled QBN (PEQBN) is evaluated on 22 experimental tasks. Results confirm that PEQBN provides more realistic predictions of human decisions under uncertainty when compared with classical BNs and three recent quantum-like approaches.
Article
Dempster-Shafer evidence theory is widely used in the field of information fusion, especially when confronting uncertainties. However, Dempster's rule of combination may lead to counterintuitive results when dealing with highly conflicting bodies of evidence (BOEs). Numerous methods have been proposed to address this problem. Enlightened by research on the interaction among nodes in complex networks, this paper studies the combination of evidence from the perspective of networks: BOEs are regarded as nodes, and the degree of conflict between BOEs is considered one possible interaction between nodes. The direct and indirect interactions among nodes in networks are considered together to determine the weights of the BOEs. After weighted averaging, the modified BOEs can be efficiently combined by Dempster's rule of combination. A numerical example illustrates the use and better performance of the proposed method.
Article
Various studies have focused on the classification of uncertain or imbalanced data. However, previous studies rarely consider the classification for uncertain imbalanced data. To address this research gap, this study proposes an evidential reasoning (ER) based ensemble classifier (EREC). In the proposed method, an affinity propagation based oversampling method is developed to obtain the balanced class distributions of the training datasets for individual classifiers. Using the balanced training datasets, ER-based classifiers are constructed as individual classifiers to handle data uncertainty, in which attribute weights are learned from the similarity between the values of attributes and labels. With trained individual classifiers, final results are generated by combining the results of individual classifiers using the ER algorithm, in which the weights of individual classifiers are determined according to the classification performance on out-of-bag data. The proposed EREC is applied to the diagnosis of thyroid nodules using the datasets of five radiologists, obtained from a tertiary hospital located in Hefei, Anhui, China. Using real datasets and UCI datasets, the EREC is compared with 12 representative ensemble classifiers and other oversampling methods based ensemble classifiers to highlight its high performance.
Article
In pattern classification, we may have a few labeled data points in the target domain, but a number of labeled samples are available in another related domain (called the source domain). Transfer learning can solve such classification problems via the knowledge transfer from source to target domains. The source and target domains can be represented by heterogeneous features. There may exist uncertainty in domain transformation, and such uncertainty is not good for classification. The effective management of uncertainty is important for improving classification accuracy. So, a new belief-based bidirectional transfer classification (BDTC) method is proposed. In BDTC, the intraclass transformation matrix is estimated at first for mapping the patterns from source to target domains, and this matrix can be learned using the labeled patterns of the same class represented by heterogeneous domains (features). The labeled patterns in the source domain are transferred to the target domain by the corresponding transformation matrix. Then, we learn a classifier using all the labeled patterns in the target domain to classify the objects. In order to take full advantage of the complementary knowledge of different domains, we transfer the query patterns from target to source domains using the K-NN technique and perform the classification task in the source domain. Thus, two classification results can be obtained for each query pattern in the source and target domains, but the classification results may have different reliabilities/weights. A weighted combination rule is developed to combine the two classification results based on the theory of belief functions, which excels at dealing with uncertain information. We can efficiently reduce the uncertainty of transfer classification via this combination strategy. Experiments on some domain adaptation benchmarks show that our method can effectively improve classification accuracy compared with other related methods.
Article
Understanding the socially influenced decision-making process that determines voluntary vaccination is essential for developing strategies and interventions for vaccine-preventable diseases. Both theoretical and experimental studies have suggested that a variety of factors, such as the safety of vaccines, the severity of diseases, and information and advice from healthcare professionals, influence an individual's intention to vaccinate. However, limited research has been conducted on systematically analysing how individuals' vaccine acceptance decisions are made from their beliefs and judgements on the influential factors. In particular, there is a lack of quantitative analysis of how individuals' beliefs and judgements may evolve from the spreading of vaccination-related information in a social network, which further affects their decision making. In this paper, an integrated model is first proposed to characterise the socially influenced vaccination decision-making process, in which each individual's beliefs and subjective judgements on the decision criteria are formulated as belief distributions in the framework of multiple criteria decision analysis (MCDA). The spreading of social influence in the network environment is further incorporated into the information aggregation process to support informed vaccination decision analysis. A series of simulation-based analyses on a real-world social network is conducted to demonstrate that the overall vaccination coverage is determined primarily by individuals' beliefs and judgements on the decision criteria, and is also sensitively affected by the characteristics of influence spreading (including the content and amount of vaccination-related information) in the social network.
Article
As an extension of probability theory, evidence theory is able to better handle unknown and imprecise information. Owing to its advantages, evidence theory has more flexibility and effectiveness for modeling and processing uncertain information. Uncertainty measure plays an essential role both in evidence theory and probability theory. In probability theory, Shannon entropy provides a novel perspective for measuring uncertainty. Various entropies exist for measuring the uncertainty of basic probability assignment (BPA) in evidence theory. However, from the standpoint of the requirements of uncertainty measurement and physics, these entropies are controversial. Therefore, the process for measuring BPA uncertainty currently remains an open issue in the literature. Firstly, this paper reviews the measures of uncertainty in evidence theory followed by an analysis of some related controversies. Secondly, we discuss the development of Deng entropy as an effective way to measure uncertainty, including introducing its definition, analyzing its properties, and comparing it to other measures. We also examine the concept of maximum Deng entropy, the pseudo-Pascal triangle of maximum Deng entropy, generalized belief entropy, and measures of divergence. In addition, we conduct an analysis of the application of Deng entropy and further examine the challenges for future studies on uncertainty measurement in evidence theory. Finally, a conclusion is provided to summarize this study.
Article
The complex-value-based generalized Dempster–Shafer evidence theory, also called complex evidence theory, is a useful methodology for handling uncertainty problems of decision making in the framework of the complex plane. In this paper, we propose a new concept of belief function in complex evidence theory. Furthermore, we analyze the axioms of the proposed belief function and then define a plausibility function in complex evidence theory. The newly defined belief and plausibility functions are generalizations of the traditional ones in Dempster–Shafer (DS) evidence theory. In particular, when complex basic belief assignments are degenerated from complex numbers to classical basic belief assignments (BBAs), the generalized belief and plausibility functions in complex evidence theory degenerate into the traditional belief and plausibility functions in DS evidence theory, respectively. Some special types of the generalized belief function are further discussed, as well as their characteristics. In addition, an interval constructed by the generalized belief and plausibility functions can be utilized as a fuzzy measure, which provides a promising way to express and model uncertainty in decision theory.
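For reference, the traditional functions that these definitions generalize are

```latex
Bel(A) = \sum_{\emptyset \neq B \subseteq A} m(B), \qquad
Pl(A) = \sum_{B \cap A \neq \emptyset} m(B),
```

with $Bel(A) \le Pl(A)$, so that the interval $[Bel(A), Pl(A)]$ brackets the support for $A$; the complex-valued versions reduce to these when the CBBAs are real-valued.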
Article
Diagnosed features (samples) with multiple attributes of medical images always demand experts to reveal insight. To date, machine learning often cannot serve as a helpful expert. The reason lies in the lack of evidential granules carrying knowledge and evidence for inferential learning. This shortage slows down representation learning, which aims at discovering expressions for featuring concepts. Therefore, this paper proposes evidential memberships carrying preferential relevance to build a heuristic representation learning. Empirically, it solves local features and global representations with maximum coverage under challenges of shallow bury. For illustration, it is implemented on the UCI-SPECTF testing data set.
Article
The uncertainty measure of Atanassov's intuitionistic fuzzy sets (AIFSs) is important for information discrimination in an intuitionistic fuzzy environment. Although many entropy measures and knowledge measures have been proposed to depict the uncertainty of AIFSs, how to measure the uncertainty of AIFSs is still an open topic. The relation between uncertainty and other measures like entropy measures, fuzziness, and intuitionism is not clear. This paper introduces uncertainty measures by using a newly defined divergence-based cross entropy measure of AIFSs. Axiomatic properties of the developed uncertainty measure are analyzed, together with the monotonicity of the uncertainty degree with respect to fuzziness and intuitionism. To adjust the contributions of fuzzy entropy and intuitionistic entropy to the total uncertainty, the proposed cross entropy and uncertainty measures are parameterized. Numerical examples indicate the effectiveness and agility of the biparametric uncertainty measure in quantifying the uncertainty degree. Then we apply the cross entropy and uncertainty measures to an optimization model to determine attribute weights in multi-attribute group decision making (MAGDM) problems. A new method for intuitionistic fuzzy MAGDM problems is proposed to show the efficiency of the proposed measures in applications. Application examples demonstrate that the proposed measures can produce reasonable results that coincide with those of other existing methods.
Article
We discuss how the Dempster-Shafer belief structure provides a framework for modeling an uncertain value x̃ from some domain X. We note how it involves a two-step process: the random determination of one focal element (set) guided by a probability distribution, and then the selection of x̃ from this focal element in some unspecified manner. We generalize this framework by allowing the selection of the focal element to be determined by a random experiment guided by a fuzzy measure. In either case the anticipation that x̃ lies in some subset E is interval-valued, [Bel(E), Pl(E)]. We next look at database retrieval and turn to the issue of determining whether a database entity with an uncertain attribute value satisfies a desired value. Here we model our uncertain attribute value as x̃ and our desired value as a subset E. In this case the degree of satisfaction of the query E by the entity is [Bel(E), Pl(E)]. In order to compare these interval-valued satisfactions we use the Golden rule representative value to turn the intervals into scalars. We describe an application involving retrieval from an uncertain database.
Article
Both in science and in practical affairs we reason by combining facts only inconclusively supported by evidence. Building on an abstract understanding of this process of combination, this book constructs a new theory of epistemic probability. The theory draws on the work of A. P. Dempster but diverges from Dempster's viewpoint by identifying his "lower probabilities" as epistemic probabilities and taking his rule for combining "upper and lower probabilities" as fundamental. The book opens with a critique of the well-known Bayesian theory of epistemic probability. It then proceeds to develop an alternative to the additive set functions and the rule of conditioning of the Bayesian theory: set functions that need only be what Choquet called "monotone of order infinity," and Dempster's rule for combining such set functions. This rule, together with the idea of "weights of evidence," leads to both an extensive new theory and a better understanding of the Bayesian theory. The book concludes with a brief treatment of statistical inference and a discussion of the limitations of epistemic probability. Appendices contain mathematical proofs, which are relatively elementary and seldom depend on mathematics more advanced than the binomial theorem.
Article
The Dempster–Shafer evidence theory (D–S theory) is one of the primary tools for knowledge representation and uncertain reasoning, and has been widely used in many information fusion systems. However, how to determine the basic probability assignment (BPA), which is the main and first step in D–S theory, is still an open issue. In this paper, based on the normal distribution, a method to obtain BPA is proposed. The training data are used to build a normal distribution-based model for each attribute of the data. Then, a nested structure BPA function can be constructed, using the relationship between the test data and the normal distribution model. A normality test and normality transformation are integrated into the proposed method to handle non-normal data. The missing attribute values in datasets are addressed as ignorance in the framework of the evidence theory. Several benchmark pattern classification problems are used to demonstrate the proposed method and to compare against existing methods. Experiments provide encouraging results in terms of classification accuracy, and the proposed method is seen to perform well without a large amount of training data.
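A minimal sketch of the core idea follows: fit a normal model per class for an attribute, then turn densities into masses. The singleton-only normalization is an illustrative simplification of the paper's nested-BPA construction, and all names and numbers are hypothetical.

```python
from scipy.stats import norm

def gaussian_bpa(x: float, class_models: dict) -> dict:
    """Build a simple BPA for one attribute value: evaluate each
    class's fitted normal density at x and normalize the resulting
    memberships into masses over singleton hypotheses."""
    likes = {c: norm.pdf(x, mu, sd) for c, (mu, sd) in class_models.items()}
    total = sum(likes.values())
    return {frozenset([c]): v / total for c, v in likes.items()}

# (mean, std) fitted to training data for each class, e.g. sepal length
models = {'setosa': (5.0, 0.35), 'versicolor': (5.9, 0.51),
          'virginica': (6.6, 0.64)}
print(gaussian_bpa(5.6, models))
```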
Article
A multivalued mapping from a space X to a space S carries a probability measure defined over subsets of X into a system of upper and lower probabilities over subsets of S. Some basic properties of such systems are explored in Sections 1 and 2. Other approaches to upper and lower probabilities are possible and some of these are related to the present approach in Section 3. A distinctive feature of the present approach is a rule for conditioning, or more generally, a rule for combining sources of information, as discussed in Sections 4 and 5. Finally, the context in statistical inference from which the present theory arose is sketched briefly in Section 6.