Article

A multi-granularity distance with its application for decision making

Article
Full-text available
Open set recognition (OSR) aims to correctly recognize the known classes and reject the unknown classes for increasing the reliability of the recognition system. The distance-based loss is often employed in deep neural networks-based OSR methods to constrain the latent representation of known classes. However, the optimization is usually conducted using the nondirectional Euclidean distance in a single feature space without considering the potential impact of spatial distribution. To address this problem, we propose orientational distribution learning (ODL) with hierarchical spatial attention for OSR. In ODL, the spatial distribution of feature representation is optimized orientationally to increase the discriminability of decision boundaries for open set recognition. Then, a hierarchical spatial attention mechanism is proposed to assist ODL to capture the global distribution dependencies in the feature space based on spatial relationships. Moreover, a composite feature space is constructed to integrate the features from different layers and different mapping approaches, and it can well enrich the representation information. Finally, a decision-level fusion method is developed to combine the composite feature space and the naive feature space for producing a more comprehensive classification result. The effectiveness of ODL has been demonstrated on various benchmark datasets, and ODL achieves state-of-the-art performance.
Article
Full-text available
With the development of quantum decision making, how to bridge classical theory with the quantum framework has gained much attention in the past few years. Recently, a complex evidence theory (CET), a generalization of Dempster–Shafer evidence theory, was presented to handle uncertainty on the complex plane. However, CET focuses on a closed world, where the frame of discernment is complete, with exhaustive elements. To address this limitation, in this paper we generalize CET to the quantum framework of Hilbert space in an open world and propose a generalized quantum evidence theory (GQET). On the basis of GQET, a quantum multisource information fusion algorithm is proposed to handle uncertainty in an open world. To verify its effectiveness, we apply the proposed quantum multisource information fusion algorithm to a practical classification fusion problem.
Article
Full-text available
It is still a challenging problem to characterize uncertainty and imprecision between specific (singleton) clusters with arbitrary shapes and sizes. To solve this problem, we propose a belief shift clustering (BSC) method for dealing with object data. The BSC method can be considered the evidential version of mean shift or mode seeking under the theory of belief functions. First, a new notion, called belief shift, is provided to preliminarily assign each query object as a noise, precise, or imprecise object. Second, a new evidential clustering rule is designed to perform partial credal redistribution for each imprecise object. To avoid the “uniform effect” and useless calculations, a specific dynamic framework with simulated cluster centers is established to reassign each imprecise object to a singleton cluster or a related meta-cluster. Once an object is assigned to a meta-cluster, it may lie in the overlapping or intermediate areas of different singleton clusters. Consequently, BSC can reasonably characterize the uncertainty and imprecision between singleton clusters. Its effectiveness has been verified on several artificial, natural, and image segmentation/classification datasets by comparison with other related methods.
Article
Full-text available
Recently, a new type of set, called random permutation set (RPS), was proposed by considering all the permutations of the elements of a given set. To measure the uncertainty of an RPS, the entropy of RPS has been presented. However, the maximum entropy principle of RPS entropy has not been discussed. To address this issue, this paper presents the maximum entropy of RPS. The analytical solution of the maximum RPS entropy and its PMF condition are proven and discussed. In addition, numerical examples are used to illustrate the maximum RPS entropy. The results show that the maximum RPS entropy is compatible with the maximum Deng entropy and the maximum Shannon entropy. Moreover, to further apply RPS entropy and maximum RPS entropy in practical fields, a comparative analysis of the choice among Shannon entropy, Deng entropy, and RPS entropy is also carried out.
Article
Full-text available
To explore the meaning of the power set in evidence theory, a possible explanation of the power set is proposed from the viewpoint of Pascal's triangle and combinatorial numbers. This raises a question: what would happen if the combinatorial number were replaced by the permutation number? To address this issue, a new kind of set, named random permutation set (RPS), is proposed in this paper, which consists of a permutation event space (PES) and a permutation mass function (PMF). The PES of a certain set considers all the permutations of that set, and the elements of the PES are called permutation events. The PMF describes the chance that a certain permutation event would happen. Based on the PES and PMF, an RPS can be viewed as a permutation-based generalization of a random finite set. In addition, the right intersection (RI) and left intersection (LI) of permutation events are presented, and based on RI and LI, the right orthogonal sum (ROS) and left orthogonal sum (LOS) of PMFs are proposed. Numerical examples are given to illustrate the proposed concepts, and the comparisons of probability theory, evidence theory, and RPS are discussed and summarized. Moreover, an RPS-based data fusion algorithm is proposed and applied to threat assessment. The experimental results show that the proposed RPS-based algorithm can reasonably and efficiently deal with uncertainty in threat assessment with respect to threat ranking and reliability ranking.
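As a rough illustration of the permutation event space described in this abstract, the following Python sketch enumerates every ordered arrangement of every non-empty subset of a small frame; this reading of the PES, and the function name, are assumptions made for illustration rather than a reproduction of the paper's definitions.

```python
from itertools import permutations

def permutation_event_space(frame):
    """Enumerate permutation events: every ordered arrangement of every
    non-empty subset of the frame (one common reading of the PES)."""
    events = []
    for k in range(1, len(frame) + 1):
        events.extend(permutations(frame, k))
    return events

# A frame of three elements yields 3 + 6 + 6 = 15 permutation events,
# versus 2^3 - 1 = 7 non-empty subsets in the ordinary power set.
print(len(permutation_event_space(('a', 'b', 'c'))))
```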
Article
Full-text available
Given a probability distribution, its corresponding information volume is the Shannon entropy. However, how to determine the information volume of a given mass function is still an open issue. Based on Deng entropy, the information volume of a mass function is presented in this paper. Given a mass function, the corresponding information volume is larger than its uncertainty measured by Deng entropy. In addition, when the cardinality of the frame of discernment is identical, both the total-uncertainty case and the BPA distribution of the maximum Deng entropy have the same information volume. Some numerical examples are given to show the efficiency of the proposed information volume of a mass function.
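For reference, Deng entropy and the BPA widely reported to maximize it (the maximum Deng entropy distribution mentioned above) are commonly written as follows; this is standard background, not a reproduction of the paper's own derivation.

```latex
\[
E_d(m) = -\sum_{\emptyset \neq A \subseteq \Theta} m(A)\,\log_2 \frac{m(A)}{2^{|A|}-1},
\qquad
m^{*}(A) = \frac{2^{|A|}-1}{\sum_{\emptyset \neq B \subseteq \Theta} \bigl(2^{|B|}-1\bigr)}.
\]
```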
Article
Full-text available
Significance Collective risks trigger social dilemmas that require balancing selfish interests and common good. One important example is mitigating climate change, wherein without sufficient investments, worldwide negative consequences become increasingly likely. To study the social aspects of this problem, we organized a game experiment that reveals how group size, communication, and behavioral type drive prosocial action. We find that communicating sentiment and outlook leads to more positive outcomes, even among culturally heterogeneous groups. Although genuine free riders remain unfazed by communication, prosocial players better endure accumulated investment deficits, and thus fight off inaction as the failure looms. This suggests that climate negotiations may achieve more by leveraging existing goodwill than persuading skeptics to act.
Article
Full-text available
The decoy effect is a cognitive bias documented in behavioural economics by which the presence of a third, (partly) inferior choice causes a significant shift in people’s preference for other items. Here, we performed an experiment with human volunteers who played a variant of the repeated prisoner’s dilemma game in which the standard options of “cooperate” and “defect” are supplemented with a new, decoy option, “reward”. We show that although volunteers rarely chose the decoy option, its availability sparks a significant increase in overall cooperativeness and improves the likelihood of success for cooperative individuals in this game. The presence of the decoy increased willingness of volunteers to cooperate in the first step of each game, leading to subsequent propagation of such willingness by (noisy) tit-for-tat. Our study thus points to decoys as a means to elicit voluntary prosocial action across a spectrum of collective endeavours.
Article
Full-text available
Significance The evolution of cooperation has a formative role in human societies—civilized life on Earth would be impossible without cooperation. However, it is unclear why cooperation would evolve in the first place because Darwinian selection favors selfish individuals. After struggling with this problem for >150 y, recent scientific breakthroughs have uncovered multiple cooperation-promoting mechanisms. We build on these breakthroughs by examining whether two widely known cooperation-promoting mechanisms—network reciprocity and costly punishment—create synergies in a social dilemma experiment. While network reciprocity fulfilled its expected role, costly punishment proved to be surprisingly ineffective in promoting cooperation. This ineffectiveness suggests that the rational response to punishment assumed in theoretical studies is overly stylized and needs reexamining.
Article
Full-text available
In real applications, how to measure the uncertain degree of sensor reports before applying sensor data fusion is a big challenge. In this paper, within the framework of Dempster–Shafer evidence theory, a weighted belief entropy based on Deng entropy is proposed to quantify the uncertainty of uncertain information. The weight of the proposed belief entropy is based on the relative scale of a proposition with regard to the frame of discernment (FOD). Compared with some other uncertainty measures in the Dempster–Shafer framework, the new measure focuses on the uncertain information represented not only by the mass function but also by the scale of the FOD, which means less information loss in information processing. After that, a new multi-sensor data fusion approach based on the weighted belief entropy is proposed. The rationality and superiority of the new multi-sensor data fusion method are verified through an experiment on artificial data and an application to fault diagnosis of a motor rotor.
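A minimal Python sketch of the (unweighted) Deng entropy on which the proposed weighted belief entropy builds is given below; the paper's FOD-scale weighting is not reproduced, and the encoding of a mass function as a dictionary from frozensets to masses is an assumption made for illustration.

```python
import math

def deng_entropy(mass):
    """Deng entropy of a mass function encoded as {frozenset: mass}."""
    total = 0.0
    for focal, m in mass.items():
        if m > 0:
            # Each focal element contributes -m(A) * log2(m(A) / (2^|A| - 1)).
            total -= m * math.log2(m / (2 ** len(focal) - 1))
    return total

# Example on a frame {a, b}: mass split between a singleton and the full frame.
m = {frozenset({'a'}): 0.6, frozenset({'a', 'b'}): 0.4}
print(deng_entropy(m))
```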
Article
Full-text available
One of the most elusive scientific challenges for over 150 years has been to explain why cooperation survives despite being a seemingly inferior strategy from an evolutionary point of view. Over the years, various theoretical scenarios aimed at solving the evolutionary puzzle of cooperation have been proposed, eventually identifying several cooperation-promoting mechanisms: kin selection, direct reciprocity, indirect reciprocity, network reciprocity, and group selection. We report the results of repeated Prisoner's Dilemma experiments with anonymous and onymous pairwise interactions among individuals. We find that onymity significantly increases the frequency of cooperation and the median payoff per round relative to anonymity. Furthermore, we also show that the correlation between players' ranks and the usage of strategies (cooperation, defection, or punishment) underwent a fundamental shift, whereby more prosocial actions are rewarded with a better ranking under onymity. Our findings prove that reducing anonymity is a valid promoter of cooperation, leading to higher payoffs for cooperators and thus suppressing an incentive—anonymity—that would ultimately favor defection.
Article
Full-text available
In belief-function-related fields, the distance measure is an important concept that represents the degree of dissimilarity between bodies of evidence. Various distance measures of evidence have been proposed and widely used in diverse belief-function-related applications, especially in performance evaluation. Existing definitions of strict and nonstrict distance measures of evidence have their own pros and cons. In this paper, we propose two new strict distance measures of evidence (Euclidean and Chebyshev forms) between two basic belief assignments based on the Wasserstein distance between belief intervals of focal elements. Illustrative examples, simulations, applications, and related analyses are provided to show the rationality and efficiency of our proposed measures for the distance of evidence.
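The following sketch, under stated assumptions, shows the ingredients such belief-interval distances are built from: Bel and Pl computed per focal element, and the closed-form L2 Wasserstein distance between two intervals viewed as uniform distributions. How the per-focal-element distances are aggregated and normalized into the Euclidean and Chebyshev forms follows the paper and is not reproduced here.

```python
import math

def bel_pl(mass, subset):
    """Belief and plausibility interval [Bel(A), Pl(A)] of a subset,
    for a mass function encoded as {frozenset: mass}."""
    bel = sum(m for focal, m in mass.items() if focal <= subset)
    pl = sum(m for focal, m in mass.items() if focal & subset)
    return bel, pl

def interval_wasserstein(i1, i2):
    """Closed-form L2 Wasserstein distance between two intervals,
    each treated as a uniform distribution."""
    c1, h1 = (i1[0] + i1[1]) / 2, (i1[1] - i1[0]) / 2  # center, half-width
    c2, h2 = (i2[0] + i2[1]) / 2, (i2[1] - i2[0]) / 2
    return math.sqrt((c1 - c2) ** 2 + (h1 - h2) ** 2 / 3)

m1 = {frozenset({'a'}): 0.7, frozenset({'a', 'b'}): 0.3}
m2 = {frozenset({'b'}): 0.6, frozenset({'a', 'b'}): 0.4}
a = frozenset({'a'})
print(interval_wasserstein(bel_pl(m1, a), bel_pl(m2, a)))
```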
Article
Full-text available
Wireless sensor networks play an important role in intelligent navigation. They incorporate a group of sensors to overcome the limitations of a single detection system. Dempster-Shafer evidence theory can combine the sensor data of a wireless sensor network by data fusion, which contributes to improving the accuracy and reliability of the detection system. However, because the sensors come from different sources, there may be conflict among the sensor data under an uncertain environment. Thus, this paper proposes a new method combining Deng entropy and evidence distance to address this issue. First, Deng entropy is adopted to measure the uncertain information. Then, evidence distance is applied to measure the conflict degree. The new method can cope with conflict effectively and improve the accuracy and reliability of the detection system. An example is given to show the efficiency of the new method, and the result is compared with that of existing methods.
Article
Full-text available
Dempster–Shafer evidence theory is widely used in information fusion. However, it may lead to an unreasonable result when dealing with highly conflicting evidence. To solve this problem, we put forward a new method based on the credibility of evidence. First, a novel belief entropy, Deng entropy, is applied to measure the information volume of each piece of evidence, from which the discounting coefficients of each piece of evidence are obtained. Finally, after weighted averaging of the evidence in the system, the Dempster combination rule is used to realize information fusion. The resulting weighted averaging combination rule is presented for multi-sensor data fusion in fault diagnosis; determining the weights with the new belief entropy makes it more reasonable than earlier approaches. A numerical example is given to illustrate that the proposed rule is more effective for fault diagnosis than classical evidence theory in fusing multi-symptom domains.
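A compact sketch of the weight-then-average-then-combine pattern described above, assuming mass functions encoded as dictionaries from frozensets to masses; the Deng-entropy-based credibility weights are replaced by explicit weights for illustration.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions {frozenset: mass}."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    # Normalize by 1 - K (assumes the evidence is not totally conflicting).
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

def weighted_average(masses, weights):
    """Weighted average of several mass functions (weights assumed to sum to 1)."""
    avg = {}
    for m, w in zip(masses, weights):
        for focal, v in m.items():
            avg[focal] = avg.get(focal, 0.0) + w * v
    return avg

# Average the evidence with credibility weights, then combine the averaged
# evidence with itself (n - 1 times for n sources; here n = 2).
masses = [{frozenset({'a'}): 0.8, frozenset({'a', 'b'}): 0.2},
          {frozenset({'b'}): 0.7, frozenset({'a', 'b'}): 0.3}]
avg = weighted_average(masses, [0.5, 0.5])
print(dempster_combine(avg, avg))
```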
Article
Full-text available
In this paper we provide a proof of the positive definiteness of the Jaccard index matrix used as a weighting matrix in the Euclidean distance between belief functions defined in Jousselme et al. [13]. The idea of the proof relies on the decomposition of the matrix into an infinite sum of positive semidefinite matrices. The proof is valid for any size of the frame of discernment, but we provide an illustration for a frame of three elements. The Jaccard index matrix being positive definite guarantees that the associated Euclidean distance is a full metric, and thus that a null distance between two belief functions implies that these belief functions are strictly identical.
Article
Full-text available
We present a measure of performance (MOP) for identification algorithms based on the evidential theory of Dempster–Shafer. As an MOP, we introduce a principled distance between two basic probability assignments (BPAs) (or two bodies of evidence) based on a quantification of the similarity between sets. We give a geometrical interpretation of BPA and show that the proposed distance satisfies all the requirements for a metric. We also show the link with the quantification of Dempster's weight of conflict proposed by George and Pal. We compare this MOP to that described by Fixsen and Mahler and illustrate the behaviors of the two MOPs with numerical examples.
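For reference, the distance introduced in this work (commonly known as the Jousselme distance) is usually written with the Jaccard-index weighting matrix discussed in the previous entry:

```latex
\[
d_{\mathrm{BPA}}(m_1, m_2) = \sqrt{\tfrac{1}{2}\,(\mathbf{m}_1 - \mathbf{m}_2)^{\mathsf T}\,\underline{\underline{D}}\,(\mathbf{m}_1 - \mathbf{m}_2)},
\qquad
\underline{\underline{D}}(A, B) = \frac{|A \cap B|}{|A \cup B|},\quad \emptyset \neq A, B \subseteq \Theta.
\]
```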
Article
Multisource information fusion is a comprehensive and interdisciplinary subject. Dempster-Shafer (D-S) evidence theory copes with uncertain information effectively. Pattern classification is the core research content of pattern recognition, and multisource information fusion based on D-S evidence theory can be effectively applied to pattern classification problems. However, in D-S evidence theory, highly conflicting evidence may cause counterintuitive fusion results. Belief divergence theory is one of the theories proposed to address the problem of highly conflicting evidence. Although belief divergence can deal with conflict between evidence, none of the existing belief divergence methods has considered how to effectively measure the discrepancy between two pieces of evidence over time evolution. In this study, a novel fractal belief Rényi (FBR) divergence is proposed to handle this problem. To our knowledge, it is the first divergence that extends the concept of fractals to the Rényi divergence. Its advantage is measuring the discrepancy between two pieces of evidence with time evolution; it satisfies several desirable properties and is flexible and practical in various circumstances. Furthermore, a novel algorithm for multisource information fusion based on FBR divergence, namely FBReD-based weighted multisource information fusion, is developed. Ultimately, the proposed multisource information fusion algorithm is applied to a series of pattern classification experiments on real datasets, where it achieves superior performance.
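For context, the classical Rényi divergence of order α between discrete probability distributions P and Q is recalled below; the fractal belief extension proposed in the paper is not reproduced here.

```latex
\[
D_{\alpha}(P \,\|\, Q) = \frac{1}{\alpha - 1}\,\log \sum_{i} p_i^{\alpha}\, q_i^{\,1-\alpha},
\qquad \alpha > 0,\ \alpha \neq 1.
\]
```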
Article
Information can be quantified and expressed by uncertainty, and improving the decision level of uncertain information is vital in modeling and processing uncertain information. Dempster-Shafer evidence theory can model and process uncertain information effectively. However, the Dempster combination rule may provide counterintuitive results when dealing with highly conflicting information, leading to a decline in decision level. Thus, measuring conflict is significant for improving the decision level. Motivated by this issue, this paper proposes a novel method to measure the discrepancy between bodies of evidence. First, the model of dynamic fractal probability transformation is proposed to effectively obtain more information about the non-specificity of basic belief assignments (BBAs). Then, we propose the higher-order fractal belief Rényi divergence (HOFBReD). HOFBReD can effectively measure the discrepancy between BBAs; moreover, it is the first belief Rényi divergence that can measure the discrepancy between BBAs with dynamic fractal probability transformation. HOFBReD has several properties in terms of both probability transformation and measurement. When the dynamic fractal probability transformation ends, HOFBReD is equivalent to measuring the Rényi divergence between the pignistic probability transformations of the BBAs. When the BBAs degenerate to probability distributions, HOFBReD also degenerates to, or is related to, several well-known divergences. In addition, based on HOFBReD, a novel multisource information fusion algorithm is proposed. A pattern classification experiment with real-world datasets is presented to compare the proposed algorithm with other methods. The experimental results indicate that the proposed algorithm achieves a higher average pattern recognition accuracy on all datasets than the other methods. The proposed discrepancy measurement method and multisource information fusion algorithm contribute to the improvement of the decision level.
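For reference, the pignistic probability transformation mentioned above is classically defined (in Smets' form) as shown below; whether the paper uses exactly this normalization is not verified here.

```latex
\[
\mathrm{BetP}(x) = \sum_{\substack{A \subseteq \Theta \\ x \in A}} \frac{m(A)}{|A|\,\bigl(1 - m(\emptyset)\bigr)}.
\]
```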
Article
Multi-source information fusion is a sophisticated estimation process that generates a unified profile to assess complex situations. Dempster–Shafer evidence theory (DSET) is a practical theory for handling uncertain information in multi-source information fusion. However, highly conflicting evidence may cause Dempster's combination rule to provide counterintuitive results. Thus, how to effectively reconcile highly conflicting evidence in DSET is still an open issue. To address this problem, a novel belief divergence, the higher-order belief Jensen-Shannon divergence, is proposed in this paper to measure the discrepancy between BPAs in DSET. The proposed higher-order belief Jensen-Shannon divergence is the first method that dynamically measures the discrepancy between BPAs over time evolution, i.e., that measures the discrepancy between BPAs at different future time scales. Besides, the proposed divergence has benefits from the perspective of measurement: it satisfies the properties of nonnegativity, nondegeneracy, symmetry, and the triangle inequality in its root form. Based on the proposed higher-order belief Jensen-Shannon divergence, a novel multi-source information fusion algorithm is proposed. Eventually, the proposed algorithm is applied to a pattern classification experiment with real-world datasets.
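For context, the classical Jensen-Shannon divergence, whose square root is a metric (the triangle inequality in root form noted above), is recalled below; the higher-order belief variant defined in the paper is not reproduced here.

```latex
\[
\mathrm{JS}(P \,\|\, Q) = \tfrac{1}{2}\,\mathrm{KL}\!\left(P \,\middle\|\, \tfrac{P+Q}{2}\right) + \tfrac{1}{2}\,\mathrm{KL}\!\left(Q \,\middle\|\, \tfrac{P+Q}{2}\right),
\qquad
\mathrm{KL}(P \,\|\, Q) = \sum_i p_i \log \frac{p_i}{q_i}.
\]
```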
Article
The existing deep networks have shown excellent performance in remote sensing scene classification, which generally requires a large number of class-balanced training samples. However, deep networks will underfit with imbalanced training samples since they can easily become biased toward the majority classes. To address these problems, a multi-granularity decoupling network (MGDNet) is proposed for remote sensing image scene classification. To begin with, we design a multi-granularity complementary feature representation (MGCFR) method to extract fine-grained features from remote sensing images, which utilizes region-level supervision to guide the attention of the decoupling network. Second, a class-imbalanced pseudo-label selection (CIPS) approach is proposed to evaluate the credibility of unlabeled samples. Finally, the diversity component feature (DCF) loss function is developed to force the local features to be more discriminative. Our model performs satisfactorily on three public datasets: UC Merced (UCM), NWPU-RESISC45, and the Aerial Image Dataset (AID). Experimental results show that the proposed model yields superior performance compared with other state-of-the-art methods.
Article
Recently, a new kind of set, named Random Permutation Set (RPS), has been presented. RPS takes the permutation of a certain set into consideration, which can be regarded as an ordered extension of evidence theory. Uncertainty is an important feature of RPS. A straightforward question is how to measure the uncertainty of RPS. To address this issue, the entropy of RPS (RPS entropy) is presented in this article. The proposed RPS entropy is compatible with Deng entropy and Shannon entropy. In addition, RPS entropy meets probability consistency, additivity, and subadditivity. Numerical examples are designed to illustrate the efficiency of the proposed RPS entropy. Besides, a comparative analysis of the choice of applying RPS entropy, Deng entropy, and Shannon entropy is also carried out.
Article
With the scale of group decision making increasing, it is a crucial issue to make the most of collective intelligence in seeking the optimal solution. In this study, we propose an automatic consensus reaching process (CRP) for large-scale group decision making (LSGDM) based on a parallel dynamic feedback strategy and a two-dimensional scenario-based social network analysis (SNA) model. First, individuals express their preferences by distributed preference relations (DPRs), which preserve the uncertainty of assessments and allow multi-attribute comparison. Second, SNA based on trust relationships and connection strength is implemented. A two-dimensional scenario-based SNA model is then established, and a fuzzy clustering algorithm based on connection strength is designed to reduce the scale of decision makers (DMs). Finally, a two-phase CRP with identification rules and a feedback strategy is constructed. The identification rules are used to activate different kinds of feedback mechanisms by identifying whether an acceptable local or global consensus is reached; they also identify which kind of social relationship holds for internal or external subgroups and what dominance an individual or subgroup has. The feedback strategy with a parallel dynamic adjustment process is further designed based on opinion and trust adjustment factors and non-cooperative behaviors. A real illustrative case of selecting the optimal carbon footprint management provider is presented to demonstrate the validity of the proposed method and to compare it with other current methods.
Article
As a general form of intuitionistic fuzzy preference relations (IFPRs) and Pythagorean fuzzy preference relations (PFPRs), q-rung orthopair fuzzy preference relations (q-ROFPRs) provide a more flexible information representation for decision makers (DMs) to express their vagueness and uncertainty. However, there have been only a few studies conducted on q-ROFPRs. Therefore, in the context of multi-attribute decision-making (MADM), a decision framework for MADM with q-ROFPRs is proposed. First, a novel score function is proposed to compare two different q‐rung orthopair fuzzy numbers (q‐ROFNs). Subsequently, an algorithm is developed to check and improve the multiplicative consistency of q-ROFPRs. Moreover, to consider the rationality of the threshold determination, an objective method for determining the threshold of q-ROFPRs is developed considering the number of alternatives and rung q. Finally, a new method for determining the weights of attributes is discussed. In addition, an illustrative example involving the brand evaluation of new energy vehicles is used to verify the applicability of the above methods. The rationality and superiority of the proposed methods are highlighted by a comparative analysis with existing studies.
Article
This study proposes a minimum cost consensus-based failure mode and effect analysis (MCC-FMEA) framework considering experts' limited compromise and tolerance behaviors, where the first behavior indicates that a failure mode and effect analysis (FMEA) expert might not tolerate modifying his/her risk assessment without limitations, and the second behavior indicates that an FMEA expert will accept risk assessment suggestions without being paid for any cost if the suggested risk assessments fall within his/her tolerance threshold. First, an MCC-FMEA with limited compromise behaviors is presented. Second, experts' tolerance behaviors are added to the MCC-FMEA with limited compromise behaviors. Theoretical results indicate that in some cases, this MCC-FMEA with limited compromise and tolerance behaviors has no solution. Thus, a minimum compromise adjustment consensus model and a maximum consensus model with limited compromise behaviors are developed and analyzed, and an interactive MCC-FMEA framework, resulting in an FMEA problem consensual collective solution, is designed. A case study, regarding the assessment of COVID-19-related risk in radiation oncology, and a detailed sensitivity and comparative analysis with the existing FMEA approaches are provided to verify the effectiveness of the proposed approach to FMEA consensus-reaching.
Article
As a typical multi-sensor fusion technology, the evidential reasoning (ER) rule has been widely used in evaluation, decision, and classification tasks. Current research on the ER rule tends to fuse objects of the same type, such as unquantized analog quantities. However, the fusion of unquantized analog quantities and quantized digital quantities is more common in engineering, yet has received minimal attention. Given the characteristics of the ER rule, the biggest challenge imposed by this fusion is to reasonably account for the reliability of the digital quantity. In this paper, a new fusion approach for digital and analog quantities based on the ER rule is proposed. To improve the fusion accuracy, the combination of quantization error and external noise is adopted to measure the reliability of the digital quantity. On this basis, the digital and analog quantities are fused together according to the ER rule. To further explore the intrinsic mechanism of the proposed approach, a detailed performance analysis is conducted to study how the evidence reliability and fusion results vary. Finally, a numerical example and a case study demonstrate the effectiveness of the proposed approach.
Article
Emergency decision making and disposal are significant challenges faced by the international community. To minimize casualties and reduce probable secondary disasters, it is necessary to immediately dispatch rescuers for emergency rescue in calamity-prone areas. Owing to the abruptness, destructiveness, and uncertainty of emergencies, the rescue team often faces challenges of pressing time, scattered calamity locations, and diverse tasks. This necessitates the effective organization of rescuers for their swift dispatch to the areas requiring rescue. How to group and dispatch rescuers reasonably and effectively, according to the actual needs of the emergency rescue task and the situation, so as to achieve the best rescue effect is a valuable research problem. This study establishes a dispatch model for rescuers across multiple disaster areas and rescue points. First, this paper combines Dempster-Shafer theory (DST) and linguistic term sets to propose the concept of an evidential linguistic term set (ELTS), which can flexibly and accurately describe the subjective judgment of emergency decision-makers. It not only lays a theoretical foundation for establishing the rescuer dispatch model, but also aids in expressing information in the uncertain linguistic environments of decision-making and evaluation. Second, to determine the weights of ability-based rescuer evaluation criteria, this study adopts the evidential best-worst method, combining it with DST to compensate for the limitations of traditional weight calculation methods in expressing uncertainty. Third, to effectively dispatch rescuers to multiple disaster areas, a model is built on the above methods to maximize the competence of rescuers and the satisfaction of rescue time, and the best rescuer allocation scheme is determined by solving the model. Finally, the advantages of the constructed model in emergency multitasking group decision-making are demonstrated through an empirical analysis.
Article
The concept of a Z-number has attracted plenty of interest for its ability to represent uncertain and partially reliable information. Z-numbers are also widely used in decision-making because they can describe real-world information and human cognition more flexibly. However, the classical arithmetic complexity of Z-numbers is a burden in real applications, especially on large data sets. How to retain the inherent meaning of Z-numbers while reducing the calculation complexity is a critical issue in real Z-number-based applications, and limited theoretical progress has been made so far. To balance the gap between arithmetic complexity and the inherent meaning of Z-numbers, we propose an approximate calculation method for Z-numbers (Z-ACM) based on kernel density estimation. The main ideas are as follows: first, kernel density estimation is used to partition/group Z-numbers according to their total utility; second, the representative Z-number in each partitioned interval is aggregated using the classical arithmetic framework of Z-numbers. Based on the proposed Z-ACM, a fast decision model (FDM) is designed to deal with multi-criteria decision-making. Some examples with comparative and rationality analyses are conducted to illustrate the effectiveness of the proposed methodology.
Article
The mining of important nodes in complex networks is a topic of immense interest due to its wide applications across many disciplines. In this paper, a Local Structure Entropy (LSE) approach is proposed based on Tsallis entropy: nodes are removed and the information of their first-order and second-order neighboring nodes is considered, in order to explore the impact of node removal on the network structure. With this method, the degree and betweenness of the first-order and second-order adjacent nodes are combined via Tsallis entropy, and influential nodes are measured by the structural characteristics of the network after node removal. To verify the effectiveness of LSE, we compare our method with five existing methods and perform experiments on seven real-world networks. The experimental results indicate that the influential nodes identified by LSE are better than those found by the existing methods in terms of the range of information dissemination and robustness. Moreover, LSE is negatively correlated with closeness centrality and the PageRank algorithm.
Article
Fractals play an important role in nonlinear science. The most important parameter when modeling a fractal is the fractal dimension. The existing information dimension can calculate the dimension of a probability distribution. However, calculating the fractal dimension of a mass function, which is the generalization of probability, is still an open problem of immense interest. The main contribution of this work is to propose an information fractal dimension of the mass function. Numerical examples are given to show the effectiveness of the proposed dimension. We discover an important property: the dimension of the mass function with the maximum Deng entropy is ln 3 / ln 2 ≈ 1.585, which is the well-known fractal dimension of the Sierpinski triangle. An application to the complexity analysis of time series illustrates the effectiveness of our method.
Article
A multidisciplinary team is beneficial for selecting an appropriate treatment plan for a patient with lung cancer, where the collected cognitive evaluation information may be uncertain and incomplete. This study is dedicated to a treatment plan selection problem for lung cancer through multi-criteria analysis with generalised probabilistic linguistic term sets (GPLTSs), which are powerful in describing the uncertainty and incompleteness of subjective evaluations. The existing generalised probabilistic linguistic information aggregation method is based on the Dempster-Shafer combination rule, but the combined results may be counterintuitive. In addition, a GPLTS may not meet the conditions for applying the Dempster-Shafer combination rule. To bridge these gaps, a new combination rule based on Dempster-Shafer theory is introduced. Then, a multi-criteria decision-making (MCDM) process with generalised probabilistic linguistic information based on the proposed combination rule is formed and applied to select treatment plans for lung cancer in association with a multidisciplinary team. The advantages of the proposed method are highlighted through sensitivity and comparative analyses.
Article
Handling uncertain information is currently one of the main research focuses. Owing to its great ability to handle uncertain information, Dempster-Shafer evidence theory (D-S theory) has been widely used in various fields of uncertain information processing. However, when highly contradictory evidence appears, the results of the classical Dempster combination rule (DCR) can be counterintuitive. Aiming at this defect, this paper proposes a new method for conflicting evidence management in uncertain scenarios based on non-extensive entropy and the Lance distance, which considers both the relationships between pieces of evidence and their own characteristics. First, the Lance distance function is used to measure the degree of discrepancy and conflict between pieces of evidence, and the credibility of the evidence is expressed by a matrix; non-extensive entropy is introduced to measure the amount of information in the evidence and to express its uncertainty. Second, the discount coefficient of the final fused evidence is obtained by considering both the credibility and the uncertainty of the evidence, and the original evidence is modified by the discount coefficient. Then, the final result is obtained by evidence fusion with the DCR. Finally, two numerical examples are provided to illustrate the efficiency of the proposed method, and its utility is demonstrated through an application to active lane changing for obstacle avoidance in the autonomous driving of new energy vehicles. The proposed method has a better identification accuracy, reaching 0.9811.
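For context, the non-extensive (Tsallis) entropy of order q is recalled below, together with one commonly used per-coordinate form of the Lance distance; the exact form adopted in the paper, and how both are applied to bodies of evidence, are not reproduced here, and the Lance form shown is an assumption.

```latex
\[
S_q(P) = \frac{1}{q-1}\left(1 - \sum_i p_i^{\,q}\right),
\qquad
d_{\mathrm{Lance}}(x, y) = \frac{1}{n}\sum_{i=1}^{n} \frac{|x_i - y_i|}{|x_i| + |y_i|}.
\]
```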
Article
Dempster–Shafer (D–S) evidence theory has been studied and applied broadly, owing to its advantage of effectively handling uncertainty problems in multisource information fusion. However, when the bodies of evidence are highly conflicting, the result of evidence fusion may be unsatisfactory or even counterintuitive. To overcome this flaw, a newly defined belief Hellinger distance is presented to quantify the discrepancy between pieces of evidence in D–S evidence theory. The belief Hellinger distance takes the number of possible hypotheses into account, thus allowing it to provide a more rational and telling approach for measuring the dissimilarity between pieces of evidence. In addition, it is strictly proven that the belief Hellinger distance satisfies the properties of boundedness, nondegeneracy, symmetry, and the triangle inequality, which is to say it is a true metric. On the basis of the newly defined belief Hellinger distance, a new multisource information fusion method is designed. Moreover, an iris-dataset-based application and a motor rotor fault diagnosis application are implemented to verify that the proposed distance measure and multisource information fusion method have extensive practicality, effectiveness, and applicability.
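For context, the classical Hellinger distance between discrete probability distributions is recalled below; the belief Hellinger distance of the paper extends this to BPAs by accounting for focal-element cardinality, and that extension is not reproduced here.

```latex
\[
H(P, Q) = \frac{1}{\sqrt{2}}\,\sqrt{\sum_i \left(\sqrt{p_i} - \sqrt{q_i}\right)^2}.
\]
```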
Article
Social network group decision-making (SNGDM) has emerged as a new decision tool to effectively model the social trust relationships among decision makers. The impact of these social trust relationships on assessment modifications during consensus reaching in SNGDM is seldom considered; this study aims to address that issue. The main starting point is the assumption that a decision maker will not be willing to accept assessment-modification suggestions that significantly differ from the assessments of his/her trusted decision makers in a social trust network. Thus, this study proposes a social trust-driven minimum adjustments consensus model (STDMACM) for SNGDM. Simultaneously, a social trust-driven consensus maximum optimization model (STDCMOM) is proposed for maximizing the consensus level among decision makers under the above assumption. Based on both STDCMOM and STDMACM, an interactive consensus reaching process is presented, in which the assessment-modification suggestions generated from the STDMACM are used as references for guiding the consensus reaching when the maximum consensus level obtained from the STDCMOM is acceptable; otherwise, the suggestions are generated from the designed STDCMOM. The validity of the social trust-driven consensus reaching process with respect to its consensus convergence rate and consensus success ratio is verified with a simulation and comparison analysis.
Article
In pattern classification, we may have only a few labeled data points in the target domain, while a number of labeled samples are available in another related domain (called the source domain). Transfer learning can solve such classification problems via knowledge transfer from the source to the target domain. The source and target domains can be represented by heterogeneous features. There may exist uncertainty in the domain transformation, and such uncertainty is harmful for classification; its effective management is important for improving classification accuracy. So, a new belief-based bidirectional transfer classification (BDTC) method is proposed. In BDTC, the intraclass transformation matrix is first estimated for mapping patterns from the source to the target domain; this matrix can be learned using labeled patterns of the same class represented in the heterogeneous domains (features). The labeled patterns in the source domain are transferred to the target domain by the corresponding transformation matrix. Then, we learn a classifier using all the labeled patterns in the target domain to classify the objects. To take full advantage of the complementary knowledge of the different domains, we transfer the query patterns from the target to the source domain using the K-NN technique and perform the classification task in the source domain. Thus, two classification results can be obtained for each query pattern, one in the source domain and one in the target domain, but these results may have different reliabilities/weights. A weighted combination rule is developed to combine the two classification results based on belief functions theory, which excels at dealing with uncertain information. The combination strategy efficiently reduces the uncertainty of transfer classification. Experiments on several domain adaptation benchmarks show that our method can effectively improve classification accuracy compared with other related methods.
Article
As an extension of probability theory, evidence theory is able to better handle unknown and imprecise information. Owing to its advantages, evidence theory has more flexibility and effectiveness for modeling and processing uncertain information. Uncertainty measure plays an essential role both in evidence theory and probability theory. In probability theory, Shannon entropy provides a novel perspective for measuring uncertainty. Various entropies exist for measuring the uncertainty of basic probability assignment (BPA) in evidence theory. However, from the standpoint of the requirements of uncertainty measurement and physics, these entropies are controversial. Therefore, the process for measuring BPA uncertainty currently remains an open issue in the literature. Firstly, this paper reviews the measures of uncertainty in evidence theory followed by an analysis of some related controversies. Secondly, we discuss the development of Deng entropy as an effective way to measure uncertainty, including introducing its definition, analyzing its properties, and comparing it to other measures. We also examine the concept of maximum Deng entropy, the pseudo-Pascal triangle of maximum Deng entropy, generalized belief entropy, and measures of divergence. In addition, we conduct an analysis of the application of Deng entropy and further examine the challenges for future studies on uncertainty measurement in evidence theory. Finally, a conclusion is provided to summarize this study.
Article
We discuss how the Dempster-Shafer belief structure provides a framework for modeling an uncertain value x̃ from some domain X. We note how it involves a two-step process: the random determination of one focal element (set), guided by a probability distribution, and then the selection of x̃ from this focal element in some unspecified manner. We generalize this framework by allowing the selection of the focal element to be determined by a random experiment guided by a fuzzy measure. In either case, the anticipation that x̃ lies in some subset E is interval-valued, [Bel(E), Pl(E)]. We next look at database retrieval and turn to the issue of determining whether a database entity with an uncertain attribute value satisfies a desired value. Here we model our uncertain attribute value as x̃ and our desired value as a subset E. In this case, the degree of satisfaction of the query E by the entity is [Bel(E), Pl(E)]. In order to compare these interval-valued satisfactions, we use the golden rule representative value to turn the intervals into scalars. We describe an application involving retrieval from an uncertain database.
Article
The highly diversified conceptual and algorithmic landscape of Granular Computing calls for the formation of sound fundamentals of the discipline, which cut across the diversity of formal frameworks (fuzzy sets, sets, rough sets) in which information granules are formed and processed. The study addresses this quest by introducing an idea of granular models - generalizations of numeric models that are formed as a result of an optimal allocation (distribution) of information granularity. Information granularity is regarded as a crucial design asset, which helps establish a better rapport of the resulting granular model with the system under modeling. A suite of modeling situations is elaborated on; they offer convincing examples behind the emergence of granular models. Pertinent problems showing how information granularity is distributed throughout the parameters of numeric functions (and resulting in granular mappings) are formulated as optimization tasks. A set of associated information granularity distribution protocols is discussed. We also provide a number of illustrative examples.
Article
Granular Computing has emerged as a unified and coherent framework for designing, processing, and interpreting information granules. Information granules are formalized within various frameworks such as sets (interval mathematics), fuzzy sets, rough sets, shadowed sets, and probabilities (probability density functions), to name several of the most visible approaches. In spite of the apparent diversity of the existing formalisms, there are some underlying commonalities articulated in terms of the fundamentals, algorithmic developments, and ensuing application domains. In this study, we introduce two pivotal concepts: a principle of justifiable granularity and a method of optimal information allocation, where information granularity is regarded as an important design asset. We show that these two concepts are relevant to various formal setups of information granularity and offer constructs supporting the design of information granules and their processing. A suite of applied studies is focused on knowledge management, for which we identify several key categories of schemes.
Article
In group decision making, one strives to reconcile differences of opinions (judgments) expressed by individual members of the group. Fuzzy-decision-making mechanisms bring a great deal of flexibility. By admitting membership degrees, we are offered flexibility to exploit different aggregation mechanisms and navigate a process of interaction among decision makers to achieve an increasing level of consistency within the group. While the studies reported so far exploit more or less sophisticated ways of adjusting/transforming initial judgments (preferences) of individuals, in this paper, we bring forward a concept of information granularity. Here, information granularity is viewed as an essential asset, which offers a decision maker a tangible level of flexibility: the initial preferences conveyed by each individual can be adjusted with the intent to reach a higher level of consensus. Our study is concerned with an extension of the well-known analytic hierarchy process to the group decision-making scenario. More specifically, the admitted level of granularity gives rise to a granular matrix of pairwise comparisons. The granular entries, represented, e.g., by intervals or fuzzy sets, supply the required flexibility through the selection of the most suitable numeric representative of the reciprocal matrix. The proposed concept of granular reciprocal matrices is used to optimize a performance index, which comes as an additive combination of two components. The first expresses a level of consistency of the individual pairwise comparison matrices; by exploiting the admitted level of granularity, we aim at the minimization of the corresponding inconsistency index. The second part of the performance index quantifies a level of disagreement in terms of the individual preferences. The flexibility offered by the level of granularity is used to increase the level of consensus within the group. Given the implicit nature of the relationships between the realizations of the granular pairwise matrices and the values of the performance index, we consider using particle swarm optimization as an optimization vehicle. Two scenarios of allocation of granularity among decision makers are considered, namely, a uniform allocation of granularity and a nonuniform distribution of granularity, where the levels of allocated granularity are also subject to optimization. A number of numeric studies are provided to illustrate the essence of the method.
Article
A modified average method to combine belief functions based on distance measures of evidence is proposed. The weight of each body of evidence (BOE) is taken into account. A numerical example is shown to illustrate the use of the proposed method to combine conflicting evidence. Some open issues are discussed in the final section.
Article
The use of belief functions to represent and to manipulate uncertainty in expert systems has been advocated by some practitioners and researchers. Others have provided examples of counter-intuitive results produced by Dempster's rule for combining belief functions and have proposed several alternatives to this rule. This paper presents another problem, the failure to balance multiple evidence, then illustrates the proposed solutions and describes their limitations. Of the proposed methods, averaging best solves the normalization problems, but it does not offer convergence toward certainty, nor a probabilistic basis. To achieve convergence, this research suggests incorporating average belief into the combining rule.
Article
A multivalued mapping from a space X to a space S carries a probability measure defined over subsets of X into a system of upper and lower probabilities over subsets of S. Some basic properties of such systems are explored in Sections 1 and 2. Other approaches to upper and lower probabilities are possible and some of these are related to the present approach in Section 3. A distinctive feature of the present approach is a rule for conditioning, or more generally, a rule for combining sources of information, as discussed in Sections 4 and 5. Finally, the context in statistical inference from which the present theory arose is sketched briefly in Section 6.