Chapter

Abstract

The Dempster-Shafer theory of evidence (here, DS theory, for brevity), sometimes called evidential reasoning (cf. [Lowrance et al., 1981]) or belief function theory, is a mechanism formalised by Shafer ([Shafer, 1976]) for representing and reasoning with uncertain, imprecise and incomplete information. It is based on Dempster's original work ([Dempster, 1967]) on modelling uncertainty in terms of upper and lower probabilities induced by a multivalued mapping, rather than as a single probability value.
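As a concrete illustration of these lower and upper probabilities, the following minimal Python sketch builds a basic probability assignment (mass function) over a small frame of discernment and reads off the belief and plausibility of a hypothesis. The frame, the hypothesis and the mass values are hypothetical, chosen purely for illustration.

```python
# A minimal sketch of Dempster-Shafer basics: a mass function over a frame of
# discernment, with belief (lower probability) and plausibility (upper
# probability) for a hypothesis. The frame and masses are illustrative only.

def belief(mass, hypothesis):
    """Bel(A): total mass committed to subsets of A (lower probability)."""
    return sum(m for focal, m in mass.items() if focal <= hypothesis)

def plausibility(mass, hypothesis):
    """Pl(A): total mass on focal sets intersecting A (upper probability)."""
    return sum(m for focal, m in mass.items() if focal & hypothesis)

# Hypothetical frame of discernment and basic probability assignment
# (masses sum to 1; mass on the whole frame encodes ignorance).
frame = frozenset({"ok", "degraded", "failed"})
mass = {
    frozenset({"failed"}): 0.4,
    frozenset({"degraded", "failed"}): 0.3,
    frame: 0.3,  # uncommitted belief
}

A = frozenset({"failed"})
print(belief(mass, A), plausibility(mass, A))  # 0.4 1.0, i.e. the interval [Bel, Pl]
```

The width of the interval [Bel(A), Pl(A)] reflects how much of the evidence is too coarse to either support or refute A, which is exactly the kind of imprecision a single probability value cannot express.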


... Dempster-Shafer theory. Note that we combine the data of different campaigns in a frequentist manner, as opposed to the more intricate ways of combining evidence of Dempster-Shafer theory [23,41]. That theory expresses not only the aggregate opinion of multiple sources, but also the uncertainty due to disagreeing sources. ...
... We highlight that these metrics will in general not represent the probability of an attack succeeding in the real world; see [23,41] and Sec. 2.2 in this work for some discussion of this. ...
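The "more intricate ways of combining evidence" mentioned in the snippet above refer to Dempster's rule of combination. The sketch below is a small, illustrative Python implementation of that rule for two sources defined over the same frame, using the same mass-dictionary representation as the earlier sketch; the two sources, their masses and the "used / not used" frame are invented for the example, and the conflict mass K is returned alongside the fused assignment.

```python
# An illustrative implementation of Dempster's rule of combination for two
# sources over the same frame. The normalisation by (1 - K) redistributes the
# mass assigned to contradictory pairs of focal sets.

def combine(m1, m2):
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb  # contradictory evidence
    if conflict >= 1.0:
        raise ValueError("Totally conflicting sources: Dempster's rule is undefined.")
    return {a: w / (1.0 - conflict) for a, w in combined.items()}, conflict

# Two hypothetical, partly disagreeing sources on whether a technique was used.
frame = frozenset({"used", "not_used"})
source_1 = {frozenset({"used"}): 0.7, frame: 0.3}
source_2 = {frozenset({"not_used"}): 0.6, frame: 0.4}
fused, K = combine(source_1, source_2)
print(fused)  # aggregate opinion of both sources
print(K)      # 0.42: the share of mass the sources assign to contradictory outcomes
```

The conflict K is what the snippet alludes to as the uncertainty due to disagreeing sources: a large K signals that the fused result should be treated with caution.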
Preprint
The landscape of cyber threats grows more complex by the day. Advanced Persistent Threats carry out systematic attack campaigns against which cybersecurity practitioners must defend. Examples of such organized attacks are operations Dream Job, Wocao, WannaCry or the SolarWinds Compromise. To evaluate which risks are most threatening, and which campaigns to prioritize against when defending, cybersecurity experts must be equipped with the right toolbox. In particular, they must be able to (a) obtain likelihood values for each attack campaign recorded in the wild and (b) reliably and transparently operationalize these values to carry out quantitative comparisons among campaigns. This will allow security experts to perform quantitatively-informed decision making that is transparent and accountable. In this paper we construct such a framework by: (1) quantifying the likelihood of attack campaigns via data-driven procedures on the MITRE knowledge base and (2) introducing a methodology for automatic modelling of MITRE intelligence data: this is complete in the sense that it captures any attack campaign via template attack tree models. (3) We further propose a computational framework to carry out these comparisons based on the cATM formal logic, and implement it in an open-source Python tool. Finally, we validate our approach by quantifying the likelihood of all MITRE campaigns, and comparing the likelihood of the Wocao and Dream Job MITRE campaigns -- generated with our proposed approach -- against "ad hoc" traditionally-built attack tree models, demonstrating how our methodology is substantially lighter in modelling effort, and still capable of capturing all the relevant quantitative data.
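The sketch below is not the cATM logic or the tool described in the abstract; it is only a generic illustration of how per-step success likelihoods can be propagated through an AND/OR attack tree under an independence assumption, which is the kind of quantitative comparison the abstract refers to. Node names, gate structure and probabilities are hypothetical.

```python
# A generic sketch of likelihood propagation in an AND/OR attack tree,
# assuming independent leaf events. Not the paper's cATM framework.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    gate: str = "LEAF"             # "LEAF", "AND" or "OR"
    p: float = 0.0                 # success likelihood of a leaf step
    children: list = field(default_factory=list)

def likelihood(node):
    if node.gate == "LEAF":
        return node.p
    probs = [likelihood(c) for c in node.children]
    if node.gate == "AND":         # every child step must succeed
        result = 1.0
        for q in probs:
            result *= q
        return result
    # "OR": the attack succeeds if at least one branch succeeds
    all_fail = 1.0
    for q in probs:
        all_fail *= (1.0 - q)
    return 1.0 - all_fail

# Hypothetical campaign fragment: initial access by phishing OR a public
# exploit, followed by (AND) lateral movement.
tree = Node("campaign", "AND", children=[
    Node("initial_access", "OR", children=[
        Node("phishing", p=0.35),
        Node("public_exploit", p=0.20),
    ]),
    Node("lateral_movement", p=0.60),
])
print(round(likelihood(tree), 3))  # 0.288 with these made-up numbers
```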
... Fuzzy methods face similar issues, as their reliance on expert judgment to construct fuzzy sets and membership functions introduces subjectivity and potential information loss. In contrast, evidence theory reflects the degree of belief (DoB) in all possible outcomes using probability bounds; it extends classical probability theory (Liu 2001) and offers richer information than interval methods, providing more detailed results. Thus, evidence theory has a solid theoretical foundation for integrating probabilistic and interval methods, minimizing the information loss common in other approaches. ...
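To make the "probability bounds" point in the snippet above concrete: when every focal element is a singleton, belief and plausibility coincide and the model reduces to a classical probability distribution; set-valued focal elements produce genuine bounds. The toy Python example below (with invented outcomes and masses) shows both cases.

```python
# Toy illustration: singleton focal elements reproduce a classical probability,
# while a set-valued focal element yields a belief/plausibility interval.

def bel(mass, A):
    return sum(m for s, m in mass.items() if s <= A)   # lower bound

def pl(mass, A):
    return sum(m for s, m in mass.items() if s & A)    # upper bound

A = frozenset({"high_risk"})

classical = {frozenset({"high_risk"}): 0.3,
             frozenset({"low_risk"}): 0.7}
imprecise = {frozenset({"high_risk"}): 0.2,
             frozenset({"high_risk", "low_risk"}): 0.3,  # evidence too coarse to split
             frozenset({"low_risk"}): 0.5}

print(bel(classical, A), pl(classical, A))  # 0.3 0.3 -> a point probability
print(bel(imprecise, A), pl(imprecise, A))  # 0.2 0.5 -> probability bounds
```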
Article
Digital terrain analysis (DTA) using digital elevation models is influenced by two main uncertainties: propagated elevation uncertainty (PEU) from data and truncation error (TE) from the modeling process. Traditional studies often treat these uncertainties separately, neglecting their coupled nature, which limits the ability to accurately capture overall uncertainty and assess the relative contributions of different factors. This study examines slope calculation using evidence theory, addressing representation with focal elements, propagation through belief and likelihood functions, evaluation metrics based on bias and variability, and sensitivity analysis through changes in the probability envelope area. Experiments with Gaussian synthetic surfaces and high-density LiDAR data reveal that PEU dominates overall uncertainty, with data accuracy affecting slope reliability. TE limits users’ expectations regarding uncertainty, with cognitive limitations influencing belief in slope products. This work offers key insights into uncertainty in slope calculation and other DTA models.
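The following schematic sketch is not the study's workflow; it only illustrates, with assumed numbers, how interval-valued focal elements on an elevation difference can be pushed through a slope formula and summarised as belief and plausibility that a slope threshold is exceeded. The grid spacing, the focal intervals, their masses and the 10-degree threshold are all hypothetical.

```python
# Schematic propagation of interval-valued elevation evidence through a slope
# calculation. Each focal element is an interval on the elevation difference dz
# over a fixed horizontal distance dx, with an associated mass; because slope is
# monotone in dz, mapping the interval endpoints maps the whole focal element.
import math

dx = 10.0  # horizontal spacing in metres (assumed)

# Focal elements on dz in metres: (lower, upper) interval -> mass
dz_focal = {
    (1.8, 2.2): 0.6,   # fairly precise evidence
    (1.5, 2.5): 0.3,   # coarser evidence
    (1.0, 3.0): 0.1,   # very imprecise evidence
}

def slope_deg(dz):
    return math.degrees(math.atan(dz / dx))

slope_focal = {(slope_deg(lo), slope_deg(hi)): m for (lo, hi), m in dz_focal.items()}

# Belief and plausibility that the slope exceeds a threshold of 10 degrees.
threshold = 10.0
bel = sum(m for (lo, hi), m in slope_focal.items() if lo > threshold)  # interval entirely above
pl  = sum(m for (lo, hi), m in slope_focal.items() if hi > threshold)  # interval overlaps threshold
print(slope_focal)
print(bel, pl)  # the [Bel, Pl] gap is the envelope-style uncertainty in the slope product
```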
... The DST method (Beynon, 2014) was initially introduced by Dempster (1967) and later extended by Shafer (1976). DST is a very effective and efficient method for modelling uncertain information (Liu, 2001), and can be employed in a broad range of research applications, including expert systems, artificial intelligence, risk assessment, and MCDM problems. There is also wide use of AHP in the literature. ...
Article
In recent years, supply chain (SC) disruptions and their severe economic and social consequences have sparked growing interest among decision makers and researchers in adequately managing risk. The perception of risk is strongly related to the possibility of occurrence of disruptive events. In the supply chain risk management (SCRM) domain, disruptions such as the 2011 Japan earthquake and Hurricane Sandy have severely affected operations and put corporate finances at risk, becoming one of the most pressing concerns faced by companies competing in today's global marketplace. Without a doubt, the COVID-19 pandemic has exposed the fragility of SCs on a global scale as never seen in the past, impacting SCs in multiple sectors such as agriculture, manufacturing, transportation and leisure, to name a few, and causing a massive 32% drop in international trade in 2020 and an estimated 12% drop in the global economy. In order to mitigate and control the adverse effects caused by disruption risks, important work is carried out in the area of SCRM, both in academia and in professional circles. In recent times, scholars have utilized various types of multi-criteria decision-making (MCDM) methods to evaluate sustainable supply chain risks in many contexts. Despite its importance, to date there are no studies that can guide researchers and decision makers on the most appropriate methods to face the multiple challenges posed by risk management in sustainable supply chains. In this study, we intend to cover this need, and for this we carry out a careful review of 101 articles published since 2010. This review allows us to assess the current state of MCDM applications in SCRM and to propose future research directions for properly managing the risks in sustainable SCs. We conclude that most of the studies used a single MCDM method or at most integrated two methods to assess sustainable supply chain risk. Based on our findings, we propose a future research agenda that considers, among others, the following: (a) the use of MCDM methods to link risks with mitigation strategies, (b) the integration of three or more MCDM methods to manage risks in the fields of cleaner and more sustainable production, and (c) the development of methodologies that integrate MCDM with other operational research approaches such as optimization, simulation and mathematical modelling. We believe these lines of research can help decision makers better address the multiple consequences of increasingly frequent and intense disruptive events.
Article
Objective: The auditory brainstem response (ABR) is an evoked response obtained from brain electrical activity when an auditory stimulus is applied to the ear. An audiologist can determine the threshold level of hearing by applying stimuli at reducing levels of intensity, and can also diagnose various otological, audiological, and neurological abnormalities by examining the morphology of the waveform and the latencies of the individual waves. This is a subjective process requiring considerable expertise. The aim of this research was to develop software classification models to assist the audiologist with an automated detection of the ABR waveform and also to provide objectivity and consistency in this detection.
Materials and methods: The dataset used in this study consisted of 550 waveforms derived from tests using a range of stimulus levels applied to 85 subjects ranging in hearing ability. Each waveform had been classified by a human expert as 'response=Yes' or 'response=No'. Individual software classification models were generated using time, frequency and cross-correlation measures. Classification employed both artificial neural networks (NNs) and the C5.0 decision tree algorithm. Accuracies were validated using six-fold cross-validation, and by randomising training, validation and test datasets.
Results: The result was a two-stage classification process whereby strong responses were classified to an accuracy of 95.6% in the first stage. This used a ratio of post-stimulus to pre-stimulus power in the time domain, with power measures at 200, 500 and 900 Hz in the frequency domain. In the second stage, outputs from time, frequency and cross-correlation classifiers were combined using the Dempster-Shafer method to produce a hybrid model with an accuracy of 85% (126 repeat waveforms).
Conclusion: By combining the different approaches a hybrid system has been created that emulates the approach used by an audiologist in analysing an ABR waveform. Interpretation did not rely on one particular feature but brought together power and frequency analysis as well as consistency of subaverages. This provided a system that enhanced robustness to artefacts while maintaining classification accuracy.
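The sketch below is not the authors' pipeline; it only illustrates, with an assumed sampling rate, assumed window lengths and a synthetic waveform, the kind of first-stage features the abstract describes: a post-stimulus to pre-stimulus power ratio in the time domain plus spectral power near 200, 500 and 900 Hz.

```python
# Illustrative ABR-style features: post/pre-stimulus power ratio and narrow-band
# spectral power. Sampling rate, window lengths and the synthetic signal are
# assumptions made for this sketch only.
import numpy as np

fs = 20_000                  # sampling rate in Hz (assumed)
pre_ms, post_ms = 5, 12      # pre- and post-stimulus window lengths in ms (assumed)

def power_ratio(pre, post):
    """Mean post-stimulus power divided by mean pre-stimulus power."""
    return np.mean(post ** 2) / np.mean(pre ** 2)

def band_power(x, fs, centre_hz, half_width_hz=50.0):
    """Average spectral power in a narrow band around centre_hz."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= centre_hz - half_width_hz) & (freqs <= centre_hz + half_width_hz)
    return spectrum[band].mean()

# Synthetic example: noise before the stimulus, noise plus a small 500 Hz
# component after it, standing in for a recorded sub-averaged waveform.
rng = np.random.default_rng(0)
pre = rng.normal(0.0, 1.0, int(fs * pre_ms / 1000))
t = np.arange(int(fs * post_ms / 1000)) / fs
post = rng.normal(0.0, 1.0, t.size) + 0.8 * np.sin(2 * np.pi * 500 * t)

features = [power_ratio(pre, post)] + [band_power(post, fs, f) for f in (200, 500, 900)]
print(np.round(features, 2))  # feature vector a classifier (NN or decision tree) could consume
```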