Article

What's Wrong with Risk Matrices?

Risk Analysis (Wiley)
Author: Louis Anthony (Tony) Cox, Jr.

Abstract

Risk matrices (tables mapping "frequency" and "severity" ratings to corresponding risk priority levels) are popular in applications as diverse as terrorism risk analysis, highway construction project management, office building risk analysis, climate change risk management, and enterprise risk management (ERM). National and international standards (e.g., Military Standard 882C and AS/NZS 4360:1999) have stimulated adoption of risk matrices by many organizations and risk consultants. However, little research rigorously validates their performance in actually improving risk management decisions. This article examines some mathematical properties of risk matrices and shows that they have the following limitations. (a) Poor Resolution. Typical risk matrices can correctly and unambiguously compare only a small fraction (e.g., less than 10%) of randomly selected pairs of hazards. They can assign identical ratings to quantitatively very different risks ("range compression"). (b) Errors. Risk matrices can mistakenly assign higher qualitative ratings to quantitatively smaller risks. For risks with negatively correlated frequencies and severities, they can be "worse than useless," leading to worse-than-random decisions. (c) Suboptimal Resource Allocation. Effective allocation of resources to risk-reducing countermeasures cannot be based on the categories provided by risk matrices. (d) Ambiguous Inputs and Outputs. Categorizations of severity cannot be made objectively for uncertain consequences. Inputs to risk matrices (e.g., frequency and severity categorizations) and resulting outputs (i.e., risk ratings) require subjective interpretation, and different users may obtain opposite ratings of the same quantitative risks. These limitations suggest that risk matrices should be used with caution, and only with careful explanations of embedded judgments.


... Risk matrices are widely adopted structures for risk assessment in marine ecosystem-based management (Cox, 2008). They are commonly used to provide a ranking framework for visualizing and prioritizing risks, guiding resource allocation in a convenient and efficient manner. ...
... Specifically, risk is typically seen as the product of (1) the likelihood that an event will occur and (2) the effect that event may have. However, recent literature emphasizes potential pitfalls of the risk matrix, such as ambiguous risk ratings, limited resolution, and the possibility of errors (Cox, 2008). Additionally, the risk matrix establishment process often fails to account for the risk attitudes of decision-makers (Ruan et al., 2015; Kaya et al., 2019). ...
... In this study, we have nonetheless chosen to use the risk matrix method for several reasons. First, despite its limitations, this approach is still widely applied in many fields (Cox, 2008; Duijm, 2015). Second, assessing risk based on utility theory may also be problematic due to what is commonly known as the Allais paradox (Allais, 1953). ...
Article
Full-text available
This study aims to investigate the risks posed by climate change and anthropogenic activities on ecosystem services in the Barents Sea, Norway. Using an expert assessment approach, we identify which ecosystem services are at high risk and the human activities and pressures contributing to these risks. The findings indicate that risks vary across ecosystem services, activities, and pressures; however, most are categorized as medium or low. Biodiversity, as a cultural service, and fish/shellfish, as a provisioning service, are identified as the two most threatened ecosystem services. In contrast, educational services are perceived as the least impacted. Temperature change is found to have the greatest impact on the services. Experts are generally uncertain about the risk levels; however, fish/shellfish and biodiversity are the two services associated with the least uncertainty. The results highlight the limited knowledge regarding risks to ecosystem services in the Barents Sea. The study emphasizes the need for future research to address these knowledge gaps and discusses where management efforts should be focused.
... Risks are calculated as products of stress, exposure, and vulnerability. The approach is common in the statistics literature (Cox, 2008; Montibeller & von Winterfeldt, 2015; Rausand & Haugen, 2020), entailing the assumption that stressor scores are interpretable as probabilities of occurrence. Thus, a stressor score can be interpreted as the probability of an individual being subject to that stressor (Cox, 2008) in a geographic unit. ...
... Thus, the unit for risk equals that of exposure (Cox, 2008), in this case population; if, for instance, a pixel includes 200 people, the stressor score for a specific stressor is 0.4, and the vulnerability score is 0.5, both for that pixel, then the risk for being subjected to that stressor and pixel is 40 people. ...
... Risk management theory defines the value of a risk as the product of its probability of occurrence and its impact on a project [36,37]. Given the nature of the data collected for this study (a 5-point Likert scale), risk was depicted and assessed using a risk matrix [38][39][40]. The matrix has some limitations [41][42][43], such as modelling risk without accounting for correlations among risk factors. ...
... On the horizontal axis, the matrix evaluated the degree of impact (in an interval adjusted to the mean score: negligible 1.00-1.80; minor 1.81-2.60; moderate 2.61-3.40; significant 3.41-4.20; severe 4.21-5.00), while on the vertical axis the probability of occurrence was evaluated (in an interval adjusted to the mean score: almost uncertain 1.00-1.80; unlikely 1.81-2.60; fifty-fifty 2.61-3.40; likely 3.41-4.20; almost certain 4.21-5.00). The available literature provides three [38,39,42], four [45], or five risk zones [46] for evaluating the risk level, depending on the size of the matrix. For this research, a 5×5 matrix with three risk zones was selected, as recommended by Duijm [42]: a high-risk zone, a medium-risk zone and a low-risk zone. ...
Article
Full-text available
In the context of tender documentation for construction projects in the Czech Republic, technical specifications define the content and scope of work to be conducted using drawing documentation and a list of works, supplies and services. In practice, such documents are often burdened with errors (deficiencies) that can have different causes and impacts on the success of a project in terms of cost, time and quality. This study aims to explore the perception of the probability of occurrence and degree of impact of errors from various perspectives including risk factors, causes, possible effects, responsibility and the role of stakeholders. Data collected from experienced construction professionals in the Czech Republic show that documentation errors mainly affect project constraints in terms of cost and time and are often underestimated by investors concerning their impact and the probability of occurrence. Several recommendations are formulated to serve as preventive measures contributing to the elimination of errors and their early detection.
... This method can be applied at several levels, with varying degrees of difficulty. Most healthcare organizations use a 5×5 risk matrix based on the Australian standard AS/NZS 4360:2004, where the two axes correspond to scales for consequence (sometimes called severity) and probability, as illustrated in Figure 2 [14]. ...
Chapter
Full-text available
Today's modern hospital is highly dependent on different types of medical equipment to help diagnose, monitor, and treat patients. Medical equipment maintenance is important to reduce costs, reduce patient dissatisfaction, treat the patient in a timely manner, and reduce mortality and risks during patient care. Good maintenance management is important to have well-planned and implemented programs through which hospitals can minimize medical device failures or other problems with the operation of medical equipment. Medical equipment plays an important role in the hospital system; therefore, the acquisition, maintenance, and replacement of medical equipment are key factors in hospitals for the implementation of the health service. Thus, in order to ensure the quality of medical devices for the provision of medical care, it is imperative to evaluate the safety of hospital maintenance management practices. To achieve these goals, hospitals must develop checklists that identify the state of performance of medical equipment maintenance. It is essential for clinical managers and engineers not only to increase the capacity of the hospital but also to predict the risks of sudden failure. Given the lack of unique and comprehensive maintenance management checklists, the current goal is to design and develop medical equipment maintenance management checklists.
... For example, Bow-tie diagrams assume a theory of causation relating threat events on the diagram's left-hand side to loss events on the right-hand side [3]. The risk matrix supposes the notions of risk likelihood, impact severity, and event types [4]. FMEA, widely used in reliability and safety engineering, includes the concepts of failure, failure effect, detection, mitigation, and others [5]. ...
Conference Paper
Full-text available
According to ISO 31000, the risk management process comprises communication, risk assessment, risk treatment, monitoring, and reporting. Numerous techniques address these aspects, particularly risk assessment and treatment, such as attack trees, fault trees, the risk matrix, etc. These approaches implicitly or explicitly require a conceptualization of the risk management domain, that is, a reference domain ontology as a background theory. However, because these techniques are not grounded in ontological analyses and well-founded reference ontologies, they suffer from several limitations and semantic confusion, such as ambiguity, little to no modeling guidance, and lack of semantic integration. Existing well-founded reference ontologies of value, risk, security, and related topics can support a full-fledged, ontologically sound risk management framework capable of solving those semantic issues. Nevertheless, such a comprehensive approach to risk management is yet to be seen. To fill this gap, we present a research proposal integrating these ontologies and associated services into a domain-specific modeling language for risk management. First, we establish a risk management ontology network, including value, risk, incident, security, monitoring, trust, and resilience concepts. We will employ them to ground ontological analyses of those important risk management techniques to identify their shortcomings. This analysis will support redesigns of these techniques to overcome the limitations. We will design a domain-specific modeling language interpreted by the ontology network and served by the redesigned versions of those techniques. By doing so, we expect to address semantic interoperability problems among risk management approaches and data sources.
... Measures of risk that are not of this form (e.g., the product of certainty and severity) will be inconsistent with the forecast directive and hence with decision-theoretic coherent use of risk matrices. Although not expressed in these terms, this is the essence of a major criticism of the conventional use of risk matrices in risk management (Cox, 2008). Our framework is suitable for any area of risk management provided that the risk matrix itself represents an adequate model of risk for the application. ...
Preprint
Full-text available
Risk matrices are widely used across a range of fields and have found increasing utility in warning decision practices globally. However, their application in this context presents challenges, which range from potentially perverse warning outcomes to a lack of objective verification (i.e., evaluation) methods. This paper introduces a coherent framework for generating multi-level warnings from risk matrices to address these challenges. The proposed framework is general, is based on probabilistic forecasts of hazard severity or impact and is compatible with the Common Alerting Protocol (CAP). Moreover, it includes a family of consistent scoring functions for objectively evaluating the predictive performance of risk matrix assessments and the warnings they produce. These scoring functions enable the ranking of forecasters or warning systems and the tracking of system improvements by rewarding accurate probabilistic forecasts and compliance with warning service directives. A synthetic experiment demonstrates the efficacy of these scoring functions, while the framework is illustrated through warnings for heavy rainfall based on operational ensemble prediction system forecasts for Tropical Cyclone Jasper (Queensland, Australia, 2023). This work establishes a robust foundation for enhancing the reliability and verifiability of risk-based warning systems.
... Furthermore, TPACK is not a single framework that can explain the complex, diverse, and unstructured domain of technology integration in education. Given that TPACK has a significant effect on how teachers are trained to use technology in the classroom, there is a need for criticism, further research, and analysis of the development of TPACK (Cox, 2008; Graham & Hebert, 2011). ...
Article
Full-text available
Technology is a critical element of the human perspective on learning. Good teaching with technology requires understanding the mutually reinforcing relationships between content, pedagogy, and technology to develop proper context-specific strategies and representations. TPACK is one of the most researched topics that discusses the interaction and combination of teacher knowledge in the domains of technology, pedagogy, and content. Over time, TPACK has attracted criticism regarding the many areas that have not been explored and the need to reconceptualize the framework. There have been many further developments in the form of modifications to the TPACK model, which have produced various measurement instruments. This research uses a systematic literature review (SLR) method to explore the various modifications to the TPACK framework and the integration of related fields. The review was carried out on papers published from 2008 to 2023, using the Scopus database. The analysis was carried out in three stages: planning, conducting, and reporting. Planning included identifying the need for an SLR, determining the focus of the research question, and developing review rules. In the conducting stage, articles that did not mention a TPACK development framework were excluded. The last stage, reporting, included analyzing the core articles in a review table and answering the research question. There were 18 core articles filtered from a total of 1,295 articles. The findings from the thematic analysis identified five areas of study that were integrated with modifications to the TPACK framework: Knowledge, Competency, Tools, Practice, and Attitude. Finally, an exploration was also carried out of gaps and research potential for future studies in each field.
... While common Enterprise Risk Management (ERM) software offers numerous benefits, such as improved risk visibility, enhanced decision-making, and a streamlined risk management process, there are also some potential disadvantages to consider, such as complexity, customization requirements, data quality, overreliance on technology, resistance to change, cost, regulatory compliance challenges, cybersecurity, etc. [16][17][18][19][20][21][22][23][24]. ...
Article
Full-text available
The paper shows the integration of the theoretical and practical aspects of designing an ERM software tool. The basic idea of the designed ERM is conceived in the form of an algorithm integrating ISO 31000, the COSO framework, a risk matrix designed according to the risk appetite of the observed company, quantitative and qualitative models for risk assessment, and the generation of consequences and mitigating measures for each identified risk. Methodologies for risk assessment in the presented ERM include the following: for business risks, questionnaires were generated for different business areas (from knowledge bases) and the assessment was conducted according to risk matrices; workplace risk is assessed according to the Kinney method, while technical risks are assessed according to the API 580/581 standards. The software was created to handle all types of risk that may arise, regardless of the complexity of the business and of the risk itself. The algorithm, model and software were developed and successfully tested in two Serbian companies.
... However, recent academic discourse has questioned the effectiveness of traditional risk matrix assessments in the context of cybersecurity. Scholars such as Cox [72] and Hubbard [73] have argued that the method is subjective and insufficient for effectively identifying, evaluating, and managing risk. Every factor that affects the degree of risk, including processes, systems, and components, contributes to determining the risk rating based on the likelihood of its manifestation. ...
Article
Full-text available
The modern Industrial Control System (ICS) environment now combines information technology (IT), operational technology, and physical processes. This digital transformation enhances operational efficiency, service quality, and physical system capabilities, enabling systems to measure and control the physical world. However, it also exposes ICS to new and evolving cybersecurity threats that were once confined to the IT domain. As a result, identifying cyber risks in ICS has become more critical, leading to the development of new methods and tools to tackle these emerging threats. This study reviews some of the latest tools for cyber-risk identification in ICS. It empirically analyses each tool based on specific attributes: focus, application domain, core risk management concepts, and how they address current cybersecurity concerns in ICS.
... This reliance engenders additional complexities wherein project managers may revert to intuition rather than engaging in structured analysis (Ball and Watt, 2013; Thomas et al., 2014). As such, the prevalence of subjective judgement introduces latent ambiguity and bias, thereby negatively affecting the precision and efficacy of risk assessments (Barber et al., 2021; Cox, 2008). ...
Preprint
Full-text available
The construction industry's increasing complexity and dynamic project environments engender advanced risk management strategies. AI-based risk management tools, reliant on complex mathematical models, often impose specialised coding requirements, leading to challenges in accessibility and implementation. In this vein, Generative Artificial Intelligence (GenAI) emerges as a potentially transformative solution, leveraging adaptive algorithms capable of real-time data analysis to enhance predictive accuracy and decision-making efficacy within Construction Risk Management (CRM). However, integrating GenAI into CRM introduces significant challenges, including concerns around data security, privacy, regulatory compliance, and a skills gap. Our research seeks to address these issues by presenting a systematic bibliometric analysis that explores evolving trends, key research contributions, and critical methodological approaches related to GenAI in CRM. Thus far, our investigation has analysed 23 selected research articles from an initial corpus of 212 papers, spanning the period from 2014 to 2024. Early insights delineate a marked escalation in research activity from 2020 onwards, a surge likely engendered by recent advancements in AI technologies and their applicability to construction management. We categorise GenAI's potential benefits into technical, operational, technological, and integration-related advantages, encompassing improvements in risk identification, predictive capabilities, scheduling, and cybersecurity. Simultaneously, we identify significant risks, particularly related to data governance, social acceptance, and the operational impacts of AI-driven decisions. These preliminary findings underscore the imperative for systematic governance frameworks and proactive stakeholder engagement to optimise GenAI's benefits whilst mitigating its latent risks.
... While common Enterprise Risk Management (ERM) software offers numerous benefits, such as improved risk visibility, enhanced decision-making, and streamlined risk management processes, there are also some potential disadvantages to consider: complexity, customization requirements, data quality, overreliance on technology, resistance to change, cost, regulatory compliance challenges, cybersecurity, etc. [16][17][18][19][20][21][22][23][24][25]. ...
Preprint
Full-text available
The problem is how to create a comprehensive "tool", applicable to any company, that achieves multiple effects at the same time: adhering to the ISO 31000 procedure; connecting risks and company goals in the manner defined by the COSO framework (by linking elements of the organizational structure to company goals); assessing all types of risk a company may face, including technical risks; supporting decisions; and visualizing data and effects. Solving this general problem requires solving many specific ones: creating company-specific risk matrices that express each company's risk appetite while giving a comprehensive overview of all possible types of consequences; linking entities one-to-many and many-to-one so that elements of the organizational structure can be tied to goals shared by several units; acquiring potential risks from the knowledge of external experts, which overcomes the limits of in-company insight and introduces elements of artificial intelligence; and applying standards for the evaluation of technical risks, such as EN 16991 or API 580/581, while enabling management of the implementation project and evergreening in the future. Our research question is whether an integral model can be made that solves the above problems in an integrated way. The goal is to create software that supports this solution and is practically applicable. Solving such a complex problem of connecting various theoretical and practical aspects required the creation of a special algorithm connecting project activities, risks, organizational structure, goals, knowledge, data management, and software. The algorithm, model, and software were developed and successfully tested in two Serbian companies.
... It was observed that these drawbacks can be reduced by mapping the relationship between the risk assessment value and its category scale in a linear or logarithmic manner [10], or in some other way that avoids coarse discrete mapping. A high degree of discretisation in the mapping may lead to low quality and usability of risk assessment results [10,11]. ...
Article
In risk management in railway transport, standard risk models are usually used, based on the typical definition of risk and its discrete quantification. This approach allows for easy justification of the adopted model, most often by reference to appropriate norms or standards (such as IRIS). The scientific approach does not disqualify the practical use of standard risk models, but their disadvantages (especially those of typical risk matrices, including their subjectivity) are increasingly being pointed out. In risk management procedures, most frequently one model is used to assess the risk of all identified hazards. This may turn out to be a mistake, considering the specific characteristics of the hazards. A risk model applied to one hazard may not be adequate to assess the risk of another. Therefore, it should be individually adapted both in terms of variables and the ranges of their measurement values. For some hazards, it will even be necessary to develop or adopt non-standard models. The aim of the article is to present non-standard risk models that provide a basis for easy implementation in the safety management procedures used by railway entities.
... Nonetheless, the values assigned to specific risks are discrete numbers (i.e., 1~5); thus, the weights of probability and severity are discontinuous. For that reason, the TRMM has some drawbacks, related to weak consistency [10], betweenness [11], and consistent coloring [12]. Therefore, the concept of the continuous risk management matrix (CRMM) is proposed to overcome this shortcoming. ...
Article
Full-text available
Ferry transport has witnessed numerous fatal accidents due to unsafe navigation; thus, it is of paramount importance to mitigate risks and enhance safety measures in ferry navigation. This paper aims to evaluate the navigational risk of ferry transport with a continuous risk management matrix (CRMM) based on the fuzzy best-worst method (BWM). Its original contributions include developing the CRMM to determine the risk levels of risk factors (RFs) for ferry transport and adopting fuzzy BWM to estimate the probability and severity weight vectors of the RFs. Empirical results show that the twenty RFs for ferry navigation are divided into four zones corresponding to their risk values: extreme-risk, high-risk, medium-risk, and low-risk areas. In particular, the results identify three extreme-risk RFs: inadequate evacuation and emergency response features, marine traffic congestion, and insufficient training on navigational regulations. The proposed research model can provide a methodological reference for pertinent studies on risk management and multiple-criteria decision analysis (MCDA).
... The applicable safety barriers and mitigations are outlined in the bow-tie diagram for further quantitative analysis. The risk matrix is a table whose rows and columns contain several categories of frequency and consequence [17]. The method is commonly utilised for risk analysis across various industries [18,19]. ...
... • A color scheme based on r(b,q) is used and displayed together with the whole interval i(b). We avoid colors typically used in risk matrices [40] (red, yellow, green) to mitigate cultural biases. ...
Preprint
Full-text available
Background and Objective: Only about 14% of eligible EU citizens ultimately participate in colorectal cancer (CRC) screening programs, despite it being the third most common type of cancer worldwide. The development of CRC risk models can enable predictions to be embedded in decision-support tools, facilitating CRC screening and treatment recommendations. This paper develops a predictive model that aids in characterizing CRC risk groups and assessing the influence of a variety of risk factors on the population. Methods: A CRC Bayesian network is learnt by aggregating extensive expert knowledge and data from an observational study and making use of structure learning algorithms to model the relations between variables. The network is then parametrized to characterize these relations in terms of local probability distributions at each of the nodes. It is finally used to predict the risks of developing CRC together with the uncertainty around such predictions. Results: A graphical CRC risk mapping tool is developed from the model and used to segment the population into risk subgroups according to variables of interest. Furthermore, the network provides insights on the predictive influence of modifiable risk factors such as alcohol consumption and smoking, and medical conditions such as diabetes or hypertension linked to lifestyles that potentially have an impact on an increased risk of developing CRC. Conclusions: CRC is most commonly developed in older individuals. However, some modifiable behavioral factors seem to have a strong predictive influence on its potential risk of development. Modelling these effects facilitates identifying risk groups and targeting influential variables which are subsequently helpful in the design of screening and treatment programs.
Article
Full-text available
The frequent occurrence of disasters has brought significant challenges to increasingly complex urban systems. Resilient city planning and construction has emerged as a new paradigm for dealing with the growing risks. Infrastructure systems like transportation, lifelines, flood control, and drainage are essential to the operation of a city during disasters. It is necessary to measure how risks affect these systems’ resilience at different spatial scales. This paper develops an infrastructure risk and resilience evaluation index system in city and urban areas based on resilience characteristics. Then, a comprehensive infrastructure resilience evaluation is established based on the risk–resilience coupling mechanism. The overall characteristics of comprehensive infrastructure resilience are then identified. The resilience transmission level and the causes of resilience effects are analyzed based on the principle of resilience scale. Additionally, infrastructure resilience enhancement strategies under different risk scenarios are proposed. In the empirical study of Zhengzhou City, comprehensive infrastructure resilience shows significant clustering in the city area. It is high in the central city and low in the periphery. Specifically, it is relatively high in the southern and northwestern parts of the airport economy zone (AEZ) and low in the center. The leading driving factors in urban areas are risk factors like flood and drought, hazardous materials, infectious diseases, and epidemics, while resilience factors include transportation networks, sponge city construction, municipal pipe networks, and fire protection. This study proposes a “risk-resilience” coupling framework to evaluate and analyze multi-hazard risks and the multi-system resilience of urban infrastructure across multi-level spatial scales. It provides an empirical resilience evaluation framework and enhancement strategies, complementing existing individual dimensional risk or resilience studies. The findings could offer visualized spatial results to support the decision-making in Zhengzhou’s resilient city planning outline and infrastructure special planning and provide references for resilience assessment and planning in similar cities.
Article
Full-text available
Introduction Risk management is essential for quality assurance in modern healthcare organizations. Risk matrices are widely used to evaluate risks in healthcare settings; however, this approach has noteworthy weaknesses and limitations. This paper introduces a novel risk evaluation model that utilizes multicriteria decision-making and fuzzy logic to enhance the transparency and quality of the risk evaluation process in healthcare. Methods The Multicriteria Evaluation Model was developed using the Decision Expert method and expert knowledge integration. Fuzzy logic was integrated within the model, using partial degrees of membership and probabilistic analysis, to address uncertainties inherent to healthcare risk evaluation. The evaluation model was tested with healthcare professionals active in the field of risk management in clinical practice and compared with the risk matrix. Results The designed evaluation model utilizes multicriteria decision-making while encompassing the risk matrix framework to boost user understanding and enable meaningful comparison of results. Compared with the risk matrix, the model provided similar or marginally higher risk-level evaluations. The use of degrees of membership enables evaluators to articulate a wide range of plausible risk consequences, which are often overlooked or ambiguously addressed in the traditional risk matrix approach. Discussion and Conclusions The evaluation model demonstrates increased transparency of the decision-making process and facilitates in-depth analysis of the evaluation results. The utilization of degrees of membership revealed distinct strategies for handling uncertainty among participants, highlighting the weaknesses of using a single-value evaluation approach for the presented and similar decision problems. The presented approach is not limited to healthcare-related risk evaluation but has the capacity to improve risk evaluation practices in diverse settings.
Article
Purpose The construction sector is highly prone to accidents, traditionally assessed using subjective qualitative measurements. To enhance the allocation of risk management resources and identify high-risk projects during pre-construction, an objective and quantitative approach is necessary. This study introduces a three-step clustering methodology to quantitatively evaluate accident risk levels in construction projects. Design/methodology/approach In the first step, accident and total construction revenue by project were collected to calculate accident probabilities. In the second step, accident probabilities were calculated by project type using the data collected in the first step. After that, benchmark models were suggested using clustering methods to identify high-risk project types for risk management. Before suggesting the benchmark models, an uncertainty analysis was conducted due to the limited amount of data. In the third step, the suggested benchmark models were validated for accuracy. Findings The results categorized risk levels for fatalities and injuries into four distinct groups. Validation through ordinal logistic regression demonstrated high explanatory power, with fatality risk levels ranging from 79.9 to 100% and injury risk levels from 90.3 to 100%. Originality/value This benchmark model facilitates effective comparisons and analyses across various construction sectors and countries, offering a robust quantitative standard for risk management. By identifying high-risk projects such as “Dam,” this methodology enables better resource allocation during the pre-construction phase, thereby improving overall safety management in the construction industry and providing a basis for legislative applications.
Article
Full-text available
Background Hurricanes and other wind events are significant disturbances that affect coastal urban forests around the world. Past research has led to the creation of wind resistance ratings for different tree species, which can be used in urban forest management efforts to mitigate the effects of these storms. While useful, these ratings have been limited to species common to urban forestry in Florida, USA. Methods Drawing on past ratings and data from a global literature review on tropical storm research, we created a machine learning model to broaden both the geographic coverage and the variety of species currently assessed for their resistance to wind. Results We assigned wind resistance ratings to 281 new species based on the available data and our modelling efforts. The model accuracy and agreement with the original ratings when applied to the testing data set was high with 91% accuracy. Conclusions Our study demonstrated how a machine learning algorithm can be used to expand rating systems to include new species given sufficient data. Communities can use the expanded wind resistance rating species list to choose wind resistant species for planting and focus risk assessment on low wind resistant trees.
Article
Full-text available
Effective management of health crises requires public health preparedness and response, especially in urban settings where the complexity and scope of catastrophes pose considerable challenges. The integration of project management frameworks with public health policies is highlighted in this review, which investigates the optimization of emergency response systems using a project management methodology. One of the main topics covered is the adoption of cutting-edge technologies that improve real-time monitoring, predictive analytics, and resource allocation, such as artificial intelligence (AI), big data, and the Internet of Things (IoT). The review also discusses how crucial it is to take ethics into account when making decisions, how to distribute resources fairly, and how to actively engage communities to build resilience. Innovations in project management technologies and tools are emphasized as critical to enhancing response times and accommodating changing circumstances. The review also emphasizes the necessity of ongoing learning and development based on prior experience to improve preparedness tactics and overall efficacy. Public health systems can respond to urban health emergencies in a more coordinated, equitable, and efficient manner by combining these components, which will ultimately improve outcomes and resilience in affected populations.
Article
Full-text available
Safe drinking water is key to individual and community health. Water safety is often evaluated based on whether or not a community's drinking water meets the quality standards specified by a governing authority. These water quality standards address many microbial and chemical water safety risks but may not capture risks that are difficult to quantify or community-specific needs and preferences. Water safety planning, first introduced by the World Health Organization, is a more holistic approach that aims to integrate water system stakeholders, system mapping, hazard identification and matrices to better characterise risk. In this study, we documented previous efforts to apply water safety plans (WSPs) in Arctic jurisdictions and evaluated existing risk scoring systems for potential application to Nunavut, an Arctic territory in Canada. The observations from the evaluation informed the development of a preliminary WSP framework for Nunavut which considers both past frequency and the existing hazard barriers in place when determining the likelihood score.
Article
Purpose The objective of this review is to provide a comprehensive analysis of risk management practices in the healthcare sector, with a particular focus on identifying challenges and strategies in Moroccan hospitals. Design/methodology/approach A literature search was carried out on several academic search engines using search terms reflecting the relationship between risk management and public hospitals in Morocco. Findings Moroccan public hospitals are confronted with several dysfunctions that can be sources of multiple risks. These influence the quality of care provided to patients and can sometimes be life-threatening. The risk management process can help health professionals, researchers and risk managers to be agile and to identify and anticipate risks in order to avoid serious accidents that can affect the whole organization, especially in light of the lived experience of the COVID-19 pandemic. Originality/value Protecting human life in an environment where risks are omnipresent is a dilemma that every hospital organization must confront. Risk management in the hospital is not a simple process, given the interaction of several components and the sensitivity of the field. Risk management in these establishments must be rigorous because every error can cost a human life. In this sense, analyzing risk management processes in Moroccan hospitals as they actually operate enables the identification of shortcomings, in order to master the risk management system and thus protect goods and services as well as human life, which is the ultimate purpose of the hospital organization's existence.
Chapter
This study aims to perform a bibliometric analysis to map the development of the enterprise risk management (ERM) field. Post-1996 publications on enterprise risk management in the Web of Science database are analyzed in this scope. Bibliometric analysis of 597 publications provides a map of keywords, authors, countries, and institutions and a framework for following this literature over 25 years. According to the results of the research, in the field of ERM, an interdisciplinary field of study that attracts the attention of not only the academic world but also the business world, there has been a decrease in the number of publications in recent years. However, studies on new and different subjects have been conducted, and academic studies in ERM have shifted from developed to developing countries. It has been observed that proximity or language similarity does not affect the cluster formed by publishing countries.
Article
Damage, out-of-service conditions and interruptions of activity triggered by natural events in healthcare facility networks highlight their vulnerability and the severity of the consequences for post-event emergency phases. In this perspective, a key role is played by rational and effective risk assessment methods that guide the design of mitigation countermeasures preventing loss of operativity of the structures and of the services to patients in the event of natural hazards. In this paper an approach to a comprehensive workplace risk assessment of healthcare facilities in seismic-prone areas is proposed. The novel feature of the methodology is the introduction of an operational probabilistic framework that integrates the survey and assessment actions commonly adopted for occupational safety of health facilities, in which non-structural components, medical equipment and installations are investigated and monitored, with those related to the overall performance of the structures. It is intended as a support for (i) the implementation of preventive, practice-oriented risk mitigation measures and for (ii) near-real-time decision-making concerning the prioritization of on-site inspections aimed at usability assessment in the event of an earthquake, and the associated level of detail. Two software applications have been developed and are presented herein to support risk analyses, encompassing the two relevant stages of data collection and occupational safety risk computation, including structural risk. An explanatory case referring to the health facility network of the Molise Region (Italy) is illustrated and discussed, highlighting its value in supporting a comprehensive workplace risk assessment in a rational and practice-oriented way.
Article
The increasing complexity of systems is demanding a paradigm shift in risk management frameworks (RMFs). This study adopts a systems thinking approach to conduct an empirically grounded analysis (EGA) of the risk management practices of Flight Test crews operating in a dynamic environment with catastrophic consequences. Extending upon qualitative research that elicited the unique RMF of Flight Test crews, the EGA examines the academic theory underlying why the Flight Test crew approach to applying multiple mitigations to system hazards is effective. Grounded in risk and utility theory, this research then presents a novel RMF that aligns effective risk management tools with the intricacy level (dynamism, determinism and latency) of the underlying system, categorized using a Cynefin ontological framework. This novel RMF accommodates the attributes of complexity exhibited by socio‐technical systems, enabling effective (and therefore efficient) resource allocation when mitigating risk in complex systems. Using the systems thinking approach of the Flight Test crews, this EGA contributes a validated, generalized RMF to support decision‐making in organizations operating complex systems.
Article
Full-text available
Introduction: A risk assessment matrix is a tool used in a project's risk assessment process to identify the probability of risks and evaluate the potential damage caused by those risks. Generally, a risk assessment matrix is drawn in two-dimensional form, with two factors: the severity of the accident and the probability of its occurrence. The purpose of this study is to develop a specific risk assessment matrix in three-dimensional form, using the accident severity grade (ASG) rating system and the accident probability, and taking into account a preventive approach that supports occupational injury risk assessment in the automobile industry. Material and Methods: This cross-sectional study was conducted in 1402 (2023) in one of the automobile assembly industries. One hundred cases were randomly selected from the reports of this industry's past accidents. An ASG scoring checklist was designed and completed by experts to assess the severity of accidents. Then, considering the ASG score, the frequency of the accident, and its preventability, a three-dimensional risk assessment matrix specific to this industry was developed. Results: According to the accident analysis, a total of 658 accidents and 15,019 lost working days were recorded in this period. The most influential factor in the occurrence of accidents was "surface condition" (influence factor = 0.6), and the least influential was "weather conditions" (influence factor = 0.028). The results of the three-dimensional matrix show that as the ability to prevent accidents increases, the risk of accidents decreases. Conclusion: Using the accident severity grade (ASG) and preventability in the proposed three-dimensional risk assessment matrix, accident severity can be quantified immediately after an accident. This approach allows workplaces to be monitored during the accident, leading to timely control and risk management implementation.
Article
Purpose By exploring the halalness and food safety risks from the perspective of technology and the relationship among them, this study aims to make quantitative predictions of such risks in the broiler supply chain to determine the critical control points (CCPs) in Hazard Analysis Critical Control Point (HACCP). Design/methodology/approach This study integrates Interpretive Structural Modeling (ISM) and Bayesian Network (BN) to achieve the objectives. Data were collected from focus group discussions (FGDs) with experts and direct observations at the broiler supply chain. Findings This paper identified 19 risks in the Indonesian broiler supply chain. The risk for halalness and food safety reached 30.92%, indicating that assuring halalness and food safety remains improbable or unlikely. The two CCPs of halalness and food safety are the knife’s sharpness and the vehicle’s storage temperature. Research limitations/implications This study quantifies the halalness and food safety risks in the Indonesian broiler supply chain, but it only involves one step forward and one step backward in the slaughterhouse’s chain. Practical implications The findings can provide insights for stakeholders, such as business owners, employees, management system auditors and consumers, regarding the critical control points of halalness and food safety in the broiler supply chain to improve the halalness and food safety management systems. Originality/value This study’s novelty lies in the examination of halalness and food safety risks using a risk prediction model to determine CCPs for the HACCP plan in the broiler supply chain in Indonesia.
Article
Full-text available
Improving hazard management in maritime transport is essential to maintain the reliability and sustainability of the industry, ensure safety and security, and support global trade and economic growth. This paper is aimed at analyzing the hazards of passenger-cargo ferries (PCFs). The novelty of this paper is fourfold: (1) developing a revised risk matrix (RRM) model that considers the adaptability-based resilience of the organization to assess PCF hazards, (2) identifying risk factors (RFs) in ferry navigation and developing adaptive strategies to mitigate hazards, (3) applying the fuzzy best–worst method (BWM) to determine the RFs' weights, and (4) employing the leading Taiwanese passenger-cargo ferry operator (the Taiwan-ferries case) as an empirical study to verify the proposed model. The empirical results can provide pragmatic information for ferry operators to improve the safety of their ferry navigation. Additionally, the proposed RRM model can provide a methodological reference for related research in hazard management.
Article
Full-text available
The success of tunneling projects is crucial for infrastructure development. However, the potential leakage risk is particularly challenging due to the inherent uncertainties and fuzziness involved. To address this demanding challenge, a hybrid approach integrating copula theory, the cloud model, and the risk matrix is proposed. The dependence of multiple risk-related influential factors is explored through the construction of the copula-cloud model, and the diverse information is fused by applying the risk matrix to gain a crisp risk result. A case study is performed to test the applicability of the proposed approach, in which a risk index system consisting of nine critical factors is developed and Sobol-enabled global sensitivity analysis (GSA) is incorporated to investigate the contributions of different factors to the risk magnitude. Key findings are as follows: (1) Risk statuses of the studied three tunnel sections are perceived as under grade I (safe), II (low-risk), and III (medium-risk), respectively, and the waterproof-material aspect is found to be prone to deteriorating the tunnel sections. Furthermore, the proposed approach allows for a better understanding of the trends in the risk statuses of the tunnel sections. (2) Strong interactions between influential factors exist and exert impacts on the final risk results, proving the necessity of studying the factor dependence. (3) The developed neutral risk matrix presents strong robustness and displays a higher recognition capacity in risk assessment. The novelty of this research lies in the consideration of dependence and uncertainty in multisource information fusion with a hybrid copula-cloud model, enabling a robust risk assessment to be performed under different risk matrices with varying degrees of risk tolerance.
Thesis
Full-text available
Within safety risk management, risk aggregation methods have emerged as a critical consideration, yet a significant research-practice gap persists. This research explores this complex interplay by investigating the influence of risk assessor experience on the adoption of risk aggregation methods. The study undertook an initial action research stage, consisting of a literature review to identify current risk aggregation methodologies and the challenges associated with their adoption. This enabled the development of a normalised framework for semi-quantitative risk aggregation. The action research stage was followed by a formal research questionnaire, administered to a purposive sample, enabling the analysis of influential factors shaping support for the adoption of risk aggregation. The findings highlight a significant correlation between risk assessors' experience and their endorsement of risk aggregation methods. Experienced assessors exhibit heightened support, emphasising the pivotal role of expertise in shaping attitudes toward risk aggregation. The findings and recommendations from this research provide insights for organisations seeking to enhance the integration of risk aggregation approaches into safety risk management frameworks, in order to support effective decision-making and risk assessment practices. Keywords: Risk Aggregation, Risk Assessment, Semi-Quantitative Risk Assessment, Risk Matrix, Decision-Making, Risk Assessor Experience, Safety Risk Management
Article
Full-text available
Serverless computing is one of the recent compelling paradigms in cloud computing. Serverless computing can quickly run user applications and services regardless of the underlying server architecture. Despite the availability of several commercial and open-source serverless platforms, there are still some open issues and challenges to address. One of the key concerns in serverless computing platforms is security. Therefore, in this paper, we present a multi-layer abstract model of serverless computing for a security investigation. We conduct a quantitative analysis of security risks for each layer. We observe that the Attack Tree and Attack-Defense Tree methodologies are viable approaches in this regard. Consequently, we make use of the Attack Tree and the Attack-Defense Tree to quantify the security risks and countermeasures of serverless computing. We also propose a novel measure called the Relative Risk Matrix (RRM) to quantify the probability of attack success. Stakeholders including application developers, researchers, and cloud providers can potentially apply these findings and implications to better understand and further enhance the security of serverless computing.
Article
Full-text available
Risk assessment is a crucial component in various fields, including finance, healthcare, engineering, and environmental science. The goal of risk assessment is to identify potential hazards, evaluate their likelihood, and estimate the consequences to mitigate adverse effects. Among the different approaches to risk assessment, semi-quantitative risk assessment (SQRA) stands out as a versatile method that combines the strengths of both qualitative and quantitative techniques. This article explores the fundamentals, methodology, advantages, and applications of SQRA.
Article
Since its foundation, the Joint Committee on Structural Safety (JCSS) has been engaged in the discussion of methods for determining the reliability of components, the calibration of standards, and the risk modelling of systems. Publications regularly explain which methods have which advantages. The drawbacks and pitfalls that challenge rational decisions, and that help to develop and find more appropriate methods for practice, are often not documented in the literature. Such problems can lead to decisions that are not rational from a decision-theoretic point of view, some of which are worse than a random decision. Events with a very small probability of occurrence, in particular, provide hardly any feedback from reality, so evidence-based analysis of decisions is not possible. Careful selection of methods and knowledge of the underlying assumptions are crucial to rational decisions. This paper discusses some of the pitfalls identified in the discussions of the JCSS. It spans aspects of uncertainty quantification, uncertainty propagation and consequence assessment, as well as approaches found and used in practice for decision-making (e.g. probability interpretations, risk aversion, risk matrices and FN diagrams). The paper can be seen as a documentation of outtakes from the discussions which led to the joint understanding and approach of the JCSS. It does not claim to be complete concerning all the possible pitfalls in risk assessments and system identification, but it provides important reflections and indicates where the eyes must be kept open. Further, the paper points to a way of rational decision-making that accounts for the uncertainties in information.
Article
Increased connectivity renders ships more cost-effective but also vulnerable to cyberattacks. Since ships are assets of significant value and importance, they constitute lucrative targets for cyber-attacks. The power and propulsion functions are among the most safety-critical and essential for ship operations, and the use of Dual-Fuel (DF) engines for power generation and propulsion has become very popular in recent years. The aim of this research is the identification and analysis of the risks of potential cybersecurity attack scenarios on a DF engine of an inland waterway ship. For this purpose, we employ an adapted version of Failure Modes, Vulnerabilities and Effects Analysis (FMVEA). In our approach we demonstrate how the implementation of FMVEA can be interconnected with the existing assurance processes for maritime engines and with novel developments in risk theory. We also provide insights into the riskiest cybersecurity attacks on the DF engine and how to reduce their risks.
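To make the FMVEA style of analysis concrete, here is a sketch of a worksheet row combining severity and likelihood ratings into a risk score. The field names, 1-5 scales, and example rows are illustrative assumptions; the paper's adapted FMVEA and its actual ratings are not reproduced here.

```python
# Sketch of an FMVEA-style worksheet for a Dual-Fuel engine scenario.
# Field names, scales, and example rows are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class FmveaRow:
    component: str
    vulnerability: str        # failure mode or exploitable weakness
    effect: str               # system-level effect of a successful attack
    severity: int             # 1 (negligible) .. 5 (catastrophic)
    likelihood: int           # 1 (remote)     .. 5 (frequent)

    @property
    def risk(self) -> int:
        return self.severity * self.likelihood

rows = [
    FmveaRow("engine control unit", "unauthenticated bus access",
             "spoofed fuel-mode switch command", severity=5, likelihood=2),
    FmveaRow("remote diagnostics link", "weak credentials",
             "unauthorized parameter changes", severity=4, likelihood=3),
]
for r in sorted(rows, key=lambda r: r.risk, reverse=True):
    print(f"{r.component}: risk={r.risk} ({r.effect})")
```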
Article
Full-text available
We show that if performance measures in stochastic and dynamic scheduling problems satisfy generalized conservation laws, then the feasible region of achievable performance is a polyhedron called an extended polymatroid, that generalizes the classical polymatroids introduced by Edmonds. Optimization of a linear objective over an extended polymatroid is solved by an adaptive greedy algorithm, which leads to an optimal solution having an indexability property (indexable systems). Under a certain condition the indices possess a stronger decomposition property (decomposable system). The following problems can be analyzed using our theory: multiarmed bandit problems, branching bandits, scheduling of multiclass queues (with or without feedback), scheduling of a batch of jobs. Consequences of our results include: (1) a characterization of indexable systems as systems that satisfy generalized conservation laws, (2) a sufficient condition for indexable systems to be decomposable, (3) a new linear programming proof of the decomposability property of Gittins indices in multiarmed bandit problems, (4) an approach to sensitivity analysis of indexable systems, (5) a characterization of the indices of indexable systems as sums of dual variables, and an economic interpretation of the branching bandit indices in terms of retirement options, (6) an analysis of the indexability of undiscounted branching bandits, (7) a new algorithm to compute the indices of indexable systems (in particular Gittins indices), as fast as the fastest known algorithm, (8) a unification of Klimov’s algorithm for multiclass queues and Gittins’ algorithm for multiarmed bandits as special cases of the same algorithm, (9) a closed formula for the maximum reward of the multiarmed bandit problem, with a new proof of its submodularity and (10) an understanding of the invariance of the indices with respect to some parameters of the problem. Our approach provides a polyhedral treatment of several classical problems in stochastic and dynamic scheduling and is able to address variations such as: discounted versus undiscounted cost criterion, rewards versus taxes, discrete versus continuous time, and linear versus nonlinear objective functions.
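For orientation, the generalized conservation laws mentioned in the abstract can be written as follows (notation loosely following Bertsimas and Niño-Mora; the coefficients and set function are problem-specific, so this is a schematic statement rather than the paper's full development):

```latex
% Generalized conservation laws and the extended polymatroid they induce.
% N is the set of job/bandit classes, x the performance vector, and
% a_i^S > 0 and b(S) are problem-specific coefficients.
\[
  \sum_{i \in S} a_i^{S} x_i \;\ge\; b(S) \quad \text{for all } S \subsetneq N,
  \qquad
  \sum_{i \in N} a_i^{N} x_i \;=\; b(N).
\]
% The achievable performance region is the polyhedron of vectors x
% satisfying these constraints; optimizing a linear objective over it is
% solved by the adaptive greedy algorithm, which yields the priority indices.
```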
Article
Full-text available
In this paper we study both market risks and nonmarket risks, without complete markets assumption, and discuss methods of measurement of these risks. We present and justify a set of four desirable properties for measures of risk, and call the measures satisfying these properties “coherent.” We examine the measures of risk provided and the related actions required by SPAN, by the SEC/NASD rules, and by quantile-based methods. We demonstrate the universality of scenario-based methods for providing coherent measures. We offer suggestions concerning the SEC method. We also suggest a method to repair the failure of subadditivity of quantile-based methods.
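For reference, the four coherence axioms can be stated as follows, in the convention where X and Y are future net worths and ρ(X) is the required capital (sign conventions vary across texts):

```latex
% The four coherence axioms for a risk measure rho on future net worths
% X, Y, with the riskless rate normalized to 1:
\begin{align*}
  &\text{Monotonicity:}          && X \le Y \;\Rightarrow\; \rho(Y) \le \rho(X) \\
  &\text{Subadditivity:}         && \rho(X + Y) \le \rho(X) + \rho(Y) \\
  &\text{Positive homogeneity:}  && \rho(\lambda X) = \lambda\,\rho(X), \quad \lambda \ge 0 \\
  &\text{Translation invariance:}&& \rho(X + \alpha) = \rho(X) - \alpha, \quad \alpha \in \mathbb{R}
\end{align*}
```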
Article
Full-text available
Standard classification algorithms are generally designed to maximize the number of correct predictions (concordance). The criterion of maximizing concordance may not be appropriate in certain applications: in practice, some applications emphasize high sensitivity (e.g., clinical diagnostic tests) and others high specificity (e.g., epidemiological screening studies). This paper considers the effects of the decision threshold on sensitivity, specificity, and concordance for four classification methods: logistic regression, classification trees, Fisher's linear discriminant analysis, and weighted k-nearest neighbors. We investigated the use of decision-threshold adjustment to improve the sensitivity or specificity of a classifier under specific conditions. We conducted a Monte Carlo simulation showing that as the decision threshold increases, sensitivity decreases and specificity increases, while concordance values in an interval around the maximum concordance are similar. For specified sensitivity and specificity levels, an optimal decision threshold can be determined within an interval around the maximum concordance that meets the specified requirement. Three example data sets are analyzed for illustration.
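The qualitative effect described above is easy to reproduce. The sketch below sweeps the decision threshold of a logistic regression on synthetic data and reports sensitivity, specificity, and concordance at each cut-off; the data and thresholds are placeholders, not the paper's study design.

```python
# Sketch: effect of the decision threshold on sensitivity, specificity,
# and concordance (accuracy) for a logistic regression classifier.
# Synthetic data stand in for the paper's example data sets.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
p = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]

for t in (0.3, 0.5, 0.7):
    pred = (p >= t).astype(int)
    sens = np.mean(pred[y == 1] == 1)     # true positive rate
    spec = np.mean(pred[y == 0] == 0)     # true negative rate
    conc = np.mean(pred == y)             # overall concordance
    print(f"t={t:.1f}  sensitivity={sens:.3f}  specificity={spec:.3f}  concordance={conc:.3f}")
```

Raising t trades sensitivity for specificity while concordance stays nearly flat near its maximum, which is the interval the paper exploits to meet a specified sensitivity or specificity requirement.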
Article
A major investment decision for individual and institutional investors alike is to choose between different asset classes, i.e., equity investments and interest-bearing investments. The asset allocation decision determines the ultimate risk and return of a portfolio. The asset allocation problem is frequently addressed either through a static analysis, based on Markowitz's mean-variance model, or dynamically but often myopically through the application of analytical results for special classes of utility functions, e.g., Samuelson's fixed-mix result for constant relative risk aversion. Only recently could the full dynamic and multi-dimensional nature of the asset allocation problem be captured through applications of stochastic dynamic programming and stochastic programming techniques. This chapter reviews the different approaches to asset allocation and presents a novel approach based on stochastic dynamic programming and Monte Carlo sampling that permits one to consider many rebalancing periods, many asset classes, dynamic cash flows, and a general representation of investor risk preference. It presents a novel approach to representing utility by directly modeling risk aversion as a function of wealth, and thus provides a general framework for representing investor preference. It shows how the optimal asset allocation depends on the investment horizon, wealth, and the investor's risk preference and how it therefore changes over time depending on cash flow and the returns achieved. It demonstrates how dynamic asset allocation leads to superior results compared to static or myopic techniques. It presents examples of dynamic strategies for various typical risk preferences and multiple asset classes.
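To give a flavor of the stochastic-dynamic-programming viewpoint, the toy sketch below runs backward induction over a wealth grid, choosing at each period the equity fraction that maximizes expected utility. The return scenarios, grid, CRRA utility, and two-asset setup are illustrative assumptions, far simpler than the chapter's model.

```python
# Toy backward-induction sketch of dynamic asset allocation. All
# numbers (scenarios, grid, utility) are illustrative assumptions.

import numpy as np

wealth_grid = np.linspace(0.5, 4.0, 60)
equity = np.array([0.80, 1.00, 1.30])     # three equity gross-return scenarios
bond = 1.03                               # riskless gross return
fractions = np.linspace(0.0, 1.0, 11)     # candidate equity weights
gamma = 3.0                               # relative risk aversion

def utility(w):                           # CRRA utility
    return w ** (1 - gamma) / (1 - gamma)

V = utility(wealth_grid)                  # terminal value function
for t in range(10):                       # 10 rebalancing periods, backward
    best = np.full_like(wealth_grid, -np.inf)
    for f in fractions:
        # next-period wealth for each grid point and scenario
        w_next = np.outer(wealth_grid, f * equity + (1 - f) * bond)
        # expected continuation value (linear interpolation, clamped at edges)
        ev = np.interp(w_next, wealth_grid, V).mean(axis=1)
        best = np.maximum(best, ev)
    V = best
```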
Article
This paper evaluates the variable selection performed by several machine-learning techniques on a myocardial infarction data set. The focus of this work is to determine which of 43 input variables are considered relevant for prediction of myocardial infarction. The algorithms investigated were logistic regression (with stepwise, forward, and backward selection), backpropagation for multilayer perceptrons (input relevance determination), Bayesian neural networks (automatic relevance determination), and rough sets. An independent method (self-organizing maps) was then used to evaluate and visualize the different subsets of predictor variables. Results show good agreement on some predictors, but also variability among different methods; only one variable was selected by all models.
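One of the selection strategies the paper compares, forward stepwise selection with logistic regression, can be sketched as a greedy cross-validated search. The data set below is a synthetic placeholder; the study used 43 myocardial-infarction input variables.

```python
# Sketch of forward stepwise variable selection with logistic regression.
# Synthetic data stand in for the myocardial-infarction data set.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=10, n_informative=3,
                           random_state=0)
selected: list[int] = []
remaining = list(range(X.shape[1]))

for _ in range(3):                        # greedily pick the 3 best variables
    scores = {
        j: cross_val_score(LogisticRegression(), X[:, selected + [j]], y,
                           cv=5).mean()
        for j in remaining
    }
    best = max(scores, key=scores.get)
    selected.append(best)
    remaining.remove(best)

print("selected variable indices:", selected)
```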
Article
Qualitative systems for rating animal antimicrobial risks using ordered categorical labels such as "high," "medium," and "low" can potentially simplify risk assessment input requirements used to inform risk management decisions. But do they improve decisions? This article compares the results of qualitative and quantitative risk assessment systems and establishes some theoretical limitations on the extent to which they are compatible. In general, qualitative risk rating systems satisfying conditions found in real-world rating systems and guidance documents and proposed as reasonable make two types of errors: (1) Reversed rankings, i.e., assigning higher qualitative risk ratings to situations that have lower quantitative risks; and (2) Uninformative ratings, e.g., frequently assigning the most severe qualitative risk label (such as "high") to situations with arbitrarily small quantitative risks and assigning the same ratings to risks that differ by many orders of magnitude. Therefore, despite their appealing consensus-building properties, flexibility, and appearance of thoughtful process in input requirements, qualitative rating systems as currently proposed often do not provide sufficient information to discriminate accurately between quantitatively small and quantitatively large risks. The value of information (VOI) that they provide for improving risk management decisions can be zero if most risks are small but a few are large, since qualitative ratings may then be unable to confidently distinguish the large risks from the small. These limitations suggest that it is important to continue to develop and apply practical quantitative risk assessment methods, since qualitative ones are often unreliable.
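A tiny worked example shows how a reversed ranking arises. With a two-band scheme that labels likelihood and severity "low" below 0.5 and "high" at or above it, a risk just inside both high bands can outrank a quantitatively larger risk that falls just below one cut-off; the cut-offs and values below are illustrative.

```python
# Worked toy example of a "reversed ranking": a qualitative scheme that
# bands likelihood and severity as low (<0.5) or high (>=0.5) rates risk
# A above risk B even though B's quantitative risk is larger.

def qual(p, c):
    band = lambda v: "high" if v >= 0.5 else "low"
    return (band(p), band(c))

A = (0.50, 0.50)   # quantitative risk = 0.25,  rated ("high", "high")
B = (0.45, 0.95)   # quantitative risk = 0.4275, rated ("low",  "high")

print("A:", qual(*A), "risk =", A[0] * A[1])
print("B:", qual(*B), "risk =", B[0] * B[1])
```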
Article
Aggregate exposure metrics based on sums or weighted averages of component exposures are widely used in risk assessments of complex mixtures, such as asbestos-associated dusts and fibers. Allowed exposure levels based on total particle or fiber counts and estimated ambient concentrations of such mixtures may be used to make costly risk-management decisions intended to protect human health and to remediate hazardous environments. We show that, in general, aggregate exposure information alone may be inherently unable to guide rational risk-management decisions when the components of the mixture differ significantly in potency and when the percentage compositions of the mixture exposures differ significantly across locations. Under these conditions, which are not uncommon in practice, aggregate exposure metrics may be "worse than useless," in that risk-management decisions based on them are less effective than decisions that ignore the aggregate exposure information and select risk-management actions at random. The potential practical significance of these results is illustrated by a case study of 27 exposure scenarios in El Dorado Hills, California, where applying an aggregate unit risk factor (from EPA's IRIS database) to aggregate exposure metrics produces average risk estimates about 25 times greater - and of uncertain predictive validity - compared to risk estimates based on specific components of the mixture that have been hypothesized to pose risks of human lung cancer and mesothelioma.
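The core arithmetic behind this "worse than useless" effect is simple, as the toy comparison below shows: when one component is far more potent than another, ranking sites by total counts can reverse the ranking by potency-weighted risk. The potencies and counts are illustrative, not the El Dorado Hills data.

```python
# Toy arithmetic for why aggregate exposure metrics can mislead when
# component potencies differ. All numbers are illustrative assumptions.

potency = {"amphibole": 1.0, "chrysotile": 0.01}   # assumed unit risks

# fiber counts per site, split by component
sites = {
    "site 1": {"amphibole": 1, "chrysotile": 99},   # aggregate count = 100
    "site 2": {"amphibole": 30, "chrysotile": 20},  # aggregate count = 50
}
for name, mix in sites.items():
    aggregate = sum(mix.values())
    risk = sum(potency[c] * n for c, n in mix.items())
    print(f"{name}: aggregate count = {aggregate:3d}, potency-weighted risk = {risk:.2f}")

# site 1 looks riskier by aggregate count (100 vs 50), but its weighted
# risk (1.99) is far below site 2's (30.2): ranking by the aggregate
# metric reverses the correct priority.
```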
Article
Learning vector quantization (LVQ) is described, with both the LVQ1 and LVQ3 algorithms detailed. This approach involves finding boundaries between classes based on codebook vectors that are created for each class using an iterative neural network. LVQ has an advantage over traditional boundary methods such as support vector machines in its ability to model many classes simultaneously. The performance of the algorithm is tested on a data set of the thermal properties of 293 commercial polymers, grouped into nine classes; each class in turn consists of several grades. The method is compared to the Mahalanobis distance method, which can also be applied to a multiclass problem. Validation of the classification ability is via iterative splits of the data into test and training sets. For the data in this paper, LVQ is shown to perform better than the Mahalanobis distance: the latter method performs best when data are distributed in an ellipsoidal manner, while LVQ makes no such assumption and is primarily used to find boundaries. Confusion matrices of the misclassification of polymer grades are obtained and can be interpreted in terms of the chemical similarity of samples.
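The LVQ1 update rule is compact enough to sketch directly: the nearest codebook vector is moved toward a correctly classified sample and away from a misclassified one. The toy data, initialization, and learning-rate schedule below are placeholders, not the polymer study's setup.

```python
# Minimal LVQ1 training sketch on toy two-class data. Initialization,
# data, and learning-rate schedule are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # two toy classes

# one codebook vector per class, initialized from the class means
codebook = np.array([X[y == k].mean(axis=0) for k in (0, 1)])
labels = np.array([0, 1])

alpha = 0.05                                     # learning rate
for epoch in range(20):
    for x, t in zip(X, y):
        w = np.argmin(((codebook - x) ** 2).sum(axis=1))   # nearest codebook
        sign = 1.0 if labels[w] == t else -1.0             # LVQ1 rule
        codebook[w] += sign * alpha * (x - codebook[w])
    alpha *= 0.9                                 # decay the learning rate

# classify every sample by its nearest codebook vector
pred = labels[((codebook[None, :, :] - X[:, None, :]) ** 2).sum(-1).argmin(1)]
print("training accuracy:", (pred == y).mean())
```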
Cox, L. A. Jr., Babayev, D., & Huber, W. (2005). Some limitations of qualitative risk rating systems. Risk Analysis, 25(3), 651–662.
Renfroe, N. A., & Smith, J. L. (2007). Whole Building Design Guide: Threat/Vulnerability Assessments and Risk Analysis. Washington, DC: National Institute of Building Sciences. Available at http://www.wbdg.org/design/riskanalysis.php. (Last accessed 8-19-2007.)
Australian Government. (2006). Climate Change Impacts & Risk Management: A Guide for Business and Government. Australian Greenhouse Office, Department of the Environment and Heritage. Canberra, Australia: Commonwealth of Australia. Available at http://www.greenhouse.gov.au/impacts/publications/pubs/risk-management.pdf. (Last accessed 8-19-2007.)
MITRE Risk Management Toolkit. (1999–2007). Available at http://www.mitre.org/work/sepo/toolkits/risk/ToolsTechniques/RiskMatrix.html. (Last accessed 11-19-2007.)
California Department of Transportation, Federal Highway Administration, California Division. (2007). Systems Engineering Guidebook for ITS, Version 2.0. Available at http://www.fhwa.dot.gov/cadiv/segb/views/document/Sections/Section3/3_9_4.htm.
Infanger, G. Chapter in Handbook of Assets and Liability Management.
GAO. (1998). Combating Terrorism: Threat and Risk Assessments Can Help Prioritize and Target Program Investments. Washington, DC: U.S. General Accounting Office. Available at http://www.gao.gov/archive/1998/ns98074.pdf. (Last accessed 8-19-2007.)