Conference Paper

Application of Design Review to Probabilistic Risk Assessment in a Large Investment Project


Abstract

In this paper, we present a systematic and comprehensive Design Review (DR) process that is integrated into the design and engineering stages of final disposal facilities for spent nuclear fuel. The review process consists of seventeen interconnected phases, and the methodology of selected phases is described in more detail in the paper. The main tools in the design review process are probabilistic modeling, stochastic simulation schemes, and large-scale computer-aided calculation. Experience from applying the design review process at an early stage of the project design and development phase shows that it becomes possible to identify problem areas that may reduce system availability and safety, increase system life-cycle costs, and delay the design or operation start-up time.
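To make the stochastic simulation schemes mentioned above concrete, the sketch below estimates the steady-state availability of a single repairable unit by Monte Carlo sampling. This is a minimal illustration only, not the paper's actual model: the single-unit scope, exponential failure/repair times, and the MTBF/MTTR values are all assumptions.

```python
import random

def simulate_availability(mtbf, mttr, horizon, runs=2000, seed=0):
    """Monte Carlo estimate of availability for one repairable unit
    with exponentially distributed failure and repair times."""
    rng = random.Random(seed)
    up_total = 0.0
    for _ in range(runs):
        t, up = 0.0, 0.0
        while t < horizon:
            ttf = rng.expovariate(1.0 / mtbf)   # time to next failure
            up += min(ttf, horizon - t)          # count only time inside horizon
            t += ttf
            if t >= horizon:
                break
            t += rng.expovariate(1.0 / mttr)     # unit is down while under repair
        up_total += up / horizon
    return up_total / runs

# For this simple model the analytic steady-state value is MTBF / (MTBF + MTTR),
# i.e. 1000 / 1050 ~ 0.952; the simulation should land close to it.
print(simulate_availability(mtbf=1000.0, mttr=50.0, horizon=20000.0))
```

Such a kernel generalizes to system-level models by simulating many components against a fault-tree or block-diagram structure, which is where the large computer-aided calculations come in.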
... We have also noted the limitations of the standard tools during our decade-long experience developing the ELMAS [5] software. Various demanding use cases prompted us to add new advanced features for the modelling of complex relationships and dynamic operation changes [6,7,8,9,10]. The use of ELMAS for availability modelling of future circular colliders [11] at the European Organization for Nuclear Research (CERN) highlighted the challenge that the addition of customized domain-specific features required programming skills from the modeller. ...
... We developed OpenMARS to answer the needs we have encountered in various industry cases. For example, in cases from the metal industry, robotics, and the nuclear industry, we have required advanced features for (i) combination of FTA and FMEA analysis [8], (ii) multi-state modelling of partial process flows, (iii) dynamic rules for backup power supply use [9], and (iv) definition of exclusive stochastic consequences [10]. The extensive use of the advanced features was required in our particle collider availability model [11], which prompted us to develop OpenMARS. ...
This paper introduces an Open Modelling approach for Availability and Reliability of Systems (OpenMARS), which is developed for risk and performance assessment of large and complex systems with dynamic behaviours. The approach allows for combining the most common risk assessment and operation modelling techniques. This ensures a high degree of freedom for the modeller to accurately describe the system without limitations imposed by an individual technique. OpenMARS uses a platform-independent tabular format to define the modelling technique used, to create the model structure, and to assign the parameter values. We developed the format to enable straightforward manual model definition while maintaining database compatibility. This paper also presents our calculation engine for stochastic simulation-based analysis of OpenMARS models. Our intention is to use this approach as a basis for new software. We demonstrate the feasibility of OpenMARS with an example of a multi-state production process that is subject to failures. The example creates a comprehensive system model by combining interconnected failure logic, operation phase, and production function models. We believe that the advanced features of OpenMARS have wide-ranging applications for the analysis of reliability, performance, and energy efficiency of complex industrial processes.
... As built-in features, ELMAS supports fault tree analysis, cause-consequence analysis, and reliability block diagrams. ELMAS has an approximately decade-long development history with several published use cases in various fields of industry: i) a reliability analysis performed for a production-critical molding crane [164]; ii) maintenance robots for a radioactive environment [165]; and iii) a power supply grid for a spent nuclear fuel encapsulation plant [166]. ...
This thesis presents an availability model for the Future Circular hadron Collider. The model is based on the current operations of the Large Hadron Collider. The thesis shows that a hadron collider's availability for physics is a complex problem, as it depends on both the system availability and the operational cycle. Availability itself is not the key performance indicator, but it is closely linked to collision production and the so-called integrated luminosity that are essential for physics research. The thesis shows that taking the operational cycle into account is critical for modeling luminosity production. The thesis validates the model against LHC operations and shows preliminary results on the FCC availability and luminosity production. Ramentor Oy's ELMAS software was chosen as the platform for the model. ELMAS is designed as fault tree software; however, the developed model combines fault trees with Markov models. This feature was implemented by adding custom Java code and libraries to the models. This reliance on custom code was not the ideal solution and led to the development of the OpenMARS approach. This approach allows combining the most common risk assessment and operation modeling techniques and connecting the models made with these techniques. This thesis presents the basic concept of this approach and shows how the collider operations model can be implemented with it. The discussion section also provides ideas on how the study would have proceeded if the collaboration with Ramentor Oy had not been an option, and on further applications of the OpenMARS approach. Another issue encountered during the study was the lack of reliability data. This issue resulted in a study of reliability data sharing practices in industry, in which the author was a key contributor. The thesis presents the idea of using a similar concept in the accelerator field and comments on this.
The cost and profitability related to the operation and maintenance phase are significant for several industrial applications. Thus, the first proactive maintenance management action is to design out the maintenance work by designing out the critical failure modes and causes. The dependability standards (IEC 60300) provide the framework and methodology to design for maintainability at the project phase. However, the risk that a physical asset will fail is always present, as changes in the operating and loading conditions might initiate new failure modes. The philosophy of Industry 4.0 is to develop smart assets that enable real-time monitoring of dynamic asset behaviour. In this context, the dependability standards (IEC 60300) need to be updated to consider the technical requirements that support the intelligent maintenance process. Therefore, the purpose of this paper is to present a potential reference standard for "design for intelligent maintenance" that complies with Industry 4.0 requirements. This work illustrates the progress toward a unified standard body for dependability in Industry 4.0, which might lead to significant changes in the current state of the art in designing industrial assets. For example, of the 80,000 sensors attached to a modern oil and gas platform, only a few generate data for health monitoring and maintenance purposes; the majority are applied to detecting operational anomalies and to control.
The use of importance measures to analyze PRA results is discussed, and commonly used importance measures are defined. Two issues that potentially limit their usefulness are addressed. First, there is no simple relationship between importance measures evaluated at the single-component level and those evaluated at the level of a group of components; as a result, some of the commonly used importance measures are not realistic measures of the sensitivity of the overall risk to parameter value changes. Second, importance measures do not typically take parameter uncertainties into account, which raises the question of the robustness of conclusions drawn from importance analyses. The issues are explored in the context of both ranking and categorization of structures, systems, and components (SSCs) with respect to risk-significance and safety-significance for use in risk-informed regulatory analyses.
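The group-versus-single-component issue can be made concrete with a toy fault tree. In the sketch below, the top event has minimal cut sets {A} and {B, C}; the basic-event probabilities are invented for illustration, and the Birnbaum and Fussell-Vesely definitions applied are the standard textbook ones, not necessarily the exact formulations of the cited paper.

```python
def top_prob(pa, pb, pc):
    """P(top) for top event = A OR (B AND C): minimal cut sets {A}, {B, C}."""
    return 1.0 - (1.0 - pa) * (1.0 - pb * pc)

p = {"A": 0.01, "B": 0.05, "C": 0.05}  # assumed basic-event probabilities

def with_value(name, value):
    q = dict(p)
    q[name] = value
    return top_prob(q["A"], q["B"], q["C"])

def birnbaum(name):
    # Birnbaum importance: P(top | component failed) - P(top | component perfect)
    return with_value(name, 1.0) - with_value(name, 0.0)

def fussell_vesely(names):
    # FV importance of a set: fractional risk reduction if all members are perfect
    base = top_prob(p["A"], p["B"], p["C"])
    q = dict(p)
    for n in names:
        q[n] = 0.0
    return 1.0 - top_prob(q["A"], q["B"], q["C"]) / base

# Single-component values do not combine additively into a group value:
# here FV({B}) == FV({C}) == FV({B, C}), because {B, C} is a single cut set.
print(fussell_vesely(["B"]), fussell_vesely(["C"]), fussell_vesely(["B", "C"]))
```

In this example FV({B}) + FV({C}) is roughly twice FV({B, C}), so reading a group's risk significance off single-component rankings would overstate it, which is precisely the first issue the abstract raises.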
Summary: Introduction -- Methods (Probability concepts. Probability distributions for describing failures. Data manipulation concepts. Failure data. Reliability of simple systems. Reliability and availability of systems with repair. Fault tree analysis. Event tree analysis. Computer programs for fault tree analysis) -- Nuclear power risks (Risk concepts. Risks for light water reactors. Risks for liquid metal fast breeder and high temperature gas reactors. Risks for nuclear materials transportation. Risks for nuclear waste disposal) -- Other risk assessments (Comparison of risks. Risk-benefit assessment. Risk acceptance. Epilogue) -- Appendices (Some useful mathematical functions. Failure data. Some matrix mathematics. Failure modes and effects analysis. Light water reactor safety systems. Additional light water reactor safety study fault trees. The GO method. Answers to selected exercises)
Conference Paper
The risk priority number (RPN) methodology for prioritizing failure modes is an integral part of the automobile FMECA technique. The technique consists of ranking the potential failures from 1 to 10 with respect to their severity, probability of occurrence, and likelihood of detection in later tests, and multiplying the numbers together. The result is a numerical ranking, called the RPN, on a scale from 1 to 1000. Potential failure modes having higher RPNs are assumed to have a higher design risk than those having lower numbers. Although it is well documented and easy to apply, the method is seriously flawed from a technical perspective. This makes the interpretation of the analysis results problematic. The problems with the methodology include the use of the ordinal ranking numbers as numeric quantities, the presence of holes making up a large part of the RPN measurement scale, duplicate RPN values with very different characteristics, and varying sensitivity to small changes. Recommendations for an improved methodology are also given.
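The measurement-scale problems described above are easy to reproduce. The short sketch below (illustrative only, not code from the cited paper) enumerates every severity x occurrence x detection product and exposes the holes and duplicates in the nominal 1-1000 RPN scale.

```python
from itertools import product

# Enumerate every RPN reachable with S, O, D each ranked 1..10
rpns = sorted({s * o * d for s, o, d in product(range(1, 11), repeat=3)})

print(len(rpns))     # far fewer distinct values than the nominal 1000
print(11 in rpns)    # False: a "hole" -- no S*O*D combination yields 11
# Duplicate RPNs with very different risk character:
# (S=10, O=10, D=1) and (S=1, O=10, D=10) both give RPN = 100
print(10 * 10 * 1 == 1 * 10 * 10)  # True
```

The enumeration shows that only a small fraction of the 1-1000 scale is actually attainable, and that identical RPNs can arise from failure modes with very different severity profiles, which is why ordinal rankings multiplied together make interpretation problematic.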
Expanded FMEA (EFMEA)
  • Zigmund Bluvband
  • Pavel Grabov
  • Oren Nakar
Bluvband, Z., Grabov, P. and Nakar, O. Expanded FMEA (EFMEA). Proceedings of the Annual Reliability and Maintainability Symposium (RAMS), 2004.
Simulation of Reliability, Availability and Maintenance Costs. Recent Advances in Stochastic Operations Research II
  • P-E Hagmark
  • S Virtanen
Hagmark, P-E. and Virtanen, S. Simulation of Reliability, Availability and Maintenance Costs. In: Recent Advances in Stochastic Operations Research II, edited by Tadashi Dohi, Shunji Osaki and Katsushige Sawaki, Japan, 2009. ISBN 978-981-279-166-5.
An assessment of RPN prioritization in a Failure Mode Effects and Criticality Analysis
  • John B. Bowles
Bowles, J.B. An Assessment of RPN Prioritization in a Failure Mode Effects and Criticality Analysis. Proceedings of the Annual Reliability and Maintainability Symposium (RAMS), 2003.
Modelling and Analysis of Causes and Consequences of Failures
  • S Virtanen
  • P-E Hagmark
  • J-P Penttinen
Virtanen, S., Hagmark, P-E. and Penttinen, J-P. Modelling and Analysis of Causes and Consequences of Failures. Proceedings of the Annual Reliability and Maintainability Symposium (RAMS), January 23-26, 2006, Newport Beach, CA, USA, pp. 506-511.
Probabilistic Safety Assessment in the Chemical and Nuclear Industries
  • SINTEF
SINTEF PSA Handbook. Probabilistic Safety Assessment in the Chemical and Nuclear Industries.
Use of Importance Measures in Risk-Informed Regulatory Applications
  • M Cheok
  • G Parry
  • R Sherry
Cheok, M., Parry, G. and Sherry, R. "Use of Importance Measures in Risk-Informed Regulatory Applications". Reliability Engineering and System Safety, 1998.