Reliability Engineering & System Safety

Published by Elsevier BV

Print ISSN: 0951-8320

Articles


Comparison of Monte Carlo and Quasi Monte Carlo Sampling Methods in High Dimensional Model Representation
  • Conference Paper

October 2009

·

179 Reads

Balazs Feil

A number of new techniques which improve the efficiency of random sampling-high dimensional model representation (RS-HDMR) are presented. Comparison shows that quasi Monte Carlo based HDMR (QRS-HDMR) significantly outperforms RS-HDMR. RS/QRS-HDMR based methods also show faster convergence than the Sobol method for sensitivity indices calculation. Numerical tests prove that the developed methods for choosing optimal orders of polynomials and the number of sampled points are robust and efficient.
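To make the comparison concrete, here is a minimal sketch contrasting plain Monte Carlo with quasi Monte Carlo (Sobol' sequence) sampling on a simple test integrand. It is not the paper's RS-HDMR code; the integrand, dimension and sample size are assumptions chosen for illustration, and scipy.stats.qmc (SciPy 1.7+) is assumed to be available.

```python
# Illustrative sketch: plain Monte Carlo vs quasi Monte Carlo estimates of the mean
# of a separable test integrand on [0, 1]^d (a stand-in for a real model).
import numpy as np
from scipy.stats import qmc

def g(x):
    # Sobol' g-function with all coefficients equal to 1; its exact mean is 1.0.
    return np.prod((np.abs(4.0 * x - 2.0) + 1.0) / 2.0, axis=1)

d, n = 6, 2**12
rng = np.random.default_rng(0)

mc_est = g(rng.random((n, d))).mean()                              # plain Monte Carlo
qmc_est = g(qmc.Sobol(d, scramble=True, seed=0).random(n)).mean()  # quasi Monte Carlo

print(f"MC estimate:  {mc_est:.5f}")
print(f"QMC estimate: {qmc_est:.5f}  (exact value is 1.0 for this integrand)")
```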

Reliability and cost optimization of electronic devices considering the component failure rate uncertainty

July 2003

·

185 Reads

The objective of this paper is to present an efficient methodology to obtain the optimal system structure for power electronic devices using various component types, while considering the constraints on reliability and cost. The component failure rate uncertainty is taken into consideration and is modeled with two alternative probability distribution functions. The Latin Hypercube Sampling method is used to simulate the probability distributions, and the efficiency of this stratified sampling method is compared with the typical Monte Carlo analysis method. The optimization methodology used was the simulated annealing algorithm, because of its flexibility in being applied to various system types with various constraints and its efficiency in computational time. The developed methodology was applied to a power electronic device and the results were compared with the results of the complete enumeration of the solution space. The stochastic nature of the top solutions was sampled extensively and the robustness of the optimization methodology was demonstrated. Finally, a typical power electronic device is used as a case study and the obtained results are presented.
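As a rough illustration of the sampling comparison described above (not the paper's device model; the lognormal parameters, component count and mission time are all invented), the sketch below propagates failure-rate uncertainty through a simple series system with Latin Hypercube Sampling and with plain Monte Carlo. SciPy 1.7+ is assumed for scipy.stats.qmc.

```python
# Minimal sketch: lognormal failure-rate uncertainty propagated through a series system.
import numpy as np
from scipy.stats import qmc, lognorm

n, n_comp = 1000, 4
mission_time = 8760.0       # hours, assumed one year of operation
sigma, median = 0.5, 1e-5   # assumed lognormal spread and median failure rate (1/h)

def system_unreliability(rates):
    # Series system: it fails if any component fails within the mission time.
    return 1.0 - np.exp(-rates.sum(axis=1) * mission_time)

# Latin Hypercube Sampling of the failure rates.
u_lhs = qmc.LatinHypercube(d=n_comp, seed=1).random(n)
rates_lhs = lognorm.ppf(u_lhs, s=sigma, scale=median)

# Plain Monte Carlo sampling for comparison.
u_mc = np.random.default_rng(1).random((n, n_comp))
rates_mc = lognorm.ppf(u_mc, s=sigma, scale=median)

print("LHS mean unreliability:", system_unreliability(rates_lhs).mean())
print("MC  mean unreliability:", system_unreliability(rates_mc).mean())
```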

An EDF perspective on human factors

July 1988

·

15 Reads

The author presents the main lines of the program undertaken by Electricite de France in the field of human factors as a result of the incident at Three Mile Island (TMI). As it is important to be aware of some human characteristics to understand the difficulties and needs in the field, the following behavior characteristics are described: man is not a component; man functions through a single channel; man has a continuous need of information; man biases risk estimation; and man uses mental representation. It is remarked that the actions launched after TMI are man-machine interface improvement, operator training, crew organization, operating experience analysis, state approach development and emergency planning. It is emphasized that all these actions are linked to human factors. Also presented are control room studies for a new plant called N4, a 1400 MWe light water reactor.

Process Risk Evaluation—What Method to Use?

December 1990

·

85 Reads

In today's process industry environment, it is becoming more and more important for companies to evaluate the risks associated with their plants. However, many risk evaluation methods of varying degrees of complexity and cost exist. Choosing the right method to provide the information management needs to answer a company's risk questions is often difficult. This paper provides an overview of three risk evaluation methods: one (HAZOP analysis) is a qualitative technique, the second (Facility Risk Review) blends together aspects of qualitative and quantitative risk assessment techniques, and the third (Quantitative Risk Analysis) is a quantitative technique. Example risk evaluations using each technique are provided to help the reader understand the capabilities and typical results obtained with each method.

Pozzi, S.: Evaluation of air traffic management procedures - safety assessment in an experimental environment. Reliability Engineering & System Safety 89(1), 105-117

July 2005

·

194 Reads

This paper presents and discusses the application of safety assessment methodologies to a pre-operational project in the Air Traffic Control field. In the case analysed in the present paper, a peculiar aspect was the necessity to effectively assess new operational procedures and tools. In particular, we exploited an integrated methodology to evaluate computer-based applications and their interactions with the operational environment. Current ATC safety practices, methodologies, guidelines and standards were critically revised, in order to identify how they could be applied to the project under consideration. Specific problematic areas for the safety assessment in a pre-operational experimental project are thus highlighted and, on the basis of theoretical principles, some possible solutions are taken into consideration. The latter are described highlighting the rationale of the most relevant decisions, in order to provide guidance for generalisation or re-use.

Dynamic reliability and risk assessment of the accident localization system of the Ignalina NPP RBMK-1500 reactor

January 2005

·

88 Reads

The paper presents reliability and risk analysis of the RBMK-1500 reactor accident localization system (ALS) (confinement), which prevents radioactive releases to the environment. Reliability of the system was estimated and compared by two methods: the conventional fault tree method and an innovative dynamic reliability model, based on stochastic differential equations. The frequency of radioactive release through the ALS was also estimated. The results of the study indicate that conventional fault tree modeling techniques in this case apply a high degree of conservatism in the system reliability estimates. One of the purposes of the ALS reliability study was to demonstrate the advantages of dynamic reliability analysis over the conventional fault/event tree methods. The Markovian framework to deal with dynamic aspects of system behavior is presented. Although not analyzed in detail, the framework is also capable of accounting for non-constant component failure rates. Computational methods are proposed to solve the stochastic differential equations, including an analytical solution, which is possible only for relatively small and simple systems. Other numerical methods, like Monte Carlo and numerical schemes for differential equations, are analyzed and compared. The study is finalized with concluding remarks regarding both the studied system reliability and the computational methods used.

A risk assessment methodology for incorporating uncertainties using fuzzy concepts. Reliability Engineering and System Safety, 78, 173-183

November 2002

·

387 Reads

This paper proposes a new methodology for incorporating uncertainties using fuzzy concepts into conventional risk assessment frameworks. This paper also introduces new forms of fuzzy membership curves, designed to consider the uncertainty range that represents the degree of uncertainty involved in both probabilistic parameter estimates and subjective judgments, since it is often difficult or even impossible to precisely estimate the occurrence rate of an event in terms of one single crisp probability. It is to be noted that simple linguistic variables such as ‘High/Low’ and ‘Good/Bad’ are limited in quantifying the various risks inherent in construction projects and adequately represent only subjective mental cognition. Therefore, in this paper, statements that include some quantification with a specific value or scale, such as ‘Close to any value’ or ‘Higher/Lower than analyzed value’, are used in order to overcome these limitations. It may be stated that the proposed methodology will be very useful for the systematic and rational risk assessment of construction projects.

Optimal maintenance decisions under imperfect inspection. Reliab Eng Syst Safety 90(2-3):177-185

November 2005

·

128 Reads

The process industry is increasingly making use of Risk Based Inspection (RBI) techniques to develop cost and/or safety optimal inspection plans. This paper proposes an adaptive Bayesian decision model to determine these optimal inspection plans under uncertain deterioration. It uses the gamma stochastic process to model the corrosion damage mechanism and Bayes’ theorem to update prior knowledge over the corrosion rate with imperfect wall thickness measurements. This is very important in the process industry as current non-destructive inspection techniques are not capable of measuring the exact material thickness, nor can these inspections cover the total surface area of the component. The decision model finds a periodic inspection and replacement policy, which minimizes the expected average costs per year. The failure condition is assumed to be random and depends on uncertain operation conditions and material properties. The combined deterioration and decision model is illustrated by an example using actual plant data of a pressurized steel vessel.
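The gamma-process deterioration model mentioned above can be sketched in a few lines; the shape, scale and corrosion-allowance values below are invented for illustration and do not come from the paper, which additionally updates the corrosion rate with Bayes' theorem from imperfect wall thickness measurements.

```python
# Rough sketch under stated assumptions: a stationary gamma process for wall loss and
# the probability that a corrosion allowance is exceeded before a given horizon.
import numpy as np

rng = np.random.default_rng(42)
years = 15
dt = 1.0                 # one-year increments
shape_per_year = 2.0     # assumed gamma shape parameter per year
scale_mm = 0.15          # assumed gamma scale (mm); mean loss = shape * scale per year
allowance_mm = 6.0       # assumed corrosion allowance

n_paths = 100_000
increments = rng.gamma(shape_per_year * dt, scale_mm, size=(n_paths, years))
cumulative_loss = increments.cumsum(axis=1)

p_exceed = (cumulative_loss[:, -1] > allowance_mm).mean()
print(f"P(wall loss exceeds allowance within {years} years) ~ {p_exceed:.4f}")
```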

Fig. 3. Assessment matrix for five coupled infrastructures [1]. Colors are used for the initial judgment: Red corresponds to high, green to low, yellow to in-between; transitions indicate changes/trends.
Kroger, W.: Critical infrastructures at risk: A need for a new conceptual approach and extended analytical tools. Reliability Engineering and System Safety 93(12), 1781-1787
  • Article
  • Full-text available

December 2008

·

1,116 Reads

Recent decades have witnessed on the one hand a much greater and tighter integration of goods or services supply systems and growing interconnectedness as well as changing organizational and operational factors, and on the other hand an increased social vulnerability in the face of accidental or intentional disruption. The work of the International Risk Governance Council (IRGC) in the field of critical infrastructures has focused on both the risks associated with five individual infrastructures and the issues associated with the increasing interdependence between them. This paper presents a selection of system weaknesses and a number of policy options that have been identified and highlights issues for further investigation and dialogue with stakeholders. Furthermore, the need to extend current modeling and simulation techniques in order to cope with the increasing system complexity is elaborated. An object-oriented, hybrid modeling approach promising to overcome some of the shortcomings of traditional methods is presented.

Accelerated failure time models for reliability data analysis. Reliability Engineering & System Safety, 20(3), 187-197

December 1988

·

74 Reads

Despite the popularity of the proportional hazards model (PHM) in analysing many kinds of reliability data, there are situations in which it is not appropriate. The accelerated failure time model (AFT) then provides an alternative. In this paper, a unified treatment of the accelerated failure time model is outlined for the standard reliability distributions (Weibull, log-normal, inverse Gaussian, gamma). The problem of choosing between the accelerated failure time models and proportional hazard models is discussed and effects of misspecification are reported. The techniques are illustrated in the analysis of data from a fatigue crack growth experiment.
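The defining feature of an AFT model is that covariates rescale time rather than the hazard. The hedged sketch below simulates a Weibull AFT with an assumed shape, baseline scale and regression coefficient; none of these values come from the paper or its fatigue crack growth data.

```python
# Illustrative sketch: in a Weibull accelerated failure time model a covariate acts
# multiplicatively on time, so the median life scales by exp(beta * stress) while the
# shape stays fixed.
import numpy as np

rng = np.random.default_rng(7)
shape = 2.0              # assumed Weibull shape
scale_baseline = 1000.0  # assumed characteristic life at baseline stress (hours)
beta = -0.8              # assumed AFT coefficient per unit increase in stress

def sample_weibull_aft(stress, n):
    # AFT: T = T0 * exp(beta * stress), with T0 ~ Weibull(shape, scale_baseline).
    t0 = scale_baseline * rng.weibull(shape, size=n)
    return t0 * np.exp(beta * stress)

for stress in (0.0, 1.0, 2.0):
    t = sample_weibull_aft(stress, 50_000)
    print(f"stress={stress:.0f}: median life ~ {np.median(t):8.1f} h")
```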

A fuzzy-logic-based approach to qualitative safety modelling for marine systems. Reliability Engineering and System Safety, 73, 19-4

July 2001

·

198 Reads

Safety assessment based on conventional tools (e.g. probabilistic risk assessment (PRA)) may not be well suited for dealing with systems having a high level of uncertainty, particularly in the feasibility and concept design stages of a maritime or offshore system. By contrast, a safety model using a fuzzy logic approach employing fuzzy IF–THEN rules can model the qualitative aspects of human knowledge and reasoning processes without employing precise quantitative analyses. A fuzzy-logic-based approach may therefore be more appropriate for carrying out risk analysis in the initial design stages. It provides a tool for working directly with the linguistic terms commonly used in carrying out safety assessment. This research focuses on the development and representation of linguistic variables to model risk levels subjectively. These variables are then quantified using fuzzy sets. In this paper, the development of a safety model using a fuzzy logic approach for modelling various design variables for maritime and offshore safety-based decision making in the concept design stage is presented. An example is used to illustrate the proposed approach.
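A minimal sketch of the fuzzy IF-THEN idea follows; the membership functions, rule base and defuzzification scheme are generic illustrations invented for this listing, not the linguistic variables developed in the paper.

```python
# Toy fuzzy risk assessment: shoulder membership functions, a small rule base combined
# with min (AND) and max (aggregation), and a crude weighted-average defuzzification.
import numpy as np

def low(x, b, c):
    # Decreasing shoulder membership: 1 below b, linearly falling to 0 at c.
    return float(np.clip((c - x) / (c - b), 0.0, 1.0))

def high(x, a, b):
    # Increasing shoulder membership: 0 below a, linearly rising to 1 at b.
    return float(np.clip((x - a) / (b - a), 0.0, 1.0))

def assess(likelihood, severity):
    # Fuzzify the two inputs (0..10 scales, assumed).
    l_low, l_high = low(likelihood, 2, 6), high(likelihood, 4, 8)
    s_low, s_high = low(severity, 2, 6), high(severity, 4, 8)

    # Illustrative rules: IF likelihood is high AND severity is high THEN risk is high, etc.
    r_high = min(l_high, s_high)
    r_medium = max(min(l_high, s_low), min(l_low, s_high))
    r_low = min(l_low, s_low)

    # Defuzzify as a weighted average of representative risk scores.
    weights, scores = [r_low, r_medium, r_high], [2.0, 5.0, 8.0]
    return sum(w * s for w, s in zip(weights, scores)) / max(sum(weights), 1e-9)

print("Fuzzy risk estimate:", round(assess(likelihood=7.0, severity=6.0), 2))
```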

Probabilistic safety assessment development in the United States 1972–1990

December 1993

·

21 Reads

The purpose of this article is to honor F.R. Farmer for his contribution to the creation of the discipline of probabilistic safety assessment (PSA) as it applies to nuclear power plant (NPP) safety, and to review some of the important contributions to its development in the United States from the WASH-1400 Reactor Safety Report to NUREG-1150, An assessment of risks for five US nuclear power plants.

Lessons learned for PSA from the SGTR incident at Mihama, unit 2, in 1991

December 1994

·

29 Reads

This paper presents insights for probabilistic safety assessment (PSA) obtained from examination of the SGTR event that occurred at Mihama (Japan), unit 2, in 1991. The review of typical PSAs for PWRs shows that the event corresponds to the most probable sequence in the SGTR event tree, and that the malfunction of several valves due to maintenance and repair errors experienced in this event has been considered in the fault trees for relevant events. The re-evaluation of the SGTR occurrence frequency shows that it could be reduced to some extent if countermeasures to stress corrosion cracking (SCC) were considered in the design and operation of SGs, since most of the defects of SG tubes have occurred due to SCC. Based on the observation of non-stationary occurrences of incidents after such outages, the importance of the proper consideration of maintenance errors at refuelling outages in PSA is also discussed from the viewpoint of improving PSA in general.

Development of the conceptual models for chemical conditions and hydrology used in the 1996 performance assessment for the Waste Isolation Pilot Plant

May 2000

·

24 Reads

The Waste Isolation Pilot Plant (WIPP) is a US Department of Energy (DOE) facility for the permanent disposal of defense-related transuranic (TRU) waste. US Environmental Protection Agency (EPA) regulations specify that the DOE must demonstrate on a sound basis that the WIPP disposal system will effectively contain long-lived alpha-emitting radionuclides within its boundaries for 10,000 years following closure. In 1996, the DOE submitted the 40 CFR Part 191 Compliance Certification Application for the Waste Isolation Pilot Plant (CCA) to the EPA. The CCA proposed that the WIPP site complies with EPA's regulatory requirements. Contained within the CCA are descriptions of the scientific research conducted to characterize the properties of the WIPP site and the probabilistic performance assessment (PA) conducted to predict the containment properties of the WIPP disposal system. In May 1998, the EPA certified that the TRU waste disposal at the WIPP complies with its regulations. Waste disposal operations at WIPP commenced on 28 March 1999.

Uncertainty and sensitivity analysis for two-phase flow in the vicinity of the repository in the 1996 performance assessment for the Waste Isolation Pilot Plant: Disturbed conditions

May 2000

·

12 Reads

Uncertainty and sensitivity analysis results obtained in the 1996 performance assessment (PA) for the Waste Isolation Pilot Plant (WIPP) are presented for two-phase flow in the vicinity of the repository under disturbed conditions resulting from drilling intrusions. Techniques based on Latin hypercube sampling, examination of scatterplots, stepwise regression analysis, partial correlation analysis and rank transformations are used to investigate brine inflow, gas generation, repository pressure, brine saturation, and brine and gas outflow. Of the variables under study, repository pressure and brine flow from the repository to the Culebra Dolomite are potentially the most important in PA for the WIPP. Subsequent to a drilling intrusion, repository pressure was dominated by borehole permeability and was generally below the level (i.e., 8 MPa) that could potentially produce spallings and direct brine releases. Brine flow from the repository to the Culebra Dolomite tended to be small or nonexistent, with its occurrence and size also dominated by borehole permeability.
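The sketch below shows the flavour of such sampling-based sensitivity analysis: a Latin hypercube sample of a few uncertain inputs, a made-up response standing in for repository pressure, and rank (Spearman) correlations as the sensitivity measure. The variable names, ranges and response function are assumptions for illustration only; SciPy 1.7+ is assumed for scipy.stats.qmc.

```python
# Sampling-based sensitivity sketch: LHS design, rank transformation via Spearman's rho.
import numpy as np
from scipy.stats import qmc, spearmanr

rng = np.random.default_rng(3)
names = ["borehole_permeability", "gas_generation_rate", "initial_brine_saturation"]
lower = np.array([1e-14, 0.1, 0.0])
upper = np.array([1e-11, 2.0, 0.6])

# Latin hypercube sample of the three uncertain inputs.
u = qmc.LatinHypercube(d=3, seed=3).random(300)
x = lower + u * (upper - lower)

# Hypothetical response standing in for repository pressure (MPa).
y = 3.0 * np.log10(x[:, 0] / 1e-14) + 2.0 * x[:, 1] + rng.normal(0.0, 0.5, size=300)

# Rank correlation with the output as a simple sensitivity measure.
for i, name in enumerate(names):
    rho, _ = spearmanr(x[:, i], y)
    print(f"rank correlation, {name:26s}: {rho:+.2f}")
```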

Software quality assurance in the 1996 performance assessment for the Waste Isolation Pilot Plant

May 2000

·

9 Reads

The US Department of Energy (DOE) Waste Isolation Pilot Plant (WIPP), located in southeast New Mexico, is a deep geologic repository for the permanent disposal of transuranic waste generated by DOE defense-related activities. Sandia National Laboratories (SNL), in its role as scientific advisor to the DOE, is responsible for evaluating the long-term performance of the WIPP. This risk-based Performance Assessment (PA) is accomplished in part through the use of numerous scientific modeling codes, which rely for some of their inputs on data gathered during characterization of the site. The PA is subject to formal requirements set forth in federal regulations. In particular, the components of the calculation fall under the configuration management and software quality assurance aegis of the American Society of Mechanical Engineers (ASME) Nuclear Quality Assurance (NQA) requirements. This paper describes SNL's implementation of the NQA requirements regarding software quality assurance (SQA). The description of the implementation of SQA for a PA calculation addresses not only the interpretation of the NQA requirements, it also discusses roles, deliverables, and the resources necessary for effective implementation. Finally, examples are given which illustrate the effectiveness of SNL's SQA program, followed by a detailed discussion of lessons learned.

Conceptual structure of the 1996 performance assessment for the Waste Isolation Pilot Plant

September 2000

·

28 Reads

The conceptual structure of the 1996 performance assessment (PA) for the Waste Isolation Pilot Plant (WIPP) is described. This structure involves three basic entities (EN1, EN2, EN3): (i) EN1, a probabilistic characterization of the likelihood of different futures occurring at the WIPP site over the next 10,000 years; (ii) EN2, a procedure for estimating the radionuclide releases to the accessible environment associated with each of the possible futures that could occur at the WIPP site over the next 10,000 years; and (iii) EN3, a probabilistic characterization of the uncertainty in the parameters used in the definition of EN1 and EN2. In the formal development of the 1996 WIPP PA, EN1 is characterized by a probability space (𝒮_st, 𝕊_st, p_st) for stochastic (i.e. aleatory) uncertainty; EN2 is characterized by a function f that corresponds to the models and associated computer programs used to estimate radionuclide releases; and EN3 is characterized by a probability space (𝒮_su, 𝕊_su, p_su) for subjective (i.e. epistemic) uncertainty. A high-level overview of the 1996 WIPP PA and references to additional sources of information are given in the context of (𝒮_st, 𝕊_st, p_st), f and (𝒮_su, 𝕊_su, p_su).
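To make the interplay of these entities concrete, here is a toy sketch: futures are sampled from an assumed aleatory model, a stand-in release function plays the role of f, and exceedance probabilities form a CCDF. Nothing in it reproduces the actual WIPP PA models; every distribution and number is an assumption.

```python
# Toy CCDF construction from sampled aleatory futures and a stand-in release function.
import numpy as np

rng = np.random.default_rng(11)
n_futures = 10_000                          # sample from an assumed aleatory model
n_intrusions = rng.poisson(1.2, n_futures)  # assumed drilling-intrusion count per future
release = n_intrusions * rng.lognormal(mean=-3.0, sigma=1.0, size=n_futures)  # toy f

# CCDF: probability that the release exceeds each threshold R.
thresholds = np.logspace(-3, 1, 9)
for R in thresholds:
    print(f"P(release > {R:8.3f}) ~ {(release > R).mean():.4f}")
```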

Assignment of probability distributions for parameters in the 1996 performance assessment for the Waste Isolation Pilot Plant. Part 1: Description of process

April 2005

·

18 Reads

A managed process was used to consistently and traceably develop probability distributions for parameters representing epistemic uncertainty in four preliminary and the final 1996 performance assessment (PA) for the Waste Isolation Pilot Plant (WIPP). The key to the success of the process was the use of a three-member team consisting of a Parameter Task Leader, PA Analyst, and Subject Matter Expert. This team, in turn, relied upon a series of guidelines for selecting distribution types. The primary function of the guidelines was not to constrain the actual process of developing a parameter distribution but rather to establish a series of well-defined steps where recognized methods would be consistently applied to all parameters. An important guideline was to use a small set of distributions satisfying the maximum entropy formalism. Another important guideline was the consistent use of the log transform for parameters with large ranges (i.e. maximum/minimum > 10³). A parameter development team assigned 67 probability density functions (PDFs) in the 1989 PA and 236 PDFs in the 1996 PA using these and other guidelines described.

Radionuclide and colloid transport in the Culebra Dolomite and associated complementary cumulative distribution functions in the 1996 performance assessment for the Waste Isolation Pilot Plant

May 2000

·

21 Reads

The following topics related to radionuclide and colloid transport in the Culebra Dolomite in the 1996 performance assessment for the Waste Isolation Pilot Plant (WIPP) are presented: (i) mathematical description of models; (ii) uncertainty and sensitivity analysis results arising from subjective (i.e. epistemic) uncertainty for individual releases; and (iii) construction of complementary cumulative distribution functions (CCDFs) arising from stochastic (i.e. aleatory) uncertainty. The presented results indicate that radionuclide and colloid transport in the Culebra Dolomite does not constitute a serious threat to the effectiveness of the WIPP as a disposal facility for transuranic waste. Even when the effects of uncertain analysis inputs are taken into account, no radionuclide transport to the boundary with the accessible environment was observed; thus, the associated CCDFs for comparison with the boundary line specified in the US Environmental Protection Agency's standard for the geologic disposal of radioactive waste (40 CFR 191, 40 CFR 194) are degenerate in the sense of having a probability of zero of exceeding a release of zero.

Highlights from the early (and Pre-) history of reliability engineering. Reliability Engineering and System Safety, 91, 249-256

February 2006

·

790 Reads

Reliability is a popular concept that has been celebrated for years as a commendable attribute of a person or an artifact. From its modest beginning in 1816, when the word reliability was first coined by Samuel T. Coleridge, reliability grew into an omnipresent attribute with qualitative and quantitative connotations that pervades every aspect of our present-day technologically intensive world. In this short communication, we highlight key events and the history of ideas that led to the birth of Reliability Engineering, and its development in the subsequent decades. We first argue that statistics and mass production were the enablers in the rise of this new discipline, and that the catalyst that accelerated its coming was the (unreliability of the) vacuum tube. We highlight the foundational role of the AGREE report in 1957 in the birth of reliability engineering, and discuss the consolidation of numerous efforts in the 1950s into a coherent new technical discipline. We show that an evolution took place in the discipline in the following two decades along two directions: first, there was an increased specialization in the discipline (increased sophistication of statistical techniques, and the rise of a new branch focused on the actual physics of failure of components, Reliability Physics); second, there occurred a shift from a component-centric emphasis to an emphasis on system-level attributes (system reliability, availability, safety). Finally, in selecting the particular events and highlights in the history of ideas that led to the birth and subsequent development of reliability engineering, we acknowledge a subjective component in this work and make no claims to exhaustiveness.

Architectural design and reliability analysis of a fail-operational brake-by-wire system from ISO 26262 perspectives

October 2011

·

850 Reads

Next generation drive-by-wire automotive systems enabling autonomous driving will build on the fail-operational capabilities of electronics, control and software (ECS) architectural solutions. Developing such architectural designs that would meet dependability requirements and satisfy other system constraints is a challenging task and will possibly lead to a paradigm shift in automotive ECS architecture design and development activities. This aspect is becoming quite relevant while designing battery-driven electric vehicles with integrated in-wheel drive-train and chassis subsystems. In such highly integrated dependable systems, many of the primary features and functions are assigned the highest safety-critical ratings. Brake-by-wire is one such system that interfaces with active safety features built into an automobile, and which in turn is expected to provide fail-operational capabilities. In this paper, building on the basic concepts of fail-silent and fail-operational systems design, we propose a system architecture for a brake-by-wire system with fail-operational capabilities. The design choices are supported with proper rationale and design trade-offs. Safety and reliability analysis of the proposed system architecture is performed as per the ISO 26262 standard for functional safety of electrical/electronic systems in road vehicles.

Nonlinear Monte Carlo reliability analysis with biasing towards top event. Reliability Engineering and System Safety, 40(1), 31-42

December 1993

·

90 Reads

This paper deals with the Monte Carlo evaluation of the reliability and availability of a complex system made up of a large number of components, each with many possible states. To make allowance for the fact that in reality the transition probabilities depend on the system configuration, a model in which the transition probabilities may be suitably varied after each transition occurrence has been developed. In the present work the model obeys the usual Markovian assumption of transitions dependent on the present system configuration, but removal of this assumption is easy and it would account for system aging. To drive the system towards the more interesting but highly improbable cut set configurations, a variance reduction technique, based on the introduction of distances between the present and the cut set configurations, is also proposed.
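The sketch below is not the paper's transition-biasing scheme; it only illustrates the generic likelihood-ratio idea behind such variance reduction, on a toy parallel system whose simultaneous component failure (the cut set) is rare. The failure probabilities and biasing values are invented.

```python
# Importance sampling sketch: sample under biased component failure probabilities and
# reweight each sample by the ratio of true to biased probabilities.
import numpy as np

rng = np.random.default_rng(5)
p_true = np.array([1e-3, 2e-3, 5e-4])  # true component failure probabilities (assumed)
p_bias = np.array([0.2, 0.2, 0.2])     # biased probabilities pushing toward the cut set
n = 50_000

fails = rng.random((n, len(p_true))) < p_bias   # sample under the biased law
system_down = fails.all(axis=1)                 # cut set: all components failed
lr = np.prod(np.where(fails, p_true / p_bias, (1 - p_true) / (1 - p_bias)), axis=1)

estimate = (system_down * lr).mean()
print(f"Biased-sampling estimate: {estimate:.3e}  (analytic value {np.prod(p_true):.3e})")
```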

The effect of introducing increased-reliability-risk electronic components into 3rd generation telecommunications systems

August 2005

·

18 Reads

In this paper, the dependability of 3rd generation telecommunications network systems is studied. Special attention is paid to a case where increased-reliability-risk electronic components are introduced to the system. The paper consists of three parts. First, the reliability data of four electronic components are considered. This includes statistical analysis of the reliability test data, thermo-mechanical finite element analysis of the printed wiring board structures, and, based on those, construction of a field reliability estimate of the components. Second, the component level reliability data are introduced into the network element reliability analysis. This is accomplished by using a reliability block diagram technique and Monte Carlo simulation of the network element. The end result of the second part is a reliability estimate of the network element with and without the high-risk component. Third, the whole 3rd generation network having multiple network elements is analyzed. In this part, the criticality of introducing high-risk electronic components into a 3rd generation telecommunications network is considered.

The NRC Bell 412 ASRA safety system: A human factors perspective on lessons learned from an airborne incident

February 2002

·

131 Reads

The National Research Council (NRC) Bell 205 Airborne Simulator is a full authority fly-by-wire (FBW) research helicopter. On 24 May 1996 this aircraft underwent a failure which drove all four flight control actuators to full extension shortly after engagement of the FBW system, with nearly catastrophic results. The sound design inherent in the original Bell 205 safety system allowed the safety pilot to override the FBW system and prevented the loss of aircraft and crew. This incident, however, led to the realization that the existing safety system configuration in the Bell 205 was only marginally acceptable, and that this same system would be inadequate for the next generation FBW aircraft, the NRC Bell 412 Advanced Systems Research Aircraft (ASRA). Experience gained from the Bell 205 incident, together with historical experience, has driven the design process of the safety systems for ASRA, with a particular view toward the capabilities and limitations of the operators.

Integrating RAMS engineering and management with the safety life cycle of IEC 61508

December 2009

·

432 Reads

This article outlines a new approach to reliability, availability, maintainability, and safety (RAMS) engineering and management. The new approach covers all phases of the new product development process and is aimed at producers of complex products like safety instrumented systems (SIS). The article discusses main RAMS requirements to a SIS and presents these requirements in a holistic perspective. The approach is based on a new life cycle model for product development and integrates this model into the safety life cycle of IEC 61508. A high integrity pressure protection system (HIPPS) for an offshore oil and gas application is used to illustrate the approach.

Loss of safety assessment and the IEC 61508 standard

January 2004

·

191 Reads

The standard IEC 61508 contains a lot of useful information and guidance for safety improvement regarding the use of safety systems. However, some of the basic concepts and methods for loss of safety quantification are somewhat confusing. This paper discusses the failure classification, the various contributions to the safety unavailability, and in particular the common cause failure (CCF) model presented in this standard. Suggestions for clarifications and improvements are provided. In particular, a new CCF model is suggested, denoted the Multiple Beta Factor model.
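For orientation, the sketch below evaluates the standard single-beta-factor approximation of the average probability of failure on demand for a 1oo2 architecture under periodic proof testing. It is a textbook-style calculation with assumed numbers, not the Multiple Beta Factor model proposed in the paper.

```python
# Single-beta-factor PFD approximation for a 1oo2 architecture (assumed values only,
# repair times neglected).
lambda_du = 2.0e-6   # dangerous undetected failure rate per hour (assumed)
beta = 0.05          # common cause fraction (assumed)
tau = 8760.0         # proof test interval in hours (assumed, one year)

pfd_independent = ((1.0 - beta) * lambda_du * tau) ** 2 / 3.0  # both channels fail independently
pfd_ccf = beta * lambda_du * tau / 2.0                         # common cause failure of both
pfd_1oo2 = pfd_independent + pfd_ccf

print(f"PFD_avg (independent part): {pfd_independent:.2e}")
print(f"PFD_avg (CCF part):         {pfd_ccf:.2e}")
print(f"PFD_avg (1oo2 total):       {pfd_1oo2:.2e}")
```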

Availability of systems with self-diagnostic components - Applying Markov model to IEC 61508-6

May 2003

·

212 Reads

Of all the techniques applicable to safety-related analyses, each one may be adaptable to some aspects of the system safety behavior. On the other hand, some of them can fit the analysis of one aspect of the system behavior concerning risk, but they do not always lead to the same results. Rouvroye and Brombacher made a comparison of these techniques and indicated that Markov and enhanced Markov analysis techniques can cover most aspects of a system's safety-related behavior. Following their conclusion, the Markov method is applied in this paper to Part 6 of the standard IEC 61508 for quantitative analysis. The purpose is to explain in detail the solutions given in the standard, because many results are not clearly described and it is not easy for a safety engineer to find the clue. In addition, the down time tc1 shown in the standard is newly defined, because it is the basis for obtaining the average probability of failure on demand of the system architectures and its meaning is not clearly explained. Through this derivation, however, a discrepancy is found in the standard. From this point of view, new suggestions are proposed based on the results obtained.
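As a pointer to what such a Markov treatment looks like, here is a deliberately small continuous-time Markov sketch with three states and invented rates; it does not reproduce the IEC 61508-6 architectures or the down time definitions discussed in the paper.

```python
# Small CTMC availability sketch: 0 = both channels up, 1 = one channel down
# (detected by self-diagnostics and under repair), 2 = system down.
import numpy as np
from scipy.linalg import expm

lam = 1.0e-4   # assumed failure rate per hour, per channel
mu = 0.125     # assumed repair rate per hour (8 h mean repair time)

Q = np.array([
    [-2 * lam,  2 * lam,       0.0],
    [      mu, -(mu + lam),    lam],
    [     0.0,  mu,            -mu],
])

p0 = np.array([1.0, 0.0, 0.0])
t = 8760.0                   # one year
p_t = p0 @ expm(Q * t)       # state probabilities at time t
print(f"Availability at t = {t:.0f} h: {1.0 - p_t[2]:.6f}")
```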

Design optimization of a safety-instrumented system based on RAMS+C addressing IEC 61508 requirements and diverse redundancy

February 2009

·

205 Reads

This paper presents the design optimization by a multi-objective genetic algorithm of a safety-instrumented system based on RAMS+C measures. This includes optimization of safety and reliability measures plus lifecycle cost. Diverse redundancy is implemented as an option for redundancy allocation, and special attention is paid to its effect on common cause failure and the overall system objectives. The requirements for safety integrity established by the standard IEC 61508 are addressed, as well as the modelling detail required for this purpose. The problem is about reliability and redundancy allocation with diversity for a series–parallel system. The objectives to optimize are the average probability of failure on demand, which represents the system safety integrity, Spurious Trip Rate and Lifecycle Cost. The overall method is illustrated with a practical example from the chemical industry: a safety function against high pressure and temperature for a chemical reactor. In order to implement diversity, each subsystem is given the option of three different technologies, each technology with different reliability and diagnostic coverage characteristics. Finally, the optimization with diversity is compared against optimization without diversity.

Practical use of IEC 61508 and EN 954 for the safety evaluation of an automatic mining truck

November 1999

·

31 Reads

This paper presents the general content and results of a safety program and comments on its application. The safety file, which was used to support the safety assessment of an automatic mining truck system, was developed in accordance with the general requirements of a standard of the International Electrotechnical Commission for the functional safety of safety-related systems, and using some parts of the European standard for control systems. Conclusions on the assessed system and on the use of the methodology in similar applications are presented.

A combined goal programming—AHP approach to maintenance selection problem. Reliability Engineering & Systems Safety, 91, 839-848

July 2006

·

1,588 Reads

This paper presents a ‘Lexicographic’ Goal Programming (LGP) approach to define the best strategies for the maintenance of critical centrifugal pumps in an oil refinery. For each pump failure mode, the model makes it possible to take into account the maintenance policy burden in terms of inspection or repair and in terms of the manpower involved, linking them to efficiency-risk aspects quantified, as in the FMECA methodology, through the use of the classic parameters occurrence (O), severity (S) and detectability (D), evaluated through an adequate application of the Analytic Hierarchy Process (AHP) technique. An extended presentation of the data and results of the case analysed is proposed in order to show the characteristics and performance of this approach.

Modelling the reliability of search and rescue operations with Bayesian belief networks. Reliability Engineering and System Safety, 93(7), 940-949

July 2008

·

139 Reads

This paper uses a Bayesian Belief Networks (BBN) methodology to model the reliability of Search And Rescue (SAR) operations within UK Coastguard (Maritime Rescue) coordination centres. This is an extension of earlier work, which investigated the rationale of the government's decision to close a number of coordination centres. The previous study made use of secondary data sources and employed a binary logistic regression methodology to support the analysis. This study focused on the collection of primary data through a structured elicitation process, which resulted in the construction of a BBN. The main findings of the study are that statistical analysis of secondary data can be used to complement BBNs. The former provided a more objective assessment of associations between variables, but was restricted in the level of detail that could be explicitly expressed within the model due to a lack of available data. The latter method provided a much more detailed model, but the validity of the numeric assessments was more questionable. Each method can be used to inform and defend the development of the other. The paper describes in detail the elicitation process employed to construct the BBN and reflects on the potential for bias.

Intelligent decision aids for abnormal events in nuclear power plants

December 1988

·

14 Reads

German nuclear power plants are characterized by a high degree of automation, not only for normal operation but also for abnormal events. Therefore the role of the operating personnel is mainly a supervisory function. Nevertheless, for a spectrum of unexpected events the operating personnel have to react with manual recovery actions. In order to minimize human error in such recovery actions, different kinds of intelligent decision aid support the operators today. In this paper such aids are discussed and one of them is described in more detail.

Early warning and prediction of flight parameter abnormalities for improved system safety assessment

April 2002

·

34 Reads

It is widely accepted that human error is a major contributing factor in aircraft accidents. The early detection of a subsystem abnormality that is developing during flight is potentially important, because the extra time before an alert range is reached may improve the crew's situation awareness. The flight crew may thus consider and try more options for dealing with the failure situation. Robust numerical algorithms and techniques are proposed for rapid recognition of faulty situations, which have the potential for such early detection. The warning system includes a model-based multi-step ahead predictor, which provides predictive information on some flight critical parameters. A key feature of the proposed techniques is that they take advantage of the on-board information redundancy, computer technology and graphics displays, use the already available measurements and hence require only input–output processing for implementation in on-board computers. This is an important aspect when considering the testability and certificability of the software implementation. The system is tested on a simulated typical landing approach scenario of a civil aircraft using the RCAM1 benchmark.

Nearly minimal disjoint forms of the Abraham reliability problem

December 1994

·

12 Reads

A minimal disjoint form (m.d.f.) is a collection of minimal disjoint subformulae, each subformula representing the incremental contribution of a minpath that is properly sequenced with respect to the subset of minpaths of the same size (cardinality). The size of an m.d.f. depends on whether variables are inverted singly or in groups. We conjecture that: an m.d.f. for the Abraham reliability problem with single-variable inversion only could have as few as 55 disjoint terms; if grouped-variable inversion is also employed, an m.d.f. could have as few as 35 disjoint terms; there are different ways to sort minpaths of the same size or terms of a subpolynomial of the same size that will result in an m.d.f.; and there is a combinatorial number of candidate m.d.f.s.

Experimental study on the effects of visualized functionally abstracted information on process control tasks

February 2008

·

25 Reads

Two distinct design problems of information display for process control are information content representation and visual form design. Regarding information content, we experimentally showed the effectiveness of functionally abstracted information, without the benefits of sophisticated graphical presentation, in various task situations. However, since it is obvious that the effects of the information display are also influenced by display formats (i.e., visual forms) as well as the information content, further research was required to investigate the effectiveness of visualized functionally abstracted information. For this purpose, this study conducted an experiment in complex process control tasks (operation and fault diagnosis). The experimental purposes were to confirm the effectiveness of functionally abstracted information visualized with emergent features or peculiar geometric forms and to examine the additional effects of the visualization on task performance. The results showed that functionally abstracted information presented with sophisticated visual forms helped operators perform process control tasks in a more efficient and safe way. The results also indicated the importance of explicit visualization of the goal–means relation between higher and lower abstraction levels. Lastly, this study proposed a framework for designing visual forms for process control displays.

The effects of presenting functionally abstracted information in fault diagnosis tasks

August 2001

·

31 Reads

With respect to the design of visual information display (VID) for process control, this study experimentally evaluated the effectiveness of functionally abstracted information in the task of fault diagnosis. The benefits of functional properties of work domain have been emphasized by ecological interface design (EID), a relatively new design framework for human–machine interfaces. According to the concept of EID, multilevel information representation based on abstraction hierarchy of work domain is expected to be advantageous for supporting the operator's problem solving. To investigate the advantage of EID application, an experiment was conducted using a computer-based simulation of the secondary cooling system of nuclear power plants. Three interfaces were compared: the first representing only the physical properties of the process, the second representing purpose-related generalized functions (GFs) in addition to the physical properties, and the third representing abstract functions (AFs) governing the GFs in addition to the physical properties. The results showed that the diagnostic performance was improved by displaying functionally abstracted information at both levels, and that the usefulness of the abstract information was dependent on the complexity of the diagnosis problems.

An optimal design of accelerated life test for exponential distribution

December 1991

·

27 Reads

This paper considers the optimal design of accelerated life tests in which two levels, high and low, of stress are constantly applied and the failed test items are replaced with new ones. For an exponential distribution with mean life that is a log-linear function of stress, the maximum likelihood estimator of the log mean life at design stress is to be used. The standardized low level of stress and the initial sample proportion allocated to it that minimize the estimator's asymptotic variance are determined. A sensitivity analysis is given.
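The kind of optimization described above can be sketched numerically; the planning values, the information-matrix form and the grid search below are generic textbook-style choices for illustration, not the paper's derivation or results.

```python
# Sketch: grid search for the standardized low stress level and the proportion of the
# sample allocated to it that minimize the asymptotic variance of the estimated log
# mean life at the design stress. Planning assumptions: design stress = 0, high
# stress = 1, log failure rate linear in stress, failed items replaced, so the
# expected number of failures at each level is proportional to its failure rate.
import numpy as np

log_acceleration = 3.0          # assumed ln(rate at high stress / rate at design stress)
expected_failures_at_high = 50  # assumed scale factor for the expected failure counts

def asymptotic_variance(xi, pi):
    # Expected failure counts at the low (xi) and high (1.0) stress levels.
    r = np.array([pi, 1.0 - pi]) * expected_failures_at_high * np.exp(
        log_acceleration * (np.array([xi, 1.0]) - 1.0))
    X = np.array([[1.0, xi], [1.0, 1.0]])
    info = (X.T * r) @ X                 # Fisher information for (intercept, slope)
    cov = np.linalg.inv(info)
    x_design = np.array([1.0, 0.0])
    return x_design @ cov @ x_design     # variance of estimated log mean life at design

grid = [(xi, pi, asymptotic_variance(xi, pi))
        for xi in np.linspace(0.05, 0.95, 19) for pi in np.linspace(0.05, 0.95, 19)]
xi_opt, pi_opt, v_opt = min(grid, key=lambda t: t[2])
print(f"optimal low stress ~ {xi_opt:.2f}, proportion at low stress ~ {pi_opt:.2f}, "
      f"asymptotic variance ~ {v_opt:.3f}")
```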

Accelerated uncertainty propagation in two-level probabilistic studies under monotony

September 2010

·

29 Reads

Double-level probabilistic uncertainty models that separate aleatory and epistemic components enjoy significant interest in risk assessment. But the expensive computational costs associated with calculations of rare failure probabilities are still a large obstacle in practice. Computing accurately a risk lower than 10⁻³ with 95% epistemic confidence usually requires 10⁷–10⁸ runs in a brute-force double Monte Carlo. For single-level probabilistic studies, FORM (First Order Reliability Method) is a classical recipe allowing fast approximation of failure probabilities, while MRM (Monotonous Reliability Method) recently proved an attractive robust alternative under monotony. This paper extends these methods to double-level probabilistic models through two novel algorithms designed to compute a set of failure probabilities or an aleatory risk level with an epistemic confidence quantile. The first, L2-FORM (level-2 FORM), allows a rapid approximation of the failure probabilities through a combination of FORM with new ideas to exploit similarity between computations. L2-MRM (level-2 MRM), a quadrature approach, provides 100%-guaranteed error bounds on the results. Experiments on three flood prediction problems showed that both algorithms approximate a set of 500 failure probabilities of 10⁻³–10⁻² or derived 95% epistemic quantiles with a total of only 500–1000 function evaluations, outperforming importance sampling, iterative FORM and regression spline metamodels.
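For contrast with the accelerated algorithms, the sketch below is the brute-force double Monte Carlo baseline on an invented flood-like limit state: the outer loop samples epistemic parameters, the inner loop estimates an aleatory failure probability, and a 95% epistemic quantile is read off. All distributions and the protection level are assumptions.

```python
# Brute-force two-level (double-loop) Monte Carlo sketch.
import numpy as np

rng = np.random.default_rng(2)
n_epistemic, n_aleatory = 200, 20_000
dike_height = 6.0    # assumed protection level (m)

failure_probs = []
for _ in range(n_epistemic):
    # Epistemic uncertainty on the parameters of the aleatory flood-level distribution.
    mu = rng.normal(3.0, 0.3)
    sigma = rng.uniform(0.5, 1.0)
    # Aleatory uncertainty: annual maximum water level.
    levels = rng.gumbel(loc=mu, scale=sigma, size=n_aleatory)
    failure_probs.append((levels > dike_height).mean())

failure_probs = np.array(failure_probs)
print(f"median failure probability:        {np.median(failure_probs):.2e}")
print(f"95% epistemic quantile of P(fail): {np.quantile(failure_probs, 0.95):.2e}")
```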

Fig. 1. A separate ALT design for each test item.
Fig. 4. Prior and posterior scale parameter for environment 2-interval data.
Fig. 5. Prior and posterior shape parameter-interval data.
A general Bayes Weibull inference model for accelerated life testing

November 2005

·

204 Reads

This article presents the development of a general Bayes inference model for accelerated life testing. The failure times at a constant stress level are assumed to belong to a Weibull distribution, but the specification of strict adherence to a parametric time-transformation function is not required. Rather, prior information is used to indirectly define a multivariate prior distribution for the scale parameters at the various stress levels and the common shape parameter. Using the approach, Bayes point estimates as well as probability statements for use-stress (and accelerated) life parameters may be inferred from a host of testing scenarios. The inference procedure accommodates both the interval data sampling strategy and type I censored sampling strategy for the collection of ALT test data. The inference procedure uses the well-known MCMC (Markov Chain Monte Carlo) methods to derive posterior approximations. The approach is illustrated with an example.

Assessing high reliability via Bayesian approach and accelerated tests

June 2002

·

42 Reads

Sometimes the assessment of very high reliability levels is difficult for the following main reasons: (a) the high reliability level of each item makes it impossible to obtain, in a reasonably short time, a sufficient number of failures; (b) the high cost of the high reliability items to submit to life tests makes it unfeasible to collect enough data for ‘classical’ statistical analyses. In the above context, this paper presents a Bayesian solution to the problem of estimation of the parameters of the Weibull–inverse power law model, on the basis of a limited number (say six) of life tests, carried out at different stress levels, all higher than the normal one. The over-stressed (i.e. accelerated) tests allow the use of experimental data obtained in a reasonably short time. The Bayesian approach enables one to reduce the required number of failures by adding to the failure information the available a priori engineers' knowledge. This engineers' involvement conforms to the most advanced management policy that aims at involving everyone's commitment in order to obtain total quality. A Monte Carlo study of the non-asymptotic properties of the proposed estimators and a comparison with the properties of maximum likelihood estimators closes the work.

Design of PH-based accelerated life testing plans under multiple-stress-type

March 2007

·

97 Reads

Accelerated life testing (ALT) is used to obtain failure time data quickly under high stress levels in order to predict product life performance under design stress conditions. Most of the previous work on designing ALT plans is focused on the application of a single stress. However, as components or products become more reliable due to technological advances, it becomes more difficult to obtain a significant amount of failure data within a reasonable amount of time using a single stress only. Multiple-stress-type ALTs have been employed as a means of overcoming such difficulties. In this paper, we design optimum multiple-stress-type ALT plans based on the proportional hazards model. The optimum combinations of stresses and their levels are determined such that the variance of the reliability estimate of the product over a specified period of time is minimized. The use of the model is illustrated with a numerical example, and sensitivity analysis shows that the resultant optimum ALT plan is robust to deviations in the model parameters.

Optimal design of partially accelerated life tests for the lognormal distribution under type I censoring

December 1993

·

50 Reads

This paper considers optimal designs of partially accelerated life tests in which test items are first run simultaneously at use condition for a specified time, and the surviving items are then run at accelerated condition until a predetermined censoring time. For items having lognormally distributed lives, maximum likelihood estimators (MLEs) are obtained for the location and scale parameters of the lifetime distribution at use condition and for the acceleration factor, which is the ratio of the mean life at use condition to that at accelerated condition. The change time is determined to minimize either the asymptotic variance of the MLE of the acceleration factor or the generalized asymptotic variance of the MLEs of the model parameters.

Design stage confirmation of lifetime improvement for newly modified products through accelerated life testing

August 2010

·

31 Reads

After a modification to the original version of a product and before mass production, the expected improvement in the product lifetime or reliability needs to be validated. This paper presents three approaches based on accelerated life testing to verify, estimate and confirm the lifetime or reliability of a newly modified product at the design stage: the ALT comparative approach, the reliability estimation approach, and the reliability validation test. Test samples of the original and modified versions are expected to fail during the tests in order to obtain their failure time data. In the ALT comparative approach, a statistical comparison between the failure time data of the original and modified versions is used to verify the required improvement in lifetime. In the reliability estimation approach, the relationship established between the available lifetime and failure time data of the original version is used to extrapolate lifetime data of the modified version from its failure time data. Since modified versions are usually highly reliable, all test samples might survive the tests (without any failures), which results in a lack of failure time data for statistical analysis. To confirm a level of service reliability with confidence, the reliability validation test is presented to estimate the number of samples required to survive the tests. To fulfill the same level of confidence with a smaller number of prototypes (as test samples), the test time must be extended. On the other hand, more prototypes are needed to pass a shorter test time if there are any time constraints.

Reliability studies of a high-power proton accelerator for accelerator-driven system applications for nuclear waste transmutation

April 2007

·

77 Reads

The main effort of the present study is to analyze the availability and reliability of a high-performance linac (linear accelerator) conceived for Accelerator-Driven Systems (ADS) purposes and to suggest recommendations, in order both to meet the high operability goals and to satisfy the safety requirements dictated by the reactor system. A Reliability Block Diagram (RBD) approach has been considered for system modelling, according to the present level of definition of the design: component failure modes are assessed in terms of Mean Time Between Failures (MTBF) and Mean Time To Repair (MTTR), and reliability and availability figures are derived by applying the current reliability algorithms. The lack of a well-established component database has been pointed out as the main issue related to the accelerator reliability assessment. The results, affected by the conservative character of the study, show a high margin for improvement in the predicted accelerator reliability and availability figures. The paper outlines the viable path towards the accelerator reliability and availability enhancement process and delineates the most appropriate strategies. The improvement in the reliability characteristics along this path is shown as well.

Fig. 1. Block diagram of an accelerator.
Fig. 2. Beam loss fault tree.  
Fig. 3. Fault tree for target rupture.  
Radiological risk analysis of particle accelerators

August 2008

·

87 Reads

Considering the growing use of high-current accelerators in medicine, industry and research, there is a need to evaluate the hazard potential of new accelerator systems from the design stage itself. The present paper discusses the factors taken care of in a radiological safety analysis of accelerators. Possible hazards identified are beam loss, target rupture, faulty components and personnel being trapped in an active area. Human error is one of the major factors leading to accelerator hazards. How radiation doses to both occupational workers and the general public are reduced and controlled is also discussed.

Acceptable risk as a basis for design

January 1998

·

39 Reads

Historically, human civilisations have striven to protect themselves against natural and man-made hazards. The degree of protection is a matter of political choice. Today this choice should be expressed in terms of risk and acceptable probability of failure to form the basis of the probabilistic design of the protection. It is additionally argued that the choice for a certain technology and the connected risk is made in a cost-benefit framework. The benefits and the costs including risk are weighed in the decision process. A set of rules for the evaluation of risk is proposed and tested in cases. The set of rules leads to technical advice in a question that has to be decided politically.

A discussion of the acceptable risk problem

July 1998

·

143 Reads

The petroleum activities on the Norwegian Continental Shelf are subject to regulations issued by the Norwegian Petroleum Directorate. One important issue in these regulations is the use of acceptance criteria, and this paper discusses some philosophical aspects of acceptance criteria for risk, and the role of statistical decision theory within safety management. Statistical decision theory has been applied in several studies within the nuclear industry, but has not been fully adopted within the petroleum activity. The discussion concludes by listing important measures to manage the acceptable risk problem.

Time effects in criteria for acceptable risk

October 2002

·

26 Reads

The paper proposes new answers to some questions that occur frequently in practice, such as: “Should you discount future life risks and, if so, how?”, or “How can the acceptance criteria be applied when the risks and costs are not simultaneous, or are time series rather than numbers?” The solutions apply to many safety regulations or projects in which the environmental, social and cultural consequences are secondary for acceptance in comparison with the costs and the risk to life or health. It is shown that discounting need be applied to financial quantities only, entirely obviating the ethical difficulties that go with the concept of “the value of a human life”.

On the consistency of risk acceptance criteria with normative theories for decision-making

December 2008

·

146 Reads

In the evaluation of safety in projects it is common to use risk acceptance criteria to support decision-making. In this paper, we discuss to what extent risk acceptance criteria are in accordance with the normative theoretical framework of the expected utility theory and the rank-dependent utility theory. We show that the use of risk acceptance criteria may violate the independence axiom of the expected utility theory and the comonotonic independence axiom of the rank-dependent utility theory. Hence the use of risk acceptance criteria is not in general consistent with these theories. The level of inconsistency is highest for the expected utility theory.

Application of the risk oriented accident analysis methodology (ROAAM) to severe accident management in the AP600 advanced light water reactor

August 1999

·

121 Reads

An important part of the AP600 design, as well as of the design certification review by the US Nuclear Regulatory Commission, is devoted to ensuring defense in depth through deep consideration and management of severe accidents. Going beyond the traditional Level 2 PRA, in this article we show how this defense in depth was achieved and demonstrated in a consistent manner between prevention and mitigation, through application of the Integrated ROAAM approach. This requires the up-front integration of probabilistic and deterministic thought, which leads naturally to clear and coherent safety goals as an overall guide toward closure.
