energies
Review
Reliability Assessment of Passive Safety Systems for Nuclear
Energy Applications: State-of-the-Art and Open Issues
Francesco Di Maio 1,*, Nicola Pedroni 2, Barnabás Tóth 3, Luciano Burgazzi 4 and Enrico Zio 1,5


Citation: Di Maio, F.; Pedroni, N.; Tóth, B.; Burgazzi, L.; Zio, E. Reliability Assessment of Passive Safety Systems for Nuclear Energy Applications: State-of-the-Art and Open Issues. Energies 2021, 14, 4688. https://doi.org/10.3390/en14154688
Academic Editors: Jong-Il Yun and
Hiroshi Sekimoto
Received: 3 May 2021
Accepted: 19 July 2021
Published: 2 August 2021
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
1 Energy Department, Politecnico di Milano, 20156 Milan, Italy; enrico.zio@polimi.it
2 Dipartimento di Energia, Politecnico di Torino, 10121 Turin, Italy; nicola.pedroni@polito.it
3 NUBIKI Nuclear Safety Research Institute Ltd., 1121 Budapest, Hungary; tothb@nubiki.hu
4 ENEA Agenzia Nazionale per le Nuove Tecnologie, L’energia e lo Sviluppo Economico Sostenibile, 40121 Bologna, Italy; luciano.burgazzi@enea.it
5 Centre for Research on Risk and Crises (CRC), MINES ParisTech, PSL Research University, 75006 Paris, France
* Correspondence: francesco.dimaio@polimi.it
Abstract:
Passive systems are fundamental for the safe development of Nuclear Power Plant (NPP)
technology. The accurate assessment of their reliability is crucial for their use in the nuclear industry.
In this paper, we present a review of the approaches and procedures for the reliability assessment
of passive systems. We complete the work by discussing the pending open issues, in particular with respect to the need for novel sensitivity analysis methods, the role of empirical modelling and the integration of passive safety system assessment into the (static/dynamic) Probabilistic Safety Assessment (PSA) framework.
Keywords:
reliability assessment; Probabilistic Safety Assessment; passive safety systems; nuclear
power plants
1. Introduction
Passive systems are in use since the dawning of nuclear power technology. They have,
then, received a renewal of interest after the major nuclear accidents in 1979, 1986 and 2011.
However, in all, passive systems design has been the focus of a large number of researches
and applications that have not led to a common understanding of the benefits and cons of
passive safety systems implementation.
Such a common understanding must be established, in particular with respect to the reliability of passive systems, in order to demonstrate their qualification and usefulness for nuclear safety. Specifically, the large uncertainty associated with the inadequacies of the design codes used to simulate the complex physical behavior of passive systems must be addressed in the reliability assessment, because it may conceal a large unreliability.
In comparison to active systems, passive safety systems benefit from less dependence on external energy sources, no need for operator actions to activate them and reduced costs, including easier maintenance. Recognition of these advantages is shared among most stakeholders in the nuclear industry, as demonstrated by the number of nuclear reactor designs that make use of passive safety systems. Yet, it is still necessary to precisely assess and demonstrate the reliability of passive safety systems and their capacity to perform and complete the expected functions. In simple and direct words, passive safety systems may contribute to improving the safety of Nuclear Power Plants (NPPs), provided that their performance-based design and operation are demonstrated by tailored deterministic and reliability assessment methods, approaches and data (e.g., experimental databases) available to industry and regulators [1–5].
With reference to the passive natural circulation of fluid for emergency cooling, the
complex set of physical conditions that occurs in the passive safety systems, where no ex-
ternal sources of mechanical energy for the fluid motion are involved, has led the designers
Energies 2021, 14, 4688. https://doi.org/10.3390/en14154688 https://www.mdpi.com/journal/energies
of the present-generation reactors to position the main heat sink (i.e., the steam generators
for pressurized water reactors and feed-water inlet for boiling water reactors) at a higher
elevation with respect to the heat source location (i.e., the core). By so doing, should the
forced circulation driven by centrifugal pumps become unavailable, the removal of the
decay heat produced by the core is still allowed [6]. For their reliability assessment, mathematical models are typically built [7] to describe the mathematical relationship between the passive system physical parameters influencing the NPP behavior; these are then translated into detailed mechanistic Thermal-Hydraulic (T–H) computer codes for simulating the effects of various operational transients and accident scenarios on the system [7–15].
In practice, characteristics of the system under analysis are only partially captured and,
therefore, simulated by the associated T–H code. Moreover, the uncertainties affecting the behavior of passive systems and its modeling are usually much larger than those associated with active systems, challenging the passive systems reliability assessment [16–18]. This is due to [1,8–10,17,19–22]: (i) the stochastic transitions of intrinsically random phenomena (such as component degradation and failures), and (ii) the lack of experimental results, which undermines the completeness of the knowledge about some of those same phenomena [23–25]. Such uncertainties translate into uncertainty in the model output that, for the sake of a realistic reliability assessment, must be estimated [22,26–28].
In this paper, we review the methodological solutions to the T–H passive safety
systems reliability assessment. In particular, the approaches for the reliability assessment of
nuclear passive systems are described in Section 2: the independent failure modes, hardware failure modes and functional failure approaches are described in Sections 2.1–2.3, respectively. In Section 3, advanced Monte Carlo simulation approaches are introduced. In Section 4, the existing coordinated procedures for reliability evaluation are presented. Open issues, along with the methods proposed in the literature to address them, are discussed in Section 5: (i) the identification of the model hypotheses and parameters that contribute most to the output uncertainty (Section 5.1), (ii) empirical regression modelling for reducing computational time (Section 5.2), and (iii) the integration of the reliability assessment of passive systems into the current Probabilistic Safety Assessment (PSA) framework (Section 5.3).
2. Approaches for the Reliability Assessment of Passive Systems
In general, the reliability of passive systems depends on:
- systems/components reliability;
- physical phenomena reliability, which accounts for the physical boundary conditions and mechanisms.
This means that, to guarantee a high passive system reliability, well-engineered components (with at least the same reliability as active systems) are to be selected, and the physical principles (e.g., gravity and density difference in T–H passive systems) and the effects of the surrounding/environmental conditions, in which they occur and affect the parameter evolution during the accident development (e.g., flow rate and exchanged heat flux in T–H passive systems), are to be fully understood and captured. Both aspects should be considered within a consistent approach to passive system reliability assessment. In what follows, a summary of three different approaches is provided for the assessment of passive system performance upon onset of system operation.
2.1. The Independent Failure Modes Approach
The independent failure modes approach entails [16]: (i) identifying the failure modes leading to the unfulfillment of the passive system function, and (ii) evaluating the system failure probability as the probability of failure modes occurrence.
Typically, failure modes are identified from the application of a Failure Modes and
Effects Analysis (FMEA) procedure [29].
Conventional probabilistic failure process models commonly used for hardware components (i.e., the exponential distribution, e^(−λt), where λ is the failure rate and t is time) are not applicable to model physical process failures; in this case, each failure is characterized by specific critical physical parameter distributions and a defined failure mode, which implies, for each failure mode, the definition of the probability distributions and failure ranges of the critical physical parameters (for example, for a T–H passive system, these may include non-condensable gas build-up, undetected leakage, heat exchange reduction due to surface oxidation, piping layout, thermal insulation degradation, etc.).
Eventually, to evaluate the probability of the event of failure of the system, Pet, the probabilities of the different failure mode events, Pei, i = 1, . . . , n, are combined according to a series logic, assuming mutually non-exclusive independent events [29]:
Pet = 1.0 − ((1.0 − Pe1)(1.0 − Pe2) . . . (1.0 − Pen)) (1)
Since this approach assumes that a single failure mode event is sufficient to lose the system function, the resulting system failure probability can be conservatively taken as an upper bound for the unreliability of the system [29].
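The series-logic combination of Equation (1) can be sketched in a few lines of Python; the failure-mode probabilities below are illustrative placeholders, not values from the reviewed studies:

```python
# Illustrative (invented) failure-mode probabilities Pe_i for a passive system
failure_mode_probs = [1.0e-3, 5.0e-4, 2.0e-3]

def system_failure_probability(pe):
    """Series-logic combination of independent, non-mutually-exclusive
    failure modes, as in Equation (1): Pet = 1 - prod_i (1 - Pe_i)."""
    survival = 1.0
    for p in pe:
        survival *= 1.0 - p
    return 1.0 - survival

pet = system_failure_probability(failure_mode_probs)
```

For rare failure modes, Pet is close to (but slightly below) the plain sum of the individual probabilities, consistent with its use as a conservative upper bound.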
2.2. The Hardware Failure Modes Approach
In the hardware failure modes approach [30], the unreliability of the passive system is obtained by accounting for the probabilities of occurrence of the hardware failures that degrade the physical mechanisms upon which the passive system relies for its function.
For example, with reference to a typical Isolation Condenser [30], natural circulation failure due to a high concentration of non-condensable gas is modelled in terms of the probability of occurrence of vent line failure to purge the gases [3]; natural circulation failure because of insufficient heat transfer to an external source is assessed through two possible hardware failure modes: (1) insufficient water in the pool and make-up valve malfunctioning, and (2) degraded heat transfer conditions due to excessive fouling of the heat exchanger pipes.
Thus, the probabilities of degraded physical mechanisms are expressed in terms of
unreliability of the components whose failures degrade the function of the passive system.
Some critical aspects of this approach are: (i) the possible lack of completeness of the identified failure modes and corresponding hardware failures, (ii) the neglect of failures due to unfavourable initial or boundary conditions, and (iii) the fact that the fault tree models typically adopted to represent the hardware failure modes may inappropriately replace the complex T–H code behavior and fail to predict the interactions among the physical phenomena of the system [3].
2.3. The Functional Failure Approach
The functional failure approach is based on the concept of functional failure [31]: in the context of passive systems, this is defined as the probability of failing to achieve a safety function (i.e., the probability that a given safety variable, the load, exceeds a safety threshold, the capacity). To model uncertainties, probability distributions are assigned, mainly by subjective/engineering judgment, to both the safety threshold (for example, a minimum value of water mass flow required) and the safety variable (i.e., the water mass flow circulating through the system).
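A minimal Monte Carlo sketch of this load-capacity formulation is given below; following the example above, the functional failure event is the circulating mass flow falling below the required minimum. The Gaussian distributions and their parameters are illustrative assumptions, not taken from any referenced system:

```python
import random

random.seed(42)

def functional_failure_probability(n_samples=100_000):
    """Crude MC estimate of the functional failure probability, i.e., the
    probability that the safety variable (delivered mass flow) falls below
    the safety threshold (required mass flow). The Gaussian distributions
    and their parameters are illustrative assumptions."""
    failures = 0
    for _ in range(n_samples):
        required = random.gauss(10.0, 1.0)    # capacity: required flow [kg/s]
        delivered = random.gauss(13.0, 1.5)   # load: delivered flow [kg/s]
        if delivered < required:              # functional failure event
            failures += 1
    return failures / n_samples

p_ff = functional_failure_probability()
```

With the assumed distributions the failure probability is a few percent; for realistic passive systems it is orders of magnitude smaller, which motivates the advanced sampling schemes of Section 3.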
3. Advanced Monte Carlo Simulation Approach
The functional failure approach (Section 2.3) relies on the deterministic T–H computer code model (mathematically represented by the nonlinear function f(·)) and on the Monte Carlo (MC) propagation of the uncertainties in the code inputs x (i.e., their Probability Density Functions (PDFs) q(x)) to the outputs y = f(x), with respect to which the failure event is defined according to given safety thresholds. The propagation consists in repeating the T–H computer code runs (or simulations) for different sets of the uncertain input values x, sampled from their PDF q(x) [1,2,5,32–38]. The main strength of MC simulation is that it does not force the analyst to resort to simplifying approximations, since it is unaffected by the T–H model complexity, and it is, therefore, expected to provide the most realistic passive system assessment.
On the other hand, it is challenged by the long calculations needed to run the detailed, mechanistic T–H code (one run for each batch of sampled input values) and by the computational effort, which increases with decreasing failure probability [39,40] and which, incidentally, is particularly demanding for T–H passive safety systems, whose functional failure probability is typically very small (e.g., less than 10^-4) [5,28].
To reduce the number of T–H code runs and the computational time as much as possible, two alternatives can be considered: fast-running surrogate regression models (also called response surfaces or metamodels) and advanced Monte Carlo simulation methods [41,42].
Fast-running surrogate regression models mimic the response of the original T–H model code, circumventing the long computing time (see Section 5.2 for details): to name a few, polynomial Response Surfaces (RSs) [43], polynomial chaos expansions [44], stochastic collocations [45], Artificial Neural Networks (ANNs) [46,47], Support Vector Machines (SVMs) [48] and kriging [49].
Advanced Monte Carlo Simulation allows limiting the number of code runs while guaranteeing robust estimations [50,51]. The present Section focuses on this latter class of methods.
Among these, Stratified Sampling consists in calculating the probability of each of the non-overlapping subregions (i.e., strata) of the sample space; by randomly sampling a fixed number of outcomes from each stratum (i.e., the stratified samples), the coverage of the sample space is ensured [52,53]. However, the definition of the strata and the calculation of the associated probabilities is a major challenge [53].
Latin Hypercube Sampling (LHS), commonly used in PSA [52,54–58] and reliability assessment problems [59], is a compromise between standard MCS and Stratified Sampling, but it does not sufficiently outperform standard MCS in the estimation of small failure probabilities [60], as is the case in passive safety systems reliability assessment.
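For illustration, a minimal pure-Python LHS design on the unit hypercube can be sketched as follows; the stratum-per-dimension construction is the standard one, while the sample sizes and seed are arbitrary choices:

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """One LHS design on the unit hypercube: every dimension is split into
    n_samples equiprobable strata, each stratum is sampled exactly once,
    and the strata are paired randomly across dimensions."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_dims):
        # one random point inside each stratum [i/n, (i+1)/n)
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)
        columns.append(col)
    return list(zip(*columns))   # one tuple per sample

design = latin_hypercube(10, 2)
```

Each marginal is covered exactly once per stratum, which is what gives LHS its variance reduction over crude MCS for well-behaved (e.g., monotone) models.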
Subset Simulation (SS) [61–63] and Line Sampling (LS) [64,65] have been proposed as advanced MCS methods to tackle the typical multidimensional load-capacity problems of structural reliability assessment: therefore, they can address the problem of the functional failure probability assessment of T–H passive systems [22,47,66].
In the SS approach, the problem is tackled by performing simulations of sequences of (more) frequent events in their conditional probability spaces: finally, the product of the conditional probabilities of such more frequent events is taken as the functional failure probability; Markov Chain Monte Carlo (MCMC) simulations are used to generate the conditional samples [67], which, by sequentially populating the intermediate conditional regions, reach the final functional failure region.
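A toy sketch of the SS scheme is given below for an assumed linear performance function with standard normal inputs; the threshold, samples per level, conditional probability and proposal width are illustrative choices, not prescriptions from the cited works:

```python
import math
import random

rng = random.Random(1)

B_FAIL = 6.0   # failure threshold defining the rare event {g(x) >= B_FAIL}
N = 1000       # samples per conditional level
P0 = 0.1       # conditional probability assigned to each level

def g(x):
    """Toy performance function standing in for a T-H code output."""
    return x[0] + x[1]

def subset_simulation():
    # Level 0: crude Monte Carlo from the input PDF q(x) = N(0, I)
    xs = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(N)]
    p_f = 1.0
    for _ in range(20):                    # cap on the number of levels
        xs.sort(key=g, reverse=True)
        n_seed = int(P0 * N)
        b = g(xs[n_seed - 1])              # adaptive intermediate threshold
        if b >= B_FAIL:                    # final failure region reached
            n_fail = sum(1 for x in xs if g(x) >= B_FAIL)
            return p_f * n_fail / N
        p_f *= P0
        # Repopulate the conditional region {g >= b} with Metropolis chains
        # started from the n_seed surviving samples.
        seeds, xs = xs[:n_seed], []
        for x in seeds:
            for _ in range(N // n_seed):
                cand = tuple(xi + rng.uniform(-1.0, 1.0) for xi in x)
                # Metropolis acceptance w.r.t. the standard-normal target,
                # restricted to the current conditional region {g >= b}
                log_ratio = 0.5 * (sum(xi * xi for xi in x)
                                   - sum(c * c for c in cand))
                if g(cand) >= b and rng.random() < math.exp(min(0.0, log_ratio)):
                    x = cand
                xs.append(x)
    return p_f

p_ss = subset_simulation()
```

The true probability here is about 1e-5; a crude MC estimate of comparable accuracy would need millions of samples, while SS reaches it with a few thousand evaluations of g.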
In the LS method, the failure domain of the high-dimensional problem under analysis is explored by means of lines, instead of random points [65]. One-dimensional problems are solved along an “important direction” that optimally points towards the failure domain, in place of the high-dimensional problem [65]. The approach outperforms standard MCS in a wide range of engineering applications [22,39,51,65,68–71] and ideally allows reducing to zero the variance of the failure probability estimator if the “important direction” is perpendicular to the (almost linear) boundaries of the failure domain [64].
Finally, the most frequently adopted advanced MCS method is Importance Sampling (IS): in IS, the original PDF q(x) is replaced by an Importance Sampling Density (ISD) g(x) biased towards the MC samples that lead to outputs close to the failure region, so as to artificially increase the (rare) failure event frequency. To approximate the ideal ISD g*(x) (i.e., the one that would make the standard deviation of the MC estimator equal to zero), the Adaptive Kernel (AK) [72,73], the Cross-Entropy (CE) [74–76], the Variance Minimization (VM) [77] and the Markov Chain Monte Carlo-Importance Sampling (MCMC-IS) [78] methods have been proposed.
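The IS re-weighting can be illustrated on a standard one-dimensional rare-event problem; the Gaussian ISD centered at the failure threshold is an assumed, hand-picked choice rather than one of the adaptive schemes cited above:

```python
import math
import random

rng = random.Random(7)

def normal_pdf(x, mu=0.0):
    """Unit-variance Gaussian density."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

def is_estimate(threshold=4.0, shift=4.0, n=20_000):
    """Estimate the rare-event probability P(X > threshold) for X ~ N(0,1)
    by sampling from the biased ISD g(x) = N(shift, 1) and re-weighting
    each failing sample by the likelihood ratio q(x)/g(x)."""
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)          # sample from the ISD g(x)
        if x > threshold:                  # failure indicator
            total += normal_pdf(x) / normal_pdf(x, mu=shift)
    return total / n

p_is = is_estimate()   # exact value is about 3.17e-5
```

With 20,000 biased samples the estimator achieves a relative error of a few percent, whereas crude MC with the same budget would typically observe no failures at all.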
Adaptive Metamodel-based Subset Importance Sampling (AM-SIS) is a recently proposed method [79] which combines SS and metamodels (for example, Artificial Neural Networks, ANNs) within an adaptive IS scheme, as follows [78,79]:
1. Subset Simulation (SS) is used to create an input batch from the ideal, zero-variance ISD g*(x), relying on an ANN that (i) is adaptively refined in proximity of the failure region by means of the samples iteratively produced by SS, and (ii) substitutes the expensive T–H code f(x);
2. The g*(x) built at step (1) is used to perform IS and calculate the failure probability of the T–H passive system.
Notice that the idea of integrating metamodels within efficient MCS schemes has been widely proposed in the literature: see, e.g., [80–88].
4. Frameworks for the Reliability Assessment of Passive Systems
A first framework for the reliability assessment of passive systems is the REPAS (Reliability Evaluation of Passive Systems) methodology [3], later continued in the EU (European Union) project RMPS (Reliability Methods for Passive Systems) (https://cordis.europa.eu/project/id/FIKS-CT-2000-00073, accessed on 3 May 2021) [11]. The RMPS methodology is aimed at: (1) the identification and quantification of the sources of uncertainties, combining (often vague and imprecise) expert judgments with the (typically scarce) experimental data available, and the determination of the critical parameters; (2) the propagation of the uncertainties through T–H codes and the assessment of the passive system unreliability; and (3) the introduction of the passive system unreliability into the accident sequences for probabilistic risk analysis. The RMPS methodology has been successfully applied to passive systems providing natural circulation of coolant flow in different types of reactors (BWR, PWR and VVER). A complete example of application, concerning the passive residual heat removal system of a CAREM (Central Argentina de Elementos Modulares) reactor, is presented in [89]. Recently, the RMPS methodology has been applied by ANL (Argonne National Laboratory) in studies evaluating the reliability of passive systems designed for Gen IV sodium fast reactors: see, for instance, [90].
In the APSRA (Assessment of Passive System ReliAbility) methodology [91], a failure hyper-surface is generated in the space of the critical physical parameters by considering their deviations from the nominal values. A root-cause analysis is first performed to identify the causes of deviation of these parameters, under the assumption that such deviations occur only due to failures of mechanical components. Then, the probability of failure of the passive system is evaluated from the failure probability of these mechanical components. Examples of the application of APSRA (and of its evolution APSRA+) can be found in [91,92].
The two frameworks, RMPS and APSRA, have certain features in common, as well as distinctive characteristics. Among the similarities, both methodologies use Best Estimate (BE) codes to estimate the T–H behavior of the passive systems and integrate both probabilistic and deterministic analyses to assess the reliability of the systems. As for the differences, while the RMPS framework proceeds with the identification and quantification of the parameter uncertainties, using probability distributions and propagating their realizations via a T–H code or a response surface, the APSRA methodology assesses the causes of deviation of the parameters from their nominal values.
5. Open Issues
In the following Sections, the open issues regarding the methods and frameworks for the reliability assessment of passive safety systems, and for their application, are discussed, in particular with respect to the need for novel sensitivity analysis methods, the role of empirical regression modelling and the integration of passive systems into PSA.
5.1. Sensitivity Analysis Methods
Safety margins are verified in practice by resorting to T–H codes [41,93]. Recently, these calculations have been performed with BE T–H codes, which provide realistic results and avoid over-conservatism [51], accompanied by the demanding identification and quantification of the uncertainties in the code, which requires a large number of simulations [94].
To tackle this challenge, various approaches of Uncertainty Analysis (UA) have been developed, e.g., Code Scaling, Applicability and Uncertainty (CSAU) [95], the Automated Statistical Treatment of Uncertainty Method (ASTRUM) and the Integrated Methodology for Thermal Hydraulics Uncertainty Analysis (IMTHUA) [28]. In all the mentioned approaches, the assumption is that the input variables follow statistical distributions: this implies that if N input sets are sampled from these distributions and fed to the BE code, the corresponding N output values can be calculated, propagating the variability of the input variables onto the output. To speed up the computation by substituting the T–H code with a simple and faster surrogate, a combination of Order Statistics (OS) [96] and Artificial Neural Networks [97] has been proposed. However, this latter approach does not allow one to completely characterize the PDF of the output variable, but only some of its percentiles [5].
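For illustration, the first-order, one-sided Wilks criterion underlying this OS approach can be computed directly; the function below simply searches for the smallest N of code runs satisfying 1 − coverage^N ≥ confidence:

```python
def wilks_sample_size(coverage=0.95, confidence=0.95):
    """Smallest number N of code runs such that the maximum of the N
    sampled outputs bounds the `coverage` quantile of the output with
    the required confidence (first-order, one-sided Wilks criterion:
    1 - coverage**N >= confidence)."""
    n = 1
    while 1.0 - coverage ** n < confidence:
        n += 1
    return n

n_95_95 = wilks_sample_size()              # the classical 59-run rule
n_99_95 = wilks_sample_size(coverage=0.99)
```

The 95%/95% case yields the well-known 59-run rule: the largest of 59 code outputs is, with 95% confidence, an upper bound on the 95th percentile, regardless of the output distribution.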
Sensitivity Analysis (SA) techniques can be categorized into Local, Regional and Global [97]. Local approaches provide locally valid information, since they analyze the effect on the output of small variations around fixed values of the input parameters. Regional approaches analyze the effects on the output of partial ranges of the input distributions. Global approaches analyze the contribution of the entire distribution of the inputs to the output variability. This makes the global approaches more suitable when models are non-linear and non-monotone, cases in which local and regional approaches may fail. The higher capabilities of global approaches are paid for by larger computational costs. Examples of global methods are the Fourier Amplitude Sensitivity Test (FAST) [52], the Response Surface Methodology (RSM) [43] and variance decomposition methods [26].
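As an illustration of a variance decomposition method, the first-order Sobol' indices of a toy additive model can be estimated with the standard pick-freeze scheme; the model and sample size below are assumptions for demonstration only:

```python
import random

rng = random.Random(3)

def model(x1, x2):
    """Invented additive model standing in for a T-H code output."""
    return x1 + 2.0 * x2

def sobol_first_order(n=100_000):
    """Pick-freeze estimate of the first-order Sobol' indices S1, S2
    for independent U(0,1) inputs."""
    a = [(rng.random(), rng.random()) for _ in range(n)]
    b = [(rng.random(), rng.random()) for _ in range(n)]
    y_a = [model(x1, x2) for x1, x2 in a]
    mean = sum(y_a) / n
    var = sum((y - mean) ** 2 for y in y_a) / n
    indices = []
    for i in range(2):
        # keep ('freeze') input i from matrix A, take the other from matrix B
        y_ab = [model(a[k][0] if i == 0 else b[k][0],
                      a[k][1] if i == 1 else b[k][1]) for k in range(n)]
        cov = sum(ya * yb for ya, yb in zip(y_a, y_ab)) / n - mean ** 2
        indices.append(cov / var)
    return indices

s1, s2 = sobol_first_order()   # analytical values: 0.2 and 0.8
```

For this model the output variance is (1 + 4)/12, so x2 carries 80% of it; the estimated indices recover this ranking, but note that each index requires a full extra batch of model runs, which is exactly the cost issue discussed next.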
In this Section, we illustrate a relatively recent method for global SA, called the distribution-based approach [94]. In practice, the PDF of the output variable is reconstructed, with fewer runs than variance decomposition-based methods, for conducting an SA. Polynomial Chaos Expansion (PCE) methods have been used for this purpose [44], although multimodal output variable distributions cannot be modeled by PCE (because, to reconstruct the PDF accurately enough, the order of the expansion and the computational cost become too large). In such cases, Finite Mixture Models (FMMs) [98] can overcome the problem by naturally “clustering” the T–H code output (e.g., subdividing the inputs leading to output values with large, low or insufficient safety margins) into the probabilistic models (i.e., PDFs) composing the mixture. The advantages are (i) the availability of the analytical PDF of the model output and (ii) a lower computational cost than classical global SA methods.
To further reduce the computational cost related to the T–H code runs, a framework based on FMMs has been proposed in [94]. The natural clustering made by the FMM on the T–H code output [99] (where one cluster corresponds to one Gaussian model of the mixture) is exploited to develop an ensemble of three SA methods that perform differently depending on the data at hand: input saliency [100], Hellinger distance [101,102] and Kullback–Leibler divergence [101,102]. The advantage offered by the diversity of the methods is the possibility of overcoming the errors that may affect the individual methods, due to the limited quantity of data.
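Two of the ensemble's distance measures have closed forms for univariate Gaussians, which is the relevant case when the FMM components are Gaussian; a sketch, with invented parameter values for the usage example, is:

```python
import math

def hellinger_gauss(mu1, s1, mu2, s2):
    """Closed-form squared Hellinger distance between N(mu1, s1^2)
    and N(mu2, s2^2)."""
    bc = (math.sqrt(2.0 * s1 * s2 / (s1 ** 2 + s2 ** 2))
          * math.exp(-0.25 * (mu1 - mu2) ** 2 / (s1 ** 2 + s2 ** 2)))
    return 1.0 - bc

def kl_gauss(mu1, s1, mu2, s2):
    """Closed-form Kullback-Leibler divergence KL(N1 || N2)."""
    return (math.log(s2 / s1)
            + (s1 ** 2 + (mu1 - mu2) ** 2) / (2.0 * s2 ** 2) - 0.5)

# Example: distance between the unconditional output PDF and the PDFs
# obtained when perturbing two hypothetical inputs (invented values);
# the larger distance flags the more influential input.
d1 = hellinger_gauss(0.0, 1.0, 0.5, 1.0)
d2 = hellinger_gauss(0.0, 1.0, 2.0, 1.0)
```

Both measures are zero for identical distributions and grow with the separation of the PDFs, which is what makes them usable as sensitivity indices over the mixture components.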
The applicability of the proposed framework to the reliability assessment of passive safety systems is challenging, because one must consider the uncertainties affecting the functional performance of the passive systems [1,16,66,92,103].
In [104], the application of the framework to the Passive Containment Cooling System (PCCS) of an AP1000 advanced reactor during a Loss Of Coolant Accident (LOCA) is shown. The combination of multiple sensitivity rankings is shown to increase the robustness of the results, without any additional T–H code run.
The work in [104] has been extended in [105] by considering three global SA methods (Input Saliency (IS), Hellinger Distance (HD) and Kullback–Leibler Divergence (KLD)) and the Bootstrap [97], which (artificially, but without information bias) increases the amount of data available. The framework has been applied to a real case study of a Large Break Loss of Coolant Accident (LBLOCA) in the Zion 1 NPP [106], simulated by the TRACE code.
5.2. Role of Empirical Regression Modelling
To address the computational problem related to running the detailed, mechanistic T–H system code, either efficient sampling techniques can be adopted, as described in Section 3, or nonparametric order statistics [107] can be employed, especially if only particular statistics (e.g., the 95th percentile) of the code outputs are needed [96,108,109], or fast-running surrogate regression models can be implemented to mimic the long-running T–H model.
In general terms, the construction (i.e., training) of such regression models entails using a (reduced) number (e.g., 50–100) of input/output patterns of the T–H model code for fitting, by statistical techniques, the response surface of the regression model to the input/output data. Several examples can be found in the literature: in [87,88,110], polynomial Response Surfaces (RSs) are used to calculate the structural failure probability; in [5,34,36], the reliability analysis of a T–H passive system of an advanced nuclear reactor is performed with linear and quadratic polynomial RSs; in [48,111,112], Radial Basis Functions (RBFs), Artificial Neural Networks (ANNs) and Support Vector Machines (SVMs) are shown to provide local approximations of the failure domain in structural reliability problems and in the functional failure analysis of a passive safety system in a Gas-cooled Fast Reactor (GFR); finally, Gaussian meta-models have been used in [113,114] for the sensitivity analysis of the inputs driving radionuclide transport in groundwater, as modeled by complex hydrogeological models.
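As a minimal illustration of RS training, a quadratic response surface can be fitted by least squares to a handful of runs of a stand-in code; the response function below is invented for demonstration, and the normal equations are solved directly with Cramer's rule:

```python
import math
import random

rng = random.Random(5)

def th_code(x):
    """Stand-in for a long-running T-H code (invented response)."""
    return math.exp(0.3 * x) + 0.1 * math.sin(x)

# Training set: a small number of 'expensive' code runs
xs = [rng.uniform(0.0, 5.0) for _ in range(50)]
ys = [th_code(x) for x in xs]

def fit_quadratic_rs(xs, ys):
    """Least-squares fit of y ~ a + b*x + c*x**2 by solving the 3x3
    normal equations with Cramer's rule."""
    s = [sum(x ** k for x in xs) for k in range(5)]        # sums of x^0..x^4
    t = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    m = [[s[0], s[1], s[2]],
         [s[1], s[2], s[3]],
         [s[2], s[3], s[4]]]

    def det3(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
                - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
                + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

    d = det3(m)
    coeffs = []
    for col in range(3):
        mc = [row[:] for row in m]
        for r in range(3):
            mc[r][col] = t[r]
        coeffs.append(det3(mc) / d)
    return coeffs

a, b, c = fit_quadratic_rs(xs, ys)

def surrogate(x):
    """Fast-running response surface replacing th_code."""
    return a + b * x + c * x * x
```

Once trained, the surrogate can be evaluated millions of times within an MC or SA loop at negligible cost, which is precisely the role played by RSs, ANNs and kriging in the cited studies.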
5.3. Integration of Passive Systems in PSA
The introduction of passive safety systems in the framework of PSA based on Fault Trees (FTs) and Event Trees (ETs) deserves particular attention. The reason is that the reliability of these systems does not depend only on (mechanical) component failure modes, but also on the occurrence of phenomenological events. This makes the problem nontrivial (see Sections 2 and 3), because it is difficult to define the status of these systems along an accident sequence only in Boolean terms of ‘success/failure’. An ‘intermediate’ mode of operation of a passive system or, equivalently, a degraded performance of the system (up to the failure point) should be considered, where the passive system might still be capable of providing a functional level sufficient for the mitigation of the accident progression.
5.3.1. Integration of Passive System Reliability into Static PSA
An ET describes—in a logically structured, graphical form—the sequences of events
(scenarios) that can possibly originate from an initiating event, depending on the fulfilment
(or not) of the functional requirements of the safety (and operational) systems involved in
the accident scenario. For each of these systems, an FT displays in graphical/logic form
all the combinations of the so-called basic events that cause the failure of the system, by
connecting the events through logic gates. The basic events represent the fundamental
failure modes of the system and can be assessed by different reliability models and data.
With respect to active safety systems working in conventional, currently operating nuclear facilities, the following two fundamental failure modes are usually considered:
- Start-up failure: for standby active equipment (e.g., pumps, fans), the failure probability of start-up should be assessed, while for valves, the failure probability of opening and/or closing should be modelled.
- Failure during operation: the failure probability during operation of active components (e.g., pumps) should be quantified and modelled in the PSA. To this purpose, the most commonly applied reliability models employ the failure rate and the expected mission time (or functional time) of the component. For components with relatively short mission times (1–2 h), this kind of malfunction is usually modelled within the start-up failure framework.
With respect to passive systems, the applicability of the FT method depends on the
passivity level (A, B, C and D), as defined by the IAEA [115].
Type ‘B’ passive systems do not contain any moving mechanical parts and the start-up
of the system is triggered by passive phenomena (with the exclusion of valve utilization):
in this case, the start-up failure probability of the system is determined only based on
the probability that the passive physical phenomenon occurs or not (e.g., that natural
circulation develops in the cooling circuit). Failure during operation is, instead, determined
by the physical stability of the passive phenomenon (e.g., long-term stability of the natural
circulation), which is mainly influenced by the initial and the boundary conditions. It is
worth mentioning that, as pointed out before, modelling start-up failure and failure during
operation needs the consideration of different physical phenomena, because alterations
in boundary conditions during accident mitigation can result in the degradation of the
driving forces even after a successful start-up.
When passive systems are concerned, other failure modes are to be considered, such as mechanical equipment failures (e.g., heat exchanger plugging, rupture or leak), which can also lead to failure during operation [2] and alter the physical stability, and human errors, which can influence the long-term operation of a passive system. In some cases (for example, [89]), these failure modes are considered in a separate FT.
As an example of a type ‘B’ passive system, let us consider a passive residual heat removal system [2] where the heat is transferred into a pool that must be refilled to ensure the fulfillment of the safety function in the long run. The resulting FT for the start-up and during-operation failure modes is shown in Figure 1: the failure probability of the ‘phenomenological’ basic events (i.e., ‘natural circulation fails to start’ NC-FS and ‘natural circulation fails to run’ NC-FR) should be derived from the reliability assessment of the physical phenomenon, while the failure probabilities of the mechanical parts (i.e., ‘component failure during operation’ COMP-FAIL and ‘refill failure of ultimate sink’ REFILL) are the result of classical FMEA or HAZOP methods.
Figure 1. FT for start-up failure and failure during operation for type ‘B’ passive systems.
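For illustration, the FT of Figure 1 can be quantified in a few lines, under the simplifying assumption of mutually independent basic events; all numerical values below are hypothetical placeholders, not data from the reviewed studies:

```python
# Illustrative quantification of the type 'B' fault tree of Figure 1,
# assuming mutually independent basic events (no common-cause failures).
def or_gate(*p):
    """Probability that at least one of the input events occurs."""
    q = 1.0
    for pi in p:
        q *= (1.0 - pi)
    return 1.0 - q

p_nc_fs = 1e-3    # 'natural circulation fails to start' (NC-FS), hypothetical
p_nc_fr = 5e-4    # 'natural circulation fails to run' (NC-FR), hypothetical
p_comp = 2e-4     # 'component failure during operation' (COMP-FAIL), hypothetical
p_refill = 1e-3   # 'refill failure of ultimate sink' (REFILL), hypothetical

p_startup = p_nc_fs
p_operation = or_gate(p_nc_fr, p_comp, p_refill)
p_system = or_gate(p_startup, p_operation)
print(p_startup, p_operation, p_system)
```

In practice, the ‘phenomenological’ probabilities (NC-FS, NC-FR) would come from the reliability assessment of the physical phenomenon discussed above, not from generic component databases.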
Types ‘C’ and ‘D’ passive systems may contain moving mechanical parts (e.g., check
valves in case of type ‘C’ and motor-operated valves in case of type ‘D’), in order to trigger
the operation of the system. In this case, the system start-up failure is determined by both
the malfunction of the active (or mechanical) component and the probability of the physical
phenomenon development, while the failure during operation is determined by the stability
of the physical phenomena, the reliability of mechanical parts and the possible failure of
the refill procedure (if considered), similarly to type ‘B’ passive systems. Moreover, for
type ‘D’ passive systems, the failures of electric power supply and Instrumentation and
Control (I & C) systems have to be considered along with the active component failure
during start-up.
Typical FTs for start-up failure and failure during operation for type ‘C’ and ‘D’ passive
systems are shown in Figure 2.
Figure 2. FTs for the start-up failure and failure during operation for type ‘C’ and ‘D’ passive systems.
As usual in traditional PSA, the FTs have to be linked to the ETs, where the passive system success/failure is considered among the ET header events [116]. In general terms, the call into operation of a passive system results from the malfunction of an active system: therefore, the header representing passive systems is typically preceded by headers of active systems.
Integration can be done, alternatively, by:
Separate headers for start-up failures and failures during operation;
One header representing both types of failure.
The ETs representing these two alternatives are presented in Figure 3, left and
right, respectively.
Figure 3. Possible approaches to integrating FTs of passive systems into ETs. Left: separate headers for start-up failures and failures during operation; right: one header representing both types of failure.
In the former case, the FTs presented in Figures 1 and 2 are placed behind the
two distinct headers (‘Passive System Successfully Starts’ and ‘Passive System Successfully
Continues Operation’), whereas in the latter case, the two FTs are linked together into an
‘OR’ gate and placed behind the single header ‘Passive System Successfully Starts and
Continues Operation’.
In most cases, the two ET construction approaches result in the same minimal cut set lists; however, the first approach should be applied cautiously to scenarios where more than one redundant train is available and the operation of a single train can fulfill the required safety function: in this particular case, some relevant minimal cut sets are left out of the results. For illustration purposes, consider a passive system with two redundant trains. The top gate of the FT for the start-up failure is an ‘AND’ gate, which links the start-up failures of the two redundant trains. The FT for the failure during operation has the same structure. As a result, the passive system fails only if both trains fail to start or both trains fail to run, neglecting the minimal cut set ‘one train fails to start and the other train fails to run’. Therefore, in this case (when there are 100% redundant trains), the second option is preferable.
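The numerical effect of the two integration options can be sketched as follows; the per-train failure probabilities are hypothetical and the trains are assumed independent (i.e., no common-cause failures):

```python
# Two 100% redundant trains; each train can fail to start (FS) or fail to run (FR).
p_fs, p_fr = 1e-2, 5e-3                   # hypothetical per-train probabilities
p_train = 1 - (1 - p_fs) * (1 - p_fr)     # a train fails if either mode occurs

# Option 1 (separate headers): only the cut sets {FS1, FS2} and {FR1, FR2}
# survive, so the mixed cut sets {FS1, FR2} and {FR1, FS2} are lost.
p_separate = 1 - (1 - p_fs**2) * (1 - p_fr**2)

# Option 2 (single header): the system fails if each train fails in *any*
# mode, which retains the mixed cut sets.
p_single = p_train**2

print(p_separate, p_single)   # option 1 underestimates the failure probability
```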
5.3.2. Integration of Passive Systems into Dynamic PSA
In the PSA practice, accident scenarios, though dynamic in nature, are usually analyzed with the ‘static’ ETs and FTs, as discussed in the previous Section 5.3.1.
The current ‘static’ PSA framework is limited when: (i) handling the actual timing of events, which ultimately influences the evolution of the scenarios; and (ii) modelling the interactions between the hardware components (i.e., failure rates) and the process variables (temperatures, mass flows, pressures, etc.) [66,104,105,117,118]. In practice, with respect to (i), different orders of the same success and failure events (and/or different timings of their occurrence) along an accident scenario typically lead to different outcomes; with respect to (ii), the event/scenario occurrence probabilities are affected by the values of the process variables (temperatures, mass flows, pressures, etc.). This highlights another limitation of the ‘static’ PSA framework, which can only handle Boolean representations of system states (i.e., success or failure), neglecting any intermediate (partial operation) states, which, conversely, are fundamental when concerned with passive system operation.
In fact, because of its specific features, defining the status of a passive system simply in terms of ‘success’ or ‘failure’ is limiting, since ‘intermediate’ modes of operation or, equivalently, degraded performance states (up to the failure point) are possible and may (still) guarantee some (even limited) operation. This operation could be sufficient to recover a failed system (e.g., through a redundancy configuration) and, ultimately, to avoid a severe accident.
In complex situations where several (multi-state) safety systems are involved and where human behavior still plays a relevant role, advanced solutions have been proposed and already used for dynamic PSA, like Continuous Event Trees (CETs) [119,120], Dynamic Event Trees (DETs) [121], Discrete DETs (DDETs) [122], Monte Carlo DETs (MCDETs) [123] and Repairable DETs (RDETs) [124], because they provide more realistic frameworks than static FTs and ETs, since they capture the interaction between the process parameters and the passive system states within the dynamical evolution of the accident. The most evident difference between DETs and static ETs is that, while ETs are constructed by expert analysts that draw their branches based on success/failure criteria set by the analysts, in DETs these are spawned by a software that embeds the (deterministic) models simulating the plant dynamics and the (stochastic) models of component failures. Naturally, the DET generates a number of scenarios much larger than that of the classical static FT/ET approaches, so that the a posteriori retrieval of information can become quite burdensome and complex [125–127].
Another challenge is related to the relevant effort in terms of computational time required for generating a large number of time-dependent accident scenarios by means of the Monte Carlo techniques that are typically employed to deeply and thoroughly explore the entire system state-space and to cover, in principle, all the possible combinations of events over long periods of time. This, for thermal-hydraulic passive systems, is even more relevant, since during the accident progression their reliability depends (more strongly than for other safety systems) upon time and the state/parameter evolution of the system. Therefore, also in this case, resorting to metamodels can help [128], accomplishing the evaluation process of T–H passive systems in a consistent manner.
The goal of dynamic PSA is, therefore, to account for the interaction of the process dynamics and the stochastic nature/behavior of the system at various stages and to embed the state/parameter evaluation by deterministic thermal-hydraulic codes within a DET generation [129]. The framework should be able to estimate the physical variations of all the system parameters/variables and the frequency of the accident sequences, while taking into proper account the dynamic effects. If the (mechanical) component failure probabilities (e.g., the per-demand failure probability of a valve) are known, then they can be combined with the probability distributions of the estimated parameters/variables, in order to predict the probabilistic evolution of each scenario.
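A deliberately minimal sketch of a discrete DET generation may clarify the idea: a lumped ‘plant model’ heats up unless the passive system runs, and the scenario frontier branches at the demand time and at a later stability check. All dynamics, times and probabilities below are invented for illustration:

```python
P_START, P_RUN = 0.99, 0.995   # hypothetical start-up / stability probabilities
HEAT, COOL = 5.0, -3.0         # degC per time step, illustrative plant dynamics

frontier = [(1.0, "standby", 100.0)]        # (scenario probability, state, T)
for step in range(6):
    branched = []
    for prob, state, T in frontier:
        if step == 1 and state == "standby":      # demand: start-up branching
            branched += [(prob * P_START, "running", T),
                         (prob * (1 - P_START), "failed", T)]
        elif step == 3 and state == "running":    # stability of natural circulation
            branched += [(prob * P_RUN, "running", T),
                         (prob * (1 - P_RUN), "failed", T)]
        else:
            branched.append((prob, state, T))
    # deterministic plant step: heat-up, mitigated only while the system runs
    frontier = [(p, s, T + HEAT + (COOL if s == "running" else 0.0))
                for p, s, T in branched]

p_hot = sum(p for p, _, T in frontier if T > 120.0)   # surrogate 'damage' criterion
print(len(frontier), p_hot)
```

Even this toy tree shows how scenario probability and process variables are propagated jointly; a realistic DET replaces the one-line plant step with a long-running T–H code, which is where the computational burden arises.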
In [130], the T–H passive system behavior is represented as a non-stationary stochastic process, where natural circulation is modelled in terms of time-variant performance parameters (e.g., thermal power and mass flow rate) assumed to be stochastic variables. In that work, which can be considered a preliminary attempt to address the dynamic aspect in the context of passive system reliability, the statistics of such stochastic variables (e.g., mean values and standard deviations) change in time, so that the corresponding random variables assume different values in different realizations (i.e., each realization is different).
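The essence of such a non-stationary representation can be sketched with a time-decaying mean and a time-growing standard deviation for the mass flow rate; the functional forms and numbers below are invented for illustration and are not taken from [130]:

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(10.0)                       # time steps into the transient
mean_w = 100.0 * np.exp(-0.05 * t)        # degrading natural-circulation flow
std_w = 2.0 + 0.5 * t                     # spread growing along the transient

# 1000 realizations of the time-variant performance parameter w(t)
w = rng.normal(loc=mean_w, scale=std_w, size=(1000, t.size))
print(w[:, 0].std(), w[:, -1].std())      # dispersion grows in time
```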
6. Conclusions, Recommendations and Additional Issues
In this paper, we have laid down a common understanding of the state-of-the-art and
open issues with respect to the reliability assessment of passive safety systems for their
adoption in nuclear installations. Indeed, such safety systems rely on intrinsic physical
phenomena, which makes the assessment of their performance quite challenging to carry
out with respect to traditional (active) systems. This is due to the typical scarcity of data in
a sufficiently wide range of operational conditions, which introduces relevant (aleatory and
epistemic) uncertainties into the analysis. These issues could have a negative impact on
the public acceptance of next generation nuclear reactors, which, thanks to the use of passive systems, should instead be safer than the current ones. Thus, structured and sound
frameworks and techniques must be sought, developed and demonstrated for a robust
quantification of the reliability/failure probability of nuclear passive safety systems.
With respect to T–H passive systems, a review of the available approaches and frame-
works for the quantification of the reliability of nuclear passive safety systems has been
presented, followed by a critical discussion of the pending open issues.
It has turned out that the massive use of expert judgement and subjective assumptions, combined with often scarce data, requires the propagation of the corresponding uncertainties by simulating the system behavior numerous times under different operating conditions. In this light, the most realistic assessment of the passive system is provided by the functional failure-based approach, thanks to MCS, which is flexible and is not negatively affected by model complexity: therefore, it does not require any simplifying assumption. On the other hand, often prohibitive computational efforts are required, because a large number of MC-sampled model evaluations must be carried out for an accurate and precise assessment of the frequently small (e.g., lower than 10^-4) functional failure probability: actually, each evaluation requires the call of a long-running mechanistic code (several hours per run). Thus, we must resort to advanced methods to tackle the issues associated with the analysis.
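The computational burden can be appreciated with a toy example, in which the long-running mechanistic code is replaced by a trivial analytical margin (all numbers are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
margin = rng.normal(3.7, 1.0, n)   # stand-in for the code output: failure if < 0
failures = int(np.sum(margin < 0.0))
p_hat = failures / n
cov = np.sqrt((1.0 - p_hat) / failures)   # coefficient of variation of the estimator
print(p_hat, cov)
# Roughly 10^6 samples are needed for ~10% relative error on a ~1e-4 probability:
# infeasible at hours per run, hence variance reduction and metamodels.
```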
As open issues, we focused, in particular, on the role of empirical regression modelling,
the need for advanced sensitivity analysis methods and the integration of passive systems
in the (static/dynamic) PSA framework. In this regard, we can provide general conclusions
and recommendations for those practitioners who tackle the issue of passive systems
reliability assessment:
If the estimation of the passive system functional failure probability is of interest, we suggest combining metamodels with efficient MCS techniques, for example, by constructing and adaptively refining the metamodel by means of samples generated by the advanced MCS method in proximity of the system functional failure region [78–86]. An example is represented by the Adaptive Metamodel-based Subset Importance Sampling (AM-SIS) method, recently proposed by some of the authors, which intelligently combines Subset Simulation (SS), Importance Sampling (IS) and iteratively trained Artificial Neural Networks (ANNs) [78,79].
If thorough uncertainty propagations (e.g., the determination of the PDFs, CDFs,
percentiles of the code outputs) and SA are of interest to the analyst, a combination of
Finite Mixture Models (FMMs) and ensembles of global SA measures are suggested,
as proposed by some of the authors in [94,98].
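As a toy illustration of the first recommendation (not the actual AM-SIS algorithm), a cheap quadratic surrogate can pre-screen the Monte Carlo population so that the ‘expensive’ limit-state function, here replaced by an analytical stand-in, is evaluated only where failure is plausible:

```python
import numpy as np

rng = np.random.default_rng(7)
g = lambda x: 4.0 - x[:, 0] - 0.5 * x[:, 1]**2   # stand-in limit state: fail if g < 0

# small design of experiments to train a quadratic surrogate by least squares
X = rng.normal(size=(40, 2))
feats = lambda x: np.column_stack(
    [np.ones(len(x)), x, x**2, x[:, :1] * x[:, 1:]])
coef, *_ = np.linalg.lstsq(feats(X), g(X), rcond=None)
g_hat = lambda x: feats(x) @ coef

# large MC population, screened by the surrogate: only 'suspicious' samples
# (surrogate margin below a threshold) trigger a call to the true model
x_mc = rng.normal(size=(100_000, 2))
suspicious = g_hat(x_mc) < 0.5
p_fail = np.sum(g(x_mc[suspicious]) < 0.0) / len(x_mc)
print(p_fail, suspicious.mean())   # few 'expensive' calls out of 100,000 samples
```

In a real analysis the surrogate would be refined adaptively with each new expensive-model evaluation, and the screening threshold would account for the surrogate prediction error [78,79].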
Finally, it is worth mentioning that, to foster these methods’ acceptance in the nuclear
research community and to consequently promote the public acceptance of future reactor
designs involving passive safety systems, other (open) issues should be addressed, such as:
The methods proposed rely on the assessment of the uncertainty (both aleatory and epistemic) in the quantitative description provided by models of the phenomena pertaining to the functions of the passive systems. This requires systematic, sound and rigorous Inverse Uncertainty Quantification (IUQ) approaches to find a characterization of the input parameter uncertainty that is consistent with the experimental data, while limiting the associated computational burden. Approaches have been already proposed in the open literature, but not yet in the field of passive system reliability assessment [131–136].
If we resort to empirical metamodels for estimating passive system failure probabilities and carrying out uncertainty and SA, the following problems should be considered:
i. the regression error should be carefully quantified (and possibly controlled) throughout the process, in order to reduce its impact on the entire reliability assessment [81];
ii. the higher the input dimensionality (e.g., in the presence of time series data), the larger the training dataset should be to obtain metamodel accuracy. Rigorous (linear or nonlinear) approaches to reduce the input dimensionality (e.g., Principal Component Analysis, PCA, or Stacked Sparse Autoencoders) should be sought, with increased metamodel performance [137];
iii. the quality of metamodel training can be negatively affected by noisy data. Data filtering, carried out on the model code predictions, may impact the metamodel predictive performance [138].
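The dimensionality-reduction idea can be sketched with a PCA computed via an SVD of the centered data matrix; the dataset below is synthetic, standing in for, e.g., discretized input time series:

```python
import numpy as np

rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 3))      # 3 underlying degrees of freedom
mixing = rng.normal(size=(3, 50))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 50))   # 50 observed inputs

Xc = X - X.mean(axis=0)                 # center before PCA
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)
k = int(np.searchsorted(np.cumsum(explained), 0.99)) + 1   # 99% variance kept
Z = Xc @ Vt[:k].T                       # reduced inputs for metamodel training
print(k, Z.shape)                       # far fewer than 50 dimensions
```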
The introduction of passive safety systems in the framework of PSA deserves particular attention, in particular when accident scenarios are generated in a dynamic fashion. The reasons are the following:
i. it is difficult to define the state of passive systems along an accident sequence only in the classical binary terms of ‘success/failure’; rather, ‘intermediate’ modes of operation or degraded performance states should be considered, where the passive system might still be capable of providing a functional level sufficient for the mitigation of the accident progression;
ii. the amount of accident scenarios to be handled is considerably larger than that associated with the traditional static fault/event tree techniques. Thus, the “a posteriori” retrieval of information can be quite burdensome and difficult. In this view, artificial intelligence techniques could be embraced to address the problem [125–127];
iii. the thorough exploration of the dynamic state-space of the passive safety system is impracticable by standard (sampling) methods: advanced exploration schemes should be sought to intelligently drive the search towards ‘interesting’ scenarios (e.g., extreme unexpected events), while reducing the computational effort [139,140].
Author Contributions: All authors provided equal contributions to the technical work. In addition, F.D.M. and N.P. attended to the editing of the paper. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Pagani, L.P.; Apostolakis, G.E.; Hejzlar, P. The Impact of Uncertainties on the Performance of Passive Systems. Nucl. Technol. 2005, 149, 129–140. [CrossRef]
2. Marquès, M.; Pignatel, J.; Saignes, P.; D’Auria, F.; Burgazzi, L.; Müller, C.; Bolado-Lavin, R.; Kirchsteiger, C.; La Lumia, V.; Ivanov, I. Methodology for the reliability evaluation of a passive system and its integration into a Probabilistic Safety Assessment. Nucl. Eng. Des. 2005, 235, 2612–2631. [CrossRef]
3. Jafari, J.; D’Auria, F.; Kazeminejad, H.; Davilu, H. Reliability evaluation of a natural circulation system. Nucl. Eng. Des. 2003, 224, 79–104. [CrossRef]
4.
Thunnissen, D.P.; Au, S.-K.; Tsuyuki, G.T. Uncertainty Quantification in Estimating Critical Spacecraft Component Temperatures.
J. Thermophys. Heat Transf. 2007,21, 422–430. [CrossRef]
5. Fong, C.J.; Apostolakis, G.E.; Langewisch, D.R.; Hejzlar, P.; Todreas, N.E.; Driscoll, M.J. Reliability analysis of a passive cooling system using a response surface with an application to the flexible conversion ratio reactor. Nucl. Eng. Des. 2009, 239, 2660–2671. [CrossRef]
6. Bousbia Salah, A.; D’Auria, F. Insights into Natural Circulation—Phenomena, Models, and Issues; LAP Lambert Academic Publishing: Saarbrücken, Germany, 2010.
7. Aksan, N.; D’Auria, F. Relevant Thermal Hydraulic Aspects of Advanced Reactor Design-Status Report. NEA/CSNI/R. 1996. Available online: https://www.oecd-nea.org/jcms/pl_16144/relevant-thermal-hydraulic-aspects-of-advanced-reactors-design-status-report-1996?details=true (accessed on 3 May 2021).
8. Aksan, N.; Boado, R.; Burgazzi, L.; Choi, J.H.; Chung, Y.J.; D’Auria, F.S.; De La Rosa, F.C.; Gimenez, M.O.; Ishii, M.; Khartabil, H.; et al. Natural Circulation Phenomena and Modeling for Advanced Water Cooled Reactors. IAEA-TECDOC-1677 978-92-0-127410-6. 2012. Available online: https://www-pub.iaea.org/MTCD/Publications/PDF/TE-1677_web.pdf (accessed on 3 May 2021).
9. Aksan, N.; D’Auria, F.S.; Marques, M.; Saha, D.; Reyes, J.; Cleveland, J. Natural Circulation in Water-Cooled Nuclear Power Plants Phenomena, Models, and Methodology for System Reliability Assessments. IAEA-TECDOC-1474 92-0-110605-X. 2005. Available online: https://www-pub.iaea.org/MTCD/Publications/PDF/TE_1474_web.pdf (accessed on 3 May 2021).
10. Aksan, N.; Choi, J.H.; Chung, Y.J.; Cleveland, J.; D’Auria, F.S.; Fil, N.; Gimenez, M.O.; Ishii, M.; Khartabil, H.; Korotaev, K.; et al.
Passive Safety Systems and Natural Circulation in Water Cooled Nuclear Power Plants. IAEA-TECDOC-1624 978-92-0-111309-2.
2009. Available online: https://www-pub.iaea.org/MTCD/Publications/PDF/te_1624_web.pdf (accessed on 3 May 2021).
11. Ricotti, M.E.; Zio, E.; D’Auria, F.; Caruso, G. Reliability Methods for Passive Systems (RMPS) Study—Strategy and Results. Unclassified NEA CSNI/WGRISK 2002, 10, 149.
12. Lewis, M.J.; Pochard, R.; D’Auria, F.; Karwat, H.; Wolfert, K.; Yadigaroglu, G.; Holmstrom, H.L.O. Thermohydraulics of Emergency Core Cooling in Light Water Reactors—A State of the Art Report. CSNI Rep. 1989. Available online: https://www.oecd-nea.org/upload/docs/application/pdf/2020-01/csni89-161.pdf (accessed on 3 May 2021).
13.
Aksan, N.; D’auria, F.; Glaeser, H.; Pochard, R.; Richards, C.; Sjoberg, A. A Separate Effects Test Matrix for Thermal-Hydraulic Code
Validation: Phenomena Characterization and Selection of Facilities and Tests; OECD/GD OECD Guidance Document; OECD: Paris,
France, 1993.
14.
Aksan, N.; D’Auria, F.; Glaeser, H.; Lillington, J.; Pochard, R.; Sjoberg, A. Evaluation of the Separate Effects Tests (SET) Validation
Matrix; OECD/GD OECD Guidance Document; OECD: Paris, France, 1996.
15.
U.S. Nuclear Regulatory Commission. Others Compendium of ECCS Research for Realistic LOCA Analysis; NUREG-1230; Division of
Systems Research, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission: Washington, DC, USA, 1988.
16. Burgazzi, L. Addressing the uncertainties related to passive system reliability. Prog. Nucl. Energy 2007,49, 93–102. [CrossRef]
17. Burgazzi, L. State of the art in reliability of thermal-hydraulic passive systems. Reliab. Eng. Syst. Saf. 2007, 92, 671–675. [CrossRef]
18. Burgazzi, L. Thermal–hydraulic passive system reliability-based design approach. Reliab. Eng. Syst. Saf. 2007, 92, 1250–1257. [CrossRef]
19. Burgazzi, L. Evaluation of uncertainties related to passive systems performance. Nucl. Eng. Des. 2004,230, 93–106. [CrossRef]
20. D’Auria, F.; Giannotti, W. Development of a Code with the Capability of Internal Assessment of Uncertainty. Nucl. Technol. 2000, 131, 159–196. [CrossRef]
21. Ahn, K.-I.; Kim, H.-D. A Formal Procedure for Probabilistic Quantification of Modeling Uncertainties Employed in Phenomenological Transient Models. Nucl. Technol. 2000, 130, 132–144. [CrossRef]
22.
Zio, E.; Pedroni, N. Building confidence in the reliability assessment of thermal-hydraulic passive systems. Reliab. Eng. Syst. Saf.
2009,94, 268–281. [CrossRef]
23. Apostolakis, G. The concept of probability in safety assessments of technological systems. Science 1990, 250, 1359–1364. [CrossRef]
24. Helton, J.; Oberkampf, W. Alternative representations of epistemic uncertainty. Reliab. Eng. Syst. Saf. 2004,85, 1–10. [CrossRef]
25.
Zio, E. A study of the bootstrap method for estimating the accuracy of artificial neural networks in predicting nuclear transient
processes. IEEE Trans. Nucl. Sci. 2006,53, 1460–1478. [CrossRef]
26.
Helton, J.; Johnson, J.; Sallaberry, C.; Storlie, C. Survey of sampling-based methods for uncertainty and sensitivity analysis. Reliab.
Eng. Syst. Saf. 2006,91, 1175–1209. [CrossRef]
27.
Zio, E.; Pedroni, N. Estimation of the functional failure probability of a thermal–hydraulic passive system by Subset Simulation.
Nucl. Eng. Des. 2009,239, 580–599. [CrossRef]
28.
Pourgol-Mohamad, M.; Mosleh, A.; Modarres, M. Methodology for the use of experimental data to enhance model output
uncertainty assessment in thermal hydraulics codes. Reliab. Eng. Syst. Saf. 2010,95, 77–86. [CrossRef]
29.
Burgazzi, L. Failure Mode and Effect Analysis Application for the Safety and Reliability Analysis of a Thermal-Hydraulic Passive
System. Nucl. Technol. 2006,156, 150–158. [CrossRef]
30. Burgazzi, L. Passive System Reliability Analysis: A Study on the Isolation Condenser. Nucl. Technol. 2002,139, 3–9. [CrossRef]
31. Burgazzi, L. Reliability Evaluation of Passive Systems through Functional Reliability Assessment. Nucl. Technol. 2003, 144, 145–151. [CrossRef]
32.
Bassi, C.; Marques, M. Reliability Assessment of 2400 MWth Gas-Cooled Fast Reactor Natural Circulation Decay Heat Removal
in Pressurized Situations. Sci. Technol. Nucl. Install. 2008,2008, 287376. [CrossRef]
33.
Mackay, F.J.; Apostolakis, G.E.; Hejzlar, P. Incorporating reliability analysis into the design of passive cooling systems with
an application to a gas-cooled reactor. Nucl. Eng. Des. 2008,238, 217–228. [CrossRef]
34.
Mathews, T.S.; Ramakrishnan, M.; Parthasarathy, U.; Arul, A.J.; Kumar, C.S. Functional reliability analysis of Safety Grade Decay
Heat Removal System of Indian 500MWe PFBR. Nucl. Eng. Des. 2008,238, 2369–2376. [CrossRef]
35.
Mathews, T.S.; Arul, A.J.; Parthasarathy, U.; Kumar, C.S.; Subbaiah, K.V.; Mohanakrishnan, P. Passive system reliability analysis
using Response Conditioning Method with an application to failure frequency estimation of Decay Heat Removal of PFBR. Nucl.
Eng. Des. 2011,241, 2257–2270. [CrossRef]
36.
Arul, A.J.; Iyer, N.K.; Velusamy, K. Adjoint operator approach to functional reliability analysis of passive fluid dynamical systems.
Reliab. Eng. Syst. Saf. 2009,94, 1917–1926. [CrossRef]
37.
Arul, A.J.; Iyer, N.K.; Velusamy, K. Efficient reliability estimate of passive thermal hydraulic safety system with automatic
differentiation. Nucl. Eng. Des. 2010,240, 2768–2778. [CrossRef]
38.
Mezio, F.; Grinberg, M.; Lorenzo, G.; Giménez, M. Integration of the functional reliability of two passive safety systems to mitigate
a SBLOCA+BO in a CAREM-like reactor PSA. Nucl. Eng. Des. 2014,270, 109–118. [CrossRef]
39.
Schuëller, G.; Pradlwarter, H. Benchmark study on reliability estimation in higher dimensions of structural systems—An overview.
Struct. Saf. 2007,29, 167–182. [CrossRef]
40.
Schuëller, G. Efficient Monte Carlo simulation procedures in structural uncertainty and reliability analysis—Recent advances.
Struct. Eng. Mech. 2009,32, 1–20. [CrossRef]
41.
Zio, E.; Pedroni, N. How to effectively compute the reliability of a thermal–hydraulic nuclear passive system. Nucl. Eng. Des.
2011,241, 310–327. [CrossRef]
42.
Zio, E.; Pedroni, N. Monte Carlo simulation-based sensitivity analysis of the model of a thermal–hydraulic passive system. Reliab.
Eng. Syst. Saf. 2012,107, 90–106. [CrossRef]
43. Kersaudy, P.; Sudret, B.; Varsier, N.; Picon, O.; Wiart, J. A new surrogate modeling technique combining Kriging and polynomial chaos expansions—Application to uncertainty analysis in computational dosimetry. J. Comput. Phys. 2015, 286, 103–117. [CrossRef]
44. Kartal, M.E.; Ba¸sa˘ga, H.B.; Bayraktar, A. Probabilistic nonlinear analysis of CFR dams by MCS using Response Surface Method.
Appl. Math. Model. 2011,35, 2752–2770. [CrossRef]
45. Pedroni, N.; Zio, E.; Apostolakis, G. Comparison of bootstrapped artificial neural networks and quadratic response surfaces for the estimation of the functional failure probability of a thermal–hydraulic passive system. Reliab. Eng. Syst. Saf. 2010, 95, 386–395. [CrossRef]
46.
Zio, E.; Pedroni, N. An optimized Line Sampling method for the estimation of the failure probability of nuclear passive systems.
Reliab. Eng. Syst. Saf. 2010,95, 1300–1313. [CrossRef]
47.
Babuška, I.; Nobile, F.; Tempone, R. A stochastic collocation method for elliptic partial differential equations with random input
data. SIAM Rev. 2010,52, 317–355. [CrossRef]
48.
Hurtado, J.E. Filtered importance sampling with support vector margin: A powerful method for structural reliability analysis.
Struct. Saf. 2007,29, 2–15. [CrossRef]
49.
Bect, J.; Ginsbourger, D.; Li, L.; Picheny, V.; Vazquez, E. Sequential design of computer experiments for the estimation of
a probability of failure. Stat. Comput. 2012,22, 773–793. [CrossRef]
50. Rubino, G.; Tuffin, B. Rare Event Simulation Using Monte Carlo Methods; John Wiley & Sons: Hoboken, NJ, USA, 2009; ISBN 9780470772690.
51.
Zio, E.; Apostolakis, G.; Pedroni, N. Quantitative functional failure analysis of a thermal–hydraulic passive system by means of
bootstrapped Artificial Neural Networks. Ann. Nucl. Energy 2010,37, 639–649. [CrossRef]
52.
Helton, J.; Davis, F. Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems. Reliab. Eng.
Syst. Saf. 2003,81, 23–69. [CrossRef]
53.
Cacuci, D.G.; Ionescu-Bujor, M. A Comparative Review of Sensitivity and Uncertainty Analysis of Large-Scale Systems—II:
Statistical Methods. Nucl. Sci. Eng. 2004,147, 204–217. [CrossRef]
54. Morris, M.D. Three Technometrics Experimental Design Classics. Technometrics 2000, 42, 26–27. [CrossRef]
55.
Helton, J.; Davis, F.; Johnson, J. A comparison of uncertainty and sensitivity analysis results obtained with random and Latin
hypercube sampling. Reliab. Eng. Syst. Saf. 2005,89, 305–330. [CrossRef]
56. Sallaberry, C.; Helton, J.; Hora, S. Extension of Latin hypercube samples with correlated variables. Reliab. Eng. Syst. Saf. 2008, 93, 1047–1059. [CrossRef]
57.
Helton, J. Uncertainty and sensitivity analysis in the presence of stochastic and subjective uncertainty. J. Stat. Comput. Simul.
1997,57, 3–76. [CrossRef]
58.
Storlie, C.B.; Swiler, L.P.; Helton, J.C.; Sallaberry, C.J. Implementation and evaluation of nonparametric regression procedures for
sensitivity analysis of computationally demanding models. Reliab. Eng. Syst. Saf. 2009,94, 1735–1763. [CrossRef]
59. Olsson, A.; Sandberg, G.; Dahlblom, O. On Latin hypercube sampling for structural reliability analysis. Struct. Saf. 2003, 25, 47–68. [CrossRef]
60. Pebesma, E.J.; Heuvelink, G.B. Latin Hypercube Sampling of Gaussian Random Fields. Technometrics 1999, 41, 303–312. [CrossRef]
61.
Au, S.-K.; Beck, J.L. Estimation of small failure probabilities in high dimensions by subset simulation. Probabilistic Eng. Mech.
2001,16, 263–277. [CrossRef]
62. Au, S.-K.; Beck, J.L. Subset Simulation and its Application to Seismic Risk Based on Dynamic Analysis. J. Eng. Mech. 2003, 129, 901–917. [CrossRef]
63. Au, S.-K.; Wang, Y. Engineering Risk Assessment with Subset Simulation; John Wiley & Sons: Hoboken, NJ, USA, 2014.
64.
Koutsourelakis, P.-S.; Pradlwarter, H.; Schuëller, G. Reliability of structures in high dimensions, part I: Algorithms and applications.
Probabilistic Eng. Mech. 2004,19, 409–417. [CrossRef]
65.
Pradlwarter, H.; Pellissetti, M.; Schenk, C.; Schuëller, G.; Kreis, A.; Fransen, S.; Calvi, A.; Klein, M. Realistic and efficient reliability
estimation for aerospace structures. Comput. Methods Appl. Mech. Eng. 2005,194, 1597–1617. [CrossRef]
66.
Zio, E.; Pedroni, N. Functional failure analysis of a thermal–hydraulic passive system by means of Line Sampling. Reliab. Eng.
Syst. Saf. 2009,94, 1764–1781. [CrossRef]
67.
Metropolis, N.; Rosenbluth, A.W.; Rosenbluth, M.N.; Teller, A.H.; Teller, E. Equation of State Calculations by Fast Computing
Machines. J. Chem. Phys. 1953,21, 1087–1092. [CrossRef]
68. Schuëller, G.; Pradlwarter, H.; Koutsourelakis, P.-S. A critical appraisal of reliability estimation procedures for high dimensions.
Probabilistic Eng. Mech. 2004,19, 463–474. [CrossRef]
69. Lu, Z.; Song, S.; Yue, Z.; Wang, J. Reliability sensitivity method by line sampling. Struct. Saf. 2008,30, 517–532. [CrossRef]
70.
Valdebenito, M.; Pradlwarter, H.; Schuëller, G. The role of the design point for calculating failure probabilities in view of
dimensionality and structural nonlinearities. Struct. Saf. 2010,32, 101–111. [CrossRef]
71.
Wang, B.; Wang, N.; Jiang, J.; Zhang, J. Efficient estimation of the functional reliability of a passive system by means of an
improved Line Sampling method. Ann. Nucl. Energy 2013,55, 9–17. [CrossRef]
72. De Boer, P.-T.; Kroese, D.P.; Mannor, S.; Rubinstein, R.Y. A Tutorial on the Cross-Entropy Method. Ann. Oper. Res. 2005, 134, 19–67. [CrossRef]
73.
Hutton, D.M. The Cross-Entropy Method: A Unified Approach to Combinatorial Optimisation, Monte-Carlo Simulation and
Machine Learning. Kybernetes 2005,34, 903.
74.
Botev, Z.I.; Kroese, D. An Efficient Algorithm for Rare-event Probability Estimation, Combinatorial Optimization, and Counting.
Methodol. Comput. Appl. Probab. 2008,10, 471–505. [CrossRef]
75. Au, S.-K.; Beck, J. A new adaptive importance sampling scheme for reliability calculations. Struct. Saf. 1999, 21, 135–158. [CrossRef]
76. Morio, J. Extreme quantile estimation with nonparametric adaptive importance sampling. Simul. Model. Pract. Theory 2012, 27, 76–89. [CrossRef]
77.
Botev, Z.I.; L’Ecuyer, P.; Tuffin, B. Markov chain importance sampling with applications to rare event probability estimation. Stat.
Comput. 2011,23, 271–285. [CrossRef]
78. Asmussen, S.; Glynn, P.W. Stochastic Simulation: Algorithms and Analysis; Springer: Berlin/Heidelberg, Germany, 2007.
79. Pedroni, N.; Zio, E. An Adaptive Metamodel-Based Subset Importance Sampling approach for the assessment of the functional
failure probability of a thermal-hydraulic passive system. Appl. Math. Model. 2017,48, 269–288. [CrossRef]
80.
Echard, B.; Gayton, N.; Lemaire, M. AK-MCS: An active learning reliability method combining Kriging and Monte Carlo
Simulation. Struct. Saf. 2011,33, 145–154. [CrossRef]
81.
Echard, B.; Gayton, N.; Lemaire, M.; Relun, N. A combined Importance Sampling and Kriging reliability method for small failure
probabilities with time-demanding numerical models. Reliab. Eng. Syst. Saf. 2013,111, 232–240. [CrossRef]
82.
Bourinet, J.-M.; Deheeger, F.; Lemaire, M. Assessing small failure probabilities by combined subset simulation and Support Vector
Machines. Struct. Saf. 2011,33, 343–353. [CrossRef]
83.
Dubourg, V.; Sudret, B.; Deheeger, F. Metamodel-based importance sampling for structural reliability analysis. Probabilistic Eng.
Mech. 2013,33, 47–57. [CrossRef]
84.
Fauriat, W.; Gayton, N. AK-SYS: An adaptation of the AK-MCS method for system reliability. Reliab. Eng. Syst. Saf.
2014
,123,
137–144. [CrossRef]
Energies 2021,14, 4688 16 of 17
85. Cadini, F.; Santos, F.; Zio, E. Passive systems failure probability estimation by the meta-AK-IS² algorithm. Nucl. Eng. Des. 2014, 277, 203–211. [CrossRef]
86. Cadini, F.; Gioletta, A.; Zio, E. Improved metamodel-based importance sampling for the performance assessment of radioactive waste repositories. Reliab. Eng. Syst. Saf. 2015, 134, 188–197. [CrossRef]
87. Liel, A.B.; Haselton, C.B.; Deierlein, G.G.; Baker, J. Incorporating modeling uncertainties in the assessment of seismic collapse risk of buildings. Struct. Saf. 2009, 31, 197–211. [CrossRef]
88. Gavin, H.P.; Yau, S.C. High-order limit state functions in the response surface method for structural reliability analysis. Struct. Saf. 2008, 30, 162–179. [CrossRef]
89. Lorenzo, G.; Zanocco, P.; Giménez, M.; Marques, M.; Iooss, B.; Lavín, R.B.; Pierro, F.; Galassi, G.; D’Auria, F.; Burgazzi, L. Assessment of an Isolation Condenser of an Integral Reactor in View of Uncertainties in Engineering Parameters. Sci. Technol. Nucl. Install. 2010, 2011, 827354. [CrossRef]
90. Bucknor, M.; Grabaskas, D.; Brunett, A.J.; Grelle, A. Advanced Reactor Passive System Reliability Demonstration Analysis for an External Event. Nucl. Eng. Technol. 2017, 49, 360–372. [CrossRef]
91. Chandrakar, A.; Nayak, A.K.; Gopika, V. Development of the APSRA+ Methodology for Passive System Reliability Analysis and Its Application to the Passive Isolation Condenser System of an Advanced Reactor. Nucl. Technol. 2016, 194, 39–60. [CrossRef]
92. Nayak, A.; Jain, V.; Gartia, M.; Prasad, H.; Anthony, A.; Bhatia, S.; Sinha, R. Reliability assessment of passive isolation condenser system of AHWR using APSRA methodology. Reliab. Eng. Syst. Saf. 2009, 94, 1064–1075. [CrossRef]
93. Zio, E.; Di Maio, F.; Tong, J. Safety margins confidence estimation for a passive residual heat removal system. Reliab. Eng. Syst. Saf. 2010, 95, 828–836. [CrossRef]
94. Di Maio, F.; Nicola, G.; Zio, E.; Yu, Y. Ensemble-based sensitivity analysis of a Best Estimate Thermal Hydraulics model: Application to a Passive Containment Cooling System of an AP1000 Nuclear Power Plant. Ann. Nucl. Energy 2014, 73, 200–210. [CrossRef]
95. Zuber, N.; Wilson, G.E.; et al. Quantifying reactor safety margins Part 5: Evaluation of scale-up capabilities of best estimate codes. Nucl. Eng. Des. 1990, 119, 97–107. [CrossRef]
96. Guba, A.; Makai, M.; Pál, L. Statistical aspects of best estimate method—I. Reliab. Eng. Syst. Saf. 2003, 80, 217–232. [CrossRef]
97. Secchi, P.; Zio, E.; Di Maio, F. Quantifying uncertainties in the estimation of safety parameters by using bootstrapped artificial neural networks. Ann. Nucl. Energy 2008, 35, 2338–2350. [CrossRef]
98. Di Maio, F.; Nicola, G.; Zio, E.; Yu, Y. Finite mixture models for sensitivity analysis of thermal hydraulic codes for passive safety systems analysis. Nucl. Eng. Des. 2015, 289, 144–154. [CrossRef]
99. McLachlan, G.; Peel, D. Finite Mixture Models; John Wiley & Sons: Hoboken, NJ, USA, 2000.
100. Law, M.H.C.; Figueiredo, M.; Jain, A.K. Simultaneous feature selection and clustering using mixture models. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 1154–1166. [CrossRef]
101. Diaconis, P.; Zabell, S.L. Updating Subjective Probability. J. Am. Stat. Assoc. 1982, 77, 822–830. [CrossRef]
102. Gibbs, A.L.; Su, F.E. On Choosing and Bounding Probability Metrics. Int. Stat. Rev. 2002, 70, 419–435. [CrossRef]
103. Cummins, W.E.; Vijuk, R.P.; Schulz, T.L. Westinghouse AP1000 advanced passive plant. In Proceedings of the American Nuclear Society—International Congress on Advances in Nuclear Power Plants 2005 (ICAPP05), Seoul, Korea, 15–19 May 2005.
104. Di Maio, F.; Nicola, G.; Borgonovo, E.; Zio, E. Invariant methods for an ensemble-based sensitivity analysis of a passive containment cooling system of an AP1000 nuclear power plant. Reliab. Eng. Syst. Saf. 2016, 151, 12–19. [CrossRef]
105. Di Maio, F.; Bandini, A.; Zio, E.; Alberola, S.C.; Sanchez-Saez, F.; Martorell, S. Bootstrapped-ensemble-based Sensitivity Analysis of a TRACE thermal-hydraulic model based on a limited number of PWR large break LOCA simulations. Reliab. Eng. Syst. Saf. 2016, 153, 122–134. [CrossRef]
106. Perez, M.; Reventos, F.; Batet, L.; Guba, A.; Tóth, I.; Mieusset, T.; Bazin, P.; de Crécy, A.; Borisov, S.; Skorek, T.; et al. Uncertainty and sensitivity analysis of a LBLOCA in a PWR Nuclear Power Plant: Results of the Phase V of the BEMUSE programme. Nucl. Eng. Des. 2011, 241, 4206–4222. [CrossRef]
107. Wilks, S.S. Statistical Prediction with Special Reference to the Problem of Tolerance Limits. Ann. Math. Stat. 1942, 13, 400–409. [CrossRef]
108. Makai, M.; Pál, L. Best estimate method and safety analysis II. Reliab. Eng. Syst. Saf. 2006, 91, 222–232. [CrossRef]
109. Nutt, W.T.; Wallis, G.B. Evaluation of nuclear safety from the outputs of computer codes in the presence of uncertainties. Reliab. Eng. Syst. Saf. 2004, 83, 57–77. [CrossRef]
110. Bucher, C.; Most, T. A comparison of approximate response functions in structural reliability analysis. Probabilistic Eng. Mech. 2008, 23, 154–163. [CrossRef]
111. Deng, J. Structural reliability analysis for implicit performance function using radial basis function network. Int. J. Solids Struct. 2006, 43, 3255–3291. [CrossRef]
112. Cardoso, J.B.; Almeida, J.; Dias, J.S.; Coelho, P. Structural reliability analysis using Monte Carlo simulation and neural networks. Adv. Eng. Softw. 2008, 39, 505–513. [CrossRef]
113. Volkova, E.; Iooss, B.; Van Dorpe, F. Global sensitivity analysis for a numerical model of radionuclide migration from the RRC “Kurchatov Institute” radwaste disposal site. Stoch. Environ. Res. Risk Assess. 2006, 22, 17–31. [CrossRef]
114. Marrel, A.; Iooss, B.; Laurent, B.; Roustant, O. Calculations of Sobol indices for the Gaussian process metamodel. Reliab. Eng. Syst. Saf. 2009, 94, 742–751. [CrossRef]
115. International Atomic Energy Agency. Safety Related Terms for Advanced Nuclear Plants; International Atomic Energy Agency (IAEA): Vienna, Austria, 1995.
116. Zio, E.; Baraldi, P.; Cadini, F. Basics of Reliability and Risk Analysis: Worked Out Problems and Solutions; World Scientific: Singapore, 2011; ISBN 9814355038.
117. Marseguerra, M.; Zio, E. Monte Carlo approach to PSA for dynamic process systems. Reliab. Eng. Syst. Saf. 1996, 52, 227–241. [CrossRef]
118. Di Maio, F.; Vagnoli, M.; Zio, E. Risk-Based Clustering for Near Misses Identification in Integrated Deterministic and Probabilistic Safety Analysis. Sci. Technol. Nucl. Install. 2015, 2015, 693891. [CrossRef]
119. Hakobyan, A.; Denning, R.; Aldemir, T.; Dunagan, S.; Kunsman, D. A Methodology for Generating Dynamic Accident Progression Event Trees for Level-2 PRA. In Proceedings of the PHYSOR-2006—American Nuclear Society’s Topical Meeting on Reactor Physics, Vancouver, BC, Canada, 10–14 September 2006.
120. Coyne, K.; Mosleh, A. Nuclear plant control room operator modeling within the ADS-IDAC, Version 2, Dynamic PRA Environment: Part 1—General description and cognitive foundations. Int. J. Perform. Eng. 2014, 10, 691–703.
121. Devooght, J.; Smidts, C. Probabilistic Reactor Dynamics—I: The Theory of Continuous Event Trees. Nucl. Sci. Eng. 1992, 111, 229–240. [CrossRef]
122. Kopustinskas, V.; Augutis, J.; Rimkevičius, S. Dynamic reliability and risk assessment of the accident localization system of the Ignalina NPP RBMK-1500 reactor. Reliab. Eng. Syst. Saf. 2005, 87, 77–87. [CrossRef]
123. Kloos, M.; Peschke, J. MCDET: A Probabilistic Dynamics Method Combining Monte Carlo Simulation with the Discrete Dynamic Event Tree Approach. Nucl. Sci. Eng. 2006, 153, 137–156. [CrossRef]
124. Kumar, R.; Bechta, S.; Kudinov, P.; Curnier, F.; Marquès, M.; Bertrand, F. A PSA Level-1 method with repairable components: An application to ASTRID Decay Heat Removal systems. In Proceedings of the Safety and Reliability: Methodology and Applications—European Safety and Reliability Conference, ESREL 2014, Wroclaw, Poland, 14–18 September 2014; CRC Press/Balkema: Leiden, The Netherlands, 2015; pp. 1611–1617.
125. Di Maio, F.; Rossetti, R.; Zio, E. Postprocessing of Accidental Scenarios by Semi-Supervised Self-Organizing Maps. Sci. Technol. Nucl. Install. 2017, 2017, 2709109. [CrossRef]
126. Di Maio, F.; Baronchelli, S.; Vagnoli, M.; Zio, E. Determination of prime implicants by differential evolution for the dynamic reliability analysis of non-coherent nuclear systems. Ann. Nucl. Energy 2017, 102, 91–105. [CrossRef]
127. Di Maio, F.; Picoco, C.; Zio, E.; Rychkov, V. Safety margin sensitivity analysis for model selection in nuclear power plant probabilistic safety assessment. Reliab. Eng. Syst. Saf. 2017, 162, 122–138. [CrossRef]
128. Zio, E. The future of risk assessment. Reliab. Eng. Syst. Saf. 2018, 177, 176–190. [CrossRef]
129. Di Maio, F.; Zio, E. Dynamic accident scenario generation, modeling and post-processing for the integrated deterministic and probabilistic safety analysis of nuclear power plants. In Advanced Concepts in Nuclear Energy Risk Assessment and Management; World Scientific: Singapore, 2018; Volume 1, pp. 477–504; ISBN 9813225629.
130. Burgazzi, L. About time-variant reliability analysis with reference to passive systems assessment. Reliab. Eng. Syst. Saf. 2008, 93, 1682–1688. [CrossRef]
131. Shrestha, R.; Kozlowski, T. Inverse uncertainty quantification of input model parameters for thermal-hydraulics simulations using expectation–maximization under Bayesian framework. J. Appl. Stat. 2015, 43, 1011–1026. [CrossRef]
132. Hu, G.; Kozlowski, T. Inverse uncertainty quantification of TRACE physical model parameters using BFBT benchmark data. Ann. Nucl. Energy 2016, 96, 197–203. [CrossRef]
133. Faes, M.; Broggi, M.; Patelli, E.; Govers, Y.; Mottershead, J.; Beer, M.; Moens, D. A multivariate interval approach for inverse uncertainty quantification with limited experimental data. Mech. Syst. Signal Process. 2019, 118, 534–548. [CrossRef]
134. Kennedy, M.C.; O’Hagan, A. Bayesian calibration of computer models. J. R. Stat. Soc. Ser. B 2001, 63, 425–464. [CrossRef]
135. Wu, X.; Kozlowski, T.; Meidani, H. Kriging-based inverse uncertainty quantification of nuclear fuel performance code BISON fission gas release model using time series measurement data. Reliab. Eng. Syst. Saf. 2018, 169, 422–436. [CrossRef]
136. Wu, X.; Kozlowski, T.; Meidani, H.; Shirvan, K. Inverse uncertainty quantification using the modular Bayesian approach based on Gaussian process, Part 1: Theory. Nucl. Eng. Des. 2018, 335, 339–355. [CrossRef]
137. Baccou, J.; Zhang, J.; Fillion, P.; Damblin, G.; Petruzzi, A.; Mendizábal, R.; Reventos, F.; Skorek, T.; Couplet, M.; Iooss, B.; et al. SAPIUM: A Generic Framework for a Practical and Transparent Quantification of Thermal-Hydraulic Code Model Input Uncertainty. Nucl. Sci. Eng. 2020, 194, 721–736. [CrossRef]
138. Nagel, J.B.; Rieckermann, J.; Sudret, B. Principal component analysis and sparse polynomial chaos expansions for global sensitivity analysis and model calibration: Application to urban drainage simulation. Reliab. Eng. Syst. Saf. 2020, 195, 106737. [CrossRef]
139. Roma, G.; Di Maio, F.; Bersano, A.; Pedroni, N.; Bertani, C.; Mascari, F.; Zio, E. A Bayesian framework of inverse uncertainty quantification with principal component analysis and Kriging for the reliability analysis of passive safety systems. Nucl. Eng. Des. 2021, 379, 111230. [CrossRef]
140. Turati, P.; Pedroni, N.; Zio, E. An Adaptive Simulation Framework for the Exploration of Extreme and Unexpected Events in Dynamic Engineered Systems. Risk Anal. 2016, 37, 147–159. [CrossRef] [PubMed]
... High-fidelity simulations solve the mathematical and physical models underlying the processes developing during accidental scenarios and can aid extracting knowledge on the most important parameters of the SSCs [56]. They can also be used to perform Uncertainty Quantification (UQ) and Sensitivity Analysis (SA), to assess the effects of operating and environmental conditions, initiating events, and model and design parameters on the behavior of SCCs during accidental scenarios [7]. M&S are developed to mimic the SSC most important phenomena and their evolution during the accidental scenarios, as determined by a systematic analysis of the threats and hazards [57]. ...
... UQ and SA tasks require a large number of simulations. To speed up the computation, simulation codes are usually substituted with simple and faster surrogates [7]. The considered surrogate model is a Gaussian Processes (GP), which is developed by i) sampling 250 sets of input parameters using the Latin Hypercube Sampling (LHS) strategy [15], ii) performing the corresponding simulations and collecting the output of interest (i.e., the PCT and the maximum moderator temperature) and iii) training a GP to replicate the input/output relationships. ...
... Such tools require a large number of simulations. To speed up the computation, high-fidelity simulation codes are usually substituted with simple and faster surrogates [7]. The process requires i) identifying the range of variation of the design and modeling parameters, and the associated probability distributions; ii) sampling various sets of input parameters with an ad- ...
Article
Full-text available
Accidents may occur as a result of complex dynamic processes in interconnected socio-technical systems. Such accidents cannot be explained solely in terms of static chains of failures. Therefore, the traditional Probabilistic Risk Assessment (PRA) framework, which stands on the consideration that accidents are caused by direct failures or chains of events, is not apt to describe the dynamic behavior of the relevant Systems, Structures and Components (SSCs) and assess the risk. This work proposes a novel framework that embeds i) System-Theoretic Accident Model and Processes (STAMP) principles to guide a qualitative exploration of the SSC threats and hazards, ii) Modeling and Simulation (M&S) to investigate the SSC dynamic behavior during accidental scenarios, and iii) the Goal-Tree Success-Tree Master Logic Diagram (GTST-MLD) framework to assess risk quantitatively. The integration of STAMP, M&S and GTST-MLD allows a systematic analysis to provide risk insights, with due account to the SSC dependencies and interactions, and enables a dynamic assessment of the risk profile. The effectiveness of the proposed framework is shown by means of its application to the safety assessment of Nuclear Batteries (NBs), a unique class of nuclear micro-reactors which is gaining attention as a transportable, flexible, affordable, and distributed low-carbon power source.
... It consists of a total of nine steps, enabling a more rational evaluation of PSSs reliability. These methodologies have been used as the basis for numerous experiments and Thermal-Hydraulic (TH) analyses to evaluate PSSs reliability, and both methodological improvements and reliability evaluation guidelines continue to develop (Jafari et al., 2003;D'Auria et al., 2008;Petruzzi et al., 2010;D'Auria, 2018;Mascari et al., 2019;Choi et al., 2019;Roma et al., 2021;Di Mario et al., 2021). The purpose of these reliability evaluation methodologies is to derive reliability by considering the failure probability of the PSSs, which includes mechanical, electrical, and functional failure. ...
... In APSRA methodology, the failure surface is generated in consideration of the critical parameters affecting PSSs, and the deviation of the influence of each parameter is analyzed by diagnosing root causes. Based on these methodologies, many experiments and thermal--hydraulic model simulations have been conducted to evaluate the reliability of the PSSs, and until recently, methodological improvement and reliability evaluation guidelines have been continuously developed (Mascari et al., 2019;Choi et al., 2019;D'Auria et al., 2008;Petruzzi et al., 2010;D'Auria, 2018;Roma et al., 2021;Di Maio et al., 2021). While various reliability evaluation methodologies are being developed and applied, there are relatively few studies on performance evaluation methodologies. ...
... In this regard, there is a need to create new technologies (and improve existing ones) with the aim to increase the efficiency of electricity generation in nuclear power plants, as well as other (solar or wind) power plants [2]. At the same time, it is essential to ensure that strict requirements are in place regarding environmental safety at electric power facilities [3]. During the operation of nuclear power plants, special attention should be paid to the processing and storage of spent nuclear fuel [4]. ...
Article
Full-text available
Due to the growing demand for electrical energy generation worldwide [...]
... Casing is passive safety system that consists in a pipe that is assembled and inserted into a drilled section of borehole and is typically held in place with cement. It must withstand the load acting on it when the kick has entered in the well and the BOP has shut-in the well, to avoid an underground blowout and the loss of the wellbore integrity with severe consequences (Khakzad et al., 2013) (Blade Energy Partners, 2005: as commonly done in practice (see Di Maio et al., 2021), it is considered among the headers events of the ET, beside the active systems (namely, the MPD and the preventers of the BOP). ...
Article
Blowout is one of the most dreaded accidents for Oil and Gas companies. It is of particular concern during the drilling phase of deep-water oil & gas wells. This is due to the largely uncertain and extremely harsh environmental conditions that affect the design, drilling and operation activities of these wells. Seeking new technological solutions to prevent blowout has led, in the last few decades, to develop Managed Pressure Drilling (MPD) techniques. MPD offers many advantages compared to conventional drilling techniques, such as the capability (1) to detect the gas influx that initiates the kick that might lead to blowout and (2) to optimally control and circulate out this influx, to avoid the blowout. Nevertheless, effects of uncertainties on the MPD functionality are not fully understood and satisfactorily modelled within conventional safety assessment that relies on Event Trees (ETs). In this work, we propose a Dynamic Event Tree (DET) modelling framework of the scenario of kick escalation into blowout to allow accounting for the uncertainties that affect not only the kick variables, but also for the fundamental role played by the time and the delay of the kick detection task. The uncertainties affecting the kick variables are evaluated from kick events records taken from 2000 oil wells drilled in the Niger Delta, whereas the uncertainties affecting the time and delay of kick detection are evaluated by simulating the performance of the tool embedded into the MPD for kick detection (which applies the CUSUM statistical test to differential flow measurements), and assuming a possibilistic distribution for the confirmation time needed by the operator to take counteracting measures with respect to the evolving accidental scenario. A Hybrid Monte Carlo and Possibilistic method is utilized to represent and propagate uncertainties associated to the events occurring throughout the DET. 
Results are compared with those of a Purely Probabilistic method in support of the blowout probability quantification.
... Song et al. [7] presented an analytic approach that can be used in quantifying the uncertainty of the PSA model, and Di Maio et al. [8] mentioned various methods that can be used to analyze the reliability of the passive system in a review paper. ...
Article
Full-text available
Since the publication of the first comprehensive Probabilistic Safety Assessment (PSA) study—known as WASH-1400—in the US, PSA has developed into an effective and systematic method of identifying hazards, and evaluating and prioritizing the risks in nuclear facilities [...]
Article
Hazardous Natural events can cascade into Technological accidental scenarios (so called NaTech accidents). The occurrence of these accidents can degrade the performance of the preventive and mitigative safety barriers installed in the technological plants. Such performance degradation is typically assessed by expert judgement, without considering the effect of the magnitude of the natural hazard, nor its increasing frequency of occurrence in view of climate change. In this work, a novel sensitivity analysis framework is developed to identify the safety barriers whose performance degradation is most critical and thus needs careful modeling for realistic risk assessment. The framework is based on the calculation of a set of sensitivity measures, namely the Beta, the Conditional Value at Risk (CVaR) and the Value of Information (VoI), and their use to prioritize the safety barriers with respect to the need of: •accounting for performance degradation during an accidental scenario; •planning investments for further characterization of the safety barrier performance. An application is shown with respect to a case study of literature that consists of a chemical facility equipped with five safety barriers (of three different types, active, passive and procedural). NaTech scenarios can occur, triggered by floods and earthquakes. The results obtained with the Beta measure indicate that two-out-of-five barriers (one active and one passive) deserve accurate modelling of the performance degradation due to natural events. An additional outcome is that in the case study considered, both CVaR and VoI rank the passive barrier as the most effective in mitigating the scenarios escalation: therefore, this barrier is the one for which the decision maker could decide to invest resources for improving the characterization of its performance to obtain a more realistic assessment of the risk.
Article
For decades, efforts have been made to enhance the realism of risk assessment methods for nuclear power plants (NPPs) by augmenting the number of representative scenarios. The common issue though is that each scenario should be simulated individually by computationally demanding physical models such as thermal-hydraulic system codes. To address this problem, this research proposes an algorithm that identifies the success or failure of each scenario with a minimized number of simulations. The algorithm seeks to locate the limit surface/states that distinguishes success and failure regions and impels the physical models to simulate only the scenarios where the consequence is close to the given failure criterion. To this end, the algorithm is an iterative process that estimates the LS by a metamodel, samples the scenarios close to the estimated LS, simulates the sampled scenarios, and trains the metamodel with accumulated simulation results. A deep neural network (DNN) is employed as the metamodel for predicting the consequence of each scenario, and Monte Carlo dropout informs the predictive uncertainty. Referring to this uncertainty, the DNN metamodel meticulously ‘sails’ across the scenario space to accurately pinpoint the LS. We name the proposed algorithm the deep neural network-based searching algorithm of informative limit surfaces and states, or Deep-SAILS. Combining the simulation results of risk-sensitive scenarios (close to the LS) and the ability of the trained DNN metamodel to reasonably predict the non-simulated scenarios, Deep-SAILS identified the success or the failure of tens of thousands of dynamic scenarios of loss of coolant accidents at more than 99% accuracy with only thousands of simulations.
Article
Full-text available
This paper presents an efficient surrogate modeling strategy for the uncertainty quantification and Bayesian calibration of a hydrological model. In particular, a process-based dynamical urban drainage simulator that predicts the discharge from a catchment area during a precipitation event is considered. The goal is to perform a global sensitivity analysis and to identify the unknown model parameters as well as the measurement and prediction errors. These objectives can only be achieved by cheapening the incurred computational costs, that is, lowering the number of necessary model runs. With this in mind, a regularity-exploiting metamodeling technique is proposed that enables fast uncertainty quantification. Principal component analysis is used for output dimensionality reduction and sparse polynomial chaos expansions are used for the emulation of the reduced outputs. Sensitivity measures such as the Sobol indices are obtained directly from the expansion coefficients. Bayesian inference via Markov chain Monte Carlo posterior sampling is drastically accelerated.
Article
Full-text available
This paper introduces an improved version of a novel inverse approach for the indirect quantification of multivariate interval uncertainty for high dimensional models under scarce data availability. The method is compared to results obtained via the well-established probabilistic framework of Bayesian model updating via Transitional Markov Chain Monte Carlo in the context of the DLR-AIRMOD test structure. It is shown that the proposed improvements of the inverse method alleviate the curse of dimensionality of the method with a factor up to 10 5. Comparison with the Bayesian results revealed that the most appropriate method depends largely on the desired information and availability of data. In case large amounts of data are available, and/or the analyst desires full (joint)-probabilistic descriptors of the model parameter uncertainty, the Bayesian method is shown to be the most performing. On the other hand however, when such descriptors are not needed (e.g., for worst-case analysis), and only scarce data are available, the interval method is shown to deliver more objective and robust bounds on the uncertain parameters.
Article
Full-text available
In nuclear reactor system design and safety analysis, the Best Estimate plus Uncertainty (BEPU) methodology requires that computer model output uncertainties must be quantified in order to prove that the investigated design stays within acceptance criteria. "Expert opinion" and "user self-evaluation" have been widely used to specify computer model input uncertainties in previous uncertainty, sensitivity and validation studies. Inverse Uncertainty Quantification (UQ) is the process to inversely quantify input uncertainties based on experimental data in order to more precisely quantify such ad-hoc specifications of the input uncertainty information. In this paper, we used Bayesian analysis to establish the inverse UQ formulation, with systematic and rigorously derived metamodels constructed by Gaussian Process (GP). Due to incomplete or inaccurate underlying physics, as well as numerical approximation errors, computer models always have discrepancy/bias in representing the realities, which can cause over-fitting if neglected in the inverse UQ process. The model discrepancy term is accounted for in our formulation through the "model updating equation". We provided a detailed introduction and comparison of the full and modular Bayesian approaches for inverse UQ, as well as pointed out their limitations when extrapolated to the validation/prediction domain. Finally, we proposed an improved modular Bayesian approach that can avoid extrapolating the model discrepancy that is learnt from the inverse UQ domain to the validation/prediction domain.
Article
Full-text available
Integrated Deterministic and Probabilistic Safety Analysis (IDPSA) of dynamic systems calls for the development of efficient methods for accidental scenarios generation. The necessary consideration of failure events timing and sequencing along the scenarios requires the number of scenarios to be generated to increase with respect to conventional PSA. Consequently, their postprocessing for retrieving safety relevant information regarding the system behavior is challenged because of the large amount of generated scenarios that makes the computational cost for scenario postprocessing enormous and the retrieved information difficult to interpret. In the context of IDPSA, the interpretation consists in the classification of the generated scenarios as safe, failed, Near Misses (NMs), and Prime Implicants (PIs). To address this issue, in this paper we propose the use of an ensemble of Semi-Supervised Self-Organizing Maps (SSSOMs) whose outcomes are combined by a locally weighted aggregation according to two strategies: a locally weighted aggregation and a decision tree based aggregation. In the former, we resort to the Local Fusion (LF) principle for accounting the classification reliability of the different SSSOM classifiers, whereas in the latter we build a classification scheme to select the appropriate classifier (or ensemble of classifiers), for the type of scenario to be classified. The two strategies are applied for the postprocessing of the accidental scenarios of a dynamic U-Tube Steam Generator (UTSG).
Article
In this work, we propose an Inverse Uncertainty Quantification (IUQ) approach to assigning Probability Density Functions (PDFs) to uncertain input parameters of Thermal-Hydraulic (T-H) models used to assess the reliability of passive safety systems. The approach uses experimental data within a Bayesian framework. The application to a RELAP5-3D model of the PERSEO (In-Pool Energy Removal System for Emergency Operation) facility located at SIET laboratory (Piacenza, Italy) is demonstrated. Principal Component Analysis (PCA) is applied for output dimensionality reduction and Kriging meta-modeling is used to emulate the reduced set of RELAP5-3D code outputs. This is done to decrease the computational cost of the Markov Chain Monte Carlo (MCMC) posterior sampling of the uncertain input parameters, which requires a large number of model simulations.
Article
Uncertainty analysis is a key element in nuclear power plant deterministic safety analysis using best-estimate thermal-hydraulic codes and best-estimate-plus-uncertainty methodologies. If forward uncertainty propagation methods have now become mature for industrial applications, the input uncertainty quantification (IUQ) on the physical models still requires further investigations. The Organisation for Economic Co-operation and Development/Nuclear Energy Agency PREMIUM project attempted to benchmark the available IUQ methods, but observed a strong user effect due to the lack of best practices guidance. The SAPIUM project has been proposed toward the construction of a clear and shared systematic approach for IUQ. The main outcome of the project is a first “good-practices” document that can be exploited for safety study in order to reach consensus among experts on recommended practices as well as to identify remaining open issues for further developments. This paper describes the systematic approach that consists of five elements in a step-by-step approach to perform a meaningful model IUQ and validation as well as some good-practice guideline recommendations for each step.
Article
Risk assessment must evolve to address existing and future challenges, considering the new systems and innovations that have already arrived in our lives and those that are coming ahead. In this paper, I reflect on the rapid changes and innovations that the world we live in is experiencing, and analyze them with respect to the challenges they pose to the field of risk assessment. Digitalization brings opportunities, but with it comes the complexity of cyber-physical systems. Climate change and extreme natural events are increasingly threatening our infrastructures; terrorist and malevolent threats are posing severe concerns for the security of our systems and lives. These sources of hazard are extremely uncertain and, thus, difficult to describe and model quantitatively. Some emerging research and development directions are presented and discussed, also considering ever-increasing computational capabilities and data availability. These include the use of simulation for accident scenario identification and exploration, the extension of risk assessment into the framework of resilience and business continuity, the reliance on data for dynamic and condition monitoring-based risk assessment, and the safety and security assessment of cyber-physical systems. The paper is not a research work, nor exactly a review or state-of-the-art survey; rather, it offers a lookout on risk assessment, open to consideration and discussion, as it cannot pretend to give an absolute point of view nor to be complete in the issues addressed (and the related literature referenced).
Article
In nuclear reactor fuel performance simulation, fission gas release (FGR) and swelling involve the treatment of several complicated and interrelated physical processes, which inevitably depend on uncertain input parameters. However, the uncertainties associated with these input parameters are known only by “expert judgment”. In this paper, inverse Uncertainty Quantification (UQ) under the Bayesian framework is applied to the BISON code FGR model based on Risø-AN3 time series experimental data. Inverse UQ seeks statistical descriptions of the uncertain input parameters that are consistent with the available measurement data; it captures the uncertainties in its estimates rather than merely determining best-fit values. A Kriging metamodel is applied to greatly reduce the computational cost during Markov Chain Monte Carlo sampling. We performed a dimension reduction of the FGR time series data using Principal Component Analysis (PCA), and projected the original FGR time series measurement data onto the PC subspace as “transformed experiment data”. A forward uncertainty propagation based on the posterior distributions shows that the agreement between the BISON simulation and the Risø-AN3 time series measurement data is greatly improved. The posterior distributions of the uncertain input factors can be used to replace the expert specifications in future uncertainty/sensitivity analyses.
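The Bayesian inverse-UQ step this abstract relies on, sampling a posterior over an uncertain model parameter from noisy transient data, can be illustrated with a plain Metropolis-Hastings random walk. Here a one-parameter exponential release curve is a hypothetical stand-in for the BISON FGR model, and the "measurement data" are synthetic; none of the numbers correspond to the Risø-AN3 experiment:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the fuel-performance code: cumulative release
# fraction governed by a single uncertain rate multiplier theta.
def model(theta, t):
    return 1.0 - np.exp(-theta * t)

t = np.linspace(0.0, 4.0, 40)
theta_true = 0.8
sigma = 0.02
y_obs = model(theta_true, t) + rng.normal(0.0, sigma, t.size)  # synthetic data

def log_post(theta):
    """Gaussian log-likelihood plus a uniform prior on [0.1, 3.0]."""
    if not (0.1 <= theta <= 3.0):
        return -np.inf
    r = y_obs - model(theta, t)
    return -0.5 * (r @ r) / sigma**2

# --- Metropolis-Hastings random-walk sampling of the posterior ---
theta, lp = 1.5, log_post(1.5)     # deliberately poor starting value
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.05)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
        theta, lp = prop, lp_prop
    chain.append(theta)

post = np.array(chain[5000:])      # discard burn-in
```

The retained samples in `post` approximate the posterior PDF of the input parameter; in the paper's workflow, each `model` call would instead go through the Kriging metamodel to keep the chain affordable.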
Article
An Adaptive Metamodel-Based Subset Importance Sampling (AM-SIS) approach, previously developed by the authors, is here employed to assess the (small) functional failure probability of a thermal-hydraulic (T-H) nuclear passive safety system. The approach relies on an iterative Importance Sampling (IS) scheme that efficiently couples the powerful characteristics of Subset Simulation (SS) and fast-running Artificial Neural Networks (ANNs). In particular, SS and ANNs are intelligently employed to build and progressively refine a fully nonparametric estimator of the ideal, zero-variance Importance Sampling Density (ISD), in order to: (i) increase the robustness of the failure probability estimates and (ii) decrease the number of expensive T-H simulations needed (together with the related computational burden). The performance of the approach is thoroughly compared to that of other efficient Monte Carlo (MC) techniques on a case study involving an advanced nuclear reactor cooled by helium.
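The Subset Simulation ingredient of the AM-SIS approach, estimating a small failure probability as a product of larger conditional probabilities, can be sketched without the metamodeling machinery. In the toy example below, a linear limit-state function of two standard normal inputs stands in for the T-H model, and the conditional levels are populated with a simple Metropolis chain; thresholds, sample sizes, and the proposal step are illustrative choices only:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical limit-state stand-in for the T-H code: "functional failure"
# occurs when the response g(x) of two standardized inputs exceeds 5.
def g(x):
    return x[..., 0] + x[..., 1]

def subset_simulation(n=2000, p0=0.1, level_max=10):
    """Estimate P[g(X) > 5] for X ~ N(0, I_2) via Subset Simulation."""
    x = rng.standard_normal((n, 2))
    p = 1.0
    for _ in range(level_max):
        y = g(x)
        thresh = np.quantile(y, 1.0 - p0)      # intermediate threshold
        if thresh >= 5.0:                      # final failure level reached
            return p * np.mean(y > 5.0)
        p *= p0                                # accumulate conditional prob.
        seeds = x[y > thresh]                  # seeds in the conditional set
        # Regrow n samples conditional on g > thresh via Metropolis moves
        chains = []
        per = int(np.ceil(n / len(seeds)))
        for s in seeds:
            cur = s.copy()
            for _ in range(per):
                cand = cur + 0.8 * rng.standard_normal(2)
                # accept w.r.t. the standard-normal prior, then check level
                if np.log(rng.uniform()) < -0.5 * (cand @ cand - cur @ cur):
                    if g(cand) > thresh:
                        cur = cand
                chains.append(cur.copy())
        x = np.array(chains[:n])
    return p * np.mean(g(x) > 5.0)

pf = subset_simulation()   # exact value is about 2e-4 for this toy problem
```

In the AM-SIS scheme itself, the samples generated at each level would additionally train the ANN metamodel that refines the importance sampling density, so that ever fewer calls to the expensive T-H code are needed.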