UNCERTAINTY IN DIGITAL REACTOR DESIGN
S. Tolo, Institute for Risk and Uncertainty, Virtual Engineering Centre, University of Liverpool, U.K.
D. Litskevich, K.C. Lai, D. Faulke, D. Bowman, B. Merk and K. Vikhorev, Virtual Engineering
Centre, University of Liverpool U.K
E. Patelli, Institute for Risk and Uncertainty, University of Liverpool, U.K.
ABSTRACT
The Digital Reactor Design project aims at providing a complete and robust framework for the
implementation of a simulated environment covering all the aspects of nuclear reactor design and
operation. This would allow to enhance safety and structural integrity as well as to improve confidence in
the knowledge of the system through the postulation of scenarios and operating conditions that can be
modelled start-to-end from a graphical interface. In order to match industry requirements in terms of
model robustness and reliability, non-conservative approaches, better known as Best Estimate Plus
Uncertainty, able to take into account and quantify analysis uncertainty, are integrated in the framework
along with traditional conservative tools. This paper describes the computational framework for
uncertainty propagation, reliability and sensitivity analysis under development in the context of the
Digital Reactor Design project and hence tailored to the requirements and needs of the nuclear industry.
The analysis of a rod ejection accident for a pressurized water reactor, based on the OECD/NEA and US
NRC PWR MOX/UO2 core transient benchmark, is proposed in order to provide a general overview of
the current tool’s capabilities.
1. INTRODUCTION
Computer-aided modelling and virtual prototypes
play a key role in the nuclear industry, supporting
and driving the design of new and more advanced
components, structures and systems. As a
counterpart of these unquestionable advantages,
the use of computational approaches for the
simulation and prediction of the behaviour of
complex systems raises reasonable concerns on
the reliability of the adopted tools and the
accuracy of the response.
In order to guarantee the safe operation of nuclear
power plants (NPPs), provisions are assigned by
national regulatory bodies in terms of safety limits
and margins. The latter are usually interpreted as the difference, in physical units, between the
regulatory acceptance criteria (e.g. safety limits) and the corresponding results provided by the
calculation of parameters relevant to anticipated operational occurrences, design basis
accidents and, more generally, the changes or phenomena under consideration [1].
Historically, the margins to acceptance criteria, as
well as the compliance of licenced systems, have
been determined by a fully conservative approach.
This practice, considered potentially misleading due to the unknown level of conservatism and the
possible prediction of unrealistic system behaviours, was first replaced with a similarly
conservative procedure, known as Best Estimate
Bounding, based on the use of Best Estimate (BE)
codes in combination with conservative
assumptions regarding the availability of systems
as well as initial and boundary conditions [2].
In the wake of the increasing tendency towards
more realistic calculations and led by the aim of
minimizing unnecessary conservatism while
accounting for uncertainties associated to
simulation results, recent trends suggest the
substitution of traditional conservative approaches
with Best Estimate Plus Uncertainty (BEPU)
methods, as highlighted by several research efforts
and recommendations [3][4][5]. BEPU approaches
combine BE codes with realistic assumptions regarding boundary and initial conditions, but with
system availability based on conservative assumptions or, more rarely, derived from
probabilistic safety analysis.
The growing interest of the nuclear sector in more realistic approaches to system modelling
implies the need for efficient and robust
computational tools able to adequately identify,
propagate and quantify the uncertainties associated
with each step of the reactor design process.
This paper aims to give an overview of the
computational framework for uncertainty
quantification, reliability and sensitivity analysis
associated with the simulation environment under
development in the context of the Digital Reactor
Design (DRD) project. The following sections
provide a general introduction of the aims and
objectives of the DRD project (Section 2) and the
description of the computational tool integrated in
the implemented environment for uncertainty,
reliability and sensitivity analysis (Section 3).
Finally, the application of the tool to a case-study
modelling the occurrence of a rod ejection
accident (REA) in a pressurized water reactor
(PWR) is discussed in Section 4.
2. DIGITAL REACTOR DESIGN
Thanks to the continuous increase in computational power and data availability, recent
decades have seen the emergence of digital twin technology in various industrial sectors. This
approach mainly consists of the implementation of
a virtual representation of physical systems (i.e. a
digital twin) able to reproduce and hence predict,
thanks to the aid of advanced computational tools
and models as well as the continuous collection of
data, the behaviour and state of the object of
interest across its entire lifecycle. This provides a better understanding of the physical assets,
making it possible to optimize system design, manufacturing, operation and maintenance, with
unquestionable enhancements in terms of both
safety and cost efficiency.
Although the application of this kind of approach
is currently limited to a few industrial fields, such as
the aerospace sector, significant efforts and
ongoing research have been made to extend its
applicability to a growing number of engineering
areas.
In the nuclear sector, this trend has resulted in the
postulation of an integrated nuclear digital
environment (INDE) covering the modelling of the entire life of nuclear plants, from
prototyping and construction to operation, shutdown and decommissioning [6]. The
implementation of such framework implies the
availability and interconnection of a series of
multi-scale and multi-physics computational
models continuously updated with data acquired
from the physical system during its life cycle.
The DRD project, supported by the U.K.
Department for Business, Energy and Industrial
Strategy and led by Amec Foster Wheeler (Wood
Group) in partnership with the Virtual Engineering
Centre of the University of Liverpool as well as
other industrial and academic partners, such as the
Science and Technology Facilities Council’s
Hartree Centre, the National Nuclear Laboratory,
Rolls-Royce, EDF Energy, Cambridge University,
and Imperial College London, represents a first
step in the direction of implementing a fully
developed INDE. The main objective of the
project is to provide a digital environment for
computer-aided modelling able to capture the
different phases of reactors design in a coherent,
unique framework, which would ultimately result
in the implementation of the reactor digital twin.
From a strictly technical point of view, this translates into a simulation system distributed over
several independent processes (e.g. chemical, thermal, mechanical etc.) or components
interconnected with each other and executable on multiple computational nodes: the use of high
performance computing, and hence the
parallelization of the computation, are indeed a
crucial requirement for the realization of such
framework, due to the high computational costs
and model complexity.
These challenges, as well as the strong novelty of the approach, raise reasonable questions on the
credibility of the ultimate tool, understood as the
willingness of designers, regulators and operators
to trust the information provided and hence to
make decisions based on the output obtained from
the model [7]. It is then of crucial importance to
provide, along with efficient computational tools,
their validation and evidence of their robustness
and reliability.
The rigorous analysis of uncertainties affecting the
different steps of the calculation and the
quantification of their impact on the model
response play a key role in defining the accuracy
of the adopted model, providing a more realistic
knowledge of the physical and virtual systems
with respect to traditional conservative approaches
and thus enhancing the credibility of the approach
in the eyes of decision makers.
2.1 DEALING WITH UNCERTAINTY
According to the International Atomic Energy
Agency, the uncertainties affecting the reactor
modelling, and subsequently the safety analysis,
may be organized in five main categories [2]:
- Code or model uncertainties (e.g. numerical approximations, randomness or imprecision concerning material properties, simplifying assumptions);
- Representation uncertainties (e.g. system discretization, nodalization, mesh cell homogenization etc.);
- Scaling uncertainties (i.e. reliance on scaling laws to extend the results of scaled experiments to full-scale systems);
- Plant uncertainties (e.g. concerning initial and boundary conditions);
- User effect (e.g. misapplication of the system, user errors etc.).
The main objective of uncertainty analysis is to
model the lack of knowledge concerning the
system under study by using adequate
mathematical frameworks to understand and
quantify the effect of the different kinds of
uncertainty on some defined objectives and/or
requirements. This information is essential to
define the level of accuracy achievable by the
adopted numerical model as well as to verify in a
robust way if, and to what extent, the design,
operational or safety requirements are met.
In the context of the DRD project, this implies, first of all, identifying, for each step of the reactor
design process, the relevant uncertainties arising from the sources listed above that can represent a
threat to the reliability of the analysis, from lattice computation to fuel behaviour.
This is usually achieved through sensitivity analysis, which highlights potential weaknesses of
the adopted model and tests its robustness against input variability, as well as identifying the
parameters that most affect the computational results and whose refinement can thus most
significantly decrease the output uncertainty.
Once the relevant uncertainties are identified and adequately modelled on the basis of the available
data or, in its absence, expert elicitation, they must be propagated through the analyses associated
with the specific stages of the design process. This kind of analysis thus aims to define the
imprecision or randomness affecting the output, allowing the quantified accuracy of the analysis to
be included in the decision-making process and hence strongly enhancing the value of the computed
information.
A further valuable tool for uncertainty management is reliability analysis. The latter aims at
quantifying the probability of not exceeding some predefined thresholds. This is of particular
relevance for risk-informed decision making and reactor safety: although
emphasis is still mainly focused on the
deterministic evaluation of safety margins, current
international trends seem to point to the extension
of the safety standards over the existing
acceptability thresholds associated with
conservative approaches, in order to include
probabilistic safety assessment assumptions for
safety margins [1].
Since the DRD virtual environment consists of a
simulation framework distributed over several
processes interconnected to each other (meaning
that the output of one analysis becomes the input
of the computation associated to the following
design step), the computational tool employed for
uncertainty quantification purposes must meet
several requirements. First of all, it must guarantee the loose coupling of the computational codes
associated with the design phases; this translates into the capability to propagate uncertainty
through stand-alone as well as interconnected codes or
models. Further requirements stem from the high
computational demand implied by this kind of
analysis, which results in the need for highly
efficient mathematical strategies and
computational methods, matched by the
exploitation of high performance computing
facilities.
3. COMPUTATIONAL TOOL
In order to provide the framework under
development with uncertainty quantification tools
fitting the requirements highlighted in the previous
section, the OpenCossan library [8] has been
adopted.
OpenCossan is an open-source software package released under the LGPL license and originally
developed in an object-oriented fashion in the MATLAB environment, providing an expandable
modular framework [9]. The current version of the
software, developed in collaboration with several
industrial and academic partners, is the result of
thirty years of research in the field of
computational and stochastic analysis and
incorporates both traditional and cutting-edge
methods covering a wide range of fields and
applications, including optimization analysis, life-
cycle management, reliability and risk analysis,
sensitivity, optimization and robust design. The
software is under continuous evolution, constantly
enriched with novel numerical methods developed
within the research facility of the Institute for Risk
and Uncertainty of the University of Liverpool,
where it is currently hosted [10].
For reasons of compatibility with the computational architecture developed in the context of the
DRD project, a version of the software tailored to the needs of the virtual environment has been
implemented in the Java language. This work, and hence the application discussed in the following
sections, refers to this latter version. A brief overview of the main capabilities of interest for the
current application is presented in the following sections. All the
techniques mentioned can also be applied to
surrogate models (see Table 1 for available
options) built on the basis of available data or
highly complex computational models, in order to
reduce the costs of the analysis.
Table 1 Overview of the OpenCossan tools

TOOL                          METHODS
Uncertainty Quantification    Monte Carlo; Latin Hypercube Sampling;
and Reliability               Quasi-Monte Carlo Sampling; Importance Sampling;
                              Line Sampling; Subset Simulation;
                              Interval Monte Carlo; Markov Chain Monte Carlo
Optimization                  Genetic algorithms; COBYLA and BOBYQA; SQP;
                              Simplex; Simulated annealing; Evolution strategies;
                              Alpha-level optimization
Sensitivity                   MC gradient estimation; FAST;
                              Sobol' sensitivity indices; Nonspecificity technique
Meta-Model                    Artificial neural networks; Gaussian Process;
                              Polyharmonic Splines; Response surface
3.1 UNCERTAINTY
CHARACTERIZATION
One of the strengths of the OpenCossan library lies in the high flexibility associated with the
characterization of uncertainty in its different
instances.
It is common practice to fit the uncertainty
affecting the available information within the
framework of probability, through the use of
probability density models to capture the aleatory
behaviour of data. The library offers a wide range of pre-defined options in terms of probability
distribution families, whose instantiations can be built by the user by specifying the distribution
moments, as well as non-parametric distributions and the possibility of adopting user-defined
models.
Nevertheless, when the available dataset is limited or corrupted, the epistemic contribution to the
overall uncertainty can be so large as to make any estimation of probability functions inaccurate
and potentially misleading, since it is not justified by the experimental evidence. While for extreme
cases the use of intervals is generally recommended, for in-between situations, i.e. when, on the one
hand, the amount or quality of the available data is not sufficient to justify the use of a probability
model and, on the other, the use of intervals would imply discarding part of the information, several
mathematical frameworks have been proposed.
Consistently, the software allows, along with intervals and traditional random variables, the use of
probability boxes, theorized in the context of imprecise probabilities and understood as sets of
possible probability distributions, all reasonably in agreement with the available data. In the
particular case of parametric p-boxes, the software allows the user to define the distribution family
associated with the set and the moments, expressed as intervals, which define the set itself.
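To make the notion of a parametric p-box more concrete, the following minimal sketch (written in Python purely for illustration; it does not reproduce the OpenCossan implementation or its API) represents a normal family whose mean and standard deviation are only known as intervals, and evaluates the bounding CDFs by sweeping the corners of the moment box, which suffices for the normal family since the CDF is monotonic in each moment separately. The class name and numerical values are hypothetical.

```python
import numpy as np
from scipy import stats

class ParametricPBox:
    """Illustrative parametric p-box: a normal family whose mean and
    standard deviation are only known to lie within given intervals."""

    def __init__(self, mean_bounds, std_bounds):
        self.mean_bounds = mean_bounds    # (lower, upper) bound on the mean
        self.std_bounds = std_bounds      # (lower, upper) bound on the std

    def cdf_bounds(self, x):
        """Lower and upper bounding CDFs obtained by evaluating the CDF at
        the corners of the interval-valued moment box."""
        corners = [stats.norm.cdf(x, loc=m, scale=s)
                   for m in self.mean_bounds
                   for s in self.std_bounds]
        return np.min(corners, axis=0), np.max(corners, axis=0)

# Hypothetical example: coolant temperature known only as a normal
# distribution with mean in [286, 288] degC and std in [1.0, 1.3] degC.
pbox = ParametricPBox(mean_bounds=(286.0, 288.0), std_bounds=(1.0, 1.3))
x = np.linspace(280.0, 295.0, 200)
lower_cdf, upper_cdf = pbox.cdf_bounds(x)
```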
On the other hand, the characterization of the uncertainty affecting the output of a certain model
implies the propagation of the initial input uncertainties. Several methodologies are available for
this purpose, their applicability being restricted by the nature of the input variables (a minimal
sketch of the hybrid scheme is given after this list):
- Sampling Techniques: classical sampling techniques, e.g. Monte Carlo and Latin Hypercube Sampling, are available in the library and can be adopted when all the input uncertainties are represented by probabilistic models;
- Interval Analysis: if all the input uncertainties are characterized by interval values, their propagation through the model under examination generally requires the use of optimization techniques. The library provides a wide range of options for dealing with different types of optimization problems, as listed in Table 1;
- Hybrid Methods: if both aleatory and epistemic uncertainties are present in input, i.e. in the form of both probabilistic and interval variables, hybrid approaches coupling sampling and optimization techniques are available.
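As an illustration of the hybrid scheme mentioned above, the following sketch samples the probabilistic inputs with a Latin Hypercube design and, for each sample, bounds the response over the interval inputs with a local, gradient-based optimizer (scipy.optimize.minimize), yielding lower and upper output samples that bound the response distribution. The toy model stands in for a call to an external solver and is purely hypothetical; in practice, any of the optimization algorithms of Table 1 could be used for the inner loop.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import qmc, norm

def model(theta_prob, theta_int):
    """Hypothetical response: stands in for a call to an external solver."""
    power, temperature = theta_prob
    k_fuel, k_clad = theta_int
    return power * temperature / (k_fuel + k_clad)

rng = np.random.default_rng(42)

# Aleatory inputs: Latin Hypercube samples mapped to normal distributions.
sampler = qmc.LatinHypercube(d=2, seed=rng)
u = sampler.random(n=200)
prob_samples = np.column_stack([
    norm.ppf(u[:, 0], loc=3.565, scale=0.02),    # initial power [MW]
    norm.ppf(u[:, 1], loc=286.85, scale=1.15),   # coolant temperature [degC]
])

# Epistemic inputs: intervals for fuel and cladding thermal conductivity.
lb = np.array([3.9, 12.7])
ub = np.array([10.1, 18.1])

lower, upper = [], []
for theta_prob in prob_samples:
    # Inner loop: bound the response over the interval inputs.
    res_min = minimize(lambda z: model(theta_prob, z), x0=(lb + ub) / 2,
                       bounds=list(zip(lb, ub)))
    res_max = minimize(lambda z: -model(theta_prob, z), x0=(lb + ub) / 2,
                       bounds=list(zip(lb, ub)))
    lower.append(res_min.fun)
    upper.append(-res_max.fun)

# The two sample sets bound the output distribution (a p-box on the response).
```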
3.2 RELIABILITY ANALYSIS
The selection of the most appropriate simulation tool for performing reliability analysis is bound
by the type of variables involved in the analysis as well as by the computational power and
accuracy requirements. For this purpose, the
library provides a large variety of advanced
simulation tools (see Table 1) able to handle both
probabilistic and non-probabilistic variables, along
with traditional approximation methods, e.g. First
and Second Order Reliability Method [11].
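As a minimal illustration of sampling-based reliability analysis when all inputs are probabilistic, the sketch below estimates a failure probability with plain Monte Carlo; the limit state function, distributions and sample size are hypothetical, and the advanced estimators listed in Table 1 would replace plain Monte Carlo when very small failure probabilities are sought.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000

# Hypothetical performance function: capacity minus demand.
capacity = rng.normal(loc=100.0, scale=5.0, size=n_samples)
demand = rng.normal(loc=80.0, scale=8.0, size=n_samples)
g = capacity - demand                      # failure when g <= 0

pf = np.mean(g <= 0.0)                     # failure probability estimate
se = np.sqrt(pf * (1.0 - pf) / n_samples)  # standard error of the estimate
print(f"Pf ~ {pf:.4e} +/- {se:.1e}")
```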
3.3 SENSITIVITY ANALYSIS
The numerical algorithms provided in the
OpenCossan software cover all three main
categories in which the sensitivity analysis is
usually classified, namely screening methods
(consisting of varying one input parameter at a
time, subsequently measuring the impact on the
output), local (offering an insight on the system
behaviour in a specific region of the input domain)
and global (considering the whole range of the
input parameters at once) sensitivity analysis. In this case too, the library allows the handling of
both aleatory and epistemic uncertainties as well as the use of pre-calculated output data obtained
from previous analyses. A list of methods available in
OpenCossan for sensitivity analysis is provided in
Table 1.
3.4 THIRD-PARTY SOFTWARE AND
HIGH-PERFORMANCE COMPUTING
The library is designed to interact with deterministic third-party software through non-intrusive
strategies relying on the manipulation of the ASCII input files of the external solvers, avoiding the
need for dedicated interfaces. For each realization of the uncertain user-defined input variables, the
corresponding values are injected into the solver input files. The third-party solver is then executed,
and the output of interest is automatically extracted from the generated output file and passed back
to the library for visualization or further manipulation.
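The non-intrusive coupling strategy described above can be illustrated by the following sketch, which is not the OpenCossan implementation: a templated ASCII input deck is filled with the sampled values, the external solver is launched as a separate process, and the quantity of interest is parsed back from its output file. The file names, placeholder syntax, solver command and output format are all hypothetical.

```python
import re
import subprocess
from pathlib import Path
from string import Template

def run_solver(sample: dict, workdir: Path) -> float:
    """Inject one realization into the solver input deck, run the solver,
    and extract the quantity of interest from its output file."""
    workdir.mkdir(parents=True, exist_ok=True)

    # 1. Inject values into a templated ASCII input file
    #    (placeholders such as $fuel_conductivity in 'input.template').
    template = Template(Path("input.template").read_text())
    (workdir / "input.dat").write_text(template.substitute(sample))

    # 2. Execute the third-party solver (hypothetical command line).
    subprocess.run(["solver.exe", "input.dat"], cwd=workdir, check=True)

    # 3. Extract the output of interest, e.g. a line "MAX_FUEL_TEMP = 412.3".
    output = (workdir / "output.dat").read_text()
    match = re.search(r"MAX_FUEL_TEMP\s*=\s*([-+0-9.Ee]+)", output)
    return float(match.group(1))

# Example call for one realization of the uncertain inputs.
qoi = run_solver({"fuel_conductivity": 6.2, "clad_outer_diameter": 9.17},
                 Path("run_0001"))
```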
Stochastic analysis implies multiple solver executions which, particularly in the case of highly
computationally demanding third-party software, can be unfeasibly expensive to run locally. For
this purpose, the library provides transparent access to high-performance computing (HPC) for any
algorithm implemented in the framework. The library manages the execution of the solvers,
respecting any specific ordering in the case of multiple tasks, creating tailored assignments and
allowing the calculation to be split into batches in order to enhance the flexibility of the analysis
(e.g. checking convergence, adding samples etc.). Different parallelization strategies are available,
as well as pre-built interfaces for the most common job schedulers, such as GridEngine,
Platform/LSF and OpenLava, allowing remote clusters and grids to be exploited for the distribution
of jobs. Furthermore, the software allows the use of HPC strategies in combination with machines
running different operating systems, and the storage of results either locally or remotely, in order to
facilitate access to the potentially large amounts of data generated.
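The batching idea can be sketched as follows, purely for illustration: the sample set is split into batches and dispatched to a local process pool, which stands in for the job schedulers and remote clusters mentioned above; the evaluation function is a hypothetical placeholder for a solver run.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def evaluate_batch(batch):
    """Placeholder for submitting one batch of samples to an external solver;
    a cheap analytical function stands in for the third-party code."""
    return [float(x[0] ** 2 + x[1]) for x in batch]

if __name__ == "__main__":
    samples = np.random.default_rng(1).random((1000, 2))
    batches = np.array_split(samples, 10)   # split the calculation into 10 batches

    # Locally, a process pool mimics the distribution of batches over compute
    # nodes; convergence could be checked after each batch before adding more.
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = [y for ys in pool.map(evaluate_batch, batches) for y in ys]
```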
4. CASE STUDY
For testing and validation purposes, the computational tool discussed in this paper was applied to
the uncertainty analysis of key reactor parameters of a PWR during a REA event. The latter falls
into the category of reactivity-initiated accidents for PWRs and is one of the design basis accidents
taken into account during the NPP licensing process.
The accident consists of the unexpected ejection of a control rod due to the failure of the rod
housing (e.g. caused by the occurrence of cracks). In the current analysis, uncertainties affecting
thermo-hydraulic parameters (i.e. fuel and cladding thermal conductivity), boundary conditions
(i.e. initial power, coolant temperature and pressure) and manufacturing tolerances (i.e. cladding
outer and internal diameter) were considered in input, and their impact on the simulation output was
quantified. The analysis was carried out by coupling the uncertainty quantification library with the
DYN3D software [12] for the 3-D core calculation of the reactor transient triggered by the rod
ejection event.
4.1 CORE CONFIGURATION AND INPUT
The simulation setting adopted for the current
case-study is based on the OECD/NEA and US
NRC PWR MOX/UO2 core transient benchmark
[13]. This refers to a four-loop Westinghouse PWR power plant partially loaded with MOX fuel in
combination with Zircaloy-2 cladding and subject to a rod ejection accident. The rod is assumed to
be fully ejected in 0.1 seconds, after which no reactor scram is considered.
According to the common guidelines for cores
partially loaded with MOX, less than one third of
the assemblies contain MOX fuel and no fresh
MOX is located on the core periphery. Two main
values of fuel burnup are considered: 20 GWd/tHM and 35 GWd/tHM.
Table 2 Normal random variables in input

Variable                     Mean      STD
Initial Power [MW]           3.565     0.02
Coolant Temperature [°C]     286.85    1.15
Coolant Pressure [MPa]       15.5      0.075
Table 3 Interval variables in input

Variable                               Lower Bound   Upper Bound
Cladding Outer Diameter [mm]           9.146         9.186
Cladding Internal Diameter [mm]        7.820         8.220
Fuel Thermal Conductivity [W/m·K]      3.9           10.1
Cladding Thermal Conductivity [W/m·K]  12.7          18.1
The ejection event was simulated from hot zero power conditions, while the characterization of the
input uncertainties was gathered from the available literature [14]. The input uncertainties were
captured adopting both probabilistic models (see Table 2) and interval models (see Table 3).
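A possible way of exploring the input space of Tables 2 and 3 is sketched below: the three normal variables are mapped from a Latin Hypercube design through their inverse CDFs, while the interval variables are, for illustration only, swept uniformly over their bounds. The variable names are illustrative, and the sketch does not reproduce the actual coupling with DYN3D or the treatment of intervals adopted in the original analysis.

```python
import numpy as np
from scipy.stats import qmc, norm, uniform

rng = np.random.default_rng(2018)
n = 1000

# Tables 2 and 3: three normal random variables and four interval variables.
normals = {                      # name: (mean, std)
    "initial_power_MW":      (3.565, 0.02),
    "coolant_temp_C":        (286.85, 1.15),
    "coolant_pressure_MPa":  (15.5, 0.075),
}
intervals = {                    # name: (lower bound, upper bound)
    "clad_outer_diam_mm":    (9.146, 9.186),
    "clad_inner_diam_mm":    (7.820, 8.220),
    "fuel_conductivity":     (3.9, 10.1),
    "clad_conductivity":     (12.7, 18.1),
}

# One Latin Hypercube design over all seven dimensions.
u = qmc.LatinHypercube(d=len(normals) + len(intervals), seed=rng).random(n)

samples = {}
for j, (name, (mu, sigma)) in enumerate(normals.items()):
    samples[name] = norm.ppf(u[:, j], loc=mu, scale=sigma)
for j, (name, (lo, hi)) in enumerate(intervals.items(), start=len(normals)):
    samples[name] = uniform.ppf(u[:, j], loc=lo, scale=hi - lo)

# Each of the 1000 rows would then be injected into the solver input deck.
```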
The direct impact of the input uncertainties on the key parameters (e.g. maximum fuel temperature
and enthalpy, generated power) was quantified, and a sensitivity analysis was carried out in order to
estimate the most important factors in terms of output uncertainty. Moreover, a reliability analysis
of the system, focusing on the probability of fuel damage, was performed.
4.2 RESULTS
The results shown in this section were obtained
through a Latin Hypercube Sampling approach,
adopting a population of 1000 samples in order to
fully explore the input space and its impact on the
model outputs.
4.2 (a) Uncertainty Propagation
The input uncertainties were first propagated through the model in order to visualize the magnitude
of their impact on the output accuracy. As shown in Fig. 1, the overall trend of the results is
coherent with reasonable expectations: higher fuel temperatures are registered in the mid region of
the fuel assemblies; moreover, the reactor channels located in the proximity of the ejected rod
(channels from 100 to 200) are subject to a significant increase in temperature along the fuel
centreline due to the increase in reactivity caused by the REA event.
Figure 1 Temperature of fuel centreline along its active
length given input uncertainties over each reactor
channel
As shown in Fig. 2, the temperature along the fuel centreline covers different ranges according to
the location of the assembly analysed: assemblies closer to the core periphery register temperatures
between 284°C and 290°C, while the largest interval was obtained for the area of the ejected control
rod (assembly 194) and covers a range between 284°C and 415°C.
Figure 2 Results of the sampling analysis for the
maximum fuel temperature over the reactor channels
The increase in the temperature range goes along with a rise in the uncertainty affecting the output.
Indeed, considering the behaviour of the maximum fuel temperature along the active fuel
length (see Fig. 3), larger output oscillations are
registered in the central region of the fuel
assemblies (approximately around 2m), where the
highest values of temperature are expected. In
spite of this, the amount of uncertainty affecting
the maximum temperature value remains quite
small, with a range width of 8.7°C (i.e. [406.8°C,
415.5°C]) for the peak and 7.2°C (i.e. [283.3°C,
290.5°C]) for the lowest values.
Figure 3 Sampling results for the maximum overall
temperature along fuel channels’ length
Conversely, the uncertainty affecting the
computed average values of moderator
temperature (see Fig. 4) appears to be consistent along all the reactor channels and is not affected
by the location of the assembly of reference, in spite of the increase in temperature between
channels 150 and 240.
Figure 4 Uncertainty propagation results for the average
moderator temperature over the reactor channel
This steady trend can be explained as a consequence of the averaging operation and can also be
observed in the computed average fuel temperature, shown in Fig. 5: in this case too, in spite of the
large oscillations registered in the proximity of the ejected rod, the degree of uncertainty remains
steady at around 7°C across the whole reactor.
Figure 5 Uncertainty propagation results for the average
fuel temperature along the reactor channel
The impact of the input uncertainty on the computed value of the maximum generated power was
also captured through the sampling process. As shown in Fig. 6, the peak value of the generated
power during the transient varies between 15353 MW and 16456 MW. Moreover, the timing of the
computed peaks varies slightly below 0.75 seconds, while beyond 1 second the uncertainty affecting
the transients reduces considerably and the trend becomes strongly homogeneous.
Figure 6 Uncertainty propagation results for the power
transient triggered by the rod ejection accident
4.2 (b) Sensitivity Analysis
In order to trace the computed uncertainty back to the individual impact of the inputs, sensitivity
analyses were carried out focusing on key outputs of the model, such as the maximum generated
power and the fuel enthalpy. The magnitude of such influence was captured through the
computation of normalized sensitivity measures.
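The exact sensitivity estimator is not detailed here; as one common choice, the sketch below computes normalized sensitivity measures as standardized regression coefficients obtained from the available input/output samples, scaled to sum to one in absolute value. The data in the usage example are synthetic.

```python
import numpy as np

def normalized_sensitivity(X, y):
    """Standardized regression coefficients: fit a linear model to the
    (input sample, output sample) pairs and scale each coefficient by the
    ratio of input to output standard deviation."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    beta, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
    src = beta * X.std(axis=0) / y.std()
    return src / np.sum(np.abs(src))       # normalize to unit total magnitude

# Hypothetical usage: X has one column per uncertain input (e.g. the seven
# variables of Tables 2 and 3), y holds the corresponding computed outputs.
rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 7))
y = 2.0 * X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.1, size=1000)
measures = normalized_sensitivity(X, y)
```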
Figure 7 Normalized sensitivity measures highlighting
the impact of the input uncertainty over the maximum
value of generated power
As shown in Fig. 7, the variation of the thermal conductivity of the fuel appears to affect the
generated power most significantly, as expected, while the initial power does not have any
significant impact on the computed values.
The cladding manufacturing tolerances also appear to have a certain, although limited, impact on
the overall output uncertainty, as do the cladding thermal conductivity and the coolant temperature
and pressure.
Figure 8 Normalized sensitivity measures highlighting
the impact of the input uncertainty over the maximum
value of fuel enthalpy over all the reactor channels
Similarly, thermo-hydraulic parameters, such as the thermal conductivity of the fuel and cladding,
play a crucial role in terms of maximum fuel enthalpy uncertainty, which is also sensitive to the
manufacturing tolerances of the fuel assemblies (see Fig. 8).
Figure 9 Normalized sensitivity measures highlighting
the impact of the input uncertainty over the fuel
centreline temperature of the hottest reactor channel
Predictably, the uncertainty affecting the inner and outer diameter of the cladding influences the
temperature computed for the individual channels, as shown in Fig. 9 for channel 194, where the
highest temperature across the reactor is registered.
4.2 (c) Reliability Analysis
In order to significantly lower the computational burden associated with the model under study, the
1000 samples previously analysed were used to build a surrogate model for the reliability analysis
of the system. This goal was achieved adopting the response surface methodology in combination
with a training dataset containing 75% of the computed Latin Hypercube samples and a validation
dataset covering the remaining 25%. The implemented surrogate model showed satisfactory
performance within the validation domain, achieving an R² value above 0.999.
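The procedure can be sketched as follows, under the assumption of a second-order polynomial response surface (the original analysis relied on the OpenCossan response surface tool, whose exact form is not reported here): the surrogate is trained on 75% of the samples, validated on the remaining 25% via the R² score, and then reused for a cheap Monte Carlo estimate of the enthalpy exceedance probability. The synthetic data and the 88 kJ/kg threshold check are illustrative only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

rng = np.random.default_rng(4)

# Stand-ins for the 1000 LHS input samples and the computed peak enthalpy.
X = rng.normal(size=(1000, 7))
y = 80.0 + 3.0 * X[:, 2] - 1.5 * X[:, 5] + 0.4 * X[:, 2] * X[:, 5]

# 75% training / 25% validation split, as in the case study.
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25,
                                            random_state=0)

surrogate = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
surrogate.fit(X_tr, y_tr)
print("validation R2:", r2_score(y_val, surrogate.predict(X_val)))

# Reliability estimate on the cheap surrogate: P(peak enthalpy > 88 kJ/kg).
X_mc = rng.normal(size=(100_000, 7))
pf = np.mean(surrogate.predict(X_mc) > 88.0)
print("P(enthalpy > 88 kJ/kg) ~", pf)
```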
Figure 10 Convergence of the response surface surrogate
model for the training and validation sets respectively.
The reliability analysis was carried out focusing on the possible occurrence of fuel or cladding
damage. In the analysis of reactivity accidents, the departure from nucleate boiling ratio (DNBR)
and the peak fuel enthalpy are the most common metrics for fuel damage prediction.
The analysis carried out confirmed the robustness of the system: in no fuel assembly were critical
values of the peak fuel enthalpy or DNBR detected.
Indeed, the computed probability for the peak enthalpy to exceed a threshold value of 88 kJ/kg,
far lower than the values considered dangerous for the fuel structural integrity [15], remains
between 0.5% and 1.5% for the assemblies with the highest enthalpy peak.
5. CONCLUSIONS
A complete computational tool for uncertainty
quantification tailored to the requirements of the
Digital Reactor Design virtual framework is
presented. The capabilities of the library were
discussed and tested on a case-study modelling a
PWR core transient triggered by the occurrence of
a rod ejection accident, based on the OECD/NEA
and US NRC PWR MOX/UO2 core transient
benchmark. The uncertainty affecting thermo-hydraulic parameters, manufacturing tolerances, and
initial and boundary conditions in input was taken into account, and its impact on the robustness of
the system was analysed in terms of both sensitivity and reliability. Furthermore, the propagation of the
initial uncertainties to key reactor parameters in
output was quantified and discussed. Future work
will focus on the full integration of the tool in the
simulation framework under implementation and
on its application to coupled numerical codes and
models.
ACKNOWLEDGEMENTS
This work has been funded by the Department for
Business, Energy and Industrial Strategy of the
U.K. government under the Digital Reactor
Design project (DUNS Number: 211991708).
REFERENCES
[1] IAEA, 2005 ‘Safety Margins of Operating
Reactors: Analysis of Uncertainties and
Implications for Decision Making’, IAEA
TECDOC-1332
[2] IAEA, 2008 ‘Best estimate safety analysis for
nuclear power plants: uncertainty evaluation’,
Vienna: International Atomic Energy Agency,
Safety reports series, ISSN 1020–6450, no. 52
[3] A. Petruzzi, F. D’Auria, J. C. Micaelli, A. De
Crecy, J. Royen, 2004 ‘The BEMUSE programme
(Best Estimate Methods Uncertainty and
Sensitivity Evaluation)’, Int. Mtg. on Best-
Estimate Methods in Nuclear Installation Safety
Analysis (BE-2004) IX, Washington D.C., USA,
Nov. 14–18
[4] OECD/NEA, 2007 ‘Technology Relevance of
the Uncertainty Analysis in Modelling Project for
Nuclear Reactor Safety’, NEA/NSC/DOC(2007)15
[5] A. Bucalossi and A. Petruzzi, 2010 ‘Role of
Best Estimate Plus Uncertainty Methods in Major
Nuclear Power Plant Modifications’, Journal of
Nuclear Science and Technology, 47:8, 671-683
[6] E. A. Patterson, R. J. Taylor and M. Bankhead,
2016 ‘A framework for an integrated nuclear
digital environment’ Progress in Nuclear Energy,
87, 97-103
[7] L.W. Schruben, 1980 ‘Establishing the
credibility of simulations’, Simulation, 34, 101-
105
[8] E. Patelli, 2016 ‘COSSAN: a multidisciplinary
software suite for uncertainty quantification and
risk management’, Handbook of uncertainty
quantification, pp.1-69
[9] E. Patelli, S. Tolo, H. George-Williams, J.
Sadeghi, R. Rocchetta, M. De Angelis and
M.Broggi, 2018 ‘OpenCossan 2.0: an efficient
computational toolbox for risk, reliability and
resilience analysis’, Proceedings of the joint
ICVRAM ISUMA Uncertainties conference
Florianopolis, SC, Brazil, April 8-11
[10] COSSAN Project: https://cossan.co.uk/
[11] N. K. Prinja, A. Ogunbadejo, J. Sadeghi, and
E. Patelli, 2017 'Structural reliability of pre-
stressed concrete containments', Nuclear
Engineering and Design, 323C, 235-244
[12] U. Rohde et al, 2016 ‘The reactor dynamics
code DYN3D – models, validation and
applications’, Progress in Nuclear Energy,
Volume 89, pp 170-190, ISSN 0149-1970
[13] T. Kozlowski and T.J. Downar, 2003.
‘OECD/NEA and US NRC PWR MOX/UO2 core
transient benchmark’, Working Party of the
Physics of Plutonium Fuels and Innovative Fuel
Cycles, OECD/NEA Nuclear Science Committee
[14] X. Pan, B. Jia, J. Han, J. Jing and C. Zhang,
2017 ‘Systematic and quantitative uncertainty
analysis for rod ejection accident of pressurized
water reactor’, Energy Procedia, 127, pp.369-376
[15] P. E. MacDonald, S. L. Seiffert, Z. R.
Martinson, R.K. McCardell, D.E. Owen and S. K.
Fukuda, 1980 ‘Assessment of light water reactor
fuel damage during a reactivity initiated accident’,
(No. CONF-800971-1). Idaho National
Engineering Lab., Idaho Falls (USA).