UNCERTAINTY IN DIGITAL REACTOR DESIGN
S. Tolo, Institute for Risk and Uncertainty, Virtual Engineering Centre, University of Liverpool, U.K.
D. Litskevich, K.C. Lai, D. Faulke, D. Bowman, B. Merk and K. Vikhorev, Virtual Engineering
Centre, University of Liverpool, U.K.
E. Patelli, Institute for Risk and Uncertainty, University of Liverpool, U.K.
ABSTRACT
The Digital Reactor Design project aims to provide a complete and robust framework for the implementation of a simulated environment covering all aspects of nuclear reactor design and operation. This would make it possible to enhance safety and structural integrity, as well as to improve confidence in the knowledge of the system, through the postulation of scenarios and operating conditions that can be modelled start-to-end from a graphical interface. In order to match industry requirements in terms of model robustness and reliability, non-conservative approaches, better known as Best Estimate Plus Uncertainty, able to take into account and quantify analysis uncertainty, are integrated in the framework along with traditional conservative tools. This paper describes the computational framework for uncertainty propagation, reliability and sensitivity analysis under development in the context of the Digital Reactor Design project and hence tailored to the requirements and needs of the nuclear industry. The analysis of a rod ejection accident for a pressurized water reactor, based on the OECD/NEA and US NRC PWR MOX/UO2 core transient benchmark, is presented in order to provide a general overview of the current tool's capabilities.
1. INTRODUCTION
Computer-aided modelling and virtual prototypes play a key role in the nuclear industry, supporting and driving the design of new and more advanced components, structures and systems. Alongside these unquestionable advantages, the use of computational approaches for the simulation and prediction of the behaviour of complex systems raises reasonable concerns about the reliability of the adopted tools and the accuracy of their response.
In order to guarantee the safe operation of nuclear power plants (NPPs), provisions are assigned by national regulatory bodies in terms of safety limits and margins. The latter are usually interpreted as the difference, in physical units, between the regulatory acceptance criteria (e.g. safety limits) and the corresponding calculated values of the parameters relevant to anticipated operational occurrences, design basis accidents and, more generally, the changes or phenomena under consideration [1].
Historically, the margins to acceptance criteria, as well as the compliance of licensed systems, have been determined by a fully conservative approach. This practice, considered potentially misleading due to the unknown level of conservatism and the possible prediction of unrealistic system behaviours, was first replaced by a similarly conservative procedure, known as Best Estimate Bounding, based on the use of Best Estimate (BE) codes in combination with conservative assumptions regarding the availability of systems as well as initial and boundary conditions [2].
In the wake of the increasing tendency towards more realistic calculations, and led by the aim of minimizing unnecessary conservatism while accounting for the uncertainties associated with simulation results, recent trends suggest the substitution of traditional conservative approaches with Best Estimate Plus Uncertainty (BEPU) methods, as highlighted by several research efforts and recommendations [3][4][5]. BEPU approaches combine BE codes with realistic assumptions regarding boundary and initial conditions, but with system availability based on conservative assumptions or, in rarer cases, derived from probabilistic safety analysis.
The growing interest of the nuclear sector in more realistic approaches to system modelling implies the need for efficient and robust computational tools able to adequately identify, propagate and quantify the uncertainties associated with each step of the reactor design process.
This paper aims to give an overview of the
computational framework for uncertainty
quantification, reliability and sensitivity analysis
associated with the simulation environment under
development in the context of the Digital Reactor
Design (DRD) project. The following sections provide a general introduction to the aims and objectives of the DRD project (Section 2) and a description of the computational tool integrated in the implemented environment for uncertainty, reliability and sensitivity analysis (Section 3).
Finally, the application of the tool to a case-study
modelling the occurrence of a rod ejection
accident (REA) in a pressurized water reactor
(PWR) is discussed in Section 4.
2. DIGITAL REACTOR DESIGN
Thanks to the continuous increase in computational power and data availability, recent decades have seen the emergence of digital twin technology in various industrial sectors. This approach mainly consists of the implementation of a virtual representation of a physical system (i.e. a digital twin) able, with the aid of advanced computational tools and models as well as the continuous collection of data, to reproduce and hence predict the behaviour and state of the object of interest across its entire lifecycle. This provides a better understanding of the physical assets, allowing the optimization of system design, manufacturing, operation and maintenance, with unquestionable enhancements in terms of both safety and cost efficiency.
Although the application of this kind of approach is currently limited to a few industrial fields, such as the aerospace sector, significant efforts and ongoing research aim to extend its applicability to a growing number of engineering areas.
In the nuclear sector, this trend has resulted in the postulation of an integrated nuclear digital environment (INDE) covering the entire life of nuclear plants, from prototyping and construction to operation, shut-down and decommissioning [6]. The implementation of such a framework implies the availability and interconnection of a series of multi-scale and multi-physics computational models continuously updated with data acquired from the physical system during its life cycle.
The DRD project, supported by the U.K.
Department for Business, Energy and Industrial
Strategy and led by Amec Foster Wheeler (Wood
Group) in partnership with the Virtual Engineering
Centre of the University of Liverpool as well as
other industrial and academic partners, such as the
Science and Technology Facilities Council’s
Hartree Centre, the National Nuclear Laboratory,
Rolls-Royce, EDF Energy, Cambridge University,
and Imperial College London, represents a first
step in the direction of implementing a fully
developed INDE. The main objective of the
project is to provide a digital environment for
computer-aided modelling able to capture the
different phases of reactor design in a coherent,
unique framework, which would ultimately result
in the implementation of the reactor digital twin.
From a strictly technical point of view, this translates into a simulation system distributed over several independent processes (e.g. chemical, thermal, mechanical, etc.) or components, interconnected to each other and executable on multiple computational nodes: the use of high performance computing, and hence the parallelization of the computation, is indeed a crucial requirement for the realization of such a framework, due to the high computational costs and model complexity.
These challenges, as well as the strong novelty of the approach, raise reasonable questions about the credibility of the ultimate tool, understood as the willingness of designers, regulators and operators to trust the information provided and hence to make decisions based on the output obtained from the model [7]. It is then of crucial importance to provide, along with efficient computational tools, their validation and evidence of their robustness and reliability.
The rigorous analysis of uncertainties affecting the
different steps of the calculation and the
quantification of their impact on the model
response play a key role in defining the accuracy
of the adopted model, providing a more realistic
knowledge of the physical and virtual systems
with respect to traditional conservative approaches
and thus enhancing the credibility of the approach
in the eyes of decision makers.
2.1 DEALING WITH UNCERTAINTY
According to the International Atomic Energy Agency, the uncertainties affecting reactor modelling, and subsequently the safety analysis, may be organized into five main categories [2]:
- Code or model uncertainties (e.g. numerical approximations, randomness or imprecision concerning material properties, simplifying assumptions);
- Representation uncertainties (e.g. system discretization, nodalization, mesh cell homogenization, etc.);
- Scaling uncertainties (i.e. reliance on scaling laws to extend the results of scaled experiments to full-scale systems);
- Plant uncertainties (e.g. concerning initial and boundary conditions);
- User effects (e.g. misapplication of the system, user errors, etc.).
The main objective of uncertainty analysis is to
model the lack of knowledge concerning the
system under study by using adequate
mathematical frameworks to understand and
quantify the effect of the different kinds of
uncertainty on some defined objectives and/or
requirements. This information is essential to
define the level of accuracy achievable by the
adopted numerical model as well as to verify in a
robust way if, and to what extent, the design,
operational or safety requirements are met.
In the context of the DRD project, this implies, first of all, identifying, for each step of the reactor design process, the relevant uncertainties arising from the sources listed above that can represent a threat to the reliability of the analysis, from lattice computation to fuel behaviour.
This is usually achieved through sensitivity analysis, which highlights possible weaknesses of the adopted model and tests its robustness against the input variability, as well as identifying the parameters that most strongly impact the computational results and whose refinement can thus most significantly decrease the output uncertainty.
Once the relevant uncertainties are identified and adequately modelled on the basis of the available data or, in its absence, expert elicitation, they must be propagated through the analyses associated with the specific stages of the design process. This kind of analysis thus aims to quantify the imprecision or randomness affecting the output, allowing the quantified accuracy of the analysis to be included in the decision-making process and hence strongly enhancing the value of the computed information.
A further valuable tool for uncertainty management is reliability analysis, which quantifies the probability of not exceeding some predefined thresholds. This is of particular relevance for risk-informed decision making and reactor safety: although emphasis is still mainly placed on the deterministic evaluation of safety margins, current international trends point to the extension of the safety standards beyond the existing acceptability thresholds associated with conservative approaches, in order to include probabilistic safety assessment assumptions for safety margins [1].
Since the DRD virtual environment consists of a simulation framework distributed over several interconnected processes (meaning that the output of one analysis becomes the input of the computation associated with the following design step), the computational tool employed for uncertainty quantification purposes must meet several requirements. First of all, it must guarantee the loose coupling of the computational codes associated with the design phases; this translates into the capability to propagate uncertainty through stand-alone as well as interconnected codes or models. Further requirements stem from the high computational demand implied by this kind of analysis, which results in the need for highly efficient mathematical strategies and computational methods, matched by the exploitation of high performance computing facilities.
3. COMPUTATIONAL TOOL
In order to provide the framework under
development with uncertainty quantification tools
fitting the requirements highlighted in the previous
section, the OpenCossan library [8] has been
adopted.
OpenCossan is an open-source software released under the LGPL license and originally developed in an object-oriented fashion in the MATLAB environment, which provides an expandable modular framework [9]. The current version of the software, developed in collaboration with several industrial and academic partners, is the result of thirty years of research in the field of computational and stochastic analysis and incorporates both traditional and cutting-edge methods covering a wide range of fields and applications, including optimization, life-cycle management, reliability and risk analysis, sensitivity analysis and robust design. The software is under continuous evolution, constantly enriched with novel numerical methods developed within the research facility of the Institute for Risk and Uncertainty of the University of Liverpool, where it is currently hosted [10].
For reasons of compatibility with the computational architecture developed in the context of the DRD project, a version of the software tailored to the needs of the virtual environment has been implemented in the Java language. This work, and hence the application discussed in the following sections, refers to this latter version. A brief overview of the main capabilities of interest for the current application is presented in the following sections. All the techniques mentioned can also be applied to surrogate models (see Table 1 for the available options) built on the basis of available data or of highly complex computational models, in order to reduce the cost of the analysis.
Table 1 Overview of the OpenCossan tools

Uncertainty Quantification and Reliability: Monte Carlo; Latin Hypercube Sampling; Quasi-Monte Carlo Sampling; Importance Sampling; Line Sampling; Subset Simulation; Interval Monte Carlo; Markov Chain Monte Carlo.
Optimization: Genetic algorithms; COBYLA and BOBYQA; SQP; Simplex; Simulated annealing; Evolution strategies; Alpha-level optimization.
Sensitivity: MC gradient estimation; FAST; Sobol sensitivity indices; Nonspecificity technique.
Meta-Model: Artificial neural networks; Gaussian Process; Polyharmonic Splines; Response surface.
3.1 UNCERTAINTY
CHARACTERIZATION
One of the strengths of the OpenCossan library lies in the high flexibility associated with the characterization of uncertainty in its different instances.
It is common practice to frame the uncertainty affecting the available information within probability theory, through the use of probability density models to capture the aleatory behaviour of the data. The library offers a wide range of pre-defined options in terms of probability distribution families, whose instantiations can be built by the user by specifying the distribution moments, as well as non-parametric distributions and the possibility of adopting user-defined models.
Nevertheless, when the available dataset is limited or corrupted, the epistemic contribution to the overall uncertainty can be so large as to make any estimation of probability functions inaccurate and, eventually, misleading, since it is not justified by the experimental evidence. While for extreme cases the use of intervals is generally recommended, for in-between situations, i.e. when the amount or quality of the available data is not sufficient to justify the use of a probability model on the one hand, while the use of intervals would imply discarding part of the information on the other, several mathematical frameworks have been proposed.
Consistently, the software allows, along with intervals and traditional random variables, the use of probability boxes, theorized in the context of imprecise probabilities and understood as sets of possible probability distributions, all reasonably in agreement with the available data. In the particular case of parametric p-boxes, the software allows the user to define the distribution family associated with the set and the moments, expressed as intervals, which define the set itself.
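As an illustration, the bounding cumulative distributions of a parametric p-box can be constructed with a few lines of code. The following minimal Python sketch (not OpenCossan code; the interval values assigned to the moments are hypothetical) builds the envelope of a normal family whose mean and standard deviation are only known to lie within intervals:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical parametric p-box: a normal family whose mean and standard
# deviation are only known as intervals (illustrative values).
mu_lo, mu_hi = 285.8, 287.9   # interval for the mean
sd_lo, sd_hi = 1.0, 1.5       # interval for the standard deviation

x = np.linspace(280.0, 294.0, 200)
corners = [(m, s) for m in (mu_lo, mu_hi) for s in (sd_lo, sd_hi)]
cdfs = np.array([norm.cdf(x, loc=m, scale=s) for m, s in corners])

# At any fixed x the normal CDF is monotone in each parameter, so the
# envelope of the whole set is attained at the corners of the moment box.
lower_cdf = cdfs.min(axis=0)  # lower bounding distribution
upper_cdf = cdfs.max(axis=0)  # upper bounding distribution
```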
On the other hand, the characterization of the uncertainty affecting the output of a given model implies the propagation of the initial input uncertainties. Several methodologies are available for this purpose, the choice being restricted by the nature of the input variables:
- Sampling Techniques: classical sampling techniques, e.g. Monte Carlo and Latin Hypercube Sampling, are available in the library and can be adopted when all the input uncertainties are represented by probabilistic models;
- Interval Analysis: if all the input uncertainties are characterized by interval values, their propagation through the model under examination generally requires the use of optimization techniques. The library provides a wide range of options for dealing with different types of optimization problems, as listed in Table 1;
- Hybrid Methods: if both aleatory and epistemic uncertainties are present in input, i.e. in the form of both probabilistic and interval variables, hybrid approaches coupling sampling and optimization techniques are available (a minimal sketch of this double-loop scheme is given below).
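The double-loop scheme underlying such hybrid approaches can be sketched as follows (generic Python, not the library's implementation; the model is a placeholder and the numerical values merely echo the style of the case-study inputs in Tables 2 and 3). The outer loop samples the probabilistic inputs, while the inner loop bounds the response over the interval box by optimization:

```python
import numpy as np
from scipy.optimize import minimize

def model(x_random, x_interval):
    # Placeholder for the actual solver call: any scalar response works here.
    return x_random[0] * x_interval[0] + x_random[1] / x_interval[1]

rng = np.random.default_rng(0)
bounds = [(3.9, 10.1), (12.7, 18.1)]          # epistemic (interval) inputs
x0 = np.array([np.mean(b) for b in bounds])   # inner-loop starting point

lower, upper = [], []
for _ in range(200):  # outer loop: sample the aleatory (probabilistic) inputs
    xr = rng.normal([286.85, 15.5], [1.15, 0.075])
    # inner loop: minimize and maximize the response over the interval box
    res_lo = minimize(lambda xi: model(xr, xi), x0, bounds=bounds)
    res_hi = minimize(lambda xi: -model(xr, xi), x0, bounds=bounds)
    lower.append(res_lo.fun)
    upper.append(-res_hi.fun)
# 'lower' and 'upper' sample the bounding output distributions (a p-box)
```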
3.2 RELIABILITY ANALYSIS
The selection of the most appropriate simulation tool for performing reliability analysis is constrained by the type of variables involved in the analysis as well as by the computational power and accuracy requirements. For this purpose, the
library provides a large variety of advanced
simulation tools (see Table 1) able to handle both
probabilistic and non-probabilistic variables, along
with traditional approximation methods, e.g. First
and Second Order Reliability Method [11].
3.3 SENSITIVITY ANALYSIS
The numerical algorithms provided in the
OpenCossan software cover all three main
categories in which the sensitivity analysis is
usually classified, namely screening methods
(consisting of varying one input parameter at a
time, subsequently measuring the impact on the
output), local (offering an insight on the system
behaviour in a specific region of the input domain)
and global (considering the whole range of the
input parameters at once) sensitivity analysis. In this case too, the library allows the handling of both aleatory and epistemic uncertainties as well as the use of pre-calculated output data obtained from previous analyses. A list of methods available in
OpenCossan for sensitivity analysis is provided in
Table 1.
3.4 THIRD-PARTY SOFTWARE AND
HIGH-PERFORMANCE COMPUTING
The library is designed to interact with deterministic third-party software through non-intrusive strategies relying on the manipulation of the ASCII input files of the external solvers, avoiding the need for dedicated interfaces. For each realization of the uncertain user-defined input variables, the corresponding values are injected into the solver input files. The third-party solver is then executed, and the output of interest is automatically extracted from the generated output file and passed back to the library for visualization or further manipulation.
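The following Python sketch illustrates the general pattern of such a non-intrusive coupling; the file names, placeholder syntax, solver executable and output keyword are all hypothetical stand-ins rather than the library's actual conventions:

```python
import re
import subprocess
from pathlib import Path

def evaluate(sample, workdir=Path(".")):
    """One non-intrusive solver run: inject a realization of the uncertain
    inputs into an ASCII input deck, execute the solver and parse the
    quantity of interest back out of the generated output file."""
    text = (workdir / "input.template").read_text()
    for name, value in sample.items():
        text = text.replace(f"<{name}>", f"{value:.6e}")  # e.g. <FUEL_K>
    (workdir / "input.dat").write_text(text)

    subprocess.run(["solver", "input.dat"], cwd=workdir, check=True)

    output = (workdir / "output.dat").read_text()
    match = re.search(r"MAX_FUEL_TEMP\s*=\s*([-+0-9.Ee]+)", output)
    return float(match.group(1))

# evaluate({"FUEL_K": 7.0, "CLAD_OD": 9.166})  # input names are illustrative
```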
Stochastic analysis implies multiple solver executions which, particularly in the case of highly computationally demanding third-party software, can prove unfeasibly expensive to run locally. For this purpose, the library provides transparent access to high-performance computing (HPC) for any algorithm implemented in the framework. The library manages the execution of the solvers, respecting any specific execution order in the case of multiple tasks, creating tailored assignments and allowing the calculation to be split into batches in order to enhance the flexibility of the analysis (e.g. checking convergence, adding samples, etc.). Different parallelization strategies are available, as well as pre-built interfaces for the most common job schedulers, such as GridEngine, Platform/LSF and OpenLava, allowing remote clusters and grids to be exploited for the distribution of jobs. Furthermore, the software allows the use of HPC strategies in combination with machines running different operating systems, and the storing of results either locally or remotely, in order to facilitate access to the potentially large amounts of data generated.
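As a simplified local analogue of this batch-splitting mechanism (actual deployments rely on the job schedulers mentioned above rather than a local process pool), consider the following Python sketch:

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def run_batch(batch):
    # Placeholder for one batch of solver runs dispatched to a compute node
    return [float(np.sum(x)) for x in batch]

if __name__ == "__main__":
    samples = np.random.default_rng(0).random((1000, 7))
    # Splitting the calculation into batches keeps intermediate results
    # available, e.g. to check convergence or to add samples later on.
    batches = np.array_split(samples, 10)
    with ProcessPoolExecutor() as pool:
        results = [y for part in pool.map(run_batch, batches) for y in part]
```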
4. CASE STUDY
For testing and validation purposes, the computational tool discussed in this paper was applied to the uncertainty analysis of key reactor parameters of a PWR during a REA event. The latter falls into the category of reactivity-initiated accidents for PWRs and is one of the design basis accidents taken into account during the NPP licensing process.
The accident consists of the unexpected ejection of a control rod due to the failure of the rod housing (e.g. caused by the occurrence of cracks). In the current analysis, uncertainties affecting thermo-hydraulic parameters (i.e. fuel and cladding thermal conductivity), boundary conditions (i.e. initial power, coolant temperature and pressure) and manufacturing tolerances (i.e. cladding outer and internal diameter) were considered in input and their impact on the simulation output quantified. The analysis was realized by coupling the uncertainty quantification library with the DYN3D software [12] for the 3-D core calculation of the reactor transient triggered by the rod ejection event.
4.1 CORE CONFIGURATION AND INPUT
The simulation setting adopted for the current case-study is based on the OECD/NEA and US NRC PWR MOX/UO2 core transient benchmark [13]. The benchmark refers to a four-loop Westinghouse PWR power plant partially loaded with MOX fuel, in combination with Zircaloy-2 cladding, and subject to a rod ejection accident. The rod is assumed to be fully ejected in 0.1 seconds, after which no reactor scram is considered.
According to the common guidelines for cores partially loaded with MOX, less than one third of the assemblies contain MOX fuel and no fresh MOX is located on the core periphery. Two main values of fuel burnup are considered, 20 GWd/tHM and 35 GWd/tHM.
Table 2 Normal random variables in input

Variable                      Mean     STD
Initial Power [MW]            3.565    0.02
Coolant Temperature [°C]      286.85   1.15
Coolant Pressure [MPa]        15.5     0.075
Table 3 Interval variables in input

Variable                                 Lower Bound   Upper Bound
Cladding Outer Diameter [mm]             9.146         9.186
Cladding Internal Diameter [mm]          7.820         8.220
Fuel Thermal Conductivity [W/(m·K)]      3.9           10.1
Cladding Thermal Conductivity [W/(m·K)]  12.7          18.1
The ejection event was simulated from hot zero power conditions, while the characterization of the input uncertainties was gathered from the available literature [14]. The input uncertainties were captured adopting both probabilistic models (see Table 2) and interval models (see Table 3).
The direct impact of the input uncertainties on the key parameters (e.g. maximum fuel temperature and enthalpy, generated power) was quantified, and a sensitivity analysis was carried out in order to identify the most important factors in terms of output uncertainty. Moreover, a reliability analysis of the system, focusing on the probability of fuel damage, was carried out.
4.2 RESULTS
The results shown in this section were obtained through a Latin Hypercube Sampling approach, adopting a population of 1000 samples in order to fully explore the input space and its impact on the model outputs.
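A minimal Python sketch of this sampling scheme is reported below; treating the interval variables of Table 3 as uniformly distributed over their bounds is an assumption made here for illustration only, not necessarily the treatment adopted in the analysis:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 1000  # population size used in the paper

def lhs(n, d, rng):
    """One uniform point per equal-probability stratum in each dimension."""
    strata = np.array([rng.permutation(n) for _ in range(d)]).T
    return (strata + rng.random((n, d))) / n

u = lhs(n, 7, rng)
# Probabilistic inputs (Table 2) via the inverse normal CDF
means = np.array([3.565, 286.85, 15.5])
stds = np.array([0.02, 1.15, 0.075])
x_prob = norm.ppf(u[:, :3]) * stds + means
# Interval inputs (Table 3) mapped linearly onto their bounds
lo = np.array([9.146, 7.820, 3.9, 12.7])
hi = np.array([9.186, 8.220, 10.1, 18.1])
x_int = lo + u[:, 3:] * (hi - lo)
X = np.hstack([x_prob, x_int])  # 1000 x 7 input matrix for the solver runs
```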
4.2 (a) Uncertainty Propagation
The input uncertainties were first propagated through the model in order to visualize the magnitude of their impact on the output accuracy. As shown in Fig. 1, the overall trend of the results is coherent with reasonable expectations: higher fuel temperatures are registered in the mid region of the fuel assemblies; moreover, the reactor channels located in the proximity of the ejected rod (channels from 100 to 200) are subject to a significant increase of temperature along the fuel centreline, due to the reactivity insertion caused by the REA event.
Figure 1 Temperature of fuel centreline along its active
length given input uncertainties over each reactor
channel
As shown in Fig. 2, the temperature along the fuel centreline covers different ranges according to the location of the assembly analysed: the more peripheral assemblies register temperatures between 284°C and 290°C, while the largest interval was obtained for the area of the ejected control rod (assembly 194) and covers a range between 284°C and 415°C.
Figure 2 Results of the sampling analysis for the
maximum fuel temperature over the reactor channels
The widening of the temperature range goes along with a rise in the uncertainty affecting the output. Indeed, considering the behaviour of the maximum fuel temperature along the active fuel length (see Fig. 3), larger output oscillations are registered in the central region of the fuel assemblies (approximately around 2 m), where the highest values of temperature are expected. In spite of this, the amount of uncertainty affecting the maximum temperature value remains quite small, with a range width of 8.7°C (i.e. [406.8°C, 415.5°C]) for the peak and 7.2°C (i.e. [283.3°C, 290.5°C]) for the lowest values.
Figure 3 Sampling results for the maximum overall
temperature along fuel channels’ length
Conversely, the uncertainty affecting the computed average values of the moderator temperature (see Fig. 4) appears to be consistent across all the reactor channels and is not affected by the location of the reference assembly, in spite of the increase in temperature between channels 150 and 240.
Figure 4 Uncertainty propagation results for the average
moderator temperature over the reactor channel
This steady trend can be explained as a consequence of the averaging operation and can also be observed in the computed average fuel temperature, shown in Fig. 5: in this case too, in spite of the large oscillations registered in the proximity of the ejected rod, the degree of uncertainty remains steady at around 7°C across the whole reactor.
Figure 5 Uncertainty propagation results for the average
fuel temperature along the reactor channel
The impact of the input uncertainty on the computed value of the maximum generated power was also captured through the sampling process. As shown in Fig. 6, the peak value of the generated power during the transient varies between 15353 MW and 16456 MW. Moreover, the timing of the computed peaks varies slightly below 0.75 seconds, while beyond 1 second the uncertainty affecting the transients reduces considerably and the trend becomes strongly homogeneous.
Figure 6 Uncertainty propagation results for the power
transient triggered by the rod ejection accident
4.2 (b) Sensitivity Analysis
In order to trace the computed uncertainty back to the individual impact of each input, sensitivity analyses were carried out focusing on key outputs of the model, such as the maximum generated power and the fuel enthalpy. The magnitude of each input's influence was captured through the computation of normalized sensitivity measures.
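The specific normalized measure adopted is not detailed here; as one common choice, standardized regression coefficients can be computed directly from the available samples, as in the following illustrative sketch with stand-in data:

```python
import numpy as np

def normalized_sensitivity(X, y):
    """Standardized regression coefficients: one common normalized
    sensitivity measure, comparable in magnitude across inputs."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    A = np.column_stack([np.ones(len(ys)), Xs])
    beta, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return beta[1:]

# Stand-in data shaped like the case study (1000 runs, 7 uncertain inputs)
rng = np.random.default_rng(1)
X = rng.random((1000, 7))
y = 3.0 * X[:, 4] + 0.5 * X[:, 0] + 0.1 * rng.standard_normal(1000)
print(normalized_sensitivity(X, y))
```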
Figure 7 Normalized sensitivity measures highlighting
the impact of the input uncertainty over the maximum
value of generated power
As shown in Fig. 7, the variation of the thermal conductivity of the fuel affects the generated power most significantly, as expected, while the initial power does not have any significant impact on the computed values.
The cladding manufacturing tolerances also seem to have a certain, although limited, impact on the overall output uncertainty, as do the cladding thermal conductivity and the coolant temperature and pressure.
Figure 8 Normalized sensitivity measures highlighting
the impact of the input uncertainty over the maximum
value of fuel enthalpy over all the reactor channels
Similarly, thermo-hydraulic parameters, such as the thermal conductivity of the fuel and cladding, play a crucial role in the uncertainty of the maximum fuel enthalpy, which is also sensitive to the manufacturing tolerances of the fuel assemblies (see Fig. 8).
Figure 9 Normalized sensitivity measures highlighting
the impact of the input uncertainty over the fuel
centreline temperature of the hottest reactor channel
Predictably, the uncertainty affecting the inner and outer diameter of the cladding influences the temperature computed for the individual channels, as shown in Figure 9 for channel 194, where the highest temperature across the reactor is registered.
4.2 (c) Reliability Analysis
In order to significantly lower the computational burden associated with the model under study, the 1000 samples previously analysed were used to build a surrogate model for the reliability analysis of the system. This was achieved adopting the response surface methodology in combination with a training dataset containing 75% of the computed Latin Hypercube samples and a validation dataset covering the remaining 25%. The implemented surrogate model showed satisfactory performance within the validation domain, achieving an R² value above 0.999.
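A minimal sketch of this procedure is shown below, assuming a full quadratic response surface (the polynomial order used in the study is not stated) and stand-in data in place of the 1000 solver runs:

```python
import numpy as np

def design_matrix(X):
    """Full quadratic basis: constant, linear and second-order terms."""
    n, d = X.shape
    cols = [np.ones(n)] + [X[:, j] for j in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

rng = np.random.default_rng(2)
X = rng.random((1000, 7))                 # stand-in for the LHS input samples
y = X @ np.arange(1.0, 8.0) + 0.01 * rng.standard_normal(1000)  # stand-in output

idx = rng.permutation(len(y))
split = int(0.75 * len(y))                # 75% training, 25% validation
train, valid = idx[:split], idx[split:]

coef, *_ = np.linalg.lstsq(design_matrix(X[train]), y[train], rcond=None)
y_hat = design_matrix(X[valid]) @ coef
r2 = 1.0 - np.sum((y[valid] - y_hat) ** 2) / np.sum(
    (y[valid] - y[valid].mean()) ** 2)
```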
Figure 10 Convergence of the response surface surrogate
model for the training and validation sets respectively.
The reliability analysis was carried out focusing on the possible occurrence of fuel or cladding damage. In the analysis of reactivity accidents, the departure from nucleate boiling ratio (DNBR) and the peak fuel enthalpy are the most common metrics for fuel damage prediction.
The analysis confirmed the robustness of the system: no critical values of peak fuel enthalpy or DNBR were detected in any fuel assembly.
Indeed, the computed probability for the peak enthalpy to exceed a threshold value of 88 kJ/kg, well below the values considered dangerous for the fuel structural integrity [15], remains between 0.5% and 1.5% for the assemblies with the highest enthalpy peak.
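Once the surrogate replaces the full solver, such exceedance probabilities can be estimated by brute-force sampling at negligible cost, as in the following sketch (the output distribution used here is a hypothetical stand-in, chosen only to yield a probability of the same order as reported):

```python
import numpy as np

rng = np.random.default_rng(3)
threshold = 88.0  # kJ/kg, the threshold considered in the paper

# Stand-in for surrogate evaluations of the peak fuel enthalpy
peak_enthalpy = rng.normal(80.0, 3.5, size=10**6)

p_exceed = np.mean(peak_enthalpy > threshold)  # estimated exceedance probability
# Standard error of the estimator, to judge whether the sample size suffices
std_err = np.sqrt(p_exceed * (1.0 - p_exceed) / peak_enthalpy.size)
```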
5. CONCLUSIONS
A complete computational tool for uncertainty quantification, tailored to the requirements of the Digital Reactor Design virtual framework, has been presented. The capabilities of the library were discussed and tested on a case-study modelling a PWR core transient triggered by the occurrence of a rod ejection accident, based on the OECD/NEA and US NRC PWR MOX/UO2 core transient benchmark. The uncertainties affecting thermo-hydraulic parameters, manufacturing tolerances, and initial and boundary conditions in input were taken into account, and their impact on the robustness of the system was analysed in terms of both sensitivity and reliability. Furthermore, the propagation of the initial uncertainties to key reactor output parameters was quantified and discussed. Future work will focus on the full integration of the tool in the simulation framework under implementation and on its application to coupled numerical codes and models.
ACKNOWLEDGEMENTS
This work has been funded by the Department for
Business, Energy and Industrial Strategy of the
U.K. government under the Digital Reactor
Design project (DUNS Number: 211991708).
REFERENCES
[1] IAEA, 2005 'Safety Margins of Operating Reactors: Analysis of Uncertainties and Implications for Decision Making', IAEA TECDOC-1332.
[2] IAEA, 2008 'Best Estimate Safety Analysis for Nuclear Power Plants: Uncertainty Evaluation', Safety Reports Series, no. 52, ISSN 1020-6450, Vienna: International Atomic Energy Agency.
[3] A. Petruzzi, F. D'Auria, J. C. Micaelli, A. De Crecy and J. Royen, 2004 'The BEMUSE programme (Best Estimate Methods Uncertainty and Sensitivity Evaluation)', Int. Mtg. on Best-Estimate Methods in Nuclear Installation Safety Analysis (BE-2004) IX, Washington D.C., USA, Nov. 14-18.
[4] OECD/NEA, 2007 'Technology Relevance of the Uncertainty Analysis in Modelling Project for Nuclear Reactor Safety', NEA/NSC/DOC(2007)15.
[5] A. Bucalossi and A. Petruzzi, 2010 'Role of Best Estimate Plus Uncertainty Methods in Major Nuclear Power Plant Modifications', Journal of Nuclear Science and Technology, 47:8, 671-683.
[6] E. A. Patterson, R. J. Taylor and M. Bankhead, 2016 'A framework for an integrated nuclear digital environment', Progress in Nuclear Energy, 87, 97-103.
[7] L. W. Schruben, 1980 'Establishing the credibility of simulations', Simulation, 34, 101-105.
[8] E. Patelli, 2016 'COSSAN: a multidisciplinary software suite for uncertainty quantification and risk management', Handbook of Uncertainty Quantification, pp. 1-69.
[9] E. Patelli, S. Tolo, H. George-Williams, J. Sadeghi, R. Rocchetta, M. De Angelis and M. Broggi, 2018 'OpenCossan 2.0: an efficient computational toolbox for risk, reliability and resilience analysis', Proceedings of the Joint ICVRAM-ISUMA Uncertainties Conference, Florianopolis, SC, Brazil, April 8-11.
[10] COSSAN Project: https://cossan.co.uk/
[11] N. K. Prinja, A. Ogunbadejo, J. Sadeghi and E. Patelli, 2017 'Structural reliability of pre-stressed concrete containments', Nuclear Engineering and Design, 323C, 235-244.
[12] U. Rohde et al., 2016 'The reactor dynamics code DYN3D - models, validation and applications', Progress in Nuclear Energy, 89, 170-190, ISSN 0149-1970.
[13] T. Kozlowski and T. J. Downar, 2003 'OECD/NEA and US NRC PWR MOX/UO2 core transient benchmark', Working Party on the Physics of Plutonium Fuels and Innovative Fuel Cycles, OECD/NEA Nuclear Science Committee.
[14] X. Pan, B. Jia, J. Han, J. Jing and C. Zhang, 2017 'Systematic and quantitative uncertainty analysis for rod ejection accident of pressurized water reactor', Energy Procedia, 127, pp. 369-376.
[15] P. E. MacDonald, S. L. Seiffert, Z. R. Martinson, R. K. McCardell, D. E. Owen and S. K. Fukuda, 1980 'Assessment of light water reactor fuel damage during a reactivity initiated accident', No. CONF-800971-1, Idaho National Engineering Lab., Idaho Falls (USA).