
Majdi I. Radaideh, PhD
University of Michigan (U-M)
About
68 Publications
20,292 Reads
514 Citations (since 2017)
Additional affiliations
November 2019 - present
June 2017 - September 2017
September 2015 - October 2019
Education
February 2017 - June 2019
September 2016 - September 2019
September 2016 - June 2019
Publications (68)
Providing accurate uncertainty estimations is essential for producing reliable machine learning models, especially in safety-critical applications such as accelerator systems. Gaussian process models are generally regarded as the gold standard method for this task, but they can struggle with large, high-dimensional datasets. Combining deep neural n...
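For illustration only, the snippet below shows the kind of calibrated predictive uncertainty (mean plus standard deviation) a Gaussian process provides, using generic scikit-learn calls on toy data; it is not the authors' model or dataset.

```python
# Illustrative Gaussian process regression with predictive uncertainty.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))                 # toy 1-D inputs
y = np.sin(X).ravel() + 0.1 * rng.normal(size=40)    # noisy targets

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

X_test = np.linspace(-3, 3, 200).reshape(-1, 1)
mean, std = gp.predict(X_test, return_std=True)      # predictive mean and 1-sigma band
print(mean[:3], std[:3])
```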
We present a multi-module framework based on Conditional Variational Autoencoder (CVAE) to detect anomalies in the power signals coming from multiple High Voltage Converter Modulators (HVCMs). We condition the model with the specific modulator type to capture different representations of the normal waveforms and to improve the sensitivity of the mod...
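For readers unfamiliar with the architecture, here is a minimal sketch of a conditional VAE; the waveform length (256), number of modulator types (4), and layer sizes are placeholders rather than values from the paper, and scoring anomalies by reconstruction error is only one common choice.

```python
# Minimal conditional VAE sketch (placeholder dimensions, not the paper's model).
import torch
import torch.nn as nn
import torch.nn.functional as F

WAVE_DIM, COND_DIM, LATENT = 256, 4, 16

class CVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(WAVE_DIM + COND_DIM, 128), nn.ReLU())
        self.mu = nn.Linear(128, LATENT)
        self.logvar = nn.Linear(128, LATENT)
        self.dec = nn.Sequential(nn.Linear(LATENT + COND_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, WAVE_DIM))

    def forward(self, x, c):
        h = self.enc(torch.cat([x, c], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        x_hat = self.dec(torch.cat([z, c], dim=1))
        return x_hat, mu, logvar

def loss_fn(x, x_hat, mu, logvar):
    recon = F.mse_loss(x_hat, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

model = CVAE()
x = torch.randn(8, WAVE_DIM)                              # 8 toy waveforms
c = torch.eye(COND_DIM)[torch.randint(0, COND_DIM, (8,))]  # one-hot modulator type
x_hat, mu, logvar = model(x, c)
print(loss_fn(x, x_hat, mu, logvar).item())
# Waveforms whose reconstruction error falls far above the training distribution
# would be flagged as anomalous; the threshold choice is application-specific.
```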
Early fault detection and fault prognosis are crucial to ensure efficient and safe operations of complex engineering systems such as the Spallation Neutron Source (SNS) and its power electronics (high voltage converter modulators). Following an advanced experimental facility setup that mimics SNS operating conditions, the authors successfully condu...
We present a multi-module framework based on Conditional Variational Autoencoder (CVAE) to detect anomalies in the power signals coming from multiple High Voltage Converter Modulators (HVCMs). We condition the model with the specific modulator type to capture different representations of the normal waveforms and to improve the sensitivity of the mo...
We present a study for particle transport design optimization that combines OpenMC (Monte Carlo transport code) with NEORL (NeuroEvolution Optimization with Reinforcement Learning). The proposed open-source framework, OpenNeoMC, empowers particle transport with state-of-the-art optimization techniques to facilitate design optimization and efficient...
High voltage converter modulators (HVCM) provide power to the accelerating cavities of the Spallation Neutron Source (SNS) facility. HVCMs experience catastrophic failures, which increase the downtime of the SNS and reduce beam time. The faults may occur due to different reasons including failures of the resonant capacitor, core saturation due to t...
In this paper, a flexible large-scale ensemble-based optimization algorithm is presented for complex optimization problems. According to the no free lunch theorem, no single optimization algorithm demonstrates superior performance across all optimization problems. Therefore, with the animorphic ensemble optimization (AEO) algorithm presented here,...
Early fault detection and fault prognosis are crucial to ensure efficient and safe operations of complex engineering systems such as the Spallation Neutron Source (SNS) and its power electronics (high voltage converter modulators). Following an advanced experimental facility setup that mimics SNS operating conditions, the authors successfully condu...
High intensity proton pulses strike the Spallation Neutron Source (SNS)'s mercury target to provide bright neutron beams. These strikes also deposit extensive energy into the mercury and its steel vessel. Prediction of the resultant loading on the target is difficult when helium gas is intentionally injected into the mercury to reduce the loading a...
The High Voltage Converter Modulators (HVCM) used to power the klystrons in the Spallation Neutron Source (SNS) linac were selected as one area to explore machine learning due to reliability issues in the past and the availability of large sets of archived waveforms. Progress in the past two years has resulted in generating a significant amount of...
The mercury constitutive model predicting the strain and stress in the target vessel plays a central role in improving the lifetime prediction and future target designs of the mercury targets at the Spallation Neutron Source. We leverage the experiment strain data collected over multiple years to improve the mercury constitutive model through a com...
The anomalies in the high voltage converter modulator (HVCM) remain a major source of downtime for the Spallation Neutron Source facility, which delivers the most intense neutron beam in the world for scientific materials research. In this work, we propose neural network architectures based on Recurrent AutoEncoders (RAE) to detect anomalies ahead of time in...
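A compact sketch of a recurrent (LSTM) autoencoder for waveform anomaly scoring is shown below; the layer sizes and sequence length are placeholders, not the architectures studied in the paper.

```python
# Recurrent autoencoder sketch: compress a sequence, reconstruct it, and use
# reconstruction error as an anomaly score (placeholder sizes).
import torch
import torch.nn as nn

class RecurrentAE(nn.Module):
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                              # x: (batch, time, features)
        _, (h, _) = self.encoder(x)                    # compress into last hidden state
        z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1) # repeat code across time steps
        dec, _ = self.decoder(z)
        return self.out(dec)                           # reconstructed sequence

model = RecurrentAE()
x = torch.randn(8, 100, 1)                             # 8 toy waveforms, 100 samples each
recon = model(x)
score = ((recon - x) ** 2).mean(dim=(1, 2))            # per-waveform reconstruction error
print(score.shape)                                     # flag scores above a chosen threshold
```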
To improve the marketability of novel microreactor designs, there is a need for automated and optimal control of these reactors. This paper presents a methodology for performing multiobjective optimization of control drum operation for a microreactor under normal and off-nominal conditions. Two different case studies are used where the control drum...
This article describes real time series datasets collected from the high voltage converter modulators (HVCM) of the Spallation Neutron Source facility. HVCMs are used to power the linear accelerator klystrons, which in turn produce the high-power radio frequency to accelerate the negative hydrogen ions (H⁻). Waveform signals have been collected fro...
We propose a new approach called PESA (Prioritized replay Evolutionary and Swarm Algorithms) combining prioritized replay of reinforcement learning with hybrid evolutionary algorithms. PESA hybridizes different evolutionary and swarm algorithms such as particle swarm optimization, evolution strategies, simulated annealing, and differential evolutio...
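The following is a simplified sketch of the shared, prioritized replay idea: every algorithm deposits (solution, fitness) pairs into one memory, and candidates are redrawn with rank-weighted probability to re-seed the next populations. It is a toy illustration, not the published PESA implementation.

```python
# Shared prioritized replay memory sketch for a hybrid optimizer ensemble.
import numpy as np

class ReplayMemory:
    def __init__(self, capacity=200, alpha=1.0):
        self.capacity, self.alpha = capacity, alpha
        self.solutions, self.fitness = [], []

    def add(self, x, f):
        self.solutions.append(np.asarray(x))
        self.fitness.append(f)
        if len(self.fitness) > self.capacity:           # drop the worst entry (minimization)
            worst = int(np.argmax(self.fitness))
            self.solutions.pop(worst); self.fitness.pop(worst)

    def sample(self, n):
        ranks = np.argsort(np.argsort(self.fitness))    # rank 0 = best fitness
        prio = (len(self.fitness) - ranks).astype(float) ** self.alpha
        p = prio / prio.sum()
        idx = np.random.choice(len(self.fitness), size=n, p=p)
        return [self.solutions[i] for i in idx]

# Each optimizer (PSO, ES, SA, ...) would call add() after evaluating candidates
# and sample() to re-seed part of its next population.
memory = ReplayMemory()
for _ in range(50):
    x = np.random.uniform(-5, 5, size=3)
    memory.add(x, float(np.sum(x ** 2)))                # sphere function as a toy objective
print(memory.sample(5)[0])
```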
The reliability of the mercury spallation target is mission-critical for the neutron science program of the spallation neutron source at the Oak Ridge National Laboratory. We present an inverse uncertainty quantification (UQ) study using the Bayesian framework for the mercury equation of state model parameters, with the assistance of polynomial cha...
The mercury constitutive model predicting the strain and stress in the target vessel plays a central role in improving the lifetime prediction and future target designs of the mercury targets at the Spallation Neutron Source (SNS). We leverage the experiment strain data collected over multiple years to improve the mercury constitutive model through...
The reliability of the mercury spallation target is mission-critical for the neutron science program of the spallation neutron source at the Oak Ridge National Laboratory. We present an inverse uncertainty quantification (UQ) study using the Bayesian framework for the mercury equation of state model parameters, with the assistance of polynomial cha...
We present an open-source Python framework for NeuroEvolution Optimization with Reinforcement Learning (NEORL) developed at the Massachusetts Institute of Technology. NEORL offers a global optimization interface of state-of-the-art algorithms in the field of evolutionary computation, neural networks through reinforcement learning, and hybrid neuroe...
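NEORL's own API is not reproduced here; the snippet below only illustrates the kind of bounded, black-box fitness problem such frameworks optimize, using SciPy's differential_evolution as a stand-in optimizer and a toy objective.

```python
# Stand-in for a NEORL-style workflow: define a bounded fitness function and
# hand it to a global optimizer (SciPy differential evolution used here).
import numpy as np
from scipy.optimize import differential_evolution

def fitness(x):
    # toy "sphere" objective standing in for an expensive engineering simulation
    return float(np.sum(np.asarray(x) ** 2))

bounds = [(-5.0, 5.0)] * 5                       # five continuous design variables
result = differential_evolution(fitness, bounds, maxiter=100, seed=1)
print(result.x, result.fun)
```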
We combine advances in deep reinforcement learning (RL) with evolutionary computation to perform large-scale optimisation of boiling water reactor (BWR) bundles using CASMO4/SIMULATE3 codes; capturing fine details, radial/axial fuel heterogeneity, and real-world constraints. RL constructs neural networks that learn how to assign fuel and poison enr...
For practical engineering optimization problems, the design space is typically narrow, given all the real-world constraints. Reinforcement Learning (RL) has commonly been guided by stochastic algorithms to tune hyperparameters and leverage exploration. Conversely in this work, we propose a rule-based RL methodology to guide evolutionary algorithms...
The assumption that void fraction, and by extension coolant density, is uniform in the radial direction is a common approximation used in lattice physics simulations. In this study, models without uniform radial void fraction are used and lattice criticality and pin powers are investigated in two ways. One way uses hypothetical models that reflect...
A new concept using deep learning in neural networks is investigated to characterize the underlying uncertainty of nuclear data. Analysis is performed on multi-group neutron cross-sections (56 energy groups) for the GODIVA U-235 sphere. A deep model is trained with cross-validation using 1000 nuclear data random samples to fit 336 nuclear data para...
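In the same spirit, the sketch below fits a neural-network surrogate from many input parameters to a scalar response and scores it with k-fold cross-validation; the dimensions echo the abstract (1000 samples, 336 parameters) but the data are synthetic placeholders, not GODIVA samples.

```python
# Illustrative cross-validated neural-network surrogate on synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 336))                             # 1000 samples x 336 parameters
y = X[:, :5].sum(axis=1) + 0.05 * rng.normal(size=1000)      # toy scalar response

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(scores.mean(), scores.std())
```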
Optimization of nuclear fuel assemblies, if performed effectively, will lead to fuel efficiency improvement, cost reduction, and safety assurance. However, assembly optimization involves solving high-dimensional and computationally expensive combinatorial problems. As such, fuel designers’ expert judgement has commonly prevailed over the use of stoc...
We propose PESA, a novel approach combining Particle Swarm Optimisation (PSO), Evolution Strategy (ES), and Simulated Annealing (SA) in a hybrid algorithm inspired by reinforcement learning. PESA hybridizes the three algorithms by storing their solutions in a shared replay memory. Next, PESA applies prioritized replay to redistribute data betwee...
Multiphysics coupling of neutronics/thermal-hydraulics models is essential for accurate modeling of nuclear reactor systems with physics feedback. In this work, SCALE/TRACE coupling is used for neutronic analysis and spent fuel validation of BWR assemblies, which have strong coolant feedback. 3D axial power profiles with coolant feedback are captur...
In the last few years, deep learning in neural networks demonstrated impressive successes in the areas of computer vision, speech and image recognition, text generation, and many others. However, sensitive engineering areas such as nuclear engineering benefited less from these efficient techniques. In this work, deep learning expert systems are uti...
Compositions of large nuclear cores (e.g. boiling water reactors) are highly heterogeneous in terms of fuel composition, control rod insertions and flow regimes. For this reason, they usually lack high order of symmetry (e.g. 1/4, 1/8) making it difficult to estimate their neutronic parameters for large spaces of possible loading patterns. A detail...
A new concept using deep learning in neural networks is investigated to characterize the underlying uncertainty of nuclear data. Analysis is performed on multi-group neutron cross-sections (56 energy groups) for the GODIVA U-235 sphere. A deep model is trained with cross-validation using 1000 nuclear data random samples to fit 336 nuclear data para...
The assumption that void fraction, and by extension coolant density, is uniform in the radial direction is a common approximation used in lattice physics simulations. In this study, models without uniform radial void fraction are used and lattice criticality and pin powers are investigated in two ways. One way uses hypothetical models that reflect...
Sensitivity analysis (SA) and uncertainty quantification (UQ) are used to assess and improve engineering models. SA of fuel cell models can be challenging because of the large number of design parameters to optimize. Following the regular approach of manually testing the fuel cell outputs under changing each design parameter one at a time can be te...
Reliance on point estimates when developing hybrid energy systems can over/underestimate system performance. Analyzing the sensitivity and uncertainty of large-scale hybrid systems can be a challenging task due to the large number of design parameters to be explored. In this work, a comprehensive and efficient sensitivity/uncertainty methodology is...
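As a generic illustration of moving beyond point estimates and one-at-a-time sweeps, the sketch below screens a many-parameter model with Latin hypercube sampling and reports the output spread; the model is a placeholder, not the hybrid-energy system of the paper.

```python
# Space-filling sampling for quick uncertainty screening of a many-parameter model.
import numpy as np
from scipy.stats import qmc

n_params, n_samples = 10, 512
sampler = qmc.LatinHypercube(d=n_params, seed=42)
unit = sampler.random(n_samples)                       # samples in [0, 1]^d
lower, upper = np.zeros(n_params), np.ones(n_params) * 5.0
X = qmc.scale(unit, lower, upper)                      # map to physical ranges

def system_model(x):
    return x[0] ** 2 + 2.0 * x[1] + 0.1 * x[2:].sum()  # toy stand-in response

Y = np.apply_along_axis(system_model, 1, X)
print(Y.mean(), Y.std(), np.percentile(Y, [5, 95]))
```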
The continuous advancements in computer power and computational modeling through high-fidelity and multiphysics simulations add more challenges on assessing their predictive capability. In this work, metamodeling or surrogate modeling through deep Gaussian processes (DeepGP) is performed to construct surrogates of advanced computer simulations draw...
This work presents two different methods for quantifying and propagating the uncertainty associated with fuel composition at end of life for cask criticality calculations. The first approach, the computational approach, uses parametric uncertainty, including those associated with nuclear data, fuel geometry, material composition, and plant operation...
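A generic sketch of the sampling-based propagation idea follows: perturb uncertain nuclide concentrations, push each sample through a response model, and report the spread in the criticality response. The surrogate and uncertainties below are placeholders, not a cask model.

```python
# Monte Carlo propagation of composition uncertainty through a toy k-eff surrogate.
import numpy as np

rng = np.random.default_rng(7)
nominal = np.array([1.0, 0.8, 0.5, 0.3])         # toy nuclide concentrations
rel_sigma = np.array([0.02, 0.05, 0.03, 0.10])   # assumed relative uncertainties

def keff_surrogate(c):
    return 0.90 + 0.05 * c[0] - 0.02 * c[1] + 0.01 * c[2] - 0.005 * c[3]

samples = nominal * (1.0 + rel_sigma * rng.normal(size=(5000, 4)))
keff = np.apply_along_axis(keff_surrogate, 1, samples)
print(f"k-eff mean = {keff.mean():.5f}, std = {keff.std():.5f}")
```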
A novel and modern framework for energy modeling is developed in this paper with a focus on nuclear energy modeling and simulation. The framework combines multiphysics simulations and real data, with validation by uncertainty quantification tasks and facilitation by machine and deep learning methods. The hybrid framework is built on the basis of a...
A Bayesian model selection and UQ methodology is developed and applied to nuclear thermal-hydraulics codes and void fraction data in this paper. Uncertainties inherent in the experimental data along with the predictive and model-form uncertainty are quantified to construct a composite/hybrid model based on the competent models to predict the re...
Group Method of Data Handling (GMDH) is considered one of the earliest deep learning methods. Deep learning gained additional interest in today's applications due to its capability to handle complex and high dimensional problems. In this study, multi-layer GMDH networks are used to perform uncertainty quantification (UQ) and sensitivity analysis (S...
A new data-driven sampling-based framework was developed for uncertainty quantification (UQ) of the homogenized kinetic parameters calculated by lattice physics codes such as TRITON and Polaris. In this study, extension of the database for the delayed neutron data (DND) is performed by exploring more delayed neutron experiments and adding additiona...
A framework for model evaluation and uncertainty quantification (UQ) is presented with applications oriented to nuclear engineering simulation codes. Our framework is inspired by previous research on Bayesian statistics and model averaging. The methodology is demonstrated by performing UQ of three thermal-hydraulic computer codes used for two-ph...
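A minimal numpy sketch of Bayesian model averaging follows: each code's prediction is weighted by its Gaussian likelihood against measured data and the weighted predictions form a composite. The three "code" predictions and the data are synthetic placeholders.

```python
# Bayesian-model-averaging sketch with synthetic code predictions and data.
import numpy as np

data = np.array([0.42, 0.55, 0.61])                     # measured void fractions (toy)
sigma = 0.03                                            # assumed measurement noise
preds = {                                               # predictions from three codes
    "code_A": np.array([0.40, 0.53, 0.60]),
    "code_B": np.array([0.46, 0.58, 0.66]),
    "code_C": np.array([0.39, 0.50, 0.55]),
}

def log_likelihood(pred):
    return -0.5 * np.sum(((data - pred) / sigma) ** 2)

logL = np.array([log_likelihood(p) for p in preds.values()])
weights = np.exp(logL - logL.max())
weights /= weights.sum()                                # posterior weights (uniform prior)
composite = sum(w * p for w, p in zip(weights, preds.values()))
print(dict(zip(preds, weights.round(3))), composite)
```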
This paper analyzes a new method for reducing boiling water reactor fuel temperature during a Loss of Coolant Accident (LOCA). The method uses a device called Reverse Flow Restriction Device (RFRD) at the inlet of fuel bundles in the core to prevent coolant loss from the bundle inlet due to the reverse flow after a large break in the recirculation...
In this study, an analysis on burnup credit for cask criticality safety in BWR spent fuel is conducted. Accurate burnup credit can be used to reduce overly conservative safety margins to increase shipping and storage efficiency while maintaining criticality level within regulatory limits. This analysis is based on advanced lattice depletion models...
In this study, the Energy Absorption Buildup Factor (EABF) of different tissue-equivalent materials has been calculated using the Monte Carlo code MCNP5. The results of MCNP5 calculations are validated against the results obtained using the Geometric Progression (G-P) fitting method. The energy range for the gamma rays was 0.1-15 MeV and penetrati...
Analyzing the variance of complex computer models is an essential practice to assess and improve that model by identifying the influential parameters that cause the output variance. Variance-based sensitivity analysis is the process of decomposing the output variance into components associated with each input parameter. In this study, we applied a...
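For readers new to variance-based sensitivity analysis, the snippet below computes first- and total-order Sobol indices with SALib on the standard Ishigami test function; this is standard SALib usage, not the lattice model studied in the paper.

```python
# Sobol sensitivity indices with SALib on the Ishigami benchmark.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol
from SALib.test_functions import Ishigami

problem = {
    "num_vars": 3,
    "names": ["x1", "x2", "x3"],
    "bounds": [[-np.pi, np.pi]] * 3,
}
X = saltelli.sample(problem, 1024)          # Saltelli design for Sobol estimation
Y = Ishigami.evaluate(X)
Si = sobol.analyze(problem, Y)
print(Si["S1"], Si["ST"])                   # first-order and total-order indices
```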
Delayed neutrons, which are described by kinetic parameters, are significant for nuclear reactor operation as they make nuclear reactors controllable. Modern lattice physics codes, such as TRITON and Polaris in the SCALE code system, generate kinetic parameters for a given material composition. The calculation is performed by summation of the delayed ne...
In this study, we applied Sobol indices to decompose the variance in the homogenized kinetic parameters, to identify the key precursor groups and delayed neutron data that contribute to the kinetic parameters' variance. Based on our previously developed framework, we selected two kinetic parameter responses in which variance-based sensitivity anal...
This study seeks to compare two approaches for uncertainty quantification (UQ) of the nuclide/isotopic composition into the criticality calculations of spent fuel casks. The first approach uses input parameter uncertainties for nuclear data libraries, fuel design, and plant operation data. These uncertainties are based on experimental data and prio...
A new concept inspired by game theory principles proposed by Shapley is used to decompose the output variance in lattice reactivity. The concept is called Shapley effect, and it is used to perform variance-based sensitivity analysis of the homogenized cross-sections based on a PWR lattice depletion problem. The Shapley effect is compared to Sobol i...
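To make the Shapley-effect idea concrete, the toy example below uses an additive model with independent inputs, where the value of a subset S is Var(E[Y | X_S]) and each input's Shapley effect is its average marginal contribution over all orderings; in this additive case it coincides with the Sobol first-order contribution, so the result is easy to verify by hand. This is an illustration of the concept, not the paper's lattice depletion application.

```python
# Exact Shapley effects for a small additive model with independent inputs.
from itertools import permutations

a = {"x1": 2.0, "x2": 1.0, "x3": 0.5}     # Y = 2*x1 + 1*x2 + 0.5*x3
var = {"x1": 1.0, "x2": 4.0, "x3": 9.0}   # independent input variances

def v(subset):
    # Var(E[Y | X_S]) for an additive model with independent inputs
    return sum(a[i] ** 2 * var[i] for i in subset)

names = list(a)
shapley = {i: 0.0 for i in names}
perms = list(permutations(names))
for perm in perms:
    seen = set()
    for i in perm:
        shapley[i] += v(seen | {i}) - v(seen)   # marginal contribution of input i
        seen.add(i)
shapley = {i: s / len(perms) for i, s in shapley.items()}

total_var = v(names)
print({i: round(s / total_var, 3) for i, s in shapley.items()})  # normalized effects
```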