Karen Willcox’s research while affiliated with University of Texas at Austin and other places

Publications (113)


Fig. 1 | Personalized health trajectory predictions using healthcare digital twins for guiding clinical decisions. Digital twins are updated periodically using clinical and ambulatory data. Compared to traditional healthcare, digital twins provide optimized interventions with quantified uncertainty, reducing decision-making uncertainty.
Fig. 2 | Digital twin elements in the context of cardiovascular health. Figure adapted from National Academies of Sciences, Engineering, and Medicine, Foundational Research Gaps and Future Directions for Digital Twins (2023) [2].
Fig. 3 | Application of the VVUQ processes across different stages of a digital twin's lifecycle, from model construction and parameter estimation to simulating treatment paradigms and disease progression. The integration of patient-specific medical history and clinical data into a computational model enables the prediction of the patient's health trajectory from their current health state. The digital twin also predicts future health progression under different intervention scenarios and disease models (i, ii, iii). The VVUQ processes ensure the trustworthiness and accuracy of the model's updates as new disease and treatment paradigms are incorporated. MRI, 3D structural mesh, and cardiac fluid dynamic simulation images are reproduced from Campbell-Washburn et al. (NIH Image Gallery) [48] and Milosevic et al. [49], respectively, under the terms and conditions of the Creative Commons Attribution (CC-BY) License.
Survey and perspective on verification, validation, and uncertainty quantification of digital twins for precision medicine
  • Literature Review
  • Full-text available

January 2025 · 18 Reads · npj Digital Medicine

Andrea Hawkins-Daarud · [...] · Roozbeh Jafari

Multifidelity uncertainty quantification for ice sheet simulations

January 2025 · 10 Reads · Computational Geosciences

Ice sheet simulations suffer from vast parametric uncertainties, such as the basal sliding boundary condition or geothermal heat flux. Quantifying the resulting uncertainties in predictions is of utmost importance to support judicious decision-making, but high-fidelity simulations are too expensive to embed within uncertainty quantification (UQ) computations. UQ methods typically employ Monte Carlo simulation to estimate statistics of interest, which requires hundreds (or more) of ice sheet simulations. Cheaper low-fidelity models are readily available (e.g., approximated physics, coarser meshes), but replacing the high-fidelity model with a lower fidelity surrogate introduces bias, which means that UQ results generated with a low-fidelity model cannot be rigorously trusted. Multifidelity UQ retains the high-fidelity model but expands the estimator to shift computations to low-fidelity models, while still guaranteeing an unbiased estimate. Through this exploitation of multiple models, multifidelity estimators guarantee a target accuracy at reduced computational cost. This paper presents a comprehensive multifidelity UQ framework for ice sheet simulations. We present three multifidelity UQ approaches—Multifidelity Monte Carlo, Multilevel Monte Carlo, and the Best Linear Unbiased Estimator—that enable tractable UQ for continental-scale ice sheet simulations. We demonstrate the techniques on a model of the Greenland ice sheet to estimate the 2015-2050 ice mass loss, verify their estimates through comparison with Monte Carlo simulations, and give a comparative performance analysis. For a target accuracy equivalent to 1 mm sea level rise contribution at 95 % confidence, the multifidelity estimators achieve computational speedups of two orders of magnitude.
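The core technique is the multifidelity Monte Carlo control-variate estimator. Below is a minimal two-model sketch, assuming hypothetical stand-in models f_hi and f_lo rather than the actual ice sheet solvers; it illustrates how low-fidelity samples shift work away from the high-fidelity model while the correction term preserves unbiasedness.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for an expensive high-fidelity model and a cheap,
# biased low-fidelity approximation of the same quantity of interest.
def f_hi(theta):
    return np.sin(theta) + 0.05 * theta**2

def f_lo(theta):
    return np.sin(theta)

# Sample the uncertain parameter (e.g., a basal sliding coefficient):
# few expensive runs, many cheap ones, with the expensive samples shared.
n_hi, n_lo = 50, 5000
theta_lo = rng.normal(0.0, 1.0, n_lo)
theta_hi = theta_lo[:n_hi]

y_hi = f_hi(theta_hi)
y_lo_all = f_lo(theta_lo)
y_lo_shared = y_lo_all[:n_hi]

# Control-variate weight from the sample covariance on the shared runs.
alpha = np.cov(y_hi, y_lo_shared)[0, 1] / np.var(y_lo_shared, ddof=1)

# Multifidelity estimate: high-fidelity mean plus a low-fidelity correction
# whose expectation is zero, so the high-fidelity model stays the reference.
est_mf = y_hi.mean() + alpha * (y_lo_all.mean() - y_lo_shared.mean())
est_mc = y_hi.mean()   # plain Monte Carlo with the same high-fidelity budget
print(est_mf, est_mc)
```

The accuracy of f_lo affects only the variance reduction, not the bias of the estimate, which is what makes aggressive low-fidelity approximations safe to exploit.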


Fig. 3 FFD box controls the airfoil shape via the control points (black dots)
Fig. 4 SVD on 80 m1 training data showing the singular values and the cumulative energy retained as defined by Eq. (2)
Fig. 5 SVD on 100 m2 and 100 m3 training data respectively showing the singular values and the cumulative energy retained as defined by Eq. (2)
Aerodynamic design requirements to be explored in the inverse design map
Improving neural network efficiency with multifidelity and dimensionality reduction techniques

Design problems in aerospace engineering often require numerous evaluations of expensive-to-evaluate high-fidelity models, resulting in prohibitive computational costs. One way to address the computational cost is through building surrogates, such as deep neural networks (DNNs). However, DNNs may only be an effective surrogate when sufficient evaluations of the high-fidelity model are required such that the up-front training cost is amortized, or in situations that require real-time responses (such as interactive visualizations). Typically, the data requirements for adequately accurate training of DNNs are often impractical for engineering applications. To alleviate this issue, the proposed work utilizes output dimensionality reduction along with information from multiple models of varying fidelities and cost to develop accurate projection-enabled multifidelity neural networks (MF-NNs) with limited training samples. The dimensionality reduction leads to a more parsimonious network and the multifidelity aspect adds more training data from lower-cost, lower-fidelity models. Three approaches for MF-NNs that leverage proper orthogonal decomposition based projections are introduced: (i) pre-training method, (ii) additive method, and (iii) multi-step method. The MF-NN is applied to approximate the optimal design of 2D aerodynamic airfoils given the performance and design requirements. The MF-NN leads to ∼ 27% computational cost reduction compared to single-fidelity neural networks at the same accuracy (90%), with the multi-step approach performing the best for this application.
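The dimensionality reduction referenced in Figs. 4 and 5 is a proper orthogonal decomposition (truncated SVD) of the training outputs, with the number of retained modes chosen by a cumulative-energy criterion. A minimal sketch under those assumptions, using a synthetic snapshot matrix in place of the airfoil training data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for high-dimensional training outputs:
# each column is one output snapshot (e.g., a pressure distribution).
n_outputs, n_snapshots = 2000, 80
snapshots = rng.standard_normal((n_outputs, 5)) @ rng.standard_normal((5, n_snapshots))

# Proper orthogonal decomposition via the thin SVD.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)

# Retain the smallest number of modes whose cumulative energy exceeds a
# tolerance, mirroring the cumulative-energy criterion in the figures.
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99) + 1)
basis = U[:, :r]

# The network then predicts the r POD coefficients instead of the full
# n_outputs-dimensional field, giving a far more parsimonious output layer.
coeffs = basis.T @ snapshots
print(r, coeffs.shape)
```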


Multifidelity Uncertainty Quantification for Ice Sheet Simulations

December 2024 · 22 Reads

Ice sheet simulations suffer from vast parametric uncertainties, such as the basal sliding boundary condition or geothermal heat flux. Quantifying the resulting uncertainties in predictions is of utmost importance to support judicious decision-making, but high-fidelity simulations are too expensive to embed within uncertainty quantification (UQ) computations. UQ methods typically employ Monte Carlo simulation to estimate statistics of interest, which requires hundreds (or more) of ice sheet simulations. Cheaper low-fidelity models are readily available (e.g., approximated physics, coarser meshes), but replacing the high-fidelity model with a lower fidelity surrogate introduces bias, which means that UQ results generated with a low-fidelity model cannot be rigorously trusted. Multifidelity UQ retains the high-fidelity model but expands the estimator to shift computations to low-fidelity models, while still guaranteeing an unbiased estimate. Through this exploitation of multiple models, multifidelity estimators guarantee a target accuracy at reduced computational cost. This paper presents a comprehensive multifidelity UQ framework for ice sheet simulations. We present three multifidelity UQ approaches -- Multifidelity Monte Carlo, Multilevel Monte Carlo, and the Best Linear Unbiased Estimator -- that enable tractable UQ for continental-scale ice sheet simulations. We demonstrate the techniques on a model of the Greenland ice sheet to estimate the 2015-2050 ice mass loss, verify their estimates through comparison with Monte Carlo simulations, and give a comparative performance analysis. For a target accuracy equivalent to 1 mm sea level rise contribution at 95% confidence, the multifidelity estimators achieve computational speedups of two orders of magnitude.
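To complement the control-variate view, here is a minimal sketch of the multilevel Monte Carlo telescoping-sum estimator named in the abstract, with a hypothetical coarse-to-fine model hierarchy standing in for ice sheet solvers on nested meshes:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical model hierarchy: level 0 is a coarse approximation,
# higher levels shrink the discretization error.
def model(theta, level):
    return np.sin(theta) + 0.5 ** (level + 1) * np.cos(theta)

levels = [0, 1, 2]
samples_per_level = [4000, 400, 40]   # fewer samples where runs are expensive

estimate = 0.0
for level, n in zip(levels, samples_per_level):
    theta = rng.normal(0.0, 1.0, n)
    if level == 0:
        correction = model(theta, 0)
    else:
        # Coupled fine/coarse evaluations on the SAME samples, so the
        # telescoping differences have small variance.
        correction = model(theta, level) - model(theta, level - 1)
    estimate += correction.mean()

# The level sums telescope to the finest-level expectation, so the estimate
# is unbiased up to the finest level's discretization error.
print(estimate)
```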


Real-time aerodynamic load estimation for hypersonics via strain-based inverse maps

August 2024 · 55 Reads

This work develops an efficient real-time inverse formulation for inferring the aerodynamic surface pressures on a hypersonic vehicle from sparse measurements of the structural strain. The approach aims to provide real-time estimates of the aerodynamic loads acting on the vehicle for ground and flight testing, as well as guidance, navigation, and control applications. Specifically, the approach targets hypersonic flight conditions where direct measurement of the surface pressures is challenging due to the harsh aerothermal environment. For problems employing a linear elastic structural model, we show that the inference problem can be posed as a least-squares problem with a linear constraint arising from a finite element discretization of the governing elasticity partial differential equation. Due to the linearity of the problem, an explicit solution is given by the normal equations. Pre-computation of the resulting inverse map enables rapid evaluation of the surface pressure and corresponding integrated quantities, such as the force and moment coefficients. The inverse approach additionally allows for uncertainty quantification, providing insights for theoretical recoverability and robustness to sensor noise. Numerical studies demonstrate the estimator performance for reconstructing the surface pressure field, as well as the force and moment coefficients, for the Initial Concept 3.X (IC3X) conceptual hypersonic vehicle.
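The key pattern is a precomputed linear inverse map: because strain depends linearly on the pressure parameters under a linear elastic model, the least-squares solution reduces to a single matrix applied to each strain measurement. A minimal sketch, using a random matrix as a hypothetical stand-in for the finite-element strain sensitivity:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical linear forward map: strain = G @ pressure_coefficients.
# In the paper this arises from a finite element discretization of the
# linear elasticity equations; here G is a random stand-in.
n_sensors, n_pressure_modes = 60, 12
G = rng.standard_normal((n_sensors, n_pressure_modes))

# Offline: precompute the least-squares inverse map (the normal-equations
# solution, computed here via the pseudoinverse for numerical robustness).
inverse_map = np.linalg.pinv(G)        # (n_pressure_modes, n_sensors)

# Online: a single matrix-vector product per strain measurement,
# cheap enough for real-time load estimation.
true_pressure = rng.standard_normal(n_pressure_modes)
strain = G @ true_pressure + 1e-3 * rng.standard_normal(n_sensors)  # noisy sensors
estimated_pressure = inverse_map @ strain

print(np.linalg.norm(estimated_pressure - true_pressure))
```

Integrated quantities such as force and moment coefficients are then linear in the estimated pressure coefficients, so they too reduce to precomputed vector products.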



Digital twins in mechanical and aerospace engineering

March 2024 · 189 Reads · 23 Citations · Nature Computational Science

Digital twins bring value to mechanical and aerospace systems by speeding up development, reducing risk, predicting issues and reducing sustainment costs. Realizing these benefits at scale requires a structured and intentional approach to digital twin conception, design, development, operation and sustainment. To bring maximal value, a digital twin does not need to be an exquisite virtual replica but instead must be envisioned to be fit for purpose, where the determination of fitness depends on the capability needs and the cost-benefit trade-offs.


Learning physics-based reduced-order models from data using nonlinear manifolds

March 2024 · 73 Reads · 11 Citations

We present a novel method for learning reduced-order models of dynamical systems using nonlinear manifolds. First, we learn the manifold by identifying nonlinear structure in the data through a general representation learning problem. The proposed approach is driven by embeddings of low-order polynomial form. A projection onto the nonlinear manifold reveals the algebraic structure of the reduced-space system that governs the problem of interest. The matrix operators of the reduced-order model are then inferred from the data using operator inference. Numerical experiments on a number of nonlinear problems demonstrate the generalizability of the methodology and the increase in accuracy that can be obtained over reduced-order modeling methods that employ a linear subspace approximation.
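In its simplest form, the low-order polynomial embedding is a quadratic correction to a linear POD subspace, with the reduced operators then fit to data by least squares (operator inference). A minimal sketch under those assumptions, on synthetic snapshots:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic snapshot data: columns are states of a stand-in dynamical system.
n, k, r = 500, 200, 3
X = rng.standard_normal((n, 8)) @ rng.standard_normal((8, k))

# Linear part of the manifold: leading POD basis and reduced coordinates.
U, _, _ = np.linalg.svd(X, full_matrices=False)
V = U[:, :r]
q = V.T @ X                                   # shape (r, k)

# Quadratic features of the reduced coordinates (non-redundant products).
Q2 = np.vstack([q[i] * q[j] for i in range(r) for j in range(i, r)])

# Quadratic manifold: x ≈ V q + Vbar (q ⊗ q), with Vbar fit by least squares
# to the residual left over by the linear subspace.
residual = X - V @ q
Vbar = residual @ np.linalg.pinv(Q2)

# Operator inference (sketch): fit reduced operators [A  H] so that the
# reduced time derivatives are approximated by A q + H (q ⊗ q).
dq = np.gradient(q, axis=1)                   # stand-in for dq/dt data
D = np.vstack([q, Q2])
ops = dq @ np.linalg.pinv(D)                  # shape (r, r + r(r+1)/2)

print(Vbar.shape, ops.shape)
```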


Generalized Multifidelity Active Learning for Gaussian-process-based Reliability Analysis

February 2024 · 21 Reads · 1 Citation · Lecture Notes in Computer Science

Efficient methods for active learning in complex physical systems are essential for achieving the two-way interaction between data and models that underlies DDDAS. This work presents a two-stage multifidelity active learning method for Gaussian-process-based reliability analysis. In the first stage, the method allows for the flexibility of using any single-fidelity acquisition function for failure boundary identification when selecting the next sample location. We demonstrate the generalized multifidelity method using the existing acquisition functions of expected feasibility, U-learning, and targeted integrated mean square error, or their a priori Monte Carlo sampled variants. The second stage uses a weighted information-gain-based criterion for the fidelity model selection. The multifidelity method leads to significant computational savings over the single-fidelity versions for real-time reliability analysis involving expensive physical system simulations.
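The single-fidelity acquisition functions listed above all score candidate points from the Gaussian process posterior; U-learning is the simplest. A minimal sketch, assuming the posterior mean and standard deviation of the limit-state surrogate at Monte Carlo candidate points are already available (toy values here):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy GP posterior over Monte Carlo candidate points for a limit-state
# function g(x); failure corresponds to g(x) <= 0. In practice these come
# from a trained Gaussian process surrogate.
mu = rng.normal(0.5, 1.0, 10_000)        # posterior mean of g at candidates
sigma = rng.uniform(0.05, 0.5, 10_000)   # posterior standard deviation

# U-learning acquisition: a small U means the sign of g (safe vs. failed)
# is most uncertain, so that candidate is the most informative to evaluate.
U = np.abs(mu) / sigma
next_index = int(np.argmin(U))

# Common stopping rule: min U >= 2 (roughly 97.7% confidence in the sign).
converged = U[next_index] >= 2.0
print(next_index, U[next_index], converged)
```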



Citations (72)


... To provide further evidence about the improvement gains of NPF-Net, we will, in the future, apply that approach to other applications for which we have in hand nonlocal models and their discretizations and which incorporate phase fields. Specifically, we will first consider alloy solidification models [8,10,30] and superconductivity models [12,13,16,19] to which we apply NPF-Net methodology. These activities will provide further evidence about the gains effected by using that methodology. ...

Reference:

An End-to-End Deep Learning Method for Solving Nonlocal Allen-Cahn and Cahn-Hilliard Phase-Field Models
Multifidelity methods for uncertainty quantification of a nonlocal model for phase changes in materials
  • Citing Article
  • July 2024

Computers & Structures

... It can be assumed that new developments will have a profound impact on strategic and operational managers in organizations. Technologies still in the early stages of development and introduction, such as digital twins and artificial intelligence for, e.g., the development of new materials and design and manufacturing processes, will have a massive impact on industrial design and manufacturing processes [44][45][46]. The combination of these technologies has not yet been considered in detail by research. ...

Digital twins in mechanical and aerospace engineering
  • Citing Article
  • March 2024

Nature Computational Science

... We propose to use quadratic approximations with Neural Galerkin schemes in the following: Quadratic manifolds have been first used in model reduction in [8,30] and since then have led to a series of works that address intrusive and non-intrusive reduced modeling with quadratic manifolds as well as constructing quadratic manifolds from training data [9,10,31,32,33,34,35,36,37,38,39,40]. The work [10] learns reduced models on quadratic manifolds from data in a non-intrusive way, i.e., without requiring intrusive access to the underlying full-model solver that generates the training data. ...

Learning physics-based reduced-order models from data using nonlinear manifolds

... We propose to use quadratic approximations with Neural Galerkin schemes in the following: Quadratic manifolds have been first used in model reduction in [8,30] and since then have led to a series of works that address intrusive and non-intrusive reduced modeling with quadratic manifolds as well as constructing quadratic manifolds from training data [9,10,31,32,33,34,35,36,37,38,39,40]. The work [10] learns reduced models on quadratic manifolds from data in a non-intrusive way, i.e., without requiring intrusive access to the underlying full-model solver that generates the training data. ...

Learning Latent Representations in High-Dimensional State Spaces Using Polynomial Manifold Constructions
  • Citing Conference Paper
  • December 2023

... For example, Kalman filters have been used for measuring uncertainties in clinical hemodynamic observations (e.g., blood vessel diameter) from imaging data (e.g., MRI) for circulation models [27,28]. Markov chain Monte Carlo methods have been employed for estimating parameter distributions from brain MRI to tailor patient-specific radiotherapy regimens [29]. Similarly, several Monte Carlo-based methods have been proposed to manage uncertainties probabilistically during the calibration of electrophysiology (EP) models, such as identifying ablation targets for atrial fibrillation (AFib) treatment from electrocardiogram data [30], and cardiac mechanics models, like determining patient-specific parameters for estimating stroke volume, ejection fraction, and left-ventricular ejection time from echocardiography and blood pressure data [31]. ...

Predictive digital twin for optimizing patient-specific radiotherapy regimens under uncertainty in high-grade gliomas

Frontiers in Artificial Intelligence

... as train stations or chemistry plants in the energy sector. In emergency situations, real-time predictions of contaminant dispersion are urgently needed for informed decision-making, e.g., in an evacuation scenario [3]. To respond to this need, the digital twinning of a built environment for chemical accident response is proposed in this paper. ...

From Data to Decisions: A Real-Time Measurement–Inversion–Prediction–Steering Framework for Hazardous Events and Health Monitoring
  • Citing Chapter
  • March 2023

... Reduced-order models (ROMs) [54] are indispensable tools for accelerating high-fidelity simulations of complex physical systems by projecting these systems onto lower-dimensional subspaces, thereby reducing computational costs while maintaining sufficient accuracy for real-time applications, uncertainty quantification, and optimization. One of the earliest techniques in dimensionality reduction is Principal Component Analysis (PCA) [55], which identifies the principal directions of variance in the data by finding orthogonal eigenvectors of the covariance matrix. ...

Learning high-dimensional parametric maps via reduced basis adaptive residual networks
  • Citing Article
  • December 2022

Computer Methods in Applied Mechanics and Engineering

... Projection-based methods rely on the assumption of an intrinsic low dimensionality of the solution manifold, which makes their application to transport-dominated dynamics challenging [40]. Several methods have been proposed to tackle this issue, namely [40,39,48,14]; however, making predictions on transport-dominated problems remains a challenging task because the singular values associated with the training data may decay slowly and thus require large reduced orders, or because the system's predictions evolve beyond the span of the training snapshots. ...

Operator inference for non-intrusive model reduction with quadratic manifolds
  • Citing Article
  • January 2023

Computer Methods in Applied Mechanics and Engineering

... 1. HPROMs require intrusive access to the source code to collect the intrusive snapshots of nonlinear operators [15,16], leading to challenges for most users of commercial software. 2. The dependence of PROMs and HPROMs on numerical calculation models leads to challenges for most users of commercial software. ...

Learning physics-based models from data: perspectives from inverse problems and model reduction

Acta Numerica

... While most projection-based ROMs are intrusive, maintaining extrapolation robustness and high accuracy with less training data, they necessitate access to the numerical solver and a detailed understanding of specific implementation [20,25,29,30]. On the other hand, non-intrusive ROMs [31][32][33][34][35][36][37][38][39] are purely data-driven, independent of governing equations of physics and the high-fidelity physical solver. ...

Stress-constrained topology optimization of lattice-like structures using component-wise reduced order models
  • Citing Article
  • October 2022

Computer Methods in Applied Mechanics and Engineering