Numerical simulations for engineering applications solve partial differential equations (PDEs) to model physical processes. Traditional PDE solvers are very accurate but computationally costly. Machine learning (ML) methods, on the other hand, offer significant computational speedups but face challenges with accuracy and with generalization to different PDE conditions, such as geometry, boundary conditions, initial conditions, and PDE source terms. In this work, we propose a novel ML-based approach, CoAE-MLSim (Composable AutoEncoder Machine Learning Simulation): an unsupervised, lower-dimensional, local method motivated by key ideas used in commercial PDE solvers. This allows our approach to learn well from relatively few samples of PDE solutions. The proposed ML approach is compared against commercial solvers, for stronger benchmarks, as well as against recent ML approaches for solving PDEs. It is tested on a variety of complex engineering cases to demonstrate its computational speed, accuracy, scalability, and generalization across different PDE conditions. The results show that our approach captures the physics accurately across all comparison metrics, including measures such as results on section cuts and lines.
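CoAE-MLSim itself learns nonlinear autoencoders; as a hedged illustration of the "unsupervised, lower-dimensional, local" idea only, the sketch below compresses local subdomain patches of a synthetic solution field with a linear autoencoder (PCA via SVD). The field, patch size, and latent dimension are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Synthetic 2-D "solution field" (e.g. a smooth scalar such as temperature);
# a stand-in for a PDE solution, not data from the paper.
x = np.linspace(0.0, 1.0, 64)
field = np.sin(2 * np.pi * x)[:, None] * np.cos(2 * np.pi * x)[None, :]

# Split the 64x64 domain into 8x8 local subdomains and flatten each one,
# so every row of `patches` is one subdomain's solution.
patches = field.reshape(8, 8, 8, 8).transpose(0, 2, 1, 3).reshape(64, 64)

# Linear "autoencoder": PCA via SVD, keeping k latent dimensions per subdomain.
k = 4
mean = patches.mean(axis=0)
U, S, Vt = np.linalg.svd(patches - mean, full_matrices=False)
encode = lambda p: (p - mean) @ Vt[:k].T      # 64 values -> k latents
decode = lambda z: z @ Vt[:k] + mean          # k latents -> 64 values

latents = encode(patches)                     # (64 subdomains, k latents)
recon = decode(latents)
err = np.abs(recon - patches).max()
```

Because this smooth field is effectively low-rank per subdomain, even k = 4 latents reconstruct it almost exactly; the composability in CoAE-MLSim comes from stitching such local latent solutions together subject to the PDE.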
Optimizing the parameters of partial differential equations (PDEs), i.e., PDE-constrained optimization (PDE-CO), allows us to model natural systems from observations or to perform rational design of structures with complicated mechanical, thermal, or electromagnetic properties. However, PDE-CO is often computationally prohibitive because the PDE must be solved, typically via finite element analysis (FEA), at each step of the optimization procedure. In this paper we propose amortized finite element analysis (AmorFEA), in which a neural network learns to produce accurate PDE solutions while preserving many of the advantages of traditional finite element methods. This network is trained to directly minimize the potential energy from which the PDE and the finite element method are derived, avoiding the need to generate costly supervised training data by solving PDEs with traditional FEA. Since FEA is a variational procedure, AmorFEA is a direct analogue of popular amortized inference approaches in latent variable models, with the finite element basis acting as the variational family. AmorFEA can perform PDE-CO without repeatedly solving the associated PDE, accelerating optimization compared to a traditional workflow using FEA and the adjoint method.
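The core trick in this abstract, training an amortized model by minimizing potential energy rather than matching FEA labels, can be sketched in a few lines. The sketch below is an assumption-laden toy: a 1-D Poisson problem, a linear map standing in for the paper's neural network, and plain gradient descent; none of these choices come from AmorFEA itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D Poisson problem -u'' = f on (0,1) with u(0) = u(1) = 0, discretized by
# P1 finite elements on n interior nodes: stiffness A, lumped load h^2 f.
n = 15
h = 1.0 / (n + 1)
A = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))

# Amortization model (a linear map here; a neural network in the paper):
# u = W f, trained to minimize the potential energy
#     J(u) = 1/2 u^T A u - h^2 u^T f
# over random sources f -- no supervised FEA solutions are ever generated.
W = np.zeros((n, n))
F = rng.normal(size=(256, n))          # random training sources
lr = 0.2
for _ in range(3000):
    U = F @ W.T                        # amortized predictions u = W f
    G = U @ A.T - (h * h) * F          # dJ/du = A u - h^2 f, per sample
    W -= lr * (G.T @ F) / len(F)       # chain rule through u = W f

# Compare with a direct FE solve on a held-out source.
f = rng.normal(size=n)
u_pred = W @ f
u_fea = np.linalg.solve(A, h * h * f)
```

The stationary point of the energy objective is exactly the FE solution, which is why the energy-trained map matches the direct solve without ever seeing one during training.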
Reynolds-averaged Navier-Stokes (RANS) equations are widely used in engineering simulations of turbulent flow. However, the mean flow fields predicted by RANS solvers can exhibit large discrepancies due to uncertainties in the modeled Reynolds stresses. Recently, Wang et al. demonstrated that machine learning can improve RANS-modeled Reynolds stresses by leveraging data from high-fidelity simulations (Physics-informed machine learning approach for reconstructing Reynolds stress modeling discrepancies based on DNS data. Physical Review Fluids, 2, 034603, 2017). However, solving for mean flows from the machine-learning-predicted Reynolds stresses still poses significant challenges. The present work is a critical extension of Wang et al. (2017): it enables the machine learning model to yield improved predictions not only of the Reynolds stresses but also of the mean velocities derived from them. Such a development is of profound practical importance, because it is often the velocities and the derived quantities (e.g., drag, lift, surface friction), not the Reynolds stresses per se, that are the ultimate quantities of interest in RANS simulations. The present work has two innovations. First, we demonstrate a systematic procedure for generating mean flow features based on the integrity basis of a set of mean flow tensors, in contrast to the ad hoc choice of features in Wang et al. (2017). Second, we propose using machine learning to predict the linear and nonlinear parts of the Reynolds stress tensor separately. Inspired by the finite polynomial representation of tensors in classical turbulence modeling, this decomposition is instrumental in overcoming the instability of the RANS equations. Several test cases demonstrate the merits of the proposed approach.
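The linear/nonlinear split the abstract describes can be illustrated mechanically: the linear (eddy-viscosity) part of the Reynolds stress is the component aligned with the mean strain rate, and the nonlinear part is the remainder. The sketch below uses synthetic tensors and a least-squares eddy viscosity; this is a hedged illustration of the decomposition, not the paper's ML procedure for predicting the two parts.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic symmetric "Reynolds stress" tau and deviatoric mean strain rate S
# (3x3); in practice both would come from a high-fidelity simulation.
A = rng.normal(size=(3, 3))
S = 0.5 * (A + A.T)
S -= (np.trace(S) / 3.0) * np.eye(3)          # make S trace-free
T = rng.normal(size=(3, 3))
tau = 0.5 * (T + T.T)

k = 0.5 * np.trace(tau)                       # turbulent kinetic energy
aniso = tau - (2.0 / 3.0) * k * np.eye(3)     # anisotropic part of tau

# Linear (eddy-viscosity) part: tau_L = (2k/3) I - 2 nu_t S, with nu_t fixed
# by least-squares projection of the anisotropy onto S.
nu_t = -np.sum(aniso * S) / (2.0 * np.sum(S * S))
tau_lin = (2.0 / 3.0) * k * np.eye(3) - 2.0 * nu_t * S
tau_nonlin = tau - tau_lin                    # nonlinear remainder
```

By construction the nonlinear remainder is orthogonal to S, which is the sense in which it captures what no eddy-viscosity (linear) closure can represent; treating it separately is what stabilizes the subsequent RANS solve.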
We explore an original strategy for building deep networks, based on stacking layers of denoising autoencoders which are trained locally to denoise corrupted versions of their inputs. The resulting algorithm is a straightforward variation on the stacking of ordinary autoencoders. It is however shown on a benchmark of classification problems to yield significantly lower classification error, thus bridging the performance gap with deep belief networks (DBN), and in several cases surpassing it. Higher level representations learnt in this purely unsupervised fashion also help boost the performance of subsequent SVM classifiers. Qualitative experiments show that, contrary to ordinary autoencoders, denoising autoencoders are able to learn Gabor-like edge detectors from natural image patches and larger stroke detectors from digit images. This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations.
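The paper's stacked denoising autoencoders are nonlinear networks trained layer by layer; purely as a hedged, minimal illustration of the denoising criterion itself, the sketch below trains a single linear layer in closed form to map corrupted inputs back to clean ones. The data, noise level, and closed-form fit are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Clean data living on a low-dimensional subspace (rank 2 inside R^10).
Z = rng.normal(size=(5000, 2))
B = rng.normal(size=(2, 10))
X = Z @ B

# Denoising criterion: corrupt the inputs, then learn to recover the clean data.
sigma = 0.5
Xn = X + sigma * rng.normal(size=X.shape)

# One linear "denoising autoencoder" layer, fit in closed form:
# W = argmin_W ||Xn W - X||^2 (in expectation this shrinks directions that
# carry little signal, pulling noisy points back toward the data subspace).
W, *_ = np.linalg.lstsq(Xn, X, rcond=None)

recon = Xn @ W
noise_before = np.linalg.norm(Xn - X)
noise_after = np.linalg.norm(recon - X)
```

Even this one linear layer removes most of the corruption, because denoising forces the learned map to encode the structure of the clean data; stacking nonlinear layers trained the same way yields the higher-level representations the abstract describes.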
We propose to formulate physical reasoning and manipulation planning as an optimization problem that integrates first-order logic, which we call Logic-Geometric Programming.
Kiwon Um, Philipp Holl, Robert Brand, Nils Thuerey, et al. Solver-in-the-loop: Learning from differentiable physics to interact with iterative PDE solvers. arXiv preprint arXiv:2007.00016, 2020.
Nils Wandel, Michael Weinmann, and Reinhard Klein. Learning incompressible fluid dynamics from scratch towards fast, differentiable fluid models that generalize. arXiv preprint arXiv:2006.08762, 2020.
Hengjie Wang, Robert Planas, Aparna Chandramowlishwaran, and Ramin Bostanabad. Train once and use forever: Solving boundary value problems in unseen domains with pre-trained deep learning models. arXiv preprint arXiv:2104.10873, 2021.
Wujie Wang, Simon Axelrod, and Rafael Gómez-Bombarelli. Differentiable molecular simulations for control and learning. arXiv preprint arXiv:2003.00868, 2020.
Steffen Wiewel, Byungsoo Kim, Vinicius C Azevedo, Barbara Solenthaler, and Nils Thuerey. Latent space subdivision: stable and controllable time predictions for fluid flow. In Computer Graphics Forum, volume 39, pp. 15-25. Wiley Online Library, 2020.