Figure 7 - uploaded by Michael A. Christie
Viscous Fingering in a Realization of Porous Media Flow Low-viscosity gas (purple) is injected into a reservoir to displace higher-viscosity oil (red). The displacement is unstable and the gas fingers into the oil, reducing recovery efficiency. 

Source publication
Article
Full-text available
Large-scale computer-based simulations are being used increasingly to predict the behavior of complex systems. Prime examples include the weather, global climate change, the performance of nuclear weapons, the flow through an oil reservoir, and the performance of advanced aircraft. Simulations invariably involve theory, experimental data, and numer...

Context in source publication

Context 1
... the injected gas has lower viscosity than the oil, the displacement process is unstable and viscous fingers develop (see Figure 7). The phenomenon is similar to the Rayleigh-Taylor instability of a dense fluid on top of a less dense fluid. ...
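For orientation (this criterion is standard reservoir-engineering background rather than text from the source article, and the relative-permeability symbols are assumptions), the onset of such fingering is usually discussed in terms of the mobility ratio between the displacing gas and the displaced oil:

$$ M = \frac{k_{r,g}/\mu_g}{k_{r,o}/\mu_o}, \qquad M > 1 \;\Rightarrow\; \text{unstable displacement (viscous fingering)} $$

For a gas with much lower viscosity than the oil, M lies well above unity, which is why the front in Figure 7 breaks up into fingers.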

Similar publications

Article
Full-text available
Block methods that approximate the solution at several points in block form are commonly used to solve higher order differential equations. Inspired by the literature and ongoing research in this field, this paper intends to explore a new derivation of block backward differentiation formula that employs an independent parameter to provide sufficient a...
Preprint
Full-text available
We introduce a family of various finite volume discretization schemes for the Fokker--Planck operator, which are characterized by different weight functions on the edges. This family particularly includes the well-established Scharfetter--Gummel discretization as well as the recently developed square-root approximation (SQRA) scheme. We motivate th...
Article
Full-text available
A simple method to measure the resistance of a sensor and convert it into digital information in a programmable digital device is by using a direct interface circuit. This type of circuit deduces the value of the resistor based on the discharge time through it for a capacitor of a known value. Moreover, the discharge times of this capacitor should...
Article
Full-text available
We study the optimal design of numerical integrators for dissipative systems, for which there exists an underlying thermodynamic structure known as GENERIC (General Equation for the NonEquilibrium Reversible-Irreversible Coupling). We present a framework to construct structure-preserving integrators for linearly damped systems by splitting the syst...
Article
Full-text available
Histograms are commonly used in databases to store statistics for query size estimation. Unfortunately, the sizes of histograms can grow dramatically with the number of attributes, and thus may not be suitable for multi-dimensional range queries. We take up the challenge to perform an estimation error analysis of histogram for multi-dimensional ran...

Citations

... A model discrepancy term needs to be included during calibration, since the knowledge about the state of a composite laminate and the governing damage evolution process suffer from lack of completeness and/or accuracy [44]. Following the Kennedy-O'Hagan framework, the relationship between the experimental observation Y obs , true value of the quantity Y, and the model output Y model is described as ...
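The equation itself is elided in the excerpt; in the standard Kennedy-O'Hagan formulation (a commonly quoted form, not copied from the citing paper) it reads

$$ Y_{\mathrm{obs}}(x) \;=\; Y(x) + \epsilon \;=\; Y_{\mathrm{model}}(x,\theta) + \delta(x) + \epsilon, $$

where $\theta$ are the calibration parameters, $\delta(x)$ is the model discrepancy term, and $\epsilon$ is the observation error.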
Article
A methodology to account for the effect of epistemic uncertainty (regarding model parameters) on the strength prediction of carbon fiber reinforced polymer (CFRP) composite laminates is presented. A three-dimensional concurrent multiscale physics modeling framework is considered. A continuum damage mechanics-based constitutive relation is used for multiscale analysis. The parameters for the constitutive model are unknown and need to be calibrated. A least squares-based approach is employed for the calibration of model parameters and a model discrepancy term. The calibrated constitutive model is validated quantitatively using experimental data for both unnotched and open-hole specimens with different composite layups. The quantitative validation results are used to indicate further steps for model improvement.
... This is an area of active and needed research [26], [27], [28], [29]. An especially interesting discussion of this issue, with a proposed approach to UQ when validation is not possible or practical, is presented in [32] where the authors address a broad spectrum of modeling activities where uncertainty quantification is needed to inform decision making. ...
... In practice, the causes of deviations between measurement and simulation data are manifold [8]. Figure ...
Conference Paper
Simulation plays an increasingly important role in the engineering and operation of modular machines and plants. However, both the creation and the maintenance of the models still require considerable manual effort. To address this problem, previous publications introduced an assistance system for the automatic composition and configuration of co-simulations, focused on models for the commissioning phase. In this contribution, the authors extend the assistance-system concept to the operation phase. The goal is to optimize the simulation with respect to accuracy and runtime. Evolutionary algorithms are used to generate different model configurations, which satisfy the specified objectives to varying degrees. A suitable variant is then proposed to the operator based on the respective use case. The concept is prototypically validated on an industrial application example.
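As a loose illustration of the optimization idea summarized above (the abstract gives no code; the configuration space, cost model, and fitness weighting below are all assumptions), an evolutionary search over co-simulation model configurations might be sketched in Python as:

import random

FIDELITY_LEVELS = ["coarse", "medium", "fine"]   # hypothetical per-model fidelity options
N_MODELS = 5                                     # number of coupled sub-models

def random_config():
    return [random.choice(FIDELITY_LEVELS) for _ in range(N_MODELS)]

def evaluate(config):
    # Placeholder cost model: finer fidelity -> lower error but higher runtime.
    error = sum({"coarse": 1.0, "medium": 0.4, "fine": 0.1}[f] for f in config)
    runtime = sum({"coarse": 1.0, "medium": 3.0, "fine": 10.0}[f] for f in config)
    return error, runtime

def fitness(config, w_error=0.7, w_runtime=0.3):
    # Weighted scalarization of the two objectives; lower is better.
    error, runtime = evaluate(config)
    return w_error * error + w_runtime * runtime

def mutate(config, rate=0.2):
    return [random.choice(FIDELITY_LEVELS) if random.random() < rate else f for f in config]

# Simple (mu + lambda)-style loop: propose variants, keep the best ones.
population = [random_config() for _ in range(20)]
for _ in range(50):
    offspring = [mutate(random.choice(population)) for _ in range(20)]
    population = sorted(population + offspring, key=fitness)[:20]

print("suggested configuration:", population[0])

In the assistance system described above, the weighting (or the choice among Pareto-style trade-offs) would be made per use case before a variant is suggested to the operator.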
... crucial to judge its usefulness and to steer future efforts for developing new methods [Johnsen and Larsson 2011; Montecinos et al. 2012; Peshkov et al. 2019; Zhao et al. 2019]. Traditionally, the evaluation of numerical results relies on simple metrics in the form of vector norms [Christie et al. 2005; Kat and Els 2012; Press et al. 2007] to compute the distance between the approximation and a ground truth target. The latter can take the form of an analytic solution or can be obtained with a highly refined numerical solution that yields a converged result [Oberkampf et al. 2004]. ...
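As a minimal sketch of that traditional approach (variable names assumed; the cited works define various specific norms), a relative vector-norm error against a reference solution can be computed as:

import numpy as np

def relative_l2_error(approx, reference):
    # Relative L2 distance between a numerical approximation and a
    # ground-truth or highly refined reference solution.
    approx = np.asarray(approx, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return np.linalg.norm(approx - reference) / np.linalg.norm(reference)

# Example: coarse piecewise-linear approximation of sin(x) vs. a fine reference.
x = np.linspace(0.0, np.pi, 101)
reference = np.sin(x)
approx = np.interp(x, x[::10], np.sin(x[::10]))
print(relative_l2_error(approx, reference))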
Article
Comparative evaluation lies at the heart of science, and determining the accuracy of a computational method is crucial for evaluating its potential as well as for guiding future efforts. However, metrics that are typically used have inherent shortcomings when faced with the under-resolved solutions of real-world simulation problems. We show how to leverage the human visual system in conjunction with crowd-sourced user studies to address the fundamental problems of widely used classical evaluation metrics. We demonstrate that such user studies driven by visual perception yield a very robust metric and consistent answers for complex phenomena without any requirements for proficiency regarding the physics at hand. This holds even for cases away from convergence where traditional metrics often end up with inconclusive results. More specifically, we evaluate results of different essentially non-oscillatory (ENO) schemes in different fluid flow settings. Our methodology represents a novel and practical approach for scientific evaluations that can give answers for previously unsolved problems.
... The calibration process involves quantifying the errors and estimating the unknown model parameters to minimize the difference between model predictions and experimental observations. For this purpose error models need to be built as the knowledge about the state of a composite laminate and the governing damage evolution process suffer from lack of completeness and/or accuracy [15]. In case of a mechanics-based model, the source of uncertainty in model prediction arises due to (1) aleatory uncertainty and (2) epistemic uncertainty. ...
Thesis
This dissertation presents development of constitutive relations for progressive damage analysis of laminated composites and efficient computational implementation of a multiscale computational framework. All the numerical studies presented in this dissertation consider quasi-static tensile and compressive loadings. A continuum damage mechanics-based constitutive model for the constituent materials contained in a unidirectional carbon fiber reinforced polymer composite material is presented. The model is formulated to be consistent with the thermodynamics theory of material damage. All model parameters are directly calibrated using experimental data and are updated to monitor the evolution of damage as a function of the strain state. Although quality control techniques are utilized by manufacturers, composite materials entail variability in both the constituent materials as well as the involved processes. In addition to variability, a mechanics-based computational model also has epistemic uncertainty due to lack of knowledge about model parameters, model form and solution errors. In this study the epistemic uncertainty is quantified with respect to model parameters. Furthermore, a multiscale crack band model is proposed to alleviate spurious mesh size dependence in the application of the constitutive model. The formulation and dissipated energy regularization within the multiscale modeling framework is considered. Improvement in computational cost of the microscale analysis is achieved by efficient use of parallel computing tools.
... Because H is typically rank deficient, equation (5) is an ill-posed inverse problem [36,37]. The Tikhonov formulation [38] leads to an unconstrained least-squares problem, where the term in (6) ensures the existence of a unique solution of (5). The DA process can then be described in Algorithm 1, where: ...
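A hedged sketch of the Tikhonov-regularized formulation referred to above, in generic 3D-Var-style notation (the exact terms numbered (5) and (6) are not reproduced from the citing paper):

$$ \hat{x} \;=\; \arg\min_{x} \; \|Hx - y\|^2 + \lambda \|x - x_b\|^2, $$

where $H$ is the (rank-deficient) observation operator, $y$ the observations, $x_b$ a background state, and $\lambda > 0$ the regularization weight; the quadratic penalty makes the objective strictly convex, which is what guarantees a unique minimizer.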
... However, error propagation into the forecasting model is not improved by DA, so that, at each step, corrections have to be made from scratch without learning from previous experience of error correction. The strongly nonlinear character of many physical processes of interest can result in the dramatic amplification of even small uncertainties in the input, so that they produce large uncertainties in the system behavior [5]. Because of this instability, as many observations as possible are assimilated, to the point where a strong requirement for DA is to enable real-time utilization of data to improve predictions. ...
Article
Full-text available
In this paper, we propose Deep Data Assimilation (DDA), an integration of Data Assimilation (DA) with Machine Learning (ML). DA is the Bayesian approximation of the true state of some physical system at a given time by combining time-distributed observations with a dynamic model in an optimal way. We use an ML model in order to learn the assimilation process. In particular, a recurrent neural network, trained with the state of the dynamical system and the results of the DA process, is applied for this purpose. At each iteration, we learn a function that accumulates the misfit between the results of the forecasting model and the results of the DA. Subsequently, we compose this function with the dynamic model. This resulting composition is a dynamic model that includes the features of the DA process and that can be used for future prediction without the necessity of the DA. In fact, we prove that the DDA approach implies a reduction of the model error, which decreases at each iteration; this is achieved thanks to the use of DA in the training process. DDA is very useful in those cases where observations are not available for some time steps and DA cannot be applied to reduce the model error. The effectiveness of this method is validated by examples and a sensitivity study. In this paper, the DDA technology is applied to two different applications: the Double integral mass dot system and the Lorenz system. However, the algorithm and numerical methods that are proposed in this work can be applied to other physical problems that involve other equations and/or state variables.
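Read schematically (the names below are placeholders, and the paper's learned correction is specifically a recurrent neural network), the composition described in the abstract amounts to wrapping the forecast model with a learned DA-like correction:

# x_{k+1} = M(x_k)         : forecast model
# a_k     = DA(x_k, obs_k)  : analysis produced by data assimilation
# train g so that g(x_k) approximates a_k - M(x_k),
# then predict with M(x) + g(x) when no observations are available.

def composed_model(M, g, x):
    # Forecast model augmented with the learned correction.
    return M(x) + g(x)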
... Unlike measurements, errors in simulation results are purely systematic and arise from limitations in three main areas [21]: ...
Article
Full-text available
Validation and verification activities are essential enablers for the practical application of computational electromagnetics (CEM) models in industrial applications, but these terms are widely confused or merged. Furthermore, concepts such as the accuracy and precision of measurement results are often poorly understood. As model accuracy is most commonly judged against measured results there is a strong case to establish a well‐defined vocabulary for describing the accuracy of CEM models that is both consistent with, and complementary to, the existing vocabulary associated with measurements. In order to help clarify the situation, this study identifies the difference between validation and verification in relation to CEM, suggests a mapping between CEM and measurement processes, collates a number of relevant definitions from existing measurement standards, and proposes new and complementary definitions that relate specifically to CEM simulation. It is considered that the proposed vocabulary could help to avoid misunderstandings between the test and modelling domains and eliminate ambiguity in standards relating to CEM validation and verification. In addition, the terminology presented here could also be readily adapted to help develop or clarify similar standards for other physics‐based simulation domains, and perhaps even for more general mathematical modelling applications.
... Without a doubt, these methods allow scientists and engineers to identify the relationship between the technical parameters taken into account and the energy efficiency of refrigeration units. This situation differs sharply from the one in which research teams analyze the presence of oil in a certain area or predict the heating of a spacecraft hull entering the Martian atmosphere [19] [20]. In these cases, the researchers closely examine all possible sources of error, calculate the discrepancy between the theoretical and experimental results, and determine the ratio of EU to |TD − ED|. ...
Article
Full-text available
Aims: The purpose of this work is to present the information approach as the only effective tool that allows us to calculate the uncertainty of any result of the study on the use of refrigeration equipment. Methodology: Using the definitions and formulas of information theory and similarity theory, the amount of information contained in a model of refrigeration equipment or process is calculated. This allows us to present formulas for calculating the relative and comparative uncertainties of the model without additional assumptions. Based on these formulas, the value of the inevitable threshold of the accuracy of the representation of the studied construction or process is determined. Results: Theoretically substantiated recommendations for choosing the most effective methods for analyzing refrigeration equipment are formulated. Conclusion: Having calculated the amount of information contained in the model, we presented practical methods for analyzing data on refrigeration equipment.
... Two EnergyPlus models representing types 1 and 2 towers were parameterised with data outlined in table 2 and incrementally improved to satisfy ASHRAE Guide 14 acceptance criteria [37] for energy model calibration using the main statistical indicators of errors, mean bias error (MBE) and coefficient of variation of the root mean squared error (CV(RMSE)) as given by: Absolute errors are generated using expression 3 as recommended by [39]: ...
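The excerpt cuts off before the expressions; as a hedged reference, the ASHRAE Guideline 14 indicators it names are conventionally computed as follows (measured values m, simulated values s; denominator conventions vary slightly between sources):

import numpy as np

def mbe_percent(measured, simulated):
    # Mean bias error as a percentage of the total measured energy.
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    return 100.0 * np.sum(m - s) / np.sum(m)

def cv_rmse_percent(measured, simulated):
    # Coefficient of variation of the root-mean-squared error, in percent.
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    rmse = np.sqrt(np.mean((m - s) ** 2))
    return 100.0 * rmse / np.mean(m)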
Article
Energy and environmental data is collected from 5 tower blocks each containing 90 apartments to create two representative calibrated energy models. Three towers (heated by individual natural gas boilers) characterise medium (137.3 kWh/m2/yr.) and two (heated by electrical night storage heaters) characterise low (75.4 kWh/m2/yr.) thermal demands when benchmarked against actual UK domestic portfolio. Across 2020-2040 time horizon, an uncertain landscape is presented by 12 fuel carbon intensity and 14 economic scenarios in order to examine building fabric upgrade, with or in conjunction with centralised CHP engines, GSHP and biomass boilers in the case study towers. Out of 18 retrofit options examined, 7 or 8 solutions (under annual fuel price rises of 2% or 5.2% respectively) can provide lifetime CO2e mitigation at unit costs that fall below the upper bounds of carbon capture and storage technologies (US$143/tCO2e). If carbon taxation were to be used to enable full recovery of retrofit capital expenditure with no government subsidy, the lowest tax level observed belongs to a transition to centralised biomass from decentralised natural gas boilers requiring US$111/tCO2e (in 2020), while deep retrofits (i.e. plant and fabric) require much more punishing carbon taxes with 2020 figures ranging from US$233/tCO2e to US$1665/tCO2e.
... On the contrary, being able to reliably evaluate the quality of an output from a numerical method is crucial to judge its usefulness and to steer future efforts for developing new methods [1,2,3,4]. Traditionally, the evaluation of numerical results relies on simple metrics in the form of vector norms [5,6,7] to compute the distance between the approximation and a ground truth target. The latter can take the form of an analytic solution or can be obtained with a highly refined numerical solution that yields a converged result [8]. ...
Preprint
Comparative evaluation lies at the heart of science, and determining the accuracy of a computational method is crucial for evaluating its potential as well as for guiding future efforts. However, metrics that are typically used have inherent shortcomings when faced with the under-resolved solutions of real-world simulation problems. We show how to leverage crowd-sourced user studies in order to address the fundamental problems of widely used classical evaluation metrics. We demonstrate that such user studies, which inherently rely on the human visual system, yield a very robust metric and consistent answers for complex phenomena without any requirements for proficiency regarding the physics at hand. This holds even for cases away from convergence where traditional metrics often end up with inconclusive results. More specifically, we evaluate results of different essentially non-oscillatory (ENO) schemes in different fluid flow settings. Our methodology represents a novel and practical approach for scientific evaluations that can give answers for previously unsolved problems.