Computational Geosciences (COMPUTAT GEOSCI)

Publisher: Springer Verlag

Journal description

Accurate and efficient imaging of subsurface structure and modeling of processes in the subsurface require multidisciplinary collaboration among mathematicians, engineers, chemists, physicists and geoscientists. Presently there exists no journal whose main objective is to provide a platform for interaction among these diverse scientific groups. To remedy this, we propose to establish a new journal, Computational Geosciences. The aim of this international journal is to facilitate the exchange of ideas across the disciplines and among universities and industrial and governmental laboratories.

Computational Geosciences will publish high-quality papers on mathematical modeling, simulation, data analysis, imaging, inversion and interpretation, with applications in the geosciences. The themes and application areas to be covered include reservoir and environmental engineering, hydrology, geochemistry, geomechanics, seismic and electromagnetic imaging, geostatistics and reservoir/aquifer characterization, and high-performance parallel computing. More specifically, Computational Geosciences welcomes contributions concerning, for example, bioremediation, diffusion and dispersion, geology and geostatistics, scale-up, multiphase flow and reactive transport, geophysical imaging and inversion methods, seismic and electromagnetic modeling, numerical methods and parallel computing. Both theoretical and applied scientists are invited to participate.

Computational Geosciences focuses mainly on quantitative aspects of models describing transport processes in permeable media. It is targeted at petroleum engineers, hydrologists, quantitative environmental engineers, soil physicists, soil chemists and geochemists, applied mathematicians, geologists and seismologists.


Journal Impact: 3.03*

*This value is calculated using ResearchGate data and is based on average citation counts from work published in this journal. The data used in the calculation may not be exhaustive.

Journal impact history

Year  Journal impact
2016  Available summer 2017
2015  3.03
2014  2.99
2013  2.11
2012  1.92
2011  2.15
2010  1.88
2009  2.28
2008  1.96
2007  1.48
2006  1.64
2005  1.31
2004  1.31
2003  0.78
2002  0.83
2001  0.87
2000  0.91


Additional details

Cited half-life: 5.30
Immediacy index: 0.19
Eigenfactor: 0.00
Article influence: 1.03
Website: Computational Geosciences website
Other titles: Computational geosciences (Online), CG
ISSN: 1420-0597
OCLC: 40420652
Material type: Periodical, Internet resource
Document type: Internet Resource, Computer File, Journal / Magazine / Newspaper

Publisher details

This publication is classified as SHERPA/RoMEO green.

Publications in this journal

  • Vahid Joekar-Niasar
    Article · Jul 2016 · Computational Geosciences
  • ABSTRACT: We consider an iterative scheme for solving a coupled geomechanics and flow problem in a fractured poroelastic medium. The fractures are treated as possibly non-planar interfaces. Our iterative scheme is an adaptation of a classical fixed-stress splitting scheme to account for the presence of fractures. We prove that the iterative scheme is a contraction in an appropriate norm. Moreover, the solution converges to the unique weak solution of the coupled problem. [An illustrative sketch of iterative sequential coupling appears after this publication list.]
    Article · Jun 2016 · Computational Geosciences
  • ABSTRACT: This study aims at analyzing the combined impact of uncertainties in initial conditions and wind forcing fields in ocean general circulation models (OGCM) using polynomial chaos (PC) expansions. Empirical orthogonal functions (EOF) are used to formulate both spatial perturbations to initial conditions and space-time wind forcing perturbations, namely in the form of a superposition of modal components with uniformly distributed random amplitudes. The forward deterministic HYbrid Coordinate Ocean Model (HYCOM) is used to propagate input uncertainties in the Gulf of Mexico (GoM) in spring 2010, during the Deepwater Horizon oil spill, and to generate the ensemble of model realizations based on which PC surrogate models are constructed for both localized and field quantities of interest (QoIs), focusing specifically on sea surface height (SSH) and mixed layer depth (MLD). These PC surrogate models are constructed using basis pursuit denoising methodology, and their performance is assessed through various statistical measures. A global sensitivity analysis is then performed to quantify the impact of individual modes as well as their interactions. It shows that the local SSH at the edge of the GoM main current (the Loop Current) is mostly sensitive to perturbations of the initial conditions affecting the current front, whereas the local MLD in the area of the Deepwater Horizon oil spill is more sensitive to wind forcing perturbations. At the basin scale, the SSH in the deep GoM is mostly sensitive to initial condition perturbations, while over the shelf it is sensitive to wind forcing perturbations. On the other hand, the basin MLD is almost exclusively sensitive to wind perturbations. For both quantities, the two sources of uncertainty have limited interactions. Finally, the computations indicate that whereas local quantities can exhibit complex behavior that necessitates a large number of realizations, the modal analysis of field sensitivities can be suitably achieved with a moderate size ensemble.
    Article · Jun 2016 · Computational Geosciences
  • ABSTRACT: Homogenisation techniques have been successfully used to estimate the mechanical response of synthetic composite materials, due to their ability to relate the macroscopic mechanical response to the material microstructure. The adoption of these mean-field techniques in geo-composites such as shales is attractive, partly because of the practical difficulties associated with the experimental characterisation of these highly heterogeneous materials. In this paper, numerical modelling has been undertaken to investigate the applicability of homogenisation methods in predicting the macroscopic, elastic response of clayey rocks. The rocks are considered as two-level composites consisting of a porous clay matrix at the first level and a matrix-inclusion morphology at the second level. The simulated microstructures ranged from a simple system of one inclusion/void embedded in a matrix to complex, random microstructures. The effectiveness and limitations of the different homogenisation schemes were demonstrated through a comparative evaluation of the macroscopic elastic response, illustrating the appropriate schemes for upscaling the microstructure of shales. Based on the numerical simulations and existing experimental observations, a randomly distributed pore system for the microstructure of the porous clay matrix has been proposed, which can be used for the subsequent development and validation of shale constitutive models. Finally, the homogenisation techniques were used to predict the experimental measurements of elastic response of shale core samples. The developed methodology proves to be a valuable tool for verifying the accuracy and performance of the homogenisation techniques. [An illustrative Voigt-Reuss bounds sketch appears after this publication list.]
    Article · Jun 2016 · Computational Geosciences
  • ABSTRACT: We present a method to determine lower and upper bounds on the predicted production or any other economic objective from history-matched reservoir models. The method consists of two steps: (1) performing a traditional computer-assisted history match of a reservoir model with the objective of minimizing the mismatch between predicted and observed production data through adjusting the grid block permeability values of the model; and (2) performing two optimization exercises to minimize and maximize an economic objective over the remaining field life, for a fixed production strategy, by manipulating the same grid block permeabilities, but without significantly changing the mismatch obtained under step 1. This is accomplished through a hierarchical optimization procedure that limits the solution space of a secondary optimization problem to the (approximate) null space of the primary optimization problem. We applied this procedure to two different reservoir models. We performed a history match based on synthetic data, starting from a uniform prior and using a gradient-based minimization procedure. After history matching, minimization and maximization of the net present value (NPV), using a fixed control strategy, were executed as secondary optimization problems by changing the model parameters while staying close to the null space of the primary optimization problem. In other words, we optimized the secondary objective functions, while requiring that optimality of the primary objective (a good history match) was preserved. This method therefore provides a way to quantify the economic consequences of the well-known problem that history matching is a strongly ill-posed problem. We also investigated how this method can be used as a means to assess the cost-effectiveness of acquiring different data types to reduce the uncertainty in the expected NPV. [An illustrative null-space optimization sketch appears after this publication list.]
    Article · Jun 2016 · Computational Geosciences
  • ABSTRACT: The Fully Implicit method (FIM) is often the method of choice for the temporal discretization of the partial differential equations governing multiphase flow in porous media. The FIM involves solving large coupled systems of nonlinear algebraic equations. Newton-based methods, which are employed to solve the nonlinear systems, can suffer from convergence problems; this is especially true for large time steps in the presence of highly nonlinear flow physics. To overcome such convergence problems, the time step is usually reduced, and the Newton steps are restarted from the solution of the previous (converged) time step. Recently, potential ordering and the reduced-Newton method were used to solve immiscible three-phase flow in the presence of buoyancy and capillary effects (e.g., Kwok and Tchelepi, J. Comput. Phys. 227(1), 706-727, 2007). Here, we improve the robustness of the potential-based ordering method in the presence of gravity. Furthermore, we also extend this nonlinear approach to interphase mass transfer. Our algorithm deals effectively with mass transfer between the liquid and gas phases, including phase disappearance (e.g., gas going back into solution) and reappearance (e.g., gas coming out of solution and forming a separate phase), as a function of pressure and composition. Detailed comparisons of the robustness and efficiency of the potential-based solver with state-of-the-art nonlinear/linear solvers are presented for immiscible two-phase (Dead-Oil), Black-Oil, and compositional problems using heterogeneous models. The results show that for large time steps, our nonlinear ordering-based solver reduces the number of nonlinear iterations significantly, which leads to gains in the overall computational cost. [An illustrative time-step-chopping sketch appears after this publication list.]
    Conference Paper · Jun 2016
  • ABSTRACT: The in-situ upgrading (ISU) of bitumen and oil shale is a very challenging process to model numerically because of the large number of components that need to be modelled using a system of equations that are both highly non-linear and strongly coupled. Operator splitting methods are one way of potentially improving computational performance. Each numerical operator in a process is modelled separately, allowing the best solution method to be used for the given numerical operator. A significant drawback to the approach is that decoupling the governing equations introduces an additional source of numerical error, known as the splitting error. The best splitting method for modelling a given process minimises the splitting error whilst improving computational performance compared to a fully implicit approach. Although operator splitting has been widely used for the modelling of reactive-transport problems, it has not yet been applied to the modelling of ISU. One reason is that it is not clear which operator splitting technique to use. Numerous such techniques are described in the literature and each leads to a different splitting error. While this error has been extensively analysed for linear operators for a wide range of methods, the results cannot be extended to general non-linear systems. It is therefore not clear which of these techniques is most appropriate for the modelling of ISU. In this paper, we investigate the application of various operator splitting techniques to the modelling of the ISU of bitumen and oil shale. The techniques were tested on a simplified model of the physical system in which a solid or heavy liquid component is decomposed by pyrolysis into lighter liquid and gas components. The operator splitting techniques examined include the sequential split operator (SSO), the Strang-Marchuk split operator (SMSO) and the iterative split operator (ISO). They were evaluated on various test cases by considering the evolution of the discretization error as a function of the time-step size compared with the results obtained from a fully implicit simulation. We observed that the error was least for a splitting scheme where the thermal conduction was performed first, followed by the chemical reaction step and finally the heat and mass convection operator (SSO-CKA). This method was then applied to a more realistic model of the ISU of bitumen with multiple components, and we were able to obtain a speed-up factor of between 3 and 5. [An illustrative Lie vs. Strang splitting sketch appears after this publication list.]
    Conference Paper · Jun 2016
  • ABSTRACT: This paper combines analytical and numerical studies of light oil recovery by air injection. We investigate in detail the internal structure of oxidation fronts in two-phase flow in a porous medium, taking into account reaction, vaporization, and condensation of liquid fuel, with longitudinal heat conduction. Our solution shows that between regimes of total and partial oxygen consumption there is a change in the oxidation wave, which may have negative implications for oxygen breakthrough in the light oil recovery process. In spite of the simplifications used to derive the analytical solution, the latter agrees with direct numerical simulations. Finally, based on our analytical solution, we provide a phase diagram to predict conditions for total or partial oxygen consumption in the light oil recovery process.
    Article · Jun 2016 · Computational Geosciences
  • ABSTRACT: In this paper, we propose a strategy to bypass the phase identification of fluid mixtures that can form three or more phases. The strategy is used for reservoir simulation of multicomponent, three-phase, thermal compositional displacement processes. Since the solution path in compositional space is determined by a limited number of "key" tie-simplexes, the proposed "bypass" method uses information from the parameterized tie-simplexes and their extensions. The tie-simplex parameterization is performed in the discrete phase-fraction space. Once the phase-fraction space is discretized, a conventional three-phase flash is used adaptively to compute the phase states at the discretization nodes. If all discretization vertices of a given discrete cell, in phase-fraction space, have the same phase state, then this state is assigned to the entire cell and expensive flash calculations are bypassed. We demonstrate the robustness and efficiency of our phase identification bypassing strategy for several cases with three-phase flow, including a six-component ES-SAGD (enhanced solvent SAGD) model.
    Conference Paper · Jun 2016
  • ABSTRACT: Corner-point gridding is widely used in reservoir and basin modeling but generally yields approximations in the representation of geological interfaces. This paper introduces an indirect method to generate a hex-dominant mesh conformal to 3D geological surfaces and well paths suitable for finite-element and control-volume finite-element simulations. By indirect, we mean that the method first generates an unstructured tetrahedral mesh whose tetrahedra are then merged into primitives (hexahedra, prisms, and pyramids). More specifically, we focus on determining the optimal set of primitives that can be recombined from a given tetrahedral mesh. First, we detect in the tetrahedral mesh all the feasible volumetric primitives using a pattern-matching algorithm (Meshkat and Talmor, Int. J. Numer. Meth. Eng. 49(1-2), 17-30, 2000) that we revisit and extend with configurations that account for degenerated tetrahedra (slivers). Then, we observe that selecting the optimal set of primitives among the feasible ones can be formalized as a maximum weighted independent set problem (Bomze et al. 1999), known to be NP-complete. We propose several heuristic optimizations to find a reasonable set of primitives in a practical time. All the tetrahedra of each selected primitive are then merged to build the final unstructured hex-dominant mesh. This method is demonstrated on 3D geological models including a faulted and folded model and a discrete fracture network. [An illustrative greedy independent-set sketch appears after this publication list.]
    Conference Paper · Jun 2016
  • ABSTRACT: Intelligent wells (I-wells) provide layer-by-layer production and injection control. This flow control flexibility relies on the real-time operation of multiple downhole interval control valves (ICVs) installed across the well completion intervals. Proactive control of I-wells, with its ambition of creating an optimal operational strategy of ICVs over the full well lifetime, is a high-dimensional optimization problem with a computationally demanding and uncertain objective function based on one or more simulated reservoir model(s). This paper illustrates how a stochastic search algorithm based on simultaneous perturbation stochastic approximation (SPSA), coupled with a utility-function approach that defines an objective function accounting for the uncertainty in the reservoir's description, can efficiently solve the proactive I-well control problem. The utility function accounts for both the expectation and variance of the net present value (NPV) by modifying the objective function to consider multiple reservoir model realizations. Simultaneous optimization of the full ensemble of model realizations is prohibitively expensive. By contrast, choosing a small ensemble of model realizations is computationally less demanding, but the small ensemble itself has to be selected. We introduce the use of k-means clustering for selecting a representative ensemble of model realizations that performs in an equivalent manner to all available realizations. A distance measure, tailored to the proactive optimization application, is used to define the similarity/dissimilarity of the different realizations, which is then employed to perform the clustering. Moreover, we show that this robust proactive optimization process can either focus on the specific objective of increasing the mean or of reducing the variance (this is achieved via adjustable weights in the utility function). The relative importance of these conflicting objectives has to be taken into account during the model realization selection process to ensure the near-global success of the obtained control scenario. The proposed robust optimization framework has been tested on a representative test case (PUNQ-S3). This is a small field developed with an intelligent producer in which the uncertainty in the model has been quantified by several geological realizations. Our results demonstrate the computational efficiency of employing an ensemble of systematically selected realizations rather than the traditional methods that rely on either a single model realization or a randomly selected ensemble of realizations. Our results show the success of the developed framework in identifying control scenarios that correspond to an acceptable improvement in the expected added value at a controlled risk level while substantially reducing the computation time compared to using the full ensemble of model realizations. [An illustrative k-means selection sketch appears after this publication list.]
    Conference Paper · Jun 2016
  • ABSTRACT: The adjoint gradient method is well recognized for its efficiency in large-scale production optimization. When implemented in a sequential quadratic programming (SQP) algorithm, adjoint gradients enable the construction of a quadratic approximation of the objective function and a linear approximation of the nonlinear constraints using just one forward and one backward simulation (with multiple right-hand sides). In this work, the focus is on the performance of the adjoint gradient method with respect to the adaptive time step refinement generated in the underlying forward simulations. First, we demonstrate that the mass transfer in reservoir simulation and, as a consequence, the net present value (NPV) function are more sensitive to the degree of the time step refinement when using production bottom-hole pressure (BHP) controls than when using production rate controls. Effects of this sensitivity on the optimization process are studied using six examples of uniform time stepping with different degrees of refinement. By comparing those examples, we show that the corresponding optimal solutions for target production BHPs deviate at early stages of the optimization process. This indicates an inconsistency in the evaluation of the adjoint gradients and NPV function for different time step refinements. Next, we investigate the effects of this inconsistency on the results of a constrained production optimization. Two strategies for handling nonlinear constraints are considered: (i) nonlinear constraints handled in the optimization process and (ii) constraints applied directly in forward simulations with a common control switch procedure. In both strategies, we observe that the progress of the optimization process is greatly influenced by the degree of the time step refinement after a control update. In the case of constrained simulations, the presence of control switches combined with large time steps after a control update forces adaptive refinement to vary the time step size significantly. As a result, the inconsistency of the adjoint gradients and NPV values provokes an early termination of the SQP algorithm. In the case of constrained optimization, the inconsistencies in gradient evaluations are less significant, and the performance of the optimization process is governed by the satisfaction of nonlinear constraints in the SQP algorithm.
    Conference Paper · Jun 2016
  • ABSTRACT: This work addresses the problem of delineating the spatial layout of ten rock type domains in an iron ore body and of assessing the uncertainty in the domain boundaries. A stochastic approach is proposed to this end, based on truncated Gaussian simulation, which consists in defining successive partitions of the space that comply with the desired spatial continuity and contact relationships between rock type domains. The sequencing of these domains is driven by their position (surficial vs. underlying rocks), granulometry (compact vs. friable rocks), and grades (rich vs. poor) of iron, alumina, manganese and loss on ignition. A total of 100 realizations are produced, conditioned to available drill hole data, and used to quantify the uncertainty in the occurrence of each rock type domain, at both global and local scales. [An illustrative truncated Gaussian sketch appears after this publication list.]
    Article · May 2016 · Computational Geosciences
  • ABSTRACT: Peaceman’s equivalent well-cell radius for 2D square grids has been generalized to 2D grids consisting of regular hexagons. The development consists of the following steps. Firstly, the analytical solution for the pressure drop between injector and producer for wells in a seven-spot pattern is determined. Secondly, this solution is compared with the numerical solution on hexagonal grids for a sixth of a seven-spot pattern. Finally, the equivalent well-cell radius is calculated, and its asymptotic behavior for infinitely fine grids is derived. The results are valid for both steady-state and unsteady-state conditions. [The classical Cartesian-grid Peaceman formula is sketched after this publication list.]
    Article · May 2016 · Computational Geosciences
  • ABSTRACT: Advances in pore-scale imaging (e.g., μ-CT scanning), increasing availability of computational resources, and recent developments in numerical algorithms have started rendering direct pore-scale numerical simulations of multi-phase flow on pore structures feasible. Quasi-static methods, where the viscous and the capillary limit are iterated sequentially, fall short in rigorously capturing crucial flow phenomena at the pore scale. Direct simulation techniques are needed that account for the full coupling between capillary and viscous flow phenomena. Consequently, there is a strong demand for robust and effective numerical methods that can deliver high-accuracy, high-resolution solutions of pore-scale flow in a computationally efficient manner. Direct simulations of pore-scale flow on imaged volumes can yield important insights about physical phenomena taking place during multi-phase, multi-component displacements. Such simulations can be utilized for optimizing various enhanced oil recovery (EOR) schemes and permit the computation of effective properties for Darcy-scale multi-phase flows. We implement a phase-field model for the direct pore-scale simulation of incompressible flow of two immiscible fluids. The model naturally lends itself to the transport of fluids with large density and viscosity ratios. In the phase-field approach, the fluid-phase interfaces are expressed in terms of thin transition regions, the so-called diffuse interfaces, for increased computational efficiency. The conservation law of mass for binary mixtures leads to the advective Cahn–Hilliard equation and the condition that the velocity field is divergence free. Momentum balance, on the other hand, leads to the Navier–Stokes equations for Newtonian fluids modified for two-phase flow and coupled to the advective Cahn–Hilliard equation. Unlike the volume of fluid (VoF) and level-set methods, which rely on regularization techniques to describe the phase interfaces, the phase-field method facilitates a thermodynamic treatment of the phase interfaces, rendering it more physically consistent for the direct simulations of two-phase pore-scale flow. A novel geometric wetting (wall) boundary condition is implemented as part of the phase-field method for the simulation of two-fluid flows with moving contact lines. The geometric boundary condition accurately replicates the prescribed equilibrium contact angle and is extended to account for dynamic (non-equilibrium) effects. The coupled advective Cahn–Hilliard and modified Navier–Stokes (phase-field) system is solved by using a robust and accurate semi-implicit finite volume method. An extension of the momentum balance equations is also implemented for Herschel–Bulkley (non-Newtonian) fluids. Non-equilibrium-induced two-phase flow problems and dynamic two-phase flows in simple two-dimensional (2-D) and three-dimensional (3-D) geometries are investigated to validate the model and its numerical implementation. Quantitative comparisons are made for cases with analytical solutions. Two-phase flow in an idealized 2-D pore-scale conduit is simulated to demonstrate the viability of the proposed direct numerical simulation approach. [An illustrative 1-D Cahn–Hilliard sketch appears after this publication list.]
    Article · May 2016 · Computational Geosciences
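
Illustrative method sketches

The short Python sketches below are editorial illustrations of generic techniques mentioned in the abstracts above. They are not the authors' implementations, and all data, matrices, and parameter values in them are made-up placeholders.

Iterative sequential coupling (cf. the fixed-stress splitting abstract). A minimal sketch of solving a coupled two-field linear system by alternating single-field solves; the actual fixed-stress scheme additionally uses a stabilization term in the flow solve and treats fracture interfaces, both omitted here.

```python
import numpy as np

# Schematic iterative sequential coupling for a coupled two-field problem:
# solve one block with the other field frozen, then the other, and repeat
# until the coupled residual is small. The blocks are small placeholder
# matrices, not a discretized poroelastic model.

A = np.array([[4.0, 1.0], [1.0, 3.0]])    # "mechanics" block
D = np.array([[5.0, 1.0], [1.0, 4.0]])    # "flow" block
B = 0.5 * np.eye(2)                       # coupling: pressure -> mechanics
C = 0.5 * np.eye(2)                       # coupling: displacement -> flow
f = np.array([1.0, 2.0])
g = np.array([0.5, -1.0])

u = np.zeros(2)                           # displacement-like unknown
p = np.zeros(2)                           # pressure-like unknown
for it in range(50):
    p = np.linalg.solve(D, g - C @ u)     # flow solve with u frozen
    u = np.linalg.solve(A, f - B @ p)     # mechanics solve with p frozen
    res = np.linalg.norm(np.concatenate([A @ u + B @ p - f, C @ u + D @ p - g]))
    if res < 1e-12:
        print(f"converged in {it + 1} iterations, residual {res:.1e}")
        break
```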
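Mean-field homogenisation bounds (cf. the shale homogenisation abstract). As background only, the classical Voigt and Reuss bounds bracket the effective elastic moduli of any composite; the phase properties below are hypothetical values for a clay matrix with stiff inclusions.

```python
# Voigt and Reuss bounds on the effective elastic moduli of a two-phase
# composite. Illustrative only; the moduli below are placeholder values.

def voigt_reuss_bounds(fractions, bulk_moduli, shear_moduli):
    """Return (K_voigt, K_reuss, G_voigt, G_reuss) in the units of the inputs."""
    assert abs(sum(fractions) - 1.0) < 1e-12, "volume fractions must sum to 1"
    K_voigt = sum(f * K for f, K in zip(fractions, bulk_moduli))
    G_voigt = sum(f * G for f, G in zip(fractions, shear_moduli))
    K_reuss = 1.0 / sum(f / K for f, K in zip(fractions, bulk_moduli))
    G_reuss = 1.0 / sum(f / G for f, G in zip(fractions, shear_moduli))
    return K_voigt, K_reuss, G_voigt, G_reuss

if __name__ == "__main__":
    # hypothetical clay matrix (phase 1) and quartz-like inclusions (phase 2), GPa
    K_v, K_r, G_v, G_r = voigt_reuss_bounds(
        fractions=[0.7, 0.3], bulk_moduli=[12.0, 37.0], shear_moduli=[6.0, 44.0])
    print(f"K in [{K_r:.2f}, {K_v:.2f}] GPa;  G in [{G_r:.2f}, {G_v:.2f}] GPa")
    # Voigt-Reuss-Hill estimate: the arithmetic mean of the two bounds
    print(f"VRH: K ~ {(K_v + K_r) / 2:.2f} GPa, G ~ {(G_v + G_r) / 2:.2f} GPa")
```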
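Null-space (hierarchical) optimization (cf. the history-matching bounds abstract). A linear toy in which a secondary objective is optimized over the null space of an under-determined primary (history-matching) problem, so the primary optimum is preserved exactly; the paper works with a nonlinear reservoir model and an approximate null space.

```python
import numpy as np

# Hierarchical "null-space" optimization toy: keep the primary objective
# (data mismatch) at its optimum while pushing a secondary objective
# (a linear proxy for NPV) to its minimum and maximum within the null space.
# All quantities are random placeholders, not a reservoir model.

rng = np.random.default_rng(0)
n_data, n_param = 5, 12                      # under-determined: many parameters
A = rng.normal(size=(n_data, n_param))       # linearized "forward model"
d = rng.normal(size=n_data)                  # observed data
c = rng.normal(size=n_param)                 # secondary objective: c @ m

# Step 1: primary problem -- minimum-norm least-squares "history match"
m0, *_ = np.linalg.lstsq(A, d, rcond=None)

# Step 2: secondary problem restricted to the null space of A, so the
# data mismatch of step 1 is preserved exactly in this linear setting.
_, s, Vt = np.linalg.svd(A)
N = Vt[len(s):].T                            # orthonormal basis of null(A)
# Restrict the null-space coefficients to the box [-1, 1] for a bounded demo.
z_min = -np.sign(N.T @ c)                    # minimizes c @ (m0 + N z) on the box
z_max = -z_min                               # maximizes it
for label, z in [("min NPV proxy", z_min), ("max NPV proxy", z_max)]:
    m = m0 + N @ z
    print(f"{label}: c@m = {c @ m:+.3f}, mismatch = {np.linalg.norm(A @ m - d):.2e}")
```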
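Newton with time-step chopping (cf. the nonlinear solver abstract). The standard safeguard the abstract refers to: if Newton fails within its iteration budget, halve the time step and restart from the last converged state. A toy stiff scalar ODE stands in for the fully implicit reservoir residual.

```python
# Backward Euler for du/dt = -u**3, with Newton iterations and the usual
# time-step chopping loop around them. Parameters are arbitrary.

def residual(u_new, u_old, dt):
    return u_new - u_old + dt * u_new**3

def jacobian(u_new, dt):
    return 1.0 + 3.0 * dt * u_new**2

def newton(u_old, dt, tol=1e-10, max_iter=8):
    u = u_old                                  # restart from the converged state
    for _ in range(max_iter):
        r = residual(u, u_old, dt)
        if abs(r) < tol:
            return u, True
        u -= r / jacobian(u, dt)
    return u, False                            # did not converge within budget

t, t_end, dt, u = 0.0, 1.0, 0.5, 10.0
while t < t_end - 1e-12:
    dt_try = min(dt, t_end - t)
    u_new, ok = newton(u, dt_try)
    if ok:
        t += dt_try
        u = u_new
        dt *= 2.0                              # grow the step after success
    else:
        dt *= 0.5                              # chop the step and retry
print(f"u({t_end}) ~ {u:.6f}")
```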
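Lie vs. Strang operator splitting (cf. the in-situ upgrading abstract). A toy linear system with non-commuting operators shows how the first-order (sequential, SSO-like) and second-order (Strang, SMSO-like) splitting errors shrink with the step size; this is not the ISU model of the paper.

```python
import numpy as np
from scipy.linalg import expm

# Splitting error for du/dt = (A + B) u with non-commuting A and B,
# comparing Lie (sequential) and Strang splitting against the exact solution.

A = np.array([[-1.0, 2.0], [0.0, -3.0]])   # stands in for "conduction"
B = np.array([[0.0, 0.0], [1.0, -0.5]])    # stands in for "reaction"
u0 = np.array([1.0, 1.0])
T = 1.0
u_exact = expm((A + B) * T) @ u0

for n_steps in (4, 8, 16, 32):
    dt = T / n_steps
    lie = expm(B * dt) @ expm(A * dt)                       # A first, then B
    strang = expm(A * dt / 2) @ expm(B * dt) @ expm(A * dt / 2)
    u_lie, u_strang = u0.copy(), u0.copy()
    for _ in range(n_steps):
        u_lie = lie @ u_lie
        u_strang = strang @ u_strang
    print(f"dt={dt:5.3f}  Lie error={np.linalg.norm(u_lie - u_exact):.2e}  "
          f"Strang error={np.linalg.norm(u_strang - u_exact):.2e}")
```

Lie splitting converges at first order in dt and Strang at second order, which is the behavior that motivates comparing SSO- and SMSO-type schemes.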
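Greedy heuristic for maximum-weight independent set (cf. the hex-dominant meshing abstract). Selecting non-conflicting primitives is an NP-complete independent-set problem; the weight/degree greedy rule below is one generic heuristic, not necessarily the one used by the authors, and the conflict graph is a made-up toy.

```python
# Greedy heuristic for the maximum-weight independent set problem that arises
# when picking a conflict-free set of candidate primitives from a tet mesh.

def greedy_weighted_independent_set(weights, conflicts):
    """weights: {node: weight}; conflicts: set of frozensets {u, v} that cannot
    both be selected (they share a tetrahedron). Returns a conflict-free set."""
    neighbours = {v: set() for v in weights}
    for u, v in (tuple(c) for c in conflicts):
        neighbours[u].add(v)
        neighbours[v].add(u)
    remaining, chosen = set(weights), set()
    while remaining:
        # Pick the remaining node with the best weight-to-degree ratio.
        best = max(remaining,
                   key=lambda v: weights[v] / (1 + len(neighbours[v] & remaining)))
        chosen.add(best)
        remaining -= {best} | neighbours[best]
    return chosen

# Five hypothetical primitives; weight = number of tetrahedra a primitive absorbs.
weights = {"hexA": 6, "hexB": 6, "prismC": 3, "prismD": 3, "pyrE": 2}
conflicts = {frozenset(p) for p in [("hexA", "hexB"), ("hexA", "prismC"),
                                    ("hexB", "prismD"), ("prismD", "pyrE")]}
picked = greedy_weighted_independent_set(weights, conflicts)
print("selected primitives:", sorted(picked),
      "total weight:", sum(weights[v] for v in picked))
```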
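k-means selection of representative realizations (cf. the intelligent-well optimization abstract). After clustering, the realization closest to each centroid is kept. The paper uses a distance measure tailored to proactive I-well control, whereas this sketch uses plain Euclidean distance on random placeholder features.

```python
import numpy as np
from sklearn.cluster import KMeans

# Pick a small, representative ensemble of model realizations by clustering
# their feature vectors and keeping the member nearest each cluster centroid.

rng = np.random.default_rng(1)
n_realizations, n_features, k = 100, 6, 5
features = rng.normal(size=(n_realizations, n_features))   # one row per realization

km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(features)

representatives = []
for c in range(k):
    members = np.flatnonzero(km.labels_ == c)
    dists = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
    representatives.append(int(members[np.argmin(dists)]))
print("representative realization indices:", sorted(representatives))
```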
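Truncated Gaussian simulation (cf. the rock-type domaining abstract). An unconditional 2-D illustration: a spatially correlated Gaussian field is thresholded so that rock-type proportions match prescribed targets. Conditioning to drill-hole data, as in the paper, is omitted, and the proportions are invented.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import norm

# Unconditional truncated Gaussian simulation: smooth white noise to obtain a
# correlated field, standardize it, and truncate at thresholds derived from
# the target rock-type proportions.

rng = np.random.default_rng(2)
nx, ny = 200, 200
field = gaussian_filter(rng.normal(size=(nx, ny)), sigma=8)   # correlated field
field = (field - field.mean()) / field.std()                  # ~ standard normal

proportions = [0.2, 0.5, 0.3]                 # e.g., surficial / friable / compact
thresholds = norm.ppf(np.cumsum(proportions)[:-1])            # truncation levels
domains = np.digitize(field, thresholds)      # 0, 1, 2 = rock-type labels

for label, target in enumerate(proportions):
    realized = (domains == label).mean()
    print(f"rock type {label}: target {target:.2f}, realized {realized:.2f}")
```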
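Peaceman equivalent well-cell radius (cf. the hexagonal-grid abstract). The classical 2D Cartesian-grid result that the paper generalizes to regular hexagons; the hexagonal formula itself is given in the paper, not here. Input values are arbitrary.

```python
import math

# Classical Peaceman equivalent well-block radius and well index for an
# isotropic rectangular grid block, in consistent units.

def peaceman_radius(dx, dy):
    """Equivalent well-block radius for an isotropic rectangular block."""
    return 0.14 * math.sqrt(dx**2 + dy**2)    # ~0.198*dx for a square block

def well_index(k, h, r_w, r_eq, skin=0.0):
    """Peaceman well index (well-to-block transmissibility): 2*pi*k*h / (ln(r_eq/r_w) + s)."""
    return 2.0 * math.pi * k * h / (math.log(r_eq / r_w) + skin)

dx = dy = 50.0                                 # block size [m]
r_eq = peaceman_radius(dx, dy)
print(f"equivalent radius r_eq = {r_eq:.2f} m (~0.2 * dx = {0.2 * dx:.2f} m)")
print(f"well index (k=1e-13 m^2, h=10 m, r_w=0.1 m): "
      f"{well_index(1e-13, 10.0, 0.1, r_eq):.3e} m^3")
```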
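Diffuse-interface (Cahn–Hilliard) dynamics (cf. the pore-scale phase-field abstract). A 1-D, periodic, semi-implicit Fourier-spectral Cahn–Hilliard solver showing phase separation into domains near phi = ±1 with diffuse interfaces. The Navier–Stokes coupling, wetting boundary condition, and non-Newtonian rheology of the paper are not included, and all parameters are arbitrary.

```python
import numpy as np

# 1-D Cahn-Hilliard: d(phi)/dt = M * laplacian(mu), mu = phi^3 - phi - eps^2 * laplacian(phi).
# Semi-implicit Fourier scheme: nonlinear term explicit, stiff eps^2 k^4 term implicit.

N, L = 256, 1.0
M, eps = 1.0, 0.02                      # mobility and interface-width parameter
dt, n_steps = 2e-6, 10_000

k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
rng = np.random.default_rng(3)
phi = 0.01 * rng.normal(size=N)         # small perturbation around phi = 0

for _ in range(n_steps):
    nonlinear_hat = np.fft.fft(phi**3 - phi)
    phi_hat = (np.fft.fft(phi) - dt * M * k**2 * nonlinear_hat) / (
        1.0 + dt * M * eps**2 * k**4)
    phi = np.real(np.fft.ifft(phi_hat))

# The mean of phi is conserved (mass conservation); the field separates into
# plateaus close to -1 and +1 joined by diffuse interfaces of width ~ eps.
print(f"mean(phi) = {phi.mean():+.3e}, range = [{phi.min():.2f}, {phi.max():.2f}]")
```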