Geoscientific Model Development

Published by Copernicus GmbH

Online ISSN: 1991-9603

Articles


Fig. 1. One 2 × 2 tile with the lowest layer of the atmospheric grid and the fire subgrid on the surface 
Fig. 2. Division of fire mesh cells into subcells for fuel fraction computation. The level-set function ψ and the ignition time t_i are given at the centers a_1, ..., a_4 of the cells of the fire grid.
Fig. 3. Parallel communication in WRF. The computational domain is divided into disjoint rectangular patches. Each patch is updated by a single MPI process (distributed memory parallelism), and the process may read array data in a strip around the patch, called the halo region. Communication between patches is by halo calls to the RSL parallel infrastructure (Michalakes, 2000), which update the halo regions with values from the neighboring patches. Each patch may be divided into tiles, which execute in separate OpenMP threads (shared memory parallelism). Following WRF coding conventions (WRF Working Group 2, 2007), computational kernels execute in a single tile. They may read array values from a strip beyond the tile boundary, but no explicit communication is allowed. 3-D arrays are divided into patches and tiles in the horizontal plane, cf. Fig. 1.
Fig. 4. Parallel structure of the fire module in the WRF physics layer. The core code itself executes on a single tile, with all communication done outside. Multiple passes through the fire module are needed in each time step. 
Fig. 5. Software layers of WRF-Fire. All physics dependencies are in the dashed box. The utilities layer is called from all the other layers above. 


Coupled atmosphere-wildland fire modeling with WRF 3.3 and SFIRE 2011

February 2011 · 1,734 Reads · Jan Mandel

We describe the physical model, numerical algorithms, and software structure of WRF-Fire. WRF-Fire consists of a fire-spread model, implemented by the level-set method, coupled with the Weather Research and Forecasting model. In every time step, the fire model inputs the surface wind, which drives the fire, and outputs the heat flux from the fire into the atmosphere, which in turn influences the atmosphere. The level-set method allows submesh representation of the burning region and flexible implementation of various ignition modes. WRF-Fire is distributed as a part of WRF and it uses the WRF parallel infrastructure for parallel computing.
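The level-set coupling described above is easy to picture with a toy calculation. The sketch below (Python) advances a fire perimeter, the zero contour of ψ, according to ψ_t + R|∇ψ| = 0; the grid, wind, and spread-rate formula are invented for illustration and stand in for WRF-Fire's Rothermel-based spread rate and higher-order level-set numerics.

```python
import numpy as np

# Minimal level-set fire-spread sketch (illustrative; not the WRF-Fire code).
# The burning region is {psi <= 0}; the fire perimeter is the zero contour.
nx, ny, dx = 100, 100, 10.0                  # grid cells and spacing (m)
x, y = np.meshgrid(np.arange(nx) * dx, np.arange(ny) * dx, indexing="ij")

# Point ignition: psi starts as the signed distance from a 50 m fire circle.
psi = np.hypot(x - 500.0, y - 500.0) - 50.0

def spread_rate(u, v, n_x, n_y):
    """Toy spread rate (m/s): base rate plus the wind component along the
    outward normal (n_x, n_y). WRF-Fire uses a Rothermel-type formula."""
    return 0.1 + 0.5 * np.maximum(u * n_x + v * n_y, 0.0)

u, v = 3.0, 0.0                              # surface wind input (m/s)
dt = 5.0                                     # time step (s)

for _ in range(200):                         # 1000 s of spread
    gx, gy = np.gradient(psi, dx)            # simple central differences
    gnorm = np.hypot(gx, gy) + 1e-12
    n_x, n_y = gx / gnorm, gy / gnorm        # outward normal of the level sets
    psi -= dt * spread_rate(u, v, n_x, n_y) * gnorm  # psi_t + R|grad psi| = 0

print(f"burning area fraction after 1000 s: {np.mean(psi <= 0.0):.3f}")
```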

Fig. 4. Example results of Monte-Carlo simulations using a simple chemistry mechanism (tropospheric conditions, gas-phase chemistry without any non-methane hydrocarbons). The first figure is a histogram showing the distribution of steady-state O3 mixing ratios (nmol mol−1) in 1000 model runs. The other figures are scatter plots of O3 against individual gas-phase photolysis rate coefficients (in s−1). It can be seen that O3 decreases with increasing values of J1001a (O3 + hν → O(1D)). In contrast, its dependence on the photolysis rate J1001b (O3 + hν → O(3P)) is statistically insignificant.
The atmospheric chemistry box model CAABA/MECCA-3.0

May 2011 · 460 Reads

We present version 3.0 of the atmospheric chemistry box model CAABA/MECCA. In addition to a complete update of the rate coefficients to the most recent recommendations, a number of new features have been added: chemistry in multiple aerosol size bins; automatic multiple simulations reaching steady-state conditions; Monte-Carlo simulations with randomly varied rate coefficients within their experimental uncertainties; calculations along Lagrangian trajectories; mercury chemistry; more detailed isoprene chemistry; tagging of isotopically labeled species. Further changes have been implemented to make the code more user-friendly and to facilitate the analysis of the model results. Like earlier versions, CAABA/MECCA-3.0 is a community model published under the GNU General Public License.
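The Monte-Carlo feature mentioned above is conceptually simple: every rate coefficient is perturbed within its experimental uncertainty, conventionally by a lognormal factor f^δ with δ drawn from a standard normal distribution (the usual convention for JPL-style uncertainty factors). A minimal sketch, with an invented three-reaction mechanism rather than MECCA's rate data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented rate coefficients k and uncertainty factors f (placeholders).
# Each Monte-Carlo run scales k by f**delta with delta ~ N(0, 1), i.e. a
# lognormal perturbation consistent with the quoted uncertainty range.
mechanism = {
    "O3 + hv -> O1D":     (3.0e-5, 1.2),
    "O1D + H2O -> 2 OH":  (2.0e-10, 1.15),
    "CO + OH -> CO2 + H": (2.4e-13, 1.1),
}

n_runs = 1000
k_o3 = []
for _ in range(n_runs):
    perturbed = {name: k * f ** rng.standard_normal()
                 for name, (k, f) in mechanism.items()}
    # ... run the box model to steady state with `perturbed` here ...
    k_o3.append(perturbed["O3 + hv -> O1D"])

print(f"median {np.median(k_o3):.2e}, "
      f"2.5-97.5% range {np.percentile(k_o3, 2.5):.2e} "
      f"to {np.percentile(k_o3, 97.5):.2e}")
```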

The ACCENT-Protocol: A framework for benchmarking and model evaluation

December 2011 · 55 Reads

We summarise results from a workshop on "Model Benchmarking and Quality Assurance" of the EU Network of Excellence ACCENT, including results from other activities (e.g. COST Action 732) and publications. A formalised evaluation protocol is presented, i.e. a generic formalism describing how to perform a model evaluation. It comprises eight steps, illustrated with examples from global model applications. The first and most important step concerns the purpose of the model application, i.e. the underlying scientific or political question being addressed. We give examples to demonstrate that there is no model evaluation per se, i.e. without a focused purpose: model evaluation is testing whether a model is fit for its purpose. The remaining steps are deduced from the purpose and include model requirements, input data, key processes and quantities, benchmark data, quality indicators, sensitivities, as well as benchmarking and grading. We define "benchmarking" as the process of comparing the model output against either observational data or high-fidelity model data, i.e. benchmark data. Special focus is given to uncertainties, e.g. in observational data, which have the potential to lead to wrong conclusions in the model evaluation if not considered carefully.

Fig. 1. Schematic representation of the aerosol distribution in MADE-in. BC indicates black carbon, POM particulate organic matter, SS sea salt and DU dust. The shaded mode is the coarse mode, which does not interact with the sub-micrometer modes. The black line depicts the fine modes without BC and dust, the red line the modes for externally mixed BC and dust particles and the blue line the modes for internally mixed BC and dust. 
MADE-in: A new aerosol microphysics submodel for global simulation of insoluble particles and their mixing state

April 2011 · 100 Reads

Black carbon (BC) and mineral dust are among the most abundant insoluble aerosol components in the atmosphere. When released, most BC and dust particles are externally mixed with other aerosol species. Through coagulation with particles containing soluble material and condensation of gases, the externally mixed particles may obtain a liquid coating and be transferred into an internal mixture. The mixing state of BC and dust aerosol particles influences their radiative and hygroscopic properties, as well as their ability to form ice crystals. We introduce the new aerosol microphysics submodel MADE-in, implemented within the ECHAM/MESSy Atmospheric Chemistry global model (EMAC). MADE-in is able to track mass and number concentrations of BC and dust particles in their different mixing states, as well as particles free of BC and dust. MADE-in describes these three classes of particles through a superposition of seven log-normally distributed modes, and predicts the evolution of their size distribution and chemical composition. Six out of the seven modes are mutually interacting, allowing for the transfer of mass and number among them. Separate modes for the different mixing states of BC and dust particles in EMAC/MADE-in allow for explicit simulations of the relevant aging processes, i.e. condensation, coagulation and cloud processing. EMAC/MADE-in has been evaluated with surface and airborne measurements and mostly performs well both in the planetary boundary layer and in the upper troposphere and lowermost stratosphere.

Table 2. Information on chosen reaction rates for the two chemical reaction systems. 
Fig. 3. Equilibrium concentrations for Z (red line) and Z̃ (blue line) [ppbv] as a function of the concentration of Y, for a constant concentration of X = 20 ppbv (solid) and a constant ratio between the concentrations of species X and Y of 1:10.
Fig. 4. Net production rates [ppbv/s] for a constant concentration of Z (red line) and Z̃ (blue line) of 40 ppbv and a mixing ratio of X = 20 ppbv.
Fig. 2. Equilibrium concentrations for species Z (top) and Z̃ (bottom) [ppbv] as a function of the concentrations of X and Y.
On the attribution of contributions of atmospheric trace gases to emissions in atmospheric model applications

October 2010 · 46 Reads

We present an improved tagging method, which describes the combined effect of emissions of various species from individual emission categories, e.g. the impact of both nitrogen oxide and non-methane hydrocarbon emissions on ozone. This method is applied to two simplified chemistry schemes, which represent the main characteristics of atmospheric ozone chemistry, and analytical solutions are presented for this tagging approach. In the past, besides tagging approaches, sensitivity methods were used, which estimate the contributions from individual sources based on the difference between two simulations: a base case and a simulation with a perturbation in the respective emission category. We apply both methods to our simplified chemical systems and demonstrate that potentially large errors (up to a factor of 2) occur with the sensitivity method, depending on the degree of linearity of the chemical system. This error has two sources: first, the ability to linearise the chemical system around a base case, and second, the completeness of the contributions, meaning that all contributions should in principle add up to 100%. For some chemical regimes the first error can be minimised by employing only small perturbations of the respective emission, e.g. 5%. The second factor depends on the chemical regime and cannot be minimised by a specific experimental set-up; it is inherent to the sensitivity method. Since a complete tagging algorithm for global chemistry models is difficult to achieve, we present two error metrics which can be applied to sensitivity methods in order to estimate the potential error of this approach for a specific application.
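A toy calculation makes the completeness problem concrete: for a nonlinear production term, contributions estimated from small scaled perturbations need not add up to the total. The functional form below is invented purely for illustration and is not one of the paper's two chemistry schemes.

```python
# Toy nonlinear "ozone production" from two emission categories; the
# functional form is invented purely to illustrate the completeness error.
def production(e1, e2):
    return e1 * e2 / (1.0 + e1 + e2)

e1, e2 = 4.0, 6.0
base = production(e1, e2)

# Sensitivity method: perturb each category by 5% and scale up to 100%.
eps = 0.05
contrib1 = (base - production((1 - eps) * e1, e2)) / eps
contrib2 = (base - production(e1, (1 - eps) * e2)) / eps
total = contrib1 + contrib2

print(f"P = {base:.3f}; scaled-perturbation contributions sum to "
      f"{total:.3f} ({100 * total / base:.0f}% of P)")  # not 100%
```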

The 1-way on-line coupled atmospheric chemistry model system MECO(n) – Part 3: Meteorological evaluation of the on-line coupled system

July 2011 · 107 Reads

Three detailed meteorological case studies are conducted with the global and regional atmospheric chemistry model system ECHAM5/MESSy(→COSMO/MESSy)n, shortly named MECO(n). The aim of this article is to assess the general performance of the on-line coupling of the regional model COSMO to the global model ECHAM5. The cases are characterised by intense weather systems in Central Europe: a cold front passage in March 2010, a convective frontal event in July 2007, and the high impact winter storm "Kyrill" in January 2007. Simulations are performed with the new on-line-coupled model system and compared to classical, off-line COSMO hindcast simulations driven by ECMWF analyses. Precipitation observations from rain gauges and ECMWF analysis fields are used as reference, and both qualitative and quantitative measures are used to characterise the quality of the various simulations. It is shown that, not surprisingly, simulations with a shorter lead time are generally more accurate. Irrespective of lead time, the accuracy of the on-line and off-line COSMO simulations is comparable for the three cases. This result indicates that the new global and regional model system MECO(n) is able to simulate key mid-latitude weather systems, including cyclones, fronts, and convective precipitation, as accurately as present-day state-of-the-art regional weather prediction models in standard off-line configuration. Therefore, MECO(n) will be applied to simulate atmospheric chemistry, exploring the model's full capabilities during meteorologically challenging conditions.

Table 1. Definition of tracer field instances in COSMO/MESSy. The middle column lists the variable names of the respective fields in TRACER. The abbreviations RK and LF denote the Runge-Kutta and Leap-frog scheme, respectively. 
Table 2. Definition of initialisation patterns for passive tracers used in this study. 
Fig. 4. Initialisation pattern of the passive tracer V1. The horizontal axis shows rotated coordinates.
Fig. 5. Corrected negative tracer mass (see text) in kg for the passive tracers H (left), V1 (middle) and V2 (right) in the COSMO-7 region. For H and V1 all lines are on top of each other.
Fig. 7. Horizontal distribution at 900 hPa of the artificial tracer PNT. The location of the emission point is indicated by the light blue plus sign. Results are shown for the 12th, 15th and 18th simulation day at 12:00 UTC (columns). First row: ECHAM5/MESSy, second row COSMO-40/MESSy, third row COSMO-7/MESSy and last row composite of all three model domains.
The 1-way on-line coupled atmospheric chemistry model system MECO(n) – Part 1: Description of the limited-area atmospheric chemistry model COSMO/MESSy

January 2012 · 156 Reads

The numerical weather prediction model of the Consortium for Small Scale Modelling (COSMO), maintained by the German weather service (DWD), is connected with the Modular Earth Submodel System (MESSy). This effort is undertaken in preparation of a new, limited-area atmospheric chemistry model. Limited-area models require lateral boundary conditions for all prognostic variables. Therefore, the quality of a regional chemistry model is expected to improve if boundary conditions for the chemical constituents are provided by the driving model consistently with the meteorological boundary conditions. The newly developed model is as consistent as possible, with respect to atmospheric chemistry and related processes, with a previously developed global atmospheric chemistry general circulation model: the ECHAM/MESSy Atmospheric Chemistry (EMAC) model. The combined system constitutes a new research tool, bridging the global to the meso-γ scale for atmospheric chemistry research. MESSy provides the infrastructure and includes, among others, the process and diagnostic submodels for atmospheric chemistry simulations. Furthermore, MESSy is highly flexible, allowing model setups with tailor-made complexity, depending on the scientific question. Here, the connection of the MESSy infrastructure to the COSMO model is documented, together with the code changes required for the generalisation of regular MESSy submodels. Moreover, previously published prototype submodels for simplified tracer studies are generalised so that they can be plugged in and used in both the global and the limited-area model. They are used to evaluate the TRACER interface implementation in the new COSMO/MESSy model system and the tracer transport characteristics, an important prerequisite for future atmospheric chemistry applications. A supplementary document with further details on the technical implementation of the MESSy interface into COSMO, including a complete list of modifications to the COSMO code, is provided.

Tropical troposphere to stratosphere transport of carbon monoxide and long-lived trace species in the Chemical Lagrangian Model of the Stratosphere (CLaMS)

August 2014 · 182 Reads

Variations in the mixing ratio of trace gases of tropospheric origin entering the stratosphere in the tropics are of interest for assessing both troposphere to stratosphere transport fluxes in the tropics and the impact of these transport fluxes on the composition of the tropical lower stratosphere. Anomaly patterns of carbon monoxide (CO) and long-lived tracers in the lower tropical stratosphere allow conclusions about the rate and the variability of tropical upwelling to be drawn. Here, we present a simplified chemistry scheme for the Chemical Lagrangian Model of the Stratosphere (CLaMS) for the simulation, at comparatively low numerical cost, of CO, ozone, and long-lived trace substances (CH4, N2O, CCl3F (CFC-11), CCl2F2 (CFC-12), and CO2) in the lower tropical stratosphere.

Fig. 2. Flowchart of how the QCTM is implemented into the EMAC submodel PSC. The partitioning is calculated twice, using (a) offline and (b) online mixing ratios of HNO3 (see text for more detailed explanations).
A quasi chemistry-transport model mode for EMAC

November 2010 · 92 Reads

A quasi chemistry-transport model mode (QCTM) is presented for the numerical chemistry-climate simulation system ECHAM/MESSy Atmospheric Chemistry (EMAC). It allows for a quantification of chemical signals through suppression of any feedback between chemistry and dynamics, which noise would otherwise obscure. The signal follows from the difference of two QCTM simulations, reference and sensitivity. These are fed with offline chemical fields as a substitute for the feedbacks between chemistry and dynamics: (a) offline mixing ratios of radiatively active substances enter the radiation scheme; (b) offline mixing ratios of nitric acid enter the scheme for re-partitioning and sedimentation from polar stratospheric clouds; and (c) offline methane oxidation is the exclusive source of chemical water-vapor tendencies. Any set of offline fields suffices to suppress the feedbacks, though it may be inconsistent with the simulation setup. An adequate set of offline climatologies can be produced from a non-QCTM simulation of the reference setup. Test simulations reveal the particular importance of adequate offline fields associated with (a). Inconsistencies from (b) are negligible when using adequate fields of nitric acid. Acceptably small inconsistencies come from (c), but should vanish for an adequate prescription of water vapor tendencies. Toggling between QCTM and non-QCTM is done via namelist switches and does not require source code re-compilation.

GEOCLIM reloaded (v 1.0): A new coupled earth system model for past climate change

November 2010 · 226 Reads

We present a new version of the coupled Earth system model GEOCLIM. The new release, GEOCLIM reloaded, links the existing atmosphere and weathering modules to a novel, temporally and spatially resolved model of the global ocean circulation, which provides a physical framework for a mechanistic description of the marine biogeochemical dynamics of carbon, nitrogen, phosphorus and oxygen. The ocean model is also coupled to a fully formulated, vertically resolved diagenetic model. GEOCLIM reloaded is thus a unique tool to investigate the short- and long-term feedbacks between climatic conditions, continental inputs, ocean biogeochemical dynamics and diagenesis. A complete and detailed description of the resulting Earth system model and its new features is first provided. The performance of GEOCLIM reloaded is then evaluated by comparing a steady-state simulation under present-day conditions with a comprehensive set of oceanic data and existing global estimates of bio-element cycling in the pelagic and benthic compartments.

Fig. 3. A screen shot of the Diamond interface. The options tree, including greyed-out unselected options and options with unset values (blue), is displayed on the left. The current option is the initial condition for the velocity field. On the right, the value of the current option is displayed in the space labelled "Data". In this case the option is an embedded Python script, specifying the initial velocity field from data in a vtu file. In the top right, the schema annotation for this option is visible, while at the bottom right a space is available for user comments. Element names, provided by name attributes, are clearly displayed by elements such as fields.
Fig. 1. Data flow in a scientific model using Spud. Blue components are supplied by Spud and are model independent, red components form a part of the model but are independent of the scenario being simulated and yellow components depend on the particular scenario.  
Spud 1.0: Generalising and automating the user interfaces of scientific computer models

March 2009 · 213 Reads

The interfaces by which users specify the scenarios to be simulated by scientific computer models are frequently primitive, under-documented and ad hoc text files which make using the model in question difficult and error-prone and significantly increase the development cost of the model. In this paper, we present a model-independent system, Spud, which formalises the specification of model input formats in terms of formal grammars. This is combined with an automated graphical user interface which guides users to create valid model inputs based on the grammar provided, and a generic options-reading module which minimises the development cost of adding model options. Together, this provides a user-friendly, well-documented, self-validating user interface which is applicable to a wide range of scientific models and which minimises the developer input required to maintain and extend the model interface.
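The flavor of such a generic options-reading module can be sketched with a toy path-based interface; the XML layout and the get_option function below are illustrative inventions, not the actual Spud/libspud API.

```python
import xml.etree.ElementTree as ET

# Toy path-based options reader in the spirit of a generic options module.
# The layout and names are invented for illustration.
OPTIONS_XML = """
<options>
  <timestepping>
    <dt>0.5</dt>
    <final_time>10.0</final_time>
  </timestepping>
  <material_phase name="fluid">
    <viscosity>1.0e-3</viscosity>
  </material_phase>
</options>
"""

root = ET.fromstring(OPTIONS_XML)

def get_option(path, cast=str):
    """Look up an option by a slash-separated path and cast its value."""
    node = root.find(path)
    if node is None:
        raise KeyError(f"option not set: {path}")
    return cast(node.text.strip())

dt = get_option("timestepping/dt", float)
t_end = get_option("timestepping/final_time", float)
print(f"running {int(t_end / dt)} steps of dt = {dt}")
```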

Fig. 2. Model predictions with the version of the ICBM model implemented in SoilR. This graph reproduces Fig. 2 in Andren and Katterer (1997). This figure can be reproduced by typing example(ICBMModel) or attr(ICBMModel, "ex") in SoilR.
Fig. 3. Examples of three different representations of a two-pool model with different model structures and environmental effects on decomposition rates. The upper panel shows carbon stocks and the lower panel carbon release. Additional details about the implementation are given in the text.
Fig. 4. Carbon accumulation for the different pools included in the RothC model. DPM: the decomposable plant material pool, RPM: resistant plant material, BIO: microbial biomass pool, HUM: humified organic matter pool, and IOM: inert organic matter pool.
Fig. 1. Basic model structures implemented in SOILR. Squares represent the compartments, and arrows represent inputs and outputs to and from the compartments. These model structures are special cases of the matrix A.  
Fig. 5. Climate decomposition index (CDI) calculated as the product of a function of temperature (fT.Century1) and a function of precipitation and potential evapotranspiration (fW.Century) using monthly data from the WATCH dataset (Weedon et al., 2011).  
Models of soil organic matter decomposition: The SoilR package, version 1.0

August 2012 · 1,270 Reads

Soil organic matter decomposition is a very important process within the Earth system because it controls the rates of mineralization of carbon and other biogeochemical elements, determining their flux to the atmosphere and the hydrosphere. SoilR is a modeling framework that contains a library of functions and tools for modeling soil organic matter decomposition under the R environment for computing. It implements a variety of model structures and tools to represent carbon storage and release from soil organic matter. In SoilR, organic matter decomposition is represented as a linear system of ordinary differential equations that generalizes the structure of most compartment-based decomposition models. A variety of functions is also available to represent environmental effects on decomposition rates. This document presents the conceptual basis for the functions implemented in the package. It is complementary to the help pages released with the software.
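The general form described above, dC/dt = I(t) + ξ(t) A C, is compact enough to state directly. The sketch below solves a two-pool instance in Python (SoilR itself is an R package); the rates, transfer coefficient, and environmental modifier ξ(t) are illustrative, not a calibrated model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-pool decomposition model in the general form dC/dt = I(t) + xi(t)*A*C,
# with illustrative (uncalibrated) parameters.
k1, k2 = 0.8, 0.05                 # decomposition rates (yr^-1)
a21 = 0.3                          # fraction of pool-1 losses sent to pool 2
A = np.array([[-k1,      0.0],
              [a21 * k1, -k2]])
I = np.array([1.0, 0.0])           # external carbon inputs (Mg C ha^-1 yr^-1)
xi = lambda t: 1.0 + 0.3 * np.sin(2.0 * np.pi * t)   # environmental effects

sol = solve_ivp(lambda t, C: I + xi(t) * (A @ C), (0.0, 100.0), [0.0, 0.0])
C_fast, C_slow = sol.y[:, -1]
print(f"stocks after 100 yr: fast = {C_fast:.2f}, slow = {C_slow:.2f} Mg C/ha")
```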

Table 1. Physical model parameters.
Table 2. Comparisons of key physical model diagnostics.
Fig. 8. Model-data difference in dissolved O2 (a) above 1500 m water depth and (b) below 1500 m, in µmol kg−1. We seek the smallest error from observations in both.
The postindustrial transient run is obtained by running the model from the preindustrial state, taken to be the year 1765 and represented by the equilibrium run, to the year 1994.
First description of the Minnesota Earth System Model for Ocean Biogeochemistry (MESMO 1.0)

August 2008 · 98 Reads

Here we describe the first version of the Minnesota Earth System Model for Ocean biogeochemistry (MESMO 1.0), an intermediate complexity model based on the Grid ENabled Integrated Earth system model (GENIE-1). As with GENIE-1, MESMO has a 3D dynamical ocean, energy-moisture balance atmosphere, dynamic and thermodynamic sea ice, and marine biogeochemistry. Main development goals of MESMO were to: (1) bring oceanic uptake of anthropogenic transient tracers within data constraints; (2) increase vertical resolution in the upper ocean to better represent near-surface biogeochemical processes; (3) calibrate the deep ocean ventilation with observed abundance of radiocarbon. We achieved all these goals through a combination of objective model optimization and subjective targeted tuning. An important new feature in MESMO that dramatically improved the uptake of CFC-11 and anthropogenic carbon is the depth-dependent vertical diffusivity in the ocean, which is spatially uniform in GENIE-1. In MESMO, biological production occurs in the top two layers above the compensation depth of 100 m and is modified by additional parameters, for example, diagnosed mixed layer depth. In contrast, production in GENIE-1 occurs in a single layer with thickness of 175 m. These improvements make MESMO a well-calibrated model of intermediate complexity suitable for investigations of the global marine carbon cycle requiring long integration time.

Fig. 1. Pseudocode description of the sequence algorithm.
Fig. 2. The NO2 yields from oxidation of emitted VOC species. Bars are coloured according to the number of carbon atoms in each species. Only species with a yield of more than 0.5 molecules of NO2 are shown.
Automated sequence analysis of atmospheric oxidation pathways: SEQUENCE version 1.0

October 2009 · 319 Reads

An algorithm for the sequential analysis of the atmospheric oxidation of chemical species using output from a photochemical model is presented. Starting at a "root species", the algorithm traverses all possible reaction sequences which consume this species, and lead, via intermediate products, to final products. The algorithm keeps track of the effects of all of these reactions on their respective reactants and products. Upon completion, the algorithm has built a detailed picture of the effects of the oxidation of the root species on its chemical surroundings. The output of the algorithm can be used to determine product yields, radical recycling fractions, and ozone production potentials of arbitrary chemical species.
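The traversal amounts to a recursive walk over the reaction network that multiplies branching fractions and stoichiometric yields along each path and accumulates them at terminal products. The toy mechanism below is invented; the real algorithm must also handle cycles and radical recycling, which this sketch omits.

```python
from collections import defaultdict

# species -> list of (branching fraction, {product: stoichiometric yield});
# the mechanism is invented for illustration.
REACTIONS = {
    "TOL": [(1.0, {"RO2": 1.0})],
    "RO2": [(0.7, {"NO2": 1.0, "CARB": 1.0}),   # RO2 + NO channel
            (0.3, {"ROOH": 1.0})],              # RO2 + HO2 channel
}
FINAL = {"NO2", "CARB", "ROOH"}                 # treated as final products

def sequence(species, weight, totals):
    """Accumulate the yield of each final product reachable from species."""
    if species in FINAL or species not in REACTIONS:
        totals[species] += weight
        return
    for fraction, products in REACTIONS[species]:
        for product, stoich in products.items():
            sequence(product, weight * fraction * stoich, totals)

totals = defaultdict(float)
sequence("TOL", 1.0, totals)        # start at the root species
print(dict(totals))                 # e.g. NO2 yield per molecule of TOL
```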

The ICON-1.2 hydrostatic atmospheric dynamical core on triangular grids - Part 1: Formulation and performance of the baseline version

June 2013 · 538 Reads

As part of a broader effort to develop next-generation models for numerical weather prediction and climate applications, a hydrostatic atmospheric dynamical core is developed as an intermediate step to evaluate a finite-difference discretization of the primitive equations on spherical icosahedral grids. Based on the need for mass-conserving discretizations for multi-resolution modelling as well as scalability and efficiency on massively parallel computing architectures, the dynamical core is built on triangular C-grids using relatively small discretization stencils. This paper presents the formulation and performance of the baseline version of the new dynamical core, focusing on properties of the numerical solutions in the setting of globally uniform resolution. Theoretical analysis reveals that the discrete divergence operator defined on a single triangular cell using the Gauss theorem is only first-order accurate, and introduces grid-scale noise to the discrete model. The noise can be suppressed by fourth-order hyper-diffusion of the horizontal wind field using a time-step and grid-size-dependent diffusion coefficient, at the expense of stronger damping than in the reference spectral model. A series of idealized tests of different complexity are performed. In the deterministic baroclinic wave test, solutions from the new dynamical core show the expected sensitivity to horizontal resolution, and converge to the reference solution at R2B6 (35 km grid spacing). In a dry climate test, the dynamical core correctly reproduces key features of the meridional heat and momentum transport by baroclinic eddies. In the aqua-planet simulations at 140 km resolution, the new model is able to reproduce the same equatorial wave propagation characteristics as in the reference spectral model, including the sensitivity of such characteristics to the meridional sea surface temperature profile. These results suggest that the triangular-C discretization provides a reasonable basis for further development. The main issues that need to be addressed are the grid-scale noise from the divergence operator which requires strong damping, and a phase error of the baroclinic wave at medium and low resolutions.

Fig. 6. HNO3 (top) and NOy (bottom) as a function of flight time (UTC) from measurements of selected ER-2 flights (3 February, 5 March, 12 March) (blue dots) compared to modeled values (lines). NOy measurements give the total (solid and gas phase, for details see text) values; HNO3 measurements show only the gas phase. The black line shows the model values for the gas phase, the grey line the total (solid not in the Lagrangian particles and gas phase) values, and the dashed grey line the passive NOy tracer.
Fig. 8. ClO (top), Cl2O2 (middle) and ClONO2 (bottom) as a function of flight time (UTC) from measurements of selected ER-2 flights (27 January, 3 February, 12 March) (blue dots, cyan lines show 2σ accuracy) compared to modeled values (black lines).
Fig. 9. ClO and its reservoirs HOCl, HCl and ClONO2 from data of the OMS balloon launch on 15 March 2000. Measurements of the Mark IV instrument (blue, with error bars), the SLS instrument (red) and modeled values (black lines) are shown for comparison. Note that Mark IV and SLS are remote sensing instruments viewing in opposite directions and that the mixing ratios of the species are interpolated to the Mark IV tangent points.
The Lagrangian chemistry and transport model ATLAS: Simulation and validation of stratospheric chemistry and ozone loss in the winter 1999/2000

November 2010 · 149 Reads

ATLAS is a new global Lagrangian Chemistry and Transport Model (CTM), which includes a stratospheric chemistry scheme with 46 active species, 171 reactions, heterogeneous chemistry on polar stratospheric clouds and a Lagrangian denitrification module. Lagrangian (trajectory-based) models have several important advantages over conventional Eulerian models, including the absence of spurious numerical diffusion, efficient code parallelization and no limitation of the largest time step by the Courant-Friedrichs-Lewy criterion. This work describes and validates the stratospheric chemistry scheme of the model. Stratospheric chemistry is simulated with ATLAS for the Arctic winter 1999/2000, with a focus on polar ozone depletion and denitrification. The simulations are used to validate the chemistry module in comparison with measurements of the SOLVE/THESEO 2000 campaign. A Lagrangian denitrification module, which is based on the simulation of the nucleation, sedimentation and growth of a large number of polar stratospheric cloud particles, is used to model the substantial denitrification that occurred in this winter.

Fig. 4. Scatterplots between forecasts and observations for selected percentiles of the daily mean PM2.5 concentrations (µg/m3): (a) raw model forecasts, (b) Kalman filter-adjusted forecasts.
Fig. 9. Box plots of RMSE and decomposed RMSE (systematic, RMSEs; unsystematic, RMSEu) values of the daily mean PM2.5 concentrations (µg/m3) for the raw model forecasts and KF bias-adjusted forecasts.
Fig. 10. (a and b) RMSE and (c and d) mean bias (MB) values over observed daily mean PM2.5 concentration (µg/m3) bins for the raw model forecasts and the KF bias-adjusted forecasts. Panels (a) and (c) are for the warm season, (b) and (d) for the cool season.
Fig. 11. False alarm ratio (FAR) and hit rate (H) for the daily mean PM2.5 forecasts by the raw model and the KF bias-adjustment over the domain (DM) and all the sub-regions during (a) warm season and (b) cool season: FAR-MD, FAR associated with raw model forecasts; FAR-KF, FAR associated with KF forecasts; H-MD, H associated with raw model forecasts; and H-KF, H associated with KF forecasts.
Assessment of bias-adjusted PM2.5 air quality forecasts over the continental United States during 2007

April 2010 · 322 Reads

To develop fine particulate matter (PM<sub>2.5</sub>) air quality forecasts for the US, a National Air Quality Forecast Capability (NAQFC) system, which linked NOAA's North American Mesoscale (NAM) meteorological model with EPA's Community Multiscale Air Quality (CMAQ) model, was deployed in the developmental mode over the continental United States during 2007. This study investigates the operational use of a bias-adjustment technique called the Kalman Filter Predictor approach for improving the accuracy of the PM<sub>2.5</sub> forecasts at monitoring locations. The Kalman Filter Predictor bias-adjustment technique is a recursive algorithm designed to optimally estimate bias-adjustment terms using the information extracted from previous measurements and forecasts. The bias-adjustment technique is found to improve PM<sub>2.5</sub> forecasts (i.e. reduced errors and increased correlation coefficients) for the entire year at almost all locations. The NAQFC tends to overestimate PM<sub>2.5</sub> during the cool season and underestimate during the warm season in the eastern part of the continental US domain, but the opposite is true for the Pacific Coast. In the Rocky Mountain region, the NAQFC system overestimates PM<sub>2.5</sub> for the whole year. The bias-adjusted forecasts can quickly (after 2–3 days' lag) adjust to reflect the transition from one regime to the other. The modest computational requirements and systematic improvements in forecast outputs across all seasons suggest that this technique can be easily adapted to perform bias adjustment for real-time PM<sub>2.5</sub> air quality forecasts.
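The recursive estimator can be illustrated by a scalar Kalman filter in which the forecast bias is modelled as a random walk; the variance ratio controlling the filter's responsiveness is a tunable parameter. This is a generic sketch in the spirit of the Kalman Filter Predictor, not the NAQFC implementation, and the data are synthetic.

```python
import numpy as np

# Scalar Kalman filter for a random-walk forecast-bias model (generic
# sketch of the KF Predictor idea; tuning and data are illustrative).
def kf_bias_adjust(forecasts, observations, ratio=0.06):
    """Return daily bias estimates; ratio = sigma_eta^2 / sigma_eps^2 sets
    how quickly the estimate responds to new forecast errors."""
    b, p = 0.0, 1.0                  # initial bias estimate and its variance
    sigma_eps2 = 1.0                 # error variance (cancels in the gain)
    sigma_eta2 = ratio * sigma_eps2  # variance of the random-walk bias
    biases = []
    for f, o in zip(forecasts, observations):
        p = p + sigma_eta2                   # predict: bias random walk
        k = p / (p + sigma_eps2)             # Kalman gain
        b = b + k * ((f - o) - b)            # update with today's error
        p = (1.0 - k) * p
        biases.append(b)
    return np.array(biases)

rng = np.random.default_rng(0)
obs = 10.0 + rng.normal(0.0, 2.0, 60)        # synthetic PM2.5 (ug/m3)
fcst = obs + 4.0 + rng.normal(0.0, 1.0, 60)  # raw forecast with +4 bias
b = kf_bias_adjust(fcst, obs)
# future raw forecasts would be adjusted by subtracting b[-1]
print(f"estimated bias after 60 days: {b[-1]:.2f} (true bias 4.0)")
```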


Incremental testing of the Community Multiscale Air Quality (CMAQ) modeling system version 4.7

March 2010 · 257 Reads

This paper describes the scientific and structural updates to the latest release of the Community Multiscale Air Quality (CMAQ) modeling system version 4.7 (v4.7) and points the reader to additional resources for further details. The model updates were evaluated relative to observations and results from previous model versions in a series of simulations conducted to incrementally assess the effect of each change. The focus of this paper is on five major scientific upgrades: (a) updates to the heterogeneous N2O5 parameterization, (b) improvement in the treatment of secondary organic aerosol (SOA), (c) inclusion of dynamic mass transfer for coarse-mode aerosol, (d) revisions to the cloud model, and (e) new options for the calculation of photolysis rates. Incremental test simulations over the eastern United States during January and August 2006 are evaluated to assess the model response to each scientific improvement, providing explanations of differences in results between v4.7 and previously released CMAQ model versions. Particulate sulfate predictions are improved across all monitoring networks during both seasons due to cloud module updates. Numerous updates to the SOA module improve the simulation of seasonal variability and decrease the bias in organic carbon predictions at urban sites in the winter. Bias in the total mass of fine particulate matter (PM2.5) is dominated by overpredictions of unspeciated PM2.5 (PMother) in the winter and by underpredictions of carbon in the summer. The CMAQv4.7 model results show slightly worse performance for ozone predictions. However, changes to the meteorological inputs are found to have a much greater impact on ozone predictions compared to changes to the CMAQ modules described here. Model updates had little effect on existing biases in wet deposition predictions.

The Lagrangian chemistry and transport model ATLAS: Validation of advective transport and mixing

November 2009 · 88 Reads

We present a new global Chemical Transport Model (CTM) with full stratospheric chemistry and Lagrangian transport and mixing called ATLAS (Alfred Wegener InsTitute LAgrangian Chemistry/Transport System). Lagrangian (trajectory-based) models have several important advantages over conventional Eulerian (grid-based) models, including the absence of spurious numerical diffusion, efficient code parallelization and no limitation of the largest time step by the Courant-Friedrichs-Lewy criterion. The basic concept of transport and mixing is similar to the approach in the commonly used CLaMS model. Several aspects of the model are different from CLaMS and are introduced and validated here, including a different mixing algorithm for lower resolutions which is less diffusive and agrees better with observations with the same mixing parameters. In addition, values for the vertical and horizontal stratospheric bulk diffusion coefficients are inferred and compared to other studies. This work focusses on the description of the dynamical part of the model and the validation of the mixing algorithm. The chemistry module, which contains 49 species, 170 reactions and a detailed treatment of heterogeneous chemistry, will be presented in a separate paper.

Description and evaluation of GMXe: A new aerosol submodel for global simulations (v1)

September 2010 · 108 Reads

We present a new aerosol microphysics and gas aerosol partitioning submodel (Global Modal-aerosol eXtension, GMXe) implemented within the ECHAM/MESSy Atmospheric Chemistry model (EMAC, version 1.8). The submodel is computationally efficient and is suitable for medium- to long-term simulations with global and regional models. The aerosol size distribution is treated using 7 log-normal modes and has the same microphysical core as the M7 submodel (Vignati et al., 2004). The main developments in this work are: (i) the extension of the aerosol emission routines and the M7 microphysics, so that an increased (and variable) number of aerosol species can be treated (new species include sodium and chloride, and potentially magnesium, calcium, and potassium); (ii) the coupling of the aerosol microphysics to a choice of treatments of gas/aerosol partitioning to allow the treatment of semi-volatile aerosol; and (iii) the implementation and evaluation of the developed submodel within the EMAC model of atmospheric chemistry. Simulated concentrations of black carbon, particulate organic matter, dust, sea spray, sulfate and ammonium aerosol are shown to be in good agreement with observations (for all species at least 40% of modelled values are within a factor of 2 of the observations). The distribution of nitrate aerosol is compared to observations in both clean and polluted regions. Concentrations in polluted continental regions are simulated quite well, but there is a general tendency to overestimate nitrate, particularly in coastal regions (geometric mean of modelled values/geometric mean of observed data ≈ 2). In all regions considered more than 40% of nitrate concentrations are within a factor of two of the observations. Marine nitrate concentrations are well captured, with 96% of modelled values within a factor of 2 of the observations.

An analytical solution to calculate bulk mole fractions for any number of components in aerosol droplets after considering partitioning to a surface layer

November 2010 · 185 Reads

Calculating the equilibrium composition of atmospheric aerosol particles, using all variations of Köhler theory, has largely assumed that the total solute concentrations define both the water activity and surface tension. Recently, however, bulk-to-surface phase partitioning has been postulated as a process which significantly alters the predicted point of activation. In this paper, an analytical solution to calculate the removal of material from a bulk to a surface layer in aerosol particles is derived using a well-established and validated surface tension framework. Applicability to an unlimited number of components is possible via reliance on data from each binary system. Whilst assumptions regarding behaviour at the surface layer have been made to facilitate the derivation, it is proposed that the framework presented can capture the overall impact of bulk-surface partitioning. Demonstrations of the equations for two- and five-component mixtures are given, and comparisons are made with more detailed frameworks capable of modelling ternary systems at higher levels of complexity. Predictions made by the model across a range of surface-active properties should be tested against measurements; indeed, recommendations are given for experimental validation and for assessing sensitivities to accuracy and the required level of complexity within large-scale frameworks. Importantly, the computational cost of using the solution presented in this paper is roughly a factor of 20 lower than that of a similar iterative approach; a comparison with highly coupled approaches is not available beyond a 3-component system.

Table 2. Theoretical and calculated AOD at λ = 532 nm per size section.
Fig. 4. Profiles of attenuated backscatter (β′) for BCAR (left) and DUST (right), per size section (bin), as a function of altitude for λ = 532 and 1064 nm.
Fig. 5. Profiles of attenuated scattering ratio (R′) and color ratio (χ′) for BCAR (left) and DUST (right) per size section as a function of altitude.
Fig. 9. Temporal evolution of the daily mean AOD (500 nm) by AERONET (red line) and the corresponding CHIMERE AOD (at 532 nm, black line) at three AERONET sites (Blida, Carpentras, Lecce).
Fig. 12. Aerosol optical depth modeled with CHIMERE for λ = 532 nm for 9 and 14 July 2007 at the same hour as the CALIPSO overpass time.
Lidar signal simulation for the evaluation of aerosols in chemistry transport models

December 2012 · 215 Reads

We present an adaptable tool, the OPTSIM (OPTical properties SIMulation) software, for the simulation of optical properties and lidar attenuated backscatter profiles (β′) from aerosol concentrations calculated by chemistry transport models (CTM). It was developed to model both Level 1 observations and Level 2 aerosol lidar retrievals in order to compare model results to measurements: the Level 2 retrievals allow the main properties of aerosol plume structures to be estimated, but may be limited by their specific assumptions. The Level 1 simulation, originally developed for this tool, gives access to more information about aerosol properties (β′) while requiring fewer assumptions about aerosol types. In addition to an evaluation of the aerosol loading and optical properties, active remote sensing allows the analysis of the vertical structure of aerosols. An academic case study for two different species (black carbon and dust) is presented and shows the consistency of the simulator. Illustrations are then given through the analysis of dust events in the Mediterranean region during the summer of 2007. These are based on simulations by the CHIMERE regional CTM and observations from the CALIOP space-based lidar, and highlight the potential of this approach to evaluate the concentration, size and vertical structure of the aerosol plumes.
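The Level 1 quantity β′ follows the standard lidar equation: the attenuated backscatter at a level is the total (aerosol plus molecular) backscatter damped by the two-way transmission along the line of sight. The sketch below computes such a profile for a ground-based geometry (CALIOP views from space, so its integration runs from the top down); the profiles and lidar ratios are illustrative, not OPTSIM output.

```python
import numpy as np

# Attenuated backscatter beta'(z) from extinction and backscatter profiles,
# via the standard lidar equation (illustrative profiles, not OPTSIM).
z = np.linspace(0.0, 10e3, 200)                  # altitude grid (m)
dz = z[1] - z[0]

alpha_aer = 1e-4 * np.exp(-z / 2000.0)           # aerosol extinction (m^-1)
beta_aer = alpha_aer / 50.0                      # aerosol lidar ratio S = 50 sr
alpha_mol = 1.2e-5 * np.exp(-z / 8000.0)         # molecular extinction (m^-1)
beta_mol = alpha_mol / (8.0 * np.pi / 3.0)       # Rayleigh lidar ratio (sr)

# two-way transmission from the ground up to each level
tau = np.cumsum((alpha_aer + alpha_mol) * dz)    # optical depth below z
beta_att = (beta_aer + beta_mol) * np.exp(-2.0 * tau)

print(f"AOD = {np.sum(alpha_aer * dz):.3f}, "
      f"beta'(surface) = {beta_att[0]:.2e} m^-1 sr^-1")
```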

Fig. 3b. Relative differences (in %) of annual mean vertically integrated concentrations of sulfate, BC, POM, SOA, dust, and sea salt between MAM7 and MAM3.
Fig. 6. Same as Fig. 5, except for annual and zonal mean aerosol number concentrations in the Aitken, accumulation and coarse modes.
Fig. 7. Annually averaged global distribution of CCN number concentration at 0.1% supersaturation at the surface in MAM3 (upper) and MAM7 (lower).
Toward a minimal representation of aerosols in climate models: Description and evaluation in the Community Atmosphere Model CAM5

May 2012 · 293 Reads

A modal aerosol module (MAM) has been developed for the Community Atmosphere Model version 5 (CAM5), the atmospheric component of the Community Earth System Model version 1 (CESM1). MAM is capable of simulating the aerosol size distribution and both internal and external mixing between aerosol components, treating numerous complicated aerosol processes and aerosol physical, chemical and optical properties in a physically-based manner. Two MAM versions were developed: a more complete version with seven lognormal modes (MAM7), and a version with three lognormal modes (MAM3) for the purpose of long-term (decades to centuries) simulations. In this paper a description and evaluation of the aerosol module and its two representations are provided. Sensitivity of the aerosol lifecycle to simplifications in the representation of aerosol is discussed. Simulated sulfate and secondary organic aerosol (SOA) mass concentrations are remarkably similar between MAM3 and MAM7. Differences in primary organic matter (POM) and black carbon (BC) concentrations between MAM3 and MAM7 are also small (mostly within 10%). The mineral dust global burden differs by 10% and sea salt burden by 30-40% between MAM3 and MAM7, mainly due to the different size ranges for dust and sea salt modes and different standard deviations of the log-normal size distribution for sea salt modes between MAM3 and MAM7. The model is able to qualitatively capture the observed geographical and temporal variations of aerosol mass and number concentrations, size distributions, and aerosol optical properties. However, there are noticeable biases; e.g., simulated BC concentrations are significantly lower than measurements in the Arctic. There is a low bias in modeled aerosol optical depth on the global scale, especially in the developing countries. These biases in aerosol simulations clearly indicate the need for improvements of aerosol processes (e.g., emission fluxes of anthropogenic aerosols and precursor gases in developing countries, boundary layer nucleation) and properties (e.g., primary aerosol emission size, POM hygroscopicity). In addition, the critical role of cloud properties (e.g., liquid water content, cloud fraction) responsible for the wet scavenging of aerosol is highlighted.

Fig. 2. (a) Monthly mean O3 with CB05-Base; (b) percent increases.
Fig. 7. The median and inter-quartile range of mean bias for the daily maximum 8-h O3 with CB05-TU and CB05-Base: (a) Los Angeles, (b) Portland, (c) Seattle, (d) Chicago, (e) New York/New Jersey, (f) Detroit. The number beneath each paired evaluation represents the total sample number in each binned range of observed concentration.
Fig. 8. The median and inter-quartile range of mean normalized bias for the daily maximum 8-h O3 with CB05-TU and CB05-Base: (a) Los Angeles, (b) Portland, (c) Seattle, (d) Chicago, (e) New York/New Jersey, (f) Detroit. The number beneath each paired evaluation represents the total sample number in each binned range of observed concentration.
Impact of a new condensed toluene mechanism on air quality model predictions in the US

December 2010 · 69 Reads

A new condensed toluene mechanism is incorporated into the Community Multiscale Air Quality Modeling system. Model simulations are performed using the CB05 chemical mechanism containing the existing (base) and the new toluene mechanism for the western and eastern US for a summer month. With current estimates of tropospheric emission burden, the new toluene mechanism increases monthly mean daily maximum 8-h ozone by 1.0–3.0 ppbv in Los Angeles, Portland, Seattle, Chicago, Cleveland, the northeastern US, and Detroit compared to that with the base toluene chemistry. It reduces model mean bias for ozone at elevated observed ozone mixing ratios. While the new mechanism increases predicted ozone, it does not enhance ozone production efficiency. A sensitivity study suggests that it can further enhance ozone if elevated toluene emissions are present. While changes in total fine particulate mass are small, predictions of in-cloud SOA increase substantially.

Automatic generation of large ensembles for air quality forecasting using the Polyphemus system

July 2009 · 45 Reads

This paper describes a method to automatically generate a large ensemble of air quality simulations. This is achieved using the Polyphemus system, which is flexible enough to build many different models. The system offers a wide range of options in the construction of a model: many physical parameterizations, several numerical schemes and different input data can be combined. In addition, input data can be perturbed. In this study, some 30 alternatives are available for the generation of a model. For each alternative, the options are given a probability, based on how reliable they are supposed to be. Each model of the ensemble is defined by randomly selecting one option per alternative. In order to decrease the computational load, as many computations as possible are shared by the models of the ensemble. As an example, an ensemble of 101 photochemical models is generated and run for the year 2001 over Europe. The models' performance is quickly reviewed, and the ensemble structure is analyzed. We found a strong diversity in the results of the models and a wide spread of the ensemble. It is noteworthy that many models turn out to be the best model in some regions and on some dates.
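The generation step reduces to one weighted random draw per alternative. The sketch below shows this; the alternatives, option names, and weights are invented stand-ins for Polyphemus's actual configuration space.

```python
import random

# One option is drawn per alternative according to its reliability weight.
# Alternatives, option names, and weights are invented for illustration.
ALTERNATIVES = {
    "chemistry": [("mechanism_A", 0.5), ("mechanism_B", 0.5)],
    "vertical_diffusion": [("closure_1", 0.4), ("closure_2", 0.6)],
    "deposition": [("scheme_1", 0.5), ("scheme_2", 0.5)],
    "emissions_perturbation": [("none", 0.6), ("+20%", 0.2), ("-20%", 0.2)],
}

def draw_model(rng):
    """Define one ensemble member by a weighted draw per alternative."""
    return {alt: rng.choices([name for name, _ in opts],
                             weights=[w for _, w in opts])[0]
            for alt, opts in ALTERNATIVES.items()}

rng = random.Random(1)
ensemble = [draw_model(rng) for _ in range(101)]   # 101 members, as above
print(ensemble[0])
```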

Fig. 1. Illustration of block structure for simulation domain as well as one generic block in MUSCAT.  
Table 2. Examples of two-stage, second-order explicit Runge-Kutta methods.
Table 3. Synopsis of spatial structure for the academic test case.
Table 4. Synopsis of spatial structure for the realistic test case.
Implementation of multirate time integration methods for air pollution modelling

November 2012 · 772 Reads

Explicit time integration methods are characterised by a small numerical effort per time step. In the application to multiscale problems in atmospheric modelling, this benefit is often more than compensated by stability problems and step size restrictions resulting from stiff chemical reaction terms and from a locally varying Courant-Friedrichs-Lewy (CFL) condition for the advection terms. Splitting methods may be applied to efficiently combine implicit and explicit methods (IMEX splitting). Complementarily, multirate time integration schemes allow for a local adaptation of the time step size to the grid size. In combination, these approaches lead to schemes which are efficient in terms of evaluations of the right-hand side. Special challenges arise when these methods are to be implemented. For an efficient implementation, it is crucial to locate and exploit redundancies. Furthermore, the more complex programme flow may lead to computational overhead which, in the worst case, more than compensates for the theoretical gain in efficiency. We present a general splitting approach which allows both for IMEX splittings and for local time step adaptation. The main focus is on an efficient implementation of this approach for parallel computation on computer clusters.
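The benefit of multirate integration shows up even in a minimal sketch: a fast, stiff term is advanced with m substeps per macro step while the slow term is frozen, so the global step is no longer limited by the fast scale. Forward Euler is used below for brevity; the paper's schemes are two-stage Runge-Kutta methods with proper interface coupling.

```python
import numpy as np

# Minimal multirate sketch for y' = f_slow(y) + f_fast(y): the fast term
# takes m forward-Euler substeps per macro step with the slow term frozen.
def f_slow(y):
    return -1.0 * y      # slow process (e.g. advection), lambda = -1
def f_fast(y):
    return -200.0 * y    # fast stiff process (e.g. chemistry), lambda = -200

def multirate_euler(y0, t_end, H, m):
    y, t = y0, 0.0
    while t < t_end - 1e-12:
        s = f_slow(y)                # evaluated once per macro step
        h = H / m
        for _ in range(m):           # m fast substeps, slow term frozen
            y = y + h * (s + f_fast(y))
        t += H
    return y

exact = np.exp(-201.0)               # true solution of y' = -201 y at t = 1
for m in (1, 5, 25):
    y = multirate_euler(1.0, 1.0, H=0.02, m=m)
    print(f"m = {m:2d}: y(1) = {y: .3e}, error = {abs(y - exact):.1e}")
# m = 1 is unstable at this macro step size; m >= 5 restores stability.
```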

Description of a hybrid ice sheet-shelf model, and application to Antarctica

October 2012 · 434 Reads

The formulation of a 3-D ice sheet-shelf model is described. The model is designed for long-term continental-scale applications, and has been used mostly in paleoclimatic studies. It uses a hybrid combination of the scaled shallow ice and shallow shelf approximations for ice flow. Floating ice shelves and grounding-line migration are included, with parameterized ice fluxes at grounding lines that allow relatively coarse resolutions to be used. All significant components and parameterizations of the model are described in some detail. Basic results for modern Antarctica are compared with observations, and simulations over the last 5 million years are compared with previously published results. The sensitivity of ice volumes during the last deglaciation to basal sliding coefficients is discussed.

Second-order Shallow Ice Approximation with Non-linear Rheology: Exploring Validity by Performing Numerical Experiments

December 2013 · 32 Reads

In ice sheet modelling, the shallow-ice approximation (SIA) and second-order shallow-ice approximation (SOSIA) schemes are approaches to approximate the solution of the full Stokes equations governing ice sheet dynamics. This is done by writing the solution to the full Stokes equations as an asymptotic expansion in the aspect ratio ε, i.e. the quotient between a characteristic height and a characteristic length of the ice sheet. SIA retains the zeroth-order terms and SOSIA the zeroth-, first-, and second-order terms in the expansion. Here, we evaluate the order of accuracy of SIA and SOSIA by numerically solving a two-dimensional model problem for different values of ε, and comparing the solutions with a finite element solution to the full Stokes equations obtained from Elmer/Ice. The SIA and SOSIA solutions are also derived analytically for the model problem. For decreasing ε, the computed errors in SIA and SOSIA decrease, but not always in the expected way. Moreover, they depend critically on a parameter introduced to avoid singularities in Glen's flow law in the ice model. This is because the assumptions behind the SIA and SOSIA neglect a thick, high-viscosity boundary layer near the ice surface. The sensitivity to this parameter is explained by the analytical solutions. As a verification of the comparison technique, the SIA and SOSIA solutions for a fluid with Newtonian rheology are compared to the solutions by Elmer/Ice, with results agreeing very well with theory.
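Written out in generic notation (the symbols are ours, chosen to match the description above), the expansion and the two truncations are:

```latex
\varepsilon = \frac{[H]}{[L]}, \qquad
\mathbf{v} = \mathbf{v}^{(0)} + \varepsilon\,\mathbf{v}^{(1)}
           + \varepsilon^{2}\,\mathbf{v}^{(2)} + \mathcal{O}(\varepsilon^{3}),
\qquad
\text{SIA:}\ \mathbf{v} \approx \mathbf{v}^{(0)}, \quad
\text{SOSIA:}\ \mathbf{v} \approx \mathbf{v}^{(0)}
  + \varepsilon\,\mathbf{v}^{(1)} + \varepsilon^{2}\,\mathbf{v}^{(2)}.
```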

Downscaling the climate change for oceans around Australia

September 2012 · 217 Reads

At present, global climate models used to project changes in climate poorly resolve mesoscale ocean features such as boundary currents and eddies. These missing features may be important to realistically project the marine impacts of climate change. Here we present a framework for dynamically downscaling coarse climate change projections utilising a near-global ocean model that resolves these features in the Australasian region, with coarser resolution elsewhere. A time-slice projection for a 2060s ocean was obtained by adding climate change anomalies to initial conditions and surface fluxes of a near-global eddy-resolving ocean model. Climate change anomalies are derived from the differences between present and projected climates from a coarse global climate model. These anomalies are added to observed fields, thereby reducing the effect of model bias from the climate model. The downscaling model used here is ocean-only and does not include the effects that changes in the ocean state will have on the atmosphere and air–sea fluxes. We use restoring of the sea surface temperature and salinity to approximate real-ocean feedback on heat flux and to keep the salinity stable. Extra experiments with different feedback parameterisations are run to test the sensitivity of the projection. Consistent spatial differences emerge in sea surface temperature, salinity, stratification and transport between the downscaled projections and those of the climate model. Also, the spatial differences become established rapidly (< 3 yr), indicating the importance of mesoscale resolution. However, differences in the magnitude of these changes between experiments show that feedback of the ocean onto the air–sea fluxes is still important in determining the state of the ocean in these projections. Until it is feasible to regularly run a global climate model with eddy resolution, our framework for ocean climate change downscaling provides an attractive way to explore the response of mesoscale ocean features to climate change and their effect on the broader ocean.

Fig. 1. Flow diagram of implementing GEOS-Chem chemistry with KPP.
Table 2. Comparison of the average number of function calls and chemical time steps per advection time step per grid cell.
Fig. 6. Sensitivity of the O3 column measured by TES with respect to the NOx emissions (parts per billion by volume) over Asia on 1 April 2001.
Implementation and evaluation of an array of chemical solvers in the Global Chemical Transport Model GEOS-Chem

July 2009 · 215 Reads

This paper discusses the implementation and performance of an array of gas-phase chemistry solvers for the state-of-the-science GEOS-Chem global chemical transport model. The implementation is based on the Kinetic PreProcessor (KPP). Two Perl parsers automatically generate the needed interfaces between GEOS-Chem and KPP, and allow access to the chemical simulation code without any additional programming effort. This work illustrates the potential of KPP to positively impact global chemical transport modeling by providing additional functionality as follows. (1) The user can select a highly efficient numerical integration method from an array of solvers available in the KPP library. (2) KPP offers a wide variety of user options for studies that involve changing the chemical mechanism (e.g., a set of additional reactions is automatically translated into efficient code and incorporated into a modified global model). (3) This work provides access to tangent linear, continuous adjoint, and discrete adjoint chemical models, with applications to sensitivity analysis and data assimilation.

Table 1. Post-delivery problem rates as reported by Pfleeger and Hatton (1997)
Fig. 4. Defect density of projects by defect assignment method. Previously published defect densities from Table 1 are shown on the right.  
Assessing climate model software quality: A defect density analysis of three models

August 2012

·

282 Reads

A climate model is an executable theory of the climate; the model encapsulates climatological theories in software so that they can be simulated and their implications investigated. Thus, in order to trust a climate model, one must trust that its software has been built correctly. Our study explores the nature of software quality in the context of climate modelling. We performed an analysis of defect reports and defect fixes in several versions of leading global climate models by collecting defect data from bug tracking systems and version control repository comments. We found that the climate models all have very low defect densities compared to well-known, similarly sized open-source projects. We discuss the implications of our findings for the assessment of climate model software trustworthiness.
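The study's core metric is easily stated: post-delivery defects per thousand source lines of code (KSLOC). A trivial sketch of the calculation, with made-up numbers rather than the paper's data:

```python
def defect_density(defects, ksloc):
    """Post-delivery defects per thousand source lines of code (KSLOC).
    This mirrors the metric tabulated in Table 1; the figures below are
    hypothetical placeholders, not the study's data."""
    return defects / ksloc

# Hypothetical comparison of a climate model against an open-source project
for name, defects, ksloc in [("climate_model", 120, 400.0), ("oss_project", 900, 350.0)]:
    print(f"{name}: {defect_density(defects, ksloc):.2f} defects/KSLOC")
```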

Upgrading photolysis in the p-TOMCAT CTM: Model evaluation and assessment of the role of clouds

May 2009

·

77 Reads

A new version of the p-TOMCAT Chemical Transport Model (CTM), which includes an improved photolysis code, Fast-JX, is validated. Through offline testing we show that Fast-JX captures well the observed J(NO2) and J(O1D) values obtained at Weybourne and during a flight above the Atlantic, though with some overestimation of J(O1D) compared to the aircraft data. By comparing p-TOMCAT output of CO and ozone with measurements, we find that the inclusion of Fast-JX in the CTM strongly improves the latter's ability to capture the seasonality and levels of tracer concentrations. A probability distribution analysis demonstrates that photolysis rates and oxidant (OH, ozone) concentrations cover a broader range of values when using Fast-JX instead of the standard photolysis scheme. This is driven not only by improvements in the seasonality of cloudiness but even more by the better representation of cloud spatial variability. We use three different cloud treatments to study the radiative effect of clouds on the abundances of a range of tracers and find only modest effects on a global scale, consistent with the most relevant recent study. The new version of the validated CTM will be used for a variety of future studies examining the variability of tropospheric composition and its drivers.
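For orientation, a photolysis rate coefficient such as J(NO2) or J(O1D) is a wavelength integral of absorption cross-section, quantum yield and actinic flux, J = ∫ σ(λ) φ(λ) F(λ) dλ, where the actinic flux through clouds is the quantity a code like Fast-JX computes. The sketch below evaluates this integral numerically with entirely hypothetical smooth profiles; it is an order-of-magnitude illustration only, not Fast-JX data.

```python
import numpy as np

# J = integral of sigma(lambda) * phi(lambda) * F(lambda) d(lambda)
# All spectral profiles below are smooth hypothetical stand-ins.
wl = np.linspace(290e-9, 420e-9, 200)                  # wavelength (m)
sigma = 2e-23 * np.exp(-((wl - 390e-9) / 30e-9) ** 2)  # cross-section (m^2), Gaussian stand-in
phi = np.clip((420e-9 - wl) / 30e-9, 0.0, 1.0)         # quantum yield (dimensionless)
flux = 1e27 * np.ones_like(wl)                         # actinic flux (photons m^-2 s^-1 m^-1)

J = np.trapz(sigma * phi * flux, wl)                   # photolysis rate coefficient (s^-1)
print(f"J = {J:.2e} s^-1 (hypothetical inputs)")       # ~1e-3 s^-1, a plausible magnitude
```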

Regional scale ozone data assimilation using an Ensemble Kalman Filter and the CHIMERE Chemical-Transport Model

February 2014

·

182 Reads

An ensemble Kalman filter (EnKF) has been coupled to the CHIMERE chemical transport model in order to assimilate ozone ground-based measurements on a regional scale. The number of ensemble members is reduced to 20, which allows for future operational use of the system for air quality analysis and forecasting. Observation sites of the European ozone monitoring network have been classified using criteria on ozone temporal variability, based on previous work by Flemming et al. (2005). This leads to the choice of specific subsets of suburban, rural and remote sites for data assimilation and for evaluation of the reference run and the assimilation system. For a 10-day experiment during an ozone pollution event over Western Europe, data assimilation allows for a significant improvement in ozone fields: the RMSE is reduced by about a third with respect to the reference run, and the hourly correlation coefficient is increased from 0.75 to 0.87. Several sensitivity tests focus on an a posteriori diagnostic estimation of errors associated with the background estimate and with the spatial representativeness of observations. A strong diurnal cycle in both of these errors, with an amplitude of up to a factor of 2, is made evident. Therefore, the hourly ozone background error and the observation error variances are corrected online in separate assimilation experiments. These adjusted background and observational error variances provide a better uncertainty estimate, as verified using statistics based on the reduced centered random variable. Over the studied 10-day period the overall EnKF performance over evaluation stations is found to be relatively unaffected by different formulations of observation and simulation errors, probably due to the large density of observation sites. From these sensitivity tests, an optimal configuration was chosen for an assimilation experiment extended over a three-month summer period, which shows similarly good performance to the 10-day experiment.
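For readers unfamiliar with the method, a minimal stochastic EnKF analysis step can be written in a few lines. The sketch below assumes a linear observation operator and generic variable names; it is not the CHIMERE implementation, and operational systems add localisation, inflation and the online error-variance adjustments discussed above.

```python
import numpy as np

def enkf_update(Xf, y, H, R, rng):
    """Stochastic EnKF analysis step with perturbed observations.
    Xf : (n, m) forecast ensemble, m members of an n-dimensional state
    y  : (p,)   observation vector
    H  : (p, n) linear observation operator
    R  : (p, p) observation-error covariance
    """
    m = Xf.shape[1]
    A = Xf - Xf.mean(axis=1, keepdims=True)          # ensemble anomalies
    Pf_Ht = A @ (H @ A).T / (m - 1)                   # P_f H^T from the ensemble
    K = Pf_Ht @ np.linalg.inv(H @ Pf_Ht + R)          # Kalman gain
    Yp = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=m).T
    return Xf + K @ (Yp - H @ Xf)                     # analysis ensemble

rng = np.random.default_rng(1)
Xf = rng.normal(50.0, 10.0, (100, 20))               # e.g. 100 grid cells, 20 members
H = np.zeros((3, 100)); H[0, 5] = H[1, 40] = H[2, 77] = 1.0   # 3 "stations"
R = 4.0 * np.eye(3)                                   # obs error variance (ppb^2)
y = np.array([55.0, 48.0, 60.0])                      # hypothetical ozone obs (ppb)
Xa = enkf_update(Xf, y, H, R, rng)
print(Xa.mean(axis=1)[[5, 40, 77]])                   # analysis mean pulled toward obs
```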

Fig. 2. Skill score (against persistence of the observed field shown in the insert) evolution of the control and experimental simulations. A one-step DA algorithm was applied to the experimental run on day 54 (23 February 2004), whereby the ice model state was updated by changing material properties based on agreement with lower-dimensional features deduced from RGPS data processed to show regions of high deformation (insert). Days are Julian days of 2004.
Fig. 3. Comparisons of the experimental (left column) and control (right column) simulations with LKFs interpreted  
Fig. A1. Failure envelope in principal stress space for the elastic-decohesive model.  
Physically-based data assimilation

May 2010

·

69 Reads

Ideally, a validation and assimilation scheme should maintain the physical principles embodied in the model and be able to evaluate and assimilate lower-dimensional features (e.g., discontinuities) contained within a bulk simulation, even when these features are not directly observed or represented by model variables. We present such a scheme and suggest its potential to resolve or alleviate some outstanding problems that stem from making and applying required, yet often non-physical, assumptions and procedures in common operational data assimilation. As proof of concept, we use a sea-ice model with remotely sensed observations of leads in a one-step assimilation cycle. Using the new scheme in a sixteen-day simulation experiment introduces model skill (against persistence) several days earlier than in the control run, improves the overall model skill, and delays its drop-off at later stages of the simulation. The potential and requirements to extend this scheme to different applications, and to both empirical and statistical multivariate and full-cycle data assimilation schemes, are discussed.

Unified parameterization of the planetary boundary layer and shallow convection with a higher-order turbulence closure in the Community Atmosphere Model: Single-column experiments

November 2012

·

207 Reads

This paper describes the coupling of the Community Atmosphere Model (CAM) version 5 with a unified multi-variate probability density function (PDF) parameterization, Cloud Layers Unified by Binormals (CLUBB). CLUBB replaces the planetary boundary layer (PBL), shallow convection, and cloud macrophysics schemes in CAM5 with a higher-order turbulence closure based on an assumed PDF. Comparisons of single-column versions of CAM5 and CAM-CLUBB are provided in this paper for several boundary layer regimes. As compared to large eddy simulations (LESs), CAM-CLUBB and CAM5 simulate marine stratocumulus regimes with similar accuracy. For shallow convective regimes, CAM-CLUBB improves the representation of cloud cover and liquid water path (LWP). In addition, for shallow convection CAM-CLUBB offers better fidelity for subgrid-scale vertical velocity, which is an important input for aerosol activation. Finally, CAM-CLUBB results are more robust to changes in vertical and temporal resolution when compared to CAM5.
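CLUBB closes the turbulence equations by assuming a joint (binormal) PDF of vertical velocity, heat and moisture and diagnosing cloud properties from it. As a much-reduced illustration of the assumed-PDF idea, the sketch below uses a single Gaussian in total water rather than CLUBB's multivariate binormal, with hypothetical numbers: cloud fraction is simply the probability that total water exceeds saturation.

```python
import math

def cloud_fraction_gaussian(qt_mean, qt_sigma, q_sat):
    """Cloud fraction under an assumed Gaussian PDF of total water q_t:
    the probability that q_t exceeds the saturation value q_sat.
    A single-Gaussian toy version of the assumed-PDF approach, not CLUBB."""
    z = (q_sat - qt_mean) / (math.sqrt(2.0) * qt_sigma)
    return 0.5 * math.erfc(z)

# Hypothetical stratocumulus-like numbers (g/kg)
print(cloud_fraction_gaussian(qt_mean=8.0, qt_sigma=0.5, q_sat=7.8))  # ~0.66
```

CLUBB performs the analogous integrals over its full multivariate PDF, which is also what yields the subgrid-scale vertical-velocity statistics used for aerosol activation.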

Fig. 1. Mean-annual zonal averaged atmospheric temperature profiles. (a) Observed (NCEP2, 1981-2005) December, January, February (DJF), (b) Observed June, July, and August (JJA), (c) GENMOM DJF, (d) GENMOM JJA.
Fig. 2. Winter (DJF) and summer (JJA) zonally averaged eastward wind velocity. (a) Observed (NCEP2, 1981-2005) DJF, (b) Observed JJA, (c) GENMOM DJF, (d) GENMOM JJA.
Fig. 7. Observed and modeled seasonal cycle amplitude of surface temperature and anomalies. The amplitude of the seasonal cycle is calculated as the standard deviation of the 12 climatological months.
Fig. 8. Observed and simulated mean annual total precipitation from GENMOM and 8 AOGCMs included in IPCC AR4. All IPCC AR4 models are averaged over the last 30 years (1970-1999) of the Climate of the 20th Century experiment. All data are bilinearly interpolated to a 5° × 5° grid.
Fig. 12. Mean-annual zonally averaged ocean salinity profile for both observed (WOA05) and simulated (GENMOM) for the Atlantic Ocean (top), Indian and Pacific Oceans (middle), and anomalies between observed and simulated (bottom).
Evaluation of a present-day climate simulation with a new atmosphere-ocean model GENMOM

October 2010

·

270 Reads

We present a new, non-flux-corrected AOGCM, GENMOM, that combines the GENESIS version 3 atmospheric GCM (Global ENvironmental and Ecological Simulation of Interactive Systems) and MOM2 (Modular Ocean Model version 2). We evaluate GENMOM by comparison with reanalysis products (e.g., NCEP2) and eight models used in the IPCC AR4 assessment. The overall present-day climate simulated by GENMOM is on par with the models used in IPCC AR4. The model produces a global temperature bias of 0.6 °C. Atmospheric features such as the jet stream structure and major semi-permanent sea level pressure centers are well simulated, as is the mean planetary-scale wind structure that is needed to produce the correct position of storm tracks. The gradients and spatial distributions of annual surface temperature compare well both to observations and to the IPCC AR4 models. A warm bias of ~2 °C is simulated by MOM between 200 and 1000 m in the ocean. Most ocean surface currents are reproduced except where they are not well resolved at the T31 resolution. The two main weaknesses in the simulation are the development of a split ITCZ and a weaker-than-observed overturning circulation.

On the parallelization of atmospheric inversions of CO2 surface fluxes within a variational framework

June 2013

·

34 Reads

The variational formulation of Bayes' theorem allows CO2 sources and sinks to be inferred from atmospheric concentrations at much higher space-time resolution than the ensemble or analytical approaches. However, it usually exhibits limited scalable parallelism. This limitation hinders global atmospheric inversions operated on decadal time scales, and regional ones with kilometric spatial scales, because of the computational cost of the underlying transport model that has to be run at each iteration of the variational minimization. Here, we introduce a physical parallelization (PP) of variational atmospheric inversions. In the PP, the inversion still manages a single physically and statistically consistent window, but the transport model is run on parallel, overlapping sub-segments in order to massively reduce the wall-clock time of the inversion. For global inversions, a simplification of transport modelling is described to connect the outputs of all segments. We demonstrate the performance of the approach on a global inversion for CO2 with a 32 yr inversion window (1979-2010), with atmospheric measurements from 81 sites of the NOAA global cooperative air sampling network. In this case, we show that the duration of the inversion is reduced seven-fold (from months to days), while still processing the three decades consistently and with improved numerical stability.
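The essence of the PP is easy to sketch: cut the inversion window into sub-segments that overlap by a spin-up period, run the transport model on all of them concurrently, and discard the overlaps when reassembling the window. The segment length, overlap and executor below are illustrative choices, not the paper's configuration (where each segment would typically be its own MPI job).

```python
from concurrent.futures import ProcessPoolExecutor

def segments(start_year, end_year, seg_len=4, overlap=1):
    """Cut a long inversion window into overlapping sub-segments. Each segment
    is spun up over `overlap` extra years so that, after discarding the
    spin-up, the concatenated segments approximate one continuous run."""
    segs, y = [], start_year
    while y < end_year:
        segs.append((max(start_year, y - overlap), min(end_year, y + seg_len)))
        y += seg_len
    return segs

def run_transport(seg):
    y0, y1 = seg
    # Placeholder for one transport-model run over [y0, y1].
    return f"transport {y0}-{y1} done"

if __name__ == "__main__":
    segs = segments(1979, 2011)
    with ProcessPoolExecutor() as pool:          # all segments execute concurrently
        for msg in pool.map(run_transport, segs):
            print(msg)
```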

Formulation of the Dutch Atmospheric Large-Eddy Simulation (DALES) and overview of its applications

September 2010

·

431 Reads

The current version of the Dutch Atmospheric Large-Eddy Simulation (DALES) is presented. DALES is a large-eddy simulation code designed for studies of the physics of the atmospheric boundary layer, including convective and stable boundary layers as well as cloudy boundary layers. In addition, DALES can be used for studies of more specific cases, such as flow over sloping or heterogeneous terrain, and dispersion of inert and chemically active species. This paper contains an extensive description of the physical and numerical formulation of the code, and gives an overview of its applications and accomplishments in recent years.

Automated continuous verification for numerical simulation

September 2010

·

125 Reads

Verification and validation are crucially important for the final users of a computational model: code is useless if its results cannot be relied upon. Typically, these processes are treated as discrete events, performed once and for all after development is complete. However, this does not reflect the reality that many geoscientific codes undergo continuous development of the mathematical model, discretisation, and software implementation. Therefore, we advocate that in such cases verification and validation must be continuous and happen in parallel with development; the desirability of their automation follows immediately. This paper discusses a framework for automated continuous verification and validation of wide applicability to any kind of numerical simulation. It also documents a range of rigorous test cases for use in computational and geophysical fluid dynamics.
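A concrete building block of such a framework is an order-of-convergence test that runs automatically on every commit: solve at two resolutions against an exact solution and assert the observed order. The sketch below uses a deliberately tiny stand-in (forward Euler on y' = -y) for a real model kernel; a CI harness such as pytest would run the test function and fail the build if the assertion fails.

```python
import math

def integrate(dt, t_end=1.0):
    """Forward-Euler stand-in for a model kernel, solving y' = -y, y(0) = 1."""
    y = 1.0
    for _ in range(round(t_end / dt)):
        y += dt * (-y)
    return y

def observed_order(dt):
    """Observed convergence order from errors at resolutions dt and dt/2."""
    exact = math.exp(-1.0)
    e1 = abs(integrate(dt) - exact)
    e2 = abs(integrate(dt / 2) - exact)
    return math.log2(e1 / e2)

def test_convergence():
    p = observed_order(1e-3)
    assert abs(p - 1.0) < 0.05, f"expected first order, observed {p:.3f}"

test_convergence()
print("convergence test passed")
```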

iGen: A program for the automated generation of models and parameterisations

September 2011

·

50 Reads

Complex physical systems can often be simulated using very high-resolution models, but this is not always practical because of computational restrictions. In this case the model must be simplified or parameterised, which is a notoriously difficult process that often requires the introduction of "model assumptions" that are hard or impossible to justify. Here we introduce a new approach to parameterising models. The approach makes use of a newly developed computer program, which we call iGen, that analyses the source code of a high-resolution model and formally derives a much faster parameterised model that closely approximates the original, reporting bounds on the error introduced by any approximations. These error bounds can be used to formally justify use of the parameterised model in subsequent numerical experiments. Using increasingly complex physical systems as examples, we illustrate that iGen produces parameterisations that typically run orders of magnitude faster than the underlying high-resolution models from which they are derived, and we show that iGen has the potential to become an important tool in model development.

ESCIMO.spread - A spreadsheet-based point snow surface energy balance model to calculate hourly snow water equivalent and melt rates for historical and changing climate conditions

May 2010

·

63 Reads

This paper describes the spreadsheet-based point energy balance model ESCIMO.spread, which simulates the energy and mass balance as well as melt rates of a snow surface. The model makes use of hourly recordings of temperature, precipitation, wind speed, relative humidity, and global and longwave radiation. The effect of potential climate change on the seasonal evolution of the snow cover can be estimated by modifying the time series of observed temperature and precipitation by means of adjustable parameters. Model output is graphically visualized in hourly and daily diagrams. The results compare well with weekly measured snow water equivalent (SWE). The model is easily portable and adjustable, and runs particularly fast: an hourly calculation of one winter season completes almost instantaneously on a standard computer. ESCIMO.spread can be obtained from the authors on request (contact: ulrich.strasser@uni-graz.at).
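The heart of such a point snow model is simple bookkeeping: any positive net surface energy is converted to melt through the latent heat of fusion. A minimal sketch with standard constants and placeholder hourly forcing values; it is not ESCIMO.spread's actual formulation, which closes the full surface energy balance from the listed meteorological inputs.

```python
RHO_W = 1000.0   # density of water (kg m^-3)
L_F = 3.34e5     # latent heat of fusion (J kg^-1)

def step_swe(swe_m, q_net_wm2, precip_snow_m, dt_s=3600.0):
    """Advance snow water equivalent (m w.e.) over one time step.
    Positive net surface energy flux q_net (W m^-2) melts snow;
    melt depth = energy / (rho_w * L_f)."""
    melt = max(0.0, q_net_wm2) * dt_s / (RHO_W * L_F)
    return max(0.0, swe_m + precip_snow_m - melt)

swe = 0.30                                            # 30 cm w.e. snowpack
swe = step_swe(swe, q_net_wm2=150.0, precip_snow_m=0.0)
print(f"SWE after one sunny hour: {swe:.4f} m w.e.")  # ~0.2984
```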

Fig. 1. Diagram of a layered feedforward neural network. Solar wind conditions are used as input for predicting L* values. Every node is connected to every node in the previous layer, but not all connections are drawn, for simplicity. Also, not all possible input parameters for the artificial neural network are shown; specifically, our drift shell model includes additional values for Kp, solar wind density, velocity, and magnetic coordinates.
Fig. 3. Diagram of finding the last closed drift shell by using a leap-frog method along the radial direction at midnight local time. The dashed line represents the last closed drift shell with L  
Fig. 4. Set of neural networks that calculate L* as a function of pitch angle. Each set consists of several neural networks for a range of pitch angles. One set calculates L* and the other computes the last closed drift shell L*max.
Fig. 7. Test case of calculating L* with the Tsyganenko model TSK03 (blue) and with the neural network (red) for satellite LANL-01A. The standard deviation error is ΔL* ≈ 0.04, or less than 1%, for 90° pitch angle.
LANL* V1.0: A radiation belt drift shell model suitable for real-time and reanalysis applications

July 2009

·

332 Reads

We describe here a new method for calculating the magnetic drift invariant, L*, that is used for modeling radiation belt dynamics and for other space weather applications. L* (pronounced L-star) is directly proportional to the integral of the magnetic flux contained within the surface defined by a charged particle moving in the Earth's geomagnetic field. Under adiabatic changes to the geomagnetic field L* is a conserved quantity, while under quasi-adiabatic fluctuations diffusion (with respect to a particle's L*) is the primary term in equations of particle dynamics. In particular, the equations of motion for the very energetic particles that populate the Earth's radiation belts are most commonly expressed by diffusion in three dimensions: L*, energy (or momentum), and pitch angle (the angle between the velocity and magnetic field vectors). Expressing dynamics in these coordinates reduces the dimensionality of the problem by referencing the particle distribution functions to values at the magnetic equatorial point of a magnetic "drift shell" (or L-shell) irrespective of local time (or longitude). While the use of L* aids in simplifying the equations of motion, practical applications such as space weather forecasting using realistic geomagnetic fields require sophisticated magnetic field models that, in turn, require computationally intensive numerical integration. Typically a single L* calculation can require on the order of 10^5 calls to a magnetic field model, and each point in the simulation domain and each calculated pitch angle has a different value of L*. We describe here the development and validation of a neural network surrogate model for calculating L* in sophisticated geomagnetic field models with a high degree of fidelity at computational speeds that are millions of times faster than direct numerical field line mapping and integration. This new surrogate model has applications to real-time radiation belt forecasting, analysis of data sets involving tens of satellite-years of observations, and other problems in space weather.
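Prototyping such a surrogate is straightforward with any machine-learning toolkit: train a small feedforward network to map driving parameters to the expensively computed L*. The sketch below uses scikit-learn on synthetic data; the feature set and the target function are stand-ins for the actual Kp/solar-wind inputs and the field-line integrations, not the LANL* training set.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
# Synthetic stand-in: 3 inputs (e.g. Kp, solar wind speed, local time) -> "L*"
X = rng.uniform([0.0, 300.0, 0.0], [9.0, 800.0, 24.0], size=(5000, 3))
y = 6.0 - 0.25 * X[:, 0] + 0.001 * X[:, 1] + 0.05 * np.sin(2 * np.pi * X[:, 2] / 24)

# Small MLP surrogate; once trained, evaluation is vastly cheaper than
# numerically integrating along field lines for every point and pitch angle.
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32),
                                   max_iter=2000, random_state=0))
model.fit(X[:4000], y[:4000])

pred = model.predict(X[4000:])
print(f"std error on held-out data: {np.std(pred - y[4000:]):.3f}")
```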

Simulated pre-industrial climate in Bergen Climate Model (version 2): Model description and large-scale circulation features

November 2009

·

192 Reads

The Bergen Climate Model (BCM) is a fully coupled atmosphere-ocean-sea-ice model that provides state-of-the-art computer simulations of the Earth's past, present, and future climate. Here, a pre-industrial multi-century simulation with an updated version of BCM is described and compared to observational data. The model is run without any form of flux adjustment and is stable for several centuries. The simulated climate reproduces the general large-scale circulation in the atmosphere reasonably well, except for a positive bias in the high-latitude sea level pressure distribution. Also, introducing an updated turbulence scheme in the atmosphere model has eliminated a persistent cold bias. For the ocean part, the model drifts in sea surface temperatures and salinities are considerably reduced compared to earlier versions of BCM; improved conservation properties in the ocean model have contributed to this. Furthermore, by choosing a reference pressure at 2000 m and including thermobaric effects in the ocean model, a more realistic meridional overturning circulation is simulated in the Atlantic Ocean. The simulated sea-ice extent in the Northern Hemisphere is in general agreement with observational data, except in summer, when the extent is somewhat underestimated. In the Southern Hemisphere, large negative biases are found in the simulated sea-ice extent. This is partly related to problems with the mixed layer parametrization, causing the mixed layer in the Southern Ocean to be too deep, which in turn makes it hard to maintain a realistic sea-ice cover there. However, despite some problematic issues, the pre-industrial control simulation presented here should still be appropriate for climate change studies requiring multi-century simulations.

A model for global biomass burning in preindustrial time: LPJ-LMfire (v1.0)

May 2013

·

571 Reads

Fire is the primary disturbance factor in many terrestrial ecosystems. Wildfire alters vegetation structure and composition, affects carbon storage and biogeochemical cycling, and results in the release of climatically relevant trace gases including CO2, CO, CH4, NOx, and aerosols. One way of assessing the impacts of global wildfire on centennial to multi-millennial timescales is to use process-based fire models linked to dynamic global vegetation models (DGVMs). Here we present an update to the LPJ-DGVM and a new fire module based on SPITFIRE that includes several improvements to the way in which fire occurrence, behaviour, and the effects of fire on vegetation are simulated. The new LPJ-LMfire model includes explicit calculation of natural ignitions, the representation of multi-day burning and coalescence of fires, and the calculation of rates of spread in different vegetation types. We describe a new representation of anthropogenic biomass burning under preindustrial conditions that distinguishes the different relationships between humans and fire among hunter-gatherers, pastoralists, and farmers. We evaluate our model simulations against remote-sensing-based estimates of burned area at regional and global scale. While wildfire in much of the modern world is largely influenced by anthropogenic suppression and ignitions, in those parts of the world where natural fire is still the dominant process (e.g. in remote areas of the boreal forest and subarctic), our results demonstrate a significant improvement in simulated burned area over the original SPITFIRE. The new fire model we present here is particularly suited for the investigation of climate-human-fire relationships on multi-millennial timescales prior to the Industrial Revolution.

Tagged ozone mechanism for MOZART-4, CAM-chem and other chemical transport models

December 2012

·

170 Reads

A procedure for tagging ozone produced from NO sources through updates to an existing chemical mechanism is described, and results from its implementation in the Model for Ozone and Related chemical Tracers (MOZART-4), a global chemical transport model, are presented. Artificial tracers are added to the mechanism, thus not affecting the standard chemistry. The results are linear in the troposphere, i.e., the sum of ozone from individual tagged sources equals the ozone from all sources to within 3% in zonal mean monthly averages. In addition, the tagged ozone is shown to equal the standard ozone when all tropospheric sources are tagged and stratospheric input is turned off. The stratospheric ozone contribution to the troposphere determined from the difference between total ozone and ozone from all tagged sources is significantly less than estimates using a traditional stratospheric ozone tracer (8 vs. 20 ppbv at the surface). The commonly used technique of perturbing NO emissions by 20% in a region to determine its ozone contribution is compared to the tagging technique, showing that the tagged ozone is 2-4 times the ozone contribution deduced from perturbing emissions. The ozone tagging described here is useful for identifying source contributions based on NO emissions in a given state of the atmosphere, such as for quantifying the ozone budget.
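The bookkeeping behind tagging can be illustrated with a toy mechanism expander: for every reaction involving members of a tagged family, a parallel labelled reaction is generated, leaving the standard chemistry untouched. The string-level sketch below shows only this idea; a real implementation additionally assigns rate constants, shares non-family co-reactants with the standard mechanism, and thereby propagates the source label from NO through to ozone.

```python
FAMILY = {"NO", "NO2", "O3"}   # toy tagged family that carries the source label

def tag_mechanism(reactions, sources):
    """Duplicate each reaction containing family species once per emission
    source, renaming family members to labelled analogues (e.g. NO -> NO_asia).
    The originals are kept, so the standard chemistry is unaffected."""
    tagged = list(reactions)
    for src in sources:
        for rxn in reactions:
            toks = rxn.split()
            if FAMILY & set(toks):   # token-wise match avoids NO matching NO2
                tagged.append(" ".join(
                    f"{t}_{src}" if t in FAMILY else t for t in toks))
    return tagged

mech = ["NO + O3 -> NO2 + O2", "NO2 + hv -> NO + O3P"]
for rxn in tag_mechanism(mech, ["asia", "namer"]):
    print(rxn)
```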

Carbon-nitrogen feedbacks in the UVic ESCM

September 2012

·

233 Reads

A representation of the terrestrial nitrogen cycle is introduced into the UVic Earth System Climate Model (UVic ESCM). The UVic ESCM now contains five terrestrial carbon pools and seven terrestrial nitrogen pools: soil, litter, leaves, stem and roots for both elements and ammonium and nitrate in the soil for nitrogen. Nitrogen cycles through plant tissue, litter, soil and the mineral pools before being taken up again by the plant. Biological N2 fixation and nitrogen deposition represent external inputs to the plant-soil system while losses occur via leaching. Simulated carbon and nitrogen pools and fluxes are in the range of other models and observations. Gross primary production (GPP) for the 1990s in the CN-coupled version is 129.6 Pg C a-1 and net C uptake is 0.83 Pg C a-1, whereas the C-only version results in a GPP of 133.1 Pg C a-1 and a net C uptake of 1.57 Pg C a-1. At the end of a transient experiment for the years 1800-1999, where radiative forcing is held constant but CO2 fertilisation for vegetation is permitted to occur, the CN-coupled version shows an enhanced net C uptake of 1.05 Pg C a-1, whereas in the experiment where CO2 is held constant and temperature is transient the land turns into a C source of 0.60 Pg C a-1 by the 1990s. The arithmetic sum of the temperature and CO2 effects is 0.45 Pg C a-1, 0.38 Pg C a-1 lower than seen in the fully forced model, suggesting a strong nonlinearity in the CN-coupled version. Anthropogenic N deposition has a positive effect on Net Ecosystem Production of 0.35 Pg C a-1. Overall, the UVic CN-coupled version shows similar characteristics to other CN-coupled Earth System Models, as measured by net C balance and sensitivity to changes in climate, CO2 and temperature.

Development of a system emulating the global carbon cycle in Earth system models

August 2010

·

59 Reads

By combining the strong points of general circulation models (GCMs), which contain detailed and complex processes, and Earth system models of intermediate complexity (EMICs), which are quick and capable of large ensembles, we have developed a loosely coupled model (LCM) which can represent the outputs of a GCM-based Earth system model using much smaller computational resources. We address the problem of relatively poor representation of precipitation within our EMIC, which prevents us from directly coupling it to a vegetation model, by coupling it to a precomputed transient simulation using a full GCM. The LCM consists of three components: an EMIC (MIROC-lite), which consists of a 2-D energy balance atmosphere coupled to a low-resolution 3-D GCM ocean including an ocean carbon cycle; a state-of-the-art vegetation model (Sim-CYCLE); and a database of daily temperature, precipitation, and other necessary climatic fields to drive Sim-CYCLE, taken from a precomputed transient simulation with a state-of-the-art AOGCM. The transient warming of the climate system is calculated by MIROC-lite, with the global temperature anomaly used to select the most appropriate annual climatic field from the precomputed AOGCM simulation which, in this case, is a 1% per annum increasing CO2 concentration scenario. By adjusting the climate sensitivity of MIROC-lite, the transient warming of the LCM could be adjusted to closely follow the low-sensitivity (4.0 K) version of MIROC3.2. By tuning the physical and biogeochemical parameters it was possible to reasonably reproduce the bulk physical and biogeochemical properties of previously published CO2 stabilisation scenarios for that model. As an example of an application of the LCM, the behavior of the high-sensitivity version of MIROC3.2 (with 6.3 K climate sensitivity) is also demonstrated. Given the highly tunable nature of the model, we believe that the LCM should be a very useful tool for studying uncertainty in global climate change.
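The coupling trick, using the EMIC's global-mean temperature anomaly to index into the precomputed AOGCM archive, reduces to a nearest-neighbour lookup. Array names and values below are placeholders for illustration, not the MIROC configuration.

```python
import numpy as np

# Global-mean temperature anomaly (K) of each archived year from the
# precomputed 1% per annum CO2 AOGCM run (hypothetical values).
archive_anomaly = np.linspace(0.0, 4.0, 140)   # 140 archived years

def select_climate_year(emic_anomaly):
    """Return the index of the archived year whose global anomaly best matches
    the EMIC's; that year's daily fields then drive the vegetation model."""
    return int(np.argmin(np.abs(archive_anomaly - emic_anomaly)))

print(select_climate_year(1.37))   # index of the closest archived year (here 48)
```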

An isopycnic ocean carbon cycle model

February 2010

·

102 Reads

The carbon cycle is a major forcing component in the global climate system. Modelling studies aiming to explain recent and past climatic changes and to project future ones thus increasingly include the interaction between the physical and biogeochemical systems. Their ocean components are generally z-coordinate models that are conceptually easy to use but that employ a vertical coordinate that is alien to the real ocean structure. Here we present first results from a newly developed isopycnic carbon cycle model and demonstrate the viability of using an isopycnic physical component for this purpose. As expected, the model represents interior ocean transport of biogeochemical tracers well and produces realistic tracer distributions. Difficulties in employing a purely isopycnic coordinate lie mainly in the treatment of the surface boundary layer which is often represented by a bulk mixed layer. The most significant adjustments of the biogeochemical code for use with an isopycnic coordinate are in the representation of upper ocean biological production. We present a series of sensitivity studies exploring the effect of changes in biogeochemical and physical processes on export production and nutrient distribution. Apart from giving us pointers for further model development, they highlight the importance of preformed nutrient distributions in the Southern Ocean for global nutrient distributions. Use of a prognostic slab atmosphere allows us to assess the effect of the changes in export production on global ocean carbon uptake and atmospheric CO2 levels. Sensitivity studies show that iron limitation for biological particle production, the treatment of light penetration for biological production, and the role of diapycnal mixing result in significant changes of modelled air-sea fluxes and nutrient distributions.

Global high-resolution simulations of CO2 and CH4 using a NIES transport model to produce a priori concentrations for use in satellite data retrievals

August 2012

·

97 Reads

The Greenhouse gases Observing SATellite (GOSAT) measures column-averaged dry air mole fractions of carbon dioxide and methane (XCO2 and XCH4, respectively). Since the launch of GOSAT, model-simulated three-dimensional concentrations from a National Institute for Environmental Studies offline tracer Transport Model (NIES TM) have been used as a priori concentration data for retrieving XCO2 and XCH4 from GOSAT short-wavelength infrared spectra at NIES. Though a priori concentrations for retrievals are optional, more reliable concentrations are desirable. In this paper we describe the newly developed NIES TM that has been adapted to provide global and near real-time concentrations of CO2 and CH4 using a high-resolution meteorological dataset, the Grid Point Value (GPV) prepared by the Japan Meteorological Agency. The spatial resolution of the NIES TM is set to 0.5° × 0.5° in the horizontal in order to utilize GPV data, which have a resolution of 0.5° × 0.5°, 21 pressure levels, and a time interval of 3 h. GPV data are provided to the GOSAT processing system with a delay of several hours, and the near real-time model simulation produces a priori concentrations driven by diurnally varying meteorology. A priori variance-covariance matrices of CO2 and CH4 are also derived from the simulation outputs and observation-based reference data for each month of the year at a resolution of 0.5° × 0.5° and 21 pressure levels. Model performance is assessed by comparing simulation results with the GLOBALVIEW dataset and other observational data. The overall root-mean-square differences between model predictions and GLOBALVIEW analysis are estimated to be 2.28 ppm and 12.68 ppb for CO2 and CH4, respectively, and the seasonal correlation coefficients are 0.86 for CO2 and 0.61 for CH4. The model showed good performance particularly at oceanic and free tropospheric sites. The model also performs well in reproducing both the observed synoptic variations at some sites and stratospheric profiles over Japan. These results give us confidence that the performance of our GPV-forced high-resolution NIES TM is adequate for use in satellite retrievals.
