Geoscientific Model Development

Published by European Geosciences Union
Online ISSN: 1991-9603
Article
We describe the physical model, numerical algorithms, and software structure of WRF-Fire. WRF-Fire consists of a fire-spread model, implemented by the level-set method, coupled with the Weather Research and Forecasting (WRF) model. In every time step, the fire model takes the surface wind as input, which drives the fire, and outputs the heat flux from the fire into the atmosphere, which in turn influences the atmosphere. The level-set method allows submesh representation of the burning region and flexible implementation of various ignition modes. WRF-Fire is distributed as part of WRF and uses the WRF parallel infrastructure for parallel computing.
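The level-set propagation of the fire perimeter can be sketched in a few lines. This is a minimal stand-in, not WRF-Fire's actual numerics: the grid size, spread rate, and ignition radius below are illustrative, and the real model couples the spread rate to wind and fuel properties.

```python
import numpy as np

def level_set_step(phi, rate, dx, dt):
    """Advance the fire-perimeter level-set field phi by one time step.

    The front (zero level set of phi) moves in its normal direction with
    spread rate `rate`, using a first-order Godunov upwind gradient norm
    (valid for an outward-moving front, rate >= 0)."""
    dmx = (phi - np.roll(phi, 1, axis=0)) / dx   # backward difference, x
    dpx = (np.roll(phi, -1, axis=0) - phi) / dx  # forward difference, x
    dmy = (phi - np.roll(phi, 1, axis=1)) / dx   # backward difference, y
    dpy = (np.roll(phi, -1, axis=1) - phi) / dx  # forward difference, y
    grad = np.sqrt(np.maximum(dmx, 0.0) ** 2 + np.minimum(dpx, 0.0) ** 2 +
                   np.maximum(dmy, 0.0) ** 2 + np.minimum(dpy, 0.0) ** 2)
    return phi - dt * rate * grad

# Point ignition: phi is the signed distance to a small circle (phi < 0 burns)
n, dx = 64, 10.0                       # 64 x 64 cells, 10 m spacing
x = np.arange(n) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt((X - 320.0) ** 2 + (Y - 320.0) ** 2) - 30.0  # 30 m radius

for _ in range(50):                    # 100 s at 0.5 m/s spread rate
    phi = level_set_step(phi, rate=0.5, dx=dx, dt=2.0)

burning_cells = int((phi < 0.0).sum())
```

Because the burning region is defined by the sign of phi rather than by flagged grid cells, the perimeter can cut through cells, which is the submesh representation mentioned above.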
 
Example results of Monte-Carlo simulations using a simple chemistry mechanism (tropospheric conditions, gas-phase chemistry without any non-methane hydrocarbons). The first figure is a histogram showing the distribution of steady-state O3 mixing ratios (nmol mol^-1) in 1000 model runs. The other figures are scatter plots of O3 against individual gas-phase photolysis rate coefficients (in s^-1). It can be seen that O3 decreases with increasing values of J1001a (O3 + hν → O(1D)). In contrast, its dependence on the photolysis rate J1001b (O3 + hν → O(3P)) is statistically insignificant.
Article
We present version 3.0 of the atmospheric chemistry box model CAABA/MECCA. In addition to a complete update of the rate coefficients to the most recent recommendations, a number of new features have been added: chemistry in multiple aerosol size bins; automatic multiple simulations reaching steady-state conditions; Monte-Carlo simulations with randomly varied rate coefficients within their experimental uncertainties; calculations along Lagrangian trajectories; mercury chemistry; more detailed isoprene chemistry; tagging of isotopically labeled species. Further changes have been implemented to make the code more user-friendly and to facilitate the analysis of the model results. Like earlier versions, CAABA/MECCA-3.0 is a community model published under the GNU General Public License.
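The Monte-Carlo feature, randomly varying rate coefficients within their experimental uncertainties, can be illustrated with a toy steady-state system. The rate values, uncertainty factors, and lognormal sampling convention below are illustrative assumptions, not the CAABA/MECCA implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_runs = 1000

# Illustrative nominal values: a production term P and two first-order
# loss rate coefficients, each with a multiplicative uncertainty factor f.
P = 2.0e6                   # molecules cm^-3 s^-1
j_nom, f_j = 3.0e-5, 1.2    # photolysis rate coefficient, s^-1
k_nom, f_k = 1.0e-5, 1.5    # other first-order loss, s^-1

# Lognormal sampling within the uncertainty factor: k = k_nom * f**z,
# z ~ N(0, 1), so one standard deviation spans [k_nom/f, k_nom*f].
j = j_nom * f_j ** rng.standard_normal(n_runs)
k = k_nom * f_k ** rng.standard_normal(n_runs)

# Steady state of dC/dt = P - (j + k) C  ->  C = P / (j + k), per run
o3 = P / (j + k)
```

The resulting `o3` array is the kind of ensemble from which the histogram and the scatter plots against individual rate coefficients in the figure above are built.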
 
Article
We summarise results from a workshop on "Model Benchmarking and Quality Assurance" of the EU Network of Excellence ACCENT, including results from other activities (e.g. COST Action 732) and publications. A formalised evaluation protocol is presented, i.e. a generic formalism describing how to perform a model evaluation. It comprises eight steps, illustrated with examples from global model applications. The first and most important step concerns the purpose of the model application, i.e. the underlying scientific or political question being addressed. We give examples to demonstrate that there is no model evaluation per se, i.e. without a focused purpose: model evaluation tests whether a model is fit for its purpose. The following steps are deduced from the purpose and include model requirements, input data, key processes and quantities, benchmark data, quality indicators, sensitivities, as well as benchmarking and grading. We define "benchmarking" as the process of comparing the model output against either observational data or high-fidelity model data, i.e. benchmark data. Special focus is given to uncertainties, e.g. in observational data, which have the potential to lead to wrong conclusions in the model evaluation if not considered carefully.
 
Schematic representation of the aerosol distribution in MADE-in. BC indicates black carbon, POM particulate organic matter, SS sea salt and DU dust. The shaded mode is the coarse mode, which does not interact with the sub-micrometer modes. The black line depicts the fine modes without BC and dust, the red line the modes for externally mixed BC and dust particles and the blue line the modes for internally mixed BC and dust. 
Article
Black carbon (BC) and mineral dust are among the most abundant insoluble aerosol components in the atmosphere. When released, most BC and dust particles are externally mixed with other aerosol species. Through coagulation with particles containing soluble material and condensation of gases, the externally mixed particles may acquire a liquid coating and be transferred into an internal mixture. The mixing state of BC and dust aerosol particles influences their radiative and hygroscopic properties, as well as their ability to form ice crystals. We introduce the new aerosol microphysics submodel MADE-in, implemented within the ECHAM/MESSy Atmospheric Chemistry global model (EMAC). MADE-in is able to track mass and number concentrations of BC and dust particles in their different mixing states, as well as particles free of BC and dust. MADE-in describes these three classes of particles through a superposition of seven log-normally distributed modes, and predicts the evolution of their size distribution and chemical composition. Six out of the seven modes are mutually interacting, allowing for the transfer of mass and number among them. Separate modes for the different mixing states of BC and dust particles in EMAC/MADE-in allow for explicit simulations of the relevant aging processes, i.e. condensation, coagulation and cloud processing. EMAC/MADE-in has been evaluated with surface and airborne measurements and mostly performs well both in the planetary boundary layer and in the upper troposphere and lowermost stratosphere.
 
Article
Three detailed meteorological case studies are conducted with the global and regional atmospheric chemistry model system ECHAM5/MESSy(→COSMO/MESSy)n, shortly named MECO(n). The aim of this article is to assess the general performance of the on-line coupling of the regional model COSMO to the global model ECHAM5. The cases are characterised by intense weather systems in Central Europe: a cold front passage in March 2010, a convective frontal event in July 2007, and the high-impact winter storm "Kyrill" in January 2007. Simulations are performed with the new on-line-coupled model system and compared to classical, off-line COSMO hindcast simulations driven by ECMWF analyses. Precipitation observations from rain gauges and ECMWF analysis fields are used as reference, and both qualitative and quantitative measures are used to characterise the quality of the various simulations. It is shown that, not surprisingly, simulations with a shorter lead time are generally more accurate. Irrespective of lead time, the accuracy of the on-line and off-line COSMO simulations is comparable for the three cases. This result indicates that the new global and regional model system MECO(n) is able to simulate key mid-latitude weather systems, including cyclones, fronts, and convective precipitation, as accurately as present-day state-of-the-art regional weather prediction models in standard off-line configuration. Therefore, MECO(n) will be applied to simulate atmospheric chemistry exploring the model's full capabilities during meteorologically challenging conditions.
 
Article
A new, highly flexible model system for the seamless dynamical downscaling of meteorological and chemical processes from the global to the meso-γ scale is presented. A global model and a cascade of an arbitrary number of limited-area model instances run concurrently in the same parallel environment, in which the coarser-grained instances provide the boundary data for the finer-grained instances. Thus, disk-space-intensive and time-consuming intermediate and pre-processing steps are entirely avoided, and the time interpolation errors of common off-line nesting approaches are minimised. More specifically, the regional model COSMO of the German Weather Service (DWD) is nested on-line into the atmospheric general circulation model ECHAM5 within the Modular Earth Submodel System (MESSy) framework. ECHAM5 and COSMO have previously been equipped with the MESSy infrastructure, implying that the same process formulations (MESSy submodels) are available for both models. This guarantees the highest achievable degree of consistency, both between the meteorological and chemical conditions at the domain boundaries of the nested limited-area model, and between the process formulations on all scales. The on-line nesting of the different models is established by a client-server approach with the newly developed Multi-Model-Driver (MMD), an additional component of the MESSy infrastructure. With MMD, an arbitrary number of model instances can be run concurrently within the same message passing interface (MPI) environment. The respective coarser model (either global or regional) acts as the server for the nested finer (regional) client model, i.e. it provides the data required to calculate the initial and boundary fields of the client model. On-line nesting means that the coupled (client-server) models exchange their data via the computer memory, in contrast to the data exchange via files on disk in common off-line nesting approaches.
MMD consists of a library (Fortran95 and some parts in C) which is based on the MPI standard and two new MESSy submodels, MMDSERV and MMDCLNT (both Fortran95) for the server and client models, respectively. MMDCLNT contains a further sub-submodel, INT2COSMO, for the interpolation of the coarse grid data provided by the server models (either ECHAM5/MESSy or COSMO/MESSy) to the grid of the respective client model (COSMO/MESSy). INT2COSMO is based on the off-line pre-processing tool INT2LM provided by the DWD.
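The coupling order of the client-server approach, coarse-model step, in-memory exchange, interpolation, fine-model sub-stepping, can be sketched as follows. The generator-based stand-ins for the server model, INT2COSMO, and the client model are purely illustrative; the real system exchanges data between MPI tasks.

```python
def coarse_model(n_steps):
    """Stand-in for the server (driving) model: yields one boundary-data
    field per coupling interval (values are arbitrary placeholders)."""
    for step in range(n_steps):
        yield [10.0 + step, 12.0 + step, 14.0 + step]

def run_nested(n_couple, substeps=3):
    """Stand-in for the client: each coupling interval it fetches coarse
    data through memory (no files on disk), 'interpolates' it (trivial
    midpoint averaging in place of INT2COSMO), then runs several fine
    time steps using the result as lateral boundary data."""
    server = coarse_model(n_couple)
    history = []
    for _ in range(n_couple):
        boundary = next(server)
        fine = [(a + b) / 2.0 for a, b in zip(boundary, boundary[1:])]
        for _ in range(substeps):          # fine-model sub-stepping
            history.append(fine)
    return history

history = run_nested(4)
```

The point of the sketch is the control flow: the client never reads intermediate files, it pulls each boundary field directly from the running server, which is what removes the pre-processing steps and time interpolation errors of off-line nesting.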
 
Information on chosen reaction rates for the two chemical reaction systems. 
Equilibrium concentrations of Z (red line) and Z̃ (blue line) [ppbv] as a function of the concentration of Y, for a constant concentration of X = 20 ppbv (solid) and a constant ratio between the concentrations of species X and Y of 1:10.
Net production rates [ppbv/s] for a constant concentration of Z (red line) and Z̃ (blue line) of 40 ppbv and a mixing ratio of X = 20 ppbv.
Equilibrium concentrations of species Z (top) and Z̃ (bottom) [ppbv] as a function of the concentrations of X and Y.
Article
We present an improved tagging method, which describes the combined effect of emissions of various species from individual emission categories, e.g. the impact of both nitrogen oxide and non-methane hydrocarbon emissions on ozone. This method is applied to two simplified chemistry schemes, which represent the main characteristics of atmospheric ozone chemistry, and analytical solutions are presented for this tagging approach. In the past, besides tagging approaches, sensitivity methods were used, which estimate the contributions from individual sources based on the difference between two simulations: a base case and a simulation with a perturbation in the respective emission category. We apply both methods to our simplified chemical systems and demonstrate that potentially large errors (a factor of 2) occur with the sensitivity method, depending on the degree of linearity of the chemical system. This error depends on two factors: first, the ability to linearise the chemical system around a base case, and second, the completeness of the contributions, meaning that all contributions should in principle add up to 100%. For some chemical regimes the first error can be minimised by employing only small perturbations of the respective emission, e.g. 5%. The second factor depends on the chemical regime and cannot be minimised by a specific experimental set-up; it is inherent to the sensitivity method. Since a complete tagging algorithm for global chemistry models is difficult to achieve, we present two error metrics, which can be applied with sensitivity methods in order to estimate the potential error of this approach for a specific application.
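The discrepancy between sensitivity and tagging estimates can be reproduced with a toy nonlinear system. The square-root response and all emission numbers below are hypothetical, chosen only to illustrate why perturbation-based contributions do not add up to the full field.

```python
import math

# Hypothetical nonlinear response of ozone to total emissions E:
# O3(E) = 2 * sqrt(E). Real ozone chemistry is nonlinear in a similar sense.
def o3(e_total):
    return 2.0 * math.sqrt(e_total)

e_cat, e_rest = 40.0, 60.0   # one emission category vs. all others

# Sensitivity method, 100 % perturbation: switch the category off entirely.
full_zero_out = o3(e_cat + e_rest) - o3(e_rest)

# Sensitivity method, small (5 %) perturbation, scaled back up.
eps = 0.05
scaled_5pct = (o3(e_cat + e_rest) - o3(e_cat * (1 - eps) + e_rest)) / eps

# Tagging-style contribution for this toy system: attribute O3 in
# proportion to the emission share, so all contributions sum to 100 %.
tagged = o3(e_cat + e_rest) * e_cat / (e_cat + e_rest)
```

Here `tagged` is 8.0 while `full_zero_out` is about 4.5, roughly the factor of 2 discussed above, and summing the zero-out contributions over all categories would not recover the full ozone field: that is the completeness error inherent to the sensitivity method.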
 
Definition of tracer field instances in COSMO/MESSy. The middle column lists the variable names of the respective fields in TRACER. The abbreviations RK and LF denote the Runge-Kutta and Leap-frog scheme, respectively. 
Definition of initialisation patterns for passive tracers used in this study. 
Initialisation pattern of the passive tracer V1. The horizontal axis shows rotated coordinates.
Corrected negative tracer mass (see text) in kg for the passive tracers H (left), V1 (middle) and V2 (right) in the COSMO-7 region. For H and V1 all lines are on top of each other.
Horizontal distribution at 900 hPa of the artificial tracer PNT. The location of the emission point is indicated by the light blue plus sign. Results are shown for the 12th, 15th and 18th simulation day at 12:00 UTC (columns). First row: ECHAM5/MESSy, second row COSMO-40/MESSy, third row COSMO-7/MESSy and last row composite of all three model domains.
Article
The numerical weather prediction model of the Consortium for Small Scale Modelling (COSMO), maintained by the German Weather Service (DWD), is connected with the Modular Earth Submodel System (MESSy). This effort is undertaken in preparation of a new, limited-area atmospheric chemistry model. Limited-area models require lateral boundary conditions for all prognostic variables. Therefore, the quality of a regional chemistry model is expected to improve if boundary conditions for the chemical constituents are provided by the driving model consistently with the meteorological boundary conditions. The newly developed model is as consistent as possible, with respect to atmospheric chemistry and related processes, with a previously developed global atmospheric chemistry general circulation model: the ECHAM/MESSy Atmospheric Chemistry (EMAC) model. The combined system constitutes a new research tool, bridging the global to the meso-γ scale for atmospheric chemistry research. MESSy provides the infrastructure and includes, among others, the process and diagnostic submodels for atmospheric chemistry simulations. Furthermore, MESSy is highly flexible, allowing model set-ups of tailor-made complexity, depending on the scientific question. Here, the connection of the MESSy infrastructure to the COSMO model is documented, together with the code changes required for the generalisation of regular MESSy submodels. Moreover, previously published prototype submodels for simplified tracer studies are generalised such that they can be plugged in and used in both the global and the limited-area model. They are used to evaluate the TRACER interface implementation in the new COSMO/MESSy model system and the tracer transport characteristics, an important prerequisite for future atmospheric chemistry applications. A supplementary document with further details on the technical implementation of the MESSy interface into COSMO, including a complete list of modifications to the COSMO code, is provided.
 
Climatology (2000-2009) of the contribution of ozone originating from the regions indicated by the boxes to the full ozone field (O3^i/O3) for each of the nine source regions.
As in Fig. 4, but difference 2040s minus 2000s. Colours follow Fig. 2: yellow: local, purple: tropical middle stratosphere, pink: tropical lower stratosphere. Here, the vertical bars denote the 1σ uncertainty in the differences.
Climatology (2000-2009) of the annual cycle of the net monthly ozone tendencies dO3^i (top) and, in the lower panel, of the tendencies due to transport (T^i, solid line), production (P, dashed line) and destruction (D O3^i, dash-dotted line) in the northern midlatitudes. Yellow lines are local mid-latitude ozone, purple lines tropical ozone.
Relative differences 2040s-2000s in mean ozone mixing ratios ((O3^p2 − O3^p1)/O3^p1) of each region. The first and second bars are the differences calculated from the left- and right-hand sides of Eq. (5). The other bars are changes in ozone over this period due to changes in chemistry (production P and destruction D rates) and dynamics (import, Im, and export, Ex). The error bars denote the 1σ uncertainty in the differences.
Annual ozone mass transport in 10^10 kg yr^-1 from each predefined region to each of the eight other regions.
Article
Chemistry-climate models (CCMs) are commonly used to simulate the past and future development of Earth's ozone layer. The fully coupled chemistry schemes calculate the chemical production and destruction of ozone interactively, and ozone is transported by the simulated atmospheric flow. Due to the complexity of the processes acting on ozone, it is not straightforward to disentangle the influence of individual processes on the temporal development of ozone concentrations. A method is introduced here that quantifies the influence of chemistry and transport on ozone concentration changes and that is easily implemented in CCMs and chemistry-transport models (CTMs). In this method, ozone tendencies (i.e. the time rate of change of ozone) are partitioned into a contribution from ozone production and destruction (chemistry) and a contribution from transport of ozone (dynamics). The influence of transport on ozone in a specific region is further divided into export of ozone out of that region and import of ozone from elsewhere into that region. For this purpose, a diagnostic is used that disaggregates the ozone mixing ratio field into nine separate fields according to which of nine predefined regions of the atmosphere the ozone originated in. With this diagnostic the ozone mass fluxes between these regions are obtained. Furthermore, this method is used here to attribute long-term changes in ozone to chemistry and transport. The relative changes in ozone from one period to another that are due to changes in production or destruction rates, or due to changes in import or export of ozone, are quantified. As such, the diagnostics introduced here can be used to attribute changes in ozone on monthly, interannual and long-term time scales to the responsible mechanisms. Results from a CCM simulation are shown as examples, with the main focus of the paper being on introducing the method.
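The origin-tagging diagnostic can be sketched with a reduced number of regions. The three-region transport matrix, production, and loss values below are illustrative stand-ins, not the CCM configuration.

```python
import numpy as np

# o3[i, j] is the ozone (arbitrary mass units) currently in region i that
# originated in region j, so o3.sum(axis=1) is the full field per region.
o3 = np.diag([100.0, 80.0, 60.0])

# T[i, k]: fraction of region k's ozone moved to region i per step
# (columns sum to 1, so transport alone conserves mass).
T = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.1, 0.9]])

def step(o3, T, prod, loss_rate):
    """One step: transport every origin-tagged field with the same
    operator, apply proportional destruction to all tags alike, then tag
    newly produced ozone with the region it was produced in."""
    o3 = T @ o3                      # transport preserves origin labels
    o3 = o3 * (1.0 - loss_rate)     # chemical destruction
    o3 += np.diag(prod)              # chemical production, locally tagged
    return o3

for _ in range(5):
    o3 = step(o3, T, prod=np.array([5.0, 2.0, 1.0]), loss_rate=0.02)

total = o3.sum(axis=1)                # full ozone field per region
origin_share = o3 / total[:, None]    # contribution of each origin region
```

Summing the off-diagonal flows of `T @ o3` between two regions over time gives exactly the inter-regional ozone mass fluxes that the diagnostic tabulates.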
 
Article
Variations in the mixing ratio of trace gases of tropospheric origin entering the stratosphere in the tropics are of interest for assessing both troposphere to stratosphere transport fluxes in the tropics and the impact of these transport fluxes on the composition of the tropical lower stratosphere. Anomaly patterns of carbon monoxide (CO) and long-lived tracers in the lower tropical stratosphere allow conclusions about the rate and the variability of tropical upwelling to be drawn. Here, we present a simplified chemistry scheme for the Chemical Lagrangian Model of the Stratosphere (CLaMS) for the simulation, at comparatively low numerical cost, of CO, ozone, and long-lived trace substances (CH4, N2O, CCl3F (CFC-11), CCl2F2 (CFC-12), and CO2) in the lower tropical stratosphere.
 
Flowchart of how the QCTM is implemented into the EMAC submodel PSC. The partitioning is calculated twice, using (a) offline and (b) online mixing ratios of HNO3 (see text for more detailed explanations).
Article
A quasi chemistry-transport model mode (QCTM) is presented for the numerical chemistry-climate simulation system ECHAM/MESSy Atmospheric Chemistry (EMAC). It allows chemical signals to be quantified by suppressing any feedback between chemistry and dynamics, whose noise would otherwise interfere too strongly. The signal follows from the difference of two QCTM simulations, reference and sensitivity. These are fed with offline chemical fields as a substitute for the feedbacks between chemistry and dynamics: (a) offline mixing ratios of radiatively active substances enter the radiation scheme, (b) offline mixing ratios of nitric acid enter the scheme for re-partitioning and sedimentation from polar stratospheric clouds, and (c) offline methane oxidation is the exclusive source of chemical water-vapour tendencies. Any set of offline fields suffices to suppress the feedbacks, though it may be inconsistent with the simulation set-up. An adequate set of offline climatologies can be produced from a non-QCTM simulation of the reference set-up. Test simulations reveal the particular importance of adequate offline fields associated with (a). Inconsistencies from (b) are negligible when using adequate fields of nitric acid. Acceptably small inconsistencies come from (c), but should vanish for an adequate prescription of water-vapour tendencies. Toggling between QCTM and non-QCTM is done via namelist switches and does not require re-compilation of the source code.
 
Article
A new model to simulate and predict the properties of a large ensemble of contrails as a function of given air traffic and meteorology is described. The model is designed for approximate prediction of contrail cirrus cover and for analysis of contrail climate impact, e.g. within aviation system optimization processes. The model simulates the full contrail life cycle. Contrail segments form between waypoints of individual aircraft tracks in sufficiently cold and humid air masses. The initial contrail properties depend on the aircraft. The advection and evolution of the contrails is followed with a Lagrangian Gaussian plume model. Mixing and bulk cloud processes are treated quasi-analytically or with an effective numerical scheme. Contrails disappear when their bulk ice content sublimates or precipitates. The model has been implemented in a "Contrail Cirrus Prediction Tool" (CoCiP). This paper describes the model assumptions, the equations for individual contrails, and the analysis method for contrail-cirrus cover derived from the optical depth of the ensemble of contrails and background cirrus. The model has been applied to a case study and compared to the results of other models and to in-situ contrail measurements. The simple model reproduces a considerable part of the observed contrail properties. Mid-aged contrails provide the largest contributions to the product of optical depth and contrail width, which is important for climate impact.
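The Lagrangian Gaussian plume evolution can be illustrated with a commonly used analytic solution for the second moments of a sheared, vertically diffusing plume. The form and all coefficients below are simplified assumptions for illustration, not the CoCiP equations.

```python
import math

def plume_moments(sigma_y0, sigma_z0, shear, d_v, t):
    """Width and depth (sqrt of second moments) of a Gaussian plume
    cross-section after time t, under vertical diffusivity d_v (m^2/s)
    and constant vertical wind shear (1/s); horizontal diffusion is
    neglected in this simplified solution."""
    sigma_z2 = sigma_z0 ** 2 + 2.0 * d_v * t
    sigma_y2 = (sigma_y0 ** 2
                + shear ** 2 * t ** 2 * sigma_z0 ** 2        # tilting of the initial depth
                + (2.0 / 3.0) * shear ** 2 * d_v * t ** 3)   # shear-enhanced spreading
    return math.sqrt(sigma_y2), math.sqrt(sigma_z2)

# A young contrail: ~25 m half-width, ~15 m depth, weak shear, after 10 min
sy, sz = plume_moments(25.0, 15.0, shear=0.002, d_v=0.15, t=600.0)
```

The cubic shear term is why aged contrails become much wider than they are deep, which feeds directly into the product of optical depth and contrail width highlighted above.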
 
Article
An important issue in the evaluation of the environmental impact of emissions from concentrated sources, such as transport modes, is to understand how processes occurring at the scale of exhaust plumes can influence the physical and chemical state of the atmosphere at regional and global scales. Indeed, three-dimensional global circulation models and chemistry transport models generally assume that emissions are instantaneously diluted into large-scale grid boxes, which may lead, for example, to overprediction of the efficiency of NOx in producing ozone. Various methods have recently been developed to incorporate parameterizations of plume processes into global models, based e.g. on correcting the original emission indices or on introducing "subgrid" reaction rates in the models. This paper reviews the techniques proposed so far in the literature to account for the local conversion of emissions in the plume, as well as the implementation of these techniques into atmospheric codes.
 
Article
This study uses in-situ measurements collected during the FireFlux field experiment to evaluate and improve the performance of the coupled atmosphere-fire model WRF-Sfire. The simulation by WRF-Sfire of the experimental burn shows that WRF-Sfire is capable of providing a realistic head-fire rate of spread, the vertical temperature structure of the fire plume, and, up to 10 m above ground level, the fire-induced surface flow and vertical velocities within the plume. The model captured the changes in wind speed and direction before, during, and after fire front passage, along with the arrival times of the wind speed, temperature, and updraft maxima at the two instrumented flux towers used in FireFlux. The model overestimated vertical velocities and underestimated horizontal wind speeds measured at tower heights above 10 m; it is hypothesised that the limited model resolution overestimated the fire front depth, leading to too high a heat release and, subsequently, too strong an updraft. On the whole, however, the WRF-Sfire fire plume behaviour is consistent with the interpretation of the FireFlux observations. The study also suggests the optimal experimental pre-planning, design, and execution of future field campaigns needed for further coupled atmosphere-fire model development and evaluation.
 
Article
We present a new version of the coupled Earth system model GEOCLIM. The new release, GEOCLIM reloaded, links the existing atmosphere and weathering modules to a novel, temporally and spatially resolved model of the global ocean circulation, which provides a physical framework for a mechanistic description of the marine biogeochemical dynamics of carbon, nitrogen, phosphorus and oxygen. The ocean model is also coupled to a fully formulated, vertically resolved diagenetic model. GEOCLIM reloaded is thus a unique tool to investigate the short- and long-term feedbacks between climatic conditions, continental inputs, ocean biogeochemical dynamics and diagenesis. A complete and detailed description of the resulting Earth system model and its new features is first provided. The performance of GEOCLIM reloaded is then evaluated by comparing a steady-state simulation under present-day conditions with a comprehensive set of oceanic data and existing global estimates of bio-element cycling in the pelagic and benthic compartments.
 
Model predictions with the version of the ICBM model implemented in SoilR. This graph reproduces Fig. 2 in Andren and Katterer (1997). The figure can be reproduced by typing example(ICBMModel) or attr(ICBMModel, "ex") in SoilR.
Examples of three different representations of a two-pool model with different model structures and environmental effects on decomposition rates. The upper panel shows carbon stocks and the lower panel carbon release. Additional details about the implementation are given in the text.
Carbon accumulation for the different pools included in the RothC model. DPM: decomposable plant material pool, RPM: resistant plant material, BIO: microbial biomass pool, HUM: humified organic matter pool, and IOM: inert organic matter pool.
Basic model structures implemented in SOILR. Squares represent the compartments, and arrows represent inputs and outputs to and from the compartments. These model structures are special cases of the matrix A.  
Climate decomposition index (CDI) calculated as the product of a function of temperature (fT.Century1) and a function of precipitation and potential evapotranspiration (fW.Century) using monthly data from the WATCH dataset (Weedon et al., 2011).  
Article
Soil organic matter decomposition is a very important process within the Earth system because it controls the rates of mineralization of carbon and other biogeochemical elements, determining their flux to the atmosphere and the hydrosphere. SoilR is a modeling framework that contains a library of functions and tools for modeling soil organic matter decomposition under the R environment for computing. It implements a variety of model structures and tools to represent carbon storage and release from soil organic matter. In SoilR, organic matter decomposition is represented as a linear system of ordinary differential equations that generalizes the structure of most compartment-based decomposition models. A variety of functions is also available to represent environmental effects on decomposition rates. This document presents the conceptual basis for the functions implemented in the package. It is complementary to the help pages released with the software.
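The linear system of ordinary differential equations mentioned above, dC/dt = I + ξ(t)·A·C, can be sketched in a few lines. The two-pool structure and parameter values below are illustrative (loosely ICBM-like), not SoilR's implementation, which is written in R.

```python
import numpy as np

def soil_carbon(C0, inputs, A, xi, dt, n_steps):
    """Forward-Euler integration of dC/dt = I + xi(t) * A @ C.

    C0: initial pool sizes; inputs: constant input vector I;
    A: decomposition/transfer matrix (negative diagonal = decay,
    off-diagonal = transfer between pools); xi: function of the step
    index returning an environmental rate modifier."""
    C = np.array(C0, dtype=float)
    out = [C.copy()]
    for k in range(n_steps):
        C = C + dt * (inputs + xi(k) * (A @ C))
        out.append(C.copy())
    return np.array(out)

# Two pools in series: a young pool decomposing at rate k1, of which a
# humification fraction h feeds a slow pool decomposing at rate k2.
k1, k2, h = 0.8, 0.006, 0.13          # yr^-1, yr^-1, dimensionless
A = np.array([[-k1,      0.0],
              [h * k1,  -k2]])
I = np.array([1.1, 0.0])              # inputs only to the young pool
traj = soil_carbon([0.3, 4.0], I, A, xi=lambda k: 1.0,
                   dt=0.01, n_steps=5000)  # 50 years
```

With a constant ξ the young pool relaxes to I/k1 within a few years, while the slow pool is still accumulating after 50 years, the separation of time scales these compartment models are built to capture.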
 
A screenshot of the Diamond interface. The options tree, including greyed-out unselected options and options with unset values (blue), is displayed on the left. The current option is the initial condition for the velocity field. On the right, the value of the current option is displayed in the space labelled "Data". In this case the option is an embedded Python script, specifying the initial velocity field from data in a vtu file. In the top right, the schema annotation for this option is visible, while at the bottom right a space is available for user comments. Element names, provided by name attributes, are clearly displayed by elements such as fields.
Data flow in a scientific model using Spud. Blue components are supplied by Spud and are model independent, red components form a part of the model but are independent of the scenario being simulated and yellow components depend on the particular scenario.  
Article
The interfaces by which users specify the scenarios to be simulated by scientific computer models are frequently primitive, under-documented and ad-hoc text files which make using the model in question difficult and error-prone and significantly increase the development cost of the model. In this paper, we present a model-independent system, Spud, which formalises the specification of model input formats in terms of formal grammars. This is combined with an automated graphical user interface which guides users to create valid model inputs based on the grammar provided, and a generic options reading module which minimises the development cost of adding model options. Together, this provides a user friendly, well documented, self validating user interface which is applicable to a wide range of scientific models and which minimises the developer input required to maintain and extend the model interface.
 
Pseudocode description of the sequence algorithm.
The NO2 yields from oxidation of emitted VOC species. Bars are coloured according to the number of carbon atoms in each species. Only species with a yield of more than 0.5 molecules of NO2 are shown.
Article
An algorithm for the sequential analysis of the atmospheric oxidation of chemical species using output from a photochemical model is presented. Starting at a "root species", the algorithm traverses all possible reaction sequences which consume this species, and lead, via intermediate products, to final products. The algorithm keeps track of the effects of all of these reactions on their respective reactants and products. Upon completion, the algorithm has built a detailed picture of the effects of the oxidation of the root species on its chemical surroundings. The output of the algorithm can be used to determine product yields, radical recycling fractions, and ozone production potentials of arbitrary chemical species.
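The traversal described above can be sketched as a breadth-first walk over a reaction network that accumulates yields along each sequence. The toy mechanism below is hypothetical, and the sketch assumes an acyclic network; the full algorithm must additionally cope with recycling chains.

```python
from collections import deque

# Hypothetical mechanism: each consumed species maps to its products with
# per-reaction yields (not a real chemical mechanism).
reactions = {
    "VOC":  [("RO2", 1.0), ("NO2", 1.0)],   # VOC + OH -> RO2; converts NO -> NO2
    "RO2":  [("HCHO", 0.8), ("NO2", 1.0)],
    "HCHO": [("CO", 1.0)],
}

def product_yields(root, reactions):
    """Breadth-first traversal of all reaction sequences starting at
    `root`, accumulating the yield of every intermediate and final
    product per unit of root species consumed."""
    yields = {}
    queue = deque([(root, 1.0)])
    while queue:
        species, amount = queue.popleft()
        for product, y in reactions.get(species, []):
            yields[product] = yields.get(product, 0.0) + amount * y
            if product in reactions:        # intermediate: follow it further
                queue.append((product, amount * y))
    return yields

yields = product_yields("VOC", reactions)
```

For this toy network the NO2 yield of the root VOC is 2.0 (one conversion in the initial oxidation step and one in the RO2 step), the kind of per-species quantity shown in the bar chart above.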
 
Physical model parameters. 
Comparisons of key physical model diagnostics. 
Model-data difference in dissolved O2 (a) above 1500 m water depth and (b) below 1500 m, in µmol kg^-1. We seek the smallest error from observations in both.
The postindustrial transient run is obtained by running the model from the preindustrial state, taken to be year 1765 and represented by the equilibrium run, to the year 1994 following the
Article
Here we describe the first version of the Minnesota Earth System Model for Ocean biogeochemistry (MESMO 1.0), an intermediate complexity model based on the Grid ENabled Integrated Earth system model (GENIE-1). As with GENIE-1, MESMO has a 3-D dynamical ocean, an energy-moisture balance atmosphere, dynamic and thermodynamic sea ice, and marine biogeochemistry. The main development goals of MESMO were to: (1) bring the oceanic uptake of anthropogenic transient tracers within data constraints; (2) increase the vertical resolution in the upper ocean to better represent near-surface biogeochemical processes; (3) calibrate the deep ocean ventilation with the observed abundance of radiocarbon. We achieved all these goals through a combination of objective model optimization and subjective targeted tuning. An important new feature in MESMO that dramatically improved the uptake of CFC-11 and anthropogenic carbon is the depth-dependent vertical diffusivity in the ocean, which is spatially uniform in GENIE-1. In MESMO, biological production occurs in the top two layers above the compensation depth of 100 m and is modified by additional parameters, for example the diagnosed mixed layer depth. In contrast, production in GENIE-1 occurs in a single layer with a thickness of 175 m. These improvements make MESMO a well-calibrated model of intermediate complexity suitable for investigations of the global marine carbon cycle requiring long integration times.
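A depth-dependent vertical diffusivity of the kind mentioned above can be sketched with an arctangent (Bryan-Lewis-type) profile, a common choice in ocean models of this class; the profile form and all numbers below are illustrative assumptions, not the MESMO calibration.

```python
import math

def vertical_diffusivity(depth_m, k_upper=0.3e-4, k_deep=1.3e-4,
                         z_ref=2500.0, inv_scale=4.5e-3):
    """Arctangent diffusivity profile in m^2/s: approximately k_upper in
    the upper ocean, approaching k_deep in the abyss, with a smooth
    transition centred at depth z_ref (m)."""
    return (0.5 * (k_upper + k_deep)
            + (k_deep - k_upper) / math.pi
            * math.atan(inv_scale * (depth_m - z_ref)))

surface = vertical_diffusivity(0.0)
abyss = vertical_diffusivity(5000.0)
```

Low near-surface diffusivity limits how quickly transient tracers such as CFC-11 are mixed downward, while the larger abyssal value maintains deep stratification, which is the qualitative behaviour a depth-dependent profile adds over a spatially uniform one.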
 
Article
As part of a broader effort to develop next-generation models for numerical weather prediction and climate applications, a hydrostatic atmospheric dynamical core is developed as an intermediate step to evaluate a finite-difference discretization of the primitive equations on spherical icosahedral grids. Based on the need for mass-conserving discretizations for multi-resolution modelling as well as scalability and efficiency on massively parallel computing architectures, the dynamical core is built on triangular C-grids using relatively small discretization stencils. This paper presents the formulation and performance of the baseline version of the new dynamical core, focusing on properties of the numerical solutions in the setting of globally uniform resolution. Theoretical analysis reveals that the discrete divergence operator defined on a single triangular cell using the Gauss theorem is only first-order accurate, and introduces grid-scale noise to the discrete model. The noise can be suppressed by fourth-order hyper-diffusion of the horizontal wind field using a time-step and grid-size-dependent diffusion coefficient, at the expense of stronger damping than in the reference spectral model. A series of idealized tests of different complexity are performed. In the deterministic baroclinic wave test, solutions from the new dynamical core show the expected sensitivity to horizontal resolution, and converge to the reference solution at R2B6 (35 km grid spacing). In a dry climate test, the dynamical core correctly reproduces key features of the meridional heat and momentum transport by baroclinic eddies. In the aqua-planet simulations at 140 km resolution, the new model is able to reproduce the same equatorial wave propagation characteristics as in the reference spectral model, including the sensitivity of such characteristics to the meridional sea surface temperature profile. 
These results suggest that the triangular-C discretization provides a reasonable basis for further development. The main issues that need to be addressed are the grid-scale noise from the divergence operator which requires strong damping, and a phase error of the baroclinic wave at medium and low resolutions.
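The noise issue concerns the single-cell Gauss-theorem divergence. A minimal sketch of that operator on one triangle (a simplified geometry with midpoint sampling, not the ICON discretization itself) looks like:

```python
import numpy as np

def tri_divergence(verts, u):
    """Discrete divergence over one triangular cell via the Gauss theorem:
    div ~ (1/A) * sum over edges of u·n, with n the outward edge normal
    scaled by the edge length, and u sampled at the edge midpoint."""
    v = [np.asarray(p, float) for p in verts]
    # signed area, positive for counter-clockwise vertex ordering
    d1, d2 = v[1] - v[0], v[2] - v[0]
    A = 0.5 * (d1[0] * d2[1] - d1[1] * d2[0])
    div = 0.0
    for i in range(3):
        a, b = v[i], v[(i + 1) % 3]
        e = b - a
        n = np.array([e[1], -e[0]])     # outward normal, |n| = edge length (CCW)
        mid = 0.5 * (a + b)
        div += float(np.dot(u(mid), n))
    return div / A

# Linear field u = (x, 0): the midpoint rule is exact here, so div = 1.
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(tri_divergence(tri, lambda p: np.array([p[0], 0.0])))
```

The operator is exact for linear fields on a single cell; the first-order accuracy and grid-scale noise discussed in the abstract arise on general triangular meshes, where the sampling points and cell centroids are not consistently aligned.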
 
HNO3 (top) and NOy (bottom) as a function of flight time (UTC) from measurements of selected ER-2 flights (3 February, 5 March, 12 March) (blue dots) compared to modeled values (lines). NOy measurements give the total (solid and gas phase, for details see text) values, HNO3 measurements show only the gas phase. The black line shows the model values for the gas phase, the grey line the total (solid not in the Lagrangian particles and gas phase) values and the dashed grey line the passive NOy tracer.
ClO (top), Cl2O2 (middle) and ClONO2 (bottom) as a function of flight time (UTC) from measurements of selected ER-2 flights (27 January, 3 February, 12 March) (blue dots, cyan lines show 2σ accuracy) compared to modeled values (black lines).
ClO and its reservoirs HOCl, HCl and ClONO 2 from data of the OMS balloon launch on 15 March 2000. Measurements of the Mark IV instrument (blue, with error bars), the SLS instrument (red) and modeled values (black lines) are shown for comparison. Note that Mark IV and SLS are remote sensing instruments viewing into opposite directions and that the mixing ratios of the species are interpolated to the Mark IV tangent points.
Article
ATLAS is a new global Lagrangian Chemistry and Transport Model (CTM), which includes a stratospheric chemistry scheme with 46 active species, 171 reactions, heterogeneous chemistry on polar stratospheric clouds and a Lagrangian denitrification module. Lagrangian (trajectory-based) models have several important advantages over conventional Eulerian models, including the absence of spurious numerical diffusion, efficient code parallelization and no limitation of the largest time step by the Courant-Friedrichs-Lewy criterion. This work describes and validates the stratospheric chemistry scheme of the model. Stratospheric chemistry is simulated with ATLAS for the Arctic winter 1999/2000, with a focus on polar ozone depletion and denitrification. The simulations are used to validate the chemistry module in comparison with measurements of the SOLVE/THESEO 2000 campaign. A Lagrangian denitrification module, which is based on the simulation of the nucleation, sedimentation and growth of a large number of polar stratospheric cloud particles, is used to model the substantial denitrification that occurred in this winter.
 
Scatterplots between forecasts and observations for selected percentiles of the daily mean PM2.5 concentrations (µg/m3): (a) raw model forecasts, (b) Kalman filter-adjusted forecasts.
Box plots of RMSE and decomposed RMSE (systematic, RMSEs; unsystematic, RMSEu) values of the daily mean PM2.5 concentrations (µg/m3) for the raw model forecasts and KF bias-adjusted forecasts.
(a and b) RMSE and (c and d) mean bias (MB) values over observed daily mean PM2.5 concentration (µg/m3) bins for the raw model forecasts and the KF bias-adjusted forecasts. Panels (a) and (c) are for the warm season, (b) and (d) for the cool season.
False alarm ratio (FAR) and hit rate (H) for the daily mean PM2.5 forecasts by the raw model and the KF bias adjustment over the domain (DM) and all sub-regions during (a) the warm season and (b) the cool season: FAR-MD, FAR associated with raw model forecasts; FAR-KF, FAR associated with KF forecasts; H-MD, H associated with raw model forecasts; H-KF, H associated with KF forecasts.
Article
To develop fine particulate matter (PM<sub>2.5</sub>) air quality forecasts for the US, a National Air Quality Forecast Capability (NAQFC) system, which linked NOAA's North American Mesoscale (NAM) meteorological model with EPA's Community Multiscale Air Quality (CMAQ) model, was deployed in the developmental mode over the continental United States during 2007. This study investigates the operational use of a bias-adjustment technique called the Kalman Filter Predictor approach for improving the accuracy of the PM<sub>2.5</sub> forecasts at monitoring locations. The Kalman Filter Predictor bias-adjustment technique is a recursive algorithm designed to optimally estimate bias-adjustment terms using the information extracted from previous measurements and forecasts. The bias-adjustment technique is found to improve PM<sub>2.5</sub> forecasts (i.e. reduced errors and increased correlation coefficients) for the entire year at almost all locations. The NAQFC tends to overestimate PM<sub>2.5</sub> during the cool season and underestimate during the warm season in the eastern part of the continental US domain, but the opposite is true for the Pacific Coast. In the Rocky Mountain region, the NAQFC system overestimates PM<sub>2.5</sub> for the whole year. The bias-adjusted forecasts can quickly (after 2–3 days' lag) adjust to reflect the transition from one regime to the other. The modest computational requirements and systematic improvements in forecast outputs across all seasons suggest that this technique can be easily adapted to perform bias adjustment for real-time PM<sub>2.5</sub> air quality forecasts.
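The recursive bias estimator can be sketched with a generic scalar Kalman filter; the random-walk bias model and the noise variances q and r below are illustrative choices, not the NAQFC configuration:

```python
def kalman_bias_adjust(forecasts, observations, q=0.01, r=1.0):
    """Recursive scalar Kalman filter estimating a slowly varying forecast
    bias b (modeled as a random walk), then subtracting it from each raw
    forecast.  q and r are the process / observation error variances."""
    b, p = 0.0, 1.0                       # bias estimate and its error variance
    adjusted = []
    for f, o in zip(forecasts, observations):
        adjusted.append(f - b)            # correct today's forecast with yesterday's bias
        k = (p + q) / (p + q + r)         # Kalman gain
        b = b + k * ((f - o) - b)         # update bias with today's forecast error
        p = (1.0 - k) * (p + q)
    return adjusted

# Raw forecasts that consistently overestimate the observations by ~3 units:
obs  = [10, 12, 11, 13, 12, 11, 10, 12]
fcst = [o + 3 for o in obs]
adj = kalman_bias_adjust(fcst, obs)
# after a few days the adjusted forecasts converge toward the observations
```

The few-day convergence of the bias estimate is what gives the 2-3 day adjustment lag mentioned in the abstract when the error regime shifts.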
 
Sketch illustrating several aspects/components to be considered in ice sheet modelling (adapted after Sandhäger, 2000). 
Comparison of modelled SIA ice thicknesses of experiments described in Huybrechts and Payne (1996) (red), the Richardson extrapolation result of Bueler et al. (2005) (green), and RIMBAY results (blue). The RIMBAY A-Grid implementation corresponds essentially with the 3-D/Type-II. 
Modelled horizontal velocity for two synthetic floating ice structures with complex geometries. Nunataks are indicated in brown. At the southern (lower) edge, no-slip boundary conditions are applied; at the northern edge and at the ice rise in the left ice body, free-slip boundary conditions apply.
Geometry for the experiment described in Sect. 6.4. (a) Bedrock topography and ice geometry. The horizontal ice velocity is plotted on top of the ice sheet surface; the magenta and red lines indicate the interpolated (sub-grid scale) GRL positions for the coupled SIA/SSA, the HOM (dotted) and the FS (solid) solution, respectively; the black rectangle indicates the region where the FS solver is applied. Additionally, the basal friction parameter β² (according to Eq. 20) is shown. (b) Profile along y = 100 km. The dashed black lines indicate the areas where the HOM and FS solutions are calculated, respectively; the red lines indicate the shape of the corresponding ice geometry for the HOM solution (dotted) and FS solution (solid).
Relative positions and numbering of nodes for the SSA. 
Article
Glaciers and ice caps currently make the largest cryospheric contribution to sea level rise. Modelling the dynamics and mass balance of the major ice sheets is therefore an important means of investigating the current state and future response of the cryosphere to changing environmental conditions, namely global warming. This requires a powerful, easy-to-use, scalable multi-physics ice dynamics model. Based on the well-known and established ice sheet model of Pattyn (2003), we develop the modular multi-physics thermomechanical ice model RIMBAY, improving the original version in several aspects, such as a shallow-ice/shallow-shelf coupler and a full 3-D grounding-line migration scheme based on Schoof's (2007) heuristic analytical approach. We summarise the full Stokes equations and the several approximations implemented within this model, and we describe the different numerical discretisations. The results are cross-validated against previous ice modelling publications, and additional artificial set-ups demonstrate the robustness of the different solvers and their internal coupling. RIMBAY is designed for easy adaptation to new scientific questions; hence, we demonstrate in very different set-ups the applicability and functionality of RIMBAY in Earth system science in general and ice modelling in particular.
 
Article
This paper describes the scientific and structural updates to the latest release of the Community Multiscale Air Quality (CMAQ) modeling system version 4.7 (v4.7) and points the reader to additional resources for further details. The model updates were evaluated relative to observations and results from previous model versions in a series of simulations conducted to incrementally assess the effect of each change. The focus of this paper is on five major scientific upgrades: (a) updates to the heterogeneous N2O5 parameterization, (b) improvement in the treatment of secondary organic aerosol (SOA), (c) inclusion of dynamic mass transfer for coarse-mode aerosol, (d) revisions to the cloud model, and (e) new options for the calculation of photolysis rates. Incremental test simulations over the eastern United States during January and August 2006 are evaluated to assess the model response to each scientific improvement, providing explanations of differences in results between v4.7 and previously released CMAQ model versions. Particulate sulfate predictions are improved across all monitoring networks during both seasons due to cloud module updates. Numerous updates to the SOA module improve the simulation of seasonal variability and decrease the bias in organic carbon predictions at urban sites in the winter. Bias in the total mass of fine particulate matter (PM2.5) is dominated by overpredictions of unspeciated PM2.5 (PMother) in the winter and by underpredictions of carbon in the summer. The CMAQv4.7 model results show slightly worse performance for ozone predictions. However, changes to the meteorological inputs are found to have a much greater impact on ozone predictions compared to changes to the CMAQ modules described here. Model updates had little effect on existing biases in wet deposition predictions.
 
Article
The spin-up of land models to steady state of coupled carbon-nitrogen processes is computationally so costly that it becomes a bottleneck issue for global analysis. In this study, we introduced a semi-analytical solution (SAS) for the spin-up issue. SAS is fundamentally based on the analytic solution to a set of equations that describe carbon transfers within ecosystems over time. SAS is implemented in three steps: (1) an initial spin-up with prior pool-size values until net primary productivity (NPP) stabilizes, (2) calculation of quasi-steady-state pool sizes by setting the fluxes of the equations to zero, and (3) a final spin-up to meet the criterion of steady state. Step 2 uses time-varying variables averaged over one period of the repeated driving forcings. SAS was applied to both site-level and global-scale spin-up of the Australian Community Atmosphere Biosphere Land Exchange (CABLE) model. For the carbon-cycle-only simulations, SAS saved 95.7% and 92.4% of computational time for site-level and global spin-up, respectively, in comparison with the traditional method (a long-term iterative simulation to achieve the steady states of variables). For the carbon-nitrogen coupled simulations, SAS reduced computational cost by 84.5% and 86.6% for site-level and global spin-up, respectively. The estimated steady-state pool sizes represent the ecosystem carbon storage capacity, which was 12.1 kg C m−2 with the coupled carbon-nitrogen global model, 14.6% lower than that with the carbon-only model. The nitrogen down-regulation in modeled carbon storage is partly due to the 4.6% decrease in carbon influx (i.e., net primary productivity) and partly due to the 10.5% reduction in residence times. This steady-state analysis accelerated by the SAS method can facilitate comparative studies of how structural differences among biogeochemical models determine the ecosystem carbon storage capacity.
Overall, the computational efficiency of SAS potentially permits many global analyses that are impossible with the traditional spin-up methods, such as ensemble analysis of land models against parameter variations.
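Step 2 of SAS, setting the fluxes to zero and solving for the pools, reduces to a single linear solve for a linear carbon-transfer model. The three-pool structure, turnover rates and transfer fractions below are invented for illustration and are not CABLE's parameters:

```python
import numpy as np

# Hypothetical 3-pool carbon model (leaf -> litter -> soil).
k = np.array([1.0, 0.5, 0.02])        # turnover rates (1/yr)
T = np.array([[0.0, 0.0, 0.0],        # T[i, j]: fraction of pool j's loss entering pool i
              [1.0, 0.0, 0.0],
              [0.0, 0.3, 0.0]])
b = np.array([1.0, 0.0, 0.0])         # allocation of NPP to pools
npp = 2.0                             # time-averaged NPP (kg C / m2 / yr)

# Steady state of dC/dt = b*npp - (I - T) diag(k) C:
# set the derivative to zero and solve the linear system for C.
M = (np.eye(3) - T) @ np.diag(k)
C_star = np.linalg.solve(M, b * npp)
print(C_star)    # steady-state pool sizes, here [2, 4, 30]
```

The same solve replaces thousands of years of forward integration, which is where the reported 85-96% savings in spin-up cost come from.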
 
Article
We present a new global Chemical Transport Model (CTM) with full stratospheric chemistry and Lagrangian transport and mixing called ATLAS (Alfred Wegener InsTitute LAgrangian Chemistry/Transport System). Lagrangian (trajectory-based) models have several important advantages over conventional Eulerian (grid-based) models, including the absence of spurious numerical diffusion, efficient code parallelization and no limitation of the largest time step by the Courant-Friedrichs-Lewy criterion. The basic concept of transport and mixing is similar to the approach in the commonly used CLaMS model. Several aspects of the model are different from CLaMS and are introduced and validated here, including a different mixing algorithm for lower resolutions which is less diffusive and agrees better with observations with the same mixing parameters. In addition, values for the vertical and horizontal stratospheric bulk diffusion coefficients are inferred and compared to other studies. This work focusses on the description of the dynamical part of the model and the validation of the mixing algorithm. The chemistry module, which contains 49 species, 170 reactions and a detailed treatment of heterogeneous chemistry, will be presented in a separate paper.
 
Article
Calculations of the equilibrium composition of atmospheric aerosol particles, using all variations of Köhler theory, have largely assumed that the total solute concentrations define both the water activity and the surface tension. Recently, however, bulk-to-surface phase partitioning has been postulated as a process which significantly alters the predicted point of activation. In this paper, an analytical solution for the removal of material from a bulk to a surface layer in aerosol particles is derived using a well-established and validated surface tension framework. Applicability to an unlimited number of components is possible through reliance on data from each binary system. Whilst assumptions regarding behaviour in the surface layer have been made to facilitate the derivation, it is proposed that the framework presented can capture the overall impact of bulk-surface partitioning. Demonstrations of the equations for two- and five-component mixtures are given, and comparisons are made with more detailed frameworks capable of modelling ternary systems at higher levels of complexity. Predictions made by the model across a range of surface-active properties should be tested against measurements; indeed, recommendations are given for experimental validation and for assessing sensitivities to accuracy and the required level of complexity within large-scale frameworks. Importantly, the computational cost of the solution presented in this paper is roughly a factor of 20 lower than that of a similar iterative approach; a comparison with highly coupled approaches is not available beyond a three-component system.
 
Article
REMO-HAM is a new regional aerosol-climate model. It is based on the REMO regional climate model and includes most of the major aerosol processes. The aerosol structure is similar to that of the global aerosol-climate model ECHAM5-HAM; for example, the aerosol module HAM is coupled with a two-moment stratiform cloud scheme. On the other hand, REMO-HAM includes neither an online-coupled aerosol-radiation scheme nor a secondary organic aerosol module. In this work, we evaluate the model and compare the results against ECHAM5-HAM and measurements. Four measurement sites were chosen for the comparison of total number concentrations, size distributions and gas-phase sulfur dioxide concentrations: Hyytiälä in Finland, Melpitz in Germany, Mace Head in Ireland and Jungfraujoch in Switzerland. REMO-HAM is run at two resolutions: 50 × 50 km2 and 10 × 10 km2. Based on our simulations, REMO-HAM is in reasonable agreement with the measured values. The differences in total number concentrations between REMO-HAM and ECHAM5-HAM can be explained mainly by differences in the nucleation mode. Since we used neither activation-type nor kinetic nucleation for the boundary layer, the total number concentrations are somewhat underestimated. From the meteorological point of view, REMO-HAM reproduces the precipitation fields and 2 m temperature very well compared to measurements. Overall, we show that REMO-HAM is a functional aerosol-climate model, which will be used in further studies.
 
Article
We present a new aerosol microphysics and gas aerosol partitioning submodel (Global Modal-aerosol eXtension, GMXe) implemented within the ECHAM/MESSy Atmospheric Chemistry model (EMAC, version 1.8). The submodel is computationally efficient and is suitable for medium to long term simulations with global and regional models. The aerosol size distribution is treated using 7 log-normal modes and has the same microphysical core as the M7 submodel (Vignati et al., 2004). The main developments in this work are: (i) the extension of the aerosol emission routines and the M7 microphysics, so that an increased (and variable) number of aerosol species can be treated (new species include sodium and chloride, and potentially magnesium, calcium, and potassium), (ii) the coupling of the aerosol microphysics to a choice of treatments of gas/aerosol partitioning to allow the treatment of semi-volatile aerosol, and, (iii) the implementation and evaluation of the developed submodel within the EMAC model of atmospheric chemistry. Simulated concentrations of black carbon, particulate organic matter, dust, sea spray, sulfate and ammonium aerosol are shown to be in good agreement with observations (for all species at least 40% of modeled values are within a factor of 2 of the observations). The distribution of nitrate aerosol is compared to observations in both clean and polluted regions. Concentrations in polluted continental regions are simulated quite well, but there is a general tendency to overestimate nitrate, particularly in coastal regions (geometric mean of modelled values/geometric mean of observed data ≈2). In all regions considered more than 40% of nitrate concentrations are within a factor of two of the observations. Marine nitrate concentrations are well captured with 96% of modeled values within a factor of 2 of the observations.
 
Deposition on long grass (LUC 14) and influence of the different processes under low and high wind conditions (u* = 10 and 90 cm s−1). The canopy is characterized by h = 0.77 m, LAI = 4, z0 = 0.1 m and d = 0.49 m, while ρp = 1500 kg m−3. The deposition velocity at zR = 5 m, predicted by the present model, is given on the left-hand side. The relative error Err between the present model and the 1-D model is given on the right-hand side, when the different processes are considered separately or together.
Comparison of the present model and the 1-D model under configurations of evergreen needleleaf forest (LUC 4) and short grass (LUC 13, with leaves) for different friction velocity conditions. For the 1-D model, the crown base height of the forest is taken as h/2 and the vertical profile of the leaf surface density as Gaussian. Other parameters are given in Table 2. Blue and red solid lines correspond respectively to the present model and the one-dimensional model, while the green solid line corresponds to the relative error between them. The black line corresponds to the sedimentation velocity.
Deposition on coniferous forest, as measured by Beswick et al. (1991); Lorenz and Murphy (1989); Lamaud et al. (1994); Buzorius et al. (2000); Gaman et al. (2004); Grönholm et al. (2009); Gallagher et al. (1997). A friction velocity of 47.5 cm s−1, a particle density of 1500 kg m−3 and the parameters of LUC 4 are used to run the model of Zhang et al. (2001, solid brown), the 1-D model (solid red) and the present model (solid blue). The predictions of the present model obtained under the configuration of Beswick et al.'s experiment (u* = 37 cm s−1, h = 4.2 m, hc = 1 m, LAI = 10, z0 = 0.3 m, d = 2.8 m, ρp = 1000 kg m−3) are added as blue dots. All deposition velocities are re-calculated at zR = 30 m.
Article
A size-resolved particle dry deposition scheme is developed for inclusion in large-scale air quality and climate models where the size distribution and fate of atmospheric aerosols are of concern. The "resistance" structure is similar to that proposed by Zhang et al. (2001), while a new "surface" deposition velocity (or surface resistance) is derived by simplification of a one-dimensional aerosol transport model (Petroff et al., 2008b, 2009). Compared to Zhang et al.'s model, the present model accounts for the leaf size, shape and area index as well as the height of the vegetation canopy. Consequently, it is more sensitive to changes of land cover, particularly in the accumulation mode (0.1-1 micron). A drift velocity is included to account for the phoretic effects related to temperature and humidity gradients close to liquid and solid water surfaces. An extended comparison of this model with experimental evidence is performed over typical land covers such as bare ground, grass, coniferous forest, and liquid and solid water surfaces, and highlights its adequate predictions. The predictions of the present model differ from Zhang et al.'s model in the fine mode, where the latter tends to significantly over-estimate particle deposition, as measured by various investigators or predicted by the present model. The present development should be useful to modellers of the atmospheric aerosol who need an adequate parameterization of aerosol dry removal to the earth surface, described here by 26 land covers. An open-source code is available in Fortran90.
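The resistance structure referred to above can be sketched as gravitational settling combined with aerodynamic and surface resistances in series. The simplified form v_d = v_g + 1/(R_a + R_s) and the numerical values below are illustrative only, not the scheme's full parameterization (slip correction and the phoretic drift velocity are omitted):

```python
def settling_velocity(d_p, rho_p=1500.0, mu=1.8e-5, g=9.81):
    """Stokes gravitational settling velocity (m/s) for a particle of
    diameter d_p (m) and density rho_p (kg/m3); slip correction neglected."""
    return rho_p * d_p**2 * g / (18.0 * mu)

def deposition_velocity(v_g, R_a, R_s):
    """Resistance-analogy dry deposition velocity (m/s): settling plus the
    series combination of aerodynamic (R_a) and surface (R_s) resistances (s/m)."""
    return v_g + 1.0 / (R_a + R_s)

# 1 micron particle over a grass-like surface (resistance values illustrative):
v_g = settling_velocity(1e-6)
v_d = deposition_velocity(v_g, R_a=50.0, R_s=200.0)
```

For this accumulation-mode particle the settling term is negligible and the deposition velocity is dominated by the surface resistance, which is exactly the size range where the abstract notes the largest inter-model differences.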
 
Theoretical and calculated AOD at λ = 532 nm per size section.
Profiles of attenuated backscatter (β′) for BCAR (left) and DUST (right), per size section (bin) as a function of altitude for λ = 532 and 1064 nm.
Profiles of attenuated scattering ratio (R′) and color ratio (χ′) for BCAR (left) and DUST (right) per size section as a function of altitude.
Temporal evolution of the daily mean AOD (500 nm) by AERONET (red line) and the corresponding CHIMERE AOD (at 532 nm, black line) at three AERONET sites (Blida, Carpentras, Lecce).
Aerosol Optical Depth modeled with CHIMERE for λ = 532 nm for 9 and 14 July 2007 at the same hour as the CALIPSO overpass time.
Article
We present an adaptable tool, the OPTSIM (OPTical properties SIMulation) software, for the simulation of optical properties and lidar attenuated backscatter profiles (β′) from aerosol concentrations calculated by chemistry transport models (CTMs). It was developed to model both Level 1 observations and Level 2 aerosol lidar retrievals in order to compare model results to measurements: the Level 2 retrievals allow the main properties of aerosol plume structures to be estimated, but may be limited by their specific assumptions. The Level 1 capability, originally developed for this tool, gives access to more information about aerosol properties (β′) while requiring fewer hypotheses about aerosol types. In addition to an evaluation of the aerosol loading and optical properties, active remote sensing allows the analysis of the aerosols' vertical structure. An academic case study for two different species (black carbon and dust) is presented and shows the consistency of the simulator. Illustrations are then given through the analysis of dust events in the Mediterranean region during the summer of 2007. These are based on simulations by the CHIMERE regional CTM and observations from the CALIOP space-based lidar, and highlight the potential of this approach to evaluate the concentration, size and vertical structure of aerosol plumes.
 
Relative differences (in %) of annual mean vertically integrated concentrations of sulfate, BC, POM, SOA, dust, and sea salt between MAM7 and MAM3.
Same as Fig. 5, except for annual and zonal mean aerosol number concentrations in Aitken, accumulation and coarse mode.
Annual averaged global distribution of CCN number concentration at 0.1 % supersaturation at surface in MAM3 (upper) and MAM7 (lower).
Article
A modal aerosol module (MAM) has been developed for the Community Atmosphere Model version 5 (CAM5), the atmospheric component of the Community Earth System Model version 1 (CESM1). MAM is capable of simulating the aerosol size distribution and both internal and external mixing between aerosol components, treating numerous complicated aerosol processes and aerosol physical, chemical and optical properties in a physically-based manner. Two MAM versions were developed: a more complete version with seven lognormal modes (MAM7), and a version with three lognormal modes (MAM3) for the purpose of long-term (decades to centuries) simulations. In this paper a description and evaluation of the aerosol module and its two representations are provided. Sensitivity of the aerosol lifecycle to simplifications in the representation of aerosol is discussed. Simulated sulfate and secondary organic aerosol (SOA) mass concentrations are remarkably similar between MAM3 and MAM7. Differences in primary organic matter (POM) and black carbon (BC) concentrations between MAM3 and MAM7 are also small (mostly within 10%). The mineral dust global burden differs by 10% and sea salt burden by 30-40% between MAM3 and MAM7, mainly due to the different size ranges for dust and sea salt modes and different standard deviations of the log-normal size distribution for sea salt modes between MAM3 and MAM7. The model is able to qualitatively capture the observed geographical and temporal variations of aerosol mass and number concentrations, size distributions, and aerosol optical properties. However, there are noticeable biases; e.g., simulated BC concentrations are significantly lower than measurements in the Arctic. There is a low bias in modeled aerosol optical depth on the global scale, especially in the developing countries. 
These biases in aerosol simulations clearly indicate the need for improvements of aerosol processes (e.g., emission fluxes of anthropogenic aerosols and precursor gases in developing countries, boundary layer nucleation) and properties (e.g., primary aerosol emission size, POM hygroscopicity). In addition, the critical role of cloud properties (e.g., liquid water content, cloud fraction) responsible for the wet scavenging of aerosol is highlighted.
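A basic building block of any modal scheme is the conversion between the mass and number of a lognormal mode. The sketch below uses the standard Hatch-Choate moment relation; the mode parameters are illustrative, not MAM's prescribed values:

```python
import math

def number_from_mass(M, D_g, sigma_g, rho):
    """Number concentration (#/m3) of a lognormal aerosol mode from its mass
    concentration M (kg/m3), geometric median diameter D_g (m), geometric
    standard deviation sigma_g, and particle density rho (kg/m3).
    Hatch-Choate: M = N * rho * (pi/6) * D_g**3 * exp(4.5 * ln(sigma_g)**2)."""
    return M / (rho * (math.pi / 6.0) * D_g**3
                * math.exp(4.5 * math.log(sigma_g)**2))

# Illustrative accumulation-mode sulfate: 1 ug/m3 mass, D_g = 0.1 um,
# sigma_g = 1.8, density 1770 kg/m3 -> a few hundred particles per cm3.
N = number_from_mass(1e-9, 0.1e-6, 1.8, 1770.0)
```

The exp(4.5 ln²σ_g) factor is why the choice of standard deviation for the sea salt modes, mentioned above, changes the burden so strongly between MAM3 and MAM7.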
 
(a) Monthly mean O3 with CB05-Base, (b) percent increases
The median and inter-quartile range of mean bias for the daily maximum 8-h O3 with CB05-TU and CB05-Base: (a) Los Angeles, (b) Portland, (c) Seattle, (d) Chicago, (e) New York/New Jersey, (f) Detroit. The number beneath each paired evaluation represents the total sample number in each binned range of observed concentration.
The median and inter-quartile range of mean normalized bias for the daily maximum 8-h O3 with CB05-TU and CB05-Base: (a) Los Angeles, (b) Portland, (c) Seattle, (d) Chicago, (e) New York/New Jersey, (f) Detroit. The number beneath each paired evaluation represents the total sample number in each binned range of observed concentration.
Article
A new condensed toluene mechanism is incorporated into the Community Multiscale Air Quality Modeling system. Model simulations are performed using the CB05 chemical mechanism containing the existing (base) and the new toluene mechanism for the western and eastern US for a summer month. With current estimates of tropospheric emission burden, the new toluene mechanism increases monthly mean daily maximum 8-h ozone by 1.0–3.0 ppbv in Los Angeles, Portland, Seattle, Chicago, Cleveland, northeastern US, and Detroit compared to that with the base toluene chemistry. It reduces model mean bias for ozone at elevated observed ozone mixing ratios. While the new mechanism increases predicted ozone, it does not enhance ozone production efficiency. Sensitivity study suggests that it can further enhance ozone if elevated toluene emissions are present. While changes in total fine particulate mass are small, predictions of in-cloud SOA increase substantially.
 
Article
This paper describes a method to automatically generate a large ensemble of air quality simulations. This is achieved using the Polyphemus system, which is flexible enough to build many different models. The system offers a wide range of options in the construction of a model: many physical parameterizations, several numerical schemes and different input data can be combined. In addition, input data can be perturbed. In this paper, some 30 alternatives are available for the generation of a model. For each alternative, the options are given a probability based on how reliable they are supposed to be. Each model of the ensemble is defined by randomly selecting one option per alternative. In order to decrease the computational load, as many computations as possible are shared by the models of the ensemble. As an example, an ensemble of 101 photochemical models is generated and run for the year 2001 over Europe. The models' performance is briefly reviewed, and the ensemble structure is analyzed. We found strong diversity in the results of the models and a wide spread of the ensemble. It is noteworthy that many models turn out to be the best model in some regions and on some dates.
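The member-generation step can be sketched as one weighted random draw per alternative. The alternative names, options and probabilities below are invented placeholders, not actual Polyphemus configuration keys:

```python
import random

# Hypothetical alternatives with reliability-based probabilities.
alternatives = {
    "chemistry":  [("RACM", 0.5), ("RADM2", 0.5)],
    "advection":  [("third_order", 0.7), ("first_order", 0.3)],
    "deposition": [("zhang", 0.6), ("wesely", 0.4)],
}

def draw_model(alternatives, rng):
    """Define one ensemble member by drawing one option per alternative,
    weighted by the prescribed probabilities."""
    return {name: rng.choices([o for o, _ in opts],
                              weights=[w for _, w in opts])[0]
            for name, opts in alternatives.items()}

rng = random.Random(42)          # seeded so the ensemble is reproducible
ensemble = [draw_model(alternatives, rng) for _ in range(101)]
```

With 30 alternatives of 2-3 options each, the space of possible models is astronomically larger than 101 members, which is why the random design matters.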
 
Illustration of block structure for simulation domain as well as one generic block in MUSCAT.  
Examples of two stage, second order explicit Runge-Kutta methods. 
Synopsis of spatial structure for academic test case. 
Synopsis of spatial structure for realistic test case. 
Article
Explicit time integration methods are characterised by a small numerical effort per time step. In the application to multiscale problems in atmospheric modelling, this benefit is often more than compensated for by stability problems and step size restrictions resulting from stiff chemical reaction terms and from a locally varying Courant-Friedrichs-Lewy (CFL) condition for the advection terms. Splitting methods may be applied to efficiently combine implicit and explicit methods (IMEX splitting). Complementarily, multirate time integration schemes allow for a local adaptation of the time step size to the grid size. In combination, these approaches lead to schemes which are efficient in terms of evaluations of the right-hand side. Special challenges arise when these methods are to be implemented. For an efficient implementation, it is crucial to locate and exploit redundancies. Furthermore, the more complex programme flow may lead to computational overhead which, in the worst case, more than compensates for the theoretical gain in efficiency. We present a general splitting approach which allows both for IMEX splittings and for local time step adaptation. The main focus is on an efficient implementation of this approach for parallel computation on computer clusters.
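The benefit of IMEX splitting can be illustrated with a minimal first-order sketch: implicit Euler for a stiff linear term, explicit Euler for the rest. This illustrates the splitting idea only, not MUSCAT's multirate Runge-Kutta implementation:

```python
def imex_euler(y0, lam, forcing, dt, nsteps):
    """First-order IMEX splitting for y' = forcing(t) - lam*y:
    the stiff linear decay is treated implicitly (unconditionally stable),
    the non-stiff forcing term explicitly."""
    y, t = y0, 0.0
    for _ in range(nsteps):
        # explicit contribution from the forcing, implicit solve for the decay
        y = (y + dt * forcing(t)) / (1.0 + dt * lam)
        t += dt
    return y

# Stiff decay (lam = 1e4) toward the equilibrium forcing/lam = 2.0.
# An explicit scheme would need dt < 2/lam = 2e-4 for stability;
# the IMEX step remains stable at dt = 0.01.
y = imex_euler(1.0, 1e4, lambda t: 2e4, dt=0.01, nsteps=100)
```

The time step is chosen by the non-stiff dynamics alone, which is precisely the efficiency argument made above for combining IMEX splitting with multirate stepping.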
 
Article
The formulation of a 3-D ice sheet-shelf model is described. The model is designed for long-term continental-scale applications and has been used mostly in paleoclimatic studies. It uses a hybrid combination of the scaled shallow-ice and shallow-shelf approximations for ice flow. Floating ice shelves and grounding-line migration are included, with parameterized ice fluxes at grounding lines that allow relatively coarse resolutions to be used. All significant components and parameterizations of the model are described in some detail. Basic results for modern Antarctica are compared with observations, and simulations over the last 5 million years are compared with previously published results. The sensitivity of ice volumes during the last deglaciation to basal sliding coefficients is discussed.
 
Article
In ice sheet modelling, the shallow-ice approximation (SIA) and second-order shallow-ice approximation (SOSIA) schemes are approaches to approximating the solution of the full Stokes equations governing ice sheet dynamics. This is done by writing the solution to the full Stokes equations as an asymptotic expansion in the aspect ratio ε, i.e. the quotient between a characteristic height and a characteristic length of the ice sheet. SIA retains the zeroth-order terms and SOSIA the zeroth-, first-, and second-order terms in the expansion. Here, we evaluate the order of accuracy of SIA and SOSIA by numerically solving a two-dimensional model problem for different values of ε, and comparing the solutions with a finite element solution to the full Stokes equations obtained from Elmer/Ice. The SIA and SOSIA solutions are also derived analytically for the model problem. For decreasing ε, the computed errors in SIA and SOSIA decrease, but not always in the expected way. Moreover, they depend critically on a parameter introduced to avoid singularities in Glen's flow law in the ice model. This is because the assumptions behind the SIA and SOSIA neglect a thick, high-viscosity boundary layer near the ice surface. The sensitivity to this parameter is explained by the analytical solutions. As a verification of the comparison technique, the SIA and SOSIA solutions for a fluid with Newtonian rheology are compared to the solutions by Elmer/Ice, with results agreeing very well with theory.
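The asymptotic expansion underlying SIA and SOSIA can be written schematically as follows; the notation is generic, with u standing for the velocity field and ε for the aspect ratio, and is not copied from the paper:

```latex
\varepsilon = \frac{[H]}{[L]}, \qquad
u = u^{(0)} + \varepsilon\, u^{(1)} + \varepsilon^{2} u^{(2)} + \mathcal{O}(\varepsilon^{3}),
```

so SIA approximates u by u^{(0)} alone, while SOSIA retains all terms up to and including ε² u^{(2)}.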
 
Article
At present, the global climate models used to project changes in climate poorly resolve mesoscale ocean features such as boundary currents and eddies. These missing features may be important for realistically projecting the marine impacts of climate change. Here we present a framework for dynamically downscaling coarse climate change projections utilising a near-global ocean model that resolves these features in the Australasian region, with coarser resolution elsewhere. A time-slice projection for a 2060s ocean was obtained by adding climate change anomalies to the initial conditions and surface fluxes of a near-global eddy-resolving ocean model. The climate change anomalies are derived from the differences between present and projected climates in a coarse global climate model. These anomalies are added to observed fields, thereby reducing the effect of bias in the climate model. The downscaling model used here is ocean-only and does not include the effects that changes in the ocean state will have on the atmosphere and air-sea fluxes. We use restoring of sea surface temperature and salinity to approximate real-ocean feedback on heat flux and to keep the salinity stable. Extra experiments with different feedback parameterisations were run to test the sensitivity of the projection. Consistent spatial differences emerge in sea surface temperature, salinity, stratification and transport between the downscaled projections and those of the climate model, and these spatial differences become established rapidly.
 
Flow diagram of implementing GEOS-Chem chemistry with KPP.
Comparison of the average number of function calls and chemical time steps per advection time step per grid cell.
Sensitivity of the O 3 column measured by TES with respect to the NO x emissions (parts per billion volume) over Asia on 1st April 2001.
Article
This paper discusses the implementation and performance of an array of gas-phase chemistry solvers for the state-of-the-science GEOS-Chem global chemical transport model. The implementation is based on the Kinetic PreProcessor (KPP). Two perl parsers automatically generate the needed interfaces between GEOS-Chem and KPP, and allow access to the chemical simulation code without any additional programming effort. This work illustrates the potential of KPP to positively impact global chemical transport modeling by providing additional functionality as follows. (1) The user can select a highly efficient numerical integration method from an array of solvers available in the KPP library. (2) KPP offers a wide variety of user options for studies that involve changing the chemical mechanism (e.g., a set of additional reactions is automatically translated into efficient code and incorporated into a modified global model). (3) This work provides access to tangent linear, continuous adjoint, and discrete adjoint chemical models, with applications to sensitivity analysis and data assimilation.
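The solvers KPP generates integrate stiff chemical kinetics implicitly. A minimal backward-Euler sketch for a hypothetical one-species mechanism with a quadratic self-reaction loss (purely illustrative: neither GEOS-Chem chemistry nor actual KPP output):

```python
import math

def be_step(c, dt, P, k):
    """Backward-Euler step for dc/dt = P - k*c**2, a stiff toy mechanism
    with constant production P and a quadratic self-reaction loss.
    The implicit equation  dt*k*c_new**2 + c_new - (c + dt*P) = 0
    is a quadratic in c_new; return its positive root."""
    a, b, q = dt * k, 1.0, -(c + dt * P)
    return (-b + math.sqrt(b * b - 4.0 * a * q)) / (2.0 * a)

def integrate(c0, dt, n, P, k):
    """Advance n backward-Euler steps from concentration c0."""
    c = c0
    for _ in range(n):
        c = be_step(c, dt, P, k)
    return c
```

Being unconditionally stable, the implicit step converges to the chemical steady state c = sqrt(P/k) even with step sizes far beyond the explicit stability limit, which is the practical reason stiff solvers are preferred for atmospheric chemistry.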
 
Comparison of statistical results at the eight EANET stations (1. Rishiri, 2. Tappi, 3. Ogasawara, 4. Sado, 5. Oki, 6. Hedo, 7. Happo, and 8. Yusuhara): correlation coefficient (R) and simulation-to-observation ratio (Sim:Obs) for gas, aerosol and precipitation of anthropogenic sulfur oxides (nss-S; a, b), reduced nitrogen (Red. N; c, d), oxidized nitrogen (Oxid. N; e, f), sodium (Na; g, h), and amounts of precipitation (H2O; i, j). Dashed lines indicate (left) R of 0.5 and (right) the factor-of-2 envelope.
Article
We conducted a regional-scale simulation over Northeast Asia for the year 2006 using an aerosol chemical transport model, with time-varying lateral and upper boundary concentrations of gaseous species predicted by a global stratospheric and tropospheric chemistry-climate model. The present one-way nested global-through-regional-scale model is named the Meteorological Research Institute-Passive-tracers Model system for atmospheric Chemistry (MRI-PM/c). We evaluated the model's performance with respect to the major anthropogenic and natural inorganic components, SO42-, NH4+, NO3-, Na+ and Ca2+, in the air, rain and snow measured at the Acid Deposition Monitoring Network in East Asia (EANET) stations. Statistical analysis showed that approximately 40-50 % and 70-80 % of the simulated concentrations and wet deposition of SO42-, NH4+, NO3- and Ca2+ are within factors of 2 and 5 of the observations, respectively. The prediction of the sea-salt-derived component Na+ was not successful at near-coastal stations (where the distance from the coast ranges from 150 to 700 m), because the model grid resolution (Δx = 60 km) is too coarse to resolve the coastline. The simulated Na+ in precipitation was significantly underestimated, by up to a factor of 30.
 
Post-delivery problem rates as reported by Pfleeger and Hatton (1997)
Defect density of projects by defect assignment method. Previously published defect densities from Table 1 are shown on the right.  
Article
A climate model is an executable theory of the climate; the model encapsulates climatological theories in software so that they can be simulated and their implications investigated. Thus, in order to trust a climate model, one must trust that the software it is built from is built correctly. Our study explores the nature of software quality in the context of climate modelling. We performed an analysis of defect reports and defect fixes in several versions of leading global climate models by collecting defect data from bug tracking systems and version control repository comments. We found that the climate models all have very low defect densities compared to well-known, similarly sized open-source projects. We discuss the implications of our findings for the assessment of climate model software trustworthiness.
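Defect density, the metric underlying such comparisons, is conventionally reported as defects per thousand source lines of code (KLOC). A minimal sketch; the numbers below are made up for illustration and are not the study's data:

```python
def defect_density(defects, sloc):
    """Defects per thousand source lines of code (defects/KLOC)."""
    if sloc <= 0:
        raise ValueError("sloc must be positive")
    return defects * 1000.0 / sloc

# Illustrative comparison at equal code size (hypothetical figures):
climate_model = defect_density(50, 400_000)    # 0.125 defects/KLOC
open_source   = defect_density(2000, 400_000)  # 5.0 defects/KLOC
```

Normalizing by code size is what makes defect counts comparable across projects of very different scales, which is why the study reports densities rather than raw bug counts.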
 
Article
A new version of the p-TOMCAT Chemical Transport Model (CTM), which includes an improved photolysis code, Fast-JX, is validated. Through offline testing we show that Fast-JX captures well the observed J(NO2) and J(O1D) values obtained at Weybourne and during a flight above the Atlantic, though with some overestimation of J(O1D) compared with the aircraft data. By comparing p-TOMCAT output of CO and ozone with measurements, we find that the inclusion of Fast-JX strongly improves the CTM's ability to capture the seasonality and levels of tracer concentrations. A probability distribution analysis demonstrates that photolysis rates and oxidant (OH, ozone) concentrations cover a broader range of values when using Fast-JX instead of the standard photolysis scheme. This is driven not only by improvements in the seasonality of cloudiness but even more by the better representation of cloud spatial variability. We use three different cloud treatments to study the radiative effect of clouds on the abundances of a range of tracers and find only modest effects on a global scale, consistent with the most relevant recent study. The new version of the validated CTM will be used for a variety of future studies examining the variability of tropospheric composition and its drivers.
 
Article
An ensemble Kalman filter (EnKF) has been coupled to the CHIMERE chemical transport model in order to assimilate ozone ground-based measurements on a regional scale. The ensemble size is limited to 20 members, which allows for future operational use of the system for air quality analysis and forecasting. Observation sites of the European ozone monitoring network have been classified using criteria on ozone temporal variability, based on previous work by Flemming et al. (2005). This leads to the choice of specific subsets of suburban, rural and remote sites for data assimilation and for evaluation of the reference run and the assimilation system. For a 10-day experiment during an ozone pollution event over Western Europe, data assimilation significantly improves the ozone fields: the RMSE is reduced by about a third with respect to the reference run, and the hourly correlation coefficient increases from 0.75 to 0.87. Several sensitivity tests focus on an a posteriori diagnostic estimation of the errors associated with the background estimate and with the spatial representativeness of the observations. A strong diurnal cycle of both these errors, with an amplitude of up to a factor of 2, is evident. Therefore, the hourly ozone background and observation error variances are corrected online in separate assimilation experiments. These adjusted variances provide a better uncertainty estimate, as verified using statistics based on the reduced centered random variable. Over the studied 10-day period, the overall EnKF performance over the evaluation stations is relatively unaffected by the different formulations of observation and simulation errors, probably owing to the large density of observation sites. From these sensitivity tests, an optimal configuration was chosen for an assimilation experiment extended over a three-month summer period, which shows performance similar to that of the 10-day experiment.
 
Skill score (against persistence of observed field shown in insert) evolution of the control and experimental simulations. A 1-step DA algorithm was implemented on day 54 (23 February 2004) to the experimental run, whereby the ice model state was updated by changing material properties based on agreement of lower dimensional features deduced from RGPS data processed to show regions of high deformation (insert). Days are Julian days of 2004.  
Comparisons of the experimental (left column) and control (right column) simulations with LKFs interpreted  
Fig. A1. Failure envelope in principal stress space for the elastic-decohesive model.  
Article
Ideally, a validation and assimilation scheme should maintain the physical principles embodied in the model and be able to evaluate and assimilate lower-dimensional features (e.g., discontinuities) contained within a bulk simulation, even when these features are not directly observed or represented by model variables. We present such a scheme and suggest its potential to resolve or alleviate some outstanding problems that stem from making and applying required, yet often non-physical, assumptions and procedures in common operational data assimilation. As a proof of concept, we use a sea-ice model with remotely sensed observations of leads in a one-step assimilation cycle. Using the new scheme in a sixteen-day simulation experiment introduces model skill (against persistence) several days earlier than in the control run, improves the overall model skill, and delays its drop-off at later stages of the simulation. The potential and requirements for extending this scheme to different applications, and to both empirical and statistical multivariate and full-cycle data assimilation schemes, are discussed.
 
Article
We developed an ecosystem/biogeochemical model system, which includes multiple phytoplankton functional groups and carbon cycle dynamics, and applied it to investigate physical-biological interactions in Icelandic waters. Satellite and in situ data were used to evaluate the model. Surface seasonal cycle amplitudes and biases of key parameters (DIC, TA, pCO2, air-sea CO2 flux, and nutrients) are significantly improved when compared to surface observations by prescribing deep water values and trends, based on available data. The seasonality of the coccolithophore and "other phytoplankton" (diatoms and dinoflagellates) blooms is in general agreement with satellite ocean color products. Nutrient supply, biomass and calcite concentrations are modulated by light and mixed layer depth seasonal cycles. Diatoms are the most abundant phytoplankton, with a large bloom in early spring and a secondary bloom in fall. The diatom bloom is followed by blooms of dinoflagellates and coccolithophores. The effect of biological changes on the seasonal variability of the surface ocean pCO2 is nearly twice the temperature effect, in agreement with previous studies. The inclusion of multiple phytoplankton functional groups in the model played a major role in the accurate representation of CO2 uptake by biology. For instance, at the peak of the bloom, the exclusion of coccolithophores causes an increase in alkalinity of up to 4 μmol kg-1 with a corresponding increase in DIC of up to 16 μmol kg-1. During the peak of the bloom in summer, the net effect of the absence of the coccolithophores bloom is an increase in pCO2 of more than 20 μatm and a reduction of atmospheric CO2 uptake of more than 6 mmol m-2 d-1. On average, the impact of coccolithophores is an increase of air-sea CO2 flux of about 27%. Considering the areal extent of the bloom from satellite images within the Irminger and Icelandic Basins, this reduction translates into an annual mean of nearly 1500 tonnes C yr-1.
 
Mean-annual zonal averaged atmospheric temperature profiles. (a) Observed (NCEP2, 1981-2005) December, January, February (DJF), (b) Observed June, July, and August (JJA), (c) GENMOM DJF, (d) GENMOM JJA.
Winter (DJF) and summer (JJA) zonally averaged eastward wind velocity. (a) Observed (NCEP2, 1981-2005) DJF, (b) Observed JJA, (c) GENMOM DJF, (d) GENMOM JJA.
Observed and modeled seasonal cycle amplitude of surface temperature and anomalies. The amplitude of the seasonal cycle is calculated as the standard deviation of the 12 climatological months.
Observed and simulated mean annual total precipitation from GENMOM and 8 AOGCMs included in IPCC AR4. All IPCC AR4 models are averaged over the last 30 years (1970-1999) of the Climate of the 20th Century experiment. All data are bi-linearly interpolated to a 5° × 5° grid.
Mean-annual zonally averaged ocean salinity profile for both observed (WOA05) and simulated (GENMOM) for the Atlantic Ocean (top), Indian and Pacific Oceans (middle), and anomalies between observed and simulated (bottom).
Article
We present a new, non-flux-corrected AOGCM, GENMOM, that combines the GENESIS version 3 atmospheric GCM (Global ENvironmental and Ecological Simulation of Interactive Systems) and MOM2 (Modular Ocean Model version 2). We evaluate GENMOM by comparison with reanalysis products (e.g., NCEP2) and eight models used in the IPCC AR4 assessment. The overall present-day climate simulated by GENMOM is on par with the models used in IPCC AR4. The model produces a global temperature bias of 0.6 °C. Atmospheric features such as the jet stream structure and the major semi-permanent sea level pressure centers are well simulated, as is the mean planetary-scale wind structure needed to produce the correct position of storm tracks. The gradients and spatial distributions of annual surface temperature compare well with both observations and the IPCC AR4 models. A warm bias of ~2 °C is simulated by MOM between 200 and 1000 m depth in the ocean. Most ocean surface currents are reproduced, except where they are not well resolved at the T31 resolution. The two main weaknesses of the simulations are the development of a split ITCZ and a weaker-than-observed overturning circulation.
 
Article
This paper describes the coupling of the Community Atmosphere Model (CAM) version 5 with a unified multi-variate probability density function (PDF) parameterization, Cloud Layers Unified by Binormals (CLUBB). CLUBB replaces the planetary boundary layer (PBL), shallow convection, and cloud macrophysics schemes in CAM5 with a higher-order turbulence closure based on an assumed PDF. Comparisons of single-column versions of CAM5 and CAM-CLUBB are provided in this paper for several boundary layer regimes. As compared to large eddy simulations (LESs), CAM-CLUBB and CAM5 simulate marine stratocumulus regimes with similar accuracy. For shallow convective regimes, CAM-CLUBB improves the representation of cloud cover and liquid water path (LWP). In addition, for shallow convection CAM-CLUBB offers better fidelity for subgrid-scale vertical velocity, which is an important input for aerosol activation. Finally, CAM-CLUBB results are more robust to changes in vertical and temporal resolution when compared to CAM5.
 
Article
The accurate modeling of cascades to unresolved scales is an important part of the tracer transport component of dynamical cores of weather and climate models. This paper investigates the ability of the advection schemes in the National Center for Atmospheric Research's Community Atmosphere Model version 5 (CAM5) to model this cascade. In order to quantify the effects of the different advection schemes in CAM5, four two-dimensional tracer transport test cases are presented. Three of the tests stretch the tracer below the scale of coarse-resolution grids to ensure a downscale cascade of tracer variance. These results are compared with a reference solution simulated on a resolution fine enough to resolve the tracer throughout the test. The fourth test has two separate flow cells and is designed so that no tracer from the western hemisphere should pass into the eastern hemisphere. This tests whether the diffusion in transport schemes, often applied as explicit hyper-diffusion terms or implicitly through monotonic limiters, introduces unphysical mixing. An intercomparison of three of the CAM5 dynamical cores is performed. The results show that the finite-volume (CAM-FV) and spectral element (CAM-SE) dynamical cores model the downscale cascade of tracer variance better than the semi-Lagrangian transport scheme of the Eulerian spectral transform core (CAM-EUL). Each scheme tested produces unphysical tracer mass in the eastern hemisphere in the separate-cells test.
 
Article
The variational formulation of Bayes' theorem allows inferring CO2 sources and sinks from atmospheric concentrations at much higher space-time resolution than ensemble or analytical approaches. However, it usually exhibits limited scalable parallelism. This limitation hinders global atmospheric inversions on decadal time scales and regional ones at kilometric spatial scales because of the computational cost of the underlying transport model, which has to be run at each iteration of the variational minimization. Here, we introduce a physical parallelization (PP) of variational atmospheric inversions. In the PP, the inversion still manages a single physically and statistically consistent window, but the transport model is run in parallel over overlapping sub-segments in order to massively reduce the wall-clock time of the inversion. For global inversions, a simplification of the transport modelling is described to connect the outputs of all segments. We demonstrate the performance of the approach on a global inversion for CO2 with a 32-year inversion window (1979-2010) and atmospheric measurements from 81 sites of the NOAA global cooperative air sampling network. In this case, the duration of the inversion is reduced seven-fold (from months to days), while the three decades are still processed consistently and with improved numerical stability.
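The segmentation at the heart of the PP can be sketched as a window partitioner producing overlapping sub-segments suitable for concurrent transport runs; the segment and overlap lengths below are illustrative, not the paper's configuration:

```python
def split_window(start, end, seg_len, overlap):
    """Split [start, end) into sub-segments of length seg_len, each
    extended by `overlap` units into the next segment, so the transport
    model can be run over all segments concurrently and their outputs
    reconnected on the overlaps."""
    if seg_len <= 0 or overlap < 0 or overlap >= seg_len:
        raise ValueError("need seg_len > 0 and 0 <= overlap < seg_len")
    segments = []
    t = start
    while t < end:
        segments.append((t, min(t + seg_len + overlap, end)))
        t += seg_len
    return segments

# A 32-year window (1979-2010) split into 4-year segments with a
# 1-year overlap, giving 8 segments that can run in parallel.
segments = split_window(1979, 2011, 4, 1)
```

The wall-clock gain then comes from running the segments concurrently, at the cost of reconciling the overlapping years so the full window stays physically and statistically consistent.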
 
Top-cited authors
Veronika Eyring
  • German Aerospace Center (DLR), Oberpfaffenhofen, Germany
Gerald A. Meehl
  • National Center for Atmospheric Research
Jean-François Lamarque
  • National Center for Atmospheric Research
Alex B. Guenther
  • University of California, Irvine
Sandrine Bony
  • French National Centre for Scientific Research