Environmental Modelling and Software

Published by Elsevier
Print ISSN: 1364-8152
This paper uses a methodological framework based on recent advances in Data Envelopment Analysis (DEA) and a sample of 71 developed, developing and under-developed countries to model and evaluate their biodiversity performance. Specifically, by using conditional DEA, bootstrapping and kernel density estimations, the efficiency levels of these countries are compared and analysed so as to show whether the countries' environmental policies have been used efficiently to enhance biodiversity. Our empirical results indicate major inefficiencies among the countries considered in terms of their biodiversity performances, revealing that population level is the dominant threat to countries' biodiversity performance, followed by GDP per capita and income inequalities. At the same time, CO2 per capita, population, income inequalities and GDP per capita have different impacts on developed, under-developed and developing countries, which implies that environmental policies must be adopted and implemented accordingly.
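As background for the approach described above, the sketch below shows a standard output-oriented DEA efficiency score computed by linear programming; it is a minimal illustration with invented data, not the conditional DEA, bootstrap or kernel density estimation procedure used in the paper, and the input/output variable names are assumptions.

    # Minimal sketch (not the paper's conditional/bootstrap DEA): output-oriented
    # CCR efficiency of one country against a sample, via linear programming.
    import numpy as np
    from scipy.optimize import linprog

    def dea_output_efficiency(X, Y, j0):
        """X: (n_countries, n_inputs), Y: (n_countries, n_outputs).
        Returns phi >= 1; larger phi means unit j0 could expand its outputs more."""
        n, m = X.shape
        _, s = Y.shape
        # Decision variables: [phi, lambda_1, ..., lambda_n]; maximise phi.
        c = np.zeros(n + 1)
        c[0] = -1.0                      # linprog minimises, so minimise -phi
        A_ub, b_ub = [], []
        for i in range(m):               # sum_j lambda_j * x_ij <= x_i of unit j0
            A_ub.append(np.concatenate(([0.0], X[:, i])))
            b_ub.append(X[j0, i])
        for r in range(s):               # phi * y_r of j0 - sum_j lambda_j * y_rj <= 0
            A_ub.append(np.concatenate(([Y[j0, r]], -Y[:, r])))
            b_ub.append(0.0)
        res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                      bounds=[(0, None)] * (n + 1), method="highs")
        return res.x[0]

    # Illustrative data only: 5 countries, 2 "inputs" (e.g. CO2 per capita,
    # GDP per capita) and 1 output (a biodiversity performance index).
    X = np.array([[9.0, 40.0], [4.0, 12.0], [6.0, 25.0], [2.0, 5.0], [7.0, 30.0]])
    Y = np.array([[0.8], [0.6], [0.7], [0.3], [0.9]])
    print(dea_output_efficiency(X, Y, j0=1))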
 
The RAINS (Regional Air Pollution Information and Simulation) model (Alcamo et al., 1990. The RAINS Model of Acidification. Science and Strategies in Europe. Kluwer, Dordrecht) was developed at IIASA as an integrated assessment tool to assist policy advisors in evaluating options for reducing acid rain. Such models help to build consistent frameworks for the analysis of abatement strategies. They combine scientific findings in the various fields relevant to strategy development (economy, technology, atmospheric and ecological sciences) with regional databases. The environmental impacts of alternative scenarios for emission reductions can then be assessed in a consistent manner ('scenario analysis'). This paper outlines the current stage in the development of an integrated assessment model for acidification and tropospheric ozone in Europe and explores the likely impacts of the currently agreed policy measures for controlling emissions on acidification and ground-level ozone.
 
Educational use of environmental software offers both opportunities and challenges to software developers. Modelers should provide leadership in the debate about the roles that environmental models should play in both education and practice. Jennings (1997) recently discussed desirable attributes for educational-use software, and provided illustrations based on the bioremediation code BIO1D. In this companion manuscript, the concept of using the MATHCAD ‘electronic book’ format is presented and illustrated using the air pollution transport package GAUSSIAN MODELS 1.1. ‘Electronic book’ technology is an interesting innovation that is beginning to find its way into educational applications. It is, however, important for the application to capitalize on the unique strengths of this implementation strategy. These strengths are very different from the strengths of traditional books. An example simulation based on particulate lead deposition is presented to illustrate the educational potential of this approach.
 
[Figure and table captions accompanying the following abstract: (i) the MODULUS DSS graphical user interface, showing the system diagram and (clockwise) the following models: Modflow (integrated), irrigation (rebuilt), river flow (integrated), soil hydrology (adapted), land-use (integrated) and plant growth (rebuilt), with the adapted climate model dialogue window highlighting the daily storm events; (ii) the MODULUS DSS sub-model information flow table (from McIntosh et al., 2000); (iii) the definition of the diverse spatial resolutions of individual models and the different timesteps required of the processes modelled; (iv) the MODULUS simulation engine, which works with ActiveX Model Components: existing models are encapsulated in 'wrappers' that make them look like ActiveX Components from the outside.]
A great deal of new knowledge and research material has been generated from research carried out under the auspices of the European Union (EU). However, only a small amount has been made available as practical policy-support tools. In this paper, we describe how EU funded research models and understanding have been integrated into an interactive decision-support system addressing physical, economic and social aspects of land degradation in the Mediterranean. We summarise the 10 constituent models that simulate hydrology, human influences, crops, natural vegetation and climatic conditions. The models operate on very different spatial and temporal scales and utilise different modelling techniques and implementation languages. Many scientific, modelling and technical issues were encountered during the transformation of ‘research’ models into ‘policy’ models. We highlight the differences between each type of model and discuss some of the ontological and technical problems in re-using research models for policy-support, including resolving differences in temporal scale and some of the software engineering aspects of model integration. The involvement of policy-makers, ‘stakeholders’ and other end-users is essential for the specification of relevant decision-making issues and the development of useful interactive support tools. We discuss the problems of identifying both the decision-makers and the issues they perceive as important, their receptivity to such tools, and their roles in the policy-making process. Finally, we note the lessons learned, the resources needed, and the types of end-users, scientists and mediators required to ensure effective communication, technical development and exploitation of spatial modelling tools for integrated environmental decision-making.
 
An increased emphasis on integrated water management at a catchment scale has led to the development of numerous modelling tools. To support efficient decision making and to better target investment in management actions, such modelling tools need to link socioeconomic information with biophysical data. However, there is still limited experience in developing catchment models that consider environmental changes and economic values in a single framework. We describe a model development process where biophysical modelling is integrated with economic information on the non-market environmental costs and benefits of catchment management changes for a study of the George catchment in northeast Tasmania, Australia. An integrated assessment approach and Bayesian network modelling techniques were used to integrate knowledge about hydrological, ecological and economic systems. Rather than coupling existing information and models, synchronous data collection and model development ensured tailored information exchange between the different components. The approach is largely transferable to the development of integrated hydro-economic models in other river catchments. Our experiences highlight the challenges in synchronizing economic and scientific modelling. These include the selection of common attributes and definition of their levels suitable for the catchment modelling and economic valuation. The lessons from the model development process are useful for future studies that aim to integrate scientific and economic knowledge.
 
The World Health Organization has activated a global preparedness plan to improve response to avian influenza outbreaks, control outbreaks, and avoid an H5N1 pandemic. The effectiveness of the plan will greatly benefit from identification of epicenters and temporal analysis of outbreaks. Accordingly, we have developed a simulation-based methodology to analyze the spread of H5N1 using stochastic interactions between waterfowl, poultry, and humans. We have incorporated our methodology into a user friendly, extensible software environment called SEARUMS. SEARUMS is an acronym for Studying the Epidemiology of Avian Influenza Rapidly Using Modeling and Simulation. It enables rapid scenario analysis to identify epicenters and timelines of H5N1 outbreaks using existing statistical data. The case studies conducted using SEARUMS have yielded results that coincide with several past outbreaks and provide non-intuitive inferences about global spread of H5N1. This article presents the methodology used for modeling the global epidemiology of avian influenza and discusses its impacts on human and poultry morbidity and mortality. The results obtained from the various case studies and scenario analyses conducted using SEARUMS along with verification experiments are also discussed. The experiments illustrate that SEARUMS has considerable potential to empower researchers, national organizations, and medical response teams with timely knowledge to combat the disease, mitigate its adverse effects, and avert a pandemic.
 
Life cycle assessment (LCA) is known to entail multiple objective decision-making in the analysis of tradeoffs between different environmental impacts. The work of Azapagic and Clift in the late 1990s illustrates the use of multiple objective linear programming (MOLP) in the context of LCA. However, it will be noted that their approach yields a range of Pareto optimal alternatives from which the decision-maker must ultimately select the final solution. An alternative approach making use of Zimmermann's symmetric fuzzy linear programming method is proposed and illustrated with a simple case study. The procedure effectively yields a single, optimal compromise solution.
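The following is a minimal sketch of Zimmermann's symmetric (max-min) fuzzy linear programming formulation referred to above, written with scipy and invented coefficients rather than the paper's case-study data: each objective is given a linear membership function derived from a payoff table, and a single compromise solution maximises the smallest membership value.

    # Minimal sketch (illustrative data): Zimmermann's symmetric fuzzy LP turns two
    # conflicting linear objectives (to be minimised) into a single max-min problem.
    import numpy as np
    from scipy.optimize import linprog

    # Feasible region: x1 + 2*x2 <= 14, 3*x1 + x2 <= 18, and a production demand
    # x1 + x2 >= 5 written as -x1 - x2 <= -5; x >= 0.
    A = np.array([[1.0, 2.0], [3.0, 1.0], [-1.0, -1.0]])
    b = np.array([14.0, 18.0, -5.0])
    C = np.array([[2.0, 1.0],        # environmental impact 1 (to be minimised)
                  [1.0, 3.0]])       # environmental impact 2 (to be minimised)

    # Payoff table: best and worst value of each objective over the single-objective optima.
    opts = [linprog(C[k], A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs").x
            for k in range(2)]
    z_min = np.array([C[k] @ opts[k] for k in range(2)])
    z_max = np.array([max(C[k] @ x for x in opts) for k in range(2)])

    # Max-min problem: variables [x1, x2, lam]; maximise lam subject to
    #   C_k x + lam * (z_max_k - z_min_k) <= z_max_k   (membership of objective k >= lam)
    c = np.array([0.0, 0.0, -1.0])
    A_ub = np.vstack([np.hstack([C, (z_max - z_min).reshape(-1, 1)]),
                      np.hstack([A, np.zeros((A.shape[0], 1))])])
    b_ub = np.concatenate([z_max, b])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None), (0, None), (0.0, 1.0)], method="highs")
    print("compromise x =", res.x[:2], "satisfaction lambda =", res.x[2])

With these illustrative numbers the procedure returns the single compromise solution x = (2.5, 2.5) with an overall satisfaction level of 0.5, rather than a set of Pareto optimal alternatives.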
 
The predictions of nine mathematical models of radiocesium (137Cs) cycling in forest ecosystems were evaluated through a model–data comparison on the basis of a scenario consisting of an acute dry deposition of 137Cs over a pine forest located in the Ukrainian territory affected by the Chernobyl accident (Zhitomir region). The forest compartments included in the comparisons were: organic and mineral soil layers, bilberries, fungi, roe deer, and different parts of the tree: bark, needles, shoots, and wood. The model predictions and the data agreed within a factor of 1.1 to 65 depending on the model and forest compartment. Statistically significant differences in the degree of agreement between model predictions and experimental data could be demonstrated for some models and forest compartments. The observed differences suggest that efforts for improving model predictions of radiocesium transfer to tree wood, needles, and shoots should be directed to a better simulation of the processes of root uptake and translocation of the radionuclide within the tree.
 
To assist the Société Civile des Terres du Larzac (SCTL) in its effort to develop alternative forest management plans, a group of researchers and extension officers proposed applying a companion modelling approach. The objective was to support forest owners and livestock farmers while they worked out a solution to their forest management problems. The approach was based on the co-construction and use of an agent-based model providing a shared representation of the current management of farms and providing multiple viewpoints on alternative forest management scenarios. The validation of the model allowed the development of a shared representation of the territory. The use of the model as an exploratory tool empowered local stakeholders to elaborate alternative management strategies for their renewable resources (forage, timber, firewood). It also expanded the discussion on forest management to a multi-scale level where managers progressively assumed the role of land administrators. When playing this role, they compared their forest policy orientations and forest harvesting decisions with farmers’ individual situations and interests. Participants became aware of how spatial and temporal scales of management overlap and they progressively worked out a compromise between the livestock breeding concerns of farmers and the forest dynamics concerns of SCTL managers.
 
As an integral aspect of precision agriculture, scientific fertilization, which aims to produce high crop yields with little fertilizer and minimal adverse impact on the environment, cannot be realized without fertilizing models. Fertilizing models applied on a large scale carry more practical value in agricultural activities. Using the concept of the minimum unit, which has been successfully used in other areas of the geo-sciences, this paper adopts an embedding method to integrate fertilizing models with a self-developed Geographical Information System named SMIS. The main functions and data organization of SMIS are presented. With the integrated system, model initialization can be easily performed through a graphical user interface, and model calculation and results presentation are dynamically linked with graphics and their corresponding attributes, which also makes the system prompt and easy to operate. The method proved successful and useful when applied to Xuxinzhuang town, an agriculture-dominated town in the northern part of Tongzhou district, Beijing, China. It therefore holds great potential for extension to other agricultural areas, so as to serve more local farmers in the scientific fertilization of crops.
 
The IHACRES rainfall-runoff model uses a non-linear loss module to calculate the effective rainfall and a linear routing module to convert effective rainfall into streamflow. This paper describes a new version of the non-linear module, developed to aid in estimating flows in ungauged basins and for applications where time series estimates of actual evapotranspiration are required. The new module has only three parameters and exhibits significantly less correlation between them.
 
Many studies have reported a relationship between urban air pollution levels and respiratory health problems. However, there are notable variations in results, depending on modeling approach, covariate selection, period of analysis, etc. To help clarify these factors we compare and apply two estimation approaches, model selection and Bayesian model averaging, to a new database on 11 Canadian cities spanning 1974–1994. During this interval pollution levels were typically much higher than at present. Our data allow us to compare monthly hospital admission rates for all lung diagnostic categories to ambient levels of five common air contaminants, while controlling for income, smoking and meteorological covariates. In the most general specifications we find the here-observed health effects of air pollution are very small and insignificant, with signs that are typically opposite to conventional expectations. Smoking effects are robust across specifications. Considering the fact that we are examining an interval of comparatively high air pollution levels, and the contrast between our results and those that have been published previously, we conclude that extra caution should be applied to results estimated on short and/or recent data panels, and to those that do not control for model uncertainty and socioeconomic covariates.
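As a hedged illustration of the Bayesian model averaging idea mentioned above (not the paper's implementation or data), the sketch below approximates posterior model weights from BIC and averages a pollutant coefficient across candidate covariate sets; all variable names and numbers are invented.

    # Minimal sketch: approximate Bayesian model averaging over candidate regression
    # models via BIC weights, w_k proportional to exp(-0.5 * delta BIC_k).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 240
    pm = rng.gamma(3.0, 10.0, n)            # an air contaminant level (illustrative)
    smoking = rng.normal(25, 5, n)          # smoking covariate
    temp = rng.normal(5, 10, n)             # meteorological covariate
    y = 2.0 + 0.08 * smoking + 0.02 * temp + rng.normal(0, 1, n)   # admission rate

    def fit_ols(X, y):
        """Least-squares fit with intercept; returns coefficients and Gaussian BIC."""
        A = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        rss = np.sum((y - A @ beta) ** 2)
        bic = len(y) * np.log(rss / len(y)) + A.shape[1] * np.log(len(y))
        return beta, bic

    candidates = {                           # alternative covariate sets
        "pm_only": np.column_stack([pm]),
        "pm_smoking": np.column_stack([pm, smoking]),
        "pm_smoking_temp": np.column_stack([pm, smoking, temp]),
    }
    fits = {name: fit_ols(X, y) for name, X in candidates.items()}
    bic = np.array([b for _, b in fits.values()])
    w = np.exp(-0.5 * (bic - bic.min()))
    w /= w.sum()                             # approximate posterior model probabilities

    # Model-averaged coefficient on the pollutant (index 1: first column after the intercept).
    beta_pm = sum(wi * beta[1] for wi, (beta, _) in zip(w, fits.values()))
    print(dict(zip(fits, np.round(w, 3))), round(float(beta_pm), 4))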
 
Numerical solution of the advection–dispersion equation, used to evaluate transport of solutes in porous media, requires discretization schemes for space and time stepping. We examine the use of the quadratic upstream interpolation schemes QUICK and QUICKEST and the total variation diminishing (TVD) scheme ULTIMATE, and compare these with the UPSTREAM and CENTRAL schemes in the HYDRUS-1D model. Results for purely convective transport show that quadratic schemes can reduce the oscillations compared to the CENTRAL scheme and the numerical dispersion compared to the UPSTREAM scheme. When dispersion is introduced all schemes give similar results for Peclet number Pe < 2. All schemes show similar behavior for non-uniform grids that become finer in the direction of flow. When grids become coarser in the direction of flow, some schemes produce considerable oscillations, with all schemes showing significant clipping of the peak, but quadratic schemes extending the range of stability tenfold to Pe < 20. Similar results were also obtained for transport of a non-linearly retarded solute (except the QUICK scheme) and for reactive transport (except the UPSTREAM scheme). Analysis of transient solute transport shows that all schemes produce similar results for the position of the infiltration front for Pe = 2. When Pe = 10, the CENTRAL scheme produced significant oscillations near the infiltration front, compared to only minor oscillations for QUICKEST and no oscillations for the ULTIMATE scheme. These comparisons show that quadratic schemes have promise for extending the range of stability in numerical solutions of solute transport in porous media and allowing coarser grids. Highlights: ► Quadratic upstream interpolation and the TVD scheme ULTIMATE were implemented in HYDRUS-1D. ► Quadratic interpolation and HYDRUS-1D results are nearly identical for Peclet < 2. ► Quadratic schemes extend the range of stability tenfold to Peclet < 20. ► Quadratic schemes also improve results for purely convective transport.
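For reference, the grid Peclet number that defines the stability ranges quoted above, and the QUICK quadratic upstream face interpolation, take the standard forms (textbook expressions, not reproduced from the paper):

\[
\mathrm{Pe} = \frac{v\,\Delta x}{D},
\qquad
\phi_f = \tfrac{1}{8}\left(6\,\phi_C + 3\,\phi_D - \phi_U\right)
       = \tfrac{1}{2}\left(\phi_C + \phi_D\right) - \tfrac{1}{8}\left(\phi_U - 2\,\phi_C + \phi_D\right),
\]

where v is the pore-water velocity, D the dispersion coefficient, Δx the grid spacing, and φ_U, φ_C, φ_D the upstream, central and downstream nodal values relative to the cell face f.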
 
Understanding the interaction between soil, vegetation and atmosphere processes and groundwater dynamics is of paramount importance in water resources planning and management in many practical applications. Hydrological models of complex water resource systems need to include a number of components and should therefore seek a balance between capturing all relevant processes and maintaining data requirement and computing time at an affordable level. Water transfer through the unsaturated zone is a key hydrological process connecting atmosphere, surface water and groundwater. The paper focuses on the analysis of the modelling approaches that are generally used to describe vertical water transfer through unsaturated soil in hydrological models of water resource systems: a physically based approach, using numerical solutions of Richards' equation, and two conceptual models, based on reservoir cascade schemes, are compared. The analysis focuses on the soil water content in the top soil (first meter) and on the outflow from the profile (i.e. recharge to the aquifer). Results show that the water contents simulated by the mechanistic and conceptual models are in good agreement, unless the capillary fringe reaches the top soil (i.e. groundwater table very close to the soil surface). The ability of conceptual models to capture the daily recharge dynamics is generally rather poor, especially when fine textured soils and thick profiles are considered; a better agreement is found when recharge is accumulated over longer time periods (e.g. months). Improvements can be achieved by allowing the number of reservoirs in cascade to vary with changing profile depth, although scientifically sound rules for fixing the number of reservoirs need to be established.
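For orientation, the two classes of unsaturated-zone descriptions compared above can be summarised in their standard textbook forms (not the paper's exact parameterisations):

\[
\frac{\partial \theta}{\partial t}
  = \frac{\partial}{\partial z}\!\left[ K(h)\left( \frac{\partial h}{\partial z} + 1 \right) \right] - S(z,t)
\qquad \text{(Richards' equation, 1D vertical)}
\]
\[
\frac{dS_i}{dt} = q_{i-1} - q_i, \qquad q_i = \frac{S_i}{k_i}, \quad i = 1, \dots, N
\qquad \text{(linear reservoir cascade)}
\]

where θ is the volumetric water content, h the pressure head, K(h) the unsaturated hydraulic conductivity, S a sink term (e.g. root uptake), S_i the storage of the i-th reservoir with residence time k_i, and q_N the outflow from the profile, i.e. the recharge to the aquifer.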
 
We introduce and codify the mathematics of Ecological Network Analysis (ENA) in general and Network Environ Analysis (NEA) in particular used by the web-based simulation software EcoNet® 2.0. Where ecosystem complexity continues to drive an increasingly vast environmental modeling effort, ENA and NEA represent maturation, in part, of the compartment modeling approach. Compartment modeling mathematically represents compartment storages with both internal-connecting and external-environmental flows as ordinary differential equations. ENA and NEA expand these mathematics into complex systems analysis and corresponding network theory. EcoNet was developed to facilitate the mathematical modeling, to enhance the overall presentation, and to improve the subsequent long-term progress of ENA and NEA systems analysis. Thus, as a continuing enhancement to the overall understanding, but more importantly, to the future growth of environmental modeling associated with ENA and NEA, we derive and summarize the canonical mathematics of ENA, NEA, and EcoNet, which facilitates their future use.
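A compact statement of the core flow analysis underlying ENA and NEA, in its standard form (see the paper for the full canonical development), is:

\[
g_{ij} = \frac{f_{ij}}{T_j},
\qquad
N = (I - G)^{-1} = I + G + G^{2} + G^{3} + \cdots
\]

where f_ij is the flow from compartment j to compartment i, T_j the total throughflow of compartment j, G = (g_ij) the dimensionless direct-flow matrix, and N the integral flow matrix whose power series separates boundary (I), direct (G) and indirect (G² + G³ + ...) contributions.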
 
Maintaining and restoring landscape connectivity is currently a central concern in ecology and biodiversity conservation, and there is an increasing demand for user-driven tools for integrating connectivity in landscape planning. Here we describe the new Conefor Sensinode 2.2 (CS22) software, which quantifies the importance of habitat patches for maintaining or improving functional landscape connectivity and is conceived as a tool for decision-making support in landscape planning and habitat conservation. CS22 is based on graph structures, which have been suggested to possess the greatest benefit to effort ratio for conservation problems regarding landscape connectivity. CS22 includes new connectivity metrics based on the habitat availability concept, which considers a patch itself as a space where connectivity occurs, integrating in a single measure the connected habitat area existing within the patches with the area made available by the connections between different habitat patches. These new metrics have been shown to present improved properties compared to other existing metrics and are particularly suited to the identification of critical landscape elements for connectivity. CS22 is distributed together with GIS extensions that allow for directly generating the required input files from a GIS layer. CS22 and related documentation can be freely downloaded from the World Wide Web.
 
Instruction in traditional unit operations courses in environmental engineering has been performed by stepping through the design and analysis of individual units. Since integration of these unit operations into treatment trains or an entire treatment plant is frequently computationally intensive, the use of professional quality software packages to perform this integration is highly desirable. Based on this need, the use of SuperPro Designer® v.2.7 for instruction is presented. The characteristics of the program are discussed in the context of the critical attributes necessary when considering professional quality software packages for educational purposes. Instructional modules developed based on the software package are described.
 
Emission preparation is a critical stage in air quality modelling. The generation of input information from regulatory emission inventories compatible with the requirements of eulerian chemical-transport models needs a computationally efficient, reliable, and flexible emissions data processing system. The Sparse Matrix Operator Kernel System (SMOKE) was developed in the United States and redesigned by the US Environmental Protection Agency to support air quality modelling activities in the framework of the USEPA Models-3 system. In this contribution the adaptation of the SMOKE system to European conditions is discussed. The system has been applied to the Iberian Peninsula and the Madrid Region (Spain) to process Spain's official emission inventories and projections for the years 2000 and 2010. This software tool has been found useful for generating emission input information for the CMAQ model as well as providing a valuable platform for emission scenario analysis. The model has proved flexible enough to accommodate and process emissions based on the EMEP/CORINAIR methodology, although the lack of meaningful ancillary information may hinder its application outside the US. This study has established a practical methodology for the adaptation of the SMOKE system to Spain and potentially to any other European country.
 
The long wave elevation of a tsunami is closely related to the characteristics of its triggering source. Tide gauges and sea level anomaly altimeters can both be used to measure the amplitude of the wave. Numerical modeling can also be used to approximate tsunami propagation based upon a certain parameterisation of the source describing the mode of generation. Once a robust numerical model has been developed, the simulated wave sequence and the various hydrodynamic observations can be compared to develop a better characterization of the tsunami source. This procedure may be used to refine the description of a tsunami source derived from seismological instrumentation. Numerical sensitivity tests of a tsunami propagation model were used to address whether a simple tide gauge record may help in better identifying the nature of the rupture process. Specifically, four different pure thrust faultings – reverse/normal faults and two opposite dipping orientations – of the 26 December 2004 Indian Ocean tsunami were simulated. The model chosen for the investigation used the representation of Okada, Y. [1985. Surface deformation due to shear and tensile faults in a half-space. Bull. Seism. Soc. Am. 75 (4), 1135–1154] to describe the rupture process and the Funwave numerical model to simulate tsunami propagation. The author finds that it is possible to identify the best-fitted scenario among the four, depending on the nature of the leading wave (crest or trough) and the ratio between the first crest amplitudes. It depends, however, on the dipping amount, as the trough/crest dipole reverses at around 43°. Consequently, the depth of the earthquake (location of the epicenter within the interplate slab) also controls the nature of the leading wave (crest or depression).
 
[Figure captions accompanying the following abstract: (i) USIAM-defined zones, in increasing tone of grey: light grey = Central London, slightly darker grey = Inner London, dark grey = Outer London, black line = M25 motorway, white = not in London; (ii) USIAM projected base concentration map for 2005; (iii) USIAM model structure; (iv) UERSD part 2, combined strategies emission reductions and costs.]
Tightening of air quality standards for populated urban areas has led to increasing attention to assessment of air quality management areas (AQMAs) where exceedance occurs, and development of control strategies to eliminate such exceedance. Software tools that bring together data on pollutant sources, their respective contributions to atmospheric concentrations and human exposure, together with information on potential technological and other measures that may be used to reduce concentrations and their economic costs, can be used to identify cost-effective strategies for improving air quality, and hence aid policy development. The Urban Scale Integrated Assessment Model (USIAM) has been developed in this context, illustrated in this paper by application to traffic emissions which are a major contributor to exceedance of air quality objectives for fine particulate matter, PM10, in London. In such a multidisciplinary approach the aim has been to provide a tool for rapid assessment of a wide range of scenarios to identify those that are most cost effective, maintaining a balance between the level of sophistication in the model and the uncertainties and assumptions involved.
 
Recent research on crop-water relations has increasingly been directed towards the application of locally acquired knowledge to answering the questions raised at larger scales. However, the application of local results to larger scales is often questionable. This paper presents a GIS-based tool, the GEPIC model, to estimate crop water productivity (CWP) on the land surface with a spatial resolution of 30 arc-min. The GEPIC model can estimate CWP on a large scale by considering the local variations in climate, soil and management conditions. The results show a non-linear relationship between virtual water content (or the inverse of CWP) and crop yield. The simulated CWP values are generally more sensitive to three parameters, i.e. the potential harvest index for a crop under ideal growing conditions (HI), the biomass-energy ratio indicating the energy conversion to biomass (WA), and the potential heat unit accumulation from emergence to maturity (PHU), than to other parameters. The GEPIC model is a useful tool to study crop-water relations on large scales with high spatial resolution; hence, it can be used to support large-scale decision making in water management and crop production.
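The two quantities at the centre of the abstract are related simply; in standard notation (not the paper's exact symbols):

\[
\mathrm{CWP} = \frac{Y}{\mathrm{ET}_a},
\qquad
\mathrm{VWC} = \frac{\mathrm{ET}_a}{Y} = \frac{1}{\mathrm{CWP}}
\]

where Y is the crop yield (e.g. kg ha⁻¹) and ET_a the actual seasonal evapotranspiration (e.g. m³ ha⁻¹), so the virtual water content VWC is the inverse of the crop water productivity, which is why the reported non-linear relationship between VWC and yield mirrors that between CWP and yield.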
 
[…] urban atmosphere. This neural model is based on the MultiLayer Perceptron (MLP) structure. The inputs of the statistical network are model output statistics of the weather predictions from the French National Weather Service. These predicted meteorological parameters are readily available through an air quality network. The lead time used in this forecasting is (t + 24) h. Efforts are devoted to a regularisation method based on a Bayesian Information Criterion-like criterion and to the determination of a confidence interval for the forecast. We offer a statistical validation between various statistical models and a deterministic chemistry-transport model. In this experiment, with the final neural network, the ozone peaks are fairly well predicted (in terms of global fit), with an Agreement Index = 92%, a Mean Absolute Error and Root Mean Square Error of 15 μg m⁻³, and a Mean Bias Error of 5 μg m⁻³, where the European threshold for hourly ozone is 180 μg m⁻³. To improve the performance of this exceedance forecasting, instead of the previous model, we use a neural classifier with a sigmoid function in the output layer. The output of the network lies in [0, 1] and can be interpreted as the probability of exceedance of the threshold. This model is compared to a classical logistic regression. With this neural classifier, the Success Index of forecasting is 78%, whereas it is 65% to 72% with the classical MLPs. During the validation phase, in the summer of 2003, six ozone peaks above the threshold were detected; there were actually seven. Finally, the model, called NEUROZONE, is now used in real time. New data will be introduced into the training data each year, at the end of September; the network will then be re-trained and new regression parameters estimated. In this way, one of the main difficulties in the training phase - namely the low frequency of ozone peaks above the threshold in this region - will be addressed.
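To make the comparison concrete, the sketch below contrasts an MLP whose output can be read as an exceedance probability with a classical logistic regression, using scikit-learn and synthetic data; it is only an illustration of the two model classes, not NEUROZONE itself, and all predictor names and coefficients are invented.

    # Minimal sketch (synthetic data, not NEUROZONE): an MLP whose output is read as
    # the probability of exceeding the 180 microgram/m3 hourly ozone threshold,
    # compared with a classical logistic regression.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)
    n = 2000
    X = np.column_stack([rng.normal(25, 6, n),     # forecast max temperature
                         rng.normal(3, 2, n),      # forecast wind speed
                         rng.normal(60, 20, n)])   # previous-day ozone peak
    logit = -14 + 0.35 * X[:, 0] - 0.6 * X[:, 1] + 0.03 * X[:, 2]
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)   # 1 = exceedance

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    mlp = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                        random_state=0).fit(X_tr, y_tr)
    glm = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    # Predicted exceedance probabilities lie in [0, 1] for both models.
    p_mlp = mlp.predict_proba(X_te)[:, 1]
    p_glm = glm.predict_proba(X_te)[:, 1]
    print(mlp.score(X_te, y_te), glm.score(X_te, y_te))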
 
This study is an attempt to verify the presence of non-linear dynamics in the ozone time series by testing a “dynamic” model, evaluated versus a “static” one, in the context of predicting hourly ozone concentrations, one day ahead. The “dynamic” model uses a recursive structure involving a cascade of 24 multilayer perceptrons (MLP) arranged so that each MLP feeds the next one. The “static” model is a classical single MLP with 24 outputs. For both models, the inputs consist of ozone and of exogenous variables: past 24-h values of meteorological parameters and of NO2; concerning the ozone inputs, the “static” model uses only the 24 past measurements, while the “dynamic” one also uses the previously forecast ozone concentrations, as soon as they are predicted by the model. The outputs are, for both configurations, ozone concentrations for a 24-h horizon. The performance of the two models was evaluated for an urban and a rural site in the greater Paris area. Globally, the results indicate a rather good applicability of these models for short-term prediction of ozone. We notice that the results of the recursive model were comparable with those obtained via the “static” one; thus, we can conclude that there is no evidence of non-linear dynamics in the ozone time series under study.
 
For many applications two-dimensional hydraulic models are time intensive to run due to their computational requirements, which can adversely affect the progress of both research and industry modelling projects. Computational time can be reduced by running a model in parallel over multiple cores. However, there are many parallelisation methods and these differ in terms of difficulty of implementation, suitability for particular codes and parallel efficiency. This study compares three parallelisation methods based on OpenMP, message passing and specialised accelerator cards. The parallel implementations of the codes were required to produce near identical results to a serial version for two urban inundation test cases. OpenMP was considered the easiest method to develop and produced similar speedups (of ∼3.9×) to the message passing code on up to four cores for a fully wet domain. The message passing code was more efficient than OpenMP, and remained over 90% efficient on up to 50 cores for a completely wet domain. All parallel codes were less efficient for a partially wet domain test case. The accelerator card code was faster and more power efficient than the standard code on a single core for a fully wet domain, but was subject to a longer development time (2 months compared to <2 weeks for the other methods).
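For reference, the parallel efficiency quoted above is the usual ratio of speedup to core count:

\[
S(p) = \frac{T_1}{T_p}, \qquad E(p) = \frac{S(p)}{p} = \frac{T_1}{p\,T_p}
\]

so a speedup of about 3.9× on four cores corresponds to an efficiency of roughly 97%, and 90% efficiency on 50 cores corresponds to a speedup of about 45×.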
 
Educational use of environmental software offers opportunities and challenges to both software developers and educators. Desirable attributes for educational environmental software have recently been discussed in Jennings (1997). Jennings (1997) and Jennings and Kuhlman (1997) went on to illustrate many of these attributes using the groundwater bioremediation code BIO1D and the air pollution transport code GAUSSIAN MODELS 1.1. This paper continues the exploration of educational applications of environmental software by examining both a public domain and a proprietary code built around EPA's Hydrologic Evaluation of Landfill Performance (HELP) analysis package. The first, HELP 3.04, is a revised version of HELP intended to be more user-interactive than its predecessors. The second, HMfW v2.05, makes use of proprietary user interfaces to provide superior problem definition and result visualization in a pseudo-CAD working environment. Examples generated with both codes are presented to explore the nature of modern user interfaces, and to examine how desirable features might differ between student and professional users. Example simulations are also presented to illustrate how these codes can be used in ways that transcend their original intent to teach about hydraulic barrier design and performance.
 
Recently, it has been recognized that large lakes exert considerable influence on regional climate systems and vice versa, and that the Canadian Regional Climate Model (CRCM), which does not currently have a lake component, requires the development of a coupled lake sub-model. Prior to a full effort for this model development, however, studies are needed to select and assess the suitability of a lake hydrodynamic model in terms of its capability to couple with the CRCM. This paper evaluates the performance of the 3-dimensional hydrodynamic model ELCOM on Great Slave Lake, one of Canada's largest lakes in the northern climatic system. Model simulations showed dominant circulation patterns that can create relatively large spatial and temporal gradients in temperature. Simulated temperatures compared well with cross-lake temperature observations both at the surface and vertically. Sensitivity analysis was applied to determine the critical meteorological variables affecting simulations of temperature and surface heat fluxes. For example, 10% increases in air temperature and solar radiation were found to result in 3.1% and 8.3% increases in water surface temperature, respectively, and an 8.5% increase in latent heat flux. Knowledge of the model sensitivity is crucial for future research in which the hydrodynamic model coupled with the atmosphere will be forced from the CRCM output.
 
Data-driven constituent transport models (CTM), which take surface current measurements from High Frequency (HF) Radar as input, can be applied within the context of real-time environmental monitoring, oceanographic assessment, response to episodic events, as well as search and rescue in surface waters. This paper discusses a numerical method that allows for the evaluation of diffusion coefficients in anisotropic flow fields from surface current measurements using HF Radar. The numerical scheme developed was incorporated into a data-driven CTM and, through model error analyses, the effects of using spatially variable transport coefficients on model predictions were examined. The error analyses were performed on the model by varying the cell Reynolds number, Re = f(u,K,Δx), between 0.15 and 100, where u is the velocity vector within the flow field, K is a diffusivity tensor and Δx is the computational grid cell size. Two instantaneous releases of conservative material were then modeled, the model being initialized at two different locations within the domain. From the two simulation runs, marked differences in the predicted spatial extent of the conservative material resulting from the spatially varying diffusivity values within the study area were observed. Model predictions in terms of variance or size estimates of a diffusing patch were found to be more affected by the use of inaccurate diffusivity estimates, and less affected by the use of inaccurate current measurements. The largest errors occurred at Re > 2 associated with changing diffusivity values, going up to as much as a 25-fold difference in variance estimates at Re = 100. Very little effect on variance estimates due to varying velocity values was observed even at Re > 2. This model was applied within the framework of constituent tracking to Corpus Christi Bay in the Texas Gulf of Mexico region.
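The cell Reynolds number referred to above is the usual grid-based ratio of advective to diffusive transport; in the scalar form of the standard definition,

\[
\mathrm{Re} = \frac{\lvert u \rvert\,\Delta x}{K},
\]

so Re > 2 marks the regime in which advection dominates diffusion at the grid scale, which is where the largest sensitivities to inaccurate diffusivity estimates were reported.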
 
This paper illustrates the application of both local and global sensitivity analysis techniques to an estimation of the uncertainty in the output of a 3D reaction–diffusion ecological model; the model describes the seasonal dynamics of dissolved Nitrogen and Phosphorous, and those of the phytoplanktonic and zooplanktonic communities in the lagoon of Venice. Two sources of uncertainty were taken into account and compared: (1) uncertainty concerning the parameters of the governing equation; (2) uncertainty concerning the forcing functions. The mean annual concentration of Dissolved Inorganic Nitrogen (DIN) was regarded as the model output, as it represents the largest fraction of the Total Dissolved Nitrogen, TDN, for which the current Italian legislation sets a quality target in the lagoon of Venice. A local sensitivity analysis was initially used, so as to rank the parameters and provide an initial estimation of the uncertainty, which is a result of an imperfect knowledge of the dynamics of the system. This uncertainty was compared with that induced by an imperfect knowledge of the loads of Nitrogen, which represent the main forcing functions. On the basis of the results of the local analysis, the most important parameters and loads were then taken as the sources of uncertainty, in an attempt to assess their relative contributions. The global uncertainty and sensitivity analyses were carried out by means of a sampling-based Monte Carlo method. The results of the subsequent input–output regression analysis suggest that the variance in the model output could be partitioned among the sources of uncertainty, in accordance with a linear model. Based on this model, 79% of the variance in the mean annual concentration of DIN was accounted for by the uncertainty in the parameters which specify the dynamics of the phytoplankton and zooplankton, and only 5% by the uncertainties in the three main Nitrogen sources.
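As a hedged illustration of the sampling-based procedure described above (a toy stand-in, not the 3D reaction–diffusion model of the Venice lagoon), the sketch below samples uncertain inputs, runs a surrogate model and partitions the output variance with standardised regression coefficients; all parameter names and values are invented.

    # Minimal sketch (toy model): sampling-based global sensitivity analysis with
    # variance partitioning by linear (standardised) regression.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 5000
    # Uncertain inputs: two biological parameters and one nitrogen load (illustrative).
    growth = rng.normal(1.0, 0.2, n)
    grazing = rng.normal(0.5, 0.1, n)
    load = rng.normal(30.0, 3.0, n)

    def model(g, z, l):
        """Toy stand-in for the ecological model: mean annual DIN concentration."""
        return 5.0 + 0.8 * l - 6.0 * g + 4.0 * z + rng.normal(0, 0.5, n)

    din = model(growth, grazing, load)

    X = np.column_stack([growth, grazing, load])
    Xs = (X - X.mean(0)) / X.std(0)              # standardise inputs
    ys = (din - din.mean()) / din.std()          # standardise output
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)   # standardised regression coefficients

    r2 = 1 - np.sum((ys - Xs @ beta) ** 2) / np.sum(ys ** 2)
    # Approximate variance shares (valid when inputs are nearly uncorrelated).
    share = beta ** 2 / np.sum(beta ** 2) * r2
    print(dict(zip(["growth", "grazing", "load"], np.round(share, 3))), "R2 =", round(r2, 3))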
 
The problem of analyzing the chemical state of the troposphere and the associated emission scenario on the basis of observations and model simulations is considered. The method applied is the four-dimensional variational data assimilation method (4D-var) which iteratively minimizes the misfit between modeled concentration levels and measurements. The overall model–observation discrepancy is measured in terms of a cost function, of which the gradient is calculated for subsequent minimization by adjoint modeling. The model applied is the University of Cologne EURopean Air pollution Dispersion model (EURAD) simulating the meso-alpha scale. The forward and adjoint components are Bott's horizontal and vertical advection scheme (Bott, Mon. Wea. Rev. 117 (1989), 1006), implicit vertical diffusion, and the RADM2 gas phase chemistry. The basic feasibility of the adjoint modeling technique for emission rate assessment is demonstrated by identical twin experiments. The objective of the paper is to demonstrate the skill and limits of the 4D-var technique to analyze the emission rates of non-observed precursor constituents of ozone, when only ozone observations are available. It is shown that the space–time variational approach is able to analyze emission rates of NO directly. For volatile organic compounds (VOC), regularization techniques must be introduced, however.
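The cost function minimised in 4D-var takes the standard form (background plus observation terms; in the application above the emission rates enter through the control vector):

\[
J(x) = \tfrac{1}{2}\,(x - x_b)^{\mathsf T} B^{-1} (x - x_b)
     + \tfrac{1}{2} \sum_{k} \big( H_k M_k(x) - y_k \big)^{\mathsf T} R_k^{-1} \big( H_k M_k(x) - y_k \big)
\]

where x is the control vector (initial concentrations and/or emission rates), x_b its background estimate, B and R_k the background and observation error covariances, M_k the model integrated to observation time k, H_k the observation operator and y_k the observations; the adjoint model supplies the gradient of J used in the iterative minimisation.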
 
[Figure and table captions accompanying the following abstract: (i) Fortran 90 code defining a skeletal domain data structure; (ii) computing time as a percentage of total run time, using one domain; (iii) example of recursive allocation of nested domain data structures.]
In this work we describe the development of a parallel implementation of the Princeton Ocean Model (POM2k) with a nested-domain feature. Parallelization has been handled using the Run-Time System Library (RSL) and the Fortran Loop Index Converter (FLIC), avoiding direct use of the MPI library. Modularity and flexibility have been added through advanced Fortran 90 features, such as modules, dynamic memory allocation, pointers and recursion. The “seamount problem”, in both nested and non-nested configurations, is used as a test bed for demonstrating the scalability of the results.
 
[Figure caption accompanying the following abstract: overview of the modelling framework.]
New Zealand dairy farmers face a tradeoff between profit maximisation and environmental performance. The integrated simulation model presented here enables assessment of the economic and environmental impact of dairy farming with a focus on nitrogen pollution at the catchment level. Our approach extends the value of the DairyNZ Whole Farm Model (Beukes et al., 2005) as an environmental policy tool by building and integrating nitrogen discharge functions for specific soil types and topography using a metamodelling technique. A hybrid model is created by merging the merits of differential evolution and non-linear optimisation to expedite policy simulations, in which farm profits and nitrogen discharges obtained from the differential evolution optimisation process are assembled to form a profit–pollution frontier. This frontier is then subject to constrained optimisation based on non-linear optimisation in order to predict producer responses to alternative pollution control policies. We apply this framework to derive marginal abatement costs for heterogeneous farm types and find that abatement costs for intensive farms are lower than for moderate and extensive farming systems. We further conclude that abatement can be achieved more cheaply using a compulsory standard or threshold tax than using a standard emissions tax.
 
This paper describes a generic model for predicting the migration of 137Cs and 90Sr through complex catchments and the effects of countermeasures to reduce the contamination levels. The model provides assessments of radionuclide behaviour in water systems comprised of rivers, lakes and reservoirs. It makes use of aggregate, “collective” parameters which summarise the overall effects of competing migration processes occurring in fresh water bodies. The model accounts for the radionuclide fluxes from the water column to the sediment and vice versa, for the radionuclide migration from the catchment and for the transport of contaminated matter through the water body. It has been applied to several European fresh water systems contaminated by 90Sr and 137Cs due to the atmospheric nuclear weapon tests of past decades and to the Chernobyl accident. The model can predict the effects of the following countermeasures: (a) sediment removal; (b) diversion of water from sub-catchments; and (c) decontamination of sub-catchments. The results of the sensitivity and uncertainty analyses are described. Some examples of countermeasure applications are described and discussed.
 
Understanding and managing ecosystems as biocomplex wholes is the compelling scientific challenge of our times. Several different system-theoretic approaches have been proposed to study biocomplexity and two in particular, Kauffman's NK networks and Patten's ecological network analysis, have shown promising results. This research investigates the similarities between these two approaches, which to date have developed separately and independently. Kauffman (1993) has demonstrated that networks of non-equilibrium, open thermodynamic systems can exhibit profound order (subcritical complexity) or profound chaos (fundamental complexity). He uses Boolean NK networks to describe system behavior, where N is the number of nodes in the network and K the number of connections at each node. Ecological network analysis uses a different Boolean network approach in that the pair-wise node interactions in an ecosystem food web are scaled by the throughflow (or storage) to determine the probability of flow along each pathway in the web. These flow probabilities are used to determine system-wide properties of ecosystems such as cycling index, indirect-to-direct effects ratio, and synergism. Here we modify the NK model slightly to develop a fitness landscape of interacting species and calculate how the network analysis properties change as the model's species coevolve. We find that, of the parameters considered, network synergism increases modestly during the simulation whereas the other properties generally decrease. Furthermore, we calculate several ecosystem level goal functions and compare their progression during increasing fitness and determine that at least at this stage there is not a good correspondence between the reductionistic and holistic drivers for the system. This research is largely a proof of concept test and will lay the foundation for future integration and model scenario analysis between two important network techniques.
 
We use a local learning algorithm to predict the abundance of the Alpine ibex population living in the Gran Paradiso National Park, Northern Italy. Population abundance, recorded for a period of 40 years, has recently been analyzed by [Jacobson, A., Provenzale, A., Von Hardenberg, A., Bassano, B., Festa-Bianchet, M., 2004. Climate forcing and density dependence in a mountain ungulate population. Ecology 85, 1598–1610], who showed that the rate of increase of the population depends both on its density and on snow depth. In the same paper, a threshold linear model is proposed for predicting the population abundance. In this paper, we identify a similar linear model in a local way, using a lazy learning algorithm. The advantages of the local model over the traditional global model are: improved forecast accuracy, easier understanding of the role and behaviour of the parameters, and an effortless way to keep the model up to date. Both the data and the software used in this work are in the public domain; therefore, the experiments can be easily replicated and further discussion is welcome.
 
Many water tables are currently overexploited throughout the world. This situation raises the question of their management. Integrated management of such systems, established in both supply and demand areas, calls for thorough knowledge of the functioning of both the water table and its users, so models are usually required. This study is based on the case of the Kairouan water table, located in Tunisia, which has been continuously and globally decreasing for more than 20 years, due to overexploitation by private irrigators. The field study led to the hypothesis that the dynamics of the system is heavily influenced by local interaction between the resource and its users, and by direct, non-economic interaction between the farmers. The literature shows that several kinds of model have already been used to represent interaction between a water table and its users but none of them are able to take this kind of social behaviour into account. The simulator of a water table and user interaction (SINUSE) based upon multi-agent systems enabled us to overcome these limitations. This model proved to be very useful for representing a complex and distributed system such as the Kairouan water table. It enabled us to explore the interaction between the physical and socio-economic components of the system and to conclude that local and non-economic behaviour do have a major impact on the global dynamics of the system and must therefore be taken into account. The management interventions simulated with the SINUSE model have raised interesting questions, leading to the conclusion that this model could provide a useful tool for negotiating the integrated management of the water table system. Though this model is rather specific, the approach developed could be transferred to other water table systems, to improve the knowledge of their functioning and examine the possible impacts of different management tools.
 
The Gap Analysis Program (GAP) is a nationwide effort to find areas of suitable habitat in the US, which if protected from habitat degradation, may help to preserve the native animal and plant biodiversity. In recent years, the GAP protocols used to analyze habitat data have become more scale and species dependent. This research describes the creation of a Spatial Decision Support System (SDSS) that applies species-specific parameters of Individual Area Requirement (IAR) (Vos et al., 2001), Minimum viable population (MVP), and Reach (Allen et al., 2001) to determine critical habitats for animal species, thereby eliminating those areas that are effectively unusable because of size or inaccessibility. The utility of the SDSS, and the three algorithms contained within it (i.e., core area, core area growth and aggregate), is demonstrated by creating distribution maps of usable habitat for five species (i.e., alligator, black bear, bobcat, gray fox and wild turkey) commonly found in the state of Arkansas. This knowledge can then be used to guide and prioritize conservation efforts towards protecting usable, and often critical, habitats.
 
The open source RFortran library is introduced as a convenient tool for accessing the functionality and packages of the R programming language from Fortran programs. It significantly enhances Fortran programming by providing a set of easy-to-use functions that enable access to R's very rapidly growing statistical, numerical and visualization capabilities, and support a richer and more interactive model development, debugging and analysis setup. RFortran differs from current approaches that require calling Fortran Dynamic link libraries (DLL) from R, and instead enables the Fortran program to transfer data to/from R and invoke R-based procedures via the R command interpreter. More generally, RFortran obviates the need to re-organize Fortran code into DLLs callable from R, or to re-write existing R packages in Fortran, or to jointly compile their Fortran code with the R language itself. Code snippets illustrate the basic transfer of data and commands to and from R using RFortran, while two case studies discuss its advantages and limitations in realistic environmental modelling applications. These case studies include the generation of automated and interactive inference diagnostics in hydrological model calibration, and the integration of R statistical packages into a Fortran-based numerical quadrature code for joint probability analysis of coastal flooding using numerical hydraulic models. Currently, RFortran uses the Component Object Model (COM) interface for data/command transfer and is supported on the Microsoft Windows operating system and the Intel and Compaq Visual Fortran compilers. Extending its support to other operating systems and compilers is planned for the future. We hope that RFortran expedites method and software development for scientists and engineers with primary programming expertise in Fortran, but who wish to take advantage of R's extensive statistical, mathematical and visualization packages by calling them from their Fortran code. Further information can be found at www.rfortran.org.
 
An account of the Bhopal Methyl Isocyanate (MIC) gas accident is given. A numerical modelling approach is presented that provides the prevailing flow and dispersion at the time of the infamous gas accident. The meteorological model produces a low wind speed stable surface layer capped by a 250-m-high nocturnal inversion. The results compared well with the available observations. Further analysis indicates that the MIC vapours dispersed in the form of marginally heavy gas clouds. Also, the presence of two large lakes in Bhopal modified the flow pattern near the surface which resulted in the transport of MIC into the city area of Bhopal.
 
Outil de SImulation des RISques (OSIRIS) is a hazard simulation tool for training firemen or manufacturers for interventions involving the transport of dangerous raw materials. This software allows one to obtain a quick answer regarding the areas affected by an accident, to provide assistance for judicious decision making, and to define the means to be implemented and the reaction time required for an efficient intervention. OSIRIS simulates different types of accident, including calculation of leak flow, calculation of evaporation flow, simulation of toxic gas dispersion, confinement, simulation of explosions and simulation of fires. Three types of result are obtained: numerical values, graphs of evolution according to distance, and effect distances drawn on a map of the accident site. In the second part of the publication, a simulation of a toxic gas dispersion accident is presented. OSIRIS is a decision-support tool which is simple to use, requires little calculation time, and is able to simulate many cases of technological accidents.
 
Lombardy is the Italian region with the highest number of hazardous industrial activities as defined by the Seveso II Directive (96/82/EC). The regional Civil Protection Department has developed an integrated modelling system for the simulation of the consequences of industrial accidents. One of the system modules, the decision support system, implements the algorithms described by the US-EPA for the management of the offsite consequence analysis, which allow a fast evaluation of the hazards associated with a given industrial plant. This paper presents the main rationale and features of this fast and simple but effective tool, together with examples of application for different possible accidents that may happen in industrial plants where hazardous activities are carried out.
 
A new software tool for the automatic detection and monitoring of plumes caused by major industrial accidents is described. This tool has been designed to use near real time information as provided by satellite images, perform sophisticated image analysis and provide a user-friendly operational environment for the detection of plumes caused by major industrial accidents. The methodology, based on NOAA/AVHRR (Advanced Very High Resolution Radiometer) imagery, uses a two-dimensional feature space in order to discriminate pixels that contain plumes from those that correspond to clouds or to the underlying surface. The two-dimensional feature space is generated by combining AVHRR channels 1 (visible), 2 (near infrared) and 5 (thermal infrared). The software tool proposed has been coded in the JAVA2 language, using the concepts of interoperability and object-oriented programming. This study demonstrates the applicability of the tool for the detection of a plume caused by a massive explosion in a firework factory in Enschede, The Netherlands, on May 13, 2000. The effectiveness and reliability of the software tool were found to be satisfactory, as the plume was automatically detected and discriminated from the underlying surface.
 
There is a growing need in the atmospheric modeling community for city-scale energy consumption data to estimate the magnitude of waste heat emissions in urban areas. While energy consumption data are widely available at aggregate space and time scales, they are often difficult to obtain at the finer scales needed in such applications. Simply assuming that local consumption patterns mirror those at coarser scales can lead to significant errors. We, therefore, present a method for correcting coarse-resolution energy data for use at the urban scale. The method is developed and validated using state and city-scale electricity data from cities in the US. Our approach develops regression models relating state-level sector-specific energy consumption to statewide temperature variables. These relations are then applied to temperature data for the city of interest to estimate city-scale consumption. This approach has been validated using residential electricity consumption data for three US cities – Houston, Los Angeles and Seattle. The fine scale weather correction scheme was found to be superior to the alternative of using the aggregate (state-level) data, reducing root mean square errors in estimated consumption by 8–40%. Much of the remaining error is believed to be a result of the assumption that the state-level building infrastructure (including heating and cooling equipment) is similar to that in each of the cities.
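A minimal sketch of the kind of temperature-based regression described above is given below: state-level monthly consumption is regressed on heating and cooling degree days and the fitted relation is then applied to city-scale degree days. The numbers, the degree-day base and the simple population-share scaling are all assumptions of this sketch, not the paper's exact correction scheme.

    # Minimal sketch (illustrative numbers): regress state-level residential
    # electricity consumption on heating/cooling degree days, then apply the fitted
    # relation to city-scale degree days to estimate local consumption.
    import numpy as np

    def degree_days(temps_c, base=18.3):
        """Monthly heating and cooling degree days from daily mean temperatures (deg C)."""
        t = np.asarray(temps_c)
        hdd = np.clip(base - t, 0, None).sum()
        cdd = np.clip(t - base, 0, None).sum()
        return hdd, cdd

    # State-level training data: one row per month -> [1, HDD, CDD] vs consumption (GWh).
    rng = np.random.default_rng(3)
    months = 48
    hdd = rng.uniform(0, 600, months)
    cdd = rng.uniform(0, 400, months)
    consumption = 900 + 0.9 * hdd + 1.6 * cdd + rng.normal(0, 40, months)

    A = np.column_stack([np.ones(months), hdd, cdd])
    coef, *_ = np.linalg.lstsq(A, consumption, rcond=None)

    # Apply to a city: scaling the state-level relation by population share is one
    # simple choice (an assumption of this sketch).
    city_pop_share = 0.12
    city_daily_temps = rng.normal(22, 6, 30)            # one month of daily means
    city_hdd, city_cdd = degree_days(city_daily_temps)
    city_estimate = city_pop_share * (coef[0] + coef[1] * city_hdd + coef[2] * city_cdd)
    print(round(float(city_estimate), 1), "GWh (estimated city consumption for the month)")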
 
The conventional Streeter–Phelps model does not account for the settleable component of BOD. It is therefore of limited value in the present-day context of polluted streams, in which part of the BOD removal necessarily takes place through sedimentation, especially when untreated or partially treated wastes are discharged into streams. Several other dispersion models developed to date also do not account for the settleable part of BOD. In the work presented here, a mathematical model is developed that accounts for dispersion effects, settling of the settleable part of BOD and the periodic variation of the BOD source. The dispersion model takes into account bioflocculated sedimentation as well as biochemical decay of non-settleable BOD. An alternate finite difference scheme is used to solve the model representing the BOD–DO balance under the stated conditions in a stream.
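The following is a minimal explicit finite-difference sketch of the kind of 1-D advection–dispersion balance involved, with first-order decay acting on both BOD fractions and an extra settling loss on the settleable fraction, forced by a periodically varying upstream source. The scheme, grid, rate constants and boundary forcing are illustrative assumptions, not the model developed in the paper.

```python
import numpy as np

nx, L = 200, 10_000.0            # grid points, reach length (m)
dx = L / (nx - 1)
dt = 10.0                        # time step (s)
u, D = 0.3, 5.0                  # velocity (m/s), dispersion (m^2/s)
kd, ks = 2.0e-6, 1.0e-6          # decay and settling rates (1/s)

Ln = np.zeros(nx)                # non-settleable BOD (mg/L)
Ls = np.zeros(nx)                # settleable BOD (mg/L)

def step(c, sink_rate, t):
    new = c.copy()
    # upwind advection + central dispersion + first-order sink
    new[1:-1] = (c[1:-1]
                 - u * dt / dx * (c[1:-1] - c[:-2])
                 + D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
                 - sink_rate * dt * c[1:-1])
    # periodically varying upstream BOD source (assumed daily cycle; the same
    # forcing is applied to both fractions purely for illustration)
    new[0] = 20.0 + 5.0 * np.sin(2 * np.pi * t / 86_400.0)
    new[-1] = new[-2]            # zero-gradient outflow boundary
    return new

for n in range(5_000):
    t = n * dt
    Ln, Ls = step(Ln, kd, t), step(Ls, kd + ks, t)

print(f"BOD at mid-reach: {Ln[nx // 2] + Ls[nx // 2]:.2f} mg/L")
```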
 
Typically, prioritization among collections of objects involves several parameters and thus requires the application of multicriteria methodologies. Partial order ranking offers a non-parametric method that makes no assumptions about linearity or about distributional properties. Accumulating partial order ranking (APOR) is a novel technique in which data from a series of individual tests of various characteristics are aggregated while the basics of the partial order ranking methodology are maintained. APOR offers prioritization based on mutual probabilities derived from the aggregated data. Alternatively, prioritization may be based on averaged ranks derived from the APOR. The present study illustrates the application of APOR through an assessment of a series of chemical substances.
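The sketch below shows only the basic partial-order comparison that such methods build on: one object dominates another only if it is at least as high in every criterion. APOR's aggregation into mutual probabilities and averaged ranks is not reproduced; the substances, scores and dominance counts are invented for illustration.

```python
import numpy as np

# rows = substances, columns = test endpoints (e.g. toxicity, persistence, ...)
# larger value = higher concern; all scores are invented.
scores = np.array([
    [0.9, 0.7, 0.8],   # substance A
    [0.4, 0.5, 0.3],   # substance B
    [0.8, 0.2, 0.9],   # substance C
    [0.3, 0.3, 0.2],   # substance D
])
names = ["A", "B", "C", "D"]

def dominates(x, y):
    """True if x >= y in every criterion and > in at least one (strict dominance)."""
    return np.all(x >= y) and np.any(x > y)

n = len(scores)
dominates_count = np.zeros(n, dtype=int)      # how many objects i dominates
dominated_by_count = np.zeros(n, dtype=int)   # how many objects dominate i
for i in range(n):
    for j in range(n):
        if i != j and dominates(scores[i], scores[j]):
            dominates_count[i] += 1
            dominated_by_count[j] += 1

for k, name in enumerate(names):
    print(f"{name}: dominates {dominates_count[k]}, dominated by {dominated_by_count[k]}")
# A and C are incomparable (A scores higher in endpoints 1 and 2, C in
# endpoint 3), which is exactly the situation partial order ranking preserves.
```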
 
Discharge models must be validated by comparing their predictions with experimental data. Such comparisons improve user confidence in model predictions. The validation process involves significant work for each case tested and requires tedious labor. This paper describes an automated validation system and its use in validating the Offshore Operators Committee (OOC) Mud and Produced Water Discharge Model. Once the validation system is set up, very little additional work is needed to repeat the validation after changes related to maintenance and development. The system provides a complete record of all validation methods, data, and results. The principal benefits of the automated validation system are that the combined validation tests are completely documented with an HTML report, the tests are easily repeated, the system quickly reveals flaws arising from model maintenance and development activities, and the system can be adapted to other numerical models containing standalone executable modules that read and write text files. The automated validation system consists of several parts: (1) a hierarchical arrangement of data that segregates individual experiments into separate file system directories; (2) command scripts to run validation tests in each directory (model runs, statistical comparisons of predictions and observations, plots of predictions compared with observations); (3) a top-level script to summarize overall comparison statistics and scatter plots; (4) a report generator to assemble validation results into a linked set of HTML pages with plots; and (5) a tool to compare validations run at different times (e.g., to compare predictions of different versions of the model). The experiments included in the validation system are summarized briefly. One laboratory and one field experiment concerned with particle deposition on the sea floor were added during the development of the validation system; these cases are described in more detail.
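A rough sketch of the kind of top-level driver such a system uses: walk a directory tree of experiments, run a standalone model executable in each, compare the text output with observations, and collect statistics for an HTML summary. The executable name, file names and directory layout below are hypothetical; this is not the OOC validation system itself.

```python
import pathlib
import subprocess
import numpy as np

ROOT = pathlib.Path("validation_cases")       # one subdirectory per experiment

def run_case(case_dir):
    # Run the model in place; it is assumed here to read input.txt and
    # write predicted.txt (text files, as the abstract describes).
    subprocess.run(["./discharge_model", "input.txt"], cwd=case_dir, check=True)
    predicted = np.loadtxt(case_dir / "predicted.txt")
    observed = np.loadtxt(case_dir / "observed.txt")
    return float(np.sqrt(np.mean((predicted - observed) ** 2)))

rows = []
for case_dir in sorted(p for p in ROOT.iterdir() if p.is_dir()):
    rows.append((case_dir.name, run_case(case_dir)))

# Minimal HTML summary tying the per-case statistics together.
html = ["<html><body><h1>Validation summary</h1><table>",
        "<tr><th>Case</th><th>RMSE</th></tr>"]
html += [f"<tr><td>{name}</td><td>{rmse:.3g}</td></tr>" for name, rmse in rows]
html.append("</table></body></html>")
(ROOT / "report.html").write_text("\n".join(html))
```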
 
Image classification is a complex process affected by uncertainties and by decisions made by the analyst. The accuracy achieved by a supervised classification depends largely on the training data provided. The use of representative training data sets is highly important for the performance of all classification methods, but it is even more important for neural network classifiers, which take each sample into consideration during training. Representativeness relates to the size and quality of the training data, both of which strongly influence the accuracy of thematic maps derived from remotely sensed data. Quality analysis of training data helps identify outlier and mixed pixels that can undermine the reliability and accuracy of a classification through incorrect class boundary definition. Training data selection can therefore be viewed as an iterative process in which a representative data set is formed after successive refinements. Unfortunately, in many applications the quality of the training data is not questioned, and the data set is used directly in the training stage. In order to increase the representativeness of the training data, a two-stage approach is presented and performance tests are conducted for a selected region. A multi-layer perceptron model trained with the backpropagation learning algorithm is employed to classify the major land cover/land use classes present in the study area, the city of Trabzon in Turkey. Results show that the use of representative training data helps the classifier produce more accurate and reliable results; an improvement of several percent in classification accuracy can have a significant effect on the quality of the classified image. Results also confirm the value of visualization tools for assessing training pixels through decision boundary analysis.
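As an illustration of the general idea of screening training pixels before classification, the sketch below removes samples far from their class centroid (a simple z-score screen stands in for the paper's two-stage refinement) and then trains a multi-layer perceptron. The data are synthetic and the screening rule is an assumption, not the published procedure.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic "training pixels": 4 spectral bands, 3 land-cover classes,
# with a few contaminated (mixed/outlier) samples injected.
X = np.vstack([rng.normal(loc=m, scale=0.05, size=(100, 4))
               for m in (0.2, 0.5, 0.8)])
y = np.repeat([0, 1, 2], 100)
X[::37] += rng.normal(scale=0.4, size=X[::37].shape)

def screen(X, y, z_max=3.0):
    """Drop samples whose largest per-band z-score within their class exceeds z_max."""
    keep = np.ones(len(y), dtype=bool)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        mu, sd = X[idx].mean(axis=0), X[idx].std(axis=0) + 1e-9
        z = np.abs((X[idx] - mu) / sd).max(axis=1)
        keep[idx[z > z_max]] = False
    return X[keep], y[keep]

Xc, yc = screen(X, y)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(Xc, yc)
print(f"kept {len(yc)} of {len(y)} training pixels, "
      f"training accuracy {clf.score(Xc, yc):.3f}")
```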
 
TRIPLEX1.0 is a hybrid model that integrates three well-established process models: 3-PG, TREEDYN3.0 and CENTURY4.0. We conducted calibrations at eight sites to determine and generalize the parameters of TRIPLEX, and performed model validation using 66 independent data sets to examine model accuracy and the generality of its application. Simulations were conducted for plots with large sample sizes from the Boreal Ecosystem Atmosphere Study (BOREAS) program, including the northern study area (NSA) near Thompson, Manitoba (55.7° N, 97.8° W) and the southern study area (SSA) near Prince Albert, Saskatchewan (53.7° N, 105.1° W). The calibrations and simulations emphasized generating average parameters and initial states for applying a complex model over a broad region where detailed site information, such as photosynthetic capacity, soil carbon, nutrients, soil water, and tree growth, is not always available. A suggestion is made for adjusting the sensitive parameter by estimating tree growth rates corresponding to different site conditions. The study presents a reasonable and balanced parameter generalization procedure that did not lead to a significant reduction of model accuracy but did increase the model's practicability. The comparison of observations and simulations showed good agreement for tree density, mean tree height, DBH, soil carbon, above-ground and total biomass, net primary productivity (above-ground) and soil nitrogen in both short- and long-term simulations. The results imply that the set of parameters generalized and suggested in this study can be used as basic reference values with which TRIPLEX can simulate general site conditions of boreal forest ecosystems.
 
This paper presents a Matlab™ toolbox to assess the accuracy of the estimated parameters of environmental models, based on their approximate confidence regions. Before the application is described, the underlying theory is briefly recalled to familiarize the reader with the numerical methods involved. The software, named PEAS as an acronym for Parameter Estimation Accuracy Software, performs both the estimation and the accuracy analysis, using a user-friendly graphical interface to minimize the required programming. The user specifies the model structure according to the Matlab/Simulink™ syntax, supplies the experimental data, provides an initial parameter guess and selects an estimation method. In addition to parameter estimation, PEAS provides several model assessment tools, such as error function plotting, trajectory sensitivity and Monte Carlo analysis, all useful for assessing the adequacy of the experimental data for the estimation problem. After the parameters have been estimated, the reliability assessment is performed: approximate and exact confidence regions are computed and a confidence test is produced. Monte Carlo analysis is available for approximate accuracy assessment whenever the model structure prevents the application of the confidence region method. The software, which is freely available for research purposes, is demonstrated here with two examples: a dynamical and an algebraic model. In both cases, software usage and outputs are presented and discussed. The examples show how the user is guided through the application of the methods and how warning messages are returned if the estimation does not satisfy the accuracy criteria.
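PEAS itself is a Matlab toolbox; the snippet below only illustrates, in Python, the underlying idea of an approximate (linearized) confidence assessment: fit parameters by least squares, take the covariance estimate, and derive individual confidence intervals. The decay model, data and noise level are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import t

def model(x, k, c0):
    """Simple first-order decay model c(x) = c0 * exp(-k * x)."""
    return c0 * np.exp(-k * x)

# Synthetic noisy observations
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 25)
y = model(x, 0.35, 5.0) + rng.normal(scale=0.1, size=x.size)

popt, pcov = curve_fit(model, x, y, p0=[0.1, 1.0])
dof = x.size - len(popt)
se = np.sqrt(np.diag(pcov))
tval = t.ppf(0.975, dof)                    # 95% two-sided

for name, p, s in zip(["k", "c0"], popt, se):
    print(f"{name} = {p:.3f} +/- {tval * s:.3f} (approx. 95% CI)")
# When the linearization is questionable, re-estimating over Monte Carlo
# perturbed data sets gives an alternative accuracy assessment.
```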
 
This work addresses ozone modeling in the lower atmosphere. Data on seven environmental pollutant concentrations (CH4, NMHC, CO, CO2, NO, NO2, and SO2) and five meteorological variables (wind speed, wind direction, air temperature, relative humidity, and solar radiation) were used to develop models predicting ozone concentration in Kuwait's lower atmosphere. The models were developed using summer air quality and meteorological data from a typical urban site during the period when ozone concentration levels were highest. The site was selected to represent a typical residential area with high traffic influences. A combined method, based on both multiple regression with principal component analysis (PCR) and artificial neural network (ANN) modeling, was used to predict ozone concentration levels in the lower atmosphere and to improve prediction accuracy. The model predictions were found to be consistent with observed values, with R2 values of 0.965, 0.986, and 0.995 for the PCR, ANN, and combined model predictions, respectively. Combining the predictions from the PCR and ANN models reduced the root mean square error (RMSE) of the predicted ozone concentrations. It is clear that combining predictions generated by different methods can improve accuracy and provide a prediction that is superior to that of a single model.
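A sketch of combining principal component regression with a neural network for ozone prediction follows. The abstract does not state how the two predictions were combined, so a simple average is used here; the predictors, synthetic data and network size are placeholders rather than the study's configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 12))            # 7 pollutants + 5 meteorological variables
ozone = 30 + 4 * X[:, 0] - 3 * X[:, 7] + 2 * X[:, 10] + rng.normal(scale=2, size=500)

pcr = make_pipeline(StandardScaler(), PCA(n_components=5), LinearRegression())
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0))

train, test = slice(0, 400), slice(400, 500)
pcr.fit(X[train], ozone[train])
ann.fit(X[train], ozone[train])

# Combine the two predictions by simple averaging (an assumed blending rule).
combined = 0.5 * (pcr.predict(X[test]) + ann.predict(X[test]))
rmse = np.sqrt(np.mean((combined - ozone[test]) ** 2))
print(f"RMSE of the averaged PCR/ANN prediction: {rmse:.2f}")
```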
 
Plain linear models have recently been used in methodologies to model fate and transport for assessing acidification in life cycle impact assessment (LCIA), and in support of air pollution abatement policies. These models originate from a statistical analysis of the relationship between the inputs and outputs of physically-based models that reflect the mechanics of a system in detail. Linear models applied to assess acidification use an acidification factor (AF), which relates changes in the magnitude of emissions to changes in the total area that is protected against acidification in Europe. The changes in emission volume refer to changes of one substance, within one country and one sector or one grid cell. This paper evaluates the dependence of AFs on three spatial characteristics: the spatial emission and deposition resolution, the spatial emission distribution, and the actual spatial location of emissions. The applied spatial resolutions of emission and deposition cause non-systematic variations in AFs of up to 60% relative to the finest resolution. The manner in which the distribution of emissions is modelled, i.e. grid- or sector-specific, is shown to affect AFs considerably as well. We conclude that the spatial characteristics of the physically-based acidification model can affect the assessment of acidification by means of plain linear models.
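In its simplest reading, such a plain linear model reduces to: change in protected area = AF × emission change, with one AF per substance, country and sector (or grid cell). A minimal sketch with invented AF values is shown below; the paper's point is that these AFs shift when the spatial resolution or emission distribution used to derive them changes.

```python
# Hypothetical acidification factors: km^2 of additionally protected area
# per kt of emission reduced, keyed by (substance, country, sector).
# All values are invented for illustration.
acidification_factors = {
    ("SO2", "DE", "power"): 1.8,
    ("NOx", "DE", "traffic"): 0.9,
    ("NH3", "NL", "agriculture"): 2.4,
}

def protected_area_gain(substance, country, sector, emission_reduction_kt):
    """Linear estimate of additional protected area (km^2) for an emission reduction (kt)."""
    af = acidification_factors[(substance, country, sector)]
    return af * emission_reduction_kt

# A 50 kt SO2 reduction in the German power sector under these assumed AFs:
print(protected_area_gain("SO2", "DE", "power", 50.0), "km^2")
```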
 
Top-cited authors
Holger Robert Maier
  • University of Adelaide
A.J. Jakeman
  • Australian National University
Alexey Voinov
  • University of Technology Sydney
Graeme Clyde Dandy
  • University of Adelaide
Thorsten Wagener
  • Universität Potsdam