Geoscientific Model Development (GMD)
Published by Copernicus Publications on behalf of the European Geosciences Union
Online ISSN: 1991-9603 · Print ISSN: 1991-959X
Disciplines: Geosciences, Multidisciplinary
Interactive Public Peer Review · Community driven · Not for profit

Geoscientific Model Development (GMD) is a not-for-profit international scientific journal dedicated to the publication and public discussion of the description, development, and evaluation of numerical models of the Earth system and its components.

Most read in the past 30 days:
- Bridging the gap: a new module for human water use in the Community Earth System Model version 2.2.1 (October 2024) · 185 Reads
- Deep dive into hydrologic simulations at global scale: harnessing the power of deep learning and physics-informed differentiable models (δHBV-globe1.0-hydroDL) (September 2024) · 331 Reads · 1 Citation
- A perspective on the next generation of Earth system model scenarios: towards representative emission pathways (REPs) (June 2024) · 309 Reads · 5 Citations
- The biogeochemical model Biome-BGCMuSo v6.2 provides plausible and accurate simulations of the carbon cycle in central European beech forests (October 2024) · 87 Reads
- Selecting CMIP6 global climate models (GCMs) for Coordinated Regional Climate Downscaling Experiment (CORDEX) dynamical downscaling over Southeast Asia using a standardised benchmarking framework (October 2024) · 64 Reads
November 2024 · Samiha Binte Shahid · Forrest G. Lacey · Christine Wiedinmyer · [...] · Kelley C. Barsanti
Accurate representation of fire emissions is critical for modeling the in-plume, near-source, and remote effects of biomass burning (BB) on atmospheric composition, air quality, and climate. In recent years application of advanced instrumentation has significantly improved knowledge of the compounds emitted from fires, which, coupled with a large number of recent laboratory and field campaigns, has facilitated the emergence of new emission factor (EF) compilations. The Next-generation Emissions InVentory expansion of Akagi (NEIVA) version 1.0 is one such compilation in which the EFs for 14 globally relevant fuel and fire types have been updated to include data from recent studies, with a focus on gaseous non-methane organic compounds (NMOC_g). The data are stored in a series of connected tables that facilitate flexible querying from the individual study level to recommended averages of all laboratory and field data by fire type. The querying features are enabled by assignment of unique identifiers to all compounds and constituents, including thousands of NMOC_g. NEIVA also includes chemical and physical property data and model surrogate assignments for three widely used chemical mechanisms for each NMOC_g. NEIVA EF datasets are compared with recent publications and other EF compilations at the individual compound level and in the context of overall volatility distributions and hydroxyl (OH) reactivity (OHR) estimates. The NMOC_g in NEIVA include ∼4–8 times more compounds with improved representation of intermediate volatility organic compounds, resulting in much lower overall volatility (lowest-volatility bin shifted by as much as 3 orders of magnitude) and significantly higher OHR (up to 90 %) than other compilations. These updates can strongly impact model predictions of the effects of BB on atmospheric composition and chemistry.
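The NEIVA database's central idea is that every compound carries a unique identifier, so EFs can be queried from the individual-study level up to recommended fire-type averages. The sketch below illustrates that kind of query against a hypothetical flat-record schema (the field names, compounds, and values are invented, not NEIVA's actual tables):

```python
# Illustrative sketch only (hypothetical schema, not the actual NEIVA tables):
# each record is one emission-factor (EF) measurement keyed by a unique
# compound identifier, a fire type, and the study it came from.
from statistics import mean

records = [
    {"compound_id": "C2H4O2", "fire_type": "temperate forest", "study": "A2019", "ef_g_per_kg": 1.8},
    {"compound_id": "C2H4O2", "fire_type": "temperate forest", "study": "B2021", "ef_g_per_kg": 2.2},
    {"compound_id": "C2H4O2", "fire_type": "savanna",          "study": "C2020", "ef_g_per_kg": 3.1},
]

def recommended_ef(records, compound_id, fire_type):
    """Average all study-level EFs for one compound and fire type."""
    efs = [r["ef_g_per_kg"] for r in records
           if r["compound_id"] == compound_id and r["fire_type"] == fire_type]
    return mean(efs) if efs else None

print(recommended_ef(records, "C2H4O2", "temperate forest"))  # -> 2.0
```

The same pattern scales from per-study queries to fire-type averages simply by changing the filter predicate.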
November 2024 · 14 Reads
Snow cover modeling remains a major challenge in climate and numerical weather prediction (NWP) models even in recent versions of high-resolution coupled surface–atmosphere (i.e., at kilometer scale) regional models. Evaluation of recent climate simulations, carried out as part of the WCRP-CORDEX Flagship Pilot Study on Convection (FPSCONV) with the CNRM-AROME convection-permitting regional climate model at 2.5 km horizontal resolution, has highlighted significant snow cover biases, severely limiting its potential in mountain regions. These biases, which are also found in AROME numerical weather prediction (NWP) model results, have multiple causes, involving atmospheric processes and their influence on input data to the land surface models in addition to deficiencies of the land surface model itself. Here we present improved configurations of the SURFEX-ISBA land surface model used in CNRM-AROME. We thoroughly evaluated these configurations on their ability to represent seasonal snow cover across the European Alps. Our evaluation was based on coupled simulations spanning the winters of 2018–2019 and 2019–2020, which were compared against remote sensing data and in situ observations. More specifically, the study tests the influence of various changes in the land surface configuration, such as the use of multi-layer soil and snow schemes, the division of the energy balance calculation by surface type within a grid cell (multiple patches), and new physiographic databases and parameter adjustments. Our findings indicate that using only more detailed individual components in the surface model did not improve the representation of snow cover due to limitations in the approach used to account for partial snow cover within a grid cell. 
These limitations are addressed in further configurations that highlight the importance, even at kilometer resolution, of taking into account the main subgrid surface heterogeneities and improving representations of interactions between fractional snow cover and vegetation. Ultimately, we introduce a land surface configuration, which substantially improves the representation of seasonal snow cover in the European Alps in coupled CNRM-AROME simulations. This holds promising potential for the use of such model configurations in climate simulations and numerical weather prediction both for AROME and other high-resolution climate models.
October 2024 · 5 Reads
This paper introduces the AtsMOS (At-scale Model Output Statistics) workflow, designed to enhance mountain meteorology predictions through the downscaling of coarse numerical weather predictions using local observational data. AtsMOS provides a modular, open-source toolkit for local and large-scale forecasting of various meteorological variables through modified model output statistics – and may be applied to data from a single station or an entire network. We demonstrate its effectiveness through an example application at the summit of Mt. Everest, where it improves the prediction of both meteorological variables (e.g. wind speed, temperature) and derivative variables (e.g. facial frostbite time) critical for mountaineering safety. As a bridge between numerical weather prediction models and ground observations, AtsMOS contributes to hazard mitigation, water resource management, and other weather-dependent issues in mountainous regions and beyond.
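At its core, model output statistics fits a statistical correction from coarse forecasts to local observations. A minimal sketch of that step, assuming synthetic data and a simple linear correction (AtsMOS itself is a modular toolkit handling many variables and stations):

```python
import numpy as np

# Minimal model-output-statistics (MOS) sketch: fit a linear correction from
# biased coarse NWP forecasts to station observations, then apply it to new
# forecasts. Data are synthetic; the coefficients are learned by least squares.
rng = np.random.default_rng(0)
truth = rng.uniform(-30.0, 0.0, 200)               # "observed" summit temperature
fcst = 0.7 * truth + 5.0 + rng.normal(0, 1.0, 200)  # damped, warm-biased forecast

A = np.column_stack([fcst, np.ones_like(fcst)])
coef, *_ = np.linalg.lstsq(A, truth, rcond=None)    # slope and intercept

def downscale(raw_forecast):
    """Apply the trained MOS correction to a raw NWP forecast."""
    return coef[0] * raw_forecast + coef[1]
```

In practice the training pairs would be archived forecasts and station records, and separate corrections would be fitted per variable and location.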
October 2024 · 7 Reads
The atmospheric composition forecasting system used to produce the Copernicus Atmosphere Monitoring Service (CAMS) forecasts of global aerosol and trace gas distributions, the Integrated Forecasting System (IFS-COMPO), undergoes periodic upgrades. In this study we describe the development of the future operational cycle 49R1 and focus on the implementation of the thermodynamical model EQSAM4Clim version 12, which represents gas–aerosol partitioning processes for the nitric acid–nitrate and ammonia–ammonium couples and computes diagnostic aerosol, cloud, and precipitation pH values at the global scale. This information on aerosol acidity influences the simulated tropospheric chemistry processes associated with aqueous-phase chemistry and wet deposition. The other updates of cycle 49R1 concern wet deposition, sea-salt aerosol emissions, dust optics, and size distribution used for the calculation of sulfate aerosol optics. The implementation of EQSAM4Clim significantly improves the partitioning of reactive nitrogen compounds, decreasing surface concentrations of both nitrate and ammonium in the particulate phase, which reduces PM2.5 biases for Europe, the US, and China, especially during summertime. For aerosol optical depth there is generally a decrease in the simulated wintertime biases and for some regions an increase in the summertime bias. Improvements in the simulated Ångström exponent are noted for almost all regions, resulting in generally good agreement with observations. The diagnostic aerosol and precipitation pH calculated by EQSAM4Clim have been compared to ground observations and published simulation results. For precipitation pH, the annual mean values show relatively good agreement with the regional observational datasets, while for aerosol pH the simulated values over continents are quite close to those simulated by ISORROPIA II. 
The use of aerosol acidity has a relatively smaller impact on the aqueous-phase production of sulfate compared to the changes in gas-to-particle partitioning induced by the use of EQSAM4Clim.
October 2024 · 13 Reads
Despite the increasing use of physical snow cover simulations in regional avalanche forecasting, avalanche forecasting is still an expert-based decision-making process. However, recently, it has become possible to obtain fully automated avalanche danger level predictions with satisfying accuracy by combining physically based snow cover models with machine learning approaches. These predictions are made at the location of automated weather stations close to avalanche starting zones. To bridge the gap between these local predictions and fully data- and model-driven regional avalanche danger maps, we developed and evaluated a three-stage model pipeline (RAvaFcast v1.0.0), involving the steps classification, interpolation, and aggregation. More specifically, we evaluated the impact of various terrain features on the performance of a Gaussian-process-based model for interpolation of local predictions to unobserved locations on a dense grid. Aggregating these predictions using an elevation-based strategy, we estimated the regional danger level and the corresponding elevation range for predefined warning regions, resulting in a forecast similar to the human-made public avalanche forecast in Switzerland. The best-performing model matched the human-made forecasts with a mean day accuracy of approximately 66 % for the entire forecast domain and 70 % specifically for the Alps. However, the performance depended strongly on the classifier's accuracy (i.e., a mean day accuracy of 68 %) and the density of local predictions available for the interpolation task. Despite these limitations, we believe that the proposed three-stage model pipeline has the potential to improve the interpretability of machine-made danger level predictions and has, thus, the potential to assist avalanche forecasters during forecast preparation, for instance, by being integrated in the forecast process in the form of an independent virtual forecaster.
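The pipeline's interpolation stage is Gaussian-process regression from station locations to a dense grid, followed by aggregation to a regional level. A toy one-dimensional version, with an illustrative RBF kernel and made-up station values (the real model also conditions on terrain features):

```python
import numpy as np

# Toy Gaussian-process interpolation: local danger-level predictions at a few
# station coordinates are interpolated to a dense grid, then aggregated.
# Kernel form, length scale, and all values are illustrative only.
def rbf(a, b, ell=0.5):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

x_obs = np.array([0.0, 1.0, 2.0, 3.0])     # station coordinates (e.g. km)
y_obs = np.array([1.0, 2.0, 3.0, 2.0])     # local danger-level predictions
x_grid = np.linspace(0.0, 3.0, 31)

K = rbf(x_obs, x_obs) + 1e-6 * np.eye(len(x_obs))    # jitter for stability
y_grid = rbf(x_grid, x_obs) @ np.linalg.solve(K, y_obs)

# Aggregate to a single regional danger level, e.g. a high quantile:
regional = np.quantile(y_grid, 0.9)
```

The GP reproduces the station values exactly (up to the jitter) and fills the gaps smoothly, which is what makes the gridded danger maps interpretable.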
October 2024 · 32 Reads
Numerical methods and simulation codes are essential for the advancement of our understanding of complex atmospheric processes. As technology and computer hardware continue to evolve, the development of sophisticated code is vital for accurate and efficient simulations. In this paper, we present the recent advancements made in the FLEXible PARTicle dispersion model (FLEXPART), a Lagrangian particle dispersion model, which has been used in a wide range of atmospheric transport studies over the past 3 decades, extending from tracing radionuclides from the Fukushima nuclear disaster, to inverse modelling of greenhouse gases, and to the study of atmospheric moisture cycles. This version of FLEXPART includes notable improvements in accuracy and computational efficiency. (1) By leveraging the native vertical coordinates of the European Centre for Medium-Range Weather Forecasts (ECMWF) Integrated Forecasting System (IFS) instead of interpolating to terrain-following coordinates, we achieved an improvement in trajectory accuracy, leading to an ∼8 %–10 % reduction in conservation errors for quasi-conservative quantities like potential vorticity. (2) The shape of aerosol particles is now accounted for in the gravitational settling and dry-deposition calculation, increasing the simulation accuracy for non-spherical aerosol particles such as microplastic fibres. (3) Wet deposition has been improved by the introduction of a new below-cloud scheme, by a new cloud identification scheme, and by improving the interpolation of precipitation. (4) Functionality from a separate version of FLEXPART, the FLEXPART CTM (chemical transport model), is implemented, which includes linear chemical reactions. Additionally, the incorporation of Open Multi-Processing (OpenMP) parallelisation makes the model better suited for handling large input data. Furthermore, we introduced novel methods for the input and output of particle properties and distributions. 
Users now have the option to run FLEXPART with more flexible particle input data, providing greater adaptability for specific research scenarios (e.g. effective backward simulations corresponding to satellite retrievals). Finally, a new user manual (https://flexpart.img.univie.ac.at/docs/, last access: 11 September 2024) and restructuring of the source code into modules will serve as a basis for further development.
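The shape effect on gravitational settling mentioned in point (2) can be illustrated with the classic Stokes-regime formula, in which a dynamic shape factor χ ≥ 1 divides the settling velocity. This is a simplified sketch, not FLEXPART's actual scheme, which also includes slip and drag corrections:

```python
# Stokes-regime settling velocity with a dynamic shape factor chi:
# v = (rho_p - rho_air) * g * d_eq^2 / (18 * mu * chi).
# chi = 1 for spheres and chi > 1 for elongated particles such as fibres,
# so non-spherical particles settle more slowly. Illustrative only; the
# full model applies additional corrections (e.g. Cunningham slip).
def stokes_settling_velocity(d_eq, rho_p, chi=1.0, rho_air=1.2, mu=1.8e-5, g=9.81):
    """Settling velocity (m/s) for volume-equivalent diameter d_eq (m),
    particle density rho_p (kg/m3), and dynamic shape factor chi."""
    return (rho_p - rho_air) * g * d_eq**2 / (18.0 * mu * chi)

v_sphere = stokes_settling_velocity(10e-6, 1500.0)            # spherical particle
v_fibre = stokes_settling_velocity(10e-6, 1500.0, chi=1.5)    # microplastic fibre
```

For a 10 µm particle the spherical settling speed is a few mm/s, and the fibre settles correspondingly slower, which changes its atmospheric residence time.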
October 2024 · 6 Reads
We have produced a series of 5 km resolution future climate dynamic downscalings for the ocean surrounding New Zealand covering CMIP6 reference conditions and the SSP2-4.5 and SSP3-7.0 emissions trajectories. These downscalings combine the Moana Backbone 5 km resolution ROMS configuration with lateral boundary forcing from the 15 km resolution New Zealand Earth System Model (NZESM) and atmospheric forcing from the New Zealand Regional Climate Model 12 km atmospheric model. We validated our reference period downscaling against the Moana Ocean Hindcast and find reasonable agreement to the west and north of New Zealand, but significant disagreement in the region of the Sub-Tropical Front to the east and southeast of the domain. This disagreement is consistent with known issues with the version of NZESM used as forcing in this study. We see similar relative rates of increase in ocean heat content in the upper ocean and mode waters all around New Zealand, but in the deeper ocean the rate of warming is stronger in the Tasman Sea and Antarctic Circumpolar Current than in the Sub-Tropical Front east of New Zealand. We examine the occurrence of marine heat waves (MHWs) and find that the choice between a “fixed” baseline and one that accounts for long-term warming over the historical period results in important differences in the estimated number of MHW days for mid- and end-of-century scenarios.
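The fixed-versus-shifting-baseline sensitivity can be shown with a few lines of synthetic data: a percentile threshold from a historical window either stays fixed or rises with the fitted warming trend, and the exceedance-day count differs sharply. This is a deliberately simplified definition (operational MHW metrics use daily climatology windows and duration criteria):

```python
import numpy as np

# Count "MHW days" as exceedances of a 90th-percentile threshold under a
# fixed historical baseline vs a baseline shifted by the warming trend.
# The temperature series is synthetic.
rng = np.random.default_rng(1)
years = np.arange(80)
sst = 15.0 + 0.03 * years + rng.normal(0, 0.5, years.size)   # warming series

hist = sst[:30]                                # "historical" reference period
thresh_fixed = np.percentile(hist, 90)

slope, intercept = np.polyfit(years, sst, 1)   # linear warming fit
thresh_moving = thresh_fixed + slope * (years - years[:30].mean())

future = slice(50, 80)
days_fixed = int(np.sum(sst[future] > thresh_fixed))
days_moving = int(np.sum(sst[future] > thresh_moving[future]))
```

Under continued warming the fixed baseline flags far more exceedances than the trend-following one, mirroring the sensitivity the study reports.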
October 2024 · 21 Reads
Urban air quality is an important part of human well-being, and its detailed and precise modeling is important for efficient urban planning. In this study the potential sources of errors in large eddy simulation (LES) runs of the PALM model in stable conditions for a high-traffic residential area in Prague, Czech Republic, with a focus on street canyon ventilation, are investigated. The evaluation of the PALM model simulations against observations obtained during a dedicated campaign revealed unrealistically high concentrations of modeled air pollutants for a short period during a winter inversion episode. To identify potential reasons, the sensitivities of the model to changes in meteorological boundary conditions and adjustments of model parameters were tested. The model adaptations included adding the anthropogenic heat from cars, setting a bottom limit of the subgrid-scale turbulent kinetic energy (TKE), adjusting the profiles of parameters of the synthetic turbulence generator in PALM, and limiting the model time step. The study confirmed the crucial role of the correct meteorological boundary conditions for realistic air quality modeling during stable conditions. Besides this, the studied adjustments of the model parameters proved to have a significant impact in these stable conditions, resulting in a decrease in concentration overestimation in the range 30 %–66 % while exhibiting a negligible influence on model results during the rest of the episode. This suggested that the inclusion or improvement of these processes in PALM is desirable despite their negligible impact in most other conditions. Moreover, the time step limitation test revealed numerical inaccuracies caused by discretization errors which occurred during such extremely stable conditions.
October 2024 · 57 Reads
A multiscale modeling ensemble chain has been assembled as a first step towards an air quality analysis and forecasting (AQF) system for Latin America. Two global and three regional models were tested and compared in retrospective mode over a shared domain (120–28° W, 60° S–30° N) for the months of January and July 2015. The objective of this experiment was to understand their performance and characterize their errors. Observations from local air quality monitoring networks in Colombia, Chile, Brazil, Mexico, Ecuador and Peru were used for model evaluation. The models generally agreed with observations in large cities such as Mexico City and São Paulo, whereas representing smaller urban areas, such as Bogotá and Santiago, was more challenging. For instance, in Santiago during wintertime, the simulations showed large discrepancies with observations. No single model demonstrated superior performance over others or among pollutants and sites available. In general, ozone and NO2 exhibited the lowest bias and errors, especially in São Paulo and Mexico City. For SO2, the bias and error were close to 200 %, except for Bogotá. The ensemble, created from the median value of all models, was evaluated as well. In some cases, the ensemble outperformed the individual models and mitigated extreme over- or underestimation. However, more research is needed before concluding that the ensemble is the path for an AQF system in Latin America. This study identified certain limitations in the models and global emission inventories, which should be addressed with the involvement and experience of local researchers.
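The ensemble described here is the gridpoint-wise median of the individual models, which by construction resists a single model's extreme over- or underestimation. A minimal sketch with invented numbers:

```python
import numpy as np

# Ensemble as the per-site median of several models' simulated concentrations.
# All values are synthetic illustrations, not results from the study.
obs = np.array([40.0, 55.0, 30.0])              # station observations
models = np.array([
    [38.0, 60.0, 28.0],    # model 1
    [45.0, 52.0, 33.0],    # model 2
    [120.0, 58.0, 90.0],   # model 3: extreme overestimation at two sites
])

ensemble = np.median(models, axis=0)

def mae(sim):
    """Mean absolute error against the observations."""
    return np.abs(sim - obs).mean()
```

Here the median ensemble ignores model 3's outliers entirely, so its error stays close to the better individual models, which is exactly the mitigation behaviour noted in the abstract.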
October 2024 · 33 Reads
Atmospheric transport models are often used to simulate the distribution of greenhouse gases (GHGs). This can be in the context of forward modeling of tracer transport using surface–atmosphere fluxes or flux estimation through inverse modeling, whereby atmospheric tracer measurements are used in combination with simulated transport. In both of these contexts, transport errors can bias the results and should therefore be minimized. Here, we analyze transport uncertainties in the commonly used Weather Research and Forecasting (WRF) model coupled with the greenhouse gas module (WRF-GHG), enabling passive tracer transport simulation of CO2 and CH4. As a mesoscale numerical weather prediction model, WRF's transport is constrained by global meteorological fields via initialization and at the lateral boundaries of the domain of interest. These global fields were generated by assimilating various meteorological data to increase the accuracy of modeled fields. However, in limited-domain models like WRF, the winds in the center of the domain can deviate considerably from these driving fields. As the accuracy of the wind speed and direction is critical to the prediction of tracer transport, maintaining a close link to the observations across the simulation domain is desired. On the other hand, a link that is too close to the global meteorological fields can degrade performance at smaller spatial scales that are better represented by the mesoscale model. In this work, we evaluated the performance of strategies for keeping WRF's meteorology compatible with meteorological observations. 
To avoid the complexity of assimilating meteorological observations directly, two main strategies of coupling WRF-GHG with ERA5 meteorological reanalysis data were tested over a 2-month-long simulation over the European domain: (a) restarting the model daily with fresh initial conditions (ICs) from ERA5 and (b) nudging the atmospheric winds, temperatures, and moisture to those of ERA5 continuously throughout the simulation period, using WRF's built-in four-dimensional data assimilation (FDDA) in grid-nudging mode. Meteorological variables and simulated mole fractions of CO2 and CH4 were compared against observations to assess the performance of the different strategies. We also compared planetary boundary layer height (PBLH) with radiosonde-derived estimates. Either nudging or daily restarts similarly improved the meteorology and GHG transport in our simulations, with a small advantage of using both methods in combination. However, notable differences in soil moisture were found that accumulated over the course of the simulation when not using frequent restarts. The soil moisture drift had an impact on the simulated PBLH, presumably via changing the Bowen ratio. This is partially mitigated through nudging without requiring daily restarts, although not entirely alleviated. Soil moisture drift did not have a noticeable impact on GHG performance in our case, likely because it was dominated by other errors. However, since the PBLH is critical for accurately simulating GHG transport, we recommend transport model setups that tie soil moisture to observations. Our method of frequently re-initializing simulations with meteorological reanalysis fields proved suitable for this purpose.
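Grid nudging is, in essence, a Newtonian relaxation term that pulls a model field toward the driving analysis on a timescale τ. The toy scalar integration below (not WRF's actual FDDA code; the drift term stands in for accumulating model error) shows why a nudged run stays close to the reanalysis while a free run drifts:

```python
import numpy as np

# Newtonian relaxation (grid nudging) in one scalar field:
# du/dt = drift + (u_era - u) / tau.
# With tau = inf the relaxation term vanishes and the field drifts freely.
def step(u, u_era, dt, tau, drift):
    return u + dt * (drift + (u_era - u) / tau)

u_free, u_nudged = 10.0, 10.0
u_era, dt, tau, drift = 10.0, 60.0, 3600.0, 1e-3   # toy SI-ish numbers
for _ in range(1000):
    u_free = step(u_free, u_era, dt, tau=np.inf, drift=drift)
    u_nudged = step(u_nudged, u_era, dt, tau, drift)

# The nudged field equilibrates near u_era + drift * tau = 13.6,
# while the free field has drifted to ~70.
```

Daily restarts achieve a similar effect by discarding the accumulated drift once per day instead of damping it continuously, which is why the two strategies performed similarly in the study.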
October 2024 · 10 Reads
Engaging ecological resource users in intervention to protect resources is challenging for governments due to the self-interest of users and uncertainty about intervention consequences. Focusing on a case of forest insect infestations, we addressed questions of resource protection and environmentally responsible behavior promotion with a conceptual model. We coupled a forest infestation model with a social model in which a governing agent applies a mechanism for the recognition and promotion of environmentally responsible behavior among several user agents. We ran the coupled model in various scenarios with a reinforcement-learning algorithm for the governing agent as well as best-case, worst-case, and random baselines. Results showed that a proper recognition policy facilitates emergence of environmentally responsible behavior. However, ecosystem health may deteriorate due to temporal differences between the social and ecological systems. Our work shows it is possible to gain insight about complexities of social–ecological systems with conceptual models through scenario analysis.
October 2024 · 38 Reads
We evaluate the vertical turbulent-kinetic-energy (TKE) mixing scheme of the NEMO-SI3 ocean–sea-ice model in sea-ice-covered regions of the Arctic Ocean. Specifically, we assess the parameters involved in TKE mixed-layer-penetration (MLP) parameterization. This ad hoc parameterization aims to capture processes that impact the ocean surface boundary layer, such as near-inertial oscillations, ocean swells, and waves, which are often not well represented in the default TKE scheme. We evaluate this parameterization for the first time in three regions of the Arctic Ocean: the Makarov, Eurasian, and Canada basins. We demonstrate the strong effect of the scaling parameter that accounts for the presence of sea ice. Our results confirm that TKE MLP must be scaled down below sea ice to avoid unrealistically deep mixed layers. The other parameters evaluated are the percentage of energy penetrating below the mixed layer and the length scale of its decay with depth. All these parameters affect mixed-layer depth and its seasonal cycle, surface temperature, and salinity, as well as underlying stratification. Shallow mixed layers are associated with stronger stratification and fresh surface anomalies, and deeper mixed layers correspond to weaker stratification and salty surface anomalies. Notably, we observe significant impacts on sea-ice thickness across the Arctic Ocean in two scenarios: when the scaling parameter due to sea ice is absent and when the TKE mixed-layer-penetration process vanishes. In the former case, we observe an increase of several meters in mixed-layer depth, along with a reduction in sea-ice thickness ranging from 30 to 40 cm, reflecting the impact of stronger mixing. Conversely, in the latter case, we notice that a shallower mixed layer is accompanied by a moderate increase in sea-ice thickness, ranging from 10 to 20 cm, as expected from weaker mixing. 
Additionally, interannual variability suggests that experiments incorporating a scaling parameter based on sea-ice concentration display an increased mixed-layer depth during periods of reduced sea ice, which is consistent with observed trends. These findings underscore the influence of enhanced ocean mixing, through specific parameterizations, on the physical properties of the upper ocean and sea ice.
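The parameterization under study injects a fraction of surface TKE below the mixed layer, decaying over a length scale, scaled down where sea ice is present. A schematic version, with the exact functional form and all values simplified relative to NEMO's scheme:

```python
import numpy as np

# Schematic TKE mixed-layer-penetration term: a fraction `frac` of surface TKE
# penetrates below the mixed layer, decays exponentially over length scale
# `hlc` (m), and is scaled down by sea-ice fraction. The linear (1 - ice)
# scaling and parameter values are illustrative, not NEMO-SI3's exact form.
def tke_penetration(z, e_surf, frac=0.05, hlc=30.0, ice_frac=0.0):
    ice_scaling = 1.0 - ice_frac
    return frac * ice_scaling * e_surf * np.exp(-z / hlc)

z = np.linspace(0.0, 200.0, 50)                      # depth below mixed layer
open_ocean = tke_penetration(z, e_surf=1e-4, ice_frac=0.0)
under_ice = tke_penetration(z, e_surf=1e-4, ice_frac=0.9)
```

With 90 % ice cover the injected energy is cut by a factor of ten at every depth, which is the mechanism preventing the unrealistically deep under-ice mixed layers the abstract describes.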
October 2024 · 185 Reads
Water scarcity is exacerbated by rising water use and climate change, yet state-of-the-art Earth system models typically do not represent human water demand. Here we present an enhancement to the Community Earth System Model (CESM) and its land (CLM5) and river (MOSART) components by introducing sectoral water abstractions. The new module enables a better understanding of water demand and supply dynamics across various sectors, including domestic, livestock, thermoelectric, manufacturing, mining, and irrigation. The module conserves water by integrating abstractions from the land component with river component flows and dynamically calculates daily water scarcity based on local demand and supply. Through land-only simulations spanning 1971–2010, we verify our model against known water scarcity hotspots, historical global water withdrawal trends, and regional variations in water use. Our findings show that non-irrigative sectoral consumption has an insignificant effect on regional climate, while emphasizing the importance of including all sectors for water scarcity assessment capabilities. Despite its advancements, the model's limitations, such as its exclusive focus on river water abstractions, highlight areas for potential future refinement. This research paves the way for a more holistic representation of human–water interactions in ESMs, aiming to inform sustainable water management decisions in an evolving global landscape.
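The module's core bookkeeping, summing sectoral demands, capping withdrawals at available river flow so that water is conserved, and diagnosing scarcity as unmet demand, can be sketched in a few lines (sector names follow the paper; the numbers and the single-reach simplification are invented):

```python
# Minimal daily water-balance sketch: sectoral demands are summed, withdrawal
# is capped by available river flow (conserving water), and scarcity is the
# unmet fraction of demand. Values are made up; the real module operates on
# gridded CLM5/MOSART fields.
demand = {"domestic": 2.0, "livestock": 0.5, "thermoelectric": 3.0,
          "manufacturing": 1.5, "mining": 0.3, "irrigation": 8.0}   # m3/s

def daily_scarcity(demand, river_supply):
    total = sum(demand.values())
    withdrawal = min(total, river_supply)    # never withdraw more than supply
    return {"withdrawal": withdrawal,
            "scarcity": 1.0 - withdrawal / total}

wet = daily_scarcity(demand, river_supply=20.0)   # supply exceeds demand
dry = daily_scarcity(demand, river_supply=6.0)    # supply-limited day
```

On the wet day scarcity is zero; on the dry day roughly 60 % of demand goes unmet, which is the kind of locally diagnosed, demand-and-supply-driven scarcity signal the module provides.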
October 2024 · 87 Reads
Process-based ecosystem models are increasingly important for predicting forest dynamics under future environmental conditions, which may encompass non-analogous climate coupled with unprecedented disturbance regimes. However, challenges persist due to the extensive number of model parameters, scarce calibration data, and trade-offs between the local precision and the applicability of the model over a wide range of environmental conditions. In this paper, we describe a protocol that allows a modeller to collect transferable ecosystem properties based on ecosystem characteristic criteria and to compile the parameters that need to be described in the field. We applied the procedure to develop a new parameterisation for European beech (Fagus sylvatica L.) for the Biome-BGCMuSo model, the most advanced member of the Biome-BGC family. For model calibration and testing, we utilised multiyear forest carbon data from 87 plots distributed across five European countries. The initial values of 48 new ecophysiological parameters were defined based on a literature review. The final values of six calibrated parameters were optimised for single sites as well as for multiple sites using generalised likelihood uncertainty estimation (GLUE) and model output conditioning that ensured plausible simulations based on user-defined ranges of carbon stock output variables (carbon stock in aboveground wood biomass, soil, and litter) and finding the intersections of site-specific plausible parameter hyperspaces. To support the model use, we tested the model performance by simulating aboveground tree wood, soil, and litter carbon across a large geographical gradient of central Europe and evaluated the trade-offs between parameters tailored to single plots and parameters estimated using multiple sites. 
Our findings indicated that parameter sets derived from single sites provided an improved local accuracy of simulations of aboveground wood, soil, and litter carbon stocks by 35 %, 55 %, and 11 % in comparison to the a priori parameter set. However, their broader applicability was very limited. A multi-site optimised parameter set, on the other hand, performed satisfactorily across the entire geographical domain studied here, including on sites not involved in the parameter estimation, but the errors were, on average, 26 %, 35 % and 9 % greater for the aboveground wood, soil, and litter carbon stocks than those obtained with the site-specific parameter sets. Importantly, model simulations demonstrated plausible responses across large-scale environmental gradients, featuring a clear production optimum of beech that aligns with empirical studies. These findings suggest that the model is capable of accurately simulating the dynamics of European beech across its range and can be used for more comprehensive experimentations.
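The GLUE-with-output-conditioning step described above amounts to sampling parameter sets, running the model, and retaining only "behavioral" sets whose outputs fall inside user-defined plausible ranges. A bare-bones sketch with a stand-in model (the paper conditions on several carbon-stock outputs at once and intersects site-specific plausible hyperspaces):

```python
import numpy as np

# GLUE-style conditioning: sample parameters, run a toy "model", and keep
# only parameter sets whose output lies in a user-defined plausible range.
rng = np.random.default_rng(2)

def toy_model(p1, p2):
    # Stand-in for a simulated carbon stock; the real model is Biome-BGCMuSo.
    return 100.0 * p1 + 20.0 * p2

samples = rng.uniform(0.0, 1.0, size=(5000, 2))     # sampled parameter sets
outputs = toy_model(samples[:, 0], samples[:, 1])

plausible = (outputs > 60.0) & (outputs < 80.0)     # user-defined output range
behavioral = samples[plausible]                     # retained parameter sets
```

The retained `behavioral` array is the plausible parameter hyperspace for this one output; intersecting such sets across sites yields multi-site parameters like those in the paper.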
October 2024 · 9 Reads
Effective observation of the ocean is vital for studying and assessing the state and evolution of the marine ecosystem and for evaluating the impact of human activities. However, obtaining comprehensive oceanic measurements across temporal and spatial scales and for different biogeochemical variables remains challenging. Autonomous oceanographic instruments, such as Biogeochemical (BGC)-Argo profiling floats, have helped expand our ability to obtain subsurface and deep-ocean measurements, but measuring biogeochemical variables, such as nutrient concentration, still remains more demanding and expensive than measuring physical variables. Therefore, developing methods to estimate marine biogeochemical variables from high-frequency measurements is very much needed. Current neural network (NN) models developed for this task are based on a multilayer perceptron (MLP) architecture, trained over point-wise pairs of input–output features. Although MLPs can produce smooth outputs if the inputs change smoothly, convolutional neural networks (CNNs) are inherently designed to handle profile data effectively. In this study, we present a novel one-dimensional (1D) CNN model to predict profiles leveraging the typical shape of vertical profiles of a variable as a prior constraint during training. In particular, the Predict Profiles Convolutional (PPCon) model predicts nitrate, chlorophyll, and backscattering (bbp700) starting from the date and geolocation and from temperature, salinity, and oxygen profiles. Its effectiveness is demonstrated using a robust BGC-Argo dataset collected in the Mediterranean Sea for training and validation. Results, which include quantitative metrics and visual representations, prove the capability of PPCon to produce smooth and accurate profile predictions improving upon previous MLP applications.
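The architectural point, that a 1D convolution sees a window of neighbouring depth levels and therefore produces smooth profile-shaped outputs, unlike a point-wise MLP, can be shown with the conv building block alone. This is just that building block in numpy, not the trained PPCon network:

```python
import numpy as np

# A 1D convolution slides a kernel along the vertical axis: each output level
# is a weighted combination of a window of neighbouring levels, which is why
# convolutional outputs are naturally smooth along depth.
def conv1d(profile, kernel):
    k = kernel.size
    pad = np.pad(profile, k // 2, mode="edge")    # edge-pad to keep length
    return np.array([np.dot(pad[i:i + k], kernel)
                     for i in range(profile.size)])

depth = np.linspace(0.0, 200.0, 41)
noisy = np.exp(-depth / 80.0) + 0.05 * np.sin(depth)   # noisy input profile
smooth = conv1d(noisy, np.ones(5) / 5.0)               # simple averaging kernel
```

In the full model the kernels are learned and stacked into a network mapping temperature, salinity, and oxygen profiles (plus date and location) to nitrate, chlorophyll, and backscattering profiles.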
October 2024 · 64 Reads
Downscaling global climate models (GCMs) provides crucial high-resolution data needed for informed decision-making at regional scales. However, there is no uniform approach to select the most suitable GCMs. Over Southeast Asia (SEA), observations are sparse and have large uncertainties, complicating GCM selection, especially for rainfall. To guide this selection, we apply a standardised benchmarking framework to select CMIP6 GCMs for dynamical downscaling over SEA, addressing current observational limitations. This framework identifies fit-for-purpose models through a two-step process: (a) selecting models that meet minimum performance requirements in simulating the fundamental characteristics of rainfall (e.g. bias, spatial pattern, annual cycle and trend) and (b) selecting models from (a) to further assess whether key precipitation drivers (monsoon) and teleconnections from modes of variability are captured, i.e. the El Niño–Southern Oscillation (ENSO) and Indian Ocean Dipole (IOD). GCMs generally exhibit wet biases, particularly over the complex terrain of the Maritime Continent. Evaluations from the first step identify 19 out of 32 GCMs that meet our minimum performance expectations in simulating rainfall. These models also consistently capture atmospheric circulations and teleconnections with modes of variability over the region but overestimate their strength. Ultimately, we identify eight GCMs meeting our performance expectations. Several of these high-performing GCMs come from allied modelling groups, highlighting the interdependency of the subset of models identified by the framework. Therefore, further tests of model independence, data availability and future climate change spread are conducted, resulting in a final subset of two independent models that align with our a priori expectations for downscaling over the Coordinated Regional Climate Downscaling Experiment – Southeast Asia (CORDEX-SEA).
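The two-step filter is easy to express as nested selection predicates. The sketch below uses invented model names, scores, and thresholds purely to show the structure (step 1: minimum rainfall performance; step 2: drivers and teleconnections):

```python
# Two-step benchmarking filter, schematically. Model names, scores, and
# thresholds are hypothetical, not the study's actual metrics.
models = {
    "GCM-A": {"rain_bias": 0.8, "pattern_corr": 0.85, "enso_corr": 0.7},
    "GCM-B": {"rain_bias": 2.5, "pattern_corr": 0.60, "enso_corr": 0.8},
    "GCM-C": {"rain_bias": 0.5, "pattern_corr": 0.90, "enso_corr": 0.2},
}

def select(models, max_bias=1.0, min_pattern=0.7, min_tele=0.5):
    # Step (a): minimum performance on fundamental rainfall characteristics.
    step1 = {m for m, s in models.items()
             if abs(s["rain_bias"]) <= max_bias and s["pattern_corr"] >= min_pattern}
    # Step (b): survivors must also capture key teleconnections (e.g. ENSO).
    step2 = {m for m in step1 if models[m]["enso_corr"] >= min_tele}
    return step1, step2

step1, step2 = select(models)
```

In the study this pattern narrows 32 GCMs to 19 after step (a) and to 8 after step (b), before the final independence and data-availability checks.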
October 2024
·
19 Reads
Monitoring, reporting, and verification frameworks for greenhouse gas emissions are being developed by countries across the world to keep track of progress towards national emission reduction targets. Data assimilation plays an important role in monitoring frameworks, combining different sources of information to achieve the best possible estimate of fossil fuel emissions and, as a consequence, better estimates for fluxes from the natural biosphere. Robust estimates for fossil fuel emissions rely on accurate estimates of uncertainties corresponding to different pieces of information. We describe prior uncertainties in CO2 and CO fossil fuel fluxes, paying special attention to spatial error correlations and the covariance structure between CO2 and CO. This represents the first time that prior uncertainties in CO2 and the important co-emitted trace gas CO are defined consistently, with error correlations included, which allows us to make use of the synergy between the two trace gases to better constrain CO2 fossil fuel fluxes. CO:CO2 error correlations differ by sector, depending on the diversity of sub-processes occurring within a sector, and also show a large range of values between pixels within the same sector. For example, for other stationary combustion, pixel correlation values range from 0.1 to 1.0, whereas for road transport, the correlation is mostly larger than 0.6. We illustrate the added value of our definition of prior uncertainties using closed-loop numerical experiments over mainland Europe and the UK, which isolate the influence of using error correlations between CO2 and CO and the influence of prescribing more detailed information about prior emission uncertainties. For the experiments, synthetic in situ observations are used, allowing us to validate the results against a “truth”. The “true” emissions are generated by perturbing the prior emissions (from an emission inventory) according to the prescribed prior uncertainties.
We find that using our realistic definition of prior uncertainties helps our data assimilation system to differentiate more easily between CO2 fluxes from biogenic and fossil fuel sources. Using improved prior emission uncertainties, we find fewer geographic regions with significant deviations from the prior compared to when using default prior uncertainties (32 vs. 80 grid cells of 0.25°×0.3125°, with an absolute difference of more than 1 kg s−1 between the prior and posterior), but these deviations from the prior almost always move closer to the prescribed true values, with 92 % showing an improvement, in contrast to the default prior uncertainties, where 61 % show an improvement. We also find that using CO provides additional information on CO2 fossil fuel fluxes, but this is only the case if the CO:CO2 error covariance structure is defined realistically. Using the default prior uncertainties, the CO2 fossil fuel fluxes move farther away from the truth in many geographical regions (with 50 % showing an improvement compared to 94 % when advanced prior uncertainties are used). With the default uncertainties, the maximum deviation of fossil fuel CO2 from the prescribed truth is about 7 % in both the prior and posterior results. With the advanced uncertainties, this is reduced to 3 % in the posterior results.
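The CO:CO2 error covariance structure described above can be sketched for a single pixel: given per-species flux uncertainties and a sector-dependent error correlation, the joint 2×2 prior error covariance follows directly. The numerical values are illustrative, not the study's actual uncertainty products:

```python
def joint_covariance(sigma_co2, sigma_co, rho):
    """2x2 prior error covariance for (CO2, CO) fluxes in one pixel,
    given per-species uncertainties sigma and an error correlation
    rho between the two species."""
    cov = rho * sigma_co2 * sigma_co
    return [[sigma_co2 ** 2, cov],
            [cov, sigma_co ** 2]]

# Illustrative values for a road-transport pixel, where the study
# reports CO:CO2 error correlations mostly larger than 0.6
C = joint_covariance(sigma_co2=2.0, sigma_co=0.5, rho=0.8)
```

A high off-diagonal term is what lets CO observations constrain CO2 fossil fuel fluxes; with `rho = 0`, the two species would be estimated independently.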
October 2024
·
40 Reads
The reduction of in situ observations over the last few decades poses a potential risk of losing important information in regions where local effects dominate the climatology. Reanalyses face challenges in representing climatologies with highly localized effects, especially in regions with complex orography. Empirical downscaling methods offer a cost-effective and easier-to-implement alternative to dynamic downscaling methods and can partially overcome the aforementioned limitations of reanalyses by taking into account the local effects through statistical relationships. This article introduces RASCAL (Reconstruction by AnalogS of ClimatologicAL time series), an open-source Python tool designed to extend time series and fill gaps in observational climate data, especially in regions with limited long-term data and significant local effects, such as mountainous areas. Employing an object-oriented programming style, RASCAL's methodology effectively links large-scale circulation patterns with local atmospheric features using the analog method in combination with principal component analysis (PCA). The package contains routines for preprocessing observations and reanalysis data, generating reconstructions using various methods, and evaluating the reconstruction's performance in reproducing the time series of observations, statistical properties, and relevant climatic indices. Its high modularity and flexibility allow fast and reproducible downscaling. The evaluations carried out in central Spain, in mountainous and urbanized areas, demonstrate that RASCAL performs better than the ERA20C and ERA20CM reanalyses, as expected, in terms of R², standard deviation, and bias. When analyzing reconstructions against observations, RASCAL generates series with statistical properties, such as seasonality and daily distributions, that closely resemble observations. This confirms the potential of this method for conducting robust climate research.
The adaptability of RASCAL to diverse scientific objectives is also highlighted. However, as with any other method based on empirical training, this method requires the availability of sufficiently long-term data series. Furthermore, it is susceptible to disruption caused by changes in land use or urbanization processes that might compromise the homogeneity of the training data. Despite these limitations, RASCAL's positive outcomes offer opportunities for comprehensive climate variability analyses and potential applications in downscaling short-term forecasts, seasonal predictions, and climate change scenarios. The Python code and the Jupyter Notebook for the reconstruction validation are publicly available as an open project.
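The analog method at the core of this approach can be sketched in a few lines: find the historical day whose large-scale predictors (e.g. leading principal components) are closest to the target day, then take the local observation from that day. The data below are invented for illustration and do not reproduce RASCAL's actual routines:

```python
def best_analog(target, library):
    """Return the index of the library day whose large-scale predictor
    vector is closest (Euclidean distance) to the target day's."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(range(len(library)), key=lambda i: dist(library[i], target))

# Illustrative predictor vectors (e.g. two leading PCA components)
library = [[0.1, -1.2], [0.9, 0.3], [-0.5, 0.4]]
local_obs = [2.1, 5.6, 3.3]   # local station values on the library days

i = best_analog([0.8, 0.2], library)
reconstructed = local_obs[i]  # value inherited from the closest analog day
```

Because the reconstructed value is always a real observed value from a physically similar day, the method preserves local statistical properties such as daily distributions, which is why analog reconstructions compare well against observations.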
October 2024
·
32 Reads
Snowpacks modulate water storage over extended land regions and at the same time play a central role in the surface albedo feedback, impacting the climate system energy balance. Despite the complexity of snow processes and their importance for both land hydrology and global climate, several state-of-the-art land surface models and Earth system models still employ relatively simple descriptions of the snowpack dynamics. In this study, we present a newly developed snow scheme tailored to the Geophysical Fluid Dynamics Laboratory (GFDL) land model version 4.1. This new snowpack model, named GLASS (Global LAnd–Snow Scheme), includes a refined and dynamical vertical-layering snow structure that allows us to track the temporal evolution of snow grain properties in each snow layer, while at the same time limiting the model computational expense, as is necessary for a model suited to global-scale climate simulations. In GLASS, the evolution of snow grain size and shape is explicitly resolved, with implications for predicted bulk snow properties, as they directly impact snow depth, snow thermal conductivity, and optical properties. Here we describe the physical processes in GLASS and their implementation, as well as the interactions with other surface processes and the land–atmosphere coupling in the GFDL Earth System Model. The performance of GLASS is tested over 10 experimental sites, where in situ observations allow for a comprehensive model evaluation. We find that when compared to the current GFDL snow model, GLASS improves predictions of seasonal snow water equivalent, primarily as a consequence of improved snow albedo. The simulated soil temperature under the snowpack also improves by about 1.5 K on average across the sites, while a negative bias of about 1 K in snow surface temperature is observed.
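One reason tracking layer properties matters is that bulk snow thermal conductivity depends strongly on density. As a sketch of that dependence, here is one widely used empirical regression (Sturm et al., 1997); this is a common parameterization in snow modelling, not necessarily the exact form implemented in GLASS:

```python
def snow_thermal_conductivity(rho_g_cm3):
    """Effective snow thermal conductivity (W m-1 K-1) as a function
    of snow density (g cm-3), using the Sturm et al. (1997) empirical
    regression. A common parameterization, shown for illustration."""
    if rho_g_cm3 < 0.156:
        return 0.023 + 0.234 * rho_g_cm3
    return 0.138 - 1.01 * rho_g_cm3 + 3.233 * rho_g_cm3 ** 2

k_fresh = snow_thermal_conductivity(0.1)  # low-density fresh snow
k_dense = snow_thermal_conductivity(0.4)  # settled, denser snow
```

Denser, older layers conduct heat several times better than fresh snow, so a model that resolves per-layer density evolution will simulate a different soil temperature under the snowpack than one using a single bulk value.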
September 2024
·
32 Reads
The Weather Research and Forecasting (WRF) double-moment 6-class (WDM6) scheme was modified by incorporating predicted graupel density. Explicitly predicted graupel density, in turn, modifies graupel characteristics such as the fall velocity–diameter and mass–diameter relationships of graupel. The modified WDM6 has been evaluated based on a two-dimensional (2D) idealized squall line simulation and winter snowfall events that occurred during the International Collaborative Experiment for Pyeongchang Olympics and Paralympics (ICE-POP 2018) field campaign over the Korean Peninsula. From the 2D simulation, we confirmed that the modified WDM6 can simulate varying graupel densities, ranging from low values in an anvil cloud region to high values in the convective region at the mature stage of a squall line. For two winter snowfall cases during the ICE-POP 2018 campaign, simulations with the modified WDM6 increased graupel amounts at the surface and decreased graupel aloft because of faster graupel sedimentation, consistent with the 2D idealized simulation. The altered graupel sedimentation in the modified WDM6 influenced the magnitude of the major microphysical processes of graupel and snow, subsequently reducing the surface snow amount and precipitation over the mountainous region. The reduced surface precipitation over the mountainous region mitigates the surface precipitation bias observed in the original WDM6, resulting in better statistical skill scores for the root mean square errors. Notably, the modified WDM6 reasonably captures the relationship between graupel density and its fall velocity, as retrieved from 2D video disdrometer measurements, thus emphasizing the necessity of including predicted graupel density to realistically represent the microphysical properties of graupel in models.
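The link between graupel density and sedimentation can be illustrated with a schematic power-law fall-speed relation in which the coefficient scales with density. All coefficients and exponents below are illustrative placeholders, not the WDM6 relationships:

```python
def graupel_fall_speed(diameter_m, rho_g, rho_ref=500.0,
                       a_ref=100.0, b=0.8):
    """Schematic fall velocity-diameter relation v = a * D**b, with
    the coefficient a scaled by graupel density relative to a
    reference. Coefficients are illustrative, not WDM6 values."""
    a = a_ref * (rho_g / rho_ref) ** 0.5
    return a * diameter_m ** b

v_low = graupel_fall_speed(0.002, rho_g=200.0)   # low-density graupel
v_high = graupel_fall_speed(0.002, rho_g=700.0)  # high-density graupel
```

The qualitative behaviour matches the abstract's finding: denser graupel falls faster, moving mass from aloft to the surface and reshaping the downstream microphysical process rates.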
September 2024
·
331 Reads
·
1 Citation
Accurate hydrologic modeling is vital to characterizing how the terrestrial water cycle responds to climate change. Pure deep learning (DL) models have been shown to outperform process-based ones while remaining difficult to interpret. More recently, differentiable physics-informed machine learning models with a physical backbone can systematically integrate physical equations and DL, predicting untrained variables and processes with high performance. However, it is unclear if such models are competitive for global-scale applications with a simple backbone. Therefore, we use – for the first time at this scale – differentiable hydrologic models (full name δHBV-globe1.0-hydroDL, shortened to δHBV here) to simulate the rainfall–runoff processes for 3753 basins around the world. Moreover, we compare the δHBV models to a purely data-driven long short-term memory (LSTM) model to examine their strengths and limitations. Both LSTM and the δHBV models provide competitive daily hydrologic simulation capabilities in global basins, with median Kling–Gupta efficiency values close to or higher than 0.7 (and 0.78 with LSTM for a subset of 1675 basins with long-term discharge records), significantly outperforming traditional models. Moreover, regionalized differentiable models demonstrated stronger spatial generalization ability (median KGE 0.64) than a traditional parameter regionalization approach (median KGE 0.46) and even LSTM for ungauged region tests across continents. Nevertheless, relative to LSTM, the differentiable model was hampered by structural deficiencies for cold or polar regions, highly arid regions, and basins with significant human impacts. This study also sets the benchmark for hydrologic estimates around the world and builds a foundation for improving global hydrologic simulations.
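The Kling–Gupta efficiency (KGE) used to compare these models is a standard hydrologic skill score (Gupta et al., 2009) combining correlation, variability ratio, and bias ratio; a minimal implementation:

```python
import math

def kge(sim, obs):
    """Kling-Gupta efficiency:
    KGE = 1 - sqrt((r - 1)**2 + (alpha - 1)**2 + (beta - 1)**2),
    where r is the linear correlation, alpha the ratio of standard
    deviations, and beta the ratio of means (simulated / observed).
    KGE = 1 indicates a perfect simulation."""
    n = len(sim)
    mu_s, mu_o = sum(sim) / n, sum(obs) / n
    sd_s = (sum((x - mu_s) ** 2 for x in sim) / n) ** 0.5
    sd_o = (sum((x - mu_o) ** 2 for x in obs) / n) ** 0.5
    r = sum((s - mu_s) * (o - mu_o)
            for s, o in zip(sim, obs)) / (n * sd_s * sd_o)
    alpha, beta = sd_s / sd_o, mu_s / mu_o
    return 1.0 - math.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

perfect = kge([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])  # exactly matching series
```

A median KGE near 0.7 across thousands of basins, as reported above, indicates simulations that simultaneously capture timing, variability, and water balance reasonably well.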
September 2024
·
27 Reads
·
1 Citation
Lightning is an important atmospheric process for generating reactive nitrogen, resulting in the production of tropospheric ozone, as well as igniting wildland fires, which result in potentially large emissions of many pollutants and short-lived climate forcers. Lightning is also expected to change in frequency and location with the changing climate. As such, lightning is an important component of Earth system models. Until now, the Canadian Earth System Model (CanESM) did not contain an interactive-lightning parameterization. The fire parameterization in CanESM5.1 was designed to use prescribed monthly climatological lightning. In this study, we have added a logistic regression lightning model that predicts lightning occurrence interactively based on three environmental variables and their interactions in CanESM5.1's atmospheric model, CanAM5.1 (Canadian Atmospheric Model), creating the capacity to interactively model lightning and allowing for future projections under different climate scenarios. The modelled lightning and resulting burned area were evaluated against satellite measurements over the historical period, and model biases were found to be acceptable. Modelled lightning had a small negative bias and an excellent land–ocean ratio compared to satellite measurements. The modified version of CanESM5.1 was used to simulate two future climate scenarios (SSP2-4.5 and SSP5-8.5; Shared Socioeconomic Pathway) to assess how lightning and burned area change in the future. Under the higher-emissions scenario (SSP5-8.5), CanESM5.1 predicts almost no change to the global mean lightning flash rate by the end of the century (2081–2100 vs. 2015–2035 average). However, there are substantial regional changes to lightning – particularly over land – such as a mean increase of 6 % in the northern mid-latitudes and a mean decrease of 8 % in the tropics.
By the century's end, the change in global total burned area with prescribed climatological lightning was about 2 times greater than that with interactive lightning (42 % vs. 26 % increase, respectively). Conversely, in the northern mid-latitudes the use of interactive lightning resulted in 3 times more burned area compared to that with unchanging lightning (48 % vs. 16 % increase, respectively). These results show that the future changes to burned area are greatly dependent on a model's lightning scheme, both spatially and overall.
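The structure of a logistic regression occurrence model of this kind is simple: a linear combination of environmental predictors (plus interaction terms) passed through a sigmoid. The predictor names, weights, and bias below are hypothetical, not the fitted CanAM5.1 coefficients:

```python
import math

def lightning_probability(features, weights, bias):
    """Logistic regression for lightning occurrence: a linear
    combination of predictors mapped to (0, 1) by a sigmoid.
    Weights here are illustrative, not fitted model values."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Three hypothetical environmental predictors plus one interaction term
cape, precip, cold_cloud = 1.2, 0.8, 0.5
features = [cape, precip, cold_cloud, cape * precip]
p = lightning_probability(features,
                          weights=[1.0, 0.5, 0.8, 0.3], bias=-2.0)
```

Because the predictors are prognostic atmospheric fields, the occurrence probability responds to the simulated climate, which is what makes the scheme interactive rather than climatological.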
September 2024
·
58 Reads
Methane (CH4) cycling in the Baltic Sea is studied through model simulations that incorporate the stable isotopes of CH4 (12C–CH4 and 13C–CH4) in a physical–biogeochemical model. A major uncertainty is that spatial and temporal variations in the sediment source are not well known. Furthermore, the coarse spatial resolution prevents the model from resolving shallow-water near-shore areas for which measurements indicate occurrences of considerably higher CH4 concentrations and emissions compared with the open Baltic Sea. A preliminary CH4 budget for the central Baltic Sea (the Baltic Proper) identifies benthic release as the dominant CH4 source, which is largely balanced by oxidation in the water column and to a smaller degree by outgassing. The contributions from river loads and lateral exchange with adjacent areas are of marginal importance. Simulated total CH4 emissions from the Baltic Proper correspond to an average ∼1.5 mmol CH4 m−2 yr−1, which can be compared to a fitted sediment source of ∼18 mmol CH4 m−2 yr−1. A large-scale approach is used in this study, but the parameterizations and parameters presented here could also be implemented in models of near-shore areas where CH4 concentrations and fluxes are typically substantially larger and more variable. Currently, it is not known how important local shallow-water CH4 hotspots are compared with the open water outgassing in the Baltic Sea.
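Carrying the two stable isotopologues (12C–CH4 and 13C–CH4) lets a model diagnose the conventional δ13C signature, which discriminates between CH4 sources and records oxidation. A minimal sketch of that diagnostic (the sample concentrations are invented; the VPDB standard ratio is the accepted reference value):

```python
R_VPDB = 0.0112372  # 13C/12C isotope ratio of the VPDB standard

def delta13c(c13, c12):
    """delta-13C (per mil) of a CH4 sample from its 13C-CH4 and
    12C-CH4 concentrations, relative to the VPDB standard:
    delta = (R_sample / R_VPDB - 1) * 1000."""
    return (c13 / c12 / R_VPDB - 1.0) * 1000.0

# Illustrative concentrations giving a biogenic-like CH4 signature
d = delta13c(c13=1.05e-2, c12=1.0)
```

Microbial (sedimentary) CH4 is strongly depleted in 13C, while oxidation in the water column enriches the residual CH4, so tracking both isotopologues constrains the relative roles of benthic release and oxidation in the budget.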
September 2024
·
76 Reads
Evapotranspiration (ET) is a crucial flux of the hydrological water balance, commonly estimated using (semi-)empirical formulas. The estimated flux may strongly depend on the formula used, adding uncertainty to the outcomes of environmental studies using ET. Climate change may cause additional uncertainty, as the ET estimated by each formula may respond differently to changes in meteorological input data. To include the effects of model uncertainty and climate change and facilitate the use of these formulas in a consistent, tested, and reproducible workflow, we present PyEt. PyEt is an open-source Python package for the estimation of daily potential evapotranspiration (PET) using available meteorological data. It allows the application of 20 different PET methods on both time series and gridded datasets. The majority of the implemented methods are benchmarked against literature values and tested with continuous integration to ensure the correctness of the implementation. This article provides an overview of PyEt's capabilities, including the estimation of PET with 20 PET methods for station and gridded data, a simple procedure for calibrating the empirical coefficients in the alternative PET methods, and estimation of PET under warming and elevated atmospheric CO2 concentration. Further discussion on the advantages of using PyEt estimates as input for hydrological models, sensitivity and uncertainty analyses, and hindcasting and forecasting studies (especially in data-scarce regions) is provided.
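To make the formula dependence concrete, here is one of the simplest temperature-based PET formulas in the literature (Oudin et al., 2005), shown as a generic sketch; PyEt's own implementations and APIs are not reproduced here:

```python
def pet_oudin(t_mean_c, ra_mj_m2_day):
    """Daily potential evapotranspiration (mm/day) with the simple
    temperature-based Oudin formula:
    PET = Ra / (lambda * rho_w) * (T + 5) / 100 for T + 5 > 0, else 0,
    with Ra the extraterrestrial radiation (MJ m-2 day-1). One of many
    (semi-)empirical PET formulas; shown for illustration only."""
    lam, rho_w = 2.45, 1000.0  # MJ/kg latent heat; kg/m3 water density
    if t_mean_c + 5.0 <= 0.0:
        return 0.0
    return 1000.0 * ra_mj_m2_day / (lam * rho_w) * (t_mean_c + 5.0) / 100.0

pet_summer = pet_oudin(t_mean_c=20.0, ra_mj_m2_day=30.0)  # about 3 mm/day
pet_cold = pet_oudin(t_mean_c=-10.0, ra_mj_m2_day=10.0)   # zero below cutoff
```

Each empirical formula weights temperature, radiation, humidity, and wind differently, which is exactly why the estimated flux, and its response to climate change, depends so strongly on the formula chosen.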
September 2024
·
39 Reads
The Marine Ice Sheet–Ocean Model Intercomparison Project – phase 2 (MISOMIP2) is a natural progression of previous and ongoing model intercomparison exercises that have focused on the simulation of ice-sheet and ocean processes in Antarctica. The previous exercises motivate the move towards realistic configurations, as well as more diverse model parameters and resolutions. The main objective of MISOMIP2 is to investigate the performance of existing ocean and coupled ice-sheet–ocean models in a range of Antarctic environments through comparisons to observational data. We will assess the status of ice-sheet–ocean modelling as a community and identify common characteristics of models that are best able to capture observed features. As models are highly tuned based on present-day data, we will also compare their sensitivity to prescribed abrupt atmospheric perturbations leading to either very warm or slightly warmer ocean conditions compared to the present day. The approach of MISOMIP2 is to welcome contributions of models as they are, including global and regional configurations, but we request standardized variables and common grids for the outputs. We target the analysis at two specific regions, the Amundsen Sea and the Weddell Sea, since they describe two different ocean environments and have been relatively well observed compared to other areas of Antarctica. An observational “MIPkit” synthesizing existing ocean and ice-sheet observations for a common period is provided to evaluate ocean and ice-sheet models in these two regions.