Victor Venema’s research while affiliated with University of Bonn and other places


Publications (107)


Figure: Average monthly precipitation of the three climates analysed (a) and correlation-distance scatter plots of these networks (b), calculated with the first differences of the series.
Figure: Correlation-distance scatter plots of the four master networks Tm1 (a), Tm2 (b), Tm3 (c) and Tr2 (d) of monthly temperature, with correlations calculated on the first differences of the series.
Figure: Examples of shifts applied to obtain the problem series in the first five temperature experiments: (a) one or two large shifts in the first five series; (b) as (a) but with an added strong sinusoidal seasonal variation; (c) large short-term biases and local trends in the first five series; (d) a random number of biases of random magnitude and location in all 10 series; (e) as (d) but with sinusoidal seasonal variation of random amplitude.
Figure: Data availability after a partial deletion in one of the samples of 40 series extracted from the master network Tr2: (a) dark segments indicate the presence of data; (b) total data availability over time. The last four series are free from missing data to ensure available references to infill the gaps in the other series.
Figure: RMSE (left column: a, c, e; in mm) and trend errors (right column: b, d, f; in mm per 100 years) of the homogenization of the three precipitation networks: Atlantic temperate (top row: a, b), Mediterranean (middle row: c, d) and monsoonal (bottom row: e, f). Fixed scale for better comparison; outliers may lie outside the graphic limits.

Homogenization of monthly series of temperature and precipitation: Benchmarking results of the MULTITEST project
  • Article

April 2023 · 149 Reads · 13 Citations · José A. López · [...]

The homogenization of climate observational series is a necessary step before any study of their internal variability can be undertaken with confidence, since changes in the observation methods or in the surroundings of the observatories, for instance, can introduce biases in the data of the same order of magnitude as the underlying climate variations and trends. Many methods have been proposed to remove unwanted perturbations from climatic series, and some of them have been implemented in software packages freely available on the Internet. The Spanish project MULTITEST was intended to test their performance in an automated way with synthetic monthly series of air temperature and atmospheric precipitation, updating the inter-comparison results of former projects, especially those of the COST Action ES0601. Several networks representing different climates and station densities were used to test a variety of homogenization packages on hundreds of random samples. Results were evaluated mainly in terms of root mean squared errors and errors in the trends of the series, showing that ACMANT, followed by Climatol, minimized these errors. However, other packages also performed relatively well, even outperforming them when there were simultaneous biases of the same sign in most or all of the test series.
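As a rough illustration of the two evaluation metrics named above, here is a minimal sketch of how the RMSE and the trend error of a homogenized series can be computed against the known truth of a synthetic benchmark. The function name and the monthly time step are assumptions for illustration, not code from the MULTITEST project.

```python
import numpy as np

def benchmark_scores(true_series, homogenized_series, steps_per_year=12):
    """Hypothetical benchmark scoring: RMSE of the homogenized series
    against the known 'true' series, plus the error in the fitted
    linear trend, expressed per 100 years (monthly data assumed)."""
    truth = np.asarray(true_series, dtype=float)
    hom = np.asarray(homogenized_series, dtype=float)
    rmse = np.sqrt(np.mean((hom - truth) ** 2))

    t = np.arange(truth.size) / steps_per_year       # time axis in years
    trend_true = np.polyfit(t, truth, 1)[0]          # units per year
    trend_hom = np.polyfit(t, hom, 1)[0]
    trend_error = (trend_hom - trend_true) * 100.0   # units per 100 years
    return rmse, trend_error
```

In a benchmark of this kind, both scores would then be aggregated over hundreds of random samples per network, as described in the abstract.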


Insights from 20 years of temperature parallel measurements in Mauritius around the turn of the 20th century

April 2022 · 49 Reads · 2 Citations

There is considerable import in creating more complete, better understood holdings of early meteorological data. Such data permit an improved understanding of climate variability and long-term changes. Early records are particularly incomplete in the tropics, with implications for estimates of global and regional temperature. There is also a relatively low level of scientific understanding of how these early measurements were made and, as a result, of their homogeneity and comparability to more modern techniques and measurements. Herein we describe and analyse a newly rescued set of long-term, up to six-way parallel measurements undertaken over 1884–1903 in Mauritius, an island situated in the southern Indian Ocean. Data include (i) measurements from a well-ventilated room, (ii) a shaded thermograph, (iii) instruments housed in a manner broadly equivalent to a modern Stevenson screen, (iv) a set of measurements by a hygrometer mounted in a Stevenson screen, and for a much shorter period (v) two additional Stevenson screen configurations. All measurements were undertaken within an ∼80 m radius of each other. To our knowledge this is the first such multidecadal multi-instrument assessment of meteorological instrument transition impacts ever undertaken, providing potentially unique insights. The intercomparison also considers the impact of different ways of deriving daily and monthly averages. The long-term comparison is sufficient to robustly characterize systematic offsets between all the instruments and seasonally varying impacts. Differences between all techniques range from tenths of a degree Celsius to more than 1 °C and are considerably larger for maximum and minimum temperatures than for means or averages. Systematic differences of several tenths of a degree Celsius also exist for the different ways of deriving average and mean temperatures. All differences, except two average temperature series pairs, are significant at the 0.01 level using a paired t-test. Given that all thermometers were regularly calibrated against a primary Kew standard thermometer maintained by the observatory, this analysis highlights significant impacts of instrument exposure, housing, siting, and measurement practices in early meteorological records. These results reaffirm the importance of thoroughly assessing the homogeneity of early meteorological records.
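The systematic offsets and the paired t-test mentioned above lend themselves to a compact sketch. The helper below is illustrative only (the function name and the handling of gaps are assumptions), but it shows the basic paired comparison of two co-located series.

```python
import numpy as np
from scipy import stats

def compare_parallel_series(series_a, series_b):
    """Paired comparison of two co-located temperature series:
    returns the mean offset (systematic difference) and the
    p-value of a paired t-test on the common valid time steps."""
    a = np.asarray(series_a, dtype=float)
    b = np.asarray(series_b, dtype=float)
    valid = ~np.isnan(a) & ~np.isnan(b)   # compare shared months only
    t_stat, p_value = stats.ttest_rel(a[valid], b[valid])
    return (a[valid] - b[valid]).mean(), p_value
```

A p-value below 0.01 would correspond to the significance threshold used in the study; note that a plain paired t-test ignores autocorrelation, which a more careful analysis would need to account for.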


Insights from 20 years of temperature parallel measurements in Mauritius around the turn of the 20th Century

October 2021 · 228 Reads

There is considerable import in creating more complete, better understood holdings of early meteorological data. Such data permit an improved understanding of climate variability and long-term changes. Early records are particularly incomplete in the tropics, with implications for estimates of global and regional temperature. There is also a relatively low level of scientific understanding of how these measurements were made and, as a result, of their homogeneity and comparability to more modern techniques and measurements. Herein we describe and analyse a newly rescued set of long-term, up to six-way parallel measurements undertaken over 1884–1903 in Mauritius, an island situated in the southern Indian Ocean. Data include: (i) measurements from a well-ventilated room; (ii) a shaded thermograph; (iii) instruments housed in a manner broadly equivalent to a modern Stevenson screen; (iv) a set of measurements by a hygrometer mounted in a Stevenson screen; and, for a much shorter period, (v) two additional Stevenson screen configurations. All measurements were undertaken within a roughly 80-metre radius. To our knowledge this is the first such multidecadal multi-instrument assessment of meteorological instrument transition impacts ever undertaken, providing potentially unique insights. The intercomparison also considers the impact of different ways of deriving daily and monthly averages. The long-term comparison is sufficient to robustly characterise systematic offsets between all the instruments and seasonally varying impacts. Differences between all techniques range from tenths of a degree Celsius to in excess of a degree Celsius and are considerably larger for maximum and minimum temperatures than for means or averages. Systematic differences of several tenths of a degree also exist for the different ways of deriving average/mean temperatures. All differences bar two average temperature series pairs are significant at the 0.01 level using a paired t-test. Given that all thermometers were regularly calibrated against a primary Kew standard thermometer, this analysis highlights significant impacts of instrument exposure, housing, siting and measurement practices in early meteorological records. These results reaffirm the importance of thoroughly assessing the homogeneity of early meteorological records.


Multi‐objective downscaling of precipitation time series by genetic programming

June 2021 · 55 Reads · 2 Citations

We use symbolic regression to estimate daily precipitation amounts at six stations in the Alpine region from a global reanalysis. Symbolic regression only prescribes the set of mathematical expressions allowed in the regression model, but not its structure. The regression models are generated by genetic programming (GP) in analogy to biological evolution. The two conflicting objectives of a low root-mean-square error (RMSE) and consistency in the distribution between model and observations, measured by the integrated quadratic distance (IQD), are treated as a multi-objective optimization problem. This allows us to derive a set of downscaling models that represents different achievable trade-offs between the two conflicting objectives, a so-called Pareto set. Our GP setup limits the size of the regression models and uses an analytical quotient instead of a standard or protected division operator. With this setup we obtain models whose generalization performance is comparable with that of generalized linear regression models (GLMs), which are used as a benchmark. We generate deterministic and stochastic downscaling models with GP. The deterministic downscaling models with low RMSE outperform the respective stochastic models. The stochastic models with low IQD, however, perform slightly better than the respective deterministic models in the majority of cases. No approach is uniquely superior. The stochastic models with optimal IQD provide useful distribution estimates that capture the stochastic uncertainty similarly to, or slightly better than, the GLM-based downscaling.
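The analytical quotient mentioned in the abstract is a standard genetic-programming operator (Ni et al., 2013) that replaces division: AQ(a, b) = a / sqrt(1 + b²). Unlike protected division, it is smooth everywhere and cannot blow up when the denominator crosses zero. A minimal sketch:

```python
import numpy as np

def analytic_quotient(a, b):
    """Analytic quotient AQ(a, b) = a / sqrt(1 + b**2): a smooth,
    singularity-free substitute for division in GP expression trees."""
    return a / np.sqrt(1.0 + b ** 2)
```

Using AQ in the GP function set avoids the pathological extrapolation behaviour that protected division (returning, say, 1 when the denominator is 0) tends to produce.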


Efficiency of Time Series Homogenization: Method Comparison with 12 Monthly Temperature Test Datasets

January 2021 · 131 Reads · 29 Citations · Journal of Climate

The aim of time series homogenization is to remove non-climatic effects, such as changes in station location, instrumentation, observation practices, etc., from observed data. Statistical homogenization usually reduces the non-climatic effects but does not remove them completely. In the Spanish MULTITEST project, the efficiencies of automatic homogenization methods were tested on large benchmark datasets with a wide range of statistical properties. In this study, test results for 9 versions of 5 homogenization methods (ACMANT, Climatol, MASH, PHA and RHtests) are presented and evaluated. The tests were executed with 12 synthetic/surrogate monthly temperature test datasets containing 100 to 500 networks with 5 to 40 time series each. Residual centred root mean square errors and residual trend biases were calculated both for individual station series and for network-mean series. The results show that a larger fraction of the non-climatic biases can be removed from station series than from network-mean series. The largest error reduction is found for the long-term linear trends of individual time series in datasets with a high signal-to-noise ratio (SNR), where the mean residual error is only 14–36% of the raw data error. When the SNR is low, most of the results still indicate error reductions, although with smaller ratios than for high SNR. Generally, ACMANT gave the most accurate homogenization results. In the accuracy of individual time series, ACMANT is closely followed by Climatol, while for the accurate calculation of mean climatic trends over large geographical regions both PHA and ACMANT are recommended.
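The two residual error measures used here can be sketched compactly. Assuming error series defined as homogenized minus true values, the centred RMSE discards any constant offset and the network-mean score averages the station errors before evaluation; the names and array layout are illustrative assumptions, not the study's code.

```python
import numpy as np

def centred_rmse(error_series):
    """Centred root mean square error: the mean of the error series
    is removed first, so only time-varying errors are penalized."""
    e = np.asarray(error_series, dtype=float)
    return np.sqrt(np.mean((e - e.mean()) ** 2))

def network_mean_rmse(error_matrix):
    """Centred RMSE of the network-mean series, given a
    (stations x time) matrix of station error series."""
    return centred_rmse(np.mean(error_matrix, axis=0))
```

Because biases of the same sign across a network do not cancel in the network mean, the network-mean score is typically harder to improve than the station scores, consistent with the results above.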



The topographic control on land surface energy fluxes: A statistical approach to bias correction

February 2020 · 16 Reads · 5 Citations · Journal of Hydrology

Subsurface hydrodynamics are an important component of the hydrological cycle and a key factor in the partitioning of land surface water and energy fluxes. For computational reasons they are often neglected, or strongly simplified, in numerical weather prediction and climate models. Particularly in regions where the water table is shallow, soil moisture acts as a link between groundwater, land and atmosphere. Because of its long-term memory, groundwater represents a buffer for the effects of climate variability. To model this system dynamically, we study the outputs of a variably saturated groundwater flow model (ParFlow) coupled with a land surface model (CLM) and propose an empirical approach for the statistical correction of the bias of the energy fluxes in a highly parametrized simulation. Corrections are based on a comparison of the simple scheme with the fully coupled subsurface-land-atmosphere simulations. The simple dynamical scheme computes the potential latent heat flux in the case of near saturation, while the fully coupled simulations approximate reality. Our statistical model examines the evapotranspiration surplus, i.e., the difference in latent heat flux between the two runs, and aims to correct the bias in the latent heat flux of the simple scheme based on the characteristics of each point of the domain. In particular, we focus on the ability of topography-related indices, such as the topographic wetness index and the depth-to-water index, to provide information on the availability of water for evapotranspiration. Our results confirm that topographic indices are good predictors of moisture availability. While small-scale structures cannot be accurately reproduced by our model, large-scale biases of latent heat flux over the domain are effectively removed. Moreover, the bias-corrected fluxes show better agreement with the fluxes of the full modelling system than commonly used free-drainage simulations. Thus, the proposed approach can be useful for approximating the effect of groundwater on land surface water and energy fluxes in, e.g., regional climate models.
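One of the predictors named above, the topographic wetness index, has a standard definition (Beven and Kirkby, 1979): TWI = ln(a / tan β), with a the specific upslope contributing area and β the local slope angle. A minimal sketch, with the flat-cell guard as an implementation assumption:

```python
import numpy as np

def topographic_wetness_index(upslope_area, slope_angle_rad, eps=1e-6):
    """TWI = ln(a / tan(beta)); high values mark flat, convergent
    terrain where water accumulates. eps avoids division by zero
    on perfectly flat cells."""
    return np.log(upslope_area / np.maximum(np.tan(slope_angle_rad), eps))
```

High-TWI grid points are those where a shallow water table, and hence the evapotranspiration surplus described above, is most likely.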


Figure 1: Variance of the trend as a function of break rate for RD-type breaks. Crosses denote the results for modelled data; the solid line gives an approximation for constant break numbers. For a transfer to varying break numbers, a weighted average using the Poisson distribution is calculated (thin curve). The thick straight line denotes a slope of 1.2 K²⋅cty⁻¹, which fits the data well at small break rates.
Figure 2: As Figure 1, but for modelled data with BM-type breaks (circles). The result for RD-type breaks (crosses) from Figure 1 is much smaller and given for comparison.
Figure 3: Variance of the trend difference between neighbouring station pairs as a function of distance (bold line). The thin lines show the sum of variances for comparison. Dividing the values of the bold curve by the corresponding values of the thin curve gives 1 − r, with r the correlation coefficient. Curve pairs are marked U for United States and G for German stations.
Figure 4: As Figure 3, but for a distance scale reduced by a factor of 10 and only for the United States.
Random trend errors in climate station data due to inhomogeneities

October 2019 · 54 Reads · 5 Citations

Inhomogeneities in station series are a large part of the uncertainty budget of long-term temperature trend estimates. This article introduces two analytical equations for the dependence of the station trend uncertainty on the statistical properties of the inhomogeneities. One equation is for inhomogeneities that act as random deviations from a fixed baseline, where the deviation levels are random and independent. The second equation is for inhomogeneities that behave like Brownian motion (BM), where not the levels themselves but the jumps between them are independent. It shows that BM-type breaks introduce much larger trend errors, growing linearly with the number of breaks. Using the information about the type, strength and frequency of the breaks for the United States and Germany, the random trend errors for these two countries are calculated for the period 1901–2000. An alternative and independent estimate is obtained by an empirical approach, exploiting the distance dependence of the trend variability for neighbouring station pairs. Both methods (empirical and analytical) find that the station trend uncertainty is larger in the United States (0.71 and 0.82°C per century, respectively) than in Germany (0.50 and 0.58°C per century, respectively). The good agreement of the analytical and the empirical estimates gives confidence in the methods to assess trend uncertainties, as well as in the method to determine the statistical properties of the break inhomogeneities.
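The qualitative difference between the two equations can be reproduced with a small Monte Carlo experiment. The sketch below is not the paper's derivation; break sizes, counts and series length are illustrative assumptions. For random-deviation (RD) breaks the segment levels are drawn independently, while for Brownian-motion (BM) breaks the jumps are independent and the levels accumulate.

```python
import numpy as np

rng = np.random.default_rng(42)

def break_signal(n_years, n_breaks, sigma, kind):
    """One synthetic inhomogeneity signal: random break positions;
    'RD' draws each segment level independently, 'BM' accumulates
    independent jumps (levels = cumulative sum of jumps)."""
    positions = np.sort(rng.choice(np.arange(1, n_years), n_breaks, replace=False))
    draws = rng.normal(0.0, sigma, n_breaks)
    levels = draws if kind == "RD" else np.cumsum(draws)
    signal = np.zeros(n_years)
    for pos, level in zip(positions, levels):
        signal[pos:] = level
    return signal

def trend_error_variance(kind, n_breaks, n_mc=2000, n_years=100, sigma=0.5):
    """Variance (across realizations) of the spurious linear trend,
    in (units per century)^2, induced by the break signal alone."""
    t = np.arange(n_years)
    trends = [100.0 * np.polyfit(t, break_signal(n_years, n_breaks, sigma, kind), 1)[0]
              for _ in range(n_mc)]
    return np.var(trends)
```

Plotting trend_error_variance against n_breaks for both kinds should show the BM variance growing roughly linearly with the number of breaks while the RD variance stays bounded, which is the key analytical result of the article.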


A limited role for unforced internal variability in 20th century warming

May 2019 · 1,191 Reads · 86 Citations · Journal of Climate

The early twentieth-century warming (EW; 1910–45) and the mid-twentieth-century cooling (MC; 1950–80) have been linked to both internal variability of the climate system and changes in external radiative forcing. The degree to which each of the two factors, or both, contributed to EW and MC is still debated. Using a two-box impulse response model, we demonstrate that multidecadal ocean variability was unlikely to be the driver of observed changes in global mean surface temperature (GMST) after AD 1850. Instead, virtually all (97%–98%) of the global low-frequency variability (>30 years) can be explained by external forcing. We find similarly high percentages of explained variance for interhemispheric and land–ocean temperature evolution. Three key aspects are identified that underpin this conclusion: inhomogeneous anthropogenic aerosol forcing (AER), biases in the instrumental sea surface temperature (SST) datasets, and inadequate representation of the response to varying forcing factors. Once the spatially heterogeneous nature of AER is accounted for, the MC period is reconcilable with external drivers. SST biases and imprecise forcing responses explain the putative disagreement between models and observations during the EW period. As a consequence, Atlantic multidecadal variability (AMV) is found to be primarily controlled by external forcing too. Future attribution studies should account for these important factors when discriminating between externally forced and internally generated influences on climate. We argue that AMV must not be used as a regressor and suggest a revised AMV index instead, the North Atlantic Variability Index (NAVI). Our associated best estimate for the transient climate response (TCR) is 1.57 K (±0.70 at the 5%–95% confidence level).
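The two-box impulse response model used here has a standard form: the temperature anomaly is the convolution of the radiative forcing with two exponential response functions. The sketch below uses illustrative parameter values in the range found in the literature; it is not the study's calibrated model.

```python
import numpy as np

def two_box_response(forcing, q=(0.33, 0.41), d=(4.1, 249.0)):
    """GMST anomaly (K) from annual-mean radiative forcing (W m-2)
    via two exponential impulse responses: a fast (ocean mixed
    layer) and a slow (deep ocean) component, with sensitivities q
    (K per W m-2) and time scales d (years). Values are illustrative."""
    n = len(forcing)
    t = np.arange(n, dtype=float)
    response = np.zeros(n)
    for qi, di in zip(q, d):
        kernel = (qi / di) * np.exp(-t / di)       # discrete impulse response
        response += np.convolve(forcing, kernel)[:n]
    return response
```

Regressing observed GMST on such a forced response is what allows the study to quantify how much low-frequency variability remains unexplained by external forcing.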


A new method to study inhomogeneities in climate records: Brownian motion or random deviations?

May 2019 · 53 Reads · 12 Citations

Climate data are affected by inhomogeneities due to historical changes in the way the measurements were performed. Understanding these inhomogeneities is important for accurate estimates of long-term changes in the climate. They are typically characterized by the number of breaks and the size of the jumps or the variance of the break signal, but a full characterization of the break signal also includes its temporal behaviour. This study develops a method to distinguish between two types of breaks: random deviations from a baseline and Brownian motion. The strength and frequency of both break types are estimated using the variance of the spatiotemporal differences in the time series of two nearby stations as input. Thus, the result is obtained directly from the data, without running a homogenization algorithm to estimate the break signal. This opens the possibility of determining the total number of breaks, not only the significantly large ones. Application to German temperature observations suggests generally small inhomogeneities dominated by random deviations from a baseline. U.S. stations, on the other hand, also show the characteristics of a strong Brownian-motion-type component.
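The citing literature summarizes the estimator idea as using the autocovariance of data pairs within a difference time series at fixed time lags (see the citation context further below). A hedged sketch of that lag statistic, not the paper's exact estimator:

```python
import numpy as np

def lag_autocovariance(diff_series, max_lag=40):
    """Autocovariance of a difference series (station A minus
    station B) as a function of time lag. At short lags both
    members of a pair tend to fall in the same inhomogeneity
    segment, giving a positive autocovariance from which break
    strength and frequency can be inferred."""
    x = np.asarray(diff_series, dtype=float)
    x = x - np.nanmean(x)
    return np.array([np.nanmean(x[lag:] * x[:-lag])
                     for lag in range(1, max_lag + 1)])
```

The decay of this curve with lag carries the information on break frequency, while its magnitude reflects break strength.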


Citations (57)


... Many algorithms apply the standard normal homogeneity test (SNHT; Alexandersson 1986; Alexandersson and Moberg 1997) and detect one break after the other by the hierarchical splitting approach. These are, for example, the algorithms iCraddock (Craddock 1979) and RHtest (Wang, Wen, and Wu 2007), benchmarked by Venema et al. (2012) and recently by Guijarro et al. (2023). Both research teams applied the algorithms to a large variety of synthetic and surrogate datasets and in this way demonstrated their functioning. ...

Reference:

Estimation of Break and Noise Variance and the Maximum Distance of Climate Stations Allowed in Relative Homogenisation of Annual Temperature Anomalies
Homogenization of monthly series of temperature and precipitation: Benchmarking results of the MULTITEST project

... The impact of differing thermometer exposures on temperature readings has been investigated previously, including by: Chenoweth (1992) in North America; Böhm et al. (2010), Brunet et al. (2006, 2011), Butler et al. (2005), Gaster (1882), Margary (1924), Marriott (1879) and Nordli et al. (1996, 1997) in Europe; Gill (1882) in South Africa; Ashcroft et al. (2022) and Nicholls et al. (1996) in Australia; Awe et al. (2022) in Mauritius; and Parker (1994), globally. All present similar findings: significant differences in temperature readings between Stevenson screens and historic exposures, which vary seasonally, diurnally and according to weather conditions and type of exposure. ...

Insights from 20 years of temperature parallel measurements in Mauritius around the turn of the 20th century

... Such FSSP data is available from the CLARA campaigns, which took place in 1996 in the Netherlands. For more information on CLARA, see van Lammeren et al. [16]. Droplet size distributions measured on eight days have been used to look at the variations in β that may occur. ...

Clouds and Radiation: Intensive Experimental Study of Clouds and Radiation in the Netherlands (CLARA)

... This approach, characterized by its simplicity and accessibility, contrasts with the intricate methodologies employed by other researchers in optimizing and fusing reanalysis data. These methodologies encompass sophisticated techniques, including artificial neural networks [59], wavelet transform methods [60], genetic algorithms [61,62], and machine learning [63,64]. However, it is noteworthy that the efficacy of the aforementioned optimization is constrained to situations akin to CMADS, where certain metrics exhibit suboptimal performance relative to others. ...

Multi‐objective downscaling of precipitation time series by genetic programming

... Specifically, in terms of observed climate datasets, we assumed temporal and spatial homogeneity in observational climate data over the IRB. This assumption could introduce nonclimatic biases in the time series of climate records (Domonkos et al. 2021). A more rigorous approach would include cross-validation with different observational datasets or the application of techniques to remove inhomogeneities, ensuring that the observed data is consistent over time and space before using it for model validation (Venema et al. 2012; Ding et al. 2024). ...

Efficiency of Time Series Homogenization: Method Comparison with 12 Monthly Temperature Test Datasets
  • Citing Article
  • January 2021

Journal of Climate

... Additionally, the topography and geology of the surrounding landscape play a crucial role in influencing water flow, thereby impacting river water levels (Schumm and Spitz, 1996; Detty and McGuire, 2010; Grant, 2012; Rinderer, van Meerveld and Seibert, 2014; Trevisan et al., 2020; Dai et al., 2021). Notably, the topography of the study area, characterised by a slope from the Nida River side to the Smuga Umianowicka branch, contributes to the abnormal increase in water level observed at the Smuga Umianowicka branch during periods of heavy rainfall. ...

The topographic control on land surface energy fluxes: A statistical approach to bias correction
  • Citing Article
  • February 2020

Journal of Hydrology

... Temperature observations from climate stations suffer from inhomogeneities caused by issues such as relocations or changes in the operational measuring practice. In this way, sudden breaks are introduced into the time series, which have the potential to falsify the true climate trend (Lindau and Venema 2020). In the past decades, numerous homogenisation algorithms have been developed to detect and correct this spurious break signal. ...

Random trend errors in climate station data due to inhomogeneities

... Warming", or ETCW (Hegerl et al., 2018), which characterizes many, particularly high-latitude, temperature records. It is debated whether this shape is attributable to anthropogenic causes or reflects natural, internal climate variability (Haustein et al., 2019). The green line in Fig. 8a is the third warming scenario we consider. ...

A limited role for unforced internal variability in 20th century warming

Journal of Climate

... Using the autocovariance of data pairs within a difference time series with constant time lag is a suitable tool to determine the strength of inhomogeneities (Lindau and Venema 2019). For short time lags, the probability is high that both data points are situated in the same inhomogeneity segment, which causes a positive autocovariance; for longer time lags, this effect decreases. ...

A new method to study inhomogeneities in climate records: Brownian motion or random deviations?

... The network of stations in this study comprises only seven stations. To make maximum use of the available data from this sparse network, a reference series was created from a network of stations that can change with time by choosing the best stations available (Peterson & Easterling, 1994); we therefore consider at least three stations for the creation of a reference series. A minimum of three stations is necessary for statistical homogenization (Venema et al., 2018), and the use of five stations (four reference stations) can be considered a good result. After examining the temperature time series data of the seven stations, and given the non-availability of metadata, it was felt necessary to create a composite reference series using the time series data of all coastal stations except the target station under homogeneity testing. ...

Guidance on the homogenization of climate station data