About
24 Publications · 6,836 Reads
568 Citations
Introduction
I am a scientist at the European Centre for Medium-Range Weather Forecasts (ECMWF). I was previously a DPhil (i.e. PhD) student in the Predictability of Weather and Climate Group at the University of Oxford, where I graduated in 2019. My DPhil research focused on ways to improve the computational efficiency of weather and climate simulators through reduced-precision arithmetic. I am now putting these ideas into practice at ECMWF in order to improve its operational weather forecasts.
Current institution
European Centre for Medium-Range Weather Forecasts (ECMWF)

Additional affiliations
October 2019 - present

Education
October 2015 - August 2019: University of Oxford
Field of study: Atmospheric Physics
October 2010 - June 2014
Publications (24)
A coupled atmosphere-ocean model is necessary for tropical cyclone (TC) prediction to accurately characterize ocean feedback on atmospheric processes within the TC environment. Here, the ECMWF coupled global model is run at horizontal resolutions from 9 km to 1.4 km in the atmosphere, as well as 25 km and 8 km in the ocean, to identify how resoluti...
General circulation models (GCMs) are the foundation of weather and climate prediction [1,2]. GCMs are physics-based simulators that combine a numerical solver for large-scale dynamics with tuned representations for small-scale processes such as cloud formation. Recently, machine-learning models trained on reanalysis data have achieved comparable or b...
SpeedyWeather.jl is a library to simulate and analyze the global atmospheric circulation on the sphere. It implements several 2D and 3D models which solve different sets of equations (a minimal usage sketch follows the list):
- the primitive equations with and without humidity,
- the shallow water equations, and
- the barotropic vorticity equation.
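For orientation, here is a minimal usage sketch in Julia following the pattern of SpeedyWeather.jl's documented quick-start; the exact constructor and keyword names (SpectralGrid, trunc, nlayers, period) are assumptions that may differ between library versions:

```julia
using SpeedyWeather

# Minimal sketch (names assumed from the documented quick-start pattern):
# build a T31 one-layer setup and run the barotropic vorticity model.
spectral_grid = SpectralGrid(trunc=31, nlayers=1)
model = BarotropicModel(spectral_grid)
simulation = initialize!(model)
run!(simulation, period=Day(10))   # integrate for ten days
```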
The current top supercomputer in the world is Fugaku, based at the RIKEN Centre for Computational Science (R-CCS) in Japan. Fugaku is notable not only for its size, with 160,000 nodes providing a peak performance of almost half an exaFLOPS, but also for having achieved this speed entirely through ARM CPU technology. Taking advantage of a collaborat...
Most Earth-system simulations run on conventional central processing units (CPUs) in 64-bit double-precision floating-point numbers (Float64), although the need for high-precision calculations in the presence of large uncertainties has been questioned. Fugaku, currently the world's fastest supercomputer, is based on A64FX microprocessors, which also support the 16-bit low-prec...
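To make the precision trade-off concrete, the following Julia snippet (an illustration, not taken from the paper) shows how Float16's roughly three decimal digits of precision can swallow small increments entirely:

```julia
# Float16 carries ~3 decimal digits: near 1.0 the gap to the next
# representable number is eps(Float16) = 2^-10 ≈ 0.000977, so adding
# 1e-4 is lost to rounding.
x64 = 1.0 + 1e-4                  # Float64: 1.0001
x16 = Float16(1) + Float16(1e-4)  # Float16: rounds back to 1.0

println(x64)           # 1.0001
println(x16)           # 1.0
println(eps(Float16))  # 0.000977
```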
Reducing the numerical precision of the forecast model of the Integrated Forecasting System (IFS) of the European Centre for Medium‐Range Weather Forecasts (ECMWF) from double to single precision results in significant computational savings without negatively affecting forecast accuracy. The computational savings allow to increase the vertical reso...
We assess the ability of neural network emulators of physical parametrization schemes in numerical weather prediction models to aid in the construction of linearized models required by four-dimensional variational (4D-Var) data assimilation. Neural networks can be differentiated trivially, and so if a physical parametrization scheme can be accurate...
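The point about trivial differentiability can be sketched in a few lines of Julia; Zygote.jl and the toy two-layer network below are illustrative assumptions, not the tools or emulators used in the study:

```julia
using Zygote  # assumed AD package; any reverse-mode AD tool would do

# Toy stand-in for an emulated parametrization: 5 inputs -> 3 tendencies.
W1, b1 = randn(8, 5), randn(8)
W2, b2 = randn(3, 8), randn(3)
f(x) = W2 * tanh.(W1 * x .+ b1) .+ b2

x0 = randn(5)
J, = Zygote.jacobian(f, x0)   # tangent-linear operator of the emulator at x0

dx = 1e-3 * randn(5)
println(f(x0 .+ dx) - f(x0))  # nonlinear response to a small perturbation
println(J * dx)               # tangent-linear approximation of the same
# The adjoint model required by 4D-Var is then simply the transpose J'.
```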
Plain Language Summary
The ability of computers to construct models from data (machine learning) has had significant impacts on many areas of science. Here, we use this ability to construct a model of an element of a numerical weather forecasting system. This element captures one physical process in the model, a part of the model that describes the...
We assess the value of machine learning as an accelerator for the parameterisation schemes of operational weather forecasting systems, specifically the parameterisation of non-orographic gravity wave drag. Emulators of this scheme can be trained that produce stable and accurate results up to seasonal forecasting timescales. Generally, more complex...
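A minimal training loop for such an emulator might look like the Julia sketch below; Flux.jl, the layer sizes, and the synthetic data are assumptions standing in for the paper's actual scheme and training setup:

```julia
using Flux  # assumed framework; Flux.setup/update! follow the current explicit API

# Hypothetical shapes: 60 input features per column -> 60 output tendencies,
# a stand-in for column profiles fed to the gravity wave drag scheme.
emulator = Chain(Dense(60 => 128, relu), Dense(128 => 60))

# Synthetic (input, scheme output) pairs in place of real training data.
X = randn(Float32, 60, 1024)
Y = randn(Float32, 60, 1024)

opt = Flux.setup(Adam(1f-3), emulator)
for epoch in 1:100
    grads = Flux.gradient(m -> Flux.Losses.mse(m(X), Y), emulator)
    Flux.update!(opt, emulator, grads[1])
end
```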
In an attempt to advance the understanding of the Earth's weather and climate by representing deep convection explicitly, we present a global, four-month simulation (November 2018 to February 2019) with ECMWF's hydrostatic Integrated Forecasting System (IFS) at an average grid spacing of 1.4 km. The impact of explicitly simulating deep con...
The use of single-precision arithmetic in ECMWF's forecasting model gave a 40% reduction in wall-clock time over double precision, with no decrease in forecast quality. However, the use of reduced precision in 4D-Var data assimilation is relatively unexplored, and there are potential issues with using single precision in the tangent-linear and adjoint mo...
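One way to see the issue is the standard adjoint correctness test ⟨Mx, y⟩ = ⟨x, Mᵀy⟩, whose attainable tolerance degrades with precision; the Julia sketch below is illustrative, not ECMWF code:

```julia
using LinearAlgebra

# dot(M*x, y) == dot(x, M'*y) holds exactly in real arithmetic; in floating
# point the residual scales with the precision, so test tolerances tuned
# for Float64 fail when the operators run in Float32.
function adjoint_mismatch(T, n=200)
    M = rand(T, n, n)              # stand-in tangent-linear operator
    x, y = rand(T, n), rand(T, n)
    abs(dot(M * x, y) - dot(x, M' * y)) / abs(dot(M * x, y))
end

println(adjoint_mismatch(Float64))  # ~1e-16
println(adjoint_mismatch(Float32))  # ~1e-7
```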
Reducing numerical precision can save computational costs which can then be reinvested for more useful purposes. This study considers the effects of reducing precision in the parametrizations of an intermediate-complexity atmospheric model (SPEEDY). We find that the difference between double-precision and reduced-precision parametrization tendencie...
The skill of weather forecasts has improved dramatically over the past 30 years. This improvement has depended to a large degree on developments in supercomputing, which have allowed models to increase in complexity and resolution with minimal technical effort. However, the nature of supercomputing is undergoing a significant change, with the adven...
The next generation of weather and climate models will have an unprecedented level of resolution and model complexity, and running these models efficiently will require taking advantage of future supercomputers and heterogeneous hardware. In this paper, we investigate the use of mixed-precision hardware that supports floating-point operations at d...
Reducing precision can provide a significant reduction in wallclock time for weather models, e.g. a 40% reduction when using single precision instead of double precision. This is because modern weather codes are memory-bound, and reducing precision reduces the amount of data that must be transferred. Reducing precision will introduce errors into arit...
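The memory argument is easy to demonstrate; in the Julia snippet below (an illustration, not from the thesis), halving the word size halves the bytes a memory-bound kernel has to stream:

```julia
# Same element count, half the bytes: a memory-bound kernel such as a
# reduction then moves half as much data through the memory system.
n = 10^8
a64 = rand(Float64, n)
a32 = rand(Float32, n)

println(sizeof(a64) ÷ 10^6, " MB")  # 800 MB
println(sizeof(a32) ÷ 10^6, " MB")  # 400 MB

@time sum(a64)  # roughly 2x the runtime of the Float32 sum when memory-bound
@time sum(a32)
```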
The use of reduced numerical precision within an atmospheric data assimilation system is investigated. An atmospheric model with a spectral dynamical core is used to generate synthetic observations, which are then assimilated back into the same model using an ensemble Kalman filter. The effect on the analysis error of reducing precision from 64 bit...
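For orientation, a generic stochastic ensemble Kalman filter analysis step can be written precision-agnostically, as in the Julia sketch below; this is a textbook formulation, not the study's spectral model or filter configuration:

```julia
using LinearAlgebra, Statistics

# Generic stochastic EnKF analysis step, parametric in the element type T
# so the same update runs in Float64, Float32, or Float16.
function enkf_analysis(X::Matrix{T}, y::Vector{T}, H::Matrix{T}, R::Matrix{T}) where T
    N = size(X, 2)
    A = X .- mean(X, dims=2)                 # ensemble anomalies
    P = A * A' / T(N - 1)                    # sample state covariance
    K = P * H' / (H * P * H' + R)            # Kalman gain
    Y = y .+ cholesky(R).L * randn(T, length(y), N)  # perturbed observations
    return X + K * (Y - H * X)               # updated (analysis) ensemble
end
```

Running the same function with Float32 or Float16 inputs then exposes where rounding begins to contribute to the analysis error.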
Conventional wisdom dictates that, in atmospheric models, it is always best to use the highest numerical precision available. Only recently have several studies found a significant tolerance in models to reduced precision, and therefore a potential free source of computational resources. The aim of this project is to extend these investigat...
A new approach for improving the accuracy of data assimilation, by trading numerical precision for ensemble size, is introduced. Data assimilation is inherently uncertain because of the use of noisy observations and imperfect models. Thus, the larger rounding errors incurred from reducing precision may be within the tolerance of the system. Lower-p...