The paper considers fundamental limitations that impede the growth of performance of supercomputers based on silicon transistors and prevent meeting the needs of developing numerical weather and climate prediction models. A brief review of studies dealing with the development of quantum computing algorithms and with the creation of real quantum computers is provided. A shift from pure science to engineering solutions is observed. The leaders of the computer industry have set the goal of creating a general-purpose quantum computer in the next few years. It is argued that the applicability of quantum computations and quantum computers to the problems of numerical weather and climate prediction should be studied and assessed in advance.

... As the technology is still in its infancy, the potential of quantum information processing has not been explored/exposed completely. Even at present, in addition to the scientific interest/research, the unique technique and unprecedented computing power of QIP systems show tremendous prospects in many practical/commercial applications such as the development of new medicines and materials [16,17], machine learning & artificial intelligence [18,19], financial services, supply chain & logistics, software verification & validation [17], numerical weather prediction [20], image processing [21], cybersecurity [11,22] etc. ...

Silicon has several features that make it an attractive platform for implementing quantum information processing (QIP). The unique combination of a mature fabrication technology, extraordinarily long coherence times and high control fidelities of donor spins in isotopically purified silicon have made them good candidates for realising spin based quantum bits. Increased spin-photon coupling has the potential to add additional benefits including efficient optical readout of individual donor spins and even a route to generating entanglement. Efficient optical detection of the donor spin state could provide the missing piece of the puzzle to realise long-range qubit couplings and construct quantum networks; however, achieving this in silicon is challenging due to its indirect bandgap. Photonic structures such as solid immersion lenses (SILs), circular Bragg resonators (CBRs) and photonic crystal (PhC) cavities can enhance radiative emission and/or its collection by up to several orders of magnitude, potentially allowing it to compete with non-radiative processes such as Auger recombination. In this thesis, we report our first steps towards fabricating and characterizing such photonic structures, designed to enhance radiative emission and optical collection from 31P donor bound excitons (D0Xs) in silicon. We have fabricated silicon SILs using a neon focussed ion beam milling system. A bilayer resist based fabrication recipe has also been optimised using relatively inexpensive process materials, which efficiently produces CBRs and PhC cavities with desired optical properties on undoped silicon-on-insulator wafers. The optical properties of the fabricated devices are investigated using cavity reflection measurements. We have measured an absorption limited cavity quality factor (Q) of ∼ 5,000 around 31P D0X emission wavelengths (∼ 1078 nm) for silicon PhC cavities at room temperature.
Silicon PhC cavities with such a quality factor (∼ 5,000) and enhanced collection could permit optical access to spins at the single-donor, or at least few-donor, level.

Forecasting for wind and solar renewable energy is becoming more important as the amount of energy generated from these sources increases. Forecast skill is improving, but so too is the way forecasts are used. In this paper, we present a brief overview of the state-of-the-art in forecasting wind and solar energy. We describe approaches in statistical and physical modeling for time scales from minutes to days ahead, for both deterministic and probabilistic forecasting. Our focus then shifts to the future of forecasting for renewable energy. We discuss recent advances which show potential for great improvements in forecast skill. Beyond the forecast itself, we consider new products that will be required to aid decision making subject to risk constraints. Future forecast products will need to include probabilistic information, but deliver it in a way tailored to the end user and their specific decision-making problems. Businesses operating in this sector may see a change in business models as more people compete in this space, with different combinations of skills, data and modeling being required for different products. The transaction of data itself may change with the adoption of blockchain technology, which could allow providers and end users to interact in a trusted, yet decentralized way. Finally, we discuss new industry requirements and challenges for scenarios with high amounts of renewable energy. New forecasting products have the potential to model the impact of renewables on the power system and aid dispatch tools in guaranteeing system security.
This article is categorized under:
• Energy Infrastructure > Systems and Infrastructure
• Wind Power > Systems and Infrastructure
• Photovoltaics > Systems and Infrastructure
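Probabilistic forecast products of the kind discussed above are commonly evaluated with quantile scores. A minimal sketch of the pinball (quantile) loss, using made-up numbers for a normalized wind power forecast:

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss for a forecast of quantile level q in (0, 1).
    Under-prediction is penalized with weight q, over-prediction with 1 - q."""
    diff = y_true - y_pred
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

# Toy example: evaluate a 90th-percentile forecast of normalized wind power.
actual = np.array([0.42, 0.55, 0.61, 0.30])
forecast_q90 = np.array([0.60, 0.70, 0.65, 0.50])
loss = pinball_loss(actual, forecast_q90, 0.9)
```

Averaging this loss over many quantile levels approximates the continuous ranked probability score, one way a full predictive distribution can be scored.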

Companies are increasingly turning to Analytics to gain a competitive edge by providing decision makers with better insights. As they do, firms must resolve unique demands on their Information Technology resources and processes. This paper reviews the chronology of the adoption of Analytics. Next, it highlights how various economic sectors have incorporated Analytics and the resulting changes in the ways businesses interact with their customers. Further, the paper explores cutting-edge innovations in the Analytics domain. Subsequently, it analyzes the impact of these developments on our lives. Finally, the opportunities and threats, and how they may affect our future, are discussed.

Ferromagnetic nanofibers and nanofiber based networks with new electronic, magnetic, mechanical and other physical properties can be considered significant components of bio-inspired cognitive computing units. For this purpose, it is necessary to examine all relevant physical parameters of such nanofiber networks. Due to the more or less random arrangement of the nanofibers, first of all, the elementary single nanofibers with varying bending radii, from straight fibers to those bent along half-circles, were investigated by micromagnetic simulations, using different angles with respect to the external magnetic field. Different fiber cross-sections, i.e., circular, circle-segment and rectangular, significantly altered the coercive fields and their dependence on the bending radius for magnetic fields oriented differently in relation to the fiber axes. The shapes of the longitudinal and transverse hysteresis curves showed strong differences, depending on cross-section, bending radius and orientation to the magnetic field, often depicting distinct transverse magnetization peaks perpendicular to the fibers for fibers that were not completely oriented parallel to the magnetic field. Varying these parameters thus provides a broad spectrum of magnetization reversal processes in magnetic nanofibers and, correspondingly, scenarios for a variety of fiber-based information processing.

A Unified Representation of Deep Moist Convection in Numerical Modeling of the Atmosphere. Part I

- A. Arakawa
- C.-M. Wu

A Technology of the Operational Production of Global Weather Forecasts for 1-10 days Based on the T169L31 Model (with a 60-70 km Resolution) Using the New Supercomputer System of WMC Moscow

- I. A. Rozinkina
- E. D. Astakhova
- T. Ya. Ponomareva

Complex Supercomputer Upgrade Completed

- M. Hawkins

The ICON (ICOsahedral Non-hydrostatic) Modelling Framework of DWD and MPI-M: Description of the Nonhydrostatic Dynamical Core

- G. Zängl
- D. Reinert
- P. Rípodas
- M. Baldauf

Global Semi-Lagrangian Numerical Weather Prediction Model (OAO FOP

- M. A. Tolstykh

The Computable and the Noncomputable (Sovetskoe Radio

- Yu. I. Manin

Operational Convective-scale Numerical Weather Prediction with the COSMO Model: Description and Sensitivities

- M. Baldauf
- A. Seifert
- J. Förstner

Mathematical Modeling of the Earth System

- E. M. Volodin
- V. Ya. Galin
- A. S. Gritsun

Algorithms for Quantum Computers

- J. Smith
- M. Mosca

The Met Office Unified Model Global Atmosphere 6.0/6.1 and JULES Global Land 6.0/6.1 Configurations

- D. Walters
- I. Boutle
- M. Brooks

Controllable, coherent many-body systems provide unique insights into fundamental properties of quantum matter, allow for the realization of novel quantum phases, and may ultimately lead to computational systems that are exponentially superior to existing classical approaches. Here, we demonstrate a novel platform for the creation of controlled many-body quantum matter. Our approach makes use of deterministically prepared, reconfigurable arrays of individually controlled, cold atoms. Strong, coherent interactions are enabled by coupling to atomic Rydberg states. We realize a programmable Ising-type quantum spin model with tunable interactions and system sizes of up to 51 qubits. Within this model we observe transitions into ordered states (Rydberg crystals) that break various discrete symmetries, verify high-fidelity preparation of ordered states, and investigate dynamics across the phase transition in large arrays of atoms. In particular, we observe a novel type of robust many-body dynamics corresponding to persistent oscillations of crystalline order after a sudden quantum quench. These observations enable new approaches for exploring many-body phenomena and open the door for realizations of novel quantum algorithms.
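The programmable Ising-type spin model described above can be sketched classically for a handful of atoms. Below is a minimal numpy construction of a Rydberg-chain Hamiltonian with a transverse drive, detuning, and van der Waals (1/r^6) interactions; the coupling values are arbitrary illustrative numbers, not the experimental parameters:

```python
import numpy as np
from functools import reduce

def kron_all(ops):
    """Tensor product of a list of single-site operators."""
    return reduce(np.kron, ops)

def rydberg_hamiltonian(n_sites, omega, delta, v_nn):
    """Ising-type chain Hamiltonian (a sketch):
    H = (omega/2) sum_i sx_i - delta sum_i n_i + sum_{i<j} v_nn/(j-i)^6 n_i n_j."""
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])   # drive between ground/Rydberg states
    n = np.array([[0.0, 0.0], [0.0, 1.0]])    # projector onto the Rydberg state
    eye = np.eye(2)
    dim = 2 ** n_sites
    h = np.zeros((dim, dim))
    for i in range(n_sites):
        h += (omega / 2) * kron_all([sx if k == i else eye for k in range(n_sites)])
        h -= delta * kron_all([n if k == i else eye for k in range(n_sites)])
        for j in range(i + 1, n_sites):
            nn = kron_all([n if k in (i, j) else eye for k in range(n_sites)])
            h += v_nn / (j - i) ** 6 * nn     # van der Waals blockade interaction
    return h

h = rydberg_hamiltonian(6, omega=1.0, delta=1.5, v_nn=10.0)
energies = np.linalg.eigvalsh(h)              # exact spectrum, feasible for few sites
```

Exact diagonalization like this scales as 2^N and is hopeless beyond a few dozen spins, which is exactly why a 51-qubit programmable simulator is interesting.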

We describe Global Atmosphere 6.0 and Global Land 6.0 (GA6.0/GL6.0): the latest science configurations of the Met Office Unified Model and JULES (Joint UK Land Environment Simulator) land surface model developed for use across all timescales. Global Atmosphere 6.0 includes the ENDGame (Even Newer Dynamics for General atmospheric modelling of the environment) dynamical core, which significantly increases mid-latitude variability, improving a known model bias. Alongside developments of the model's physical parametrisations, ENDGame also increases variability in the tropics, which leads to an improved representation of tropical cyclones and other tropical phenomena. Further developments of the atmospheric and land surface parametrisations improve other aspects of model performance, including the forecasting of surface weather phenomena.
We also describe GA6.1/GL6.1, which includes a small number of long-standing differences from our main trunk configurations that we continue to require for operational global weather prediction.
Since July 2014, GA6.1/GL6.1 has been used by the Met Office for operational global numerical weather prediction, whilst GA6.0/GL6.0 was implemented in its remaining global prediction systems over the following year.

We describe Global Atmosphere 4.0 (GA4.0) and Global Land 4.0 (GL4.0): configurations of the Met Office Unified Model and JULES (Joint UK Land Environment Simulator) community land surface model developed for use in global and regional climate research and weather prediction activities. GA4.0 and GL4.0 are based on the previous GA3.0 and GL3.0 configurations, with the inclusion of developments made by the Met Office and its collaborators during its annual development cycle. This paper provides a comprehensive technical and scientific description of GA4.0 and GL4.0 as well as details of how these differ from their predecessors. We also present the results of some initial evaluations of their performance. Overall, performance is comparable with that of GA3.0/GL3.0; the updated configurations include improvements to the science of several parametrisation schemes, however, and will form a baseline for further ongoing development.

Quantum computers are designed to outperform standard computers by running quantum algorithms. Areas in which quantum algorithms can be applied include cryptography, search and optimisation, simulation of quantum systems, and solving large systems of linear equations. Here we briefly survey known quantum algorithms, with an emphasis on a broad overview of their applications rather than their technical details. We include a discussion of recent developments and near-term applications of quantum algorithms.

This paper describes the second stage (2011-2014) of the implementation and development of the COSMO-Ru system of nonhydrostatic short-range weather forecasting (the first stage is described in [7, 8]). It demonstrates how the research activities and ideas of G. I. Marchuk influenced modern methods for solving the systems of differential equations that describe atmospheric processes (in particular, a version of Marchuk's splitting method is used to solve the finite-difference analog of the system of differential equations in the COSMO-Ru model), and it shows how he contributed to the development of methods for assimilating meteorological information based on the use of adjoint equations. A brief description is given of the COSMO model of the atmosphere and soil active layer, the COSMO-Ru system, and research activities on the system's development.
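Splitting methods of the kind attributed to Marchuk above advance a system governed by a sum of operators by applying each operator's propagator in turn. A toy illustration on a 2x2 linear system du/dt = (A + B)u, where the matrices are arbitrary non-commuting stand-ins rather than the COSMO-Ru operators:

```python
import numpy as np

def expm_sym(m, t):
    """exp(t*m) for a symmetric matrix m, via eigendecomposition."""
    w, v = np.linalg.eigh(m)
    return (v * np.exp(t * w)) @ v.T

# Two non-commuting symmetric "process" operators, stand-ins for, e.g.,
# an exchange-like term and a relaxation term.
A = np.array([[-2.0, 1.0], [1.0, -2.0]])
B = np.diag([-1.0, -3.0])
u0 = np.array([1.0, 0.0])

dt, n_steps = 0.01, 100
ea, eb = expm_sym(A, dt), expm_sym(B, dt)
u = u0.copy()
for _ in range(n_steps):
    u = eb @ (ea @ u)        # one first-order (Lie/Marchuk-type) splitting step

u_exact = expm_sym(A + B, dt * n_steps) @ u0
error = np.linalg.norm(u - u_exact)   # O(dt); vanishes if A and B commute
```

Each substep can use a solver tailored to its own operator, which is the practical appeal in atmospheric models; the price is a splitting error controlled by the commutator [A, B] and the step size.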

The problem of improving the accuracy of numerical weather prediction is considered. The review of the current state of two directions related to the solution of this problem is presented: increasing the accuracy of numerical solution of the atmospheric hydrothermodynamics equations and improving the evaluation of the initial state of the atmosphere. To increase the accuracy of solutions to the equations, it is necessary to apply efficient numerical methods and to switch to the nonhydrostatic equations. The improvement of the evaluation of the atmosphere's initial state is related to the development of the observation system and assimilation methods for observational data. Variational methods of data assimilation are considered in detail.

Advances in numerical weather prediction represent a quiet revolution because they have resulted from a steady accumulation of scientific knowledge and technological advances over many years that, with only a few exceptions, have not been associated with the aura of fundamental physics breakthroughs. Nonetheless, the impact of numerical weather prediction is among the greatest of any area of physical science. As a computational problem, global weather prediction is comparable to the simulation of the human brain and of the evolution of the early Universe, and it is performed every day at major operational centres across the world.

The global hydrodynamic atmosphere model SL-AV is applied for operational medium-range weather forecast and as a component of the probabilistic long-range forecast system. The review of the previous development of the model is presented and the model features are noted. The existing model versions are described. The unified multi-scale version of the model is developed on the basis of these versions. This version is intended both for numerical weather prediction and for modeling of climate changes. The numerical experiments on climate modeling with the developed multi-scale version are carried out according to the protocol of the international AMIP2 experiment. First results are presented. The possibility of application of the unified version of the SL-AV model for medium-range weather forecast and, after some development, for modeling of climate changes is shown. Keywords: Global hydrodynamic model of the atmosphere, numerical weather prediction, modeling of climate changes, parameterization of subgrid-scale processes, numerical solution of the atmosphere dynamics equations

The development of atmospheric mesoscale models from their early origins in the 1970s until the present day is described. Evolution has occurred in dynamical and physics representations in these models. The dynamics has had to change from hydrostatic to fully nonhydrostatic equations to handle the finer scales that have become possible in the last few decades with advancing computer power, which has enabled real-time forecasting to go to finer grid sizes. Meanwhile the physics has also become more sophisticated than the initial representations of the major processes associated with the surface, boundary layer, radiation, clouds and convection. As resolutions have become finer, mesoscale models have had to change paradigms associated with assumptions about what is considered sub-grid scale needing parameterization, and what is resolved well enough to be explicitly handled by the dynamics. This first occurred with cumulus parameterization as real-time forecast models became able to represent individual updrafts, and is now starting to occur in the boundary layer as future forecast models may be able to resolve individual thermals. Beyond that, scientific research has provided a greater understanding of detailed microphysical and land-surface processes that are important to aspects of weather prediction, and these parameterizations have been developing complexity at a steady rate. This paper can just give a perspective of these developments in the broad field of research associated with mesoscale atmospheric model development.

The gray zone of a physical process in numerical models is defined as the range of model resolution in which the process is partly resolved by model dynamics and partly parameterized. In this study, the authors examine the grid-size dependencies of resolved and parameterized vertical transports in convective boundary layers (CBLs) for horizontal grid scales including the gray zone. To assess how stability alters the dependencies on grid size, four CBLs with different surface heating and geostrophic winds are considered. For this purpose, reference data for grid-scale (GS) and subgrid-scale (SGS) fields are constructed for 50-4000-m mesh sizes by filtering 25-m large-eddy simulation (LES) data. As the relative importance of shear increases, the ratio of resolved turbulent kinetic energy increases for a given grid spacing. Vertical transports of potential temperature, momentum, and a bottom-up diffusion passive scalar behave in a similar fashion. The effects of stability are related to the horizontal scale of coherent large-eddy structures, which changes with stability. The subgrid-scale vertical transport of heat and the bottom-up scalar is divided into a nonlocal mixing owing to the coherent structures and a remaining local mixing. The separate treatment of the nonlocal and local transports shows that the grid-size dependency of the SGS nonlocal flux and its sensitivity to stability predominantly determine the dependency of total (nonlocal plus local) SGS transport.
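The GS/SGS decomposition described above — filtering fine-resolution data to see how much flux a coarser grid would resolve — can be sketched on synthetic fields. The random fields below are illustrative stand-ins for LES data, not an actual boundary-layer simulation:

```python
import numpy as np

def box_filter(field, k):
    """Coarse-grain a 2-D field by block-averaging k x k cells (a simple box filter)."""
    n = field.shape[0] // k
    return field[:n * k, :n * k].reshape(n, k, n, k).mean(axis=(1, 3))

rng = np.random.default_rng(0)
n = 256
w = rng.standard_normal((n, n))                 # stand-in vertical-velocity field
theta = 0.5 * w + rng.standard_normal((n, n))   # scalar field correlated with w

total_flux = np.mean(w * theta)                 # "true" domain-mean vertical flux

resolved, sgs = {}, {}
for k in (2, 8, 32):                            # coarser and coarser model grids
    w_bar, th_bar = box_filter(w, k), box_filter(theta, k)
    resolved[k] = np.mean(w_bar * th_bar)       # flux carried by the filtered field
    sgs[k] = total_flux - resolved[k]           # what a parameterization must supply
```

As the filter widens, the resolved flux shrinks and the SGS residual grows toward the total, which is the grid-size dependency the study quantifies for realistic CBL fields.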

The evolution of global atmospheric model dynamical cores from the first developments in the early 1960s to present day is reviewed. Numerical methods for atmospheric models are not straightforward because of the so-called pole problem. The early approaches include methods based on composite meshes, on quasi-homogeneous grids such as spherical geodesic and cubed sphere, on reduced grids, and on a latitude-longitude grid with short time steps near the pole, none of which were entirely successful. This resulted in the dominance of the spectral transform method after it was introduced. Semi-Lagrangian semi-implicit methods were developed which yielded significant computational savings and became dominant in Numerical Weather Prediction. The need for improved physical properties in climate modeling led to developments in shape preserving and conservative methods. Today the numerical methods development community is extremely active with emphasis placed on methods with desirable physical properties, especially conservation and shape preservation, while retaining the accuracy and efficiency gained in the past. Much of the development is based on quasi-uniform grids. Although the need for better physical properties is emphasized in this paper, another driving force is the need to develop schemes which are capable of running efficiently on computers with thousands of processors and distributed memory. Test cases for dynamical core evaluation are also briefly reviewed. These range from well defined deterministic tests to longer term statistical tests with both idealized forcing and complete parameterization packages but simple geometries. Finally some aspects of coupling dynamical cores to parameterization suites are discussed.

Quantum computers can execute algorithms that dramatically outperform classical computation. As the best-known example, Shor discovered an efficient quantum algorithm for factoring integers, whereas factoring appears to be difficult for classical computers. Understanding what other computational problems can be solved significantly faster using quantum algorithms is one of the major challenges in the theory of quantum computation, and such algorithms motivate the formidable task of building a large-scale quantum computer. This article reviews the current state of quantum algorithms, focusing on algorithms with superpolynomial speedup over classical computation, and in particular, on problems with an algebraic flavor.

The concept of computers that harness the laws of quantum mechanics has transformed our thinking about how information can be processed. Now the environment exists to make prototype devices a reality.

Google, Microsoft and a host of labs and start-ups are racing to turn scientific curiosities into working machines.

The accuracy of the meteorological variable forecasts produced within the forecasting system of the Hydrometeorological Center of Russia, based on the T85L31 global spectral model of the atmosphere, is estimated. On a large statistical data set, it is shown that in the Northern Hemisphere the predictability limit of geopotential height in the free atmosphere reaches 6-7 days in the cold season and 5-6 days in the warm season. Improvement of horizontal and vertical resolution up to 85 harmonics (about 100 km) and 31 levels, respectively, introduction of the diurnal radiation cycle, and fine adjustment of the subgrid process parametrization blocks allowed obtaining practically useful forecasts of midlatitude precipitation at projection ranges from 24 to 84 h; of surface temperature, from 6 to 120 h; of surface pressure, from 6 to 120 h; and of cloudiness, from 18 to 84 h. The fairly high accuracy of surface weather forecasts implies that many adverse weather phenomena caused by mesoscale atmospheric processes are to a large extent determined by synoptic and other large-scale objects. That is why these phenomena can be successfully predicted by high-resolution numerical models with projections significantly exceeding the lifecycles of the phenomena.

Preliminary results of work on the development of a global atmospheric forecast model at the Hydrometeorological Center of the Russian Federation are presented. Algorithms of global nonlinear initialization and the possibilities of applying a reduced Gaussian grid in a spectral model with triangular truncation are considered. Changes to the radiation algorithm and to the parametrization of underlying-surface processes, made to introduce the diurnal variation into the model, are discussed. For unchanged horizontal and vertical resolution, the quality of operational forecasts is shown to have risen from 1986 to 1993 owing to model improvement and a decrease in the objective analysis errors of the initial meteorological fields.

It is argued that computing machines inevitably involve devices which perform logical functions that do not have a single-valued inverse. This logical irreversibility is associated with physical irreversibility and requires a minimal heat generation, per machine cycle, typically of the order of kT for each irreversible function. This dissipation serves the purpose of standardizing signals and making them independent of their exact logical history. Two simple, but representative, models of bistable devices are subjected to a more detailed analysis of switching kinetics to yield the relationship between speed and energy dissipation, and to estimate the effects of errors induced by thermal fluctuations.
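The "of the order of kT" dissipation per irreversible operation quoted above is now usually stated as the Landauer bound of k_B T ln 2 per erased bit. A one-line calculation at room temperature:

```python
import math

# Landauer bound: erasing one bit dissipates at least k_B * T * ln 2 of heat.
k_B = 1.380649e-23              # Boltzmann constant, J/K (exact in SI since 2019)
T = 300.0                       # room temperature, K
e_min = k_B * T * math.log(2)   # minimum energy per erased bit, in joules
```

At 300 K this is roughly 2.9e-21 J, many orders of magnitude below what present silicon logic dissipates per switching event, which is why the bound matters as a fundamental rather than a practical limit today.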

A digital computer is generally believed to be an efficient universal computing device; that is, it is believed able to simulate any physical computing device with an increase in computation time by at most a polynomial factor. This may not be true when quantum mechanics is taken into consideration. This paper considers factoring integers and finding discrete logarithms, two problems which are generally thought to be hard on a classical computer and which have been used as the basis of several proposed cryptosystems. Efficient randomized algorithms are given for these two problems on a hypothetical quantum computer. These algorithms take a number of steps polynomial in the input size, e.g., the number of digits of the integer to be factored.
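The quantum part of Shor's factoring algorithm finds the multiplicative order of a modulo N; everything else is classical number theory. The reduction can be sketched as follows, with a brute-force loop standing in for the quantum period-finding step:

```python
import math
import random

def order(a, n):
    """Multiplicative order of a mod n. Done here by classical brute force;
    this is exactly the step a quantum computer performs efficiently."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor(n, rng=random.Random(1)):
    """Classical reduction from factoring to order finding (Shor's framework)."""
    while True:
        a = rng.randrange(2, n)
        g = math.gcd(a, n)
        if g > 1:
            return g                      # lucky: a already shares a factor with n
        r = order(a, n)
        if r % 2 == 0:
            y = pow(a, r // 2, n)         # a^(r/2) mod n, a square root of 1
            if y != n - 1:
                f = math.gcd(y - 1, n)    # nontrivial root of 1 yields a factor
                if 1 < f < n:
                    return f

f = factor(15)
```

The brute-force `order` call takes exponential time in the number of digits, which is precisely what the quantum subroutine replaces with a polynomial-time computation.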

A pragmatic approach for representing partially resolved turbulence in numerical weather prediction models is introduced and tested. The method blends a conventional boundary layer parameterization, suitable for large grid lengths, with a subgrid turbulence scheme suitable for large-eddy simulation. The key parameter for blending the schemes is the ratio of grid length to boundary layer depth. The new parameterization is combined with a scale-aware microphysical parameterization and tested on a case study forecast of stratocumulus evolution. Simulations at a range of model grid lengths between 1 km and 100 m are compared to aircraft observations. The improved microphysical representation removes the correlation between precipitation rate and model grid length, while the new turbulence parameterization improves the transition from unresolved to resolved turbulence as grid length is reduced.
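The blending described above can be sketched as a weighted combination of the two schemes' mixing coefficients. The weight function below is a hypothetical smooth ramp in the grid-length-to-boundary-layer-depth ratio, not the operational formulation:

```python
def blending_weight(grid_length, bl_depth):
    """Hypothetical weight W in [0, 1]: W -> 1 when the grid length is much
    larger than the boundary layer depth (trust the 1-D BL parameterization),
    W -> 0 for LES-like grids (trust the 3-D subgrid turbulence scheme)."""
    ratio = grid_length / bl_depth
    return ratio ** 2 / (1.0 + ratio ** 2)

def blended_diffusivity(k_bl, k_les, grid_length, bl_depth):
    """Blend the two schemes' eddy diffusivities (illustrative values in m^2/s)."""
    w = blending_weight(grid_length, bl_depth)
    return w * k_bl + (1.0 - w) * k_les

# Example: a 1-km-deep boundary layer sampled at LES-like to mesoscale grids.
for dx in (100.0, 500.0, 1000.0, 4000.0):
    k = blended_diffusivity(k_bl=50.0, k_les=5.0, grid_length=dx, bl_depth=1000.0)
```

The point of any such scheme is the smooth handover: neither the unresolved-turbulence nor the resolved-turbulence limit is switched on abruptly as the grid length crosses the boundary layer depth.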

This paper describes the nonhydrostatic dynamical core developed for the ICON (ICOsahedral Nonhydrostatic) modelling framework. ICON is a joint project of the German Weather Service (DWD) and the Max Planck Institute for Meteorology (MPI-M), targeting a unified modelling system for global numerical weather prediction (NWP) and climate modelling. Compared to the existing models at both institutions, the main achievements of ICON are exact local mass conservation, mass-consistent tracer transport, a flexible grid nesting capability, and the use of nonhydrostatic equations on global domains. The dynamical core is formulated on an icosahedral-triangular Arakawa-C grid. Achieving mass conservation is facilitated by a flux-form continuity equation with density as the prognostic variable. Time integration is performed with a two-time-level predictor-corrector scheme that is fully explicit except for the terms describing vertical sound-wave propagation. To achieve competitive computational efficiency, time splitting is applied between the dynamical core on the one hand, and tracer advection, the physics parametrizations and horizontal diffusion on the other hand. A sequence of tests with varying complexity indicate that the ICON dynamical core combines high numerical stability over steep mountain slopes with good accuracy and reasonably low diffusivity. Preliminary NWP test suites initialized with interpolated analysis data reveal that the ICON modelling system achieves already better skill scores than its predecessor at DWD, the operational hydrostatic GME, and at the same time requires significantly less computational resources.
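The flux-form continuity equation underpinning ICON's exact local mass conservation has a simple 1-D analogue: if each cell's density changes only by the difference of fluxes through its faces, the fluxes telescope and total mass is conserved to round-off. A minimal first-order upwind sketch on a periodic domain (not the ICON discretization itself):

```python
import numpy as np

# 1-D flux-form continuity equation, d(rho)/dt = -d(rho*u)/dx, on a periodic grid.
nx, dx, dt, u = 100, 1.0, 0.4, 1.0     # CFL number = u*dt/dx = 0.4
rho = 1.0 + 0.5 * np.exp(-((np.arange(nx) - 50.0) ** 2) / 50.0)  # density blob
mass0 = rho.sum() * dx                 # total mass before integration

for _ in range(200):
    flux = u * rho                           # upwind face flux (u > 0)
    div = (flux - np.roll(flux, 1)) / dx     # flux difference across each cell
    rho = rho - dt * div                     # flux-form update; periodic via roll

mass = rho.sum() * dx                  # total mass after 200 steps
```

Because every flux leaves one cell and enters its neighbour, the update conserves mass exactly regardless of the flux formula; accuracy and diffusivity, not conservation, are what distinguish more elaborate schemes.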

Information is always stored in a physical medium and manipulated by physical processes. Therefore the laws of physics dictate the capabilities and limitations of any information processing device. Consequently, any meaningful model of computation must refer (at least implicitly) to a realistic physical theory. Quantum Information Processing, or Quantum Computation, treats information processing in a quantum mechanical framework. The quantum features of nature lead to qualitatively different and apparently more powerful models of computation and communication. Quantum computers are devices that allow us to exploit the quantum mechanical features of physical systems. These quantum mechanical features allow us to efficiently solve problems that were previously believed to be intractable. One important example is the problem of factoring integers, which would have an enormous impact on the existing information security infrastructure. The quantum computer appears to be a physically reasonable model of computation. I will introduce the quantum computer as a natural generalization of a classical computer, and survey the state of the art in quantum algorithms.

Since April 2007, the numerical weather prediction model, COSMO (Consortium for Small Scale Modelling), has been used operationally in a convection-permitting configuration, named COSMO-DE, at the Deutscher Wetterdienst (DWD; German weather service). Here the authors discuss the model changes that were necessary for the convective scale, and report on the experience from the first years of operational application of the model. For COSMO-DE the ability of the numerical solver to treat small-scale structures has been improved by using a Runge-Kutta method, which allows for the use of higher-order upwind advection schemes. The one-moment cloud microphysics parameterization has been extended by a graupel class, and adaptations for describing evaporation of rain and stratiform precipitation processes were made. Comparisons with a much more sophisticated two-moment scheme showed only minor differences in most cases with the exception of strong squall-line situations. Whereas the deep convection parameterization was switched off completely, small-scale shallow convection was still parameterized by the appropriate part of the Tiedtke scheme. During the first year of operational use, convective events in synoptically driven situations were satisfactorily simulated. Also the daily cycles of summertime 10-m wind and 1-h precipitation sums were well captured. However, it became evident that the boundary layer description had to be adapted to enhance convection initiation in airmass convection situations. Here the asymptotic Blackadar length scale l∞ had proven to be a sensitive parameter.

Very high resolution spectral transform models are believed to become prohibitively expensive, due to the relative increase in computational cost of the Legendre transforms compared to the gridpoint computations. This article describes the implementation of a practical fast spherical harmonics transform into the Integrated Forecasting System (IFS) at ECMWF. Details of the accuracy of the computations, of the parallelisation and memory use are discussed. Results are presented that demonstrate the cost-effectiveness and accuracy of the fast spherical harmonics transform, successfully mitigating the concern about the disproportionally growing computational cost. Using the new transforms, the first T7999 global weather forecast (equivalent to approximately 2.5 km horizontal grid size) using a spectral transform model has been produced.
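The transform whose cost is at issue can be illustrated with a toy sketch (a small truncation and a synthetic field, not the IFS code): the direct Legendre transform for a single zonal wavenumber is a dense projection onto Legendre polynomials at Gaussian latitudes, and it is this dense product whose cost grows faster with resolution than the gridpoint computations.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

n = 32                       # toy truncation, far below T7999
mu, w = leggauss(n)          # Gaussian latitudes (mu = sin(latitude)) and weights

# A band-limited test field: 1*P0 + 0.5*P1 + 0.5*P2
field = legval(mu, [1.0, 0.5, 0.5])

# Analysis (direct transform): a_l = (2l+1)/2 * sum_j w_j * field_j * P_l(mu_j)
coeffs = np.array([
    (2 * l + 1) / 2 * np.dot(w * field, legval(mu, np.eye(n)[l]))
    for l in range(n)
])

# Synthesis (inverse transform) recovers a band-limited field exactly,
# because Gauss-Legendre quadrature with n points is exact to degree 2n-1
recon = legval(mu, coeffs)
```

Each wavenumber requires such an O(n^2) matrix-vector product, which is the cost the fast spherical harmonics transform reduces.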

We accelerate the computation of spherical harmonic transforms, using what is
known as the butterfly scheme. This provides a convenient alternative to the
approach taken in the second paper from this series on "Fast algorithms for
spherical harmonic expansions." The requisite precomputations become manageable
when organized as a "depth-first traversal" of the program's control-flow
graph, rather than as the perhaps more natural "breadth-first traversal" that
processes one-by-one each level of the multilevel procedure. We illustrate the
results via several numerical examples.

This book is the Russian version of the text by L.D. Landau and
E.M. Lifshitz, with very few additions compared with the 1965 edition
of the same book.

The spherical (geographical) coordinate system, which is an analog of Cartesian coordinates on a sphere, is widely used in the solution of geophysical problems. Any continuous function of a real variable (geophysical field) can be approximated to arbitrary accuracy along latitudinal circles by a trigonometric polynomial. The functions in the spherical coordinate system are defined along the meridian over the segment [0, π]. These functions can be approximated by series of Legendre polynomials, or of Chebyshev polynomials of the first kind (even functions) and Chebyshev polynomials of the second kind (odd functions). Owing to the intersection of meridians at the poles, the spherical coordinate system includes singular points, where vector functions and odd derivatives of scalar functions are discontinuous. The existence of singular points does not allow us to form series of orthogonal (including trigonometric) polynomials that converge uniformly over the entire segment [0, π], including its endpoints. This peculiarity of the spherical coordinate system leads to the so-called “problem of poles” in interpolation, numerical differentiation, and integration of geophysical fields in the direction along the meridian, as well as to computational instability in the solution of applied initial value problems. In this paper, we suggest a new method for approximating vector and scalar geophysical fields on the sphere using trigonometric polynomials, which provides uniform convergence of the series at all points of the sphere, including the poles. To this end, we propose to calculate Fourier integrals over a domain that is the direct product of full meridional circles (segment [0, 2π]) and latitudinal circles (segment [0, 2π]). The values of the sought function along a meridian in the North Pole to South Pole direction are taken within the limits [0, π].
Upon movement along the closing great-circle arc of this meridian (from the South Pole back to the North Pole within the limits [π, 2π]), the values are taken at the corresponding latitudes with a π shift and with the opposite sign if the function is a vector. At the poles, the values of the vector function are defined from the obvious requirement of continuity of the geophysical fields at all points of the sphere. Let us consider this algorithm in more detail. Suppose we have a sphere of unit radius centered at the origin of a spherical coordinate system with independent variables: θ is complement with
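A minimal numerical sketch of this extension (toy profile and grid; all names are illustrative, not from the paper): closing the meridian into a full great circle, with a sign flip for a vector component, makes the profile 2π-periodic, so an ordinary FFT applies.

```python
import numpy as np

n = 64
theta = np.linspace(0.0, np.pi, n, endpoint=False)   # pole-to-pole samples

v = np.sin(theta) * np.cos(3.0 * theta)   # toy meridional wind component

# Closing arc [pi, 2*pi]: the same latitudes traversed back toward the
# starting pole, with the opposite sign because the field is a vector
# component; the result is a 2*pi-periodic sequence on the great circle.
v_circle = np.concatenate([v, -v[::-1]])

coeffs = np.fft.rfft(v_circle)                     # trigonometric coefficients
recovered = np.fft.irfft(coeffs, n=v_circle.size)  # synthesis back on the circle
```

Because the extended sequence is periodic and continuous through the poles, its trigonometric series converges uniformly, which is the point of the construction.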

During the last few decades, an extensive development of the theory of
computing machines has occurred. On an intuitive basis, a computing
machine is considered to be any physical system whose dynamical
evolution takes it from one of a set of 'input' states to one of a set
of 'output' states. For a classical deterministic system the measured
output label is a definite function f of the prepared input label.
However, quantum computing machines, and indeed classical stochastic
computing machines, do not 'compute functions' in the considered sense.
Attention is given to the universal Turing machine, the Church-Turing
principle, quantum computers, the properties of the universal quantum
computer, and connections between physics and computer science.

Detailed study of the European Centre for Medium-Range Weather Forecasts verifications shows that identifiable improvements in the data assimilation, model and observing systems have significantly increased the accuracy of both short- and medium-range forecasts, although interannual (flow-dependent) variations in error-growth characteristics complicate the picture. The implied r.m.s. error of 500 hPa height analyses has fallen well below the 10 m level typical of radiosonde measurement error. Intrinsic error-doubling times, computed from the divergence of northern hemisphere forecasts started 1 day apart, exhibit a small overall reduction over the past 10 years at day two and beyond, and a much larger reduction at day one. Error-doubling times for the southern hemisphere have become generally shorter and are now similar to those for the northern hemisphere. One-day forecast errors have been reduced so much in the southern hemisphere that medium-range forecasts for the region have become almost as skilful as those for the northern hemisphere.
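The error-doubling diagnostic mentioned above can be sketched numerically; the numbers below are synthetic stand-ins, not ECMWF verification data. Fitting exponential growth e(t) = e0 · exp(λt) to the r.m.s. divergence of forecasts started one day apart gives the doubling time as ln 2 / λ.

```python
import numpy as np

t = np.arange(1.0, 8.0)        # forecast day
e = 5.0 * np.exp(0.35 * t)     # synthetic 500 hPa height r.m.s. difference (m)

# Linear fit in log space recovers the exponential growth rate per day
lam = np.polyfit(t, np.log(e), 1)[0]
doubling_time = np.log(2.0) / lam   # about 2 days for lambda = 0.35
```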

An algorithm is proposed for computing two-dimensional Fourier series in a spherical system of coordinates over a set of orthogonal
ultraspherical polynomials whose special cases are the Legendre polynomials and Chebyshev polynomials of the first and second
kind. The series uniformly converge at all points of the sphere including the poles. Unlike traditional spectral expansions
on the sphere, they explicitly contain additional terms that characterize an odd component of a desired analytical function
relative to the poles. It is shown that the expansion in the small vicinity of the poles (polar caps) is simplified because
of the closeness to zero of the Fourier series terms responsible for the approximation of the components of the function that
are odd relative to the poles. As the equatorial zone is approached, the magnitude of the components of the desired function that are asymmetric
relative to the poles increases and becomes comparable to the contribution of the symmetric components. The new method is
used for a special case of the spectral approximation of a continuous scalar analytical function with spherical harmonics
used as the orthogonal basis. It is shown that double Fourier series in this case give an extension of the traditional spectral
method. An alternative is to construct double Fourier series in associated Chebyshev polynomials of the first and second kind.
An example of the spectral approximation of an analytical function on the sphere is presented.
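The Chebyshev alternative can be sketched with a toy profile (not the paper's example): with the substitution x = cos θ, an expansion in Chebyshev polynomials of the first kind is a cosine series in θ, so a smooth profile that is even about the poles converges rapidly and uniformly.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

theta = np.linspace(0.0, np.pi, 201)
x = np.cos(theta)          # cos is monotonic on [0, pi], so x covers [-1, 1]
f = np.exp(x)              # toy profile, smooth and even about the poles

coef = C.chebfit(x, f, deg=16)              # least-squares Chebyshev coefficients
max_err = np.max(np.abs(C.chebval(x, coef) - f))   # near machine precision
```

The rapid coefficient decay for this smooth even profile is what a low-degree truncation relies on; odd components would be carried by the second-kind polynomials mentioned above.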

A reversible Turing machine is one whose transition function is $1:1$, so that no instantaneous description (ID) has more than one predecessor. Using a pebbling argument, this paper shows that, for any $\varepsilon > 0$, ordinary multitape Turing machines using time T and space S can be simulated by reversible ones using time $O(T^{1+\varepsilon})$ and space $O(S\log T)$, or in linear time and space $O(ST^\varepsilon)$. The former result implies in particular that reversible machines can simulate ordinary ones in quadratic space. These results refer to reversible machines that save their input, thereby ensuring a global $1:1$ relation between initial and final IDs, even when the function being computed is many-to-one. Reversible machines that instead erase their input can of course compute only $1:1$ partial recursive functions and indeed provide a Gödel numbering of such functions. The time/space cost of computing a $1:1$ function on such a machine is equal within a small polynomial to the cost of co...
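The compute-copy-uncompute idea behind such simulations can be sketched in a few lines (a toy model, not the paper's pebbling construction): an irreversible step is made invertible by logging a history, the answer is copied out, and the history is then popped to restore the starting state, trading space for reversibility.

```python
def step(state):
    # toy irreversible transition: halving discards the low bit
    return state // 2

def run_reversibly(state, n_steps):
    history = []                 # this log is the space overhead pebbling reduces
    for _ in range(n_steps):
        history.append(state)    # save enough information to undo the step
        state = step(state)
    result = state               # "copy out" the answer
    while history:               # uncompute: rewind to the initial state
        state = history.pop()
    return result, state

out, restored = run_reversibly(40, 3)   # 40 -> 20 -> 10 -> 5, then rewound
```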

A digital computer is generally believed to be an efficient universal computing device; that is, it is believed able to simulate any physical computing device with an increase in computation time of at most a polynomial factor. This may not be true when quantum mechanics is taken into consideration. This paper considers factoring integers and finding discrete logarithms, two problems which are generally thought to be hard on a classical computer and have been used as the basis of several proposed cryptosystems. Efficient randomized algorithms are given for these two problems on a hypothetical quantum computer. These algorithms take a number of steps polynomial in the input size, e.g., the number of digits of the integer to be factored. AMS subject classifications: 82P10, 11Y05, 68Q10. 1 Introduction One of the first results in the mathematics of computation, which underlies the subsequent development of much of theoretical computer science, was the distinction between compu...
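The classical reduction this factoring algorithm rests on can be sketched as follows; only the order-finding step, done here by brute force, is what the quantum computer replaces with polynomial-time period finding. Given the multiplicative order r of a mod N, an even r with a^(r/2) ≢ −1 (mod N) yields a nontrivial factor via a gcd.

```python
from math import gcd

def multiplicative_order(a, N):
    # smallest r with a**r = 1 (mod N); exponential classically in general,
    # this loop stands in for the quantum period-finding subroutine
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor_from_order(N, a):
    g = gcd(a, N)
    if g != 1:
        return g                  # lucky: a already shares a factor with N
    r = multiplicative_order(a, N)
    if r % 2:
        return None               # odd order: try another a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None               # trivial square root of 1: try another a
    f = gcd(y - 1, N)             # (a**(r/2) - 1) shares a factor with N
    return f if 1 < f < N else None

print(factor_from_order(15, 7))   # order of 7 mod 15 is 4; prints 3
```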