Article

Comparing the ensemble mean and the ensemble standard deviation as inputs for probabilistic medium-range temperature forecasts

11/2003;
Source: arXiv

ABSTRACT: We ask the following question: what are the relative contributions of the ensemble mean and the ensemble standard deviation to the skill of a site-specific probabilistic temperature forecast? Is it the case that most of the benefit of using an ensemble forecast to predict temperatures comes from the ensemble mean, or from the ensemble spread, or is the benefit derived equally from the two? The answer is that one of the two is much more useful than the other.
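
One way to make the comparison concrete is to score three Gaussian predictive distributions: climatology alone, the ensemble mean with a constant spread, and the ensemble mean together with the ensemble spread. The sketch below uses synthetic data, a crude in-sample calibration, and the ignorance (negative log-likelihood) score; these choices are illustrative assumptions, not the paper's own setup.

    # Illustrative sketch with synthetic data: how much of the probabilistic
    # skill comes from the ensemble mean and how much from the ensemble spread?
    # The Gaussian forecast models and the crude in-sample calibration are
    # assumptions for illustration, not the calibration used in the paper.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    n_days, n_members = 1000, 50
    signal = rng.normal(0.0, 3.0, n_days)             # predictable temperature signal
    spread = 1.0 + rng.gamma(2.0, 0.5, n_days)        # flow-dependent uncertainty
    obs = signal + spread * rng.normal(size=n_days)
    ens = signal[:, None] + spread[:, None] * rng.normal(size=(n_days, n_members))

    ens_mean = ens.mean(axis=1)
    ens_std = ens.std(axis=1, ddof=1)

    def ignorance(mu, sigma):
        # Mean negative log-likelihood (ignorance score) of Gaussian forecasts.
        return -stats.norm.logpdf(obs, loc=mu, scale=sigma).mean()

    # Climatology: constant mean and constant spread.
    clim = ignorance(np.full(n_days, obs.mean()), np.full(n_days, obs.std()))
    # Ensemble mean only: day-to-day mean, constant spread from past errors.
    mean_only = ignorance(ens_mean, np.full(n_days, (obs - ens_mean).std()))
    # Ensemble mean and spread: day-to-day mean and day-to-day spread.
    mean_and_spread = ignorance(ens_mean, ens_std)

    print(f"climatology:            {clim:.3f}")
    print(f"ensemble mean only:     {mean_only:.3f}")
    print(f"ensemble mean + spread: {mean_and_spread:.3f}")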

  • ABSTRACT: We show that probabilistic weather forecasts of site-specific temperatures can be dramatically improved by using seasonally varying rather than constant calibration parameters. (A sketch of seasonally varying calibration appears after this list.)
    03/2004;
  • ABSTRACT: The THORPEX goal of improving weather forecasts from one day to two weeks suggests the combination of multi-model and multi-initial-condition ensembles of simulations into a probabilistic forecast of some kind. This contribution presents a simple methodology for combining forecasts (be they high-resolution or ensemble forecasts) into a predictive distribution function of one or more chosen target variables. While the identification of the most useful combination is a question of scoring distribution functions, the emphasis here is on alternative mathematical methods with which to combine the variety of potential inputs. The aim is not to determine which of two operational forecast systems is "better" but rather which combination of all available forecast products is most useful. From a user's point of view, the ideal measure of forecast skill is defined in terms of the task at hand, but proof-of-value studies can be expensive and tend to yield domain-specific results. Rather than adopt a particular user's cost function, general measures of skill are employed to distinguish the performance of various combinations (i.e. different interpretations of the information at hand). The methodology is illustrated using a combination of ECMWF and NCEP forecasts, and demonstrates that the distinctions introduced in this work can have a significant impact on the utility of the forecast in application. (A sketch of blending two forecast systems into one predictive distribution appears after this list.)
  • ABSTRACT: The translation of an ensemble of model runs into a probability distribution is a common task in model-based prediction. Common methods for such ensemble interpretations proceed as if the verification and the ensemble were draws from the same underlying distribution, an assumption that is not viable for most, if not all, real-world ensembles. An alternative is to consider an ensemble merely as a source of information rather than as a set of possible scenarios of reality. This approach, which looks for maps between ensembles and probability distributions, is investigated and extended. Common methods are revisited, and an improvement to standard kernel dressing, called ‘affine kernel dressing’ (AKD), is introduced. AKD assumes an affine mapping between ensemble and verification, typically acting not on individual ensemble members but on the entire ensemble as a whole; the parameters of this mapping are determined in parallel with the other dressing parameters, including a weight assigned to the unconditioned (climatological) distribution. These amendments to standard kernel dressing, albeit simple, can improve performance significantly and are shown to be appropriate for both overdispersive and underdispersive ensembles, unlike standard kernel dressing, which exacerbates overdispersion. Studies are presented using operational numerical weather predictions for two locations and data from the Lorenz63 system, demonstrating both effectiveness given operational constraints and statistical significance given a large sample. (A simplified kernel-dressing sketch with an affine ensemble map appears after this list.)
    Tellus 07/2008; 60(4):663-678.
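
The first item above describes seasonally varying calibration. A minimal sketch of the idea, assuming a synthetic forecast-error series and a simple annual-harmonic least-squares fit (both illustrative assumptions, not the paper's model):

    # Hypothetical sketch of seasonally varying calibration: the forecast bias
    # is modelled with annual sine/cosine harmonics instead of a single constant.
    # Synthetic data; the harmonic-regression form is an assumption.
    import numpy as np

    rng = np.random.default_rng(1)

    days = np.arange(3 * 365)                                     # three years of daily forecasts
    phase = 2 * np.pi * days / 365.25
    true_bias = 1.5 * np.sin(phase) - 0.5                         # bias that varies with season
    obs_minus_fc = true_bias + rng.normal(0.0, 1.0, days.size)    # observed forecast error

    # Constant calibration: a single mean bias.
    const_correction = obs_minus_fc.mean()

    # Seasonal calibration: least-squares fit of a constant plus annual harmonics.
    X = np.column_stack([np.ones_like(phase), np.sin(phase), np.cos(phase)])
    coef, *_ = np.linalg.lstsq(X, obs_minus_fc, rcond=None)
    seasonal_correction = X @ coef

    rmse_const = np.sqrt(np.mean((obs_minus_fc - const_correction) ** 2))
    rmse_seasonal = np.sqrt(np.mean((obs_minus_fc - seasonal_correction) ** 2))
    print(f"residual error, constant calibration: {rmse_const:.2f}")
    print(f"residual error, seasonal calibration: {rmse_seasonal:.2f}")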
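
The THORPEX item describes combining forecasts from several systems into a single predictive distribution function. A minimal sketch, assuming two synthetic forecast systems (stand-ins for ECMWF- and NCEP-like inputs) dressed with Gaussians and blended as a two-component mixture whose weight is chosen to minimise the ignorance score; the data, the dressing, and the weight search are illustrative assumptions:

    # Hypothetical sketch: blend two forecast systems into one predictive
    # distribution. Each forecast is dressed with a Gaussian; the mixture
    # weight is chosen to minimise the ignorance (negative log) score.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    n = 800
    truth = rng.normal(0.0, 3.0, n)
    fc_a = truth + rng.normal(0.0, 1.0, n)          # forecast system A
    fc_b = truth + rng.normal(0.0, 2.0, n)          # forecast system B (less accurate)
    obs = truth + rng.normal(0.0, 0.5, n)

    sigma_a = np.std(obs - fc_a)                    # dressing widths from past errors
    sigma_b = np.std(obs - fc_b)

    def ignorance(weight):
        # Mean negative log of the two-component mixture density at the observations.
        dens = (weight * stats.norm.pdf(obs, fc_a, sigma_a)
                + (1.0 - weight) * stats.norm.pdf(obs, fc_b, sigma_b))
        return -np.mean(np.log(dens))

    weights = np.linspace(0.0, 1.0, 101)
    scores = [ignorance(w) for w in weights]
    best = weights[int(np.argmin(scores))]
    print(f"best mixture weight on system A: {best:.2f} "
          f"(ignorance {min(scores):.3f} vs {ignorance(1.0):.3f} for A alone)")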
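
The last item introduces affine kernel dressing (AKD). The sketch below illustrates the general idea only: an affine map (offset a, scale b) is applied to the whole ensemble, the mapped members are dressed with Gaussian kernels of width s, and a weight w on the climatological distribution is blended in. The parametrisation and the crude grid-search fit are assumptions for illustration; the paper's AKD parametrisation and fitting procedure differ in detail.

    # Hypothetical, simplified sketch in the spirit of kernel dressing: compare
    # standard dressing (no affine map) with an affinely mapped, climatology-
    # blended variant. Synthetic data; not the paper's exact AKD formulation.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    n_days, n_members = 600, 20
    signal = rng.normal(0.0, 3.0, n_days)
    obs = signal + rng.normal(0.0, 1.0, n_days)
    # An overdispersive, biased ensemble.
    ens = 0.7 * signal[:, None] + 1.0 + 2.5 * rng.normal(size=(n_days, n_members))

    clim = stats.norm(obs.mean(), obs.std())        # climatological fallback

    def mixture_ignorance(a, b, s, w):
        # Ignorance of: (1-w) * kernel-dressed, affinely mapped ensemble + w * climatology.
        z = a + b * ens                                             # affine map of the ensemble
        kernels = stats.norm.pdf(obs[:, None], loc=z, scale=s).mean(axis=1)
        dens = (1.0 - w) * kernels + w * clim.pdf(obs)
        return -np.mean(np.log(dens))

    # Standard dressing: no affine map (a=0, b=1), only a kernel width.
    standard = min(mixture_ignorance(0.0, 1.0, s, 0.0) for s in np.linspace(0.2, 3.0, 15))

    # Affine dressing: crude grid search over offset, scale and width, with a
    # small fixed climatology weight.
    affine = min(mixture_ignorance(a, b, s, 0.05)
                 for a in np.linspace(-2.0, 2.0, 9)
                 for b in np.linspace(0.4, 1.2, 9)
                 for s in np.linspace(0.2, 3.0, 15))

    print(f"ignorance, standard kernel dressing: {standard:.3f}")
    print(f"ignorance, affine kernel dressing:   {affine:.3f}")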
