May 2025
Initialized climate predictions have shown success in predicting interannual to decadal climate variations in some regions. However, initialized predictions also suffer from issues arising from imperfect initialization and from inconsistencies between the model and the real-world climate state and processes. In particular, a so-called signal-to-noise paradox has been identified in recent years. The paradox implies that models can predict observations better than they predict themselves, despite physical inconsistencies between the modeled and real-world climate. This is often interpreted as an indicator of model deficiencies. Here we present results of a perfect-model decadal prediction experiment, in which the predictions are initialized from climate states of the model's own transient simulation. This setup avoids the issues related to model inconsistencies, initialization shock and climate drift that affect real-world initialized climate predictions. We find that the perfect-model decadal predictions are highly skillful in predicting the near-surface air temperature and sea level pressure of the reference run on decadal timescales. Interestingly, we also find signal-to-noise issues: the perfect-model reference run is predicted with higher skill than any of the initialized prediction members. This counterintuitive result suggests that the signal-to-noise paradox may not be due solely to model deficiencies in representing the observed climate in initialized predictions. We illustrate that this signal-to-noise problem on multi-annual to decadal timescales is related to analysis practices that concatenate time series from separate, discontinuous initialized simulations, which introduces inconsistencies relative to the continuous transient climate realizations and the observations. In particular, concatenating independently initialized predictions into a single time series breaks its auto-correlation.
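To make the final point concrete, the following is a minimal, hypothetical sketch (not taken from the paper) using a toy AR(1) process: it compares the lag-1 auto-correlation of one continuous realization with that of a series built by concatenating independently generated segments of the same length, mimicking how hindcasts initialized in different years are stitched into one time series. All names and parameter values (phi, seg_len, n_segments) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1(n, phi=0.8):
    """Generate a toy AR(1) series with lag-1 autocorrelation ~phi."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation of a series."""
    x = x - x.mean()
    return np.sum(x[1:] * x[:-1]) / np.sum(x * x)

n_segments, seg_len = 50, 10   # e.g. 50 "hindcasts", 10 time steps each (illustrative)

# Continuous transient realization (analogue of the reference run or observations)
continuous = ar1(n_segments * seg_len)

# Concatenation of independently initialized segments: each segment is its own
# AR(1) realization, so adjacent segments share no memory across the joins
concatenated = np.concatenate([ar1(seg_len) for _ in range(n_segments)])

print("lag-1 autocorrelation, continuous series:  ", round(lag1_autocorr(continuous), 2))
print("lag-1 autocorrelation, concatenated series:", round(lag1_autocorr(concatenated), 2))
```

In this toy setup, roughly one in seg_len of the lagged pairs straddles a join where the two segments share no memory, so the concatenated series shows a systematically weaker lag-1 auto-correlation than the continuous realization, which is the kind of inconsistency the abstract attributes to concatenating independently initialized predictions.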