ABSTRACT: Extreme precipitation was identified in the midwestern United States using an object-oriented approach applied to the NCEP stage-II hourly precipitation dataset. This approach groups contiguous areas that exceed a user-defined threshold into "objects," which then allows object attributes to be diagnosed. Those objects with precipitation maxima in the 99th percentile (≥55 mm) were considered extreme, and 3484 such objects were identified in the midwestern United States between 1996 and 2010. Precipitation objects ranged in size from hundreds to over 100 000 km², and the maximum precipitation within each object varied between 55 and 104 mm. The majority of extreme precipitation occurrences were in summer (June, July, and August) and peaked in the afternoon into night (1900–0200 UTC) in the diurnal cycle. Consistent with the authors' previous work, this study shows that the systems that produce extreme precipitation in the midwestern United States vary widely across the convective-storm spectrum.
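The object-identification step described above amounts to connected-component labeling of the precipitation grid. A minimal sketch follows; the function name, toy field, and 5 mm object threshold are illustrative assumptions (not the study's code), while the 55 mm extreme cutoff comes from the abstract.

```python
import numpy as np
from scipy import ndimage

def find_extreme_objects(precip, object_thresh=5.0, extreme_thresh=55.0):
    """Group contiguous grid cells exceeding `object_thresh` (mm) into
    objects; return the per-object maxima for objects whose maximum
    meets `extreme_thresh` (the abstract's 99th-percentile cut)."""
    labels, n = ndimage.label(precip > object_thresh)
    if n == 0:
        return []
    maxima = np.atleast_1d(
        ndimage.maximum(precip, labels, index=range(1, n + 1)))
    return [float(m) for m in maxima if m >= extreme_thresh]

# toy hourly precipitation field (mm) with two contiguous objects
field = np.array([
    [0., 0., 10., 60., 0.],
    [0., 0., 12.,  8., 0.],
    [0., 0.,  0.,  0., 7.],
])
extremes = find_extreme_objects(field)  # only the 60 mm object qualifies
```

Once objects are labeled, any attribute (size, mean, variance) can be diagnosed per label in the same way, which is what makes the object-oriented framing convenient.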
ABSTRACT: We explore the use of high-resolution dynamical downscaling as a means to simulate the regional climatology and variability
of hazardous convective-scale weather. Our basic approach differs from a traditional regional climate model application in
that it involves a sequence of daily integrations. We use the Weather Research and Forecasting (WRF) model, with global reanalysis
data as initial and boundary conditions. Horizontal grid lengths of 4.25 km allow for explicit representation of deep convective
storms and hence a compilation of their occurrence statistics over a large portion of the conterminous United States. The
resultant 10-year sequence of WRF model integrations yields precipitation that, despite its positive bias, has a diurnal cycle
consistent with observations, and otherwise has a realistic geographical distribution. Similarly, the occurrence frequency
of short-duration, potentially flooding rainfall compares well to analyses of hourly rain gauge data. Finally, the climatological
distribution of hazardous-thunderstorm occurrence is shown to be represented with some degree of skill through a model proxy
that relates rotating convective updraft cores to the presence of hail, damaging surface winds, and tornadoes. The results
suggest that the proxy occurrences, when coupled with information on the larger-scale atmosphere, could provide guidance on
the reliability of trends in the observed occurrences.
Keywords: Severe thunderstorm, Heavy rainfall, Dynamical downscaling, Reanalysis, Weather Research and Forecasting model
ABSTRACT: This research establishes a methodology to quantify the characteristics of convective cloud systems that produce subdiurnal extreme precipitation. Subdiurnal extreme precipitation events are identified by examining hourly precipitation data from 48 rain gauges in the midwestern United States during the period 1956-2005. Time series of precipitation accumulations for 6-h periods are fitted to the generalized Pareto distribution to determine the 10-yr return levels for the stations. An extreme precipitation event is one in which precipitation exceeds the 10-yr return level over a 6-h period. Return levels in the Midwest vary between 54 and 93 mm for 6-h events. Most of the precipitation contributing to these events falls within 1-2 h. Characteristics of the precipitating systems responsible for the extremes are derived from the National Centers for Environmental Prediction stage II and stage IV multisensor precipitation data. The precipitating systems are treated as objects that are identified using an automated procedure. Characteristics considered include object size and the precipitation mean, variance, and maximum within each object. For example, object sizes vary between 96 and 34 480 km², suggesting that a wide variety of convective precipitating systems can produce subdiurnal extreme precipitation.
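The return-level computation described above follows the standard peaks-over-threshold recipe: fit a generalized Pareto distribution to threshold excesses, then invert it for the T-year level. The sketch below illustrates that recipe under stated assumptions (synthetic data, an arbitrary 25 mm threshold and exceedance rate); it is not the authors' fitting procedure.

```python
import numpy as np
from scipy.stats import genpareto

def return_level(excesses, threshold, events_per_year, T=10.0):
    """Peaks-over-threshold T-year return level from a fitted GPD.
    `excesses` are accumulations minus `threshold` (mm)."""
    xi, _, sigma = genpareto.fit(excesses, floc=0.0)
    m = events_per_year * T            # expected exceedances in T years
    if abs(xi) < 1e-6:                 # exponential limit as xi -> 0
        return threshold + sigma * np.log(m)
    return threshold + (sigma / xi) * (m**xi - 1.0)

# synthetic 6-h exceedances over a hypothetical 25 mm threshold,
# assuming ~4 threshold exceedances per year
exc = genpareto.rvs(0.1, scale=12.0, size=200, random_state=0)
level = return_level(exc, threshold=25.0, events_per_year=4.0)
```

The choice of threshold matters: it must be high enough that the GPD tail approximation holds, but low enough to leave sufficient exceedances for a stable fit.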
Journal of Hydrometeorology 02/2010; 11(1):211-218.
ABSTRACT: Few studies examine sub-diurnal precipitation extremes, focusing instead on daily extremes. However, flash flooding occurs on shorter time scales and significantly impacts life and property. Case studies have identified storm systems responsible for significant flooding events, but this study seeks to quantify the characteristics of systems that produce sub-diurnal extreme precipitation. Sub-diurnal extreme precipitation events are identified by examining hourly precipitation data from select stations in Indiana and Illinois during the period 1956-2005. Time series of precipitation accumulations for 3- and 6-hour periods are fitted to the Pareto distribution to determine the 10-year return levels for the stations. An extreme precipitation event is defined as one that exceeds the 10-year return level over both a 3-hour and a 6-hour period. Stations in Indiana have return levels ranging from 2.02 in. to 2.74 in. for 3-hour periods and 2.46 in. to 3.16 in. for 6-hour periods. Stations in Illinois have return levels ranging from 2.43 in. to 2.84 in. for 3-hour periods and 2.84 in. to 3.39 in. for 6-hour periods. These return levels yield about 6 events per station over the 50-year period of record for Indiana and between 3 and 7 events per station in Illinois. Multisensor precipitation data are available beginning in 1996 for stage II analyses and 2002 for stage IV analyses, yielding a total of 6 extreme precipitation events from the Indiana stations and 5 from the Illinois stations. The automated classification procedure developed by Baldwin et al. (2005) is applied to stage II/IV analyses for each hour of each event to determine the statistical characteristics of each event. Contiguous areas of precipitation above a user-defined threshold are considered a single object. The threshold value of 5 mm (0.20 in.) used here is considered the lower bound for convective precipitation.
Object characteristics include the number of pixels (4 km × 4 km), mean precipitation, variance, maximum precipitation, shape, and orientation angle. During each hour of an extreme precipitation event, the object whose centroid is closest to the station is used to define the characteristics of the precipitating system. Over the course of an extreme precipitation event, the maximum precipitation identified in the object corresponds well to hours in which extremely heavy precipitation (> 1.00 in.) occurs. Object size appears closely tied to the hourly maximum precipitation, with sizes ranging from hundreds to thousands of pixels at the time of peak precipitation.
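The centroid-matching step above (pick the object nearest the station, then diagnose its attributes) can be sketched as follows. The function name, toy field, station location, and attribute set are illustrative assumptions, not the Baldwin et al. (2005) code.

```python
import numpy as np
from scipy import ndimage

def nearest_object_stats(precip, station_rc, thresh=5.0):
    """Pick the precipitation object whose (precipitation-weighted)
    centroid is closest to the station at grid position (row, col),
    and return a few of its attributes."""
    labels, n = ndimage.label(precip > thresh)
    if n == 0:
        return None
    centroids = ndimage.center_of_mass(precip, labels, range(1, n + 1))
    dists = [np.hypot(r - station_rc[0], c - station_rc[1])
             for r, c in centroids]
    k = int(np.argmin(dists)) + 1      # label of the nearest object
    vals = precip[labels == k]
    return {"n_pixels": int(vals.size),
            "mean": float(vals.mean()),
            "variance": float(vals.var()),
            "max": float(vals.max())}

# toy field (mm) with two objects; station sits near the larger one
field = np.array([
    [0., 0., 10., 60., 0.],
    [0., 0., 12.,  8., 0.],
    [7., 6.,  0.,  0., 0.],
])
stats = nearest_object_stats(field, station_rc=(0, 3))
```

Shape and orientation angle, also listed above, would come from the same labeled mask (e.g. via second moments of the object's pixel coordinates).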
ABSTRACT: This COMET proposal describes a two-year project, beginning on 6/1/00, that will create, evaluate, and implement the first precipitation-type probabilistic forecast system, using ensemble forecasting and consensus forecasting concepts, at the Hydrometeorological Prediction Center (HPC) and the Storm Prediction Center (SPC). The research component of this project will evaluate the quality of various precipitation-type algorithms and investigate various ensemble forecasting concepts in applying the collection of algorithms. The operational component of the project will create probabilistic forecasts of each precipitation type diagnosed by the algorithms and consensus forecasts of the most probable precipitation type. After sufficient development and testing at the HPC and the SPC, these forecasts will be made available routinely to all National Weather Service (NWS) offices within two years, provided approval is granted by the NWS. I. Overview Precipitation-type forecasting remains a difficult task for even the most experienced forecasters because of the uncertainties associated with forecasting the evolution of winter weather systems that produce snow, rain, ice pellets, and freezing rain, as well as inadequate atmospheric data sampling, incomplete knowledge of precipitation microphysics, and limited techniques to evaluate high-resolution model data. The importance of forecasting these events accurately is illustrated by a set of statistics compiled by the Office of Meteorology, which reports
ABSTRACT: Severe thunderstorms comprise an extreme class of deep convective clouds and produce high-impact weather such as destructive surface winds, hail, and tornadoes. This study addresses the question of how severe thunderstorm frequency in the United States might change because of enhanced global radiative forcing associated with elevated greenhouse gas concentrations. We use global climate models and a high-resolution regional climate model to examine the larger-scale (or "environmental") meteorological conditions that foster severe thunderstorm formation. Across this model suite, we find a net increase during the late 21st century in the number of days in which these severe thunderstorm environmental conditions (NDSEV) occur. Attributed primarily to increases in atmospheric water vapor within the planetary boundary layer, the largest increases in NDSEV are shown during the summer season, in proximity to the Gulf of Mexico and Atlantic coastal regions. For example, this analysis suggests a future increase in NDSEV of 100% or more in locations such as Atlanta, GA, and New York, NY. Any direct application of these results to the frequency of actual storms must also consider storm initiation.
Proceedings of the National Academy of Sciences 12/2007.
ABSTRACT: A previous study of the mean spatial bias errors associated with operational forecast models motivated an examination of the mechanisms responsible for these biases. One hypothesis for the cause of these errors is that mobile synoptic-scale phenomena are partially responsible. This paper explores this hypothesis using 24-h forecasts from the operational Eta Model and an experimental version of the Eta run with Kain-Fritsch convection (EtaKF). For a sample of 44 well-defined upper-level short-wave troughs arriving on the west coast of the United States, 70% were underforecast (as measured by the 500-hPa geopotential height), a likely result of being undersampled by the observational network. For a different sample of 45 troughs that could be tracked easily across the country, consecutive model runs showed that the height errors associated with 44% of the troughs generally decreased in time, 11% increased in time, 18% had relatively steady errors, 2% were uninitialized entering the West Coast, and 24% exhibited some other kind of behavior. Thus, landfalling short-wave troughs were typically underforecast (positive errors, heights too high), but these errors tended to decrease as they moved across the United States, likely a result of being better initialized as the troughs became influenced by more upper-air data. Nevertheless, some errors in short-wave troughs were not corrected as they fell under the influence of supposedly increased data amount and quality. These results indirectly show the effect that the amount and quality of observational data has on the synoptic-scale errors in the models. On the other hand, long-wave ridges tended to be underforecast (negative errors, heights too low) over a much larger horizontal extent. These results are confirmed in a more systematic manner over the entire dataset by segregating the model output at each grid point by the sign of the 500-hPa relative vorticity.
Although errors at grid points with positive relative vorticity are small but positive in the western United States, the errors become large and negative farther east. Errors at grid points with negative relative vorticity, on the other hand, are generally negative across the United States. A large negative bias observed in the Eta and EtaKF over the southeast United States is believed to be due to an error in the longwave radiation scheme interacting with water vapor and clouds. This study shows that model errors may be related to the synoptic-scale flow, and even large-scale features such as long-wave troughs can be associated with significant large-scale height errors.
ABSTRACT: The New England High-Resolution Temperature Program seeks to improve the accuracy of summertime 2-m temperature and dewpoint temperature forecasts in the New England region through a collaborative effort between the research and operational components of the National Oceanic and Atmospheric Administration (NOAA). The four main components of this program are 1) improved surface and boundary layer observations for model initialization, 2) special observations for the assessment and improvement of model physical process parameterization schemes, 3) using model forecast ensemble data to improve upon the operational forecasts for near-surface variables, and 4) transferring knowledge gained to commercial weather services and end users. Since 2002 this program has enhanced surface temperature observations by adding 70 new automated Cooperative Observer Program (COOP) sites, identified and collected data from over 1000 non-NOAA mesonet sites, and deployed boundary layer profilers and other special instrumentation throughout the New England region to better observe the surface energy budget. Comparisons of these special datasets with numerical model forecasts indicate that near-surface temperature errors are strongly correlated with errors in the model-predicted radiation fields. The attenuation of solar radiation by aerosols is one potential source of the model radiation bias. However, even with these model errors, results from bias-corrected ensemble forecasts are more accurate than the operational model output statistics (MOS) forecasts for 2-m temperature and dewpoint temperature, while also providing reliable forecast probabilities. Discussions with commercial weather vendors and end users have emphasized the potential economic value of these probabilistic ensemble-generated forecasts.
Bulletin of the American Meteorological Society 04/2006; 87(4).
ABSTRACT: The spatial structure of bias errors in numerical model output is valuable to both model developers and operational forecasters, especially if the field containing the structure itself has statistical significance in the face of naturally occurring spatial correlation. A semiparametric Monte Carlo method, along with a moving blocks bootstrap method, is used to determine the field significance of spatial bias errors within spatially correlated error fields. This process can be completely automated, making it an attractive addition to the verification tools already in use. The process demonstrated here results in statistically significant spatial bias error fields at any arbitrary significance level. To demonstrate the technique, 0000 and 1200 UTC runs of the operational Eta Model and the operational Eta Model using the Kain-Fritsch convective parameterization scheme are examined. The resulting fields of forecast errors for geopotential heights and winds at 850, 700, 500, and 250 hPa over a period of 14 months (26 January 2001-31 March 2002) are examined and compared using the verifying initial analysis. Specific examples are shown, and some plausible causes for the resulting significant bias errors are proposed.
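The moving blocks bootstrap mentioned above resamples blocks of consecutive times rather than individual days, preserving temporal correlation in the error series. A minimal sketch of that idea follows, using a synthetic biased error field; the function, block length, and grid are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def moving_blocks_bootstrap_mean(errors, block_len=10, n_boot=1000, seed=0):
    """Bootstrap distribution of the time-mean error at each grid point,
    resampling overlapping blocks of `block_len` consecutive times."""
    rng = np.random.default_rng(seed)
    t = errors.shape[0]
    n_blocks = int(np.ceil(t / block_len))
    means = np.empty((n_boot,) + errors.shape[1:])
    for b in range(n_boot):
        starts = rng.integers(0, t - block_len + 1, size=n_blocks)
        sample = np.concatenate(
            [errors[s:s + block_len] for s in starts], axis=0)[:t]
        means[b] = sample.mean(axis=0)
    return means

# synthetic daily height errors on a tiny grid, with a built-in bias
rng = np.random.default_rng(1)
errs = rng.normal(loc=2.0, scale=5.0, size=(120, 4, 5))
boot_means = moving_blocks_bootstrap_mean(errs)
# pointwise check: does the 95% bootstrap interval exclude zero?
lo, hi = np.percentile(boot_means, [2.5, 97.5], axis=0)
significant = (lo > 0) | (hi < 0)
```

Field significance then asks whether the *number* of locally significant points exceeds what spatially correlated noise alone would produce, which is where the paper's semiparametric Monte Carlo step comes in.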