Michael E. Baldwin

Purdue University, West Lafayette, Indiana, United States


Publications (50) · 124.42 Total impact


  • No preview · Article · Jun 2015 · Bulletin of the American Meteorological Society
  • Source

    Full-text · Article · Feb 2015 · Bulletin of the American Meteorological Society
  • Source
    Eric D. Robinson · Robert J. Trapp · Michael E. Baldwin
    ABSTRACT: Trends in severe thunderstorms and the associated phenomena of tornadoes, hail, and damaging winds have been difficult to determine because of the many uncertainties in the historical eyewitness-report-based record. The authors demonstrate how a synthetic record that is based on high-resolution numerical modeling may be immune to these uncertainties. Specifically, a synthetic record is produced through dynamical downscaling of global reanalysis data over the period of 1990-2009 for the months of April-June using the Weather Research and Forecasting model. An artificial neural network (ANN) is trained and then utilized to identify occurrences of severe thunderstorms in the model output. The model-downscaled precipitation was determined to have a high degree of correlation with precipitation observations. However, the model significantly overpredicted the amount of rainfall in many locations. The downscaling methodology and ANN generated a realistic temporal evolution of the geospatial severe-thunderstorm activity, with a geographical shift of the activity to the north and east as the warm season progresses. Regional time series of modeled severe-thunderstorm occurrences showed no significant trends over the 20-yr period of consideration, in contrast to trends seen in the observational record. Consistently, no significant trend was found over the same 20-yr period in the environmental conditions that support the development of severe thunderstorms.
    Full-text · Article · Sep 2013 · Journal of Applied Meteorology and Climatology
  •
    ABSTRACT: Weather forecasters often use a feature-specific approach in their forecast process, particularly when considering specific meteorological phenomena, such as surface fronts. This approach involves the identification, characterization, classification, and tracking of well-defined weather systems of interest, either in the forecast guidance or observational data. Researchers have recently proposed developing feature-specific prediction methods using automated methods of identifying features in numerical weather prediction output and providing information related to the characteristics of those features to the forecasters who use that output in their forecasting process. While today's high-resolution operational and research numerical weather prediction models can provide valuable forecast information, they also contribute substantially to the volume of data that the forecaster needs to interpret. By identifying and characterizing predicted meteorological features of interest, guidance on the most relevant events during the forecast period can quickly be obtained, enhancing forecaster efficiency as a result. As with any forecast, it is important to understand the quality of the predictions. Automated methods of evaluating feature-specific predictions are actively being developed by the research community. In this study, we apply subjective feature-based evaluation methods using an Euclidean distance approach to a series of numerical weather prediction forecasts of surface fronts. These results will be compared to forecast verification statistics that are computed from automated frontal analysis procedures. The goal is to gain insight on the quality and usefulness of the various forecast evaluation methods and to determine whether new automated analysis methods are providing information that is consistent with subjectively-determined frontal positions and forecast evaluations. 
This study was conducted as part of a sophomore-level, research-oriented laboratory course at Purdue University in the Atmospheric Science program.
    No preview · Conference Paper · Jan 2013
  • Source
    Nathan M Hitchens · Michael E Baldwin · Robert J Trapp
    ABSTRACT: Extreme precipitation was identified in the midwestern United States using an object-oriented approach applied to the NCEP stage-II hourly precipitation dataset. This approach groups contiguous areas that exceed a user-defined threshold into "objects," which then allows object attributes to be diagnosed. Those objects with precipitation maxima in the 99th percentile (>55 mm) were considered extreme, and there were 3484 such objects identified in the midwestern United States between 1996 and 2010. Precipitation objects ranged in size from hundreds to over 100,000 km², and the maximum precipitation within each object varied between 55 and 104 mm. The majority of occurrences of extreme precipitation were in the summer (June, July, and August), and peaked in the afternoon into night (1900–0200 UTC) in the diurnal cycle. Consistent with the previous work by the authors, this study shows that the systems that produce extreme precipitation in the midwestern United States vary widely across the convective-storm spectrum.
    Full-text · Article · Apr 2012 · Monthly Weather Review
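The object-identification step described in this abstract amounts to connected-component labeling of a precipitation grid. The following is a minimal sketch, not the authors' code: the grid values and the threshold are invented for illustration, while the operational procedure runs on NCEP stage-II 4-km analyses.

```python
from collections import deque

def find_objects(grid, threshold):
    """Group contiguous grid cells exceeding `threshold` into objects
    (4-connected flood fill) and report simple attributes of each."""
    nrows, ncols = len(grid), len(grid[0])
    seen = [[False] * ncols for _ in range(nrows)]
    objects = []
    for i in range(nrows):
        for j in range(ncols):
            if grid[i][j] > threshold and not seen[i][j]:
                # Breadth-first flood fill over this contiguous area.
                cells, queue = [], deque([(i, j)])
                seen[i][j] = True
                while queue:
                    r, c = queue.popleft()
                    cells.append(grid[r][c])
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < nrows and 0 <= nc < ncols
                                and not seen[nr][nc]
                                and grid[nr][nc] > threshold):
                            seen[nr][nc] = True
                            queue.append((nr, nc))
                objects.append({
                    "n_cells": len(cells),
                    "max": max(cells),
                    "mean": sum(cells) / len(cells),
                })
    return objects

# Toy hourly-precipitation grid (mm); a 5-mm threshold separates
# two contiguous rainy areas.
grid = [
    [0, 6, 7, 0, 0],
    [0, 8, 0, 0, 9],
    [0, 0, 0, 0, 12],
    [0, 0, 0, 0, 0],
]
objs = find_objects(grid, threshold=5)
```

Once objects are in hand, attributes such as size and maximum precipitation can be screened against a percentile criterion to flag the extreme cases.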
  • Benjamin R. J. Schwedler · Michael E. Baldwin
    ABSTRACT: While the use of binary distance measures has a substantial history in the field of image processing, these techniques have only recently been applied in the area of forecast verification. Designed to quantify the distance between two images, these measures can easily be extended for use with paired forecast and observation fields. The behavior of traditional forecast verification metrics based on the dichotomous contingency table continues to be an area of active study, but the sensitivity of image metrics has not yet been analyzed within the framework of forecast verification. Four binary distance measures are presented and the response of each to changes in event frequency, bias, and displacement error is documented. The Hausdorff distance and its derivatives, the modified and partial Hausdorff distances, are shown only to be sensitive to changes in base rate, bias, and displacement between the forecast and observation. In addition to its sensitivity to these three parameters, the Baddeley image metric is also sensitive to additional aspects of the forecast situation. It is shown that the Baddeley metric is dependent not only on the spatial relationship between a forecast and observation but also the location of the events within the domain. This behavior may have considerable impact on the results obtained when using this measure for forecast verification. For ease of comparison, a hypothetical forecast event is presented to quantitatively analyze the various sensitivities of these distance measures.
    No preview · Article · Dec 2011 · Weather and Forecasting
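The Hausdorff distance at the center of this paper is straightforward to compute for small binary fields. A toy sketch (the coordinates are illustrative), which also demonstrates the location-invariance the abstract contrasts with the Baddeley metric: translating forecast and observation together leaves the Hausdorff distance unchanged.

```python
import math

def directed_hausdorff(a, b):
    """h(A, B): the largest distance from a point of A to its nearest point of B."""
    return max(min(math.dist(p, q) for q in b) for p in a)

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two binary event sets,
    each given as a list of (row, col) grid coordinates."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

# The forecast places the event three rows away from where it was observed.
forecast = [(0, 0), (0, 1)]
observed = [(3, 0), (3, 1)]
d = hausdorff(forecast, observed)  # 3.0
```

The modified and partial Hausdorff variants replace the outer max with, respectively, a mean and an order statistic over the directed distances; the Baddeley metric instead compares distance-transform fields over the whole domain, which is what introduces its sensitivity to event location.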
  • Source
    ABSTRACT: We explore the use of high-resolution dynamical downscaling as a means to simulate the regional climatology and variability of hazardous convective-scale weather. Our basic approach differs from a traditional regional climate model application in that it involves a sequence of daily integrations. We use the Weather Research and Forecasting (WRF) model, with global reanalysis data as initial and boundary conditions. Horizontal grid lengths of 4.25 km allow for explicit representation of deep convective storms and hence a compilation of their occurrence statistics over a large portion of the conterminous United States. The resultant 10-year sequence of WRF model integrations yields precipitation that, despite its positive bias, has a diurnal cycle consistent with observations, and otherwise has a realistic geographical distribution. Similarly, the occurrence frequency of short-duration, potentially flooding rainfall compares well to analyses of hourly rain gauge data. Finally, the climatological distribution of hazardous-thunderstorm occurrence is shown to be represented with some degree of skill through a model proxy that relates rotating convective updraft cores to the presence of hail, damaging surface winds, and tornadoes. The results suggest that the proxy occurrences, when coupled with information on the larger-scale atmosphere, could provide guidance on the reliability of trends in the observed occurrences. Keywords: Severe thunderstorm · Heavy rainfall · Dynamical downscaling · Reanalysis · Weather Research and Forecasting model
    Full-text · Article · Aug 2011 · Climate Dynamics
  •
    ABSTRACT: A feature-specific forecasting method for high-impact weather events that takes advantage of high-resolution numerical weather prediction models and spatial forecast verification methodology is proposed. An application of this method to the prediction of a severe convective storm event is given.
    No preview · Article · Apr 2011 · Weather and Forecasting
  •
    ABSTRACT: This research establishes a methodology to quantify the characteristics of convective cloud systems that produce subdiurnal extreme precipitation. Subdiurnal extreme precipitation events are identified by examining hourly precipitation data from 48 rain gauges in the midwestern United States during the period 1956-2005. Time series of precipitation accumulations for 6-h periods are fitted to the generalized Pareto distribution to determine the 10-yr return levels for the stations. An extreme precipitation event is one in which precipitation exceeds the 10-yr return level over a 6-h period. Return levels in the Midwest vary between 54 and 93 mm for 6-h events. Most of the precipitation contributing to these events falls within 1-2 h. Characteristics of the precipitating systems responsible for the extremes are derived from the National Centers for Environmental Prediction stage II and stage IV multisensor precipitation data. The precipitating systems are treated as objects that are identified using an automated procedure. Characteristics considered include object size and the precipitation mean, variance, and maximum within each object. For example, object sizes vary between 96 and 34,480 km², suggesting that a wide variety of convective precipitating systems can produce subdiurnal extreme precipitation.
    No preview · Article · Feb 2010 · Journal of Hydrometeorology
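The return-level calculation described above follows the standard peaks-over-threshold recipe: fit a generalized Pareto distribution (GPD) to threshold exceedances, then invert the fit for the desired return period. A sketch under assumed, illustrative parameters (the threshold, scale, shape, and exceedance rate below are not taken from the paper):

```python
import math

def gpd_return_level(u, sigma, xi, rate, years):
    """Value exceeded on average once every `years` years, for exceedances
    of threshold `u` modeled as generalized Pareto with scale `sigma` and
    shape `xi`, where `rate` is the mean number of exceedances per year."""
    m = rate * years  # expected number of exceedances in the return period
    if abs(xi) < 1e-9:
        return u + sigma * math.log(m)  # exponential (xi -> 0) limit
    return u + (sigma / xi) * (m ** xi - 1.0)

# Illustrative only: 30-mm threshold, 2 exceedances per year on average.
z10 = gpd_return_level(u=30.0, sigma=10.0, xi=0.1, rate=2.0, years=10)
```

With a positive shape parameter, longer return periods give progressively heavier return levels, which is why sub-daily convective rainfall is often better described by the GPD than by a thin-tailed fit.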
  • Source
    ABSTRACT: During the 2005 NOAA Hazardous Weather Testbed Spring Experiment two different high-resolution configurations of the Weather Research and Forecasting-Advanced Research WRF (WRF-ARW) model were used to produce 30-h forecasts, 5 days a week for a total of 7 weeks. These configurations used the same physical parameterizations and the same input dataset for the initial and boundary conditions, differing primarily in their spatial resolution. The first set of runs used 4-km horizontal grid spacing with 35 vertical levels while the second used 2-km grid spacing and 51 vertical levels. Output from these daily forecasts is analyzed to assess the numerical forecast sensitivity to spatial resolution in the upper end of the convection-allowing range of grid spacing. The focus is on the central United States and the time period 18-30 h after model initialization. The analysis is based on a combination of visual comparison, systematic subjective verification conducted during the Spring Experiment, and objective metrics based largely on the mean diurnal cycle of the simulated reflectivity and precipitation fields. Additional insight is gained by examining the size distributions of the individual reflectivity and precipitation entities, and by comparing forecasts of mesocyclone occurrence in the two sets of forecasts. In general, the 2-km forecasts provide more detailed presentations of convective activity, but there appears to be little, if any, forecast skill on the scales where the added details emerge. On the scales where both model configurations show higher levels of skill-the scale of mesoscale convective features-the numerical forecasts appear to provide comparable utility as guidance for severe weather forecasters. 
These results suggest that, for the geographical, phenomenological, and temporal parameters of this study, any added value provided by decreasing the grid increment from 4 to 2 km (with commensurate adjustments to the vertical resolution) may not be worth the considerable increases in computational expense.
    Full-text · Article · Oct 2008 · Weather and Forecasting
  • Source
    John V Cortinas · Keith F Brill · Michael E Baldwin

    Full-text · Article · Aug 2008
  • N. M. Hitchens · R. J. Trapp · M. E. Baldwin
    ABSTRACT: Few studies examine sub-diurnal precipitation extremes, instead focusing on daily extremes. However, flash flooding occurs on shorter time scales and significantly impacts life and property. Case studies have identified storm systems responsible for significant flooding events, but this study seeks to quantify the characteristics of systems that produce sub-diurnal extreme precipitation. Sub-diurnal extreme precipitation events are identified by examining hourly precipitation data from select stations in Indiana and Illinois during the period of 1956-2005. Time series of precipitation accumulations for 3- and 6-hour periods are fitted to the Pareto distribution to determine the 10-year return levels for the stations. An extreme precipitation event is defined as one that exceeds the 10-year return level over both a 3-hour and a 6-hour period. Stations in Indiana have return levels ranging from 2.02 in. to 2.74 in. for 3-hour periods, and 2.46 in. to 3.16 in. for 6-hour periods. Stations in Illinois have return levels ranging from 2.43 in. to 2.84 in. for 3-hour periods, and 2.84 in. to 3.39 in. for 6-hour periods. These return levels yield about 6 events per station over the 50-year period of record for Indiana and between 3 and 7 events per station in Illinois. Multisensor precipitation data are available beginning in 1996 for stage II analyses and 2002 for stage IV analyses. This results in a total of 6 extreme precipitation events from the Indiana stations and 5 from the Illinois stations. The automated classification procedure developed by Baldwin et al. (2005) is applied to stage II/IV analyses for each hour of each event to determine the statistical characteristics for each event. Areas of continuous precipitation above a user-defined threshold are considered a single object. The threshold value of 5 mm (0.20 in.), used here, is considered the lower bound for convective precipitation.
Object characteristics include: number of pixels (4 km x 4 km), mean precipitation, variance, maximum precipitation, shape, and orientation angle. During each hour of an extreme precipitation event the object whose centroid is closest to the station is used to define the characteristics of the precipitating system. Over the course of an extreme precipitation event the maximum precipitation identified in the object corresponds well to hours in which extreme heavy precipitation (> 1.00 in.) occurs. While object sizes seem to be closely related to the amount of maximum precipitation from hour to hour, sizes range from hundreds to thousands of pixels during peak maximum precipitation.
    No preview · Article · May 2008
  • Source
    John V. Cortinas Jr · Keith F. Brill · Michael E. Baldwin
    ABSTRACT: This COMET proposal describes a two-year project, beginning on 6/1/00, that will create, evaluate, and implement the first precipitation-type probabilistic forecast system, using ensemble forecasting and consensus forecasting concepts, at the Hydrometeorological Prediction Center (HPC) and the Storm Prediction Center (SPC). The research component of this project will evaluate the quality of various precipitation-type algorithms and investigate various ensemble forecasting concepts in applying the collection of algorithms. The operational component of the project will create probabilistic forecasts of each precipitation type diagnosed by the algorithms and consensus forecasts of the most probable precipitation type. After sufficient development and testing at the HPC and the SPC, these forecasts will be made available routinely to all National Weather Service (NWS) offices within two years, provided approval is granted by the NWS.
    I. Overview: Precipitation-type forecasting remains a difficult task for even the most experienced forecasters because of the uncertainties associated with forecasting the evolution of winter weather systems that produce snow, rain, ice pellets, and freezing rain, as well as inadequate atmospheric data sampling, incomplete knowledge of precipitation microphysics, and limited techniques to evaluate high resolution model data. The importance of forecasting these events accurately is illustrated by a set of statistics compiled by the Office of Meteorology, which reports
    Full-text · Article · Jan 2008
  • Source
    ABSTRACT: Severe thunderstorms comprise an extreme class of deep convective clouds and produce high-impact weather such as destructive surface winds, hail, and tornadoes. This study addresses the question of how severe thunderstorm frequency in the United States might change because of enhanced global radiative forcing associated with elevated greenhouse gas concentrations. We use global climate models and a high-resolution regional climate model to examine the larger-scale (or “environmental”) meteorological conditions that foster severe thunderstorm formation. Across this model suite, we find a net increase during the late 21st century in the number of days in which these severe thunderstorm environmental conditions (NDSEV) occur. Attributed primarily to increases in atmospheric water vapor within the planetary boundary layer, the largest increases in NDSEV are shown during the summer season, in proximity to the Gulf of Mexico and Atlantic coastal regions. For example, this analysis suggests a future increase in NDSEV of 100% or more in locations such as Atlanta, GA, and New York, NY. Any direct application of these results to the frequency of actual storms also must consider the storm initiation.
    Full-text · Article · Dec 2007 · Proceedings of the National Academy of Sciences
  • Source
    William A. Gallus · Michael E. Baldwin · Kimberly L. Elmore
    ABSTRACT: This note examines the connection between the probability of precipitation and forecasted amounts from the NCEP Eta (now known as the North American Mesoscale model) and Aviation (AVN; now known as the Global Forecast System) models run over a 2-yr period on a contiguous U.S. domain. Specifically, the quantitative precipitation forecast (QPF)-probability relationship found recently by Gallus and Segal in 10-km grid spacing model runs for 20 warm season mesoscale convective systems is tested over this much larger temporal and spatial dataset. A 1-yr period was used to investigate the QPF-probability relationship, and the predictive capability of this relationship was then tested on an independent 1-yr sample of data. The same relationship of a substantial increase in the likelihood of observed rainfall exceeding a specified threshold in areas where model runs forecasted higher rainfall amounts is found to hold over all seasons. Rainfall is less likely to occur in those areas where the models indicate none than it is elsewhere in the domain; it is more likely to occur in those regions where rainfall is predicted, especially where the predicted rainfall amounts are largest. The probability of rainfall forecasts based on this relationship are found to possess skill as measured by relative operating characteristic curves, reliability diagrams, and Brier skill scores. Skillful forecasts from the technique exist throughout the 48-h periods for which Eta and AVN output were available. The results suggest that this forecasting tool might assist forecasters throughout the year in a wide variety of weather events and not only in areas of difficult-to-forecast convective systems.
    Full-text · Article · Feb 2007 · Weather and Forecasting
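The QPF-probability relationship tested in this note can be estimated empirically by binning forecast amounts and counting how often the observed event occurred within each bin; the resulting probabilities can then be scored with, for example, the Brier score. A toy sketch (the bins and forecast-observation pairs are invented for illustration):

```python
def qpf_probabilities(pairs, bins):
    """Empirical probability that the observed event occurred, conditioned
    on which bin the model QPF falls in. `pairs` is a list of
    (forecast_mm, observed_event) tuples; `bins` is a list of [lo, hi)
    forecast-amount intervals."""
    counts = [[0, 0] for _ in bins]  # [events, cases] per bin
    for fcst, event in pairs:
        for k, (lo, hi) in enumerate(bins):
            if lo <= fcst < hi:
                counts[k][0] += int(event)
                counts[k][1] += 1
                break
    return [e / n if n else None for e, n in counts]

def brier_score(probs_and_outcomes):
    """Mean squared error of probabilistic forecasts: BS = mean((p - o)^2)."""
    return sum((p - o) ** 2 for p, o in probs_and_outcomes) / len(probs_and_outcomes)

# Toy sample in which rain is observed more often where the QPF is larger.
bins = [(0.0, 0.1), (0.1, 5.0), (5.0, 1e9)]
pairs = [(0.0, False), (0.0, False), (0.0, True),
         (1.0, False), (2.0, True),
         (8.0, True), (12.0, True)]
probs = qpf_probabilities(pairs, bins)
```

A monotonic increase of the conditional probabilities across the bins is the signature of the relationship the note describes: the heavier the predicted rainfall, the more likely the observed event.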
  • Source
    S. Lakshmivarahan · Michael E. Baldwin · Tao Zheng
    ABSTRACT: The goal of this paper is to provide a complete picture of the long-term behavior of Lorenz's maximum simplification equations along with the corresponding meteorological interpretation for all initial conditions and all values of the parameter.
    Full-text · Article · Nov 2006 · Journal of the Atmospheric Sciences
  • Source
    Kimberly L. Elmore · David M. Schultz · Michael E. Baldwin
    ABSTRACT: A previous study of the mean spatial bias errors associated with operational forecast models motivated an examination of the mechanisms responsible for these biases. One hypothesis for the cause of these errors is that mobile synoptic-scale phenomena are partially responsible. This paper explores this hypothesis using 24-h forecasts from the operational Eta Model and an experimental version of the Eta run with Kain-Fritsch convection (EtaKF). For a sample of 44 well-defined upper-level short-wave troughs arriving on the west coast of the United States, 70% were underforecast (as measured by the 500-hPa geopotential height), a likely result of being undersampled by the observational network. For a different sample of 45 troughs that could be tracked easily across the country, consecutive model runs showed that the height errors associated with 44% of the troughs generally decreased in time, 11% increased in time, 18% had relatively steady errors, 2% were uninitialized entering the West Coast, and 24% exhibited some other kind of behavior. Thus, landfalling short-wave troughs were typically underforecast (positive errors, heights too high), but these errors tended to decrease as they moved across the United States, likely a result of being better initialized as the troughs became influenced by more upper-air data. Nevertheless, some errors in short-wave troughs were not corrected as they fell under the influence of supposedly increased data amount and quality. These results indirectly show the effect that the amount and quality of observational data has on the synoptic-scale errors in the models. On the other hand, long-wave ridges tended to be underforecast (negative errors, heights too low) over a much larger horizontal extent. These results are confirmed in a more systematic manner over the entire dataset by segregating the model output at each grid point by the sign of the 500-hPa relative vorticity. 
Although errors at grid points with positive relative vorticity are small but positive in the western United States, the errors become large and negative farther east. Errors at grid points with negative relative vorticity, on the other hand, are generally negative across the United States. A large negative bias observed in the Eta and EtaKF over the southeast United States is believed to be due to an error in the longwave radiation scheme interacting with water vapor and clouds. This study shows that model errors may be related to the synoptic-scale flow, and even large-scale features such as long-wave troughs can be associated with significant large-scale height errors.
    Full-text · Article · Nov 2006 · Monthly Weather Review
  • Source
    Michael E. Baldwin · John S. Kain
    ABSTRACT: The sensitivity of various accuracy measures to displacement error, bias, and event frequency is analyzed for a simple hypothetical forecasting situation. Each measure is found to be sensitive to displacement error and bias, but probability of detection and threat score do not change as a function of event frequency. On the other hand, equitable threat score, true skill statistic, and odds ratio skill score behaved differently with changing event frequency. A newly devised measure, here called the bias-adjusted threat score, does not change with varying event frequency and is relatively insensitive to bias. Numerous plots are presented to allow users of these accuracy measures to make quantitative estimates of sensitivities that are relevant to their particular application.
    Preview · Article · Aug 2006 · Weather and Forecasting
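The standard measures analyzed in this paper all derive from the 2x2 contingency table of hits, false alarms, misses, and correct negatives. A minimal sketch of the conventional scores (the counts are invented; the paper's newly devised bias-adjusted threat score is defined therein and is not reproduced here):

```python
def scores(hits, false_alarms, misses, correct_negs):
    """Standard accuracy measures for a dichotomous (event / no-event)
    forecast, computed from the 2x2 contingency table."""
    n = hits + false_alarms + misses + correct_negs
    pod = hits / (hits + misses)                    # probability of detection
    ts = hits / (hits + false_alarms + misses)      # threat score (CSI)
    bias = (hits + false_alarms) / (hits + misses)  # frequency bias
    # Hits expected from a random forecast with the same marginals,
    # used to make the equitable threat score (ETS).
    hits_random = (hits + false_alarms) * (hits + misses) / n
    ets = (hits - hits_random) / (hits + false_alarms + misses - hits_random)
    return {"POD": pod, "TS": ts, "bias": bias, "ETS": ets}

s = scores(hits=30, false_alarms=10, misses=20, correct_negs=40)
```

Note that POD and TS involve only the first three cells, which is why, as the abstract reports, they do not respond to changes in event frequency, whereas ETS does through its correct-negative-dependent `hits_random` term.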
  • Source
    Melissa S. Bukovsky · John S. Kain · Michael E. Baldwin
    ABSTRACT: Bowing, propagating precipitation features that sometimes appear in NCEP's North American Mesoscale model (NAM; formerly called the Eta Model) forecasts are examined. These features are shown to be associated with an unusual convective heating profile generated by the Betts-Miller-Janjić convective parameterization in certain environments. A key component of this profile is a deep layer of cooling in the lower to middle troposphere. This strong cooling tendency induces circulations that favor expansion of parameterized convective activity into nearby grid columns, which can lead to growing, self-perpetuating mesoscale systems under certain conditions. The propagation characteristics of these systems are examined and three contributing mechanisms of propagation are identified. These include a mesoscale downdraft induced by the deep lower-to-middle tropospheric cooling, a convectively induced buoyancy bore, and a boundary layer cold pool that is indirectly produced by the convective scheme in this environment. Each of these mechanisms destabilizes the adjacent atmosphere and decreases convective inhibition in nearby grid columns, promoting new convective development, expansion, and propagation of the larger system. These systems appear to show a poor correspondence with observations of bow echoes on time and space scales that are relevant for regional weather prediction, but they may provide important clues about the propagation mechanisms of real convective systems.
    Full-text · Article · Jun 2006 · Weather and Forecasting
  • John S. Kain · S. J. Weiss · J. J. Levit · M. E. Baldwin · D. R. Bright
    ABSTRACT: Convection-allowing configurations of the Weather Research and Forecast (WRF) model were evaluated during the 2004 Storm Prediction Center-National Severe Storms Laboratory Spring Program in a simulated severe weather forecasting environment. The utility of the WRF forecasts was assessed in two different ways. First, WRF output was used in the preparation of daily experimental human forecasts for severe weather. These forecasts were compared with corresponding predictions made without access to WRF data to provide a measure of the impact of the experimental data on the human decision-making process. Second, WRF output was compared directly with output from current operational forecast models. Results indicate that human forecasts showed a small, but measurable, improvement when forecasters had access to the high-resolution WRF output and, in the mean, the WRF output received higher ratings than the operational Eta Model on subjective performance measures related to convective initiation, evolution, and mode. The results suggest that convection-allowing models have the potential to provide a value-added benefit to the traditional guidance package used by severe weather forecasters.
    No preview · Article · Apr 2006 · Weather and Forecasting

Publication Stats

1k Citations
124.42 Total Impact Points

Institutions

  • 2007-2013
    • Purdue University
      • Department of Earth and Atmospheric Sciences
      West Lafayette, Indiana, United States
  • 2002-2006
    • University of Oklahoma
      • Cooperative Institute for Mesoscale Meteorological Studies
      Norman, Oklahoma, United States
  • 2001-2006
    • NOAA's National Severe Storms Laboratory
      Norman, Oklahoma, United States
  • 2000-2003
    • NOAA Fisheries
      Silver Spring, Maryland, United States