William Meeker

  • Professor (Full) at Iowa State University

About

354 Publications
116,778 Reads
15,028 Citations
Current institution
  • Iowa State University
Current position
  • Professor (Full)
Additional affiliations
  • August 1975 - present: Iowa State University, Professor (Full)

Publications (354)
Article
Full-text available
Fatigue data arise in many research and applied areas, and there have been statistical methods developed to model and analyze such data. The distributions of fatigue life and fatigue strength are often of interest to engineers designing products that might fail due to fatigue from cyclic‐stress loading. Based on a specified statistical model and th...
Article
Full-text available
The complex jagged trajectory of fractured surfaces of two pieces of forensic evidence is used to recognize a “match” by using comparative microscopy and tactile pattern analysis. The material intrinsic properties and microstructures, as well as the exposure history of external forces on a fragment of forensic evidence have the premise of uniquenes...
Preprint
Full-text available
Engineers and scientists have been collecting and analyzing fatigue data since the 1800s to ensure the reliability of life-critical structures. Applications include (but are not limited to) bridges, building structures, aircraft and spacecraft components, ships, ground-based vehicles, and medical devices. Engineers need to estimate S-N relationship...
Article
In this rejoinder, we respond to comments on our paper “Specifying Prior Distributions in Reliability Applications.”
Preprint
Full-text available
The fatigue demonstration test program may be carried out using either the attribute or variables approach. For either approach, a robust test program fulfills the objective of unambiguously demonstrating reliability, as well as demonstrating understanding of when and how the components could fracture. To this end, it is recommended that a robust...
Article
Customers demand and manufacturers strive for increasingly higher product reliability. Statistics plays a key role in achieving this. Necip Doganaksoy, William Q. Meeker and Gerald J. Hahn explain how
Preprint
Full-text available
Graphics processing units (GPUs) are widely used in many high-performance computing (HPC) applications such as imaging/video processing and training deep-learning models in artificial intelligence. GPUs installed in HPC systems are often heavily used, and GPU failures occur during HPC system operations. Thus, the reliability of GPUs is of interest...
Article
Full-text available
Especially when facing reliability data with limited information (e.g., a small number of failures), there are strong motivations for using Bayesian inference methods. These include the option to use information from physics‐of‐failure or previous experience with a failure mode in a particular material to specify an informative prior distribution....
Preprint
Especially when facing reliability data with limited information (e.g., a small number of failures), there are strong motivations for using Bayesian inference methods. These include the option to use information from physics-of-failure or previous experience with a failure mode in a particular material to specify an informative prior distribution....
Article
Full-text available
Artificial intelligence (AI) systems have become increasingly common and the trend will continue. Examples of AI systems include autonomous vehicles (AV), computer vision, natural language processing and AI medical experts. To allow for safe and effective deployment of AI systems, the reliability of such systems needs to be assessed. Traditionally,...
Article
Statistical prediction plays an important role in many decision processes, such as university budgeting (depending on the number of students who will enroll), capital budgeting (depending on the remaining lifetime of a fleet of systems), the needed amount of cash reserves for warranty expenses (depending on the number of warranty returns), and whet...
Article
Full-text available
Silicone casts are widely used by practitioners in the comparative analysis of forensic items. Fractured surfaces carry unique details that can provide accurate quantitative comparisons of forensic fragments. In this study, a statistical analysis comparison protocol was applied to a set of 3D topological images of fractured surface pairs and their...
Book
(Preface and Table of Contents are available through the Linked data tab below.) Statistical Methods for Reliability Data, Second Edition (SMRD2) is an essential guide to the most widely used and recently developed statistical methods for reliability data analysis and reliability test planning. Written by three experts in the area, SMRD2 updates an...
Preprint
Statistical prediction plays an important role in many decision processes such as university budgeting (depending on the number of students who will enroll), capital budgeting (depending on the remaining lifetime of a fleet of systems), the needed amount of cash reserves for warranty expenses (depending on the number of warranty returns), and wheth...
Preprint
Full-text available
Fractured surfaces carry unique details that can provide an accurate quantitative comparison to support comparative forensic analysis of those fractured surfaces. In this study, a statistical analysis comparison protocol was applied to a set of 3D topological images of fractured surface pairs and their replicas to provide confidence in the quantita...
Preprint
Fractured metal fragments with rough and irregular surfaces are often found at crime scenes. Current forensic practice visually inspects the complex jagged trajectory of fractured surfaces to recognize a “match” using comparative microscopy and physical pattern analysis. We developed a novel computational framework, utilizing the basic concepts o...
Article
This article introduces methods for constructing prediction bounds or intervals for the number of future failures from heterogeneous reliability field data. We focus on within-sample prediction where early data from a failure-time process is used to predict future failures from the same process. Early data from high-reliability products, however, o...
Preprint
Full-text available
Artificial intelligence (AI) systems have become increasingly common and the trend will continue. Examples of AI systems include autonomous vehicles (AV), computer vision, natural language processing, and AI medical experts. To allow for safe and effective deployment of AI systems, the reliability of such systems needs to be assessed. Traditionally...
Preprint
Full-text available
This paper reviews two main types of prediction interval methods under a parametric framework. First, we describe methods based on an (approximate) pivotal quantity. Examples include the plug-in, pivotal, and calibration methods. Then we describe methods based on a predictive distribution (sometimes derived based on the likelihood). Examples includ...
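As a concrete illustration of the pivotal-quantity approach this abstract surveys, here is a minimal Python sketch of the classical prediction interval for one future observation from a normal sample. NumPy and SciPy are assumed available, and the function name `normal_prediction_interval` is illustrative, not taken from the paper:

```python
import numpy as np
from scipy import stats

def normal_prediction_interval(x, conf=0.95):
    """Two-sided prediction interval for one future observation,
    based on the exact pivotal quantity (X_new - Xbar)/(S*sqrt(1 + 1/n)),
    which follows a t distribution with n - 1 degrees of freedom."""
    x = np.asarray(x, dtype=float)
    n = x.size
    xbar, s = x.mean(), x.std(ddof=1)
    t = stats.t.ppf(1 - (1 - conf) / 2, df=n - 1)
    half_width = t * s * np.sqrt(1 + 1 / n)
    return xbar - half_width, xbar + half_width

# Example: 95% prediction interval from a small simulated sample
rng = np.random.default_rng(1)
sample = rng.normal(loc=10.0, scale=2.0, size=15)
lower, upper = normal_prediction_interval(sample)
```

Because the pivotal quantity has an exact t distribution here, the interval needs no calibration; the calibration methods the paper describes apply when the pivotal quantity is only approximate.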
Preprint
This article introduces methods for constructing prediction bounds or intervals to predict the number of future failures from heterogeneous reliability field data. We focus on within-sample prediction where early data from a failure-time process is used to predict future failures from the same process. Early data from high-reliability products, how...
Chapter
Parametric maximum likelihood methods can be used to estimate the renewal distribution based on aggregate data from superpositions of a group of renewal processes (SRP). Traditional distributional assessment approaches cannot, however, be applied directly to the SRP data because the actual locations where the renewal events occurred are unkno...
Preprint
We describe prediction methods for the number of future events from a population of units associated with an on-going time-to-event process. Examples include the prediction of warranty returns and the prediction of the number of future product failures that could cause serious threats to property or life. Important decisions such as whether a produ...
Article
The accuracy of camber and corresponding stresses of precast pretensioned concrete beams (PPCBs) is routinely compromised due to ignoring the thermal effects resulting from continuously changing weather conditions, affecting the deck placement on site. While accounting for the effects of a known temperature gradient down the beam depth is straightf...
Article
Full-text available
For several decades, the resampling-based bootstrap has been widely used for computing confidence intervals (CIs) for applications where no exact method is available. However, there are many applications where the resampling bootstrap method cannot be used. These include situations where the data are heavily censored due to the success response be...
Article
Accelerated repeated measures degradation tests are often used to assess product or component reliability when there would be few or even no failures during a traditional life test. Such tests are used to estimate the failure-time distributions of highly reliable items in applications where it is possible to take repeated measures of some appropria...
Article
Full-text available
Matrix-variate distributions can intuitively model the dependence structure of matrix-valued observations that arise in applications with multivariate time series, spatio-temporal or repeated measures. This paper develops an Expectation-Maximization algorithm for discriminant analysis and classification with matrix-variate t-distributions. The meth...
Article
Longitudinal inspections of thickness at particular locations along a pipeline provide useful information to assess the remaining life of the pipeline. In applications with different mechanisms of corrosion processes, we have observed various types of general degradation paths. We present two applications of fitting a degradation model to describe...
Article
Full-text available
Model-assisted probability of detection (MAPOD) and sensitivity analysis (SA) are important for quantifying the inspection capability of nondestructive testing (NDT) systems. To improve the computational efficiency, this work proposes the use of polynomial chaos expansions (PCE), integrated with least-angle regression (LARS), a basis-adaptive techn...
Article
To convince practitioners of the inadequacy of significance testing, a two‐step approach can be employed. First, explain the difference between statistical significance and practical importance. Then, at least in many situations, use an appropriate statistical interval to quantify the statistical uncertainty. By Gerry Hahn, Necip Doganaksoy and Bill Meeke...
Preprint
Full-text available
Matrix-variate distributions can intuitively model the dependence structure of matrix-valued observations that arise in applications with multivariate time series, spatio-temporal or repeated measures. This paper develops an Expectation-Maximization algorithm for discriminant analysis and classification with matrix-variate t-distributions. The me...
Article
The 10th International Conference on Mathematical Methods in Reliability, MMR 2017, held in Grenoble, France during July 3‐7, entailed a panel discussion entitled “Is reliability a new science?” with Mark Brown, Regina Liu, William Meeker, Sheldon Ross, and Nozer Singpurwalla as panelists. Bill Meeker also doubled as a chair and as a moderator of t...
Chapter
Full-text available
The 10th International Conference on Mathematical Methods in Reliability, MMR2017, held in Grenoble, France during July 3-7, entailed a panel discussion entitled “Is reliability a new science?” with Mark Brown, Regina Liu, William Meeker, Sheldon Ross, and Nozer Singpurwalla as panelists.
Article
Full-text available
Understanding the dynamics of disease spread is critical to achieving effective animal disease surveillance. A major challenge in modeling disease spread is the fact that the true disease status cannot be known with certainty due to the imperfect diagnostic sensitivity and specificity of the tests used to generate the disease surveillance data. Oth...
Article
When analyzing field data on consumer products, model-based approaches to inference require a model with sufficient flexibility to account for multiple kinds of failures. The causes of failure, while not interesting to the consumer per se, can lead to various observed lifetime distributions. Because of this, standard lifetime models, such as using...
Preprint
Full-text available
The bootstrap, based on resampling, has, for several decades, been a widely used method for computing confidence intervals for applications where no exact method is available and when sample sizes are not large enough to be able to rely on easy-to-compute large-sample approximate methods, such as Wald (normal-approximation) confidence intervals. Sim...
Chapter
Probability of detection (POD) is used for reliability analysis in nondestructive testing (NDT) area. Traditionally, it is determined by experimental tests, while it can be enhanced by physics-based simulation models, which is called model-assisted probability of detection (MAPOD). However, accurate physics-based models are usually expensive in tim...
Conference Paper
Probability of detection (POD) is widely used for measuring reliability of nondestructive testing (NDT) systems. Typically, POD is determined experimentally, while it can be enhanced by utilizing physics-based computational models in combination with model-assisted POD (MAPOD) methods. With the development of advanced physics-based methods, such as...
Article
Full-text available
Big data features not only large volumes of data but also data with complicated structures. Complexity imposes unique challenges in big data analytics. Meeker and Hong (2014, Quality Engineering, pp. 102-116) provided an extensive discussion of the opportunities and challenges in big data and reliability, and described engineering systems that can...
Preprint
Full-text available
Big data features not only large volumes of data but also data with complicated structures. Complexity imposes unique challenges in big data analytics. Meeker and Hong (2014, Quality Engineering, pp. 102-116) provided an extensive discussion of the opportunities and challenges in big data and reliability, and described engineering systems that can...
Article
Full-text available
Photodegradation, driven primarily by ultraviolet (UV) radiation, is the primary cause of failure for organic paints and coatings, as well as many other products made from polymeric materials exposed to sunlight. Traditional methods of service life prediction involve the use of outdoor exposure in harsh UV environments (e.g., Florida and Arizona)....
Article
Probability of detection (POD) is commonly used to measure a nondestructive evaluation (NDE) inspection procedure’s performance. Due to inherent variability in the inspection procedure caused by variability in factors such as crack morphology and operators, it is important, for some purposes, to model POD as a random function. Traditionally, inspec...
Article
Full-text available
Small-scale wind turbine (SWT) installations saw a dramatic increase between 2008 and 2012. Recently, the trend within industry has shifted towards installing larger wind turbines, leaving little attention for installed SWT reliability. Unfortunately, multiple downtime events raise concerns about the reliability and availability of the large number...
Article
Full-text available
Service life prediction is of great importance to manufacturers of coatings and other polymeric materials. Photodegradation, driven primarily by ultraviolet (UV) radiation, is the primary cause of failure for organic paints and coatings, as well as many other products made from polymeric materials exposed to sunlight. Traditional methods of service...
Chapter
This chapter describes the construction of Bayesian intervals for data generated from the normal distribution, binomial distribution, and Poisson distribution. It extends these methods to consider the construction of Bayesian intervals for the more complicated situation involving hierarchical models. The chapter discusses the construction of Bayesi...
Chapter
This chapter describes and illustrates general methods for constructing statistical intervals that can be applied to many other distributions and to more complicated models and types of data. It explains the motivation for likelihood-based inference and model selection. The chapter discusses the construction of a likelihood function and maximum lik...
Chapter
This chapter shows how to determine sample size requirements for tolerance intervals and for related demonstration tests concerning the proportion of product that exceeds a specified value. It explains sample size determination methods for: Normal distribution tolerance intervals and bounds; a one-sided demonstration test based on normally distribu...
Chapter
This chapter presents a series of case studies that illustrate the methods in the first 10 chapters of this book. They are a representative sample of frequently occurring problems that the authors have encountered, and the proposed solutions. The following applications are discussed: demonstration that the operating temperature of most manufactured...
Chapter
This chapter describes basic requirements for sample size determination. It also deals with sample size determination methods to estimate a normal distribution mean, normal distribution standard deviation, normal distribution quantile, binomial proportion, and Poisson occurrence rate. To determine the sample size required to obtain a useful interva...
Chapter
This chapter presents the basic concepts behind the construction of Bayesian statistical intervals and the integration of prior information with data that Bayesian methods provide. The development of the theory and application of Markov chain Monte Carlo (MCMC) methods and vast improvements in computational capabilities have made the use of such me...
Chapter
This chapter contains some advanced case studies that illustrate the broad applicability of the general methods presented in Chapters 15-17. The first case study shows the use of generalized pivotal quantity and Bayesian methods to compute confidence intervals for quantities of interest calculated from gauge repeatability and reproducibility (GR&R)...
Chapter
This chapter describes statistical intervals for the number of events over some interval of time or region of space, assuming independent events and a constant event-occurrence rate. Such situations can often be modeled by the Poisson distribution. The chapter presents four different approximate methods for constructing a confidence interval or con...
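The Poisson event-count interval this chapter summary describes can be sketched as follows. This is the standard exact chi-square (Garwood) interval, shown for illustration; it is not necessarily one of the four approximate methods the chapter presents. SciPy is assumed, and `poisson_ci` is an illustrative name:

```python
from scipy import stats

def poisson_ci(x, conf=0.95):
    """Exact (conservative) confidence interval for a Poisson mean,
    using the chi-square quantile relationship.  x is the observed
    event count over the sampled interval of time or space."""
    alpha = 1 - conf
    lower = 0.0 if x == 0 else 0.5 * stats.chi2.ppf(alpha / 2, 2 * x)
    upper = 0.5 * stats.chi2.ppf(1 - alpha / 2, 2 * (x + 1))
    return lower, upper

lower, upper = poisson_ci(10)  # 10 events observed; roughly (4.80, 18.39)
```

The guaranteed coverage of the exact interval comes at the cost of conservatism; the approximate methods trade that guarantee for shorter intervals.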
Chapter
This chapter describes and illustrates computationally intensive nonparametric bootstrap methods to compute statistical intervals, primarily for continuous distributions. These methods require obtaining a sequence of simulated bootstrap samples, based on the given data. Then these bootstrap samples are used to generate corresponding bootstrap estim...
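A minimal sketch of the nonparametric bootstrap percentile interval described above, assuming NumPy; `bootstrap_percentile_ci` is an illustrative name, not code from the book:

```python
import numpy as np

def bootstrap_percentile_ci(x, stat=np.mean, n_boot=5000, conf=0.95, seed=0):
    """Nonparametric bootstrap percentile confidence interval.
    Resamples x with replacement, recomputes the statistic for each
    bootstrap sample, and takes the empirical alpha/2 and 1 - alpha/2
    quantiles of the bootstrap estimates."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    boot_stats = np.array(
        [stat(rng.choice(x, size=x.size, replace=True)) for _ in range(n_boot)]
    )
    alpha = 1 - conf
    return (np.quantile(boot_stats, alpha / 2),
            np.quantile(boot_stats, 1 - alpha / 2))

data = np.array([4.1, 5.2, 6.3, 5.0, 4.8, 7.1, 5.5, 6.0, 4.4, 5.9])
lower, upper = bootstrap_percentile_ci(data)  # 95% interval for the mean
```

The percentile method is the simplest of the bootstrap intervals; refinements (e.g., BCa) adjust the quantiles for bias and skewness.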
Chapter
Various types of statistical intervals may be calculated from sample data. The appropriate interval depends upon the specific application. Frequently used intervals are: confidence interval; statistical tolerance interval; and prediction interval. The assumption of random sampling is of critical importance in constructing statistical intervals. Thi...
Chapter
This chapter describes statistical intervals for proportions or percentages. Such intervals are used, for example, when each observation is either a "conforming" or a "nonconforming" unit and the data consist of the number, or equivalently, the proportion or percentage, of nonconforming units, in a random sample of n units from a population or proc...
Chapter
This chapter describes and illustrates computationally intensive parametric bootstrap and other simulation-based methods to compute statistical intervals, primarily for continuous distributions. The basic concept of using simulation and parametric bootstrap methods to obtain confidence intervals are discussed. This is followed by methods for genera...
Chapter
This chapter presents, tabulates, and compares factors for calculating the different kinds of intervals, based upon a sample of size n from a normal distribution with unknown mean μ and unknown standard deviation σ. The normal distribution is the best known and most frequently used statistical model. Its theoretical justification is often based on...
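As an illustration of the normal-distribution tolerance factors this chapter tabulates, here is a sketch of Howe's well-known approximation to the two-sided factor k (SciPy assumed; the exact tabulated factors differ slightly from this approximation):

```python
import numpy as np
from scipy import stats

def tolerance_factor_howe(n, coverage=0.95, conf=0.95):
    """Howe's approximation to the two-sided normal tolerance factor k,
    so that xbar +/- k*s contains at least `coverage` of the population
    with confidence `conf`, for a sample of size n."""
    nu = n - 1
    z = stats.norm.ppf((1 + coverage) / 2)          # standard normal quantile
    chi2 = stats.chi2.ppf(1 - conf, nu)             # lower chi-square quantile
    return z * np.sqrt(nu * (1 + 1 / n) / chi2)

k = tolerance_factor_howe(10)  # close to the tabulated exact value (about 3.38)
```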
Chapter
This chapter gives general methods for calculating various statistical intervals for samples from a population or process that can be approximated by a normal distribution. It explains statistical intervals for linear regression analysis. The chapter presents methods for constructing confidence intervals for a normal distribution quantile. It descr...
Chapter
This chapter provides an introduction to the analysis of hierarchical statistical models using Bayesian analysis. It illustrates the use of Bayesian methods for obtaining statistical intervals for characteristics of a hierarchical model when the resulting data are assumed to follow a normal distribution, binomial distribution, and Poisson distribut...
Chapter
This chapter shows how to calculate "distribution-free" two-sided statistical intervals and one-sided statistical bounds. Such intervals and bounds, which are based on order statistics, do not require the assumption of a particular underlying distribution. The topics discussed in the chapter are distribution-free: confidence intervals for a distrib...
Chapter
This chapter explains: reasons for constructing a statistical interval and some examples and different types of confidence intervals and one-sided confidence bounds, and the selection of an appropriate statistical interval. Tolerance intervals and one-sided tolerance bounds, the selection of a confidence level, and the difference between two-sided...
Chapter
This chapter provides guidelines for choosing the sample size required to obtain a prediction interval to contain a future single observation, a specified number of future observations, or some other quantity to be calculated from a future sample from a previously sampled distribution. There are two sources of imprecision in statistical prediction:...
Preprint
Nondestructive evaluation (NDE) techniques are widely used to detect flaws in critical components of systems like aircraft engines, nuclear power plants and oil pipelines in order to prevent catastrophic events. Many modern NDE systems generate image data. In some applications an experienced inspector performs the tedious task of visually examining...
Article
Full-text available
Making predictions of future realized values of random variables based on currently available data is a frequent task in statistical applications. In some applications, the interest is to obtain a two-sided simultaneous prediction interval (SPI) to contain at least k out of m future observations with a certain confidence level based on n previous o...
Article
Full-text available
In this paper, we propose methods to calculate exact factors for two-sided control-the-centre and control-both-tails tolerance intervals for the (log)-location-scale family of distributions, based on complete or Type II censored data. With Type I censored data, exact factors do not exist. For this case, we developed an algorithm to compute approxim...
