Abstract
In an earlier article, a method of calculating two-sided confidence bands for cumulative distribution functions was suggested. In this article, the construction of one-sided confidence bands is described. The case of the general location-scale parameter model is discussed, and formulas for the normal and extreme-value models are given as illustrations. A simple numerical example is also included.
... Then a simultaneous confidence band is obtained by seeing how the continuous function g(·; θ) changes as the parameters are varied within the joint confidence region. The band is two-sided and has ordinate values that lie within the range of g. Cheng and Iles (1988) extend the result to one-sided simultaneous confidence bands for a cdf under the location-scale model with complete data. The simultaneous confidence bands constructed in this way may be exact or conservative. ...
... Then, as shown in Cheng and Iles (1988), the Wald statistic with Fisher information can be expressed as ...
... The construction of a confidence region for one-sided simultaneous confidence bands is different from the two-sided case. Cheng and Iles (1988) provide an argument for using the WALDF method. We extend their argument to other methods that can be used to produ ...
[Figure 2: The 97.5% convex confidence region for a one-sided simultaneous confidence band constructed from the BWALDL method with data in Section 5. It is the union of a closed convex region and a left semi-infinite region.]
This paper describes existing methods and develops new methods for constructing simultaneous confidence bands for a cumulative distribution function (cdf). Our results are built on extensions of previous work by Cheng and Iles (1983, 1988). Cheng and Iles use Wald statistics with (expected) Fisher information and provide different approaches to find one-sided and two-sided simultaneous confidence bands. We consider three statistics, Wald statistics with Fisher information, Wald statistics with local information, and likelihood ratio statistics. Unlike pointwise confidence intervals, it is not possible to combine two 95% one-sided simultaneous confidence bands to get a 90% two-sided simultaneous confidence band. We present a general approach for construction of two-sided simultaneous confidence bands on a cdf for a continuous parametric model from complete and censored data. Both two-sided and one-sided simultaneous confidence bands for the location-scale parameter model are discussed in...
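The band construction sketched above can be made concrete for the normal model: sweep (μ, σ) over a joint Wald confidence region built from the expected Fisher information and take, at each x, the envelope of the cdf values. The following is a minimal sketch under assumed data and a 95% level, not the authors' exact procedure; since the gradient of Φ((x−μ)/σ) in (μ, σ) never vanishes, the extremes over the elliptical region are attained on its boundary.

```python
import math
import random

random.seed(1)
data = [random.gauss(10.0, 2.0) for _ in range(50)]   # assumed sample
n = len(data)
mu_hat = sum(data) / n
sig_hat = math.sqrt(sum((x - mu_hat) ** 2 for x in data) / n)

# Joint 95% Wald region for (mu, sigma) from the expected Fisher information
# of the normal model, I(mu, sigma) = diag(n/sigma^2, 2n/sigma^2);
# chi-square(2 df, 0.95) = 5.991.
c2 = 5.991

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def scb(x, steps=400):
    """Envelope of F(x; mu, sigma) over the boundary of the Wald ellipse."""
    lo, hi = 1.0, 0.0
    for k in range(steps):
        t = 2.0 * math.pi * k / steps
        mu = mu_hat + math.sqrt(c2 * sig_hat ** 2 / n) * math.cos(t)
        sig = sig_hat + math.sqrt(c2 * sig_hat ** 2 / (2 * n)) * math.sin(t)
        if sig <= 0:        # guard; does not occur for this sample size
            continue
        p = norm_cdf((x - mu) / sig)
        lo, hi = min(lo, p), max(hi, p)
    return lo, hi

lo, hi = scb(mu_hat)        # band evaluated at the estimated median
assert lo < 0.5 < hi
```

Censored data, local information, and likelihood-ratio statistics change how the region is built, but the envelope step stays the same.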
... In the case of simultaneous confidence bands (SCB) for the cumulative distribution function, Cheng & Iles (1983) used the Wald statistic to construct SCBs for the quantiles of the cumulative distribution function and for the probability of failure. Cheng & Iles (1988) extended their confidence band results to cumulative distribution functions in the location-scale family with complete data. Jeng & Meeker (2001) generalized the work of Cheng & Iles (1988) using the Wald statistic with the Fisher information matrix, the Wald statistic with the local information matrix, and the likelihood ratio statistic. ...
... Cheng & Iles (1988) extended their confidence band results to cumulative distribution functions in the location-scale family with complete data. Jeng & Meeker (2001) generalized the work of Cheng & Iles (1988) using the Wald statistic with the Fisher information matrix, the Wald statistic with the local information matrix, and the likelihood ratio statistic. Finally, Escobar, Hong & Meeker (2009) extended the work of Cheng & Iles (1983) in the following ways: 1. ...
Usually, the exact time at which an event occurs cannot be observed, for several reasons; for instance, it may not be possible to constantly monitor the characteristic of interest. This generates a phenomenon known as censoring, which can be classified as left, right or interval censoring. When working with survival data in the presence of arbitrary censoring, the survival time of interest is defined as the elapsed time between an initial event and a subsequent event that is generally unknown. This problem has been widely studied in the statistical literature and some progress has been made toward resolving it; in particular, the formulation of a bivariate likelihood to estimate the parameters of a parametric regression model offers positive development opportunities. In this paper, we construct a bivariate likelihood for the Weibull regression model in the presence of interval censoring. Finally, its performance is illustrated by means of a simulation study.
... Therefore, it is necessary to calculate the adequate weight of the information for the posterior function (for the detailed calculation, see [7]). From the number of posterior experiments and the confidence bound, it is possible to calculate the value γ (see [6]). The number of experiments for the posterior information is 29. ...
... The number of experiments increased from the 24 real defects to the 29 of all defects. Through this increased amount of data [6] and the joint values of the mean and the variance, the a90/95 decreased from 1.2 mm to 1.0 mm. ...
The Probability of Detection (POD) is used to evaluate the detectability of non-destructive testing (NDT) systems. The POD is highly dependent on the amount of available data. The Bayesian approach provides a way to compute POD curves when few real defects are available, without losing the necessary information: the result contains the information needed to compute POD curves for real defects even from sparse data. This paper shows the limitations of the Bayesian approach and how it can be applied to NDT, here to the evaluation of radiographic testing. The Bayesian approach is applied to determine POD curves for the inspection techniques used on nuclear fuel disposal canisters. The reasons for using the Bayesian approach are the high safety demands and the low number of real defects resulting from the high quality of the reliable production techniques.
... The size of the band is determined by the number of samples and the desired level of confidence: a high number of samples or a low level of confidence is associated with a small band size. There is an extensive literature on how to estimate the confidence bands of cumulative distributions (Steck, 1971; Cheng and Iles, 1983, 1988; Bickel and Krieger, 1989; Faraway and Jhun, 1990; Hall and Horowitz, 2013; Wang et al., 2013), and, in this work, we use a simple bootstrap approach. ...
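The "simple bootstrap approach" mentioned in the excerpt can be sketched as follows: resample the data, record the sup-deviation of each bootstrap ecdf from the original ecdf, and widen the ecdf by the 95% quantile of those deviations. Sample, seed, and levels are illustrative assumptions.

```python
import random

random.seed(7)
sample = sorted(random.gauss(0.0, 1.0) for _ in range(60))   # assumed data

def ecdf(xs, t):
    return sum(1 for x in xs if x <= t) / len(xs)

base = {t: ecdf(sample, t) for t in sample}   # ecdf at its own jump points

# Sup-deviation of each bootstrap ecdf from the original one.
B = 200
devs = []
for _ in range(B):
    res = [random.choice(sample) for _ in sample]   # resample with replacement
    devs.append(max(abs(ecdf(res, t) - base[t]) for t in sample))

d = sorted(devs)[int(0.95 * B)]   # 95% quantile -> simultaneous half-width

def band(t):
    f = ecdf(sample, t)
    return max(0.0, f - d), min(1.0, f + d)

lo, hi = band(0.0)
assert lo <= ecdf(sample, 0.0) <= hi
```

Calibrating on the sup-deviation gives a band of constant width that holds simultaneously over t, unlike pointwise percentile intervals.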
Non-deterministic measurements are common in real-world scenarios: the performance of a stochastic optimization algorithm or the total reward of a reinforcement learning agent in a chaotic environment are just two examples in which unpredictable outcomes are common. These measures can be modeled as random variables and compared among each other via their expected values or via more sophisticated tools such as null hypothesis statistical tests. In this paper, we propose an alternative framework to visually compare two samples according to their estimated cumulative distribution functions. First, we introduce a dominance measure for two random variables that quantifies the proportion in which the cumulative distribution function of one of the random variables stochastically dominates the other. Then, we present a graphical method that decomposes in quantiles i) the proposed dominance measure and ii) the probability that one of the random variables takes lower values than the other. For illustrative purposes, we re-evaluate the experiments of an already published work with the proposed methodology, and we show that additional conclusions (missed by the other methods) can be inferred. Additionally, the software package RVCompare was created as a convenient way of applying and experimenting with the proposed framework.
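The quantile-wise dominance measure described in the abstract above can be approximated from two samples by comparing empirical quantile functions. The samples and helper names below are illustrative assumptions, not taken from the RVCompare package.

```python
import random

random.seed(3)
xa = sorted(random.gauss(0.0, 1.0) for _ in range(300))   # assumed sample A
xb = sorted(random.gauss(0.5, 1.0) for _ in range(300))   # assumed sample B

def quantile(xs, p):
    """Crude empirical quantile: the order statistic at index floor(p*n)."""
    return xs[min(len(xs) - 1, int(p * len(xs)))]

# Fraction of quantile levels at which A lies below B, i.e. at which the
# cdf of A sits above the cdf of B (A dominated in value, dominant in cdf).
ps = [k / 100 for k in range(1, 100)]
a_below = sum(1 for p in ps if quantile(xa, p) < quantile(xb, p))
dominance = a_below / len(ps)

assert dominance > 0.5   # A is shifted left of B, so A lies below at most quantiles
```

A value near 1 means one distribution dominates at essentially all quantiles; values near 0.5 indicate crossing cdfs, which is exactly what the graphical decomposition is meant to reveal.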
... The interested reader may refer to the following references to delve into POD for hit/miss data and the specific statistical methods used to compute the corresponding lower confidence bounds [16, 67-74]. ...
The successful implementation of Structural Health Monitoring (SHM) systems depends on the capability of evaluating their performance, reliability, and durability. Although there are many SHM techniques capable of detecting, locating and quantifying damage in several types of structures, their certification process is still limited. Despite the efforts of academia and industry in recent years to define methodologies for the performance assessment of such systems, many challenges remain to be solved. Methodologies used in Non-Destructive Evaluation (NDE) have been taken as a starting point to develop the required metrics for SHM, such as Probability of Detection (POD) curves. However, the transposition of such methodologies to SHM is anything but straightforward, because additional factors must be considered. The time dependency of the data, the larger number of variability sources and the complexity of the structures to be monitored exacerbate the existing challenges, suggesting that much work remains to be done in SHM. The article focuses on the current challenges and barriers preventing the development of proper reliability metrics for SHM, analyzing the main differences with respect to POD methodologies for NDE. It was found that the development of POD curves for SHM systems requires a higher level of statistical expertise, and their use in the literature is still limited to a few studies. Finally, the discussion extends beyond POD curves towards new metrics such as Probability of Localization (POL) and Probability of Sizing (POS) curves, reflecting the diagnosis paradigm of SHM.
... Fig. 23 shows the probability of detection curve obtained for the selected statistical damage indicator, cross-correlation. The black dashed line depicts the 95% lower confidence bound determined using the likelihood ratio method [49]. This curve determines, with 95% probability, the lower limit for the resulting mean curve when the measurements are repeated. ...
In this paper, the universality of the mechatronic approach is confirmed with the examples of highly specialized and innovative systems recently developed for very specific applications. As shown, similar design steps, tools and testing procedures may lead to effective solutions even in distant and challenging research areas. First, an Ultralight Mobile Drilling System, dedicated to the extraction of soil and rock probes from subsurface regions, is presented. For this case, minimization of mass and high mobility of the system, while maintaining the performance of bigger drilling rigs required during operation in a space environment, are taken into account. The presented design consists of a four-wheeled rover with an adjustable rocker mechanism, a multifunctional core drilling module and a support module with manipulation capabilities and dedicated sample storage. Second, the entire design process for a fully scalable and reconfigurable data-based system for monitoring the technical condition of mechanical structures is introduced. The system is based on measurements of electromechanical impedance, which are carried out with piezoelectric transducers. Both presented systems successfully passed laboratory and industrial tests and therefore confirmed the correctness of the choices made with the applied design procedures. All aspects of the mechatronic approach were investigated to construct fully functional prototypes designed for different tasks and able to deal with different environmental conditions.
... The interior potential for the Green's function formulation is given by the identity in Eq. (27). The derivatives of the interior potential in the x-direction and the y-direction, given in Eq. (28), are needed for the full electromagnetic field formulation. The derivatives at the point c of the kernels given in Eq. (23) and Eq. (24) are required in order to compute the gradient of the interior potential. ...
... Since the POD curve determined by this method is itself a measured quantity, it is subject to statistical errors. Berens uses the method of Cheng & Iles [4] to infer a lower 95% confidence bound from the experimentally determined POD curve. The value a90/95, at which the POD of a flaw equals 90% within the 95% confidence band, is now generally accepted as the limit of an NDT system. ...
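The Berens/Cheng-Iles construction referenced above can be sketched as: fit a POD model, propagate the parameter covariance to a one-sided 95% Wald lower bound on the POD at each size, and read off a90/95 where that bound reaches 90%. The lognormal POD form, estimates, and covariance below are illustrative assumptions, not values from the cited study.

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Assumed lognormal POD model POD(a) = Phi((ln a - mu) / sigma) with assumed
# ML estimates and covariance matrix of (mu, sigma).
mu, sig = math.log(1.0), 0.4
v = [[0.010, 0.001],
     [0.001, 0.004]]

def pod_lower(a):
    """One-sided 95% Wald lower bound on POD(a) via the delta method."""
    z = (math.log(a) - mu) / sig
    g = (-1.0 / sig, -z / sig)          # gradient of z w.r.t. (mu, sigma)
    var = g[0] ** 2 * v[0][0] + 2 * g[0] * g[1] * v[0][1] + g[1] ** 2 * v[1][1]
    return norm_cdf(z - 1.645 * math.sqrt(var))

# a90/95: smallest size whose lower-bound POD reaches 90% (bisection).
lo_a, hi_a = 1.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo_a + hi_a)
    if pod_lower(mid) >= 0.9:
        hi_a = mid
    else:
        lo_a = mid
a90_95 = hi_a

a90 = math.exp(mu + 1.2816 * sig)   # size with *estimated* POD = 0.9
assert a90_95 > a90                 # the bound is conservative
```

The gap between a90 and a90/95 shrinks as the covariance shrinks, i.e. as more flaws enter the POD experiment.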
The computer simulation of radiography is applicable for different purposes in NDT, such as the qualification of NDT systems, the optimization of system parameters, feasibility analysis, model-based data interpretation, and the education and training of NDT/NDE personnel. Within the framework of the European project PICASSO, simulators will be adapted to support reliability assessments of NDT tasks. The radiographic simulator aRTist developed by BAM is well suited for this task. It combines analytical modelling of the RT inspection process with a CAD-orientated object description applicable to various industrial sectors such as power generation, aerospace and railways. The analytic model includes the description of the radiation source, the interaction of radiation with the material of the part, and the detection process, with a special focus on DIR. To support reliability estimations, the simulation tool is completed by a tool for probability of detection (POD) estimation. It consists of a user interface for planning automatic simulation runs with varying parameters, specifically defect variations. Further, an automatic image analysis procedure is included to evaluate the defect visibility and calculate the POD therefrom.
...
• hit/miss analysis (according to [9], adapted from [10,11]);
• signal response analysis (according to [9]).
...
In the aeronautic industry, mature processes exist to reliably assess and qualify conventional inspection techniques such as handheld ultrasonic inspections via phased array. Standards and specifications define the procedure to be applied and followed by the evaluators to analyse the inspection performance in terms of damage detection and reproducibility of the results, commonly referred to as probability of detection (POD) and probability of false alarms (POFA). In SHM, little in terms of such a technical qualification process exists or is approved to reliably assess the performance of the different SHM technologies available. Without such a defined process, SHM technologies will never be able to make the final step into application, confirming technology readiness and finally being accepted by the aeronautic authorities. In this paper, a conceptual study is presented in which a possible transfer of the qualification process from a conventional NDT procedure is investigated for a Lamb-wave based SHM system.
... For instance, a probability of 0.9 with 95% confidence is used to express the inspectable flaw size, which is thus denoted a90/95. In the approach described in this paper, the computation of the lower confidence bound is based on asymptotic properties of the maximum likelihood estimate [2,3]. Other probabilistic criteria such as the Probability of False Alarm (PFA) or the Relative Operating Characteristic (ROC) curve are also useful when assessing NDI reliability [4]. ...
Inspection reliability is one of the key issues in ensuring the safety of critical structural components. Thus, strong efforts have been made to determine the reliability of non-destructive methods that are designed to detect flaws in such components. Among the various methods dedicated to NDE performance evaluation, probabilistic approaches have aroused interest due to their ability to naturally account for uncertainties. The SISTAE project (Simulation and Statistics for Non-destructive Evaluation), supported by the French National Research Agency (ANR), addresses inspection evaluation using probabilistic criteria in fields such as aeronautics, the maritime sector and the nuclear power industry. More specifically, a method based on existing NDE simulation tools is developed in order to replace some of the experimental data used in statistical studies with simulation results. Thus, this project aims at reducing the time and cost of the analyses that are required to determine quantities such as POD (Probability of Detection) curves.
... In application, the maximum likelihood estimate ĥ is substituted for h in Eq. (7). A procedure developed by Cheng and Iles can be used to place lower confidence bounds on the PoD(a) function [24,25]. Such bounds are calculated from the variance–covariance matrix of the estimates and reflect the sensitivity of the experiment to both the number and sizes of flaws in the experimental specimens. ...
In this article, the quantitative evaluation of optical thermographic techniques for the non-destructive inspection of aluminum foam material is studied. For this purpose, a set of aluminum foam specimens with flat-bottom holes (FBH) was inspected by both optical lock-in thermography (LT) and pulsed thermography (PT). Probability of detection (PoD) analysis, as a quantitative method to estimate the capability and reliability of a particular inspection technique, was studied and compared for both optical LT and PT inspection results.
... For reference, horizontal dotted lines are drawn at POD = 0.1, 0.5, and 0.9, and vertical dotted lines are drawn at a10, a50, and a90, the target sizes corresponding to POD = 0.1, 0.5 and 0.9, respectively. The line at a90 is drawn darker since we are interested in how many of the predicted a90/95 values are smaller than a90. (The R software environment for statistical computing and graphics was used for all computations and plots herein. R is open-source, free software, available from http://www.r-project.org/.) ...
Most practitioners see a90/95 as a static, single-point summary of an entire inspection's capability. It purports to be the size of the target having at least 90% probability of detection in 95 of 100 POD experiments under nominally identical conditions. But in some situations the actual coverage is closer to 80% than 95%, with 50% coverage corresponding to the median POD(a) curve itself. This paper discusses the two philosophies for constructing lower bounds on POD(a) curves (and therefore determining a90/95), the Wald method and the loglikelihood ratio method, and compares the effectiveness of each as a function of other experimental realities such as sample size and balance.
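The two bound philosophies compared in this paper can be illustrated on the simplest possible detection model, a single probability p estimated from k hits in n trials (counts assumed): the Wald bound uses the normal approximation at the ML estimate, while the likelihood-ratio bound inverts the LR test.

```python
import math

k, n = 18, 20                 # assumed hit/miss outcome: 18 hits in 20 trials
p_hat = k / n

# Wald philosophy: normal approximation at the ML estimate.
wald = p_hat - 1.645 * math.sqrt(p_hat * (1.0 - p_hat) / n)

# Loglikelihood-ratio philosophy: smallest p not rejected by the one-sided
# LR test, i.e. 2*(l(p_hat) - l(p)) <= chi-square(1 df, 0.90) = 2.706.
def loglik(p):
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

lo, hi = 1e-6, p_hat
for _ in range(60):           # bisection for the lower root
    mid = 0.5 * (lo + hi)
    if 2.0 * (loglik(p_hat) - loglik(mid)) > 2.706:
        lo = mid
    else:
        hi = mid
lr = hi

assert lr < p_hat and wald < p_hat
assert abs(lr - wald) > 0.01  # the two philosophies disagree at this sample size
```

For these counts the two bounds differ by a few percentage points, which is the kind of small-sample discrepancy the paper studies on full POD(a) curves.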
... Since the POD model is estimated from a finite size data sample, there is a sampling uncertainty on the estimates of the model parameters. This uncertainty is usually characterized by placing confidence bounds on the POD function [4,5]. ...
It is common practice to quantify inspection reliability in terms of the Probability of Detection (POD) and the Probability of False Alarm (PFA). The determination of these statistical indicators is currently based on very costly and time-consuming empirical studies. Physics-based models are now available to replace part of the experimental data with simulation results. This paper proposes a methodology exploiting existing modeling tools (here, CIVA models) to simulate POD curves. Applications of this methodology to realistic ultrasonic inspection setups are also described.
When a failure time observation is right censored, it is not observed beyond a random right threshold. When a possibly right censored observation is further susceptible to a random left threshold, a twice censored observation results. Patilea and Rolin have proposed product limit estimators for survival functions from twice censored data. Simultaneous confidence bands (SCBs) for survival functions from twice censored data are constructed in a way that mimics the approach that Hollander, McKeague and Yang pursued for random right censoring. Their nonparametric likelihood ratio function is adjusted, providing the basis for constructing the SCBs. The critical value needed for the SCBs is obtained using the bootstrap, for which asymptotic justification is provided. The SCBs align nicely as “neighborhoods” of the Patilea–Rolin nonparametric survival function estimator, in much the same way the likelihood ratio SCBs under random censoring are the “neighborhoods” of the Kaplan–Meier estimator. A simulation study supports the effectiveness of the proposed method. An illustration is given using synthetic data.
The scope of this paper is to define a methodology for building robust PoD curves from numerical modelling. First, an experimental database is created in a laboratory scenario. A representative sample of inspectors with different certification levels in the NDT method is employed, and multiple inspections are carried out to include the human influence in the MAPOD calculation. In addition, this study takes into account the impact of using different devices in the High Frequency Eddy Current (HFEC) method. A simulation model is then created taking into consideration the main uncertainties due to human and device factors. These uncertainties are identified and quantified by observation of the experimental NDT inspections; statistical distributions of these uncertainties are then derived and used as inputs of the simulation model. Finally, the simulated PoD model is compared with and validated against the experimental results from the laboratory scenario. This comparison provided encouraging results for replacing or complementing experimental tests with simulation.
Probability of Detection (POD) curves are compared by two statistical methods to quantify system-to-system differences. The first method assesses performance among a group of inspection systems through an adaptation of statistical analysis of variance (ANOVA). The second method uses a chi-squared statistic to test for a difference between two systems. Examples using eddy current data are given for each technique.
The French nuclear industry has introduced, through the 2013 amendment of the RSEM code, the use of statistical methods for the assessment of NDT performance for PWRs. The previous approach was deterministic and conservative, considering the worst case for all influential parameters of the inspection. IRSN carries out research activities on these statistical methods in order to develop its skills on this scientific and technical topic, with the objective of providing, as the public expert for nuclear safety, an assessment of these tools for the regulatory body ASN. IRSN has conducted a literature review and has benchmarked a Probability of Detection (POD) approach for an eddy current inspection performed in the aeronautic industry. The results obtained with different software (mh1823, CIVA) are compared with an in-house Matlab script using the bootstrapping method for the assessment of the confidence intervals. Benchmark results are in good agreement. A Model Assisted Probability of Detection (MAPOD) example has then been tested for steam generator tube wear using a simplified model based on the CIVA software. The simplified MAPOD approach gives results consistent with the industry tube repair criteria. This work presents a preliminary study of statistical tools used to express the performance of NDT in the framework of nuclear pressure equipment. For nuclear pressure components whose rupture is considered in design accidents, such as steam generator tubes, the POD approach seems appropriate for the evaluation of inspection performance.
In parametric statistics, confidence bands for continuous distribution (quantile) functions may be constructed by unifying the graphs of all distribution (quantile) functions corresponding to parameters lying in some confidence region. It is then desirable that the coverage probabilities of band and region coincide, e.g., to prevent wide and less informative bands or to transfer the property of unbiasedness; this is ensured if the confidence region is exhaustive. Properties and representations of exhaustive confidence regions are presented. In location-scale families, whether some confidence region is exhaustive depends on the boundedness of the supports of the distributions in the family. For unbounded, one-sided bounded and bounded supports, characterizations of exhaustive confidence regions are derived. The results are useful to decide whether the trapezoidal confidence regions based on the standard pivotal quantities are exhaustive, and may serve to construct exhaustive confidence regions in (log-)location-scale models.
The currently accepted method of modeling â versus a eddy current data for probability of detection calculations involves time-consuming and highly subjective operator intervention. We propose a new method which removes all dependence on the operator and therefore enjoys 100% repeatability. We model the â versus a response using local regression and calculate confidence bounds on probability of detection and A90 threshold curves using the bootstrap method. Censored data are modeled via the expectation-maximization algorithm, and repeated-measures correlations are fully estimated. We present results on four eddy current data sets to compare our new method with the current industry-standard method.
The assessment of the Probability of Detection (POD) is used to evaluate the reliability of a non-destructive testing (NDT) system. The POD is required in industries where a missed flaw might cause grave consequences. If only artificial defects are evaluated, the POD could lead to wrong conclusions or even be invalid; a POD based on real flaws is needed. A small number of real flaws, however, can lead to a statistically insignificant or even incorrect result. This work presents an approach to obtain a significant result for the POD of the current dataset despite the small number of real defects. Two steps are necessary to assess an NDT system based on real flaws. First, we evaluate the correlation between the NDT signal and the real size of the flaw. Second, we use a statistical approach based on Bayesian statistics to assess a POD in spite of the small amount of data. The approach allows information from the POD evaluation of artificial defects to be included in the assessment of the POD of real flaws.
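The Bayesian idea above, reduced to its simplest conjugate form, looks like this: encode the artificial-defect results as a Beta prior on a detection probability and update it with the few real-flaw trials. All counts are illustrative assumptions; the actual work operates on full POD curves rather than a single probability.

```python
# Prior from artificial defects (assumed): 45 hits in 50 trials.
a0, b0 = 45 + 1, 5 + 1            # Beta(46, 6), starting from a flat Beta(1, 1)

# Real flaws (assumed): only 8 trials, 7 hits.
hits, trials = 7, 8
a1, b1 = a0 + hits, b0 + (trials - hits)   # conjugate Beta-Binomial update

prior_mean = a0 / (a0 + b0)
post_mean = a1 / (a1 + b1)
alone_mean = (hits + 1) / (trials + 2)     # the 8 real trials by themselves

# The posterior stays close to the well-estimated prior despite the tiny
# real-flaw sample, which is the point of borrowing strength.
assert abs(post_mean - prior_mean) < abs(alone_mean - prior_mean)
```

The weight given to the artificial-defect information (here, the prior pseudo-counts) is exactly the quantity the cited work on posterior weighting is concerned with.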
This chapter presents an application of the model assisted probability of detection for Structural Health Monitoring (SHM) systems under uncertain crack configuration and system variability. The phenomena related to a SHM system are studied using numerical simulations of planar structures monitored by transducer arrays. The monitoring reliability is evaluated using a three-dimensional model including both Lamb wave propagation in a plate-like structure and scattering from cracks with different configurations. Computer models are implemented using parallel processing technology, which significantly speeds up the simulation time. The configurations considered in the chapter account for the variations of the relative crack position, orientation and size.
This text presents the notions of detection in the framework of non-destructive testing as applied to industrial contexts. While the "Hit/Miss" model handles binary inspection data of the "detection/no detection" type, the "Signal/Response" model applies to data consisting directly of signal returns. In each case, the estimation procedure breaks down into two steps: modeling appropriate to the data, then derivation of the POD. We examine the pitfalls to avoid when using these models. We justify the need to introduce the notion of probability of false alarm (PFA). ROC curves then serve as a visualization tool for the intrinsic ability of the sensor to separate noise from signal. We study their main characteristics as well as the various estimation methods. On this last point, we focus in particular on procedures that allow one or more covariates to be taken into account. These methods make it possible, for example, to measure the impact of the characteristic size of the defect on performance and thus to return to a framework closer to the original problem. Finally, we study the concepts of iso-performance lines and the convex hull of ROC curves (ROCCH) and their applications. These notions were introduced in order to account for factors that modulate the assessment of a sensor's overall performance; the costs associated with detection errors and the frequency of defects in the general population of parts (prevalence) are good examples.
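The ROC construction described in this text can be sketched by sweeping a decision threshold over pooled signal and noise amplitudes and tracing the (PFA, POD) pairs; the synthetic Gaussian amplitudes below are illustrative assumptions.

```python
import random

random.seed(11)
noise = [random.gauss(0.0, 1.0) for _ in range(500)]    # amplitudes, no defect
signal = [random.gauss(2.0, 1.0) for _ in range(500)]   # amplitudes, defect

def rates(th):
    pod = sum(1 for s in signal if s > th) / len(signal)
    pfa = sum(1 for x in noise if x > th) / len(noise)
    return pfa, pod

roc = [rates(th) for th in sorted(noise + signal)]      # swept threshold

# Area under the ROC curve, estimated directly as P(signal > noise).
auc = sum(1 for s in signal for x in noise if s > x) / (len(signal) * len(noise))

assert roc[0][0] >= roc[-1][0]     # PFA falls as the threshold rises
assert 0.85 < auc < 1.0            # ~2-sigma separation gives AUC near 0.92
```

The AUC summarizes the sensor's intrinsic noise/signal separation independently of any particular threshold; cost and prevalence considerations then pick a point on the curve.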
The development of efficient methods for the assessment and management of civil structures is today a major challenge from economic, social and environmental perspectives, and tools for handling uncertainties in loads, geometry, material properties, construction and operating conditions are nowadays essential. This book covers the key concepts across topics including probability theory and statistics, structural safety, performance-based assessment, modelling uncertainties and principles of decision theory.
Non-destructive inspection is an important technique in the maintenance of plants and factories, and it is necessary to appropriately evaluate the reliability of non-destructive inspection techniques. This study focuses on a reliability evaluation method for non-destructive inspections that uses the probability of detection (POD) as a metric. A POD function is determined based on the thicknesses, signal responses, and a decision threshold for all pipe-wall measurements, and can be reasonably modeled by the lognormal cumulative distribution function. In this study, the POD is applied to reliability evaluations of an electromagnetic acoustic transducer (EMAT), a non-destructive inspection device. Using the results of experimental measurements, we confirm that the POD can evaluate the reliabilities of EMATs that have different specifications.
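The lognormal POD model used in this study can be sketched from the signal-response side: with a linear model for the log signal versus log flaw size and a decision threshold, POD(a) is a normal cdf in log a. The coefficients and threshold below are assumed for illustration, not fitted EMAT values.

```python
import math

b0, b1, tau = 0.0, 1.0, 0.3   # assumed fit of log(ahat) = b0 + b1*log(a) + eps
th = math.log(1.2)            # assumed decision threshold on the log signal

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def pod(a):
    """P(b0 + b1*log(a) + eps > th) with eps ~ N(0, tau^2):
    a lognormal-type cdf in the flaw size a."""
    return 1.0 - norm_cdf((th - b0 - b1 * math.log(a)) / tau)

assert pod(0.5) < pod(1.2) < pod(3.0)    # POD increases with flaw size
assert abs(pod(1.2) - 0.5) < 1e-9        # mean response exactly at the threshold
```

The size at which the mean response crosses the threshold is the POD = 0.5 point; tau controls how sharply the curve rises around it.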
Standards and codes that govern the inspection of industrial components require the techniques selected for use to be calibrated using well-established methods. For ultrasonic techniques based on data collected with the full matrix capture method to be accepted for use in industry, the methods by which the inspection system is to be calibrated must be well defined. This paper outlines the essential components of an ultrasonic system used for the inspection of a standard butt weld and presents methods for calibrating the system where the techniques are based on full matrix capture data. The calibration methods are developed to be as generic as possible so as to be widely applicable to techniques using full matrix capture data.
It is not always only the size of a flaw that determines its severity for the structure. In such cases, it is important to express the capability of the non-destructive testing (NDT) system to detect a flaw with respect to exactly those parameters that determine flaw severity. The multi-parameter reliability model presented in this article shows a way of calculating and expressing the probability of detection (POD) as a function of different influencing parameters, using numerically simulated NDT system responses and experimentally measured responses. A successful application of the model is demonstrated on data from a transmit-receive longitudinal (TRL) ultrasonic inspection of a cast iron component. The POD of a surface-breaking, semi-elliptical, crack-like flaw is expressed as a function of its depth and length. In direct comparison with the conventional signal response analysis, where the POD is expressed as a function of flaw size only, the method provides a more comprehensive estimate of the reliability of the NDT system.
The assessment of the probability of detection (POD) is used to evaluate the reliability of a non-destructive testing (NDT) system. The POD is required in industries where a missed flaw might have grave consequences. If only artificial defects are evaluated, the POD can lead to wrong conclusions or even be invalid, so a POD based on real flaws is needed. A small number of real flaws, however, can yield a statistically insignificant or even incorrect result. This work presents an approach to obtaining a significant POD result for the current dataset despite the small number of real defects. Two steps are necessary to assess an NDT system based on real flaws. First, we evaluate the correlation between the NDT signal and the real size of the flaw. Second, we use a statistical approach based on Bayesian statistics to assess the POD in spite of the small amount of data. The approach allows information from the POD evaluation of artificial defects to be included in the assessment of the POD of real flaws.
A framework is proposed in this paper in order to estimate statistical criteria such as Probability Of Detection (POD) from numerical simulation data. The software dedicated to Non Destructive Evaluation (NDE) CIVA allows us to simulate an inspection for various techniques and setups. Uncertainty is taken into account using random number generators for various statistical distributions. Finally, the POD is estimated from the simulation results using a state-of-the-art approach which assumes a functional form for the statistical criteria. Application cases are shown for both Eddy Current (EC) and Ultrasonic Techniques (UT).
The operating fleet of U.S. nuclear power plants was built to fossil plant standards (of workmanship, not fitness for service) and with good engineering judgment. Fortuitously, those nuclear power plants were designed using defense-in-depth concepts, with nondestructive examination (NDE) an important layer, so they can tolerate almost any component failure and still continue to operate safely. In the 30+ years of reactor operation, many material failures have occurred. Unfortunately, NDE has not provided the reliability to detect degradation prior to initial failure (breaching the pressure boundary). However, NDE programs have been improved by moving from prescriptive procedures to performance demonstrations that quantify inspection effectiveness for flaw detection probability and sizing accuracy. Other improvements include the use of risk-informed strategies to ensure that reactor components contributing the most risk receive the best and most frequent inspections. Another challenge is the recent surge of interest in building new nuclear power plants in the United States to meet increasing domestic energy demand. New construction will increase the demand for NDE but also offers the opportunity for more proactive inspections. This paper reviews the inception and evolution of NDE for nuclear power plants over the past 40 years, recounts lessons learned, and describes the needs remaining as existing plants continue operation and new construction is contemplated.
Within the framework of the European project PICASSO, the radiographic simulator aRTist (analytical Radiographic Testing inspection simulation tool) developed by BAM has been extended for reliability assessment of film and digital radiography. NDT of safety-relevant components in the aerospace industry requires proof of the probability of detection (POD) of the inspection. Modeling tools can reduce the expense of such extended, time-consuming NDT trials if the simulation results agree with experiment. Our analytic simulation tool consists of three modules describing the radiation source, the interaction of radiation with test pieces and flaws, and the detection process, with special focus on film and digital industrial radiography. It features high processing speed with near-interactive frame rates and a high level of realism. A concept and a software extension for reliability investigations have been developed, completed by a user interface for planning automatic simulations with varying parameters and defects. Furthermore, an automatic image analysis procedure is included to evaluate defect visibility. Radiographic models built from 3D CAD of aero engine components and quality test samples are compared as a precondition for real trials. This enables the evaluation and optimization of film replacement, supporting the application of modern digital equipment for economical NDT with a defined POD.
In many areas of application, especially life testing and reliability, it is often of interest to estimate an unknown cumulative distribution (cdf). A simultaneous confidence band (SCB) of the cdf can be used to assess the statistical uncertainty of the estimated cdf over the entire range of the distribution. Cheng and Iles [1983. Confidence bands for cumulative distribution functions of continuous random variables. Technometrics 25 (1), 77–86] presented an approach to construct an SCB for the cdf of a continuous random variable. For the log-location-scale family of distributions, they gave explicit forms for the upper and lower boundaries of the SCB based on expected information. In this article, we extend the work of Cheng and Iles [1983. Confidence bands for cumulative distribution functions of continuous random variables. Technometrics 25 (1), 77–86] in several directions. We study the SCBs based on local information, expected information, and estimated expected information for both the “cdf method” and the “quantile method.” We also study the effects of exceptional cases where a simple SCB does not exist. We describe calibration of the bands to provide exact coverage for complete data and type II censoring and better approximate coverage for other kinds of censoring. We also discuss how to extend these procedures to regression analysis.
Computational models whose internal details are not accessible to the analyst are called black boxes. They arise because of security restrictions or because of the loss of the source code for legacy software programs. Computational models whose internal details are extremely complex are also sometimes treated as black boxes. It is often important to assess the uncertainty that should be ascribed to the output from a black box owing to uncertainty about its input quantities, their statistical distributions, or interdependencies. Sensitivity or 'what-if' studies are commonly used for this purpose. In such studies, the space of possible inputs is sampled as a vector of real values which is then provided to the black box to compute the output(s) that corresponds to those inputs. Such studies are often cumbersome to implement and understand, and they generally require many samples, depending on the complexity of the model and the dimensionality of the inputs. This report reviews methods that can be used to propagate uncertainty about inputs through black boxes, especially 'hard' black boxes whose computational complexity restricts the total number of samples that can be evaluated. The focus is on methods that estimate the uncertainty of the outputs from the outside inward. That is, we are interested in methods that produce conservative characterizations of uncertainty that become tighter and tighter as the total computational effort increases.
Confidence envelopes for the one-, two-, and three-parameter logistic item response models are illustrated. In addition, we describe m-line plots, which show the genesis of the envelope as well as the density of lines in the confidence region. These too are illustrated for the one-, two-, and three- parameter logistic models.
A failure distribution represents an attempt to describe mathematically the length of life of a device. Most often the possibility remains that the analyst may be hesitant or unwilling to entertain any well-known theoretical failure distribution. Therefore, one must rely on actual observations of the time to failure to construct an empirical cumulative failure distribution function. We present some useful results for constructing statistical confidence regions for the entire failure cumulative distribution function (cdf), F(x), from which a random sample has been drawn. The bandwidth of these regions becomes narrower in some parts of the distribution where we may want more precise information about the failure cdf than is afforded by the ordinary Kolmogorov-Smirnov (K-S) confidence region. The problem of constructing the best confidence region, having minimum risk, from a decision-theoretic approach is also considered. Illustrative numerical examples are presented.
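The ordinary Kolmogorov-Smirnov region mentioned above surrounds the empirical cdf with a constant-width margin. A minimal sketch using the usual large-sample critical values (the tabulated constants are standard approximations, not taken from this source):

```python
import math

def ks_band(sample, alpha=0.05):
    """Distribution-free Kolmogorov-Smirnov band around the empirical cdf.
    Uses the large-sample critical value c(alpha)/sqrt(n); c(0.05) ~= 1.3581."""
    crit = {0.10: 1.2238, 0.05: 1.3581, 0.01: 1.6276}
    d = crit[alpha] / math.sqrt(len(sample))
    xs = sorted(sample)
    n = len(xs)
    band = []
    for i, x in enumerate(xs, start=1):
        f = i / n                                   # empirical cdf at x
        band.append((x, max(0.0, f - d), min(1.0, f + d)))
    return band

# Hypothetical failure-time sample (arbitrary units).
band = ks_band([4.2, 1.1, 3.3, 2.7, 5.8, 0.9, 2.2, 4.9, 3.1, 1.8])
```

The constant width of this band is exactly the feature the abstract's proposed regions relax: they narrow the band in the parts of the distribution where more precision is wanted.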
A methodology is presented for the preparation of a competing modes fatigue design curve. Examples using extruded and isothermally forged powder metallurgy René 95 are presented. Monte Carlo simulation results are given to identify sample size requirements appropriate for the generation of reliable design curves.
A distribution-free simultaneous lower confidence region for an unknown reliability function using lifetime censored data is suggested. The proposed construction is based on a generalization of the usual Kolmogorov-Smirnov distance. This procedure allows construction of a meaningful confidence region in the neighborhood of the lifetime of the oldest item obtained from a “light” censoring plan. Critical values necessary for implementation are given.
Purpose – Exact confidence interval estimation for the new extreme value model is often impractical. This paper seeks to evaluate the accuracy of approximate confidence intervals for the two-parameter new extreme value model.
Design/methodology/approach – The confidence intervals of the parameters of the new model based on likelihood ratio, Wald, and Rao statistics are evaluated and compared through a simulation study. The criteria used in evaluating the confidence intervals are the attainment of the nominal error probability and the symmetry of lower and upper error probabilities.
Findings – This study substantiates the merits of the likelihood ratio, Wald, and Rao statistics. The results indicate that the likelihood ratio-based intervals perform much better than the Wald and Rao intervals.
Originality/value – Exact interval estimates for the new model are difficult to obtain. Consequently, large-sample intervals based on the asymptotic maximum likelihood estimators have gained widespread use. Intervals based on inverting likelihood ratio, Rao, and Wald statistics are rarely used in commercial packages. This paper shows that the likelihood ratio intervals are superior to intervals based on the Wald and Rao statistics.
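The contrast between Wald and likelihood-ratio intervals can be illustrated on a simpler model than the extreme value distribution studied in this paper; the exponential mean below is purely a stand-in (the constants 1.96 and 3.841 are the standard 95% normal and chi-square(1) quantiles):

```python
import math

def exp_mean_cis(xbar, n, z=1.96, chi2=3.841):
    """95% Wald and likelihood-ratio intervals for the mean theta of an
    exponential sample with MLE xbar (illustrative sketch)."""
    wald = (xbar - z * xbar / math.sqrt(n), xbar + z * xbar / math.sqrt(n))

    def dev(theta):                   # deviance 2*(l(xbar) - l(theta))
        r = xbar / theta
        return 2.0 * n * (r - 1.0 - math.log(r))

    def bisect(lo, hi):               # root of dev(theta) = chi2 on [lo, hi]
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            if (dev(mid) - chi2) * (dev(lo) - chi2) > 0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    hi = 2.0 * xbar
    while dev(hi) < chi2:             # bracket the upper root
        hi *= 2.0
    lr = (bisect(1e-9 * xbar, xbar), bisect(xbar, hi))
    return wald, lr

wald, lr = exp_mean_cis(10.0, 3)
```

For n = 3 the symmetric Wald interval dips below zero, an impossible value for an exponential mean, while the LR interval stays positive and is appropriately right-skewed, which is the kind of small-sample behavior behind the paper's finding.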
Avalanche runout distances have traditionally been calculated by selecting friction coefficients and then using them in an avalanche dynamics model. Uncertainties about the mechanical properties of flowing snow and its interaction with terrain make this method speculative. Here, an alternative simple method of predicting runout based on terrain variables is documented. By fitting runout data from five mountain ranges to extreme value distributions, we are able to show how (and why) extreme value parameters vary with terrain properties of different ranges. The method is shown to be applicable to small and truncated data sets which makes it attractive for use in situations where detailed information on avalanche runout is limited.
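Fitting an extreme value (Gumbel) distribution to runout data, as described above, can be sketched with a simple method-of-moments estimator; the synthetic sample below stands in for real runout ratios:

```python
import math
import random

EULER = 0.5772156649  # Euler-Mascheroni constant

def fit_gumbel_moments(data):
    """Method-of-moments fit of a Gumbel (extreme value) distribution:
    scale beta from the sample variance, location mu from the sample mean."""
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi
    mu = mean - EULER * beta
    return mu, beta

def gumbel_cdf(x, mu, beta):
    return math.exp(-math.exp(-(x - mu) / beta))

# Synthetic Gumbel(mu=1.0, beta=0.3) sample via inverse-cdf sampling.
random.seed(7)
sample = [1.0 - 0.3 * math.log(-math.log(random.random())) for _ in range(2000)]
mu_hat, beta_hat = fit_gumbel_moments(sample)
```

The moment estimator is crude but, as the abstract suggests, this family is attractive precisely because small or truncated data sets can still support a usable fit.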
(Preliminary abstract.) We provide new finite-sample nonparametric inference methods for the mean of a bounded random variable. For this purpose, we prove that the impossibility theorem of Bahadur and Savage (1956) does not apply in this case. Next, we observe that confidence intervals for the mean of a bounded random variable can be derived by projection from confidence intervals for the corresponding distribution function, and we investigate finite-sample nonparametric methods based on improved Kolmogorov-Smirnov statistics and likelihood ratio improvement. Further, we apply all the studied inference methods to the Foster, Greer and Thorbecke (FGT, 1984) poverty measures. We show that FGT poverty measures are in fact expectations of certain bounded random variables, namely a mixture of a continuous bounded random variable and a point mass at the poverty line, so all inference methods for the mean of a bounded random variable apply to this case. We study the relative performance of these methods. Monte Carlo simulations demonstrate the necessity of using finite-sample nonparametric approaches: the asymptotic and bootstrap inference methods appear unreliable in finite samples. By contrast, the finite-sample nonparametric inference methods we propose are robust to the framework and the sample size we use. The confidence intervals we obtain have very good coverage probability (always close to 100%) and good precision. In addition, we provide explicit expressions that make them very easy to compute. Keywords: nonparametric inference; mean of bounded variables; Wald, Rao and likelihood ratio principles of improvement; poverty measures.
The aging of key elements of our civil infrastructure, including dams, is presenting new challenges for the development of methodology that will adequately predict remaining safe life, and identify those structures where remedial action is required. To meet this need, a new approach for non-destructive testing of large concrete dams has been developed. It combines aspects of ultrasonic non-destructive testing and seismic surveying in a new methodology ‘Acoustic Travel Time Tomography’ (ATTT). This is the second part of an investigation on improved assessment of mass concrete dams using ATTT. A companion paper [Bond LJ, Kepler WF, Frangopol, DM. Improved assessment of mass concrete dams using acoustic travel time tomography. Part I — Theory. Constr Build Mater 2000 Vol. 14, No. 3, pp. 133–146] focused on the background of the physics of ATTT. ATTT was tested in the laboratory, modified for large-scale testing, then evaluated for reliability. This study reports the application of the methodology, instrumentation, and data analysis techniques developed during the study. The research demonstrates that ATTT can locate and characterize cracks, voids and other anomalies deep within a mass concrete dam. The results at each stage of the study have shown that ATTT is a potentially highly accurate testing procedure, and that it is uniquely suited to the evaluation of large concrete dams.
The Weibull distribution is widely used in reliability engineering. To estimate its parameters and associated reliability indices, the maximum likelihood (ML) approach is often employed, and the associated Fisher information matrix is used to obtain the confidence bounds on the reliability indices that are of interest. The estimates and the confidence bounds usually behave similarly in terms of monotonic and asymptotic properties. However, the confidence bounds may behave differently under certain circumstances. As a result, the Fisher matrix approach may not always be preferred in obtaining the desired confidence bounds. This paper provides some properties of Fisher confidence bounds for the Weibull distribution. These properties can be used as guidelines when implementing the ML approach and Fisher information matrix to analyze failure time data and plan life tests.
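The ML step underlying the Fisher-matrix confidence bounds discussed above can be sketched for complete data; here the profile score equation for the Weibull shape parameter is solved by bisection (the confidence-bound step itself, which uses the Fisher matrix, is not implemented in this sketch):

```python
import math
import random

def weibull_mle(data):
    """ML fit of a two-parameter Weibull (shape k, scale lam) for complete
    data, by bisection on the profile score equation for k."""
    n = len(data)
    mlog = sum(math.log(x) for x in data) / n

    def score(k):  # increasing in k; root gives the shape MLE
        s0 = sum(x ** k for x in data)
        s1 = sum((x ** k) * math.log(x) for x in data)
        return s1 / s0 - 1.0 / k - mlog

    lo, hi = 0.05, 50.0          # bracket assumed wide enough for the root
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if score(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    k = 0.5 * (lo + hi)
    lam = (sum(x ** k for x in data) / n) ** (1.0 / k)
    return k, lam

# Synthetic Weibull(shape=2, scale=3) failure times via inverse-cdf sampling.
random.seed(3)
sample = [3.0 * (-math.log(random.random())) ** 0.5 for _ in range(1000)]
k_hat, lam_hat = weibull_mle(sample)
```

Confidence bounds on reliability indices would then be obtained by evaluating the observed or expected Fisher information at `(k_hat, lam_hat)` and applying the delta method, which is the step whose monotonicity properties the paper examines.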
The damage-tolerant life management of fracture-critical components of aircraft requires reliable nondestructive inspections (NDI), both after component manufacture as well as during the service life. In this paper, a historical background to the inspection reliability assessment using probability of detection (POD) has been provided, and the experimental procedures and statistical analysis techniques for the determination of the NDI reliability have been reviewed. Important factors and issues affecting POD measurement have been mentioned and some of the IAR experiences in the NDI POD assessment of aircraft engine components have been discussed.
Just as an interval bounds an uncertain real value, a probability box bounds an uncertain cumulative distribution function. It can express doubt about the shape of a probability distribution, the distribution parameters, the nature of intervariable dependence, or some other aspect of model uncertainty. Probability bounds analysis rigorously projects probability boxes through mathematical expressions. Ben-Haim's info-gap decision theory is a non-probabilistic decision theory that can address poorly characterized and even unbounded uncertainty. It bases decisions on optimizing robustness to failure rather than expected utility. Nested probability boxes can be used to define info-gap models for probability distributions, and probability bounds analysis provides a ready calculus for the calculations needed for an info-gap analysis involving probabilistic uncertainty.
Previously suggested methods for constructing confidence bands for cumulative distribution functions have been based on the classical Kolmogorov-Smirnov test for an empirical distribution function. This paper gives a method based on maximum likelihood estimation of the parameters. The method is described for a general continuous distribution. Detailed results are given for a location-scale parameter model, which includes the normal and extreme-value distributions as special cases. Results are also given for the related lognormal and Weibull distributions. The formulas derived for these distributions give a band with exact confidence coefftcient. A chi-squared approximation, which avoids the use of special tables, is also described. An example is used to compare the resulting bands with those obtained by previously published methods.
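The core idea described above, namely varying the parameters over a joint confidence region and tracking the extremes of the cdf, can be sketched numerically for the normal case. This is not the paper's closed-form band: it simply walks the boundary of a Wald ellipse based on the expected Fisher information I = (n/σ²)·diag(1, 2); the extremes lie on the boundary because the cdf has no stationary point in (μ, σ):

```python
import math

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def scb_normal(x, mu_hat, sig_hat, n, chi2=5.99, npts=720):
    """Simultaneous band at x for a normal cdf, found numerically by
    walking the boundary of the Wald confidence ellipse for (mu, sigma);
    chi2 = 5.99 is the 95% chi-square(2) quantile."""
    lo, hi = 1.0, 0.0
    for i in range(npts):
        t = 2.0 * math.pi * i / npts
        mu = mu_hat + sig_hat * math.sqrt(chi2 / n) * math.cos(t)
        sig = sig_hat + sig_hat * math.sqrt(chi2 / (2.0 * n)) * math.sin(t)
        if sig <= 0:
            continue                     # skip infeasible boundary points
        f = normal_cdf((x - mu) / sig)
        lo, hi = min(lo, f), max(hi, f)
    return lo, hi

lo25, hi25 = scb_normal(0.0, 0.0, 1.0, 25)
lo400, hi400 = scb_normal(0.0, 0.0, 1.0, 400)
```

As expected, the band brackets the fitted cdf value and tightens as the sample size grows; the closed-form expressions in the paper replace this boundary search with an exact optimization over the ellipse.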
A method is developed for constructing a confidence band for a cumulative distribution function with known functional form. The band is derived using the maximum absolute difference between the true function and an estimator of it. The normal distribution with unknown mean and variance is treated in detail. The band derived in this case is found to be superior in certain instances to the one due to Kanofsky (1968a) with respect to the expected band width at selected points and the expected maximum band width.
Summary: Sharp one-sided confidence bounds of the hyperbolic Scheffé type are computed and tabulated for linear regression over intervals. The width of a one-sided interval is defined as the distance from the estimated regression line to the end of the interval. The average width of the bounds derived is compared to that of the sharp one-sided constant-width bounds of Gafarian and to the conservative Scheffé bounds. Numerical comparisons in cases of particular practical interest indicate the value of the bounds derived.
Summary: Based on a random sample from the normal cumulative distribution function Φ(x; μ, σ) with unknown parameters μ and σ, one-sided confidence contours are constructed for Φ(x; μ, σ), −∞ < x < ∞, and for differences Φ(y; μ, σ) − Φ(x; μ, σ), −∞ < x < y < ∞.