Selection between Weibull and lognormal distributions: A comparative simulation study

Department of Industrial Engineering, Korea Advanced Institute of Science and Technology, 373-1, Guseong-Dong, Yuseong-Gu, Daejeon, 305-701, Republic of Korea
Computational Statistics & Data Analysis (Impact Factor: 1.3). 01/2008; 53(2):477-485. DOI: 10.1016/j.csda.2008.08.012
Source: RePEc

ABSTRACT How to select the correct distribution for a given set of data is an important issue, especially when the tail probabilities are of interest, as in lifetime data analysis. The Weibull and lognormal distributions are assumed most often in analyzing lifetime data, and in many cases they compete with each other. In addition, lifetime data are usually censored due to constraints on the amount of testing time. A literature review reveals that little attention has been paid to the selection problem for censored samples. In this article, the relative performances of two selection procedures, namely the maximized likelihood and scale invariant procedures, are compared for selecting between the Weibull and lognormal distributions for both complete and censored samples. Monte Carlo simulation experiments are conducted for various combinations of censoring rate and sample size, and the performance of each procedure is evaluated in terms of the probability of correct selection (PCS) and the average error rate. Previously unknown behaviors and relative performances of the two procedures are then summarized. Computational results suggest that the maximized likelihood procedure can be generally recommended for censored as well as complete samples.
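The maximized likelihood procedure described above has a simple form for the complete-sample case: fit both candidate models by maximum likelihood and select the one with the larger maximized log-likelihood. A minimal sketch of that idea using `scipy` (not the authors' implementation; the function name and the fixed-at-zero location are illustrative assumptions):

```python
# Sketch of the maximized likelihood selection procedure for a
# complete (uncensored) lifetime sample: fit the Weibull and lognormal
# models by MLE and pick the one with the larger log-likelihood.
import numpy as np
from scipy import stats

def select_distribution(data):
    """Return 'weibull' or 'lognormal', whichever attains the larger
    maximized log-likelihood on the given complete sample."""
    # Fix the location at 0, as is standard for lifetime models.
    wb_params = stats.weibull_min.fit(data, floc=0)
    ln_params = stats.lognorm.fit(data, floc=0)
    ll_weibull = np.sum(stats.weibull_min.logpdf(data, *wb_params))
    ll_lognorm = np.sum(stats.lognorm.logpdf(data, *ln_params))
    return "weibull" if ll_weibull > ll_lognorm else "lognormal"

# Example: a sample actually drawn from a lognormal distribution.
rng = np.random.default_rng(0)
sample = rng.lognormal(mean=0.0, sigma=1.0, size=500)
print(select_distribution(sample))
```

Repeating this over many simulated samples and recording how often the true parent distribution is chosen gives exactly the probability of correct selection (PCS) studied in the paper; the censored-sample version would instead maximize a likelihood that combines density terms for observed failures with survival terms for censored units.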

  • ABSTRACT: In online applications, explosive growth is foreseeable in biometric personal authentication systems based on a measurable behavioral trait. Handwritten object [HO] verification is a process used to recognize an individual and is an intuitively reliable indicator. Verification of a HO as a biometric modality remains a challenging field of research, as a number of online and offline commercial applications use modern acquisition devices. The employment of HO verification remains open to novel methods due to inter-class and intra-class variations. This paper discusses trajectory generation [TG] methods applicable to any HO verification task, such as character recognition, handwriting verification, style classification, shape recognition and signature recognition, which are suited to the latest trend of mobile-commerce and web-commerce applications.
  • ABSTRACT: Log-normal and Weibull distributions are the two most popular distributions for analysing lifetime data. In this paper, we consider the problem of discriminating between the two distribution functions. It is assumed that the data come from either the log-normal or the Weibull distribution and that they are Type-II censored. We use the difference of the maximized log-likelihood functions in discriminating between the two distribution functions. We obtain the asymptotic distribution of the discrimination statistic and use it to determine the probability of correct selection in this discrimination process. We perform some simulation studies to observe how the asymptotic results work for different sample sizes and different censoring proportions. It is observed that the asymptotic results work quite well even for small sample sizes if the censoring proportions are not very low. We further suggest a modified discrimination procedure. Two real data sets are analysed for illustrative purposes.
    Statistics: A Journal of Theoretical and Applied Statistics (Impact Factor: 1.26). 04/2012; 46(2):197-214.
  • ABSTRACT: Robust estimators for accelerated failure time models with asymmetric (or symmetric) error distributions and censored observations are proposed. It is assumed that the error model belongs to a log-location-scale family of distributions and that the mean response is the parameter of interest. Since scale is a main component of the mean, scale is not treated as a nuisance parameter. A three-step procedure is proposed. In the first step, an initial high-breakdown-point S-estimate is computed. In the second step, observations that are unlikely under the estimated model are rejected or down-weighted. Finally, a weighted maximum likelihood estimate is computed. To define the estimates, functions of censored residuals are replaced by their estimated conditional expectation given that the response is larger than the observed censored value. The rejection rule in the second step is based on an adaptive cut-off that, asymptotically, does not reject any observation when the data are generated according to the model. Therefore, the final estimate attains full efficiency at the model, with respect to the maximum likelihood estimate, while maintaining the breakdown point of the initial estimator. Asymptotic results are provided. The new procedure is evaluated with the help of Monte Carlo simulations. Two examples with real data are discussed.
    Computational Statistics & Data Analysis (Impact Factor: 1.30). 01/2011; 55(1):874-887.
