Ranking USRDS provider-specific SMRs from 1998–2001

Department of Public Health, University of Massachusetts Amherst, Rm 411 Arnold House, 715 N. Pleasant Rd., Amherst, MA 01003, USA.
Health Services and Outcomes Research Methodology 03/2009; 9(1):22-38. DOI: 10.1007/s10742-008-0040-0


Provider profiling (ranking/percentiling) is prevalent in health services research. Bayesian models coupled with optimizing
a loss function provide an effective framework for computing non-standard inferences such as ranks. Inferences depend on the
posterior distribution and should be guided by inferential goals. However, even optimal methods might not lead to definitive
results and ranks should be accompanied by valid uncertainty assessments. We outline the Bayesian approach and use estimated
Standardized Mortality Ratios (SMRs) in 1998–2001 from the United States Renal Data System (USRDS) as a platform to identify
issues and demonstrate approaches. Our analyses extend Liu et al. (2004) by computing estimates developed by Lin et al. (2006)
that minimize errors in classifying providers above or below a percentile cut-point, by combining evidence over multiple years
via a first-order, autoregressive model on log(SMR), and by use of a nonparametric prior. Results show that ranks/percentiles
based on maximum likelihood estimates of the SMRs and those based on testing whether an SMR = 1 substantially under-perform
the optimal estimates. Combining evidence over the four years using the autoregressive model reduces uncertainty, improving
performance over percentiles based on only one year. Furthermore, percentiles based on posterior probabilities of exceeding
a properly chosen SMR threshold are essentially identical to those produced by minimizing classification loss. Uncertainty
measures effectively calibrate performance, showing that considerable uncertainty remains even when using optimal methods.
Findings highlight the importance of using loss-function-guided percentiles and the necessity of accompanying estimates with
uncertainty assessments.
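The two optimal estimates the abstract compares can be sketched directly from posterior draws. Below is a minimal illustration on hypothetical simulated posteriors (not the USRDS data): squared-error-loss optimal ranks are obtained by ranking providers within each posterior draw and then ranking the posterior mean ranks (as in Shen and Louis 1998, the approach underlying Liu et al. 2004), while classification relative to a cut-point uses the posterior probability that a provider's SMR exceeds a chosen threshold. The lognormal posteriors and the 0.5 flagging rule here are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy posterior draws: n_draws samples of the SMR for each provider.
# In practice these come from an MCMC fit; here we simulate lognormal
# posteriors around hypothetical "true" log(SMR) values.
n_providers, n_draws = 20, 4000
true_log_smr = rng.normal(0.0, 0.3, n_providers)
post = np.exp(true_log_smr + rng.normal(0.0, 0.15, (n_draws, n_providers)))

# Squared-error-loss optimal ranks: rank providers within each draw
# (argsort of argsort gives 0-based ranks; draws are continuous, so no ties),
# average over draws, then rank the posterior mean ranks.
per_draw_ranks = post.argsort(axis=1).argsort(axis=1) + 1
post_mean_rank = per_draw_ranks.mean(axis=0)
rhat = post_mean_rank.argsort().argsort() + 1     # integer ranks 1..n
percentile = 100 * (rhat - 0.5) / n_providers

# Classification above/below an SMR threshold: flag a provider when the
# posterior probability that its SMR exceeds the cut-point is high.
threshold = 1.0
p_exceed = (post > threshold).mean(axis=0)
flagged = p_exceed > 0.5
```

Ranking `p_exceed` itself reproduces the abstract's observation that percentiles based on exceedance probabilities for a properly chosen threshold track the classification-loss-optimal percentiles.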

Related publication (dissertation abstract):

ABSTRACT: Bayesian nonparametric methods are useful for modeling data without having to fix the complexity of the entire model a priori, instead allowing this complexity to be determined by the data. In this dissertation we consider novel nonparametric Bayes models for high-dimensional and sparse data. The flexibility of Bayesian nonparametric priors arises from the prior's definition over an infinite-dimensional parameter space. In theory there are therefore infinitely many latent components and latent factors; nevertheless, draws from each prior produce only a small number of components or factors that appear in a given data set. The number of these components and factors, and their corresponding parameter values, are left for the data to decide. The dissertation is divided into four parts, which motivate novel Bayesian nonparametric methods and illustrate their utility.

Chapter 1: We review the Dirichlet process (DP) in detail. There are many other approaches to nonparametric modeling, but with efficient computation available and a well-developed theory, the DP is the most popular and has been studied extensively. We also review the most recent developments of the DP.

Chapter 2: We propose the multiple Bayesian elastic net (MBEN), a new regularization and variable selection method. High-dimensional and highly correlated data are commonplace. In such situations maximum likelihood procedures typically fail: their estimates are unstable and have large variance. To address this problem, a number of shrinkage methods have been proposed, including ridge regression, the lasso, and the elastic net; these methods encourage coefficients to be near zero (the lasso and the elastic net in fact perform variable selection by forcing some regression coefficients to equal zero). We describe a semiparametric approach that allows shrinkage to multiple locations, where the location and scale parameters are assigned Dirichlet process hyperpriors. The MBEN prior encourages variables to cluster, so that strongly correlated predictors tend to enter or leave the model together. We apply the MBEN prior to a multi-task learning (MTL) problem using text data from Wikipedia. An efficient MCMC algorithm and an automated Monte Carlo EM algorithm enable fast computation in high dimensions. The methods are applied to Wikipedia data, using shared words to predict article links.

Chapter 3: Latent class models (LCMs) are used increasingly for a broad variety of problems, including sparse modeling of multivariate and longitudinal data, model-based clustering, and flexible inference on predictor effects. Typical frequentist LCMs require estimating a single finite number of classes, which does not grow with the sample size, and are well known to be sensitive to parametric assumptions on the within-class distributions. Bayesian nonparametric methods have been developed to allow an infinite number of classes in the general population, with the number represented in a sample increasing with sample size. We propose a new nonparametric Bayes model that allows predictors to flexibly affect the allocation to latent classes, while limiting sensitivity to parametric assumptions by leaving the class-specific distributions unknown subject to a stochastic ordering constraint. An efficient MCMC algorithm is developed for posterior computation. The methods are validated in simulation studies and applied to the problem of ranking medical procedures in terms of the distribution of patient morbidity.

Chapter 4: In studies involving multi-level data structures, data sparsity is often encountered and it becomes necessary to borrow information to improve inferences and predictions. This work is motivated by studies collecting data on different outcomes following congenital heart surgery. If there were sufficiently many patients receiving each type of procedure, one could fit procedure-specific multivariate random effects models relating the outcomes of surgery to patient predictors while allowing variability among hospitals. However, with approximately 150 procedures, many conducted on few patients, it is important to borrow information. Allowing the regression coefficients relating patient factors to outcomes to vary across hospitals, procedures, and outcome types yields a three-way tensor of coefficient vectors. To borrow information in estimating these coefficients, we propose a Bayesian multiway tensor co-clustering model, which reduces the dimension of the table by separately clustering hospitals, procedures, and outcome types. This soft probabilistic clustering proceeds via nonparametric Bayesian latent class models, which favor clustering dimensions that have similar feature vectors. Efficient MCMC and fast approximation approaches are proposed for posterior computation. The methods are illustrated on simulated data and applied to heart surgery outcome data from a Duke study.
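The core DP property the dissertation abstract relies on, that a prior over infinitely many components yields only a few occupied components in any finite sample, can be sketched with a truncated stick-breaking construction. The concentration parameter, truncation level, and N(0, 1) base measure below are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)

def stick_breaking_weights(alpha, truncation, rng):
    """Truncated stick-breaking construction of Dirichlet process weights."""
    betas = rng.beta(1.0, alpha, truncation)
    # Fraction of the stick remaining before each break.
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    w = betas * remaining
    w[-1] = 1.0 - w[:-1].sum()   # fold leftover mass into the last weight
    return w

alpha = 2.0
w = stick_breaking_weights(alpha, truncation=100, rng=rng)
atoms = rng.normal(0.0, 1.0, w.size)   # atoms drawn from a N(0, 1) base measure

# Although the prior allows (up to truncation) many components, data drawn
# from this discrete random measure occupy only a handful of them.
labels = rng.choice(w.size, size=500, p=w)
n_occupied = np.unique(labels).size
```

With a small concentration parameter the weights decay quickly, so `n_occupied` stays far below the truncation level, which is the behavior the abstract describes for components and factors "that appear in a given data set".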