## About

- 104 Publications
- 8,214 Reads


- 2,206 Citations (since 2016)


## Publications


In one-sided testing, Bayesians and frequentists differ on whether or not there is a discrepancy between the inference based on the posterior model probability and that based on the p value. We add some arguments to this debate analyzing the discrepancy for moderate and large sample sizes. For small and moderate sample sizes, the discrepancy is meas...

Selecting a statistical model from a set of competing models is a central issue in the scientific task, and the Bayesian approach to model selection is based on the posterior model distribution, a quantification of the updated uncertainty on the entertained models. We present a Bayesian procedure for choosing a family between the Poisson and the ge...

In cost–effectiveness analysis (CEA) of medical treatments the optimal treatment is chosen using a statistical model of the cost and effectiveness of the treatments, and data from patients under the treatments. Sometimes these data also include values of certain deterministic covariates of the patients which usually have valuable clinical informat...

Cost–effectiveness analysis of medical treatments is a statistical decision problem whose aim is to choose an optimal treatment among a finite set of alternative treatments. It is assumed that the treatment selection is to be based on their cost and effectiveness. In this paper we revise this statistical decision problem, discuss two utility functi...

The random effect approach for meta-analysis was motivated by a lack of consistent assessment of homogeneity of treatment effect before pooling. The random effect model assumes that the distribution of the treatment effect is fully heterogeneous across the experiments. However, other models arising by grouping some of the experiments are plausible....

The sampling information for the cost-effectiveness analysis typically comes from different health care centers, and, as far as we know, it is taken for granted that the distribution of the cost and the effectiveness does not vary across centers. We argue that this assumption is unrealistic, and prove that to not consider the sample heterogeneity w...

Statistical meta-analysis is mostly carried out with the help of the random effect normal model, including the case of discrete random variables. We argue that the normal approximation is not always able to adequately capture the underlying uncertainty of the original discrete data. Furthermore, when we examine the influence of the prior distributi...

In this paper, we present two problems in meta-analysis. One is the model uncertainty generated by the available heterogeneous sampling information. We claim that this model uncertainty has to be incorporated into the meta-inference, and propose a Bayesian clustering procedure for doing that. A second problem is that of choosing the linking distribu...

This book brings together selected peer-reviewed contributions from various research fields in statistics, and highlights the diverse approaches and analyses related to real-life phenomena. Major topics covered in this volume include, but are not limited to, Bayesian inference, likelihood approach, pseudo-likelihoods, regression, time series, and...

Most of the consistency analyses of Bayesian procedures for variable selection in regression refer to pairwise consistency, that is, consistency of Bayes factors. However, variable selection in regression is carried out in a given class of regression models where a natural variable selector is the posterior probability of the models. In this paper...

In most cases, including those of discrete random variables, statistical meta-analysis is carried out using the normal random effect model. The authors argue that the normal approximation does not always properly reflect the underlying uncertainty of the original discrete data. Furthermore, in the presence of rare events the results from this approxima...

We put forward the idea that, for model selection, the intrinsic priors are becoming the center of a dominant cluster of methodologies for objective Bayesian model selection. The intrinsic method and its applications have been developed over the last two decades and have stimulated closely related methods. The intrinsic methodology can be thou...

This paper presents a Bayesian model for meta-analysis of sparse discrete binomial data, which are out of the scope of the usual hierarchical normal random-effect models. Treatment effectiveness data are often of this type. The crucial linking distribution between the effectiveness conditional on the healthcare center and the unconditional effective...

These are the written discussions of the paper "Bayesian measures of model complexity and fit" by D. Spiegelhalter et al. (2002), following the discussions given at the Annual Meeting of the Royal Statistical Society in Newcastle-upon-Tyne on September 3rd, 2013.

Bayesian inference on the change points in a given sample is a statistical model selection problem that presents two main difficulties. The first one is the selection of a reasonable prior distribution over the set of models, the number of which depends exponentially on the sample size, and the second is the high computational burden involved even...
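
The flavour of the computation can be illustrated with a deliberately small sketch: a single change point in a normal mean with known unit variance, a conjugate N(0, τ²) prior on each segment mean, and a uniform prior over the candidate models. This is a stand-in toy model, not the priors or algorithm of the paper, and all numbers are hypothetical.

```python
import math

def log_marginal(seg, tau2=10.0):
    """Log marginal likelihood of one segment: y_i ~ N(mu, 1), mu ~ N(0, tau2)."""
    m, s, ss = len(seg), sum(seg), sum(y * y for y in seg)
    return (-0.5 * m * math.log(2 * math.pi)
            - 0.5 * math.log(1 + m * tau2)
            - 0.5 * (ss - tau2 * s * s / (1 + m * tau2)))

def changepoint_posterior(y):
    """Posterior over models M_k ('change after index k'), uniform model prior.
    Key 0 denotes the no-change model."""
    logs = {0: log_marginal(y)}
    for k in range(1, len(y)):
        logs[k] = log_marginal(y[:k]) + log_marginal(y[k:])
    mx = max(logs.values())                      # stabilise the exponentials
    w = {k: math.exp(v - mx) for k, v in logs.items()}
    z = sum(w.values())
    return {k: v / z for k, v in w.items()}

# Ten observations with an obvious mean shift after position 5.
y = [0.1, -0.2, 0.3, 0.0, -0.1, 4.1, 3.8, 4.2, 3.9, 4.0]
post = changepoint_posterior(y)
best = max(post, key=post.get)
print(best, round(post[best], 3))
```

With n observations there are only n candidate single-change models, so enumeration is cheap here; the exponential blow-up the abstract refers to appears once multiple change points are allowed.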

In the presence of covariates, the cost-effectiveness analysis of medical treatments shows that the optimal treatment varies across the patient population subgroups, and hence to accurately define the subgroups is a crucial step in the analysis. A patient subgroup definition using only influential covariates within the potential set of patients cov...

This paper deals with the decision problem of choosing an optimal medical treatment, among M possible candidates, when the states of nature are the net benefit of the treatments, and regression models for the treatment cost and effectiveness are assumed. In this setting a crucial step in the analysis is the construction of the population subgroups...

We describe a new variable selection procedure for categorical responses where the candidate models are all probit regression models. The procedure uses objective intrinsic priors for the model parameters, which do not depend on tuning parameters, and ranks the models for the different subsets of covariates according to their model posterior probab...

In Moreno et al. (Rev R Acad Cien Ser A Mat 97:53–61, 2003) an objective Bayesian model comparison procedure for univariate normal regression models based on intrinsic priors was provided. However, in many applications the regression models entertained are multidimensional, and hence an extension of the procedure to this setting is required. This te...

Evaluation of: Oppe M, Al M, Rutten-van Mölken M. Comparing methods of data synthesis. Re-estimating parameters of an existing probabilistic cost-effectiveness model. Pharmacoeconomics 29(3), 239-250 (2011). In the paper by Oppe et al., a cost-effectiveness analysis of alternative treatments for chronic obstructive pulmonary disease (COPD), based o...

In the class of normal regression models with a finite number of regressors, and for a wide class of prior distributions, a Bayesian model selection procedure based on the Bayes factor is consistent [Casella and Moreno J. Amer. Statist. Assoc. 104 (2009) 1261–1271]. However, in models where the number of parameters increases as the sample size inc...

We analyze the general (multiallelic) Hardy-Weinberg equilibrium problem from an objective Bayesian testing standpoint. We argue that for small or moderate sample sizes the answer is rather sensitive to the prior chosen, and this suggests carrying out a sensitivity analysis with respect to the prior. This goal is achieved through the identification...

This paper deals with medical treatments comparison from the cost-effectiveness viewpoint. A decision theory scheme is considered, where the decision space is the set of treatments involved, the space of states of nature consists of the respective net benefits of the treatments, and the utility function is one of two possible candidates. A first ca...

Cost-effectiveness analysis of a treatment is typically based on specific functions of the expectation of the effectiveness and cost of the treatment, and treatment comparisons are made in the same vein. The mathematical expectation has been the cornerstone for defining the incremental cost-effectiveness ratio and the incremental net benefit, the m...
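
For reference, the two expectation-based summaries mentioned here have simple closed forms: ICER = ΔC/ΔE, and INB = λΔE − ΔC for a willingness to pay λ per unit of effectiveness. A minimal sketch with hypothetical figures (not data from the paper):

```python
# Hypothetical mean effectiveness (QALYs) and mean cost (euros) per treatment.
e1, c1 = 1.20, 4000.0   # standard treatment
e2, c2 = 1.35, 7000.0   # new treatment
wtp = 30000.0           # willingness to pay per unit of effectiveness

icer = (c2 - c1) / (e2 - e1)        # incremental cost-effectiveness ratio
inb = wtp * (e2 - e1) - (c2 - c1)   # incremental net benefit

print(f"ICER = {icer:.0f} per QALY, INB = {inb:.0f}")
# The new treatment is preferred when ICER < wtp, equivalently when INB > 0.
```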

Linear regression models are often used to represent the cost and effectiveness of medical treatment. The covariates used may include sociodemographic variables, such as age, gender or race; clinical variables, such as initial health status, years of treatment or the existence of concomitant illnesses; and a binary variable indicating the treatment...

Casella et al. [2, (2009)] proved that, under very general conditions, for normal linear models the Bayes factor for a wide class of prior distributions, including the intrinsic priors, is consistent when the number of parameters does not grow with the sample size n. The special attention paid to the intrinsic priors is due to the fact that they ar...

The economic literature on cost-effectiveness analysis in the context of decisions by health technology assessment agencies assumes as the quantity of interest a linear combination of the mean of the sampling distribution of the effectiveness and the cost. We argue that this is not always reasonable. Our reasons for this assertion are that (i) trea...

The aim of cost-effectiveness analysis is to maximize health benefits from a given budget, taking a societal perspective. Consequently, the comparison of alternative treatments or technologies is solely based on their expected effectiveness and cost. However, the expectation, or mean, poses important limitations as it might be a poor summary of the...

A condition needed for testing nested hypotheses from a Bayesian viewpoint is that the prior for the alternative model concentrates mass around the smaller, or null, model. For testing independence in contingency tables, the intrinsic priors satisfy this requirement. Further, the degree of concentration of the priors is controlled by a discrete...
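
As a rough illustration of Bayesian independence testing in a contingency table, the sketch below computes the Bayes factor with uniform Dirichlet priors on the cell, row and column probabilities — a simpler conjugate stand-in, not the intrinsic priors discussed in the paper. The multinomial coefficient is common to both marginal likelihoods and cancels in the ratio.

```python
import math

def log_dirichlet_norm(alpha):
    """log of the Dirichlet normalising constant B(alpha)."""
    return sum(math.lgamma(a) for a in alpha) - math.lgamma(sum(alpha))

def log_bf_independence(table):
    """log Bayes factor of independence vs the full multinomial model,
    with uniform Dirichlet(1,...,1) priors in both models."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    cells = [n for r in table for n in r]
    m_full = (log_dirichlet_norm([n + 1 for n in cells])
              - log_dirichlet_norm([1] * len(cells)))
    m_ind = (log_dirichlet_norm([n + 1 for n in rows])
             - log_dirichlet_norm([1] * len(rows))
             + log_dirichlet_norm([n + 1 for n in cols])
             - log_dirichlet_norm([1] * len(cols)))
    return m_ind - m_full

# Strong association: the log BF (independence vs full) comes out negative.
print(log_bf_independence([[30, 5], [5, 30]]))
# Near-independent counts: the log BF comes out positive.
print(log_bf_independence([[18, 17], [17, 18]]))
```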

It has long been known that for the comparison of pairwise nested models, a decision based on the Bayes factor produces a consistent model selector (in the frequentist sense). Here we go beyond the usual consistency for nested pairwise models, and show that for a wide class of prior distributions, including intrinsic priors, the corresponding Bayes...

In the objective Bayesian approach to variable selection in regression a crucial point is the encompassing of the underlying nonnested linear models. Once the models have been encompassed, one can define objective priors for the multiple testing problem involved in the variable selection problem.
There are two natural ways of encompassing: one way...

This paper deals with the detection of multiple changepoints for independent but non-identically distributed observations, which are assumed to be modeled by a linear regression with normal errors. The problem has a natural formulation as a model selection problem and the main difficulty for computing model posterior probabilities is that neither...

Hypothesis testing is a model selection problem for which the solution proposed by the two main statistical streams of thought, frequentists and Bayesians, substantially differ. One may think that this fact might be due to the prior chosen in the Bayesian analysis and that a convenient prior selection may reconcile both approaches. However, the Bay...

An optimal Bayesian decision procedure for testing hypotheses in normal linear models based on intrinsic model posterior probabilities is considered. It is proven that these posterior probabilities are simple functions of the classical F-statistic, thus the evaluation of the procedure can be carried out analytically through the frequentist analys...

A novel fully automatic Bayesian procedure for variable selection in normal regression models is proposed. The procedure uses the posterior probabilities of the models to drive a stochastic search. The posterior probabilities are computed using intrinsic priors, which can be considered default priors for model selection problems; that is, they are...

The Bayesian analysis of the variable selection problem in linear regression when using objective priors needs some form of encompassing the class of all submodels of the full linear model as they are nonnested models. After we provide a nested setting, objective intrinsic priors suitable for computing model posterior probabilities, on which the se...

The Bayesian literature on the change point problem deals with the inference of a change in the distribution of a set of time-ordered data based on a sample of fixed size. This is the so-called ''retrospective or off-line'' analysis of the change point problem. A related but different problem is that of the ''sequential'' change point detection, ma...

The one-sided testing problem can be naturally formulated as the comparison between two nonnested models. In an objective Bayesian setting, that is, when subjective prior information is not available, no general method exists either for deriving proper prior distributions on parameters or for computing Bayes factor and model posterior probabilities...

The Jeffreys–Lindley paradox refers to the well-known fact that a sharp null hypothesis on the normal mean parameter is always accepted when the variance of the conjugate prior goes to infinity, thus implying that the resulting Bayesian procedure is not consistent, and that some limiting forms of proper prior distributions are not necessarily suita...
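
The paradox is easy to reproduce numerically. The sketch below (with made-up numbers) computes the Bayes factor for a sharp null on a normal mean under a N(0, τ²) conjugate prior: as τ² grows, the factor increasingly favours the null even though the sample mean sits two standard errors from zero.

```python
import math

def bf01(xbar, n, sigma2, tau2):
    """Bayes factor for H0: mu = 0 vs H1: mu ~ N(0, tau2),
    given the mean xbar of n observations from N(mu, sigma2)."""
    v0 = sigma2 / n            # sampling variance of xbar under H0
    v1 = tau2 + sigma2 / n     # marginal variance of xbar under H1
    log_bf = 0.5 * math.log(v1 / v0) - 0.5 * xbar**2 * (1 / v0 - 1 / v1)
    return math.exp(log_bf)

# Moderately significant data: xbar is two standard errors from zero,
# so a frequentist test rejects at the 5% level, yet BF01 grows with tau2.
n, sigma2 = 100, 1.0
xbar = 2.0 / math.sqrt(n)
for tau2 in [1.0, 1e2, 1e4, 1e6]:
    print(tau2, bf01(xbar, n, sigma2, tau2))
```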

This article addresses the problem of testing whether the vectors of regression coefficients are equal for two independent normal regression models when the error variances are unknown. This problem poses severe difficulties both to the frequentist and Bayesian approaches to statistical inference. In the former approach, normal hypothesis testing t...

Meta-analysis has a natural formulation as a Bayesian hierarchical model. The main theoretical difficulty is the construction of a sensible relationship between the parameters of the individual statistical experiments and the meta-parameter. Since prior information on such a relationship is typically not available, we argue that this relations...

Model selection problems involving nonnested models are considered. A Bayes factor based solution to these problems needs prior distributions for the parameters in the alternative models. When the prior information on these parameters is vague, default priors are available but, unfortunately, these priors are usually improper, which yields a calibratio...

In this paper we address three main questions connected with the statistical analysis of matched pairs, mainly from a Bayesian viewpoint. This sort of data appears when measurements are made on the same individuals before and after a treatment is applied. The first question refers to the issue of how to construct statistical models for matched pair...

In the last few years, there has been an increasing interest for default Bayes methods for hypothesis testing and model selection. The availability of such methods is potentially very useful in mixture models, where the elicitation process on the (unknown number of) parameters is usually rather difficult. Two recent yet already popular approaches,...

Partial prior information on the marginal distribution of an observable random variable is considered. When this information is incorporated into the statistical analysis of an assumed parametric model, the posterior inference is typically non-robust so that no inferential conclusion is obtained. To overcome this difficulty a method based on the st...

The statistical analysis of contingency tables is typically carried out with a hypothesis test. In the Bayesian paradigm, default priors for hypothesis tests are typically improper, and cannot be used. Although such priors are available, and proper, for testing contingency tables, we show that for testing independence they can be greatly improved...

Testing that some regression coefficients are equal to zero is an important problem in many applications. Homoscedasticity is not necessarily a realistic condition in this setting and, as a consequence, no frequentist test exists. Approximate tests have been proposed. In this paper a Bayesian analysis of this problem is carried out, from a de...

In this paper, we address three main questions connected with the statistical analysis of matched pairs in the presence of covariates, from a Bayesian viewpoint. This sort of data appears when measurements are made on the same individuals before and after a treatment, at possibly different dose levels, is applied. The first question refers to the i...

Let θ represent the unobservable parameter of a sampling model f(x|θ) and ϕ(θ) the quantity of interest. In this article ranges of the posterior expectation of ϕ(θ), as the prior for θ varies over some commonly used classes of prior distributions, are studied. This kind of study is termed “global” Bayesian robustness of the class of priors for the...

In the Bayesian approach to model selection and prediction, the posterior probability of each model under consideration must be computed. In the presence of weak prior information we need to use default or automatic priors, which are typically improper, for the parameters of the models. However, this leads to ill-defined posterior probabilities. Sever...

In the Bayesian approach, the Behrens–Fisher problem has been posed as one of estimation for the difference of two means. No Bayesian solution to the Behrens–Fisher testing problem has yet been given due, perhaps, to the fact that the conventional priors used are improper. While default Bayesian analysis can be carried out for estimation purposes,...

This paper explores the usefulness of robust Bayesian analysis in the context of an applied problem, finding priors to model judicial neutrality in an age discrimination case. We seek large classes of prior distributions without trivial bounds on the posterior probability of a key set, that is, without bounds that are independent of the data. Such...

We analyze the pioneering work on the theory of precise measurement of Edwards, Lindman and Savage (1963) in light of some recent developments in the theory of robust Bayesian analysis. The key points of the former are the concept of “actual” prior and bounds for the errors when replacing the actual prior by a uniform prior. The class of “actual” p...

In estimating the discrete parameter of a binomial distribution, choosing a truncated Poisson model as the prior distribution of this parameter is shown to be unsuitable.

Improper priors typically arise in default Bayesian estimation problems. In the Bayesian approach to model selection or hypothesis testing, the main tool is the Bayes factor. When improper priors for the parameters appearing in the models are used, the Bayes factor is not well defined. The intrinsic Bayes factor introduced by Berger and Pericchi is...

Let xt be a random process which is distributed as a homogeneous Poisson process with rate λ. We assume that λ is unknown and xt is unobservable, but instead we are able to observe another process yt which is an unknown proportion, say θ, of xt for every t. The goal is to make inferences on xt, conditional on the observed yt. Assuming that the distribution of...

The examination of mammograms, along with some (historical) information on the patients, leads physicians, in some informal way, to declare a patient as having breast cancer or not. This diagnosis is usually based on historical variables, such as age, family history, etc., and other semiologic variables derived from the analysis of the mammogram...

A non-precise data problem is defined as a two-stage hierarchical model. The question considered in this Note is to explore some theoretical and practical problems involving independence and/or conditional independence relations among the parameters and the data. We present some results in this direction.

For model selection the Bayes factor is not well defined when using default priors since they are typically improper. To overcome this problem two methods have recently been proposed. These methods, intrinsic and fractional, are studied here as methods for producing proper prior distributions for model selection from the improper conventional priors...

We address the problem of finding the range of the posterior expectation of an arbitrary function of the parameters when the prior distribution varies in an epsilon-contamination class and the resulting priors have specified marginals. This problem, which is an example of the Monge–Kantorovich problem, has not yet received a complete solution. We pro...

Bayesian model comparisons are known to be undetermined when improper priors are employed. The Intrinsic Bayes factor (IBF) is a general automatic procedure for model comparison proposed in Berger and Pericchi (1993) which addresses the difficulties that arise when improper priors are employed. An appealing justification of the IBF is that it asymp...

Admissible classes of functions ε(θ) capable of maintaining the prior information for the resulting priors π(θ) are characterized, and robust posterior analysis is carried out. Influence analysis is also considered. In this setting Fréchet derivatives are useful tools: they are easily interpreted and easily computed. Interactive robustness based on...

When the parameter space is multidimensional, to elicit the joint prior distribution is a very difficult task. An accessible prior information might then be the class of prior distributions with given one-dimensional marginals. Unfortunately, even in bidimensional parameter spaces, the variational problems encountered in the Bayesian analysis of th...

General comments on the relation between theory and application in statistics are made, and emphasis is placed on issues and principles of model formulation. Three examples are described in outline. Criteria for the choice of models are discussed.

When θ is a multidimensional parameter, the issue of prior dependence or independence of coordinates is a serious concern. This is especially true in robust Bayesian analysis; Lavine et al. (J. Amer. Statist. Assoc.86, 964–971 (1991)) show that allowing a wide range of prior dependencies among coordinates can result in near vacuous conclusions. It...

Robust Bayesian analysis is the study of the sensitivity of Bayesian answers to uncertain inputs. This paper seeks to provide an overview of the subject, one that is accessible to statisticians outside the field. Recent developments in the area are also reviewed, though with very uneven emphasis. The topics to be covered are as follows: 1. Introduc...

A useful non-parametric class of priors is formed as those probability measures which lie between an upper and a lower measure, and it is called a band of probability measures (Lavine 1991; Moreno and Pericchi 1991; Wasserman and Kadane 1992). This class allows considerable freedom in tail behaviour as long as the upper and lower measures have d...

In the robust Bayesian literature, in order to investigate robustness with respect to the functional form of a base prior distribution π0 (in particular with respect to the shape of the prior tails), the ε-contamination model of prior distributions Γ = {π : π = (1−ε)π0(θ|λ) + εq, q ∈ Q} has been proposed. Here π0(θ|λ) is the base elicited prior, λ is a vector...
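
A small numeric sketch of the kind of sensitivity analysis this class supports, under assumed ingredients not taken from the paper: a N(θ, 1) likelihood, a N(0, 1) base prior, and contaminations restricted to point masses, which attain the extremes of the posterior mean over the class of all probability measures. It sweeps the contamination location over a grid and records the resulting range.

```python
import math

def norm_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def posterior_mean_range(x, eps, grid):
    """Range of E[theta | x] for x ~ N(theta, 1), base prior N(0, 1), under
    the contaminated prior (1 - eps) * N(0, 1) + eps * delta_t, t in grid."""
    m0 = norm_pdf(x, 0.0, 2.0)   # marginal of x under the base prior
    e0 = x / 2.0                 # base posterior mean (conjugate update)
    means = []
    for t in grid:
        lik = norm_pdf(x, t, 1.0)
        num = (1 - eps) * m0 * e0 + eps * lik * t   # mixture numerator
        den = (1 - eps) * m0 + eps * lik            # mixture normaliser
        means.append(num / den)
    return min(means), max(means)

lo, hi = posterior_mean_range(x=1.0, eps=0.1,
                              grid=[i / 100 for i in range(-500, 501)])
print(lo, hi)
```

The width of the interval (lo, hi) quantifies how fragile the base posterior mean of 0.5 is to a 10% contamination of the prior.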

In this work we examine the ε-contamination model of prior densities Γ = {π : π = (1−ε)π0(θ) + εq, q ∈ G}, where π0(θ) is the base elicited prior, q is a contamination belonging to some suitable class G and ε reflects the amount of error in π0(θ). Various classes with shape and/or quantile constraints are analysed, and a posterior robust analysis is carried...

Prior distributions of the form π = (1−ϵ)π0 + ϵq, with π0 being the base elicited prior, q being a contamination and ϵ reflecting the amount of error in π0 that is deemed possible, are considered. It is assumed that π0 is completely specified and that only some quantiles of q are known. As π varies over this class, the ranges of the posterior mean and of the posterior probabi...