
In this short note, we propose a simplified adaptive fence procedure that reduces the computational burden of the adaptive fence procedure proposed by Jiang et al. [Jiang, J., Rao, J.S., Gu, Z., Nguyen, T., 2008. Fence methods for mixed model selection. Ann. Statist. 36, 1669-1692] for mixed model selection problems. The consistency property of the new procedure is established. Simulation results show that the new procedure performs very well in a small sample situation. The method is applied to a well-known data set in small area estimation.
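As a rough illustration of the adaptive idea (a toy sketch in my own notation, not the authors' implementation): for each value of the tuning constant c, record over bootstrap samples the most parsimonious model satisfying the fence inequality Q̂(M) ≤ Q̂(M̃) + c, and choose c where the empirical selection probability p* is maximal (here simply the global maximizer rather than the first significant peak).

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, B = 100, 60
X = rng.normal(size=(n, 3))
y = 2 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(size=n)  # true model uses x0, x1

# all non-empty subsets of the three candidate predictors
models = [m for k in range(1, 4) for m in combinations(range(3), k)]

def rss(Xs, ys, m):
    b, *_ = np.linalg.lstsq(Xs[:, m], ys, rcond=None)
    r = ys - Xs[:, m] @ b
    return r @ r

def fence_select(Xs, ys, c):
    q = {m: rss(Xs, ys, m) for m in models}
    qmin = min(q.values())                       # Q-hat of the best-fitting model
    inside = [m for m in models if q[m] <= qmin + c]
    return min(inside, key=lambda m: (len(m), q[m]))  # most parsimonious in fence

cs = np.linspace(0.1, 60, 40)
pstar = []
for c in cs:
    picks = []
    for _ in range(B):
        idx = rng.integers(0, n, n)              # bootstrap resample
        picks.append(fence_select(X[idx], y[idx], c))
    _, counts = np.unique([str(p) for p in picks], return_counts=True)
    pstar.append(counts.max() / B)               # p*(c): top selection frequency

c_star = cs[int(np.argmax(pstar))]
print(fence_select(X, y, c_star))
```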


... The mplot package provides an easy to use implementation of model stability and variable inclusion plots (Müller and Welsh 2010; Murray, Heritier, and Müller 2013) as well as the adaptive fence (Jiang, Rao, Gu, and Nguyen 2008; Jiang, Nguyen, and Rao 2009) for linear and generalized linear models. We provide a number of innovations on the standard procedures and address many practical implementation issues including the addition of redundant variables, interactive visualizations and the approximation of logistic models with linear models. ...

... The implementation we provide in the mplot package is inspired by the simplified adaptive fence proposed by Jiang et al. (2009), which represents a significant advance over the original fence method proposed by Jiang et al. (2008). The key difference is that the parameter c is not fixed at a certain value, but is instead adaptively chosen. ...

... The key difference is that the parameter c is not fixed at a certain value, but is instead adaptively chosen. Simulation results have shown that the adaptive method improves the finite sample performance of the fence; see Jiang et al. (2008, 2009). ...

The mplot package provides an easy to use implementation of model stability
and variable inclusion plots (Müller and Welsh 2010; Murray, Heritier, and
Müller 2013) as well as the adaptive fence (Jiang, Rao, Gu, and Nguyen 2008;
Jiang, Nguyen, and Rao 2009) for linear and generalised linear models. We
provide a number of innovations on the standard procedures and address many
practical implementation issues including the addition of redundant variables,
interactive visualisations and approximating logistic models with linear
models. An option is provided that combines our bootstrap approach with glmnet
for higher dimensional models. The plots and graphical user interface leverage
state of the art web technologies to facilitate interaction with the results.
The speed of implementation comes from the leaps package and cross-platform
multicore support.

... These concerns, such as the above, led to the development of a new class of strategies for model selection, known as the fence methods, first introduced by Jiang et al. [13]. Also see Jiang et al. [14]. The idea consists of a procedure to isolate a subgroup of what are known as correct models (those within the fence) via the inequality ...

... This is especially the case for the AF, which calls for repeated computation of the fence under the bootstrap samples. Jiang et al. [14] proposed to merge the factor σ̂ with the tuning constant c, which leads to (2), and to use the AF idea to choose the tuning constant adaptively. The latter authors called this modification the simplified adaptive fence and showed that it enjoys similarly impressive finite-sample performance as the original AF (see below). ...

... The AF has been shown to have outstanding finite-sample performance (Jiang et al. [13,14]). On the other hand, the method may encounter computational difficulties when applied to high-dimensional and complex problems. ...
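For reference, the fence inequality alluded to in these excerpts can be written, in the notation of Jiang et al. (2008) (up to minor notational differences):

```latex
\hat{Q}(M) \le \hat{Q}(\tilde{M}) + c_n\,\hat{\sigma}_{M,\tilde{M}},
```

where Q̂(M) is a measure of lack of fit, M̃ is the candidate model minimizing Q̂, and σ̂_{M,M̃} estimates the standard deviation of Q̂(M) − Q̂(M̃). The simplified adaptive fence absorbs σ̂_{M,M̃} into the tuning constant, leaving Q̂(M) ≤ Q̂(M̃) + c_n.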

This paper provides an overview of a recently developed class of strategies for
model selection, known as the fence methods. It also offers directions of future research
as well as challenging problems.

... for all models M ∈ M, where M has the smallest loss among all considered models. Jiang, Nguyen and Rao (2009) reduce to some extent the computational burden of the Fence method in their Simplified Adaptive Fence procedure, which can be very competitive in lower-dimensional problems where convergence of estimation procedures is not a concern, such as when using the least squares estimator in linear regression with X^T X of full rank. ...

... Jiang, Nguyen and Rao (2009) refer for the proof of (33) to the proof of Theorem 3 in Jiang et al. (2008). ...

... In our own implementations we used τ = 0.6, which was chosen before running any simulations, by a visual inspection of all published results in the series of Fence papers. (Jiang, Nguyen and Rao, 2009, suggest another adjustment, based on lower bounds of large sample 95% confidence intervals, which depend on the bootstrap sample size and p*.) Figure 1 shows a plot of p* over an appropriate range of the tuning constant c_n. The data generating model is an m = 10 independent-cluster model with group sample sizes n_i ≡ 5. ...
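In this notation, the quantity p* plotted against the tuning constant is, roughly, the largest bootstrap selection frequency:

```latex
p^*(c_n) = \max_{M \in \mathcal{M}} \; \mathbb{P}^*\{\, M^*_{c_n} = M \,\},
```

where M*_{c_n} is the model selected by the fence at tuning constant c_n under a bootstrap sample; the adaptive fence then chooses c_n at a (first significant) peak of p*.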

Linear mixed effects models are highly flexible in handling a broad range of
data types and are therefore widely used in applications. A key part in the
analysis of data is model selection, which often aims to choose a parsimonious
model with other desirable properties from a possibly very large set of
candidate statistical models. Over the last 5-10 years the literature on model
selection in linear mixed models has grown extremely rapidly. The problem is
much more complicated than in linear regression because selection on the
covariance structure is not straightforward due to computational issues and
boundary problems arising from positive semidefinite constraints on covariance
matrices. To obtain a better understanding of the available methods, their
properties and the relationships between them, we review a large body of
literature on linear mixed model selection. We arrange, implement, discuss and
compare model selection methods based on four major approaches: information
criteria such as AIC or BIC, shrinkage methods based on penalized loss
functions such as LASSO, the Fence procedure and Bayesian techniques.

... The fence method [23,24] was motivated by the limitations of the information criteria when applied to non-conventional situations, even though the method has been shown to be very competitive with traditional methods in the conventional settings as well. In particular, the adaptive fence procedure is a data driven procedure to determine an optimal tuning parameter. ...

... We similarly treat the QTL mapping as a model selection problem, but our approach is based on the fence method, which is attractive in this situation due to its flexibility and data-driven optimality [23,24]. On the other hand, the fence, especially the adaptive fence, encounters computational difficulties when applied to QTL mapping due to the potentially large number of markers, as mentioned above. ...

... For this reason, this step of the fence method has complicated its applicability in many areas. Jiang et al. [24] developed a simplified adaptive fence (SAF) procedure that avoids such difficulties. In the SAF procedure, the fence inequality (2) is replaced by (3). It appears that the only difference is the disappearance of the factor σ̂ from the right side of (2). ...

Model search strategies play an important role in finding simultaneous susceptibility genes that are associated with a trait. More particularly, model selection via the information criteria, such as the BIC with modifications, have received considerable attention in quantitative trait loci (QTL) mapping. However, such modifications often depend upon several factors, such as sample size, prior distribution, and the type of experiment, e.g., backcross, intercross. These changes make it difficult to generalize the methods to all cases. The fence method avoids such limitations with a unified approach, and hence can be used more broadly. In this paper, this method is studied in the case of backcross experiments throughout a series of simulation studies. The results are compared with those of the modified BIC method as well as some of the most popular shrinkage methods for model selection.

... In this literature review we focus only on model selection tools primarily developed for choosing the mean and the working correlation structures when GEEs are used for parameter estimation. Model selection tools designed for additional GLM frameworks, and approaches to modeling correlated data, can be found in Liu et al. (1999), Vaida and Blanchard (2005), Yafune et al. (2005), Azari et al. (2006), Pu and Niu (2006), Kinney and Dunson (2007), Lavergne et al. (2008), Shang and Cavanaugh (2008) and Jiang et al. (2009) among others. ...

... If the frequency plot shows a "peak", and therefore the E-MS is to continue, we first look for the last peak, that is, the highest dimension that corresponds to a peak in order to be conservative. This is similar to the AF (Jiang et al. 2009), where the first significant peak is chosen in order to determine the cut-off for the fence (e.g., Jiang 2014). The first peak for the AF corresponds to the last peak for the IF. ...

We propose a procedure associated with the idea of the E-M algorithm for model selection in the presence of missing data. The idea extends the concept of parameters to include both the model and the parameters under the model, and thus allows the model to be part of the E-M iterations. We develop the procedure, known as the E-MS algorithm, under the assumption that the class of candidate models is finite. Some special cases of the procedure are considered, including E-MS with the generalized information criteria (GIC), and E-MS with the adaptive fence (AF; Jiang et al. 2008). We prove numerical convergence of the E-MS algorithm as well as consistency in model selection of the limiting model of the E-MS convergence, for E-MS with GIC and E-MS with AF. We study the impact on model selection of different missing data mechanisms. Furthermore, we carry out extensive simulation studies on the finite-sample performance of the E-MS with comparisons to other procedures. The methodology is also illustrated on a real data analysis involving QTL mapping for an agricultural study on barley grains.
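A toy sketch of the E-MS idea under a simple missing-response linear regression (my own simplification, using BIC as the selection criterion; not the authors' implementation): alternate between imputing the missing responses under the current model (E-step) and reselecting the model and its parameters (MS-step) until the selected model stabilizes.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 3))
y = 1.0 + 2.0 * X[:, 0] + rng.normal(size=n)   # true mean uses x0 only
miss = rng.random(n) < 0.2                      # 20% responses missing at random
y_obs = np.where(miss, np.nan, y)

# candidate models: all subsets (including the intercept-only model)
models = [m for k in range(0, 4) for m in combinations(range(3), k)]

def bic(m, y_full):
    A = np.column_stack([np.ones(n), X[:, list(m)]])
    b, *_ = np.linalg.lstsq(A, y_full, rcond=None)
    r = y_full - A @ b
    s2 = max(r @ r / n, 1e-12)
    return n * np.log(s2) + (len(m) + 1) * np.log(n)

# E-MS style iteration: impute (E), then reselect model and parameters (MS)
y_cur = np.where(miss, np.nanmean(y_obs), y_obs)  # crude initial imputation
model = ()
for _ in range(20):
    new_model = min(models, key=lambda m: bic(m, y_cur))   # MS-step
    A = np.column_stack([np.ones(n), X[:, list(new_model)]])
    b, *_ = np.linalg.lstsq(A, y_cur, rcond=None)
    y_cur = np.where(miss, A @ b, y_obs)                   # E-step: refit imputations
    if new_model == model:
        break
    model = new_model

print(model)
```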

Nonresponse bias has been a long-standing issue in survey research (Brehm 1993; Dillman, Eltinge, Groves and Little 2002), with numerous studies seeking to identify factors that affect both item and unit response. To contribute to the broader goal of minimizing survey nonresponse, this study considers several factors that can impact survey nonresponse, using a 2007 Animal Welfare Survey Conducted in Ohio, USA. In particular, the paper examines the extent to which topic salience and incentives affect survey participation and item nonresponse, drawing on the leverage-saliency theory (Groves, Singer and Corning 2000). We find that participation in a survey is affected by its subject context (as this exerts either positive or negative leverage on sampled units) and prepaid incentives, which is consistent with the leverage-saliency theory. Our expectations are also confirmed by the finding that item nonresponse, our proxy for response quality, does vary by proximity to agriculture and the environment (residential location, knowledge about how food is grown, and views about the importance of animal welfare). However, the data suggests that item nonresponse does not vary according to whether or not a respondent received incentives.

Numerous issues have arisen over the past few decades relating to the implied volatility smile in the options market; however, the extant literature reveals that relatively little effort has thus far been placed into comparing the various implied volatility models, essentially as a result of the lack of any theoretical foundation on which to base such comparative analysis. In this study, we use a comprehensive options database and employ methods of combining the various hypothesis tests to compare the different implied volatility models. To the best of our knowledge, this is the first study of its kind to address this issue using combination tests. Our empirical results reveal that the linear piecewise model is the most appropriate model for capturing the implied volatility smile, with additional robustness checks confirming the validity of this finding.

The fence method [J. Jiang et al., Ann. Stat. 36, No. 4, 1669–1692 (2008; Zbl 1142.62047)] is a recently developed strategy for model selection. The idea involves a procedure to isolate a subgroup of what are known as correct models (of which the optimal model is a member). This is accomplished by constructing a statistical fence, or barrier, to carefully eliminate incorrect models. Once the fence is constructed, the optimal model is selected from amongst those within the fence according to a criterion which can be made flexible. The construction of the fence can be made adaptively to improve finite sample performance. We extend the fence method to situations where a true model may not exist or be among the candidate models. Furthermore, another look at the fence methods leads to a new procedure, known as invisible fence (IF). A fast algorithm is developed for IF in the case of subtractive measure of lack-of-fit. The main focus of the current paper is microarray gene-set analysis. In particular, B. Efron and R. J. Tibshirani [Ann. Appl. Stat. 1, No. 1, 107–129 (2007; Zbl 1129.62102)] proposed a gene set analysis (GSA) method based on testing the significance of gene-sets. In typical situations of microarray experiments the number of genes is much larger than the number of microarrays. This special feature presents a real challenge to implementation of IF to microarray gene-set analysis. We show how to solve this problem, and carry out an extensive Monte Carlo simulation study that compares the performances of IF and GSA in identifying differentially expressed gene-sets. The results show that IF outperforms GSA, in most cases significantly, uniformly across all the cases considered. Furthermore, we demonstrate both theoretically and empirically the consistency property of IF, while pointing out the inconsistency of GSA under certain situations. An application in tracking pathway involvement in late vs earlier stage colon cancers is considered.

This paper considers the problem of selecting nonparametric models for small area estimation, which recently have received much attention. We develop a procedure based on the idea of fence method (Jiang, Rao, Gu and Nguyen 2008) for selecting the mean function for the small areas from a class of approximating splines. Simulation results show impressive performance of the new procedure even when the number of small areas is fairly small. The method is applied to a hospital graft failure dataset for selecting a nonparametric Fay-Herriot type model.

Linear mixed-effects models are a class of models widely used for analyzing different types of data: longitudinal, clustered and panel data. Many fields, in which a statistical methodology is required, involve the employment of linear mixed models, such as biology, chemistry, medicine, finance and so forth. One of the most important processes, in a statistical analysis, is given by model selection. Hence, since there are a large number of linear mixed model selection procedures available in the literature, a pressing issue is how to identify the best approach to adopt in a specific case. We outline mainly all approaches focusing on the part of the model subject to selection (fixed and/or random), the dimensionality of models and the structure of variance and covariance matrices, and also, wherever possible, the existence of an implemented application of the methodologies set out.

A small area typically refers to a subpopulation or domain of interest for which a reliable direct estimate, based only on the domain-specific sample, cannot be produced due to small sample size in the domain. While traditional small area methods and models are widely used nowadays, there have also been much work and interest in robust statistical inference for small area estimation (SAE). We survey this work and provide a comprehensive review here. We begin with a brief review of the traditional SAE methods. We then discuss SAE methods that are developed under weaker assumptions and SAE methods that are robust in certain ways, such as in terms of outliers or model failure. Our discussion also includes topics such as nonparametric SAE methods, Bayesian approaches, model selection and diagnostics, and missing data. A brief review of software packages available for implementing robust SAE methods is also given.

Information criteria are a common approach for joint fixed and random effects selection in mixed models. While straightforward to implement, a major difficultly when applying information criteria is that they are typically based on maximum likelihood estimates, yet calculating such estimates for one, let alone multiple, candidate mixed models presents a major computational hurdle. To overcome this, we study penalized quasilikelihood estimation and use it as the basis for performing fast joint selection. Under a general framework, we show that penalized quasilikelihood estimation produces consistent estimates of the true parameters. Then, we propose a new penalized quasilikelihood information criterion whose distinguishing feature is the way it accounts for model complexity in the random effects, since penalized quasilikelihood estimation effectively treats the random effects as fixed. We demonstrate that the criterion asymptotically identifies the true set of important fixed and random effects. Simulations show the quasi-likelihood information criterion performs competitively with and sometimes better than common maximum likelihood information criteria for joint selection, while offering substantial reductions in computation time.

The previous chapter dealt with point estimation and related problems in linear mixed models. In this section, we consider a different type of inference, namely, tests in linear mixed models. Section 2.1.1 discusses statistical tests in Gaussian mixed models. As shown, exact F-tests can often be derived under Gaussian ANOVA models. Furthermore, in some special cases, optimal tests such as uniformly most powerful unbiased (UMPU) tests exist and coincide with the exact F-tests. Section 2.1.2 considers tests in non-Gaussian linear mixed models.

As mentioned in Sect. 3.4, the likelihood function under a GLMM typically involves integrals with no analytic expressions. Such integrals may be difficult to evaluate, if the dimensions of the integrals are high. For relatively simple models, the likelihood function may be evaluated by numerical integration techniques. See, for example, Hinde (1982) and Crouch and Spiegelman (1990). Such a technique is tractable if the integrals involved are low-dimensional. The following is an example.
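The kind of low-dimensional integral described here is often handled by Gauss-Hermite quadrature. A minimal sketch for the marginal likelihood of one cluster under a random-intercept logistic model (toy data and notation of my own):

```python
import numpy as np

# Marginal likelihood contribution of one cluster under a random-intercept
# logistic model:  L_i = ∫ Π_j p(y_ij | b) φ(b; 0, σ²) db,
# approximated by Gauss-Hermite quadrature after a change of variables.
def cluster_lik(y, eta, sigma, n_quad=20):
    x, w = np.polynomial.hermite.hermgauss(n_quad)  # nodes/weights for ∫ e^{-t²} f(t) dt
    b = np.sqrt(2.0) * sigma * x                    # b = √2 σ t maps to N(0, σ²)
    # success probabilities: rows index observations, columns index nodes
    p = 1.0 / (1.0 + np.exp(-(eta[:, None] + b[None, :])))
    lik = np.prod(np.where(y[:, None] == 1, p, 1 - p), axis=0)
    return (w * lik).sum() / np.sqrt(np.pi)

y = np.array([1, 0, 1, 1])                # binary responses in one cluster
eta = np.array([0.2, -0.1, 0.4, 0.0])     # fixed-effect linear predictors
print(round(cluster_lik(y, eta, sigma=1.0), 6))
```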

Fence method (Jiang and others 2008. Fence methods for mixed model selection. Annals of Statistics 36, 1669–1692) is a recently proposed strategy for model selection. It was motivated by the limitation of the traditional information
criteria in selecting parsimonious models in some nonconventional situations, such as mixed model selection. Jiang and others (2009. A simplified adaptive fence procedure, Statistics & Probability Letters 79, 625–629) simplified the adaptive fence method of Jiang and others (2008) to make it more suitable and convenient to use in a wide variety of problems. Still, the current modification encounters
computational difficulties when applied to high-dimensional and complex problems. To address this concern, we propose a restricted
fence procedure that combines the idea of the fence with that of the restricted maximum likelihood. Furthermore, we propose
to use the wild bootstrap for choosing adaptively the tuning parameter used in the restricted fence. We focus on problems
of longitudinal studies and demonstrate the performance of the new procedure and its comparison with other procedures of variable
selection, including the information criteria and shrinkage methods, in simulation studies. The method is further illustrated
by an example of real-data analysis.
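The wild bootstrap mentioned here resamples residuals with random signs while keeping the design fixed, which preserves heteroskedasticity. A minimal sketch in a plain linear regression (toy example, not the paper's restricted-fence implementation):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + np.abs(x) * rng.normal(size=n)  # heteroskedastic errors

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Wild bootstrap: keep X fixed, flip residual signs with Rademacher weights
B = 500
boot = np.empty((B, 2))
for b in range(B):
    v = rng.choice([-1.0, 1.0], size=n)
    y_star = X @ beta + resid * v
    boot[b], *_ = np.linalg.lstsq(X, y_star, rcond=None)

se = boot.std(axis=0, ddof=1)  # bootstrap standard errors of (intercept, slope)
print(np.round(se, 3))
```

Because each resampled residual keeps its own magnitude, the bootstrap distribution reflects the error variance pattern of the original data, unlike a naive residual bootstrap.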

In this article, various issues related to the implementation of the usual Bayesian Information Criterion (BIC) are critically examined in the context of modelling a finite population. A suitable design-based approximation to the BIC is proposed in order to avoid the derivation of the exact likelihood of the sample which is often very complex in a finite population sampling. The approximation is justified using a theoretical argument and a Monte Carlo simulation study.

Knowledge of the area under different crops is important to the U.S. Department of Agriculture. Sample surveys have been designed to estimate crop areas for large regions, such as crop-reporting districts, individual states, and the United States as a whole. Predicting crop areas for small areas such as counties has generally not been attempted, due to a lack of available data from farm surveys for these areas. The use of satellite data in association with farm-level survey observations has been the subject of considerable research in recent years. This article considers (a) data for 12 Iowa counties, obtained from the 1978 June Enumerative Survey of the U.S. Department of Agriculture and (b) data obtained from land observatory satellites (LANDSAT) during the 1978 growing season. Emphasis is given to predicting the area under corn and soybeans in these counties. A linear regression model is specified for the relationship between the reported hectares of corn and soybeans within sample segments in the June Enumerative Survey and the corresponding satellite determination for areas under corn and soybeans. A nested-error model defines a correlation structure among reported crop hectares within the counties. Given this model, the mean hectares of the crop per segment in a county is defined as the conditional mean of reported hectares, given the satellite determinations and the realized (random) county effect. The mean hectares of the crop per segment is the sum of a fixed component, involving unknown parameters to be estimated and a random component to be predicted. Variance-component estimators in the nested-error model are defined, and the generalized least-squares estimators of the parameters of the linear model are obtained. Predictors of the mean crop hectares per segment are defined in terms of these estimators. An estimator of the variance of the error in the predictor is constructed, including terms arising from the estimation of the parameters of the model. 
Predictions of mean hectares of corn and soybeans per segment for the 12 Iowa counties are presented. Standard errors of the predictions are compared with those of competing predictors. The suggested predictor for the county mean crop area per segment has a standard error that is considerably less than that of the traditional survey regression predictor.
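The nested-error model referred to above is usually written as follows (the standard Battese-Harter-Fuller formulation, stated here for orientation):

```latex
y_{ij} = \mathbf{x}_{ij}^{T}\boldsymbol\beta + v_i + e_{ij}, \qquad
v_i \sim N(0, \sigma_v^2), \quad e_{ij} \sim N(0, \sigma_e^2),
```

with i indexing counties, j indexing sample segments within county i, and the v_i and e_{ij} mutually independent; the county effect v_i induces the within-county correlation of reported crop hectares.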

We consider the problem of selecting the fixed and random effects in a mixed linear model. Two kinds of selection problems are considered. The first is to select the fixed covariates from a set of candidate predictors when the random effects are not subject to selection; the second is to select both the fixed covariates and the random effect factors. Our selection criteria are similar to the generalized information criterion (GIC), but we show that a naive GIC does not work for the second kind of selection problem. Asymptotic theory is developed in which we give sufficient conditions for consistency of the selection criteria proposed. Finite sample performance of the selection procedures is investigated by simulation studies.
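In generic form, criteria of the GIC family trade off fit against dimension through a penalty sequence λ_n (the familiar special cases are shown for orientation; the paper's exact criterion may differ):

```latex
\mathrm{GIC}_{\lambda_n}(M) = -2\log L(\hat\theta_M) + \lambda_n \dim(M),
```

with λ_n = 2 giving AIC and λ_n = log n giving BIC; consistency arguments typically require λ_n → ∞ while λ_n/n → 0.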

During the last fifteen years, Akaike's entropy-based Information Criterion (AIC) has had a fundamental impact in statistical model evaluation problems. This paper studies the general theory of the AIC procedure and provides its analytical extensions in two ways without violating Akaike's main principles. These extensions make AIC asymptotically consistent and penalize overparameterization more stringently to pick only the simplest of the true models. These selection criteria are called CAIC and CAICF. Asymptotic properties of AIC and its extensions are investigated, and empirical performances of these criteria are studied in choosing the correct degree of a polynomial model in two different Monte Carlo experiments under different conditions.
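A small numerical illustration of the polynomial-degree experiment (toy data of my own; CAIC here is Bozdogan's −2 log L + k(log n + 1) form):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x = rng.uniform(-2, 2, size=n)
y = 1 - 2 * x + 0.5 * x**2 + rng.normal(scale=0.5, size=n)  # true degree = 2

def criteria(deg):
    coef = np.polyfit(x, y, deg)
    r = y - np.polyval(coef, x)
    k = deg + 2  # polynomial coefficients plus the error variance
    loglik = -0.5 * n * (np.log(2 * np.pi * (r @ r) / n) + 1)
    aic = -2 * loglik + 2 * k
    caic = -2 * loglik + k * (np.log(n) + 1)  # penalizes dimension more stringently
    return aic, caic

aics, caics = zip(*[criteria(d) for d in range(6)])
print("AIC picks degree", int(np.argmin(aics)))
print("CAIC picks degree", int(np.argmin(caics)))
```

With a clear quadratic signal neither criterion underfits; the stricter CAIC penalty mainly reduces the chance of selecting a spuriously high degree.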

Many model search strategies involve trading off model fit with model complexity in a penalized goodness of fit measure. Asymptotic properties for these types of procedures in settings like linear regression and ARMA time series have been studied, but these do not naturally extend to nonstandard situations such as mixed effects models, where simple definition of the sample size is not meaningful. This paper introduces a new class of strategies, known as fence methods, for mixed model selection, which includes linear and generalized linear mixed models. The idea involves a procedure to isolate a subgroup of what are known as correct models (of which the optimal model is a member). This is accomplished by constructing a statistical fence, or barrier, to carefully eliminate incorrect models. Once the fence is constructed, the optimal model is selected from among those within the fence according to a criterion which can be made flexible. In addition, we propose two variations of the fence. The first is a stepwise procedure to handle situations of many predictors; the second is an adaptive approach for choosing a tuning constant. We give sufficient conditions for consistency of fence and its variations, a desirable property for a good model selection procedure. The methods are illustrated through simulation studies and real data analysis.

It is shown that a strongly consistent estimation procedure for the order of an autoregression can be based on the law of the iterated logarithm for the partial autocorrelations. As compared to other strongly consistent procedures this procedure will underestimate the order to a lesser degree.
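A stylized sketch of the idea, assuming a simple AR(1) simulation: partial autocorrelations computed by the Durbin-Levinson recursion are compared against a law-of-the-iterated-logarithm-type bound (the procedure in the paper differs in its details):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
e = rng.normal(size=n)
y = np.empty(n)
y[0] = e[0]
for t in range(1, n):
    y[t] = 0.8 * y[t - 1] + e[t]  # AR(1) with coefficient 0.8

def pacf(series, max_lag):
    """Sample PACF via the Durbin-Levinson recursion on sample autocovariances."""
    yc = series - series.mean()
    r = np.array([yc[: n - k] @ yc[k:] / n for k in range(max_lag + 1)])
    phi = np.zeros((max_lag + 1, max_lag + 1))
    out = np.zeros(max_lag + 1)
    phi[1, 1] = out[1] = r[1] / r[0]
    v = r[0] * (1 - out[1] ** 2)
    for k in range(2, max_lag + 1):
        a = (r[k] - phi[k - 1, 1:k] @ r[k - 1:0:-1]) / v
        phi[k, k] = out[k] = a
        phi[k, 1:k] = phi[k - 1, 1:k] - a * phi[k - 1, k - 1:0:-1]
        v *= 1 - a ** 2
    return out[1:]

p = pacf(y, 10)
thresh = np.sqrt(2 * np.log(np.log(n)) / n)  # LIL-type bound
order = max([k + 1 for k in range(10) if abs(p[k]) > thresh], default=0)
print(order)
```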

The term small area usually refers to a geographical area in which a limited number of observations are available. How does one produce reliable small-area estimates? It is evident that various relevant sources of information need to be gathered from outside the survey itself. Once these have been identified, one needs to develop a method that will combine information from all relevant sources. This is usually done by either implicit or explicit models. In this article, design-based estimators, indirect estimators and model-based estimators will be discussed. Most of these methods borrow information from related areas and use auxiliary data by either implicit or explicit modeling.

The problem of selecting one of a number of models of different dimensions is treated by finding its Bayes solution, and evaluating the leading terms of its asymptotic expansion. These terms are a valid large-sample criterion beyond the Bayesian context, since they do not depend on the a priori distribution.
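The leading terms referred to here come from a Laplace-type approximation to the log marginal likelihood, which yields the Schwarz criterion:

```latex
\log m(x) = \log L(\hat\theta) - \frac{k}{2}\log n + O_p(1),
```

so that, up to bounded terms, selecting the a posteriori most probable model amounts to maximizing log L(θ̂) − (k/2) log n, i.e., minimizing BIC = −2 log L(θ̂) + k log n.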

This paper focuses on the Akaike information criterion, AIC, for linear mixed-effects models in the analysis of clustered data. We make the distinction between questions regarding the
population and questions regarding the particular clusters in the data. We show that the AIC in current use is not appropriate for the focus on clusters, and we propose instead the conditional Akaike information and
its corresponding criterion, the conditional AIC, cAIC. The penalty term in cAIC is related to the effective degrees of freedom ρ for a linear mixed model proposed by Hodges & Sargent (2001); ρ reflects
an intermediate level of complexity between a fixed-effects model with no cluster effect and a corresponding model with fixed
cluster effects. The cAIC is defined for both maximum likelihood and residual maximum likelihood estimation. A pharmacokinetics data application is
used to illuminate the distinction between the two inference settings, and to illustrate the use of the conditional AIC in model selection.
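In slightly simplified notation, the conditional criterion takes the form:

```latex
\mathrm{cAIC} = -2\log f\left(y \mid \hat\beta, \hat b\right) + 2\rho,
```

where f is the conditional density of the data given the estimated fixed effects and predicted random effects, and ρ is the effective degrees of freedom of Hodges & Sargent (2001).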
