
This article describes the REREFACT R package, which provides a post-rotation algorithm that reorders or reflects factors for each replication of a simulation study with exploratory factor analysis (EFA). The purpose of REREFACT is to provide a general algorithm, written in freely available software (R), dedicated to addressing the possibility that a nonuniform order or sign pattern of the factors could be observed across replications. The algorithm implemented in REREFACT proceeds in four steps. Step 1 determines the total number of equivalent forms, I, of the vector of factors, η. Step 2 indexes each equivalent form of η (i.e., η_i), i = 1, 2, ..., I, via a unique permutation matrix, P (i.e., P_i). Step 3 determines which η_i each replication follows. Step 4 uses the appropriate P_i to reorder or re-sign parameter estimates within each replication so that all replications uniformly follow the order and sign pattern defined by the population values. Results from two simulation studies provided evidence for the efficacy of REREFACT in identifying and remediating equivalent forms of η in models with EFA only (i.e., Example 1) and in fuller parameterizations of exploratory structural equation modeling (i.e., Example 2). How to use REREFACT is briefly demonstrated prior to the Discussion section by providing annotations for key commands and condensed output using a subset of simulated data from Example 1.
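
The reorder/re-sign correction in Step 4 can be sketched in a few lines. The following Python sketch is illustrative only (REREFACT itself is an R package; the function and variable names here are hypothetical): it brute-forces the signed column permutation of an estimated loading matrix that is closest to the population loadings, which is feasible because the number of candidates, m! * 2^m, is small for typical numbers of factors.

```python
import itertools
import numpy as np

def align_factors(est, pop):
    """Return est with columns reordered/re-signed to best match pop.

    est, pop : (p, m) factor loading matrices. Searches all m! * 2^m
    signed permutations and keeps the one minimizing the squared
    distance to the population loadings.
    """
    m = pop.shape[1]
    best, best_err = est, np.inf
    for perm in itertools.permutations(range(m)):
        for signs in itertools.product([1.0, -1.0], repeat=m):
            cand = est[:, perm] * np.array(signs)
            err = np.sum((cand - pop) ** 2)
            if err < best_err:
                best, best_err = cand, err
    return best

pop = np.array([[0.8, 0.0], [0.7, 0.1], [0.0, 0.8], [0.1, 0.7]])
est = -pop[:, [1, 0]]            # columns swapped and both reflected
print(np.allclose(align_factors(est, pop), pop))  # True
```

The exhaustive search mirrors the spirit of Steps 2-3 (indexing equivalent forms and deciding which one a replication follows) without reproducing REREFACT's actual implementation.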


... A second potential problem when RETAM is used concerns the order and sign indeterminacies of the target and rotated pattern matrices, in the sense that the order of the factor columns is interchangeable, and each column is interchangeable with its negative (e.g., Myers, Ahn, Lu, Celimli, & Zopluoglu, 2017). In a real application, particularly when the procedure is based on an initial target specification, like the one here, this problem is expected to be unimportant. ...

... This procedure was followed to construct the 36 partially specified target matrices (i.e., 12 target matrices for each population matrix). The 36 partially specified target matrices were checked to confirm that the rotation identification conditions were met (see Myers et al., 2017; Myers et al., 2015). To help other researchers replicate our study, we can offer interested readers the set of target matrices that we produced. ...

A common difficulty in the factor analysis of items designed to measure psychological constructs is that the factor structures obtained using exploratory factor analysis tend to be rejected if they are tested statistically with a confirmatory factor model. An alternative to confirmatory factor analysis is unrestricted factor analysis based on Procrustes rotation, which minimizes the distance from a target matrix proposed by the researcher. In the present article, we focus on the situation in which researchers propose a partially specified target matrix but are prepared to allow their initial target to be refined. Here we discuss RETAM as a new procedure for objectively refining target matrices. To date, it has been recommended that this kind of refinement be guided by human judgment. However, our approach is objective, because the threshold value is computed automatically (not decided on by the researcher) and there is no need to manually compute a number of factor rotations every time. The new procedure was tested in an extensive simulation study, and the results suggest that it may be a useful procedure in factor analysis applications based on incomplete measurement theory. Its feasibility in practice is illustrated with an empirical example from the personality domain. Finally, RETAM is implemented in a well-known noncommercial program for performing unrestricted factor analysis.
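
The Procrustes step that target-refinement procedures like RETAM build on rotates a loading matrix as close as possible to a proposed target. A minimal Python sketch of the orthogonal, fully specified-target case follows (an assumption for brevity: RETAM works with partially specified targets and oblique rotation as well, which this sketch omits):

```python
import numpy as np

def procrustes_rotation(loadings, target):
    """Orthogonal rotation T minimizing ||loadings @ T - target||_F."""
    u, _, vt = np.linalg.svd(loadings.T @ target)
    return u @ vt

rng = np.random.default_rng(0)
target = rng.normal(size=(6, 2))              # fully specified target
t_true, _ = np.linalg.qr(rng.normal(size=(2, 2)))
unrotated = target @ t_true.T                 # rotate the target away ...
t_hat = procrustes_rotation(unrotated, target)
print(np.allclose(unrotated @ t_hat, target))  # ... and recover it: True
```

The closed-form SVD solution is what makes target rotation cheap enough to repeat many times, which matters for the iterative refinement discussed above.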

... It is interesting to note that Thurstone's contemporaries also called into question his propositions regarding factor rotation, which involved too much subjectivity (e.g., Carroll, 1953; Tucker, 1955). In turn, attempts to make factor rotation more objectively driven led to the development of the whole plethora of modern rotation procedures (e.g., Browne, 2001) and to the problem of rotational indeterminacy that we will briefly review in a subsequent section (for a more thorough review of some indeterminacies in EFA and ESEM, see Myers, Ahn, Lu, Celimli & Zopluoglu, 2017). Indeed, although EFA models based on alternative rotation procedures have identical covariance implications, the exact size of the cross-loadings and factor correlations varies directly as a function of the retained rotation algorithm (Sass & Schmitt, 2010; Schmitt & Sass, 2011). ...

This chapter focuses on Exploratory Structural Equation Modeling (ESEM), incorporating bifactor-ESEM, which represents an overarching data-analytic framework in which classical exploratory factor analysis methods have been integrated into the confirmatory factor analysis (CFA)/structural equation modeling framework. It discusses limitations related to the use of CFA, as well as some myths that maintain the use of this potentially problematic approach. The chapter introduces ESEM and the accompanying notion of psychometric multidimensionality. It discusses the alternative methods that can be used to account for construct-irrelevant and construct-relevant forms of psychometric multidimensionality. The chapter proposes a sequence for the estimation of ESEM and bifactor-ESEM models, along with guidelines related to sample size determination and power estimation and to the choice of the estimator and rotation procedure. It presents some limitations of current implementations of ESEM and bifactor-ESEM and preliminary solutions to some of these limitations.

... However, by relying on TCCs, equivalent sign and order pattern forms have to be evaluated. In the case of 3 factors, 48 possible patterns exist when matching two sets, many of which are equivalent apart from sign and order (see Myers et al., 2017, Table 3). Removing the problem of signs by using the mTCC, the number of patterns to evaluate would be 6. ...
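
The counts quoted here (48 with signs, 6 without) follow directly from counting column orders and sign flips: m factors admit m! orderings and 2^m sign patterns. A minimal sketch:

```python
from math import factorial

def n_equivalent_forms(m, signed=True):
    """Number of order (and optionally sign) equivalent factor patterns."""
    return factorial(m) * (2 ** m if signed else 1)

print(n_equivalent_forms(3))                # 48: with sign flips
print(n_equivalent_forms(3, signed=False))  # 6: orderings only
```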

Since factor analysis is one of the most often used techniques in psychometrics, comparing or combining solutions from different factor analyses is often needed. Several measures to compare factors exist, one of the best known is Tucker's congruence coefficient, which is enjoying newly found popularity thanks to the recent work of Lorenzo-Seva and ten Berge (2006), who established cutoff values for factor congruence. While this coefficient is in most cases very good in comparing factors in general, it also has some disadvantages, which can cause trouble when one needs to compare or combine many analyses. In this paper, we propose a modified Tucker's congruence coefficient to address these issues.
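
Tucker's congruence coefficient itself is just a normalized inner product of two loading vectors; the modified coefficient proposed in the paper is not reproduced here. A minimal Python sketch:

```python
import numpy as np

def tucker_congruence(x, y):
    """Tucker's congruence coefficient between two factor loading vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

a = [0.8, 0.7, 0.1, 0.0]
print(round(tucker_congruence(a, a), 6))                    # 1.0
print(round(tucker_congruence(a, [-v for v in a]), 6))      # -1.0
```

The second call illustrates the sign sensitivity discussed in the snippet above: reflecting a factor flips the coefficient's sign even though the factor is substantively unchanged.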

... 436, Asparouhov & Muthén, 2009); these indeterminacies (i.e., the order and sign pattern) are particularly important in simulation studies (Asparouhov & Muthén, 2009). Without evaluating and correcting the order and sign pattern for each replication, the results would be biased in relation to parameter bias, mean square error, and coverage in simulation studies (Myers, Ahn, Lu, Celimli, & Zopluoglu, 2017). As such, we carefully reviewed all ESEM solutions and corrected (i.e., reordered or re-signed) parameter estimates for each replication so that all replications uniformly aligned with the pattern defined by the population values. ...

In this study, we contrast two competing approaches, not previously compared, that balance the rigor of CFA/SEM with the flexibility to fit realistically complex data. Exploratory SEM (ESEM) is claimed to provide an optimal compromise between EFA and CFA/SEM. Alternatively, a family of three Bayesian SEMs (BSEMs) replaces fixed-zero estimates with informative, small-variance priors for different subsets of parameters: cross-loadings (CL), residual covariances (RC), or CLs and RCs (CLRC). In Study 1, using three simulation studies, results showed that (1) BSEM-CL performed more closely to ESEM; (2) BSEM-CLRC did not provide more accurate model estimation compared with BSEM-CL; (3) BSEM-RC provided unstable estimation; and (4) different specifications of targeted values in ESEM and informative priors in BSEM have significant impacts on model estimation. The real data analysis (Study 2) showed that the differences in estimation between different models were largely consistent with those in Study 1 but somewhat smaller.

... Data simulation studies are regarded as a widespread formal research procedure of great utility to the scientific community. In particular, an extensive literature on the behavior of multivariate techniques and statistics under simulated data is available, and it has gained wide diffusion in recent years (Cain, Zhang & Yuan, in press; Myers, Ahn, Lu, Celimli & Zopluoglu, 2017; Ulitzsch, Schultze & Eid, 2017). ...

Given its widespread use, the multivariate statistical analysis of Likert-type response scales is a controversial topic in the scientific community, chiefly because of the measurement problem it raises. This study examines how various conditions of these ordinal scales affect the computation of product-moment and tetrachoric-polychoric correlation coefficients. To this end, a simulation study was conducted in which 90 data sets of 10 items each were generated, controlling the following variables: number of response categories, symmetric or asymmetric data distribution, sample size, and level of association between items. This yielded 90 matrices (10x10) containing the difference between the product-moment and tetrachoric-polychoric correlations. Graphical analysis and analysis of variance show that the product-moment coefficient substantially underestimates the relationship between variables, mainly when the ordinal scale has few response options and the relationship between the variables is large. The estimates from the two coefficients are very similar when the underlying relationship between pairs of variables is small and/or when the number of response options exceeds 5. We conclude with a recommendation to applied researchers on the most appropriate correlation coefficient for the type of data available, and we discuss the results in relation to previous studies that reach similar conclusions.
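
The attenuation the study reports, with Pearson correlations understating the latent association when an ordinal scale has few categories, can be reproduced in a few lines. A hedged Python sketch (the latent correlation of .8, the cutpoints, and the three-category scale are illustrative choices, not the study's design):

```python
import numpy as np

rng = np.random.default_rng(1)
rho, n = 0.8, 100_000

# Latent bivariate normal responses with correlation rho
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)

# Discretize each variable into 3 ordered categories (coarse Likert scale)
cuts = [-0.5, 0.5]
x = np.digitize(z[:, 0], cuts)
y = np.digitize(z[:, 1], cuts)

r_latent = np.corrcoef(z[:, 0], z[:, 1])[0, 1]   # ~ .8
r_ordinal = np.corrcoef(x, y)[0, 1]              # noticeably smaller
print(r_ordinal < r_latent)  # True: product-moment r is attenuated
```

A polychoric estimator would instead model the cutpoints explicitly and recover a value near the latent .8, which is the contrast the study quantifies.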

The objective of this study was to provide initial validity evidence for responses to the newly expanded version of the I COPPE Actions scale from adults with obesity under an exploratory latent variable approach. Longitudinal data from the 2018 Fun For Wellness Effectiveness Trial (ClinicalTrials.gov, identifier: NCT03194854) were reanalyzed in the current study. The a priori measurement theory specified a six-dimensional correlated structure of well-being actions to govern responses to the I COPPE Actions scale: Interpersonal, Occupational, Community, Physical, Psychological, and Economic. The a priori measurement model exhibited exact fit to well-being actions data at baseline within an exploratory latent variable approach. There was strong evidence for at least partial measurement invariance by time for responses to the scale. Convergent (or divergent) correlations between concordant (or discordant) pairs of well-being actions self-efficacy scores and latent well-being action factors provided strong evidence for hypothesized relationships to other theoretically relevant variables.

A large number of methods of factor rotation are available in the literature, but the development of formal criteria by which to compare them is an understudied field of research. One possible criterion is the Thurstonian concept of “factorial invariance”, which was applied by Kaiser to the varimax rotation method in 1958 and has been subsequently neglected. In the present study, we propose two conditions for establishing whether a method satisfies factorial invariance, and we apply them to 11 orthogonal rotation methods. The results show that 3 methods do not exhibit factorial invariance under either condition, 3 are invariant under one but not the other, and 5 are invariant under both. Varimax rotation is one of the 5 methods that satisfy factorial invariance under both conditions and is the only method that satisfies the invariance condition originally advocated by Kaiser in 1958. From this perspective, it appears that varimax rotation is the method that best ensures factorial invariance.
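
Kaiser's varimax maximizes the variance of the squared loadings within each factor column. A minimal sketch of the raw criterion, showing that a simple-structure pattern scores higher than a 45-degree rotation of it (the example matrix is hypothetical):

```python
import numpy as np

def varimax_criterion(loadings):
    """Raw varimax criterion: sum of column variances of squared loadings."""
    sq = np.asarray(loadings, float) ** 2
    return float(np.sum(sq.var(axis=0)))

simple = np.array([[0.9, 0.0], [0.9, 0.0], [0.0, 0.9], [0.0, 0.9]])
theta = np.pi / 4
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])

print(varimax_criterion(simple) > varimax_criterion(simple @ rot))  # True
```

At 45 degrees every squared loading is identical, so each column's variance (and hence the criterion) collapses to zero; varimax rotation searches for the orthogonal rotation maximizing this criterion.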

We describe and evaluate a factor rotation algorithm, iterated target rotation (ITR). Whereas target rotation (Browne, 2001) requires a user to specify a target matrix a priori based on theory or prior research, ITR begins with a standard analytic factor rotation (i.e., an empirically informed target) followed by an iterative search procedure to update the target matrix. In Study 1, Monte Carlo simulations were conducted to evaluate the performance of ITR relative to analytic rotations from the Crawford-Ferguson family with population factor structures varying in complexity. Simulation results: (a) suggested that ITR analyses will be particularly useful when evaluating data with complex structures (i.e., multiple cross-loadings) and (b) showed that the rotation method used to define an initial target matrix did not materially affect the accuracy of the various ITRs. In Study 2, we: (a) demonstrated the application of ITR as a way to determine empirically informed priors in a Bayesian confirmatory factor analysis (BCFA; Muthén & Asparouhov, 2012) of a rater-report alexithymia measure (Haviland, Warren, & Riggs, 2000) and (b) highlighted some of the challenges when specifying empirically based priors and assessing item and overall model fit.
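
The alternation at the heart of ITR, rotating, thresholding the result to form a new target, and re-rotating, can be sketched as follows. This Python sketch is illustrative only: it substitutes an orthogonal Procrustes step for the analytic and target rotations used in the article, and uses a fixed threshold rather than an automatically computed one.

```python
import numpy as np

def iterated_target_rotation(loadings, threshold=0.2, max_iter=50):
    """Sketch of ITR: alternate target rotation and target refinement.

    The target is refined by zeroing rotated loadings whose absolute
    value falls below `threshold`; iteration stops when the pattern of
    targeted entries no longer changes.
    """
    rotated = loadings.copy()
    prev_mask = None
    for _ in range(max_iter):
        mask = np.abs(rotated) >= threshold      # entries kept free
        if prev_mask is not None and np.array_equal(mask, prev_mask):
            break
        prev_mask = mask
        target = np.where(mask, rotated, 0.0)    # zeros where small
        u, _, vt = np.linalg.svd(loadings.T @ target)
        rotated = loadings @ (u @ vt)            # orthogonal Procrustes step
    return rotated

rng = np.random.default_rng(3)
simple = np.array([[0.8, 0.0], [0.7, 0.0], [0.0, 0.8],
                   [0.0, 0.7], [0.6, 0.3], [0.3, 0.6]])
t, _ = np.linalg.qr(rng.normal(size=(2, 2)))     # scramble the structure
recovered = iterated_target_rotation(simple @ t)
print(recovered.shape)  # (6, 2)
```

Because every step applies an orthogonal rotation, the iteration can only change which entries look like cross-loadings, not the overall size of the solution.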

Despite the widespread use of exploratory factor analysis in psychological research, researchers often make questionable decisions when conducting these analyses. This article reviews the major design and analytical decisions that must be made when conducting a factor analysis and notes that each of these decisions has important consequences for the obtained results. Recommendations that have been made in the methodological literature are discussed. Analyses of 3 existing empirical data sets are used to illustrate how questionable decisions in conducting factor analyses can yield problematic results. The article presents a survey of 2 prominent journals that suggests that researchers routinely conduct analyses using such questionable methods. The implications of these practices for psychological research are discussed, and the reasons for current practices are reviewed.

The purpose of this study was to explore the influence of the number of targets specified on the quality of exploratory factor analysis solutions with a complex underlying structure and incomplete substantive measurement theory. Three Monte Carlo studies were performed based on the ratio of the number of observed variables to the number of underlying factors. Within each study, communality, sample size, and the number of targets were manipulated. Outcomes included a measure of congruence and a measure of variability with regard to the rotated pattern matrix. The magnitude of the main effect for the influence of the number of targets on congruence and variability ranged from moderate to large. The magnitude of the interaction between the number of targets and level of communality ranged from small to moderate with regard to congruence and variability. Consistent with theoretical expectations, the minimum number of targets to specify to be reasonably assured of obtaining an accurate solution varied across study conditions.

The factor analysis literature includes a range of recommendations regarding the minimum sample size necessary to obtain factor solutions that are adequately stable and that correspond closely to population factors. A fundamental misconception about this issue is that the minimum sample size, or the minimum ratio of sample size to the number of variables, is invariant across studies. In fact, necessary sample size is dependent on several aspects of any given study, including the level of communality of the variables and the level of overdetermination of the factors. The authors present a theoretical and mathematical framework that provides a basis for understanding and predicting these effects. The hypothesized effects are verified by a sampling study using artificial data. Results demonstrate the lack of validity of common rules of thumb and provide a basis for establishing guidelines for sample size in factor analysis.

In covariance structure modeling several estimation methods are available. The robustness of an estimator against specific violations of assumptions can be determined empirically by means of a Monte Carlo study. Many such studies in covariance structure analysis have been published, but the conclusions frequently seem to contradict each other. An overview of robustness studies in covariance structure analysis is given, and an attempt is made to generalize their findings. Robustness studies are described and distinguished from each other systematically by means of certain characteristics. These characteristics serve as explanatory variables in a meta-analysis concerning the behavior of parameter estimators, standard error estimators, and goodness-of-fit statistics when the model is correctly specified.

Given the proliferation of factor analysis applications in the literature, the present article examines the use of factor analysis in current published research across four psychological journals. Notwithstanding ease of analysis due to computers, the appropriate use of factor analysis requires a series of thoughtful researcher judgments. These judgments directly affect results and interpretations. The authors examine across studies (a) the decisions made while conducting exploratory factor analyses (N = 60) and (b) the information reported from the analyses. In doing so, they present a review of the current status of factor analytic practice, including comment on common errors in use and reporting. Recommendations are proffered for future practice as regards analytic decisions and reporting in empirical research.

This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of SE-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile intervals, and hybrid intervals are explored using simulation studies involving different sample sizes, perfect and imperfect models, and normal and elliptical data. The bootstrap confidence intervals are also illustrated using a personality data set of 537 Chinese men. The results suggest that the bootstrap is an effective method for assigning confidence intervals at moderately large sample sizes.
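
Of the interval types compared here, the percentile interval is the simplest to state. A Python sketch for a generic statistic follows; a plain correlation stands in for a rotated loading, and re-running an entire EFA on each bootstrap sample (as the article does) is omitted for brevity.

```python
import numpy as np

def percentile_bootstrap_ci(x, y, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for stat(x, y) from paired observations."""
    rng = np.random.default_rng(seed)
    n = len(x)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample cases with replacement
        reps[b] = stat(x[idx], y[idx])
    lo, hi = np.quantile(reps, [alpha / 2, 1 - alpha / 2])
    return lo, hi

rng = np.random.default_rng(42)
x = rng.normal(size=300)
y = 0.6 * x + rng.normal(scale=0.8, size=300)
r = lambda a, b: np.corrcoef(a, b)[0, 1]
lo, hi = percentile_bootstrap_ci(x, y, r)
print(lo < r(x, y) < hi)  # point estimate lies inside its own interval
```

The bias-corrected and accelerated variants the article evaluates adjust the two quantile levels rather than the resampling itself.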

This study is a methodological-substantive synergy, demonstrating the power and flexibility of exploratory structural equation modeling (ESEM) methods that integrate confirmatory and exploratory factor analyses (CFA and EFA), as applied to substantively important questions based on multidimensional students' evaluations of university teaching (SETs). For these data, there is a well-established ESEM structure but typical CFA models do not fit the data and substantially inflate correlations among the nine SET factors (median rs = .34 for ESEM, .72 for CFA) in a way that undermines discriminant validity and usefulness as diagnostic feedback. A 13-model taxonomy of ESEM measurement invariance is proposed, showing complete invariance (factor loadings, factor correlations, item uniquenesses, item intercepts, latent means) over multiple groups based on the SETs collected in the first and second halves of a 13-year period. Fully latent ESEM growth models that unconfounded measurement error from communality showed almost no linear or quadratic effects over this 13-year period. Latent multiple indicators multiple causes models showed that relations with background variables (workload/difficulty, class size, prior subject interest, expected grades) were small in size and varied systematically for different ESEM SET factors, supporting their discriminant validity and a construct validity interpretation of the relations. A new approach to higher order ESEM was demonstrated, but was not fully appropriate for these data. Based on ESEM methodology, substantively important questions were addressed that could not be appropriately addressed with a traditional CFA approach.

Reise, Cook, and Moore proposed a “comparison modeling” approach to assess the distortion in item parameter estimates when a unidimensional item response theory (IRT) model is imposed on multidimensional data. Central to their approach is the comparison of item slope parameter estimates from a unidimensional IRT model (a restricted model), with the item slope parameter estimates from the general factor in an exploratory bifactor IRT model (the unrestricted comparison model). In turn, these authors suggested that the unrestricted comparison bifactor model be derived from a target factor rotation. The goal of this study was to provide further empirical support for the use of target rotations as a method for deriving a comparison model. Specifically, we conducted Monte Carlo analyses exploring (a) the use of the Schmid–Leiman orthogonalization to specify a viable initial target matrix and (b) the recovery of true bifactor pattern matrices using target rotations as implemented in Mplus. Results suggest that to the degree that item response data conform to independent cluster structure, target rotations can be used productively to establish a plausible comparison model.

Exploratory factor analysis (EFA) is a commonly used statistical technique for examining the relationships between variables (e.g., items) and the factors (e.g., latent traits) they depict. There are several decisions that must be made when using EFA, with one of the more important being choice of the rotation criterion. This selection can be arduous given the numerous rotation criteria available and the lack of research/literature that compares their function and utility. Historically, researchers have chosen rotation criteria based on whether or not factors are correlated and have failed to consider other important aspects of their data. This study reviews several rotation criteria, demonstrates how they may perform with different factor pattern structures, and highlights for researchers subtle but important differences between each rotation criterion. The choice of rotation criterion is critical to ensure researchers make informed decisions as to when different rotation criteria may or may not be appropriate. The results suggest that depending on the rotation criterion selected and the complexity of the factor pattern matrix, the interpretation of the interfactor correlations and factor pattern loadings can vary substantially. Implications and future directions are discussed.

Exploratory factor analysis (EFA) is generally regarded as a technique for large sample sizes (N), with N = 50 as a reasonable absolute minimum. This study offers a comprehensive overview of the conditions in which EFA can yield good quality results for N below 50. Simulations were carried out to estimate the minimum required N for different levels of loadings (λ), number of factors (f), and number of variables (p) and to examine the extent to which a small N solution can sustain the presence of small distortions such as interfactor correlations, model error, secondary loadings, unequal loadings, and unequal p/f. Factor recovery was assessed in terms of pattern congruence coefficients, factor score correlations, Heywood cases, and the gap size between eigenvalues. A subsampling study was also conducted on a psychological dataset of individuals who filled in a Big Five Inventory via the Internet. Results showed that when data are well conditioned (i.e., high λ, low f, high p), EFA can yield reliable results for N well below 50, even in the presence of small distortions. Such conditions may be uncommon but should certainly not be ruled out in behavioral research data.

We propose a method to investigate measurement invariance in the multigroup exploratory factor model, subject to target rotation. We consider both oblique and orthogonal target rotation. This method has clear advantages over other approaches, such as the use of congruence measures. We demonstrate that the model can be implemented readily in the freely available Mx program. We present the results of 2 illustrative analyses, one based on artificial data, and the other on real data relating to personality in male and female psychology students.

NEO instruments are widely used to assess Big Five personality factors, but confirmatory factor analyses (CFAs) conducted at the item level do not support their a priori structure due, in part, to the overly restrictive CFA assumptions. We demonstrate that exploratory structural equation modeling (ESEM), an integration of CFA and exploratory factor analysis (EFA), overcomes these problems with responses (N = 3,390) to the 60-item NEO-Five-Factor Inventory: (a) ESEM fits the data better and results in substantially more differentiated (less correlated) factors than does CFA; (b) tests of gender invariance with the 13-model ESEM taxonomy of full measurement invariance of factor loadings, factor variances-covariances, item uniquenesses, correlated uniquenesses, item intercepts, differential item functioning, and latent means show that women score higher on all NEO Big Five factors; (c) longitudinal analyses support measurement invariance over time and the maturity principle (decreases in Neuroticism and increases in Agreeableness, Openness, and Conscientiousness). Using ESEM, we addressed substantively important questions with broad applicability to personality research that could not be appropriately addressed with the traditional approaches of either EFA or CFA.

This article examines effects of sample size and other design features on correspondence between factors obtained from analysis of sample data and those present in the population from which the samples were drawn. We extend earlier work on this question by examining these phenomena in the situation in which the common factor model does not hold exactly in the population. We present a theoretical framework for representing such lack of fit and examine its implications in the population and sample. Based on this approach we hypothesize that lack of fit of the model in the population will not, on the average, influence recovery of population factors in analysis of sample data, regardless of degree of model error and regardless of sample size. Rather, such recovery will be affected only by phenomena related to sampling error which have been studied previously. These hypotheses are investigated and verified in two sampling studies, one using artificial data and one using empirical data.

This article presents a new method for multiple-group confirmatory factor analysis (CFA), referred to as the alignment method. The alignment method can be used to estimate group-specific factor means and variances without requiring exact measurement invariance. A strength of the method is the ability to conveniently estimate models for many groups. The method is a valuable alternative to the currently used multiple-group CFA methods for studying measurement invariance that require multiple manual model adjustments guided by modification indexes. Multiple-group CFA is not practical with many groups due to poor model fit of the scalar model and too many large modification indexes. In contrast, the alignment method is based on the configural model and essentially automates and greatly simplifies measurement invariance analysis. The method also provides a detailed account of parameter invariance for every model parameter in every group.

The use of analytic rotation in exploratory factor analysis will be examined. Particular attention will be given to situations where there is a complex factor pattern and standard methods yield poor solutions. Some little known but interesting rotation criteria will be discussed and methods for weighting variables will be examined. Illustrations will be provided using Thurstone's 26 variable box data and other examples.

Sample size recommendations in confirmatory factor analysis (CFA) have recently shifted away from observations per variable or per parameter toward consideration of model quality. Extending research by Marsh, Hau, Balla, and Grayson (1998), simulations were conducted to determine the extent to which CFA model convergence and parameter estimation are affected by n as well as by construct reliability, which is a measure of measurement model quality derived from the number of indicators per factor (p/f) and factor loading magnitude. Results indicated that model convergence and accuracy of parameter estimation were affected by n and by construct reliability within levels of n. Sample size recommendations for applied researchers using CFA are presented herein as a function of relevant design characteristics.

One hundred years have passed since the birth of factor analysis, during which time there have been some major developments and extensions to the methodology. Unfortunately, one issue where the widespread accumulation of knowledge has been rather slow concerns identification. This article provides a didactic discussion of the topic in an attempt to draw more researchers' attention to the potential consequences of ignoring identification issues. Because of the mathematical complexity of the topic and some special characteristics of the model, the focus is mainly on identification issues in exploratory factor analysis.

Exploratory factor analysis (EFA) is a frequently used multivariate analysis technique in statistics. Jennrich and Sampson (1966) solved a significant EFA factor loading matrix rotation problem by deriving the direct Quartimin rotation. Jennrich was also the first to develop standard errors for rotated solutions, although these have still not made their way into most statistical software programs. This is perhaps because Jennrich's achievements were partly overshadowed by the subsequent development of confirmatory factor analysis (CFA) by Jöreskog (1969). The strict requirement of zero cross-loadings in CFA, however, often does not fit the data well and has led to a tendency to rely on extensive model modification to find a well-fitting model. In such cases, searching for a well-fitting measurement model may be better carried out by EFA (Browne, 2001). Furthermore, misspecification of zero loadings usually leads to distorted factors with over-estimated factor correlations and subsequent distorted structural relations. This article describes an EFA-SEM (ESEM) approach, where in addition to or instead of a CFA measurement model, an EFA measurement model with rotations can be used in a structural equation model. The ESEM approach has recently been implemented in the Mplus program. ESEM gives access to all the usual SEM parameters and the loading rotation gives a transformation of structural coefficients as well. Standard errors and overall tests of model fit are obtained. Geomin and Target rotations are discussed. Examples of ESEM models include multiple-group EFA with measurement and structural invariance testing, test-retest (longitudinal) EFA, EFA with covariates and direct effects, and EFA with correlated residuals. Testing strategies with sequences of EFA and CFA models are discussed. Simulated and real data are used to illustrate the points.

Since the early years of psychological research, investigators in psychology have made use of mathematical models of psychological phenomena. Models are now routinely used to represent and study cognitive processes, the structure of psychological measurements, the structure of correlational relationships among variables, the nature of change over time, and many other topics and phenomena of interest. All of these models, in their attempt to provide a parsimonious representation of psychological phenomena, are wrong to some degree and are thus implausible if taken literally. Such models simply cannot fully represent the complexities of the phenomena of interest and at best provide an approximation of the real world. This imperfection has implications for how we specify, estimate, and evaluate models, and how we interpret results of fitting models to data. Using factor analysis and structural equation models as a context, I examine some implications of model imperfection for our use of models, focusing on formal specification of models; the nature of parameters and parameter estimates; the relevance of discrepancy functions; the issue of sample size; the evaluation, development, and selection of models; and the conduct of simulation studies. The overall perspective is that our use and study of models should be guided by an understanding that our models are imperfect and cannot be made to be exactly correct.

Ant colony optimization is a recently proposed heuristic procedure inspired by the behavior of real ants. This article applies the procedure to model specification searches in structural equation modeling and reports the results. The results demonstrate the capabilities of ant colony optimization algorithms for conducting automated searches.

In confirmatory factor analysis, constraints on the elements of the factor pattern or factor covariance matrix or both must be imposed to achieve a factor solution that is rotationally unique. The problem of rotational uniqueness is an aspect of the general identification problem in factor analysis; the relation between these 2 problems is described. Many sets of constraints exist that are sufficient to achieve uniqueness. Ideally, all such sets should yield equivalent model fits for a fixed number of common factors. As illustrated here, however, different sets of uniqueness constraints may lead to different fit results when applied to the same data. Several examples of this phenomenon in simulated data are given, and the reasons for the variation in fit results are described. In real applications, this variation in fit results over different choices for the uniqueness constraints may mislead researchers. Some remedies for this problem are discussed.

Investigation of the structure underlying variables (or people, or time) has intrigued social scientists since the early origins of psychology. Conducting one's first factor analysis can yield a sense of awe regarding the power of these methods to inform judgment regarding the dimensions underlying constructs. This book presents the important concepts required for implementing two disciplines of factor analysis: exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). The book may be unique in its effort to present both analyses within the single rubric of the general linear model. Throughout the book canons of best factor analytic practice are presented and explained. The book has been written to strike a happy medium between accuracy and completeness versus overwhelming technical complexity. An actual data set, randomly drawn from a large-scale international study involving faculty and graduate student perceptions of academic libraries, is presented in Appendix A. Throughout the book different combinations of these variables and participants are used to illustrate EFA and CFA applications.

This paper presents an iterative procedure for rotating a factor matrix obliquely to a least-squares fit to a target matrix which need not be fully specified.
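To make the idea concrete, here is a small sketch of rotation toward a partially specified target. It is not Browne's algorithm — his procedure is oblique and more general — but an orthogonal alternating-least-squares stand-in: each pass fills the unspecified target entries with the current rotated loadings and then solves a full orthogonal Procrustes step via the SVD.

```python
import numpy as np

def partial_target_rotate(A, target, iters=200):
    """Orthogonally rotate loading matrix A toward a partially specified
    target, where np.nan marks unspecified entries.

    Sketch only (not Browne's oblique procedure): fill unspecified cells
    with the current rotated loadings, then take a Procrustes step.
    """
    target = np.asarray(target, dtype=float)
    mask = ~np.isnan(target)
    T = np.eye(A.shape[1])
    for _ in range(iters):
        B = A @ T
        filled = np.where(mask, target, B)      # only specified cells pull on B
        U, _, Vt = np.linalg.svd(A.T @ filled)  # Procrustes solution: T = U V'
        T = U @ Vt
    return A @ T

# A simple-structure pattern, disguised by an arbitrary rotation:
L = np.array([[.8, 0], [.7, 0], [0, .8], [0, .7]])
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
A = L @ R

# Specify only the salient loadings; leave the rest unspecified (nan):
target = np.where(L != 0, L, np.nan)
B = partial_target_rotate(A, target)   # B approximately recovers L
```

With the target fully specified, the loop reduces to a single exact Procrustes solution; with partial specification it converges (slowly, as simple alternating schemes do) toward the best fit on the specified cells.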

A structural equation model is proposed with a generalized measurement part, allowing for dichotomous and ordered categorical variables (indicators) in addition to continuous ones. A computationally feasible three-stage estimator is proposed for any combination of observed variable types. This approach provides large-sample chi-square tests of fit and standard errors of estimates for situations not previously covered. Two multiple-indicator modeling examples are given. One is a simultaneous analysis of two groups with a structural equation model underlying skewed Likert variables. The second is a longitudinal model with a structural model for multivariate probit regressions.

Analysis of Ordinal Categorical Data (Alan Agresti). Statistical science now has its first coordinated manual of methods for analyzing ordered categorical data. The book discusses specialized models that, unlike standard methods underlying nominal categorical data, efficiently use the information on ordering. It begins with an introduction to basic descriptive and inferential methods for categorical data and then gives thorough coverage of the most current developments, such as loglinear and logit models for ordinal data. Special emphasis is placed on the interpretation and application of methods, with an integrated comparison of the available strategies for analyzing ordinal data and illuminating case-study examples drawn from across the wide spectrum of ordinal categorical applications.

This Special Issue is the result of the inaugural summit hosted by the Gallup Leadership Institute at the University of Nebraska-Lincoln in 2004 on Authentic Leadership Development (ALD). We describe in this introduction to the special issue current thinking in this emerging field of research as well as questions and concerns. We begin by considering some of the environmental and organizational forces that may have triggered interest in describing and studying authentic leadership and its development. We then provide an overview of its contents, including the diverse theoretical and methodological perspectives presented, followed by a discussion of alternative conceptual foundations and definitions for the constructs of authenticity, authentic leaders, authentic leadership, and authentic leadership development. A detailed description of the components of authentic leadership theory is provided next. The similarities and defining features of authentic leadership theory in comparison to transformational, charismatic, servant and spiritual leadership perspectives are subsequently examined. We conclude by discussing the status of authentic leadership theory with respect to its purpose, construct definitions, historical foundations, consideration of context, relational/processual focus, attention to levels of analysis and temporality, along with a discussion of promising directions for future research.

Component loss functions (CLFs) similar to those used in orthogonal rotation are introduced to define criteria for oblique rotation in factor analysis. It is shown how the shape of the CLF affects the performance of the criterion it defines. For example, it is shown that monotone concave CLFs give criteria that are minimized by loadings with perfect simple structure when such loadings exist. Moreover, if the CLFs are strictly concave, minimization must produce perfect simple structure whenever it exists. Examples show that methods defined by concave CLFs perform well much more generally. While it appears important to use a concave CLF, the specific CLF used is less important. For example, the very simple linear CLF gives a rotation method that can easily outperform the most popular oblique rotation methods, promax and quartimin, and is competitive with the more complex simplimax and geomin methods.
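The component-loss idea is easy to evaluate numerically. A sketch (the function and example values are ours, not from the article): the criterion sums h(|λ|) over all loadings, and for concave h — including the linear CLF h(x) = x — a perfect simple structure scores lower than a rotated version of it:

```python
import numpy as np

def clf_criterion(Lambda, h=lambda x: x):
    """Component-loss rotation criterion: sum of h(|loading|) over all
    entries. The default h(x) = x is the linear CLF."""
    return h(np.abs(Lambda)).sum()

# A perfect-simple-structure pattern and an arbitrarily rotated version:
L = np.array([[.8, 0], [.7, 0], [0, .8], [0, .7]])
theta = 0.4
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# The linear CLF is smaller at perfect simple structure:
print(clf_criterion(L), clf_criterion(L @ T))

# The same ordering holds for another concave choice, h = sqrt:
print(clf_criterion(L, np.sqrt) < clf_criterion(L @ T, np.sqrt))  # True
```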

A Monte Carlo experiment is conducted to investigate the performance of bootstrap methods in normal-theory maximum likelihood factor analysis, both when the distributional assumption is satisfied and when it is violated. The parameters and functions of interest include unrotated loadings, analytically rotated loadings, and unique variances. The results reveal that (a) bootstrap bias estimation sometimes performs poorly for factor loadings and nonstandardized unique variances; (b) bootstrap variance estimation performs well even when the distributional assumption is violated; (c) bootstrap confidence intervals based on the Studentized statistics are recommended; and (d) if the structural hypothesis about the population covariance matrix is taken into account, then the bootstrap distribution of the normal-theory likelihood ratio test statistic is close to the corresponding sampling distribution, with a slightly heavier right tail.
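A minimal illustration of bootstrap variance estimation for loadings (ours, not the study's design). Note the sign reflection inside the loop: without it, the arbitrary sign of the factor across resamples would inflate the bootstrap variance — the same indeterminacy REREFACT addresses across simulation replications. First-principal-component loadings stand in for ML estimates to keep the sketch short:

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam = 300, np.array([.8, .7, .6, .5])

# Simulate a one-factor model with standardized indicators:
F = rng.normal(size=(n, 1))
X = F * lam + rng.normal(size=(n, 4)) * np.sqrt(1 - lam**2)

def pc_loadings(X):
    """First-principal-component loadings from the correlation matrix —
    a quick stand-in for normal-theory ML estimation."""
    w, V = np.linalg.eigh(np.corrcoef(X, rowvar=False))
    load = np.sqrt(w[-1]) * V[:, -1]
    return load if load.sum() > 0 else -load   # canonical sign for the point estimate

est = pc_loadings(X)
boot = []
for _ in range(200):
    b = pc_loadings(X[rng.integers(0, n, n)])  # resample rows with replacement
    if b @ est < 0:                            # reflect: factor sign is arbitrary
        b = -b
    boot.append(b)
se = np.asarray(boot).std(axis=0, ddof=1)      # bootstrap standard errors
print(np.round(se, 3))
```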

Conditions for removing the indeterminacy due to rotation are given for both the oblique and orthogonal factor analysis models. The conditions indicate why published counterexamples to the conditions discussed by Jöreskog are not identifiable.
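The order and sign indeterminacy that survives such identification conditions is what REREFACT remediates across replications. A toy sketch (the `align` helper is hypothetical, not the package's API): enumerate signed permutation matrices P, pick the one whose application brings the estimated loadings closest to the population pattern, and apply it:

```python
import numpy as np
from itertools import permutations, product

def align(est, pop):
    """Find the signed column permutation of est that best matches pop —
    a sketch of REREFACT's Steps 3-4 (identify the equivalent form, undo it)."""
    m = pop.shape[1]
    best, best_err = None, np.inf
    for perm in permutations(range(m)):
        for signs in product([1, -1], repeat=m):
            P = np.zeros((m, m))
            for j, (p, s) in enumerate(zip(perm, signs)):
                P[p, j] = s                      # column j takes s * est column p
            err = np.linalg.norm(est @ P - pop)
            if err < best_err:
                best, best_err = P, err
    return est @ best

pop = np.array([[.8, 0], [.7, 0], [0, .8], [0, .7]])
est = pop[:, [1, 0]] * np.array([-1, 1])   # columns swapped, first one reflected
print(np.allclose(align(est, pop), pop))   # True
```

The exhaustive search over all 2^m · m! signed permutations is feasible only for small m, but it conveys the idea of indexing equivalent forms by permutation matrices.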

Bandalos, D. L., & Leite, W. (2013). Use of Monte Carlo studies in structural equation modeling. In G. R. Hancock & R. O. Mueller (Eds.), Structural equation modeling: A second course (2nd ed., pp. 625-666). Charlotte, NC: Information Age.

Browne, M. W., Cudeck, R., Tateneni, K., & Mels, G. (2010). CEFA: Comprehensive exploratory factor analysis (Version 3.04) [Computer software and manual]. Retrieved from http://faculty.psy.ohio-state.edu/browne/