In a recent article, Castro-Schilo, Widaman, and Grimm (2013) compared different approaches for relating multitrait-multimethod (MTMM) data to external variables. Castro-Schilo et al. reported that estimated associations with external variables were partially biased when either the Correlated Traits-Correlated Uniqueness (CT-CU) or Correlated Traits-Correlated (Methods - 1) [CT-C(M - 1)] model was fit to data generated from the Correlated Traits-Correlated Methods (CT-CM) model, whereas the data-generating CT-CM model accurately reproduced these associations. Castro-Schilo et al. argued that the CT-CM model adequately represents the data-generating mechanism in MTMM studies, whereas the CT-CU and CT-C(M - 1) models do not fully represent the MTMM structure. In this comment, we question whether the CT-CM model is more plausible as a data-generating model for MTMM data than the CT-C(M - 1) model. We show that the CT-C(M - 1) model can be formulated as a reparameterization of a basic MTMM true score model that leads to a meaningful and parsimonious representation of MTMM data. We advocate the use of CFA-MTMM models in which latent trait, method, and error variables are explicitly and constructively defined based on psychometric theory.
The scientific literature consistently supports a negative relationship between adolescent depression and educational achievement, but we are far less certain about the causal determinants of this robust association. In this paper we present multivariate data from a longitudinal cohort-sequential study of high school students in Hawai'i (following McArdle, 2009; McArdle, Johnson, Hishinuma, Miyamoto, & Andrade, 2001). We first describe the full set of data on academic achievements and self-reported depression. We then carry out and present a progression of analyses in an effort to determine the accuracy, size, and direction of the dynamic relationships between depression and academic achievement, including gender and ethnic group differences. We apply three recently available forms of longitudinal data analysis: (1) Dealing with Incomplete Data -- We apply these methods to cohort-sequential data with relatively large blocks of data that are incomplete for a variety of reasons (Little & Rubin, 1987; McArdle & Hamagami, 1992). (2) Ordinal Measurement Models (Muthén & Muthén, 2006) -- We use a variety of statistical and psychometric measurement models, including ordinal measurement models, to help clarify the strongest patterns of influence. (3) Dynamic Structural Equation Models (DSEMs; McArdle, 2009). We found that the DSEM approach taken here was viable for a large amount of data, that the assumption of an invariant metric over time was reasonable for ordinal estimates, and that there were very few group differences in dynamic systems. We conclude that our dynamic evidence suggests that depression affects academic achievement, and not the other way around. We further discuss the methodological implications of the study.
Difficulties arise in multiple-group evaluations of factorial invariance if particular manifest variables are missing completely in certain groups. Ad hoc analytic alternatives can be used in such situations (e.g., deleting manifest variables), but some common approaches, such as multiple imputation, are not viable. At least 3 workable solutions to this problem exist: analyzing differing sets of variables across groups, using pattern mixture approaches, and a new method using random number generation. The latter solution, proposed in this article, is to generate pseudo-random normal deviates for all observations for manifest variables that are missing completely in a given sample and then to specify multiple-group models in a way that respects the random nature of these values. An empirical example is presented in detail comparing the 3 approaches. The proposed solution can enable quantitative comparisons at the latent variable level between groups using programs that require the same number of manifest variables in each group.
Recently a new mean scaled and skewness adjusted test statistic was developed for evaluating structural equation models in small samples and with potentially nonnormal data, but this statistic has received only limited evaluation. The performance of this statistic is compared to normal theory maximum likelihood and two well-known robust test statistics. A modification to the Satorra-Bentler scaled statistic is developed for the condition that sample size is smaller than degrees of freedom. The behavior of the four test statistics is evaluated with a Monte Carlo confirmatory factor analysis study that varies seven sample sizes and three distributional conditions obtained using Headrick's fifth-order transformation to nonnormality. The new statistic performs badly in most conditions except under the normal distribution. The goodness-of-fit χ² test based on maximum likelihood estimation performed well under normal distributions as well as under a condition of asymptotic robustness. The Satorra-Bentler scaled test statistic performed best overall, while the mean scaled and variance adjusted test statistic outperformed the others at small and moderate sample sizes under certain distributional conditions.
Latent Class Analysis (LCA) is a statistical method used to identify subtypes of related cases using a set of categorical and/or continuous observed variables. Traditional LCA assumes that observations are independent. However, multilevel data structures are common in social and behavioral research and alternative strategies are needed. In this paper, a new methodology, multilevel latent class analysis (MLCA), is described and an applied example is presented. Latent classes of cigarette smoking among 10,772 European American females in 9th grade who live in one of 206 rural communities across the U.S. are considered. Parametric and nonparametric approaches for estimating an MLCA are presented, and both individual and contextual predictors of the smoking typologies are assessed. Both latent class and indicator-specific random effects models are explored. The best model was comprised of three Level 1 latent smoking classes (heavy smokers, moderate smokers, non-smokers), two random effects to account for variation in the probability of Level 1 latent class membership across communities, and a random factor for the indicator-specific Level 2 variances. Several covariates at the individual and contextual level were useful in predicting latent classes of cigarette smoking as well as the individual indicators of the latent class model. This paper will assist researchers in estimating similar models with their own data.
In this article, we operationalize identification of mixed racial and ethnic ancestry among adolescents as a latent variable to (a) account for measurement uncertainty, and (b) compare alternative wording formats for racial and ethnic self-categorization in surveys. Two latent variable models were fit to multiple mixed-ancestry indicator data from 1,738 adolescents in New England. The first, a mixture factor model, accounts for the zero-inflated mixture distribution underlying mixed-ancestry identification. Alternatively, a latent class model allows a classification distinction between relatively ambiguous and unambiguous mixed-ancestry responses. Comparison of individual indicators reveals that the Census 2000 survey version estimates a higher prevalence of mixed ancestry but is less sensitive to relative certainty of identification than are alternate survey versions (i.e., offering a "mixed" check box option, allowing a written response). Ease of coding and missing data are also considered in discussing the relative merit of individual mixed-ancestry indicators among adolescents.
This study introduces a two-part factor mixture model as an alternative analysis approach to modeling data where strong floor effects and unobserved population heterogeneity exist in the measured items. As the name suggests, a two-part factor mixture model combines a two-part model, which addresses the problem of strong floor effects by decomposing the data into dichotomous and continuous response components, with a factor mixture model, which explores unobserved heterogeneity in a population by establishing latent classes. Two-part factor mixture modeling can be an important tool for situations in which ordinary factor analysis produces distorted results and can allow researchers to better understand population heterogeneity within groups. Building a two-part factor mixture model involves a consecutive model building strategy that explores latent classes in the data for each part as well as for the combination of the two parts. This model building strategy was applied to data from a randomized preventive intervention trial in Baltimore public schools administered by the Johns Hopkins Center for Early Intervention. The proposed model revealed otherwise unobserved subpopulations among the children in the study in terms of both their tendency toward and their level of aggression. Furthermore, the modeling approach was examined using a Monte Carlo simulation.
Effects of parents' divorce on children's adjustment have been studied extensively. This article applies new advances in trajectory modeling to the problem of disentangling the effects of divorce on children's adjustment from related factors such as the child's age at the time of divorce and the child's gender. Latent change score models were used to examine trajectories of externalizing behavior problems in relation to children's experience of their parents' divorce. Participants included 356 boys and girls whose biological parents were married at kindergarten entry. The children were assessed annually through Grade 9. Mothers reported whether they had divorced or separated in each 12-month period, and teachers reported children's externalizing behavior problems each year. Girls' externalizing behavior problem trajectories were not affected by experiencing their parents' divorce, regardless of the timing of the divorce. In contrast, boys who were in elementary school when their parents divorced showed an increase in externalizing behavior problems in the year of the divorce. This increase persisted in the years following the divorce. Boys who were in middle school when their parents divorced showed an increase in externalizing behavior problems in the year of the divorce followed by a decrease to below baseline levels in the year after the divorce. This decrease persisted in the following years.
Taxometric procedures such as MAXEIG and factor mixture modeling (FMM) are used in latent class clustering, but they have very different sets of strengths and weaknesses. Taxometric procedures, popular in psychiatric and psychopathology applications, do not rely on distributional assumptions. Their sole purpose is to detect the presence of latent classes. The procedures capitalize on the assumption that, due to mean differences between two classes, item covariances within class are smaller than item covariances between the classes. FMM goes beyond class detection and permits the specification of hypothesis-based within-class covariance structures ranging from local independence to multidimensional within-class factor models. In principle, FMM permits the comparison of alternative models using likelihood-based indexes. These advantages come at the price of distributional assumptions. In addition, models are often highly parameterized and susceptible to misspecifications of the within-class covariance structure. Following an illustration with an empirical data set of binary depression items, the MAXEIG procedure and FMM are compared in a simulation study focusing on class detection and the assignment of subjects to the latent classes. FMM generally outperformed MAXEIG in terms of class detection and class assignment. Substantially different class sizes negatively impacted the performance of both approaches, whereas low class separation was much more problematic for MAXEIG than for the FMM.
Structural Equation Mixture Models (SEMMs) are latent class models that permit the estimation of a structural equation model within each class. Fitting SEMMs is illustrated using data from one wave of the Notre Dame Longitudinal Study of Aging. Based on the model used in the illustration, SEMM parameter estimation and correct class assignment are investigated in a large-scale simulation study. Design factors of the simulation study are (im)balanced class proportions, (im)balanced factor variances, sample size, and class separation. We compare the fit of models with correct and misspecified within-class structural relations. In addition, we investigate the potential to fit SEMMs with binary indicators. The structure of within-class distributions can be recovered under a wide variety of conditions, indicating the general potential and flexibility of SEMMs to test complex within-class models. Correct class assignment is limited.
To understand one developmental process, it is often helpful to investigate its relations with other developmental processes. Statistical methods that model development in multiple processes simultaneously over time include latent growth curve models with time-varying covariates, multivariate latent growth curve models, and dual trajectory models. These models are designed for growth represented by continuous, unidimensional trajectories. The purpose of this article is to present a flexible approach to modeling relations in development among two or more discrete, multidimensional latent variables based on the general framework of loglinear modeling with latent variables called associative latent transition analysis (ALTA). Focus is given to the substantive interpretation of different associative latent transition models, and exactly what hypotheses are expressed in each model. An empirical demonstration of ALTA is presented to examine the association between the development of alcohol use and sexual risk behavior during adolescence.
Regression mixture models are a new approach for finding differential effects that has only recently begun to be used in applied research. This approach comes at the cost of the assumption that error terms are normally distributed within classes. The current study uses Monte Carlo simulations to explore the effects of relatively minor violations of this assumption. The use of an ordered polytomous outcome is then examined as an alternative that makes somewhat weaker assumptions, and finally both approaches are demonstrated with an applied example looking at differences in the effects of family management on the highly skewed outcome of drug use. Results show that violating the assumption of normal errors results in systematic bias in both latent class enumeration and parameter estimates. Additional classes that reflect violations of distributional assumptions are found. Under some conditions it is possible to come to conclusions that are consistent with the effects in the population, but when errors are skewed in both classes the results typically no longer reflect even the pattern of effects in the population. The polytomous regression model performs better under all scenarios examined and comes to reasonable results with the highly skewed outcome in the applied example. We recommend that careful evaluation of model sensitivity to distributional assumptions be the norm when conducting regression mixture analyses.
In longitudinal studies, investigators often measure multiple variables at multiple time points and are interested in investigating individual differences in patterns of change on those variables. Furthermore, in behavioral, social, psychological, and medical research, investigators often deal with latent variables that cannot be observed directly and should be measured by 2 or more manifest variables. Longitudinal latent variables occur when the corresponding manifest variables are measured at multiple time points. Our primary interests are in studying the dynamic change of longitudinal latent variables and exploring the possible interactive effect among the latent variables. Much of the existing research in longitudinal studies focuses on studying change in a single observed variable at different time points. In this article, we propose a novel latent curve model (LCM) for studying the dynamic change of multivariate manifest and latent variables and their linear and interaction relationships. The proposed LCM has the following useful features: First, it can handle multivariate variables for exploring the dynamic change of their relationships, whereas conventional LCMs usually consider change in a univariate variable. Second, it accommodates both first- and second-order latent variables and their interactions to explore how changes in latent attributes interact to produce a joint effect on the growth of an outcome variable. Third, it accommodates both continuous and ordered categorical data, and missing data.
Researchers in the behavioural and social sciences often have expectations that can be expressed in the form of inequality constraints among the parameters of a structural equation model, resulting in an informative hypothesis. The question they would like answered is "Is the hypothesis correct?" or "Is the hypothesis incorrect?" We demonstrate a Bayesian approach to compare an inequality-constrained hypothesis with its complement in an SEM framework. The method is introduced and its utility is illustrated by means of an example. Furthermore, the influence of the specification of the prior distribution is examined. Finally, it is shown how the approach proposed can be implemented using Mplus.
Selecting the number of different classes that will be assumed to exist in the population is an important step in latent class analysis (LCA). The bootstrap likelihood ratio test (BLRT) provides a data-driven way to evaluate the relative adequacy of a (K - 1)-class model compared to a K-class model. However, very little is known about how to predict the power or the required sample size for the BLRT in LCA. Based on extensive Monte Carlo simulations, we provide practical effect size measures and power curves that can be used to predict power for the BLRT in LCA given a proposed sample size and a set of hypothesized population parameters. Estimated power curves and tables provide guidance for researchers wishing to size a study to have sufficient power to detect hypothesized underlying latent classes.
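The BLRT procedure itself is easy to sketch: fit the (K - 1)- and K-class models to the observed data, repeatedly simulate data from the fitted (K - 1)-class model, refit both models to each simulated sample, and locate the observed likelihood-ratio statistic in the resulting bootstrap distribution. The pure-Python illustration below covers only the simplest case, one versus two normal classes with a shared variance fit by EM; all function names, the quartile start values, and the small bootstrap count are our own simplifications, not choices made in the article:

```python
import math
import random

def normal_loglik(xs, mu, sd):
    """Log-likelihood of xs under a single N(mu, sd^2) class."""
    return sum(-0.5 * math.log(2 * math.pi * sd * sd)
               - (x - mu) ** 2 / (2 * sd * sd) for x in xs)

def fit_one_class(xs):
    n = len(xs)
    mu = sum(xs) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in xs) / n)
    return normal_loglik(xs, mu, sd), mu, sd

def fit_two_class(xs, iters=100):
    """EM for a two-class normal mixture with shared variance; returns the log-likelihood."""
    n = len(xs)
    srt = sorted(xs)
    mu1, mu2 = srt[n // 4], srt[3 * n // 4]      # crude quartile start values
    mean = sum(xs) / n
    sd = max(1e-3, math.sqrt(sum((x - mean) ** 2 for x in xs) / n))
    w = 0.5                                      # class-2 mixing weight
    for _ in range(iters):
        # E step: posterior probability of class 2 for each observation.
        resp = []
        for x in xs:
            p1 = (1 - w) * math.exp(-(x - mu1) ** 2 / (2 * sd * sd))
            p2 = w * math.exp(-(x - mu2) ** 2 / (2 * sd * sd))
            resp.append(p2 / (p1 + p2))
        # M step: update the weight, the class means, and the shared variance.
        s2 = sum(resp)
        w = s2 / n
        mu1 = sum((1 - r) * x for r, x in zip(resp, xs)) / max(n - s2, 1e-9)
        mu2 = sum(r * x for r, x in zip(resp, xs)) / max(s2, 1e-9)
        sd = max(1e-3, math.sqrt(sum((1 - r) * (x - mu1) ** 2 + r * (x - mu2) ** 2
                                     for r, x in zip(resp, xs)) / n))
    norm = math.sqrt(2 * math.pi * sd * sd)
    return sum(math.log(((1 - w) * math.exp(-(x - mu1) ** 2 / (2 * sd * sd))
                         + w * math.exp(-(x - mu2) ** 2 / (2 * sd * sd))) / norm)
               for x in xs)

def blrt_pvalue(xs, boot=30, seed=7):
    """Bootstrap LRT of a 1-class vs. a 2-class model."""
    rng = random.Random(seed)
    ll1, mu, sd = fit_one_class(xs)
    observed = 2 * (fit_two_class(xs) - ll1)
    exceed = 0
    for _ in range(boot):                        # simulate under the 1-class model
        bs = [rng.gauss(mu, sd) for _ in range(len(xs))]
        b_ll1, _, _ = fit_one_class(bs)
        exceed += 2 * (fit_two_class(bs) - b_ll1) >= observed
    return (exceed + 1) / (boot + 1)

# Demo: clearly separated two-class data should yield a small p value.
demo_rng = random.Random(0)
xs = ([demo_rng.gauss(0.0, 1.0) for _ in range(75)]
      + [demo_rng.gauss(4.0, 1.0) for _ in range(75)])
p_value = blrt_pvalue(xs)
```

Power for the BLRT is then just the proportion of samples, generated under a hypothesized K-class population, for which this p value falls below the chosen alpha level.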
A typical structural equation model is intended to reproduce the means, variances, and correlations or covariances among a set of variables based on parameter estimates of a highly restricted model. It is not widely appreciated that the sample statistics being modeled can be quite sensitive to outliers and influential observations leading to bias in model parameter estimates. A classic public epidemiological data set on the relation between cigarette purchases and rates of four types of cancer among states in the USA is studied with case-weighting methods that reduce the influence of a few cases on the overall results. The results support and extend the original conclusions; the standardized effect of smoking on a factor underlying deaths from bladder and lung cancer is .79.
Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex mediational models. The approach is based on the well-known technique of generating a large number of samples in a Monte Carlo study and estimating power as the percentage of cases in which an estimate of interest is significantly different from zero. Examples of power calculations for commonly used mediational models are provided. Power analyses for the single mediator, multiple mediators, three-path mediation, mediation with latent variables, moderated mediation, and mediation in longitudinal designs are described. Annotated sample syntax for Mplus is appended, and tabled values of required sample sizes are shown for some models.
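The Monte Carlo logic described above can be sketched compactly outside of Mplus. The toy example below uses hypothetical effect sizes, a complete-mediation model with no direct effect, and a Sobel z statistic rather than the bootstrap or likelihood-based tests one would typically prefer; it simply generates many samples from a single-mediator model and counts the proportion in which the indirect effect a*b is significant:

```python
import math
import random

def ols_slope(x, y):
    """Simple OLS slope and its standard error for y ~ x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    resid_ss = sum((yi - my - slope * (xi - mx)) ** 2 for xi, yi in zip(x, y))
    se = math.sqrt(resid_ss / (n - 2) / sxx)
    return slope, se

def mediation_power(a, b, n, reps=1000, seed=1):
    """Monte Carlo power for the indirect effect a*b, using a Sobel z test."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        x = [rng.gauss(0, 1) for _ in range(n)]
        m = [a * xi + rng.gauss(0, 1) for xi in x]       # mediator model
        y = [b * mi + rng.gauss(0, 1) for mi in m]       # outcome model (full mediation)
        a_hat, se_a = ols_slope(x, m)
        b_hat, se_b = ols_slope(m, y)
        z = (a_hat * b_hat) / math.sqrt(a_hat ** 2 * se_b ** 2
                                        + b_hat ** 2 * se_a ** 2)
        hits += abs(z) > 1.96                            # two-sided test, alpha = .05
    return hits / reps

# Power for a hypothetical indirect effect with a = b = .5 and n = 200.
power = mediation_power(a=0.5, b=0.5, n=200, reps=200, seed=3)
```

Sizing a study then amounts to increasing `n` until the estimated power reaches the desired level.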
Mediation is usually assessed by a regression-based or structural equation modeling (SEM) approach that we will refer to as the classical approach. This approach relies on the assumption that there are no confounders that influence both the mediator, M, and the outcome, Y. This assumption holds if individuals are randomly assigned to levels of M, but random assignment is generally not possible. We propose the use of propensity scores to help remove the selection bias that may result when individuals are not randomly assigned to levels of M. The propensity score is the probability that an individual receives a particular level of M. Results from a simulation study are presented to demonstrate this approach, referred to as the Classical + Propensity Model (C+PM), confirming that the population parameters are recovered and that selection bias is successfully dealt with. Comparisons are made to the classical approach, which does not include propensity scores. Propensity scores were estimated by a logistic regression model. If all confounders are included in the propensity model, then the C+PM is unbiased. If some, but not all, of the confounders are included in the propensity model, then the C+PM estimates are biased, although not as severely as those from the classical approach (i.e., with no propensity model included).
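The core computation, estimating each individual's propensity to receive a given level of M from observed confounders, can be illustrated with a bare-bones logistic regression fit by gradient ascent. This is a sketch only (in practice one would use standard statistical software, as the article does), and the variable names and simulation settings are invented:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(x, m, lr=0.5, iters=1500):
    """Estimate P(M = 1 | x) = sigmoid(w0 + w1 * x) by gradient ascent
    on the Bernoulli log-likelihood (a stand-in for standard routines)."""
    w0 = w1 = 0.0
    n = len(x)
    for _ in range(iters):
        g0 = g1 = 0.0
        for xi, mi in zip(x, m):
            err = mi - sigmoid(w0 + w1 * xi)    # observed minus fitted probability
            g0 += err
            g1 += err * xi
        w0 += lr * g0 / n
        w1 += lr * g1 / n
    return w0, w1

# Simulated example: a single confounder x drives assignment to levels of M.
rng = random.Random(42)
x = [rng.gauss(0, 1) for _ in range(500)]
m = [1 if rng.random() < sigmoid(1.5 * xi) else 0 for xi in x]
w0, w1 = fit_logistic(x, m)
scores = [sigmoid(w0 + w1 * xi) for xi in x]    # estimated propensity scores
```

The fitted scores are what the C+PM approach then conditions on to reduce the selection bias in the mediated effect.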
The integration of modern methods for causal inference with latent class analysis (LCA) allows social, behavioral, and health researchers to address important questions about the determinants of latent class membership. In the present article, two propensity score techniques, matching and inverse propensity weighting, are demonstrated for conducting causal inference in LCA. The different causal questions that can be addressed with these techniques are carefully delineated. An empirical analysis based on data from the National Longitudinal Survey of Youth 1979 is presented, where college enrollment is examined as the exposure (i.e., treatment) variable and its causal effect on adult substance use latent class membership is estimated. A step-by-step procedure for conducting causal inference in LCA, including multiple imputation of missing data on the confounders, exposure variable, and multivariate outcome, is included. Sample syntax for carrying out the analysis using SAS and R is given in an appendix.
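Of the two techniques, inverse propensity weighting is the easier to sketch: each exposed case is weighted by 1/e(x) and each unexposed case by 1/(1 - e(x)), so the weighted sample mimics random assignment to the exposure. The illustration below assumes, purely for brevity, that the true propensity score is known and that the outcome is a single continuous variable rather than latent class membership; in practice the score would be estimated from the confounders:

```python
import math
import random

def ipw_ate(y, t, e):
    """Inverse-propensity-weighted estimate of the average treatment effect."""
    n = len(y)
    treated = sum(ti * yi / ei for yi, ti, ei in zip(y, t, e)) / n
    control = sum((1 - ti) * yi / (1 - ei) for yi, ti, ei in zip(y, t, e)) / n
    return treated - control

# Simulated example with one confounder c and a true effect of 2.0.
rng = random.Random(11)
n = 20000
c = [rng.gauss(0, 1) for _ in range(n)]
e = [1.0 / (1.0 + math.exp(-ci)) for ci in c]          # true propensity score
t = [1 if rng.random() < ei else 0 for ei in e]        # exposure assignment
y = [2.0 * ti + ci + rng.gauss(0, 1) for ti, ci in zip(t, c)]

naive = (sum(yi for yi, ti in zip(y, t) if ti) / sum(t)
         - sum(yi for yi, ti in zip(y, t) if not ti) / (n - sum(t)))
ate = ipw_ate(y, t, e)   # close to 2.0; the naive difference is biased upward
```

In the LCA setting of the article, the same weights enter the latent class model's likelihood rather than a simple difference of means.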
Intensive longitudinal data (ILD) have become increasingly common in the social and behavioral sciences; count variables, such as the number of cigarettes smoked daily, are frequently used outcomes in many ILD studies. We demonstrate a generalized extension of growth mixture modeling (GMM) to Poisson-distributed ILD for identifying qualitatively distinct trajectories in the context of developmental heterogeneity in count data. Accounting for the Poisson outcome distribution is essential for correct model identification and estimation. In addition, setting up the model in a way that is conducive to ILD measures helps with data complexities: large data volume, missing observations, and differences in sampling frequency across individuals. We present technical details of model fitting, summarize an empirical example of patterns of smoking behavior change, and describe research questions the generalized GMM helps to address.
Latent class analysis (LCA) is a statistical method used to identify a set of discrete, mutually exclusive latent classes of individuals based on their responses to a set of observed categorical variables. In multiple-group LCA, both the measurement part and structural part of the model can vary across groups, and measurement invariance across groups can be empirically tested. LCA with covariates extends the model to include predictors of class membership. In this article, we introduce PROC LCA, a new SAS procedure for conducting LCA, multiple-group LCA, and LCA with covariates. The procedure is demonstrated using data on alcohol use behavior in a national sample of high school seniors.
Stage-sequential (or multiphase) growth mixture models are useful for delineating potentially different growth processes across multiple phases over time and for determining whether latent subgroups exist within a population. These models are increasingly important as social and behavioral scientists are interested in better understanding change processes across distinctively different phases, such as before and after an intervention. One of the less understood issues related to the use of growth mixture models is how to decide on the optimal number of latent classes. The performance of several traditionally used information criteria for determining the number of classes is examined through a Monte Carlo simulation study in single- and multiphase growth mixture models. For thorough examination, the simulation was carried out from two perspectives: the models and the factors. The simulation in terms of the models was carried out to see the overall performance of the information criteria within and across the models, while the simulation in terms of the factors was carried out to see the effect of each simulation factor on the performance of the information criteria holding the other factors constant. The findings not only support the sample-size adjusted BIC (ADBIC) as a good choice under more realistic conditions, such as low class separation, smaller sample size, and/or missing data, but also increase understanding of the performance of information criteria in single- and multiphase growth mixture models.
Little research has examined factors influencing statistical power to detect the correct number of latent classes using latent profile analysis (LPA). This simulation study examined power related to inter-class distance between latent classes given the true number of classes, sample size, and number of indicators. Seven model selection methods were evaluated. None had adequate power to select the correct number of classes with a small (Cohen's d = .2) or medium (d = .5) degree of separation. With a very large degree of separation (d = 1.5), the Lo-Mendell-Rubin test (LMR), adjusted LMR, bootstrap likelihood-ratio test, BIC, and sample-size adjusted BIC were good at selecting the correct number of classes. However, with a large degree of separation (d = .8), power depended on the number of indicators and sample size. The AIC and entropy poorly selected the correct number of classes, regardless of degree of separation, number of indicators, or sample size.
The factor mixture model (FMM) uses a hybrid of both categorical and continuous latent variables. The FMM is a good model for the underlying structure of psychopathology because the use of both categorical and continuous latent variables allows the structure to be simultaneously categorical and dimensional. This is useful because both diagnostic class membership and the range of severity within and across diagnostic classes can be modeled concurrently. While the conceptualization of the FMM has been explained in the literature, the use of the FMM is still not prevalent. One reason is that there is little research about how such models should be applied in practice and, once a well-fitting model is obtained, how it should be interpreted. In this paper, the FMM will be explored by studying a real data example on conduct disorder. By exploring this example, this paper aims to explain the different formulations of the FMM, the various steps in building an FMM, as well as how to decide between an FMM and alternative models.
A conditionally linear mixed effects model is an appropriate framework for investigating nonlinear change in a continuous latent variable that is repeatedly measured over time. A strength of the model is that it allows parameters that enter the specified nonlinear time-response function linearly to be stochastic, whereas those parameters that enter in a nonlinear manner are common to all subjects. In this article we describe how a variant of the Michaelis-Menten (M-M) function can be fit within this modeling framework using Mplus 6.0. We demonstrate how observed and latent covariates can be incorporated to help explain individual differences in growth characteristics. Features of the model, including an explication of key analytic decision points, are illustrated using longitudinal reading data. To aid in making this class of models accessible, annotated Mplus code is provided.
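The functional form at the heart of this model family is easy to state: with an asymptote parameter and a "half-time" rate parameter, the M-M curve rises from zero and levels off. A minimal sketch (the parameter names are ours, not those of the article or of Mplus):

```python
def michaelis_menten(t, asymptote, half_time):
    """Michaelis-Menten time-response curve: 0 at t = 0, half of
    `asymptote` at t = half_time, approaching `asymptote` as t grows."""
    return asymptote * t / (half_time + t)

# In the conditionally linear mixed-effects setting, `asymptote` enters the
# function linearly and so may be subject-specific (random), while `half_time`
# enters nonlinearly and is held common to all subjects.
trajectory = [michaelis_menten(t, asymptote=100.0, half_time=2.0) for t in range(9)]
```

The trajectory rises steeply early on and flattens as it approaches the asymptote, which is what makes the curve a natural candidate for growth processes such as reading development.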
A large literature emphasizes the importance of testing for measurement equivalence in scales that may be used as observed variables in structural equation modeling applications. When the same construct is measured across more than one developmental period, as in a longitudinal study, it can be especially critical to establish measurement equivalence, or invariance, across the developmental periods. Similarly, when data from more than one study are combined into a single analysis, it is again important to assess measurement equivalence across the data sources. Yet, how to incorporate non-equivalence when it is discovered is not well described for applied researchers. Here, we present an item response theory approach that can be used to create scale scores from measures while explicitly accounting for non-equivalence. We demonstrate these methods in the context of a latent curve analysis in which data from two separate studies are combined to create a single longitudinal model spanning several developmental periods.
First-order latent growth curve models (FGMs) estimate change based on a single observed variable and are widely used in longitudinal research. Despite significant advantages, second-order latent growth curve models (SGMs), which use multiple indicators, are rarely used in practice, and not all aspects of these models are widely understood. In this article, our goal is to contribute to a better understanding of theoretical and practical differences between FGMs and SGMs. We define the latent variables in FGMs and SGMs explicitly on the basis of latent state–trait (LST) theory and discuss insights that arise from this approach. We show that FGMs imply a strict trait-like conception of the construct under study, whereas SGMs allow for both trait and state components. Based on a simulation study and empirical applications to the Center for Epidemiological Studies Depression Scale (Radloff, 1977), we illustrate that, as an important practical consequence, FGMs yield biased reliability estimates whenever constructs contain state components, whereas reliability estimates based on SGMs were found to be accurate. Implications of the state–trait distinction for the measurement of change via latent growth curve models are discussed.
This study investigated a method to evaluate mediational processes using latent growth curve modeling. The mediator and the outcome measured across multiple time points were viewed as 2 separate parallel processes. The mediational process was defined as the independent variable influencing the growth of the mediator, which, in turn, affected the growth of the outcome. To illustrate modeling procedures, empirical data from a longitudinal drug prevention program, Adolescents Training and Learning to Avoid Steroids, were used. The program effects on the growth of the mediator and the growth of the outcome were examined first in a 2-group structural equation model. The mediational process was then modeled and tested in a parallel process latent growth curve model by relating the prevention program condition, the growth rate factor of the mediator, and the growth rate factor of the outcome.
In recent years the use of the Latent Curve Model (LCM) among researchers in social sciences has increased noticeably, probably thanks to contemporary software developments and to the availability of specialized literature. Extensions of the LCM, like the Latent Change Score Model (LCSM), have also increased in popularity. At the same time, the R statistical language and environment, which is open source and runs on several operating systems, is becoming a leading software for applied statistics. We show how to estimate both the LCM and the LCSM with the sem, lavaan, and OpenMx packages of the R software. We also illustrate how to read in, summarize, and plot data prior to analyses. Examples are provided on data previously illustrated by Ferrer, Hamagami, and McArdle (2004). The data and all scripts used here are available on the first author's website.
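The tutorial above works in R; for readers without R, the linear latent curve structure it estimates can be mimicked with a quick simulation, here in Python with made-up parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)
n, occasions = 1000, 5
t = np.arange(occasions)

# Linear latent curve: y_it = intercept_i + slope_i * t + e_it,
# with growth-factor means 10.0 and 0.5 (illustrative values)
intercepts = rng.normal(10.0, 1.5, n)
slopes = rng.normal(0.5, 0.3, n)
y = intercepts[:, None] + slopes[:, None] * t + rng.normal(0.0, 0.8, (n, occasions))

# Recover the growth-factor means with per-person least squares
X = np.column_stack([np.ones(occasions), t.astype(float)])
coef, *_ = np.linalg.lstsq(X, y.T, rcond=None)  # coef has shape (2, n)
print(coef[0].mean(), coef[1].mean())  # should be near 10.0 and 0.5
```

An SEM package would estimate the same fixed effects (plus the factor covariance matrix) in one step; the sketch only shows the data structure the LCM assumes.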
This study analyzes latent change scores using latent curve models (LCMs) for evaluation research with pre–post–post designs. The article extends a recent article by Willoughby, Vandergrift, Blair, and Granger (2007) on the use of LCMs for studies with pre–post–post designs, and demonstrates that intervention effects can be better tested using different parameterizations of LCMs. This study illustrates how to test the overall mean of a latent variable at the time of research interest, not just at baseline, as well as means of latent change variables between assessments, and introduces how individual differences in the referent outcome (i.e., Level 2 random effects) and measurement-specific residuals (i.e., Level 1 residuals) can be modeled and interpreted. Two intervention data examples are presented. This LCM approach to change is more advantageous than other methods in its handling of measurement errors and individual differences in response to treatment, its avoidance of unrealistic assumptions, and its greater power and flexibility.
The aim of this study was to present a method for developing a path analytic network model using data acquired from positron emission tomography. Regions of interest within the human brain were identified through quantitative activation likelihood estimation meta-analysis. Using this information, a "true" or population path model was then developed using Bayesian structural equation modeling. To evaluate the impact of sample size on parameter estimation bias, proportion of parameter replication coverage, and statistical power, a 2 group (clinical/control) × 6 (sample size: N = 10, N = 15, N = 20, N = 25, N = 50, N = 100) Markov chain Monte Carlo study was conducted. Results indicate that using a sample size of less than N = 15 per group will produce parameter estimates exhibiting bias greater than 5% and statistical power below .80.
Latent difference score models (e.g., McArdle & Hamagami, 2001) are extended to include effects from prior changes to subsequent changes. This extension of latent difference scores allows for testing hypotheses in which recent changes, as opposed to recent levels, are a primary predictor of subsequent changes. These models are applied to bivariate longitudinal data collected as part of the Baltimore Longitudinal Study of Aging on memory performance, measured by the California Verbal Learning Test, and lateral ventricle size, measured by structural MRIs. Results indicate that recent increases in lateral ventricle size were a leading indicator of subsequent declines in memory performance from age 60 to 90.
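The deterministic core of a univariate latent difference score recursion with a prior-change effect can be written out directly. The parameter values below are illustrative, not estimates from the article:

```python
import numpy as np

# alpha: constant-change effect; beta: proportional effect of the prior level;
# phi: effect of the most recent change (the extension discussed above)
alpha, beta, phi = 0.5, -0.05, 0.3
slope = 1.0  # constant-change (slope) component
T = 10

y = np.empty(T)
dy = np.zeros(T)
y[0] = 5.0

for t in range(1, T):
    # subsequent change = constant change + effect of prior level
    #                     + effect of the most recent change
    dy[t] = alpha * slope + beta * y[t - 1] + phi * dy[t - 1]
    y[t] = y[t - 1] + dy[t]

print(np.round(y, 3))
```

In the full latent variable model these quantities are latent and estimated jointly with measurement error; the loop only shows the dynamic hypothesis being tested.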
Recent advances in testing mediation have found that certain resampling methods and tests based on the mathematical distribution of the product of 2 normal random variables substantially outperform the traditional z test. However, these studies have primarily focused only on models with a single mediator and 2 component paths. To address this limitation, a simulation was conducted to evaluate these alternative methods in a more complex path model with multiple mediators and indirect paths with 2 and 3 paths. Methods for testing contrasts of 2 effects were also evaluated. The simulation included 1 exogenous independent variable, 3 mediators, and 2 outcomes, and varied sample size, number of paths in the mediated effects, test used to evaluate effects, effect sizes for each path, and the value of the contrast. Confidence intervals were used to evaluate the power and Type I error rate of each method, and were examined for coverage and bias. The bias-corrected bootstrap had the least biased confidence intervals, greatest power to detect nonzero effects and contrasts, and the most accurate overall Type I error. All tests had less power to detect 3-path effects and more inaccurate Type I error compared to 2-path effects. Confidence intervals were biased for mediated effects, as found in previous studies. Results for contrasts did not vary greatly by test, although resampling approaches had somewhat greater power and might be preferable because of ease of use and flexibility.
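A minimal sketch of the bias-corrected bootstrap for a single two-path indirect effect follows; the data are simulated here, and the simulation design in the article is far more extensive:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 200

# Simulated single-mediator chain X -> M -> Y with a = b = 0.4 (illustrative)
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(size=n)
y = 0.4 * m + rng.normal(size=n)

def indirect_effect(x, m, y):
    # a: slope of M on X; b: slope of Y on M controlling for X; effect = a * b
    a = np.linalg.lstsq(np.column_stack([np.ones(len(x)), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones(len(x)), x, m]), y, rcond=None)[0][2]
    return a * b

est = indirect_effect(x, m, y)

# Bias-corrected bootstrap confidence interval
B = 2000
boot = np.empty(B)
for i in range(B):
    idx = rng.integers(0, n, n)  # resample cases with replacement
    boot[i] = indirect_effect(x[idx], m[idx], y[idx])

z0 = norm.ppf((boot < est).mean())                 # bias-correction constant
lo, hi = norm.cdf(2 * z0 + norm.ppf([0.025, 0.975]))  # adjusted percentiles
ci = np.quantile(boot, [lo, hi])
print(est, ci)
```

The bias correction shifts the percentile endpoints by twice z0, which is what distinguishes this interval from the plain percentile bootstrap.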
Non-linear growth curves, that is, growth curves that follow a specified non-linear function in time, enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this paper we describe how a variety of sigmoid curves can be fit using the Mplus structural modeling program and the non-linear mixed-effects modeling procedure NLMIXED in SAS. Using longitudinal achievement data collected as part of a study examining the effects of preschool instruction on academic gain, we illustrate the procedures for fitting growth models of logistic, Gompertz, and Richards functions. Brief notes regarding the practical benefits, limitations, and choices faced in the fitting and estimation of such models are included.
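The three sigmoid families can be written down and fit directly; parameterizations vary across the literature, so the forms below are one standard choice each, and the data are simulated for illustration rather than taken from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, asym, rate, infl):
    # asym: upper asymptote; rate: growth rate; infl: inflection point
    return asym / (1.0 + np.exp(-rate * (t - infl)))

def gompertz(t, asym, rate, infl):
    return asym * np.exp(-np.exp(-rate * (t - infl)))

def richards(t, asym, rate, infl, shape):
    # reduces to the logistic curve when shape = 1
    return asym / (1.0 + shape * np.exp(-rate * (t - infl))) ** (1.0 / shape)

# Fit a logistic curve to noisy simulated achievement scores
rng = np.random.default_rng(3)
t = np.linspace(0, 10, 40)
y = logistic(t, 100.0, 1.2, 5.0) + rng.normal(0.0, 2.0, t.size)

params, _ = curve_fit(logistic, t, y, p0=[90.0, 1.0, 4.0])
print(np.round(params, 2))  # should be close to the generating values (100, 1.2, 5)
```

Mplus and NLMIXED additionally estimate random effects on these parameters across persons; the sketch only fits a single mean curve.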
Conventional wisdom in missing data research dictates adding variables to the missing data model when those variables are predictive of (a) missingness and (b) the variables containing missingness. However, it has recently been shown that adding variables that are correlated with variables containing missingness, whether or not they are related to missingness, can substantially improve estimation (bias and efficiency). Including large numbers of these "auxiliary" variables is straightforward for researchers who use multiple imputation. However, what is the researcher to do if 1 of the full-information maximum likelihood (FIML)/structural equation modeling (SEM) procedures is the analysis of choice? This article suggests 2 models for SEM analysis with missing data, and presents simulation results to show that both models provide estimation that is clearly as good as analysis with the expectation-maximization (EM) algorithm, and by extension, multiple imputation. One of these models, the saturated correlates model, also provides good estimates of model fit. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Applied confirmatory factor analysis (CFA) models to evaluate multidimensional models of flow state and flow trait. 385 athletes from the 1994 World Masters Games completed the 9-factor Flow State Scale (S. A. Jackson and H. W. Marsh, 1996), Flow Trait Scale (S. A. Jackson et al., 1998), and external validity criteria. CFA tested alternative 1st-order and higher order models of responses to each instrument separately, to combined responses from the 2 instruments, and to responses augmented by external validity criteria. There was good support for the construct validity of state and trait flow responses in that a priori 9-factor (for each instrument separately) and 18-factor (for the 2 instruments) models fit the data well, correlations were substantially higher between matching trait and state factors than between nonmatching factors, and external state and trait validity criteria were predictably related to specific state and trait flow factors. Whereas higher order models positing global trait and state factors could not be distinguished from corresponding 1st-order models when responses to each instrument were considered separately, the higher order models fared poorly when state and trait factors were related to each other and to the external criteria.
Illustrates the estimation of variance components using the covariance structure analysis approach, beneficial to those interested in the analysis of measurement designs based on the principles of Generalizability (G) theory (L. J. Cronbach et al., 1972). This measurement theory provides a framework for examining the psychometric properties of various testing situations, and extends classical reliability theory by recognizing and estimating the magnitude of multiple sources of error that may affect a behavioral measurement. In G theory, a behavioral measurement is considered a sample from a universe of admissible observations described by 1 or more facets. Illustrations are provided of G theory's focus on variance components for determining the relative contribution of each source of measurement error, including ANOVA estimates of variance components for 1-facet and 2-facet designs. Results of a G study analysis using both the ANOVA and the covariance structure approaches are included. The Appendix contains a LISREL script for obtaining the estimates using the covariance structure approach.
A didactic discussion of a latent variable modeling approach is presented that addresses frequent empirical concerns of social, behavioral, and educational researchers involved in longitudinal studies. The method is suitable when the purpose is to analyze repeated measure data along several interrelated dimensions and to explain some of the associated patterns of change in terms of other developmental patterns via regressions among random effects. The procedure utilizes the framework of latent variable modeling and in addition is readily applicable when not all subjects provide complete records on all assessments and variables. The approach is illustrated with data from a cognitive intervention study.
Summarizes the original D. Campbell and D. Fiske (1959) guidelines used to inspect a multitrait-multimethod (MTMM) correlation matrix and subsequent latent variable models that employ structural equation modeling, including the confirmatory factor analysis (CFA) approach and direct product models.
Ant colony optimization is a recently proposed heuristic procedure inspired by the behavior of real ants. This article applies the procedure to model specification searches in structural equation modeling and reports results that demonstrate the capabilities of ant colony optimization algorithms for conducting automated searches.
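The mechanics of such a search can be sketched with a toy objective standing in for an SEM fit criterion; here the "fit" of a specification is simply its agreement with a hidden best set of freed paths, whereas a real search would fit each candidate model:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical "best" specification over 8 candidate paths (1 = free the path)
best_model = np.array([1, 0, 1, 1, 0, 0, 1, 0])

def fit(spec):
    # toy stand-in for a model fit criterion
    return int((spec == best_model).sum())

n_paths, n_ants, n_iter = 8, 20, 30
pheromone = np.full(n_paths, 0.5)  # sampling probability of freeing each path

for _ in range(n_iter):
    # each ant samples a candidate specification from the pheromone trails
    ants = (rng.random((n_ants, n_paths)) < pheromone).astype(int)
    scores = np.array([fit(a) for a in ants])
    elite = ants[scores.argmax()]
    # evaporate, then deposit pheromone along the iteration-best specification
    pheromone = 0.9 * pheromone + 0.1 * elite

print(np.round(pheromone, 2))  # trails typically concentrate on the best paths
```

Evaporation keeps early lucky choices from locking in, while reinforcement of iteration-best specifications steers later ants toward better-fitting models.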
AMOS 3.1. Originally distributed by James Arbuckle, Department of Psychology, Temple University, Philadelphia, PA 19122. $50. Requirements for AMOS 3.1 for Windows: IBM PC-compatible computer, MS-DOS 3.1 or later, 512K free memory, 386 or 486, Microsoft Windows 3.0 or later.
EQS/Windows 4.0. BMDP Statistical Software, Inc., Suite 316, 1440 Sepulveda Boulevard, Los Angeles, CA 90025, (800) 238-BMDP, (310) 479-7799. (In Europe, EQS/Windows 4.0 is available from ProGamma, P.O.B. 841, 9700 AV Groningen, The Netherlands, +31-50-636900.) Prices vary. Upgrades and academic pricing available. Requirements: 386 or better, 2.5MB hard disk space, 4MB RAM, math coprocessor, Microsoft Windows 3.1.
LISREL for Windows 8.01. Scientific Software International, Suite 906, 1525 East 53rd Street, Chicago, IL 60615-4530. (In Europe, LISREL for Windows 8.01 is available from ProGamma, P.O.B. 841, 9700 AV Groningen, The Netherlands, +31-50-636900.) DOS/DOS Extender versions, $495; all three versions, $575; upgrades from LISREL 7, $195; all three versions, $245. Requirements: 2MB RAM (4MB recommended), 3.6MB hard disk space, Microsoft Windows 3.1.
We consider a general type of model for analyzing ordinal variables with covariate effects and 2 approaches for analyzing data for such models, the item response theory (IRT) approach and the PRELIS-LISREL (PLA) approach. We compare these 2 approaches on the basis of 2 examples, 1 involving only covariate effects directly on the ordinal variables and 1 involving covariate effects on the latent variables in addition.
In research concerning model invariance across populations, researchers have discussed the limitations of the conventional chi-square difference test (Δχ² test). There have been some research efforts in using goodness-of-fit indexes (i.e., Δgoodness-of-fit indexes) for assessing multisample model invariance, and some specific recommendations have been made (Cheung & Rensvold, 2002). Because Δgoodness-of-fit indexes were designed to assess model fit in terms of covariance structure, it is not clear how they will perform when mean structure invariance is the research focus. This study extends the previous work (Cheung & Rensvold, 2002) and evaluates how Δgoodness-of-fit indexes perform in mean structure invariance analysis. Using a Monte Carlo simulation experiment, the performance of Δgoodness-of-fit indexes in detecting population mean structure differences is evaluated. The findings suggest that, in general, Δgoodness-of-fit indexes are so sensitive to model size that they are not generally useful in mean structure invariance analysis.
A relevant problem in applications of Item Response Theory (IRT) models is that of nonignorable missing responses. We propose a multidimensional latent class IRT model for binary items in which the missingness mechanism is driven by a latent variable (propensity to answer) correlated with the latent variable for the ability (or latent variables for the abilities) measured by the test items. These latent variables are assumed to have a joint discrete distribution. This assumption is convenient both from the point of view of estimation, since the manifest distribution of the responses may be simply obtained, and for the decisional process, since individuals are classified in homogeneous groups having common latent variable values. Moreover, this assumption avoids parametric formulations for the distribution of the latent variables, giving rise to a semiparametric model. The basic model, which can be expressed in terms of a Rasch or a two-parameter logistic parameterization, is also extended to allow for covariates that influence the weights of the latent classes. The resulting model may be efficiently estimated through the discrete marginal maximum likelihood method, making use of the Expectation-Maximization algorithm. The proposed approach is illustrated through an application to data from a Students' Entry Test for admission to the courses in Economics at an Italian University.
Researchers often have expectations that can be expressed in the form of inequality constraints among the parameters of a structural equation model. It is currently not possible to test these so-called informative hypotheses in structural equation modeling software. We offer a solution to this problem using Mplus. The hypotheses are evaluated using plug-in p values with a calibrated alpha level. The method is introduced and its utility is illustrated by means of an example.
A two-stage procedure for estimation and testing of observed measure correlations in the presence of missing data is discussed. The approach uses maximum likelihood for estimation and the false discovery rate concept for correlation testing. The method can be utilized in initial exploration oriented empirical studies with missing data, where it is of interest to estimate manifest variable interrelationship indexes and test hypotheses about their population values. The procedure is applicable also with violations of the underlying missing at random assumption, via inclusion of auxiliary variables. The outlined approach is illustrated with data from an aging research study.
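The second stage of such a procedure rests on false discovery rate control; the standard Benjamini-Hochberg step-up rule (used here as an illustration of the FDR concept, not necessarily the exact variant in the article) is short enough to state in full:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Boolean mask of hypotheses rejected at false discovery rate q."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # step-up rule: find the largest k with p_(k) <= (k / m) * q
    below = ranked <= (np.arange(1, m + 1) / m) * q
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[:k + 1]] = True  # reject all hypotheses up to rank k
    return reject

# e.g. p values from tests of six pairwise correlations (made-up numbers)
pvals = [0.001, 0.008, 0.039, 0.041, 0.27, 0.61]
print(benjamini_hochberg(pvals))  # rejects only the two smallest p values
```

Unlike a Bonferroni correction, the rejection threshold grows with the rank of each p value, which preserves power when many correlations are tested at once.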
According to Kenny and McCoach (2003), chi-square tests of structural equation models produce inflated Type I error rates when the degrees of freedom increase. So far, the amount of this bias in large models has not been quantified. In a Monte Carlo study of confirmatory factor models with a range of 48 to 960 degrees of freedom it was found that the traditional maximum likelihood ratio statistic, TML, overestimates nominal Type I error rates up to 70% under conditions of multivariate normality. Some alternative statistics for the correction of model-size effects were also investigated: the scaled Satorra-Bentler statistic, TSC; the adjusted Satorra-Bentler statistic, TAD (Satorra & Bentler, 1988, 1994); corresponding Bartlett corrections, TMLb, TSCb, and TADb (Bartlett, 1950); and corresponding Swain corrections, TMLs, TSCs, and TADs (Swain, 1975). The empirical findings indicate that the model test statistic TMLs should be applied when large structural equation models are analyzed and the observed variables have (approximately) a multivariate normal distribution.
Within the latent growth curve model, time-invariant covariates are generally modeled on the subject level, thereby estimating the effect of the covariate on the latent growth parameters. Incorporating the time-invariant covariate in this manner may have some advantages regarding the interpretation of the effect but may also be incorrect in certain instances. In this article we discuss a more general approach for modeling time-invariant covariates in latent growth curve models in which the covariate is directly regressed on the observed indicators. The approach can be used on its own to get estimates of the growth curves corrected for the influence of a 3rd variable, or it can be used to test the appropriateness of the standard way of modeling the time-invariant covariates. It thus provides a test of the assumption of full mediation, which states that the relation between the covariate and the observed indicators is fully mediated by the latent growth parameters.
The analysis of longitudinal data collected from non-exchangeable dyads presents a challenge for applied researchers for various reasons. This paper introduces the Dyadic Curve-of-Factors Model (D-COFM) which extends the Curve-of-Factors Model (COFM) proposed by McArdle (1988) for use with non-exchangeable dyadic data. The D-COFM overcomes problems with modeling composite scores across time and instead permits examination of the growth in latent constructs over time. The D-COFM also appropriately models the interdependency among non-exchangeable dyads. Different parameterizations of the D-COFM are illustrated and discussed using a real dataset to aid applied researchers when analyzing dyadic longitudinal data.