Correlation Is Not Causation
HR measurement is straightforward once you know what you're doing, but oversimplify it and no one will take the results seriously. HR benchmarks built on a spurious link between HR practices and business performance convince no one. Effective, convincing HR measurement starts by establishing causation, not mere correlation.
- "In fact, cross-sectional designs are highly vulnerable to reverse causality and endogeneity biases. It has been broadly accepted for at least three decades that three conditions must be met to establish the causality of a relationship between two variables: correlation, temporal precedence, and non-spuriousness (Kenny 1979). Establishing correlation is usually not a problem in cross-sectional models. "
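The non-spuriousness condition quoted above can be illustrated with a minimal simulation (an illustrative sketch, not taken from the cited work): a hypothetical confounder Z drives both X and Y, producing a strong correlation even though X has no causal effect on Y, and conditioning on Z makes the association vanish.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical confounder Z causes both X and Y; X does not cause Y.
z = rng.normal(size=n)
x = z + rng.normal(size=n)
y = z + rng.normal(size=n)

# Marginal correlation is strong despite no causal path from X to Y.
r_xy = np.corrcoef(x, y)[0, 1]

# Partial correlation controlling for Z: regress Z out of both, then correlate.
x_res = x - np.polyval(np.polyfit(z, x, 1), z)
y_res = y - np.polyval(np.polyfit(z, y, 1), z)
r_partial = np.corrcoef(x_res, y_res)[0, 1]

print(f"raw r = {r_xy:.2f}, partial r = {r_partial:.2f}")
```

Here the raw correlation is near 0.5 by construction, while the partial correlation is near zero, which is exactly why correlation alone cannot establish causality.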
ABSTRACT: Much quantitative macro-comparative research (QMCR) relies on a common set of published data sources to answer similar research questions using a limited number of statistical tools. Since all researchers have access to much the same data, one might expect quick convergence of opinion on most topics. In reality, of course, differences of opinion abound and persist. Many of these differences can be traced, implicitly or explicitly, to the different ways researchers choose to model error in their analyses. Much careful attention has been paid in the political science literature to the error structures characteristic of time-series cross-sectional (TSCS) data, but much less attention has been paid to the modeling of error in broadly cross-national research involving large panels of countries observed at limited numbers of time points. Here, and especially in the sociology literature, multilevel modeling has become a hegemonic, but often poorly understood, research tool. I argue that widely used types of multilevel models, commonly known as fixed effects models (FEMs) and random effects models (REMs), can produce wildly spurious results when applied to trended data due to mis-specification of error. I suggest that in most commonly encountered scenarios, difference models are more appropriate for use in QMCR.
08/2015; 1(1):86-114. DOI:10.5195/jwsr.2009.333
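The trended-data problem this abstract describes can be demonstrated with a short simulation (an illustrative sketch, not the paper's own analysis): two independent random walks are unrelated by construction, yet in levels they tend to show large spurious correlations, while first-differencing, the essence of a difference model, removes the artifact.

```python
import numpy as np

rng = np.random.default_rng(1)
T, reps = 200, 300

abs_r_levels, abs_r_diff = [], []
for _ in range(reps):
    # Two independent random walks: unrelated by construction, but both trend.
    x = np.cumsum(rng.normal(size=T))
    y = np.cumsum(rng.normal(size=T))
    abs_r_levels.append(abs(np.corrcoef(x, y)[0, 1]))
    # First-differencing strips out the stochastic trend.
    abs_r_diff.append(abs(np.corrcoef(np.diff(x), np.diff(y))[0, 1]))

mean_levels = float(np.mean(abs_r_levels))
mean_diff = float(np.mean(abs_r_diff))
print(f"mean |r| in levels: {mean_levels:.2f}, in differences: {mean_diff:.2f}")
```

Averaged over replications, the absolute correlation in levels is several times larger than in differences, even though the two series share nothing.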
- "To test whether the data collection methods differed in terms of the amount of random measurement error in assessments, we made use of multiple measures of candidate preferences administered both pre-election and post-election to estimate the parameters of a structural equation model (see, e.g., Kenny 1979). This model posited that the multiple measures were each imperfect indicators of latent candidate preferences at the two time points and permitted those preferences to change between the two interviews. "
ABSTRACT: In a national field experiment, the same questionnaires were administered simultaneously by RDD telephone interviewing, by the Internet with a probability sample, and by the Internet with a nonprobability sample of people who volunteered to do surveys for money. The probability samples were more representative of the nation than the nonprobability sample in terms of demographics and electoral participation, even after weighting. The nonprobability sample was biased toward being highly engaged in and knowledgeable about the survey's topic (politics). The telephone data manifested more random measurement error, more survey satisficing, and more social desirability response bias than did the Internet data, and the probability Internet sample manifested more random error and satisficing than did the volunteer Internet sample. Practice at completing surveys increased reporting accuracy among the probability Internet sample, and deciding only to do surveys on topics of personal interest enhanced reporting accuracy in the nonprobability Internet sample. Thus, the nonprobability Internet method yielded the most accurate self-reports from the most biased sample, while the probability Internet sample manifested the optimal combination of sample composition accuracy and self-report accuracy. These results suggest that Internet data collection from a probability sample yields more accurate results than do telephone interviewing and Internet data collection from nonprobability samples.
Public Opinion Quarterly 12/2009; 73(4). DOI:10.1093/poq/nfp075 · 2.25 Impact Factor
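The quoted passage treats multiple measures as imperfect indicators of a latent preference in order to quantify random measurement error. A minimal classical-test-theory sketch (my own illustration, with hypothetical error levels, not the study's model) shows the underlying logic: for two parallel measures of the same latent score, their correlation equals the measures' reliability, so more random error means a lower inter-measure correlation.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Latent candidate preference (hypothetical), standardized.
latent = rng.normal(size=n)

def parallel_measure_corr(error_sd):
    """Correlation of two parallel measures of the same latent score,
    each with independent random measurement error of the given SD."""
    m1 = latent + rng.normal(scale=error_sd, size=n)
    m2 = latent + rng.normal(scale=error_sd, size=n)
    return np.corrcoef(m1, m2)[0, 1]

# Under classical test theory the expected correlation is the reliability:
# var(latent) / (var(latent) + error_sd**2).
r_low_error = parallel_measure_corr(0.5)   # expected 1 / 1.25 = 0.80
r_high_error = parallel_measure_corr(1.5)  # expected 1 / 3.25 ~ 0.31

print(f"low-error r = {r_low_error:.2f}, high-error r = {r_high_error:.2f}")
```

This is the attenuation a full structural equation model corrects for when it lets latent preferences change between interviews while estimating each indicator's error.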
- "In CFA MTMM analysis, discriminant validity is therefore tested by setting the correlation among trait factors to 1.0, which is the equivalent of a single factor model, to see whether this model fits better than the model where the trait factor correlations are freely estimated. If it does not, discriminant validity is supported (Kenny, 1979; Marsh et al., 1988; Stacy et al., 1985). "
ABSTRACT: The transtheoretical model is a framework for explaining smoking uptake and cessation in adolescence, with decisional balance proposed as a driver of stage movement. The purpose of this study was to examine the factor structure and measurement equivalence/invariance (ME/I) of the decisional balance scale. We used confirmatory factor analysis followed by ME/I testing to examine the factorial validity of the scale in adolescent smokers and nonsmokers. Unlike previous studies, we found that a four-factor solution splitting cons into esthetic and health cons significantly improved the fit of the model to the data. ME/I testing showed that the same structure and measurement model held for smokers and nonsmokers, girls and boys, and across the three occasions the scale was administered. Cons showed strong evidence of constituting two separate first-order factors, and decisional balance for smoking in adolescence has good evidence of factorial validity.
International Journal of Behavioral Medicine 03/2009; 16(2):158-63. DOI:10.1007/s12529-008-9021-5 · 2.63 Impact Factor
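The discriminant-validity test quoted above, fixing the trait correlation to 1.0 versus estimating it freely, is in practice a nested-model chi-square difference test. The sketch below uses made-up fit statistics (illustrative numbers only, not from the cited study) to show the mechanics of the comparison.

```python
from scipy.stats import chi2

# Hypothetical fit statistics for two nested CFA models: constraining the
# trait correlation to 1.0 collapses two factors into one and frees up
# one parameter, i.e. one extra degree of freedom.
chisq_constrained, df_constrained = 310.4, 44  # trait correlation fixed at 1.0
chisq_free, df_free = 152.7, 43                # trait correlation freely estimated

delta_chisq = chisq_constrained - chisq_free
delta_df = df_constrained - df_free
p_value = chi2.sf(delta_chisq, delta_df)

# A significant worsening of fit under the r = 1.0 constraint supports
# discriminant validity: the two traits are not interchangeable.
print(f"delta chi2 = {delta_chisq:.1f}, delta df = {delta_df}, p = {p_value:.3g}")
```

If the constrained (single-factor) model does not fit significantly worse, the traits cannot be distinguished and discriminant validity is not supported.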