Giftedness assessment with the basic intelligence test (CFT 20-R): Psychometric properties and measurement equivalence

Diagnostica (Impact Factor: 0.72). 04/2008; 54(4):184-192. DOI: 10.1026/0012-1924.54.4.184

ABSTRACT The CFT 20-R is a test of fluid intelligence widely used in psychological practice, including the diagnosis of intellectual giftedness. This article presents a psychometric examination of Short Form 1 of the basic intelligence test (CFT 20-R; Weiß, 2006), focusing on the measurement equivalence of the instrument between average-ability and higher-ability students. Results are reported at both the item and the subtest level. A sample of N = 1886 male and female students from Hauptschule, Realschule, and Gymnasium school tracks completed Short Form 1 of the CFT 20-R. DIF analyses at the item level comparing higher-ability (IQ ≥ 120) with average-ability students (IQ < 120) show that measurement equivalence largely holds for the item difficulties, but less so for the item discriminations. Measurement equivalence at the subtest level is high. The instrument is therefore well suited for an initial screening for (high) giftedness.
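The item-level DIF comparison described in the abstract can be sketched with a Mantel-Haenszel test, one common DIF procedure. The article does not state which DIF method was used, so the choice of statistic, the group labels, and the data layout below are assumptions for illustration only.

```python
import math
from collections import defaultdict

def mantel_haenszel_dif(item_correct, group, stratum):
    """Common Mantel-Haenszel odds ratio and ETS delta for one item.

    item_correct: 0/1 responses to the studied item
    group:        "ref" (e.g. IQ < 120) or "focal" (e.g. IQ >= 120)
    stratum:      matching variable, e.g. rest score on the remaining items
    """
    # Collect a 2x2 table (group x correct/incorrect) per score stratum.
    tables = defaultdict(lambda: [[0, 0], [0, 0]])
    for y, g, s in zip(item_correct, group, stratum):
        row = 0 if g == "ref" else 1
        tables[s][row][0 if y == 1 else 1] += 1
    num = den = 0.0
    for (a, b), (c, d) in tables.values():
        n = a + b + c + d
        num += a * d / n  # reference correct, focal incorrect
        den += b * c / n  # reference incorrect, focal correct
    if den == 0.0:
        return None, None
    alpha = num / den                # alpha = 1 means no DIF
    delta = -2.35 * math.log(alpha)  # ETS delta-MH scale
    return alpha, delta
```

An item flagged for DIF would show a common odds ratio clearly different from 1 (delta clearly different from 0) after matching on ability.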

  • ABSTRACT: Combining items into parcels in confirmatory factor analysis (CFA) can improve model estimation and fit. Because adequate model fit is imperative for CFA tests of measurement invariance, parcels have frequently been used. However, the use of parcels as indicators in a CFA model can have serious detrimental effects on tests of measurement invariance. Using simulated data with a known lack of invariance, the authors illustrate how models using parcels as indicator variables erroneously indicate that measurement invariance exists much more often than do models using items as indicators. Moreover, item-by-item tests of measurement invariance were often more informative than were tests of the entire parameter matrices.
    Organizational Research Methods 07/2006; 9(3):369-403. DOI:10.1177/1094428105283384 · 3.26 Impact Factor
  • ABSTRACT: The likelihood ratio test statistic G2(dif) is widely used for comparing the fit of nested models in categorical data analysis. In large samples, this statistic is distributed as a chi-square with degrees of freedom equal to the difference in degrees of freedom between the tested models, but only if the least restrictive model is correctly specified. Yet, this statistic is often used in applications without assessing the adequacy of the least restrictive model. This may result in incorrect substantive conclusions as the above large sample reference distribution for G2(dif) is no longer appropriate. Rather, its large sample distribution will depend on the degree of model misspecification of the least restrictive model. To illustrate this, a simulation study is performed where this statistic is used to compare nested item response theory models under various degrees of misspecification of the least restrictive model. G2(dif) was found to be robust only under small model misspecification of the least restrictive model. Consequently, we argue that some indication of the absolute goodness of fit of the least restrictive model is needed before employing G2(dif) to assess relative model fit.
    Multivariate Behavioral Research 03/2006; 41(1-1):55-64. DOI:10.1207/s15327906mbr4101_4 · 2.48 Impact Factor
  • ABSTRACT: Confirmatory factor analysis for ordered-categorical measures (CFA-OCM) and rating scale item response theory (IRT) analyses explore measurement bias across gender on the Children's Depression Inventory (CDI) in a community sample of 779 children in the third and sixth grades. Given the set of statistical criteria, IRT and CFA-OCM generally establish measurement equivalence. Results substantiate both Craighead et al.'s five-factor model and IRT models with the CDI, demonstrate their convergence regarding bias, support the use of the CDI in cross-gender comparisons, suggest a separate scoring method need not be developed for children in this age range, and provide evidence that previously noted developmental similarities in depression reflect true similarities. Given measurement invariance, observed score analyses demonstrate no statistically significant differences between boys and girls on the CDI total score and four scores created as a function of the factor model. However, girls endorse statistically significant elevated levels on a dysphoria score.
    Educational and Psychological Measurement 04/2008; 68(2):281-303. DOI:10.1177/0013164407308471 · 1.15 Impact Factor
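The G2(dif) statistic discussed in the second abstract above is simply the drop in -2 log-likelihood between a restricted model and a less restrictive (full) model, referred to a chi-square distribution. A minimal sketch, with the log-likelihoods and free-parameter counts as placeholder inputs:

```python
def g2_dif(loglik_restricted, loglik_full, n_params_restricted, n_params_full):
    """Likelihood ratio statistic for two nested models and its df.

    The chi-square reference distribution is appropriate only if the
    less restrictive (full) model is correctly specified, as the
    abstract above warns.
    """
    g2 = -2.0 * (loglik_restricted - loglik_full)  # nonnegative for nested fits
    df = n_params_full - n_params_restricted       # difference in free parameters
    return g2, df
```

The resulting G2 would then be compared against a chi-square quantile with df degrees of freedom (e.g. via `scipy.stats.chi2.sf(g2, df)`).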
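The item parceling criticized in the first abstract above can be illustrated with a simple round-robin item-to-parcel assignment, with each parcel scored as the mean of its items. Both the assignment scheme and the use of parcel means are assumptions for this sketch, not a prescription from the article.

```python
def build_parcels(item_scores, n_parcels):
    """Assign items to parcels round-robin and score parcels by item means.

    item_scores: list of per-person lists of item scores
    Returns a per-person list of parcel means.
    """
    parcels = []
    for person in item_scores:
        groups = [[] for _ in range(n_parcels)]
        for i, x in enumerate(person):
            groups[i % n_parcels].append(x)  # item i goes to parcel i mod n_parcels
        parcels.append([sum(g) / len(g) for g in groups])
    return parcels
```

The parcel means would then replace the raw items as CFA indicators, which, as the simulation above shows, can mask a lack of measurement invariance that item-level indicators would reveal.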
