A method for testing group differences of scale validity in multiple population studies
A method for testing equality in validity of multi-component measuring instruments across populations is outlined. The approach is developed within the framework of covariance structure modelling and complements earlier research on examining group differences in scale reliability. The procedure is particularly useful for ascertaining the comparability of validity when constructing and developing measuring instruments. The method also provides ranges of plausible values for differences in composite validity across several populations and allows one to evaluate group discrepancies in the validity of behavioural scales. The approach is illustrated using data from a cognitive intervention study.
Available from: eric.ed.gov
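The paper's own estimation is done within covariance structure modelling, which is not reproduced here. As a purely illustrative sketch of the underlying idea, the snippet below treats composite validity as the correlation between a unit-weighted scale composite and a criterion, and uses a percentile bootstrap (an assumption, not the authors' interval method) to obtain a range of plausible values for the group difference in validity:

```python
# Illustrative only: composite validity taken as the correlation between a
# unit-weighted composite (sum of components) and a criterion; the bootstrap
# interval stands in for the model-based interval developed in the paper.
import numpy as np

def composite_validity(items, criterion):
    """Correlation between the unit-weighted composite and the criterion."""
    composite = np.asarray(items, dtype=float).sum(axis=1)
    return np.corrcoef(composite, criterion)[0, 1]

def validity_difference_ci(items_a, crit_a, items_b, crit_b,
                           n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for validity(group A) - validity(group B)."""
    rng = np.random.default_rng(seed)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        ia = rng.integers(0, len(crit_a), len(crit_a))  # resample group A
        ib = rng.integers(0, len(crit_b), len(crit_b))  # resample group B
        diffs[i] = (composite_validity(items_a[ia], crit_a[ia])
                    - composite_validity(items_b[ib], crit_b[ib]))
    return np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
```

If the resulting interval excludes zero, the two populations plausibly differ in composite validity, which is the kind of group comparison the abstract describes.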
- "Other approaches are the hierarchical complexity of the tasks (Commons & Miller, 2001) and the covariance structure modeling (Raykov, 2005). "
ABSTRACT: A linear model that makes concrete the idea of Wright & Stone for assessing the quality of a test is proposed, based on experience with several real tests. The model is a "test design line" that distributes the items of the test uniformly, centred on 0 logits. The "test design model" is related to the "mean absolute difference", a single parameter useful for determining the distribution of the items, the influence of the bias of the scale, and the test width. Results and applications of the model, from test design to test analysis and calibration, are shown. Validity is the most important attribute of the quality of a test and of the quality of decisions taken with its results. The main focus of validity-centred designs is the formal set of objective characteristics of the "domain theory" (Bunderson, 2005), where the measurement scales associated with the Rasch model have these attributes of invariance: (1) invariance of sample – measures independent of the sample of persons; (2) invariance of task – measures independent of the set of items; (3) invariance of unit and zero – measures have approximately equal intervals, a zero, and a constant unit; (4) invariance of interpretation – measures coherent with a construct framework, with milestones and level descriptors. Other approaches are the hierarchical complexity of the tasks (Commons & Miller, 2001) and covariance structure modeling (Raykov, 2005). In contrast to the definitions of validity by Messick (1998) and Cronbach & Meehl (1955), the simple causal definition of construct validity proposed by Borsboom et al. (2004) provides a framework for logistic models. Bond (2004) explains how the Rasch model meets Messick's requirements.
British Journal of Mathematical and Statistical Psychology 12/2010; 58(2):383 - 384. DOI:10.1348/000711005X64141 · 2.17 Impact Factor
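The abstract's "test design line" (items spaced uniformly and centred on 0 logits) and its single spread parameter can be sketched as follows. This is a minimal illustration, not the authors' model: the dichotomous Rasch response function is standard, but the mean-absolute-difference summary below is an assumption about how such a spread parameter might be computed.

```python
# Sketch of the dichotomous Rasch model plus a uniformly spaced "test
# design line" of item difficulties centred on 0 logits. The MAD summary
# is an illustrative assumption, not the paper's exact definition.
import numpy as np

def rasch_prob(theta, b):
    """P(correct response) for ability theta and item difficulty b (logits)."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def design_line(n_items, width):
    """n_items difficulties spaced uniformly over [-width/2, +width/2]."""
    return np.linspace(-width / 2, width / 2, n_items)

def mean_abs_difference(difficulties):
    """Mean absolute pairwise difference between item difficulties."""
    d = np.asarray(difficulties, dtype=float)
    return np.abs(d[:, None] - d[None, :]).mean()
```

For example, `design_line(5, 4.0)` yields difficulties `[-2, -1, 0, 1, 2]`, which average to 0 logits as the design line requires; widening the line increases the mean absolute difference, so the single parameter tracks the test width.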
ABSTRACT: Objective: This study aims to present the adaptation of the Selective Reminding Test (SRT) and Word List Generation (WLG) to the Portuguese population, within the validation of the Brief Repeatable Battery of Neuropsychological Tests (BRBN-T) for multiple sclerosis (MS) patients. Method: 66 healthy participants (54.5% female) recruited from the community volunteered to participate in this study. Results: A combination of procedures from Classical Test Theory (CTT) and Item Response Theory (IRT) was applied to item analysis and selection. For each SRT list, 12 words were selected, and 3 letters were chosen for WLG, to constitute the final versions of these tests for the Portuguese population. Conclusion: The combination of CTT and IRT maximized the decision-making process in the adaptation of the SRT and WLG to a different culture and language (Portuguese). The relevance of this study lies in the production of reliable standardized neuropsychological tests, so that they can be used to facilitate a more rigorous monitoring of the evolution of MS, as well as of any therapeutic effects and cognitive rehabilitation. © 2015, Associacao Arquivos de Neuro-Psiquiatria. All rights reserved.
Arquivos de neuro-psiquiatria 10/2015; 73(10):867-872. DOI:10.1590/0004-282X20150134 · 0.84 Impact Factor
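Two CTT statistics commonly used for item selection of the kind described above are the item difficulty (proportion correct) and the corrected item-total correlation. The sketch below illustrates only the CTT side; the IRT side of the analysis is not reproduced, and the function names are illustrative rather than taken from the study.

```python
# Minimal sketch of two Classical Test Theory item-selection statistics.
# Rows are persons, columns are items, entries are 0/1 scored responses.
import numpy as np

def item_difficulty(responses):
    """Proportion of correct responses per item."""
    return np.asarray(responses, dtype=float).mean(axis=0)

def corrected_item_total(responses):
    """Correlation of each item with the total score excluding that item."""
    X = np.asarray(responses, dtype=float)
    total = X.sum(axis=1)
    return np.array([np.corrcoef(X[:, j], total - X[:, j])[0, 1]
                     for j in range(X.shape[1])])
```

Items with extreme difficulties or low corrected item-total correlations are typical candidates for removal when trimming a word list to a fixed length, as in the 12-word SRT lists.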