The determinants of reading comprehension.

Educational and Psychological Measurement (Impact Factor: 1.07). 01/1962; DOI: 10.1177/001316446202200203

ABSTRACT To determine the relative variance of test content, method, and error components, parallel forms of 7 specially constructed vocabulary and reading tests were administered to 108 British and 75 American college students. Although the results did not support Vernon's belief that method factors would have the strongest influence, higher validities were obtained with a reading test employing an unconventional method. Centroid factor analyses revealed a strong Comprehension factor, orthogonal to the Vocabulary factor, among both groups in the reading tests. Several general observations are also offered.

  • ABSTRACT: The increasing numbers of English language learners (ELLs) in Canadian schools pose a significant challenge to the standards-based provincial tests used to measure proficiency levels of all students from various linguistic and cultural backgrounds. This study investigated the extent to which reading item bundles or items on the Ontario Secondary School Literacy Test (OSSLT) function differentially for Grade 10 students who speak only or mostly English at home (first language [L1] students; n = 1,969) and those whose home language is something other than English (ELL students; n = 3,675). Based on Roussos and Stout's (1996a) multidimensionality-based DIF analysis paradigm, a variety of substantive and statistical techniques were employed: (a) content review by English as a second language (ESL) experts, (b) exploratory and confirmatory dimensionality analyses, and (c) confirmatory differential bundle functioning (DBF)/differential item functioning (DIF) procedures. The evidence gathered in the study indicated that items associated with vocabulary knowledge favored L1 students, whereas items requiring grammatical knowledge or integrated reading and writing skill favored ELL students. Instructional implications for the promotion of effective literacy education programs are discussed, as is the development of a literacy curriculum that can meet the needs of linguistically diverse learners in a multilingual context.
    Language Learning 11/2009; 59(4):825-865. Impact Factor: 1.22
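The abstract above names DIF procedures without showing how a DIF statistic is computed. As an illustration only (the study itself follows Roussos and Stout's multidimensionality-based paradigm, not this exact statistic), the sketch below implements the standard Mantel-Haenszel DIF index for a single dichotomous item, stratifying examinees by total score. The function name and input format are hypothetical, not from the article.

```python
import math
from collections import defaultdict

def mantel_haenszel_dif(responses):
    """Mantel-Haenszel DIF index (ETS delta metric) for one item.

    `responses` is a list of (group, total_score, correct) tuples:
    group is "ref" or "focal", total_score is the matching criterion,
    correct is 1 or 0 on the studied item. Illustrative helper only.
    """
    # For each score stratum, keep [right, wrong] counts per group.
    strata = defaultdict(lambda: {"ref": [0, 0], "focal": [0, 0]})
    for group, score, correct in responses:
        strata[score][group][0 if correct else 1] += 1

    num = den = 0.0
    for cells in strata.values():
        r_right, r_wrong = cells["ref"]
        f_right, f_wrong = cells["focal"]
        n = r_right + r_wrong + f_right + f_wrong
        if n == 0:
            continue
        num += r_right * f_wrong / n
        den += f_right * r_wrong / n

    alpha = num / den  # common odds ratio across strata
    # ETS delta: negative values mean the item favors the reference
    # group (i.e., works against the focal group) after matching.
    return -2.35 * math.log(alpha)
```

With balanced data (equal success odds in both groups within a stratum) the index is 0; large negative values would flag an item for review, which is the kind of statistical screen that the content review by ESL experts then complements.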
  • ABSTRACT: High school seniors (84 males, 77 females) were randomly assigned to one of two treatment groups. One group received a programmed text designed to teach Ss to answer every item on an examination, whether or not the directions included a penalty for incorrect answers. The other group was administered a programmed text to teach certain selected aspects of test-wiseness. Each group served as the control group for the other. The following day all Ss were administered a measure of willingness to guess and a measure of test-wiseness. Two weeks later, all Ss received additional measures of willingness to guess and test-wiseness. Analysis of the data indicated that the group that received the guessing program answered significantly more items than its control group (on both the immediate and delayed tests), even though there was a penalty for incorrect answers. Similarly, the group exposed to the test-wiseness program achieved significantly higher mean test-wiseness scores than its control group.
    Journal of Educational Measurement 09/2005; 7(4):247-254. Impact Factor: 1.00
  • ABSTRACT: Multiple-choice, short-answer, and extended-response item formats were used in the Third International Mathematics and Science Study to assess student achievement in mathematics and science at Grades 7 and 8 in more than 40 countries around the world. Data pertaining to science indicate that the standings of some countries relative to others change when performance is measured via the different item formats. The question addressed in the present article is the following: Can the instability of ranks in this case be attributed principally to item format, or are other important factors at work? It is argued that the findings provide further evidence that comparing student achievement across countries is a very complex undertaking indeed.
    Educational Measurement: Issues and Practice 10/2005; 21(4):27-38.