The determinants of reading comprehension.

Educational and Psychological Measurement (Impact Factor: 1.17). 07/1962; DOI: 10.1177/001316446202200203

ABSTRACT To determine the relative variance of test content, method, and error components, parallel forms of 7 specially constructed vocabulary and reading tests were administered to 108 British and 75 American college students. Although the results did not support Vernon's belief that method factors would have the strongest influence, higher validities were obtained with a reading test employing an unconventional method. Centroid factor analyses revealed a strong Comprehension factor, orthogonal to the Vocabulary factor, in both groups on the reading tests. Several general observations are also offered.

  • ABSTRACT: In this study 590 third-grade students took one of four reading comprehension tests with either multiple-choice items or open-ended items. Each also took 32 tests indicating 16 semantic Structure-of-Intellect (SI) abilities. Four conditions or groups were distinguished on the basis of the reading comprehension tests. The four 33 x 33 correlation matrices were analyzed simultaneously with a four-group LISREL model. The 16 intellectual abilities explained approximately 62% of the variance in true reading comprehension scores. None of the SI abilities proved to be differentially related to item type. Therefore, it was concluded that item type for reading comprehension is congeneric with respect to the SI abilities measured. Index terms: construct validity, item format, free response, reading comprehension, Structure-of-Intellect model.
    Applied Psychological Measurement 03/1990; 14(1):1-12. DOI:10.1177/014662169001400101 · 1.49 Impact Factor
  • ABSTRACT: The argument presented in this paper is that effective instruction of children with reading difficulties relies little on accurate diagnosis either of the apparent cause of the reading problem or the nature of the reading problem. The former argument—that the diagnosis of the underlying cause of the problem is futile—is not new but bears restatement as there is no sign that the practice is abating among certain health professionals. The latter argument—that the diagnosis of the child's relative strengths and weaknesses in reading is also irrelevant to instruction—is more controversial as it contradicts standard educational practice. It is considered that standardised reading tests, if properly administered and interpreted, have a part to play in the identification of children with reading problems, but not in the diagnosis or treatment of such problems. Children with reading difficulties would be better served if more attention were paid to instruction and less to diagnosis.
    Educational Psychology 01/1992; 12:225-237. DOI:10.1080/0144341920120306 · 1.02 Impact Factor
  • ABSTRACT: Criticism within the educational community of the multiple-choice format is in no short supply and seems to revolve around an alleged inability of multiple-choice tests to assess higher-order thinking skills as well as constructed-response tests measure these skills. To investigate the credibility of such assertions, I constructed the exams for two measurement classes as half multiple choice and half constructed response; the exams contained equal numbers of items in each format written for the knowledge, comprehension, application, and analysis levels of Bloom's taxonomy. A pattern of generally high disattenuated correlations between multiple-choice and constructed-response measures within each taxonomic level was found, indicating that the two formats measure similar constructs at different levels of complexity. Factor analysis corroborated these interpretations.
    The Journal of Experimental Education 01/1994; 62(2):143-157. DOI:10.1080/00220973.1994.9943836 · 1.09 Impact Factor