Article

The determinants of reading comprehension.

Educational and Psychological Measurement (Impact Factor: 1.07). 01/1962; DOI: 10.1177/001316446202200203

ABSTRACT To determine the relative contributions of test content, method, and error components to score variance, parallel forms of 7 specially constructed vocabulary and reading tests were administered to 108 British and 75 American college students. Although the results did not support Vernon's belief that method factors would have the strongest influence, higher validities were obtained with a reading test employing an unconventional method. Centroid factor analyses revealed a strong Comprehension factor, orthogonal to the Vocabulary factor, among both groups in the reading tests. Several general observations are also offered.
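
The content/method/error decomposition described above follows the classical partition of observed score variance; a minimal sketch, assuming the three components are uncorrelated (the component labels here are illustrative, not the paper's notation):

    \sigma^2_X = \sigma^2_{\mathrm{content}} + \sigma^2_{\mathrm{method}} + \sigma^2_{\mathrm{error}}

Under these assumptions, the correlation between parallel forms that share both content and method estimates (\sigma^2_{\mathrm{content}} + \sigma^2_{\mathrm{method}}) / \sigma^2_X, which is why varying the method across the 7 tests allows the method component to be separated from content and error.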

Related publications:
  • ABSTRACT: The argument presented in this paper is that effective instruction of children with reading difficulties relies little on accurate diagnosis, either of the apparent cause of the reading problem or of its nature. The former argument, that diagnosing the underlying cause of the problem is futile, is not new, but it bears restatement, as there is no sign that the practice is abating among certain health professionals. The latter argument, that diagnosing the child's relative strengths and weaknesses in reading is also irrelevant to instruction, is more controversial, as it contradicts standard educational practice. Standardised reading tests, if properly administered and interpreted, have a part to play in identifying children with reading problems, but not in the diagnosis or treatment of such problems. Children with reading difficulties would be better served if more attention were paid to instruction and less to diagnosis.
    Educational Psychology. 01/1992; 12:225-237.
  • ABSTRACT: Criticism of the multiple-choice format is in no short supply within the educational community, and it seems to revolve around an alleged inability of multiple-choice tests to assess higher-order thinking skills as well as constructed-response tests do. To investigate the credibility of such assertions, I constructed the exams for two measurement classes as half multiple choice and half constructed response; the exams contained equal numbers of items in each format, written for the knowledge, comprehension, application, and analysis levels of Bloom's taxonomy. A pattern of generally high disattenuated correlations between multiple-choice and constructed-response measures within each taxonomic level was found (the disattenuation correction is sketched after this list), indicating that the two formats measure similar constructs at different levels of complexity. Factor analysis corroborated these interpretations.
    Journal of Experimental Education. 01/1994; 62(2):143-157.
  • Scandinavian Journal of Educational Research. 01/1976; 20(1):25-40. (Impact Factor: 0.27)
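
The disattenuated correlations mentioned in the Journal of Experimental Education abstract are the classical correction for attenuation; a minimal worked formula, assuming r_{XY} is the observed correlation between the multiple-choice and constructed-response scores and r_{XX}, r_{YY} are their reliabilities:

    \hat{\rho} = \frac{r_{XY}}{\sqrt{r_{XX}\, r_{YY}}}

For example, with r_{XY} = .60, r_{XX} = .81, and r_{YY} = .64 (illustrative values, not the paper's data), \hat{\rho} = .60 / \sqrt{.81 \times .64} = .60 / .72 \approx .83.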