Article

A study of the effects of out‐of‐level testing with poor readers in the intermediate grades


Abstract

Forty-seven fourth- and fifth-grade students whose raw scores on the Gates-MacGinitie Level D (on-level test) were determined to be unsuitable were administered an out-of-level test (Level C) to ascertain whether the out-of-level test would be more appropriate for those students. Two questions were addressed in this study. First, are there significant differences in students' derived scores on the Gates-MacGinitie when an on-level test (Level D) judged to be unsuitable is compared with an out-of-level test (Level C)? Second, is the out-of-level test more suitable in terms of Roberts' criterion for the raw scores achieved by the students? There were no significant differences between the derived scores from on-level and out-of-level testing for any of the subtests. The out-of-level raw scores did fall within the accepted range for the test to be considered suitable and reliable.
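
The suitability judgment above rests on a raw-score floor. Roberts' criterion is not spelled out in this abstract, so the sketch below is only a rough illustration of the general idea: a raw score on a multiple-choice subtest is treated as interpretable only if it clearly exceeds what guessing alone would produce. The item count, option count, and margin are hypothetical, not values from the Gates-MacGinitie or from Roberts.

# Illustrative only: a chance-level suitability check for raw scores.
# The exact thresholds of Roberts' criterion are an assumption here; this
# sketch just formalizes the idea that scores near the guessing floor are
# unreliable and signal that a lower test level may be more appropriate.

def chance_score(n_items: int, n_options: int) -> float:
    """Expected raw score from random guessing on a multiple-choice subtest."""
    return n_items / n_options

def is_suitable(raw_score: int, n_items: int, n_options: int,
                margin: float = 1.5) -> bool:
    """Treat a raw score as interpretable only if it exceeds the chance
    expectation by a chosen margin (the margin is a hypothetical parameter)."""
    return raw_score > margin * chance_score(n_items, n_options)

# Hypothetical example: a 45-item, 4-option comprehension subtest.
for score in (9, 12, 20):
    print(score, is_suitable(score, n_items=45, n_options=4))
# Scores of 9 and 12 sit near the guessing floor (~11.25) and would flag
# the on-level test as unsuitable; a score of 20 clears the threshold.
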


Article
Out-of-level testing refers to the practice of assessing a student with a test intended for students at a higher or lower grade level. Although the appropriateness of out-of-level testing for accountability purposes has been questioned by educators and policymakers, incorporating out-of-level items in formative assessments to provide accurate feedback is recommended. This study used a commercial item bank with vertically scaled items across grades and simulated student responses in a computerized adaptive testing (CAT) environment. Results suggested that administering out-of-level items improved measurement accuracy and test efficiency for students who perform significantly above or below their grade-level peers. The study has direct implications for the relevance, applicability, and benefits of using out-of-level items in CAT.
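
The design described here, vertically scaled items administered adaptively to simulated examinees, can be sketched in a few lines. The sketch below is not the authors' simulation; it assumes a simple Rasch (1PL) response model, a small hypothetical item bank spanning grades 3-5 on one vertical scale, a crude step-size ability update, and maximum-information item selection, simply to show why off-grade items tend to be selected for examinees far from grade-level difficulty.

import math
import random

# Minimal CAT sketch (not the study's actual simulation): Rasch responses,
# a hypothetical vertically scaled item bank, maximum-information selection.
random.seed(0)

# Hypothetical bank: 20 items per grade, difficulties on one vertical scale.
bank = []
for grade, mean_b in [(3, -1.0), (4, 0.0), (5, 1.0)]:
    for _ in range(20):
        bank.append({"grade": grade, "b": random.gauss(mean_b, 0.5)})

def p_correct(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def information(theta, b):
    p = p_correct(theta, b)
    return p * (1.0 - p)

def run_cat(true_theta, on_grade=4, test_length=15):
    theta, step, off_grade = 0.0, 1.0, 0
    available = list(bank)
    for _ in range(test_length):
        # Select the remaining item most informative at the current estimate.
        item = max(available, key=lambda it: information(theta, it["b"]))
        available.remove(item)
        off_grade += item["grade"] != on_grade
        correct = random.random() < p_correct(true_theta, item["b"])
        # Crude step-halving update; a real CAT would use MLE or EAP scoring.
        theta += step if correct else -step
        step = max(step * 0.7, 0.2)
    return theta, off_grade

# A simulated examinee far below (or above) grade level draws mostly
# off-grade items, which is where adaptive selection gains accuracy.
for ability in (-2.0, 0.0, 2.0):
    est, off = run_cat(ability)
    print(f"true={ability:+.1f}  estimate={est:+.2f}  off-grade items={off}/15")
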
Article
"Out-of-level" testing, assigning pupils to levels of a standardized test on the basis of previous (close to chance-level) test scores rather than their present grade assignment, has been used in Philadelphia since 1968. A large number of pupils were tested in this way in 1970, in contrast to the two previous years, and overall performance indices seemed depressed as a result. The test results of 1,500 children tested out-of-level in 1970 were reviewed to see whether their performance supported the rationale behind the practice. More reliable performance clearly resulted from the procedure for this sample, but there is considerable question about the publisher's assurance regarding the comparability of in-level and out-of-level scores.
Article
A large proportion of the federal funds provided to local education agencies (LEAs) under Title I of the Elementary and Secondary Education Act of 1965 has been spent on attempts to remediate reading deficiencies. Under the law, a student's eligibility for Title I services is predicated upon measured educational deficiencies and attendance at a school that enrolls large numbers of children from federally defined poverty-level families. Procedures employed by LEAs to identify students eligible for Title I reading program services have traditionally been based upon students' scores on various nationally standardized reading tests. Most attempts to assess the impact of Title I remedial reading programs have focused upon changes in student reading scores as measured by these same nationally standardized tests.

In the early years of Title I, the testing procedures employed in student selection and program evaluation called for students to be administered a standardized test designed for their current grade level, regardless of their level of academic functioning. This system of determining testing levels was criticized by many teachers, who felt that this "grade-level testing" of Title I students yielded unreliable scores. The teachers argued that large numbers of Title I students were reading so far below grade level that they were unable to cope with a grade-level test and, as a result, their scores were largely a function of guessing. In 1971, the Rhode Island State Department of Education responded to these criticisms by authorizing the testing of Title I students at their reading instructional level rather than their actual grade level. The final decision as to whether to test at grade level or instructional level was left to the discretion of the LEAs. Consequently, a variety of testing models evolved for Title I student selection and program assessment in reading. Some school systems continued to test all students at grade level, some tested all students one year below grade level, and others determined the level of test to be administered by considering each student individually.

Subsequently, this out-of-level testing led to a new set of problems related to data interpretation and program evaluation. The implicit assumption seemed to be that "data are data" and that a score on an instructional-level test, a test designed for, and normed on, a population of students at a lower grade level, would be comparable to the score a student would receive on the test designed for his or her actual grade level. This assumption is reflected in Rhode Island Title I annual reports over a three-year period (Rhode Island Department of Education, 1972, 1973, 1974). Grade-equivalent scores and score changes of students participating in Title I reading programs in various communities were combined, analyzed, and reported as state averages. Communities having reading programs were also ranked according to the magnitude of reported average program change scores. These procedures are not
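
The comparability problem described above can be made concrete. The sketch below uses entirely hypothetical grade-equivalent conversion tables, not any publisher's actual norms: the same raw score converts to different grade-equivalent values depending on which test level was administered, so pooling or averaging grade equivalents across students tested at different levels rests on the "data are data" assumption the author questions.

# Hypothetical grade-equivalent (GE) conversion tables for two test levels.
# Values are invented purely to illustrate the comparability problem; they
# are not taken from any actual norming study.
ge_table = {
    "grade_level_test": {20: 2.8, 25: 3.4, 30: 4.0, 35: 4.6},
    "instructional_level_test": {20: 2.1, 25: 2.6, 30: 3.1, 35: 3.6},
}

def grade_equivalent(level: str, raw_score: int) -> float:
    """Look up the GE for a raw score on a given test level (hypothetical)."""
    return ge_table[level][raw_score]

# The same raw score of 30 yields different GEs on the two levels, so
# averaging GEs across students tested at different levels mixes numbers
# that are not on a common footing.
print(grade_equivalent("grade_level_test", 30))          # 4.0
print(grade_equivalent("instructional_level_test", 30))  # 3.1
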
References

Roberts, A. O. H. Out-of-Level Testing (ESEA Title I Evaluation and Reporting System, Technical Paper No. 6). Mountain View, CA: RMC Research Corp., 1976.

Smith, L. L., Johns, J. L., Ganschow, L., & Masztal, N. B. Using Grade Level vs. Out-of-Level Reading Tests with Remedial Students. The Reading Teacher, 36 (February 1983), 550-53.