December 1999
Alberta Journal of Educational Research
This study was an exploratory investigation of the consequences of using a complex test and item analysis approach in a large-scale testing program that has historically used conventional number-right scoring. Any modification to a complex, high-stakes testing program with a long record of successful operation must be carefully evaluated to ensure a high probability that the change is an improvement. If a shift from number-right scoring to item response theory (IRT) scoring is under consideration, the question arises: does the added complexity and difficulty of IRT pay significant dividends in better achievement estimates? In terms of consequences, the choice of domain score estimate made little difference: every estimate yielded approximately the same results in mean, standard deviation, error of estimation, and correlation with other sources of evidence about student achievement.
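To make the contrast concrete, the two scoring approaches compared in the abstract can be sketched as follows. This is an illustrative toy, not the study's actual procedure: it assumes a Rasch (one-parameter logistic) IRT model with known item difficulties and uses Newton-Raphson to find the maximum-likelihood ability estimate, alongside the conventional number-right count.

```python
import math

def number_right(responses):
    """Conventional number-right score: the count of correct answers."""
    return sum(responses)

def rasch_theta(responses, difficulties, iters=25):
    """Illustrative maximum-likelihood ability estimate under a Rasch model.

    Assumes P(correct) = 1 / (1 + exp(-(theta - b))) with known item
    difficulties b. Solves for theta by Newton-Raphson on the
    log-likelihood. The MLE is undefined for all-wrong or all-right
    response patterns.
    """
    r, n = sum(responses), len(responses)
    if r == 0 or r == n:
        raise ValueError("MLE undefined for perfect or zero scores")
    theta = 0.0
    for _ in range(iters):
        grad, hess = 0.0, 0.0
        for x, b in zip(responses, difficulties):
            p = 1.0 / (1.0 + math.exp(-(theta - b)))
            grad += x - p          # first derivative of log-likelihood
            hess -= p * (1.0 - p)  # second derivative (always negative)
        theta -= grad / hess       # Newton-Raphson update
    return theta
```

Under the Rasch model the number-right score is a sufficient statistic for ability, so two examinees with the same count of correct answers receive the same theta on a common set of items, which is one intuition for why the two scoring methods can rank examinees so similarly.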