In a multidimensional computerized adaptive testing (MIRT CAT) scenario, an ability estimate in one dimension provides clues for estimating ability in the other dimensions. This feature may make MIRT CAT's item selection and scoring algorithms more efficient than those of its counterpart, unidimensional CAT (UCAT). However, when practitioners plan to deploy MIRT CAT in a real testing program, interesting problems present themselves. When simultaneously measuring an examinee's Reading and Mathematics abilities, should we administer Reading items first and Mathematics items next, Mathematics items first and Reading items next, or mixed items (e.g., a Reading item followed by a Mathematics item)? Will the order in which different types of items are administered make a significant difference in ability estimates and item exposure rates? This sort of context effect never occurs in UCAT but might arise in MIRT CAT. The issue is critical and should be clarified before a real MIRT CAT program is put in place. The current research design was intended to assess these context effects.
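The cross-dimensional "clue" can be sketched with a minimal example. Assuming a standard bivariate normal prior over (Reading, Mathematics) abilities with an illustrative correlation rho = 0.6 (both rho and the Reading estimate below are hypothetical values, not from this study), conditioning on a provisional Reading estimate shifts the Mathematics prior mean toward it and shrinks its variance:

```python
import numpy as np

rho = 0.6            # assumed Reading-Math ability correlation (illustrative)
theta_reading = 1.2  # provisional Reading estimate after some items (illustrative)

# Prior: (theta_R, theta_M) ~ N(0, [[1, rho], [rho, 1]]).
# Conditioning on theta_R gives the updated Math distribution:
#   theta_M | theta_R ~ N(rho * theta_R, 1 - rho^2)
cond_mean = rho * theta_reading   # Math prior mean moves toward the Reading estimate
cond_var = 1.0 - rho**2           # Math prior variance shrinks below 1

print(cond_mean, cond_var)  # 0.72 0.64
```

Because the conditional variance (0.64) is smaller than the marginal variance (1.0), fewer Mathematics items are needed to reach a target precision once Reading has been measured, which is exactly why item ordering could matter in MIRT CAT but not in UCAT.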