Article

Special issues for building computerized-adaptive tests for measuring patient-reported outcomes: the National Institute of Health's investment in new technology.

National Cancer Institute, National Institutes of Health, Bethesda, Maryland 20892-7344, USA.
Medical Care (Impact Factor: 2.94). 12/2006; 44(11 Suppl 3):S198-204. DOI: 10.1097/01.mlr.0000245146.77104.50
Source: PubMed
    • "Such conflicting findings of performance between the newer item selection methods versus the classical MFI inspired us to undertake this study. Furthermore, interest in polytomous items is growing with the recent use of CAT technology in patient-reported outcomes (PROs) such as mental health, pain, fatigue, and physical functioning (Reeve, 2006). Most PRO measures are constructed using Likert-type items more befitting of polytomous models."
    ABSTRACT: Item selection is a core component in computerized adaptive testing (CAT). Several studies have evaluated new and classical selection methods; however, the few that have applied such methods to polytomous items have reported conflicting results. To clarify these discrepancies and further investigate selection method properties, six different selection methods are compared systematically. The results showed no clear benefit from the more sophisticated selection criteria and showed that one method previously believed to be superior, the maximum expected posterior weighted information (MEPWI), is mathematically equivalent to a simpler method, the maximum posterior weighted information (MPWI).
    Applied Psychological Measurement 09/2009; 33(6):419-440. DOI: 10.1177/0146621608327801 · 1.49 Impact Factor
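
Several of the selection criteria named in the abstract above weight item information by the current posterior for the latent trait rather than evaluating it at a single point estimate. As a rough illustration only, and not the implementation used in the cited study, the sketch below applies maximum posterior weighted information (MPWI) selection to polytomous items under a graded response model; the quadrature grid, the toy item parameters, and all function names are assumptions made for this example.

```python
# Sketch of posterior-weighted item selection for polytomous (graded response
# model) items.  All names, parameter values, and the quadrature scheme are
# illustrative assumptions, not the implementation from the cited study.
import numpy as np

THETA = np.linspace(-4, 4, 81)          # quadrature grid for the latent trait
PRIOR = np.exp(-0.5 * THETA**2)         # standard-normal prior (unnormalized)

def grm_probs(a, b, theta):
    """Category probabilities P_k(theta) under Samejima's graded response model.
    a: discrimination (scalar); b: ordered thresholds, shape (K-1,)."""
    # Cumulative ("star") probabilities, padded with 1 and 0 at the extremes.
    star = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b[None, :])))
    star = np.column_stack([np.ones_like(theta), star, np.zeros_like(theta)])
    return star[:, :-1] - star[:, 1:]   # shape (len(theta), K)

def grm_info(a, b, theta):
    """Item information I(theta) = sum_k (dP_k/dtheta)^2 / P_k."""
    star = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b[None, :])))
    star = np.column_stack([np.ones_like(theta), star, np.zeros_like(theta)])
    dstar = a * star * (1.0 - star)     # derivative of each cumulative curve
    p = star[:, :-1] - star[:, 1:]
    dp = dstar[:, :-1] - dstar[:, 1:]
    return np.sum(dp**2 / np.clip(p, 1e-10, None), axis=1)

def posterior(bank, responses):
    """Normalized posterior over THETA given (item_index, category) pairs."""
    post = PRIOR.copy()
    for j, k in responses:
        a, b = bank[j]
        post *= grm_probs(a, b, THETA)[:, k]
    return post / post.sum()

def select_mpwi(bank, responses, administered):
    """Maximum posterior weighted information: pick the unused item whose
    information, averaged over the current posterior, is largest."""
    post = posterior(bank, responses)
    best, best_pwi = None, -np.inf
    for j, (a, b) in enumerate(bank):
        if j in administered:
            continue
        pwi = np.sum(grm_info(a, b, THETA) * post)
        if pwi > best_pwi:
            best, best_pwi = j, pwi
    return best

# Toy item bank: (discrimination, thresholds) for three 5-category items.
bank = [(1.5, np.array([-1.5, -0.5, 0.5, 1.5])),
        (2.0, np.array([-1.0,  0.0, 1.0, 2.0])),
        (1.2, np.array([-2.0, -1.0, 0.0, 1.0]))]
print(select_mpwi(bank, responses=[(0, 3)], administered={0}))
```

Because the information is averaged over the whole posterior rather than evaluated at a point estimate, criteria such as MPWI tend to differ from MFI mainly early in a test, when the posterior is still wide.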
    • "Widely used in various assessment applications, computerized adaptive testing (CAT) has begun to infiltrate the patient-reported outcomes (PRO) arena (Bjorner, Chang, Thissen, & Reeve, 2007; Reeve, 2006). PROs such as depression and fatigue are represented as latent traits (similar to mathematical achievement) so CATs in conjunction with IRT are natural considerations for PRO measurement (Bjorner et al., 2007; Cella & Chang, 2000; McHorney, 2003; Reeve, 2006). A typical CAT design uses a mathematical algorithm to sequentially select items that are in some sense "best" from a pool of pertinent items (called an item bank) until an estimate of the latent trait is achieved with a certain precision."
    ABSTRACT: Widely used in various educational and vocational assessment applications, computerized adaptive testing (CAT) has recently begun to be used to measure patient-reported outcomes. Although CAT is successful in reducing respondent burden, most current CAT algorithms do not formally consider respondent burden as part of the item selection process. This study used a loss function approach motivated by decision theory to develop an item selection method that incorporates respondent burden into maximum Fisher information (MFI) item selection. Several different loss functions placing varying degrees of importance on respondent burden were compared, using an item bank of 62 polytomous items measuring depressive symptoms. One dataset consisted of the real responses from the 730 subjects who responded to all the items. A second dataset consisted of simulated responses to all the items, based on a grid of latent trait scores with replicates at each grid point. The algorithm enables a CAT administrator to control respondent burden more efficiently than when using MFI alone, without severely affecting measurement precision. In particular, the loss function incorporating respondent burden protected respondents from receiving longer tests when their estimated trait score fell in a region with few informative items.
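
The loss-function idea in the abstract above can be pictured as trading a precision term off against a per-item cost. The sketch below is a minimal, hedged version of that idea layered on top of MFI selection; the specific loss (squared standard error plus a constant burden cost per item), the stopping rule, and every name in the code are assumptions for illustration, not the cited study's algorithm.

```python
# Sketch of a loss-function-based selection/stopping rule that trades
# measurement precision against respondent burden.  The loss and all names
# are illustrative assumptions, not the exact method of the cited study.
import math

def next_item_with_burden(theta_hat, info_so_far, item_bank, administered,
                          item_info, burden_cost=0.01):
    """Return the index of the next item to administer, or None to stop.

    theta_hat    : current latent-trait estimate (e.g., EAP or MLE)
    info_so_far  : total Fisher information of the items already given
    item_bank    : list of item parameter tuples/objects
    administered : set of indices of items already given
    item_info    : callable (item, theta) -> Fisher information
    burden_cost  : loss added per extra item (larger values = shorter tests)
    """
    # Maximum Fisher information (MFI) candidate among unused items.
    candidates = [(item_info(item, theta_hat), j)
                  for j, item in enumerate(item_bank) if j not in administered]
    if not candidates:
        return None
    best_info, best_j = max(candidates)

    if info_so_far <= 0:
        return best_j  # always administer at least one item

    # Loss = squared standard error (1 / information) + burden penalty.
    current_loss = 1.0 / info_so_far + burden_cost * len(administered)
    loss_if_continue = (1.0 / (info_so_far + best_info)
                        + burden_cost * (len(administered) + 1))

    # Give the MFI item only while it lowers the expected loss; otherwise the
    # precision gain no longer justifies the extra respondent burden.
    return best_j if loss_if_continue < current_loss else None

# Toy usage with dichotomous 2PL items, info(theta) = a^2 * p * (1 - p),
# where p = 1 / (1 + exp(-a * (theta - b))).  Parameters are made up.
def twopl_info(item, theta):
    a, b = item
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

bank = [(1.2, -0.5), (0.8, 0.0), (1.8, 0.7)]
print(next_item_with_burden(0.3, info_so_far=1.5, item_bank=bank,
                            administered={0}, item_info=twopl_info))
```

Raising burden_cost shortens tests at some cost in precision, which is roughly the trade-off the study's loss functions are designed to let a test administrator control.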
    • Medical Care 12/2006; 44(11 Suppl 3):S3-4. DOI: 10.1097/01.mlr.0000245437.46695.4a · 2.94 Impact Factor