Informing drug development and clinical practice through patient-centered outcomes research.
Clinical Therapeutics (Impact Factor: 2.59). 05/2014; 36(5):616-8. DOI: 10.1016/j.clinthera.2014.04.015
ABSTRACT: The US Food and Drug Administration's guidance for industry document on patient-reported outcomes (PRO) defines content validity as "the extent to which the instrument measures the concept of interest" (FDA, 2009, p. 12). According to Strauss and Smith (2009), construct validity "is now generally viewed as a unifying form of validity for psychological measurements, subsuming both content and criterion validity" (p. 7). Hence, both qualitative and quantitative information are essential in evaluating the validity of measures. We review classical test theory and item response theory (IRT) approaches to evaluating PRO measures, including the frequency of responses to each category of the items in a multi-item scale, the distribution of scale scores, floor and ceiling effects, the relationship between item response options and the total score, and the extent to which the hypothesized "difficulty" (severity) order of items is represented by observed responses. If a researcher has limited qualitative data and wants preliminary information about the content validity of the instrument, then descriptive assessments using classical test theory should be the first step. As the sample size grows during subsequent stages of instrument development, confidence in the numerical estimates from Rasch and other IRT models (as well as those of classical test theory) would also grow. Classical test theory and IRT can be useful in providing a quantitative assessment of items and scales during the content-validity phase of PRO-measure development. Depending on the particular type of measure and the specific circumstances, classical test theory, IRT, or both should be considered to help maximize the content validity of PRO measures.
Clinical Therapeutics 05/2014; Impact Factor 2.59
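The classical test theory descriptives named in this abstract (floor and ceiling effects, the relationship between item responses and the total score) are straightforward to compute. The sketch below is a minimal illustration with hypothetical data and an invented 5-item scale scored 1 to 5; it is not taken from the article.

```python
import numpy as np

def floor_ceiling(scores, lo, hi):
    """Percentage of respondents at the minimum (floor) and maximum (ceiling) scale score."""
    scores = np.asarray(scores)
    floor = np.mean(scores == lo) * 100
    ceiling = np.mean(scores == hi) * 100
    return floor, ceiling

def corrected_item_total(items):
    """Correlation of each item with the total of the remaining items
    (corrected item-total correlation, a standard CTT item statistic)."""
    items = np.asarray(items, dtype=float)  # shape: (respondents, items)
    total = items.sum(axis=1)
    corrs = []
    for j in range(items.shape[1]):
        rest = total - items[:, j]  # exclude the item itself from the total
        corrs.append(np.corrcoef(items[:, j], rest)[0, 1])
    return corrs

# Hypothetical 5-item scale scored 1-5, so total scores range from 5 to 25
rng = np.random.default_rng(0)
data = rng.integers(1, 6, size=(200, 5))
totals = data.sum(axis=1)
print(floor_ceiling(totals, 5, 25))
print(corrected_item_total(data))
```

Large floor or ceiling percentages suggest the scale cannot distinguish respondents at the low or high end, which is one of the descriptive checks the abstract recommends as a first step.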
ABSTRACT: In many research and clinical settings in which patient-reported outcome (PRO) measures are used, it is often desirable to link scores across disparate measures or to use scores from 1 measure to describe scores on a separate measure. However, PRO measures are scored by using a variety of metrics, making such comparisons difficult. The objective of this article was to provide an example of how to transform scores across disparate measures (the Marks Asthma Quality of Life Questionnaire [AQLQ-Marks] and the newly developed RAND-Negative Impact of Asthma on Quality of Life item bank [RAND-IAQL-Bank]) by using an item response theory (IRT)-based linking method. Our sample of adults with asthma (N = 2032) completed 2 measures of asthma-specific quality of life: the AQLQ-Marks and the RAND-IAQL-Bank. We use IRT-based co-calibration of the 2 measures to provide a linkage, or a common metric, between the 2 measures. Co-calibration refers to the process of using IRT to estimate item parameters that describe the responses to the scales' items according to a common metric; in this case, a normal distribution transformed to a T scale with a mean of 50 and an SD of 10. Respondents had a mean age of 43 years (SD, 15), were 60% female, and predominantly non-Hispanic White (56%), with 19% African American, 14% Hispanic, and 11% Asian. Most had at least some college education (83%), and 90% had experienced an asthma attack during the last 12 months. Our results indicate that the AQLQ-Marks and RAND-IAQL-Bank scales measured highly similar constructs and were sufficiently unidimensional for IRT co-calibration. Once linked, scores from the 2 measures were invariant across subgroups. A crosswalk is provided that allows researchers and clinicians using AQLQ-Marks to crosswalk to the RAND-IAQL toolkit.
The ability to translate scores from the RAND-IAQL toolkit to other "legacy" (ie, commonly used) measures increases the value of the new toolkit, aids in interpretation, and will hopefully facilitate adoption by asthma researchers and clinicians. More generally, the techniques we illustrate can be applied to other newly developed or existing measures in the PRO research field to obtain crosswalks with widely used traditional legacy instruments.
Clinical Therapeutics 05/2014; Impact Factor 2.59
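The common metric described in this abstract, a T scale with a mean of 50 and an SD of 10, is a linear transformation of the standard-normal theta metric on which IRT person scores are usually estimated. The sketch below shows that transformation and a table-lookup crosswalk; the crosswalk fragment is invented for illustration and is not the published AQLQ-Marks table.

```python
import numpy as np

def theta_to_t(theta):
    """Convert IRT theta estimates (standard-normal metric, mean 0, SD 1)
    to the T metric: mean 50, SD 10."""
    return 50 + 10 * np.asarray(theta, dtype=float)

def crosswalk_lookup(raw_score, table):
    """Map a legacy-instrument raw score to its linked T score via a
    crosswalk table (dict of raw score -> T score)."""
    return table[raw_score]

# Hypothetical crosswalk fragment: legacy raw score -> linked T score
example_table = {10: 38.2, 11: 40.1, 12: 41.9}

print(theta_to_t([-1.0, 0.0, 1.5]))  # -> [40. 50. 65.]
print(crosswalk_lookup(11, example_table))
```

In practice the crosswalk table itself comes from the co-calibration step (estimating item parameters for both instruments on the common metric), which requires a dedicated IRT package; only the final score conversions are as simple as shown here.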
ABSTRACT: The goal of this study was to evaluate the reliability and validity of the Consumer Assessment of Healthcare Providers and Systems (CAHPS) Patient-Centered Medical Home (PCMH) survey. We conducted a field test of the CAHPS PCMH survey with 2740 adults. We collected information by mail (n = 1746), telephone (n = 672), and from the Web (n = 322) from 6 sites of care affiliated with a West Coast staff model health maintenance organization. An overall response rate of 37% was obtained. Internal consistency reliability estimates for 7 multi-item scales were as follows: access to care, 5 items, α = 0.79; communication with providers, 6 items, α = 0.93; office staff courtesy and respect, 2 items, α = 0.80; shared decision making about medicines, 3 items, α = 0.67; self-management support, 2 items, α = 0.61; attention to mental health issues, 3 items, α = 0.80; and care coordination, 4 items, α = 0.58. The number of responses needed to get reliable information at the site of care level for the composites was generally acceptable (<300 responses needed for a reliability of 0.70) except for self-management support and shared decision making about medicines. Item-scale correlations provided support for distinct composites except for access to care and shared decision making about medicines, which overlapped with the communication with providers scale. Shared decision making and self-management support were significantly, uniquely associated with the global rating of the provider (dependent variable), along with access and communication in a multiple regression model. This study provides further support for the reliability and validity of the CAHPS PCMH survey, but refinement of the self-management support and shared decision-making scales is needed.
The survey can be used to provide information about the performance of different health plans on multiple domains of health care, but future efforts to improve some of the survey items are needed.
Clinical Therapeutics 05/2014; Impact Factor 2.59
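The internal consistency estimates (α) reported in this abstract are Cronbach's alpha coefficients. For a respondents-by-items matrix of scores, alpha can be computed directly from the item and total-score variances; the data below are hypothetical, not the CAHPS field-test data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the scale total
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Perfectly parallel items give alpha = 1; uncorrelated items drive it toward 0.
base = np.tile(np.arange(10.0), (3, 1)).T   # 10 respondents, 3 identical items
print(round(cronbach_alpha(base), 2))        # -> 1.0
```

Scales like self-management support (2 items, α = 0.61) tend to score low on this statistic partly because alpha increases with the number of items, which is one reason the abstract calls for refining the shorter scales rather than discarding them outright.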