Assessment of the validity of the English National Health Service Adult In-Patient Survey for use within individual specialties
ABSTRACT BACKGROUND: Healthcare improvement requires rigorous measurement. Patient experience is a key healthcare outcome and target for improvement. Its measurement requires psychometrically validated questionnaires. In England, the Adult In-Patient Survey (AIPS), which is validated for use across the entire acute inpatient population, is administered to unselected patients after discharge from National Health Service acute Trusts. The AIPS is reported at an organisational level, but sub-hospital-level data are needed for local quality improvement; it is currently uncertain whether the AIPS retains validity in local specialty subgroups. METHODS: We analysed the results of the AIPS for 2010 (n=56 931 returns) by specialty (medicine, surgery, orthopaedics, renal medicine, neurosurgery, obstetrics-gynaecology and oncology) to determine whether validity is retained at a suborganisational level. RESULTS: Criterion validity and internal consistency of the AIPS were retained for most specialty subgroups. When small local samples were excluded, the results for Trust-level specialty groups were similar over a 2-year period, indicating test stability. For oncology there was poor internal consistency in the 'doctors' domain, and criterion validity, expressed as the relationship between elements of experience and the overall rating of care, was weaker than for other specialties. CONCLUSIONS: The AIPS is suitable for use within many specialties, but our findings question some elements of validity for oncology inpatients. We recommend that future surveys be administered and reported by specialty, to inform local improvement and permit comparison of specialty units.
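Internal consistency, one of the psychometric properties assessed above, is conventionally quantified with Cronbach's alpha. As an illustrative sketch only (the item scores below are invented for demonstration and are not AIPS data), alpha for a single survey domain can be computed as:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical domain: 5 respondents answering 3 items on a 1-5 scale
scores = np.array([
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
])
alpha = cronbach_alpha(scores)
```

Values above roughly 0.7 are usually taken as acceptable internal consistency, so a finding such as poor consistency in the oncology 'doctors' domain corresponds to a low alpha for that item set within that subgroup.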
- Source: Susan Edgman-Levitan. Health Affairs 02/1991; 10(4):254-67. DOI:10.1377/hlthaff.10.4.254
ABSTRACT: Objectives: To evaluate the use of a modified Consumer Assessment of Healthcare Providers and Systems (CAHPS®) survey to support quality improvement in a collaborative focused on patient-centred care, assess subsequent changes in patient experiences, and identify factors that promoted or impeded data use. Background: Healthcare systems are increasingly using surveys to assess patients' experiences of care, but little is established about how to use these data in quality improvement. Design: Process evaluation of a quality improvement collaborative. Setting and participants: The CAHPS team from Harvard Medical School and the Institute for Clinical Systems Improvement organized a learning collaborative including eight medical groups in Minnesota. Intervention: Samples of patients recently visiting each group completed a modified CAHPS® survey before, after and continuously over a 12-month project. Teams were encouraged to set goals for improvement using baseline data and supported as they made interventions with bi-monthly collaborative meetings, an online tool reporting the monthly data, a resource manual called The CAHPS® Improvement Guide, and conference calls. Main outcome measures: Changes in patient experiences. Interviews with team leaders assessed the usefulness of the collaborative resources, lessons and barriers to using data. Results: Seven teams set goals and six made interventions. Small improvements in patient experience were observed in some groups, but in others changes were mixed and not consistently related to the team actions. Two successful groups appeared to have strong quality improvement structures and had focussed on relatively simple interventions. Team leaders reported that frequent survey reports were a powerful stimulus to improvement, but that they needed more time and support to engage staff and clinicians in changing their behaviour. Conclusions: Small measurable improvements in patient experience may be achieved over short projects. Sustaining more substantial change is likely to require organizational strategies, engaged leadership, cultural change, regular measurement and performance feedback, and experience of interpreting and using survey data.
Source: Health Expectations 05/2008; 11(2):160-176. DOI:10.1111/j.1369-7625.2007.00483.x
ABSTRACT: To review the concepts of reliability and validity, provide examples of how the concepts have been used in nursing research, provide guidance for improving the psychometric soundness of instruments, and report suggestions from editors of nursing journals for incorporating psychometric data into manuscripts. CINAHL, MEDLINE, and PsycINFO databases were searched using the key words validity, reliability, and psychometrics. Nursing research articles were eligible for inclusion if they were published in the last 5 years, quantitative methods were used, and statistical evidence of psychometric properties was reported. Reports of strong psychometric properties of instruments were identified, as well as those with little supporting evidence of psychometric soundness. Reports frequently indicated content validity, but some studies used fewer than five experts for review. Criterion validity was rarely reported, and errors in the measurement of the criterion were identified. Construct validity remains underreported. Most reports indicated internal consistency reliability (alpha), but few included reliability testing for stability. When retest reliability was asserted, time intervals and correlations were frequently not included. Planning for psychometric testing through design, and reducing nonrandom error in measurement, will add to the reliability and validity of instruments and increase the strength of study findings. Underreporting of validity might occur because of small sample size, poor design, or lack of resources. Lack of information on psychometric properties and misapplication of psychometric testing are common in the literature.
Source: Journal of Nursing Scholarship 02/2007; 39(2):155-64. DOI:10.1111/j.1547-5069.2007.00161.x