Teaching critical appraisal skills in health care settings.
ABSTRACT Critical appraisal is the process of assessing and interpreting evidence by systematically considering its validity, results and relevance to an individual's work. Within the last decade critical appraisal has been added as a topic to many medical school and UK Royal College curricula, and several continuing professional development ventures have been funded to provide further training.
OBJECTIVES: To assess the effects of teaching critical appraisal skills to health professionals on the process of care, patient outcomes and health professionals' knowledge.
SEARCH STRATEGY: We searched The Cochrane Library (to Issue 2, 2000), MEDLINE (1966 to 1997), EMBASE (1980 to 1997), ERIC (1966 to 1997), CINAHL (1982 to 1997), LISA (1976 to 1997), SIGLE (1980 to 1997), Science Citation Index (1981 to 1997), PsycLIT (1974 to 1997), the World Wide Web, and the reference lists of articles. We also contacted major medical education centres.
SELECTION CRITERIA: We included randomised trials, controlled clinical trials, controlled before and after studies, and interrupted time series analyses of educational interventions teaching critical appraisal to health professionals. The outcomes were: process of care; patient mortality, quality of life and satisfaction; and health professional knowledge/awareness, measured with objective, standardised, validated instruments.
DATA COLLECTION AND ANALYSIS: Two reviewers independently extracted data and three reviewers independently assessed study quality.
MAIN RESULTS: One USA hospital-based randomised trial involving 44 doctors was included. The outcome assessed was critical appraisal knowledge; process of care, patient health and attitude/awareness outcomes were not assessed. Critical appraisal teaching was reported to have produced a 25% improvement (adjusted figure) in critical appraisal knowledge in the intervention group, compared with a 6% improvement in the control group, a statistically significant difference (p=0.02).
REVIEWERS' CONCLUSIONS: There is evidence that critical appraisal teaching has positive effects on participants' knowledge, but as only one study met the inclusion criteria, general conclusions about the effects of teaching critical appraisal cannot be drawn with confidence. There are large gaps in the evidence on whether teaching critical appraisal affects decision-making or patient outcomes. It is also unclear whether the size of benefit seen is large enough to be of practical significance, or whether it varies with participant background or teaching method. The evidence for all outcomes is weakened by the generally poor design, execution and reporting of the studies we found.
- Source: conferences.alia.org.au
ABSTRACT:
Background: A discipline that critically examines the evidence for practice should itself be critically examined. Credible evidence for the effectiveness of training in evidence-based healthcare (EBHC) is essential. We attempted to summarise current knowledge on evaluating the effectiveness of training in EBHC and to identify the gaps.
Methods: A working group of EBHC teachers developed a conceptual framework of key areas of EBHC teaching and practice in need of evidence, mapped to appropriate methods and outcomes. A literature search was conducted to review the current state of research in these key areas. We included studies of training interventions that evaluated effectiveness by considering learner, patient or health-system outcomes in terms of knowledge, skills, attitude, judgement, competence, decision-making, patient satisfaction, quality of life, clinical indicators or cost. There was no language restriction.
Results: Of 55 articles reviewed, 15 met the inclusion criteria: six systematic reviews, three randomised controlled trials and six before–after studies. We found weak indications that undergraduate training in EBHC improves knowledge but not skills, and that clinically integrated postgraduate teaching improves both knowledge and skills. Two randomised controlled trials reported no impact on attitudes or behaviour. One before–after study found a positive impact on decision-making, while another suggested a change in learners' behaviour and improved patient outcomes. We found no studies assessing the impact of EBHC training on patient satisfaction, health-related quality of life, cost or population-level indicators of health.
Conclusions: The literature evaluating the effectiveness of training in EBHC has focused on short-term acquisition of knowledge and skills.
Evaluation designs were methodologically weak, controlled trials appeared inadequately powered, and systematic reviews could not provide conclusive evidence owing to the weakness of study designs.
International Journal of Evidence-Based Healthcare 01/2007; 5(4):468-476.
ABSTRACT: Despite the increasing number of evidence-based practices and the significant public resources directed towards them, clinicians and practitioners do not consistently use the evidence available to them. This paper examines methods that help clinicians and practitioners adopt evidence-based practices. A review was conducted of 69 systematic reviews, meta-analyses and literature reviews. Several methods can change the knowledge and skill base of professionals and, to a lesser extent, patient health outcomes: educational interventions; electronic methods; credible and skilled leadership; feedback; discussion; financial incentives; guidelines; portfolios; simulations; and visits from trained individuals. While robust evidence is lacking, effective interventions are likely to be multimodal; address the needs of the target group; be well planned; be intensive; encourage active participation; be relevant to the clinical context; and provide opportunities for ongoing professional development. The dissemination of evidence to clinicians and practitioners requires well-considered multimodal interventions that are inclusive, comprehensive and ongoing.
Australian Psychologist 09/2010; 45(3).