Continuing Education
Evaluation of an Audit With Feedback
Continuing Medical Education Program
for Radiation Oncologists
THOMAS P. SHAKESPEARE, MBBS, MPH, FRANZCR, FAMS, GRADDIPMED (CLINEPI); RAHUL K. MUKHERJEE, MBBS, FRANZCR; JIADE J. LU, MD; KHAI MUN LEE, MBBS, FAMS; MICHAEL F. BACK, MBBS, FRANZCR, GRADDIPPSYCHO-ONC

Received from The Cancer Institute, Singapore (TPS, RKM, JJL, KML, MFB).

Presented at the American Society for Therapeutic Radiology and Oncology (ASTRO) 45th ASM, Salt Lake City, Utah, October 2003.

This work was completed as part of the principal author's Master of Public Health Thesis.

Address correspondence and reprint requests to: Prof. Tom Shakespeare, Acting Director, Area Cancer Services, North Coast Area Health Service, Coffs Harbour, NSW, 2450, Australia; phone: (612) 665-5125; fax: (XXX) XXX-XXXX; e-mail: <Tshakespeare@mncahs.health.nsw.gov.au>.
Abstract: Background. Meta-analyses demonstrate audit with feedback (AWF) is effective continuing medical education (CME). However, efficacy varies between specialties, with little published radiation oncologist (RO)-specific evidence. We evaluated an AWF CME intervention for ROs, determining efficacy, cost-effectiveness, and participant satisfaction. Methods. CME program: The CME incorporated fortnightly random patient chart audit, scoring management adequacy via a checklist. Scores were presented at a same-day institutional meeting, and case management discussed. Senior peers provided individualized, educational feedback. Evaluation: Changes in behavior and performance were evaluated by chart review of new patients seen by ROs in the 2 months before commencement of AWF (T0) and at months 13-14 of the program (T1). Behavior and performance were evaluated with a validated, reproducible, 19-item instrument. Criteria for each case audited included 10 targeted and 3 nontargeted behavior items and 6 performance items; each scored 1 point if deemed adequate (maximum score 19). Cost-effectiveness was reported as cost to the institution per item point gained. The mean score (out of 5) of a 14-item questionnaire evaluated program perception. Results. A total of 113 and 118 charts were evaluated at T0 and T1, respectively. The mean score of targeted behavior improved between T0 and T1 (from 8.7 to 9.2 out of 10, P = .0001), with no significant improvement of nontargeted behavior/performance items. Annual cost and cost per point gained were US $7,897 and $15, respectively. Participant satisfaction was positive, increasing after efficacy result distribution (P = .0001). Conclusion. Audit with comparative, individualized, educational feedback is cost-effective and positively perceived CME, significantly improving targeted RO behavior. Oncologists' CME design and evaluation require further research. J Cancer Educ. 2005;20:216-221.
Audit with feedback (AWF) is a valuable continuing medical education (CME) tool for clinicians, with its efficacy demonstrated by overviews of randomized studies [1-3]. However, there is evidence that the success of CME differs between various physician specialties [4-6]. Radiation oncology is a unique specialty in terms of knowledge and practice; thus, research specifically investigating the efficacy of CME for radiation oncologists (ROs) is required. To date, there has been little research of CME specifically for ROs [7]. The published literature has often focused on the need for radiation oncology CME rather than program description or evaluation [8,9]. Authors reporting structure and content of radiation oncology-specific CME programs generally do not back up recommendations with systematic evaluation [10,11].
The Radiotherapy Centre (RTC) of The Cancer Institute (TCI), National University Hospital, Singapore, commenced targeted departmental AWF for CME and quality assurance purposes in June 2001. The program was implemented to help facilitate Royal Australian and New Zealand College of Radiologists accreditation of resident training [12] and continuing professional development programs at TCI.
The CME audit consisted of fortnightly random assessment of patient charts, radiotherapy prescriptions, isodose plans, and simulation films. The department's residents independently scored 2 patients' records per RO. This checklist consisted of items evaluating RO behavior (ie, "targeted behavior items"), including provision of history and examination findings (in a "registry sheet"), provision of a letter to the referring doctor, describing treatment intent and treatment site, correctly describing tumour stage and laterality, specifying the radiotherapy prescription point, ensuring the patient name was correct, and signing all isodose plans and simulation/verification films. These items were incorporated due to their inclusion in a proposed practice assessment audit instrument for Australian, New Zealand, and Singaporean ROs [13].

At a 1-hour CME meeting held the same day, residents presented each case with simulation films and then presented the results of the audit score for the case. Treatment indications, techniques, doses, and other management issues (ie, clinician "performance" items) were discussed, with feedback given by senior ROs. Feedback on both deficiencies in clinician behavior (identified by the audit checklist) and problems with clinician performance (identified by case management presentation) was given in a nonthreatening manner. After all cases were presented, a summary sheet was displayed listing each case with RO name and checklist score.
Our purpose in this study was to evaluate the utility of the AWF program as a CME device for ROs. Our primary endpoint was to evaluate the efficacy of AWF in improving targeted behavior criteria. Our secondary endpoints were to evaluate potential improvement in nontargeted behavior criteria (items not included in the CME checklist but that may represent general behavior change) and clinician performance (items representing adequate patient management), as well as to measure participant perception and program cost. Our further aim was to determine whether changes in targeted behavior differed between ROs.
METHODS
CME Program Perception
We evaluated program perception by an anonymous questionnaire. During the time of the CME evaluation (April 2001 to July 2002), there were 3 ROs working at TCI, all 3 being involved in AWF. After July 2002, an additional 2 ROs joined the department and have been involved in the CME process since commencing employment. As all 5 currently employed ROs have been involved in the AWF CME sessions for at least 6 months, we obtained and combined perceptions from all 5 ROs. We sent a questionnaire to all 5 participants before and after the results of the evaluation were known to them.
We calculated a "satisfaction index" from the first questionnaire (before the results of the evaluation were known) to evaluate whether satisfaction with program efficacy varied before and after the results had been distributed. The 14 items included related to satisfaction with the program in identifying learning needs, educational aspects, cost-effectiveness, and wish for continued involvement. We scored each item on a 5-point Likert scale (1, very dissatisfied; 5, very satisfied). We reported the mean score for all items and compared prescores and postscores using matched Student's t tests. We asked additional questions about reasons for participating, obstacles to participation, usefulness of specific features of the program, intention to change practice, and suggestions for changes.
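As an illustration, the matched comparison can be reproduced by pairing the 14 per-item means reported later in Table 2. The following is a minimal sketch in Python; pairing by item (rather than by the raw per-participant responses, which are not published) is our assumption, not the authors' stated procedure.

    # Paired comparison of per-item mean satisfaction before vs after the
    # evaluation results were distributed. Values are the 14 item means
    # from Table 2; pairing by item is an illustrative assumption.
    from scipy import stats

    pre  = [2.6, 2.8, 3.8, 3.0, 2.2, 2.8, 3.2, 3.4, 3.8, 3.6, 2.6, 3.6, 3.2, 4.6]
    post = [3.4, 3.2, 4.2, 3.4, 3.2, 2.8, 3.6, 3.8, 4.0, 3.8, 3.6, 3.8, 3.8, 4.6]

    t, p = stats.ttest_rel(pre, post)  # matched (paired) Student's t test
    print(f"mean pre = {sum(pre)/len(pre):.1f}, "
          f"mean post = {sum(post)/len(post):.1f}, two-tailed P = {p:.4f}")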
Behavior and Performance
We evaluated behavior and performance by performing a chart review of all new patients seen by TCI ROs in the 2 months prior to commencement of the AWF program (April and May 2001, T0) and the 2 months following 1 year's duration of the program (June and July 2002, T1). Thus, T1 occurred while the CME program was running in its 13th and 14th months.

Chart review included all patient notes, treatment charts, isodose plans, and simulation films. We scored the charts using a checklist consisting of items that were deemed both useful and feasible to reliably collect in a retrospective manner. We determined these items based on a literature search; review of a proposed Australian, New Zealand, and Singaporean practice review audit checklist [13]; and consultation with TCI ROs and medical educationalists. Item validation and reproducibility are described elsewhere [14,15]. Briefly, 13 behavior items related to (1) provision of information relevant to decision making (provision of a completed registry sheet; correct description of tumour stage, histology, and laterality; indicating whether radiotherapy was recommended or not; describing treatment intent; and indicating treatment site and prescription point), (2) communication (providing a letter to the referring doctor), and (3) quality assurance (ensuring the correct patient name on simulation films and signing all films and charts). An additional 6 clinician performance items assessed adequacy of radiotherapeutic management and included indication for radiotherapy, treatment intent, radiation modality (beam type and energy), dose, fractionation, and field arrangement.
We found no validated evidence as to the most appropriate scoring system for chart audit. We adopted a simple system to maximize reproducibility and aid subsequent statistical analyses: 1 point was awarded for each criterion that had been fulfilled adequately and 0 points if a criterion had not been fulfilled. We summated scores for each chart for the nontargeted behavior items (out of 3), targeted behavior items (out of 10), and nontargeted performance items (out of 6). We also obtained an overall score (out of 19).
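For concreteness, the scoring scheme can be expressed as a short routine. This is a minimal sketch in Python; the item keys are paraphrased from the instrument described above and are not the instrument's actual wording.

    # Minimal sketch of the 19-item chart scoring scheme: 1 point per
    # adequately fulfilled criterion, 0 otherwise, with subtotals for
    # targeted behavior (out of 10), nontargeted behavior (out of 3),
    # and performance (out of 6) items, plus an overall score out of 19.
    TARGETED = ["registry_sheet", "referral_letter", "intent_described",
                "stage_correct", "treatment_site", "laterality_doublet",
                "dose_point", "isodose_signed", "name_on_film", "film_countersigned"]
    NONTARGETED = ["primary_site", "histology", "treatment_decision"]
    PERFORMANCE = ["indication", "intent", "modality", "dose",
                   "fractionation", "field_arrangement"]

    def score_chart(adequate: dict) -> dict:
        """adequate maps item name -> True/False (criterion fulfilled)."""
        s = lambda items: sum(1 for i in items if adequate.get(i, False))
        scores = {"targeted": s(TARGETED),        # out of 10
                  "nontargeted": s(NONTARGETED),  # out of 3
                  "performance": s(PERFORMANCE)}  # out of 6
        scores["overall"] = sum(scores.values())  # out of 19
        return scores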
We made comparisons between the 2 time points (T0 and T1) for several parameters, including the proportion of each item gaining a 1 and mean scores for targeted behavior items, nontargeted behavior and performance items, and overall score. We also evaluated scores for each of the 3 ROs individually. We analyzed results using t tests, chi-square tests, and confidence levels [16] as appropriate. All P values we report are 2-tailed. We calculated confidence levels using freely downloadable software [17].
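As an illustration, the item-level T0-vs-T1 comparisons can be reproduced from the counts implied by Table 3. The following is a minimal sketch in Python; the use of an uncorrected chi-square (no Yates continuity correction), which reproduces the reported P = .04 for this item, is our assumption.

    # 2x2 chi-square comparison for one checklist item, using counts
    # implied by Table 3 ("Letter to referring doctor": 53.1% of 113
    # charts at T0, 66.1% of 118 at T1). correction=False is an
    # assumption that matches the reported two-tailed P = .04.
    from scipy.stats import chi2_contingency

    t0_yes, t0_n = 60, 113   # 60/113 = 53.1%
    t1_yes, t1_n = 78, 118   # 78/118 = 66.1%
    table = [[t0_yes, t0_n - t0_yes],
             [t1_yes, t1_n - t1_yes]]
    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    print(f"chi2 = {chi2:.2f}, two-tailed P = {p:.3f}")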
Cost
We assessed cost to the department by evaluating time spent by participants and auditors based on hourly salary rates, as well as all physical materials used during the audit and feedback session. We prospectively evaluated and averaged these over 4 audits. Costs are in US dollars. We reported cost as an annual cost and as cost per point gained, based on annualized figures of both costs and total points gained.
RESULTS
Participant Characteristics and Satisfaction
ROs' characteristics are shown in Table 1. All participants were male, and all trained in Western countries. Before the results of the program evaluation were made known, participant satisfaction was 3.2 out of 5, in which 3 represents "neither satisfied nor dissatisfied" (Table 2). After the results of program evaluation were made known, participants were significantly more satisfied (P < .0001), with a mean satisfaction index of 3.7 (in which 4 represents "satisfied"). There were several suggestions for changes. For example, there was a fairly general view that less time should be spent presenting cases, analyzing the checklist, and reviewing audit results, and more time on discussion and teaching. One respondent believed that additional checklist items should be added (eg, to audit institutional protocol adherence).
Intention to Change Practice
Doctors indicated they had made changes or intended to change practice as a result of the program. There were 14 specific changes mentioned, including the following: ensuring treatment prescriptions were accurate; changing doses for bone metastases, breast cancer, and oesophageal cancer curative therapy techniques; changing the minimum size of electron fields; changing shielding for rectal cancer; and alterations in methods of patient setup at simulation. These changes were systematic, ie, the RO intended for these changes to be applied to all future patients.
Behavior and Performance
The overall effect of AWF on targeted and nontargeted behavior items and performance items is shown in Table 3. There were improvements in the mean score of targeted items (P = .0001), behavior items (P = .0004), and performance and targeted items combined (P = .002). For targeted items, significant improvements were seen in the items "Letter to referring doctor" (improved from 53.1% to 66.1%; P = .04), "Treatment intent described" (54.0% to 77.1%; P = .0002), "Laterality doublet" (91.2% to 97.5%; P = .04), and "Isodose plan signed" (93.8% to 100.0%; P = .006). For nontargeted behavior, "Decision for treatment" improved from 84.1% to 91.5%; although not statistically significant (P = .08), there was 95.9% confidence for a true improvement between T0 and T1. Performance items were high at T0, with no significant change at T1. Changes in individual behavior items varied between physicians. Differences that were statistically significant or showing a trend (P < .1) are displayed in Table 4.
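For readers unfamiliar with confidence levels [16], the 95.9% figure is consistent with the simple one-sided reconstruction sketched below; treating the confidence level as 1 - P/2 is our assumption, not a statement of the authors' exact method (refs 16 and 17 describe the method and software actually used).

    # The reported "95.9% confidence" for the Decision-for-treatment item
    # is consistent with a one-sided confidence level of 1 - P/2 at a
    # two-tailed P of about .082. This reconstruction is an assumption.
    p_two_tailed = 0.082
    confidence = 1 - p_two_tailed / 2
    print(f"{confidence:.1%}")  # 95.9%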
TABLE 1. Participant Characteristics

                                               Participants,       Participants
                                               April 2001 to       After
                                               July 2002*          July 2002†       All Participants‡
Age (y)
  Mean                                         38.7                33               36.6
  Range                                        34-45               Both 33          33-45
Sex
  Male                                         3                   2                5
  Female                                       0                   0                0
Nationality and country of resident training
  Australia                                    2                   1                3
  New Zealand                                  1                   0                1
  USA                                          0                   1                1
Years since qualified
  Mean                                         9.3                 2.5              6.6
  Range                                        4-16                2-3              2-16

*N = 3. †N = 2. ‡N = 5.
TABLE 2. Participant Satisfaction With the CME Program*

                                                       Before Evaluation     After Evaluation
                                                       Results Known         Results Known
Satisfaction in effective needs identification for
  Knowledge/skills                                     2.6                   3.4
  Applying knowledge/skills                            2.8                   3.2
  Remembering recording/documentation                  3.8                   4.2
  Patient care                                         3.0                   3.4
Satisfaction as an effective CME tool in terms of
  Education of knowledge/skills                        2.2                   3.2
  Translating skills to practice                       2.8                   2.8
  Reinforcing appropriate care                         3.2                   3.6
  Improving patient care                               3.4                   3.8
  Confirming appropriate care                          3.8                   4.0
  Relevance to practice                                3.6                   3.8
  Overall satisfaction                                 2.6                   3.6
  Cost-effectiveness satisfaction                      3.6                   3.8
Satisfaction with continued involvement
  Current program                                      3.2                   3.8
  Program after changes made                           4.6                   4.6
Mean satisfaction index                                3.2                   3.7

*Values are mean score out of 5 (n = 5).
TABLE 3. Evaluation of Changes in Radiation Oncologist Behavior and Performance as a Result of the AWF CME Program*

                                                               Overall (All Doctors)
                                                               2001†       2002‡       P Value
Nontargeted items (each scored 1)
  Primary tumour site documented and correct                   99.1%       100.0%      .3
  Histology documented and correct                             96.5%       95.8%       .8
  Decision for treatment documented                            84.1%       91.5%       .08
  Mean score, nontargeted items (out of 3)                     2.80        2.87        .2
Targeted items (each scored 1)
  Registry sheet completed                                     91.2%       93.2%       .6
  Letter to referring doctor completed                         53.1%       66.1%       .04
  Treatment intent described                                   54.0%       77.1%       .0002
  Tumour stage described and correct                           93.8%       94.9%       .7
  Treatment site on treatment chart present and correct        98.2%       99.2%       .5
  Laterality doublet on treatment chart present and correct    91.2%       97.5%       .04
  RT dose point specified                                      100.0%      99.2%       .3
  All isodose plans signed                                     93.8%       100.0%      .006
  Patient name on film                                         100.0%      99.2%       .3
  Film countersigned                                           96.5%       97.5%       .7
  Mean score, targeted items (out of 10)                       8.72        9.24        .0001
  Mean score, behavior items (out of 13)                       11.51       12.11       .0004
Performance items (each scored 1)
  Indication adequate?                                         98.2%       98.3%       1.0
  Treatment intent adequate?                                   99.1%       98.3%       .6
  Radiation modality adequate?                                 98.2%       99.2%       .5
  Dose adequate?                                               98.2%       99.2%       .5
  Fractionation adequate?                                      99.1%       99.2%       .5
  Field arrangement adequate?                                  99.1%       99.2%       1.0
  Mean score, performance items (out of 6)                     5.92        5.93        .9
Total mean score (out of 19)                                   17.43       18.04       .002

*AWF indicates audit with feedback; CME, continuing medical education; RT, radiotherapy.
†N = 113 (T0). ‡N = 118 (T1).
Cost
The total cost of each fortnightly audit and feedback session was $316. Of this, $288 (91%) consisted of salaries of meeting attendees (on average, 3 ROs, 2 residents, 1 fellow, and 2 therapists), $26 of audit preparation costs, and $2 of consumables (mainly overhead transparencies and marker pens). The annual cost for the CME exercise, which was held fortnightly during the evaluation period, was $7,897. The mean score per patient was 17.43 at T0 and 18.04 at T1, a gain of 0.61 points. The number of patients seen over the 12-month period of evaluation was 840; thus, the number of points gained was 512, and the cost per point gained was $15.
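The reported cost-effectiveness figures can be reproduced directly from the numbers above; a minimal sketch in Python:

    # Reproducing the reported cost-effectiveness arithmetic.
    annual_cost = 7897.0               # USD, fortnightly sessions over 12 months
    patients_per_year = 840
    gain_per_patient = 18.04 - 17.43   # mean score gain from T0 to T1 = 0.61 points
    points_gained = patients_per_year * gain_per_patient  # ~512 points
    cost_per_point = annual_cost / points_gained          # ~$15.4 per point
    print(f"{points_gained:.0f} points gained, ${cost_per_point:.0f} per point")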
DISCUSSION
The results of this evaluation showed that CME consisting of AWF had beneficial effects on radiation oncology practice in our institution. The effect was mainly limited to targeted behavior items (those items that were included in the checklist of our CME intervention). This pattern is not surprising, as AWF has been shown to improve clinician practice for items targeted by the CME intervention [2], with conflicting efficacy for nontargeted items [3]. However, one reason for the lack of improvement in some items may be that they already scored very well prior to the CME program, a problem noted by others [18].
We also found that all ROs had significant improvements in 1 or more criteria; however, these items were different for the different ROs. These multiple significance tests should be viewed with caution and interpreted as hypothesis generating only. However, it is not surprising that CME efficacy might vary between individuals, as has been found previously [19]. This supports the need for further research in educational needs assessment and CME efficacy for radiation oncology specifically: we cannot presume one type of CME will be effective across specialists or specialties. There were other improvements in practice documented by ROs in response to the participant perception survey. Participants noted a total of 14 systematic changes to practice. Identifying these changes retrospectively relied on participant memory; thus, the real number is likely higher. Practice changes can have a major impact on patient care due to their systematic application (ie, applying the changes to all relevant subsequent cases).
One interesting finding is that the initial program perception was ambivalent as measured by our efficacy satisfaction index; however, it improved significantly after the results of the program evaluation were distributed. This emphasizes the importance of regular evaluation, which may increase satisfaction and motivation and is a vital component of CME interventions [20]. Also of interest, our program cost consisted mainly of attendees' (and in particular ROs') salaries. These were all included, since they are costed to the department. However, in settings where CME is compulsory and employers are obliged to ensure such programs exist, salary cost might be omitted. Instead, we reported the percentage of program cost made up of attendees' salaries (91%). The program was considered cost-effective by TCI management.
One of the limitations of any nonrandomized study is that external factors (beyond the CME program) may have influenced results. However, during the period of the evaluation, there was no other dedicated RO CME program in place. Although there was no other relevant quality assurance program in place during the evaluation period, informal feedback to nursing staff about the need to ensure document filing might have affected some items for which the failure was in filing, not clinician behavior. However, whatever the mechanism of improvement, the improvement itself was likely directly due to the CME program, either through changing clinician behavior or identifying the need to improve filing. In any case, we were of the opinion that it was the RO's ultimate responsibility to ensure behavior items were both performed and filed. Regardless of these limitations, our study provides useful information in a field where little research exists [7]. In the absence of evidence of the efficacy of CME in radiation oncology, we believe that information gained from nonrandomized studies is valuable. In addition, there is evidence that before-after studies of AWF may reflect results of subsequent randomized evidence [1,2], and some have argued that we require more observational studies [23].
TABLE 4. Changes in Behavior and Performance Items for Individual Radiation Oncologists

Clinician     Criterion                                                   Proportion      Proportion
                                                                          Scoring "1"     Scoring "1"
                                                                          at T0 (%)       at T1 (%)      P Value
Clinician 1   Treatment intent described                                  48.4            88.5           .0001
              All isodose plans signed                                    83.9            100.0          .003
Clinician 2   Decision for treatment documented                           82.0            97.1           .03
              Tumour stage described and correct                          91.8            100.0          .09
Clinician 3   Treatment intent described                                  52.4            84.4           .01
              Laterality doublet on treatment chart present and correct   81.0            96.9           .05

As a result of our evaluation, we have gained valuable insight as to how we might refine the program for future implementation. Ongoing refinement is an essential part of CME [20], and we plan to modify audit criteria and integrate the program with other CME and clinical quality assurance activities with the aid of a database [24]. The new instrument is being developed in conjunction with the Royal Australian and New Zealand College of Radiologists Post-Fellowship Education Committee with a view to its use as a revalidation/recertification practice assessment instrument. Further evaluations of the modified program are planned.
References
1. Davis D, Haynes RB, Chambers L, et al. The impact of CME: a methodological review of the continuing medical education literature. Eval Health Prof. 1984;7:251-283.
2. Davis DA, Thomson MA, Oxman AD, et al. Evidence for the effectiveness of CME: a review of 50 randomized controlled trials. JAMA. 1992;268:1111-1117.
3. O'Brien TMA, Oxman AD, Davis DA, et al. Audit and feedback: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2001;Issue 4.
4. Freiman MP. The rate of adoption of new procedures among physicians: the impact of specialty and practice characteristics. Med Care. 1985;23:939-945.
5. Schwartz JS, Lewis CE, Clancy C, et al. Internists' practices in health promotion and disease prevention: a survey. Ann Intern Med. 1991;114:46-53.
6. Weiss R, Charney E, Baumgardner RA. Changing patient management: what influences the practicing pediatrician? Pediatrics. 1990;85:791-795.
7. Shakespeare TP. Evaluating the published evidence for the efficacy of continuing medical education in radiation oncology. Ann Acad Med Singapore. 2003;32:S152.
8. Radiation Oncology Resident Training Working Group and the members of SCAROP. Radiation oncology training in the United States: report from the Radiation Oncology Resident Training Working Group organized by the Society of Chairmen of Academic Radiation Oncology Programs (SCAROP). Int J Radiat Oncol Biol Phys. 1999;45:153-161.
9. Coleman CN, Griffin TW, Prosnitz LR, et al. Training the radiation oncologist for the twenty-first century. Int J Radiat Oncol Biol Phys. 1996;35:821-826.
10. Emiliani E. Continuing medical education in radiation oncology. Tumori. 1998;84:96-100.
11. Evans RG. The future of oncology training? Clin Oncol (R Coll Radiol). 1994;6:142-143.
12. Shakespeare TP, Back MF, Lu JJ, Wynne CJ, Bloomfield L. Design of an internationally accredited radiation oncology resident training program incorporating novel educational models. Int J Radiat Oncol Biol Phys. 2004;59:1157-1162.
13. Faculty of Radiation Oncology Revalidation Committee. Revalidation Options Paper. Sydney, Australia: Royal Australian and New Zealand College of Radiologists; 2000.
14. Shakespeare TP, Mukherjee RK, Lu JJ, et al. A comparison of RANZCR and Singapore-designed radiation oncology practice audit instruments: how does reproducibility affect future approaches to revalidation? Australas Radiol. 2004;48:195-203.
15. Shakespeare TP. Evaluation of Audit With Feedback as a Continuing Medical Education Tool for Radiation Oncologists [master's thesis]. Sydney, Australia: University of New South Wales; 2003.
16. Shakespeare TP, Gebski VJ, Veness MJ, et al. Improving interpretation of clinical studies by use of confidence levels, clinical significance curves, and risk-benefit contours. Lancet. 2001;357:1349-1353.
17. Confidence Calculator [computer program]. Version 2.0. Available at: http://www.primercollaboration.com/Tools/tools.html. Accessed August 11, 2004.
18. Buntinx F, Knottnerus JA, Crebolder HFJM, et al. Does feedback improve the quality of cervical smears? A randomized controlled trial. Br J Gen Pract. 1993;43:194-198.
19. Gullion DS, Adamson TE, Watts M. The effect of an individualized practice-based CME program on physician performance and patient outcomes. West J Med. 1983;138:582-588.
20. Stein L. The effectiveness of continuing medical education: eight research reports. J Med Educ. 1981;56:103-110.
21. Kramer S. An overview of process and outcome data in the patterns of care study. Int J Radiat Oncol Biol Phys. 1976;7:795-800.
22. Shank B, Moughan J, Owen J, Wilson F, Hanks GE. The 1993-94 patterns of care process survey for breast irradiation after breast-conserving surgery: comparison with the 1992 standard for breast conservation treatment. The Patterns of Care Study, American College of Radiology. Int J Radiat Oncol Biol Phys. 2000;48:1291-1299.
23. Prideaux D. Researching the outcomes of educational interventions: a matter of design. BMJ. 2002;324:126-127.
24. TCI Quality Improvement (QI) and Continuing Medical Education (CME) Program [computer program]. Available at: http://www.theshakespeares.com/CME.html. Accessed August 11, 2004.
    • "In 1999, the Royal Australian and New Zealand College of Radiologists (RANZCR) designed an instrument to check and record the quality of radiotherapy notes and prescriptions [2]. This audit tool has been shown to be cost effective, to improve targeted performance and positively received by partici- pants [3]. The RANZCR audit tool mandates feedback to treating clinicians as an educational function as well as to provide an additional tier of safety assessment. "
    [Show abstract] [Hide abstract] ABSTRACT: Background The Royal Australian and New Zealand College of Radiologists (RANZCR) initiated a unique instrument to audit the quality of patient notes and radiotherapy prescriptions. We present our experience collected over ten years from the use of the RANZCR audit instrument. Methods In this study, the results of data collected prospectively from January 1999 to June 2009 through the audit instrument were assessed. Radiotherapy chart rounds were held weekly in the uro-oncology tumour stream and real time feedback was provided. Electronic medical records were retrospectively assessed in September 2009 to see if any omissions were subsequently corrected. Results In total 2597 patients were audited. One hundred and thirty seven (5%) patients had one hundred and ninety nine omissions in documentation or radiotherapy prescription. In 79% of chart rounds no omissions were found at all, in 12% of chart rounds one omission was found and in 9% of chart rounds two or more omissions were found. Out of 199 omissions, 95% were of record keeping and 2% were omissions in the treatment prescription. Of omissions, 152 (76%) were unfiled investigation results of which 77 (51%) were subsequently corrected. Conclusions Real-time audit with feedback is an effective tool in assessing the standards of radiotherapy documentation in our department, and also probably contributed to the high level of attentiveness. A large proportion of omissions were investigation results, which highlights the need for an improved system of retrieval of investigation results in the radiation oncology department.
    Full-text · Article · Apr 2013
  • [Show abstract] [Hide abstract] ABSTRACT: The external audit of oncologist clinical practice is increasingly important because of the incorporation of audits into national maintenance of certification (MOC) programs. However, there are few reports of external audits of oncology practice or decision making. Our institution (The Cancer Institute, Singapore) was asked to externally audit an oncology department in a developing Asian nation, providing a unique opportunity to explore the feasibility of such a process. We audited 100 randomly selected patients simulated for radiotherapy in 2003, using a previously reported audit instrument assessing clinical documentation/quality assurance and medical decision making. Clinical documentation/quality assurance, decision making, and overall performance criteria were adequate 74.4%, 88.3%, and 80.2% of the time, respectively. Overall 52.0% of cases received suboptimal management. Multivariate analysis revealed palliative intent was associated with improved documentation/clinical quality assurance (p = 0.07), decision making (p = 0.007), overall performance (p = 0.003), and optimal treatment rates (p = 0.07); non-small-cell lung cancer or central nervous system primary sites were associated with better decision making (p = 0.001), overall performance (p = 0.03), and optimal treatment rates (p = 0.002). Despite the poor results, the external audit had several benefits. It identified learning needs for future targeting, and the auditor provided facilitating feedback to address systematic errors identified. Our experience was also helpful in refining our national revalidation audit instrument. The feasibility of the external audit supports the consideration of including audit in national MOC programs.
    Full-text · Article · Apr 2006
  • [Show abstract] [Hide abstract] ABSTRACT: There has been little radiation oncologist (RO)-specific research in continuing medical education (CME) or quality improvement (QI) program efficacy. Our aim was to evaluate a CME/QI program for changes in RO behavior, performance, and adherence to department protocols/studies over the first 12 months of the program. The CME/QI program combined chart audit with feedback (C-AWF), simulation review AWF (SR-AWF), reminder checklists, and targeted CME tutorials. Between April 2003 and March 2004, management of 75 patients was evaluated by chart audit with feedback (C-AWF) and 178 patients via simulation review audit (SR-AWF) using a validated instrument. Scores were presented, and case management was discussed with individualized educational feedback. RO behavior and performance was compared over the first year of the program. Comparing the first and second 6 months, there was a significant improvement in mean behavior (12.7-13.6 of 14, p = 0.0005) and RO performance (7.6-7.9 of 8, p = 0.018) scores. Protocol/study adherence significantly improved from 90.3% to 96.6% (p = 0.005). A total of 50 actions were generated, including the identification of learning needs to direct CME tutorials, the systematic change of suboptimal RO practice, and the alteration of deficient management of 3% of patients audited during the program. An integrated CME/QI program combining C-AWF, SR-AWF, QI reminders, and targeted CME tutorials effectively improved targeted RO behavior and performance over a 12-month period. There was a corresponding increase in departmental protocol and study adherence.
    Full-text · Article · Jan 2007
Show more