Evaluation of Resident Performance in an Outpatient
Internal Medicine Clinic Using Standardized Patients
RICHARD P. DAY, MD, MARIANA G. HEWSON, PhD,
PHILLIPS KINDY, JR, MSSW, JUDITH VAN KIRK, MS
Objective: To observe and evaluate the performance of primary care
internal medicine residents within the outpatient clinic milieu.
Design: Longitudinal descriptive study.
Patients/participants: 48 internal medicine resident encounters
with two standardized patients at the University of Wisconsin General
Internal Medicine Clinics.
Intervention: Residents were rated by the standardized patients with
a medical skills checklist and an interpersonal skills checklist, and by
the attending physician with a clinical reasoning skills checklist. The
investigators reviewed audiotapes of the standardized patient en-
counters for strategic management skills.
Main results: Resident performance on these scales was examined
for improvement with years of training; when considered separately,
no such effect was seen for either standardized patient case. When the
cases were grouped together, however, there was significant im-
provement on the Clinical Reasoning Instrument. The grouped stan-
dardized patient data were compared with data from inpatient faculty
evaluations of the residents. Faculty evaluations correlated with stan-
dardized patient evaluations of resident performance only on the
medical checklist. Finally, comparison of the four assessment scales
demonstrated a significant correlation between interpersonal skills,
as assessed by the patient, and strategic management skills.
Conclusion: Resident outpatient performance, measured in a
blinded setting, does not improve with year of training. Faculty inpa-
tient assessments of residents correlate with medical "thoroughness"
as measured by a medical skills checklist, and interpersonal skills as
rated by standardized patients correlate with resident use of strategic
medical management skills.
Key words: standardized patients; performance assessment.
J GEN INTERN MED 1993;8:193-198.
RESIDENT EVALUATION and certification in internal medicine focus largely upon the demonstration of factual knowledge in a written test. While attending physicians in residency programs are expected to observe residents' clinical performance, in part compensating for the demise of the oral examination process,1 this clinical exercise has been criticized as artificial.2 Neither of these current methods of evaluation allows for an assessment of resident behavior directly related to patient care.

Attempts to circumvent the problems of current methods have led to other forms of evaluation.3 Patient management problems, done either by traditional paper/pencil means or by computer, provide a means of assessing sequential management of a patient. However, residents usually obtain much more clinical and laboratory data in a patient management problem than what is typically documented in an oral exercise,4 raising the question of whether either of these methods can provide an unbiased assessment of true clinical behavior. Standardized patient encounters have also been used to evaluate resident behavior. Such patient encounters allow for the controlled assessment of data gathering, physical examination, and interpersonal skills.5 Standardized patient assessments have been found to have low intercase reliability6; to minimize this, large numbers of standardized patients per evaluation and a minimum number of individual assessors are needed. Standardized patient-based clinical examinations thus become time-intensive.

In this report, we describe the development of another method of observing clinical performance using unidentified standardized patients7-9 and we share the initial results of using such a method in our clinic setting. Finally, we discuss performance and competency as they pertain to ambulatory practice in a primary care internal medicine program.

Received from the Department of Medicine, University of Wisconsin Hospital and Clinics, Madison, Wisconsin.
Presented in part at the Generalists in Medical Education Conference, Washington, DC, November 10, 1991.
Supported by USPHS/HRSA training grant D28-PE-15218.
Address correspondence and reprint requests to Dr. Day: J5/210 CSC, University of Wisconsin Hospital and Clinics, 600 Highland Avenue, Madison, WI 53792.
METHODS

Subjects were current members of the Primary Care Internal Medicine Residency (n = 42) and Categorical Internal Medicine Residency (n = 6) who saw outpatients in the General Internal Medicine Clinic at the University of Wisconsin Hospital and Clinics. Residents were in the first (n = 14), second (n = 25), or
third (n = 9) residency year. The study was approved
by the Human Subjects Committee of the University of
Wisconsin. All residents provided informed, written
consent for the study. They consented at the beginning
of the academic year with the understanding that they
might see up to two unidentified standardized patients
who would audiotape the clinical encounters. They
further understood that the exercise was for research
purposes only and that no unusual psychosocial prob-
lem would present in these patients. Residents had the
opportunity to review their audiotapes at the end of the study.

Two standardized patients were recruited by advertisement and trained by standard techniques5 to portray standard scenarios. The first patient, a 58-year-old
woman, had experienced subacute onset of lumbar
pain after housework two weeks before the clinic visit.
The second patient, a 22-year-old woman, complained
of increasing daily headaches over the previous six
months. The presentations of these medical problems
were designed to be clinically ambiguous. At least two
staff physicians saw the standardized patients in clinic
before residents were scheduled with them, in order to
establish uniform patient presentation.
The standardized patient was meticulously incor-
porated into the resident's clinic schedule; no distinc-
tion between standardized and "real" patients was
made. The patient audiotaped the encounter by means
of a small microphone and recording device carried in
either a backpack or a purse. To preserve blinding, all orders for laboratory and radiologic data resulting from the clinical encounter were intercepted, and all paper evidence of the resident-standardized patient encounter was purged from the clinic chart after each visit.
Four instruments were developed: a clinical reasoning skills rating scale; an interpersonal skills rating scale; a case-specific checklist on the history and physical examination; and a strategic medical management checklist.

Medical Checklists. The medical checklists are case-specific instruments, completed by the standardized patients and designed to assess the degree to which medical data are gathered by the resident. Each list was generated by the physician on the project and validated by two other physicians as a reasonable expectation for a specific complaint. The lists are divided into the following four categories: pain description, pain severity, psychosocial context, and physical examination procedures. Standardized patient ratings of the interview portion of the checklist were periodically checked against the corresponding audiotape, yielding agreement between 80% and 100% for the back pain case, and 83% and 100% for the headache case.

Clinical Reasoning Skills Rating Scale. This instrument was designed to assess residents' reasoning
and strategic medical management as subjectively de-
termined from the attending physician's point of
view.10 The attending physician completed the instrument on the basis of the resident's presentation and
discussion of the standardized patient case during nor-
mal clinic hours. It contains 12 items, rated on a seven-
point scale and grouped into five categories: data gath-
ering, generating the differential diagnosis, confirming
the diagnosis, planning treatment, and management.
Face validity was obtained from the general internal
medicine faculty at the University of Wisconsin Hospi-
tal and Clinics. Reliability checks included test-retest
correlations (using Pearson's rho) of 0.669, 0.322, and
0.805 for three different attending physicians on the
back pain case. Interrater reliability (using the Kendall coefficient of concordance) was first 0.285 (n = 12) for the back pain case, later improving to 0.502 (n = 6) after training to the meaning of the items.

TABLE 1
Selected Criteria for Strategic Medical Management (with Examples from the Headache Case)

Resident names what patient has, or probably has:
"It's kind of hard to tell right now whether you're having sort of a tension headache or a vascular headache."
"You have migraine headaches."
"I think you have tension headaches, but that doesn't necessarily mean you're tense. It's just that it's muscle tension."

Resident names what patient does not have, or is unlikely to have:
"These headaches really don't sound like migraine headaches. Also, there's nothing to make me think that there's anything terribly wrong . . . like a brain tumor."
"I don't see anything neurologically wrong, no abnormalities."

Resident states how soon patient should be better, given the care:
"It takes a while usually for propranolol to take effect, so if you can give it a couple weeks and see how it goes . . ."
"You should know by the second [pill] if it's gonna help or not."

Resident indicates what changes in patient's condition would necessitate reevaluation:
"If two or three weeks go by and you don't feel it's doing a good enough job, in other words, you're taking the pills every day, you're still getting headaches every day, and you don't feel you're very much improved even though it's relieving your symptoms a little bit, come back, and then we'll talk about starting you on something that you can take to prevent getting the headaches."
"If you're having any problems, give us a call."
"If all of a sudden the headaches are a lot more frequent or a lot worse, or especially if you get numbness or tingling anywhere or anything, you should come right in."
Interpersonal Skills Rating Scale. This instrument, modified from the Arizona Clinical Interview Rating Scale,11 was designed to assess the resident's ability to communicate with the patient appropriately.
The standardized patient completed it following the
resident encounter. It has 31 items, rated on a five-
point scale and grouped into the following five catego-
ries: negotiating agenda, gathering data, emotional
focus, giving information, and patient satisfaction.
Standardized patients were trained to this scale with
videotapes of other physician-patient encounters. Interrater comparisons between the standardized patient
and one of the authors (PK) were obtained (using Pear-
son correlation coefficients) for the back pain case
(0.667, 0.837) and the headache case (0.882, 0.817).
Strategic Medical Management Checklist.
This checklist was designed to assess some of the ob-
servable behaviors we think reflect strategic medical
management. Developed by the investigators, it con-
sists of ten items likely to occur during patient examina-
tion (Table 1 provides selected items and examples
from the headache case). The items may be case-inde-
pendent, in that they should occur regardless of the
outcome of the assessment. This checklist was com-
pleted by the investigators, who analyzed audiotapes of
each encounter. When a resident displayed a specific
verbal behavior at least once, credit for that item was
given. Each criterion was thus rated only once.
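The one-credit-per-criterion scoring rule described above can be sketched as follows; the criteria keys, cue phrases, and transcript are invented stand-ins for illustration, not the authors' actual checklist items.

```python
# Sketch of the "credit each verbal behavior at most once" scoring rule.
# The criteria and transcript below are hypothetical stand-ins.
criteria = {
    "names_probable_diagnosis": ["you have", "probably have"],
    "names_excluded_diagnosis": ["don't sound like", "nothing to make me think"],
    "states_expected_course": ["takes a while", "within a couple weeks"],
    "states_return_precautions": ["come right in", "give us a call"],
}

transcript = (
    "I think you have tension headaches. These really don't sound like "
    "migraines. If anything changes, give us a call."
)

def score(transcript: str, criteria: dict) -> float:
    """Credit each criterion once if any of its cue phrases appears."""
    text = transcript.lower()
    hits = sum(any(cue in text for cue in cues) for cues in criteria.values())
    return hits / len(criteria)  # fraction of criteria met

print(f"{score(transcript, criteria):.0%}")
```

In this toy transcript three of the four hypothetical criteria are met, so the sketch prints 75%; a per-encounter percentage of this kind matches the checklist scores reported in Table 3.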
Data describing resident performance were com-
piled on a personal computer using the statistical pack-
age SYSTAT. The Kruskal-Wallis ANOVA was used to
compare performances by year for each instrument.
Resident performance ratings using the four instru-
ments (using combined cases of low back pain and
headache) were also compared using Pearson correla-
tion coefficients. In addition, mean scores for each resi-
dent from a six-month period of attending physician
evaluations using the American Board of Internal Medi-
cine (ABIM) Houseofficer Evaluation Form were corre-
lated with the four instruments.
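The analysis just described can be sketched in code; the scores below are fabricated for illustration, and hand-rolled formulas stand in for SYSTAT, which is an assumption of test equivalence rather than a reproduction of the original analysis.

```python
# Sketch of the statistics named in the text: a Kruskal-Wallis H test
# across residency years and a Pearson correlation between instruments.
# All scores are fabricated for illustration.

def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic (midranks for ties, no tie correction)."""
    pooled = sorted(x for g in groups for x in g)
    n = len(pooled)
    # Average (mid) rank for each distinct value in the pooled sample
    rank = {v: sum(i + 1 for i, x in enumerate(pooled) if x == v)
               / pooled.count(v)
            for v in set(pooled)}
    return 12 / (n * (n + 1)) * sum(
        len(g) * (sum(rank[x] for x in g) / len(g)) ** 2 for g in groups
    ) - 3 * (n + 1)

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Hypothetical clinical-reasoning scores by postgraduate year
pgy1 = [4.1, 5.0, 4.6, 5.2, 4.8]
pgy2 = [5.3, 5.9, 5.1, 5.6, 5.4]
pgy3 = [6.0, 6.2, 5.8, 6.1, 5.9]
print(f"H = {kruskal_wallis_h(pgy1, pgy2, pgy3):.2f}")

# Hypothetical per-resident means on two of the four instruments
interpersonal = [3.2, 3.6, 3.4, 3.9, 3.1, 3.7]
management = [0.55, 0.62, 0.58, 0.71, 0.50, 0.66]
print(f"r = {pearson_r(interpersonal, management):.2f}")
```

The H statistic is then referred to a chi-squared distribution with (number of groups − 1) degrees of freedom to obtain the p-values reported in the tables.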
RESULTS

Forty-eight resident-standardized patient encounters took place for back pain (n = 23) and headache (n = 25) during an 18-month period. At the time
of the encounter no resident explicitly identified a stan-
dardized patient as such. Initially the clinic attending
physicians also were blinded to the exact identity of the
standardized patient. Instead they knew that one of the
patients in an afternoon for one of their residents would
be a standardized patient. Staff physicians were re-
quested to complete and submit the Clinical Reasoning
Skills Rating Scale for all that day's encounters. Because
staff physicians did not always do so, either because of
the unavailability of time or because of insufficient
staffing of the patient, the total number of data points
for the Clinical Reasoning Skills Rating Scale (n = 36)
is smaller than those for the other instruments (n =
48). Technical problems prevented audiotape analysis
of eight encounters, leaving 40 tapes for strategic medi-
cal management checklist assessment.
In most instances, initial medical management of
the standardized patients consisted only of the medical
interview, physical examination, and treatment plan.
Blood tests were ordered for each patient by two resi-
dents each. Lumbosacral spine x-rays were ordered by
two residents for the back pain patient. For the head-
ache patient, one resident ordered a cranial magnetic
resonance imaging study.
Resident Performance by Year of Training
Resident performance was examined with each in-
strument for each case according to year of residency.
The medical checklists showed appreciable variation
between cases, with more information obtained from
the headache patient. No difference by year of training
was observed. The mean percentages of history items
completed were 76%, 70%, and 71% for first-, second-,
and third-year residents, respectively. On physical ex-
amination items, the scores averaged 47%, 45%, and
61%, respectively. Performances on the interpersonal
skills scale also did not significantly differ among resi-
dents according to year of training. For both cases, low
ratings were observed on the emotional focus items,
and a trend toward decreasing performance with years
of residency on emotional focus questions was seen in
the low back pain case. The means (based on a five-
point scale) for residents on the back pain case were
3.5, 3.4, and 3.2 for first-, second-, and third-year resi-
dents, respectively. For the headache case, the means
were 3.5, 3.5, and 3.7.
On the Clinical Reasoning Skills Rating Scale,
scores were uniformly high (Table 2). When data from
the two cases were grouped, a significant difference
was seen across years of residency (p < 0.05). The Stra-
tegic Medical Management Checklist scores showed no
improvement across years of training or standardized
patient case (Table 3).
Resident Performance across Instruments
Pearson correlations of mean resident ratings
across the four instruments for both cases, and between
them and the ABIM global rating scale, showed few
significant correlations (Table 4). The Interpersonal Skills Rating Scale and the Medical Checklist correlated highly with each other (r = 0.59, p < 0.01). Interpersonal skills also significantly correlated with the management checklist (r = 0.41, p < 0.05), while medical checklists correlated with faculty inpatient evaluations (r = 0.48, p < 0.05).

TABLE 2
Clinical Reasoning Skills Rating Scale Mean Scores (± Standard Deviations) (Rated by Attending Physician on a Scale of 0-7)

                        Low Back Pain          Headache               Combined Cases*
Postgraduate year 1     4.89 ± 1.29 (n = 8)    5.03 ± 1.38 (n = 5)    4.95 ± 1.27 (n = 13)
Postgraduate year 2     5.87 ± 1.51 (n = 4)    5.50 ± 1.09 (n = 14)   5.58 ± 1.15 (n = 18)
Postgraduate year 3     6.00 ± 0.00 (n = 2)    6.37 ± 0.34 (n = 3)    6.22 ± 0.31 (n = 5)

*Combined cases show statistical improvement with postgraduate year (p < 0.05, Kruskal-Wallis ANOVA).

TABLE 3
Strategic Medical Management Rating Scale Mean Scores (± Standard Deviations) (Rated by Investigators on the Basis of Audiotape)

                        Low Back Pain          Headache
Postgraduate year 1     59% ± 16% (n = 7)      51% ± 12% (n = 5)
Postgraduate year 2     60% ± 12% (n = 7)      65% ± 16% (n = 14)
Postgraduate year 3     56% ± 18% (n = 5)      55% ± 21% (n = 2)

TABLE 4
Pearson Correlation Matrix Comparing the Four Assessment Scales and Inpatient Faculty Evaluations

        SMM*              MCL†              IPS‡              CRI§              ABIM¶
SMM     1.00 (n = 40)
MCL     0.07 (n = 40)     1.00 (n = 48)
IPS     0.41 (n = 40)**   0.59 (n = 48)††   1.00 (n = 48)
CRI     0.29 (n = 32)     0.12 (n = 36)     0.25 (n = 36)     1.00 (n = 36)
ABIM    0.16 (n = 24)     0.48 (n = 28)**   0.18 (n = 28)     0.08 (n = 19)     1.00 (n = 28)

*SMM = Strategic Medical Management Rating Scale.
†MCL = Medical Checklist.
‡IPS = Interpersonal Skills Rating Scale.
§CRI = Clinical Reasoning Skills Rating Scale.
¶ABIM = ABIM Global Rating Scale.
**p < 0.05.
††p < 0.01.
DISCUSSION

The Notion of Competency
The definition of medical expertise, while varying
from subspecialty to subspecialty, becomes particu-
larly difficult within an undifferentiated primary care
practice. First, any description of the content domain of
primary care internal medicine is extensive, 12 making it
unlikely that individual residents can personally master
all aspects. Second, the problems seen in primary care
range widely, including those typical of family practice
as well as those seen more often in subspecialty internal
medicine. Finally, the outpatient internist is expected
to identify the patient who has an unusual disease pro-
cess necessitating a specific therapeutic approach.
However, given that an unusual disease has a low inci-
dence of presentation, it may prove difficult to distin-
guish a patient who has an unusual disease from a patient who has an atypical presentation of a common disease.
The primary care provider, therefore, must view
patient complaints differently from the way the practicing subspecialty internist does.14 The subspecialist
routinely sees patients who have been referred or who
are frustrated with their previous care givers. In such a
practice, where there is a much higher incidence of
serious disease than that found in the primary care pop-
ulation, there is a greater obligation to fully evaluate
the patient's problems, using currently available tech-
nology. Application of such an approach to primary
care, however, might have unproductive conse-
quences. When primary care clinical management in-
cludes unrestricted and extensive diagnostic testing to
rule out ominous disease, the likelihood of false-posi-
tive results and the risk of adverse effects from the diagnostic process are high.15 Ultimately, patient satisfaction may decrease; overall, medical costs to society will increase.
To avoid unnecessary pursuit of medical diagnosis,
the successful primary care physician must utilize more
than the "snapshot" of the patient presentation as de-
fined by the history, physical examination, and initial
laboratory data. In the initial clinical encounter, any
physician makes an implicit decision as to the immedi-
ate severity of the patient's problem. 16 In an extreme
situation, medical severity is evident; this leads to a
specific diagnostic and therapeutic course. But most
encounters are ambiguous, so the physician must chart
a management course that permits several hypothe-
sized diagnoses, allowing for either a benign (likely) or
an ominous (unlikely) illness process. 17 Relying upon
the ability to reevaluate the patient through time, the
provider may elect to counsel the patient regarding the
diagnostic possibilities, the likely symptom course if a
benign outcome is to occur, and the period of time
necessary to allow such an outcome. 13 Most important,
the physician can indicate to the patient what addi-
tional symptoms would require immediate reevalua-
tion for more ominous disease. Management of patients
in the face of uncertainty thus may include deferring
definitive diagnostic testing while involving the pa-
tient in assessing the evolution of disease.
In this report, we have referred to this clinical
process as strategic medical management. This process
allows for the winnowing of significant medical disease
from self-limiting syndromes within the primary care
practice. Overall use of medical technology is thus op-
timized, and, more importantly, through the use of pa-
tient involvement, bad outcomes are minimized. While
critically important to effective primary care, strategic
medical management is currently not formally taught
or assessed in residency training. 18
Other authors have commented upon the need for
the internist to be more than the possessor of knowl-
edge and skills. The practitioner needs to be sensitive to
psychosocial concerns, to possess good interviewing
and psychological skills,19 and to be able to negotiate with the patient an effective treatment plan to which the patient will adhere.20 Strategic medical management certainly requires good interviewing skills and
psychosocial assessment, just as it requires appropriate
clinical information gathering and clinical reasoning
skills. However, strategic medical management is a
meta-skill, incorporating all of the above elements into
a functional practice style. Indeed, the exercise of stra-
tegic medical management by an individual physician
may allow for selected deficiencies in performance
without affecting patient outcome. For example, a phy-
sician deficient in the complete differential diagnosis
of a patient's presenting complaint would, through
strategic medical management, monitor the patient
well enough to know when to get assistance. Similarly,
the physician with less than ideal interpersonal skills
could utilize the techniques of strategic medical man-
agement to communicate to the patient his or her ex-
pectations regarding medical outcome.
Assessment with Unidentified Patients
Practitioner behavior changes during an explicit
testing situation.4 Performance of general practitioners
with known simulated patients in controlled practice
environments differs from performance with unidenti-
fied simulated patients in their own clinic settings. 21
We hypothesized that strategic medical management,
as a set of skills not explicitly taught in our educational
curriculum, might be exercised less often in an exami-
nation setting. Therefore, unidentified standardized pa-
tients were used. This method worked well; there was
no insurmountable administrative difficulty and no
complaint from residents or other staff.
Except for faculty assessment of resident behavior
on the Clinical Reasoning Skills Rating Scale, we found
no statistically significant improvement in our assess-
ments over years of training. These findings allow for
several possible interpretations. Interpersonal skills of
residents may not show improvement because 1) there
is inadequate teaching or reinforcement of these skills
or 2) the process of residency training causes residents
to devalue interpersonal skills. The low scores in the
psychosocial domain may reflect that our cases were
constructed without a strong emotional component, so
that residents did not need to exercise interpersonal skills to the degree that differentiation among them was possible.
Inconsistent scores on the medical checklists sup-
port the notion that there is not necessarily a relation-
ship between amount of information gathered and ex-
perience. The lack of improvement with training on the
Strategic Medical Management Checklist is compatible
with the fact that this skill was not explicitly taught to the residents.
The presence or absence of specific correlations
among individual residents' performances on different
rating scales deserves further analysis. The strong cor-
relation between the Medical Checklist and the Inter-
personal Skills Rating Scale may reflect a bias among
patients that "more is better." Interestingly, strategic
medical management correlated with interpersonal
skills. Whether this indicates that similar behaviors are
addressed on the two scales (for example, information
giving) or that performance of strategic medical management creates patient satisfaction requires further study.
The notion of strategic medical management,
freshly developed here, deserves pursuit. Recent litera-
ture supports the notion that implicitly practiced stra-
tegic medical management explains some physician be-
havior. For example, analysis of patients with back pain
has suggested that specific clinical criteria can be used
to determine the necessity for lumbosacral roentgenog-
raphy, and that the sedimentation rate is probably indi-
cated for all patients over 50 years of age. 22 However,
surveys of physician behavior have shown that lumbo-
sacral roentgenography is done less often than these
recommendations would suggest.23 If strategic medical
management is responsible for the discrepancy be-
tween recommended practice and actual practice, fo-
cused studies of physician practice behavior may be
necessary. Refinement of this notion of strategic medi-
cal management will then allow more effective teach-
ing of outpatient practice to residents.
The present study demonstrates well that the man-
agement skills we wish residents to learn do not increas-
ingly manifest themselves during residency training. It
is not possible, nor was it the purpose of this study, to
assess the competency of individual residents. How-
ever, the overall success of a program can be judged
with this methodology. Accordingly, we have begun
using standardized patients in an unblinded fashion to
explicitly teach strategic medical management to our
primary care residents.
The authors gratefully acknowledge the encouragement and advice
of Dr. Paula Stillman.
REFERENCES

1. Petersdorf RG. Evaluation of the general internist. Arch Intern Med.
2. Kroboth FJ, Kapoor W, Brown FH, Karpf M, Levey GS. A compara-
tive trial of the clinical evaluation exercise. Arch Intern Med.
3. Swanson DB. Issues in assessment of practical skills in medicine.
Professions Educ Res Q. 1990;12:3-6.
4. McGuire CH. Evaluation of student and practitioner compe-
tence. In: McGuire CH, Foley RP, Gorr A, et al. Handbook of
health professions education: responding to new realities in
medicine, dentistry, pharmacy, nursing, allied health, and pub-
lic health. San Francisco: Jossey-Bass, 1983;256-93.
5. Stillman PL, Swanson DB, Smee S, et al. Assessing clinical skills
of residents with standardized patients. Ann Intern Med
6. van der Vleuten CPM, Swanson DB. Assessment of clinical skills
with standardized patients: state of the art. Teach Learn Med.
7. Hoppe RB, Farquhar LJ, Henry R, Stoffelmayr B. Residents' atti-
tudes towards and skills in counseling: using undetected stan-
dardized patients. J Gen Intern Med. 1990;5:415-20.
8. Ainsworth MA, Rogers LP, Markus JF, Dorsey NK, Blackwell TA,
Petrusa ER. Standardized patient encounters: a method for teach-
ing and evaluation. JAMA. 1991;266:1390-6.
9. Gordon JJ, Saunders NA, Hennrikus D, Sanson-Fisher RW. Interns'
performances with simulated patients at the beginning and the
end of the intern year. J Gen Intern Med. 1992;7:57-62.
10. McGuire CH. Medical problem solving: a critique of the literature. J Med Educ. 1985;60:587-95.
11. Stillman P, Brown D, Redfield D, Sabers D. Construct validation
of the Arizona Clinical Interview Rating Scale. Educ Psychol
12. Jensen NN, Dirkx JM, et al. A curriculum for internal medicine
residency: The University of Wisconsin Program. Philadelphia:
American College of Physicians, 1988.
13. Dixon AS. 'There's a lot of it about': clinical strategies in family
practice. J R Coll Gen Pract. 1986;36:468-71.
14. Mathers N, Hodgkin P. The gatekeeper and the wizard: a fairy
tale. BMJ. 1989;298:172-4.
15. Mold JW, Stein HF. The cascade effect in the clinical care of
patients. N Engl J Med. 1986;314:512-4.
16. Barrows HS, Feltovich PJ. The clinical reasoning process. Med Educ.
17. Kassirer JP. Teaching clinical medicine by iterative hypothesis
testing. Let's preach what we practice. N Engl J Med. 1983;
18. Lawrence RS. The goals for medical education in the ambulatory
setting. J Gen Intern Med. 1988;3(M/A suppl):S15-S25.
19. American Board of Internal Medicine. Clinical competence in
internal medicine. Ann Intern Med. 1979;90:402-11.
20. Green JA. Minimizing malpractice risks by role clarification. The
confusing transition from tort to contract. Ann Intern Med.
21. Rethans JJ, Sturmans F, Drop R, van der Vleuten C, Hobus P. Does
competence of general practitioners predict their performance?
Comparison between examination setting and actual practice.
22. Deyo RA, Diehl AK. Cancer as a cause of back pain: frequency,
clinical presentation, and diagnostic strategies. J Gen Intern Med.
23. Frazier LM, Carey TS, Lyles MF, Khayrallah MA, McGaghie WC.
Selective criteria may increase lumbosacral spine roentgenogram use in acute low-back pain. Arch Intern Med. 1989;