Quality of Care of Children in the Emergency Department: Association
with Hospital Setting and Physician Training
MADAN DHARMAR, MBBS, JAMES P. MARCIN, MD, MPH, PATRICK S. ROMANO, MD, MPH, EMILY R. ANDRADA, MD,
FRANK OVERLY, MD, JONATHAN H. VALENTE, MD, DANIELLE J. HARVEY, PHD, STACEY L. COLE, BS,
AND NATHAN KUPPERMANN, MD, MPH
Objective To investigate differences in the quality of emergency care for children related to differences in hospital setting,
physician training, and demographic factors.
Study design This was a retrospective cohort study of a consecutive sample of children presenting with high-acuity illnesses
or injuries at 4 rural non-children’s hospitals (RNCHs) and 1 academic urban children’s hospital (UCH). Two of 4 study
physicians independently rated quality of care using a validated implicit review instrument. Hierarchical modeling was used to
estimate quality of care (scored from 5 to 35) across hospital settings and by physician training.
Results A total of 304 patients presenting to the RNCHs and the UCH were studied. Quality was lower (difference ? ?3.23;
95% confidence interval [CI] ? ?4.48 to ?1.98) at the RNCHs compared with the UCH. Pediatric emergency medicine (PEM)
physicians provided better care than family medicine (FM) physicians and those in the “other” category (difference ? ?3.34,
95% CI ? ?5.40 to ?1.27 and ?3.12, 95% CI ? ?5.25 to ?0.99, respectively). Quality of care did not differ significantly
between PEM and general emergency medicine (GEM) physicians in general, or between GEM and PEM physicians at the UCH;
however, GEM physicians at the RNCHs provided care of lesser quality than PEM physicians at the UCH (difference ? ?2.75;
95% CI ? ?5.40 to ?0.05). Older children received better care.
Conclusions The quality of care provided to children is associated with age, hospital
setting, and physician training. (J Pediatr 2008;153:783-9)
Abbreviations: GEM, general emergency medicine; PEM, pediatric emergency medicine; PRISA, Pediatric Risk of Admission; RNCH, rural non-children’s hospital; UCH, urban children’s hospital.

See editorial, p 738

From the Department of Pediatrics (M.D., J.M., P.R., S.C., N.K.), Center for Health Services Research in Primary Care (M.D., J.M., P.R., N.K.), Department of Internal Medicine (P.R.), Department of Emergency Medicine (E.A., N.K.), and Department of Public Health Sciences (D.H.), University of California Davis, Sacramento, CA, and Department of Emergency Medicine, Brown University, Providence, RI (F.O., J.V.).

Supported in part by grants from the Agency for Healthcare Research and Quality (AHRQ 1 K08 HS 13179-01), Emergency Medical Services for Children (HRSA H34MC04367-01-00), and the California Healthcare Foundation (CHCF 02-2210). The authors declare no conflicts of interest.

Submitted for publication Sep 1, 2007; last revision received Apr 15, 2008; accepted May 14, 2008.

Reprint requests: Madan Dharmar, MBBS, Department of Pediatrics, University of California Davis Children’s Hospital, 2516 Stockton Boulevard, Sacramento, CA 95817.

0022-3476/$ - see front matter. Copyright © 2008 Mosby Inc. All rights reserved.

Studies of the infrastructure and quality of pediatric emergency services across US hospitals1-8 have found that most emergency departments (EDs), particularly in rural areas,8,9 may not be sufficiently prepared to care for children,10 and have recommended that facilities be specifically equipped and staffed for pediatric emergency care.2,6,7 A recent report from the Institute of Medicine documented that only 6% of EDs in the United States are fully equipped for pediatric emergencies.8,9 The Centers for Disease Control and Prevention reported that only 71% of US EDs have board-certified emergency medicine physicians available round the clock, either in-house or on call; furthermore, only 24% of EDs have access to board-certified pediatric emergency medicine physicians, and 38% of EDs do not have a pediatrician available for consultation at all times.9 These deficiencies in equipment, staffing, and availability of pediatric expertise can lead to delayed diagnosis, administration of inappropriate therapies, and suboptimal medical management in the ED, particularly for critically ill and injured children.4,11-18

Even though most previous studies have investigated differences in the structure of care, little is known about how these differences translate to differences in the processes of care and other measures of quality. Although some instruments have been developed to risk-stratify children in the ED for specific outcomes, including appropriateness of admission and return visits within 24 hours of discharge,19-22 these instruments do not comprehensively evaluate processes of care in the ED. Implicit review, a means of
assessing quality based on expert reviewers’ judgment of care,23-33 has been shown to have high face validity26 and adequate interrater reliability.25,26
To investigate the associations between ED physician
training, hospital setting, and quality of care, we used a
validated implicit review instrument34 that measures quality
of emergency care provided to critically ill or injured children.
Our goals were to (1) compare the quality of care delivered to
children in the EDs at nonacademic, rural, non-children’s
hospitals (RNCHs) and the ED of an academic, urban,
children’s hospital (UCH); (2) compare the quality of ED
care delivered to children by pediatric emergency medicine
(PEM) physicians, general emergency medicine (GEM) physicians, and other physicians; and (3) identify demographic
factors associated with the quality of care provided to children
in the ED.
Methods

Study Setting and Time Period
This study is part of a larger project designed to inves-
tigate interventions aimed at improving the quality of care and
reducing medication errors in acutely ill and injured children
seen in rural, underserved EDs. Because of this, we selected a
convenience sample of 4 RNCH EDs that provide emergency
care in designated rural areas, as defined by California’s Office
of Statewide Health Planning and Development35 and the Federal Centers for Medicare and Medicaid Services.36 All of
the 4 RNCH EDs provide care to underserved communities
according to the Health Resources and Services Administra-
tion’s definitions of a health professional shortage area, a
medically underserved area, and/or a medically underserved
population.37 The EDs at the RNCHs treat between 2200
and 7500 patients per year, of whom between 700 and 2000
are children. The comparison UCH ED is also the northern
California pediatric tertiary referral center for the participating RNCH EDs. The UCH has a referral and transport relationship with the 4 RNCHs but does not have an educational or training relationship with them and does not share personnel. During the study period, the UCH ED treated approximately 51,000 patients
per year, of whom approximately 20% were children. The study
period extended from January 2000 through June 2003.
Selection of Participants
We chose to study the most seriously ill children pre-
senting to the participating EDs, because these types of
patients are the most likely to demonstrate clinically mean-
ingful differences in quality of care.38 Therefore, from the 4
RNCHs (which all have similar 3-level triage systems), we
included all consecutive children who were triaged in the
highest category at presentation (ie, those children considered
seriously ill or injured). The UCH ED has a 5-level triage
system, with the highest 2 triage categories similar in descrip-
tion to the single highest triage level used in the RNCHs.
Therefore, for the children presenting to the UCH ED, we
included a simple random sample of patients triaged in either
of the highest 2 levels; thus, the number of charts collected
from the UCH ED was similar to the number of charts
collected from the RNCHs.
Children were included in the study if they were older
than 1 day and younger than 17 years and presented between
January 1, 2000 and June 30, 2003. All patients from the
RNCH and UCH sites were identified from paper and elec-
tronic ED logbooks, respectively, which included information
on the triage category and the patient’s age and chief com-
plaint. The medical records were copied and deidentified of
all patient and hospital information by a research assistant.
Main Outcome Measure
Quality of care was measured with a validated, 5-item
implicit review instrument developed to measure quality of
care provided to pediatric patients in EDs (Figure; available at
www.jpeds.com).34 This instrument encompasses 4 aspects of
process of care in the ED, along with a fifth item assessing the
overall quality of care provided to the patient. Each item is
scored on a scale of 1 to 7 as “extremely inappropriate,” “very
inappropriate,” “somewhat inappropriate,” “intermediate,”
“somewhat appropriate,” “very appropriate,” or “extremely
appropriate.” All reviews were conducted in pairs by 4 phy-
sicians who were board-certified in PEM, 1 of whom also was
board-certified in GEM. Two physicians independently re-
viewed blinded medical records from the UCH ED, and 2
different physicians reviewed records from the 4 RNCH EDs.
The medical records were blinded by deidentifying both hos-
pital and patient-identifying information. For this study, we
used the sum of the 5 item-specific scores from each reviewer
to obtain a summary quality of care score for each chart review
by each reviewer. After consensus resolution of scores that
differed by more than 2 points between reviewers on the
overall quality of care item, the mean of the 2 reviewers’
summary scores was recorded as the final quality of care
assessment. Consensus resolution of divergent scores was reached by the 2 reviewers discussing the case together. None of the reviewers practiced at
the hospital from which he or she reviewed charts. To assess
the comparability of the 2 sets of reviewers, all 4 reviewers
reviewed a random sample of 30 charts, and the quality of care
scores across the 4 reviewers were recorded for analysis.
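The scoring procedure described above — sum 5 items rated 1 to 7 into a 5-to-35 summary, flag chart pairs whose overall-quality item differs by more than 2 points for consensus, then average the 2 reviewers' summaries — can be sketched as follows. This is a minimal illustrative sketch; the function names and data layout are assumptions, not the authors' actual code.

```python
def summary_score(items):
    """Sum of the 5 item scores (each rated 1-7), giving a 5-35 summary."""
    assert len(items) == 5 and all(1 <= s <= 7 for s in items)
    return sum(items)

def final_score(reviewer_a, reviewer_b, consensus=None):
    """Mean of the two reviewers' summary scores for one chart.

    If the overall-quality item (index 4) differs by more than 2 points,
    the consensus scores agreed on by the reviewers must be supplied;
    until then the chart is flagged (None) for consensus resolution.
    """
    if abs(reviewer_a[4] - reviewer_b[4]) > 2:
        if consensus is None:
            return None  # flag: needs consensus re-review
        reviewer_a, reviewer_b = consensus
    return (summary_score(reviewer_a) + summary_score(reviewer_b)) / 2

# Two reviewers agree closely (overall items 6 vs 5): simple average.
print(final_score([6, 5, 6, 7, 6], [5, 6, 6, 6, 5]))  # prints 29.0
```

The flag-then-average design mirrors the paper's description: only the fifth (overall quality) item triggers consensus, but the averaged quantity is the full 5-item summary.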
Factors Related to Quality of Care
A research assistant abstracted patient-level demo-
graphic, diagnostic, and physiological data that we identified
a priori as possibly related to quality of care. Patient-level
variables included age, sex, chief complaint, mode of arrival to
the ED, day of week, and time of presentation. Time of
presentation was dichotomized into nighttime (defined as 7
PM to 7 AM) and daytime. The variable “ambulance or heli-
copter” was categorized as “yes” if the patient was transported
by either of these methods or “no” for walk-in and all other
modes of arrival to the ED. In addition, we collected all
variables required to calculate the Pediatric Risk of Admission
(PRISA II) score.19,20
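The variable coding described above — nighttime as 7 PM to 7 AM, and "ambulance or helicopter" as yes for either transport mode — might be expressed as follows with pandas. Column names here are illustrative assumptions, not the study's actual data dictionary.

```python
import pandas as pd

# Hypothetical visit records (hour of arrival in 24-hour time, arrival mode).
visits = pd.DataFrame({
    "arrival_hour": [2, 9, 19, 14, 23],
    "mode": ["ambulance", "walk-in", "helicopter", "private car", "walk-in"],
})

# Nighttime presentation: 7 PM to 7 AM (hour >= 19 or hour < 7).
visits["night"] = ((visits["arrival_hour"] >= 19) |
                   (visits["arrival_hour"] < 7)).astype(int)

# "Ambulance or helicopter": 1 for either transport mode, 0 for walk-in
# and all other modes of arrival.
visits["ems_transport"] = visits["mode"].isin(
    ["ambulance", "helicopter"]).astype(int)

print(visits[["night", "ems_transport"]])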
784 Dharmar et al. The Journal of Pediatrics • December 2008
We also collected data regarding the treating physician,
including internship, residency, fellowship training, and
board certification. These data were collected from the med-
ical staff offices at participating hospitals and the Medical
Board of California.39 Training was categorized as PEM,
GEM, family medicine (FM), or “other,” which included
internal medicine, obstetrics-gynecology, general surgery, and
1-year internships. We also collected data on years of expe-
rience, defined as the time since graduation from medical
school to the time of patient treatment.
The reliability of chart review for each set of reviewers
was measured using the intraclass correlation (ICC).40 The
ICC represents the proportion of the variation between sub-
jects (patients) in relation to the total variation. The total
variation (variance) has 2 components: (1) variation between
subjects, which captures variability in quality of care scores
across subjects, and (2) variation within subject, which cap-
tures the variability in quality of care scores generated by each
reviewer for the same subject. A high ICC indicates that
within-subject variation is relatively small and that the re-
viewers are scoring subjects similarly; in other words, more
reliable reviewer scores will result in a higher ICC.
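The ICC definition in the paragraph above — between-subject variance over total variance, estimated from a one-way analysis of variance on the paired reviewer scores — can be sketched in a few lines. The function and the example score pairs are hypothetical, meant only to make the variance decomposition concrete.

```python
from statistics import mean

def icc_oneway(scores):
    """One-way random-effects ICC for a list of per-chart reviewer scores.

    scores: list of charts, each a list of k reviewer scores.
    Returns between-subject variance as a share of total variance.
    """
    k = len(scores[0])   # reviewers per chart
    n = len(scores)      # charts (subjects)
    grand = mean(s for chart in scores for s in chart)
    chart_means = [mean(chart) for chart in scores]
    # Mean squares between and within subjects (one-way ANOVA).
    ms_between = k * sum((m - grand) ** 2 for m in chart_means) / (n - 1)
    ms_within = sum((s - m) ** 2
                    for chart, m in zip(scores, chart_means)
                    for s in chart) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Five hypothetical charts, each scored (5-35 scale) by 2 reviewers.
pairs = [[30, 28], [22, 24], [14, 15], [33, 31], [18, 21]]
print(round(icc_oneway(pairs), 2))  # prints 0.96
```

When the within-chart (reviewer) disagreement is small relative to the spread across charts, as in this toy sample, the ICC is high — the same logic behind the paper's reported ICCs of 0.64 to 0.79.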
We performed descriptive analyses comparing the
UCH and RNCHs using the Student t test or the Mann-Whitney test for continuous variables. We compared categorical variables using the χ² test or Fisher’s exact test, as appropriate. Because our data set comprised multiple patients
treated in each hospital, many patients were evaluated by the
same physician. Quality of care scores of patients treated by
the same physician are likely to be more similar than scores of
patients treated by different physicians, and we could not
assume independence of quality of care scores across all pa-
tients. Therefore, we used hierarchical multivariable models41,42 with a random intercept for each physician to account
for this “clustering” by physician. We calculated an ICC for
each of these models to assess the validity of the “clustering”
assumption. All variables were assessed for collinearity before
being included in the models. We decided, a priori, to include
time of admission and the PRISA II score in the multivariable model because of their conceptual importance. We used
a log-transformed PRISA II score as an independent variable
to adjust for the nonlinear association between this marker of
severity of illness and quality of care.
Because of the collinearity between hospital setting and
physician specialty training, we modeled quality of care in 3
ways to determine the independent associations of both hos-
pital setting and physician training. In model 1, we included
hospital setting, comparing the UCH and RNCHs, without
including physician training. In model 2, we included physi-
cian training, comparing PEM, GEM, FM, and other, with-
out including hospital setting. In model 3, we included phy-
sician training and hospital setting, further categorizing the
groups as PEM (limited to the UCH), GEM in the UCH,
GEM in the RNCHs, FM (limited to the RNCHs), and
other (limited to the RNCHs). All models included the
variables needed to adjust for case mix, physician experience,
and other confounding variables.
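A hierarchical random-intercept model of this kind — model 1, with hospital setting and adjustment covariates, clustered by treating physician — could be specified as below with statsmodels. The data here are simulated and every column name (quality, rnch, log_prisa2, and so on) is a hypothetical stand-in, not the study's actual dataset or code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 304  # matches the study's sample size
df = pd.DataFrame({
    "physician_id": rng.integers(0, 68, n),   # clustering unit (68 physicians)
    "rnch": rng.integers(0, 2, n),            # 1 = RNCH, 0 = UCH
    "age": rng.uniform(0, 17, n),             # patient age in years
    "night": rng.integers(0, 2, n),           # nighttime presentation
    "log_prisa2": rng.normal(0, 1, n),        # log-transformed PRISA II
})
# Simulated 5-35 quality score with a setting effect and an age effect.
df["quality"] = (28 - 3.2 * df["rnch"] + 0.12 * df["age"]
                 + rng.normal(0, 3, n))

# Linear mixed model: fixed effects plus a random intercept per physician.
model = smf.mixedlm("quality ~ rnch + age + night + log_prisa2",
                    data=df, groups=df["physician_id"])
fit = model.fit()
print(fit.params["rnch"])  # adjusted setting effect (negative = lower at RNCH)
```

Models 2 and 3 would follow the same template, swapping the setting indicator for physician-training categories; the random intercept is what absorbs the "clustering by physician" correlation the authors describe.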
We used SAS 9.21 (SAS Institute, Cary, North Caro-
lina) and Stata 9.1 (StataCorp, College Station, Texas) for all
statistical analyses. The study protocol and conduct were approved by the Human Subjects Review Committee at the University of California Davis.
Results

A total of 342 eligible patients were identified. In 17 of
these patients (5.0%), the medical chart could not be located
for review. Of the 325 charts available, 21 (6.5%) had to be
excluded due to an absent medical record for the ED visit.
The final sample included 166 charts from the RNCHs and
138 charts from the UCH. In this sample, 6 PEM physicians
treated 72 patients (in the UCH only), 23 GEM physicians
treated 108 patients (in both the UCH and RNCHs), 20 FM
physicians treated 68 patients (in the RNCHs only), and 19
physicians trained in other specialties treated 56 patients (in
the RNCHs only).
Table I summarizes the patient characteristics. When
the sample was stratified based on the hospital setting (UCH
vs RNCH), demographic and ED-related variables were sim-
ilar between groups except for the mean PRISA II score,
which was higher at the RNCHs. All patients at the UCH
ED were treated by GEM- or PEM-trained physicians,
whereas only 25% of the patients at the RNCHs were treated
by GEM-trained physicians (P < .05) and none were treated
by PEM-trained physicians. Almost 100% of the UCH ED
physicians were board-certified in a specialty, compared with
80% of those at the RNCH EDs (P < .01). Average physician experience was 15.11 years in the UCH ED and 17.92 years in the RNCH EDs (P < .01).
The overall ICC for the random sample of 30 charts
among all 4 reviewers was 0.79, indicating that the 2 pairs of
reviewers scored the same charts similarly. The specific ICCs
were 0.71 for the reviewers who evaluated the UCH charts
and 0.64 for those who evaluated the RNCH charts, suggest-
ing acceptable agreement between reviewers. The ICC calcu-
lated to evaluate the need for hierarchical modeling of the
outcome was 26.3%. This finding demonstrates the relative
similarity in the quality of care of patients treated by the same
physician, justifying the use of the hierarchical model.
Table II (available at www.jpeds.com) demonstrates the
results from the bivariate analysis of quality of care for each of
the independent variables and the absolute score for each
group. The patients cared for at the RNCHs had significantly
lower quality of care scores compared with those cared for at
the UCH (coefficient = −3.10; 95% confidence interval [CI] = −4.96 to −1.94). In addition, physicians trained in
FM and other specialties provided lower average quality of care
than those trained in either PEM or GEM. Although overall
there were no statistically significant differences in quality of
care provided by GEM and PEM physicians, when GEM
training was further categorized by practice location, it was
found that GEM-trained and PEM-trained physicians work-
ing at the UCH provided similar quality of care, whereas
GEM-trained physicians working at the RNCHs provided
significantly lower quality of care than PEM-trained physi-
cians (working at the UCH) (P < .05 for all). Increasing
patient age also was significantly associated with better quality
of care (coefficient = 0.11; 95% CI = 0.04 to 0.19). All other
variables (including time of admission, patient presentation
during the weekend, chief complaint, and mode of arrival to
the ED) were not significantly associated with quality of care.
Table III gives the results for the hierarchical random-
intercept multivariate models. Three factors were significantly
associated with higher quality of care scores: patient age,
hospital setting, and physician training. In all 3 models,
quality of care was 0.55 points higher for every 5-year increase
in patient age. In model 1, average quality of care was worse
for the patients treated at the RNCHs compared with those
treated at the UCH. In model 2, compared with PEM phy-
sicians at the UCH, the quality of care provided by GEM,
FM, and other physicians was lower by 0.54, 3.44, and 2.83
points, respectively; however, the difference in quality of care
between GEM and PEM physicians was not statistically
significant. In model 3, quality of care did not differ signifi-
cantly between PEM and GEM physicians treating patients
in the UCH; however, average quality of care was lower for
patients treated by GEM, FM, and other physicians in the
RNCH EDs (by 2.75, 3.34, and 3.12 points, respectively)
compared with those treated by PEM physicians at the UCH
(the reference group for comparison). Quality of care did not
differ significantly among the types of physicians at the RNCH
EDs. In all models, higher quality of care also was associated
with increasing patient age. All other patient and physician
variables (including treating physician experience, patient sex,
nighttime or weekend presentation to the ED, and PRISA II
score) were not significantly associated with quality of care.
Discussion

We found that the quality of emergency care provided
to children of similar acuity was better at the UCH ED than
at the RNCH EDs. We also found that the overall quality of
care was similar in children treated by PEM-trained physi-
cians and those treated by GEM-trained physicians, and was
Table I. Patient and physician characteristics at the UCH and 4 RNCHs in northern California (n = 304). Rows: age, years, mean (SD); PRISA score, mean (SD); sex, n (%); arrival method, n (%); ambulance or helicopter, n (%); injury, n (%); weekend presentation, n (%); night presentation, n (%); board certification, n (%); physician training, n (%); years since graduation, mean (SD): RNCH 17.92 (11.42), UCH 15.11 (5.49). SD, standard deviation. (Remaining cell values omitted.)
better than that provided to children treated by all physicians
at the RNCH EDs. In all models, better quality of care also
was associated with increasing age.
Our finding that physicians trained in specialty areas
provided higher-quality specialty care compared with physi-
cians with general training is similar to that reported by
previous investigators.43,44 Rhee et al44 found that general
practitioners practicing in specialty areas provide lower quality
of care compared with specialists, and that when specialists
practice outside their specialty areas, the relative quality of
their performance declines. Harrold et al43 found that spe-
cialists are more knowledgeable with regard to care of diseases
pertaining to their specialty and more current with the guide-
lines, and also are more willing to adopt newer technologies.
More specific to care provided in the ED, Weaver et al45
found that the quality of care provided to patients with acute
myocardial infarction improved after the introduction of
emergency medicine–trained physicians. Numerous studies
have indicated that children receive higher-quality specialty
care from pediatric specialists.15,17,46-48 Furthermore, a study
by Kronick12 found that significantly more newborns referred
by nonpediatricians required further interventions compared
with those referred by pediatricians.
Our finding that GEM-trained physicians practicing in
an academic UCH ED provide similar quality of care as
PEM-trained physicians practicing in the same setting may
be due to several factors. Physicians of both types may be
equally prepared to care for children in the highest acuity
category. Alternatively, such factors as the availability of pe-
diatric nursing, appropriate pediatric equipment, and/or sub-
specialty and other resources at the UCH ED may primarily
account for any differences in quality of care.8,9 Another
factor could be the similar standards and protocols followed
by both PEM-trained and GEM-trained physicians working
alongside one another in the same setting. This finding is in
accordance with that of a previous study demonstrating that
the average length of stay for patients treated by a particular
physician practicing at more than 1 hospital was more depen-
dent on the hospital setting than on the particular physician.49
The lower quality of care scores among all physicians in the
RNCH EDs compared with those in the UCH ED could
have several possible explanations, including the limited re-
sources available at the hospital setting in which the physi-
cians provided care, differences in charting and documenta-
tion between the settings, or differences in knowledge base
among the physicians practicing in the different settings.
Such factors as the lack of pediatric subspecialty physicians,
pediatric specialized equipment, or specialized pediatric sup-
port (such as pediatric nurses and pharmacists) can influence
how physicians are able to practice.49,50 The vast majority of
Table III. Multivariate modeling of physician/patient characteristics associated with implicit quality of care among consecutive high-acuity patients treated at 5 EDs in northern California (n = 304 patients). Coefficients (95% CI):
Model 1: RNCH (vs UCH), −3.23 (−4.48 to −1.98).
Model 2 (reference, PEM): GEM, −0.54 (−2.59 to 1.51); FM, −3.44 (−5.56 to −1.32); other, −2.83 (−5.13 to −0.53).
Model 3 (reference, PEM in UCH): GEM in UCH, 0.11 (−1.92 to 2.14); GEM in RNCH, −2.75 (−5.44 to −0.05); FM, −3.34 (−5.40 to −1.27); other, −3.12 (−5.25 to −0.99).
Years since graduation: approximately −0.02 in all 3 models (all CIs crossing 0; P = .62 to .63).
Age in years: 0.12 (0.04 to 0.19), 0.12 (0.04 to 0.19), and 0.12 (0.05 to 0.19) in models 1, 2, and 3, respectively.
Log PRISA II score: 0.13 (−0.06 to 0.32), 0.11 (−0.08 to 0.30), and 0.12 (−0.07 to 0.31); not significant in any model.
Remaining adjustment covariates were not significant in any model (P = .53 to .91).
children cared for in the ED setting receive their care in
non-children’s centers,8 although not necessarily at rural hospitals. Improving the provision of pediatric resources available
to these rural hospital EDs may result in improved quality of
care. Previous investigators have found that such organiza-
tional factors are associated with quality of care in both
adult11and pediatric patients.15,18,47
Our study has several strengths. We investigated several
factors that could influence quality of care, including hospital
setting and physician training, and adjusted for both of these
as well as for case mix and physician experience. This study
evaluated differences in the quality of care provided to chil-
dren in the ED by PEM-trained physicians, GEM-trained
physicians, and physicians with other types of training. In
addition, the quality of care tool that we used assesses the
entire spectrum of the physician–patient interaction and in-
cludes factors such as the diagnostic workup, treatment plan
and interventions, and the discharge plan (Figure). Single
outcome measures, such as appropriateness of admission or
return visits, may not adequately represent the overall quality of
care provided in the ED. Furthermore, the use of an implicit
review instrument allows for quality assessments to be based on
the most current practice guidelines for the broad spectrum of
age, physiology, and disease processes seen in the ED. Finally,
we used robust statistical methods, including hierarchical ran-
dom-intercept modeling to account for the correlation in the
care among patients when they are seen by the same physician.
Our study has several limitations, however. Although
we used a validated quality of care instrument, our measures
of quality were based on retrospective chart reviews and were
limited by the level of documentation in the medical records.
GEM-trained and PEM-trained physicians may document
their data gathering and decision making processes and fol-
low-up plans more thoroughly than physicians with other
training, especially in the UCH setting. Although our PEM-
trained quality reviewers were not asked to evaluate medical
records from the hospital at which they practiced, they may
have viewed PEM and GEM care more favorably because of
their shared culture of training and experience. Furthermore,
because the PEM-trained physicians were all located at 1 ED
(the UCH), we cannot attribute the higher quality of care
provided by PEM physicians entirely to their training, be-
cause the GEM-trained physicians practicing in the same ED
provided similar quality of care. We cannot speculate as to the
quality of care that would be provided by PEM-trained phy-
sicians working in RNCH EDs, or whether there could be
other differences in outcomes such as mortality, morbidity, or
medication errors. In addition, because this study included
only 1 UCH and 4 RNCHs, which have an existing relation-
ship with the participating UCH, our results may not be
representative of other EDs and physicians nationwide, par-
ticularly large, urban nonacademic EDs. The RNCHs in-
cluded in this study treat a relatively small annual volume of
patients, particularly children, and thus may not be represen-
tative of other EDs, including medium and large community
nonpediatric centers. Finally, although we found statistically
significant differences in quality using an instrument to eval-
uate the quality of several processes of care, whether these
differences represent important or clinically significant differ-
ences in quality of care, satisfaction with care, and resulting
outcomes is unknown. The differences in quality of care that
we found, ranging from 0.11 to 3.44, can be interpreted in the
following context: a 5-point difference among scores would
result if, on average, charts were scored 1 full point higher in
each of the 5 measured domains of care.
Our findings of differences in quality of care associ-
ated with hospital type and physician training reinforce the
importance of addressing disparities in quality of emer-
gency care between urban and rural hospitals caring for
acutely ill and injured children. These findings should be
corroborated by applying similar methods to a larger num-
ber of hospitals across multiple states or regions, linking
our implicit process measures with explicit measures to
identify specific opportunities for quality improvement. As
pointed out in recent reports from the Institute of Medi-
cine and the Centers for Disease Control and Prevention,8,9
improving the quality of care provided to children in rural EDs
may require improvements in pediatric resources, operational
structures, and staffing.
The Pediatric Emergency Care Applied Research Network (PECARN): rationale, development, and first steps. Acad Emerg Med 2003;10:661-8.
Athey J, Dean JM, Ball J, Wiebe R, Melese-d’Hospital I. Ability of hospitals to
care for pediatric emergency patients. Pediatr Emerg Care 2001;17:170-4.
Durch JS, Lohr KN. From the Institute of Medicine. JAMA 1993;270:929.
Esposito TJ, Sanddal ND, Dean JM, Hansen JD, Reynolds SA, Battan K.
Analysis of preventable pediatric trauma deaths and inappropriate trauma care in
Montana. J Trauma 1999;47:243-53.
Gausche M, Seidel JS, Henderson DP, Ness B, Ward PM, Wayland BW, et al.
Pediatric deaths and emergency medical services (EMS) in urban and rural areas. Pediatr
Emerg Care 1989;5:158-62.
Seidel JS, Henderson DP, Ward P, Wayland BW, Ness B. Pediatric prehospital
care in urban and rural areas. Pediatrics 1991;88:681-90.
Seidel JS, Hornbein M, Yoshiyama K, Kuznets D, Finklestein JZ, St Geme JW Jr.
Emergency medical services and the pediatric patient: are the needs being met? Pediatrics 1984;73:769-72.
Institute of Medicine (US) Committee on the Future of Emergency Care in the
United States Health System. Emergency Care for Children: Growing Pains. Wash-
ington, DC: National Academies Press; 2007.
Middleton KR, Burt CW. Availability of Pediatric Services and Equipment in Emer-
gency Departments: United States, 2002–03. Hyattsville, MD: National Center for
Health Statistics; 2006.
Gausche M, Rutherford M, Lewis RL. Emergency department quality assurance/
improvement practices for the pediatric patient. Ann Emerg Med 1995;25:804-8.
Keeler EB, Rubenstein LV, Kahn KL, Draper D, Harrison ER, McGinty MJ,
et al. Hospital characteristics and quality of care. JAMA 1992;268:1709-14.
Kronick JB, Frewen TC, Kissoon N, Lee R, Sommerauer JF, Reid WD, et al.
Influence of referring physicians on interventions by a pediatric and neonatal critical care
transport team. Pediatr Emerg Care 1996;12:73-7.
Lawrence LL, Brannen SJ. The impact of physician training on child maltreat-
ment reporting: a multi-specialty study. Mil Med 2000;165:607-11.
Phibbs CS, Bronstein JM, Buxton E, Phibbs RH. The effects of patient volume
and level of care at the hospital of birth on neonatal mortality. JAMA 1996;276:1054-9.
Pollack MM, Alexander SR, Clarke N, Ruttimann UE, Tesselaar HM, Bachulis
AC. Improved outcomes from tertiary center pediatric intensive care: a statewide
comparison of tertiary and nontertiary care facilities. Crit Care Med 1991;19:150-9.
Tilford JM, Roberson PK, Lensing S, Fiser DH. Improvement in pediatric critical
care outcomes. Crit Care Med 2000;28:601-3.
Tilford JM, Simpson PM, Green JW, Lensing S, Fiser DH. Volume-outcome
relationships in pediatric intensive care units. Pediatrics 2000;106:289-94.
Hampers LC, Trainor JL, Listernick R, Eddy JJ, Thompson DA, Sloan EP, et al.
Setting-based practice variation in the management of simple febrile seizure. Acad
Emerg Med 2000;7:21-7.
Chamberlain JM, Patel KM, Pollack MM. The Pediatric Risk of Hospital Admission score: a second-generation severity-of-illness score for pediatric emergency patients. Pediatrics 2005;115:388-95.
Chamberlain JM, Patel KM, Pollack MM, Brayer A, Macias CG, Okada P, et al. Recalibration of the pediatric risk of admission score using a multi-institutional sample. Ann Emerg Med 2004;43:461-8.
Gorelick MH, Lee C, Cronan K, Kost S, Palmer K. Pediatric emergency assessment tool (PEAT): a risk-adjustment measure for pediatric emergency patients. Acad Emerg Med 2001;8:156-62.
Hendricks C. The PEAT scale: an EMS tool. Emerg Med Serv 2004;33:47-50.
Dans PE, Weiner JP, Otter SE. Peer review organizations: promises and potential pitfalls. N Engl J Med 1985;313:1131-7.
Donabedian A. The quality of care: how can it be assessed? JAMA 1988;260:1743-8.
Goldman RL. The reliability of peer assessments of quality of care. JAMA
Goldman RL. The reliability of peer assessments: a meta-analysis. Eval Health
Hulka BS, Romm FJ, Parkerson GR Jr, Russell IT, Clapp NE, Johnson FS. Peer review in ambulatory care: use of explicit criteria and implicit judgments. Med Care
Kahn KL, Rogers WH, Rubenstein LV, Sherwood MJ, Reinisch EJ, Keeler EB, et al. Measuring quality of care with explicit process criteria before and after implementation of the DRG-based prospective payment system. JAMA 1990;264:1969-73.
Rubenstein LV, Kahn KL, Reinisch EJ, Sherwood MJ, Rogers WH, Kamberg C, et al. Changes in quality of care for five diseases measured by implicit review, 1981 to 1986. JAMA 1990;264:1974-9.
Rubin HR, Rogers WH, Kahn KL, Rubenstein LV, Brook RH. Watching the doctor-watchers: how well do peer review organization methods detect hospital care quality problems? JAMA 1992;267:2349-54.
Smith MA, Atherly AJ, Kane RL, Pacala JT. Peer review of the quality of care: reliability and sources of variability for outcome and process assessments. JAMA
Caplan RA, Posner KL, Cheney FW. Effect of outcome on physician judgments of appropriateness of care. JAMA 1991;265:1957-60.
Hayward RA, McMahon LF Jr, Bernard AM. Evaluating the care of general medicine inpatients: how good is implicit review? Ann Intern Med 1993;118:550-6.
Dharmar M, Marcin JP, Kuppermann N, Andrada ER, Cole SL, Harvey DJ, et al. A new implicit review instrument for measuring quality of care delivered to pediatric patients in the emergency department. BMC Emerg Med 2007;7:13.
Office of Statewide Health Planning and Development. Available from: http://www.ruralhealth.oshpd.state.ca.us/faq.htm. Accessed June 2005.
Centers for Medicare & Medicaid Services. Available from: http://www.cms.hhs.gov/data/download/default.asp. Accessed May 2005.
Health Resources and Services Administration. Available from: http://bhpr.hrsa.gov/shortage/. Accessed February 2005.
Holdsworth MT, Fichtl RE, Behta M, Raisch DW, Mendez-Rico E, Adams A, et al. Incidence and impact of adverse drug events in pediatric inpatients. Arch Pediatr Adolesc Med 2003;157:60-5.
California Office of Statewide Health Planning and Development. Available from: http://www.medbd.ca.gov/Choose_Doctor.htm. Accessed March 2006.
Shrout P, Fleiss J. Intraclass correlations: uses in assessing rater reliability. Psychol Bull 1979;86:420-8.
Hox JJ. Applied Multilevel Analysis. 2nd edition. Amsterdam: TT Publikaties; 1995.
Rabe-Hesketh S. Multilevel and Longitudinal Modeling Using Stata. College Station, TX: Stata Press; 2005.
Harrold LR, Field TS, Gurwitz JH. Knowledge, patterns of care, and outcomes of care for generalists and specialists. J Gen Intern Med 1999;14:499-511.
Rhee S, Luke RD, Lyons TF, Payne BC. Domain of practice and the quality of physician performance. Med Care 1981;19:14-23.
Weaver CS, Avery SJ, Brizendine EJ, McGrath RB. Impact of emergency medicine faculty on door to thrombolytic time. J Emerg Med 2004;26:279-83.
Goh AY, Lum LC, Abdel-Latif ME. Impact of 24-hour critical care physician staffing on case-mix adjusted mortality in paediatric intensive care. Lancet
Potoka DA, Schall LC, Gardner MJ, Stafford PW, Peitzman AB, Ford HR. Impact of pediatric trauma centers on mortality in a statewide system. J Trauma
Diette GB, Skinner EA, Nguyen TTH, Markson L, Clark BD, Wu AW. Comparison of quality of care by specialist and generalist physicians as usual source of asthma care for children. Pediatrics 2001;108:432-7.
de Jong JD, Westert GP, Lagoe R, Groenewegen PP. Variation in hospital length of stay: do physicians adapt their length of stay decisions to what is usual in the hospital where they work? Health Serv Res 2006;41:374-94.
Weiss KB. Measuring success in the treatment of children in the emergency department setting: process versus outcomes? Ambul Pediatr 2002;2:301-5.
Quality of Care of Children in the Emergency Department: Association with Hospital Setting and Physician Training 789
Figure. Pediatric ED quality assessment scale.
Table II. Univariate associations between physician/patient characteristics and implicit quality of care among consecutive high-acuity patients treated at 5 EDs in northern California (n = 304 patients).
[Table values not recoverable from extraction. Columns were: Variable; Mean quality of care score; Beta coefficient (95% CI). Row variables included physician training and location (e.g., GEM in UCH, GEM in RNCH), years since graduation, patient age in years, log PRISA II score, and arrival by ambulance or helicopter.]