International Journal of Healthcare Management
ISSN: 2047-9700 (Print) 2047-9719 (Online) Journal homepage: https://www.tandfonline.com/loi/yjhm20
A predictive model for decreasing clinical no-show
rates in a primary care setting
M. Usman Ahmad, Angie Zhang & Rahul Mhaskar
To cite this article: M. Usman Ahmad, Angie Zhang & Rahul Mhaskar (2019): A predictive model
for decreasing clinical no-show rates in a primary care setting, International Journal of Healthcare
Management, DOI: 10.1080/20479700.2019.1698864
To link to this article: https://doi.org/10.1080/20479700.2019.1698864
Published online: 09 Dec 2019.
A predictive model for decreasing clinical no-show rates in a primary care setting

M. Usman Ahmad, Angie Zhang and Rahul Mhaskar

Medical Education, University of South Florida (USF) Morsani College of Medicine (MCOM), Tampa, FL, USA; Internal Medicine, USF MCOM, Tampa, FL, USA
A challenging obstacle to primary health care in the United States (US) is patient no-shows or
missed appointments. The no-show rate can vary from 5.5% to 50%. A no-show may contribute
to increased health risks, poor continuity of care, and loss of revenue. In this study, we develop
and test a predictive model of patient visits. Retrospective regression analyzed patient visits in
2014 and 2015. Independent variables were month, day, age, gender, race, ethnicity, insurance
type, visit type, and the number of previous no-shows. A threshold for classifying no-shows
was determined. The model was prospectively tested on patient visits in 2016. Significant
variables included age, visit type, insurance, and number of previous no-shows. The model
performed at 47% sensitivity and 79% specificity. The receiver operating characteristic (ROC)
area under curve (AUC) was 0.72 (95% CI, 0.69–0.76) for the model and 0.70 (95% CI, 0.65–0.74)
for prospective analysis. Simulated overbooking with the model resulted in 3.67 vs. 6.87
unused appointments, P < 0.001 (mean difference 3.2; 95% CI, 2.9–3.5). It is feasible to develop and
implement a predictive model for single physician practices, and implementation may
improve practice efficiency.
Received 11 April 2019; accepted 29 October 2019.

Keywords: appointments and schedules; regression analysis; machine learning; family practice
One of the most challenging obstacles to primary
health care delivery in the United States (US) is patient
no-shows or missed appointments. In 2011, the majority of US physicians and surgeons practiced in groups of fewer than 50 members, with approximately 42% in groups of fewer than 10 members. A
no-show bears potential consequences: increased health risks for the patient, poor continuity of care, and loss of multiple streams of revenue originating from decreased efficacy and capacity to provide services. In one study, the revenue lost to a no-show rate of 5.5% was equivalent to the salary of three nursing staff. This can be especially economically devastating for small group practices.
The no-show rate in outpatient medical settings can
vary from 5.5% to 50% in multiple studies [2–7]. In
US primary care practices, Lasser et al. showed a range of no-show rates from 6% to 21.5% across 16 different primary care practice settings in New England. Thus, individual practice settings are an important contributor to the rate and type of patient no-shows.
In previous literature, reasons outlined for patient no-shows included race, age, insurance type, income, psychiatric co-morbidities, prior attendance, and appointment lead time [3–5,8–10]. However, there is a lack of consensus amongst studies on these variables and their effect on patient no-shows. In a recent systematic review, significant variables which may affect no-shows with moderate concordance (>70% of published studies) include lead time, prior no-show history, medical history, number of previously scheduled visits, use of medication, telephone number recorded, residence, and year of appointment. Variables which may not affect no-shows with moderate concordance (>70% of published studies) include gender, educational level, characteristics of clinic, weather, and religion. Other variables had unclear agreement amongst studies (significant or nonsignificant in 30–70% of published studies). Patient-reported reasons for missing an appointment included forgetting, miscommunication, transportation problems, and time constraints.
In order to reduce the rate of clinical no-shows or missed appointments, various strategies have been proposed. A systematic review from 2012 compared the utility of telephone, mail, text message, e-mail, and open-access scheduling to reduce no-shows. Of the options, text messaging was cost-effective and provided a net financial benefit but had limited applicability. Other research has studied methods of reducing no-shows, including: staff telephone reminders, automated telephone reminders, text messages [13,14], exit interviews, missed appointment fees, overbooking [17–19], predictive modeling [20–25], and predictive modeling with overbooking [26–35].
© 2019 Informa UK Limited, trading as Taylor & Francis Group
There is variation in effectiveness and disadvantages associated with each management strategy. The most effective strategy may be one closely tailored to a clinic's no-show population that minimizes lost productivity through overbooking. In previous studies, predictive modeling and overbooking have been investigated in an endoscopy suite affiliated with a large academic medical center [33–35], a Veterans Affairs Medical Center, a large mental health care center, a general pediatrics clinic affiliated with a large academic medical center [29,30], a multispecialty outpatient primary care facility affiliated with a large academic medical center, and simulation models [28,31]. Predictive modeling has also been used to forecast emergency department patient arrivals in a private hospital in Turkey. However, evidence about this topic regarding a private family medicine practice with a single physician has been lacking.
The present paper describes the development of a predictive model for patient no-shows or missed appointments in a single physician family medicine practice. It
was our aim to (a) develop a predictive model for
patient no-shows and (b) test the performance of the
model on subsequent patient visits.
The present study was approved by the Institutional Review Board (IRB). All study procedures were performed in compliance with the World Medical Association Declaration of Helsinki on Ethical Principles for Medical Research Involving Human Subjects. The study was conducted in a single physician family medicine practice, with retrospective analysis of patient data for model development in Phase 1 and prospective data collection for testing of the model in Phase 2. Phase 1 included process flow mapping, development of a model, and selecting the diagnostic threshold. Phase 2 included collection of prospective data and measuring the performance of the predictive model.
Process ﬂow map
Staff interviews were conducted in order to determine the current process by which patients move through the clinic system. The operating model for the practice was mapped and plotted using Microsoft Visio (Microsoft Inc., Redmond, WA, USA) with concepts adopted from the Lean and Lean Six Sigma methodology.
Patient visit information was collected from the electronic health record system, eClinicalWorks (eClinicalWorks, Westborough, MA, USA), from January 2014 to December 2016. Each unique patient visit was matched to a patient identification number, month, day, age, gender, race, ethnicity, insurance type, visit type, and number of previous no-shows using Microsoft Excel (Microsoft Inc., Redmond, WA, USA). Stata 13 (StataCorp LLC, College Station, TX, USA) was used to run a probit regression with the dependent variable, no-show, and the independent variables described. The result of the regression was used to generate a line equation.
Using Microsoft Excel, the output of this equation was calculated for each patient visit and plotted on a histogram as a no-show or show visit, with the frequency of an output on the y-axis and the output of the equation on the x-axis. A threshold for classifying show vs. no-show was chosen using this chart to minimize the sum of Type I and Type II error.
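The threshold choice described above (the point where the show and no-show histograms cross, minimizing the sum of Type I and Type II errors) can be sketched as a simple search over candidate cutoffs; the arrays below are toy values, not the study data.

```python
import numpy as np

def best_threshold(score, actual):
    """Pick the cutoff minimizing the sum of Type I and Type II errors,
    mirroring the histogram-intersection choice described in the text."""
    candidates = np.unique(score)
    errors = []
    for t in candidates:
        predicted = score >= t                        # classify as no-show
        type_i = np.sum(predicted & (actual == 0))    # false positives
        type_ii = np.sum(~predicted & (actual == 1))  # false negatives
        errors.append(type_i + type_ii)
    return candidates[int(np.argmin(errors))]

# Toy example: no-shows (1) tend to produce higher equation outputs than shows (0)
score = np.array([0.05, 0.08, 0.12, 0.18, 0.22, 0.30])
actual = np.array([0, 0, 0, 1, 1, 1])
print(best_threshold(score, actual))  # -> 0.18 for this toy data
```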
Predictive model performance
Patient visit information was collected from the electronic health record system, eClinicalWorks, from January 2016 to December 2016. Patient visits were matched to visit status and other variables as described previously using Microsoft Excel. Patients missing one or more demographic variables were excluded from our dataset. The predictive model was used to calculate an output for each patient visit, and the previously chosen threshold designated each as a show or no-show visit. This was compared to actual outcomes of visit status, and sensitivity and specificity were calculated for the model. Receiver operating characteristic (ROC) curves were generated for the model and the tested data, with area under curve (AUC) calculated. A simulation of predictive model use with overbooking was conducted by summarizing patient visits, no-shows, predicted no-shows, and overbooked visits for each day in 2016. A two-tailed test of significance was conducted on the number of unused appointments with and without the model.
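The Phase 2 evaluation (sensitivity and specificity at a fixed threshold, plus ROC AUC) can be sketched with plain NumPy; `score` and `actual` below are placeholder arrays standing in for the model outputs and observed visit statuses, not the study data.

```python
import numpy as np

def evaluate(score, actual, threshold):
    """Sensitivity and specificity at a fixed cutoff, plus ROC AUC computed
    as the Mann-Whitney statistic (the probability that a randomly chosen
    no-show scores above a randomly chosen show)."""
    predicted = score >= threshold
    pos, neg = actual == 1, actual == 0
    sensitivity = np.sum(predicted & pos) / np.sum(pos)
    specificity = np.sum(~predicted & neg) / np.sum(neg)
    diffs = score[pos][:, None] - score[neg][None, :]
    auc = (np.sum(diffs > 0) + 0.5 * np.sum(diffs == 0)) / diffs.size
    return sensitivity, specificity, auc

# Placeholder arrays; in the study these would be the 2016 visits
score = np.array([0.05, 0.12, 0.14, 0.21, 0.09, 0.25, 0.18, 0.11])
actual = np.array([0, 0, 1, 1, 0, 1, 0, 0])
sens, spec, auc = evaluate(score, actual, threshold=0.16)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} AUC={auc:.2f}")
```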
A process flow diagram was generated using Microsoft Visio depicting the process of a patient's movement through the clinic (Figure 1). Based on process mapping, a 'no-show' was defined as a case where a patient has a scheduled appointment, does not cancel, and does not attend. Office staff related that patients do call in and cancel appointments prior to the appointment time. These were not categorized as no-shows in the health record. Thus, the actual number of missed appointments may be understated.
Patient visit types were defined as follows: CU = Check Up, INJ = Injection, OV = Office Visit, NP = New Patient, and Physical = Physical Exam. Check Up visits were described by staff as follow-up visits from a hospital admission or maintenance visits for chronic illness. Injection visits were for immunizations. Office Visits were sick visits for existing patients. New Patient visits were for patients new to the office. Physical Exam visits were those required for sports, work, or other periodic reasons for a physical exam.
In total, 6758 patient visits were analyzed in Stata 13 using a probit regression analysis. Linear regression requires that the data have (a) a linear relationship between independent and dependent variables, (b) multivariate normal variables, and (c) no multicollinearity. The likelihood ratio chi-square was 152.25 with P < 0.001, showing good fit of the model. Under the probit regression, the variables were all transformed into binary data with a reference category. Ethnicity was dropped as a variable due to a high level of collinearity. The demographics of patients analyzed are shown in Table 1. The equation of the line was generated from the fitted coefficients.
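The dummy-coding step (each categorical variable expanded into binary indicators, with one level held out as the reference category) can be sketched with pandas; the column names are illustrative.

```python
import pandas as pd

visits = pd.DataFrame({
    "insurance": ["Medicare", "Private", "Uninsured", "Private"],
    "visit_type": ["CU", "OV", "NP", "Physical"],
})
# drop_first=True removes one level per variable as the reference category,
# preventing perfect collinearity in the regression design matrix.
# (The study's actual reference levels were Uninsured, Physical, etc.;
# pd.Categorical with an explicit level order can reproduce that choice.)
X = pd.get_dummies(visits, columns=["insurance", "visit_type"], drop_first=True)
print(list(X.columns))
```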
The output of the probit regression analysis is reported in Table 2. Variables with a significant effect on increasing the likelihood of a no-show included 18–25 years of age, 36–49 years of age, check up visits, and two previous no-show visits. It can be inferred that being uninsured also increased the likelihood of a no-show, based on the correspondingly decreased risk with Medicare and private insurance.
A histogram based on data from the predictive model was generated with frequency on the y-axis and the equation output on the x-axis (Figure 2). The no-show patients showed a bimodal distribution compared to patients who did not miss appointments. The threshold was chosen at the minimum of the intersecting lines between the two patient populations, at 0.160 (95% CI, 0.149–0.151).

Figure 1. A process flow diagram constructed with Microsoft Visio for the single physician office with typical patient flow.

Table 1. Demographics.
                                       Phase 1         Phase 2
Age, years: mean (SD)                  51.6 (18.6)     53.2 (18.9)
Sex, male: N (%)                       1744 (59.2%)    1317 (59.6%)
Race: N (%)
  White                                2677 (90.9%)    1989 (90.0%)
  Black                                165 (5.6%)      133 (6.0%)
  Hispanic                             48 (1.6%)       39 (1.8%)
  Asian                                27 (0.9%)       24 (1.1%)
  Other                                29 (1.0%)       24 (1.1%)
Ethnicity, Hispanic or Latino: N (%)   65 (2.2%)       48 (2.2%)
Insurance: N (%)
  Medicare                             601 (20.4%)     499 (22.6%)
  Private insurance                    2213 (75.1%)    1634 (74.0%)
  Uninsured                            132 (4.5%)      76 (3.4%)
Predictive model performance

For model training data, the model performed at 59% sensitivity and 70% specificity. The model predicted outcomes with 47% sensitivity and 79% specificity on new data. The ROC AUC was 0.72 (95% CI, 0.69–0.76) for the model (Figure 2) and 0.70 (95% CI, 0.65–0.74) for predicted data (Figure 2). Simulated overbooking resulted in 3.67 vs. 6.87 unused appointments, P < 0.001 (mean difference 3.2; 95% CI, 2.9–3.5). There were 43 days with 115 visits above capacity, with mean visits above capacity of 0.46 (95% CI, 0.29–0.62). Visit utilization increased from 69% to 82% using the model (Figure 2).
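The simulated-overbooking comparison can be sketched as a paired analysis over synthetic clinic days: without the model, every no-show wastes a slot; with the model, one extra appointment is booked per predicted no-show. All counts and rates below are assumptions for illustration, not the study data, and the paired t statistic is computed directly with NumPy rather than via the authors' Stata workflow.

```python
import numpy as np

rng = np.random.default_rng(1)
days, capacity = 250, 30

# Synthetic per-day appointment counts (illustrative, not the clinic's data)
booked = rng.integers(24, 31, days)            # appointments scheduled
no_shows = rng.binomial(booked, 0.15)          # assumed ~15% no-show rate
predicted = np.clip(no_shows + rng.integers(-2, 3, days), 0, None)  # imperfect model

# Without the model: unused slots = open capacity + slots lost to no-shows
unused_plain = (capacity - booked) + no_shows
# With the model: add one overbooked slot per predicted no-show
arrivals = (booked - no_shows) + predicted
unused_model = np.maximum(capacity - arrivals, 0)
above_capacity = np.maximum(arrivals - capacity, 0)  # overbooking's downside

# Paired two-tailed t statistic on unused appointments, as in the paper
d = unused_plain - unused_model
t = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
print(unused_plain.mean(), unused_model.mean(), above_capacity.mean(), t)
```

Tracking `above_capacity` alongside the unused-slot counts mirrors the paper's report of mean visits above capacity per day.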
Our team set out to predict patient no-shows with a tailored model, due to variability in the applicability of existing models in the literature. Age, visit type, insurance status, and two previous no-show visits were significant in our model. However, more than two no-shows were not significant. In our study population, we also found an association between check up visits for chronic illness and/or hospital admissions and patient no-shows. Our model performed with high specificity but moderate sensitivity. Simulated overbooking using our model produced increased visit utilization, with mean visits above capacity per day at less than 0.5 across the study period.

Previous research has shown moderate concordance with history of no-shows; however, data are mixed on age and insurance status [7,8]. Interestingly, our study population did not show an association between no-shows and race or gender, which conflicts with some previous research [24,25]. Race and gender effects may have been different in our study due to differences in the local environment. In a recent systematic review, many studies reported gender as nonsignificant (22.2% of studies showed significance), while race had mixed data (56.7% of studies showed significance). This is an important finding, as implicit bias in healthcare with regard to race and gender is a validated issue and may affect patient access to healthcare.
Our data also showed increased rates of no-shows for 'check up' visits, or those patients following up for chronic medical issues or hospital admissions, as in previous research. Overbooking may present clinic operations as efficient but may overlook these chronic care patients. Appropriate follow-up and referral of these patients should be considered with any implementation of predictive modeling and overbooking, to maintain quality health for the population. Ten quality factors are associated with high-quality referrals in health systems, including: safe, timely, effective, efficient, patient centered, equitable, structured referral letters, dissemination of referral guidelines, centralized computer systems, and inclusion criteria of a referred patient. These can be adapted to target and follow up these patients within the local healthcare infrastructure.
Simulated overbooking resulted in an improvement in visit utilization when compared to the clinic's current practice, similar to prior research [26,27,33,34]. However, we did not compare predictive overbooking with flat overbooking, as other studies have, to show relative improvement [29,35]. Future research may study the work effort required by staff in overbooking with a predictive model vs. flat overbooking methods in a primary care setting. Future research may also be useful in prospectively overbooking patient visits with a predictive model, rather than in simulation, in a single physician practice.

Table 2. Results of probit regression analysis.
Variable              Beta coefficient (95% CI)    Significance
Month
  January             0.02 (−0.24 to 0.29)         0.857
  February            0.04 (−0.23 to 0.31)         0.773
  March               0.01 (−0.26 to 0.27)         0.965
  April               0.07 (−0.19 to 0.32)         0.621
  May                 −0.02 (−0.3 to 0.26)         0.886
  June                −0.08 (−0.34 to 0.19)        0.581
  July                −0.19 (−0.47 to 0.1)         0.203
  August              0.01 (−0.26 to 0.28)         0.953
  September           0.08 (−0.19 to 0.35)         0.563
  October             −0.21 (−0.48 to 0.07)        0.145
  November            −0.03 (−0.31 to 0.25)        0.837
  December            0 (0 to 0)                   Reference
Day
  Monday              3.08 (−211.67 to 217.84)     0.978
  Tuesday             2.95 (−211.81 to 217.71)     0.979
  Wednesday           2.97 (−211.79 to 217.72)     0.978
  Thursday            3 (−211.76 to 217.76)        0.978
  Friday              2.74 (−212.02 to 217.5)      0.98
Age, years
  <12                 −0.19 (−0.71 to 0.34)        0.487
  13–17               0.3 (−0.08 to 0.68)          0.126
  18–25               0.31 (0.05 to 0.57)          0.019*
  26–35               0.01 (−0.24 to 0.26)         0.95
  36–49               0.26 (0.06 to 0.45)          0.012*
  50–64               −0.01 (−0.19 to 0.17)        0.915
  >65                 0 (0 to 0)                   Reference
Gender
  Female              −0.01 (−0.12 to 0.1)         0.893
  Male                0 (0 to 0)                   Reference
  No Gender Selected  0 (0 to 0)                   Reference
Race
  White               −0.48 (−0.97 to 0.02)        0.058
  Black               0.04 (−0.47 to 0.55)         0.873
  Asian               −0.56 (−1.35 to 0.23)        0.162
  Hispanic            −0.17 (−0.74 to 0.41)        0.568
Insurance
  Medicare            −0.3 (−0.6 to −0.01)         0.046*
  Private Insurance   −0.27 (−0.51 to −0.03)       0.030*
  Uninsured           0 (0 to 0)                   Reference
Visit type
  CU                  0.39 (0.23 to 0.56)          0.000*
  INJ                 0.11 (−0.19 to 0.4)          0.489
  OV                  0.07 (−0.08 to 0.21)         0.382
  NP                  −0.17 (−0.38 to 0.04)        0.110
  Physical            0 (0 to 0)                   Reference
Previous no-shows
  None                0.3 (−0.26 to 0.85)          0.299
  Once                0.44 (−0.13 to 1.02)         0.131
  Twice               0.94 (0.3 to 1.58)           0.004*
  Three or More       0 (0 to 0)                   Reference
Note: CU = Check Up, INJ = Injection, OV = Office Visit, NP = New Patient, Physical = Physical Exam; * = significant at P < 0.05.
While developing a threshold for our predictive model, a bimodal distribution was found for those patients who had no-show visits. Thus, there may be masked variables that could further improve the sensitivity of our predictive model. Further research should focus on additional variables which may not have been analyzed in our model, based on the availability of information in retrospective data analysis for model development, but which have been shown to be significant across studies in a recent systematic review.
Our methodology used the existing electronic health record and relatively inexpensive software and data analysis tools. From a practical standpoint, implementation requires consideration of infrastructure changes, human resources, managerial issues, and cost. Health information technology that captures data for decision making was found to be available in hospitals in Canada in urban locations and with larger size; however, teaching status and staff size were not significant. Thus, existing infrastructure may not be limited to large, academic practices. Training and skill development may help improve staff acceptance of data-driven tools, while older and longer-term staff may require more development. The
cost of data-keeping technology in US healthcare delivery has remained relatively stable from 2006 to 2016; however, 81% of patients believed it improved service delivery. Thus, fixed and variable costs have the potential to remain flat, while patient awareness of the technology in use may improve patient satisfaction.

Figure 2. Results and performance of the predictive model and simulated overbooking. (a) A histogram plotted using Microsoft Excel with the output of the regression equation for 6758 patient visits from 2014 to 2015. Visit status by show and no-show is separated to highlight distributional differences. A threshold of 0.160 (95% CI, 0.149–0.151) was chosen to classify a visit as a show or no-show for model deployment. (b) In total, 3571 patient visits in 2016 were analyzed as 251 visit days. One patient day with one visit was excluded as an outlier due to holiday scheduling. The remaining visits, no-shows, and overbooked appointments with the predictive model were compared to maximum capacity for each visit day. Visit days were converted into visit weeks, and maximum capacity, total visits, and visits with the model are displayed. Holiday weeks 23, 28, 29, 37, 48, and 53 were excluded from this image to simplify visualization of maximum capacity. Simulated predictive overbooking resulted in 3.67 vs. 6.87 unused appointments, P < 0.001 (mean difference 3.2; 95% CI, 2.9–3.5). Visit utilization increased from 69% with normal scheduling to 82% with predictive overbooking. (c) The threshold value of 0.16 is marked on the ROC for the training data in Microsoft Excel. The AUC is 0.72 (95% CI, 0.69–0.76). (d) The threshold value of 0.16 is marked on the ROC for the predicted data. The AUC is 0.70 (95% CI, 0.65–0.74) in Microsoft Excel.
This project was conducted in a single physician family medicine practice and may not be broadly applicable to other clinical settings. Although prospective in data collection, the model was tested in simulation to avoid the Hawthorne effect, and not as an intervention in clinical practice. Other variables which may be significant predictors may not have been included in our model. These factors limit the findings of our study.
Prediction of no-shows or missed appointments depends on practice-specific issues. It is possible to develop a predictive model to prospectively predict no-shows for clinics as small as single physician practices, with 47% sensitivity and 79% specificity. Overbooking using predictive modeling may improve a clinic's ability to fill unused appointment slots.
Implications for healthcare management
Predictive modeling with overbooking is a cost-effective and reliable method to decrease the rate of underutilization. A central aim of implementing the model is maximizing efficiency while reducing the risk of scheduling visits above capacity, which may be an issue with other methods. Effective innovation in the context of healthcare requires a diligent evaluation of both managerial and organizational issues.
However, human resource related issues, including staff wellness to prevent burnout, should be integral to any implementation of this tool. In a systematic review, burnout for staff was related to low job control, workplace demands, low support in the workplace, high workloads, low reward or compensation, and job insecurity. In another review, Rothenberger from the Department of Surgery at the University of Minnesota had similar findings, with additional issues related to a culture of blame and conflicting values in patient care. Koussa et al. found that there were differences between high-income and low-income countries for issues related to staff wellness in the healthcare sector. Although all factors were relevant in both settings, greater value was placed on autonomy, professional work environment, and workload in high-income countries. In low-income countries, greater value was placed on infrastructure, career development, and financial incentives. Linzer et al. provide an excellent framework for managerial changes related to communication, workflow, and targeted quality improvement in a randomized trial in the primary care setting in the US that reduced burnout.
In addition, certain sub-populations that are not accessing healthcare should not be overlooked, in order to minimize potential negative effects on population-based health. Our study found that check up visits for chronic diseases and patients discharged from the hospital had a higher rate of no-shows.
For our population, there is a risk that these patients may be overlooked if the model is implemented without special consideration. In addition to improving efficiency in the clinic, follow-up may reduce the rate of hospital readmissions. The literature recommends a partnership between inpatient and outpatient settings for both surgical and medical patients being discharged from the hospital [49,50]. In addition to the 10 quality factors for high-quality referrals, an outpatient practice may benefit from direct strategies. Although systematic literature reviews have mixed results due to the quality of methods and study design, a primary care nurse-based program of calling at-risk patients within 72 h of hospital discharge may be a useful tool [51–53]. A recent high-quality trial of this intervention improved appointment attendance rates with successful phone contact vs. missed contact (60.1% vs. 38.5% attendance).
In order to successfully implement the model, improve healthcare outcomes for at-risk patients, and maintain or improve staff wellness, we recommend the following:
1. Institute managerial changes in the practice setting that include communication, workflow, and quality improvement.
2. Partner with the local health system, including inpatient settings, urgent care, emergency rooms, and other medical settings, to ensure high-quality referrals.
3. Assign a quality improvement team to the development and implementation of the model using the methods in this study.
4. Identify special populations by analyzing model data and significance.
5. Develop a phone outreach program as needed if a sub-population of patients is identified that requires additional follow-up [51–54].
6. Implement the model and apply continuous improvement at pre-defined intervals to update the algorithm as new data become available.
This work has been supported by mentorship from the faculty of the SELECT MD program at the University of South Florida Morsani College of Medicine. This work was used to fulfill the requirements of the capstone in the SELECT MD program. We are also thankful to the team at the private medical practice of Dr Michael Reilly in St. Petersburg, Florida, USA, for their support and assistance with this research.

No potential conflict of interest was reported by the authors.
Ethics: The study was approved by the Institutional Review Board (IRB). All study procedures were performed in compliance with the World Medical Association Declaration of Helsinki on Ethical Principles for Medical Research Involving Human Subjects.
Notes on contributors
M. Usman Ahmad is currently an intern in the Department
of Surgery at the University of Colorado.
Angie Zhang is currently a resident in Neurological Surgery
at the University of California Irvine.
Dr. Rahul Mhaskar is the director of the Office of Research and Associate Professor of Internal Medicine at the University of South Florida Morsani College of Medicine, and Associate Professor of Global Health in the University of South Florida College of Public Health.
M. Usman Ahmad http://orcid.org/0000-0001-9797-7106
References

1. Welch WP, Cuellar AE, Stearns SC, et al. Proportion of physicians in large group practices continued to grow in 2009–11. Health Aff. 2013;32:1659–1666.
2. Mäntyjärvi M. No-show patients in an ophthalmological out-patient department. Acta Ophthalmol.
3. Samuels RC, Ward VL, Melvin P, et al. Missed appointments: factors contributing to high no-show rates in an urban pediatrics primary care clinic. Clin Pediatr (Phila). 2015;54:976–982.
4. Kaplan-Lewis E, Percac-Lima S. No-show to primary care appointments: why patients do not come. J Prim Care Community Health. 2013;4:251–255.
5. Traeger L, O'Cleirigh C, Skeer MR, et al. Risk factors for missed HIV primary care visits among men who have sex with men. J Behav Med. 2012;35:548–556.
6. Lasser KE, Mintzer IL, Lambert A, et al. Missed appointment rates in primary care: the importance of site of care. J Health Care Poor Underserved.
7. Dantas LF, Fleck JL, Cyrino Oliveira FL, et al. No-shows in appointment scheduling – a systematic literature review. Health Policy (New York). 2018;122:412–
8. Norris JB, Kumar C, Chand S, et al. An empirical investigation into factors affecting patient cancellations and no-shows at outpatient clinics. Decis Support Syst.
9. Cashman SB, Savageau JA, Lemay CA, et al. Patient health status and appointment keeping in an urban community health center. J Health Care Poor
10. Oppenheim GL, Bergman JJ, English EC. Failed appointments: a review. J Fam Pract. 1979;8:789–796.
11. Stubbs ND, Geraci SA, Stephenson PL, et al. Methods to reduce outpatient non-attendance. Am J Med Sci.
12. Parikh A, Gupta K, Wilson AC, et al. The effectiveness of outpatient appointment reminder systems in reducing no-show rates. Am J Med. 2010;123:542–548.
13. Junod Perron N, Dao MD, Righini NC, et al. Text-messaging versus telephone reminders to reduce missed appointments in an academic primary care clinic: a randomized controlled trial. BMC Health Serv Res.
14. Chen Z, Fang L, Chen L, et al. Comparison of an SMS text messaging and phone reminder to improve attendance at a health promotion center: a randomized controlled trial. J Zhejiang Univ Sci B. 2008;9:34–38.
15. Guse CE, Richardson L, Carle M, et al. The effect of exit-interview patient education on no-show rates at a family practice residency clinic. J Am Board Fam
16. Lesaca T. Assessing the influence of a no-show fee on patient compliance at a CMHC. Adm Policy Ment
17. LaGanga LR, Lawrence SR. Clinic overbooking to improve patient access and increase provider productivity. Decis Sci. 2007;38:251–276.
18. DuMontier C, Rindfleisch K, Pruszynski J, et al. A multi-method intervention to reduce no-shows in an urban residency clinic. Fam Med. 2013;45:634–641.
19. Berg BP, Murr M, Chermak D, et al. Estimating the cost of no-shows and evaluating the effects of mitigation strategies. Med Decis Making. 2013;33:976–985.
20. Lotfi V, Torres E. Improving an outpatient clinic utilization using decision analysis-based patient scheduling. Socioecon Plann Sci. 2014;48:115–126.
21. Goffman RM, Harris SL, May JH, et al. Modeling patient no-show history and predicting future outpatient appointment behavior in the Veterans Health Administration. Mil Med. 2017;182:e1708–e1714.
22. Alaeddini A, Yang K, Reddy C, et al. A probabilistic model for predicting the probability of no-show in hospital appointments. Health Care Manag Sci.
23. Torres O, Rothberg MB, Garb J, et al. Risk factor model to predict a missed clinic appointment in an urban, academic, and underserved setting. Popul Health
24. Goldman L, Freidin R, Cook EF, et al. A multivariate approach to the prediction of no-show behavior in a primary care center. Arch Intern Med. 1982;142:563–567.
25. Cohen AD, Kaplan DM, Kraus M, et al. Nonattendance of adult otolaryngology patients for scheduled appointments. J Laryngol Otol.
26. Samorani M, LaGanga LR. Outpatient appointment scheduling given individual day-dependent no-show predictions. Eur J Oper Res. 2015;240:245–257.
27. Daggy J, Lawley M, Willis D, et al. Using no-show modeling to improve clinic performance. Health Inform J. 2010;16:246–259.
28. Muthuraman K, Lawley M. A stochastic overbooking model for outpatient clinical scheduling with no-shows. IIE Trans. 2008;40:820–837.
29. Huang Y, Hanauer DA. Patient no-show predictive model development using multiple data sources for an effective overbooking approach. Appl Clin Inform.
30. Huang Y-L, Hanauer DA. Time dependent patient no-show predictive modelling development. Int J Health Care Qual Assur. 2016;29:475–488.
31. Zeng B, Turkcan A, Lin J, et al. Clinic scheduling models with overbooking for patients with heterogeneous no-show probabilities. Ann Oper Res. 2010;178:121–144.
32. Creps J, Lotfi V. A dynamic approach for outpatient scheduling. J Med Econ. 2017;20:786–798.
33. May FP, Reid MW, Cohen S, et al. Predictive overbooking and active recruitment increases uptake of endoscopy appointments among African American patients. Gastrointest Endosc. 2017;85:700–705.
34. Reid MW, May FP, Martinez B, et al. Preventing endoscopy clinic no-shows: prospective validation of a predictive overbooking model. Am J Gastroenterol.
35. Reid MW, Cohen S, Wang H, et al. Preventing patient absenteeism: validation of a predictive overbooking model. Am J Manag Care. 2015;21:902–910.
36. Yucesan M, Gul M, Celik E. A multi-method patient arrival forecasting outline for hospital emergency departments. Int J Healthc Manag. 2018:1–13.
37. De la Lama J, Fernandez J, Punzano JA, et al. Using Six Sigma tools to improve internal processes in a hospital center through three pilot projects. Int J Healthc
38. Fitzgerald C, Hurst S. Implicit bias in healthcare professionals: a systematic review. BMC Med Ethics.
39. Zailinawati AH, Ng CJ, Nik-Sherina H. Why do patients with chronic illnesses fail to keep their appointments? A telephone interview. Asia-Pacific J Public Health. 2006;18:10–15.
40. Senitan M, Alhaiti AH, Gillespie J. Improving integrated care for chronic non-communicable diseases: a focus on quality referral factors. Int J Healthc
41. Chow CK. Factors associated with the extent of information technology use in Ontario hospitals. Int J Healthc Manag. 2013;6:18–26.
42. Walston SL, Bennett CJ, Al-Harbi A. Understanding the factors affecting employees' perceived benefits of healthcare information technology. Int J Healthc
43. Okpala P. Assessment of the influence of technology on the cost of healthcare service and patient's satisfaction. Int J Healthc Manag. 2018;11:351–355.
44. Lega F. Strategic, organisational and managerial issues related to innovation, entrepreneurship and intrapreneurship in the hospital context: remarks from the Italian experience. J Manag Mark Healthc. 2014;2:
45. Aronsson G, Theorell T, Grape T, et al. A systematic review including meta-analysis of work environment and burnout symptoms. BMC Public Health.
46. Rothenberger DA. Physician burnout and well-being: a systematic review and framework for action. Dis Colon
47. Koussa ME, Atun R, Bowser D, et al. Factors influencing physicians' choice of workplace: systematic review of drivers of attrition and policy interventions to address them. J Glob Health. 2016;6:1–13.
48. Linzer M, Poplau S, Grossman E, et al. A cluster randomized trial of interventions to improve work conditions and clinician burnout in primary care: results from the Healthy Work Place (HWP) study. J Gen Intern Med. 2015;30:1105–1111.
49. Kogon B, Woodall K, Kanter K, et al. Reducing readmissions following paediatric cardiothoracic surgery: a quality improvement initiative. Cardiol Young.
50. Tang N. A primary care physician's ideal transitions of care – where's the evidence? J Hosp Med. 2013;8:
51. Woods CE, Jones R, O'Shea E, et al. Nurse-led post-discharge telephone follow-up calls: a mixed study systematic review. J Clin Nurs. 2019;28(19–20):3386–
52. Crocker JB, Crocker JT, Greenwald JL. Telephone follow-up as a primary care intervention for post-discharge outcomes improvement: a systematic review. Am J Med. 2012;125:915–921.
53. Bahr SJ, Solverson S, Schlidt A, et al. Integrated literature review of postdischarge telephone calls. West J Nurs Res. 2014;36:84–104.
54. Tang N, Fujimoto J, Karliner L. Evaluation of a primary care-based post-discharge phone call program: keeping the primary care practice at the center of post-hospitalization care transition. J Gen Intern