International Journal of Healthcare Management
ISSN: 2047-9700 (Print) 2047-9719 (Online) Journal homepage: https://www.tandfonline.com/loi/yjhm20
A predictive model for decreasing clinical no-show
rates in a primary care setting
M. Usman Ahmad, Angie Zhang & Rahul Mhaskar
To cite this article: M. Usman Ahmad, Angie Zhang & Rahul Mhaskar (2019): A predictive model
for decreasing clinical no-show rates in a primary care setting, International Journal of Healthcare
Management, DOI: 10.1080/20479700.2019.1698864
To link to this article: https://doi.org/10.1080/20479700.2019.1698864
Published online: 09 Dec 2019.
A predictive model for decreasing clinical no-show rates in a primary care
setting
M. Usman Ahmad^a, Angie Zhang^a and Rahul Mhaskar^b
^a Medical Education, University of South Florida (USF) Morsani College of Medicine (MCOM), Tampa, FL, USA; ^b Internal Medicine, USF MCOM, Tampa, FL, USA
ABSTRACT
A challenging obstacle to primary health care in the United States (US) is patient no-shows, or missed appointments. The no-show rate can vary from 5.5% to 50%. A no-show may contribute to increased health risks, poor continuity of care, and loss of revenue. In this study, we develop and test a predictive model of patient visits. Retrospective regression analyzed patient visits in 2014 and 2015. Independent variables were month, day, age, gender, race, ethnicity, insurance type, visit type, and the number of previous no-shows. A threshold for classifying no-shows was determined. The model was prospectively tested on patient visits in 2016. Significant variables included age, visit type, insurance, and the number of previous no-shows. The model performed at 47% sensitivity and 79% specificity. The receiver operating characteristic (ROC) area under the curve (AUC) was 0.72 (95% CI, 0.69–0.76) for the model and 0.70 (95% CI, 0.65–0.74) for the prospective analysis. Simulated overbooking with the model resulted in 3.67 vs. 6.87 unused appointments, P < 0.001 (mean difference 3.2, 95% CI, 2.9–3.5). It is feasible to develop and implement a predictive model for single physician practices, and implementation may improve practice efficiency.
ARTICLE HISTORY
Received 11 April 2019; Accepted 29 October 2019

KEYWORDS
Appointments and schedules; regression analysis; machine learning; family practice; population health management
Introduction
One of the most challenging obstacles to primary
health care delivery in the United States (US) is patient
no-shows or missed appointments. In 2011 in the US, the majority of physicians and surgeons practiced in groups of fewer than 50 members, with approximately 42% in groups of fewer than 10 [1]. A no-show carries potential consequences: increased health risks for the patient, poor continuity of care, and the loss of multiple revenue streams stemming from decreased efficacy and capacity to provide services. In one study, the revenue lost from a no-show rate of 5.5% was equivalent to the salary of three nursing staff [2]. This can be especially economically devastating for small group practices.
The no-show rate in outpatient medical settings can
vary from 5.5% to 50% in multiple studies [2–7]. In
US primary care practices, Lasser et al. showed a
range of no-show rates from 6% to 21.5% across 16
different primary care practice settings in New England
[6]. Thus, individual practice settings are an important
contributor to the rate and type of patient no-shows.
In the previous literature, reasons outlined for patient no-shows included race, age, insurance type, income, psychiatric co-morbidities, prior attendance, and appointment lead time [3–5,8–10]. However, there is a lack of consensus amongst studies regarding these variables and their effect on patient no-shows. In a recent
systematic review, significant variables which may
affect no-shows with moderate concordance (>70% of
published studies) include lead time, prior no-show
history, medical history, number of previously sched-
uled visits, use of medication, telephone number
recorded, residence, and year of appointment [7]. Vari-
ables which may not affect no-shows with moderate
concordance (>70% of published studies) include gen-
der, educational level, characteristics of clinic, weather,
and religion [7]. Other variables had an unclear agree-
ment amongst studies (significant or nonsignificant in
30–70% of published studies). Patient-reported reasons
for missing an appointment included forgetting, miscommunication, transportation problems, and time off work [3,4].
In order to reduce the rate of clinical no-shows or
missed appointments, various strategies have been pro-
posed. A systematic review from 2012 compared the
utility of telephone, mail, text message, e-mail, and
open-access scheduling to reduce no-shows. Of the
options, text messaging was cost-effective and provided
a net financial benefit but had limited applicability [11].
Other research has studied methods of reducing no-
shows, including: staff telephone reminders [12], auto-
mated telephone reminders [13], text messages [13,14],
exit interviews [15], missed appointment fees [16],
overbooking [17–19], predictive modeling [20–25],
and predictive modeling with overbooking [26–35].
© 2019 Informa UK Limited, trading as Taylor & Francis Group
CONTACT M. Usman Ahmad musman.ahmad@cuanschutz.edu Medical Education, University of South Florida (USF) Morsani College of Medicine
(MCOM), Tampa, FL, USA
There is variation in effectiveness and disadvantages
associated with each management strategy. The most
effective strategy may be closely tailored to a clinic’s
no-show population and minimize lost productivity
by overbooking. In previous studies, predictive model-
ing and overbooking have been investigated in the
endoscopy suite affiliated with a large academic medi-
cal center [33–35], a Veterans Affairs Medical Center
[27], a large mental health care center [26], a general
pediatrics clinic affiliated with a large academic medical
center [29,30], a multispecialty outpatient primary care
facility affiliated with a large academic medical center
[32], and simulation models [28,31]. Predictive model-
ing has been used to forecast emergency department
patient arrivals in a private hospital in Turkey [36].
However, evidence about this topic regarding a private
family medicine practice with a single physician has
been lacking.
Objectives
The present paper describes the development of a predictive model for patient no-shows or missed appointments in a single physician family medicine practice. It
was our aim to (a) develop a predictive model for
patient no-shows and (b) test the performance of the
model on subsequent patient visits.
Methods
The present study was approved by the Institutional
Review Board (IRB). All study procedures were per-
formed in compliance with the World Medical Associ-
ation Declaration of Helsinki on Ethical Principles for
Medical Research Involving Human Subjects. The
study was conducted in a single physician family medi-
cine practice with retrospective analysis of patient data
for model development in Phase 1 and prospective data
collection for testing of the model in Phase 2. Phase 1
included process flow mapping, development of a
model, and selecting the diagnostic threshold. Phase
2 included collection of prospective data and measur-
ing the performance of the predictive model.
Process flow map
Staff interviews were conducted in order to determine
the current process by which patients move through
the clinic system. The operating model for the practice
was mapped and plotted using Microsoft Visio (Micro-
soft Inc., Redmond, WA, USA) with concepts adopted
from the Lean and Lean Six Sigma methodology [37].
Predictive model
Patient visit information was collected from the elec-
tronic health record system, eClinicalWorks
(eClinicalWorks, Westborough, MA, USA), from Janu-
ary 2014 to December 2016. Each unique patient visit
was matched to a patient identification number, month, day, age, gender, race, ethnicity, insurance type, visit type, and the number of previous no-shows using Microsoft Excel (Microsoft Inc., Redmond, WA, USA). Stata 13 (StataCorp LLC, College Station, TX, USA) was used to run a probit regression with no-show as the dependent variable and the independent variables described above. The result of the regression was used to generate a line equation.
Diagnostic threshold
Using Microsoft Excel, the output of this equation was calculated for each patient visit and plotted on a histogram as a no-show or show visit, with the frequency of an output on the y-axis and the output of the equation on the x-axis. A threshold for classifying show vs. no-show was chosen using this chart to minimize the sum of Type I and Type II error.
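The threshold selection described above amounts to sweeping candidate cutoffs over the score range and keeping the one that minimizes the combined count of false positives (Type I) and false negatives (Type II). A minimal sketch, assuming hypothetical scores and labels (the authors did this graphically in Excel):

```python
import numpy as np

def choose_threshold(scores, no_show, candidates=None):
    """Pick the cutoff minimizing Type I + Type II error counts.

    scores  -- model output per visit (higher = more no-show-like)
    no_show -- 1 if the visit was a no-show, else 0
    """
    scores = np.asarray(scores, dtype=float)
    no_show = np.asarray(no_show, dtype=int)
    if candidates is None:
        candidates = np.unique(scores)
    best_t, best_err = None, np.inf
    for t in candidates:
        pred = scores >= t                     # predicted no-show
        type1 = np.sum(pred & (no_show == 0))  # false alarms
        type2 = np.sum(~pred & (no_show == 1)) # missed no-shows
        if type1 + type2 < best_err:
            best_t, best_err = t, type1 + type2
    return best_t

# Toy example: no-shows score higher on average, as in the paper's histogram
rng = np.random.default_rng(1)
y = rng.random(500) < 0.2
s = np.where(y, rng.normal(0.3, 0.1, 500), rng.normal(0.12, 0.05, 500))
t = choose_threshold(s, y.astype(int))
print(round(float(t), 3))
```

Sweeping every unique score is equivalent to reading off the intersection of the two histogram populations when the class distributions overlap in one region.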
Predictive model performance
Patient visit information was collected from the elec-
tronic health record system, eClinicalWorks, from Jan-
uary 2016 to December 2016. Patient visits were
matched to visit status and other variables as described
previously using Microsoft Excel. Patients missing one or more demographic variables were excluded from our dataset. The predictive model was used to calculate an output for each patient visit, which was designated a show or no-show visit using the previously chosen threshold. This was compared to actual visit status outcomes, and sensitivity and specificity were calculated for the model. Receiver operating characteristic (ROC) curves were generated for the model and the tested data, with the area under the curve (AUC) calculated. A simulation of predictive model use with overbooking was conducted by summarizing patient visits, no-shows, predicted no-shows, and overbooked visits for each day in 2016. A two-tailed test of significance was conducted on the number of unused appointments with and without the model.
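The sensitivity, specificity, and AUC reported in this study follow from comparing thresholded scores against actual visit status. A sketch of that computation on illustrative data (not the study's 2016 cohort):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

def evaluate(scores, no_show, threshold):
    """Sensitivity, specificity, and ROC AUC for a scored visit list."""
    pred = (np.asarray(scores) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(no_show, pred).ravel()
    sensitivity = tp / (tp + fn)   # no-shows correctly flagged
    specificity = tn / (tn + fp)   # attended visits correctly left alone
    auc = roc_auc_score(no_show, scores)
    return sensitivity, specificity, auc

# Toy data: scores separate the classes imperfectly, as in practice
rng = np.random.default_rng(2)
y = (rng.random(1000) < 0.2).astype(int)
s = np.clip(np.where(y, rng.normal(0.3, 0.12, 1000),
                        rng.normal(0.12, 0.06, 1000)), 0, 1)
sens, spec, auc = evaluate(s, y, threshold=0.160)
print(f"sens={sens:.2f} spec={spec:.2f} AUC={auc:.2f}")
```

The 0.160 cutoff here mirrors the paper's chosen threshold; on this invented data the exact figures will differ from the study's.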
Results
Process flow
A process flow diagram was generated using Microsoft
Visio depicting the process of a patient’s movement
through the clinic (Figure 1). Based on process map-
ping, a ‘no-show’ was defined as a case in which a patient has a scheduled appointment, does not cancel, and does not attend. Office staff related that patients do call in and cancel appointments prior to the appointment time. These were not categorized as no-shows in the health record. Thus, the actual number of missed appointments may be understated.
Patient visit types were defined as follows: CU = Check Up, INJ = Injection, OV = Office Visit, NP = New Patient, and Physical = Physical Exam. Check Up visits were described by staff as follow-up visits from a hospital admission or maintenance visits for chronic illness. Injection visits were for immunizations. Office
Visits were sick visits for existing patients. New
patients were patients new to the office. Physical
Exam visits were those required for sports, work, or
other periodic reasons for a physical exam.
Predictive model
In total, 6758 patient visits were analyzed by Stata 13
using a probit regression analysis. Linear regression requires (a) a linear relationship between independent and dependent variables, (b) multivariate normality of the variables, and (c) no multicollinearity. The likelihood ratio chi-square was 152.25 with a P-value < 0.001, indicating good fit of the model. Under the probit regression, the variables were all transformed into binary data with a reference category. Ethnicity was dropped as a variable due to a high level of collinearity. The demographics of the patients analyzed are shown in Table 1. The equation of the line was as follows:
ln(r / (1 − r)) = −4.47 + β_month + β_day + β_gender + β_race + β_insurance + β_visittype + β_previousnoshow   (1)
The output of the probit regression analysis is reported
in Table 2. Variables with a significant effect on
increasing the likelihood of a no-show included 18–25 years of age, 36–49 years of age, check up visits, and two previous no-show visits. It can be inferred that being uninsured also increased the likelihood of a no-show, given the similarly decreased risk associated with Medicare and private insurance.
Diagnostic threshold
A histogram based on data from the predictive model was generated with frequency on the y-axis and the equation output on the x-axis (Figure 2). The no-show patients showed a bimodal distribution compared to patients who did not miss appointments.
The threshold was chosen at the minimum of the intersecting lines between the two patient populations, at 0.160 (95% CI, 0.149–0.151).

Figure 1. A process flow diagram constructed with Microsoft Visio for the single physician office with typical patient flow.

Table 1. Demographics.
Characteristics                        Phase 1 (n=2946)   Phase 2 (n=2209)
Age, years: mean (SD)                  51.6 (18.6)        53.2 (18.9)
Sex, male: N (%)                       1744 (59.2%)       1317 (59.6%)
Race, N (%)
  White                                2677 (90.9%)       1989 (90.0%)
  Black                                165 (5.6%)         133 (6.0%)
  Hispanic                             48 (1.6%)          39 (1.8%)
  Asian                                27 (0.9%)          24 (1.1%)
  Other                                29 (1.0%)          24 (1.1%)
Ethnicity, Hispanic or Latino: N (%)   65 (2.2%)          48 (2.2%)
Insurance, N (%)
  Medicare                             601 (20.4%)        499 (22.6%)
  Private insurance                    2213 (75.1%)       1634 (74.0%)
  Uninsured                            132 (4.5%)         76 (3.4%)
Predictive model performance
On the training data, the model performed at 59% sensitivity and 70% specificity. The model predicted outcomes with 47% sensitivity and 79% specificity on new data. The ROC AUC was 0.72 (95% CI, 0.69–0.76) for the model (Figure 2) and 0.70 (95% CI, 0.65–0.74) for the predicted data (Figure 2). Simulated overbooking resulted in 3.67 vs. 6.87 unused appointments, P < 0.001 (mean difference 3.2, 95% CI, 2.9–3.5). There were 43 days with 115 visits above capacity, with a mean of 0.46 visits above capacity (95% CI, 0.29–0.62). Visit utilization increased from 69% to 82% using the model (Figure 2).
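The simulated overbooking above can be paraphrased as: for each clinic day, add one extra booking per predicted no-show, then count the slots left unused once actual no-shows occur. A simplified sketch; the daily capacity and rates below are invented numbers, not the clinic's:

```python
import numpy as np

def unused_slots(booked, no_shows, capacity, overbook=0):
    """Slots left empty on one day: capacity minus attended visits,
    where attendance is booked + overbooked minus actual no-shows."""
    attended = booked + overbook - no_shows
    return max(capacity - attended, 0)

rng = np.random.default_rng(3)
days = 251                               # clinic days in 2016, per the paper
capacity = 30                            # assumed daily capacity
booked = rng.integers(24, 31, days)
no_shows = rng.binomial(booked, 0.15)    # assumed 15% no-show rate
predicted = rng.binomial(booked, 0.12)   # model flags a subset of visits

plain = [unused_slots(b, ns, capacity) for b, ns in zip(booked, no_shows)]
model = [unused_slots(b, ns, capacity, ob)
         for b, ns, ob in zip(booked, no_shows, predicted)]
print(f"mean unused: {np.mean(plain):.2f} without vs. "
      f"{np.mean(model):.2f} with predictive overbooking")
```

Because each predicted no-show adds one booking, per-day unused slots can only fall or stay equal under this scheme; the trade-off, as the paper notes, is the occasional day with visits above capacity.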
Discussion
Our team set out to predict patient no-shows with a tailored model due to variability in the applicability of existing models in the literature [7]. Age, visit type,
insurance status, and two previous no-show visits
were significant in our model. However, more than
two no-shows were not significant. In our study popu-
lation, we also found an association between check up
visits for chronic illness and/or hospital admissions
with patient no-shows. Our model performed with
high specificity, but moderate sensitivity. Simulated
overbooking using our model produced increased
visit utilization with mean visits above capacity per
day at less than 0.5 across the study period.
Previous research has shown moderate concor-
dance with history of no-shows; however, data are
mixed on age and insurance status [7,8]. Interest-
ingly, our study population did not show an associ-
ation between no-shows and race or gender, which conflicts with some previous research [24,25]. Race and
gender effects may have been different in our study
due to differences in the local environment [6]. In
a recent systematic review many studies reported
gender as nonsignificant (22.2% of studies showed
significance), while race had mixed data (56.7% of
studies showed significance) [7]. This is an important
finding as implicit bias in healthcare with regard to
race and gender is a validated issue and may affect
patient access to healthcare [38].
Our data also showed increased rates of no-shows
for ‘check up’visits or those patients following up for
chronic medical issues or hospital admissions as in pre-
vious research [39]. Overbooking may present clinic
operations as efficient but may overlook these chronic
care patients. Appropriate follow-up and referral of
these patients should be considered with any
implementation of predictive modeling and overbook-
ing to maintain quality health for the population. Ten
quality factors are associated with high-quality referrals
in health systems including safe, timely, effective,
efficient, patient centered, equitable, structured referral
letters, dissemination of referral guidelines, centralized
computer systems, and inclusion criteria of a referred
patient which can be adapted to target and follow-up
for these patients within the local healthcare infrastruc-
ture [40].
Simulated overbooking resulted in an improvement
in visit utilization when compared to the clinic’s cur-
rent practice similar to prior research [26,27,33,34].
However, we did not compare predictive overbooking with flat overbooking, as other studies have done, to show relative improvement [29,35]. Future research may study the
work effort required by staff in overbooking with a predictive model vs. flat overbooking methods in a pri-
mary care setting. Future research may also be useful in prospectively overbooking patient visits with a predictive model, rather than in simulation, in a single physician practice.

Table 2. Results of probit regression analysis.
Variable             Beta coefficient (95% CI)    P-value
Month
  January            0.02 (−0.24 to 0.29)         0.857
  February           0.04 (−0.23 to 0.31)         0.773
  March              0.01 (−0.26 to 0.27)         0.965
  April              0.07 (−0.19 to 0.32)         0.621
  May                −0.02 (−0.3 to 0.26)         0.886
  June               −0.08 (−0.34 to 0.19)        0.581
  July               −0.19 (−0.47 to 0.1)         0.203
  August             0.01 (−0.26 to 0.28)         0.953
  September          0.08 (−0.19 to 0.35)         0.563
  October            −0.21 (−0.48 to 0.07)        0.145
  November           −0.03 (−0.31 to 0.25)        0.837
  December           0 (0 to 0)                   Reference
Day
  Monday             3.08 (−211.67 to 217.84)     0.978
  Tuesday            2.95 (−211.81 to 217.71)     0.979
  Wednesday          2.97 (−211.79 to 217.72)     0.978
  Thursday           3 (−211.76 to 217.76)        0.978
  Friday             2.74 (−212.02 to 217.5)      0.98
Age
  <12                −0.19 (−0.71 to 0.34)        0.487
  13–17              0.3 (−0.08 to 0.68)          0.126
  18–25              0.31 (0.05 to 0.57)          0.019*
  26–35              0.01 (−0.24 to 0.26)         0.95
  36–49              0.26 (0.06 to 0.45)          0.012*
  50–64              −0.01 (−0.19 to 0.17)        0.915
  >65                0 (0 to 0)                   Reference
Gender
  Female             −0.01 (−0.12 to 0.1)         0.893
  Male               0 (0 to 0)                   Reference
  No Gender Selected 0 (0 to 0)                   Reference
Race
  White              −0.48 (−0.97 to 0.02)        0.058
  Black              0.04 (−0.47 to 0.55)         0.873
  Asian              −0.56 (−1.35 to 0.23)        0.162
  Hispanic           −0.17 (−0.74 to 0.41)        0.568
Insurance
  Medicare           −0.3 (−0.6 to −0.01)         0.046*
  Private Insurance  −0.27 (−0.51 to −0.03)       0.030*
  Uninsured          0 (0 to 0)                   Reference
Visit Type
  CU                 0.39 (0.23 to 0.56)          0.000*
  INJ                0.11 (−0.19 to 0.4)          0.489
  OV                 0.07 (−0.08 to 0.21)         0.382
  NP                 −0.17 (−0.38 to 0.04)        0.110
  Physical           0 (0 to 0)                   Reference
Previous no-shows
  None               0.3 (−0.26 to 0.85)          0.299
  Once               0.44 (−0.13 to 1.02)         0.131
  Twice              0.94 (0.3 to 1.58)           0.004*
  Three or More      0 (0 to 0)                   Reference
Note: CU = Check Up, INJ = Injection, OV = Office Visit, NP = New Patient, Physical = Physical Exam; * = significant at P < 0.05.
While developing a threshold for our predictive model, a bimodal distribution was found for those patients who had no-show visits. Thus, there may be masked variables which, if identified, could further improve the sensitivity of our predictive model. Further research should focus on additional variables which may not have been analyzed in our model, owing to the availability of information in the retrospective data used for model development, but which have been shown to be significant across studies in a recent systematic review [7].
Our methodology used the existing healthcare elec-
tronic health record and relatively inexpensive software
and data analysis tools. From a practical standpoint,
implementation requires consideration of infrastruc-
ture changes, human resources, managerial issues,
and cost. Health information technology that captures data for decision making was found to be more available in Canadian hospitals that were urban and larger; however, teaching status and staff size were not significant [41]. Thus, existing infrastructure may not be limited to large, academic practices. Training and skill development may help improve staff acceptance of data-driven tools, while older and longer-term staff may require more development [42]. The
cost of data-keeping technology in US healthcare delivery has remained relatively stable from 2006 to 2016; however, 81% of patients believed it improved service delivery [43]. Thus, fixed and variable costs have the potential to remain flat, but patient knowledge of the utilization of technology may improve patient satisfaction scores.

Figure 2. Results and performance of the predictive model and simulated overbooking. (a) A histogram plotted using Microsoft Excel with the output of the regression equation for 6758 patient visits from 2014 to 2015. Visit status by show and no-show is separated to highlight distributional differences. A threshold of 0.160 (95% CI, 0.149–0.151) was chosen to classify a visit as a show or no-show for model deployment. (b) In total, 3571 patient visits in 2016 were analyzed as 251 visit days. One patient day with one visit was excluded as an outlier due to holiday scheduling. The remaining visits, no-shows, and overbooked appointments with the predictive model were compared to maximum capacity for each visit day. Visit days were converted into visit weeks, and maximum capacity, total visits, and visits with the model are displayed. Holiday weeks 23, 28, 29, 37, 48, and 53 were excluded from this image to simplify visualization of maximum capacity. Simulated predictive overbooking resulted in 3.67 vs. 6.87 unused appointments, P < 0.001 (mean difference 3.2, 95% CI, 2.9–3.5). Visit utilization increased from 69% with normal scheduling to 82% with predictive overbooking. (c) The threshold value of 0.16 is marked on the ROC for the training data in Microsoft Excel. The AUC is 0.72 (95% CI, 0.69–0.76). (d) The threshold value of 0.16 is marked on the ROC for the predicted data. The AUC is 0.70 (95% CI, 0.65–0.74) in Microsoft Excel.
Limitations
This project was conducted in a single physician family
medicine practice and may not be broadly applicable to
other clinical settings. Although prospective in data
collection, the model was tested in simulation to
avoid the Hawthorne effect and not as an intervention
in clinical practice. Other variables which may be sig-
nificant predictors may not have been included in
our model. These factors limit the findings of our
study.
Conclusions
Prediction of no-shows or missed appointments
depends on practice-specific issues. It is possible
to develop a predictive model to prospectively pre-
dict no-shows for clinics as small as single phys-
ician practices with 47% sensitivity and 79%
specificity. Overbooking using predictive modeling
may improve a clinic’s ability to schedule unused
visit slots.
Implications for healthcare management
Predictive modeling with overbooking is a cost-effective and reliable method to decrease the rate of underutilization. A central aim of implementing the model is to maximize efficiency while reducing the risk of scheduling visits above capacity, which may be an issue with other methods. Effective innovation in
the context of healthcare requires a diligent evaluation
of both managerial and organizational issues [44].
However, human resource-related issues, including staff wellness to prevent burnout, should be integral to any implementation of this tool. In a systematic review, burnout for staff was related to low job control, workplace demands, low support in the workplace, high workloads, low reward or compensation, and job insecurity [45]. In another review, Rothenberger
from the Department of Surgery at the University of
Minnesota had similar findings with additional issues
related to a culture of blame and conflicting values in
patient care [46]. Koussa et al. found that there were differences between high-income and low-income countries in issues related to staff wellness in the healthcare sector. Although all factors were relevant in both settings, there was greater value placed on autonomy, professional work environment, and workload in high-income countries [47]. In low-income countries, greater value was placed on infrastructure, career development, and financial incentives [47]. Linzer et al. provide an excellent framework for managerial changes related to communication, workflow, and targeted quality improvement in a randomized trial in the US primary care setting that reduced burnout for staff [48].
In addition, certain sub-populations that are not
accessing healthcare should not be overlooked in
order to minimize potential negative effects on popu-
lation-based health. Our study found that check up visits for chronic diseases and patients discharged from the hospital had a higher rate of no-shows.
For our population there is a risk that these patients
may be overlooked if the model is implemented with-
out special consideration. In addition to improving
efficiency in the clinic, follow-up may reduce the
rate of hospital readmissions. The literature rec-
ommends a partnership with inpatient and outpatient
settings for both surgical and medical patients being
discharged from the hospital [49,50]. In addition to
the 10 quality factors for high-quality referrals [40],
an outpatient practice may benefit from direct strat-
egies. Although systematic literature reviews have
mixed results due to quality of methods and study
design, a primary care nurse-based program of call-
ing at-risk patients within 72 h of hospital discharge
may be a useful tool [51–53]. A recent high-quality
trial of this intervention improved appointment
attendance rates with successful phone contact vs.
missed contact (60.1% vs. 38.5% attendance, P = 0.004) [54].
In order to successfully implement the model, improve healthcare outcomes for at-risk patients, and maintain or improve staff wellness, we recommend the following steps:
1. Institute managerial changes in practice setting that
include communication, workflow, and quality
improvement [48].
2. Partner with local health system including inpatient
settings, urgent care, emergency room, and other
medical settings to ensure high-quality referrals
[40,49,50].
3. Assign a quality improvement team to develop and implement the model using the methods in this study.
4. Identify special populations by analyzing model
data and significance.
5. Develop a phone outreach program as needed if a
sub-population of patients is identified that requires
additional follow-up [51–54].
6. Implement the model and apply continuous
improvement at pre-defined intervals to update
the algorithm as new data become available.
Acknowledgements
This work has been supported by mentorship from the faculty in the SELECT MD program at the University of South Florida Morsani College of Medicine and was used to fulfill the capstone requirements of the SELECT MD program. We are also thankful to the team at the private medical practice of Dr Michael Reilly in St. Petersburg, Florida, USA, for their support and assistance with this research project.
Disclosure statement
No potential conflict of interest was reported by the authors.
Ethics
The study was approved by the Institutional Review Board (IRB). All study procedures were performed in compliance with the World Medical Association Declaration of Helsinki on Ethical Principles for Medical Research Involving Human Subjects.
Notes on contributors
M. Usman Ahmad is currently an intern in the Department
of Surgery at the University of Colorado.
Angie Zhang is currently a resident in Neurological Surgery
at the University of California Irvine.
Dr. Rahul Mhaskar is the director of the Office of Research and Associate Professor of Internal Medicine at the University of South Florida Morsani College of Medicine, and Associate Professor of Global Health at the University of South Florida College of Public Health.
ORCID
M. Usman Ahmad http://orcid.org/0000-0001-9797-7106
References
[1] Welch WP, Cuellar AE, Stearns SC, et al. Proportion of physicians in large group practices continued to grow in 2009–11. Health Aff. 2013;32:1659–1666.
[2] Mäntyjärvi M. No-show patients in an ophthalmologi-
cal out-patient department. Acta Ophthalmol.
1994;72:284–289.
[3] Samuels RC, Ward VL, Melvin P, et al. Missed
appointments: factors Contributing to high No-show
rates in an urban pediatrics primary care clinic. Clin
Pediatr (Phila). 2015;54:976–982.
[4] Kaplan-Lewis E, Percac-Lima S. No-show to primary
care appointments: why patients do not come. J Prim
Care Community Health. 2013;4:251–255.
[5] Traeger L, O’Cleirigh C, Skeer MR, et al. Risk factors
for missed HIV primary care visits among men who
have sex with men. J Behav Med. 2012;35:548–556.
[6] Lasser KE, Mintzer IL, Lambert A, et al. Missed
appointment rates in primary care: the importance of
site of care. J Health Care Poor Underserved.
2005;16:475–486.
[7] Dantas LF, Fleck JL, Cyrino Oliveira FL, et al. No-shows in appointment scheduling – a systematic literature review. Health Policy (New York). 2018;122:412–421.
[8] Norris JB, Kumar C, Chand S, et al. An empirical
investigation into factors affecting patient cancellations
and no-shows at outpatient clinics. Decis Support Syst.
2014;57:428–443.
[9] Cashman SB, Savageau JA, Lemay CA, et al. Patient
health status and appointment keeping in an urban
community health center. J Health Care Poor
Underserved. 2004;15:474–488.
[10] Oppenheim GL, Bergman JJ, English EC. Failed
appointments: a review. J Fam Pract. 1979;8:789–796.
[11] Stubbs ND, Geraci SA, Stephenson PL, et al. Methods
to reduce outpatient non-attendance. Am J Med Sci.
2012;344:211–219.
[12] Parikh A, Gupta K, Wilson AC, et al. The effectiveness
of outpatient appointment reminder systems in redu-
cing no-show rates. Am J Med. 2010;123:542–548.
[13] Junod Perron N, Dao MD, Righini NC, et al. Text-mes-
saging versus telephone reminders to reduce missed
appointments in an academic primary care clinic: a
randomized controlled trial. BMC Health Serv Res.
2013;13:125.
[14] Chen Z, Fang L, Chen L, et al. Comparison of an SMS
text messaging and phone reminder to improve attend-
ance at a health promotion center: a randomized con-
trolled trial. J Zhejiang Univ Sci B. 2008;9:34–38.
[15] Guse CE, Richardson L, Carle M, et al. The effect of
exit-interview patient education on no-show rates at
a family practice residency clinic. J Am Board Fam
Pract. 1997;16:399–404.
[16] Lesaca T. Assessing the influence of a no-show fee on
patient compliance at a CMHC. Adm Policy Ment
Health. 1995;22:629–631.
[17] LaGanga LR, Lawrence SR. Clinic overbooking to
improve patient access and increase provider pro-
ductivity. Decis Sci. 2007;38:251–276.
[18] DuMontier C, Rindfleisch K, Pruszynski J, et al. A
multi-method intervention to reduce no-shows in an
urban residency clinic. Fam Med. 2013;45:634–641.
[19] Berg BP, Murr M, Chermak D, et al. Estimating the
cost of no-shows and evaluating the effects of mitiga-
tion strategies. Med Decis Making. 2013;33:976–985.
[20] Lotfi V, Torres E. Improving an outpatient clinic utilization using decision analysis-based patient scheduling. Socioecon Plann Sci. 2014;48:115–126.
[21] Goffman RM, Harris SL, May JH, et al. Modeling
patient no-show history and predicting future outpati-
ent appointment behavior in the Veterans health
administration. Mil Med. 2017;182:e1708–e1714.
[22] Alaeddini A, Yang K, Reddy C, et al. A probabilistic
model for predicting the probability of no-show in hos-
pital appointments. Health Care Manag Sci.
2011;14:146–157.
[23] Torres O, Rothberg MB, Garb J, et al. Risk factor model
to predict a missed clinic appointment in an urban,
academic, and underserved setting. Popul Health
Manag. 2015;18:131–136.
[24] Goldman L, Freidin R, Cook EF, et al. A multivariate
approach to the prediction of no-show behavior in a pri-
mary care center. Arch Intern Med. 1982;142:563–567.
[25] Cohen AD, Kaplan DM, Kraus M, et al.
Nonattendance of adult otolaryngology patients for
scheduled appointments. J Laryngol Otol.
2007;121:258–261.
[26] Samorani M, LaGanga LR. Outpatient appointment
scheduling given individual day-dependent no-show
predictions. Eur J Oper Res. 2015;240:245–257.
[27] Daggy J, Lawley M, Willis D, et al. Using no-show
modeling to improve clinic performance. Health
Inform J. 2010;16:246–259.
[28] Muthuraman K, Lawley M. A stochastic overbooking model for outpatient clinical scheduling with no-shows. IIE Trans. 2008;40:820–837.
[29] Huang Y, Hanauer DA. Patient no-show predictive model development using multiple data sources for an effective overbooking approach. Appl Clin Inform. 2014;5:836–860.
[30] Huang Y-L, Hanauer DA. Time dependent patient no-show predictive modelling development. Int J Health Care Qual Assur. 2016;29:475–488.
[31] Zeng B, Turkcan A, Lin J, et al. Clinic scheduling models with overbooking for patients with heterogeneous no-show probabilities. Ann Oper Res. 2010;178:121–144.
[32] Creps J, Lotfi V. A dynamic approach for outpatient scheduling. J Med Econ. 2017;20:786–798.
[33] May FP, Reid MW, Cohen S, et al. Predictive overbooking and active recruitment increases uptake of endoscopy appointments among African American patients. Gastrointest Endosc. 2017;85:700–705.
[34] Reid MW, May FP, Martinez B, et al. Preventing endoscopy clinic no-shows: prospective validation of a predictive overbooking model. Am J Gastroenterol. 2016;111:1267–1273.
[35] Reid MW, Cohen S, Wang H, et al. Preventing patient absenteeism: validation of a predictive overbooking model. Am J Manag Care. 2015;21:902–910.
[36] Yucesan M, Gul M, Celik E. A multi-method patient arrival forecasting outline for hospital emergency departments. Int J Healthc Manag. 2018:1–13.
[37] De la Lama J, Fernandez J, Punzano JA, et al. Using Six Sigma tools to improve internal processes in a hospital center through three pilot projects. Int J Healthc Manag. 2013;6:158–167.
[38] Fitzgerald C, Hurst S. Implicit bias in healthcare professionals: a systematic review. BMC Med Ethics. 2017;18(19). DOI:10.1186/s12910-017-0179-8.
[39] Zailinawati AH, Ng CJ, Nik-Sherina H. Why do patients with chronic illnesses fail to keep their appointments? A telephone interview. Asia-Pacific J Public Health. 2006;18:10–15.
[40] Senitan M, Alhaiti AH, Gillespie J. Improving integrated care for chronic non-communicable diseases: a focus on quality referral factors. Int J Healthc Manag. 2019;12:106–115.
[41] Chow CK. Factors associated with the extent of information technology use in Ontario hospitals. Int J Healthc Manag. 2013;6:18–26.
[42] Walston SL, Bennett CJ, Al-Harbi A. Understanding the factors affecting employees' perceived benefits of healthcare information technology. Int J Healthc Manag. 2014;7:35–44.
[43] Okpala P. Assessment of the influence of technology on the cost of healthcare service and patient's satisfaction. Int J Healthc Manag. 2018;11:351–355.
[44] Lega F. Strategic, organisational and managerial issues related to innovation, entrepreneurship and intrapreneurship in the hospital context: remarks from the Italian experience. J Manag Mark Healthc. 2014;2:77–93.
[45] Aronsson G, Theorell T, Grape T, et al. A systematic review including meta-analysis of work environment and burnout symptoms. BMC Public Health. 2017;17:1–13.
[46] Rothenberger DA. Physician burnout and well-being: a systematic review and framework for action. Dis Colon Rectum. 2017;60:567–576.
[47] Koussa ME, Atun R, Bowser D, et al. Factors influencing physicians' choice of workplace: systematic review of drivers of attrition and policy interventions to address them. J Glob Health. 2016;6:1–13.
[48] Linzer M, Poplau S, Grossman E, et al. A cluster randomized trial of interventions to improve work conditions and clinician burnout in primary care: results from the Healthy Work Place (HWP) study. J Gen Intern Med. 2015;30:1105–1111.
[49] Kogon B, Woodall K, Kanter K, et al. Reducing readmissions following paediatric cardiothoracic surgery: a quality improvement initiative. Cardiol Young. 2015;25:935–940.
[50] Tang N. A primary care physician's ideal transitions of care – where's the evidence? J Hosp Med. 2013;8:472–477.
[51] Woods CE, Jones R, O'Shea E, et al. Nurse-led postdischarge telephone follow-up calls: a mixed study systematic review. J Clin Nurs. 2019;28(19–20):3386–3399. DOI:10.1111/jocn.14951.
[52] Crocker JB, Crocker JT, Greenwald JL. Telephone follow-up as a primary care intervention for postdischarge outcomes improvement: a systematic review. Am J Med. 2012;125:915–921.
[53] Bahr SJ, Solverson S, Schlidt A, et al. Integrated literature review of postdischarge telephone calls. West J Nurs Res. 2014;36:84–104.
[54] Tang N, Fujimoto J, Karliner L. Evaluation of a primary care-based post-discharge phone call program: keeping the primary care practice at the center of post-hospitalization care transition. J Gen Intern Med. 2014;29:1513–1518.