Content uploaded by Yangzi Jiang
Author content
All content in this area was uploaded by Yangzi Jiang on Jan 13, 2023
Content may be subject to copyright.
Geographic Virtual Pooling of Hospital Resources:
Data-Driven Tradeoff between Waiting and Traveling
Yangzi Jiang
Kellogg School of Management, Northwestern University, yangzi.jiang@kellogg.northwestern.edu
Hossein Abouee Mehrizi
University of Waterloo, haboueem@uwaterloo.ca
Jan A. Van Mieghem
Kellogg School of Management, Northwestern University, vanmieghem@kellogg.northwestern.edu
Sep. 25, 2020; Revised: Apr. 30, 2021 & Jan. 7, 2022 & Jan. 12, 2023
(1) Problem definition: Patient-level data from 72 MRI hospitals in Ontario, Canada from 2013 to 2017
shows that over 60% of patients exceeded their wait time targets. We conduct a data-driven analysis to
quantify the reduction in the patient Fraction Exceeding Target (FET) for MRI services through geographic
virtual resource sharing while limiting incremental driving time. Our model partitions the 72 MRI hospitals
into a set of groups or clusters. Each cluster keeps an all-inclusive list of all patients and available MRI
scanners within its cluster and employs a scheduling rule to assign its patients to specific MRI scanners at
specific hospital locations within its cluster.
(2) Academic/Practical Relevance: Resource sharing among hospitals clustered in (possibly non-
contiguous) geographic regions can reduce waiting time but increase traveling costs. We prove analytically
that partial geographic pooling typically dominates complete pooling and no pooling for a simple linear city
model with homogeneous patients. We present a data-driven method to solve the generalized (practical but
more difficult) geographic pooling problem of 72 hospitals with heterogeneous patients with different wait
time targets located in a two-dimensional region.
(3) Methodology: We propose an “augmented-priority rule,” which is a sequencing rule that balances the
patient’s initial priority class and the number of days until her wait time target. We then use neural networks
to predict patient arrival and service times. We combine this predicted information and the sequencing rule
within each cluster to implement “advance scheduling,” which informs the patient of her treatment day and
location when requesting an MRI scan. We then optimize the number of geographic resource pools among
the 72 hospitals using Genetic Algorithms.
(4) Results: Our resource pooling model lowers the FET from 66% to 36% while constraining the average
incremental travel time below three hours. In addition, our model and method show that only ten additional
scanners are needed to achieve 10% FET while 50 additional scanners would be needed without resource
sharing. Over 70% of the hospitals are not worse-off financially (measured over a rolling horizon of at least
two weeks, using either fee per scan or fee per machine hour). Each individual hospital, measured over at
least two weeks, is weakly better-off socially (e.g., achieves a higher machine utilization and a lower FET).
(5) Managerial Implication: Our paper provides a practical, data-driven geographical resource sharing
model that hospitals can readily implement. Our solution method achieves a near-optimal solution with low
computational complexity. Using smart data-driven scheduling, a little extra capacity placed at the right
location is all we need to achieve the desired FET under geographic resource sharing.
Key words : Healthcare, Data-Driven, Resource Sharing, Recurrent Neural Network, Genetic Algorithms
1. Introduction
Magnetic Resonance Imaging (MRI) is used by physicians to assist medical diagnosis, procedures,
and research. It produces images to help doctors identify tumors and diagnose cancer. The pro-
longed waiting time experienced by patients needing MRI scans has been an issue for Canadian
hospitals. Since 2004, the First Ministers’ Health Accord has acknowledged the importance of wait
times in their 10-Year Plan to Strengthen Health Care (Motiwala et al. 2005). In 2012, the Standing
Senate Committee on Social Affairs, Sciences, and Technology reported on the progress of the 10-
year plan (Canada Standing Committee 1984). This report highlights that there had been almost
no progress in reducing MRI wait times. In 2013, under the supervision of the Ministry of Health,
MRI hospitals in Ontario began to gather data in the hope of finding a method to reduce waiting.
The Canadian Association of Radiologists (CAR) proposed the idea of a Maximum Time Interval
Target, or Wait Time Target, in its handbook “National Maximum Wait Time Access Targets
For Medical Imaging” (CAR 2014). The handbook divides patients into four priority classes and
assigns each class a priority-specific wait time target. It states that hospitals should treat over
90% of their patients within their wait time target; i.e., the time between the request of an MRI
scan (“scheduled time”) and the time of scan (“treatment time”) should be
shorter than the patient’s wait time target. However, based on Health Quality Ontario’s report in
2020, more than 42% of the patients are scanned after their wait time targets. Hospitals thus are
urged to reduce the fraction of patients exceeding their wait time targets, hereafter referred to as
the “Fraction Exceeding Target” (FET), which is used to evaluate hospital performance (Afèche
2013).
During the five years spanned by our data, all 72 MRI hospitals in Ontario operated indepen-
dently of each other. Patients who require an MRI scan submit their requests to their family
doctors. They are usually assigned to the hospital nearest to their home, regardless of the number
of patients already waiting for service at that location or at another location that may have
a much shorter queue. If hospitals continue to operate individually and independently, they cannot
reduce their FET to 10% without significant capacity expansion (Jiang et al. 2020). Yet, the
public hospitals in Ontario are connected under the Local Health Integration Networks (LHINs).
This established information network enables collecting and sharing patient information among the
hospitals, which motivated us to develop a virtual resource sharing model among the 72 hospitals
without additional cost.
This paper aims to reduce the FET by proposing a virtual partial resource sharing model that is
effective for hospitals with multiple priority classes with different priority-specific wait time targets.
Our model partitions the 72 MRI hospitals into a set of groups or clusters. Each cluster keeps
an all-inclusive list of all patients and available MRI scanners within its cluster and employs a
scheduling rule to assign its patients to specific MRI scanners at specific hospital locations within
its cluster. We emphasize that all MRI scanners remain at their original hospital locations. We only
pool patient information in each geographic cluster; utilizing this information to assign patients
virtually shares or pools the MRI scanners within the cluster. Assigning patients to hospitals
with shorter waiting lists improves the FET yet may increase patient travel time. To balance the
tradeoff between waiting and traveling, we propose a geographic partial resource sharing model that
groups hospitals typically with geographic proximity to maximize FET reduction while controlling
incremental travel time. Our paper thus addresses four questions:
1. Advance scheduling: How should a cluster assign its patients to its scanners and inform each
patient of her service date and service location at the time she submits her request for an MRI
scan? (MRI hospitals in Ontario are required to use “advance scheduling” [Geng and Xie 2016]).
2. Clustering: Which hospitals should pool their MRI scanners to reduce Fraction Exceeding
Target while controlling for incremental patient travel times?
3. Performance evaluation: What is the impact of geographical partial pooling on FET?
4. Capacity expansion: How much capacity is needed to reduce FET below 10%?
We propose that clusters use an “augmented-priority” rule, which is a sequencing rule that
augments a patient’s priority level with the number of days until the patient’s wait time target,
and then assign their patients in that sequence to the first available MRI scanner in their cluster.
Augmented prioritization strategically delays service for some patients to prevent more urgent
patients (possibly from a lower priority class) from exceeding their wait time targets. Clearly, to
inform the index patient of their service date and location at the time of their request, advance
scheduling must anticipate the future arrivals of more urgent patients who must be served before
the index patient. Therefore, we must forecast the arrival patterns and service rates
with high accuracy. Given that the arrivals are multi-class and non-stationary, there exists no
queueing formula to make these predictions analytically. Therefore, we adopt data-driven methods:
we compare traditional time series models with neural networks and find that a two-layered Long
Short-Term Memory network has the best out-of-sample accuracy in predicting patient arrival and
service times. Combining these predictions with our augmented-priority sequencing rule, we formulate an
advance scheduling algorithm that predicts the treatment time for each patient.
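The LSTM itself requires a deep-learning framework, but the flavor of the out-of-sample forecast comparison can be sketched with simple baselines. The sketch below uses synthetic daily arrivals with weekly seasonality (all numbers illustrative, not the Ontario data) and compares a constant-mean forecast against a seasonal (same-weekday) forecast on a holdout year:

```python
import math, random

random.seed(1)
# Synthetic daily arrival counts with a weekly cycle plus noise (illustrative only).
days = 4 * 365
arrivals = [round(40 + 10 * math.sin(2 * math.pi * (t % 7) / 7) + random.gauss(0, 3))
            for t in range(days)]
train, test = arrivals[:3 * 365], arrivals[3 * 365:]   # hold out the last year

def mae(forecasts, actuals):
    # Mean absolute error of a forecast sequence against the held-out actuals.
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)

# Baseline 1: forecast the overall training mean for every day.
mean_fc = [sum(train) / len(train)] * len(test)

# Baseline 2: seasonal forecast -- the training mean for the same weekday.
by_weekday = [[x for t, x in enumerate(train) if t % 7 == d] for d in range(7)]
means = [sum(xs) / len(xs) for xs in by_weekday]
seasonal_fc = [means[(len(train) + t) % 7] for t in range(len(test))]

print(mae(mean_fc, test) > mae(seasonal_fc, test))   # → True: seasonality helps
```

A non-stationary, multi-class version of this evaluation is what motivates moving from such baselines to recurrent neural networks.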
Given the data-driven advance scheduling rule, we then optimize over all feasible clusters (par-
titions) of the 72 hospitals to find a clustering that minimizes the FET while controlling the
incremental travel time in each cluster (e.g., below 1, 2, or 3 hours). This strategic network problem
is traditionally solved by set-covering methods in the field of combinatorial optimization. However,
set-covering problems are NP-hard (Chvatal 1979), and optimal partitioning of 72 hospitals suf-
fers from the curse of dimensionality. Therefore, we approximate the set-covering problem with
data-driven methods: a chromosomic design of the Genetic Algorithm to produce a near-optimal
clustering of the 72 hospitals.
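The chromosomic encoding behind such a Genetic Algorithm can be sketched in a few lines. This is an illustration, not the paper's implementation: the fitness below is a stand-in that merely rewards balanced cluster sizes, whereas the actual fitness is the simulated weighted FET under advance scheduling with the incremental-travel constraint, and the cluster count 12 is an arbitrary illustrative choice:

```python
import random

N_HOSPITALS, N_CLUSTERS = 72, 12   # 12 clusters is illustrative, not from the paper
random.seed(0)

def random_chromosome():
    # One gene per hospital: the label of the cluster that hospital joins.
    return [random.randrange(N_CLUSTERS) for _ in range(N_HOSPITALS)]

def fitness(chrom):
    # Stand-in objective (higher is better): penalize unbalanced cluster sizes.
    target = N_HOSPITALS / N_CLUSTERS
    sizes = [chrom.count(c) for c in range(N_CLUSTERS)]
    return -sum((s - target) ** 2 for s in sizes)

def crossover(a, b):
    cut = random.randrange(1, N_HOSPITALS)   # single-point crossover
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.02):
    # Resample each gene with a small probability.
    return [random.randrange(N_CLUSTERS) if random.random() < rate else g
            for g in chrom]

population = [random_chromosome() for _ in range(30)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                # elitist truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    population = parents + children

best = max(population, key=fitness)          # a near-balanced partition of 72 hospitals
```

Swapping the stand-in fitness for a scheduling simulation that returns the weighted FET, and rejecting chromosomes that violate the travel limit, recovers the structure of the search described above.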
Our method utilizes over 30 million records of patient-level data gathered by 72 hospitals from
January 2013 to December 2017. Prediction and clustering models are determined using data from
Figure 1 The fraction of patients exceeding wait time target: historical (2017) versus patients assigned to clusters
and sequenced using FIFO, Non-Preemptive Priority Class, and our model’s “augmented priority.”
2013-2016 (i.e., the training data). We then evaluate our method using the data from 2017 (i.e., out-
of-sample). Our analysis (Fig. 1) demonstrates that geographical pooling using our clustering and
sequencing rule outperforms current practice as well as clustering with First-in First-out (FIFO),
and Non-preemptive Priority sequencing. Using simulation, we show that our method reduces the
aggregate FET from 66% to 36% while improving the FET of every priority class (in fact, ensuring
all Priority 1 and Priority 2 patients are treated within their wait time targets).
To investigate whether self-interested independent hospitals would benefit from our method (i.e.,
to check “incentive-compatibility”), we also evaluate the performance of each individual hospital.
We show that the FET (both weighted and non-weighted) is reduced for all 72 hospitals. When
using machine hours as a proxy for financial performance, we conclude that no hospital is worse-off
under geographic pooling for any given month in 2017. When using fee per scan as a proxy, we
conclude that over 70% of the hospitals are weakly benefiting from geographic pooling financially
for any given month in 2017. We also compare geographical pooling with complete resource pooling
with perfect information and conclude that our model can achieve a near-optimal solution with
low computational complexity and high robustness towards imperfect information.
Lastly, we provide some guidance towards the capacity expansions needed to achieve 10% FET.
Without geographic resource pooling, 50 additional scanners are required to reduce the FET to
10%. Expanding 50 scanners on top of the current 115 is extremely costly and leads to wasted
capacity. In contrast, a 10% FET can be achieved with only ten additional scanners using our
geographic resource pooling model with a data-driven sequencing rule. This demonstrates a truism
in smart data-driven operations: a little extra capacity placed at the right location is all we need
to achieve the desired FET under geographic resource pooling.
Overall, our model provides a practical and implementable resource-sharing model for the 72
hospitals in Ontario. The Ontario Ministry of Health has recently started a new Central Intake Pro-
gram (CIP) to implement resource-sharing among seven hospitals in Ottawa and Eastern Ontario
for non-urgent MRI scans (Priority 3 and 4 patients). Utilizing the core ideas of the resource pool-
ing model, they are centralizing the decision-making process of a patient assignment and matching
the patients to a more appropriate site to reduce the FET among lower priority patients.
2. Literature Review
Our paper contributes to two streams of literature: resource sharing and patient scheduling.
Loch [1998] predicts that “the one idea from the reengineering era most likely to persist is that
of integrated work.” Buzacott et al. [1994] combine several tasks and provide an early assessment
of the superiority of such integrated systems. Over the years, the literature shifted from specialized
models with dedicated servers to flexible models with pooled servers. A substantial literature demonstrates
that pooled systems can outperform their unpooled counterparts; e.g., Smith and Whitt [1981],
Van Mieghem [1998]. Yet another stream of the flexibility literature shows when pooling might
not be beneficial. For instance, Smith and Whitt [1981] show that for heterogeneous tasks and/or
servers, pooled systems may have worse performance than the dedicated system. Mandelbaum
and Reiman [1998] introduce a specialized model with a dedicated server and a flexible model
with flexible servers capable of handling all tasks. By analyzing the system in steady-state, they
conclude that pooling improves performance in light traffic, but the effect of pooling in heavy traffic
is difficult to determine. They also demonstrate that optimal capacity allocation and variability in
arrival and servers mitigate the advantage of pooling. Van Dijk and van der Sluis [2009] analyze
how pooling can be effective for preemptive and non-preemptive queues. They compare simple
priority rules, threshold rules, and threshold rules with pooling and conclude that prioritizing long
jobs outperforms pooling. However, the results fail to hold if there are more than two priority
classes. Lim et al. [2017] analyze how the trade-offs between the costs and benefits of proximity and
agility affect facility location decisions in a supply chain. Our paper extends the trade-off between
waiting and traveling to a two-dimensional setting and investigates virtual geographical resource
pooling using information sharing.
Behavioral Operations Management is another branch of literature that explores the effect of
pooling, with particular focus on settings where human service capacity is endogenously determined.
When human servers have discretion over their service capacity, pooling can lead to the
free-rider problem. Busyness-averse behavior can negate the benefits from queue pooling
(Armony et al. 2017). Do et al. [2015] develop a theoretical model to show that service rate decreases
as the number of servers in a resource pool increases. Shunko et al. [2018] find that servers work
faster in a dedicated system as the length of the queue is more visible and reduces the free-riding
problem. Our paper only focuses on exogenously determined service capacity as all our servers are
machines; thus, their performance is not affected by pooling. We show that pooling can reduce
the fraction of overstaying patients for hospitals with high server utilization, even when it might
not effectively reduce the waiting time itself. We also demonstrate that pooling can significantly
outperform simple priority and FIFO rules.
Poor queueing management and the lack of an effective scheduling scheme are a “major source of
operational inefficiency” (Chakraborty et al. 2010). Applying advance scheduling to patients of
different priorities with various wait time targets is difficult. One approach to dynamic patient scheduling is
through resource allocation. Ayvaz and Huh [2010] use a dynamic programming approach to handle
multiple classes of patients with different reactions to delay of service. They show that a simple
threshold policy can perform optimally in their experiment. Patrick et al. [2008] solve the schedul-
ing problem for multi-priority class patients with a dynamic Markov decision process. However, the
approximate dynamic programming problem suffers from the curse of dimensionality and is unable
to handle data with high dimensions. Another branch of the literature looks at scheduling patients
with dynamic priority queues. Our augmented priority rule builds on Kleinrock and Finkelstein
[1967] who propose a time-dependent priority queue that allows low priority patients to receive
treatment ahead of high priority patients under certain conditions. Hay et al. [2006] combine the
initial priority score with accumulated priority into a linear function and use it to determine the
optimal order of treatment for two classes of patients. Stanford et al. [2014] extend Hay et al. [2006]
to multi priority classes and analyze the wait time distribution for each priority process.
Our paper proposes a heuristic sequencing rule to schedule multiple priority patients with wait
time targets. We design an augmented-priority queue that balances the patient’s initial priority
and the number of days until the patient’s wait time target to determine her treatment order. Our
sequencing rule uses forecasted data to overcome the uncertainty and curse of dimensionality
associated with advance scheduling.
3. Model of Geographical Resource Pooling
3.1. Simple Linear Model of Geographical Resource Pooling
Let us start with a simple linear model inspired by Hotelling [1990]: a set of homogeneous patients
is uniformly distributed in a “linear city” on the interval [0,1]. We have a total of N homogeneous
hospitals, each with a Poisson arrival rate of λ and an exponential service rate of µ. As usual, let
ρ = λ/µ denote the hospital utilization, assumed to be below 1 to ensure stability. The only difference
among hospitals is their locations. The patients incur a travel cost to visit the nearest hospital,
where they may wait to receive service. The linear city model is a simplification of our model as
all patients, hospitals, and service rates are homogeneous, and traveling is single-dimensional.

Let D denote a patient’s distance traveled and W his/her expected waiting time. Let C_D > 0
and C_W > 0 denote the patient’s cost to travel one unit of distance and to wait one unit of time,
respectively. Let τ = C_D/C_W denote the relative cost of traveling to waiting. The total expected
cost (scaled by C_W) incurred by a patient thus becomes:

C = E[W] + τ E[D].    (1)

Varying τ allows us to investigate the relative importance that patients place on waiting versus
driving. We thus call τ > 0 the trade-off coefficient. The objective is to find the optimal clustering
of N hospitals that minimizes the total cost (1) experienced by the patients.
There are two “corner” solutions: no resource pooling and complete resource pooling. No resource
pooling has the minimal expected traveling distance. The equidistant placement of N independently
operating hospitals at [1/(2N), 3/(2N), 5/(2N), ..., (2N−1)/(2N)] minimizes the expected patient
traveling distance to E[D_{N clusters}] = 1/(4N). Given independent exponential inter-arrival and
service times, a patient’s expected waiting time at the nearest hospital is
E[W_{N clusters}] = λ/(µ(µ−λ)) = ρ/(µ(1−ρ)).
In contrast, pooling all hospitals’ arrivals and resources together into one “Super Hospital”
yields the minimal expected waiting cost. In this case, we assume that all hospitals combine their
resources into one centralized location, and all patients must travel to the centralized location to
receive service. We only assume this in the analytical model to obtain a closed-form solution; in
our actual geographic pooling model, MRI scanners stay at their original locations. The completely
pooled super hospital has arrival rate Nλ and service rate Nµ. The expected waiting time would be
E[W_{1 cluster}] = (Nλ/Nµ)/(N(µ−λ)) = (1/N) · λ/(µ(µ−λ)) = (1/N) E[W_{N clusters}].
The optimal location of the super hospital is 1/2 and yields an expected traveling distance of
E[D_{1 cluster}] = 1/4.
In between the two corner solutions is the partial pooling of N hospitals in p clusters. The optimal
placement of the p clusters is at [1/(2p), 3/(2p), 5/(2p), ..., (2p−1)/(2p)]. In this case, we combine
the resources from N/p hospitals. The expected traveling distance becomes E[D_{p clusters}] = 1/(4p)
while the expected waiting time is E[W_{p clusters}] = (p/N) E[W_{N clusters}]. The corresponding
balanced total cost (1) thus becomes:

E[W_{p clusters}] + τ E[D_{p clusters}] = (p/N) · ρ/(µ(1−ρ)) + τ/(4p).    (2)
Similar to the traditional Economic Order Quantity model (Zipkin 2000) and shown in Figure 2,
this cost is convex in the number of clusters p. The minimal cost is obtained by taking the first
order condition, which prescribes the optimal number of clusters to balance waiting and traveling:
Figure 2 The total cost incurred by patients in the linear city comprises both waiting and driving costs. It is
convex in the average number of hospitals per cluster, so that an optimal clustering exists.
Proposition 1 The optimal number of clusters of N hospitals on the linear segment [0,1] is

p⋆ = √( (τN/4) · µ(1−ρ)/ρ ).    (3)

The corresponding minimal balanced total waiting and traveling cost is

C⋆ = √( τρ/(Nµ(1−ρ)) ).    (4)
When patients incur both waiting and traveling costs, the linear model demonstrates that partial
geographical pooling typically dominates complete pooling and no pooling. Both the optimal
number of clusters and the cost exhibit “square-root” scaling. As traveling becomes more costly
relative to waiting (i.e., τ increases), the optimal number of clusters and the corresponding total
cost increase sublinearly. As either the number of hospitals N or the individual hospital capacity
µ increases, the optimal number of clusters increases, yet the total cost decreases. As the individual
hospital utilization ρ increases, the number of clusters decreases while the total cost increases.
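Proposition 1 can be checked numerically. A minimal sketch with illustrative parameter values (N, λ, µ, τ below are chosen for convenience, not calibrated to the Ontario data):

```python
import math

def total_cost(p, N, lam, mu, tau):
    """Balanced cost (2): waiting term (p/N) * rho/(mu*(1-rho)) plus travel term tau/(4p)."""
    rho = lam / mu
    return (p / N) * rho / (mu * (1 - rho)) + tau / (4 * p)

def optimal_clusters(N, lam, mu, tau):
    """Closed-form optimum (3): p* = sqrt( (tau*N/4) * mu*(1-rho)/rho )."""
    rho = lam / mu
    return math.sqrt(tau * N / 4 * mu * (1 - rho) / rho)

N, lam, mu, tau = 72, 0.8, 1.0, 2.0    # illustrative values only
rho = lam / mu

p_star = optimal_clusters(N, lam, mu, tau)           # here: 3 clusters
c_star = total_cost(p_star, N, lam, mu, tau)

# The minimal cost matches the square-root formula (4)...
assert abs(c_star - math.sqrt(tau * rho / (N * mu * (1 - rho)))) < 1e-9
# ...and the cost is convex: neighboring cluster counts are more expensive.
assert c_star <= total_cost(p_star - 1, N, lam, mu, tau)
assert c_star <= total_cost(p_star + 1, N, lam, mu, tau)
```

With these values, p⋆ = √(36 · 0.25) = 3 and C⋆ = 1/3, illustrating how a handful of clusters already balances waiting against traveling.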
3.2. The Actual Two-dimensional Geographic Partial Pooling Problem
The insights from the linear city model will generalize to the two-dimensional geographical pooling
that we study throughout the remainder of this paper. In addition to being two-dimensional, how-
ever, the practical problem that the Province of Ontario faces involves additional complications: the
72 hospitals and their patients are heterogeneous; there are four classes of patients, differentiated
by their priority and wait time target; hospitals use prioritized service policies; constructing one
“super hospital” may be infeasible; and the ultimate performance metric is the fraction of patients
whose waiting time exceeds a priority-specific target (FET). This makes the practical problem
much more complicated and an analytical solution no longer exists. (The objective to minimize
FET has important ramifications: as shown in the Appendix, even if geographic pooling has a
negligible impact on expected waiting times, it can still “compress” the upper tail of the waiting
time distribution, and thus the FET.)
Equation (1) proposed in Section 3.1 finds the optimal clustering that balances the trade-off
between driving and waiting. However, hospital and healthcare practitioners expressed concern
about patients’ ability to quantify their relative preference over driving and waiting. While it
is difficult to quantify a trade-off coefficient for each patient based on simple interview questions,
patients can easily quantify their maximally tolerable travel time, denoted by D. This directly limits
the number of hospitals one can pool and implies we should seek partial pooling configurations,
or clusterings of the hospitals, each of which can be represented as a partition p of the N = 72
hospitals. Thus, the actual two-dimensional geographic pooling problem seeks the partition p that
minimizes the weighted FET while constraining each patient’s incremental travel time within
an allowable limit.
Let i ∈ {1,2,3,4} denote the priority class of a patient. Let j ∈ {1,2,...,J_i} be the patient
identifier out of a total population of J_i type-i patients, with Σ_{i=1}^{4} J_i = J, where J denotes
the total patient population. We let w_i denote the cost for priority-i patients exceeding their wait
time target. (The weights w_1 = 28/41, w_2 = 10/41, w_3 = 2/41, w_4 = 1/41 are assigned based on
the relative urgency of each priority class’s wait time target.) Let O_{ij}(p) denote the “overtime,”
which is the number of days patient j of priority i exceeds her wait time target under the given
partition p. Similarly, let D_{ij}(p) denote the incremental travel time: the difference between the
patient’s original traveling time, defined as the time needed to travel from the patient’s home to
her original hospital, and the new traveling time, defined as the time needed to travel from the
patient’s home to her newly assigned hospital. The actual two-dimensional geographic partial
pooling problem can thus be stated mathematically as:

min over all partitions p of the N hospitals:   Σ_{i=1}^{4} Σ_{j=1}^{J_i} w_i E[O_{ij}(p)]    (5)
s.t.   D_{ij}(p) ≤ D   ∀ i, j
The expectation in the objective function accounts for the variation and uncertainty in the
arrival rates, service times, and patients’ home addresses. Taking the expectation over the O_{ij}(p),
we estimate the expected exceeding time based on our estimates of the arrivals, service
times, and patients’ home addresses over a year. Given that driving time can vary significantly due
to traffic and road conditions, we use Google Maps to capture the driving time for every hour of
the day. We then average these 24 driving times to estimate the time needed to travel from the
patients’ homes to the new hospital. Notice that
Table 1 Sample Patient Level Data
Patient ID  Hospital ID  Priority  Target (days)  Scan Type  Scheduled Time  Treatment Time
34562 4302 3 10 Brain 09/10/2015, 08:35 09/28/2015, 13:35
34563 3703 1 1 Spine 09/11/2015, 10:35 09/11/2015, 17:35
34564 2173 4 28 Pelvis 09/14/2015, 17:22 10/23/2015, 19:34
this two-dimensional model is similar to the linear model. The linear model’s objective function
(Equation (1)) is just a Lagrangian relaxation of the two-dimensional objective function.
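For a candidate partition, the objective (5) can be evaluated directly from simulated patient outcomes. A toy sketch, using the paper’s weights but hypothetical outcome tuples and an illustrative travel limit:

```python
# Weights w_i from Section 3.2 (28/41, 10/41, 2/41, 1/41 for Priorities 1-4).
w = {1: 28 / 41, 2: 10 / 41, 3: 2 / 41, 4: 1 / 41}

D_BAR = 3.0   # allowed incremental travel time in hours (illustrative limit)

# Hypothetical per-patient outcomes under one candidate partition:
# (priority class i, overtime O_ij in days, incremental travel time D_ij in hours).
outcomes = [(1, 0, 0.5), (2, 1, 1.0), (3, 4, 2.5), (4, 12, 0.0)]

def weighted_overtime(outcomes):
    """Objective (5): sum of w_i * O_ij, enforcing the constraint D_ij <= D_BAR."""
    assert all(d <= D_BAR for _, _, d in outcomes), "travel-time constraint violated"
    return sum(w[i] * o for i, o, _ in outcomes)

score = weighted_overtime(outcomes)   # 30/41 ≈ 0.73 weighted overtime days
```

In the paper this evaluation sits inside the simulation loop: each candidate partition produced by the Genetic Algorithm is scored this way on forecasted arrivals.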
4. Descriptive Analytics
Our paper used five years of MRI data gathered from 72 hospitals from January 2013 to December
2017. Our dataset contains patient-level information regarding each patient’s specifics and hospital-
level information regarding operating hours and capacity constraints.
Table 1 is a sample of the patient-level records. The Patient ID is a unique identifier used
to identify each patient, and the Hospital ID indicates the hospital that received the MRI scan
request. There are four Priority classes pre-determined by the patient’s family doctors based on
patient situations, and each class is associated with a pre-specified Wait Time Target. We will
elaborate on the priority classes and wait time targets in Section 4.1. There are ten different Scan
Types that can be performed by the hospitals, each with a stochastic scan duration.
Scheduled Time records the date and time the patient’s MRI scan request is received at the
Medical Imaging Booking office. The patient is then added to the hospital’s MRI waiting queue.
The Treatment Time logs the date and time the patient actually received the MRI scan. The
waiting time is defined as (Treatment Time − Scheduled Time)^+, which ideally is less than the
patient’s priority-specific wait time target. In particular, the fraction of patients for which waiting
time exceeds target (FET) is a key performance metric to be minimized.
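Computing the FET from records like those in Table 1 is straightforward. A sketch over three hypothetical rows in the Table 1 format (timestamps and targets invented for illustration):

```python
from datetime import datetime, timedelta

FMT = "%m/%d/%Y, %H:%M"   # timestamp format used in the Table 1 sample

# (priority, wait time target in days, Scheduled Time, Treatment Time) -- hypothetical rows.
records = [
    (3, 10, "09/10/2015, 08:35", "09/28/2015, 13:35"),   # waited ~18 days > 10-day target
    (1,  1, "09/11/2015, 10:35", "09/11/2015, 17:35"),   # same-day scan, within target
    (4, 28, "09/14/2015, 17:22", "10/23/2015, 19:34"),   # waited ~39 days > 28-day target
]

def fet(records):
    """Fraction of patients whose waiting time exceeds their wait time target."""
    exceeded = sum(
        datetime.strptime(treated, FMT) - datetime.strptime(scheduled, FMT)
        > timedelta(days=target)
        for _, target, scheduled, treated in records)
    return exceeded / len(records)

print(round(fet(records), 2))   # → 0.67 for this toy sample
```

The same computation, run per priority class and per hospital on the full dataset, yields the FET figures reported throughout the paper.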
4.1. Patient Types, Priority Classes, and Class-specific Waiting Targets
The patients are assigned a priority class at the time of their request for an MRI scan. According
to the Canadian Association of Radiologists, a Priority 1 patient is an emergency patient who
needs an examination to diagnose diseases or injuries that are immediately life-threatening.
Priority 2 patients are urgent patients who need treatment for a disease that is not immediately
life-threatening and will suffer no adverse outcome if all examinations are completed within the
wait time target. Priority 3 patients are semi-urgent patients who require a diagnosis that can
inform or alter the treatment plan for a disease. Priority 4 patients are non-urgent patients whose
long-term medical outcome would not be negatively impacted by small delays in treatment as long
as the wait time is within the target. The wait time target is priority-specific and pre-determined
by the patient’s condition. Priority 1 patients require same-day examination as they might have
Table 2 Scheduled Duration vs. Actual Scan Time for Each Type of Scan (minutes)

Scan Type             Scheduled Duration   Avg. Actual Duration   Std. Actual Duration   Min Actual Duration
Brain                 100                  41.5                   5.0                    17
Extremities           40                   26.6                   4.7                    13
Spine                 25                   20.3                   3.4                    8
Abdomen               40                   32.8                   3.5                    13
Pelvis                30                   28.5                   3.0                    16
Breast                35                   36.4                   5.2                    16
Head & Neck           45                   4.1                    16.6                   21
Cardiac               25                   8.0                    1.8                    5
Thorax                50                   44.1                   4.8                    26
Peripheral Vascular   60                   48.3                   6.4                    31
some life-threatening conditions that require immediate actions. The wait time targets for Priority
2, 3, and 4 patients are 48 hours (2 days), 10 days, and 28 days respectively.
4.2. Type of Scans and Scan Duration
The operating hours vary from hospital to hospital, with some operating for 24 hours a day while
others only 16 hours. All scanners are capable of performing all the required procedures, including
Brain, Extremities, Spine, Abdomen, Pelvis, Breast, Head & Neck, Cardiac, Thorax, and Peripheral
Vascular. Each type of scan has a scheduled duration, which is the predetermined constant service
time that the hospitals use to determine the number of patients they can schedule daily.
Table 2 shows that the actual scan durations deviate significantly from the scheduled durations.
For example, the brain scan’s scheduled duration is 100 minutes, while the average actual duration
is 41.5 minutes. The majority of actual durations are much shorter than the scheduled ones.
The scheduled duration not only includes the actual scan time but also incorporates a safety
buffer to ensure smooth transitions between scans. However, hospitals often overestimate this safety
time, causing idle time between consecutive scans. Scheduling decisions based on the scheduled
duration therefore often result in inefficient scheduling and wasted capacity. It is thus beneficial
to model the actual scan duration, including the real transition time, as a stochastic process and
use the resulting estimate to determine the actual number of scans performed each day.
4.3. Patient’s Home Address
Patient home addresses are not disclosed in our dataset for privacy reasons. After consulting
many doctors, primary physicians, and CCO employees, we found no official documents
on exactly how patients are assigned to their current hospital. We received an official quote
from CCO stating that doctors "usually send the patient to the hospital within the same
city." To address the missing home addresses, we therefore used several
different methods to estimate each patient's home address. Using these different estimates of home
addresses, we conducted a robustness check and established that our model is reasonably robust
to variations in patient home addresses.¹ The results reported in this article estimate each patient's
home address by drawing a random point (i.e., a latitude and longitude) uniformly distributed over
the city in which the patient's original hospital is located.
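A minimal sketch of this sampling step, assuming each city is approximated by a rectangular latitude/longitude bounding box (the Toronto coordinates below are rough, illustrative values, not the boundaries used in our analysis):

```python
import random

def sample_home_address(city_bounds, seed=None):
    """Draw a (lat, lon) point uniformly over a city's bounding box.

    `city_bounds` = (min_lat, max_lat, min_lon, max_lon) is a rectangular
    approximation of the city of the patient's original hospital."""
    rng = random.Random(seed)
    min_lat, max_lat, min_lon, max_lon = city_bounds
    return rng.uniform(min_lat, max_lat), rng.uniform(min_lon, max_lon)

# Rough, illustrative bounding box around Toronto
lat, lon = sample_home_address((43.58, 43.86, -79.64, -79.12), seed=7)
```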
5. Sequencing Patients within a Cluster: Augmented-Priority Rule
In this section, we design a heuristic sequencing model for the patients in a given cluster of hospitals
(we will determine and optimize clusters in Section 8) that assigns them to specific scanners at
specific locations within the cluster. We propose to sequence patients within a cluster following an
augmented-priority rule that balances the patient's initial priority class and the number of days
until her wait time target. Our sequencing rule is similar to the accumulating/time-dependent
priority rules of Min and Yih [2014] and Stanford et al. [2014], where patients are promoted to
higher priority classes based on their waiting time. Our augmented-priority sequencing rule assigns
an augmented-priority level, or "score," N_i(t) to patient i at time t and serves the patient with the
smallest score first. We set
N_i(t) = P_i + [T_i − (t − A_i)]^+,    (6)
where P_i denotes patient i's priority class, and T_i and A_i are patient i's wait time target and
arrival time, respectively. Therefore, (t − A_i) denotes the time elapsed since the request for a scan
was received, and [T_i − (t − A_i)]^+ represents the number of days until patient i reaches her wait
time target. If a patient has exceeded her wait time target, the second term of the equation becomes
zero, and her augmented priority becomes equivalent to her priority class.
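The score in Equation (6) can be computed directly; a minimal sketch, with time measured in days:

```python
def augmented_priority(priority_class, target_days, arrival_day, current_day):
    """Augmented-priority score N_i(t) = P_i + [T_i - (t - A_i)]^+ (Eq. 6).

    Lower scores are served first; once a patient exceeds her wait time
    target, the second term is zero and the score equals her class."""
    days_until_target = max(target_days - (current_day - arrival_day), 0)
    return priority_class + days_until_target

# A Priority 3 patient (10-day target) who arrived 8 days ago scores 3 + 2 = 5,
# so she is served before a newly arrived Priority 4 patient scoring 4 + 28 = 32.
```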
This sequencing rule elevates the priority of patients closer to their wait time target while
patients of high priority classes still receive timely treatment. The ranking mechanism may strategically
delay some patients to prioritize the treatment of patients closer to their wait time target
(Afèche 2013). More specifically, our ranking mechanism treats a lower priority-class patient
close to her wait time target before any present higher priority-class patients with higher augmented
scores (He et al. 2012).
Note that our augmented prioritization also ensures that patients who have already exceeded
their wait time target are not postponed indefinitely, as they are placed at the beginning of
their priority-class queue. Also note that this sequencing rule is time-dependent. While it is simple
to determine patient i's score at arrival time, it is much more challenging to ascertain her
score at treatment time because that score depends on the arrivals of higher priority-class
patients between patient i's arrival and treatment times. Since the MRI hospitals are required to
¹ The robustness results will be provided upon request.
implement advance scheduling, we must predict, at patient i's arrival time, what her score
will be at her time of treatment. More specifically, we need to determine which treatment time
ensures that the patient receiving the MRI scan is indeed the patient with the smallest augmented-
priority score. Patient i's expected augmented-priority score at treatment time is determined by
ranking patient i against all existing patients in the cluster and the future arrivals the
hospitals in the cluster anticipate between patient i's arrival and treatment times.
6. Predictive Analytics: Predicting Arrivals and Treatment Times
To implement advance scheduling, we must tell each patient her treatment day and location when
she requests an MRI scan. This requires an accurate prediction of the arrival and service times
of the higher priority-class patients who must be served before the index patient. In this section, we
predict patient arrival and service times to determine each patient's treatment time. Our data
shows that arrival and service times exhibit substantial variations inconsistent with a stationary
distribution but well approximated by a time series without clear seasonality or periodic trend.
Jiang et al. [2020] have shown that, when the underlying data is non-stationary (as it is here),
predicting arrival and service rates using one stationary distribution yields insufficient accuracy.
Therefore, we propose several time-series prediction models.
6.1. Predicting Non-Stationary Arrivals Using Time-Series
6.1.1. Classical Time Series Model We first predict the arrivals using a classical time
series model: the Autoregressive Integrated Moving Average (ARIMA). Since this model is well
established, we only discuss its advantages and limitations here. ARIMA models, a combination
of the standard autoregressive and moving-average models, forecast the future values of a time
series. Using the equation

y_t = μ + α_1 y_{t−1} + ... + α_p y_{t−p} − θ_1 ε_{t−1} − ... − θ_q ε_{t−q},

we express the current term of the time series linearly in terms of its previous values and the
current and previous residuals. ARIMA models can capture some of the underlying relationships
observed in the data. However, it is hard to model nonlinear relationships with an ARIMA model.
Furthermore, the ARIMA model assumes a constant error variance, which in practice may not
hold. Leveraging our large dataset, we can use machine learning techniques to overcome the
limitations of the classical time series model.
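The forecast equation above can be sketched as a one-step-ahead ARMA prediction; the coefficients below are illustrative placeholders, not values fitted to our data:

```python
def arma_one_step(y, resid, mu, alphas, thetas):
    """One-step-ahead ARMA(p, q) forecast:
    y_hat = mu + sum_k alpha_k * y[t-k] - sum_k theta_k * eps[t-k],
    where `y` and `resid` are histories ordered oldest to newest."""
    ar = sum(a * y[-k] for k, a in enumerate(alphas, start=1))
    ma = sum(th * resid[-k] for k, th in enumerate(thetas, start=1))
    return mu + ar - ma

# Illustrative ARMA(1,1) step with made-up coefficients:
y_hat = arma_one_step([100.0, 104.0, 102.0], [0.5, -1.0, 2.0],
                      mu=1.0, alphas=[0.8], thetas=[0.3])
# y_hat = 1.0 + 0.8*102.0 - 0.3*2.0 = 82.0
```

In practice the coefficients would be estimated from the training data (e.g., by maximum likelihood) after differencing the series once, as in the ARIMA(0,1,1) specification used below.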
6.1.2. Recurrent Neural Network: Traditional vs. Long Short-Term Memory A
Recurrent Neural Network (RNN) is a neural network that forms a directed graph over the data
points, capturing the temporal dynamics of a time sequence. Unlike other neural networks, such
as a feed-forward network that assumes all inputs are independent of each other, an RNN can use
internal state memory to process data in sequence, thus capturing the dependencies between inputs.
The network is a chain of repeating modules with relatively simple structures that perform the
same actions repeatedly. The internal memory not only stores information about the current input
but also keeps track of the information received at previous states, leading to the retention of
historical information.
In theory, an RNN can utilize arbitrarily long sequences of information captured in previous
states. In practice, RNNs are designed to trace back only a few steps. This inability to learn
long-term dependencies biases traditional RNNs towards more recent information: limited by the
design of the learning and updating model, recent data has a more significant influence on the
prediction results than earlier data, causing inaccurate predictions (Bengio et al. 1994).
The Long Short-Term Memory network (LSTM) is a special kind of recurrent neural network designed
to capture and learn long-term dependencies (Hochreiter and Schmidhuber 1997). Like all
recurrent neural networks, an LSTM has a chain of repeating modules. However, instead of a
single simple memory layer, each LSTM module contains four interacting neural network layers that
store and process long-term information. The cell that stores the historical information acts like a
black box: it takes the input of the previous state, combines it with the current state's input, and
generates an output that feeds into the next stage. The ability to filter what information passes to
the next stage ensures that important information from early stages is kept intact throughout the
information chain.
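A minimal numpy sketch of a single LSTM cell step, showing the four interacting layers (forget, input, candidate, and output gates); the weights are random placeholders rather than trained parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step. W, U, b hold the stacked parameters of the four
    gates (forget f, input i, candidate g, output o), each of size n."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b           # (4n,) pre-activations
    f = sigmoid(z[0:n])                  # forget gate: what to drop from c
    i = sigmoid(z[n:2 * n])              # input gate: what to write
    g = np.tanh(z[2 * n:3 * n])          # candidate values
    o = sigmoid(z[3 * n:4 * n])          # output gate: what to expose
    c = f * c_prev + i * g               # new cell state (long-term memory)
    h = o * np.tanh(c)                   # new hidden state (short-term output)
    return h, c

rng = np.random.default_rng(0)
n, m = 4, 3                              # hidden size, input size
W = rng.normal(size=(4 * n, m))
U = rng.normal(size=(4 * n, n))
b = np.zeros(4 * n)
h, c = lstm_step(rng.normal(size=m), np.zeros(n), np.zeros(n), W, U, b)
```

The additive cell-state update `c = f * c_prev + i * g` is what lets gradients flow over long horizons, in contrast to the purely multiplicative hidden-state recurrence of a traditional RNN.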
6.2. Arrival Prediction Models and Results
To reduce training time and avoid convergence to a local optimum, we train
the RNNs with a Mini-Batch Gradient Descent (MBGD) algorithm and the RMSprop optimizer. We
set the batch size to 30 and the learning rate to 0.001, as recommended by Williams and Zipser
[1995]. To avoid overfitting, we employ the dropout technique proposed by Hinton et al. [2012] with
a dropout rate of 0.2. Furthermore, we use L2 regularization with the recurrent weights initialized
at 0.001 (Pascanu et al. 2013) to mitigate exploding gradients.
To avoid sampling error and possible overfitting, we divide the data into training
and testing sets. We use the arrival data from 2013 to 2016 as the training and validation
set and the data from 2017 as the testing set. We use time-series cross-validation to
ensure that no future observations are used to construct the forecast. We evaluate the forecasting
accuracy of our models using four out-of-sample testing error metrics: Root Mean Square Error
(RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and R².
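These four metrics can be computed as follows (a small self-contained sketch; the sample numbers are illustrative, not our data):

```python
import numpy as np

def forecast_errors(actual, predicted):
    """Compute the four out-of-sample metrics used in Tables 3-5."""
    a = np.asarray(actual, dtype=float)
    p = np.asarray(predicted, dtype=float)
    err = a - p
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mae = float(np.mean(np.abs(err)))
    mape = float(np.mean(np.abs(err) / np.abs(a)))  # assumes no zero actuals
    r2 = float(1.0 - np.sum(err ** 2) / np.sum((a - a.mean()) ** 2))
    return {"RMSE": rmse, "MAE": mae, "MAPE": mape, "R2": r2}

# Illustrative daily arrival counts (actual vs. forecast)
m = forecast_errors([100, 120, 90, 110], [102, 118, 95, 105])
```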
Table 3 Short-term Arrival Pattern Prediction Accuracy of the Four Prediction Models

                  RMSE    MAE   MAPE    R²
ARIMA (0,1,1)    10.14   8.71   0.33   0.90
Traditional RNN  11.17   9.52   0.46   0.87
One-Layer LSTM   10.83   9.83   0.46   0.86
Two-Layer LSTM   10.24   8.72   0.34   0.89
Table 4 Long-term Arrival Pattern Prediction Accuracy of the Four Prediction Models

                  RMSE    MAE   MAPE    R²
ARIMA (0,1,1)    15.69  12.32   0.56   0.82
Traditional RNN  14.80  12.00   0.58   0.84
One-Layer LSTM   13.27  10.25   0.46   0.82
Two-Layer LSTM   12.10   9.83   0.37   0.87
We implemented the ARIMA, traditional RNN, and one- and two-layer LSTM models. We also tried
three- and four-layer LSTMs, which yielded minimal improvement in accuracy at substantially
higher computational cost. We compare the classical time series model with the neural
networks and demonstrate the advantages of using LSTM to learn long-term dependencies and
make long-term predictions.
We first compared the four models' short-term prediction power. We took 20 random samples of
10 days from the 2017 arrival data; the out-of-sample errors are summarized in Table 3. ARIMA
and the two-layer LSTM both demonstrated good predictive power, and both handle outliers well.
ARIMA also has the shortest computational time and requires fewer historical data points than
the two-layer LSTM to achieve the same level of prediction accuracy.
However, to schedule all patients upon arrival, we need to make long-term predictions
up to three months in advance. Thus, we also test the models' long-term predictive power.
When we took 20 random samples of 100 consecutive days from the 2017 arrival data, the out-of-
sample error increased significantly for the ARIMA model. The results for the four performance
metrics are summarized in Table 4. These 20 random samples of 100 consecutive days are used in
the rest of the paper to evaluate the effectiveness of our clustering and ranking model. The table
shows that the two-layer LSTM outperforms the other three models under all metrics and remains
fairly stable for both short-term and long-term predictions. The traditional RNN model's strong
dependence on recent data results in its inability to capture long-term trends and thus yields a
higher prediction error.
6.3. Predicting Service Times
We use similar neural networks to predict the number of patients the hospitals serve each day. As
mentioned in the previous section, there are significant variations in scan durations. Therefore,
we treat the number of patients served each day as a stochastic process. We use the traditional
Table 5 Prediction Accuracy for the Number of Scans Performed, for Three Types of Neural Network

                  RMSE    MAE   MAPE    R²
Traditional RNN  12.35  11.02   0.47   0.86
One-Layer LSTM   11.66  10.33   0.39   0.86
Two-Layer LSTM   10.35   9.11   0.28   0.89
RNN, one-layer LSTM, and two-layer LSTM to predict the number of patients served in the test
set. As shown in Table 5, the two-layer LSTM outperforms the other two and predicts the service
pattern with a similar degree of accuracy as the arrivals.
6.4. Predicting Treatment Times
Using the predicted arrival and service times, we apply the augmented sequencing rule (Section 5)
to predict the treatment time for each patient at her arrival time (i.e., when she requests an MRI).
Patient i's expected treatment time equals her arrival time plus her expected waiting
time, which is determined by her expected sequencing score. The sequencing score
is calculated by ranking patient i against all existing patients in the patient pool and the higher
priority-class arrivals within the cluster between patient i's arrival and treatment times. The
patients are then ordered by their scores to determine patient i's expected order of service. We use
patient i's score at arrival time to obtain the initial order of service. Using this ranking, we obtain
patient i's initial expected waiting time by checking how many days it would take to treat all
patients ahead of her using the forecasted service capacity. However, this initial order of service is
not accurate, as higher-priority arrivals during patient i's waiting period may impact her ranking
in the queue. We thus add the forecasted arrivals during patient i's initial expected waiting time
to the patient pool and rank patient i again. With the new expected order of service, patient i's
expected waiting time may increase from the initial expected waiting time as more urgent patients
are placed ahead of her in the queue. Therefore, we obtain a new expected waiting time using the
new expected order of service. Once again, we include the anticipated arrivals during the increased
waiting period (the new expected waiting time minus the initial expected waiting time) and rank
patient i again. We repeat this process until convergence, that is, until patient i's expected
treatment time stays constant. The Appendix provides the exact algorithm.
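The fixed-point iteration above can be sketched as follows; `rank_fn` and `forecast_higher_arrivals` are hypothetical stand-ins for the augmented-priority ranking and the LSTM arrival forecasts:

```python
def predict_treatment_day(arrival_day, rank_fn, forecast_higher_arrivals,
                          daily_capacity, max_iter=50):
    """Iterate patient i's expected waiting time to a fixed point.

    rank_fn(extra): number of patients ranked ahead of patient i when
        `extra` forecasted higher-priority arrivals join the pool.
    forecast_higher_arrivals(days): forecasted higher-priority arrivals
        over a waiting window of `days` days."""
    wait = rank_fn(0) / daily_capacity            # initial expected wait
    for _ in range(max_iter):
        extra = forecast_higher_arrivals(wait)    # arrivals during the wait
        new_wait = rank_fn(extra) / daily_capacity
        if new_wait == wait:                      # converged: wait is stable
            break
        wait = new_wait
    return arrival_day + wait

# Toy cluster: 20 patients ahead, 2 higher-priority arrivals/day, 10 scans/day
day = predict_treatment_day(arrival_day=0,
                            rank_fn=lambda extra: 20 + extra,
                            forecast_higher_arrivals=lambda d: int(2 * d),
                            daily_capacity=10)
# day == 2.4
```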
7. Clustering and Geographic Pooling Model
The sequencing model in Section 5 applies to patients within each cluster. In this section, we
investigate how to best partition the 72 hospitals into p clusters and then optimize over p.
7.1. Clustering Mechanisms
To complete the geographic pooling model, we need to determine the partition of the 72
hospitals that minimizes Equation (5). The following subsections discuss three clustering techniques:
exhaustive search, K-means clustering, and genetic algorithms.
7.1.1. Exhaustive Search Set covering is a common combinatorial problem that finds a
partition of a set subject to certain constraints. An exhaustive search through all possible set covers
guarantees that the partition obtained is the global optimum. We first eliminate all partitions that
do not satisfy the distance constraint. However, the set-covering problem is NP-hard, and
exhaustive search suffers from the curse of dimensionality.
The number of ways to partition a set with cardinality n equals the nth moment of a
Poisson distribution with mean 1, which is the Bell number B_n. Given that B_n grows
exponentially with n, optimal set covering quickly becomes infeasible. Indeed, given that B_72 ≈
9 × 10^75, it is computationally infeasible to exhaustively search through all possible partitions of 72
hospitals. The set-covering mechanism is also not robust to changes in the objective function
or in the number of hospitals: for any change in Equation (5) or in the number of hospitals, we must
repeat the exhaustive search, making it highly inefficient. We can instead use polynomial-
time algorithms such as greedy algorithms (Wolsey 1982) or LP rounding to approximate the
optimal partition obtained from the exhaustive search.
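A sketch of the greedy approximation under two stated assumptions: the candidate clusters are pre-enumerated groups that already satisfy the driving-time constraint, and `cost` is a hypothetical stand-in for the Equation (5) objective of a cluster:

```python
def greedy_partition(hospitals, candidate_clusters, cost):
    """Greedy set-covering heuristic: repeatedly pick the feasible cluster
    with the lowest cost per newly covered hospital (cf. Wolsey 1982)."""
    uncovered = set(hospitals)
    chosen = []
    while uncovered:
        best = min((c for c in candidate_clusters if c & uncovered),
                   key=lambda c: cost(c) / len(c & uncovered))
        chosen.append(best & uncovered)           # keep a proper partition
        uncovered -= best
    return chosen

# Toy instance with a mildly superadditive cost, so two small clusters
# beat one large one
clusters = greedy_partition(
    hospitals={"A", "B", "C", "D"},
    candidate_clusters=[frozenset("AB"), frozenset("CD"), frozenset("ABCD")],
    cost=lambda c: len(c) ** 1.5)
```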
7.1.2. K-means Clustering K-means clustering is another technique commonly used
to partition low-dimensional sets (Likas et al. 2003). Our K-means procedure loops through all
possible numbers of clusters p ∈ {1, ..., n}. For a given number of clusters p, we first pick p hospitals
at random as the initial center of each cluster. Then we iterate through all possible assignments
of the remaining n − p hospitals to one of the p clusters. Note that certain assignments are
excluded if they violate the driving constraint; this modified K-means therefore converges even
faster. The maximum number of iterations (without imposing the driving constraint) to assign all
of the remaining n − p hospitals to one of the p clusters is p × (n − p). Therefore, the K-means
clustering algorithm is guaranteed to converge to a feasible solution.
7.1.3. Genetic Algorithms A genetic algorithm (GA), more specifically the chromosome
representation of a genetic algorithm, is a clustering technique (Maulik and Bandyopadhyay 2000)
used in machine learning to find the optimal grouping among the elements of a set (Kumar et al. 2010).
In general, a chromosome representation is designed to enhance the performance of the algorithm.
A chromosome is a set of parameters defining a possible solution to the problem that the algorithm
is trying to solve. Through a process of selection, crossover, and mutation, the algorithm finds an
optimal solution among the population of possible chromosomes.
To calibrate the chromosome representation to our data, we represent a partition of the hospitals
as a chromosome. Each hospital is encoded with a number, known as a gene in the chromosome,
indicating which cluster it will be assigned to. Different clusterings of hospitals result in different
numberings of the genes, which leads to the formation of different chromosomes. The genetic
algorithm then iterates over the different chromosomes to find the one that optimizes the objective.
The process begins by initializing the population with no clustering; more specifically,
we assign a different number to each of the hospitals. To test these chromosomes and find the
best partition of the 72 hospitals, we define a fitness function. In our model, a chromosome's fitness
value is the value of Equation (5) for the partition it encodes. At each iteration, we apply the
fitness function to each chromosome and check whether the optimal fitness value improves. If the
optimal value stops improving for several iterations, we terminate the process and use the last
updated chromosome as the optimal partition. If the chromosome shows improvement in the
iteration, it then goes through the process of selection, crossover, and mutation.
Selection chooses an individual chromosome from the population to go through the
crossover process. We applied a combination of fitness-proportionate selection and elitism selection
to ensure a good rate of convergence and avoid degradation of the chromosomes. In the crossover stage,
we combine the genetic information of two parents to generate new offspring: a new partition, or
chromosome, is formed either by stochastically combining solutions from the existing population
or by cloning an existing solution. We used single-point crossover with a crossover rate of 0.6.
Lastly, in the mutation stage, we alter one or more gene values in a chromosome from its initial
state with a mutation rate of 0.1. This way, the solution may change substantially from the previous
solution, helping the GA reach a better solution. The mutation stage maintains diversity
and prevents premature convergence.
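The selection, crossover, and mutation stages can be sketched as one GA generation; the toy fitness function in the driver below (the number of distinct clusters) is purely illustrative and not the Equation (5) objective:

```python
import random

def evolve(population, fitness, pc=0.6, pm=0.1, elite=2, rng=None):
    """One GA generation: elitism plus fitness-proportionate selection,
    single-point crossover (rate pc), and per-gene mutation (rate pm).
    A chromosome is a list of cluster labels, one gene per hospital;
    lower fitness is better, so selection weights use its reciprocal."""
    rng = rng or random.Random()
    nxt = sorted(population, key=fitness)[:elite]      # elitism: keep the best
    weights = [1.0 / (1.0 + fitness(c)) for c in population]
    while len(nxt) < len(population):
        p1, p2 = rng.choices(population, weights=weights, k=2)
        child = list(p1)
        if rng.random() < pc:                          # single-point crossover
            cut = rng.randrange(1, len(child))
            child = p1[:cut] + p2[cut:]
        for g in range(len(child)):                    # mutation: diversity
            if rng.random() < pm:
                child[g] = rng.randrange(len(child))
        nxt.append(child)
    return nxt

# Toy run: minimize the number of distinct clusters among 6 hospitals
rng = random.Random(0)
pop = [[rng.randrange(6) for _ in range(6)] for _ in range(8)]
init_best = min(len(set(c)) for c in pop)
for _ in range(30):
    pop = evolve(pop, fitness=lambda c: len(set(c)), rng=rng)
best = min(pop, key=lambda c: len(set(c)))
```

Because elitism carries the best chromosomes into the next generation unchanged, the best fitness found never worsens across generations.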
Lastly, we enhanced the standard GA by including a dimensionality reduction at the end
of each iteration. Each hospital has a different initial value, for a total of 72^72 ≈
5 × 10^133 possible encodings of the hospitals. Some chromosomes, even though they
have different encodings, represent the same partition. For example, suppose four hospitals are
clustered into two groups: although {1,1,2,2} and {2,2,1,1} are two different
chromosomes in the population set, they represent the same partition. Therefore, at each iteration
we check all the chromosomes in the population and remove those representing the same
partition. We also remove all chromosomes that do not satisfy the distance constraint to further
reduce the population set.
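The degeneracy check can be implemented by relabeling each chromosome's cluster ids into a canonical form before comparing; a minimal sketch:

```python
def canonical(chromosome):
    """Relabel cluster ids by order of first appearance, so chromosomes
    encoding the same partition compare equal; e.g., [1,1,2,2] and
    [2,2,1,1] both map to (0, 0, 1, 1)."""
    relabel = {}
    for gene in chromosome:
        if gene not in relabel:
            relabel[gene] = len(relabel)
    return tuple(relabel[g] for g in chromosome)

def deduplicate(population):
    """Drop chromosomes that represent an already-seen partition."""
    seen, unique = set(), []
    for chrom in population:
        key = canonical(chrom)
        if key not in seen:
            seen.add(key)
            unique.append(chrom)
    return unique

pop = deduplicate([[1, 1, 2, 2], [2, 2, 1, 1], [1, 2, 1, 2]])
# → [[1, 1, 2, 2], [1, 2, 1, 2]]
```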
Genetic algorithms have many advantages that make them capable of solving high-dimensional
problems and providing a solution that is at least near-optimal. The selection and crossover stages ensure
the algorithm converges quickly to a fit solution (a suitable solution based on the objective
Figure 3 Geographic Clustering via Simulation
Figure 4 Zoomed-in View of the Geographic Clusters
function). The random mutation guarantees that the algorithm is exposed to a wide range of solu-
tions. There are some limitations to GA-based clustering techniques. Firstly, GA-based clustering
suffers from degeneracy, where multiple chromosomes represent the same solution. Secondly, the
initial population selection and chromosome definition are often complicated and can influence the
algorithm’s outcome. Lastly, the algorithm can converge prematurely and trap in a local optimal
without a clear indication of how close the solution is to the true optimal. We can avoid the first
limitation by implementing dimensionality reduction. There are no known methods to avoid other
limitations but can be improved by fine-tuning the parameters.
8. Results of Geographic Resource Pooling of 72 Hospitals
8.1. Clustering and Geographic Resource Pools
Using our data, we constructed a simulation to obtain near-optimal clusters. We generated 20
sets of arrivals, service rates, and home addresses of each patient over 100 days. We use the trained
genetic algorithm to find a near-optimal clustering of the 72 hospitals using the generated data.
We found that the clustering produced by the GA is fairly robust to variations in arrivals, service
rates, and home addresses. Throughout this section, we assume the incremental traveling time is
limited to 3 hours (based on the traveling constraints used by the Central Intake Program). Using
our GA method, we divide the 72 hospitals into ten clusters. Figures 3 and 4 show where the
72 hospitals are located and how they are partitioned into different clusters. Each pin represents
one of the 72 hospitals; pins of the same color and shape are in the same geographic cluster,
while pins of different colors and shapes are in different clusters.
From Figure 3 we see that hospitals far from the geographic median are clustered
together since traveling time from those hospitals is very high. Hospitals in areas with
intensive traffic, though geographically close, are not always placed in the same cluster. We observe
that hospitals in the more populated area around Toronto tend to have higher server utilization.
Notice how hospitals located in Toronto (Figure 4) are separated into non-contiguous clusters to
maximize pooling benefits. For example, the yellow cluster contains three of the busiest hospitals
in Toronto, yet their workloads are shared by two less busy hospitals in Hamilton and
St. Catharines. Though located 1.5 hours from downtown Toronto, these two hospitals can
significantly reduce waiting times for patients initially assigned to the busy hospitals.
8.2. Effectiveness of the Geographic Pooling Model
After dividing the hospitals into desirable clusters using simulation based on the empirical data,
we evaluate our resource-sharing model's effectiveness. As mentioned in the previous section,
hospitals are evaluated based on their ability to serve patients within their wait time targets,
with an eventual goal of an FET below 10%. Clustering was done using only data from 2013
to 2016. To evaluate our strategy out-of-sample, we conduct a counterfactual analysis on the 2017
dataset to see how the FET would change had the hospitals employed a geographic resource pooling
model. Using the actual arrival and service data from 2017 and the geographic clusters generated
in Section 8.1, we computed the FET accordingly.
Figure 1 summarizes the results of our geographic pooling model, averaged over twenty
simulations. The blue bar uses the empirical data to show the FET under current practice. The
orange bar is the FET generated by applying a FIFO queuing discipline to the geographic clusters:
the FET of high priority patients increased while the FET of lower priority patients
decreased drastically. The grey bar represents the result of applying a non-preemptive priority
queuing discipline to the geographic clusters: no Priority 1 or Priority 2 patients exceeded their wait
time targets. Using geographic pooling, we have enough capacity to ensure that all high priority
patients are treated on the day of their arrival. A significant portion of Priority 3 and 4 patients still
exceed their wait time targets. Inefficient capacity utilization is prevalent under non-preemptive
priority, as many high priority patients are treated well before their wait time targets elapse. The "as-is"
FET of 66% decreases to 54% under non-preemptive priority and 49% under FIFO. These changes
in FET demonstrate the benefit of geographic resource pooling, which reduces the FET to 36%
without any capacity expansion.
In addition to evaluating the effectiveness of geographic partial pooling, we also analyze the
effectiveness of our proposed sequencing rule. As with non-preemptive priority scheduling, no
Priority 1 or 2 patients exceeded their wait time targets, and the FET for Priority 4 patients is
also reduced. The 13% decrease in FET relative to the FIFO model demonstrates the effect of our
scheduling mechanism, and the 30% decrease relative to current practice demonstrates the
overall effectiveness of our scheduling and geographic partial pooling method.
Table 6 Number of Waiting Days vs. Number of Overflowing Patients for One Cluster

                              # Exceeded Target                      Avg Waiting Time (Days)
                  P1      P2       P3            P4                P1        P2        P3         P4
Historical       4,624   40,052   172,652       694,280           1.3       3.1       13.5       42.3
FIFO             8,132   45,982   157,023       461,283           34.9      36.0      33.2       34.8
Non-Preemptive   0       0        94,672        651,113           0         0         11.4       49.1
Our model        0       0        45,163±871    455,213±8,789     0.7±0.1   1.8±0.1   12.4±1.3   38.2±4.7

* Note: The total # of MRI patients in 2017 is 1,376,991.
The # of patients in each Priority class: P1: 23,583; P2: 85,586; P3: 367,823; P4: 899,999.
Table 6 displays a detailed breakdown of the number of patients exceeding their wait time targets
based on the 2017 data. During 2017, a total of 1,376,991 patients requested an MRI scan. Among
those patients, 23,583 were Priority 1, 85,586 Priority 2, 367,823 Priority 3, and 899,999
Priority 4. We ran twenty simulations to compute the FET and average waiting times for our
proposed model. The error terms reflect variations in the forecasted arrival rates, service times,
and home addresses. Our proposed model treats 411,000 (30%) more patients within their wait
time targets. We also calculated each patient's waiting time. As expected, there is no significant
reduction in average waiting time because the system is already heavily utilized. However, we
ensure that no patient waits more than 48 days for an MRI scan, compared to some patients waiting
over 100 days under the current mechanism. As shown in the Appendix, our method thus significantly
reduces the tail of the waiting time distribution.
Overall, based on this dataset and these parameters, our geographic resource pooling model decreases
the FET in all hospitals by at least 10%, with some hospitals achieving the desirable 10% FET.
On average, the hospitals can decrease their FET to 36% without additional capacity. More
importantly, our model ensures that high priority patients never exceed their wait time targets.
8.3. Results for Different Traveling Time Constraints
In this section, we investigate the impact of changing the maximal allowed incremental travel time D.
As in the linear model, there are two "corner" solutions: complete pooling and no pooling.
For D = 0, patients are unwilling to travel further to receive speedier service, and all
hospitals operate independently. For D = ∞, patients are willing to travel to any hospital
within Ontario to receive service; in this case, it is optimal to pool all 72 hospitals together, as each
additional hospital contributes non-negatively to the resource pool.
Figure 5 shows the near-optimal clustering of the 72 hospitals produced by our genetic algorithm
for different maximal allowed incremental travel times D. For example, when D = 3, dividing
the 72 hospitals into 10 clusters achieves the maximum FET reduction. As expected, the number of
clusters needed to achieve a near-optimal FET reduction decreases as D increases.
Figure 5 Number of Clusters for Different Maximal Allowed Incremental Travel Time D
Figure 6 Reduction in Overtime vs. Incremental Travel Time for Different Clusterings
Figure 6 compares the reduction in overtime to the incremental travel time, similar to the
linear city model. To construct the figure, we solved our geographic partial pooling model for
12 values of D ∈ {0.1, 0.5, 0.7, 1.2, 1.5, 1.9, 2.3, 2.7, 3.3, 4, 5, 6} hours. For each value of D, we drew
20 random samples of arrival and service times and solved the model for each sample using the GA.
We show two curves, parameterized by the 12 values of D, each representing the average, together
with error bars that show the range of values obtained from the 20 sets of generated data. (The
narrow error bars show that waiting and traveling times are robust to variations in arrivals
and service durations.) The red curve shows the overtime Σ_{i=1}^{4} Σ_{j=1}^{J_i} w_i E[O_{ij}(p)], representing the
aggregate reduction in the number of days patients exceed their wait time targets. The blue
curve represents the average incremental traveling time experienced by patients when they transfer
to another hospital under geographic pooling. (Clearly, this average incremental driving time is
always below the maximal allowable incremental traveling time D.)
Though the two-dimensional geographic pooling model is much more complex than the simple
linear model with homogeneous patients, it produces very similar results. We proved analytically for
the simple linear model that geographic partial pooling dominates complete pooling and no pooling;
we obtain the same result for the two-dimensional geographic pooling model of 72 hospitals through
simulation. Figures 2 and 6 look remarkably similar: for each maximal allowed incremental travel
time, we can find a respective near-optimal clustering that minimizes the FET.
8.4. Performance at Each Individual Hospital
We have shown that it is advantageous for policymakers to implement resource pooling among all
72 hospitals. The aggregate number of patients exceeding their wait time targets decreases under
geographic pooling without significant additional government costs. In this section, we investigate
to what extent our centralized, aggregate approach is also "incentive-compatible" by analyzing the
impact of geographic pooling on each individual hospital.
Hospitals are generally compensated in one of two ways: 1) based on machine hours (e.g., Grand
River Hospital) or 2) based on the number and type of scans performed (e.g., Sick Children's Hospital).
It turns out that these two hospitals did not realize they fall under different compensation schemes.
Nor does our data show the actual compensation scheme or the amount received at each hospital.
When we asked our government collaborators for that information, they responded that
they themselves do not have access to it, as payments and compensation are the purview of a
different department and are intentionally kept separate. Therefore, we perform our analysis using
multiple possible compensation schemes:
Performance Measure 1: Fee per scan. Some hospitals are paid by the number and type
of scans performed. Since we have access neither to the specific compensation amounts nor to
which hospitals are compensated per scan, we proxy the actual compensation by the price quoted
to patients for different types of scans, as shown on the website
https://www.canadadiagnostic.com/info/fees/: $900 for Brain, $900 for Spine, $900 for Joints,
$1400 for Arthrograms, $1600 for Abdomen, $1100 for Breast, and $2500 for Prostate.
(Of course, our evaluation would change if these fees change, so we performed a robustness
analysis on the fees. We also acknowledge that the actual revenue collected by a hospital may be
below the charged fee. As long as that "shortfall" is proportional to the fee charged, our results
would not change.)
Performance Measure 2: Fee per machine hour. We are informed that, due to the universal
healthcare system in Canada, some hospitals (such as Grand River Hospital) receive compensation
from the government for their machine hours rather than receiving payments from patients for their
MRI examinations. Therefore, we use machine hours as a proxy for the financial performance of
each individual hospital.
Performance Measure 3: Fee per machine hour penalized for FET. As mentioned earlier,
FET is a key performance measure, and a hospital can be penalized financially and/or through
non-monetary demerits when it misses the accepted standard of serving 90% of patients within
their targets (i.e., when its FET exceeds 10%).
Performance Measure 4: Fee per machine hour penalized by priority-specific FET; i.e., Total
machine hours + Type 4 Patient FET + 2 × Type 3 Patient FET + 10 × Type 2 Patient FET + 28
× Type 1 Patient FET. This performance measure adds a weighted sum of the FET of the different
priority classes. Even though no policy indicates that different priority classes' FET should
be treated differently, the consequence of a Priority 1 patient overstaying their target wait time can
be much more dire than for a Priority 4 patient. Therefore, we use the relative waiting time targets
as the coefficients to give higher weights to higher priority classes.

Table 7 Fraction of individual hospitals (out of 72) for which Performance Measure X, or PM X, evaluated
over different rolling-horizon timespans, (weakly) increases under Geographic Pooling.

Time Span   PM 1   PM 2   PM 3   PM 4
1 Day        52%    53%    61%    73%
2 Days       57%    62%    69%    80%
3 Days       62%    69%    76%    83%
5 Days       64%    75%    84%    89%
1 Week       67%    80%    90%    94%
10 Days      71%    87%    92%    97%
2 Weeks      72%    96%    99%   100%
1 Month      71%   100%   100%   100%
1 Year       73%   100%   100%   100%

Table 8 Robustness of Performance Measure 1 for Different Scan Prices

Time Span   At minimum prices   At average prices   At maximum prices
1 Day              64%                 52%                 51%
2 Days             66%                 58%                 54%
3 Days             67%                 60%                 58%
5 Days             72%                 61%                 59%
1 Week             83%                 66%                 61%
10 Days            87%                 71%                 64%
2 Weeks            91%                 73%                 66%
1 Month           100%                 74%                 68%
1 Year            100%                 76%                 70%
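As an illustrative sketch, Performance Measure 4 can be computed as follows. The function and field names are ours rather than the paper's, and the weighted FET terms enter with the signs given in the formula above:

```python
# Illustrative sketch of Performance Measure 4 (names are ours, not the
# paper's). The weights are the relative wait time targets per priority
# class, and the weighted FET terms enter exactly as in the stated formula:
# PM4 = machine hours + FET_4 + 2*FET_3 + 10*FET_2 + 28*FET_1.

FET_WEIGHTS = {1: 28, 2: 10, 3: 2, 4: 1}  # higher priority -> higher weight

def fet(waits, targets):
    """Fraction of patients whose wait exceeded their target."""
    return sum(w > t for w, t in zip(waits, targets)) / len(waits)

def performance_measure_4(machine_hours, patients_by_priority):
    """patients_by_priority maps priority -> list of (wait, target) pairs."""
    penalty = sum(
        FET_WEIGHTS[p] * fet([w for w, _ in pts], [t for _, t in pts])
        for p, pts in patients_by_priority.items()
    )
    return machine_hours + penalty
```

For instance, a hospital with 100 machine hours whose only misses are half of its Priority 4 patients would score 100 + 1 × 0.5 = 100.5 under this sketch.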
Table 7 lists the fraction of individual hospitals (out of 72) for which Performance Measure X,
evaluated over different timespans, (weakly) increases (i.e., does not decrease) under Geographic
Pooling. Given the randomness in patient arrivals and scans performed, we evaluate these four
performance metrics, averaged over rolling horizons ranging from 1 day to 1 year.
We also conduct a robustness check on Performance Measure 1 using a wide range of prices for
each type of MRI scan, based on New Choice Health (a healthcare provider in Ontario): $575-$1450
for Brain, $440-$1150 for Spine, $550-$1400 for Cardiac, $575-$1500 for Abdomen, $625-$1650 for
Breast, $800-$1200 for Neck, $500-$1400 for Pelvis, $340-$900 for Extremity, and $360-$925 for
Head. Table 8 shows Performance Measure 1 using the minimum, the average, and the maximum
price of each scan's price range. The majority of the hospitals benefit from the resource sharing
model when using the lower bound of the reimbursement fee for each scan. However, if we use the
average fee or the upper bound of the fee, only about 70% of the hospitals benefit financially from
resource sharing.
However, we should not only look at whether financial performance improves; we should also
consider by how much, to establish whether the change is significant, and by how much the FET
improves under our geographical resource sharing model.

Table 9 Financial Performance under Fee per Scan (at average price)

                 Hospitals Worse-off                    Hospitals Not Worse-off
Time Span    Avg Loss (per day)   FET Reduction     Avg Gain (per day)   FET Reduction
1 Day             $450                 6%                $442                 9%
2 Days            $435                 8%                $431                12%
3 Days            $421                10%                $438                14%
5 Days            $403                13%                $435                16%
1 Week            $386                15%                $441                19%
10 Days           $381                17%                $438                23%
2 Weeks           $377                21%                $442                25%
1 Month           $370                23%                $434                27%
1 Year            $361                24%                $421                27%
We can see from Table 9 that even though some hospitals are worse-off financially under the
fee-per-scan payment structure, the financial impact is small (the fee of roughly half a scan to one
scan per day). Meanwhile, there is a significant reduction in the FET of each hospital under resource
pooling. Even though there is no direct financial penalty for each patient exceeding his or her wait
time target, the benefit from the reduction in FET should offset the small financial loss experienced
by some of the hospitals.
9. Geographic Pooling vs. Complete Resource Sharing
We showed that geographic pooling achieves a 30% reduction in FET for D = 3 hours. However,
there is no guarantee that the geographic clustering method achieves the maximum reduction in
FET. In this section, we compare our results with two benchmark models: 1) complete resource
sharing with perfect information, where we have advance information on the arrival and service
rates before making the sequencing decisions; and 2) geographic pooling with perfect information.
We discuss each model's advantages and disadvantages and evaluate the differences between our
geographic pooling results and the benchmark models.
9.1. Two Benchmark Resource Pooling Models
9.1.1. Complete Pooling with Perfect Information The best-case scenario is complete
resource sharing with perfect information. We use it as the benchmark because it guarantees
optimal resource allocation and the maximum FET reduction. We continue to use the Augmented-
Priority sequencing rule proposed in Section 5, but instead of using predictive data, we assume
perfect knowledge of the actual arrivals and the number of scans performed each day. Therefore,
the ranking and scheduling mechanism guarantees that patients are matched to the best available
resource under our objective function.
Table 10 Comparing the Three Resource Pooling Models

Pooling Model                                 FET           Waiting Time   Traveling Time   Computational Time
                                                            (Days)         (Hours)          (Seconds/Patient)
Complete Pooling with Perfect Information     34.6%         25.9           5.2              32
Geographic Pooling with Perfect Information   33.7%         25.2           2.6              1.3
Geographic Pooling with Forecasted Data       36.59% ± 5%   25.3 ± 2.1     2.1 ± 0.5        1.3
However, in addition to requiring perfect foresight on arrival and service times, the extreme
computational complexity of this mechanism makes this a stringent theoretical benchmark. Since
we are trying to minimize the objective function for all patients, we cannot assign patients with the
highest ranking to the next available scanner. Thus, the sequencing decision becomes a dynamic
process, where the decision of each prior patient affects all the subsequent patients. Therefore
under complete pooling, we need to iterate through all the patients to find the optimal sequencing
decision yielding an exponential increase in computational complexity.
9.1.2. Geographic Pooling with Perfect Information In this model, we use the geographic
clusters proposed in Section 8, but instead of the forecasted data we use the actual dataset. This
assumes perfect advance information about arrivals and the number of scans performed, ensuring
that patients are assigned to the most suitable scanner. This model is a good benchmark for the
geographic pooling model, as it shares all the same characteristics but avoids bad routing decisions
caused by forecasting errors.
9.2. Comparing the Three Models
In this section, we compare the three models: complete pooling with perfect information, geographic
resource pooling with perfect information, and geographic resource pooling with forecasted data.
To compare the out-of-sample results of the three models, we use the 2017 data for the models
with perfect information and the dataset generated from our predictive model for the models with
forecasted information. There is no simulation error under the two pooling models with perfect
information, as all inputs are deterministic. There is simulation error in the models with forecasted
data, so we ran the simulation twenty times and averaged the results.
Table 10 shows that the complete pooling with perfect information model produces the smallest
FET. Using that as the benchmark, we see that geographic pooling with perfect information has the
second-best performance in reducing FET, followed closely by geographic pooling with forecasted
data. Complete pooling with forecasted data has the highest FET, since even small prediction errors
can cause a bullwhip effect in the FET. The average waiting time and driving time are similar
between complete pooling with perfect information and the two geographic pooling models. The
complete pooling model's computational complexity, however, is very high: it needs over 32 seconds
to rank and schedule each patient, while the geographic pooling model needs only 1.3 seconds.

Table 11 Marginal Benefit of Additional Scanners

                             Independent Hospitals                    Geographic Partial Pooling
# of Additional   Service   # of Hospitals    Avg Idling   Service     # of Hospitals   Avg Idling
Scanners          Level     with FET ≤ 10%    Time         Level       with FET ≤ 10%   Time
0                 36%       1                 32%          67% ± 5%    9 ± 2            3.7% ± 1.7%
5                 40%       5                 44%          79% ± 4%    44 ± 1           4.9% ± 1.3%
10                57%       10                61%          100% ± 1%   72 ± 0           7.7% ± 1.1%
50                90%       50                68%          -           -                -
72                100%      72                75%          -           -                -
From these results, we see that the geographic pooling model's performance is very close to
the benchmark optimum achieved by complete pooling with perfect information. The geographic
pooling model is not sensitive to imperfections in the predicted data, and its computational time
is significantly lower. Since we can never have perfect information and the complete pooling model
suffers from the curse of dimensionality, our proposed geographic pooling model, with its near-
optimal solution, serves as a good approximation in practice.
10. Strategic Capacity Expansion
The FET is influenced by scanner capacities and patient arrivals. We showed that resource sharing
and efficient scheduling cannot achieve 10% FET with the current capacity. So how sensitive is the
FET to capacity expansions, and what is the capacity needed to achieve 10% FET?
We conduct a counterfactual analysis of FET as we add scanners. First, consider adding scanners
when hospitals operate independently. Currently, 115 scanners are allocated among the 72
hospitals, and no hospital has FET below 10%. Even the most congested hospitals can treat all
their patients within the wait time targets with one additional scanner. Therefore, we would need
53 scanners placed at the busiest hospitals to achieve an average FET of 10%, and 72 additional
scanners to achieve a FET of 10% at every hospital. However, adding 72 more scanners would be
extremely costly and inefficient because it would create substantial idle time at the MRI machines.
Second, consider adding scanners with geographic resource pooling. The current 115 scanners
yield a FET of 36%. We ran twenty simulations to evaluate various capacity additions. Using the
clusters we constructed in Section 8, we add one scanner at a time to the cluster with
the highest FET, a greedy approach. Some clusters can achieve 10% FET with one additional
scanner, while the busier ones need two. Using geographic resource sharing, eight scanners placed
at the right locations are all we need to achieve the desired 10% FET.
Table 11 demonstrates that geographic partial pooling greatly reduces the scanner capacity
required to achieve a given level of FET. Resource pooling significantly increases the flexibility and
marginal value of capacity: in contrast to scanners dedicated to individual hospitals, all hospitals
in a cluster can benefit from an additional scanner. A little extra capacity added at the right
location can significantly impact the FET when we use smart data-driven scheduling and routing.
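The greedy placement rule described above can be sketched as follows. Here `estimate_fet` is a toy stand-in for the paper's simulation (we do not reproduce the actual FET model); the loop structure, adding one scanner at a time to the cluster with the highest FET, is the point:

```python
# Sketch of the greedy capacity-expansion rule (illustrative only):
# repeatedly add one scanner to the cluster with the highest FET until
# every cluster meets the 10% target. `estimate_fet` is a toy stand-in
# for the paper's simulation, not the actual FET model.

def estimate_fet(base_fet, added_scanners):
    """Toy stand-in: each added scanner halves a cluster's FET."""
    return base_fet / (2 ** added_scanners)

def greedy_expansion(base_fets, target=0.10):
    """Return the number of scanners added per cluster to meet `target`."""
    added = [0] * len(base_fets)
    current = list(base_fets)
    while max(current) > target:
        worst = current.index(max(current))   # cluster with highest FET
        added[worst] += 1                     # place one scanner there
        current[worst] = estimate_fet(base_fets[worst], added[worst])
    return added
```

With three clusters at FETs of 36%, 15%, and 8% under this toy model, the rule adds two scanners to the first cluster and one to the second before all clusters reach 10%.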
11. Conclusions and Contribution
This paper demonstrates how tailored geographic resource pooling can be effectively applied to
reduce the Fraction Exceeding Target (FET) in MRI hospitals with high server utilization. We
analyzed five years of MRI scan data gathered from 72 hospitals in Ontario, Canada. Due to limited
scanner capacity, inefficient scheduling, and lack of resource sharing, the FET averaged over the
72 MRI hospitals during 2013-2017 was over 66%.
We proposed a heuristic augmented-priority sequencing rule that balances the initial priority
and the number of days until the wait time target. Using predicted arrivals and service rates,
we designed an advance scheduling algorithm that predicts the treatment time for each patient
and matches them to the appropriate hospital. We use an LSTM network to increase the accuracy
of the arrival and service time predictions and thereby reduce the uncertainty associated with
advance scheduling.
We construct a two-dimensional geographic resource pooling model with heterogeneous multi-
priority-class patients. To minimize the FET while limiting the maximal allowed incremental travel
time to less than 3 hours, we partitioned the 72 hospitals into 10 geographic resource pools using
a Genetic Algorithm. Compared with the benchmark solution produced by complete resource
sharing with perfect information, our geographic resource pooling model produces a near-optimal
solution with much lower computational complexity and high robustness.
The geographic pooling model alone reduces the FET from 66% to 54%. Using both geographic
pooling and our improved scheduling heuristics, the overall FET is further reduced to 36%. More
importantly, we can serve all high priority patients within their wait time targets, while ensuring
no patient waits more than 48 days. We also show that, based on the 2017 data, every hospital
improves its FET and over 70% of the hospitals are not worse-off financially under resource pooling.
We also provide insights into the value of flexibility and resource sharing in capacity expansion.
Compared with the 50 scanners needed to achieve 10% FET under the current operation, we can
achieve the desired FET with only ten more scanners.
Our geographical resource sharing model is not without limitations: it requires coordination
between hospitals. Only with the support of our industry collaborators and the existing network
infrastructure in Canada can our geographic partial pooling be implemented.
Our paper provides a partial resource-sharing model that the 72 hospitals in Ontario, and
hospitals in regions with similar healthcare structures, can implement. The Central Intake Program
initiated by the Ontario Ministry of Health² embraces the core ideas of our paper: the hospitals
in the program centralize the decision-making process of patient assignment. Our model provides
empirical support for this movement towards centralized decision making and recommends optimal
clustering of hospitals to reduce the FET further. We are hopeful that the pilot program will
demonstrate the effect of resource sharing and further advance the implementation of our proposed
geographic resource pools across Ontario, Canada.
References
Afèche P (2013) Incentive-compatible revenue management in queueing systems: Optimal strategic delay. Manufacturing & Service Operations Management 15(3):423-443.
Armony M, Roels G, Song H (2017) Pooling queues with discretionary service capacity.
Ayvaz N, Huh WT (2010) Allocation of hospital capacity to multiple types of patients. Journal of Revenue and Pricing Management 9(5):386-398.
Bengio Y, Simard P, Frasconi P (1994) Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks 5(2):157-166.
Buzacott JA, Shanthikumar JG, Yao DD (1994) Jackson network models of manufacturing systems. Stochastic Modeling and Analysis of Manufacturing Systems, 1-45 (Springer).
CAR (2014) Maximum wait time access targets for medical imaging. Canadian Association of Radiologists.
Chakraborty S, Muthuraman K, Lawley M (2010) Sequential clinical scheduling with patient no-shows and general service time distributions. IIE Transactions 42(5):354-366.
Chvátal V (1979) A greedy heuristic for the set-covering problem. Mathematics of Operations Research 4(3):233-235.
Do H, Shunko M, Lucas M, Novak D (2015) On the pooling of queues: How server behavior affects performance. Available at SSRN 2606071.
Geng N, Xie X (2016) Optimal dynamic outpatient scheduling for a diagnostic facility with two waiting time targets. IEEE Transactions on Automatic Control 61(12):3725-3739.
Hay AM, Valentin EC, Bijlsma RA (2006) Modeling emergency care in hospitals: A paradox, the patient should not drive the process. Proceedings of the 2006 Winter Simulation Conference, 439-445 (IEEE).
He QM, Xie J, Zhao X (2012) Priority queue with customer upgrades. Naval Research Logistics (NRL) 59(5):362-375.
Hinton GE, Srivastava N, Krizhevsky A, Sutskever I, Salakhutdinov RR (2012) Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580.
Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural Computation 9(8):1735-1780.
² https://www.ottawahospital.on.ca/en/mricentralintake/
Hotelling H (1990) Stability in competition. The Collected Economics Articles of Harold Hotelling, 50-63 (Springer).
Jiang Y, Abouee-Mehrizi H, Diao Y (2020) Data-driven analytics to support scheduling of multi-priority multi-class patients with wait time targets. European Journal of Operational Research 281(3):597-611.
Kleinrock L, Finkelstein RP (1967) Time dependent priority queues. Operations Research 15(1):104-116.
Kumar M, Husian M, Upreti N, Gupta D (2010) Genetic algorithm: Review and application. International Journal of Information Technology and Knowledge Management 2(2):451-454.
Laszlo M, Mukherjee S (2007) A genetic algorithm that exchanges neighboring centers for k-means clustering. Pattern Recognition Letters 28(16):2359-2366.
Likas A, Vlassis N, Verbeek JJ (2003) The global k-means clustering algorithm. Pattern Recognition 36(2):451-461.
Lim MK, Mak HY, Shen ZJM (2017) Agility and proximity considerations in supply chain design. Management Science 63(4):1026-1041.
Loch C (1998) Operations management and reengineering. European Management Journal 16(3):306-317.
Mandelbaum A, Reiman MI (1998) On pooling in queueing networks. Management Science 44(7):971-981.
Maulik U, Bandyopadhyay S (2000) Genetic algorithm-based clustering technique. Pattern Recognition 33(9):1455-1465.
Min D, Yih Y (2014) Managing a patient waiting list with time-dependent priority and adverse events. RAIRO-Operations Research 48(1):53-74.
Motiwala SS, Flood CM, Coyte PC, Laporte A (2005) The first ministers' accord on health renewal and the future of home care in Canada. Available at SSRN 1144868.
Pascanu R, Mikolov T, Bengio Y (2013) On the difficulty of training recurrent neural networks. International Conference on Machine Learning, 1310-1318.
Patrick J, Puterman ML, Queyranne M (2008) Dynamic multipriority patient scheduling for a diagnostic resource. Operations Research 56(6):1507-1525.
Shunko M, Niederhoff J, Rosokha Y (2018) Humans are not machines: The behavioral impact of queueing design on service time. Management Science 64(1):453-473.
Smith DR, Whitt W (1981) Resource sharing for efficiency in traffic systems. Bell System Technical Journal 60(1):39-55.
Stanford DA, Taylor P, Ziedins I (2014) Waiting time distributions in the accumulating priority queue. Queueing Systems 77(3):297-330.
Van Dijk N, van der Sluis E (2009) Pooling is not the answer. European Journal of Operational Research 197(1):415-421.
Van Mieghem JA (1998) Investment strategies for flexible resources. Management Science 44(8):1071-1078.
Williams RJ, Zipser D (1995) Gradient-based learning algorithms for recurrent networks and their computational complexity. Backpropagation: Theory, Architectures, and Applications, 433.
Wolsey LA (1982) An analysis of the greedy algorithm for the submodular set covering problem. Combinatorica 2(4):385-393.
Zipkin PH (2000) Foundations of Inventory Management.
Appendix A: Impact of Pooling on FET versus on Expected Waiting Time
Resource sharing is known to be less effective when resources are highly utilized (ρ → 1, the
heavy-traffic regime), especially if the arrivals are positively correlated. Since many hospitals,
especially those located in urban areas, currently experience very high server utilization, would
resource sharing be effective in solving their congestion problem? The waiting time and the fraction
of patients exceeding their target, though correlated, react very differently to resource pooling. Since
hospital performance is measured by FET, it is possible to improve it without significantly reducing
the expected waiting time: we can push back the right tail of the waiting time distribution while
barely changing the mean. To illustrate how FET can improve without improving the average,
consider the following example.
Suppose there are two homogeneous hospitals, A and B. Hospital A has one Priority 3 patient
with a wait time target of 10 days who has been waiting for eight days. Hospital B has one Priority 1
patient with a wait time target of 24 hours who is about to exceed it. Consider the moment when
Hospital A finishes serving a patient and a server becomes available for immediate service, while
all servers in Hospital B are busy. Without resource sharing, the Priority 3 patient receives
treatment immediately and leaves the hospital with a waiting time of 8 days, while the Priority 1
patient receives service only once an MRI machine becomes available in Hospital B, experiencing a
waiting time of 2 days. This scheduling thus results in one patient exceeding her wait time target.
By pooling resources, the Priority 1 patient from Hospital B can receive immediate service at
Hospital A, and the Priority 3 patient from Hospital A receives service when a server becomes idle
in Hospital B. This way, the Priority 3 patient leaves with a waiting time of 9 days and the Priority 1
patient is treated within 24 hours. Therefore, if the expected waiting time is the performance
measure, resource sharing does not improve performance. In contrast, if the performance metric
is FET, resource sharing provides a substantial improvement: both patients are served within their
wait time targets, and the FET is reduced from 50% to 0%. This simple example demonstrates
that resource pooling can improve FET even when the waiting time cannot be reduced.
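The example can be verified numerically. A minimal sketch, assuming the Priority 1 patient waits exactly 1 day under pooling (the text only guarantees "within 24 hours"):

```python
# Numeric check of the two-hospital example: pooling leaves the mean wait
# unchanged but drops FET from 50% to 0%. Waits and targets are in days.

def fet(waits, targets):
    """Fraction of patients whose wait exceeded their target."""
    return sum(w > t for w, t in zip(waits, targets)) / len(waits)

targets = [10, 1]                 # Priority 3 target, Priority 1 target

no_pool = [8, 2]                  # P3 served at A immediately; P1 waits at B
pooled  = [9, 1]                  # P1 takes A's free scanner; P3 waits for B

assert sum(no_pool) / 2 == sum(pooled) / 2 == 5.0   # identical mean wait
assert fet(no_pool, targets) == 0.5                 # one of two exceeds
assert fet(pooled, targets) == 0.0                  # nobody exceeds
```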
Appendix B: Advanced Scheduling Algorithm
The exact mechanism determining patient i's expected order and time of service is as follows. We first
generate the arrivals and service rates for a 100-day horizon using the neural networks outlined in
Section 6.1.2. We denote the forecasted arrivals on any given day t as A_f(t), and the forecasted
number of scans performed as S_f(t). This is done for each cluster of hospitals, where A_f and S_f
are the aggregated arrivals and service rates within each cluster. We define E[Order of Service] as
the rank of patient i within a given patient pool, and ∆E[Order of Service] as the change in the
expected ranking of patient i in each iteration of the loop. We initialize ∆E[Order of Service] to an
arbitrarily large number, E[Order of Service](0) to the initial ranking of patient i, and the service
capacity and counter j to zero. In each iteration of the loop, we compute the number of days it
would take to serve everyone ahead of patient i and forecast the new arrivals during that waiting
period. We then add the forecasted arrivals in the waiting period to the patient pool and compute
the new E[Order of Service] of patient i. The loop terminates when the expected Order of Service
no longer changes.
Algorithm 1 Treatment Time Prediction Algorithm
1: Patient i requests an MRI scan on day A_i
2: while ∆E[Order of Service] ≠ 0 do
3:   t ← A_i
4:   while E[Order of Service] > Service Capacity do
5:     Service Capacity ← Service Capacity + S_f(t)
6:     Waiting Time ← Waiting Time + 1
7:     Patient Pool ← Patient Pool + A_f(t)
8:     t ← t + 1
9:   end while
10:  j ← j + 1
11:  E[Order of Service](j) ← rank of patient i against Patient Pool using Equation 6
12:  ∆E[Order of Service] ← E[Order of Service](j) − E[Order of Service](j−1)
13: end while
14: return E[Order of Service](j), TreatmentTime_i ← Arrival Day A_i + Waiting Time
Algorithm 1 implements this iterative prediction process and returns both the expected Order of
Service and the expected treatment time for patient i, based on the existing patients in the queue,
the forecasted arrivals in the waiting period, and the number of scans performed each day. For each
cluster, we compute the expected order of service of each patient. The actual assignment of patients
to MRI machines in the cluster is simple: we assign the patient at the head of the queue (the one
with the lowest sequencing score) to the next available MRI scanner and send the patient to the
respective hospital.
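A Python sketch of Algorithm 1 under simplifying assumptions: `rank` is a stand-in for the Equation 6 augmented-priority ranking, the forecasts are plain per-day lists, and the accumulators are reset at each outer pass for clarity:

```python
# Sketch of Algorithm 1 (simplified). `rank` stands in for the Equation 6
# ranking; A_f and S_f are forecasted arrivals / scans per day, indexed
# from day 0. Returns the converged expected order of service and the
# predicted treatment day for patient i.

def predict_treatment_time(arrival_day, initial_rank, A_f, S_f, rank):
    """
    arrival_day : day on which patient i requests a scan
    initial_rank: patient i's rank against the current patient pool
    rank        : rank(order, extra_arrivals) -> new expected order of
                  service once the forecasted arrivals join the pool
    """
    order, prev_order = initial_rank, None
    waiting_time = 0
    while order != prev_order:                  # stop when rank stabilizes
        t, capacity, waiting_time, pool_growth = arrival_day, 0, 0, 0
        while order > capacity:                 # days to serve those ahead
            capacity += S_f[t]                  # scans forecasted on day t
            pool_growth += A_f[t]               # arrivals forecasted on day t
            waiting_time += 1
            t += 1
        prev_order, order = order, rank(order, pool_growth)
    return order, arrival_day + waiting_time
```

With a toy ranking in which one third of the forecasted arrivals outrank patient i, two scans per day, and an initial rank of 5, the iteration converges after two passes.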
Appendix C: Modified K-means Clustering
The modified K-means algorithm (see Algorithm 2) is essentially an exhaustive process that loops
through all possible numbers of clusters p ∈ {1, ..., n}. For a given number of clusters p, we first pick
p hospitals at random as the initial centers of the clusters. We then iterate through all possible
assignments of the remaining n − p hospitals to one of the p clusters. Certain assignments are
excluded if they exceed the driving constraint; therefore this modified K-means converges even faster.
The maximum number of iterations (without imposing the driving constraint) needed to assign all
of the remaining n − p hospitals to one of the p clusters is p × (n − p). Therefore, the K-means
clustering algorithm is guaranteed to converge to a feasible solution.
However, this technique has several limitations. First, it is a greedy algorithm, so it does not
guarantee convergence to the global optimum. Second, the clustering result is highly dependent
on the selection of the initial p hospitals. In our paper, we repeated this algorithm ten times with
different hospital selections as the initial cluster centers and obtained different results.
Algorithm 2 K-means Clustering
1: Let X = {x_1, x_2, x_3, ..., x_n} be the set of hospitals we need to cluster
2: for p = 1 to n do
3:   For the given cluster size p, select p hospitals at random as the initial centers of the clusters
     and denote the clusters as C = {c_1, c_2, ..., c_p}. Note, we ensure that the p cluster centers
     are chosen such that every remaining hospital is within D hours of driving time of at least
     one of the cluster centers
4:   Let S = {s_1, s_2, ..., s_{n−p}} be the set of hospitals not yet assigned to a cluster
5:   for i = 1 to n − p do
6:     Let D_{s_i} = {d_{s_i,1}, d_{s_i,2}, ..., d_{s_i,p}} denote the driving times from hospital s_i
       to each of the cluster centers
7:     Remove all "infeasible" cluster centers for which d_{s_i,j} > D
8:     Compute Equation 5 between hospital s_i and all the remaining feasible cluster centers
9:     Assign hospital s_i to the cluster that minimizes Equation 5
10:    Update the clusters C = {c_1, c_2, ..., c_p} to include hospital s_i
11:  end for
12: end for
13: return the p that minimizes Equation 5 as the optimal number of clusters
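The feasibility-constrained assignment step (lines 6-9 of Algorithm 2) might look as follows in code. Since we do not have the paper's Equation 5 objective, the nearest feasible center stands in for it here:

```python
# Sketch of the constrained assignment step in Algorithm 2. The paper
# minimizes Equation 5 (an FET-based objective); as a stand-in we assign
# each remaining hospital to the nearest *feasible* center, i.e. one
# within D hours of driving time.

def assign_to_clusters(drive_time, centers, D):
    """
    drive_time : dict of dicts, drive_time[h][c] in hours
    centers    : hospitals chosen as cluster centers
    D          : maximal allowed incremental driving time (hours)
    Returns {hospital: center}; raises if a hospital has no feasible center.
    """
    assignment = {c: c for c in centers}      # centers belong to themselves
    for h, times in drive_time.items():
        if h in assignment:
            continue
        feasible = [c for c in centers if times[c] <= D]
        if not feasible:
            raise ValueError(f"{h} is farther than {D}h from every center")
        assignment[h] = min(feasible, key=lambda c: times[c])
    return assignment
```

Infeasible centers are pruned before the objective is evaluated, mirroring line 7 of the pseudocode; this is what lets the modified K-means converge faster than the unconstrained version.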
C.1. Comparing Three Clustering Techniques
As mentioned in Section 7.1, exhaustive search guarantees convergence to the globally optimal
solution, while K-means clustering and the Genetic Algorithm (GA) may converge to a locally
optimal solution (Laszlo and Mukherjee 2007). However, as the number of hospitals increases,
exhaustive search suffers from the curse of dimensionality and its computational complexity grows
exponentially, whereas GA converges regardless of the number of hospitals.
Therefore, in situations where an exhaustive search is infeasible, we approximate the optimal
solution using GA or K-means clustering. Based on the machine learning literature, GA converges
faster, gets trapped in local optima less often, and depends less on the initial population selection.
Since we cannot use an exhaustive search on 72 hospitals, we demonstrate the ability of GA and
K-means clustering to approximate the set cover obtained through exhaustive search using an
example of seven hospitals: we randomly divided the 72 hospitals into ten sets of 7 hospitals and
tested each of the three algorithms.
Using exhaustive search, we enumerated all possible partitions of the seven hospitals to find the
partition that minimizes Equation 5. Among the ten sets of hospitals we picked, GA converged to
the global optimum six times, while K-means converged only twice. On average, GA deviates by
less than two percentage points from the true optimum, while K-means deviates by five percentage
points. GA converges more quickly and produces fewer clusters than K-means under all scenarios.
The seven-hospital example thus shows that GA converges faster and produces a partition
comparable to the global optimum, while the K-means algorithm still provides a reasonable
approximation to the exhaustive search. We also compared the results of using Greedy Matching
and LP rounding to approximate the set-covering results: GA outperforms Greedy Matching and
shows results comparable to LP rounding for seven hospitals. Since it is impossible to use an
exhaustive search to partition the 72 hospitals, we use GA to approximate the near-optimal
partition of the 72 hospitals and obtain the geographic resource pools.

Figure 7 The Partition Results of 7 Hospitals Using Different Techniques
Using the set theory technique, we enumerated all possible partitions of the seven hospitals to find
the partition that minimizes Equation 2. In this case, we have to compute Equation 2 a total of
C(7,1) + C(7,2) + C(7,3) + C(7,4) + C(7,5) + C(7,6) + C(7,7) = 2^7 − 1 = 127 times to evaluate
all possible combinations of the 7 hospitals.
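A quick arithmetic check of this count:

```python
# Check of the enumeration count: the non-empty subsets of 7 hospitals
# number C(7,1) + C(7,2) + ... + C(7,7) = 2^7 - 1 = 127.
import math

total = sum(math.comb(7, k) for k in range(1, 8))
assert total == 2**7 - 1 == 127
```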
As shown in Figure 7, the seven hospitals are partitioned into 5 clusters, achieving the optimal
result of 28.9% of patients exceeding their wait time target. The Genetic Algorithm converged
after 62 iterations and achieved a 29.59% exceeding percentage. The modified K-means method
looped through all seven hospitals and achieved a 31.31% exceeding percentage. The GA's partition
of the seven hospitals is thus very similar to the true optimum obtained by set theory, whereas
K-means, regardless of the starting hospitals, converges to a sub-optimal local solution.