We introduce a Markov chain model to represent a patient's path in terms of the number and type of infections s/he may have acquired during a hospitalization period. The model allows for categories of patient diagnosis, surgery, the four major types of nosocomial (hospital-acquired) infections, and discharge or death. Data from a national medical records survey including 58,647 patients enable us to estimate transition probabilities and, ultimately, perform statistical tests of fit, including a validation test. Novel parameterizations (functions of the transition matrix) are introduced to answer research questions on time-dependent infection rates, time to discharge or death as a function of patient diagnostic groups, and conditional infection rates reflecting intervening variables (e.g., surgery).
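As an illustration of the estimation step only, the maximum-likelihood estimate of a stationary Markov chain's transition matrix is simply the row-normalized matrix of observed transition counts. The sketch below uses a hypothetical five-state path vocabulary far coarser than the paper's actual state space, and three invented paths rather than survey data:

```python
from collections import Counter

# Hypothetical coarse state space; the paper's chain distinguishes
# diagnostic categories, surgery, and four nosocomial infection types.
STATES = ["admitted", "surgery", "infection", "discharged", "dead"]

def estimate_transition_matrix(paths, states=STATES):
    """Maximum-likelihood estimate of a stationary Markov chain's
    transition matrix: row-normalized observed transition counts."""
    counts = Counter()
    for path in paths:
        for a, b in zip(path, path[1:]):
            counts[(a, b)] += 1
    P = {}
    for s in states:
        row_total = sum(counts[(s, t)] for t in states)
        P[s] = {t: counts[(s, t)] / row_total if row_total else 0.0
                for t in states}
    return P

# Three illustrative patient paths (not survey data).
paths = [
    ["admitted", "surgery", "infection", "discharged"],
    ["admitted", "discharged"],
    ["admitted", "surgery", "discharged"],
]
P = estimate_transition_matrix(paths)
# P["admitted"]["surgery"] is 2/3: two of three admitted patients had surgery.
```

Functions of this estimated matrix (absorption probabilities, expected times to discharge or death) then yield the parameterizations the abstract describes.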
After acute care services are no longer required, a patient in an acute care hospital often must remain there while he or she awaits the provision of extended care services by a nursing home, through social support services, or by a home health care service. This waiting period is often referred to as "administrative days" because the time is spent in the acute facility not for medical reasons, but rather for administrative reasons. In this paper we use a queueing-analytic approach to describe the process by which patients await placement. We model the situation using a state-dependent placement rate for patients backed up in the acute care facility. We compare our model results with data collected from a convenience sample of 7 hospitals in New York State. We conclude with a discussion of the policy implications of our models.
A health insurance market is examined in which individuals with a history of high utilization of health care services tend to select fee-for-service (FFS) insurance when offered a choice between FFS and health maintenance organizations (HMOs). In addition, HMOs are assumed to practice community rating of employee groups. Based on these observations and health plan enrollment and premium data from Minneapolis-St. Paul, a deterministic simulation model is constructed to predict equilibrium market shares and premiums for HMO and FFS insurers within a firm. Despite the fact that favorable selection enhances their ability to compete with FFS insurers, the model predicts that HMOs maximize profits at less than 100% market share, and at a lower share than they could conceivably capture. That is, HMOs would not find it to their advantage to drive FFS insurers from the market even if they could. In all cases, however, the profit-maximizing HMO premium is greater than the experience-rated premium and, thus, the average health insurance premium per employee in firms offering both HMOs and FFS insurance is predicted to be greater than in firms offering one experience-rated plan. The model may be used to simulate the effects of varying the employer's method of contributing to health insurance premiums. Several contribution methods are compared. Employers who offer FFS and HMO insurance and pay the full cost of the lowest-cost plan are predicted to have lower average total premiums (employer plus employee contributions) than employers who pay any level percent of the cost of each plan.
An approach is presented using interactive microcomputers for the development of diagnostic decision aids applicable to some complaints encountered in ambulatory care. The central feature of the descriptive phase of the approach is the use of the underlying (and perhaps dynamic) state of patient health. The central feature of the prescriptive phase of the approach is quick, simple assessment which produces a set of nondominated diagnostic tests, the selection of which is biased by the subjectively determined disease(s) that the diagnostician wishes to rule out or confirm. We present an application of the approach to the complaint, "diarrhea of recent onset in adults," discuss the hardware/software implementation, and summarize preliminary evaluation results.
The classification of short-term hospitals into homogeneous groups has become an integral part of many systems designed to abate continuing cost inflation in the hospital industry. This paper describes one approach which was developed to identify homogeneous groups of short-term hospitals. The approach, based on hierarchical cluster analysis, defines an objective measure (called expected distinctiveness) to evaluate any group of hospitals identified by a hierarchical grouping structure or dendrogram. Using this measure, an efficient algorithm is developed which finds the hospital partition from the identified groups which maximizes total expected distinctiveness. A numerical example illustrates the application and extensions.
We consider a priority queue in steady state with N servers, two classes of customers, and a cutoff service discipline. Low priority arrivals are "cut off" (refused immediate service) and placed in a queue whenever N1 or more servers are busy, in order to keep N-N1 servers free for high priority arrivals. A Poisson arrival process for each class, and a common exponential service rate, are assumed. Two models are considered: one where high priority customers queue for service and one where they are lost if all servers are busy at an arrival epoch. Results are obtained for the probability of n servers busy, the expected low priority waiting time, and (in the case where high priority customers do not queue) the complete low priority waiting time distribution. The results are applied to determine the number of ambulances required in an urban fleet which serves both emergency calls and low-priority patient transfers.
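The loss variant of this model is a birth-death process on the number of busy servers, with the arrival rate dropping from the combined rate to the high-priority rate once N1 servers are busy. A minimal sketch of the stationary distribution via the detailed-balance recursion, with illustrative parameters rather than the paper's ambulance data:

```python
def busy_server_dist(N, N1, lam_hi, lam_lo, mu):
    """Stationary distribution of the number of busy servers for the
    loss variant of the cutoff discipline: low-priority arrivals are
    refused once N1 servers are busy, and high-priority arrivals are
    lost when all N servers are busy.  Birth-death detailed balance:
    p[n+1] * (n+1) * mu = p[n] * lambda(n)."""
    def arrival_rate(n):
        return lam_hi + lam_lo if n < N1 else lam_hi
    p = [1.0]  # unnormalized, with p[0] = 1
    for n in range(N):
        p.append(p[-1] * arrival_rate(n) / ((n + 1) * mu))
    total = sum(p)
    return [x / total for x in p]

# Illustrative parameters (not drawn from the paper's application).
dist = busy_server_dist(N=3, N1=2, lam_hi=1.0, lam_lo=1.0, mu=1.0)
```

Summing the distribution over the blocking states gives the cutoff probability seen by low-priority transfers.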
This paper describes 110 applications of decision analysis to health care. Each paper is characterized according to the particular problem it addresses and the methods employed in the application. These applications span 15 years of study and are reported in a widely dispersed literature. Nearly half of the published articles appear in journals with a medical audience and more than 25% of the studies remain unpublished. The major areas of application identified in this review have been the evaluation of alternatives in treatment and health policy planning. Studies discussing conceptual issues in the application of decision analysis represent a substantial portion of those identified. Almost equal numbers of applications involve the use of single and multiattribute utilities in scaling decision outcomes and relatively few apply to group utilities. General discussions of decision analysis methods and applications focused on probability assessments/analyses represent the other major categories of studies cited.
The single-block appointment system is the most common method of scheduling ambulatory care clinics today. Several studies have examined various appointment systems ranging from single-block appointments on one extreme to individual appointments on the other, and including mixtures of these such as multiple-block (m-at-a-time) and block/individual systems. In this paper we analyze a general single-server multiple-block system, one permitting blocks of variable size. In the analysis we use a dynamic programming approach, with some modifications to compensate for the non-Markov nature of the problem. Analytical results and approximations which significantly reduce the computational requirements for a solution are obtained. Examples demonstrate that under certain weightings of the criteria of waiting, idle, and overtime, the generality of the system considered here allows performance superior to that of other commonly used systems.
This paper considers the multidimensional weighted minimax location problem, namely, finding a facility location that minimizes the maximal weighted distance to n points. General distance norms are used. An epsilon-approximate solution is obtained by applying a variant of the Russian method for the solution of Linear Programming. The algorithm has a time complexity of O(n log(1/epsilon)) for fixed dimensionality k. Computational results are presented.
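The paper's algorithm applies a variant of the Russian (ellipsoid) method in general dimension. As a much simpler illustration of the epsilon-approximation idea only, the sketch below solves the one-dimensional weighted minimax problem by bisection on the objective value z, using the fact that a value z is achievable iff the intervals [x_i - z/w_i, x_i + z/w_i] share a common point:

```python
def weighted_minimax_1d(points, weights, eps=1e-9):
    """Epsilon-approximate 1-D weighted minimax location.  A value z
    is feasible iff max_i (x_i - z/w_i) <= min_i (x_i + z/w_i);
    bisect on z and return (location, value)."""
    lo, hi = 0.0, max(weights) * (max(points) - min(points))
    while hi - lo > eps:
        z = (lo + hi) / 2.0
        left = max(x - z / w for x, w in zip(points, weights))
        right = min(x + z / w for x, w in zip(points, weights))
        if left <= right:
            hi = z       # z achievable: tighten from above
        else:
            lo = z       # z infeasible: tighten from below
    z = hi
    loc = max(x - z / w for x, w in zip(points, weights))
    return loc, z

loc, val = weighted_minimax_1d([0.0, 10.0], [1.0, 1.0])
# Two equally weighted points: optimum at the midpoint, value 5.
```

Each bisection step halves the interval of candidate objective values, giving the log(1/epsilon) dependence in the running time.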
The Confidence Profile Method is a Bayesian method for adjusting and combining pieces of evidence to estimate parameters, such as the effect of health technologies on health outcomes. The information in each piece of evidence is captured in a likelihood function that gives the likelihood of the observed results of the evidence as a function of possible values of the parameter. A posterior distribution is calculated from Bayes' formula as the product of the likelihood function and a prior distribution. Multiple pieces of evidence are incorporated by successive applications of Bayes' formula. Pieces of evidence are adjusted for biases to internal or external validity by modeling the biases and deriving "adjusted" likelihood functions that incorporate the models. Likelihood functions have been derived for one-, two-, and multi-arm prospective studies; 2 x 2, 2 x n, and matched case-control studies; and cross-sectional studies. Biases that can be incorporated in likelihood functions include crossover in controlled trials, errors in measurement of outcomes, patient selection biases, differences in technologies, and differences in length of follow-up. Effect measures include differences of rates, ratios of rates, and odds ratios. The elements of the method are illustrated with an analysis of the effect of a thrombolytic agent on the difference in probability of 1-year survival after a heart attack.
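The successive-update mechanics can be sketched on a discrete grid (a simplification: the method itself works with continuous distributions and bias-adjusted likelihoods). The survival-rate numbers below are hypothetical, not the thrombolytic-agent analysis:

```python
# Hypothetical survival-rate example; a discrete grid stands in for
# the continuous distributions used by the Confidence Profile Method.
def binomial_likelihood(successes, n):
    """Likelihood of `successes` out of `n` as a function of the
    unknown rate p (normalizing constant omitted)."""
    return lambda p: (p ** successes) * ((1.0 - p) ** (n - successes))

def posterior_on_grid(prior, likelihoods, grid):
    """Successive applications of Bayes' formula: the posterior is
    proportional to the prior times the product of the likelihoods."""
    post = [prior(p) for p in grid]
    for lik in likelihoods:
        post = [w * lik(p) for w, p in zip(post, grid)]
    total = sum(post)
    return [w / total for w in post]

grid = [i / 100.0 for i in range(1, 100)]
flat_prior = lambda p: 1.0
# Two hypothetical pieces of evidence on 1-year survival probability.
post = posterior_on_grid(flat_prior,
                         [binomial_likelihood(8, 10),
                          binomial_likelihood(70, 100)],
                         grid)
posterior_mean = sum(w * p for w, p in zip(post, grid))
```

Bias adjustment in the method amounts to replacing `binomial_likelihood` with an "adjusted" likelihood that models, say, crossover or measurement error before the update is applied.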
To decide whether or not to undertake an expensive national survey to determine the effectiveness of infection control, we devised a quantitative decision model to analyze the costs and probabilities of successful study outcomes. The result allowed us to determine whether the proposed study method and design would provide sufficient statistical power to ensure meaningful conclusions from the research. The model was robust in assessing the adequacy of method accuracy and, within the range of assumptions specified, it suggested that the project should be undertaken. The results helped to secure official approval and funding for this large-scale research project. A novel approach to sensitivity analysis is included. As constructed, the model is applicable to other projects in applied research and, with some modification, to projects in basic research as well.
A four-attribute health state classification system designed to uniquely categorize the health status of all individuals two years of age and over is presented. A social preference function defined over the health state classification system is required. Standard multi-attribute utility theory is investigated for the task, problems are identified, and modifications to the standard method are proposed. The modified method is field-tested in a survey research project involving 112 home interviews. Results are presented and discussed in detail for both the social preference function and the performance of the modified method. A recommended social preference function is presented, complete with a range of uncertainty. The modified method is found to be applicable to the task--no insurmountable difficulties are encountered. Recommendations are presented, based on our experience, for other investigators who may be interested in reapplying the method in other studies.
Breast cancer is the most common non-skin cancer affecting women in the United States, where every year more than 20 million mammograms are performed. Breast biopsy is commonly performed to evaluate suspicious findings on mammograms and confirm the presence of cancer. Currently, 700,000 biopsies are performed annually in the U.S.; 55%-85% of these biopsies ultimately are found to be benign breast lesions, resulting in unnecessary treatments, patient anxiety, and expenditures. This paper addresses the decision problem faced by radiologists: When should a woman be sent for biopsy based on her mammographic features and demographic factors? This problem is formulated as a finite-horizon discrete-time Markov decision process. The optimal policy of our model shows that the decision to biopsy should take the patient's age into account; in particular, an older patient's risk threshold for biopsy should be higher than that of a younger patient. When applied to the clinical data, our model outperforms radiologists in the biopsy decision-making problem. This study also derives structural properties of the model, including sufficiency conditions that ensure the existence of a control-limit type policy and nondecreasing control-limits with age.
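The backbone of such a model is standard finite-horizon backward induction. The sketch below is a toy in the spirit of the biopsy problem only: its three risk states, transition probabilities, and rewards are invented for illustration, whereas the paper's states encode mammographic features and demographic factors:

```python
# Toy finite-horizon MDP in the spirit of the biopsy model.  All
# numbers below are illustrative, not the paper's parameters.
RISK_STATES = [0, 1, 2]            # low / medium / high risk
P = [[0.80, 0.15, 0.05],           # P[s][t]: risk transition under "wait"
     [0.10, 0.70, 0.20],
     [0.05, 0.15, 0.80]]
biopsy_reward = [0.0, 0.4, 1.0]    # terminal reward for biopsying in state s
wait_reward = 0.05                 # per-period reward for deferring

def backward_induction(horizon):
    """Finite-horizon dynamic programming: at each epoch, compare the
    terminal 'biopsy' reward against the expected value of waiting."""
    V = [0.0] * len(RISK_STATES)
    policy = []
    for _ in range(horizon):
        new_V, policy = [], []
        for s in RISK_STATES:
            wait = wait_reward + sum(P[s][t] * V[t] for t in RISK_STATES)
            if biopsy_reward[s] >= wait:
                new_V.append(biopsy_reward[s])
                policy.append("biopsy")
            else:
                new_V.append(wait)
                policy.append("wait")
        V = new_V
    return V, policy   # values and first-epoch policy

V, policy = backward_induction(horizon=5)
```

With these toy numbers the first-epoch policy biopsies only in the highest risk state; the paper's structural results give conditions under which such a control-limit form is guaranteed.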
We employ a birth and death process to describe the spread of an infectious disease through a closed population. Control of the epidemic can be effected at any instant by varying the birth and death rates to represent quarantine and medical care programs. An optimal strategy is one which minimizes the expected discounted losses and costs resulting from the epidemic process and the control programs over an infinite horizon. We formulate the problem as a continuous-time Markov decision model. Then we present conditions ensuring that optimal quarantine and medical care program levels are nonincreasing functions of the number of infectives in the population. We also analyze the dependence of the optimal strategy on the model parameters. Finally, we present an application of the model to the control of a rumor.
One measure of the effectiveness of institutional trauma and burn management based on collected patient data involves the computation of a standard normal Z statistic. A potential weakness of the measure arises from incomplete patient data. In this paper, we apply methods of fractional programming and global optimization to efficiently calculate bounds on the computed effectiveness of an institution. The measure of effectiveness (i.e., the trauma outcome function) is briefly described, the optimization problems associated with its upper and lower bounds are defined and characterized, and appropriate solution procedures are developed. We solve an example problem to illustrate the method.
This paper presents a mixed-integer goal programming model for expense budgeting in a hospital nursing department. The model incorporates several different objectives based upon such considerations as cost containment and providing appropriate nursing hours for delivering quality nursing care. Also considered are possible trade-offs among full-time, part-time and overtime nurses on weekdays as well as weekends. The budget includes vacation, sick leave, holiday, and seniority policies of a hospital and various constraints on a hospital nursing service imposed by nursing unions. The results are based upon data from a study hospital and indicate that the model is practical for budgeting in a hospital nursing department.
In this paper we present two iterative methods for solving a model to evaluate busy probabilities for Emergency Medical Service (EMS) vehicles. The model considers location-dependent service times and is an alternative to the mean service calibration method, a procedure used with the Hypercube Model to accommodate travel times and location-dependent service times. We use monotonicity arguments to prove that one iterative method always converges to a solution. A large computational experiment suggests that both methods work satisfactorily in EMS systems with low ambulance busy probabilities, and that the method that always converges to a solution performs significantly better in EMS systems with high busy probabilities.
This paper describes the development of a model for making project funding decisions at The National Cancer Institute (NCI). The American Stop Smoking Intervention Study (ASSIST) is a multiple-year, multiple-site demonstration project, aimed at reducing smoking prevalence. The initial request for ASSIST proposals was answered by about twice as many states as could be funded. Scientific peer review of the proposals was the primary criterion used for funding decisions. However, a modified Delphi process made explicit several criteria of secondary importance. A structured questionnaire identified the relative importance of these secondary criteria, some of which we incorporated into a composite preference function. We modeled the proposal funding decision as a zero-one program, and adjusted the preference function and available budget parametrically to generate many suitable outcomes. The funding decision identified by our model offers significant advantages over manually generated solutions found by experts at NCI.
Models estimating demand and need for emergency transportation services are developed. These models can provide reliable estimates which can be used for planning purposes, by complementing and/or substituting for historical data. The model estimating demand requires only four independent variables: population in the area, employment in the area, and two indicators of socioeconomic status which can be obtained from census data. The model can be used to estimate demand according to 4 operational categories and 11 clinical categories. The parameters of the model are calibrated with 1979 data from 82 ambulance services covering over 200 minor civil divisions in Southwestern Pennsylvania. This model was tested with data from another 55 minor civil divisions, also in Southwestern Pennsylvania, and it provided good estimates of total demand. The model to estimate need evolves from the demand model. It enables planners to estimate unmet need occurring in the region. The effect of emergency transportation service (ETS) provider characteristics on demand was also investigated. Statistical tests show that, for purposes of forecasting demand, when the sociodemographic factors are taken into account, provider characteristics are not significant.
A work force includes workers of m types. The worker categories are ordered, with type-1 workers the most highly qualified, type-2 the next, and so on. If the need arises, a type-k worker is able to substitute for a worker of any type j greater than k (k = 1, ..., m - 1). For 7-day-a-week operation, daily requirements are for at least Dk workers of type-k or better, of which at least dk must be precisely type-k. Formulas are given to find the smallest number and most economical mix of workers, assuming that each worker must have 2 off-days per week and a given fraction of weekends off. Algorithms are presented which generate a feasible schedule, and provide work stretches between 2 and 5 days, and consecutive weekdays off when on duty for 2 weekends in a row, without additional staff.
When conducting inferential and epidemiologic studies, researchers are often interested in the distribution of time until the occurrence of some specified event, a form of incidence calculation. Furthermore, this interest often extends to the effects of intervening factors on this distribution. In this paper we impose the assumption that the phenomena being investigated are governed by a stationary Markov chain and review how one may estimate the above distribution. We then introduce and relate two different methods of investigating the effects of intervening factors. In particular, we show how an investigator may evaluate the effect of potential intervention programs. Finally, we demonstrate the proposed methodology using data from a population study.
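For a stationary chain with a known (or estimated) transition matrix, the time-until-event distribution described above can be read off by propagating probability mass through the non-event states. A minimal sketch with a hypothetical two-state chain, where the "event" is absorption into the `"sick"` state:

```python
# Sketch: distribution of time until a specified event for a
# stationary Markov chain, by stepping mass through transient states.
def first_passage_distribution(P, start, target, max_t=50):
    """P: dict-of-dicts transition matrix.  Entry t of the returned
    list is the probability that the chain, started at `start`,
    first reaches `target` at time t+1."""
    states = list(P)
    probs = {s: 1.0 if s == start else 0.0 for s in states}
    dist = []
    for _ in range(max_t):
        new = {s: 0.0 for s in states}
        hit = 0.0
        for s, mass in probs.items():
            if s == target:          # mass already absorbed: ignore
                continue
            for t, p in P[s].items():
                if t == target:
                    hit += mass * p  # first arrival at the event
                else:
                    new[t] += mass * p
        dist.append(hit)
        probs = new
    return dist

# Hypothetical two-state chain: geometric time to the event.
P = {"healthy": {"healthy": 0.7, "sick": 0.3},
     "sick": {"healthy": 0.0, "sick": 1.0}}
dist = first_passage_distribution(P, "healthy", "sick", max_t=3)
# dist is [0.3, 0.21, 0.147]: a geometric distribution, as expected.
```

The effect of an intervening factor can then be assessed by recomputing the same distribution under a modified transition matrix.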
This paper considers the impact of congestion on the spatial distribution of customer utilization of service facilities in a stochastic-dynamic environment. Previous research has assumed that the rate of demand for service is independent of the attributes of the facilities. We consider the more general case in which facility utilization is determined both by individual facility choice (based on the stochastic disaggregate choice mechanism) and by the rate of demand for service. We develop generalized results for proving that equilibria exist and describe sufficient conditions for the uniqueness and global stability of these equilibria. These conditions depend upon the elasticity of demand with respect to the level of congestion at the facilities, and on whether customers are congestion-averse or are congestion-loving. Finally, we examine special cases when these conditions are satisfied.
A fundamental problem of cyclic staffing is to size and schedule a minimum-cost workforce so that sufficient workers are on duty during each time period. This may be modeled as an integer linear program with a cyclically structured 0-1 constraint matrix. We identify a large class of such problems for which special structure permits the ILP to be solved parametrically as a bounded series of network flow problems. Moreover, an alternative solution technique is shown in which the continuous-valued LP is solved and the result rounded in a special way to yield an optimum solution to the ILP.
This paper investigates a class of single-facility location problems on an arbitrary network. Necessary and sufficient conditions are obtained for characterizing locally optimal locations with respect to a certain nonlinear objective function. This approach produces a number of new results for locating a facility on an arbitrary network, and in addition it unifies several known results for the special case of tree networks. It also suggests algorithmic procedures for obtaining such optimal locations.
A co-epidemic arises when the spread of one infectious disease stimulates the spread of another infectious disease. Recently, this has happened with human immunodeficiency virus (HIV) and tuberculosis (TB). We develop two variants of a co-epidemic model of two diseases. We calculate the basic reproduction number (R(0)), the disease-free equilibrium, and the quasi-disease-free equilibria, which we define as the existence of one disease along with the complete eradication of the other disease, and the co-infection equilibria for specific conditions. We determine stability criteria for the disease-free and quasi-disease-free equilibria. We present an illustrative numerical analysis of the HIV-TB co-epidemics in India that we use to explore the effects of hypothetical prevention and treatment scenarios. Our numerical analysis demonstrates that exclusively treating HIV or TB may reduce the targeted epidemic, but can subsequently exacerbate the other epidemic. Our analyses suggest that coordinated treatment efforts that include highly active antiretroviral therapy for HIV, latent TB prophylaxis, and active TB treatment may be necessary to slow the HIV-TB co-epidemic. However, treatment alone may not be sufficient to eradicate both diseases. Increased disease prevention efforts (for example, those that promote condom use) may also be needed to extinguish this co-epidemic. Our simple model of two synergistic infectious disease epidemics illustrates the importance of including the effects of each disease on the transmission and progression of the other disease.
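The qualitative finding, that treating one disease alone can worsen the other, can be caricatured in a few lines. The sketch below is a toy discrete-time two-disease model, not the paper's HIV-TB system: each disease's force of infection is amplified by the other's prevalence through an invented `coupling` parameter, and all rates are illustrative:

```python
def simulate(beta1, beta2, gamma1, gamma2, coupling, steps=200):
    """Toy coupled epidemics on population fractions: s susceptible,
    i1 and i2 infected.  Each disease's force of infection is
    amplified by the other disease's prevalence via `coupling`."""
    s, i1, i2 = 0.98, 0.01, 0.01
    for _ in range(steps):
        f1 = beta1 * (1.0 + coupling * i2) * s * i1  # new disease-1 cases
        f2 = beta2 * (1.0 + coupling * i1) * s * i2  # new disease-2 cases
        s = s - f1 - f2
        i1 = i1 + f1 - gamma1 * i1                   # gamma = removal rate
        i2 = i2 + f2 - gamma2 * i2
    return i1, i2

# Treating only disease 1 (high gamma1, low gamma2) vs. treating both.
i1_only, i2_only = simulate(0.3, 0.3, gamma1=0.5, gamma2=0.1, coupling=2.0)
i1_both, i2_both = simulate(0.3, 0.3, gamma1=0.5, gamma2=0.5, coupling=2.0)
```

Under these illustrative parameters, disease 2 ends up far more prevalent when only disease 1 is treated than under coordinated treatment, mirroring the coordination argument in the abstract.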
A general structural equation model for representing consumer response to innovation is derived and illustrated. The approach both complements and extends an earlier model proposed by Hauser and Urban. Among other benefits, the model is able to take measurement error into account explicitly, to estimate the intercorrelation among exogenous factors if these exist, to yield a unique solution in a statistical sense, and to test complex hypotheses (e.g., systems of relations, simultaneity, feedback) associated with the measurement of consumer responses and their impact on actual choice behavior. In addition, the procedures permit one to model environmental and managerially controllable stimuli as they constrain and influence consumer choice. Limitations of the procedures are discussed and related to existing approaches. Included in the discussion is a development of four generic response models designed to provide a framework for modeling how consumers behave and how managers might better approach the design of products, persuasive appeals, and other controllable factors in the marketing mix.
In this paper, a simplified model describing the stochastic process underlying the etiology of contagious and noncontagious diseases with mass screening is developed. Typical examples might include screening of tuberculosis in urban ghetto areas, venereal diseases in the sexually active, or AIDS in high risk population groups. The model is addressed to diseases which have zero or negligible latent periods. In the model, it is assumed that the reliabilities of the screening tests are constant, and independent of how long the population unit has the disease. Both tests with perfect and imperfect reliabilities are considered. It is shown that most of the results of a 1978 study by W.P. Pierskalla and J.A. Voelker for noncontagious diseases can be generalized for contagious diseases. A mathematical program for computing the optimal test choice and screening periods is presented. It is shown that the optimal screening schedule is equally spaced for tests with perfect reliability. Other properties relating to the managerial problems of screening frequencies, test selection, and resource allocation are also presented.
This paper extends the notions of perishable inventory models to the realm of continuous review inventory systems. The traditional perishable inventory costs of ordering, holding, shortage or penalty, disposal and revenue are incorporated into the continuous review framework. The type of policy that is optimal with respect to long run average expected cost is presented for both the backlogging and lost-sales models. In addition, for the lost-sales model the cost function is presented and analyzed.
This article presents a methodology to identify and specify a continuous time semi-Markov model of population flow within a network of service facilities. An iterative procedure of state space definition, population disaggregation, and parameter estimation leads to the specification of a model which satisfies the underlying semi-Markov assumptions. We also present a test of the impact of occupancy upon realizations of population flows. The procedure is applied to data describing the movement of obstetric patients in a large university teaching hospital. We use the model to predict length-of-stay distributions. Finally, we compare these results with those that would have been obtained without the procedure, and show the modified model to be superior.
This paper presents a methodology for estimating expected utilization and service level for a class of capacity constrained service network facilities operating in a stochastic environment. A semi-Markov process describes the flows of customers (patients) through a network of service units. We model the case where one of the units has finite capacity and no queues are allowed to form. We show that the expected level of utilization and service can be computed from a simple linear relationship based on (a) the equilibrium arrival rates at each unit which are associated with the case of infinite capacity, (b) mean holding times for each unit, and (c) the probability that the finite capacity unit is at full capacity. We use Erlang's loss formula to calculate the probability of full capacity, show this calculation to be exact for two cases, and recommend its use as an approximation in the general case. We test the accuracy of the approximation on a set of published data. In the discussion, we present a technique for analyzing collected patient flow data using the results of this methodology.
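Erlang's loss formula itself is cheap to compute via the standard numerically stable recursion. A minimal sketch with an illustrative offered load, not data from the paper:

```python
def erlang_b(servers, offered_load):
    """Erlang's loss formula B(c, a): the probability that a c-server
    unit with no queueing is at full capacity, computed with the
    stable recursion
        B(0, a) = 1
        B(c, a) = a * B(c-1, a) / (c + a * B(c-1, a))."""
    b = 1.0
    for c in range(1, servers + 1):
        b = offered_load * b / (c + offered_load * b)
    return b

# Offered load a = arrival rate x mean holding time at the finite unit.
p_full = erlang_b(servers=3, offered_load=2.0)  # 4/19, about 0.2105
```

This full-capacity probability is quantity (c) in the linear relationship above; combined with the infinite-capacity arrival rates and mean holding times, it yields the expected utilization and service level.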
The federal Medicare regulations reimburse hospitals on a pro rata share of the hospital's cost. Hence, to meet its financial requirements, a hospital is forced to shift more of the financial burdens onto its private patients. This procedure has contributed to double digit inflation in hospital prices and to proposed federal regulation to control the rate of increase in hospital revenues. In this regulatory environment, we develop nonlinear programming pricing and cost allocation models to aid hospital administrators in meeting their profit-maximizing and profit-satisficing goals. The model enables administrators to explore tactical issues such as: (i) studying the relationship between a voluntary or legislated cap on a hospital's total revenues and the hospital's profitability, (ii) identifying those departments within the hospital that are the most attractive candidates for cost reduction or cost containment efforts, and (iii) isolating those services that should be singled out by the hospital manager for renegotiation of the prospective or "customary and reasonable" cap. Finally, the modeling approach is helpful in explaining the departmental cross subsidies observed in practice, and can be of aid to federal administrators in assessing the impacts of proposed changes in the Medicare reimbursement formula.
This paper reviews the relevant literature on the problem of determining suitable ordering policies for both fixed life perishable inventory, and inventory subject to continuous exponential decay. We consider both deterministic and stochastic demand for single and multiple products. Both optimal and suboptimal order policies are discussed. In addition, a brief review of the application of these models to blood bank management is included. The review concludes with a discussion of some of the interesting open research questions in the area.
We consider a time-dependent stopping problem and its application to the decision-making process associated with transplanting a live organ. "Offers" (e.g., kidneys for transplant) become available from time to time. The values of the offers constitute a sequence of independent identically distributed positive random variables. When an offer arrives, a decision is made whether to accept it. If it is accepted, the process terminates. Otherwise, the offer is lost and the process continues until the next arrival, or until a moment when the process terminates by itself. Self-termination depends on an underlying lifetime distribution (which in the application corresponds to that of the candidate for a transplant). When the underlying process has an increasing failure rate, and the arrivals form a renewal process, we show that the control-limit type policy that maximizes the expected reward is a nonincreasing function of time. For non-homogeneous Poisson arrivals, we derive a first-order differential equation for the control-limit function. This equation is explicitly solved for the case of discrete-valued offers, homogeneous Poisson arrivals, and Gamma distributed lifetime. We use the solution to analyze a detailed numerical example based on actual kidney transplant data.
The Venture Evaluation and Review Technique (VERT) is a computerized, mathematically oriented network-based simulation technique designed to analyze risk existing in three parameters of most concern to managers in new projects or ventures--time, cost, and performance. As such, the VERT technique is more powerful than techniques such as GERT, which are basically time and cost oriented. VERT has been successfully utilized to assess the risks involved in new ventures and projects, in the estimation of future capital requirements, in control monitoring, and in the overall evaluation of ongoing projects, programs, and systems. It has been helpful to management in cases where there is a requirement to make decisions with incomplete or inadequate information about the alternatives. An example illustrating the application of VERT to an operational planning problem--the evaluation of electric power generating methods--is presented.
The optimal control of arrivals to a two-station token ring network is analyzed. By adopting maximum system throughput under a system time-delay constraint as the optimality criterion, a social optimality problem is studied under the assumption that both stations have global information (i.e., the number of packets in each station). The controlled arrivals are assumed to be state-dependent Poisson streams with exponentially distributed service times. The optimality problem is formulated as a dynamic programming problem with a convex cost function. Using duality theory, it is then shown that the optimal control is of the switchover type when both queues have the same service rate and sufficiently large buffers. Nonlinear programming is used to numerically approximate the optimal local controls for comparison purposes. The results obtained under global and local information can be used to provide a measure of the tradeoff between maximum throughput efficiency and protocol complexity. Numerical examples illustrating the theoretical results are provided.
When a bidder's strategy in one auction will affect his competitor's behavior in subsequent auctions, bidding in a sequence of auctions can be modeled fruitfully as a multistage control process in which the control is the bidder's strategy while the state characterizes the competitors' behavior. This paper presents such a model in which the state transition represents the competitors' reaction to the bidder's strategy. Dynamic programming is used to derive the infinite horizon optimal bidding strategy. It is shown that in steady state this optimal strategy generalizes a previous result for equilibrium bidding strategy in "one-shot" auctions.
In this paper we study a generalized model for quantity (Cournot) oligopolistic competition. Our main goal is to understand Cournot competition when firms produce multiple differentiated products and face a variety of constraints. We first study the existence and uniqueness of Cournot equilibria under general constraints. The main focus of the paper is to compare the total society surplus and the total firms' profit under Cournot competition to the corresponding totals under a centralized setting (i.e., when a single firm controls all the products in the market, maximizing total surplus or total profit, respectively). Our goal is to understand how the presence of competition affects the overall society (which includes firms and consumers) as well as the overall firms' profit in the system, and also to determine the key drivers of the inefficiencies that arise due to competition.
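As a toy instance of the Cournot setting above (single product, linear inverse demand, no constraints; all parameter values are illustrative assumptions), sequential best-response iteration converges to the unique equilibrium because linear Cournot is an exact potential game:

```python
def cournot_equilibrium(a, b, costs, iters=500):
    """Sequential best-response iteration for single-product, linear-demand
    Cournot competition with inverse demand p = a - b * Q (a toy special
    case of the paper's multi-product constrained setting).

    Best response of firm i: q_i = max(0, (a - c_i - b * Q_{-i}) / (2b)).
    """
    q = [0.0] * len(costs)
    for _ in range(iters):
        for i, c in enumerate(costs):
            others = sum(q) - q[i]                      # rivals' total quantity
            q[i] = max(0.0, (a - c - b * others) / (2.0 * b))
    return q
```

With symmetric costs the closed form q_i = (a - c) / (b (n + 1)) serves as a check on the fixed point.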
R. Larson (1990) proposed a method to statistically infer the expected transient queue length during a busy period with Poisson arrivals, in O(n^5) time, solely from the n starting and stopping times of each customer's service during the busy period. Here, the authors develop a novel O(n^3) algorithm which uses those data to deduce transient queue lengths as well as the waiting times of each customer in the busy period. In a manner analogous to the Kalman filter, they also develop an O(n) online algorithm to dynamically update the current estimates for queue lengths after each departure. Moreover, they generalize their algorithms to the case of a time-varying Poisson process and to the case of i.i.d. interarrival times with an arbitrary distribution. Computational results that exhibit the speed and accuracy of these algorithms are reported.
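A brute-force way to see the inference idea (not the authors' O(n^3) or O(n) algorithms) is to rejection-sample the unobserved Poisson arrival epochs, which are uniform order statistics over the busy period conditioned on each customer arriving no later than its service start. The service start/stop times and evaluation grid below are invented illustrative data.

```python
import random

def infer_queue_lengths(starts, ends, t_grid, n_samples=500, seed=0):
    """Monte Carlo illustration of queue inference from service data only.

    starts[i], ends[i]: observed service start/stop times of customer i in a
    single-server busy period beginning at time 0 (customer 0 arrives at 0).
    Under Poisson arrivals, the remaining arrival epochs are uniform order
    statistics on (0, T], conditioned on customer i arriving no later than
    its service start.  Returns estimated mean queue lengths on t_grid.
    """
    rng = random.Random(seed)
    n, T = len(starts), ends[-1]
    est = [0.0] * len(t_grid)
    accepted = 0
    while accepted < n_samples:
        arr = [0.0] + sorted(rng.uniform(0, T) for _ in range(n - 1))
        if all(a <= s for a, s in zip(arr, starts)):      # feasible arrival order
            accepted += 1
            for k, t in enumerate(t_grid):
                in_system = sum(a <= t for a in arr) - sum(d <= t for d in ends)
                est[k] += in_system
    return [x / n_samples for x in est]
```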
This paper presents a method for real-time scheduling and routing of material in a Flexible Manufacturing System (FMS). It extends the earlier scheduling work of Kimemia and Gershwin. The FMS model includes machines that fail at random times and stay down for random lengths of time. The new element is the capability of different machines to perform some of the same operations. The times that different machines require to perform the same operation may differ. This paper includes a model, its analysis, a real-time algorithm, and examples.
We study a general approach to accelerating the convergence of the most widely used solution method for Markov decision processes with the total expected discounted reward criterion. Inspired by the monotone behavior of the contraction mappings in the feasible set of the linear programming problem equivalent to the MDP, we establish a class of operators that can be used in combination with a contraction mapping operator in the standard value iteration algorithm and its variants. We then propose two such operators, which can be easily implemented as part of the value iteration algorithm and its variants. Numerical studies show that the computational savings can be significant, especially when the discount factor approaches 1 and the transition probability matrix becomes dense, in which case the standard value iteration algorithm and its variants suffer from slow convergence.
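For concreteness, here is plain value iteration with an optional Gauss-Seidel sweep, one textbook example of an accelerating operator; the paper's proposed operators are different, and this sketch only illustrates the kind of combination it studies.

```python
def value_iteration(P, r, gamma, tol=1e-8, gauss_seidel=False):
    """Discounted-reward value iteration, optionally with a Gauss-Seidel
    sweep (a textbook accelerating operator; the paper proposes its own
    operator class to combine with the contraction mapping).

    P[a][s][j]: transition probability under action a; r[a][s]: reward.
    Returns (value vector, number of iterations to convergence).
    """
    n = len(P[0])
    v = [0.0] * n
    for it in range(1, 1000000):
        v_new = v[:] if gauss_seidel else [0.0] * n
        for s in range(n):
            src = v_new if gauss_seidel else v   # Gauss-Seidel reuses fresh values
            v_new[s] = max(
                r[a][s] + gamma * sum(P[a][s][j] * src[j] for j in range(n))
                for a in range(len(P))
            )
        if max(abs(v_new[s] - v[s]) for s in range(n)) < tol:
            return v_new, it
        v = v_new
```

On a small two-state, two-action example the Gauss-Seidel variant reaches the same values in fewer sweeps, which is the kind of saving the paper's operators aim to amplify.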
Dynamic spectrum access (DSA) is a new paradigm in radio frequency spectrum sharing. It allows unlicensed secondary users (SUs) to access the spectrum licensed to primary spectrum owners (POs) opportunistically. Market-driven secondary spectrum trading is an effective way to provide proper economic incentives for DSA and to achieve high secondary spectrum utilization efficiency. In this paper, we consider short-term secondary spectrum trading between one PO and multiple SUs in a hybrid spectrum market consisting of both the futures market (with contract buyers) and the spot market (with spot transaction buyers). We focus on expected spectrum efficiency maximization (E-SEM) under stochastic network information, taking into consideration both spatial spectrum reuse and information asymmetry. To solve this problem, we first compute an optimal policy that maximizes the ex-ante expected spectrum efficiency based on the stochastic distribution of network information, and then design a VCG-based mechanism that determines the real-time allocation and pricing under information asymmetry. With spatial spectrum reuse, the VCG mechanism is NP-hard. Thus, we further propose a heuristic solution based on a VCG-like mechanism with polynomial complexity, and systematically quantify the associated efficiency loss. Simulations show that (i) the optimal policy significantly outperforms the random and greedy allocation policies, with an average increase of 20% in terms of the expected spectrum efficiency, and (ii) the heuristic solution exhibits good and robust performance (reaching at least 70% of the optimal efficiency in our simulations).
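The VCG pricing logic the mechanism builds on can be sketched by brute force for tiny instances (the paper's setting makes exact VCG NP-hard under spatial reuse, hence its polynomial VCG-like heuristic; the bidder names, values, and feasibility oracle below are invented):

```python
from itertools import combinations

def vcg(bids, feasible):
    """Brute-force VCG allocation and payments for a tiny market.

    bids: dict bidder -> reported value
    feasible(S): whether the set S of bidders can win simultaneously
    (e.g., encoding spatial reuse / interference constraints).
    """
    bidders = list(bids)

    def best(excl=frozenset()):
        # Welfare-maximizing feasible winner set among bidders not in excl.
        avail = [b for b in bidders if b not in excl]
        cands = [set(c) for k in range(len(avail) + 1)
                 for c in combinations(avail, k) if feasible(set(c))]
        return max(cands, key=lambda s: sum(bids[b] for b in s))

    winners = best()
    # Each winner pays the externality it imposes on the other bidders.
    pay = {i: sum(bids[b] for b in best({i}))
              - sum(bids[b] for b in winners if b != i)
           for i in winners}
    return winners, pay
```

With two conflicting bidders, the higher bidder wins and pays the displaced bid, the classic second-price outcome.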
Growing demand, increasing diversity of services, and advances in transmission and switching technologies are prompting telecommunication companies to rapidly expand and modernize their networks. This paper develops and tests a decomposition methodology to generate cost-effective expansion plans, with performance guarantees, for one major component of the network hierarchy – the local access network. The model captures economies of scale in facility costs and tradeoffs between installing concentrators and expanding cables to accommodate demand growth. Our solution method exploits the special tree and routing structure of the expansion planning problem to incorporate valid inequalities, obtained by studying the problem’s polyhedral structure, in a dynamic program which solves an uncapacitated version of the problem. Computational results for three realistic test networks demonstrate that our enhanced dynamic programming algorithm, when embedded in a Lagrangian relaxation scheme (with problem preprocessing and local improvement), is very effective in generating good upper and lower bounds: Implemented on a personal computer, the method generates solutions within 1.2-7.0% of optimality. In addition to developing a successful solution methodology for a practical problem, this paper illustrates the possibility of effectively combining decomposition methods and polyhedral approaches.
We address the problem of scheduling a multiclass $M/M/m$ queue with Bernoulli feedback on $m$ parallel servers to minimize time-average linear holding costs. We analyze the performance of a heuristic priority-index rule, which extends Klimov's optimal solution to the single-server case: servers select preemptively customers with larger Klimov indices. We present closed-form suboptimality bounds (approximate optimality) for Klimov's rule, which imply that its suboptimality gap is uniformly bounded above with respect to (i) external arrival rates, as long as they stay within system capacity; and (ii) the number of servers. It follows that its relative suboptimality gap vanishes in a heavy-traffic limit, as external arrival rates approach system capacity (heavy-traffic optimality). We obtain simpler expressions for the special no-feedback case, where the heuristic reduces to the classical $c \mu$ rule. Our analysis is based on comparing the expected cost of Klimov's rule to the value of a strong linear programming (LP) relaxation of the system's region of achievable performance of mean queue lengths. In order to obtain this relaxation, we derive and exploit a new set of work decomposition laws for the parallel-server system. We further report on the results of a computational study on the quality of the $c \mu$ rule for parallel scheduling.
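In the no-feedback special case the heuristic reduces to the classical $c \mu$ rule, which is just a static priority ordering (the class costs and rates below are invented for illustration):

```python
def c_mu_order(holding_costs, service_rates):
    """Classical c-mu rule (the no-feedback special case discussed above):
    give preemptive priority to customer classes in decreasing order of
    the index c_k * mu_k.  Returns class indices, highest priority first.
    """
    index = [c * mu for c, mu in zip(holding_costs, service_rates)]
    return sorted(range(len(index)), key=lambda k: -index[k])
```

Classes that are cheap to hold but slow to serve sink to the bottom of the ordering, which is exactly the intuition behind the rule.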
We study the provision of an excludable public good to discuss whether the imposition of participation constraints is desirable. It is shown that this question may equivalently be cast as follows: should a firm that produces a public good receive tax revenues, or face a self-financing requirement? The main result is that the desirability of participation constraints is shaped by an equity-efficiency tradeoff: while first-best is out of reach with participation constraints, their imposition yields a more equitable distribution of the surplus. This result relies on an incomplete contracts perspective. With a benevolent mechanism designer, participation constraints are never desirable.
We consider the problem of finding an optimal history-dependent routing strategy on a directed graph weighted by stochastic arc costs when the decision maker is constrained by a travel-time budget and the objective is to optimize the expected value of a function of the budget overrun. Leveraging recent results related to the problem of maximizing the probability of termination within budget, we first propose a general formulation and solution method able to handle not only uncertainty but also tail risks. We then extend this general formulation to the robust setting, where the available knowledge of arc cost probability distributions is restricted to a given subset of their moments. This robust version takes the form of a continuous dynamic programming formulation with an inner generalized moment problem. We propose a general-purpose algorithm to solve a discretization scheme, together with a streamlined procedure for the case of scarce information limited to lower-order statistics. To illustrate the benefits of a robust policy, we run numerical experiments with field data from the Singapore road network.
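The nominal criterion the paper builds on, maximizing the probability of reaching the destination within the budget, admits a simple dynamic program over the remaining budget; the tiny network with integer arc-cost distributions below is an invented discretized example, not the Singapore data:

```python
from functools import lru_cache

def on_time_prob(succ, dest, origin, budget):
    """DP for the probability-of-on-time-arrival criterion with discrete
    stochastic arc costs (a toy discretization of the continuous setting).

    succ[v] = list of (w, [(cost, prob), ...]) outgoing arcs, costs >= 1,
    so the recursion on remaining budget terminates even with cycles.
    """
    @lru_cache(maxsize=None)
    def u(v, t):
        if t < 0:
            return 0.0                     # budget overrun
        if v == dest:
            return 1.0                     # arrived within budget
        if not succ.get(v):
            return 0.0                     # dead end
        return max(sum(p * u(w, t - c) for c, p in dist)
                   for w, dist in succ[v])
    return u(origin, budget)
```

With a direct but risky arc competing against a safe two-hop detour, the best choice flips as the budget grows, illustrating why the optimal strategy is history-dependent through the remaining budget.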
It is often the case that the computed optimal solution of an optimization problem cannot be implemented directly, irrespective of data accuracy, due to (i) technological limitations (such as physical tolerances of machines or processes), (ii) the deliberate simplification of a model to keep it tractable (by ignoring certain types of constraints that pose computational difficulties), and/or (iii) human factors (getting people to "do" the optimal solution). Motivated by this observation, we present a modeling paradigm called "fabrication-adaptive optimization" for treating issues of implementation/fabrication. We develop computationally focused theory and algorithms, and we present computational results for incorporating considerations of implementation/fabrication into constrained optimization problems that arise in photonic crystal design. The fabrication-adaptive optimization framework stems from the robust regularization of a function. When the feasible region is not a normed space (as typically encountered in application settings), the fabrication-adaptive optimization framework typically yields a non-convex optimization problem. (In the special case where the feasible region is a finite-dimensional normed space, we show that fabrication-adaptive optimization can be recast as an instance of modern robust optimization.) We study a variety of problems with special structures on functions, feasible regions, and norms for which computation is tractable, and develop an algorithmic scheme for solving these problems in spite of the challenges of non-convexity. We apply our methodology to compute fabrication-adaptive designs of two-dimensional photonic crystals with a variety of prescribed features.
In this paper, we discuss the use of mixed integer rounding (MIR) inequalities to solve mixed integer programs. MIR inequalities are essentially Gomory mixed integer cuts. However, as we wish to use problem structure, we insist that MIR inequalities be generated from constraints or simple aggregations of constraints of the original problem. This idea is motivated by the observation that several strong valid inequalities based on specific problem structure can be derived as MIR inequalities. Here we present and test a separation routine for such MIR inequalities that includes a heuristic row aggregation procedure to generate a single knapsack plus continuous variables constraint, complementation of variables, and finally the generation of an MIR inequality. Inserted in a branch-and-cut system, the results suggest that such a routine is a useful additional tool for tackling a variety of mixed integer programming problems.
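The basic MIR step, before any aggregation or complementation, follows from the set {(y, s) in Z x R+ : y + s >= b}, for which y + s/f0 >= ceil(b) is valid with f0 = b - floor(b) > 0. A minimal sketch for a single >= row with nonnegative integer variables and one slack (illustrative only; the paper's separation routine adds row aggregation and variable complementation first):

```python
import math

def mir_cut(a, b):
    """MIR inequality for a single row  sum_j a_j x_j + s >= b  with
    x_j integer >= 0 and slack s >= 0.

    Coefficients with fractional part >= f0 are rounded up to their
    ceiling; the rest keep their fractional part scaled by 1/f0.
    Returns (pi, sigma, rhs) for  sum_j pi_j x_j + sigma * s >= rhs.
    """
    f0 = b - math.floor(b)
    if f0 == 0:
        return list(a), 1.0, b            # integral rhs: nothing to round
    pi = [math.floor(aj) + min(aj - math.floor(aj), f0) / f0 for aj in a]
    return pi, 1.0 / f0, math.ceil(b)
```

For example, applied to 1.5 x1 + 0.2 x2 + s >= 1.3 (f0 = 0.3), the coefficient of x1 rounds up to 2 while that of x2 is scaled to 0.2/0.3, and the right-hand side rounds to 2.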
Motivated by the important problem of congestion costs in air transportation (estimated to be $2 billion in 1991) and observing that ground delays are preferable to airborne delays, we have formulated and studied several integer programming models to assign ground-holding delays optimally in a general network of airports, so that the total (ground plus airborne) delay cost of all flights is minimized. All previous research on this problem has been restricted to the single-airport case, which neglects "down-the-road" effects due to transmission of delays between successive flights performed by the same aircraft. We formulate several models, and then propose a heuristic algorithm which finds a feasible solution to the integer program by rounding the optimal solution of the LP relaxation. Finally, we present extensive computational results with the goal of obtaining qualitative insights on the behavior of the problem under various combinations of the input parameters. We demonstrate that the problem can be solved in reasonable computation times for networks with as many as 6 airports and 3,000 flights.