Operations Research

Published by Institute for Operations Research and the Management Sciences

Online ISSN: 1526-5463

·

Print ISSN: 0030-364X

Articles


A Stochastic Model to Measure Patient Effects Stemming from Hospital-Acquired Infections

December 1982

·

52 Reads

George T. Kastner

·

We introduce a Markov chain model to represent a patient's path in terms of the number and type of infections s/he may have acquired during a hospitalization period. The model allows for categories of patient diagnosis, surgery, the four major types of nosocomial (hospital-acquired) infections, and discharge or death. Data from a national medical records survey including 58,647 patients enable us to estimate transition probabilities and, ultimately, perform statistical tests of fit, including a validation test. Novel parameterizations (functions of the transition matrix) are introduced to answer research questions on time-dependent infection rates, time to discharge or death as a function of patient diagnostic groups, and conditional infection rates reflecting intervening variables (e.g., surgery).
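
Such parameterizations rest on standard absorbing-chain algebra: with transient-to-transient block Q and transient-to-absorbing block R, the fundamental matrix N = (I - Q)^{-1} gives expected state visits, N1 the expected time to discharge or death, and NR the absorption probabilities. A minimal sketch with a hypothetical transition matrix (not the paper's estimates):

```python
import numpy as np

# Hypothetical transient states (diagnosis, surgery, infection) and
# absorbing states (discharge, death) -- NOT the paper's estimated matrix.
Q = np.array([[0.2, 0.5, 0.1],     # transient -> transient
              [0.1, 0.3, 0.3],
              [0.0, 0.1, 0.5]])
R = np.array([[0.15, 0.05],        # transient -> {discharge, death}
              [0.25, 0.05],
              [0.30, 0.10]])

N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix: expected visits
t = N @ np.ones(3)                 # expected periods until absorption
A = N @ R                          # P(discharge), P(death) by start state
print("expected stay:", t)
print("absorption probabilities:\n", A)
```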

Administrative Days in Acute Care Facilities: A Queueing-Analytic Approach

February 1987

·

11 Reads

After acute care services are no longer required, a patient in an acute care hospital often must remain there while he or she awaits the provision of extended care services by a nursing home, through social support services, or by a home health care service. This waiting period is often referred to as "administrative days" because the time is spent in the acute facility not for medical reasons, but rather for administrative reasons. In this paper we use a queueing-analytic approach to describe the process by which patients await placement. We model the situation using a state-dependent placement rate for patients backed up in the acute care facility. We compare our model results with data collected from a convenience sample of 7 hospitals in New York State. We conclude with a discussion of the policy implications of our models.

Simulation of a Health Insurance Market With Adverse Selection

December 1982

·

149 Reads

A health insurance market is examined in which individuals with a history of high utilization of health care services tend to select fee-for-service (FFS) insurance when offered a choice between FFS and health maintenance organizations (HMOs). In addition, HMOs are assumed to practice community rating of employee groups. Based on these observations and health plan enrollment and premium data from Minneapolis-St. Paul, a deterministic simulation model is constructed to predict equilibrium market shares and premiums for HMO and FFS insurers within a firm. Despite the fact that favorable selection enhances their ability to compete with FFS insurers, the model predicts that HMOs maximize profits at less than 100% market share, and at a lower share than they could conceivably capture. That is, HMOs would not find it to their advantage to drive FFS insurers from the market even if they could. In all cases, however, the profit-maximizing HMO premium is greater than the experience-rated premium and, thus, the average health insurance premium per employee in firms offering both HMOs and FFS insurance is predicted to be greater than in firms offering one experience-rated plan. The model may be used to simulate the effects of varying the employer's method of contributing to health insurance premiums. Several contribution methods are compared. Employers who offer FFS and HMO insurance and pay the full cost of the lowest-cost plan are predicted to have lower average total premiums (employer plus employee contributions) than employers who pay any fixed percentage of the cost of each plan.
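
The equilibrium mechanics can be illustrated with a toy fixed-point iteration in which a community-rated HMO premium must equal a loading factor times the average cost of whoever enrolls. All numbers and the enrollment rule below are hypothetical stand-ins, not the Minneapolis-St. Paul data or the paper's model:

```python
import numpy as np

# Toy adverse-selection equilibrium: high-cost individuals prefer FFS,
# so HMO enrollment is modeled (crudely) as expected cost falling below
# a threshold that scales with the HMO premium.
costs = np.linspace(500, 5000, 200)   # employees' expected annual costs
loading = 1.1                          # premium = loading * average cost
hmo_premium = 2000.0                   # initial guess

for _ in range(200):
    in_hmo = costs < 1.5 * hmo_premium / loading
    new_premium = loading * costs[in_hmo].mean()   # community rating
    if abs(new_premium - hmo_premium) < 1e-6:
        break
    hmo_premium = new_premium

print(f"equilibrium HMO premium: {hmo_premium:.2f}, "
      f"HMO market share: {in_hmo.mean():.1%}")
```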

Decision Aid Development for Use in Ambulatory Health Care Settings

June 1982

·

18 Reads

An approach is presented using interactive microcomputers for the development of diagnostic decision aids applicable to some complaints encountered in ambulatory care. The central feature of the descriptive phase of the approach is the use of the underlying (and perhaps dynamic) state of patient health. The central feature of the prescriptive phase of the approach is quick, simple assessment which produces a set of nondominated diagnostic tests, the selection of which is biased by the subjectively determined disease(s) that the diagnostician wishes to rule out or confirm. We present an application of the approach to the complaint, "diarrhea of recent onset in adults," discuss the hardware/software implementation, and summarize preliminary evaluation results.

Waiting Time in a Multi-Server Cutoff-Priority Queue, and Its Application to an Urban Ambulance Service

October 1980

·

53 Reads

We consider a priority queue in steady state with N servers, two classes of customers, and a cutoff service discipline. Low priority arrivals are "cut off" (refused immediate service) and placed in a queue whenever N1 or more servers are busy, in order to keep N-N1 servers free for high priority arrivals. A Poisson arrival process for each class, and a common exponential service rate, are assumed. Two models are considered: one where high priority customers queue for service and one where they are lost if all servers are busy at an arrival epoch. Results are obtained for the probability of n servers busy, the expected low priority waiting time, and (in the case where high priority customers do not queue) the complete low priority waiting time distribution. The results are applied to determine the number of ambulances required in an urban fleet which serves both emergency calls and low priority patient transfers.
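
For intuition, consider a simplified variant in which cut-off low-priority calls are cleared rather than queued; the number of busy servers is then a birth-death chain whose stationary distribution follows from the usual product form. A sketch of that simplification, not the paper's full analysis:

```python
def busy_probabilities(lam_hi, lam_lo, mu, N, N1):
    """P(n servers busy) when low-priority arrivals are accepted only
    while fewer than N1 servers are busy (and cleared otherwise), and
    high-priority arrivals are lost when all N servers are busy."""
    p = [1.0]
    for n in range(N):
        rate = lam_hi + lam_lo if n < N1 else lam_hi   # birth rate in state n
        p.append(p[-1] * rate / ((n + 1) * mu))        # detailed balance
    total = sum(p)
    return [x / total for x in p]

p = busy_probabilities(lam_hi=4.0, lam_lo=3.0, mu=1.0, N=10, N1=7)
print("P(low priority cut off) =", sum(p[7:]))
print("P(high priority lost)   =", p[10])
```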

An Annotated Bibliography of Decision Analytic Applications to Health Care

February 1980

·

12 Reads

This paper describes 110 applications of decision analysis to health care. Each paper is characterized according to the particular problem it addresses and the methods employed in the application. These applications span 15 years of study and are reported in a widely dispersed literature. Nearly half of the published articles appear in journals with a medical audience and more than 25% of the studies remain unpublished. The major areas of application identified in this review have been the evaluation of alternatives in treatment and health policy planning. Studies discussing conceptual issues in the application of decision analysis represent a substantial portion of those identified. Almost equal numbers of applications involve the use of single and multiattribute utilities in scaling decision outcomes and relatively few apply to group utilities. General discussions of decision analysis methods and applications focused on probability assessments/analyses represent the other major categories of studies cited.

Decision Analysis Assessment of a National Medical Study

February 1980

·

10 Reads

To decide whether or not to undertake an expensive national survey to determine the effectiveness of infection control, we devised a quantitative decision model to analyze the costs and probabilities of successful study outcomes. The result allowed us to determine whether the proposed study method and design would provide sufficient statistical power to ensure meaningful conclusions from the research. The model was robust in assessing the adequacy of method accuracy and, within the range of assumptions specified, it suggested that the project should be undertaken. The results helped to secure official approval and funding for this large-scale research project. A novel approach to evaluating sensitivity analysis is included. As constructed, the model is applicable to other projects in applied research and, with some modification, to projects in basic research as well.

Application of Multi-Attribute Utility Theory to Measure Social Preference for Health States

December 1982

·

976 Reads

A four-attribute health state classification system designed to uniquely categorize the health status of all individuals two years of age and over is presented. A social preference function defined over the health state classification system is required. Standard multi-attribute utility theory is investigated for the task, problems are identified and modifications to the standard method are proposed. The modified method is field-tested in a survey research project involving 112 home interviews. Results are presented and discussed in detail for both the social preference function and the performance of the modified method. A recommended social preference function is presented, complete with a range of uncertainty. The modified method is found to be applicable to the task--no insurmountable difficulties are encountered. Recommendations are presented, based on our experience, for other investigators who may be interested in reapplying the method in other studies.


Figure 1: State transition diagram of the OBDM.
Figure 2: Optimal age-dependent policy to perform biopsy.
Optimal Breast Biopsy Decision-Making Based on Mammographic Features and Demographic Factors

November 2010

·

193 Reads

Breast cancer is the most common non-skin cancer affecting women in the United States, where every year more than 20 million mammograms are performed. Breast biopsy is commonly performed on suspicious findings on mammograms to confirm the presence of cancer. Currently, 700,000 biopsies are performed annually in the U.S.; 55%-85% of these biopsies ultimately are found to be benign breast lesions, resulting in unnecessary treatments, patient anxiety, and expenditures. This paper addresses the decision problem faced by radiologists: When should a woman be sent for biopsy based on her mammographic features and demographic factors? This problem is formulated as a finite-horizon discrete-time Markov decision process. The optimal policy of our model shows that the decision to biopsy should take the patient's age into account; in particular, an older patient's risk threshold for biopsy should be higher than that of a younger patient. When applied to the clinical data, our model outperforms radiologists in the biopsy decision-making problem. This study also derives structural properties of the model, including sufficiency conditions that ensure the existence of a control-limit type policy and nondecreasing control limits with age.
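
The model class is a finite-horizon MDP solved by backward induction, with biopsy chosen whenever its immediate value beats the continuation value of waiting. A stylized sketch with hypothetical payoffs and risk dynamics (not the paper's calibrated model), showing how a risk threshold per epoch falls out of the computation:

```python
import numpy as np

T, S = 40, 101                      # decision epochs (ages), risk grid size
risks = np.linspace(0.0, 1.0, S)    # P(malignancy | mammographic features)

# Hypothetical payoffs: detection benefit minus biopsy cost, versus the
# expected harm of waiting while risk drifts upward.
biopsy_value = 10.0 * risks - 2.0
V = np.zeros(S)                                  # terminal value
thresholds = []

for t in reversed(range(T)):
    drifted = np.minimum(risks + 0.02, 1.0)      # risk drift if we wait
    wait_value = 0.95 * np.interp(drifted, risks, V) - 5.0 * risks
    act = biopsy_value >= wait_value             # control-limit structure
    thresholds.append(risks[act.argmax()] if act.any() else None)
    V = np.where(act, biopsy_value, wait_value)

thresholds.reverse()                             # index by epoch/age
print("risk thresholds, first and last epoch:", thresholds[0], thresholds[-1])
```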

Bounds on a Trauma Outcome Function via Optimization

February 1992

·

13 Reads

One measure of the effectiveness of institutional trauma and burn management based on collected patient data involves the computation of a standard normal Z statistic. A potential weakness of the measure arises from incomplete patient data. In this paper, we apply methods of fractional programming and global optimization to efficiently calculate bounds on the computed effectiveness of an institution. The measure of effectiveness (i.e., the trauma outcome function) is briefly described, the optimization problems associated with its upper and lower bounds are defined and characterized, and appropriate solution procedures are developed. We solve an example problem to illustrate the method.

A Mixed-Integer Goal Programming Model for Nursing Service Budgeting

October 1981

·

33 Reads

This paper presents a mixed-integer goal programming model for expense budgeting in a hospital nursing department. The model incorporates several different objectives based upon such considerations as cost containment and providing appropriate nursing hours for delivering quality nursing care. Also considered are possible trade-offs among full-time, part-time and overtime nurses on weekdays as well as weekends. The budget includes vacation, sick leave, holiday, and seniority policies of a hospital and various constraints on a hospital nursing service imposed by nursing unions. The results are based upon data from a study hospital and indicate that the model is practical for budgeting in a hospital nursing department.

Methods for Solving Nonlinear Equations Used in Evaluating Emergency Vehicle Busy Probabilities

December 1991

·

23 Reads

In this paper we present two iterative methods for solving a model to evaluate busy probabilities for Emergency Medical Service (EMS) vehicles. The model considers location-dependent service times and is an alternative to the mean service calibration method, a procedure used with the Hypercube Model to accommodate travel times and location-dependent service times. We use monotonicity arguments to prove that one iterative method always converges to a solution. A large computational experiment suggests that both methods work satisfactorily in EMS systems with low ambulance busy probabilities, and that the method that always converges to a solution performs significantly better in EMS systems with high busy probabilities.
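
Both methods share the generic shape of a fixed-point iteration p <- f(p) on the vector of busy probabilities. A schematic sketch with a made-up two-ambulance interaction, not the paper's flow-balance equations:

```python
def fixed_point(f, p0, tol=1e-10, max_iter=100_000):
    """Iterate p <- f(p) to convergence; monotone maps of [0,1]^n into
    itself (the structure exploited in the convergence proof) behave well."""
    p = p0
    for _ in range(max_iter):
        q = f(p)
        if max(abs(a - b) for a, b in zip(p, q)) < tol:
            return q
        p = q
    raise RuntimeError("no convergence")

# Made-up interaction: each unit's busy probability grows with its own
# offered load and with the fraction of calls its partner cannot take.
lam, tau = (0.3, 0.4), (1.0, 1.0)         # call rates, mean service times
f = lambda p: tuple(min(1.0, lam[i] * tau[i] * (1 + p[1 - i]))
                    for i in range(2))
print(fixed_point(f, (0.0, 0.0)))          # ~(0.477, 0.591)
```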

A Model for Making Project Funding Decisions at the National Cancer Institute

December 1992

·

126 Reads

This paper describes the development of a model for making project funding decisions at The National Cancer Institute (NCI). The American Stop Smoking Intervention Study (ASSIST) is a multiple-year, multiple-site demonstration project, aimed at reducing smoking prevalence. The initial request for ASSIST proposals was answered by about twice as many states as could be funded. Scientific peer review of the proposals was the primary criterion used for funding decisions. However, a modified Delphi process made explicit several criteria of secondary importance. A structured questionnaire identified the relative importance of these secondary criteria, some of which we incorporated into a composite preference function. We modeled the proposal funding decision as a zero-one program, and adjusted the preference function and available budget parametrically to generate many suitable outcomes. The actual funding decision, identified by our model, offers significant advantages over manually generated solutions found by experts at NCI.


Equilibrium Analysis of Disaggregate Facility Choice Systems Subject to Congestion-Elastic Demand

April 1985

·

17 Reads

This paper considers the impact of congestion on the spatial distribution of customer utilization of service facilities in a stochastic-dynamic environment. Previous research has assumed that the rate of demand for service is independent of the attributes of the facilities. We consider the more general case in which facility utilization is determined both by individual facility choice (based on the stochastic disaggregate choice mechanism) and by the rate of demand for service. We develop generalized results for proving that equilibria exist and describe sufficient conditions for the uniqueness and global stability of these equilibria. These conditions depend upon the elasticity of demand with respect to the level of congestion at the facilities, and on whether customers are congestion-averse or are congestion-loving. Finally, we examine special cases when these conditions are satisfied.

Cyclic Scheduling via Integer Programs with Circular Ones

October 1980

·

535 Reads

A fundamental problem of cyclic staffing is to size and schedule a minimum-cost workforce so that sufficient workers are on duty during each time period. This may be modeled as an integer linear program with a cyclically structured 0-1 constraint matrix. We identify a large class of such problems for which special structure permits the ILP to be solved parametrically as a bounded series of network flow problems. Moreover, an alternative solution technique is shown in which the continuous-valued LP is solved and the result rounded in a special way to yield an optimum solution to the ILP.
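
A circular-ones constraint matrix is straightforward to assemble: column j covers a wrap-around window of consecutive periods. A sketch of the underlying ILP on a hypothetical 8-period instance, solved here with SciPy's general MILP interface rather than the paper's parametric network-flow method:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

P, shift_len = 8, 3                            # periods per cycle; shift spans 3
demand = np.array([2, 4, 5, 3, 6, 4, 2, 3])    # hypothetical requirements

# Circular ones: a worker starting in period j is on duty in periods
# j, j+1, ..., j+shift_len-1 (mod P).
A = np.zeros((P, P))
for j in range(P):
    for k in range(shift_len):
        A[(j + k) % P, j] = 1

res = milp(c=np.ones(P),                               # minimize workforce
           constraints=LinearConstraint(A, demand, np.inf),
           integrality=np.ones(P),
           bounds=Bounds(lb=0))
print("starts per period:", res.x, "total:", res.fun)
```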

Controlling Co-Epidemics: Analysis of HIV and Tuberculosis Infection Dynamics

February 2008

·

123 Reads

A co-epidemic arises when the spread of one infectious disease stimulates the spread of another infectious disease. Recently, this has happened with human immunodeficiency virus (HIV) and tuberculosis (TB). We develop two variants of a co-epidemic model of two diseases. We calculate the basic reproduction number (R(0)), the disease-free equilibrium, and the quasi-disease-free equilibria, which we define as the existence of one disease along with the complete eradication of the other disease, and the co-infection equilibria for specific conditions. We determine stability criteria for the disease-free and quasi-disease-free equilibria. We present an illustrative numerical analysis of the HIV-TB co-epidemics in India that we use to explore the effects of hypothetical prevention and treatment scenarios. Our numerical analysis demonstrates that exclusively treating HIV or TB may reduce the targeted epidemic, but can subsequently exacerbate the other epidemic. Our analyses suggest that coordinated treatment efforts that include highly active antiretroviral therapy for HIV, latent TB prophylaxis, and active TB treatment may be necessary to slow the HIV-TB co-epidemic. However, treatment alone may not be sufficient to eradicate both diseases. Increased disease prevention efforts (for example, those that promote condom use) may also be needed to extinguish this co-epidemic. Our simple model of two synergistic infectious disease epidemics illustrates the importance of including the effects of each disease on the transmission and progression of the other disease.
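
A heavily simplified coupled-compartment sketch conveys the qualitative synergy: each infection multiplies susceptibility to the other by a factor k. Structure and parameters are hypothetical cartoons, far coarser than the paper's model:

```python
import numpy as np

def derivs(x, beta_h=0.3, beta_t=0.25, gamma=0.1, k=3.0):
    """x = (S, H, T, C): susceptible, HIV-only, TB-only, co-infected
    fractions. k > 1 encodes each disease amplifying the other."""
    S, H, T, C = x
    fh, ft = beta_h * (H + C), beta_t * (T + C)   # forces of infection
    dS = -(fh + ft) * S + gamma * T               # TB recovery returns to S
    dH = fh * S - k * ft * H + gamma * C
    dT = ft * S - k * fh * T - gamma * T
    dC = k * ft * H + k * fh * T - gamma * C
    return np.array([dS, dH, dT, dC])

x = np.array([0.98, 0.01, 0.01, 0.0])   # initial prevalences
for _ in range(5000):                   # forward-Euler integration
    x = x + 0.1 * derivs(x)
print(dict(zip("SHTC", x.round(3))))
```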

A Holistic Methodology for Modeling Consumer Response to Innovation

February 1983

·

329 Reads

A general structural equation model for representing consumer response to innovation is derived and illustrated. The approach both complements and extends an earlier model proposed by Hauser and Urban. Among other benefits, the model is able to take measurement error into account explicitly, to estimate the intercorrelation among exogenous factors if these exist, to yield a unique solution in a statistical sense, and to test complex hypotheses (e.g., systems of relations, simultaneity, feedback) associated with the measurement of consumer responses and their impact on actual choice behavior. In addition, the procedures permit one to model environmental and managerially controllable stimuli as they constrain and influence consumer choice. Limitations of the procedures are discussed and related to existing approaches. Included in the discussion is a development of four generic response models designed to provide a framework for modeling how consumers behave and how managers might better approach the design of products, persuasive appeals, and other controllable factors in the marketing mix.

Mass Screening Models for Contagious Diseases with No Latent Period

In this paper, a simplified model describing the stochastic process underlying the etiology of contagious and noncontagious diseases with mass screening is developed. Typical examples might include screening of tuberculosis in urban ghetto areas, venereal diseases in the sexually active, or AIDS in high risk population groups. The model is addressed to diseases which have zero or negligible latent periods. In the model, it is assumed that the reliabilities of the screening tests are constant, and independent of how long the population unit has the disease. Both tests with perfect and imperfect reliabilities are considered. It is shown that most of the results of a 1978 study by W.P. Pierskalla and J.A. Voelker for noncontagious diseases can be generalized for contagious diseases. A mathematical program for computing the optimal test choice and screening periods is presented. It is shown that the optimal screening schedule is equally spaced for tests with perfect reliability. Other properties relating to the managerial problems of screening frequencies, test selection, and resource allocation are also presented.

Optimal Ordering Policies for Continuous Review Perishable Inventory Models

April 1980

·

79 Reads

This paper extends the notions of perishable inventory models to the realm of continuous review inventory systems. The traditional perishable inventory costs of ordering, holding, shortage or penalty, disposal and revenue are incorporated into the continuous review framework. The type of policy that is optimal with respect to long run average expected cost is presented for both the backlogging and lost-sales models. In addition, for the lost-sales model the cost function is presented and analyzed.

An Iterative Estimation and Validation Procedure for Specification of Semi-Markov Models with Application to Hospital Patient Flow

December 1982

·

286 Reads

This article presents a methodology to identify and specify a continuous time semi-Markov model of population flow within a network of service facilities. An iterative procedure of state space definition, population disaggregation, and parameter estimation leads to the specification of a model which satisfies the underlying semi-Markov assumptions. We also present a test of the impact of occupancy upon realizations of population flows. The procedure is applied to data describing the movement of obstetric patients in a large university teaching hospital. We use the model to predict length-of-stay distributions. Finally, we compare these results with those that would have been obtained without the procedure, and show the modified model to be superior.

A Stochastic Service Network Model with Application to Hospital Facilities

February 1981

·

259 Reads

This paper presents a methodology for estimating expected utilization and service level for a class of capacity constrained service network facilities operating in a stochastic environment. A semi-Markov process describes the flows of customers (patients) through a network of service units. We model the case where one of the units has finite capacity and no queues are allowed to form. We show that the expected level of utilization and service can be computed from a simple linear relationship based on (a) the equilibrium arrival rates at each unit which are associated with the case of infinite capacity, (b) mean holding times for each unit, and (c) the probability that the finite capacity unit is at full capacity. We use Erlang's loss formula to calculate the probability of full capacity, show this calculation to be exact for two cases, and recommend its use as an approximation in the general case. We test the accuracy of the approximation on a set of published data. In the discussion, we present a technique for analyzing collected patient flow data using the results of this methodology.
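
Erlang's loss formula itself is computed stably with the standard recursion B(0, a) = 1, B(c, a) = aB(c-1, a) / (c + aB(c-1, a)); a small sketch:

```python
def erlang_b(servers: int, offered_load: float) -> float:
    """Blocking probability for an M/G/c/c loss system (Erlang B)."""
    b = 1.0                                   # B(0, a)
    for c in range(1, servers + 1):
        b = offered_load * b / (c + offered_load * b)
    return b

# Hypothetical 20-bed finite-capacity unit offered 15 Erlangs of load:
print("P(unit at full capacity) =", erlang_b(20, 15.0))
```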

Hospital Profit Planning under Medicare Reimbursement

April 1984

·

6 Reads

The federal Medicare regulations reimburse hospitals on a pro rata share of the hospital's cost. Hence, to meet its financial requirements, a hospital is forced to shift more of the financial burdens onto its private patients. This procedure has contributed to double-digit inflation in hospital prices and to proposed federal regulation to control the rate of increase in hospital revenues. In this regulatory environment, we develop nonlinear programming pricing and cost allocation models to aid hospital administrators in meeting their profit-maximizing and profit-satisficing goals. The models enable administrators to explore tactical issues such as: (i) studying the relationship between a voluntary or legislated cap on a hospital's total revenues and the hospital's profitability, (ii) identifying those departments within the hospital that are the most attractive candidates for cost reduction or cost containment efforts, and (iii) isolating those services that should be singled out by the hospital manager for renegotiation of the prospective or "customary and reasonable" cap. Finally, the modeling approach is helpful in explaining the departmental cross subsidies observed in practice, and can be of aid to federal administrators in assessing the impacts of proposed changes in the Medicare reimbursement formula.

Perishable Inventory Theory: A Review

August 1982

·

11,442 Reads

This paper reviews the relevant literature on the problem of determining suitable ordering policies for both fixed life perishable inventory, and inventory subject to continuous exponential decay. We consider both deterministic and stochastic demand for single and multiple products. Both optimal and suboptimal order policies are discussed. In addition, a brief review of the application of these models to blood bank management is included. The review concludes with a discussion of some of the interesting open research questions in the area.

Operations planning with VERT

August 1981

·

38 Reads

The Venture Evaluation and Review Technique (VERT) is a computerized, mathematically oriented network-based simulation technique designed to analyze risk existing in three parameters of most concern to managers in new projects or ventures--time, cost, and performance. As such, the VERT technique is more powerful than techniques such as GERT, which are basically time and cost oriented. VERT has been successfully utilized to assess the risks involved in new ventures and projects, in the estimation of future capital requirements, in control monitoring, and in the overall evaluation of ongoing projects, programs, and systems. It has been helpful to management in cases where there is a requirement to make decisions with incomplete or inadequate information about the alternatives. An example describing the application of VERT to an operational planning problem--the evaluation of electric power generating methods--is illustrated.

Optimal Control of Arrivals to Token Ring Networks with Exhaustive Service Discipline

July 1990

·

14 Reads

The optimal control of arrivals to a two-station token ring network is analyzed. By adopting a maximum system throughput under a system time-delay optimality criterion, a social optimality problem is studied under the assumption that both stations have global information (i.e., the number of packets in each station). The controlled arrivals are assumed to be state-dependent Poisson streams and have exponentially distributed service times. The optimality problem is formulated as a dynamic programming problem with a convex cost function. Using duality theory, it is then shown that the optimal control is of switchover type when both queues have the same service rate and sufficiently large buffers. Nonlinear programming is used to numerically approximate the optimal local controls for comparison purposes. The results obtained under global and local information can be used to provide a measure of the tradeoff between maximum throughput efficiency and protocol complexity. Numerical examples illustrating the theoretical results are provided.

Optimal Bidding in Sequential Auctions

December 1974

·

34 Reads

When a bidder's strategy in one auction will affect his competitor's behavior in subsequent auctions, bidding in a sequence of auctions can be modeled fruitfully as a multistage control process in which the control is the bidder's strategy while the state characterizes the competitors' behavior. This paper presents such a model in which the state transition represents the competitors' reaction to the bidder's strategy. Dynamic programming is used to derive the infinite horizon optimal bidding strategy. It is shown that in steady state this optimal strategy generalizes a previous result for equilibrium bidding strategy in "one-shot" auctions.

Deducing Queueing from Transactional Data: The Queue Inference Engine, Revisited

R. Larson (1990) proposed a method to statistically infer the expected transient queue length during a busy period with Poisson arrivals in O(n^5) time, solely from the n starting and stopping times of each customer's service during the busy period. Here, the authors develop a novel O(n^3) algorithm which uses those data to deduce transient queue lengths as well as the waiting times of each customer in the busy period. In a manner analogous to the Kalman filter, they also develop an O(n) online algorithm to dynamically update the current estimates for queue lengths after each departure. Moreover, they generalize their algorithms for the case of a time-varying Poisson process and also for the case of i.i.d. interarrival times with an arbitrary distribution. Computational results that exhibit the speed and accuracy of these algorithms are reported.

Dynamic Scheduling and Routing for Flexible Manufacturing Systems that Have Unreliable Machines

April 1987

·

22 Reads

This paper presents a method for real-time scheduling and routing of material in a Flexible Manufacturing System (FMS). It extends the earlier scheduling work of Kimemia and Gershwin. The FMS model includes machines that fail at random times and stay down for random lengths of time. The new element is the capability of different machines to perform some of the same operations. The times that different machines require to perform the same operation may differ. This paper includes a model, its analysis, a real-time algorithm, and examples.

Acceleration Operators in the Value Iteration Algorithms for Markov Decision Processes

July 2005

·

65 Reads

We study the general approach to accelerating the convergence of the most widely used solution method of Markov decision processes with the total expected discounted reward. Inspired by the monotone behavior of the contraction mappings in the feasible set of the linear programming problem equivalent to the MDP, we establish a class of operators that can be used in combination with a contraction mapping operator in the standard value iteration algorithm and its variants. We then propose two such operators, which can be easily implemented as part of the value iteration algorithm and its variants. Numerical studies show that the computational savings can be significant, especially when the discount factor approaches 1 and the transition probability matrix becomes dense, in which case the standard value iteration algorithm and its variants suffer from slow convergence.
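
For context, the baseline being accelerated is standard value iteration. A generic sketch on a random finite MDP, including a Gauss-Seidel sweep as one familiar accelerator (the paper's proposed operators are different and are not shown here):

```python
import numpy as np

def value_iteration(P, r, gamma, tol=1e-8, gauss_seidel=True):
    """P[a]: S x S transition matrix; r[a]: S-vector of rewards.
    The Gauss-Seidel variant reuses freshly updated values within a
    sweep, often converging in fewer passes than the Jacobi update."""
    A, S = len(P), len(r[0])
    V = np.zeros(S)
    while True:
        V_prev = V.copy()
        if gauss_seidel:
            for s in range(S):          # in-place, state by state
                V[s] = max(r[a][s] + gamma * P[a][s] @ V for a in range(A))
        else:
            V = np.max([r[a] + gamma * P[a] @ V_prev for a in range(A)], axis=0)
        if np.max(np.abs(V - V_prev)) < tol:
            return V

rng = np.random.default_rng(0)
P = [rng.dirichlet(np.ones(5), size=5) for _ in range(2)]   # 2 actions
r = [rng.random(5) for _ in range(2)]
print(value_iteration(P, r, gamma=0.95))
```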

Combining Spot and Futures Markets: A Hybrid Market Approach to Dynamic Spectrum Access

May 2014

·

102 Reads

Dynamic spectrum access (DSA) is a new paradigm in radio frequency spectrum sharing. It allows unlicensed secondary users (SUs) to access the spectrum licensed to primary spectrum owners (POs) opportunistically. Market-driven secondary spectrum trading is an effective way to provide proper economic incentive for DSA, and to achieve high secondary spectrum utilization efficiency in DSA. In this paper, we consider the short-term secondary spectrum trading between one PO and multiple SUs in a hybrid spectrum market consisting of both the futures market (with contract buyers) and the spot market (with spot transaction buyers). We focus on the expected spectrum efficiency maximization (E-SEM) under stochastic network information, taking into consideration both spatial spectrum reuse and information asymmetry. To solve this problem, we first compute an optimal policy that maximizes the ex-ante expected spectrum efficiency based on the stochastic distribution of network information, and then design a VCG-based mechanism that determines the real-time allocation and pricing under information asymmetry. With spatial spectrum reuse, the VCG mechanism is NP-hard. Thus, we further propose a heuristic solution based on a VCG-like mechanism with polynomial complexity, and quantify the associated efficiency loss systematically. Simulations show that (i) the optimal policy significantly outperforms the random and greedy allocation policies with an average increase of 20% in terms of the expected spectrum efficiency, and (ii) the heuristic solution exhibits good and robust performance (reaching at least 70% of the optimal efficiency in our simulations).

Table 2: Linear and integer programming results using LINDO for the basic model [LAN1].
A Decomposition Algorithm for Local Access Telecommunications Network Expansion Planning

February 1992

·

137 Reads

Growing demand, increasing diversity of services, and advances in transmission and switching technologies are prompting telecommunication companies to rapidly expand and modernize their networks. This paper develops and tests a decomposition methodology to generate cost-effective expansion plans, with performance guarantees, for one major component of the network hierarchy – the local access network. The model captures economies of scale in facility costs and tradeoffs between installing concentrators and expanding cables to accommodate demand growth. Our solution method exploits the special tree and routing structure of the expansion planning problem to incorporate valid inequalities, obtained by studying the problem's polyhedral structure, in a dynamic program which solves an uncapacitated version of the problem. Computational results for three realistic test networks demonstrate that our enhanced dynamic programming algorithm, when embedded in a Lagrangian relaxation scheme (with problem preprocessing and local improvement), is very effective in generating good upper and lower bounds: Implemented on a personal computer, the method generates solutions within 1.2-7.0% of optimality. In addition to developing a successful solution methodology for a practical problem, this paper illustrates the possibility of effectively combining decomposition methods and polyhedral approaches.

Robust Adaptive Routing Under Uncertainty

August 2014

·

186 Reads

We consider the problem of finding an optimal history-dependent routing strategy on a directed graph weighted by stochastic arc costs when the decision maker is constrained by a travel-time budget and the objective is to optimize the expected value of a function of the budget overrun. Leveraging recent results related to the problem of maximizing the probability of termination within budget, we first propose a general formulation and solution method able to handle not only uncertainty but also tail risks. We then extend this general formulation to the robust setting when the available knowledge on arc cost probability distributions is restricted to a given subset of their moments. This robust version takes the form of a continuous dynamic programming formulation with an inner generalized moment problem. We propose a general-purpose algorithm to solve a discretization scheme, together with a streamlined procedure for the case of scarce information limited to lower-order statistics. To illustrate the benefits of a robust policy, we run numerical experiments with field data from the Singapore road network.

Fabrication-Adaptive Optimization, with an Application to Photonic Crystal Design

July 2013

·

149 Reads

It is often the case that the computed optimal solution of an optimization problem cannot be implemented directly, irrespective of data accuracy, due to either (i) technological limitations (such as physical tolerances of machines or processes), (ii) the deliberate simplification of a model to keep it tractable (by ignoring certain types of constraints that pose computational difficulties), and/or (iii) human factors (getting people to "do" the optimal solution). Motivated by this observation, we present a modeling paradigm called "fabrication-adaptive optimization" for treating issues of implementation/fabrication. We develop computationally focused theory and algorithms, and we present computational results for incorporating considerations of implementation/fabrication into constrained optimization problems that arise in photonic crystal design. The fabrication-adaptive optimization framework stems from the robust regularization of a function. When the feasible region is not a normed space (as typically encountered in application settings), the fabrication-adaptive optimization framework typically yields a non-convex optimization problem. (In the special case where the feasible region is a finite-dimensional normed space, we show that fabrication-adaptive optimization can be recast as an instance of modern robust optimization.) We study a variety of problems with special structures on functions, feasible regions, and norms, for which computation is tractable, and develop an algorithmic scheme for solving these problems in spite of the challenges of non-convexity. We apply our methodology to compute fabrication-adaptive designs of two-dimensional photonic crystals with a variety of prescribed features.

Table 3: Previous results at the infeasibility border for 1000 flights.
The Air Traffic Flow Management Problem with Enroute Capacities

January 1994

·

1,250 Reads

Throughout the United States and Europe, demand for airport use has been increasing rapidly, while airport capacity has been stagnating. Over the last ten years the number of passengers has increased by more than 50 percent and is expected to continue increasing at this rate. Acute congestion in many major airports has been the unfortunate result. For U.S. airlines, the expected yearly cost of the resulting delays is currently estimated at $3 billion. In order to put this number in perspective, the total reported losses of all U.S. airlines amounted to approximately $2 billion in 1991 and $2.5 billion in 1990. Furthermore, every day 700 to 1100 flights are delayed by 15 minutes or more. European airlines are in a similar plight. Optimally controlling the flow of aircraft either by adjusting their release times into the network (ground-holding) or their speed once they are airborne is a cost-effective method to reduce the impact of congestion on the air traffic system. This paper makes the following contributions: (a) we build a model that takes into account the capacities of the National Airspace System (NAS) as well as the capacities at the airports, and we show that the resulting formulation is rather strong as some of the proposed inequalities are facet-defining for the convex hull of solutions; (b) we address the complexity of the problem; (c) we extend that model to account for several variations of the basic problem, most notably, how to reroute flights and how to handle banks in the hub and spoke system; (d) we show that by relaxing some of our constraints we obtain a previously addressed problem and that the LP relaxation bound of our formulation is at least as strong when compared to all others proposed in the literature for this problem; and (e) we solve large scale, realistic size problems with several thousand flights.

On Optimal Allocation of Indivisibles Under Uncertainty.

May 1994

·

55 Reads

The optimal use of indivisible resources is often the central issue in the economy and management. One of the main difficulties is the discontinuous nature of the resulting resource allocation problems which may lead to the failure of competitive market allocation mechanisms (unless we agree to "divide" the indivisibles in some indirect way). The problem becomes even more acute when uncertainty of the outcomes of decisions is present. In this paper we formalize the problem as a stochastic optimization problem involving discrete decision variables and uncertainties. By using some concrete examples, we illustrate how some problems of "dividing indivisibles" under uncertainty can be formalized in such terms. Next, we develop a general methodology to solve such problems based on the concept of the branch and bound method. The main idea of the approach is to process large collections of possible solutions and to devote more attention to the most promising groups. By gathering more information to reduce the uncertainty and by specializing the solution the optimal decision can be found.

The Conditions for Dominance Between Alternatives which are Independent Multiplicative Random Variables

July 1968

·

33 Reads

We derive conditions for dominance between alternatives consisting of independent multiplicative random variables, with application to maximizing the long-term growth of investment capital.

Analyzing Cost Efficient Production Behavior Under Economies of Scope: A Nonparametric Methodology

October 2006

·

34 Reads

In designing a production model for firms that generate multiple outputs, we take as a starting point that such multi-output production refers to economies of scope, which in turn originate from joint input use and input externalities. We provide a nonparametric characterization of cost efficient behavior under these conditions, and subsequently institute necessary and sufficient conditions for data consistency with such efficient behavior that only include observed firm demand and supply data. We illustrate our methodology by examining the cost efficiency of research programs in Economics and Business Management faculties of Dutch universities. This application shows that the proposed methodology may entail robust conclusions regarding cost efficiency differences between universities within specific specialization areas, even when using shadow prices to evaluate the different inputs.

Still flowing: old and new approaches for traffic flow modeling

January 2003

·

50 Reads

Certain aspects of traffic flow measurements imply the existence of a phase transition. Models known from chaos and fractals, such as non-linear analysis of coupled differential equations, cellular automata, or coupled maps, can generate behavior which indeed resembles a phase transition in the flow behavior. Other measurements point out that the same behavior could be generated by relatively simple geometrical constraints of the scenario. This paper looks at some of the empirical evidence, but mostly focuses on the different modeling approaches. The theory of traffic jam dynamics is reviewed in some detail, starting from the well-established theory of kinematic waves and then veering into the area of phase transitions. One aspect of the theory of phase transitions is that, by changing one single parameter, a system can be moved from displaying a phase transition to not displaying a phase transition. This implies that models for traffic can be tuned so that they display a phase transition or not. The paper focuses on microscopic modeling, discussing the approaches mentioned above, i.e., coupled differential equations, cellular automata, and coupled maps. The phase transition behavior of these models, as far as it is known, is discussed. Similarly, fluid-dynamical models for the same questions are considered. A large portion of the paper is given to the discussion of extensions and open questions, which makes clear that the question of traffic jam dynamics is, albeit important, only a small part of an interesting and vibrant field. As our outlook shows, the whole field is moving away from a rather static view of traffic towards a dynamic view, which uses simulation as an important tool.

Approximate Dynamic Programming via a Smoothed Linear Program

August 2009

·

59 Reads

We present a novel linear program for the approximation of the dynamic programming cost-to-go function in high-dimensional stochastic control problems. LP approaches to approximate DP have typically relied on a natural `projection' of a well-studied linear program for exact dynamic programming. Such programs restrict attention to approximations that are lower bounds to the optimal cost-to-go function. Our program--the `smoothed approximate linear program'--is distinct from such approaches and relaxes the restriction to lower bounding approximations in an appropriate fashion while remaining computationally tractable. Doing so appears to have several advantages: First, we demonstrate substantially superior bounds on the quality of approximation to the optimal cost-to-go function afforded by our approach. Second, experiments with our approach on a challenging problem (the game of Tetris) show that the approach outperforms the existing LP approach (which has previously been shown to be competitive with several ADP algorithms) by an order of magnitude.

An Approximate Dynamic Programming Algorithm for Monotone Value Functions

January 2014

·

139 Reads

Many sequential decision problems can be formulated as Markov Decision Processes (MDPs) where the optimal value function (or cost-to-go function) can be shown to satisfy a monotone structure in some or all of its dimensions. When the state space becomes large, traditional techniques, such as the backward dynamic programming algorithm (i.e., backward induction), may no longer be effective in finding a solution within a reasonable time frame, and thus, we are forced to consider other approaches, such as approximate dynamic programming (ADP). We propose a provably convergent ADP algorithm called Monotone-ADP that exploits the monotonicity of the value functions in order to increase the rate of convergence. In this paper, we describe a general problem setting where the optimal value functions are monotone, present a convergence proof for Monotone-ADP, and show numerical results for an example application in energy storage and bidding. The empirical results indicate that by taking advantage of monotonicity, we can attain high quality solutions within a relatively small number of iterations.
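
The distinctive step is cheap: after each stochastic update, project the value estimates back onto the set of monotone functions. For a scalar-ordered state space that projection is just a cap-and-floor, as sketched below (the idea only, not the paper's full algorithm):

```python
import numpy as np

def monotone_update(V, s, new_estimate):
    """Set V[s] to the new observation-based estimate, then restore
    nondecreasing monotonicity: cap states below s, floor states above."""
    V = V.copy()
    V[s] = new_estimate
    V[:s] = np.minimum(V[:s], new_estimate)
    V[s + 1:] = np.maximum(V[s + 1:], new_estimate)
    return V

V = np.array([1.0, 2.0, 4.0, 5.0, 8.0])
print(monotone_update(V, 3, 3.5))   # -> [1.  2.  3.5 3.5 8. ]
```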

Simple Approximations for the Batch-Arrival Mx/G/1 Queue

February 1988

·

64 Reads

In this paper we consider the Mx/G/1 queueing system with batch arrivals. We give simple approximations for the waiting-time probabilities of individual customers. These approximations are checked numerically and they are found to perform very well for a wide variety of batch-sizes and service-time distributions.
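
While the paper targets waiting-time probabilities, the mean is available in closed form by viewing each batch as a single M/G/1 customer and adding the delay behind batch-mates. A sketch of that standard expression, assuming FCFS service in arrival order:

```python
def mean_wait_mx_g1(lam, EX, EX2, ES, ES2):
    """Mean customer waiting time in the M^X/G/1 queue (FCFS).
    lam: batch arrival rate; EX, EX2: batch-size moments;
    ES, ES2: service-time moments. Requires rho = lam*EX*ES < 1."""
    rho = lam * EX * ES
    assert rho < 1, "unstable queue"
    ET2 = EX * ES2 + (EX2 - EX) * ES**2          # E[(batch workload)^2]
    batch_delay = lam * ET2 / (2 * (1 - rho))    # Pollaczek-Khinchine on batches
    within_batch = (EX2 / EX - 1) * ES / 2       # batch-mates served first
    return batch_delay + within_batch

# Geometric batch sizes with mean 2 (so EX2 = 6), exponential service:
print(mean_wait_mx_g1(lam=0.2, EX=2.0, EX2=6.0, ES=1.0, ES2=2.0))
```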

Strategic Arrivals into Queueing Networks: The Network Concert Queueing Game

December 2011

·

36 Reads

Queueing networks are typically modelled assuming that the arrival process is exogenous, and unaffected by admission control, scheduling policies, etc. In many situations, however, users choose the time of their arrival strategically, taking delay and other metrics into account. In this paper, we develop a framework to study such strategic arrivals into queueing networks. We start by deriving a functional strong law of large numbers (FSLLN) approximation to the queueing network. In the fluid limit derived, we then study the population game wherein users strategically choose when to arrive, and upon arrival which of the K queues to join. The queues start service at given times, which can potentially be different. We characterize the (strategic) arrival process at each of the queues, and the price of anarchy of the ensuing strategic arrival game. We then extend the analysis to multiple populations of users, each with a different cost metric. The equilibrium arrival profile and price of anarchy are derived. Finally, we present the methodology for exact equilibrium analysis. This, however, is tractable for only some simple cases such as two users arriving at a two node queueing network, which we then present.

An Ascending Vickrey Auction for Selling Bases of a Matroid

July 2005

·

445 Reads

Consider selling bundles of indivisible goods to buyers with concave utilities that are additively separable in money and goods. We propose an ascending auction for the case when the seller is constrained to sell bundles whose elements form a basis of a matroid. It extends easily to polymatroids. Applications include scheduling, allocation of homogeneous goods, and spatially distributed markets, among others. Our ascending auction induces buyers to bid truthfully and returns the economically efficient basis. Unlike other ascending auctions for this environment, ours runs in pseudopolynomial or polynomial time. Furthermore, we prove the impossibility of an ascending auction for nonmatroidal independence set-systems.
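
The efficient basis the auction converges to is exactly what the matroid greedy algorithm returns, given an independence oracle. A sketch of that endpoint (the auction's ascending price dynamics are not shown):

```python
def max_weight_basis(elements, weight, is_independent):
    """Greedy over a matroid: scan elements by decreasing weight (bid),
    keeping each one that preserves independence. Matroid structure
    guarantees this yields the maximum-weight basis."""
    basis = []
    for e in sorted(elements, key=weight, reverse=True):
        if is_independent(basis + [e]):
            basis.append(e)
    return basis

# Uniform matroid of rank k: "sell any k of the identical slots".
k = 3
bids = {"a": 9, "b": 7, "c": 7, "d": 4, "e": 2}
print(max_weight_basis(bids, bids.get, lambda s: len(s) <= k))
# -> ['a', 'b', 'c']
```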

A Tactical Planning Model for Mixed-Model Electronics Assembly Operations

February 1993

·

138 Reads

Utilizing the automatic assembly machines effectively is critical for electronics assembly operations. This paper develops an optimization model and methodology to assign product families to parallel placement machines in a high-mix, low-volume environment. Unlike strategic and operational models that emphasize either workload balancing or setup optimization, our tactical planning model incorporates both factors by minimizing the total setup cost per demand period while ensuring that none of the placement machines is overloaded. To capture the impact of product assignment decisions on setup cost, we consider a partial setup policy of mounting some components permanently on each machine and loading other components as needed for each product. We formulate the tactical planning problem as an integer program, and show that even the special case of minimizing setup cost on a single machine for a given assignment of products is NP-hard. Our solution method combines column generation, heuristics, and lower bounding procedures. We solve two practical subproblems, a product selection subproblem and a setup optimization subproblem, that apply directly to short-term production planning. Our computational experience shows that the algorithm performs well, and provides insights regarding the effective implementation of column generation for this problem context.

An Algorithm for the Three-Index Assignment Problem

February 1988

·

103 Reads

We describe a branch-and-bound algorithm for solving the axial three-index assignment problem. The main features of the algorithm include a Lagrangian relaxation that incorporates a class of facet inequalities and is solved by a modified subgradient procedure to find good lower bounds, a primal heuristic based on the principle of minimizing maximum regret plus a variable depth interchange phase for finding good upper bounds, and a novel branching strategy that exploits problem structure to fix several variables at each node and reduce the size of the total enumeration tree. Computational experience is reported on problems with up to 78 equations and 17,576 variables. The primal heuristics were tested on problems with up to 210 equations and 343,000 variables.
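
For orientation, the axial three-index assignment problem pairs each i with one j and one k so that the j's and k's each form a permutation. Brute force states the problem exactly and is feasible only for tiny n (the paper's branch-and-bound is what handles realistic sizes):

```python
from itertools import permutations
import numpy as np

def ap3_brute_force(C):
    """Minimize sum_i C[i, p[i], q[i]] over permutations p and q."""
    n = C.shape[0]
    best, best_pq = float("inf"), None
    for p in permutations(range(n)):
        for q in permutations(range(n)):
            cost = sum(C[i, p[i], q[i]] for i in range(n))
            if cost < best:
                best, best_pq = cost, (p, q)
    return best, best_pq

rng = np.random.default_rng(1)
C = rng.integers(0, 10, size=(4, 4, 4))   # random 4x4x4 cost array
print(ap3_brute_force(C))
```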

Asymptotically attainable structures in nonhomogeneous Markov systems

November 1992

·

29 Reads

We find the sets of d-periodic asymptotically attainable structures, and we establish the periodicities that exist between these structures, for a nonhomogeneous Markov system in the case where the imbedded nonhomogeneous Markov chain is periodic with period d. Also, it is proved that under certain conditions each converging subsequence of the sequence of relative structures has a geometric rate of convergence.

The Empirical Implications of Privacy-Aware Choice

June 2014

·

23 Reads

This paper initiates the study of the testable implications of choice data in settings where agents have privacy preferences. We adapt the standard conceptualization of consumer choice theory to a situation where the consumer is aware of, and has preferences over, the information revealed by her choices. The main message of the paper is that little can be inferred about consumers' preferences once we introduce the possibility that the consumer has concerns about privacy. This holds even when consumers' privacy preferences are assumed to be monotonic and separable. This motivates the consideration of stronger assumptions and, to that end, we introduce an additive model for privacy preferences that does have testable implications.

Table 2: Optimal system availability for a range of budget values.
Table 6: Impact of alternate versions for module 1.
System Balance for Extended Logistic Systems

February 1981

·

54 Reads

An extended logistic system is a well-defined configuration of complex equipment, supporting inventory levels of components and modules, supporting maintenance facilities, a supporting transportation system between local and remote inventory and maintenance sites, and procedures governing the allocation and shipment of components from remote and local sites. The evaluation of system performance includes system availability and the logistic costs required to obtain that level of availability. This study extends the authors' earlier work, which developed methods for measuring system persistence times of extended logistic systems. In particular, the authors propose an optimization model for examining system design and trade-off decisions.
