Operations Research Letters

Published by Elsevier
Print ISSN: 0167-6377
Given a graph G=(V,E) with edge costs and an integer vector r associated with the nodes of V, the survivable network design problem is to find a minimum-cost subgraph of G such that between every pair of nodes s,t of V there are at least min{r(s),r(t)} edge-disjoint paths. In this paper we consider the problem when r ∈ {1,2}^V. This case is of particular interest to the telecommunication industry. We show that the separation problem for the so-called partition inequalities reduces to minimizing a submodular function. This yields a polynomial-time separation algorithm for these inequalities in that case.
In this note, we give a 1.47-approximation algorithm for the preemptive scheduling of jobs with release dates on a single machine so as to minimize the weighted sum of job completion times; this problem is denoted by 1|rj,pmtn|∑jwjCj in the notation of Lawler et al. (Handbooks in Operations Research and Management Science, Vol. 4, Logistics of Production and Inventory, North-Holland, Amsterdam, pp. 445-522). Our result improves on a 2-approximation algorithm due to Hall et al. (Math. Oper. Res. 22 (1997) 513–544), and also yields an improved bound on the quality of a well-known linear programming relaxation of the problem.
Recently Byrka, Grandoni, Rothvoss and Sanita (at STOC 2010) gave a 1.39-approximation for the Steiner tree problem, using a hypergraph-based linear programming relaxation. They also upper-bounded its integrality gap by 1.55. We describe a shorter proof of the same integrality gap bound, by applying some of their techniques to a randomized loss-contracting algorithm.
We present a cost-sharing method that is competitive, cross-monotonic and approximate cost recovering for an economic lot-sizing game under a weak triangle inequality assumption, along with numerical results showing the effectiveness of the proposed method.
In this paper, we introduce a general framework for situations with decision making under uncertainty and cooperation possibilities. This framework is based upon a two stage stochastic programming approach. We show that under relatively mild assumptions the associated cooperative games are totally balanced. Finally, we consider several example situations.
We consider a combinatorial optimization problem arising, in different forms, in scheduling theory and in satellite communication theory. We show that two classical algorithms, independently obtained in these two domains, implement a technique developed in 1931 by Egerváry.
This paper reports on the fourth version of the Mixed Integer Programming Library. Since MIPLIB is intended to provide a concise set of challenging problems, it became necessary to purge instances that had become too easy. We present an overview of the 27 new problems and statistical data for all 60 instances.
An iterative method is proposed for the K facilities location problem. The problem is relaxed using probabilistic assignments, depending on the distances to the facilities. The probabilities, which decompose the problem into K single-facility location problems, are updated at each iteration together with the facility locations. The proposed method is a natural generalization of the Weiszfeld method to several facilities.
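For reference, the single-facility Weiszfeld iteration that the proposed method generalizes can be sketched as follows; this is a minimal illustrative sketch (the function name, the centroid start, and the stopping rule are our choices, not taken from the paper):

```python
import math

def weiszfeld(points, weights=None, iters=200, eps=1e-9):
    """Classical single-facility Weiszfeld iteration for the weighted
    Fermat-Weber point: x <- (sum_i w_i a_i / d_i) / (sum_i w_i / d_i),
    where d_i is the distance from the current iterate x to point a_i."""
    if weights is None:
        weights = [1.0] * len(points)
    dim = len(points[0])
    # start from the weighted centroid (a common, illustrative choice)
    x = [sum(w * p[j] for w, p in zip(weights, points)) / sum(weights)
         for j in range(dim)]
    for _ in range(iters):
        num = [0.0] * dim
        den = 0.0
        for w, p in zip(weights, points):
            d = math.dist(x, p)
            if d < eps:          # iterate hit a data point: return it
                return list(p)
            num = [num[j] + w * p[j] / d for j in range(dim)]
            den += w / d
        x = [num[j] / den for j in range(dim)]
    return x
```

The K-facility method described in the abstract replaces the hard assignment of each point to one facility by distance-dependent probabilities, yielding K coupled iterations of this form.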
This paper introduces a branch-and-bound algorithm for the maximum clique problem which applies existing clique finding and vertex coloring heuristics to determine lower and upper bounds for the size of a maximum clique. Computational results on a variety of graphs indicate that the proposed procedure outperforms leading algorithms in most instances.
Let G = (V, E) be a connected undirected graph with positive edge lengths. Let V = {0} ∪ N, where N = {1,…,n}. Each node in N is identified as a customer, and 0 is the home location of a traveling salesman or repairman who serves the customers in N. Each subset of customers S can hire the repairman to serve its members only. In that case the cost incurred by S, c(S), is the minimum length of a tour traversed by the repairman who starts at node 0, visits each node in S at least once and returns to 0. We consider the core of the cooperative cost allocation game (N; c) defined by the cost function c(S), S ⊆ N. We show that the core can be empty even if G is series-parallel by presenting the unique minimal counterexample for such graphs. We then use a recent result of Fonlupt and Naddef to prove that the core is nonempty for a class of graphs that properly contains the subclass of cycle trees, i.e., graphs which have no edge included in more than one simple cycle.
We derive a -approximation algorithm for the NP-hard parallel machine total weighted completion time problem with controllable processing times by the technique of convex quadratic programming relaxation.
We give tight upper bounds on the number of maximal independent sets of size k (and at least k and at most k) in graphs with n vertices. As an application of the proof, we construct improved algorithms for graph colouring and computing the chromatic number of a graph.
We show that the maximization version of the multi-level facility location problem can be approximated by a factor of 0.5. The only previously known result is a factor of 0.47 for the two-level case obtained recently by Bumb (Oper. Res. Lett. 29(4) (2001) 155).
The aim of this paper is to determine conditions under which the Lagrangian maximum of a utility function and compromise programming lead to close solutions.
Fluid models have become an important tool for the study of many-server queues with general service and patience time distributions. The equilibrium state of a fluid model has been revealed by Whitt (2006) and shown to yield reasonable approximations to the steady state of the original stochastic systems. However, it remains an open question whether, and under what conditions, the solution to a fluid model converges to the equilibrium state. We show in this paper that the convergence holds under a mild condition. Our method builds on the framework of measure-valued processes developed in Zhang (2013), which keeps track of the remaining patience and service times.
Insight is provided into a previously developed M/M/s/r+M(n) approximation for the M/GI/s/r+GI queueing model by establishing fluid and diffusion limits for the approximating model. Fluid approximations for the two models are compared in the many-server efficiency-driven (overloaded) regime. The two fluid approximations do not coincide, but they are close.
Classical stochastic programming has already been used with large-scale LP models for long-term analysis of energy–environment systems. We propose a Minimax Regret formulation suitable for large-scale linear programming models. It has been experimentally verified that the minimax regret strategy depends only on the extremal scenarios and not on the intermediate ones, thus making the approach computationally efficient. Key results of minimax regret and minimum expected value strategies for Greenhouse Gas abatement in the Province of Québec, are compared.
The two-dimensional vector packing problem is the generalization of the classical one-dimensional bin packing problem to two dimensions. While an asymptotic polynomial time approximation scheme has been designed for one-dimensional bin packing, the existence of an asymptotic polynomial time approximation scheme for two dimensions would imply P=NP. The existence of an approximation algorithm for the two-dimensional vector packing problem with an asymptotic performance guarantee 2 was an open problem so far. In this paper we present an time algorithm for two-dimensional vector packing with absolute performance guarantee 2.
Konno (Oper. Res. Soc. Japan 33 (1990) 139–156) introduced a piecewise linear objective function for portfolio optimization to measure the deviation from a mean return. An apparently asymmetric objective function can be obtained by changing the gradients on either side of the mean. However, we show that when the linear deviations are taken relative to the mean, any two-piece linear objective function is equivalent to the mean absolute deviation, which is symmetric. Equivalent is used here to mean that one function is proportional to the other. Also we show that emphasizing upside risk is exactly equal to emphasizing downside risk when these are taken relative to the mean. No distributional assumptions are required beyond the existence of the first moment. In this case an investor changing from upside to downside risk would not change his solution at all, despite what the investor intended to achieve.
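The equivalence admits a one-line derivation consistent with the abstract's claim. Writing a and b for the penalty slopes above and below the mean μ = E[X]:

```latex
% Since E(X-\mu)=0, we have E(X-\mu)^{+} = E(\mu-X)^{+} = \tfrac{1}{2}\,E|X-\mu|, hence
E\!\left[a\,(X-\mu)^{+} + b\,(\mu-X)^{+}\right]
   \;=\; a\,E(X-\mu)^{+} + b\,E(\mu-X)^{+}
   \;=\; \frac{a+b}{2}\,E\,\lvert X-\mu\rvert .
```

So any two-piece linear penalty taken relative to the mean is proportional to the mean absolute deviation, and only the existence of the first moment is needed.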
This article investigates some properties of the absorbing set and proposes an example of a game that has no absorbing set.
A label setting algorithm for solving the Elementary Resource Constrained Shortest Path Problem, using node resources to forbid repetition of nodes on the path, is implemented. A state-space augmenting approach for accelerating run times is considered. Several augmentation strategies are suggested and compared numerically.
This paper presents a simple procedure for accelerating convergence in a generalized Fermat–Weber problem with lp distances. The main idea is to multiply the predetermined step size of the Weiszfeld algorithm by a factor which is a function of the parameter p. The form of this function is derived from the local convergence properties of the iterative sequence. Computational results are obtained which demonstrate that the total number of iterations to meet a given stopping criterion will be reduced substantially by the new step size, with the most dramatic results being observed for values of p close to 1.
We study a two-phase, budget-constrained, network-planning problem with multiple hub types and demand scenarios. In each phase, we install (or move) capacitated hubs on selected buildings. We allocate hubs to realized demands, under technological constraints. We present a greedy algorithm to maximize expected demand covered and computationally study its performance.
We consider two models of M/G/1 and G/M/1 type queueing systems with restricted accessibility. Let (V(t))t⩾0 be the virtual waiting time process, let Sn be the time required for a full service of the nth customer and let τn be his arrival time. In both models there is a capacity bound v∗∈(0,∞). In Model I the amount of service given to the nth customer is equal to min{Sn, v∗−V(τn−)}, i.e. the full currently free workload is assigned to the new customer. In Model II the customer is rejected iff the currently used workload V(τn−) exceeds v∗, but the service times of admitted customers are not censored. We obtain closed-form expressions for the Laplace transforms of the lengths of the busy periods.
We propose two path-selection algorithms for the transport of hazardous materials. The algorithms can deal with link impedances that are path-dependent. This approach is superior to the use of a standard shortest path algorithm, common in the literature and practice, which results in inaccuracies.
We examine an allocation problem in which the objective is to allocate resources among competing activities so as to balance weighted deviations from given demands. A lexicographic minimax algorithm is developed that solves a sequence of minimax problems. The algorithm is extremely fast and can readily solve large-scale problems that may be encountered in applications, e.g., in production planning.
By adding a set of redundant constraints, and by iteratively refining the approximation, we show that a commercial solver is able to routinely solve moderate-size strategic safety stock placement problems to optimality. The speed-up arises because the solver automatically generates strong flow cover cuts using the redundant constraints.
We compare two different models for multicriterion routing in stochastic time-dependent networks: the classic “time-adaptive” model and the more flexible “history-adaptive” one. We point out several properties of the sets of efficient solutions found under the two models. We also devise a method for finding supported history-adaptive solutions.
The foundations are laid for an additive version of the Analytic Hierarchy Process by constructing a framework for the study of multiplicative and additive pairwise comparison matrices and the relations between them. In particular, it will be proved that the only solution satisfying consistency axioms for the problem of retrieving weights from inconsistent additive judgement matrices is the arithmetic mean.
An efficient algorithm is proposed for the additive and multiplicative models in data envelopment analysis (DEA). In simulation studies the algorithm executed in less than 60% of the CPU time required by the revised simplex method.
We study the first passage process of a spectrally negative Markov additive process (MAP). The focus is on the background Markov chain at the times of the first passage. This process is a Markov chain itself with a transition rate matrix Λ. Assuming time reversibility, we show that all the eigenvalues of Λ are real, with algebraic and geometric multiplicities being the same, which allows us to identify the Jordan normal form of Λ. Furthermore, this fact simplifies the analysis of fluctuations of a MAP. We provide an illustrative example and show that our findings greatly reduce the computational efforts required to obtain Λ in the time-reversible case.
We propose a tariff structure for high speed multiservice networks which encourages the cooperative sharing of information between users and the network. In the case of on/off sources with a policed peak rate the tariff structure takes a very simple form: a charge am per unit time and a charge bm per cell carried, where the pair (am, bm) are fixed by a declaration m, made by the user at the time of call admission, of the expected rate of the source.
We consider a general adversarial stochastic optimization model. Our model involves the design of a system that an adversary may subsequently attempt to destroy or degrade. We introduce SPAR, which utilizes mixed-integer programming for the design decision and a Markov decision process (MDP) for the modeling of our adversarial phase.
Recently, there has been a surge of interest in algorithms that allocate advertisement space in an online revenue-competitive manner. Most such algorithms, however, assume a pay-as-you-bid pricing scheme. In this paper, we study the query allocation problem where the ad space is priced using the well-known and widely used generalized second-price (GSP) scheme. We observe that the previous algorithms fail to achieve a bounded competitive ratio under the GSP scheme. On the positive side, we present online constant-competitive algorithms for the problem.
We formulate the problem of optimizing a convex function over the weakly efficient set of a multicriteria affine fractional program as a special biconvex problem. We propose a decomposition algorithm for solving the latter problem. The proposed algorithm is a branch-and-bound procedure taking into account the affine fractionality of the criterion functions.
Tsuchiya and Muramatsu recently proved that the affine-scaling algorithm for linear programming generates convergent sequences of primal and dual variables whose limits are optimal for the corresponding primal and dual problems as long as the step size is no more than two-thirds of the distance to the nearest face of the polytope. An important feature of this result is that it does not require any nondegeneracy assumptions. In this paper we show that Tsuchiya and Muramatsu's result is sharp by providing a simple linear programming problem for which the sequence of dual variables fails to converge for every step size greater than two-thirds.
We give, for a class of monotone affine variational inequality problems, a simple characterization of when a certain residual function provides a bound on the distance from any feasible point to the solution set. This result has implications on the global linear convergence of a certain projection algorithm and of matrix splitting algorithms using regular splitting.
The purpose of this paper is to describe computational experience with a dual affine variant of Karmarkar's method for solving linear programming problems. This approach was implemented by the authors over a twelve week period during the summer of 1986. Computational tests were made comparing this implementation with MINOS 5.0, a state-of-the-art implementation of the simplex method. Our implementation compares favorably on publicly-available linear programming test problems with an average speedup of about three over MINOS 5.0.
This paper derives the owner’s optimal contract with a bonus-incentive and audit when the owner delegates the investment timing decision to a manager with private information on an investment project. The optimal solution not only unifies the previous studies, but also accounts for actual auditing systems in firms.
A well-known heuristic for estimating the rate function or cumulative rate function of a nonhomogeneous Poisson process assumes that the rate function is piecewise constant on a set of data-independent intervals. We investigate the asymptotic (as the amount of data grows) behavior of this estimator in the case of equal interval widths, and show that it can be transformed into a consistent estimator if the interval lengths shrink at an appropriate rate as the amount of data grows.
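A minimal sketch of the heuristic estimator under study, assuming arrivals are observed on [0, t_end] and split into k equal-width, data-independent intervals (the function and parameter names are illustrative):

```python
def piecewise_rate(arrivals, t_end, k):
    # Piecewise-constant rate estimate of a nonhomogeneous Poisson process on
    # [0, t_end], using k equal-width intervals: the estimated rate on each
    # interval is (number of arrivals falling in it) / (interval width).
    width = t_end / k
    counts = [0] * k
    for t in arrivals:
        counts[min(int(t // width), k - 1)] += 1  # clamp t == t_end into last bin
    return [c / width for c in counts]
```

The paper's question is how this estimator behaves as the amount of data grows, and how the interval widths must shrink with the data for consistency.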
This paper proposes practical modeling and analysis methods to facilitate dynamic staffing in a telephone call center with the objective of immediately answering all calls. Because of this goal, it is natural to use infinite-server queueing models. These models are very useful because they are so tractable. A key to the dynamic staffing is exploiting detailed knowledge of system state in order to obtain good estimates of the mean and variance of the demand in the near future. The near-term staffing needs, e.g., for the next minute or the next 20 min., can often be predicted by exploiting information about recent demand and current calls in progress, as well as historical data. The remaining holding times of calls in progress can be predicted by classifying and keeping track of call types, by measuring holding-time distributions and by taking account of the elapsed holding times of calls in progress. The number of new calls in service can be predicted by exploiting information about both historical and recent demand.
In recent years, approximation algorithms based on randomized rounding of fractional optimal solutions have been applied to several classes of discrete optimization problems. In this paper, we describe a class of rounding methods that exploits the structure and geometry of the underlying problem to round fractional solutions to 0–1 solutions. This is achieved by introducing dependencies in the rounding process. We show that this technique can be used to establish the integrality of several classical polyhedra (min cut, uncapacitated lot-sizing, Boolean optimization, k-median on cycle) and produces an improved approximation bound for the min-k-sat problem.
We extend discrete-time dynamic flow algorithms presented in the literature to solve the analogous continuous-time dynamic flow problems. These problems include finding maximum dynamic flows, quickest flows, universally maximum dynamic flows, lexicographically maximum dynamic flows, dynamic transshipments, and quickest transshipments in networks with capacities and transit times on the edges.
The minimization of maximum completion time for scheduling n jobs on m identical parallel machines is an NP-hard problem for which many excellent heuristic algorithms have been developed. In this paper, the problem is investigated under the assumption that only limited information about the jobs is available. Specifically, processing times are not known for the jobs; rather, the ordering of the jobs by processing time is known. For the cases of two and three parallel machines, algorithms which cannot be improved upon with respect to worst case performance ratio are developed. For the case of four parallel machines, an algorithm which is near optimal with respect to worst case performance ratio is developed. For arbitrary m, an algorithm which produces solutions whose value is at most five-thirds times the optimal value is presented. Finally, it is shown that as the number of machines gets arbitrarily large, the best possible ordinal algorithm has worst case performance ratio of at least .
We describe approximation algorithms with bounded performance guarantees for the following problem: A graph is given with edge weights satisfying the triangle inequality, together with two numbers k and p. Find k disjoint subsets of p vertices each, so that the total weight of edges within subsets is maximized.
This paper presents new algorithms to construct a Generalized Round Robin (GRR) routing sequence for distributing one stream of traffic to multiple queues. The performance objective is to minimize the expected delay. Given a target allocation in terms of the fraction of jobs that should be routed to each queue, the algorithms scan the queues in order of descending fraction and assign the next job to the first queue with current allocation not exceeding an easily computed threshold. We prove that the constructed sequence has properties related to Hajek's most regular sequence; in particular, it is optimal for the case of two queues. Simulation results show improvement over Itai-Rosberg's Golden Ratio Rule and come very close to a lower bound obtained by Hajek.
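The shape of such a sequence can be illustrated with a simplified "largest remaining deficit" rule. This is a hypothetical stand-in for the paper's threshold rule, shown only to make the construction concrete:

```python
def grr_sequence(fractions, length):
    # Build a Generalized Round Robin sequence: slot t goes to the queue that
    # is furthest behind its target share f_i * t.  This "largest remaining
    # deficit" rule is a simplified, hypothetical stand-in for the easily
    # computed threshold rule described in the abstract.
    counts = [0] * len(fractions)
    seq = []
    for t in range(1, length + 1):
        i = max(range(len(fractions)),
                key=lambda j: fractions[j] * t - counts[j])
        counts[i] += 1
        seq.append(i)
    return seq
```

For target fractions (1/2, 1/4, 1/4) this rule spreads visits to the heavy queue evenly, which is the regularity property the paper relates to Hajek's most regular sequence.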
We present near-optimal algorithms for two problems related to finding the replacement paths for edges with respect to shortest paths in sparse graphs. The problems essentially study how the shortest paths change as edges on the path fail, one at a time. Our technique improves the existing bounds for these problems on directed acyclic graphs, planar graphs, and non-planar integer-edge-weighted graphs.
We consider the problem of scheduling jobs on-line on a single machine and on identical machines with the objective to minimize total completion time. We assume that the jobs arrive over time. We give a general 2-competitive algorithm for the single machine problem. The algorithm is based on delaying the release times of the jobs, i.e., making the jobs artificially available to the on-line scheduler later than their actual release times. Our algorithm includes two known algorithms for this problem that apply delay of release times. The proposed algorithm is interesting since it gives the on-line scheduler a whole range of choices for the delays, each of which leads to 2-competitiveness. We also show that the algorithm is 2α-competitive for the problem on identical machines, where α is the performance ratio of the Shortest Remaining Processing Time first rule for the preemptive relaxation of the problem.
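One concrete instance of the delayed-release idea is to make job j available only at max(r_j, p_j) and then run the shortest available job non-preemptively; this particular delay is a known choice from the literature on this problem, and the sketch below is illustrative rather than the paper's exact rule:

```python
import heapq

def delayed_spt_schedule(jobs):
    # jobs: list of (release r_j, processing p_j) for a single machine.
    # Delay each job until max(r_j, p_j), then repeatedly run the Shortest
    # Processing Time job among those available; returns total completion time.
    avail = sorted((max(r, p), p) for r, p in jobs)  # (modified release, p_j)
    t, total, pending, i = 0.0, 0.0, [], 0
    while i < len(avail) or pending:
        while i < len(avail) and avail[i][0] <= t:
            heapq.heappush(pending, avail[i][1])     # job becomes available
            i += 1
        if not pending:                              # idle until next modified release
            t = avail[i][0]
            continue
        p = heapq.heappop(pending)                   # shortest available job
        t += p
        total += t
    return total
```

For two jobs with releases 0 and processing times 2 and 1, the modified releases are 2 and 1; the short job runs first, finishing at 2, and the long job finishes at 4.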
In this paper we consider coupled-task single-machine and two-machine flow shop scheduling problems with exact delays, unit processing times, and the makespan as an objective function. The main results of the paper are fast 7/4- and 3/2-approximation algorithms for solving the single- and two-machine problems, respectively.
This paper considers the following optimization problem: Given positive integers n, B, αi and nondecreasing functions Ci(·) for i = 1, 2, …, n, find z = (z1, z2, …, zn) such that 0 ⩽ zi ⩽ αi for i = 1, 2, …, n, Σi=1n zi ⩾ B, and Σi=1n Ci(zi) is minimized. A fully polynomial approximation scheme for this problem is presented.
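The problem admits an exact pseudo-polynomial dynamic program, which a fully polynomial scheme would then round or scale; the sketch below shows only this illustrative exact baseline, not the paper's scheme:

```python
def min_cost_allocation(costs, B):
    # Exact pseudo-polynomial DP baseline for: minimize sum_i C_i(z_i)
    # subject to sum_i z_i >= B and 0 <= z_i <= alpha_i, C_i nondecreasing.
    # costs[i][z] lists C_i(z) for z = 0..alpha_i.
    INF = float("inf")
    dp = [0.0] + [INF] * B          # dp[b] = min cost to supply b units (capped at B)
    for ci in costs:
        ndp = [INF] * (B + 1)
        for b in range(B + 1):
            if dp[b] == INF:
                continue
            for z, c in enumerate(ci):
                nb = min(B, b + z)  # capping at B is safe: extra supply never hurts
                if dp[b] + c < ndp[nb]:
                    ndp[nb] = dp[b] + c
        dp = ndp
    return dp[B]
```

The table has B+1 states per item, so the running time depends polynomially on B itself; an approximation scheme trades this dependence for a polynomial dependence on 1/ε.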
Top-cited authors
Aharon Ben-Tal
  • Technion - Israel Institute of Technology
Arkadi Nemirovski
  • Georgia Institute of Technology
Mauricio G. C. Resende
Amir Beck
  • Tel Aviv University
Arie Tamir
  • Tel Aviv University