European Journal of Operational Research

Published by Elsevier
Print ISSN: 0377-2217
Publications
This paper discusses the use of the Analytic Hierarchy Process (AHP) to develop a rating system for the allocation of cadaver livers for orthotopic transplantation. Five major criteria for comparison were established, defined, and rated relative to one another: logistic considerations, tissue compatibility, waiting time, financial considerations, and medical status. Subcriteria were also established and rated relative to one another. Patients who met appropriate inclusion screening criteria were then scored by assigning the appropriate rating to the subcriteria within each major selection criterion. The final weighting can be used to develop an alternative to the rigid computerized multifactorial point system that now exists. This paper is meant to demonstrate the utility of the AHP in complex medical decision making, rather than to necessarily document a final allocation system.
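As an illustration of the AHP machinery the abstract refers to, the following minimal sketch derives criterion weights from a pairwise-comparison matrix via the principal eigenvector, the standard AHP weighting step. The criteria names come from the abstract; the judgment values are invented for the example and are not the authors'.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix over the paper's five major
# criteria; the judgment values below are illustrative, not the authors'.
criteria = ["logistics", "tissue compatibility", "waiting time",
            "financial", "medical status"]
A = np.array([
    [1,   1/3, 1/2, 3,   1/5],
    [3,   1,   2,   5,   1/2],
    [2,   1/2, 1,   4,   1/3],
    [1/3, 1/5, 1/4, 1,   1/7],
    [5,   2,   3,   7,   1  ],
])

# AHP priorities: principal right eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Saaty's consistency index CI = (lambda_max - n) / (n - 1).
n = len(criteria)
ci = (eigvals[k].real - n) / (n - 1)

for name, weight in zip(criteria, w):
    print(f"{name:22s} {weight:.3f}")
print(f"consistency index: {ci:.3f}")
```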
 
Figure captions (phase portraits of the corruption model; all panels use r = 0.1, δ = 0.2, C = 1, G = 1):
  • Medium potential to exploit the bureaucracy's corruption but lower benefits from own corruption: interior unstable node and (weak) Skiba point coinciding with the steady state (intermediate α = 2.122, small β = 3.32).
  • High potential to exploit the bureaucracy's corruption and the leader's own corruption is profitable: no interior steady state and maximum corruption (large α = 2.18, large β = 3.4).
  • Low potential to exploit the bureaucracy's corruption, but the leader's own corruption is profitable: saddle-point equilibrium (small α = 2.1, large β = 3.4).
  • Medium potential to exploit the bureaucracy's corruption and lower benefits from own corruption: interior unstable focus with Skiba point at x̄ (large α = 2.18, small β = 3).
We present a novel model of corruption dynamics in the form of a nonlinear optimal dynamic control problem. It has a tipping point, but one whose origins and character are distinct from that in the classic Schelling (1978) model. The decision maker choosing a level of corruption is the chief or some other kind of authority figure who presides over a bureaucracy whose state of corruption is influenced by the authority figure's actions, and whose state in turn influences the pay-off for the authority figure. The policy interpretation is somewhat more optimistic than in other tipping models, and there are some surprising implications, notably that reforming the bureaucracy may be of limited value if the bureaucracy takes its cues from a corrupt leader.
 
Figure captions:
  • Graphical visualization of test instances.
  • Average solution quality throughout the planning horizon when (i) solved in a naive way, (ii) evaluated ex post, and (iii) solved myopically.
  • Overestimation of solution quality with varying level of aggregation a (the resulting number of aggregated demand nodes is given in parentheses).
  • Spatial pattern of solution and traffic situation during (off-)peak hours.
  • Average solution (F_avg) with respect to varying cost per relocation (β).
Emergency service providers face the following problem: how and where to locate vehicles in order to cover potential future demand effectively. Ambulances are supposed to be located at designated locations such that, in case of an emergency, patients can be reached in a time-efficient manner. A patient is said to be covered by a vehicle if (s)he can be reached by an ambulance within a predefined time limit. Due to variations in speed and the resulting travel times, it is not sufficient to solve the static ambulance location problem once using fixed average travel times, as the coverage areas themselves change throughout the day. Hence we developed a multi-period version, taking into account time-varying coverage areas, in which vehicles may be repositioned in order to maintain a certain coverage standard throughout the planning horizon. We formulated a mixed integer program for the problem at hand, which optimizes coverage at various points in time simultaneously. The problem is solved metaheuristically using variable neighborhood search. We show that it is essential to consider time-dependent variations in travel times and coverage: when they are ignored, the resulting objective is overestimated by more than 24%. By taking these variations into account explicitly, the solution can on average be improved by more than 10%.
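For intuition about the coverage objective, here is a minimal single-period sketch: a greedy heuristic for a maximal-covering location model with made-up travel times. It is an illustrative stand-in only; the paper formulates a multi-period MIP with time-varying travel times and solves it with variable neighborhood search.

```python
import random

random.seed(1)

# Toy data: candidate ambulance sites, demand points, and travel times
# (all hypothetical; the paper uses time-dependent travel times and a MIP).
sites = range(5)
demands = range(20)
travel_time = {(i, j): random.uniform(2, 18) for i in sites for j in demands}
weight = {j: random.randint(1, 10) for j in demands}  # expected demand
TIME_LIMIT = 8.0   # coverage threshold in minutes
N_VEHICLES = 2

covers = {i: {j for j in demands if travel_time[i, j] <= TIME_LIMIT}
          for i in sites}

# Greedy heuristic for the (single-period) maximal covering location
# problem: repeatedly open the site adding the most uncovered demand.
open_sites, covered = [], set()
for _ in range(N_VEHICLES):
    best = max(sites, key=lambda i: sum(weight[j] for j in covers[i] - covered))
    open_sites.append(best)
    covered |= covers[best]

print("open sites:", open_sites)
print("covered demand:", sum(weight[j] for j in covered),
      "of", sum(weight.values()))
```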
 
We propose a class of mathematical models for the transmission of infectious diseases in large populations. This class of models, which generalizes the existing discrete-time Markov chain models of infectious diseases, is compatible with efficient dynamic optimization techniques to assist real-time selection and modification of public health interventions in response to evolving epidemiological situations and changing availability of information and medical resources. While retaining the strength of existing classes of mathematical models in their ability to represent the within-host natural history of disease and between-host transmission dynamics, the proposed models possess two advantages over previous models: (1) these models can be used to generate optimal dynamic health policies for controlling the spread of infectious diseases, and (2) these models are able to approximate the spread of the disease in relatively large populations with a limited state space size and computation time.
 
Multimedia systems are gaining importance as novel computer and communication system architectures specialized for the storage and transfer of video documents. Consider a multimedia-on-demand server which transmits video documents through a high-speed network to geographically distributed clients. The server accumulates requests for specific documents in separate queues. The queues need to share the transmission medium in some fashion, typically in Round-Robin mode. We describe the resulting performance modeling problem and develop an approximate representation using queueing networks. Our analytic model enables the efficient implementation of a new scheduling scheme, which we call Local Round-Robin (LRR). We show that LRR yields significant improvement in system performance compared to the original Round-Robin.
 
Lagrangian relaxation (LR) has emerged as a practical approach for complex scheduling problems. The efficiency of the approach, however, depends on how fast the relaxed subproblems and the dual problem are solved. Previously, machine capacity constraints were relaxed and the subproblems were solved by using dynamic programming (DP). The number of multipliers and the computational complexity of the DP algorithm, however, are proportional to the time horizon. This becomes a barrier for the approach when solving problems with long time horizons. This paper presents an alternative framework to overcome the barrier. By using far fewer multipliers to relax operation precedence constraints rather than machine capacity constraints, and by approximately solving the subproblems, a new LR approach is developed. The approach can find good schedules for problems with thousands of parts and tens of machines within a short time on a personal computer, making it suitable for practical use.
 
The bin packing problem is widely found in applications such as the loading of tractor-trailer trucks, cargo airplanes and ships, where a balanced load provides better fuel efficiency and a safer ride. In these applications, there are often conflicting criteria to be satisfied, i.e., minimizing the number of bins used and balancing the load of each bin, subject to a number of practical constraints. Unlike existing studies that consider only the minimization of bins, a two-objective mathematical model for the bin packing problem with multiple constraints is formulated in this paper. Without the need to combine both objectives into a composite scalar, a hybrid multiobjective particle swarm optimization algorithm (HMOPSO) incorporating the concept of Pareto optimality to evolve a family of solutions along the trade-off is proposed. The algorithm also features a bin packing heuristic, a variable-length representation, and a specialized mutation operator to solve the multiobjective and multi-modal combinatorial bin packing problem. Extensive simulations are performed on various test instances, and their performances are compared both quantitatively and statistically with other optimization methods. Each of the proposed features is also explicitly examined to illustrate its usefulness in solving the multiobjective bin packing problem.
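To make the two objectives concrete, here is a minimal first-fit-decreasing sketch that reports both the number of bins used and a simple load-imbalance measure; the item sizes and capacity are invented, and the paper's HMOPSO algorithm and practical constraints are not reproduced.

```python
# First-fit decreasing with the two objectives the abstract names:
# number of bins used and load balance. Data are illustrative.
CAPACITY = 10
items = [7, 6, 5, 5, 4, 3, 3, 2, 2, 1]

bins = []  # each bin is a list of item sizes
for size in sorted(items, reverse=True):
    for b in bins:                       # first bin with room
        if sum(b) + size <= CAPACITY:
            b.append(size)
            break
    else:                                # no bin fits: open a new one
        bins.append([size])

loads = [sum(b) for b in bins]
imbalance = max(loads) - min(loads)      # simple balance measure

print("bins used:", len(bins))
print("loads:", loads, "imbalance:", imbalance)
```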
 
We consider an open tandem queueing network with a population constraint and constant service times. The total number of customers that may be present in the network cannot exceed a given value K. Customers arriving at the queueing network when there are more than K customers present are forced to wait in an external queue. The arrival process to the queueing network is assumed to be arbitrary. We show that this queueing network can be transformed into a simple network involving only two nodes. We obtain upper and lower bounds on the mean waiting time. Validations against simulation data establish the tightness of these bounds.
 
The paper deals with the stochastic optimal intervention problem which arises in a production and storage system involving identical items. Requests for items arrive at random, and the production of an item can be interrupted during production to meet the corresponding demand. The operational costs considered are due to the stock/backlog, running costs, and set-up costs associated with interruptions and re-initializations. The process exhibits distinct behaviour on each of two disjoint identical subsets of the state space, and the state process can only be transferred from one subset to the other by interventions associated with interruptions/re-initializations. A characterization is given in terms of a piecewise deterministic Markov process, which exploits the aforementioned structure, and a method of solution with assured convergence, which does not require any special initialization, is provided. Additionally, under some conditions on the data, it is shown that the optimal policy is to produce the item completely in a certain region of the state space of low stock level.
 
Table captions:
  • Estimator performance: single-replication experiments.
  • Estimator performance: multiple-replication experiments.
Steady-state mean performance is a common basis for evaluating discrete event simulation models. However, analysis is complicated by autocorrelation and initial transients. We present confidence interval procedures for estimating the steady-state mean of a stochastic process from observed time series which may be short, autocorrelated, and transient. We extend the generalized least squares estimator of M. Snell and L. Schruben [IIE Trans. 17, 354–363 (1985)] and develop confidence interval procedures for single- and multiple-replication experiments. The procedures are asymptotically valid and, for short series with reasonable initializations (e.g., empty and idle), are comparable or superior to existing procedures. Further, we demonstrate and explain the robustness of the weighted batch means procedure of D. P. Bischak et al. [Manage. Sci. 39, 1002–1019 (1993; Zbl 0786.65132)] to initialization bias. The proposed confidence interval procedure requires neither truncation of initial observations nor choice of batch size, and permits the existence of a steady-state mean to be inferred from the data.
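As a point of reference for the procedures discussed, the following sketch computes a plain nonoverlapping batch-means confidence interval on synthetic autocorrelated data. It is a simpler relative of the weighted-batch-means and generalized-least-squares procedures in the abstract, not the authors' method, and all data are simulated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic autocorrelated series with an initial transient: an AR(1)
# process started away from its steady-state mean (illustrative only).
n, phi, mu = 2000, 0.8, 5.0
x = np.empty(n)
x[0] = 20.0  # biased initialization
for t in range(1, n):
    x[t] = mu + phi * (x[t - 1] - mu) + rng.normal()

# Nonoverlapping batch means: average each batch, then treat the batch
# means as (approximately) i.i.d. normal observations.
k = 20                          # number of batches
b = n // k                      # batch length
means = x[: k * b].reshape(k, b).mean(axis=1)
grand = means.mean()
half = stats.t.ppf(0.975, k - 1) * means.std(ddof=1) / np.sqrt(k)

print(f"95% CI for steady-state mean: {grand:.3f} +/- {half:.3f}")
```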
 
Design projects are typically broken down into inter-related tasks that are worked on by designers equipped with various resources. In the creative but uncertain design process, certain tasks may have to be iterated a few times to meet the design criteria, and this very often has a major impact on designer/resource planning and on project completion. This paper presents a novel integer programming formulation for the scheduling of design projects with an uncertain number of iterations. A combined Lagrangian relaxation and backward dynamic programming algorithm is developed to solve the problem with manageable complexity. Testing results demonstrate that good and robust schedules can be generated in a computationally efficient manner.
 
Modern technology is succeeding in delivering more information to people at ever faster rates. Under traditional views of rational decision making where individuals should evaluate and combine all available evidence, more information will yield better decisions. But our minds are designed to work in environments where information is often costly and difficult to obtain, leading us to use simple fast and frugal heuristics when making many decisions. These heuristics typically ignore most of the available information and rely on only a few important cues. Yet they make choices that are accurate in their appropriate application domains, achieving ecological rationality through their fit to particular information structures. This paper presents four classes of simple heuristics that use limited information—recognition-based heuristics, one-reason decision mechanisms, multiple-cue elimination strategies, and quick sequential search mechanisms—applied to environments from stock market investment to judging intentions of other organisms to choosing a mate. The findings that ecological rationality can be achieved with limited information are also used to indicate how our mind’s design, relying on decision mechanisms tuned to specific environments, should be taken into account in our technology’s design, creating environments that can enable better decisions.
 
We study some complex distribution network design problems, which involve facility location, warehousing, transportation and inventory decisions. Several realistic scenarios are investigated. Two kinds of mathematical programming formulations are proposed for all the introduced problems, together with a proof of their correctness. Some formulations extend models proposed by Perl and Daskin (1985) for some warehouse location-routing problems; other formulations are based on flow variables and constraints.
 
Current univariate and multivariate time series modelling procedures are reviewed. Areas of disagreement are discussed, such as the choice of a forecasting method, whether to use time series analysis or econometrics for economic data, the merits of prior seasonal adjustment, and the advisability of prewhitening in multivariate modelling. Some recent developments in model identification, estimation and model criticism are described. It is argued that too little attention has been paid to allowing for departures from the model; this leads to a discussion of exploratory and robust techniques. Finally, it is pointed out that little has been done on modelling positive and discrete-valued time series of importance in OR such as lifetimes and counts; some recent proposals are described.
 
We take the position that 'complex' and 'simple' are not merely quantitatively different values assigned to a single system attribute, measured by a numerical quantity varying over some spectrum. Rather, we argue that complex and simple systems are of a fundamentally different character, and that the distinctions between 'simple' and 'complex' raise some basic issues for theoretical science in general, and for physics and biology in particular. Our discussion is couched in terms of the class (or better, the category) of mathematical images which a given natural system may have, and the relations between these images. Essentially, we define a simple system as: (a) a system whose mathematical images are dynamical systems (i.e. dual structures consisting of a set of states, and superimposed dynamical laws or equations of motion) and (b) this class contains a unique maximal image, which behaves like a free object (i.e. all other images are homomorphic images of the maximal one). A complex system is one which possesses mathematical images which are not dynamical systems. In particular, there is no maximal description of a complex system. We suggest a number of specific examples of complex systems and their class of mathematical images. The behaviours of complex systems are contrasted with those of simple counterparts. We point out in particular that contemporary physics is essentially the science of simple systems, and is neither directly applicable nor adequate for natural systems, like organisms, which are not simple. In effect, we are arguing that the problems associated with complex systems, particularly in biology, pose problems for contemporary physics at least as serious as those posed by e.g. atomic spectra or chemical bonding in the last century.
 
In this paper we propose a risk-value model for evaluating decisions under risk. In this model preference for a gamble is determined by its riskiness and its value or worth. In a simple form of the risk-value model, risk is measured by variance and value by expected returns. We discuss several other empirically more attractive forms of the risk-value model. We show that the risk-value model provides a framework for unifying the streams of research on risk judgments and on modeling choices. We explore the consistency of the risk-value model with both expected utility and non-expected utility preferences. Specifically, we show that if we define risk and value in appropriate ways, the rank order produced by the risk-value model will be consistent with a suitably chosen expected utility or non-expected utility model. We briefly discuss application of the risk-value model to the theory of finance and to social risk analysis.
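The simple form mentioned above, risk measured by variance and value by expected return, can be made concrete in a few lines; the gambles and the trade-off constant k below are hypothetical.

```python
# Simplest risk-value form from the abstract: score a gamble by expected
# return penalized by variance, f(X) = E[X] - k * Var(X).
def mean_variance_score(outcomes, probs, k=0.1):
    ev = sum(p * x for p, x in zip(probs, outcomes))
    var = sum(p * (x - ev) ** 2 for p, x in zip(probs, outcomes))
    return ev - k * var, ev, var

safe = mean_variance_score([40, 60], [0.5, 0.5])
risky = mean_variance_score([0, 110], [0.5, 0.5])
print("safe  gamble: score=%.1f (EV=%.1f, Var=%.1f)" % safe)
print("risky gamble: score=%.1f (EV=%.1f, Var=%.1f)" % risky)
# With k = 0.1 the safe gamble wins despite its lower expected value,
# illustrating the riskiness/value trade-off the model formalizes.
```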
 
A basic requirement of any distribution study is to estimate the location of, and the demand from, customers. Following a description of methods for obtaining these estimates, which shows that demand is sometimes dependent on distance from a depot, the measurement of performance and costs is discussed. The problems of vehicle scheduling, fleet size and warehouse location are then surveyed. Finally, an attempt is made to indicate future developments in O.R. distribution studies.
 
Energy modelling has received considerable attention, following the oil price increases. This paper discusses the contributions of energy modelling to our understanding of certain basic issues and to the practice of OR in general. The successes and failures of energy models are reviewed, and areas of useful research are indicated.
 
In this paper we propose a relative risk-value model and derive a relative measure of risk for lotteries with positive outcomes. Under a condition called relative risk independence, a decision could be made by explicitly trading off between the relative measure of risk and a measure of value, which can either be consistent with some expected utility models or represent nonexpected utility preferences. Specifically, this type of risk-value model is associated with power (or linear plus power) and logarithmic (or linear plus logarithmic) functions. We address some prescriptive and descriptive implications of our relative risk-value framework, and show that our generalized relative risk-value model is very flexible for modeling individuals' preferences and can explain many decision paradoxes.
 
This paper deals with the problem of inaccuracy of the solutions generated by metaheuristic approaches for combinatorial optimization bi-criteria {0, 1}-knapsack problems. A hybrid approach which combines systematic and heuristic searches is proposed to reduce that inaccuracy in the context of a scatter search method. The components of this method are used to determine regions in the decision space to be systematically searched. Comparisons with small and medium size instances solved by exact methods are presented. Large size instances are also considered, and the quality of the approximation is evaluated by taking into account the proximity to the upper frontier, devised by the linear relaxation, and the diversity of the solutions. Comparisons with two other well-known metaheuristics are also performed. The results show the effectiveness of the proposed approach for both small/medium and large size instances.
 
This paper provides a categorized bibliography on the application of the techniques of multiple criteria decision making (MCDM) to problems and issues in finance. A total of 265 references have been compiled and classified according to the methodological approaches of goal programming, multiple objective programming, the analytic hierarchy process, etc., and to the application areas of capital budgeting, working capital management, portfolio analysis, etc. The bibliography provides an overview of the literature on “MCDM combined with finance,” shows how contributions to the area have come from all over the world, facilitates access to the entirety of this heretofore fragmented literature, and underscores the often multiple criterion nature of many problems in finance.
 
Given an unconstrained quadratic optimization problem of the form (QP) min{xᵀQx : x ∈ {−1, 1}ⁿ}, with Q ∈ ℝⁿˣⁿ, we present different methods for computing bounds on its optimal objective value. Some of the lower bounds introduced are shown to generally improve over the one given by a classical semidefinite relaxation. We report theoretical results on these new bounds and provide preliminary computational experiments on small instances of the maximum cut problem illustrating their performance.
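For a feel for such bounds, note that every x ∈ {−1, 1}ⁿ has ‖x‖² = n, so xᵀQx ≥ n·λ_min(Q). The sketch below compares this elementary spectral bound (not one of the paper's new bounds) with the true optimum on a tiny random instance.

```python
import itertools
import numpy as np

rng = np.random.default_rng(42)

# Small symmetric Q for the problem min { x^T Q x : x in {-1,1}^n }.
n = 8
Q = rng.normal(size=(n, n))
Q = (Q + Q.T) / 2

# Spectral lower bound: ||x||^2 = n for all sign vectors, hence
#   x^T Q x >= lambda_min(Q) * n.
spectral_bound = np.linalg.eigvalsh(Q)[0] * n

# Brute force over all 2^n sign vectors for the true optimum (n is tiny).
best = min(x @ Q @ x for x in
           (np.array(s) for s in itertools.product((-1, 1), repeat=n)))

print(f"spectral lower bound: {spectral_bound:.3f}")
print(f"true optimum:         {best:.3f}")
```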
 
This paper addresses a real-life 1.5D cutting stock problem, which arises in a make-to-order plastic company. The problem is to choose a subset from the set of stock rectangles to be used for cutting into a number of smaller rectangular pieces so as to minimize total production cost and meet orders. The total production cost includes not only material wastage, as in traditional cutting stock problems, but also production time. A variety of factors are taken into account, like cutter knife changes, machine restrictions, due dates and other work in progress limitations. These restrictions make the combinatorial structure of the problem more complex. As a result, existing algorithms and mathematical models are no longer appropriate. Thus we developed a new 1.5D cutting stock model with multiple objectives and multi-constraints and solve this problem in an incomplete enumerative way. The computational results show that the solution procedure is easy to implement and works very well.
 
In this paper we study a 1.5-dimensional cutting stock and assortment problem, which includes determining the number of different widths of roll stocks to be maintained as inventory and determining how these roll stocks should be cut by choosing the optimal cutting pattern combinations. We propose a new multi-objective mixed integer linear programming (MILP) model that simultaneously minimizes two conflicting objectives, related to the trim loss cost and the combined inventory cost, in order to fulfill a given set of cutting orders. An equivalent nonlinear version and a particular case, related to the situation when a producer is interested in choosing only a small number of types among all possible roll sizes, have also been considered. A new method called conic scalarization is proposed for scalarizing non-convex multi-objective problems, and several experimental tests are reported in order to demonstrate the validity of the developed modeling and solving approaches.
 
Some comments on the paper by Tsubakitani and Evans concerning a 'jump search' metaheuristic for the traveling salesman problem (TSP) are presented. The origin and nature of certain recent developments that yield promising outcomes for the TSP are disclosed. Results of computational tests are compared with the outcomes of previous studies, which have since been superseded by superior findings.
 
The subject of this paper is the formulation and discussion of a semi-infinite linear vector optimization problem which extends multiple objective linear programming problems to those with an infinite number of objective functions and constraints. Furthermore, it generalizes in some way semi-infinite programming. Besides the statement of some immediately derived results which are related to known results in semi-infinite linear programming and vector optimization, the problem mentioned above is interpreted as a decision model, under risk or uncertainty, containing continuous random variables. Thus we treat the case of an infinite number of occurring states of nature. These types of problems frequently occur within aspects of decision theory in management science.
 
We consider a hierarchical workforce in which a higher qualified worker can substitute for a lower qualified one, but not vice versa. Daily labor requirements within a week may vary, but each worker must receive n off-days in the week. This problem has been considered by Hung (R. Hung, Eur. J. Oper. Res. 78(1) (1994) 49–57), who discusses a necessary and sufficient condition for a labor mix to be feasible and presents a simple one-pass method that frequently gives the least-cost labor mix. We show in this paper that the integer programming approach is well suited to solving this problem: the definition of the integer programming model is simple, its implementation is immediate by using, for example, the Mathematical Programming Language (MPL) and the integer programming solver XA, the computation times are low (generally a few seconds on a small microcomputer), and finally the power of the integer programming approach allows us to extend the model in two interesting directions.
 
Iceland, one of the smallest European economies, was hit severely by the 2008 financial crisis. This paper uses a firm-level Community Innovation Survey (CIS) data set to consider the economy in the period preceding the collapse of its financial system. We examine the linkage between the crisis and innovativeness from the perspective of technical efficiency by means of a Data Envelopment Analysis of 204 randomly selected firms. The results suggest that a substantial fraction of Icelandic firms can be classified as non-efficient in their production process. The production scale of many manufacturing firms is too small to be considered technically efficient, while services firms typically use excessive resources in their production process. A remarkably weak performance in transforming R&D and labor efforts into successful innovations is observed. Based on the empirical results, suitable policy implications are suggested to remedy the suboptimal production structure and help economic recovery.
 
Flexible manufacturing systems operate in a dynamic environment and face considerable uncertainty in production demands. The development of a flexible machine layout is a critical issue in creating a system that can respond effectively to these requirements. Unlike most existing methods for creating flexible layout designs, the procedure developed in this paper is not restricted to equal size machines. It optimizes the trade-offs between increased material handling costs as requirements change and machine rearrangement costs needed to adapt the layout to these changes. The proposed flexible machine layout design procedure formulates and solves a robust machine layout design problem over a rolling horizon planning time window. The formulation, details of the solution methodology, illustrative examples, and computational results are presented.
 
We consider the Asymmetric Capacitated Vehicle Routing Problem (ACVRP), a particular case of the standard asymmetric Vehicle Routing Problem arising when only the vehicle capacity constraints are imposed. ACVRP is known to be NP-hard and finds practical applications, e.g. in distribution and scheduling. In this paper we describe the extension to ACVRP of the two well-known Clarke-Wright and Fisher-Jaikumar heuristic algorithms. We also propose a new heuristic algorithm for ACVRP that, starting with an initial infeasible solution, determines the final set of vehicle routes through an insertion procedure as well as intra-route and inter-route arc exchanges. The initial infeasible solution is obtained by using the additive bounding procedures for ACVRP described by Fischetti, Toth and Vigo in 1992. Extensive computational results on several classes of randomly generated test problems involving up to 300 customers, and on some real instances of distribution problems in urban areas, are presented. The results obtained show that the proposed approach compares favourably with previous algorithms from the literature.
 
The conventional data envelopment analysis (DEA) measures the relative efficiencies of a set of decision making units (DMUs) with exact values of inputs and outputs. For imprecise data, i.e., mixtures of interval data and ordinal data, some methods have been developed to calculate the upper bound of the efficiency scores. This paper constructs a pair of two-level mathematical programming models, whose objective values represent the lower bound and upper bound of the efficiency scores, respectively. Based on the concept of productive efficiency and the application of a variable substitution technique, the pair of two-level nonlinear programs is transformed to a pair of ordinary one-level linear programs. Solving the associated pairs of linear programs produces the efficiency intervals of all DMUs. An illustrative example verifies the idea of this paper. A real case is also provided to give some interpretation of the interval efficiency. Interval efficiency not only describes the real situation in better detail; psychologically, it also eases the tension of the DMUs being evaluated as well as the persons conducting the evaluation.
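For reference, the exact-data building block the paper starts from, the input-oriented CCR multiplier model, can be solved per DMU as a small linear program. The sketch below uses invented data and standard LP machinery; it does not reproduce the paper's two-level interval formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 5 DMUs, 2 exact inputs, 1 exact output (illustrative only;
# the paper's contribution concerns interval/ordinal data).
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0], [2.0, 4.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0], [1.0]])
n, m, s = X.shape[0], X.shape[1], Y.shape[1]

def ccr_efficiency(o):
    # Variables: u (s output weights) followed by v (m input weights).
    c = np.concatenate([-Y[o], np.zeros(m)])             # maximize u.y_o
    A_ub = np.hstack([Y, -X])                            # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]])[None, :]  # v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=(0, None))
    return -res.fun

for o in range(n):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```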
 
Material Requirements Planning (MRP) and Just-in-Time (JIT) systems are directed toward planning and controlling the important characteristics of material flow: how much of what materials flow, and when. Since material flow is at the heart of the manufacturing firm, MRP and JIT are powerful management tools that can determine the success or failure of an entire manufacturing system. One of the strongest debates in manufacturing has centered on the performance comparison and the compatibility of the JIT production system with existing MRP. The primary intent of this research is to provide an overview of the manufacturing planning and control environment associated with MRP and JIT. Classifying the existing MRP/JIT comparison and integration literature, two different perspectives on MRP/JIT are discussed, and future research areas are proposed based on the taxonomy.
 
An M/G/1 retrial queueing system with disasters and unreliable server is investigated in this paper. Primary customers arrive in the system according to a Poisson process, and they receive service immediately if the server is available upon their arrivals. Otherwise, they will enter a retrial orbit and try their luck after a random time interval. We assume the catastrophes occur following a Poisson stream, and if a catastrophe occurs, all customers in the system are deleted immediately and it also causes the server’s breakdown. Besides, the server has an exponential lifetime in addition to the catastrophe process. Whenever the server breaks down, it is sent for repair immediately. It is assumed that the service time and two kinds of repair time of the server are all arbitrarily distributed. By applying the supplementary variables method, we obtain the Laplace transforms of the transient solutions and also the steady-state solutions for both queueing measures and reliability quantities of interest. Finally, numerical inversion of Laplace transforms is carried out for the blocking probability of the system, and the effects of several system parameters on the blocking probability are illustrated by numerical inversion results.
 
The Minimum Spanning Tree (MST) problem is of high importance in network optimization. The multi-criteria MST (mc-MST) is a more realistic representation of practical problems in the real world, but it is difficult for traditional network optimization techniques to handle. In this paper, a genetic algorithm (GA) approach is developed to deal with this problem. Without neglecting its network topology, the proposed method adopts the Prüfer number as the tree encoding and applies the Multiple Criteria Decision Making (MCDM) technique and nondominated sorting to make the GA search produce Pareto optimal solutions either focused on the region near the ideal point or distributed all along the Pareto frontier. Compared with an enumeration method for Pareto optimal solutions, the numerical analysis shows the efficiency and effectiveness of the GA approach on the mc-MST problem.
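The Prüfer encoding mentioned above represents a spanning tree on n labeled nodes as a string of n − 2 node labels; decoding is a short exercise, sketched below on an arbitrary example sequence.

```python
import heapq

# Decode a Prüfer sequence into tree edges: the fixed-length integer
# representation of spanning trees the GA uses as its chromosome.
def prufer_to_tree(prufer):
    n = len(prufer) + 2
    degree = [1] * n
    for v in prufer:
        degree[v] += 1
    edges = []
    leaves = [v for v in range(n) if degree[v] == 1]
    heapq.heapify(leaves)
    for v in prufer:
        leaf = heapq.heappop(leaves)   # smallest current leaf
        edges.append((leaf, v))
        degree[v] -= 1
        if degree[v] == 1:             # v has become a leaf
            heapq.heappush(leaves, v)
    edges.append((heapq.heappop(leaves), heapq.heappop(leaves)))
    return edges

print(prufer_to_tree([3, 3, 0, 4]))    # a 6-node tree with 5 edges
```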
 
In the past two decades, researchers have tended to develop simulation models or mathematical programming models to estimate the performance measures of a production system, which may or may not include considerations of layout design, rather than to develop indices specifically for evaluating a layout alternative. These models usually ask for very detailed information. Most of them involve oversimplifying assumptions and require overwhelming computational effort, so they cannot be manipulated with ease in practice. The limitations and deficiencies of previous indices and performance measures include: parameters that are hard to obtain; inappropriately detailed data requirements; much effort spent to obtain little improvement in accuracy; data available only after operations start; and no generic approach or clear validation provided. To overcome these deficiencies, generic approaches for developing quantitative and qualitative indices are provided, and new indices for the flow criterion group and the environment criterion group are presented. The parameters of each index are easier to obtain and do not require much effort in data collection. Validations of each quantitative index with examples are also provided. The generic approaches also allow users to revise the indices according to the specific case considered.
 
This research is motivated by issues faced by a large manufacturer of semiconductor devices. Semiconductor manufacturing companies allocate millions of dollars every year for new types of machine tools for their facilities. Typically these are special purpose machine tools which are made to order. The rate of change in products and technology makes it difficult for manufacturers to have a good estimate of future tool requirements. Further, manufacturers experience a long lead time while procuring these tools. In this paper, we model the tool capacity planning problem under uncertainty in demand. The number of tools required in a facility is sufficiently large (nearly a hundred or more tools) to make it nearly impossible to obtain efficient exact algorithms. We provide heuristics to find efficient tool procurement plans and test their quality using lower bounds on the formulation.
Keywords: Demand scenarios; Semiconductor manufacturing; Strategic planning; Optimization; Heuristics
 
This paper presents a bilevel programming formulation of a leader–follower game that can be used to help decision makers arrive at a rational policy for encouraging biofuel production. In the model, the government is the leader and would like to minimize the annual tax credits it allows the petro-chemical industry for producing biofuels. The crops grown for this purpose are on land now set aside and subsidized through a different support program. The agricultural sector is the follower. Its objective is to maximize profits by selecting the best mix of crops to grow as well as the percentage of land to set aside. Two solution algorithms are developed. The first involves a grid search over the tax credit variables corresponding to the two biofuels under consideration, ester and ethanol. Once these values are fixed, nonfood crop prices can be determined and the farm sector linear program solved. The second algorithm is based on an approximate nonlinear programming (NLP) formulation of the bilevel program. An “engineering” approach is taken where the discontinuities in the government's problem are ignored and the farm model is treated as a function that maps nonfood crop prices into allocation decisions. Results are given for an agricultural region in the northern part of France comprising 393 farms.
 
This paper presents the model, solution method, and system developed and implemented for hot rolling production scheduling. The project is part of a large-scale effort to upgrade the production and operations management systems of major iron and steel companies in China. Hot rolling production involves sequence-dependent setup costs. Traditionally the production is scheduled using a greedy serial method and the setup cost is very high. In this study we propose a parallel strategy to model the scheduling problem and solve it using a new modified genetic algorithm (MGA). Combining the model with a man–machine interactive method, a scheduling system is developed. The result of one year's running in the Shanghai Baoshan Iron & Steel Complex shows a 20% improvement over the previous manually based system. As the company is one of the largest and most modernized steel companies in China, the successful application of the scheduling system in this company sets an example for other steel companies, which have even more potential for improvement.
 
The probabilistic traveling salesman problem is a well known problem that is quite challenging to solve. It involves finding the tour with the lowest expected cost for customers that will require a visit with a given probability. There are several proposed algorithms for the homogeneous version of the problem, where all customers have identical probability of being realized. From the literature, the most successful approaches involve local search procedures, with the most famous being the 2-p-opt and 1-shift procedures proposed by Bertsimas [D.J. Bertsimas, L. Howell, Further results on the probabilistic traveling salesman problem, European Journal of Operational Research 65 (1) (1993) 68–95]. Recently, however, evidence has emerged that indicates the equations offered for these procedures are not correct, and even when corrected, the translation to the heterogeneous version of the problem is not simple. In this paper we extend the analysis and correction to the heterogeneous case. We derive new expressions for computing the cost of 2-p-opt and 1-shift local search moves, and we show that the neighborhood of a solution may be explored in O(n²) time, the same as for the homogeneous case, instead of O(n³) as first reported in the literature.
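The key quantity in this line of work is the expected length of an a priori tour when each customer i is present only with probability p_i; the sketch below evaluates it directly in O(n²) time on invented coordinates and probabilities.

```python
import math

# Expected length of an a priori PTSP tour: customer i is present with
# probability p[i]; absent customers are skipped. The arc from tour
# position i to position i+r is traveled iff both endpoints are present
# and all r-1 customers between them are absent.
coords = [(0, 0), (2, 1), (4, 0), (3, 3), (1, 4)]   # illustrative
p = [1.0, 0.6, 0.9, 0.5, 0.7]                       # presence probabilities
tour = [0, 1, 2, 3, 4]

def dist(a, b):
    return math.dist(coords[a], coords[b])

def expected_tour_length(tour, p):
    n = len(tour)
    total = 0.0
    for i in range(n):
        skip_prob = 1.0                  # P(all intermediates absent)
        for r in range(1, n):
            j = (i + r) % n
            a, b = tour[i], tour[j]
            total += dist(a, b) * p[a] * p[b] * skip_prob
            skip_prob *= 1.0 - p[b]      # b becomes an intermediate next
    return total

print(f"expected tour length: {expected_tour_length(tour, p):.3f}")
```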
 
This paper addresses the problem of determining a dynamic berth assignment to ships in the public berth system. While the public berth system may not be suitable for most container ports in major countries, it is desirable for higher cost-effectiveness in Japan's ports. The berth allocation to calling ships is a key factor for efficient public berthing; however, it cannot be computed in polynomially bounded time. To obtain a good solution with considerably small computational effort, we developed a heuristic procedure based on the genetic algorithm. We conducted a large number of computational experiments, which showed that the proposed algorithm is adaptable to real world applications.
 
A polynomial time algorithm is developed to minimize maximum tardiness on a single machine in the presence of deadlines. The possibility of extending the total tardiness pseudopolynomial algorithm to the cases where release times or deadlines are in place is also investigated. It is concluded that the aforementioned pseudopolynomial algorithm can be extended only when deadlines and due dates are compatible and all job release times are equal.
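For the unconstrained core of this problem, Jackson's earliest-due-date (EDD) rule already minimizes maximum tardiness on a single machine; the paper's contribution is handling deadlines on top of this. A minimal EDD sketch with made-up jobs:

```python
# Jackson's EDD rule minimizes maximum tardiness on a single machine
# when there are no deadlines; the paper's deadline-aware algorithm is
# more involved. Processing times and due dates below are hypothetical.
jobs = [("A", 3, 4), ("B", 2, 3), ("C", 4, 8), ("D", 2, 5)]  # (id, p, d)

t, worst = 0, 0
for name, proc, due in sorted(jobs, key=lambda j: j[2]):  # EDD order
    t += proc
    worst = max(worst, t - due)   # tardiness of this job, floored at 0

print("max tardiness under EDD:", worst)
```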
 
A firm using a discount rate defined at the corporate scale as a weighted average cost of capital (WACC) may have to value projects subject to a different tax rate from the one used to calculate its discount rate. Moreover, to determine the economic value of a project, the WACC and Arditti–Levy methods need to be adjusted if the firm allocates to this project a loan representing proportionally more (or less) than the fraction corresponding to the target debt ratio defined by the firm for projects in the same class of risk. We first propose a method which corresponds to the adjustment of standard WACC calculations. The formulation adopted (“generalized ATWACC method”) has the advantage of being independent of any consideration related to debt ratios. We then develop a consistent adaptation of the Arditti–Levy method (“adapted BTWACC method”), but it does not possess the simplicity of that of the generalized ATWACC method.
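As background for the adjustments the paper proposes, the standard after-tax WACC it generalizes is a one-line formula; all figures below are hypothetical.

```python
# Standard after-tax WACC, the baseline the paper generalizes:
#   WACC = (E/V) * r_E + (D/V) * r_D * (1 - tax)
E, D = 600.0, 400.0        # market values of equity and debt (made up)
r_E, r_D = 0.12, 0.06      # required returns on equity and debt
tax = 0.30                 # corporate tax rate

V = E + D
wacc = (E / V) * r_E + (D / V) * r_D * (1 - tax)
print(f"after-tax WACC = {wacc:.4f}")  # 0.6*0.12 + 0.4*0.06*0.7 = 0.0888
```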
 
In the present study we examine the design and implementation of a computer-network-based system to aid the construction of a university course timetable. The proposed system uses a centralized database, which contains all the appropriate information and operates on a computing platform. The different university departments collect the appropriate parts of the database to which they have access and update them. Moreover, the updated database is processed by the timetable constructors to produce timetables for each of the departments separately, which are then forwarded to the appropriate nodes of the network. A unified version of the timetable is also available. The system uses an integer programming model that assigns courses to time slots and rooms to construct each department's timetable. An automated procedure is generated to link the integer programming model with each department's data and to resolve conflicts produced by the distribution of processes. The model is supported by a flexible front-end device, which generates constraints corresponding to assumptions specified by the user, and by report writers, which facilitate the presentation of the resulting schedules. The whole system is flexible and allows the easy construction and testing of alternative schedules, which are pre-conditioned according to requirements specified by the user. The timetabling system has been tested with data provided by the Athens University of Economics and Business.
 
The paper presents a new genetic local search (GLS) algorithm for multi-objective combinatorial optimization (MOCO). The goal of the algorithm is to generate in a short time a set of approximately efficient solutions that will allow the decision maker to choose a good compromise solution. In each iteration, the algorithm draws at random a utility function and constructs a temporary population composed of a number of the best solutions among those generated so far. Then, a pair of solutions selected at random from the temporary population is recombined. A local search procedure is applied to each offspring. Results of the presented experiment indicate that the algorithm outperforms other multi-objective methods based on GLS, as well as a Pareto ranking-based multi-objective genetic algorithm (GA), on the travelling salesperson problem (TSP).
 
Bilevel programming involves two optimization problems where the constraint region of the first level problem is implicitly determined by another optimization problem. This paper develops a genetic algorithm for the linear bilevel problem in which both objective functions are linear and the common constraint region is a polyhedron. Taking into account the existence of an extreme point of the polyhedron which solves the problem, the algorithm aims to combine classical extreme point enumeration techniques with genetic search methods by associating chromosomes with extreme points of the polyhedron. The numerical results show the efficiency of the proposed algorithm. In addition, this genetic algorithm can also be used for solving quasiconcave bilevel problems provided that the second level objective function is linear.
 
In this paper, we present the Promethee methods, a new class of outranking methods in multicriteria analysis. Their main features are simplicity, clearness and stability. The notion of generalized criterion is used to construct a valued outranking relation. All the parameters to be defined have an economic signification, so that the decision maker can easily fix them. Two ways of treatment are proposed: It is possible to obtain either a partial preorder (Promethee I) or a complete one (Promethee II), both on a finite set of feasible actions. A comparison is made with the Electre III method. The stability of the results given by the two methods is analysed. Numerical applications are given in order to illustrate the properties of the new methods and some further problems are discussed.
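A compact sketch of the Promethee II net-flow computation described above, using the simplest ("usual") generalized criterion; the alternatives, criteria weights, and scores are all invented for illustration.

```python
import numpy as np

# Hypothetical evaluation table: rows are actions a1..a4, columns are
# criteria g1..g3 (all larger-is-better here for simplicity).
scores = np.array([
    [8.0, 7.0, 2.0],
    [6.0, 9.0, 5.0],
    [9.0, 4.0, 7.0],
    [5.0, 6.0, 9.0],
])
weights = np.array([0.5, 0.3, 0.2])   # criterion weights, sum to 1
n = scores.shape[0]

# pi[a, b]: weighted preference of a over b. The "usual" generalized
# criterion gives preference 1 as soon as a strictly beats b.
pi = np.zeros((n, n))
for a in range(n):
    for b in range(n):
        pi[a, b] = weights @ (scores[a] > scores[b])

phi_plus = pi.sum(axis=1) / (n - 1)    # leaving (positive) flow
phi_minus = pi.sum(axis=0) / (n - 1)   # entering (negative) flow
net_flow = phi_plus - phi_minus        # Promethee II complete preorder

for i in np.argsort(-net_flow):
    print(f"a{i + 1}: net flow = {net_flow[i]:+.3f}")
```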
 
Top-cited authors
Thomas L. Saaty
  • University of Pittsburgh
Joe Zhu
  • Worcester Polytechnic Institute
Kaoru Tone
  • National Graduate Institute for Policy Studies
Michel Gendreau
  • Polytechnique Montréal
T. C. E. Cheng
  • The Hong Kong Polytechnic University