Computational Management Science

Published by Springer Nature

Online ISSN: 1619-6988

·

Print ISSN: 1619-697X

Articles


Global optimization of mixed-integer bilevel programming problems

February 2005

·

94 Reads

·

Christodoulos A. Floudas
Two approaches that solve the mixed-integer nonlinear bilevel programming problem to global optimality are introduced. The first addresses problems that are mixed-integer nonlinear in the outer variables and C²-nonlinear in the inner variables. The second addresses problems with general mixed-integer nonlinear functions at the outer level. The inner-level functions may be mixed-integer nonlinear in the outer variables; linear, polynomial, or multilinear in the inner integer variables; and linear in the inner continuous variables. This second approach is based on reformulating the mixed-integer inner problem as a continuous one via its vertex polyhedral convex hull representation and solving the resulting nonlinear bilevel optimization problem by a novel deterministic global optimization framework. Computational studies illustrate the proposed approaches. Copyright Springer-Verlag Berlin/Heidelberg 2005

Bilevel programming approach applied to the flow shop scheduling problem under fuzziness

November 2005

·

26 Reads

This paper presents a fuzzy bilevel programming approach to solve the flow shop scheduling problem. The problem considered here differs from the standard form in that operators are assigned to the machines, a hierarchy of two decision makers is imposed, and processing times are fuzzy. The shop owner is the higher-level decision maker and assigns the jobs to the machines in order to minimize the flow time, while the customer is the lower-level decision maker and decides on a job schedule in order to minimize the makespan. We use the concept of a tolerance membership function at each level to define a fuzzy decision model that generates an optimal (satisfactory) solution for the bilevel flow shop scheduling problem. A solution algorithm for solving this problem is given. Copyright Springer-Verlag Berlin/Heidelberg 2005

Jacques F. Benders is 80

February 2005

·

333 Reads

A maximal predictability portfolio using absolute deviation reformulation

January 2010

·

60 Reads

This paper shows that a large-scale maximal predictability portfolio (MPP) optimization problem can be solved within a practical amount of computational time by using absolute deviation instead of squared deviation in the definition of the coefficient of determination. We also show that the MPP outperforms the mean-absolute-deviation portfolio on real asset data from the Tokyo Stock Exchange.
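
As a rough illustration of why the absolute-deviation reformulation helps computationally: replacing squared deviation with absolute deviation lets the deviations be captured by auxiliary variables in a linear program. The sketch below (hypothetical data, a generic mean-absolute-deviation objective rather than the authors' MPP model) shows the standard linearization with scipy.optimize.linprog.

```python
# Minimal sketch: linearizing absolute deviation turns a mean-absolute-
# deviation portfolio problem into an LP, which is the source of the
# tractability gain. Synthetic data; not the authors' MPP formulation.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
T, n = 250, 5                          # observations, assets
R = rng.normal(0.0005, 0.01, (T, n))   # hypothetical daily returns
mu = R.mean(axis=0)

# Decision vector z = [x (n weights), d (T deviation bounds)].
# Minimize (1/T) * sum(d)  s.t.  d_t >= +/-(R_t - mu) @ x, sum(x)=1, x>=0.
c = np.concatenate([np.zeros(n), np.full(T, 1.0 / T)])
A_dev = R - mu                         # centered returns
A_ub = np.block([[ A_dev, -np.eye(T)],
                 [-A_dev, -np.eye(T)]])
b_ub = np.zeros(2 * T)
A_eq = np.concatenate([np.ones(n), np.zeros(T)])[None, :]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (n + T))
print("weights:", np.round(res.x[:n], 3), "MAD:", res.fun)
```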

Day-ahead market bidding for a Nordic hydropower producer: Taking the Elbas market into account

April 2011

·

270 Reads

In many power markets around the world, energy generation decisions result from two-sided auctions in which producing and consuming agents submit their price-quantity bids. The determination of optimal bids in power markets is a complicated task that has to be undertaken every day. In the present work, we propose an optimization model for a price-taking hydropower producer in Nord Pool that takes into account the uncertainty in market prices as well as both production and physical trading aspects. Day-ahead bidding takes place a day before the actual operation and energy delivery. After this round of bidding, but before actual operation, some adjustments to the dispatched power (accepted bids) have to be made, due to uncertainty in prices, inflow and load. Such adjustments can be made in the Elbas market, which allows trading physical electricity up to one hour before the operation hour. This paper uses stochastic programming to determine the optimal bidding strategy and the impact of the possibility to participate in Elbas. ARMAX and GARCH techniques are used to generate realistic market price scenarios taking into account both day-ahead and Elbas price uncertainty. The results show that considering Elbas when bidding in the day-ahead market significantly impacts neither the profit nor the recommended bids of a typical hydro producer. Keywords: Stochastic programming · Mixed integer programming · Electricity auctions · Elbas · Hydroelectric scheduling · GARCH
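
To illustrate the scenario-generation step, the sketch below simulates price paths from a GARCH(1,1) recursion; the parameters are illustrative assumptions, not estimates from the paper, and the ARMAX day-ahead component is omitted.

```python
# Minimal sketch of GARCH(1,1)-style price scenario generation, in the
# spirit of the scenario step described above. Parameter values are
# illustrative assumptions, not the paper's estimates.
import numpy as np

def garch_price_scenarios(p0, n_scen, horizon, omega=1e-4, alpha=0.1,
                          beta=0.85, mu=0.0, seed=0):
    rng = np.random.default_rng(seed)
    prices = np.empty((n_scen, horizon))
    for s in range(n_scen):
        h = omega / (1 - alpha - beta)          # unconditional variance
        p, eps = p0, 0.0
        for t in range(horizon):
            h = omega + alpha * eps**2 + beta * h   # conditional variance
            eps = np.sqrt(h) * rng.standard_normal()
            p = p * np.exp(mu + eps)                # log-return update
            prices[s, t] = p
    return prices

scenarios = garch_price_scenarios(p0=40.0, n_scen=1000, horizon=24)
print(scenarios.mean(axis=0)[:5])   # average hourly price path
```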

Participating life insurance policies: An accurate and efficient parallel software for COTS clusters

August 2011

·

112 Reads

In this paper we discuss the development of parallel software for the numerical simulation of participating life insurance policies in distributed environments. The main computational kernels in the mathematical models for the solution of the problem are multidimensional integrals and stochastic differential equations. The former are solved by means of the Monte Carlo method combined with the antithetic variates variance reduction technique, while the differential equations are approximated via a fully implicit, positivity-preserving Euler method. The parallelization strategy we adopted relies on the parallelization of the Monte Carlo algorithm. We implemented and tested the software on a PC Linux cluster. Keywords: Life insurance policies · Monte Carlo method · Parallel computing
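
The antithetic variates idea mentioned above is easy to sketch: each normal draw Z is paired with -Z, leaving the estimator unbiased while cutting variance for monotone integrands. A toy integrand stands in for the policy valuation kernel.

```python
# Minimal sketch of the antithetic variates technique: pairing each draw
# Z with -Z preserves the mean while reducing estimator variance.
# Toy integrand, not the insurance policy model.
import numpy as np

def mc_plain(f, n, rng):
    z = rng.standard_normal(n)
    return f(z).mean()

def mc_antithetic(f, n, rng):
    z = rng.standard_normal(n // 2)
    return 0.5 * (f(z) + f(-z)).mean()   # pair each Z with -Z

f = lambda z: np.exp(0.2 * z)            # E[f(Z)] = exp(0.02)
rng = np.random.default_rng(42)
est_p = [mc_plain(f, 10_000, rng) for _ in range(200)]
est_a = [mc_antithetic(f, 10_000, rng) for _ in range(200)]
print("plain std:     ", np.std(est_p))
print("antithetic std:", np.std(est_a))  # noticeably smaller
```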

Comparison of Policy Functions from the Optimal Learning and Adaptive Control Frameworks

January 2008

·

62 Reads

Comparisons of various methods for solving stochastic control economic models can be carried out with Monte Carlo methods. These methods have been applied to simple one-state, one-control quadratic-linear tracking models; however, large outliers may occur in a substantial number of the Monte Carlo runs when certain parameter sets are used in these models. This paper traces these outliers to two sources: (1) the use of a zero penalty weight on the control variables and (2) the generation of a near-zero initial estimate of the control parameter in the systems equations by the Monte Carlo routine. This result leads to an understanding of why both the unsophisticated Optimal Feedback (Certainty Equivalence) and the sophisticated Dual methods do poorly in some Monte Carlo comparisons relative to the moderately sophisticated Expected Optimal Feedback method.
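
The first outlier source is easy to reproduce. In a one-state, one-control tracking model x_{t+1} = a x_t + b u_t + e_t, a zero penalty on the control makes the certainty-equivalence feedback u_t = -(a/b̂) x_t, which explodes when the routine draws a near-zero estimate b̂. A minimal sketch under these assumptions (not the paper's exact experiment):

```python
# Minimal sketch of why outliers arise: with a zero control penalty the
# certainty-equivalence feedback is u = -(a / b_hat) * x, which explodes
# when the Monte Carlo routine draws a near-zero estimate b_hat.
# Illustrative one-state, one-control model, not the paper's exact setup.
import numpy as np

def simulate_cost(a, b_true, b_hat, T=20, seed=0):
    rng = np.random.default_rng(seed)
    x, cost = 1.0, 0.0
    for _ in range(T):
        u = -(a / b_hat) * x          # CE feedback, zero penalty on u
        x = a * x + b_true * u + 0.1 * rng.standard_normal()
        cost += x**2                  # tracking target is 0
    return cost

for b_hat in (0.5, 0.05, 0.005):      # progressively worse estimates
    print(f"b_hat={b_hat:6.3f}  cost={simulate_cost(0.9, 0.5, b_hat):.3g}")
```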

Adaptive management of energy transitions in long-term climate change

February 2008

·

137 Reads

The UN Framework Convention on Climate Change (UNFCCC) demands stabilization of atmospheric greenhouse gas concentrations at levels that prevent dangerous anthropogenic interference with the climate system. This requires an unprecedented degree of international action for emission reductions and technological change in the energy sector. Extending the established optimal control approach, the paper applies the concepts of adaptive control, inverse modeling and local optimization to climate change decision-making and management. An alternative decision model is described in which controls are adjusted towards a moving target under changing conditions. A framework for integrated assessment is introduced in which a basic climate model is coupled to an economic production function with energy as a production factor, controlled by the allocation of investments to alternative energy technologies. Investment strategies are shaped by value functions, including utility, costs and climate damages over a given future time horizon, which are translated into admissible emission limits that keep atmospheric carbon concentrations and global mean temperature asymptotically below a given threshold. Conditions for switching between management and technology paths with different costs and carbon intensities are identified. To take account of the substantial uncertainties, an exemplary case discusses the sensitivity of the results to variation of crucial parameters, in particular time discounting, climate damage, taxes and the time horizon for decision-making.

Self-adaptive support vector machines: Modelling and experiments

February 2009

·

126 Reads

Method: In this paper, we introduce a bi-level optimization formulation for the model and feature selection problems of support vector machines (SVMs). A bi-level optimization model is proposed to select the best model, where the standard convex quadratic optimization problem of SVM training is cast as a subproblem. Feasibility: The optimal objective value of the SVM quadratic problem is minimized over a feasible range of the kernel parameters at the master level of the bi-level model. Since the optimal objective value of the subproblem is a continuous function of the kernel parameters, though implicitly defined over a certain region, a solution of this bi-level problem always exists. The problem of feature selection can be handled in a similar manner. Experiments and results: Two approaches for solving the bi-level problem of model and feature selection are considered as well. Experimental results show that the bi-level formulation provides a plausible tool for model selection.
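
A minimal sketch of the bi-level structure, with plain grid search standing in for the paper's master-level method: the inner level is the convex SVM training problem (solved here by scikit-learn), and the outer level searches the kernel parameter; the data set is synthetic.

```python
# Minimal sketch of the bi-level idea: the inner level trains an SVM for a
# fixed kernel parameter (a convex QP handled by the solver), while the
# outer level searches over that parameter. Grid search is only a stand-in
# for the paper's master-level method; the data set is synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

best_gamma, best_score = None, -np.inf
for gamma in np.logspace(-3, 1, 20):                 # outer (master) level
    clf = SVC(kernel="rbf", gamma=gamma, C=1.0)
    score = cross_val_score(clf, X, y, cv=5).mean()  # inner QP solved here
    if score > best_score:
        best_gamma, best_score = gamma, score

print(f"selected gamma={best_gamma:.4f}, CV accuracy={best_score:.3f}")
```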

An adaptive Monte Carlo algorithm for computing mixed logit estimators

January 2006

·

141 Reads

Researchers and analysts increasingly use mixed logit models to estimate responses for demand forecasting and to determine the factors that affect individual choices. However, the numerical cost associated with their evaluation can be prohibitive, since the underlying choice probabilities are represented by multidimensional integrals. This cost remains high even if Monte Carlo or quasi-Monte Carlo techniques are used to estimate those integrals. This paper describes a new algorithm that uses Monte Carlo approximations in the context of modern trust-region techniques, and also exploits accuracy and bias estimators to considerably increase its computational efficiency. Numerical experiments underline the importance of choosing an appropriate optimisation technique and indicate that the proposed algorithm allows substantial gains in time while delivering more information to the practitioner. Copyright Springer-Verlag Berlin/Heidelberg 2006
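
For readers unfamiliar with the integrals involved: a mixed logit choice probability is the expectation of a logit probability over random taste coefficients, which plain Monte Carlo approximates by averaging over draws. A minimal sketch with synthetic utilities (not the paper's trust-region algorithm):

```python
# Minimal sketch of the integral being approximated: a mixed logit choice
# probability is an expectation of logit probabilities over random taste
# coefficients, estimated here by plain Monte Carlo. Synthetic example.
import numpy as np

def simulated_mixed_logit_prob(X, chosen, beta_mean, beta_std,
                               n_draws=1000, seed=0):
    """P(chosen) averaged over beta ~ N(beta_mean, beta_std^2), i.i.d."""
    rng = np.random.default_rng(seed)
    prob = 0.0
    for _ in range(n_draws):
        beta = rng.normal(beta_mean, beta_std)      # one taste draw
        v = X @ beta                                # alternative utilities
        p = np.exp(v - v.max())
        prob += (p / p.sum())[chosen]               # logit prob of choice
    return prob / n_draws

X = np.array([[1.0, 0.5], [0.2, 1.0], [0.8, 0.1]])  # 3 alternatives, 2 attrs
print(simulated_mixed_logit_prob(X, chosen=0,
                                 beta_mean=np.array([1.0, -0.5]),
                                 beta_std=np.array([0.5, 0.5])))
```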

Collective adjustment of pension rights in ALM models

April 2011

·

57 Reads

Collective adjustment of pension rights is a way to keep defined benefit systems tenable. In the asset liability management (ALM) models presented in the literature these decisions are modeled both at the aggregate level of the liabilities as a whole and at a more detailed level. In this paper we compare the approximate aggregate approach with the accurate detailed approach for the average earnings scheme with conditional indexation. We prove that the aggregate approach leads to one-sided errors. Moreover, we show that for semi-realistic data these biases are considerable. Keywords: Asset liability management · Pension funds · Indexation

Intervention analysis to identify significant exposures in pulsing advertising campaigns: an operative procedure

November 2005

·

24 Reads

The purpose of this paper is to develop an operational method to detect the most effective exposures in a given pulsing advertising campaign. By most effective, we mean those exposures that produce a statistically significant increase, either temporary or permanent, in the level of a response variable. The method consists of specifying an intervention model for the response variable, in which the significant exposures are selected on the basis of a probabilistic criterion; it is empirically evaluated using brand-level data from five advertising tracking studies that also include the actual spending schedules. Given a pulsing advertising campaign, the proposed method serves both as an a-posteriori improvement of the campaign itself and as a-priori information for programming future schedules.
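
A minimal sketch of the intervention-model idea, using statsmodels' SARIMAX with pulse dummies at candidate exposure times as a stand-in for the paper's specification; the series and the effective exposure are simulated:

```python
# Minimal sketch of an intervention model: an AR process for the response
# with pulse dummies marking candidate exposures; exposures whose dummy
# coefficient is significant would be retained. SARIMAX is a stand-in for
# the paper's specification; the data are simulated.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
T = 120
y = np.zeros(T)
for t in range(1, T):                     # AR(1) response series
    y[t] = 0.6 * y[t - 1] + rng.normal()
exposure_weeks = [30, 60, 90]             # candidate exposure times
y[60] += 5.0                              # only the second one has an effect

X = pd.DataFrame({f"pulse_{t}": (np.arange(T) == t).astype(float)
                  for t in exposure_weeks})
fit = SARIMAX(y, exog=X, order=(1, 0, 0)).fit(disp=False)
print(fit.params[:3])                     # pulse coefficients
print(fit.pvalues[:3])                    # pulse_60 should be significant
```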

Numerical Modelling of Autonomous Agent Movement and Conflict

February 2006

·

16 Reads

The world that we live in is filled with large-scale agent systems, from diverse fields such as biology, ecology or finance. Inspired by the desire to better understand and make the best of these systems, we propose to build stochastic mathematical models, in particular G-network models. With our approach, we aim to provide insights into systems in terms of their performance and behavior, to identify the parameters which strongly influence them, and to evaluate how well individual goals can be achieved. Through comparing the effects of alternatives, we hope to offer users the possibility of choosing an option that addresses their requirements best. We have demonstrated our approach in the context of urban military planning and analyzed the obtained results. The results are validated against those obtained from a simulator (Gelenbe et al. in simulating the navigation and control of autonomous agents, pp 183–189, 2004a; in Enabling simulation with augmented reality, pp 290–310, 2004b) that was developed in our group, and the observed discrepancies are discussed. The results suggest that the proposed approach has tackled one of the classical problems in modeling multi-agent systems and is able to predict the systems' performance at low computational cost. In addition to offering numerical estimates of the outcome, these results help us identify which characteristics most impact the system. We conclude the paper with potential extensions of the model.

Airline Network Revenue Management by Multistage Stochastic Programming

October 2008

·

115 Reads

A multistage stochastic programming approach to airline network revenue management is presented. The objective is to determine seat protection levels for all itineraries, fare classes and points of sale of the airline network, and for all data collection points (dcps) of the booking horizon, such that the expected revenue is maximized. While the passenger demand and cancelation rate processes are the stochastic inputs of the model, the stochastic protection level process represents its output and allows control of the booking process. The stochastic passenger demand and cancelation rate processes are approximated by a finite number of tree-structured scenarios. The scenario tree is generated from historical data using a stability-based recursive scenario reduction scheme. Numerical results for a small hub-and-spoke network are reported.

Automatic Formulation of Stochastic Programs Via an Algebraic Modeling Language

February 2007

·

78 Reads

This paper presents an open-source tool that automatically generates the so-called deterministic equivalent in stochastic programming. The tool is based on the algebraic modeling language AMPL. The user is only required to provide the deterministic version of the stochastic problem and the information on the stochastic process, either as scenarios or as a transition-based event tree.
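
What "generating the deterministic equivalent" amounts to can be sketched on a toy two-stage problem: a copy of the second-stage variables is created for every scenario and the expectation enters the objective. The newsvendor-style LP below (hypothetical numbers, scipy instead of AMPL) illustrates the expansion:

```python
# Minimal sketch of a deterministic equivalent: a two-stage newsvendor
# with scenario demands is expanded into one LP with a copy of the
# second-stage variables per scenario. Toy model, not the tool's AMPL
# machinery.
import numpy as np
from scipy.optimize import linprog

cost, price = 1.0, 1.5
demand = np.array([40.0, 60.0, 80.0])     # one demand per scenario
prob = np.array([0.3, 0.4, 0.3])
S = len(demand)

# Variables z = [x, y_1..y_S]: order x now, sell y_s in scenario s.
# Minimize cost*x - sum_s prob_s * price * y_s.
c = np.concatenate([[cost], -price * prob])
# y_s <= x  ->  -x + y_s <= 0 ;  y_s <= demand_s
A_ub = np.vstack([np.hstack([-np.ones((S, 1)), np.eye(S)]),
                  np.hstack([np.zeros((S, 1)), np.eye(S)])])
b_ub = np.concatenate([np.zeros(S), demand])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (S + 1))
print(f"order x* = {res.x[0]:.1f}, expected profit = {-res.fun:.2f}")
```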


Multiobjective evolutionary algorithms for complex portfolio optimization problems

August 2011

·

333 Reads

This paper investigates the ability of Multiobjective Evolutionary Algorithms (MOEAs), namely the Non-dominated Sorting Genetic Algorithm II (NSGA-II), the Pareto Envelope-based Selection Algorithm (PESA) and the Strength Pareto Evolutionary Algorithm 2 (SPEA2), to solve complex portfolio optimization problems. The portfolio optimization problem is a typical bi-objective optimization problem whose objectives are the reward, to be maximized, and the risk, to be minimized. While reward is commonly measured by the portfolio's expected return, various risk measures have been proposed that try to better reflect a portfolio's riskiness or to simplify the problem so that it can be solved efficiently with exact optimization techniques. However, some risk measures introduce additional complexity, since they are non-convex, non-differentiable functions. In addition, constraints imposed by practitioners introduce further difficulties, since they transform the search space into a non-convex region. The results show that MOEAs are, in general, efficient and reliable strategies for this kind of problem, and that their performance is independent of the risk function used. Keywords: Multiobjective optimization · NSGA-II · PESA · Portfolio selection · SPEA2
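
The workhorse inside all three MOEAs is the Pareto-dominance comparison of (risk, return) pairs. A minimal sketch on random long-only portfolios with hypothetical moments (the evolutionary operators themselves are omitted):

```python
# Minimal sketch of the Pareto-dominance test at the core of algorithms
# like NSGA-II: among random portfolios, keep those for which no other
# portfolio has both lower risk and higher return. Synthetic data.
import numpy as np

rng = np.random.default_rng(7)
n_assets, n_portfolios = 8, 500
mu = rng.normal(0.08, 0.04, n_assets)           # hypothetical expected returns
A = rng.normal(size=(n_assets, n_assets))
cov = A @ A.T / n_assets                        # hypothetical covariance

W = rng.dirichlet(np.ones(n_assets), n_portfolios)  # random long-only weights
ret = W @ mu
risk = np.einsum("ij,jk,ik->i", W, cov, W)      # portfolio variances

def non_dominated(ret, risk):
    keep = np.ones(len(ret), dtype=bool)
    for i in range(len(ret)):
        # i is dominated if some j is at least as good in both objectives
        # and strictly better in one
        dominated = (ret >= ret[i]) & (risk <= risk[i]) \
                    & ((ret > ret[i]) | (risk < risk[i]))
        keep[i] = not dominated.any()
    return keep

front = non_dominated(ret, risk)
print(f"{front.sum()} of {n_portfolios} portfolios on the Pareto front")
```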

Reformulation and solution algorithms for the maximum leaf spanning tree problem

July 2010

·

59 Reads

Given a graph G=(V, E), the maximum leaf spanning tree problem (MLSTP) is to find a spanning tree of G with as many leaves as possible. The problem is easy to solve when G is complete. However, for the general case, when the graph is sparse, it is proven to be NP-hard. In this paper, two reformulations are proposed for the problem. The first one is a reinforced directed graph version of a formulation found in the literature. The second recasts the problem as a Steiner arborescence problem over an associated directed graph. Branch-and-Cut algorithms are implemented for these two reformulations. Additionally, we also implemented an improved version of a MLSTP Branch-and-Bound algorithm, suggested in the literature. All of these algorithms benefit from pre-processing tests and a heuristic suggested in this paper. Computational comparisons between the three algorithms indicate that the one associated with the first reformulation is the overall best. It was shown to be faster than the other two algorithms and is capable of solving much larger MLSTP instances than previously attempted in the literature.
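
For intuition on the objective, the sketch below grows a spanning tree greedily from a maximum-degree vertex, always expanding the frontier vertex that adds the most new neighbours, and counts the leaves. This is only an illustrative heuristic, not one of the paper's algorithms, and it assumes a connected graph:

```python
# Minimal sketch of a simple constructive heuristic for the MLSTP (not the
# paper's algorithm): grow a tree by repeatedly expanding the frontier
# vertex that adds the most new neighbours, then count degree-1 tree nodes.
from collections import defaultdict

def greedy_leaf_spanning_tree(edges):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    root = max(adj, key=lambda v: len(adj[v]))      # start at max degree
    in_tree, tree_deg = {root}, defaultdict(int)
    frontier = [root]
    while frontier:
        # expand the frontier vertex with most neighbours outside the tree
        u = max(frontier, key=lambda v: len(adj[v] - in_tree))
        new = adj[u] - in_tree
        if not new:
            frontier.remove(u)
            continue
        for w in new:
            in_tree.add(w)
            tree_deg[u] += 1
            tree_deg[w] += 1
            frontier.append(w)
    return [v for v in in_tree if tree_deg[v] == 1]  # the leaves

edges = [(0, 1), (0, 2), (0, 3), (1, 4), (2, 5), (3, 6), (4, 5)]
print("leaves:", sorted(greedy_leaf_spanning_tree(edges)))
```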

Progressive Hedging Innovations for a Class of Stochastic Resource Allocation Problems

September 2008

·

178 Reads

Numerous planning problems can be formulated as multi-stage stochastic programs and many possess key discrete (integer) decision variables in one or more of the stages. Progressive hedging (PH) is a scenario-based decomposition technique that can be leveraged to solve such problems. Originally devised for problems possessing only continuous variables, PH has been successfully applied as a heuristic to solve multi-stage stochastic programs with integer variables. However, a variety of critical issues arise in practice when implementing PH for the discrete case, especially in the context of very difficult or large-scale mixed-integer problems. Failure to address these issues properly results in either non-convergence of the heuristic or unacceptably long run-times. We investigate these issues and describe algorithmic innovations in the context of a broad class of scenario-based resource allocation problems in which decision variables represent resources available at a cost and constraints enforce the need for sufficient combinations of resources. The necessity and efficacy of our techniques are empirically assessed on a two-stage stochastic network flow problem with integer variables in both stages.
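
A minimal sketch of the progressive hedging loop on a convex toy problem (the paper's setting is mixed-integer, where PH acts as a heuristic): each scenario subproblem is penalized toward the consensus value and the multipliers are updated until the scenario solutions agree.

```python
# Minimal sketch of the progressive hedging loop on a toy problem: each
# scenario s wants x near d_s; PH penalizes deviation from the consensus
# x_bar and updates multipliers w_s until the scenario solutions agree.
# Continuous toy model, not the paper's mixed-integer setting.
import numpy as np

d = np.array([4.0, 6.0, 11.0])       # scenario targets
p = np.array([0.5, 0.3, 0.2])        # scenario probabilities
rho = 1.0
w = np.zeros_like(d)
x = d.copy()                          # per-scenario first-stage decisions
x_bar = p @ x

for it in range(100):
    # scenario subproblem: min (x - d_s)^2 + w_s*x + (rho/2)(x - x_bar)^2
    x = (2 * d - w + rho * x_bar) / (2 + rho)   # closed-form minimizer
    x_bar = p @ x                               # consensus (implementable) x
    w += rho * (x - x_bar)                      # multiplier update
    if np.abs(x - x_bar).max() < 1e-8:
        break

print(f"converged after {it + 1} iterations, x_bar = {x_bar:.4f}")
```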

Leader-Follower Equilibria for Electric Power and NOx Allowances Markets

February 2006

·

359 Reads

This paper investigates the ability of the largest producer in an electricity market to manipulate both the electricity and emission allowances markets to its advantage. A Stackelberg game to analyze this situation is constructed in which the largest firm plays the role of the leader, while the medium-sized firms are treated as Cournot followers with price-taking fringes that behave competitively in both markets. Since there is no explicit representation of the best-reply function for each follower, this Stackelberg game is formulated as a large-scale mathematical program with equilibrium constraints. The best-reply functions are implicitly represented by a set of nonlinear complementarity conditions. Analysis of the computed solution for the Pennsylvania–New Jersey–Maryland electricity market shows that the leader can gain substantial profits by withholding allowances and driving up NOx allowance costs for rival producers. The allowances price is higher than the corresponding price in the Nash–Cournot case, although the electricity prices are essentially the same.

American option pricing under stochastic volatility: An efficient numerical approach

April 2010

·

322 Reads

This paper develops a new numerical technique to price an American option written on an underlying asset that follows a bivariate diffusion process. The technique presented here exploits the supermartingale representation of an American option price together with a coarse approximation of its early exercise surface, based on an efficient implementation of the least-squares Monte Carlo (LSM) algorithm of Longstaff and Schwartz (Rev Financ Stud 14:113–147, 2001). Our approach also has the advantage of avoiding two main issues associated with LSM, namely its inherent bias and the basis function selection problem. Extensive numerical results show that our approach yields very accurate prices in a computationally efficient manner. Finally, the flexibility of our method allows for its extension to a much larger class of optimal stopping problems than addressed in this paper. Keywords: American option pricing · Optimal stopping · Approximate dynamic programming · Stochastic volatility · Doob–Meyer decomposition · Monte Carlo
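
A minimal sketch of the underlying LSM step of Longstaff and Schwartz, shown here under constant-volatility GBM rather than the bivariate diffusion treated in the paper: continuation values are regressed on basis functions of the asset price, and exercise occurs where the immediate payoff exceeds the regression estimate.

```python
# Minimal sketch of the Longstaff-Schwartz LSM step referenced above,
# under constant-volatility GBM rather than the paper's bivariate
# diffusion: regress discounted continuation values on a polynomial basis
# of the asset price and exercise when the immediate payoff wins.
import numpy as np

rng = np.random.default_rng(3)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths = 50, 20000
dt = T / n_steps
disc = np.exp(-r * dt)

# Simulate GBM paths (columns are times dt, 2*dt, ..., T)
Z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * Z, axis=1))

payoff = np.maximum(K - S[:, -1], 0.0)           # put value at maturity
for t in range(n_steps - 2, -1, -1):
    payoff *= disc                               # discount one step back
    itm = K - S[:, t] > 0                        # regress in-the-money only
    if itm.sum() > 3:
        coef = np.polyfit(S[itm, t], payoff[itm], deg=2)
        cont = np.polyval(coef, S[itm, t])       # continuation value estimate
        exercise = (K - S[itm, t]) > cont
        payoff[itm] = np.where(exercise, K - S[itm, t], payoff[itm])

price = disc * payoff.mean()                     # discount to time 0
print(f"LSM American put price ~ {price:.3f}")
```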

American option pricing under stochastic volatility: An empirical evaluation

April 2010

·

582 Reads

Over the past few years, model complexity in quantitative finance has increased substantially in response to earlier approaches that did not capture critical features for risk management. However, given the preponderance of the classical Black–Scholes model, it is still not clear that this increased complexity is matched by additional accuracy in the ultimate result. In particular, the last decade has witnessed a flurry of activity in modeling asset volatility, and studies evaluating different alternatives for option pricing have focused on European-style exercise. In this paper, we extend these empirical evaluations to American options, as their additional opportunity for early exercise may incorporate stochastic volatility in the pricing differently. Specifically, the present work compares the empirical pricing and hedging performance of the commonly adopted stochastic volatility model of Heston (Rev Financial Stud 6:327–343, 1993) against the traditional constant volatility benchmark of Black and Scholes (J Polit Econ 81:637–659, 1973). Using S&P 100 index options data, our study indicates that this particular stochastic volatility model offers enhancements for in-the-money options in line with those reported for their European-style counterparts. However, the most striking improvements are for out-of-the-money options, which because of early exercise are more valuable than their European-style counterparts, especially when volatility is stochastic. Keywords: Stochastic volatility · Indirect inference · Model calibration · American option pricing · S&P 100 index · Approximate dynamic programming
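
For reference, the Heston dynamics being compared can be simulated with a full-truncation Euler scheme, as sketched below; the parameter values are illustrative, not the calibrated ones from the study.

```python
# Minimal sketch of simulating the Heston (1993) dynamics compared in the
# paper, using a full-truncation Euler scheme; parameter values are
# illustrative assumptions, not the study's calibrated estimates.
import numpy as np

def heston_paths(S0=100.0, v0=0.04, r=0.03, kappa=2.0, theta=0.04,
                 xi=0.3, rho=-0.7, T=1.0, n_steps=250, n_paths=10000,
                 seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, S0)
    v = np.full(n_paths, v0)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
        vp = np.maximum(v, 0.0)                  # full truncation of variance
        S *= np.exp((r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
        v += kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2
    return S

S_T = heston_paths()
print("mean terminal price:", S_T.mean())        # ~ S0 * exp(r*T)
```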

Combined Discrete-event Simulation and Ant Colony Optimisation Approach for Selecting Optimal Screening Policies for Diabetic Retinopathy

February 2007

·

39 Reads

In this paper, we present the first published healthcare application of discrete-event simulation embedded in an ant colony optimization model. We consider the problem of choosing optimal screening policies for retinopathy, a serious complication of diabetes. In order to minimize the screening cost per year of sight saved, compared with a situation with no screening, individuals aged between 30 and 60 should be screened every 30 months, using tests with low sensitivity and high specificity, at an undiscounted cost of around £950 per year of sight saved. If the objective were simply to maximize the total number of years of sight saved, regardless of expense, then tests with high sensitivity and specificity should be used to screen all patients with diabetes every 6 months, at an undiscounted cost of around £4,000 per year of sight saved. The former strategy would incur up to 12 times lower costs in total, but would result in up to 3 times more years of preventable blindness in the population, compared with the second strategy.

Performance analysis of distributed solution approaches in simulation-based optimization

January 2005

·

9 Reads

Applying computationally expensive simulations in design or process optimization results in long-running solution processes even when a state-of-the-art distributed algorithm and hardware are used. Within these simulation-based optimization problems the optimizer has to treat the simulation systems as black boxes. The distributed solution of this kind of optimization problem demands efficient utilization of resources (i.e. processors) and evaluation of the solution quality. Analyzing the parallel performance is therefore an important task in the development of adequate distributed approaches, taking into account the numerical algorithm, its implementation, and the hardware architecture used. In this paper, simulation-based optimization problems are characterized and a distributed solution algorithm is presented. Different performance analysis techniques (e.g. scalability analysis, computational complexity) are discussed and a new approach integrating parallel performance and solution quality is developed. This approach combines a priori and a posteriori techniques and can be applied in early stages of the solution process. The feasibility of the approach is demonstrated by applying it to three different classes of simulation-based optimization problems from groundwater management. Copyright Springer-Verlag Berlin/Heidelberg 2005

Integer programming approaches in mean-risk models

November 2005

·

23 Reads

This paper is concerned with portfolio optimization problems with integer constraints. Such problems include, among others, mean-risk problems with nonconvex transaction costs, minimal transaction unit constraints and cardinality constraints on the number of assets in a portfolio. These problems, though practically very important, have been considered intractable because they require solving nonlinear integer programming problems for which no efficient algorithms exist. We show that these problems can now be solved by state-of-the-art integer programming methodologies if absolute deviation is used as the measure of risk. Copyright Springer-Verlag Berlin/Heidelberg 2005
