Article

Pseudo-feasible solutions in evolutionary bilevel optimization: Test problems and performance assessment


Abstract

This work presents a study of a special class of infeasible solutions, referred to here as pseudo-feasible solutions, in bilevel optimization. The focus is on determining how such solutions can affect the performance of an evolutionary algorithm. After their formal definition, and based on theoretical results, two conditions to detect and deal with them are proposed. Moreover, a novel and scalable set of test problems with characterized pseudo-feasible solutions is introduced. Furthermore, an algorithm designed to solve bilevel optimization problems (BOPs) is adapted with the aforementioned conditions and tested both on well-known test problems and on the new testbed, so as to analyze its performance against state-of-the-art evolutionary approaches for BOPs. The results obtained suggest that the presence of pseudo-feasible solutions can be considered a source of difficulty in this type of optimization problem, since their presence may lead to incorrect comparisons among algorithms.
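
For reference, bilevel problems of the kind studied here pair an upper-level (leader) problem with a lower-level (follower) problem whose optimality acts as a constraint; a standard formulation (the notation is assumed here, not quoted from the paper) is:

```latex
\begin{aligned}
\min_{x_u \in X_U,\; x_l \in X_L} \quad & F(x_u, x_l) \\
\text{s.t.} \quad & x_l \in \operatorname*{arg\,min}_{z \in X_L} \bigl\{\, f(x_u, z) \;:\; g_j(x_u, z) \le 0,\; j = 1,\dots,J \,\bigr\}, \\
& G_k(x_u, x_l) \le 0, \quad k = 1,\dots,K .
\end{aligned}
```

In this notation, a pair (x_u, x_l) that satisfies every explicit constraint but whose lower-level part is not actually optimal for the follower is infeasible for the bilevel problem; the pseudo-feasible solutions studied in the paper are described in the abstract as a special class of infeasible solutions, plausibly of this kind.
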

... In another study on the presence of the so-called pseudo-feasible solutions in evolutionary bilevel optimisation (QBCA-2), the focus was on determining how pseudo-feasible solutions can affect the performance of an evolutionary algorithm. Moreover, a novel and scalable set of test problems with characterised pseudo-feasible solutions was introduced in [23]. In the literature [24], a co-evolutionary algorithm was proposed to solve BLPP. ...
... To demonstrate the feasibility and efficiency of the proposed algorithm TCEA, it was compared with three already existing algorithms [23,35,31]. ...
... In order to compare with the experimental data in the literature, the selected parameters are consistent with those in [23]. The optimal solution obtained by the algorithm TCEA is recorded as (x*, y*); the leader's and follower's objective function values are denoted as F(x*, y*) and f(x*, y*), respectively. ...
Article
Full-text available
In the engineering and economic management fields, optimisation models frequently involve different decision-making levels. These are known as multi-level optimisation problems. Because the decision-making process of such problems is hierarchical, they are also called hierarchical optimisation problems. When the problem involves only two levels of decision-making, the corresponding optimisation model is referred to as a bilevel programming problem (BLPP). To address the complex nonlinear bilevel programming problem, in this study we design an evolutionary algorithm embedded with a surrogate model (an approximation method) and correlation coefficients. First, the ISODATA method is used to group the initial population, and the correlation coefficients of the individuals in each group are determined based on the rank of the leader and follower objective functions. Second, for the offspring individuals produced by the evolutionary operator, the surrogate model is used to approximate the solution of the follower's programming problem, during which the points in the population are screened by combining the correlation coefficients. Finally, a new crossover operator is designed by the spherical search method, which diversifies the generated offspring. The simulation experimental results demonstrate that the proposed algorithm can effectively obtain an optimal solution.
Chapter
Full-text available
Physical phenomena have been the inspiration for proposing different optimization methods, such as the electro-search algorithm, central force optimization, and charged system search, among others. This work presents a new optimization algorithm based on some principles from physics and mechanics, called the Evolutionary Centers Algorithm (ECA). We utilize the center-of-mass definition to create new directions for moving the worst elements in the population, based on their objective function values, to better regions of the search space. The efficiency of the new approach is shown using the CEC 2017 competition benchmark functions. We present a comparison against the best algorithm (jSO) in that competition and against a classical method (SQP) for nonlinear optimization. The results obtained are promising.
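
A minimal sketch of the center-of-mass idea behind ECA follows; the weighting and step rule below are illustrative assumptions, not the authors' exact update.

```python
import numpy as np

def center_of_mass_step(pop, fitness, eta=1.0, rng=None):
    """Move the worst member of `pop` toward a fitness-weighted center of mass.

    pop:     (n, d) array of candidate solutions (minimization assumed).
    fitness: (n,) array of objective values, lower is better.
    eta:     step size (assumed value, not taken from the paper).
    """
    rng = np.random.default_rng() if rng is None else rng
    # Masses: better (lower) objective values get larger weight.
    masses = np.max(fitness) - fitness + 1e-12
    center = (masses[:, None] * pop).sum(axis=0) / masses.sum()
    worst = int(np.argmax(fitness))
    # Move the worst element toward the center of mass of the population.
    new_point = pop[worst] + eta * rng.random() * (center - pop[worst])
    return worst, new_point

# Toy usage on a sphere function with greedy replacement (assumed acceptance rule):
rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(10, 3))
fit = np.sum(pop**2, axis=1)
idx, candidate = center_of_mass_step(pop, fit, rng=rng)
if np.sum(candidate**2) < fit[idx]:
    pop[idx] = candidate
```
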
Article
Full-text available
Bilevel optimization is defined as a mathematical program, where an optimization problem contains another optimization problem as a constraint. These problems have received significant attention from the mathematical programming community. Only limited work exists on bilevel problems using evolutionary computation techniques; however, recently there has been an increasing interest due to the proliferation of practical applications and the potential of evolutionary algorithms in tackling these problems. This paper provides a comprehensive review of bilevel optimization, from the basic principles to solution strategies, both classical and evolutionary. A number of potential application problems are also discussed. To offer the readers insights on the prominent developments in the field of bilevel optimization, we have performed an automated text-analysis of an extended list of papers published on bilevel optimization to date. This paper should motivate evolutionary computation researchers to pay more attention to this practical yet challenging area.
Article
Full-text available
A large number of application problems involve two levels of optimization, where one optimization task is nested inside the other. These problems are known as bilevel optimization problems and have been studied by both the classical and the evolutionary optimization communities. Most of the solution procedures proposed until now are either computationally very expensive or applicable only to small classes of bilevel optimization problems adhering to mathematically simplifying assumptions. In this paper, we propose an evolutionary optimization method that tries to reduce the computational expense by iteratively approximating two important mappings in bilevel optimization, namely, the lower-level rational reaction mapping and the lower-level optimal value function mapping. The algorithm has been tested on a large number of test problems, and comparisons have been performed with other algorithms. The results show the performance gain to be quite significant. To the best of the authors' knowledge, a combined theory-based and population-based solution procedure utilizing these mappings has not been suggested before for bilevel problems.
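
The two mappings referred to in the abstract are usually written as follows (standard notation, assumed here): the lower-level rational reaction (set-valued) mapping and the lower-level optimal value function,

```latex
\Psi(x_u) \;=\; \operatorname*{arg\,min}_{x_l \in X_L} \bigl\{\, f(x_u, x_l) \;:\; g_j(x_u, x_l) \le 0 \,\bigr\},
\qquad
\varphi(x_u) \;=\; \min_{x_l \in X_L} \bigl\{\, f(x_u, x_l) \;:\; g_j(x_u, x_l) \le 0 \,\bigr\}.
```

Either mapping yields a single-level reformulation: with Ψ one requires x_l ∈ Ψ(x_u), while with φ one requires f(x_u, x_l) ≤ φ(x_u) together with lower-level feasibility; the algorithm described above iteratively approximates both.
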
Article
Full-text available
In recent years, the application of metaheuristic techniques to solve multilevel and particularly bi-level optimization problems (BOPs) has become an active research area. BOPs constitute a very important class of problems with various applications in different domains. A wide variety of metaheuristics have been proposed in the literature to solve such hierarchical optimization problems. In this paper, a taxonomy of metaheuristics to solve BOPs is presented in an attempt to provide a common terminology and classification mechanisms. The taxonomy, while presented in terms of metaheuristics, is also applicable to most types of heuristics and exact optimization algorithms.
Article
Full-text available
Bridging cultures that have often been distant, Julia combines expertise from the diverse fields of computer science and computational science to create a new approach to numerical computing. Julia is designed to be easy and fast. Julia questions notions generally held as “laws of nature” by practitioners of numerical computing: 1. High-level dynamic programs have to be slow, 2. One must prototype in one language and then rewrite in another language for speed or deployment, and 3. There are parts of a system for the programmer, and other parts best left untouched as they are built by the experts. We introduce the Julia programming language and its design: a dance between specialization and abstraction. Specialization allows for custom treatment. Multiple dispatch, a technique from computer science, picks the right algorithm for the right circumstance. Abstraction, what good computation is really about, recognizes what remains the same after differences are stripped away. Abstractions in mathematics are captured as code through another technique from computer science, generic programming. Julia shows that one can have machine performance without sacrificing human convenience.
Article
Full-text available
In this paper, we propose a procedure for designing controlled test problems for single-objective bilevel optimization. The construction procedure is flexible and allows its user to control the different complexities that are to be included in the test problems independently of each other. In addition to properties that control the difficulty in convergence, the procedure also allows the user to introduce difficulties caused by the interaction of the two levels. As a companion to the test problem construction framework, the paper presents a standard test suite of twelve problems, which includes eight unconstrained and four constrained problems. Most of the problems are scalable in terms of variables and constraints. To provide baseline results, we have solved the proposed test problems using a nested bilevel evolutionary algorithm. The results can be used for comparison while evaluating the performance of any other bilevel optimization algorithm. The codes related to the paper may be accessed from the website http://bilevel.org.
Article
Full-text available
Bilevel optimization problems are a class of challenging optimization problems that contain two levels of optimization tasks. In these problems, the optimal solutions to the lower-level problem become possible feasible candidates for the upper-level problem. Such a requirement makes the optimization problem difficult to solve and has kept researchers busy devising methodologies that can efficiently handle it. Despite these efforts, there hardly exists any effective methodology capable of handling complex bilevel problems. In this paper, we introduce a bilevel evolutionary algorithm based on quadratic approximations (BLEAQ) of the optimal lower-level variables with respect to the upper-level variables. The approach is capable of handling bilevel problems with different kinds of complexities in a relatively small number of function evaluations. Ideas from classical optimization have been hybridized with evolutionary methods to generate an efficient optimization algorithm for generic bilevel problems. The efficacy of the algorithm has been shown on two sets of test problems. The first set is the recently proposed SMD test set, which contains problems with controllable complexities, and the second set contains standard test problems collected from the literature. The proposed method has been evaluated against two benchmarks, and the performance gain is observed to be significant.
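
The core idea of approximating the lower-level optimum as a quadratic function of the upper-level variables can be sketched as follows; this is a simplification of what BLEAQ does, and the helper names and toy data are illustrative.

```python
import numpy as np
from itertools import combinations_with_replacement

def quad_features(Xu):
    """Map upper-level vectors to [1, linear, quadratic cross-product] features."""
    n, d = Xu.shape
    cols = [np.ones((n, 1)), Xu]
    for i, j in combinations_with_replacement(range(d), 2):
        cols.append((Xu[:, i] * Xu[:, j])[:, None])
    return np.hstack(cols)

def fit_reaction_model(Xu, Xl_opt):
    """Least-squares quadratic model of x_l*(x_u) from sampled pairs.

    Xu:     (n, d_u) upper-level points.
    Xl_opt: (n, d_l) corresponding (approximately) optimal lower-level points.
    """
    Phi = quad_features(Xu)
    coef, *_ = np.linalg.lstsq(Phi, Xl_opt, rcond=None)
    return coef

def predict_lower_level(coef, xu):
    return quad_features(np.atleast_2d(np.asarray(xu, dtype=float))) @ coef

# Toy usage: the true reaction is x_l* = (x_u1^2, x_u1 * x_u2).
rng = np.random.default_rng(1)
Xu = rng.uniform(-1, 1, size=(40, 2))
Xl = np.column_stack([Xu[:, 0]**2, Xu[:, 0] * Xu[:, 1]])
coef = fit_reaction_model(Xu, Xl)
print(predict_lower_level(coef, [0.5, -0.2]))   # approximately [0.25, -0.10]
```
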
Article
Full-text available
A new branch-and-bound algorithm for linear bilevel programming is proposed. Necessary optimality conditions expressed in terms of tightness of the follower's constraints are used to fathom or simplify subproblems, branch and obtain penalties similar to those used in mixed-integer programming. Computational results are reported and compare favorably to those of previous methods. Problems with up to 150 constraints, 250 variables controlled by the leader, and 150 variables controlled by the follower have been solved.
Article
Full-text available
We investigate the behavior of quasi-Newton algorithms applied to minimize a nonsmooth function f, not necessarily convex. We introduce an inexact line search that generates a sequence of nested intervals containing a set of points of nonzero measure that satisfy the Armijo and Wolfe conditions if f is absolutely continuous along the line. Furthermore, the line search is guaranteed to terminate if f is semi-algebraic. It seems quite difficult to establish a convergence theorem for quasi-Newton methods applied to such general classes of functions, so we give a careful analysis of a special but illuminating case, the Euclidean norm, in one variable using the inexact line search and in two variables assuming that the line search is exact. In practice, we find that when f is locally Lipschitz and semi-algebraic with bounded sublevel sets, the BFGS method with the inexact line search almost always generates sequences whose cluster points are Clarke stationary and with function values converging R-linearly to a Clarke stationary value. We give references documenting the successful use of BFGS in a variety of nonsmooth applications, particularly the design of low-order controllers for linear dynamical systems. We conclude with a challenging open question.
Article
Full-text available
In this paper, we first prove that the expansion and contraction steps of the Nelder-Mead simplex algorithm possess a descent property when the objective function is uniformly convex. This property provides some new insights on why the standard Nelder-Mead algorithm becomes inefficient in high dimensions. We then propose an implementation of the Nelder-Mead method in which the expansion, contraction, and shrink parameters depend on the dimension of the optimization problem. Our numerical experiments show that the new implementation outperforms the standard Nelder-Mead method for high dimensional problems. Keywords: Nelder-Mead method; Simplex; Polytope; Adaptive parameter; Optimization
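
The dimension-dependent parameter choice can be sketched as below; these specific formulas are the commonly cited adaptive values and are assumed here rather than quoted from the paper.

```python
def adaptive_nelder_mead_params(n):
    """Dimension-dependent Nelder-Mead coefficients (assumed, commonly cited form).

    n: problem dimension; returns (reflection, expansion, contraction, shrink).
    """
    assert n >= 2
    alpha = 1.0                    # reflection
    beta = 1.0 + 2.0 / n           # expansion, shrinks toward 1 as n grows
    gamma = 0.75 - 1.0 / (2 * n)   # contraction, approaches 0.75
    delta = 1.0 - 1.0 / n          # shrink, approaches 1
    return alpha, beta, gamma, delta

print(adaptive_nelder_mead_params(10))  # (1.0, 1.2, 0.7, 0.9)
```
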
Conference Paper
Full-text available
Bi-level programming problems arise in situations when the decision maker has to take into account the responses of the users to his decisions. These problems are recognized as one of the most difficult and challenging problems in transportation systems management. Several problems within the transportation literature can be cast in the bi-level programming framework. At the same time, significant advances have been made in the deployment of stochastic heuristics for function optimization. This paper reports on the use of Differential Evolution (DE) for solving bi-level programming problems with applications in the field of transportation planning. After illustrating our solution algorithm with some mathematical functions, we then apply this method to two control problems facing the transportation network manager. DE is integrated with conventional traffic assignment techniques to solve the resulting bi-level program. Numerical computations of this DE based algorithm (known as DEBLP) are presented and compared with existing results. Our numerical results augment the view that DE is a suitable contender for solving these types of problems.
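
For readers unfamiliar with the operator, the DE step used in such approaches is typically the rand/1/bin scheme; a minimal sketch follows (illustrative only, not the DEBLP-specific integration with traffic assignment).

```python
import numpy as np

def de_rand_1_bin(pop, i, F=0.5, CR=0.9, rng=None):
    """Generate a DE/rand/1/bin trial vector for individual i."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = pop.shape
    # Three distinct random indices, all different from i.
    choices = [k for k in range(n) if k != i]
    r1, r2, r3 = rng.choice(choices, size=3, replace=False)
    mutant = pop[r1] + F * (pop[r2] - pop[r3])
    # Binomial crossover with one guaranteed component from the mutant.
    cross = rng.random(d) < CR
    cross[rng.integers(d)] = True
    trial = np.where(cross, mutant, pop[i])
    return trial

# Toy usage:
rng = np.random.default_rng(2)
pop = rng.uniform(-5, 5, size=(20, 4))
trial = de_rand_1_bin(pop, i=0, rng=rng)
```
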
Article
Full-text available
This paper is devoted to bilevel optimization, a branch of mathematical programming of both practical and theoretical interest. Starting with a simple example, we proceed towards a general formulation. We then present fields of application, focus on solution approaches, and make the connection with MPECs (Mathematical Programs with Equilibrium Constraints).
Article
Bilevel optimization involves two levels of optimization, where one optimization problem is nested within the other. The structure of the problem often requires solving a large number of inner optimization problems, which makes these kinds of optimization problems expensive to solve. The reaction set mapping and the lower-level optimal value function mapping are often used to reduce bilevel optimization problems to a single level; however, the mappings are not known a priori and need to be estimated. Though there exist a few studies that rely on the estimation of these mappings, they are often applied to problems where one of these mappings has a known form, that is, piecewise linear, convex, etc. In this article, we utilize both these mappings together to solve general bilevel optimization problems without any assumptions on the structure of these mappings. Kriging approximations are created during the generations of an evolutionary algorithm, where the population members serve as the samples for creating the approximations. One of the important features of the proposed algorithm is the creation of an auxiliary optimization problem using the Kriging-based metamodel of the lower-level optimal value function, which solves an approximate relaxation of the bilevel optimization problem. The auxiliary problem, when used for local search, is able to accelerate the evolutionary algorithm toward the bilevel optimal solution. We perform experiments on two sets of test problems and a problem from the domain of control theory. Our experiments suggest that the approach is quite promising and can lead to substantial savings when solving bilevel optimization problems. The approach is able to outperform state-of-the-art methods that are available for solving bilevel problems; in particular, the savings in function evaluations for the lower-level problem are substantial with the proposed approach.
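
A minimal sketch of the Kriging (Gaussian process) metamodel idea for the lower-level optimal value function, using scikit-learn as a stand-in for the authors' implementation; the sample data and kernel choice are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Samples: upper-level points and the (approximate) lower-level optimal values
# obtained for them during the evolutionary run (toy data here).
rng = np.random.default_rng(3)
Xu = rng.uniform(-2, 2, size=(30, 2))
phi = np.sum(Xu**2, axis=1)          # pretend phi(x_u) = ||x_u||^2 for illustration

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(Xu, phi)

# The metamodel phi_hat can then replace expensive lower-level solves inside an
# auxiliary relaxed problem: minimize F(x_u, x_l) subject to f(x_u, x_l) <= phi_hat(x_u).
phi_hat, std = gp.predict(np.array([[0.5, -1.0]]), return_std=True)
print(phi_hat, std)
```
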
Article
When solving bilevel optimization problems (BOPs) by evolutionary algorithms (EAs), it is necessary to obtain the lower-level optimal solution for each upper-level solution, which gives rise to a large number of lower-level fitness evaluations, especially for large-scale BOPs. It is interesting to note that some upper-level variables may not interact with some lower-level variables. Under this condition, if the value(s) of one or several upper-level variables change(s), we only need to focus on the optimization of the interacting lower-level variables, thus reducing the dimension of the search space and saving lower-level fitness evaluations. This paper proposes a new framework (called GO) to identify and utilize the interactions between upper-level and lower-level variables for scalable BOPs. GO includes two phases: the grouping phase and the optimization phase. In the grouping phase, after identifying the interactions between upper-level and lower-level variables, they are divided into three types of subgroups (denoted as types I-III), which contain only upper-level variables, only lower-level variables, and both upper-level and lower-level variables, respectively. In the optimization phase, if a type-I or type-II subgroup includes only one variable, a multi-start sequential quadratic programming method is designed; otherwise, a single-level EA is applied. In addition, a criterion is proposed to judge whether a type-II subgroup has multiple optima. If multiple optima exist, by incorporating information from the upper level, we design a new objective function and a new degree of constraint violation to locate the optimistic solution. As for type-III subgroups, they are optimized by a bilevel EA (BLEA). The effectiveness of GO is demonstrated on a set of scalable test problems by applying it to five representative BLEAs. Moreover, GO is applied to resource pricing in mobile edge computing.
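
The grouping phase relies on detecting whether an upper-level variable interacts with a lower-level one; a perturbation-based check in the spirit of differential grouping is sketched below. The threshold, step size, and probe function are assumptions, not the paper's exact test.

```python
import numpy as np

def _perturb(x, j, delta):
    y = x.copy()
    y[j] += delta
    return y

def interacts(f, x, i_upper, j_lower, delta=1.0, eps=1e-6):
    """Heuristic check: does upper-level variable i interact with lower-level variable j?

    f: objective taking the concatenated (upper + lower) vector.
    x: reference point (1-D array over all variables).
    """
    x = np.asarray(x, dtype=float)
    xa, xb = x.copy(), x.copy()
    xb[i_upper] += delta
    # Change in f from perturbing the lower-level variable, measured at two
    # different settings of the upper-level variable.
    d1 = f(_perturb(xa, j_lower, delta)) - f(xa)
    d2 = f(_perturb(xb, j_lower, delta)) - f(xb)
    return abs(d1 - d2) > eps

# Toy usage: f = x0*x2 + x1 (x0, x1 upper; x2 lower) -> x0 interacts with x2, x1 does not.
f = lambda v: v[0] * v[2] + v[1]
print(interacts(f, [1.0, 1.0, 1.0], i_upper=0, j_lower=2))  # True
print(interacts(f, [1.0, 1.0, 1.0], i_upper=1, j_lower=2))  # False
```
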
Article
A key aspect of the design of evolutionary and swarm intelligence algorithms is studying their performance. Statistical comparisons are also a crucial part which allows for reliable conclusions to be drawn. In the present paper we gather and examine the approaches taken from different perspectives to summarise the assumptions made by these statistical tests, the conclusions reached and the steps followed to perform them correctly. In this paper, we conduct a survey on the current trends of the proposals of statistical analyses for the comparison of algorithms of computational intelligence and include a description of the statistical background of these tests. We illustrate the use of the most common tests in the context of the Competition on single-objective real parameter optimisation of the IEEE Congress on Evolutionary Computation (CEC) 2017 and describe the main advantages and drawbacks of the use of each kind of test and put forward some recommendations concerning their use.
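
As an illustration of the kinds of non-parametric tests such surveys discuss, SciPy provides the standard pairwise and multiple-comparison procedures; the result matrix below is toy data, not CEC 2017 results.

```python
import numpy as np
from scipy import stats

# Rows: benchmark functions, columns: algorithms A, B, C (toy error values).
rng = np.random.default_rng(4)
errors = rng.lognormal(mean=0.0, sigma=1.0, size=(30, 3))

# Pairwise comparison of two algorithms over the same benchmark functions:
# Wilcoxon signed-rank test on the paired error values.
stat_w, p_pair = stats.wilcoxon(errors[:, 0], errors[:, 1])

# Comparison of three or more algorithms over the same benchmark set: Friedman test.
stat_f, p_multi = stats.friedmanchisquare(errors[:, 0], errors[:, 1], errors[:, 2])

print(f"Wilcoxon p={p_pair:.3f}, Friedman p={p_multi:.3f}")
# A post-hoc procedure (e.g. Holm correction of pairwise p-values) would follow
# when the Friedman test rejects the hypothesis of equal performance.
```
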
Article
Bilevel optimization refers to a challenging optimization problem which contains two levels of optimization problems. The task of bilevel optimization is to find the optimum of the upper-level problem, subject to the optimality of the corresponding lower-level problem. This nested nature introduces many difficulties such as non-convexity and disconnectedness, and poses great challenges to traditional optimization methods. Using evolutionary algorithms in bilevel optimization has been demonstrated to be very promising in recent years. However, these algorithms suffer from low efficiency since they usually require a huge number of function evaluations. This paper proposes a bilevel covariance matrix adaptation evolution strategy (BL-CMA-ES) to handle bilevel optimization problems. A search distribution sharing mechanism is designed so that we can extract a priori knowledge of the lower-level problem from the upper-level optimizer, which significantly reduces the number of function evaluations. We also propose a refinement-based elite preservation mechanism to trace the elite and avoid inaccurate solutions. Comparisons with five state-of-the-art algorithms on twenty-two benchmark problems and two real-world applications are carried out to test the performance of the proposed approach. The experimental results have shown the effectiveness of the proposed approach in keeping a good trade-off between solution quality and computational efficiency.
Article
A common technique to solve bilevel optimization problems is to reduce the problem to a single level and then solve it as a standard optimization problem. A number of single-level reduction formulations exist, but one of the most common ways is to replace the lower-level optimization problem with its Karush-Kuhn-Tucker (KKT) conditions. Such a reduction strategy has been widely used in the classical optimization as well as the evolutionary computation literature. However, KKT conditions contain a set of non-linear equality constraints that are often found hard to satisfy. In this paper, we discuss a single-level reduction of a bilevel problem using recently proposed relaxed KKT conditions. The conditions are relaxed and therefore approximate, but the error in terms of distance from the true lower-level KKT point is bounded. There is a proximity measure associated with the new KKT conditions, which gives an idea of the KKT error and distance from the optimum. We utilize this reduction method within an evolutionary algorithm to solve bilevel optimization problems. The proposed algorithm is compared against a number of recently proposed approaches. The idea is found to lead to significant computational savings, especially in the lower-level function evaluations. The idea is promising and might be useful for further developments on bilevel optimization in the domain of both classical and evolutionary optimization research.
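
For context, the classical (unrelaxed) KKT reduction replaces the lower-level problem min_{x_l} f(x_u, x_l) s.t. g_j(x_u, x_l) ≤ 0 by the conditions below; the paper's relaxed variant permits a bounded violation of these, tracked by the proximity measure.

```latex
\begin{aligned}
&\nabla_{x_l} f(x_u, x_l) + \sum_{j=1}^{J} \lambda_j \,\nabla_{x_l} g_j(x_u, x_l) = 0, \\
&g_j(x_u, x_l) \le 0, \qquad \lambda_j \ge 0, \qquad \lambda_j\, g_j(x_u, x_l) = 0, \qquad j = 1,\dots,J .
\end{aligned}
```
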
Article
Purified terephthalic acid (PTA) is used for producing a variety of polyesters. In the production of PTA, p-xylene (PX) is first transformed into terephthalic acid (TA) by an oxidation process, and then TA is refined. As a key step, the oxidation of PX to TA is a significant chemical process in PTA production. To improve the qualified product yield with low energy consumption, in this paper, multi-objective optimization of two conflicting objectives (namely, minimization of combustion loss and maximization of TA yield) is conducted using a self-adaptive multi-objective differential evolution algorithm (SADE). Its main characteristic is that DE's trial vector generation strategies and the corresponding control parameters are gradually self-adjusted adaptively based on the knowledge learnt from the previous searches in generating improved solutions. Furthermore, to handle constraints in multi-objective problems, the pseudo-feasible concept is proposed to effectively utilize the critical information carried by some infeasible solutions. Optimization results of the PX oxidation reaction process indicate that the application of SADE can greatly improve the yield of TA with low combustion loss without degrading TA quality. Furthermore, SADE can provide a set of Pareto optimal solutions, and suitable multi-criterion decision-making techniques can then be employed to select one or a small set of the optimal solution(s) of design parameter(s) based on preference.
Article
Real-world problems are inherently constrained optimization problems, often with multiple conflicting objectives. To solve such constrained multi-objective problems effectively, in this paper we put forward a new approach that integrates a self-adaptive differential evolution algorithm with the α-constrained-domination principle, named SADE-αCD. In SADE-αCD, the trial vector generation strategies and the DE parameters are gradually self-adjusted adaptively based on the knowledge learnt from the previous searches in generating improved solutions. Furthermore, by incorporating the domination principle into the α-constrained method, the α-constrained-domination principle is proposed to handle constraints in multi-objective problems. The advantageous performance of SADE-αCD is validated by comparisons with non-dominated sorting genetic algorithm-II, a representative of the state of the art in multi-objective evolutionary algorithms, and constrained multi-objective differential evolution, over fourteen test problems and four well-known constrained multi-objective engineering design problems. The performance indicators show that SADE-αCD is an effective approach to solving constrained multi-objective problems, which is basically enabled by the integration of self-adaptive strategies and the α-constrained-domination principle.
Article
Metaheuristics are widely recognized as efficient approaches for many hard optimization problems. This paper provides a survey of some of the main metaheuristics. It outlines the components and concepts that are used in various metaheuristics in order to analyze their similarities and differences. The classification adopted in this paper differentiates between single solution based metaheuristics and population based metaheuristics. The literature survey is accompanied by the presentation of references for further details, including applications. Recent trends are also briefly discussed.
Article
Challenging optimisation problems, which elude acceptable solution via conventional methods, arise regularly in control systems engineering. Evolutionary algorithms (EAs) permit flexible representation of decision variables and performance evaluation and are robust to difficult search environments, leading to their widespread uptake in the control community. Significant applications are discussed in parameter and structure optimisation for controller design and model identification, in addition to fault diagnosis, reliable systems, robustness analysis, and robot control. Hybrid neural and fuzzy control schemes are also described. The important role of EAs in multiobjective optimisation is highlighted. Evolutionary advances in adaptive control and multidisciplinary design are predicted.
Article
The multi-level linear programs of Candler, Norton and Townsley are a simple class of sequenced-move games, in which players are restricted in their moves only by common linear constraints, and each seeks to optimize a fixed linear criterion function in his/her own continuous variables and those of other players. All data of the game and earlier moves are known to a player when he/she is to move. The one-player case is just linear programming. We show that questions concerning only the value of these games exhibit complexity which goes up all levels of the polynomial hierarchy and appears to increase with the number of players. For three players, the games allow reduction of the Σ₂ᵖ and Π₂ᵖ levels of the hierarchy. These levels essentially include computations done with branch-and-bound, in which one is given an oracle which can instantaneously solve NP-complete problems (e.g., integer linear programs). More generally, games with (p + 1) players allow reductions of Σₚᵖ and Πₚᵖ in the hierarchy. An easy corollary of these results is that value questions for two-player (bi-level) games of this type are NP-hard.
Article
We study the numerical performance of a limited memory quasi-Newton method for large scale optimization, which we call the L-BFGS method. We compare its performance with that of the method developed by Buckley and LeNir (1985), which combines cycles of BFGS steps and conjugate direction steps. Our numerical tests indicate that the L-BFGS method is faster than the method of Buckley and LeNir, and is better able to use additional storage to accelerate convergence. We show that the L-BFGS method can be greatly accelerated by means of a simple scaling. We then compare the L-BFGS method with the partitioned quasi-Newton method of Griewank and Toint (1982a). The results show that, for some problems, the partitioned quasi-Newton method is clearly superior to the L-BFGS method. However we find that for other problems the L-BFGS method is very competitive due to its low iteration cost. We also study the convergence properties of the L-BFGS method, and prove global convergence on uniformly convex problems.
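
As a present-day illustration of the limited-memory idea (not the authors' original implementation), SciPy exposes the number of stored correction pairs directly; the test function below is the standard extended Rosenbrock problem.

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)

def rosenbrock_grad(x):
    g = np.zeros_like(x)
    g[:-1] = -400.0 * x[:-1] * (x[1:] - x[:-1]**2) - 2.0 * (1.0 - x[:-1])
    g[1:] += 200.0 * (x[1:] - x[:-1]**2)
    return g

x0 = np.zeros(1000)                       # large scale: many variables, little memory
res = minimize(rosenbrock, x0, jac=rosenbrock_grad, method="L-BFGS-B",
               options={"maxcor": 10})    # maxcor = number of stored (s, y) correction pairs
print(res.fun, res.nit)
```
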
Article
This paper contains a bibliography of all references central to bilevel and multilevel programming that the authors know of. It should be regarded as a dynamic and permanent contribution since all the new and appropriate references that are brought to our attention will be periodically added to this bibliography. Readers are invited to suggest such additions, as well as corrections or modifications, and to obtain a copy of the LaTeX and BibTeX files that constitute this manuscript, using the guidelines contained in this paper.To classify some of the references in this bibliography a short overview of past and current research in bilevel and multilevel programming is included. For those who are interested in but unfamiliar with the references in this area, we hope that this bibliography facilitates and encourages their research.
Article
The bilevel programming problem involves two optimization problems where the data of the first one is implicitly determined by the solution of the second. In this paper, we introduce two descent methods for a special instance of bilevel programs where the inner problem is strictly convex quadratic. The first algorithm is based on pivot steps and may not guarantee local optimality. A modified steepest descent algorithm is presented to overcome this drawback. New rules for computing exact stepsizes are introduced and a hybrid approach that combines both strategies is discussed. It is proved that checking local optimality in bilevel programming is an NP-hard problem.
Article
Runge showed more than a century ago that polynomial interpolation of a function f(x), using points evenly spaced on x∈[-1,1], could diverge on parts of this interval even if f(x) was analytic everywhere on the interval. Least-squares fitting of a polynomial of degree N to an evenly spaced grid with P points should improve accuracy if P≫N. We show through numerical experiments that such an overdetermined fit reduces but does not eliminate the Runge Phenomenon. More precisely, define β≡(N+1)/P. The least-squares fit will fail to converge everywhere on [−1, 1] as N→∞ for fixed β if f(x) has a singularity inside a curve D(β) in the complex plane. The width of the region enclosed by the convergence boundary D shrinks as β diminishes (i.e., more points for a fixed polynomial degree), but shrinks to the interval [−1, 1] only when β→0. We also show that the Runge Phenomenon can be completely defeated by interpolation on a “mock-Chebyshev” grid: a subset of N+1 points from an equispaced grid with O(N²) points chosen to mimic the non-uniform (N+1)-point Chebyshev-Lobatto grid.
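
A minimal sketch of the mock-Chebyshev idea described above: from a dense equispaced grid, keep the points nearest to the Chebyshev-Lobatto nodes and interpolate only on those. The test function and degree are illustrative choices.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def mock_chebyshev_indices(N, P):
    """Indices of an equispaced P-point grid on [-1, 1] nearest to the N+1 Chebyshev-Lobatto nodes."""
    grid = np.linspace(-1.0, 1.0, P)
    nodes = np.cos(np.pi * np.arange(N + 1) / N)
    # With P on the order of N^2 the nearest grid points are distinct in practice.
    idx = np.unique(np.abs(grid[None, :] - nodes[:, None]).argmin(axis=1))
    return grid, idx

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)     # Runge's classic example
N = 20
P = N**2                                     # dense equispaced grid, O(N^2) points
grid, idx = mock_chebyshev_indices(N, P)

coeffs = C.chebfit(grid[idx], f(grid[idx]), deg=len(idx) - 1)   # interpolation on the subset
x_test = np.linspace(-1.0, 1.0, 1001)
max_err = np.max(np.abs(C.chebval(x_test, coeffs) - f(x_test)))
print(len(idx), max_err)                     # error stays bounded, unlike equispaced interpolation
```
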
Conference Paper
The bilevel programming problem (BLPP) has been proved to be an NP-hard problem. In this paper, we propose a hierarchical particle swarm optimization (PSO) for solving general BLPPs. Unlike most traditional algorithms designed for specific versions or based on specific assumptions, the proposed method is a hierarchical algorithm framework, which solves general bilevel programming problems directly by simulating the decision process of bilevel programming. Solving general BLPPs is transformed into iteratively solving the upper-level and lower-level problems by two variants of PSO. The variants of PSO are built to solve upper-level and lower-level constrained optimization problems. The experimental results, compared with those of other methods, show that the proposed algorithm is a competitive method for solving general BLPPs.
Article
A multiobjective optimization problem involves several conflicting objectives and has a set of Pareto optimal solutions. By evolving a population of solutions, multiobjective evolutionary algorithms (MOEAs) are able to approximate the Pareto optimal set in a single run. MOEAs have attracted a lot of research effort during the last 20 years, and they are still one of the hottest research areas in the field of evolutionary computation. This paper surveys the development of MOEAs primarily during the last eight years. It covers algorithmic frameworks such as decomposition-based MOEAs (MOEA/Ds), memetic MOEAs, coevolutionary MOEAs, selection and offspring reproduction operators, MOEAs with specific search methods, MOEAs for multimodal problems, constraint handling and MOEAs, computationally expensive multiobjective optimization problems (MOPs), dynamic MOPs, noisy MOPs, combinatorial and discrete MOPs, benchmark problems, performance indicators, and applications. In addition, some future research issues are also presented.
Article
The emergence of metaheuristics for solving difficult combinatorial optimization problems is one of the most notable achievements of the last two decades in operations research. This paper provides an account of the most recent developments in the field and identifies some common issues and trends. Examples of applications are also reported for vehicle routing and scheduling problems.
Article
In this paper, a special nonlinear bilevel programming problem (nonlinear BLPP) is transformed into an equivalent single-objective nonlinear programming problem. To solve the equivalent problem effectively, we first construct a specific optimization problem with two objectives. By solving the specific problem, we can decrease the leader's objective value, distinguish the quality of feasible solutions from that of infeasible ones, compare the quality of two feasible solutions for the equivalent single-objective optimization problem, force the infeasible solutions to move toward the feasible region, and improve the feasible solutions gradually. We then propose a new constraint-handling scheme and a specifically designed crossover operator. The new constraint-handling scheme can make the individuals satisfy all linear constraints exactly and the nonlinear constraints approximately. The crossover operator can generate high-quality potential offspring. Based on the constraint-handling scheme and the crossover operator, we propose a new evolutionary algorithm and prove its global convergence. A distinguishing feature of the algorithm is that it can be used to handle nonlinear BLPPs with nondifferentiable leader's objective functions. Finally, simulations on 31 benchmark problems, 12 of which have nondifferentiable leader's objective functions, are made and the results demonstrate the effectiveness of the proposed algorithm.
Article
Due to its independence of the actual search space and its impact on the exploration-exploitation tradeoff, selection is an important operator in any kind of Evolutionary Algorithm. In this paper, all important selection operators are discussed and quantitatively compared with respect to their selective pressure. The comparison clarifies that only a few really different and useful selection operators exist: proportional selection (in combination with a scaling method), linear ranking, tournament selection, and (μ,λ)-selection (respectively, (μ+λ)-selection). Their selective pressure increases in the order in which they are listed here. The theoretical results are confirmed by an experimental investigation using a Genetic Algorithm with different selection methods on a simple unimodal objective function.
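
A minimal sketch of tournament selection, one of the operators compared above; minimization is assumed and the tournament size k controls the selective pressure.

```python
import numpy as np

def tournament_select(fitness, n_parents, k=2, rng=None):
    """Pick n_parents indices; each pick is the best of k randomly drawn individuals."""
    rng = np.random.default_rng() if rng is None else rng
    fitness = np.asarray(fitness)
    winners = []
    for _ in range(n_parents):
        contenders = rng.integers(len(fitness), size=k)
        winners.append(int(contenders[np.argmin(fitness[contenders])]))
    return winners

# Toy usage: lower objective values are better.
rng = np.random.default_rng(5)
fit = rng.random(20)
parents = tournament_select(fit, n_parents=10, k=2, rng=rng)
# Increasing k raises the selective pressure; k=1 reduces to uniform random selection.
```
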
Medical image registration using evolutionary computation
  • Damas
Multilevel programming
  • Candler
Mathematical programs with optimization problems in the constraints
  • Bracken