Article

Modelling the Pareto-optimal set using B-spline basis functions for continuous multi-objective optimization problems


Abstract

In the past few years, multi-objective optimization algorithms have been extensively applied in several fields, including engineering design problems. A major reason is the advancement of evolutionary multi-objective optimization (EMO) algorithms, which are able to find a set of non-dominated points spread over the respective Pareto-optimal front in a single simulation. Besides finding a set of Pareto-optimal solutions, one is often interested in capturing knowledge about how variable values vary over the Pareto-optimal front. Recent innovization approaches for knowledge discovery from Pareto-optimal solutions remain a major activity in this direction. In this article, a different data-fitting approach for continuous parameterization of the Pareto-optimal front is presented. Cubic B-spline basis functions are used to fit the data returned by an EMO procedure in a continuous variable space. No prior knowledge about the ordering of the data is assumed. An automatic procedure for detecting gaps in the Pareto-optimal front is also implemented. The algorithm takes the points returned by the EMO as input and returns the control points of the B-spline manifold representing the Pareto-optimal set. Results on several standard and engineering bi-objective and tri-objective optimization problems demonstrate the usefulness of the proposed procedure.
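The abstract's core data-fitting step can be illustrated in isolation. The sketch below, a minimal numpy-only illustration and not the authors' implementation, least-squares-fits a clamped cubic B-spline curve to an ordered set of points; the function names, chord-length parameterization, and uniform knot layout are illustrative assumptions (the paper additionally handles ordering detection and gap detection, which are omitted here):

```python
import numpy as np

def bspline_basis(i, k, u, knots):
    """Cox-de Boor recursion: the i-th B-spline basis of degree k at parameter u."""
    if k == 0:
        # half-open spans; the last non-empty span is closed at the right end
        if knots[i] <= u < knots[i + 1]:
            return 1.0
        if u == knots[-1] and knots[i] < knots[i + 1] == knots[-1]:
            return 1.0
        return 0.0
    val = 0.0
    if knots[i + k] > knots[i]:
        val += (u - knots[i]) / (knots[i + k] - knots[i]) * bspline_basis(i, k - 1, u, knots)
    if knots[i + k + 1] > knots[i + 1]:
        val += (knots[i + k + 1] - u) / (knots[i + k + 1] - knots[i + 1]) * bspline_basis(i + 1, k - 1, u, knots)
    return val

def fit_cubic_bspline(points, n_ctrl=6):
    """Least-squares fit of a clamped cubic B-spline curve to ordered points (n x d).
    Returns the control points (n_ctrl x d) and the knot vector."""
    pts = np.asarray(points, dtype=float)
    # chord-length parameterization assigns each data point a parameter in [0, 1]
    chord = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
    u = chord / chord[-1]
    k = 3  # cubic
    interior = np.linspace(0.0, 1.0, n_ctrl - k + 1)[1:-1]
    knots = np.r_[np.zeros(k + 1), interior, np.ones(k + 1)]  # clamped at both ends
    # collocation matrix: basis values at the data parameters
    B = np.array([[bspline_basis(i, k, ui, knots) for i in range(n_ctrl)] for ui in u])
    ctrl, *_ = np.linalg.lstsq(B, pts, rcond=None)
    return ctrl, knots
```

Because the cubic spline space contains all linear curves, fitting points sampled from a straight line reproduces them essentially exactly, which makes the sketch easy to sanity-check.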




... Since constructing the Pareto set in this way does not require selecting a weight for each objective, current research on MOPs favors this method [4,26]. ...
... Mutation is also an important operator in BBO, helping the algorithm escape local optima and explore the search space. In BBO [33], the mutation rate is calculated by (4). ...
... Optimizing only the control points of the Bézier curve, which define its curvature, forces the decision variables of solutions in the approximation set to vary in a smooth, continuous fashion, thereby likely improving the intuitive navigability of the approximation set. Previous work on parameterizations of the approximation set has mainly been applied as a post-processing step after optimization, or was performed in the objective space [17,3,24], but this does not aid navigability of the approximation set in decision space. Moreover, fitting a smooth curve through an already optimized set of solutions might produce a bad fit and hence a lower-quality approximation set. ...
Preprint
The aim of bi-objective optimization is to obtain an approximation set of (near) Pareto optimal solutions. A decision maker then navigates this set to select a final desired solution, often using a visualization of the approximation front. The front provides a navigational ordering of solutions to traverse, but this ordering does not necessarily map to a smooth trajectory through decision space. This forces the decision maker to inspect the decision variables of each solution individually, potentially making navigation of the approximation set unintuitive. In this work, we aim to improve approximation set navigability by enforcing a form of smoothness or continuity between solutions in terms of their decision variables. Imposing smoothness as a restriction upon common domination-based multi-objective evolutionary algorithms is not straightforward. Therefore, we use the recently introduced uncrowded hypervolume (UHV) to reformulate the multi-objective optimization problem as a single-objective problem in which parameterized approximation sets are directly optimized. We study here the case of parameterizing approximation sets as smooth Bézier curves in decision space. We approach the resulting single-objective problem with the gene-pool optimal mixing evolutionary algorithm (GOMEA), and we call the resulting algorithm BezEA. We analyze the behavior of BezEA and compare it to optimization of the UHV with GOMEA as well as the domination-based multi-objective GOMEA. We show that high-quality approximation sets can be obtained with BezEA, sometimes even outperforming the domination- and UHV-based algorithms, while smoothness of the navigation trajectory through decision space is guaranteed.
... In recent years, multiobjective optimization methods [1] have been widely used in engineering design [8][9][10]. Optimization algorithms have also been used in the design of hydraulic components. ...
Article
Full-text available
The design of a double-row-blade hydraulic retarder involves many parameters, so solving for the optimal parameter combination is characterized by a large calculation load, long calculation time, and high cost. In this paper, we propose a multiobjective optimization method to obtain the optimal balanced solution between the braking torque and the volume of a double-row-blade hydraulic retarder. We established a surrogate model for the objective functions with radial basis functions (RBF), thus avoiding time-consuming three-dimensional modeling and fluid simulation. The non-dominated sorting genetic algorithm-II (NSGA-II) was then adopted to obtain the optimal combination of design variables. Comparison of the computational fluid dynamics (CFD) values for the optimized and original design parameters indicated that the surrogate-based multiobjective optimization method is applicable to the design of double-row-blade hydraulic retarders.
... The traditional approach to solving such problems lies in the numerical resolution of a sequence of discretized control problems with different fixed values of the weighting coefficients [36]. Another possibility is to solve a family of discretized control problems simultaneously and to choose the weighting coefficient such that the set of Pareto optimal solutions is identified as a polynomial in the space of the optimal values of the considered objective functions [37]. ...
Article
Full-text available
This paper studies the problem of economically oriented optimal operation of batch membrane diafiltration processes that are designed to concentrate the valuable components of the solution and to purge the impurities from it. We consider a complex economic objective that accounts for the total operational costs comprising the cost of consumed diluent, costs related to the duration of processing, and the cost of product loss. The optimization problem is formulated as a multi-objective optimal control problem in order to investigate the impact of operational cost factors on the optimal operation policy. This is achieved thanks to an analytical approach that exploits Pontryagin's minimum principle. We show that the economically optimal control strategy is to carry out an operation involving saturated (bang-bang or constraint-tracking) control modes and a singular arc. For the most common cases of diafiltration problems, it turns out that the switching of the consecutive control modes can be realized in state-feedback fashion, i.e. the entire optimal operation is defined analytically in the space of process states. We demonstrate the applicability of the presented approach, and we illustrate the achievable benefits over traditional control methods for batch diafiltration processes on two case studies taken from the literature.
Article
Owing to the stochasticity of Evolutionary Multi-objective Optimization (EMO) Algorithms and an application with a limited budget of solution evaluations, a perfectly converged and uniformly distributed Pareto-optimal (PO) front cannot always be guaranteed. Thus, a subsequent decision-making step or a curiosity on the part of the optimization researcher may demand solutions at regions not well-represented by the obtained PO front. In this study, we propose to train Machine Learning (ML) models to capture the mapping between unique identifiers of PO solutions – pseudo-weight vectors, computed from the existing PO front data – and their corresponding decision variable vectors. These learned models can then be used to predict PO decision variables for any new desired pseudo-weight vector. We evaluate the proposed approach with two different ML methods on a variety of multi- and many-objective test and real-world problems. This procedure can also be incorporated into an EMO algorithm to find a better converged set of PO solutions, attempt to fill apparent gaps, and find more non-dominated solutions at preferred regions of the PO front, facilitating a number of key advances for multi-objective optimization and decision-making tasks.
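The pseudo-weight vector this abstract uses as a solution identifier has a standard closed form (from Deb's multi-objective optimization literature): each objective's normalized distance from its worst observed value, renormalized to sum to one. A minimal sketch for a minimization front, with the ML regression step omitted:

```python
import numpy as np

def pseudo_weights(F):
    """Pseudo-weight vector for each point of a Pareto front F (n x m, minimization).
    Component i is large when objective i is near its best (minimum) value, so the
    vector locates a solution on the front independently of objective scales."""
    F = np.asarray(F, dtype=float)
    f_min, f_max = F.min(axis=0), F.max(axis=0)
    raw = (f_max - F) / (f_max - f_min)   # 1 at the per-objective best, 0 at the worst
    return raw / raw.sum(axis=1, keepdims=True)
```

A model mapping these weights to decision variables can then be fitted with any regressor, since each front point yields one (pseudo-weight, decision-vector) training pair.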
Chapter
The aim of bi-objective optimization is to obtain an approximation set of (near) Pareto optimal solutions. A decision maker then navigates this set to select a final desired solution, often using a visualization of the approximation front. The front provides a navigational ordering of solutions to traverse, but this ordering does not necessarily map to a smooth trajectory through decision space. This forces the decision maker to inspect the decision variables of each solution individually, potentially making navigation of the approximation set unintuitive. In this work, we aim to improve approximation set navigability by enforcing a form of smoothness or continuity between solutions in terms of their decision variables. Imposing smoothness as a restriction upon common domination-based multi-objective evolutionary algorithms is not straightforward. Therefore, we use the recently introduced uncrowded hypervolume (UHV) to reformulate the multi-objective optimization problem as a single-objective problem in which parameterized approximation sets are directly optimized. We study here the case of parameterizing approximation sets as smooth Bézier curves in decision space. We approach the resulting single-objective problem with the gene-pool optimal mixing evolutionary algorithm (GOMEA), and we call the resulting algorithm BezEA. We analyze the behavior of BezEA and compare it to optimization of the UHV with GOMEA as well as the domination-based multi-objective GOMEA. We show that high-quality approximation sets can be obtained with BezEA, sometimes even outperforming the domination- and UHV-based algorithms, while smoothness of the navigation trajectory through decision space is guaranteed.
Article
Full-text available
The GPareto package for R provides multi-objective optimization algorithms for expensive black-box functions and an ensemble of dedicated uncertainty quantification methods. Popular methods such as efficient global optimization in the mono-objective case rely on Gaussian processes or kriging to build surrogate models. Driven by the prediction uncertainty given by these models, several infill criteria have also been proposed in a multi-objective setup to select new points sequentially and efficiently cope with severely limited evaluation budgets. They are implemented in the package, in addition to Pareto front estimation and uncertainty quantification visualization in the design and objective spaces. Finally, the package attempts to fill the gap between expert use of the corresponding methods and user-friendliness; much effort has been put into providing graphical post-processing, standard tuning, and interactivity.
Research
Mg alloys are known for their specific strength, stiffness, damping capacity, and EMI shielding. In particular, rare-earth-added Mg alloys find applications in gearbox casings, transmission housings, engine mounts, ribs, frames, and instrument panels due to their improved corrosion resistance, pressure tightness, specific strength, and creep strength. The re-emergence of Mg alloys in aircraft structural applications demands advanced machining processes such as EDM to fabricate complex-geometry parts. In this study, parametric multi-objective optimization of EDM on an Mg–RE–Zn–Zr alloy is carried out using a novel meta-heuristic algorithm, Passing Vehicle Search (PVS). The input parameters considered are pulse-on time (Ton), pulse-off time (Toff), and peak current (A). The response surface method (RSM) is implemented through a Box–Behnken design to formulate a mathematical model for material removal rate (MRR), tool wear rate (TWR), and roundness of holes. The accuracy of the theoretical model has been established using confirmation runs. Using the weighted-sum method, the multi-objective PVS calculated optimal solutions for different weights to generate 2-D and surface Pareto fronts. These Pareto fronts were evaluated to determine the performance of PVS using novel and established metrics such as spacing, spreading, hypervolume, and pure diversity. The values of the performance metrics indicate the acceptable nature of the fronts, and such analysis facilitates better comparison of solutions when selecting algorithms for optimization. Finally, decision making is illustrated with the help of level diagrams to draw practical inferences for designing production plans and providing the best choice of machining parameters according to the user's preferences.
Conference Paper
A new trajectory planning method using spline interpolation is proposed to reduce the duration and discontinuous impact in current erection equipment containing a two-stage electric cylinder. By analyzing the kinematic and dynamic models of the erection equipment, actuation constraints and boundary conditions are obtained. Fifth-order B-spline interpolation is employed to conduct the trajectory planning, and a penalty function is introduced to handle the constraints. The NSGA-II algorithm is used to perform multi-objective optimization of the B-spline trajectory, with the intervals between adjacent control points of the splines taken as the design variables. The result is a Pareto-optimal set, from which the smoothest and fastest solution is chosen. Simulation results show that fifth-order B-spline interpolation achieves approximately optimal time consumption while polynomial interpolation does not, and that it yields more continuous jerk than cubic B-spline interpolation. This stabilizes the erection process and reduces the tracking error.
Article
Computationally expensive multiobjective optimization problems arise in many engineering applications, where several conflicting objectives are to be optimized simultaneously while satisfying constraints. In many cases, the lack of explicit mathematical formulas for the objectives and constraints may necessitate conducting computationally expensive and time-consuming experiments and/or simulations. As another challenge, these problems may have a convex, nonconvex, or even disconnected Pareto frontier consisting of Pareto optimal solutions. Because of the existence of many such solutions, typically, a decision maker is required to select the most preferred one. In order to deal with the high computational cost, surrogate-based methods are commonly used in the literature. This paper surveys surrogate-based methods proposed in the literature, where the methods are independent of the underlying optimization algorithm and mitigate the computational burden to capture different types of Pareto frontiers. The methods considered are classified, discussed, and then compared. They are divided into two frameworks: the sequential and the adaptive frameworks. Based on the comparison, we recommend the adaptive framework to tackle the aforementioned challenges.
Article
Full-text available
In [5], an evolutionary algorithm for detecting continuous Pareto optimal sets was proposed. In this paper we propose a new evolutionary elitist approach combining a non-standard solution representation with an evolutionary optimization technique. The proposed method permits the detection of continuous decision regions. In our approach an individual (a solution) is either a closed interval or a point. The individuals in the final population give a realistic representation of the Pareto optimal set: each solution in this population corresponds to a decision region of the Pareto optimal set. The proposed technique is elitist and uses a single population, the current population containing the non-dominated solutions found so far.
Article
Full-text available
We present an efficient method for estimating cluster centers of numerical data. This method can be used to determine the number of clusters and their initial values for initializing iterative optimization-based clustering algorithms such as fuzzy C-means. Here we use the cluster estimation method as the basis of a fast and robust algorithm for identifying fuzzy models. A benchmark problem involving the prediction of a chaotic time series shows this model identification method compares favorably with other, more computationally intensive methods. We also illustrate an application of this method in modeling the relationship between automobile trips and demographic factors.
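The cluster estimation method summarized above (subtractive clustering in the style of Chiu) assigns every data point a "potential" that grows with the number of nearby points, greedily picks the highest-potential point as a center, and discounts potential near each chosen center before picking the next. The sketch below is an illustrative reconstruction, not the paper's code; the radii and stopping ratio are assumed defaults:

```python
import numpy as np

def subtractive_centers(X, ra=0.5, rb=0.75, stop_ratio=0.15):
    """Subtractive cluster estimation (a sketch): every data point is a candidate
    center; centers are picked greedily by potential, and potential near each new
    center is discounted so the next pick lands in a different dense region."""
    X = np.asarray(X, dtype=float)
    alpha, beta = 4.0 / ra ** 2, 4.0 / rb ** 2
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)  # pairwise squared distances
    P = np.exp(-alpha * d2).sum(axis=1)                       # initial potentials
    centers, first_potential = [], None
    while True:
        c = int(np.argmax(P))
        if first_potential is None:
            first_potential = P[c]
        if centers and P[c] < stop_ratio * first_potential:
            break
        centers.append(X[c])
        P = P - P[c] * np.exp(-beta * d2[c])  # discount potential near the new center
    return np.array(centers)
```

The returned centers can seed an iterative method such as fuzzy C-means, which is exactly the initialization role the abstract describes.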
Article
Full-text available
In this article, a methodology is proposed for automatically extracting innovative design principles which make a system or process (subject to conflicting objectives) optimal using its Pareto-optimal dataset. Such ‘higher knowledge’ would not only help designers to execute the system better, but also enable them to predict how changes in one variable would affect other variables if the system has to retain its optimal behaviour. This in turn would help solve other similar systems with different parameter settings easily without the need to perform a fresh optimization task. The proposed methodology uses a clustering-based optimization technique and is capable of discovering hidden functional relationships between the variables, objective and constraint functions and any other function that the designer wishes to include as a ‘basis function’. A number of engineering design problems are considered for which the mathematical structure of these explicit relationships exists and has been revealed by a previous study. A comparison with the multivariate adaptive regression splines (MARS) approach reveals the practicality of the proposed approach due to its ability to find meaningful design principles. The success of this procedure for automated innovization is highly encouraging and indicates its suitability for further development in tackling more complex design scenarios.
Article
Full-text available
For three-objective maximization problems involving continuous, semistrictly quasiconcave functions over a compact convex set, it is shown that the set of efficient solutions is connected. With that, an open problem stated by Choo, Schaible, and Chew in 1985 is solved.
Article
Full-text available
The procedure samples the efficient set by computing the nondominated criterion vector that is closest to an ideal criterion vector according to a randomly weighted Tchebycheff metric. Using ‘filtering’ techniques, maximally dispersed representatives of smaller and smaller subsets of the set of nondominated criterion vectors are presented at each iteration. The procedure has the advantage that it can converge to non-extreme final solutions. Especially suitable for multiple objective linear programming, the procedure is also applicable to integer and nonlinear multiple objective programs.
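The sampling step this abstract describes, minimizing a randomly weighted Tchebycheff (Chebyshev) distance to an ideal criterion vector, can be sketched as follows. This illustrates only the scalarization over a finite candidate set; the filtering and iterative subset refinement of the full procedure are omitted, and the function name and Dirichlet weight draw are assumptions:

```python
import numpy as np

def tchebycheff_sample(F, ideal, rng):
    """Return the candidate criterion vector closest to the ideal point under a
    randomly weighted Tchebycheff metric. F: candidate vectors (n x m)."""
    F = np.asarray(F, dtype=float)
    w = rng.dirichlet(np.ones(F.shape[1]))        # random nonnegative weights, sum 1
    dist = np.max(w * np.abs(F - ideal), axis=1)  # weighted Chebyshev distance
    return F[int(np.argmin(dist))]
```

Because the metric is a max over weighted per-objective deviations rather than a weighted sum, repeated draws can land on non-extreme nondominated points, which is the property the abstract highlights.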
Article
Full-text available
A survey of current continuous nonlinear multi-objective optimization (MOO) concepts and methods is presented. It consolidates and relates seemingly different terminology and methods. The methods are divided into three major categories: methods with a priori articulation of preferences, methods with a posteriori articulation of preferences, and methods with no articulation of preferences. Genetic algorithms are surveyed as well. Commentary is provided on three fronts, concerning the advantages and pitfalls of individual methods, the different classes of methods, and the field of MOO as a whole. The characteristics of the most significant methods are summarized. Conclusions are drawn that reflect often-neglected ideas and applicability to engineering problems. It is found that no single approach is superior. Rather, the selection of a specific method depends on the type of information that is provided in the problem, the user's preferences, the solution requirements, and the availability of software.
Conference Paper
Full-text available
In our previous work (Aimin Zhou et al., 2005), it was shown that the performance of multi-objective evolutionary algorithms can be greatly enhanced if the regularity in the distribution of Pareto-optimal solutions is used. This paper suggests a new hybrid multi-objective evolutionary algorithm that introduces a convergence-based criterion to determine when the model-based method and when the genetics-based method should be used to generate offspring in each generation. The basic idea is that the genetics-based method, i.e., crossover and mutation, should be used when the population is far away from the Pareto front and no obvious regularity in the population distribution can be observed. When the population moves towards the Pareto front, the distribution of the individuals shows increasing regularity, and in this case the model-based method should be used to generate offspring. The proposed hybrid method is verified on widely used test problems, and our simulation results show that it is effective in achieving Pareto-optimal solutions compared to two state-of-the-art evolutionary multi-objective algorithms, NSGA-II and SPEA2, and to our previous method (Aimin Zhou et al., 2005).
Article
Full-text available
Most existing multiobjective evolutionary algorithms aim at approximating the Pareto front (PF), which is the distribution of the Pareto-optimal solutions in the objective space. In many real-life applications, however, a good approximation to the Pareto set (PS), which is the distribution of the Pareto-optimal solutions in the decision space, is also required by a decision maker. This paper considers a class of multiobjective optimization problems (MOPs), in which the dimensionalities of the PS and the PF manifolds are different so that a good approximation to the PF might not approximate the PS very well. It proposes a probabilistic model-based multiobjective evolutionary algorithm, called MMEA, for approximating the PS and the PF simultaneously for an MOP in this class. In the modeling phase of MMEA, the population is clustered into a number of subpopulations based on their distribution in the objective space, the principal component analysis technique is used to estimate the dimensionality of the PS manifold in each subpopulation, and then a probabilistic model is built for modeling the distribution of the Pareto-optimal solutions in the decision space. Such a modeling procedure could promote the population diversity in both the decision and objective spaces. MMEA is compared with three other methods, KP1, Omni-Optimizer and RM-MEDA, on a set of test instances, five of which are proposed in this paper. The experimental results clearly suggest that, overall, MMEA performs significantly better than the three compared algorithms in approximating both the PS and the PF.
Conference Paper
Full-text available
Evolutionary multi-objective optimization algorithms have been commonly used for over a decade to obtain a set of non-dominated solutions. Recently, much emphasis has been laid on hybridizing evolutionary algorithms with MCDM and mathematical programming algorithms to yield a computationally efficient and convergent procedure. In this paper, we rigorously test an augmented local-search-based EMO procedure on a test suite of constrained and unconstrained multi-objective optimization problems. The success of our approach on most of the test problems not only provides confidence but also stresses the importance of hybrid evolutionary algorithms in solving multi-objective optimization problems.
Conference Paper
Full-text available
Despite having a wide-spread applicability of evolutionary optimization procedures over the past few decades, EA researchers still face criticism about the theoretical optimality of obtained solutions. In this paper, we address this issue for problems for which gradients of objectives and constraints can be computed either exactly, or numerically or through subdifferentials. We suggest a systematic procedure of analyzing a representative set of Pareto-optimal solutions for their closeness to satisfying Karush-Kuhn-Tucker (KKT) points, which every Pareto-optimal solution must also satisfy. The procedure involves either a least-square solution or an optimum solution to a set of linear system of equations involving Lagrange multipliers. The procedure is applied to a number of differentiable and non-differentiable test problems and to a highly nonlinear engineering design problem. The results clearly show that EAs are capable of finding solutions close to theoretically optimal solutions in various problems. As a by-product, the error metric suggested in this paper can also be used as a termination condition for an EA application. Hopefully, this study will bring EAs and its research closer to classical optimization studies.
Conference Paper
Full-text available
Evolutionary algorithms (EAs) are increasingly being applied to solve real-parameter optimization problems due to their flexibility in handling complexities such as non-convexity, non-differentiability, multi-modality and noise in problems. However, an EA's solution is never guaranteed to be optimal in generic problems, even for smooth problems, and importantly EAs still lack a theoretically motivated termination criterion for stopping an EA run only when a near-optimal point is found. We address both these issues in this paper by integrating the Karush-Kuhn-Tucker (KKT) optimality conditions, which involve first-order derivatives of objective and constraint functions, with an EA. For this purpose, we define a KKT-proximity measure by relaxing the complementary slackness condition associated with the KKT conditions. Results on a number of standard constrained test problems indicate that in spite of not using any gradient information and any theoretical optimality conditions, an EA's selection, recombination and mutation operations lead the search process to a point close to the KKT point. This suggests that the proposed KKT-proximity measure can be used as a termination criterion in an EA simulation.
Conference Paper
Full-text available
The Pareto optimal solutions to a multi-objective optimization problem often distribute very regularly in both the decision space and the objective space. Most existing evolutionary algorithms do not explicitly take advantage of such regularity. This paper proposes a model-based evolutionary algorithm (M-MOEA) for bi-objective optimization problems. Inspired by ideas from estimation of distribution algorithms, M-MOEA uses a probability model to capture the regularity of the distribution of the Pareto optimal solutions. Local principal component analysis (local PCA) and the least-squares method are employed for building the model, and new solutions are sampled from the model thus built. At alternate generations, M-MOEA uses crossover and mutation to produce new solutions. The selection in M-MOEA is the same as in the non-dominated sorting genetic algorithm-II (NSGA-II); therefore, M-MOEA can be regarded as a combination of an EDA and NSGA-II. Preliminary experimental results show that M-MOEA performs better than NSGA-II.
Conference Paper
Full-text available
The distribution of the Pareto-optimal solutions often has a clear structure. To adapt evolutionary algorithms to the structure of a multi-objective optimization problem, either an adaptive representation or adaptive genetic operators should be employed. We suggest an estimation of distribution algorithm for solving multi-objective optimization, which is able to adjust its reproduction process to the problem structure. For this purpose, a new algorithm called Voronoi-based estimation of distribution algorithm (VEDA) is proposed. In VEDA, a Voronoi diagram is used to construct stochastic models, based on which new offspring will be generated. Empirical comparisons of the VEDA with other estimation of distribution algorithms (EDAs) and the popular NSGA-II algorithm are carried out. In addition, representation of Pareto-optimal solutions using a mathematical model rather than a solution set is also discussed.
Conference Paper
Full-text available
Local search techniques have proved to be very efficient in evolutionary multi-objective optimization (MOO). However, the reasons behind the success of local search in MOO have not yet been well discussed. This paper attempts to investigate empirically the main factors that may have contributed significantly to the success of local search in MOO. It is found that for many widely used test problems, the Pareto optimal solutions are connected both in objective space and parameter space. Besides, the Pareto-optimal solutions often distribute so regularly in parameter space that they can be defined by piecewise linear functions. By constructing an approximate model using the solutions produced by an optimizer, the quality of the non-dominated solution set can be further improved. The evolutionary dynamic weighted aggregation (EDWA) method has been adopted as a local search technique in finding Pareto-optimal solutions. Its effectiveness for MOO is demonstrated on a number of two or three objective optimization problems.
Article
Full-text available
Under mild conditions, it can be deduced from the Karush-Kuhn-Tucker condition that the Pareto set, in the decision space, of a continuous multiobjective optimization problem is a piecewise continuous (m - 1)-D manifold, where m is the number of objectives. Based on this regularity property, we propose a regularity model-based multiobjective estimation of distribution algorithm (RM-MEDA) for continuous multiobjective optimization problems with variable linkages. At each generation, the proposed algorithm models a promising area in the decision space by a probability distribution whose centroid is a (m - 1)-D piecewise continuous manifold. The local principal component analysis algorithm is used for building such a model. New trial solutions are sampled from the model thus built. A nondominated sorting-based selection is used for choosing solutions for the next generation. Systematic experiments have shown that, overall, RM-MEDA outperforms three other state-of-the-art algorithms, namely, GDE3, PCX-NSGA-II, and MIDEA, on a set of test instances with variable linkages. We have demonstrated that, compared with GDE3, RM-MEDA is not sensitive to algorithmic parameters, and has good scalability to the number of decision variables in the case of nonlinear variable linkages. A few shortcomings of RM-MEDA have also been identified and discussed in this paper.
Article
Full-text available
This paper proposes an alternate method for finding several Pareto optimal points for a general nonlinear multicriteria optimization problem. Such points collectively capture the trade-off among the various conflicting objectives. It is proved that this method is independent of the relative scales of the functions and is successful in producing an evenly distributed set of points in the Pareto set given an evenly distributed set of parameters, a property which the popular method of minimizing weighted combinations of objective functions lacks. Further, this method can handle more than two objectives while retaining the computational efficiency of continuation-type algorithms. This is an improvement over continuation techniques for tracing the trade-off curve since continuation strategies cannot easily be extended to handle more than two objectives. This research was partially supported by the Dept. of Energy, DOE Grant DE-FG03-95ER25257, the Air Force, Air Force Grant F49620-95-1-021...
Book
List of Figures. List of Tables. Preface. Foreword. 1. Basic Concepts. 2. Evolutionary Algorithm MOP Approaches. 3. MOEA Test Suites. 4. MOEA Testing and Analysis. 5. MOEA Theory and Issues. 6. Applications. 7. MOEA Parallelization. 8. Multi-Criteria Decision Making. 9. Special Topics. 10. Epilog. Appendix A: MOEA Classification and Technique Analysis. Appendix B: MOPs in the Literature. Appendix C: Ptrue & PFtrue for Selected Numeric MOPs. Appendix D: Ptrue & PFtrue for Side-Constrained MOPs. Appendix E: MOEA Software Availability. Appendix F: MOEA-Related Information. Index. References.
Article
In many countries, the most widely used method for timing plan selection and implementation is the time-of-day (TOD) method. In TOD mode, a few traffic patterns that exist in the historical volume data are recognized and used to find the signal timing plans needed to achieve optimum performance of the intersections during the day. Traffic engineers usually determine TOD breakpoints by analyzing 1 or 2 days' worth of traffic data and relying on their engineering judgment. The current statistical methods, such as hierarchical and K-means clustering methods, determine TOD breakpoints but introduce a large number of transitions. This paper proposes adopting the Z-score of the traffic flow and time variable in the K-means clustering to reduce the number of transitions. The number of optimum breakpoints is chosen based on a microscopic simulation model considering a set of performance measures. By using simulation and the K-means algorithm, it was found that five clusters are the optimum for a major arterial in Al-Khobar, Saudi Arabia. As an alternative to the simulation-based approach, a subtractive algorithm-based K-means technique is introduced to determine the optimum number of TODs. Through simulation, it was found that both approaches result in almost the same values of measure of effectiveness (MOE). The two proposed approaches seem promising for similar studies in other regions, and both of them can be extended to different types of roads. The paper also suggests a procedure for considering the cyclic nature of the daily traffic in the clustering effort. DOI: 10.1061/(ASCE)CP.1943-5487.0000099. (C) 2011 American Society of Civil Engineers.
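The clustering idea can be sketched as follows; this is generic z-score normalization plus Lloyd's k-means, not the authors' exact procedure or their subtractive-clustering variant:

```python
import math
import random

def zscore(xs):
    """Standardize a list of values to zero mean and unit variance."""
    mu = sum(xs) / len(xs)
    sd = math.sqrt(sum((x - mu) ** 2 for x in xs) / len(xs))
    return [(x - mu) / sd for x in xs]

def kmeans(points, k, iters=100, seed=0):
    """Plain Lloyd's k-means on 2-D points: a sketch of clustering
    (z-scored hour, z-scored volume) pairs so that breakpoints follow
    both time and flow rather than flow alone.  Returns a label per point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2 + (p[1] - centers[c][1]) ** 2)
            groups[i].append(p)
        # Recompute centroids; keep the old center if a cluster empties.
        centers = [
            (sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g)) if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return [min(range(k),
                key=lambda c: (p[0] - centers[c][0]) ** 2 + (p[1] - centers[c][1]) ** 2)
            for p in points]
```

Consecutive hours whose labels differ would then mark candidate TOD breakpoints.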
Article
The theory of economic models of decentralization is developed from a point of view quite different from the virtual planning phases of economic systems as displayed by market mechanism type procedures. The central unit transmits appropriate information to each divisional unit which then acts (commits resources) according to optimization with respect to variables under its control. Our theory presents a natural hierarchy of coherently decentralized systems which is based on the increasing amounts of information which are to be transmitted to divisional units for proper decentralized decision making. In general, this information involves more than prices alone, and the notion of preemptive goals is introduced for this purpose by means of examining dual convex programming problems. Employing our precise specifications, uniqueness of divisional optima (in contrast to certain trivial cases of decentralization) is not required. Thus, additional flexibility is available to divisions without violating company goals. Further, we show that our procedure is a robust one in the sense that approximate fulfillment of the preemptive goals, e.g., non-optimal but close to optimal divisional solutions, results in small deviations from optimal profit.
Article
If a linear programming problem involves two objective functions, it is desirable to learn all solutions depending on the relative weight attached to the two functions. This paper presents details of an algorithm which finds these solutions systematically.
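The idea of sweeping the relative weight can be sketched over a finite candidate set (for a linear program, the optimum always lies at a vertex of the feasible region); this is a minimal illustration, not the paper's systematic parametric algorithm:

```python
def weight_sweep(points, steps=101):
    """Sketch of parametric bi-objective optimization: sweep the relative
    weight w in [0, 1], pick the minimizer of w*f1 + (1 - w)*f2 over a
    finite candidate set (e.g. LP vertices, given as (f1, f2) pairs), and
    collect the distinct optima found along the way."""
    found = []
    for k in range(steps):
        w = k / (steps - 1)
        best = min(points, key=lambda p: w * p[0] + (1 - w) * p[1])
        if best not in found:
            found.append(best)
    return found
```

Note that this weighted-sum sweep only recovers solutions on the convex hull of the trade-off set; points dominated by a convex combination of others are never selected.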
Conference Paper
Estimation of distribution algorithms have been shown to perform well on a wide variety of single-objective optimization problems. Here, we look at a simple yet effective extension of this paradigm for multi-objective optimization, called the naive MIDEA. The probabilistic model in this specific algorithm is a mixture distribution, and each component in the mixture is a univariate factorization. Mixture distributions allow for wide-spread exploration of the Pareto front, thus aiding the important preservation of diversity in multi-objective optimization. Due to its simplicity, speed, and effectiveness, the naive MIDEA can well serve as a baseline algorithm for multi-objective evolutionary algorithms.
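Sampling from such a model can be sketched as follows, assuming Gaussian univariate factors; the function and parameter names are illustrative, not the naive MIDEA's actual API:

```python
import random

def sample_mixture(components, weights, n, seed=0):
    """Sketch of sampling from a naive MIDEA-style model: a mixture whose
    components are univariate factorizations, i.e. each component is a
    list of independent (mean, std) pairs, one per decision variable."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        # Pick a mixture component, then sample each variable independently.
        comp = rng.choices(components, weights=weights)[0]
        samples.append([rng.gauss(mu, sd) for mu, sd in comp])
    return samples
```

With two well-separated components the samples form two clusters, which is how the mixture spreads exploration across different regions of the Pareto front.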
Article
A new effective and computationally efficient approach for design optimization, hereby entitled physical programming, is developed. This new approach is intended to substantially reduce the computational intensity of large problems and to place the design process into a more flexible and natural framework. Knowledge of the desired attributes of the optimal design is judiciously exploited. For each attribute of interest to the designer (each criterion), regions are defined that delineate degrees of desirability: unacceptable, highly undesirable, undesirable, tolerable, desirable, and highly desirable. This approach completely eliminates the need for iterative weight setting, which is the object of the typical computational bottleneck in large design optimization problems. Two key advantages of physical programming are 1) once the designer's preferences are articulated, obtaining the corresponding optimal design is a noniterative process - in stark contrast to conventional weight-based methods and 2) it provides the means to reliably employ optimization with minimal prior knowledge thereof. The mathematical infrastructure that supports the physical programming design optimization framework is developed, and a numerical example provided. Physical programming is a new approach to realistic design optimization that may be appealing to the design engineer in an industrial setting.
Book
Preface. Acknowledgements. Notation and Symbols. Part I: Terminology and Theory. 1. Introduction. 2. Concepts. 3. Theoretical Background. Part II: Methods. 1. Introduction. 2. No-Preference Methods. 3. A Posteriori Methods. 4. A Priori Methods. 5. Interactive Methods. Part III: Related Issues. 1. Comparing Methods. 2. Software. 3. Graphical Illustration. 4. Future Directions. 5. Epilogue. References. Index.
Article
The paper reviews the development of the cluster-oriented genetic algorithm (COGA) strategy from the initial approach to more recent advances which significantly improve the performance of COGA in both the search capabilities and the consistency of the generated design solutions. COGAs are specifically designed to identify high-performance (HP) regions of complex, multi-variable design spaces whilst also achieving good set cover in terms of solutions across these regions. The objective is to extract information from such regions relating to the nature of the problem space in addition to providing the designer with a succinct collection of HP design options. The application of COGA to a number of real-world design tasks is discussed, and also its integration within a graphical user interface and the interactive evolutionary design system.
Conference Paper
Linear programming deals with problems such as (see [4], [5]): to maximize a linear function g(x) = c1x1 + ... + cnxn of n real variables x1, ..., xn (forming a vector x) constrained by m + n linear inequalities.
Article
In this work, we present new set-oriented numerical methods for the solution of multiobjective optimization problems. These methods are global in nature and allow one to approximate the entire set of (global) Pareto points. After proving convergence of an associated abstract subdivision procedure, we use this result as the basis for the development of three different algorithms. We also consider appropriate combinations of them in order to improve the total performance. Finally, we illustrate the efficiency of these techniques via academic examples plus a real technical application, namely the optimization of an active suspension system for cars.
Chapter
After adequately demonstrating the ability to solve different two-objective optimization problems, multiobjective evolutionary algorithms (MOEAs) must demonstrate their efficacy in handling problems having more than two objectives. In this study, we have suggested three different approaches for systematically designing test problems for this purpose. The simplicity of construction, scalability to any number of decision variables and objectives, knowledge of the shape and the location of the resulting Pareto-optimal front, and introduction of controlled difficulties in both converging to the true Pareto-optimal front and maintaining a widely distributed set of solutions are the main features of the suggested test problems. Because of the above features, they should be found useful in various research activities on MOEAs, such as testing the performance of a new MOEA, comparing different MOEAs, and better understanding of the working principles of MOEAs.
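One widely used member of this family of scalable test problems (commonly known as DTLZ2) has a spherical Pareto-optimal front on which the objective values satisfy f1^2 + ... + fm^2 = 1; a minimal sketch of its objective evaluation:

```python
import math

def dtlz2(x, m):
    """DTLZ2-style scalable test problem (minimization): the first m-1
    variables set the position on the spherical front, the remaining
    variables control the distance function g (g = 0 on the front)."""
    g = sum((xi - 0.5) ** 2 for xi in x[m - 1:])
    f = []
    for j in range(m):
        val = 1.0 + g
        for xi in x[: m - 1 - j]:
            val *= math.cos(xi * math.pi / 2)
        if j > 0:
            val *= math.sin(x[m - 1 - j] * math.pi / 2)
        f.append(val)
    return f
```

Setting all distance variables to 0.5 gives g = 0, so the objective vector lies exactly on the unit sphere, which makes convergence easy to verify in MOEA experiments.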
Article
In multiple criteria optimization an important research topic is the topological structure of the set Xe of efficient solutions. Of major interest is the connectedness of Xe, since it would allow the determination of Xe without considering non-efficient solutions in the process. We review general results on the subject, including the connectedness result for efficient solutions in multiple criteria linear programming. This result can be used to derive a definition of connectedness for discrete optimization problems. We present a counterexample to a previously stated result in this area, namely that the set of efficient solutions of the shortest path problem is connected. We will also show that connectedness does not hold for another important problem in discrete multiple criteria optimization: the spanning tree problem.
Article
A systematic approach is presented to approximate the Pareto optimal front (POF) by a response surface approximation. The data for the POF is obtained by multi-objective evolutionary algorithm. Improvements to address drift in the POF are also presented. The approximated POF can help visualize and quantify trade-offs among objectives to select compromise designs. The bounds of this approximate POF are obtained using multiple convex-hulls. The proposed approach is applied to study trade-offs among objectives of a rocket injector design problem where performance and life objectives compete. The POF is approximated using a quintic polynomial. The compromise region quantifies trade-offs among objectives.
Article
Many papers have addressed the problem of fitting curves to data points. However, most of the approaches are subject to a restriction that the data points must be ordered. The paper presents a method for generating a piecewise continuous parametric curve from a set of unordered and error-filled data points. The resulting curve not only provides a good fit to the original data but also possesses good fairness. Excluding the endpoints of the curve, none of the connectivity information needs to be specified, thus eliminating the necessity of an initial parameterization. The standard regularization method for univariate functions is modified for multidimensional parametric functions and results in a nonlinear minimization problem. Successive quadratic programming is applied to find the optimal solution. A physical model is also supplied to facilitate an intuitive understanding of the mathematical background.
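A naive baseline for the ordering problem the paper addresses is greedy nearest-neighbour chaining; the paper's method deliberately avoids this kind of initial parameterization, so the sketch below only illustrates the simplest alternative, with `order_points` as a hypothetical helper:

```python
def order_points(points, start=0):
    """Greedy nearest-neighbour chaining: recover an ordering of unordered
    2-D curve samples by repeatedly walking to the closest unvisited point.
    A naive baseline; it can fail on noisy or self-intersecting data,
    which motivates ordering-free fitting methods."""
    remaining = list(range(len(points)))
    order = [remaining.pop(start)]
    while remaining:
        last = points[order[-1]]
        nxt = min(remaining,
                  key=lambda i: (points[i][0] - last[0]) ** 2 + (points[i][1] - last[1]) ** 2)
        remaining.remove(nxt)
        order.append(nxt)
    return order
```

Once an ordering exists, a chord-length parameterization and a standard least-squares spline fit could follow; the quality of the final curve hinges on that fragile ordering step.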
Article
This is part 1 of a survey of recent developments in goal programming and multiple objective optimizations. In this part, attention is directed to goal programming with emphasis on the authors' own work (with others) in a variety of applications. This includes goal and goal interval programming along with definitions and examples of proper goal functionals. Characterizations are also supplied for alternate representations and explicit solutions from special structural properties involving piecewise linear functions. Possibilities for various other kinds of goal functionals are explored and delineated. One class of examples is developed in detail and an algorithm is supplied which utilizes sequences of ordinary linear programming problems to solve certain nonlinear and non-convex problems involving maxima of ratios of linear forms.
Conference Paper
Evolutionary multi-objective optimization (EMO) methodologies have been amply applied to find a representative set of Pareto-optimal solutions in the past decade and beyond. Although there are advantages to knowing the range of each objective for Pareto-optimality and the shape of the Pareto-optimal frontier itself for adequate decision-making, the task of choosing a single preferred Pareto-optimal solution is also an important task which has received lukewarm attention so far. In this paper, we combine one such preference-based strategy with an EMO methodology and demonstrate how, instead of one solution, a preferred set of solutions near the reference points can be found in parallel. We propose a modified EMO procedure based on the elitist non-dominated sorting GA, or NSGA-II. On two-objective to 10-objective optimization problems, the modified NSGA-II approach shows its efficacy in finding an adequate set of Pareto-optimal points. Such procedures will provide the decision-maker with a set of solutions near her/his preference so that a better and more reliable decision can be made.
Article
Despite the volume of research conducted on efficient frontiers, in many cases it is still not the easiest thing to compute a mean-variance (MV) efficient frontier even when all constraints are linear. This is particularly true of large-scale problems having dense covariance matrices and hence they are the focus in this paper. Because standard approaches for constructing an efficient frontier one point at a time tend to bog down on dense covariance matrix problems with many more than about 500 securities, we propose as an alternative a procedure of parametric quadratic programming for more effective usage on large-scale applications. With the proposed procedure we demonstrate through computational results on problems in the 1000-3000 security range that the efficient frontiers of dense covariance matrix problems in this range are now not only solvable, but can actually be computed in quite reasonable time.
Article
The problem of determining necessary conditions and sufficient conditions for a relative minimum of a function f(x1, x2, ..., xn) in the class of points x = (x1, x2, ..., xn) satisfying the equations gα(x) = 0 (α = 1, 2, ..., m), where the functions f and gα have continuous derivatives of at least the second order, has been satisfactorily treated [1]*. This paper proposes to take up the corresponding problem in the class of points x satisfying the inequalities gα(x) ≥ 0 (α = 1, 2, ..., m), where m may be less than, equal to, or greater than n.
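In modern notation, the first-order conditions developed in this line of work (now known as the Karush-Kuhn-Tucker conditions) can be stated as follows; this restatement is ours, not taken from the abstract:

```latex
% KKT conditions for:  minimize f(x)  subject to  g_\alpha(x) \ge 0.
% At a local minimum x^* (under a constraint qualification) there exist
% multipliers \mu_\alpha such that
\nabla f(x^*) - \sum_{\alpha=1}^{m} \mu_\alpha \nabla g_\alpha(x^*) = 0,
\qquad \mu_\alpha \ge 0,
\qquad \mu_\alpha \, g_\alpha(x^*) = 0 \quad (\alpha = 1, \dots, m).
```

The complementarity condition μα gα(x*) = 0 says each multiplier can be nonzero only when its inequality constraint is active, which is what distinguishes this setting from the classical equality-constrained Lagrange conditions.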
Article
Multi-objective evolutionary algorithms (MOEAs) that use non-dominated sorting and sharing have been criticized mainly for: (1) their O(MN³) computational complexity (where M is the number of objectives and N is the population size); (2) their non-elitism approach; and (3) the need to specify a sharing parameter. In this paper, we suggest a non-dominated sorting-based MOEA, called NSGA-II (Non-dominated Sorting Genetic Algorithm II), which alleviates all of the above three difficulties. Specifically, a fast non-dominated sorting approach with O(MN²) computational complexity is presented. Also, a selection operator is presented that creates a mating pool by combining the parent and offspring populations and selecting the best N solutions (with respect to fitness and spread). Simulation results on difficult test problems show that NSGA-II is able, for most problems, to find a much better spread of solutions and better convergence near the true Pareto-optimal front compared to the Pareto-archived evolution strategy and the strength-Pareto evolutionary algorithm - two other elitist MOEAs that pay special attention to creating a diverse Pareto-optimal front. Moreover, we modify the definition of dominance in order to solve constrained multi-objective problems efficiently. Simulation results of the constrained NSGA-II on a number of test problems, including a five-objective, seven-constraint nonlinear problem, are compared with another constrained multi-objective optimizer, and the much better performance of NSGA-II is observed.
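The fast non-dominated sorting step can be sketched as follows (minimization assumed); a straightforward O(MN²) implementation in the spirit of the paper, not the authors' code:

```python
def fast_nondominated_sort(objs):
    """Fast non-dominated sorting (minimization): partition solution
    indices into fronts.  For each solution p, record the set it dominates
    (S_p) and how many solutions dominate it (n_p); solutions with n_p = 0
    form front 1, and removing a front decrements n_p for the rest."""
    n = len(objs)
    dominated = [[] for _ in range(n)]   # S_p: indices that p dominates
    dom_count = [0] * n                  # n_p: how many dominate p

    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    for p in range(n):
        for q in range(n):
            if dominates(objs[p], objs[q]):
                dominated[p].append(q)
            elif dominates(objs[q], objs[p]):
                dom_count[p] += 1
    fronts = [[p for p in range(n) if dom_count[p] == 0]]
    while fronts[-1]:
        nxt = []
        for p in fronts[-1]:
            for q in dominated[p]:
                dom_count[q] -= 1
                if dom_count[q] == 0:
                    nxt.append(q)
        fronts.append(nxt)
    return fronts[:-1]
```

Each objective comparison costs O(M) and every pair is compared once, giving the O(MN²) bound that replaces the older O(MN³) procedure.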
Article
Contents (excerpt): I. Introduction and Overview: Introduction; Research Definition; Research Goals and Objectives (Goal 1: MOEA Classifications; Goal 2: MOEA Analyses; Goal 3: MOEA Innovations); Research Approach and Scope; Document Organization. II. Multiobjective Optimization and Evolutionary Algorithms: Introduction; MOP Definition and Overview; Pareto Concepts.
Article
In [5], an evolutionary algorithm for detecting continuous Pareto optimal sets was proposed. In this paper we propose a new evolutionary elitist approach combining a non-standard solution representation with an evolutionary optimization technique. The proposed method permits the detection of continuous decision regions. In our approach, an individual (a solution) is either a closed interval or a point. The individuals in the final population give a realistic representation of the Pareto optimal set: each solution in this population corresponds to a decision region of the Pareto optimal set. The proposed technique is elitist and uses a single population, in which the current population contains the non-dominated solutions found so far.
A Comparative Study of Various Clustering Algorithms in Data Mining
  • M Verma
  • M Srivastava
  • N Chack
  • A K Diswar
  • N Gupta
Verma, M., M. Srivastava, N. Chack, A. K. Diswar, and N. Gupta. 2012. "A Comparative Study of Various Clustering Algorithms in Data Mining." International Journal of Engineering Research and Applications 2: 1379–1384.
A New Evolutionary Technique for Detecting Pareto Continuous Regions
  • C Grosan
Grosan, C. 2003. "A New Evolutionary Technique for Detecting Pareto Continuous Regions." In 2003 Genetic and Evolutionary Computation Conference, Workshop Program, July, edited by A. Barry, 304–307.
Approximate KKT Conditions for Variational Inequality Problems
  • G Haeser
  • M L Schuverdt
Haeser, G., and M. L. Schuverdt. 2009. "Approximate KKT Conditions for Variational Inequality Problems." Optimization Online. http://www.optimization-online.org/DB_HTML/2009/10/2415.html.
Hillermeier, C. 2001. Nonlinear Multiobjective Optimization: A Generalized Homotopy Approach. International Series of Numerical Mathematics. Basel, Switzerland: Birkhäuser Verlag.
For AB, BC and DE, the change in the objective function value is caused by the change in the value of x3 alone.
The Use of Reference Objectives in Multiobjective Optimization-Theoretical Implications and Practical Experiences
  • A P Wierzbicki
Wierzbicki, A. P. 1979. "The Use of Reference Objectives in Multiobjective Optimization—Theoretical Implications and Practical Experiences," 1–32.
Manuale di Economia Politica
  • V Pareto
Pareto, V. 1906. Manuale di Economia Politica [Manual of Political Economy]. Milan: Società Editrice Libraria. Translated into English by A. S. Schwier. New York: Macmillan, 1971.
Ratrout, N. T. 2011. "Subtractive Clustering-Based K-Means Technique for Determining Optimum Time-of-Day Breakpoints." Journal of Computing in Civil Engineering 25 (5): 380–387.
Multiobjective Evolutionary Algorithms: Classification, Analyses and New Innovations. PhD diss., Air Force Institute of Technology
  • D A Van Veldhuizen