Article

Multiobjective Evolutionary Algorithms: Classifications, Analyses, and New Innovations

Author: David A. Van Veldhuizen

Abstract

I. Introduction and Overview
  1.1 Introduction
  1.2 Research Definition
  1.3 Research Goals and Objectives
    1.3.1 Goal 1: MOEA Classifications
    1.3.2 Goal 2: MOEA Analyses
    1.3.3 Goal 3: MOEA Innovations
  1.4 Research Approach and Scope
  1.5 Document Organization
II. Multiobjective Optimization and Evolutionary Algorithms
  2.1 Introduction
  2.2 MOP Definition and Overview
    2.2.1 Pareto Concepts ...

... These metrics can be broadly categorized into two groups: those exclusively evaluating convergence and those considering both convergence and diversity. Among metrics focused solely on convergence, examples include Generational distance (GD) (Van Veldhuizen, 1999), the M1*-metric (Zitzler, Deb, & Thiele, 2000), Degree of Approximation (DOA) (Dilettoso, Rizzo, & Salerno, 2017), and the ε-family of indicators (Zitzler, Thiele, Laumanns, Fonseca, & Da Fonseca, 2003). However, it is worth noting that many of these metrics come with their own set of limitations. ...
... The first group focuses only on evaluating convergence. It encompasses various measures such as Generational distance (GD) (Van Veldhuizen, 1999), the γ-metric, Seven points average distance (Schott, 1995), the M1*-metric, Degree of Approximation (DOA) (Dilettoso et al., 2017), the ε-family of indicators (Zitzler et al., 2003), and Maximum Pareto front error (Van Veldhuizen, 1999). The second group concentrates exclusively on evaluating diversity. ...
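Of the convergence indicators named in these excerpts, the ε-family of Zitzler et al. (2003) is perhaps the easiest to state: the additive ε-indicator is the smallest amount by which one front must be shifted so that every point of the other is weakly dominated. A minimal sketch, assuming minimization and fronts given as plain lists of objective vectors (the function name and data are illustrative, not any cited paper's code):

```python
# Minimal sketch of the additive epsilon-indicator (Zitzler et al., 2003),
# assuming minimization and fronts as lists of objective-value tuples.
def additive_epsilon(A, B):
    """Smallest eps such that every b in B is weakly dominated by
    some a in A after shifting A by eps in every objective."""
    return max(
        min(max(a_i - b_i for a_i, b_i in zip(a, b)) for a in A)
        for b in B
    )

front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
reference = [(0.5, 3.5), (1.5, 1.5), (3.5, 0.5)]
print(additive_epsilon(front, reference))  # 0.5: shift needed to cover the reference
```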
Article
With the widespread application of Evolutionary Algorithms (EAs), their performance needs to be evaluated using more than the usual performance metrics. In the EA literature, various metrics assess the convergence ability of these algorithms. However, many of them require prior knowledge of the Pareto-optimal front. Recently, two Karush–Kuhn–Tucker Proximity Metrics (KKTPMs) have been introduced to measure convergence without needing prior knowledge. One relies on the Augmented Achievement Scalarization Function (AASF) method (AASF-KKTPM), and the other on Benson’s method (B-KKTPM). However, both require specific parameters and reference points, making them computationally expensive. In this paper, we introduce a novel version of KKTPM applicable to single-, multi-, and many-objective optimization problems, utilizing the Penalty-based Boundary Intersection (PBI) method (PBI-KKTPM). Additionally, we introduce an approximate approach to reduce the computational burden of solving PBI-KKTPM optimization problems. Through extensive computational experiments across 23 case studies, our proposed metric demonstrates a significant reduction in computational cost, ranging from 20.68% to 60.03% compared to the computational overhead associated with the AASF-KKTPM metric, and from 16.48% to 61.15% compared to the computational overhead associated with the B-KKTPM metric. Noteworthy features of the proposed metric include its independence from knowledge of the true Pareto-optimal front and its applicability as a termination criterion for EAs. Another feature of the proposed metric is its ability to deal with black box problems very efficiently.
... Convergence indicators quantify the closeness of the resulting Pareto front to the true Pareto front. Generational distance (GD) [11] and inverted GD (IGD) [12] are two well-known metrics that belong to this category. Distribution and spread indicators measure the distribution and the extent of spacing among nondominated solutions. ...
... Generational Distance (GD) [11]: GD measures the average minimum distance between each obtained objective vector from the set S and the closest objective vector on the representative Pareto front, P , which is calculated as follows: ...
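A minimal sketch of that GD computation, assuming minimization, Euclidean distance, and the common p = 2 form of Van Veldhuizen's definition (S and P are plain lists of objective vectors):

```python
import math

# Sketch of Generational Distance (Van Veldhuizen, 1999): for each obtained
# vector in S, take the distance to its nearest neighbour on the representative
# Pareto front P; aggregate with the p-norm and divide by |S| (p = 2 variant).
def generational_distance(S, P, p=2):
    dists = [
        min(math.dist(s, ref) for ref in P)  # nearest reference point
        for s in S
    ]
    return (sum(d ** p for d in dists) ** (1.0 / p)) / len(S)

S = [(1.1, 2.1), (2.0, 1.2)]
P = [(1.0, 2.0), (2.0, 1.0)]
print(generational_distance(S, P))  # small value: S lies close to P
```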
Preprint
Full-text available
As the interest in multi- and many-objective optimization algorithms grows, the performance comparison of these algorithms becomes increasingly important. A large number of performance indicators for multi-objective optimization algorithms have been introduced, each of which evaluates these algorithms based on a certain aspect. Therefore, assessing the quality of multi-objective results using multiple indicators is essential to guarantee that the evaluation considers all quality perspectives. This paper proposes a novel multi-metric comparison method to rank the performance of multi-/ many-objective optimization algorithms based on a set of performance indicators. We utilize the Pareto optimality concept (i.e., non-dominated sorting algorithm) to create the rank levels of algorithms by simultaneously considering multiple performance indicators as criteria/objectives. As a result, four different techniques are proposed to rank algorithms based on their contribution at each Pareto level. This method allows researchers to utilize a set of existing/newly developed performance metrics to adequately assess/rank multi-/many-objective algorithms. The proposed methods are scalable and can accommodate in its comprehensive scheme any newly introduced metric. The method was applied to rank 10 competing algorithms in the 2018 CEC competition solving 15 many-objective test problems. The Pareto-optimal ranking was conducted based on 10 well-known multi-objective performance indicators and the results were compared to the final ranks reported by the competition, which were based on the inverted generational distance (IGD) and hypervolume indicator (HV) measures. The techniques suggested in this paper have broad applications in science and engineering, particularly in areas where multiple metrics are used for comparisons. Examples include machine learning and data mining.
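The core of the proposed ranking is ordinary non-dominated sorting applied to per-indicator scores rather than objective values. A rough illustration of that idea, assuming every indicator has been oriented so that smaller is better (algorithm names and scores are hypothetical, and this omits the paper's four per-level contribution techniques):

```python
# Illustrative sketch: rank algorithms into Pareto levels using their
# indicator scores as "objectives" (all oriented so smaller = better).
def dominates(u, v):
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_rank_levels(scores):
    """scores: {algorithm_name: (indicator_1, indicator_2, ...)}"""
    remaining, levels = dict(scores), []
    while remaining:
        front = [name for name, u in remaining.items()
                 if not any(dominates(v, u) for other, v in remaining.items()
                            if other != name)]
        levels.append(front)
        for name in front:
            del remaining[name]
    return levels

# Hypothetical GD and negated HV scores for three algorithms:
print(pareto_rank_levels({"A": (0.10, -0.80), "B": (0.05, -0.90), "C": (0.20, -0.70)}))
# [['B'], ['A'], ['C']]  -- B dominates A, which dominates C
```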
... Evaluating the performance of a resolution algorithm is an important step in the solution process; in our case, we chose three performance metrics: the spacing metric (SM) and hole relative size (HRS) for spacing studies, and the convergence metric (RP) for progression and convergence results [18]. ...
... HRS metric: The use of the SM metric (which calculates an average error relative to an optimal spacing) may obscure significant gaps in the results. To address this issue, a new metric called HRS (hole relative size) was introduced in [18]. The HRS metric enables us to measure the magnitude of the largest gap in the distribution of points on the trade-off surface. ...
Article
Full-text available
In this paper, we propose an algorithm called the Directional Exploration Genetic Algorithm (DEGA) to optimize a function Phi over the efficient set of a multi-objective integer linear programming problem (MOILP). The DEGA algorithm belongs to the family of evolutionary algorithms; it operates on the decision space by choosing the fastest-improving directions, i.e., those that improve both the objective functions and the function Phi. Two variants of this algorithm and a basic version of the genetic algorithm (BVGA) are implemented in Python. Several benchmarks are carried out to evaluate the algorithm's performance, and interesting results are obtained and discussed.
... If the value of GD is zero, all identified solutions are in the Pareto-optimal set. Lower values of GD indicate good performance, whereas higher values indicate how far the solutions are from the Pareto front (Van Veldhuizen, 1999). The spacing (SP) criterion proposed by Schott (1995) was used to measure the distribution of solutions in terms of the variance of distances between adjacent solutions, defined as follows: SP = ...
... The fact that the SP value in the SM-2 solution is higher than in the SM-1 solution can be interpreted as the result of noisy data. As SP can take a low or high value regardless of whether the solutions are in the Pareto-optimal set (Van Veldhuizen, 1999), this criterion can be misleading in determining the quality of the solution. Nevertheless, the values of SP, which are close to zero, indicate that the distributions of the solutions are approximately smooth and nearly uniform. ...
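For reference, Schott's spacing is usually computed as the standard deviation of each solution's distance to its nearest neighbour in objective space, so a value near zero means near-uniform spacing. A minimal sketch (Schott's original uses the L1 distance between objective vectors):

```python
import math

# Sketch of Schott's (1995) spacing metric: the standard deviation of each
# solution's L1 distance to its nearest neighbour in objective space.
def spacing(S):
    d = [
        min(sum(abs(a - b) for a, b in zip(s, t))
            for j, t in enumerate(S) if j != i)
        for i, s in enumerate(S)
    ]
    mean_d = sum(d) / len(d)
    return math.sqrt(sum((mean_d - di) ** 2 for di in d) / (len(d) - 1))

# A perfectly evenly spaced front has SP = 0:
print(spacing([(0.0, 2.0), (1.0, 1.0), (2.0, 0.0)]))  # 0.0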
Article
Particle swarm optimization, one of the modern global optimization methods, is attracting widespread interest because it overcomes the difficulties of conventional inversion techniques, such as trapping at a local minimum and/or initial model dependence. The main characteristic of particle swarm optimization is the large search space of parameters, which in a sense allows the exploration of the entire objective function space if the input parameters are properly chosen. However, in the case of a high-dimensional model space, the numerical instability of the solution may increase and lead to unrealistic models and misinterpretations due to the sampling problem of particle swarm optimization. Therefore, smoothness-constrained regularization techniques applied to the objective function, or model reduction techniques, are required to stabilize the solution. However, weighting and combining objective function terms is partly a subjective process, as the regularization parameter is generally chosen based on some criterion of how the smoothing constraints affect the data misfits. This means that it cannot be completely predefined but needs to be adjusted during the inversion process, which begins with the response of an initial model. In this paper, a new modelling approach is proposed to obtain a smoothness-constrained model from magnetotelluric data utilizing multi-objective particle swarm optimization based on the Pareto optimality approach, without using a regularization parameter or combining several objective function terms. The presented approach was verified on synthetic models and in an application with a field data set from the Çanakkale–Tuzla geothermal field in Turkey. Findings from these analyses confirm the usefulness of the method as a new approach for all constrained inversions of geophysical data without the need to combine objective function terms weighted by a regularization parameter.
... Many multiobjective problems have been employed mainly to compare different approximations of Pareto fronts. 5,[35][36][37][38][39] Four typical multi-objective canonical problems with three objectives are herein selected from the literature: dtlz1, dtlz2, takami, viennet4. These problems are defined in Appendix A and they include approximation sets with different curvature: concave, convex, and linear. ...
... 3. Test Problem takami. Taken from Van Veldhuizen's work, 38 Appendix B. ...
Conference Paper
Full-text available
This paper proposes a methodology for the direct quadrature of hypervolume-based expected improvement when objective spaces have more than two dimensions. New terms with respect to the hypervolume-based expected improvement for the two-objective case are explained and derived. When adaptively sampling computationally intensive multi-objective domains, the proposed methodology represents an alternative to the inaccurate and resource-consuming current state-of-the-art method, Monte Carlo integration, for the quadrature calculation of the hypervolume-based expected improvement. The methodology is first compared with the current state-of-the-art one on typical multi-objective canonical problems. This comparison makes it possible to determine the conditions under which each methodology is more competitive when adaptively sampling multi-objective domains of computationally intensive functions. Next, a practical design space of an engineering system is adaptively sampled using the proposed methodology for the quadrature.
... This dataset, representing the design space, aims to obtain a genuine Pareto-optimal set for reference in evaluating the quality of exploration results. To evaluate these results, generational distance (GD) [24] is utilized as a critical metric, quantifying the discrepancy between the authentic Pareto frontier and the Pareto frontier ascertained by our framework. As illustrated in Figure 6, the Pareto frontiers derived through our methodology closely align with the genuine Pareto frontiers, evidenced by a GD value of 0.143061, indicating the framework's proficiency in approximating the Pareto-optimal set with considerable accuracy. ...
Article
Full-text available
This study explores the potential of Field-Programmable Gate Arrays (FPGAs) within the realm of cryogenic computing, which promises enhanced performance and power efficiency by reducing leakage power and wire resistance at low temperatures. Prior research has mainly adapted commercial FPGAs for cryogenic temperatures without fully exploiting the technology's benefits, necessitating significant design efforts for each application scenario. By characterizing FPGA performance in cryogenic conditions and examining the influence of architectural parameters, we propose a Bayesian optimization-based framework for systematic FPGA architecture exploration to identify FPGA architectures that are optimally suited for cryogenic applications. The architectures we developed, aimed at operating efficiently at 77K, significantly outperform conventional FPGAs designed for room-temperature conditions in performance and power consumption.
... To quantify the performance of the implemented algorithms, two metrics were utilized: Generational Distance (GD) and Convergence Rate. The GD performance indicator quantifies the distance between solutions and the reference points [48]. Let us define the points produced by the algorithm as the objective vector set A = {a1, a2, ..., an} and the reference points as Z = {z1, z2, ..., zn}. ...
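The snippet truncates before the formula; in its commonly cited form (with p = 2), the definition over the obtained set A and reference set Z reads:

```latex
\mathrm{GD}(A, Z) \;=\; \frac{1}{|A|}\left(\sum_{i=1}^{|A|} d_i^{\,p}\right)^{1/p},
\qquad d_i \;=\; \min_{z \in Z}\,\lVert a_i - z \rVert_2,\quad p = 2 .
```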
Article
Full-text available
Determination of optimum well locations and operational settings for existing and new wells is crucial for maximizing production in field development. These optimum conditions depend on geological and petrophysical factors, fluid flow regimes, and economic variables. However, conducting numerous simulations for various parameters can be time-consuming and costly. Also, due to the high dimension of the space of possible solutions, there is still no general approach to this problem. The application of search algorithms as a general approach to solving such problems has received much attention in recent years. In this study, the efficiency and reliability of the genetic algorithm, particle swarm optimization, and in particular a newly developed algorithm were analyzed and compared. The novelty of this work is the integrated algorithm, which improves search performance by leveraging the memorizing characteristics of the particle swarm optimization algorithm to enhance genetic algorithm efficiency. In traditional genetic algorithms, solutions lacking adequate qualifications are deleted from the algorithmic process; the new algorithm, however, gives these solutions additional opportunities to prove themselves by acquiring new velocities from particle swarm optimization. The results indicate that while the genetic algorithm and particle swarm optimization do not guarantee optimal outcomes, the newly developed algorithm outperforms both methods. This performance was tested across various scenarios focused on well pattern optimization, highlighting its innovative contribution to field development.
... Both metrics reflect the algorithm's convergence and diversity. Additionally, generational distance (GD) [62] is used to measure convergence performance, while the spacing (S) [63] metric is used to assess diversity capability. ...
Article
Full-text available
Many-objective optimization problems involve numerous objective functions, leading to larger and more intricate Pareto fronts. Conventional evolutionary algorithms struggle to sustain diversity as the number of objectives increases. The strategic distribution of ideal reference vectors across the objective space has partially mitigated this challenge. However, traditional selection methods based on Pareto dominance encounter reduced selection pressure in high-dimensional scenarios, often resulting in non-dominated solution sets with significant diversity but prone to local optima. To address these issues, this study explores a reference vector-guided adaptive transfer evolutionary algorithm for solving many-objective optimization problems. This approach aims to maintain diversity through the reference vector mechanism while employing a score-based adaptive migration strategy to preserve individuals with superior convergence. The goal is to break free from local optima and converge toward the global Pareto front. The effectiveness of the proposed algorithm is extensively evaluated against seven other prominent evolutionary algorithms. Across 92 experiments conducted on 22 benchmark problems with up to 15 objectives, the results robustly demonstrate the competitiveness and efficacy of the proposed algorithm compared to its counterparts.
... The test problem sets are regarded as some of the most challenging in the literature, offering a variety of multi-objective search spaces with distinct Pareto optimal fronts, including convex, nonconvex, discontinuous, and multi-modal types. To evaluate the effectiveness, we have used Inverted Generational Distance (IGD) [47], Spacing (SP) [48], Generational Distance (GD) [49] and Hypervolume (HV) [50] as metrics for measuring convergence and performance. ...
Preprint
Full-text available
Stock market prediction is a popular topic in both academia and industry due to its potential to offer significant financial returns, and the transformer model is a cutting-edge tool for this task. However, how to fine-tune the hyperparameters in a reasonable timeframe without excessive computational resources remains a challenge. This paper proposes an innovative algorithm which extends Escaping Bird Search optimization (EBS) into the area of multi-objective optimization, namely MOEBS, for efficient and effective hyperparameter fine-tuning of the Transformer to predict stock prices. Initially, we validate MOEBS by conducting benchmark testing built upon widely used problem sets like ZDT, DTLZ, WFG, etc., together with numerical experiments based on the evaluation of four recognized metrics: GD, Spacing, IGD, and HV. Next, we apply MOEBS to optimal hyperparameter searching for the Transformer, covering the learning rate, number of heads, and L2 regularization coefficient. The Transformer models fine-tuned by MOEBS and other competing algorithms are then applied to training and prediction on Google's historical stock price data set from 2016 to 2021. Comparative experiments demonstrate the excellence of MOEBS-Transformer in terms of RMSE, RPD, and R² metrics, matching and surpassing the performance of state-of-the-art competitors. As a novel and powerful stock price prediction model, MOEBS-Transformer has the potential to become a reliable and accurate predictor of stock prices. The Google stock data set used in this research can be accessed publicly through Kaggle at the following link: https://www.kaggle.com/datasets/varpit94/google-stock-data.
... As mentioned by Ref. 64, GD quantifies the average Euclidean distance between the obtained frontier and the actual Pareto front. A smaller GD value indicates a closer match to the optimal efficient frontier, signifying a ...
Article
Full-text available
Financial Portfolio Optimization Problem (FPOP) is a cornerstone in quantitative investing and financial engineering, focusing on optimizing assets allocation to balance risk and expected return, a concept evolving since Harry Markowitz’s 1952 Mean-Variance model. This paper introduces a novel meta-heuristic approach based on the Black Widow Algorithm for Portfolio Optimization (BWAPO) to solve the FPOP. The new method addresses three versions of the portfolio optimization problems: the unconstrained version, the equality cardinality-constrained version, and the inequality cardinality-constrained version. New features are introduced for the BWAPO to adapt better to the problem, including (1) mating attraction and (2) differential evolution mutation strategy. The proposed BWAPO is evaluated against other metaheuristic approaches used in portfolio optimization from literature, and its performance demonstrates its effectiveness through comparative studies on benchmark datasets using multiple performance metrics, particularly in the unconstrained Mean-Variance portfolio optimization version. Additionally, when encountering cardinality constraint, the proposed approach yields competitive results, especially noticeable with smaller datasets. This leads to a focused examination of the outcomes arising from equality versus inequality cardinality constraints, intending to determine which constraint type is more effective in producing portfolios with higher returns. The paper also presents a comprehensive mathematical model that integrates real-world constraints such as transaction costs, transaction lots, and a dollar-denominated budget, in addition to cardinality and bounding constraints. The model assesses both equality/inequality cardinality constraint versions of the problem, revealing that the inequality constraint tends to offer a wider range of feasible solutions with increased return potential.
... These choices enable a rigorous quantification of how far results using the FDC approximation are from those using the actual flow record. Hypervolume is complemented by generational distance (Van Veldhuizen, 1999), which calculates the average Euclidean distance between the solutions of the approximation set and the nearest member of the reference set. This metric primarily assesses convergence, yet it does not offer information regarding the diversity of the solution set (Blank and Deb, 2020). ...
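In two objectives, the hypervolume mentioned above has an exact closed form: sort the non-dominated points by the first objective and sum the rectangles they cut off below a reference point. A minimal sketch for minimization (the reference point must be dominated by every front point; names are illustrative):

```python
# Sketch of the 2-D hypervolume indicator for a minimization problem:
# sum the rectangular slabs between consecutive non-dominated points
# and the reference point. Assumes `front` is mutually non-dominated.
def hypervolume_2d(front, ref):
    pts = sorted(front)           # ascending in f1, so f2 descends along the front
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

print(hypervolume_2d([(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)], ref=(4.0, 4.0)))  # 6.0
```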
... The GD metric quantifies the average proximity of solutions generated by the MOEA to the nearest solution on the Pareto front (P) [17]. Suppose the solutions identified by the MOEA are represented by the objective vector set A = {a1, a2, ..., a|A|}, and the reference point set is denoted by Z = {z1, z2, ..., z|Z|}. ...
... Generational distance and inverted generational distance We evaluated the quality of the Pareto front in capturing the shape of the ground truth Pareto front by measuring how much the predicted Pareto front converges to the ground truth Pareto front by calculating the generational distance (GD) [45] and how much the predicted Pareto front covers the ground truth Pareto front by calculating the inverted generational distance (IGD) [11]. GD and IGD are standard measures used in evolutionary multi-objective optimization to evaluate the solutions found by the evolutionary algorithms. ...
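IGD reverses the direction of GD: it averages, over the ground-truth front, each reference point's distance to the nearest predicted point, so a front that fails to cover part of the truth is penalized. A minimal sketch under the same assumptions as a plain GD computation (minimization, Euclidean distance):

```python
import math

# Sketch of Inverted Generational Distance: average distance from each
# reference (ground-truth) point to the nearest obtained point, so gaps
# in coverage of the true front raise the score, unlike plain GD.
def inverted_generational_distance(S, P):
    return sum(min(math.dist(p, s) for s in S) for p in P) / len(P)

S = [(1.0, 2.0)]                      # covers only one end of the front
P = [(1.0, 2.0), (2.0, 1.0)]
print(inverted_generational_distance(S, P))  # ~0.707: the uncovered point counts
```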
Preprint
Full-text available
Model merging has emerged as an effective approach to combine multiple single-task models, fine-tuned from the same pre-trained model, into a multitask model. This process typically involves computing a weighted average of the model parameters without any additional training. Existing model-merging methods focus on enhancing average task accuracy. However, interference and conflicts between the objectives of different tasks can lead to trade-offs during model merging. In real-world applications, a set of solutions with various trade-offs can be more informative, helping practitioners make decisions based on diverse preferences. In this paper, we introduce a novel low-compute algorithm, Model Merging with Amortized Pareto Front (MAP). MAP identifies a Pareto set of scaling coefficients for merging multiple models to reflect the trade-offs. The core component of MAP is approximating the evaluation metrics of the various tasks using a quadratic approximation surrogate model derived from a pre-selected set of scaling coefficients, enabling amortized inference. Experimental results on vision and natural language processing tasks show that MAP can accurately identify the Pareto front. To further reduce the required computation of MAP, we propose (1) a Bayesian adaptive sampling algorithm and (2) a nested merging scheme with multiple stages.
... To capture the performance from the multi-objective optimization perspective, we will use five metrics, purity [5], generational distance (GD) [77], spread (Γ, ∆) [14] and spacing (SP) [68] to compare the performance of different solvers. The purity metric is computed by the number |Y N ∩ Y P |/|Y N |, where Y N is the Pareto front approximation obtained by the solver and Y P is a discrete representation of the real Pareto front. ...
Preprint
Multi-Objective Bi-Level Optimization (MOBLO) addresses nested multi-objective optimization problems common in a range of applications. However, its multi-objective and hierarchical bilevel nature makes it notably complex. Gradient-based MOBLO algorithms have recently grown in popularity, as they effectively solve crucial machine learning problems like meta-learning, neural architecture search, and reinforcement learning. Unfortunately, these algorithms depend on solving a sequence of approximation subproblems with high accuracy, resulting in adverse time and memory complexity that lowers their numerical efficiency. To address this issue, we propose a gradient-based algorithm for MOBLO, called gMOBA, which has fewer hyperparameters to tune, making it both simple and efficient. Additionally, we demonstrate the theoretical validity by accomplishing the desirable Pareto stationarity. Numerical experiments confirm the practical efficiency of the proposed method and verify the theoretical results. To accelerate the convergence of gMOBA, we introduce a beneficial L2O neural network (called L2O-gMOBA) implemented as the initialization phase of our gMOBA algorithm. Comparative results of numerical experiments are presented to illustrate the performance of L2O-gMOBA.
... This research also incorporates the Generational Distance (GD) metric [81] to assess algorithm convergence, where lower GD values signify stronger convergence, as depicted in equation (23). ...
Article
Full-text available
In the past few decades, many multi-objective evolutionary algorithms (MOEAs) have been proposed, often emphasizing a single crossover operator, which has a significant impact on an algorithm's performance. This paper proposes a novel MOEA, based on the MOEA/D framework and employing Q-learning for adaptive operator selection (QLMOEA/D-AOS). In every iteration, Q-learning is used to dynamically choose an operator among five crossover operators. To obtain a better distribution of solutions in multi-objective optimization problems with irregular PFs, a new approach to weight vector initialization is proposed. Additionally, to enhance population diversity, a reward calculation method based on two metrics, Spacing and PD, is proposed. Finally, the proposed algorithm is validated for different numbers of objectives, ranging from two to five, on multi-/many-objective optimization problems. The experimental results demonstrate the significant advantages of the proposed algorithm compared to state-of-the-art MOEAs across multiple test cases.
... The performance indicators integrated into the proposed framework include generational distance (GD), inverted generational distance (IGD) [52], IGD+ [53], and hypervolume (HV) [54]. Moreover, performance indicators for DMOPs are also developed, which are mean IGD (MIGD) [55] and mean HV (MHV) [51]. ...
Article
Full-text available
Dynamic multi-objective big data optimization problems (DMBDOPs) are challenging because of the difficulty of dealing with large-scale decision variables and continuous problem changes. In contrast to classical multi-objective optimization problems, DMBDOPs have not yet been intensively explored by researchers in the optimization field. At the same time, there is a lack of a software framework that provides algorithmic examples for solving DMBDOPs and categorizes benchmarks for relevant studies. This paper presents a metaheuristic software framework for DMBDOPs to remedy these issues. The proposed framework has a lightweight architecture and a decoupled design between modules, ensuring that the framework is easy to use and flexible enough to be extended and modified. Specifically, the framework currently integrates four basic dynamic metaheuristic algorithms, eight test suites of different types of optimization problems, as well as performance indicators and data visualization tools. In addition, we have proposed an experience reuse method, speeding up the algorithm's convergence. Moreover, we have implemented parallel computing with Apache Spark to enhance computing efficiency. In the experiments, algorithms integrated into the framework are tested on the test suites for DMBDOPs on an Apache Hadoop cluster with three nodes. The experience reuse method is compared to two restart strategies for dynamic metaheuristics.
... In addition, the proportion of hv to uhv (hv uhv n) is calculated to compare the volume of the objective space covered by the two sets. The last one is the distance between the PF and the unconstrained PF (GD cpo upo), computed using the generational distance metric [35] in order to approximate the location of the PF relative to the unconstrained one. Moreover, the average number of fronts (nfronts) in the neighborhood and its first autocorrelation coefficient have been calculated as measures of the evolvability and ruggedness of the landscape. ...
Preprint
Full-text available
Multi-objective optimization problems with constraints (CMOPs) are generally considered more challenging than those without constraints. This can in part be attributed to the infeasible regions created by the constraint functions, and/or the interaction between constraints and objectives. In this paper, we explore the relationship between the performance of constrained multi-objective evolutionary algorithms (CMOEAs) and CMOP instance characteristics using Instance Space Analysis (ISA). To do this, we extend recent work focused on the use of Landscape Analysis features to characterise CMOPs. Specifically, we scrutinise the multi-objective landscape and introduce new features to describe the multi-objective-violation landscape, formed by the interaction between constraint violation and multi-objective fitness. A detailed evaluation of problem-algorithm footprints spanning six CMOP benchmark suites and fifteen CMOEAs illustrates that ISA can effectively capture the strengths and weaknesses of the CMOEAs. We conclude that two key characteristics, the isolation of the non-dominated set and the correlation between constraint and objective evolvability, have the greatest impact on algorithm performance. However, the current benchmark problems do not provide enough diversity to fully reveal the efficacy of the CMOEAs evaluated.
... To capture the performance from the multi-objective optimization perspective, we will use five metrics, purity [5], generational distance (GD) [77], spread (Γ, Δ) [14] and spacing (SP) [68] to compare the performance of different solvers. The purity metric is computed by the number |Y N ∩ Y P |/|Y N |, where Y N is the Pareto front approximation obtained by the solver and Y P is a discrete representation of the real Pareto front. ...
Article
Multi-objective bi-level optimization (MOBLO) addresses nested multi-objective optimization problems common in a range of applications. However, its multi-objective and hierarchical bi-level nature makes it notably complex. Gradient-based MOBLO algorithms have recently grown in popularity, as they effectively solve crucial machine learning problems like meta-learning, neural architecture search, and reinforcement learning. Unfortunately, these algorithms depend on solving a sequence of approximation subproblems with high accuracy, resulting in adverse time and memory complexity that lowers their numerical efficiency. To address this issue, we propose a gradient-based algorithm for MOBLO, called gMOBA, which has fewer hyperparameters to tune, making it both simple and efficient. Additionally, we demonstrate the theoretical validity by accomplishing the desirable Pareto stationarity. Numerical experiments confirm the practical efficiency of the proposed method and verify the theoretical results. To accelerate the convergence of gMOBA, we introduce a beneficial L2O (learning to optimize) neural network (called L2O-gMOBA) implemented as the initialization phase of our gMOBA algorithm. Comparative results of numerical experiments are presented to illustrate the performance of L2O-gMOBA.
... Usually, quantity, accuracy, and distribution are the main quality concerns when designing performance metrics (Zitzler, Deb, and Thiele 2000). Therefore, a quantity metric, |N| (the number of non-dominated solutions found by an algorithm), an accuracy metric, GD (Generational Distance) (Veldhuizen 1999), and a distribution metric, HV (Hypervolume) (Zitzler 1999), are used to measure how close the found solutions are to the theoretical Pareto front and how the solutions are distributed. The indicators GD and HV are calculated using Equations (16) and (17), respectively. ...
... We synthesised separate sets of Pareto-optimal DeepDECS controllers for each of these setups, as well as a fifth set of Pareto-optimal controllers corresponding to the perfect-perception variant of the autonomous system. Finally, we used the following Pareto front quality metrics to compare the five sets of Pareto-optimal controllers and to evaluate their quality: 1) Inverted Generational Distance (IGD) [84], which measures the distance between the analysed Pareto front and a reference front (e.g., the true Pareto front, the best known approximation of the true Pareto front, or an "ideal" Pareto front) by calculating, for each point on the reference front, the distance to the closest point on the analysed Pareto front. The IGD measure for the front is then computed as the mean of these distances. ...
Article
Full-text available
We present DeepDECS, a new method for the synthesis of correct-by-construction software controllers for autonomous systems that use deep neural network (DNN) classifiers for the perception step of their decision-making processes. Despite major advances in deep learning in recent years, providing safety guarantees for these systems remains very challenging. Our controller synthesis method addresses this challenge by integrating DNN verification with the synthesis of verified Markov models. The synthesised models correspond to discrete-event software controllers guaranteed to satisfy the safety, dependability and performance requirements of the autonomous system, and to be Pareto optimal with respect to a set of optimisation objectives. We evaluate the method in simulation by using it to synthesise controllers for mobile-robot collision limitation, and for maintaining driver attentiveness in shared-control autonomous driving.
... [Fig. 3: The Pareto-optimal front of the proposed method compared to other implemented algorithms.] The numbers of populations and repetitions in the algorithms are determined by the quality of the solutions obtained and the execution time. In all optimization algorithms, we set the number of populations and iterations to 100, akin to [10,53]. Two parameters, a and b, are determined by the user in sigma scaling. ...
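The excerpt does not give the sigma-scaling formula in which a and b appear; one common linear form, generalizing the textbook rule ExpVal = 1 + (f - mean)/(2*sigma), recentres raw fitness by the population mean and standard deviation. The sketch below is an assumption for illustration, not the authors' definition:

```python
import statistics

# Hedged sketch of sigma scaling: rescale raw fitness by the population mean
# and standard deviation, with user-chosen parameters a (baseline) and b
# (spread divisor). Illustrative guess; the cited paper's exact formula is
# not shown in the snippet above.
def sigma_scale(fitnesses, a=1.0, b=2.0):
    mu = statistics.mean(fitnesses)
    sigma = statistics.pstdev(fitnesses)
    if sigma == 0:                       # identical fitnesses: uniform pressure
        return [a] * len(fitnesses)
    return [max(a + (f - mu) / (b * sigma), 0.0) for f in fitnesses]

print(sigma_scale([1.0, 2.0, 3.0]))      # moderate spread centred on a
```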
Preprint
Full-text available
In recent years, the number of smart devices and wireless data transmissions has increased worldwide. These emerging applications and services require not only extensive computing capabilities and high battery power but also elevated data transmission rates. Nonetheless, the computing capacity of this equipment is constrained, with significant consequences for the performance and operating costs of services in 5th-generation wireless networks. Recent advances in Fog computing have increased the use of this model to fulfill the above requirements in the IoT context. A new Fog computing network model has been proposed in order to address these issues by providing cloud computing services at the network's edge. In Fog computing, mobile devices are not required to offload all their tasks to remote, central servers. However, since offloaded tasks are exposed to other users, they are vulnerable to malicious attacks and eavesdropping. In this paper, we investigate security-aware resource allocation in device-to-device-based fog computing systems. To enhance task offloading, a novel multi-objective function is proposed to optimize delay and energy savings compared to local computing, as well as security breach costs. To address this problem, a modified version of the NSGA-II algorithm is proposed that employs sigma scaling. According to the results, the defined objective function is successful in optimizing the objectives simultaneously. Using sigma scaling made the NSGA-II algorithm better at spreading out solutions. Moreover, the algorithm's exploration and exploitation capabilities are well controlled.
... And [20] focused on two variations of ant colony optimization (ACO) algorithms, multi-pheromone and multi-colony, for solving a similar route finding problem. Following [20], two metrics, the error ratio (ER) for convergence [21,22] and the spacing metric (SM) for diversity [23,24], are used to compare the performance of the algorithms. Table 3 summarizes the evaluation outcomes. ...
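The error ratio cited here for convergence is, in Van Veldhuizen's formulation, simply the fraction of obtained objective vectors that are not members of the true Pareto front. A minimal sketch using exact membership with a small tolerance (practical uses often relax this further):

```python
# Sketch of the Error Ratio (ER): the fraction of obtained objective vectors
# that do not belong to the true Pareto front (membership tested here up to
# a small per-component tolerance).
def error_ratio(S, true_front, tol=1e-9):
    def on_front(s):
        return any(all(abs(a - b) <= tol for a, b in zip(s, p)) for p in true_front)
    return sum(0 if on_front(s) else 1 for s in S) / len(S)

S = [(1.0, 2.0), (1.5, 1.4)]
P = [(1.0, 2.0), (2.0, 1.0)]
print(error_ratio(S, P))  # 0.5: one of the two points lies off the true front
```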
Article
Full-text available
Route finding is an everyday challenge for urban residents. While many route planner applications exist, they cannot find suitable routes based on user preferences. According to user preferences, routing in a multimode urban transportation network can be considered a multiobjective optimization problem. Different objectives and modes for transportation, along with many routes as decision elements, give rise to the complexity of the problem. This study uses an elitism multiobjective evolutionary algorithm and the Pareto front concept to solve the problem. The data of a simulated multimode network consisting of 150 vertexes and 2600 edges are used to test and evaluate the proposed method. Four transport modes are considered: the metro, bus, taxi, and walking. Also, three minimization objective functions are considered: expense, discomfort, and time. The results show the competence of the algorithm in solving such a complex problem in a short run time. The optimal setting for the algorithm parameters is found by considering the algorithm run time, diversity of solutions, and convergence trend by running sensitivity analyses. A repeatability test is applied using the optimal setting of the algorithm, which shows a high level of repeatability. While NSGA-II (Non-dominated Sorting Genetic Algorithm II) may be a well-established algorithm in the literature, its application in multiobjective route finding in multimode transport networks is unique and novel. The outcomes of the proposed method are compared with existing methods in the literature, proving the better performance of the NSGA-II algorithm.
... The performance of an algorithm can be evaluated using the GD performance indicator, which calculates the distance from a solution to the Pareto front (Van Veldhuizen, 1999). Suppose the set of objective vectors is A = {a1, a2, a3, ...} ...
Article
Full-text available
Manufacturing advanced materials and products with a specific property or combination of properties is often warranted. To achieve that it is crucial to find out the optimum recipe or processing conditions that can generate the ideal combination of these properties. Most of the time, a sufficient number of experiments are needed to generate a Pareto front. However, manufacturing experiments are usually costly and even conducting a single experiment can be a time-consuming process. So, it's critical to determine the optimal location for data collection to gain the most comprehensive understanding of the process. Sequential learning is a promising approach to actively learn from the ongoing experiments, iteratively update the underlying optimization routine, and adapt the data collection process on the go. This paper presents a novel data-driven Bayesian optimization framework that utilizes sequential learning to efficiently optimize complex systems with multiple conflicting objectives. Additionally, this paper proposes a novel metric for evaluating multi-objective data-driven optimization approaches. This metric considers both the quality of the Pareto front and the amount of data used to generate it. The proposed framework is particularly beneficial in practical applications where acquiring data can be expensive and resource intensive. To demonstrate the effectiveness of the proposed algorithm and metric, the algorithm is evaluated on a manufacturing dataset. The results indicate that the proposed algorithm can achieve the actual Pareto front while processing significantly less data. Our data-driven framework can facilitate more efficient manufacturing choices, which not only minimizes resource usage but also promotes reduced energy consumption and thereby aids in pollution prevention.
... As shown in Table 1, typical multi-objective optimization algorithms have been compared with NSGA-II. Four metrics have been applied in evaluating the performance of the different algorithms: the generational distance (GD) [37], spread metric (Spread) [38], average Hausdorff distance (deltaP) [39], and diversity metric (DM) [40]. The results show that NSGA-II performs better in convergence, uniformity, and spread. ...
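For context, the averaged Hausdorff distance deltaP (commonly attributed to Schütze et al.) combines power-mean variants of GD and IGD, so it penalizes both poor convergence and poor coverage; in its standard form:

```latex
\Delta_p(A, Z) \;=\; \max\bigl\{\, \mathrm{GD}_p(A, Z),\; \mathrm{IGD}_p(A, Z) \,\bigr\},
\qquad
\mathrm{GD}_p(A, Z) \;=\; \Bigl(\tfrac{1}{|A|}\textstyle\sum_{i=1}^{|A|} d_i^{\,p}\Bigr)^{1/p},
```

with IGD_p defined symmetrically by measuring from each reference point in Z to its nearest point in A.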
Article
Full-text available
In this work, a new synchronous adjustment framework for the integrated hydrogen network based on a multi-model ensemble method is proposed. Specifically, the contributions of this work can be highlighted as follows. (1) A model for describing the synergistic relationship within the integrated hydrogen network is proposed; compared with models describing isolated units, the intermediate substances between integrated hydrogen sinks and sources are considered, giving better performance in simulating the operating state of the units. (2) On the basis of this model, a concurrent optimization strategy for the production system and the hydrogen network is proposed; the difficulty of synergistically determining the operating state of integrated units is overcome through synchronous adjustment based on multi-objective integrated optimization, yielding a synchronous adjustment framework for the integrated system that avoids the excessive use of hydrogen. (3) The problem of finding the optimal operating state of the units in hydrogen management is solved; the provided model and co-optimization strategy achieve better utilization of the hydrogen capacity, which helps to mitigate the contradiction between hydrogen demand and production yield. Keywords: hydrogen network; continuous catalytic reforming; hydrocracking; multi-objective integrated optimization; automatic machine learning. The hydrogen network plays an important role in maintaining the stable production of the refinery. However, the disconnect between the hydrogen network and the production system has led to a shortfall of hydrogen to meet the product yield and hydrogen demand. Strategies for integrating the two systems need to be carefully investigated. In this work, the integrated hydrogen sources and sinks have been characterized with a synergistic model based on an automatic machine learning method. In addition, a synchronous adjustment framework for the hydrogen network and production system has been achieved based on a multi-objective integrated optimization method. The results show that the balance of hydrogen demand and product yield can be improved: hydrogen purchased was 1.8% lower than with the separated model, and under the same hydrogen budget, the profit can be increased by 3.5%.
... The generation distance (GD) metric measures the average distance between the algorithm's computed optimal solution set and the actual Pareto front [45]. A smaller GD value denotes better convergence of the algorithm, as it brings the final solution set closer to the actual Pareto front (PF). ...
Preprint
Full-text available
When dealing with large-scale multi-objective optimization problems (LSMOPs), increasing the dimensionality of the decision variables tends to give the MOEA/D algorithm poor scalability in the decision space and make it more susceptible to converging toward local optima. In response to this issue, this article proposes an improved large-scale MOEA/D algorithm with multiple strategies (MSMOEA/D). In the MSMOEA/D algorithm, a hybrid initialization strategy based on an autoencoder is introduced into multi-objective optimization to provide a better initial population. Moreover, a neighborhood adjustment strategy based on the aggregation function value is proposed, which dynamically adjusts the neighborhood in accordance with the evolutionary progression of the current population and the degree of change of the aggregation function value, and thus obtains better search capabilities. Furthermore, a mutation selection strategy based on non-dominated sorting is adopted within the optimization process: different subproblems select mutation strategies according to the number of individuals located at the first level of non-dominated sorting, to avoid the population falling into local optima and to enhance the overall performance of the algorithm. Finally, both the MSMOEA/D algorithm and other existing algorithms are evaluated using the LSMOP and DTLZ test problems. The experimental outcomes substantiate the effectiveness of the improved algorithm in solving LSMOPs.
... This problem is an extension of the standard DEB problem 52,53 , because it has a nonlinear objective function and 4-segment Pareto optimal fronts, and its objective space is small, with a theoretical coverage scope of only 2.29695, which makes it more difficult to solve in a noisy environment than Problem 1. Therefore, the standard DEB problem was modified to test the algorithms' performance and noise suppression on different feature testing problems. ...
Preprint
Full-text available
In this paper, for a general type of multi-objective probabilistic optimization problem without any prior noise information, which has an extensive engineering application background and urgently needs to be solved, we propose a small-population immune algorithm with adaptive sampling. In the design, first, we devise a small-population immune algorithm framework inspired by the response mechanism of adaptive immunity. Second, we design an adaptive sampling scheme that adaptively allocates an appropriate number of samples to each sub-objective function of all individuals in the population to estimate the objective function value. Third, based on the objective function estimates, the dominance levels of all individuals in the population and the crowding distances of individuals in each dominance level are determined. Fourth, the clone size, mutation rate, crossover distribution index, and mutation distribution index of an individual are adaptively determined based on the number of iterations, dominance level, and crowding distance. Cloning, crossover, and mutation operators are implemented for each individual, using simulated binary crossover and polynomial mutation to enhance co-evolution and facilitate information sharing and exchange among all individuals. Fifth, based on the dominance level and crowding distance, population update strategies are designed to adaptively update the memory set with high-quality individuals and generate a new generation with good diversity. Finally, on three theoretical problems and two engineering problems, against six representative comparison algorithms, the experimental results show that the proposed algorithm is a competitive optimizer with application potential, featuring few parameters, low sample consumption, strong noise suppression ability, and high search efficiency.
Article
As global energy demand continues to rise, the focus has increasingly shifted toward renewable energy sources. In this context, multi-body Wave Energy Converters (WECs) stand out due to their superior power capture efficiency and enhanced survivability, facilitated by their operational flexibility. Nevertheless, the complexity associated with deploying multi-body WECs in arrays has constrained extensive research in this domain. This study introduces an optimization strategy for the layout design and control of multi-body WECs aimed at efficiently harnessing wave energy. Utilizing Evolutionary Multi-Objective Optimization (EMO) algorithms, this framework optimizes the arrangement and operational parameters of WEC arrays, specifically tailored for the marine conditions off the coast of Oman. The research encompasses two primary optimization phases: layout optimization, which seeks an optimal balance between power production and the separation distance between WEC devices, and operational control optimization, which adjusts WEC configurations according to varying wave conditions. This approach aims to maximize energy capture and ensure sustainable array performance while addressing technical challenges. Simulation results indicate that the optimized configurations not only enhance energy extraction but also significantly reduce hydrodynamic loads, promising improved longevity and efficiency for multi-body WEC systems in extreme marine environments.
Article
Full-text available
Vertical-axis wind turbines are great candidates to enable wind power extraction in urban and off-shore applications. Currently, concerns around turbine efficiency and structural integrity limit their industrial deployment. Flow control can mitigate these concerns. Here, we experimentally demonstrate the potential of individual blade pitching as a control strategy and explain the flow physics that yields the performance enhancement. We perform automated experiments using a scaled-down turbine model coupled to a genetic algorithm optimiser to identify optimal pitching kinematics at on- and off-design operating conditions. We obtain two sets of optimal pitch profiles that achieve a three-fold increase in power coefficient at both operating conditions compared to the non-actuated turbine and a 77% reduction in structure-threatening load fluctuations at off-design conditions. Based on flow field measurements, we uncover how blade pitching manipulates the flow structures to enhance performance. Our results can aid vertical-axis wind turbines increase their much-needed contribution to our energy needs.
Article
When evaluating multi-objective evolutionary algorithms, one is concerned not only with the quality of the solution set but also with the exploration and exploitation capabilities of the algorithm, since these are the factors that ensure the solution set has good convergence and diversity. Maintaining a balance between exploration and exploitation during the algorithm's evolution is a difficult problem, but one that attracts attention in this research field. In this paper, we analyze the relationship between the quality of the solution set and the search efficiency of the algorithm to assess search trends, and we propose a control technique based on the rate of change of solution-set quality measures in order to better maintain the balance between the exploration and exploitation capabilities of the algorithm during evolution. Testing the proposed technique on a multi-objective evolutionary algorithm using differential directions on several benchmark problems yields highly competitive results, demonstrating its ability to improve the search efficiency of the algorithm.
Article
Full-text available
Many real-world optimization problems, particularly engineering ones, involve constraints that make finding a feasible solution challenging. Numerous researchers have investigated this challenge for constrained single- and multi-objective optimization problems. In particular, this work extends the boundary updating (BU) method proposed by Gandomi and Deb (2020) for constrained optimization problems. BU is an implicit constraint handling technique that aims to cut the infeasible search space over iterations to find the feasible region faster. In doing so, the search space is twisted, which can make the optimization problem more challenging. In response, two switching mechanisms are implemented that transform the landscape, along with the variables, back to the original problem when the feasible region is found. To achieve this objective, two thresholds, representing distinct switching methods, are taken into account. In the first approach, the optimization process transitions to a state without the BU approach when constraint violations reach zero. In the second method, the optimization process shifts to a BU-free optimization phase when no further change is observed in the objective space. For validation, benchmark and engineering problems are solved with well-known evolutionary single- and multi-objective optimization algorithms. Herein, the proposed method is benchmarked against approaches with and without BU over the whole search process. The results show that the proposed method can significantly boost the solutions for constrained optimization problems, both in convergence speed and in finding better solutions.
Article
Full-text available
Home healthcare is a product of an aging population, increased awareness of health management, and growing demand for medical services. Home healthcare providers need to route caregivers rationally and efficiently to visit their customers. This paper investigates the green home healthcare routing and scheduling problem with simultaneous consideration of physician-patient satisfaction and sustainability based on prospect theory, optimizing four objectives simultaneously: minimizing total cost and carbon emissions while maximizing customer satisfaction and caregiver satisfaction. Practicalities such as maximum working hours, load balancing, physician-patient skill level matching, customer prioritization, and time windows are also considered. In addition, an improved adaptive reference point-based third-generation non-dominated sorting genetic algorithm is proposed to solve the problem. Finally, numerical experiments at various scales were conducted to verify the effectiveness of the algorithm, and the results show that the algorithm can provide decision makers with a larger number of feasible non-dominated solutions and can effectively solve the home healthcare path planning problem. It is also compared with three algorithms, the second-generation non-dominated sorting genetic algorithm, the third-generation non-dominated sorting genetic algorithm, and the adaptive reference point-based third-generation non-dominated sorting genetic algorithm, to further verify that the proposed improved algorithm performs better in terms of solution speed, distribution of Pareto-optimal solutions, convergence, and diversity across various instances. The problem proposed in this study synthesizes the goals of multiple stakeholders in home healthcare and is more conducive to the sustainable development of the enterprise.
Article
Applying multi-objective evolutionary optimization algorithms to multi-objective optimization problems is a research field that has received attention recently. In the literature of this field, many studies have been carried out to propose multi-objective evolutionary algorithms or to improve published ones. However, balancing the exploitation and exploration capabilities of an algorithm during the evolution process is still challenging. This article proposes an approach to this equilibrium problem based on analyzing the population distribution during the evolutionary process to identify empty regions in which no solutions are selected. Information about the empty regions with the largest area is then combined with the current reference point to create a new reference point that prioritizes choosing solutions in those regions. Experiments on 10 test problems from 2 typical benchmark sets showed that this mechanism increases the diversity of the population, thereby contributing to a balance between the algorithm's capabilities during the evolutionary process and enhancing the algorithm's performance.