Fig 10 - uploaded by Martin Drozdik
Source publication
Many multi-objective evolutionary algorithms rely on the non-dominated sorting procedure to determine the relative quality of individuals with respect to the population. In this paper we propose a new method to decrease the cost of this procedure. Our approach is to determine the non-dominated individuals at the start of the evolutionary algorithm...
Context in source publication
Context 1
... If we want to determine all the items whose attribute lies in a certain interval, it is helpful to have the data sorted by that attribute. We need to search according to all objectives; therefore, we keep the population sorted by each objective, maintaining a sorted doubly linked list per objective. An illustration of these lists is shown in Fig. 10. The upper and lower reference sets can be constructed simply by iterating between the positions of ref and ...
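The excerpt above breaks off mid-sentence, but the core idea (one sorted view of the population per objective, walked from the position of a reference individual) can be illustrated with a small sketch. The following Python snippet is only a rough, assumed illustration of that idea, not the authors' implementation: it uses plain sorted lists instead of the doubly linked lists described in the paper, and the class and method names are invented for this example.

```python
from bisect import insort

class ObjectiveSortedPopulation:
    """One sorted view of the population per objective (plain sorted lists
    here; the paper's doubly linked lists additionally give O(1) deletion)."""

    def __init__(self, num_objectives):
        self.m = num_objectives
        # views[k] holds (objective_k_value, individual_id) pairs, kept sorted
        self.views = [[] for _ in range(num_objectives)]

    def insert(self, ind_id, objectives):
        for k in range(self.m):
            insort(self.views[k], (objectives[k], ind_id))

    def better_than_ref(self, k, ref_value):
        """Individuals whose k-th objective is strictly smaller than ref_value
        (minimization), obtained by walking the sorted view up to ref's position."""
        result = []
        for value, ind_id in self.views[k]:
            if value >= ref_value:
                break
            result.append(ind_id)
        return result

# toy usage
pop = ObjectiveSortedPopulation(num_objectives=2)
pop.insert("a", (1.0, 4.0))
pop.insert("b", (3.0, 2.0))
pop.insert("c", (2.0, 3.0))
print(pop.better_than_ref(0, 2.5))   # ['a', 'c']
```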
Similar publications
Sparse-representation based approaches have been integrated into image fusion methods in the past few years and show great performance in image fusion. Training an informative and compact dictionary is a key step for a sparsity-based image fusion method. However, it is difficult to balance "informative" and "compact". In order to obtain sufficient...
The article considers the geometric construction of the orthogonal isometry of a curved surface. To solve this problem, the authors propose an additional construction of the contour of the surface section in the orthogonal drawing, using the coordinates of the points of this section to construct an axonometric outline of the surface...
In this study, a sexaxial figure of the Earth was created using 6400 randomly distributed control points on the geoid, which was generated by the EGM2008 gravity model. The shape of the sexaxial ellipsoid is defined by its six orthogonal axes. To construct the new geometric figure, eight triaxial ellipsoids were fitted to the Cartesian coordinates...
Studies show that in the cutting process, different parameters have different degrees of importance for performance indicators. Accordingly, it is necessary to define the importance of each parameter in order to study the correlation between parameters and performance indicators accurately. In the present study, the side milling process...
The precision of mechanical equipment is closely associated with the manufacturing precision of its parts. This article mainly aims to realize machine precision on the basis of low manufacturing accuracy of parts, using the homogeneous coordinate transformation and Taguchi theory. Firstly, the homogeneous coordinate transformation is used to establish the tran...
Citations
... The result is a number of differently-sized grids. M-Front has been proposed by Drozdik et al. [15]. A tree-based data structure called ND-Tree is proposed in [16]. ...
Pareto dominance-based multiobjective evolutionary algorithms use non-dominated sorting to rank their solutions. In the last few decades, various approaches have been proposed for non-dominated sorting. However, the running-time analyses of some of these approaches have issues and are imprecise. In this paper, we focus on one such algorithm, namely hierarchical non-dominated sort (HNDS), whose running-time analysis is imprecise, and obtain generic equations that give the number of dominance comparisons in the worst and the best case. Based on the equation for the worst case, we obtain the worst-case running time as well as the scenario in which the worst case occurs. Based on the equation for the best case, we identify a scenario where HNDS performs fewer dominance comparisons than reported in the original paper, making the best-case analysis of the original paper unrigorous. In the end, we present an improved version of HNDS which guarantees the worst-case time complexity claimed by the authors of HNDS, which is O(MN²).
... The simplest data structure for the Pareto archive is a plain, unordered list of solutions with linear time complexity of update. Several methods and related data structures aiming at efficient realization of the Pareto archive update have been proposed, e.g., Quad Tree (Sun and Steuer, 1996; Mostaghim and Teich, 2005; Sun, 2006, 2011; Fieldsend, 2020), MFront II (Drozdík et al., 2015), BSP Tree (Glasmachers, 2017), and ND-Tree (Jaszkiewicz and Lust, 2018). Jaszkiewicz and Lust (2018) reported some complexity results for ND-Tree: ...
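As a concrete baseline for the linear-time update mentioned above, here is a minimal, illustrative Python sketch of an unordered-list Pareto archive (minimization assumed); the function names are invented for this example and do not come from any of the cited implementations.

```python
def dominates(a, b):
    """True if a dominates b (minimization): no worse in every objective,
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Linear-time update of an unordered Pareto archive (list of objective
    vectors). Returns True if the candidate was added."""
    for point in archive:
        if dominates(point, candidate) or point == candidate:
            return False                      # candidate is covered, discard it
    # candidate survives: drop every archived point it dominates
    archive[:] = [p for p in archive if not dominates(candidate, p)]
    archive.append(candidate)
    return True

# toy usage (two objectives, minimization)
arch = []
for pt in [(3, 3), (1, 4), (2, 2), (2, 5)]:
    update_archive(arch, pt)
print(arch)        # [(1, 4), (2, 2)]
```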
... Note that in this experiment we use instances with n = 16 instead of the n = 10 used elsewhere, because with n = 10 the number of solutions was too low to show significant differences between the evaluated methods. The methods used in this experiment are the simple list, ND-Tree (Jaszkiewicz and Lust, 2018), Quad Tree (precisely the Quad Tree 2 algorithm described by Mostaghim and Teich (2005) with the corrections described by Fieldsend (2020)), and MFront II (Drozdík et al., 2015) with the modifications proposed by Jaszkiewicz and Lust (2018). We used the C++ implementations of these methods described by Jaszkiewicz and Lust (2018); however, the implementation of the Quad Tree has been improved both at the technical level and by applying the corrections proposed by Fieldsend (2020). ...
The difficulty of solving a multi-objective optimization problem is impacted by the number of objectives to be optimized. The presence of many objectives typically introduces a number of challenges that affect the choice/design of optimization algorithms. This paper investigates the drivers of these challenges from two angles: (i) the influence of the number of objectives on problem characteristics and (ii) the practical behavior of commonly used procedures and algorithms for coping with many objectives. In addition to reviewing various drivers, the paper makes theoretical contributions by quantifying some drivers and/or verifying these drivers empirically by carrying out experiments on multi-objective combinatorial optimization problems (multi-objective NK-landscapes). We then make use of our theoretical and empirical findings to derive practical recommendations to support algorithm design. Finally, we discuss remaining theoretical gaps and opportunities for future research in the area of multi- and many-objective optimization.
... There are also other approaches (Drozdik et al. 2015; Li et al. 2017; Mishra et al. 2017; Yakupov and Buzdalov 2017) where, instead of performing the complete non-dominated sorting, an offspring solution is inserted into its proper place in the existing sorted set of fronts. This kind of scenario is generally used in steady-state multiobjective evolutionary algorithms (Mishra et al. 2017). ...
Pareto-based multi-objective evolutionary algorithms use non-dominated sorting as an intermediate step. These algorithms are easy to parallelize, as their various steps are independent of each other. Researchers have focused on the parallelization of non-dominated sorting in order to reduce the execution time of these algorithms. In this paper, we focus on one of the initial approaches for non-dominated sorting, also known as the naive approach, proposed by Srinivas et al., and explore the scope of parallelism in this approach. Parallelism is explored in three different ways under the Parallel Random Access Machine, Concurrent Read Exclusive Write (PRAM-CREW) model. The time and space complexities of the three parallel versions are also analyzed. The analysis of parallel algorithms is usually carried out under the assumption that an unbounded number of processors is available; the same assumption has been made in our analysis, and we have obtained the maximum number of processors required by the three parallel versions.
... A new data structure called M-front, composed of a K-d tree and an M-list, is proposed in [22] and used to decrease the computational cost of non-dominated sorting. M-front turns the over-non-domination phenomenon that comes with an increasing number of objectives into an advantage. ...
Non-dominated sorting, used to find Pareto solutions or assign solutions to different fronts, is a key but time-consuming step in multi-objective evolutionary algorithms (MOEAs). The best-case and worst-case time complexities of currently known non-dominated sorting algorithms are O(MN log N) and O(MN²), where M and N denote the number of objectives and the population size, respectively. In this paper, a more efficient SET-based non-dominated sorting algorithm, SETNDS for short, is proposed. The proposed algorithm greatly reduces the number of comparisons while ensuring a shorter running time. In SETNDS, the rank of a solution to be sorted is determined by comparing it only with the highest-ranked solution in its dominating set. The algorithm is compared with six existing non-dominated sorting algorithms (fast non-dominated sorting, the Arena's Principle sort, the deductive sort, the corner sort, the efficient non-dominated sort, and the best order sort) on several kinds of datasets. The results show that the proposed algorithm is feasible and effective and that its computational efficiency outperforms the other existing algorithms.
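For orientation, the sketch below shows the classic fast non-dominated sort, one of the baselines SETNDS is compared against above; it is not SETNDS itself. Minimization is assumed, and the dominance test from the archive sketch earlier on this page is repeated so the snippet is self-contained.

```python
def dominates(a, b):  # minimization, as in the archive sketch above
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_non_dominated_sort(points):
    """Classic fast non-dominated sort: O(MN²) dominance comparisons,
    returns a list of fronts, each a list of indices into `points`."""
    n = len(points)
    dominated_by = [[] for _ in range(n)]   # solutions dominated by p
    dom_count = [0] * n                     # how many solutions dominate p
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if p == q:
                continue
            if dominates(points[p], points[q]):
                dominated_by[p].append(q)
            elif dominates(points[q], points[p]):
                dom_count[p] += 1
        if dom_count[p] == 0:
            fronts[0].append(p)
    i = 0
    while fronts[i]:
        next_front = []
        for p in fronts[i]:
            for q in dominated_by[p]:
                dom_count[q] -= 1
                if dom_count[q] == 0:
                    next_front.append(q)
        fronts.append(next_front)
        i += 1
    return fronts[:-1]   # drop the trailing empty front

print(fast_non_dominated_sort([(3, 3), (1, 4), (2, 2), (2, 5)]))
# [[1, 2], [0, 3]]: (1,4) and (2,2) form the first front
```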
... In the context of evolutionary computation, k-d trees are used in the ENS-NDT algorithm for non-dominated sorting [17], as well as in several algorithms for maintaining the non-dominated set of points [14, 16]. The space complexity of a k-d tree is O(N). ...
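Since k-d trees are only referenced above, a minimal, illustrative build routine may help; it is not the ENS-NDT structure, just a generic k-d tree sketch that creates one node per point, which is where the O(N) space bound comes from. Names are invented for this example.

```python
class KDNode:
    __slots__ = ("point", "left", "right")
    def __init__(self, point, left=None, right=None):
        self.point, self.left, self.right = point, left, right

def build_kdtree(points, depth=0):
    """Build a k-d tree over M-dimensional points, cycling through the
    coordinates as the splitting axis; one node is created per point."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return KDNode(points[mid],
                  build_kdtree(points[:mid], depth + 1),
                  build_kdtree(points[mid + 1:], depth + 1))

tree = build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
print(tree.point)   # (7, 2): the median on the first coordinate becomes the root
```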
Some modern evolutionary multiobjective algorithms have a high computational complexity in their internal data processing. To further complicate matters, researchers often wish to alter some of these procedures, and to do so with little effort.
The problem is even more pronounced for steady-state algorithms, which update the internal information as each single individual is computed. In this paper we explore the applicability of the principles behind the existing framework, called generalized offline orthant search, to the typical problems arising in steady-state evolutionary multiobjective algorithms.
We show that the variety of possible problem formulations is higher than in the offline setting. In particular, we state a problem which cannot be solved in an incremental manner faster than from scratch. We present an efficient algorithm for one of the simplest possible settings, incremental dominance counting, and formulate the set of requirements that enable efficient solution of similar problems. We also present an algorithm to evaluate fitness within the IBEA algorithm and show when it is efficient in practice.
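The abstract above mentions incremental dominance counting; as an assumed point of reference (not the efficient algorithm the paper proposes), the following naive Python sketch maintains, for each stored point, the number of stored points that dominate it, updating the counts with O(NM) comparisons per insertion. Class and method names are invented for this example.

```python
def dominates(a, b):  # minimization
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

class DominanceCounter:
    """Naive incremental dominance counting: count[i] is the number of
    stored points that dominate point i; each insertion compares the new
    point against all stored points."""
    def __init__(self):
        self.points = []
        self.count = []

    def insert(self, p):
        c = 0
        for i, q in enumerate(self.points):
            if dominates(q, p):
                c += 1                 # an existing point dominates the newcomer
            elif dominates(p, q):
                self.count[i] += 1     # the newcomer dominates an existing point
        self.points.append(p)
        self.count.append(c)
        return c

dc = DominanceCounter()
for pt in [(3, 3), (1, 4), (2, 2)]:
    dc.insert(pt)
print(dc.count)   # [1, 0, 0]: only (3, 3) is dominated (by (2, 2))
```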
... Although they are widely adopted in other fields of research, it is unusual to find MOEAs using K-D trees. Recent publications include M-Front [13] and ND-Tree [23]. M-Front computes which individuals are non-dominated at the beginning of the MOEA and then updates this knowledge each time an individual changes. ...
... if T is too big then: rebuild T with the active solutions; define POP as the active solutions in T; return POP. In this case, U(0, 1) (line 5) denotes a uniformly distributed random number in the interval [0, 1], while the variation operator (line 9) is very similar to that of the Differential Evolution algorithm [39]. The difference, in this case, is the value ... Table 1: Parameter settings of all the algorithms compared. ...
This paper presents KDT-MOEA, a framework that takes advantage of a special kind of binary search tree, named k-d tree, to solve multiobjective optimization problems (MOPs). Our main goal is to explore the capabilities of this data structure to define neighborhood structures either in decision variables space or in objective space, as well as by switching between them at any time. The KDT-MOEA framework performance is compared with five state-of-the-art algorithms on the DTLZ, WFG and LZ09 benchmarking problems with up to 15 objectives. Statistical tests demonstrate that KDT-MOEA was able to outperform the compared methods on most problems. In addition, in order to evaluate the flexibility and the potential of the proposed operators, extended versions of the compared algorithms are also presented. Empirical results pointed out that the new versions were superior to all five original MOEAs, indicating that the proposed operators can also be easily incorporated into existing MOEAs with different strategies to achieve better results.
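KDT-MOEA's operators are not reproduced here; the snippet below is only an assumed illustration of the switching idea described in the abstract, using SciPy's cKDTree (SciPy and NumPy are assumed to be available): one tree over decision vectors and one over objective vectors give two different neighborhoods for the same individual.

```python
import numpy as np
from scipy.spatial import cKDTree   # assumes SciPy is available

# Two k-d trees over the same population: one in decision-variable space,
# one in objective space, so a neighborhood can be defined in either space.
rng = np.random.default_rng(0)
X = rng.random((100, 10))        # decision vectors (100 individuals, 10 variables)
F = rng.random((100, 3))         # objective vectors (3 objectives)

decision_tree = cKDTree(X)
objective_tree = cKDTree(F)

i = 7                            # pick some individual
_, dec_nbrs = decision_tree.query(X[i], k=5)   # 5 nearest in decision space
_, obj_nbrs = objective_tree.query(F[i], k=5)  # 5 nearest in objective space
print(dec_nbrs, obj_nbrs)        # generally different index sets
# (the query point itself is returned first in each list, at distance 0)
```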
... Divide-and-conquer based algorithms are easy to parallelize (Drozdik et al. 2015) and there have been several approaches (Jensen 2003; Fang et al. 2008; Fortin et al. 2013; Buzdalov and Shalyto 2014; Mishra et al. 2016, 2019) based on a divide-and-conquer strategy for non-dominated sorting. Some of these approaches (Jensen 2003; Fortin et al. 2013; Buzdalov and Shalyto 2014) are based on the algorithm of Kung et al. (1975). ...
... The time complexity of this approach is O(N log^(M-1) N) and the space complexity is O(MN). This approach is based on a divide-and-conquer strategy, so it also has the parallelism property (Drozdik et al. 2015). However, this approach is not applicable in cases where two solutions share the same value for a particular objective. ...
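The full O(N log^(M-1) N) divide-and-conquer algorithm is not reproduced here; the sketch below only shows the classic two-objective base case in the spirit of Kung et al. (1975): sort by the first objective and sweep while tracking the best second objective seen so far. Minimization is assumed and the function name is illustrative.

```python
def non_dominated_2d(points):
    """Two-objective base case of divide-and-conquer front extraction:
    sort by the first objective (ties broken by the second), then keep a
    point only if its second objective beats the best seen so far.
    O(N log N) overall, minimization assumed."""
    order = sorted(range(len(points)), key=lambda i: points[i])
    front, best_f2 = [], float("inf")
    for i in order:
        if points[i][1] < best_f2:
            front.append(i)
            best_f2 = points[i][1]
    return front

print(non_dominated_2d([(3, 3), (1, 4), (2, 2), (2, 5)]))   # [1, 2]
```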
Non-dominated sorting is a crucial component of Pareto-based multi- and many-objective evolutionary algorithms. As the number of objectives increases, the execution time of a multi-objective evolutionary algorithm increases, too. Since multi-objective evolutionary algorithms normally have low data dependency, researchers have increasingly adopted parallel programming techniques to reduce their execution time. Evidently, it is also desirable to parallelize non-dominated sorting. There are some recent proposals which focus on the parallelization of non-dominated sorting, with a particular emphasis on the very well-known fast non-dominated sorting approach. In this paper, however, we explore the scope of parallelism in an approach called divide-and-conquer based non-dominated sorting (DCNS), which we recently introduced. This paper explores the parallelism from a theoretical point of view. The parallelization of the DCNS approach has been studied under the PRAM-CREW (Parallel Random Access Machine, Concurrent Read Exclusive Write) model. The analysis of parallel algorithms is usually carried out under the assumption that an unbounded number of processors is available, so our analysis makes the same assumption. The time and space complexities of the parallel version of the DCNS approach are obtained in different scenarios. The time complexity of the parallel version of the DCNS approach in different scenarios is proved to be ... We have also obtained the maximum number of processors which can be required by the parallel version of the DCNS approach. A comparison of the parallel version of the DCNS approach with some other approaches is also performed.
... In terms of computational costs, the size of the population can indeed increase the cost. Recent improvements in the DE algorithm led to a sub-quadratic cost in the number of individuals [31]. However, there is no interest here in enlarging the population. ...
Optimization of the hysteretic damping capacity of carbon nanotube (CNT) nanocomposites is carried out via a differential evolution algorithm coupled with an ad hoc finite element implementation of a nonlinear 3D mesoscale theory. Such theory describes the hysteresis due to the shear stick-slip between the nanotubes and the polymer chains of the hosting matrix. The amount of energy dissipated through the CNT-matrix stick-slip depends on the nanocomposite constitutive parameters such as the elastic mismatch, the nanofiller content/distribution, and the CNT-matrix interfacial shear strength. The optimization problem seeks to determine the set of material parameters that can give rise to the best damping capacity of the nanocomposite. The objective function is defined as the area below the damping ratio curve versus the strain amplitude over the range of strains of interest. The results confirm that genetic-type nanocomposite damping optimization, making use of a sound mechanical model of the material response, can be an effective design method that provides the right mix of phases and overcomes a costly trial-and-error approach to manufacturing and testing.
... In such methods, computation of the Pareto archive cannot be postponed till the end of the algorithm. Note that as suggested in [19] the dynamic non-dominance problem may also be used to speed up the non-dominated sorting procedure used in many MOEAs. As the Pareto archive contains all non-dominated points generated so far the first front is immediately known and the non-dominated sorting may be applied only to the subset of dominated points. ...
... Other methods can be found in [23], [24], with reviews in [25], [26]. We describe the linear list, the Quad-tree, and one recent method, M-Front [19]. For the linear list in the general case, a new point is compared to all points in the list until a covering point is found or all points are checked. ...
... M-Front has been proposed relatively recently by Drozdík et al. [19]. The idea of M-Front is as follows. ...
In this paper we propose a new method called ND-Tree-based update (or ND-Tree for short) for the dynamic non-dominance problem, i.e., the problem of online update of a Pareto archive composed of mutually non-dominated points. It uses a new ND-Tree data structure in which each node represents a subset of points contained in a hyperrectangle defined by its local approximate ideal and nadir points. By building subsets containing points located close to each other in the objective space and using basic properties of the local ideal and nadir points, we can efficiently avoid searching many branches in the tree. ND-Tree may be used in multiobjective evolutionary algorithms and other multiobjective metaheuristics to update an archive of potentially non-dominated points. We prove that the proposed algorithm has sub-linear time complexity under mild assumptions. We experimentally compare ND-Tree to the simple list, Quad-tree, and M-Front methods using artificial and realistic benchmarks with up to 10 objectives and show that with this new method a substantial reduction in the number of point comparisons and in computational time can be obtained. Furthermore, we apply the method to the non-dominated sorting problem, showing that it is highly competitive with some recently proposed algorithms dedicated to this problem.
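The abstract refers to basic properties of the local ideal and nadir points that allow many branches to be skipped; the sketch below spells out those node-level tests for the minimization case. It is an assumed illustration derived from the stated properties, not the full ND-Tree update algorithm, and the function names are invented for this example.

```python
def weakly_dominates(a, b):
    """a is no worse than b in every objective (minimization)."""
    return all(x <= y for x, y in zip(a, b))

def classify_against_node(p, ideal, nadir):
    """Node-level tests based on the local (approximate) ideal and nadir
    points of an ND-Tree node, assuming minimization:
      - if the node's nadir weakly dominates p, every stored point dominates
        or equals p, so p can be rejected outright;
      - if p weakly dominates the node's ideal, p dominates or equals every
        stored point, so the whole node can be discarded;
      - if p is strictly better than the ideal in some objective and strictly
        worse than the nadir in another, p is mutually non-dominated with
        every stored point and the branch can be skipped."""
    if weakly_dominates(nadir, p):
        return "reject_new_point"
    if weakly_dominates(p, ideal):
        return "discard_whole_node"
    if any(x < y for x, y in zip(p, ideal)) and any(x > y for x, y in zip(p, nadir)):
        return "skip_branch"
    return "descend"         # otherwise the node's children must be inspected

# toy node covering points with ideal (1, 1) and nadir (4, 5)
print(classify_against_node((5, 6), (1, 1), (4, 5)))   # reject_new_point
print(classify_against_node((0, 0), (1, 1), (4, 5)))   # discard_whole_node
print(classify_against_node((0, 9), (1, 1), (4, 5)))   # skip_branch
print(classify_against_node((2, 3), (1, 1), (4, 5)))   # descend
```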