Conference Paper

An enhanced statistical approach for evolutionary algorithm comparison

DOI: 10.1145/1389095.1389270 Conference: Genetic and Evolutionary Computation Conference, GECCO 2008, Proceedings, Atlanta, GA, USA, July 12-16, 2008
Source: DBLP

ABSTRACT This paper presents an enhanced approach for comparing evolutionary algorithms. The approach is based on three statistical techniques: (a) Principal Component Analysis, which is used to make the data uncorrelated; (b) Bootstrapping, which is employed to build the probability distribution function of the merit functions; and (c) Stochastic Dominance Analysis, which makes it possible to compare two or more probability distribution functions. Since the proposed approach does not rely on parametric properties, it can be applied to compare any kind of quantity, regardless of its probability distribution function. When applied to the same problems, the proposed approach has provided better-supported decisions than former approaches.
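The three techniques named in the abstract could be chained roughly as sketched below: decorrelate the raw data with a PCA-style projection, bootstrap the distribution of a merit function, then compare two algorithms' bootstrap distributions with a first-order stochastic dominance check. All function names and parameters, and the choice of the mean as merit function, are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def decorrelate(X):
    """PCA-style decorrelation: project the centered data onto the
    eigenvectors of its covariance matrix, so the projected columns
    are uncorrelated."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    _, vecs = np.linalg.eigh(cov)
    return Xc @ vecs

def bootstrap_means(samples, n_boot=2000, rng=None):
    """Bootstrap the distribution of a merit function (here: the mean)
    by resampling with replacement."""
    if rng is None:
        rng = np.random.default_rng(0)
    samples = np.asarray(samples, dtype=float)
    idx = rng.integers(0, len(samples), size=(n_boot, len(samples)))
    return samples[idx].mean(axis=1)

def stochastically_dominates(a, b, grid_size=200):
    """First-order stochastic dominance for minimization: A dominates B
    if F_A(t) >= F_B(t) on the whole grid, strictly somewhere."""
    grid = np.linspace(min(a.min(), b.min()), max(a.max(), b.max()), grid_size)
    Fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    Fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return bool(np.all(Fa >= Fb) and np.any(Fa > Fb))
```

For example, feeding the final objective values of two algorithms into `bootstrap_means` and then `stochastically_dominates` yields a distribution-free verdict on which algorithm is preferable under minimization.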

Available from: Ricardo H. C. Takahashi, Jun 25, 2015
  • Source
    ABSTRACT: The growing application of evolutionary optimization algorithms to problems in different domains has led to rigorous analyses of their efficiency and effectiveness. Various approaches to performance evaluation have been proposed, such as parametric, non-parametric, or mathematical ones, but these lack direct involvement of the obtained results. This paper proposes a visual comparative performance evaluation method that incorporates more direct participation of the results. The proposed method is studied from the perspective of three types of comparison: one-to-one, one-to-many, and many-to-many. The necessary interpretations of the method are illustrated and examined with solutions obtained on several benchmark functions by well-known evolutionary optimization algorithms.
    2014 2nd International Symposium on Computational and Business Intelligence (ISCBI 2014), New Delhi; 12/2014
  • ABSTRACT: This paper presents a statistics-based comparison methodology for performing evolutionary algorithm comparison under multiple merit criteria. The analysis of each criterion is based on the progressive construction of a ranking of the algorithms under analysis, with the determination of significance levels for each ranking step. The multicriteria analysis is based on the aggregation of the different criteria rankings via a non-dominance analysis, which indicates the algorithms that constitute the efficient set. In order to avoid correlation effects, a principal component analysis pre-processing is performed. Bootstrapping techniques allow the evaluation of merit criteria data with arbitrary probability distribution functions. The algorithm ranking in each criterion is built progressively, using either ANOVA or first-order stochastic dominance. The resulting ranking is checked using a permutation test which detects possible inconsistencies in the ranking, leading to the execution of more algorithm runs which refine the ranking confidence. As a by-product, the permutation test also delivers p-values for the ordering between each two algorithms that occupy adjacent rank positions. A comparison of the proposed method with other methodologies has been performed using reference probability distribution functions (PDFs). The proposed methodology has always reached the correct ranking with fewer samples and, in the case of non-Gaussian PDFs, it has worked well, while the other methods have not even been able to detect some PDF differences. The application of the proposed method is illustrated on benchmark problems.
    IEEE Transactions on Evolutionary Computation 01/2012; DOI:10.1109/TEVC.2010.2069567 · 5.55 Impact Factor
  • Source
    ABSTRACT: The original formulation of the Generalized Assignment Problem (GAP) consists in, given a set of n different tasks and m different agents, assigning each task to an agent in such a way that a cost function is minimized. A previous work introduced the Equilibrium Function as a new objective function in the problem formulation. The purpose of this second objective function is to minimize the maximum difference between the amounts of work assigned to the agents. This allows better distributions of the tasks among the agents than the results found from the original problem, with a small increase in cost. This paper proposes new crossover and mutation operators that improve on the algorithm presented in [Subtil et al. 2010], leading to considerably better Pareto approximations than the ones obtained in the previous work, within the same number of function evaluations. The proposed operators exploit problem-specific information in a probabilistic way, performing operations that lead to objective function enhancement or feasibility enhancement with greater probability than operations that do not cause such enhancements. A statistical comparison procedure is employed to support these conclusions.
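The permutation test mentioned in the IEEE Transactions on Evolutionary Computation abstract above can be sketched, in generic form, as a two-sample permutation test on the difference of means: it yields a p-value for the hypothesis that two algorithms' merit values come from the same distribution. This is an illustrative stand-in, not the authors' exact ranking-consistency procedure, and all names are assumptions.

```python
import numpy as np

def permutation_p_value(a, b, n_perm=5000, seed=0):
    """Two-sample permutation test on the absolute difference of means.
    Returns a p-value for the null hypothesis that the merit values in
    `a` and `b` are drawn from the same distribution (illustrative
    stand-in for the ranking check described in the abstract)."""
    rng = np.random.default_rng(seed)
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # relabel the pooled samples at random
        diff = abs(pooled[:len(a)].mean() - pooled[len(a):].mean())
        count += diff >= observed
    # add-one correction keeps the p-value strictly positive
    return (count + 1) / (n_perm + 1)
```

A small p-value between two algorithms holding adjacent rank positions would support keeping that ordering; a large one would suggest running more repetitions before committing to the ranking.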
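The bi-objective GAP formulation in the abstract above (cost plus the Equilibrium Function) can be evaluated for a candidate assignment roughly as follows. The array layout and the reading of the Equilibrium Function as the gap between the most- and least-loaded agents are assumptions for illustration; the paper's exact definition may differ.

```python
import numpy as np

def gap_objectives(assignment, cost, work):
    """Evaluate a GAP solution under two objectives: total assignment
    cost, and an equilibrium measure taken here as the difference
    between the largest and smallest agent workloads (an assumed
    reading of the Equilibrium Function).

    assignment[j] -- index of the agent assigned to task j
    cost[i, j]    -- cost of agent i performing task j
    work[i, j]    -- workload task j adds to agent i
    """
    n_agents, n_tasks = cost.shape
    total_cost = sum(cost[assignment[j], j] for j in range(n_tasks))
    loads = np.zeros(n_agents)
    for j in range(n_tasks):
        loads[assignment[j]] += work[assignment[j], j]
    return total_cost, loads.max() - loads.min()
```

A multiobjective evolutionary algorithm would minimize both returned values simultaneously, trading a small cost increase for a more even work distribution, as the abstract describes.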