Chapter

DIMMA

Abstract

Metaheuristic algorithms will continue to gain popularity as optimization problems grow in size and complexity. To record experience and allow projects to be replicated, a standard process, that is, a methodology for designing and implementing metaheuristic algorithms, is necessary. To the best of the authors’ knowledge, no such methodology has been proposed in the literature. This paper presents a Design and Implementation Methodology for Metaheuristic Algorithms, named DIMMA. The proposed methodology consists of three main phases, and each phase has several steps in which the activities that must be carried out are clearly defined. In addition, the design and implementation of a tabu search metaheuristic for the travelling salesman problem is presented as a case study to illustrate the applicability of DIMMA.
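As a concrete illustration of the case-study pairing (tabu search applied to the travelling salesman problem), a minimal sketch follows. The swap neighborhood, tenure value, and function names here are illustrative assumptions, not the design DIMMA prescribes:

```python
import itertools
import random

def tour_length(tour, dist):
    """Total length of a closed tour, given a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def tabu_search_tsp(dist, iterations=200, tenure=7, seed=0):
    """Minimal tabu search over the 2-city-swap neighborhood."""
    rng = random.Random(seed)
    n = len(dist)
    current = list(range(n))
    rng.shuffle(current)
    best, best_len = current[:], tour_length(current, dist)
    tabu = {}  # move (i, j) -> iteration until which the move is tabu
    for it in range(iterations):
        candidates = []
        for i, j in itertools.combinations(range(n), 2):
            neighbor = current[:]
            neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
            length = tour_length(neighbor, dist)
            # aspiration criterion: allow a tabu move if it beats the best tour
            if tabu.get((i, j), -1) < it or length < best_len:
                candidates.append((length, (i, j), neighbor))
        if not candidates:
            continue  # every move tabu and none aspirates; skip this iteration
        length, move, current = min(candidates, key=lambda c: c[0])
        tabu[move] = it + tenure
        if length < best_len:
            best, best_len = current[:], length
    return best, best_len
```

A best-improvement move is taken each iteration even if it worsens the tour, which is what lets the search escape local optima; the tabu list prevents immediately undoing recent swaps.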

References
Article
Swarm intelligence is a relatively new approach to problem solving that takes inspiration from the social behaviors of insects and of other animals. In particular, ants have inspired a number of methods and techniques, among which the most studied and the most successful is the general-purpose optimization technique known as ant colony optimization. Ant colony optimization (ACO) takes inspiration from the foraging behavior of some ant species. These ants deposit pheromone on the ground in order to mark favorable paths that should be followed by other members of the colony. Ant colony optimization exploits a similar mechanism for solving optimization problems. Since the early nineties, when the first ant colony optimization algorithm was proposed, ACO has attracted the attention of increasing numbers of researchers, and many successful applications are now available. Moreover, a substantial corpus of theoretical results is becoming available that provides useful guidelines to researchers and practitioners in further applications of ACO. The goal of this article is to introduce ant colony optimization and to survey its most notable applications.
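The pheromone mechanism described above can be sketched for the TSP. This is a minimal ant-system variant; the parameter names (`alpha`, `beta`, `evaporation`) and values are assumptions for illustration, not the canonical ACO implementation:

```python
import math
import random

def aco_tsp(dist, ants=10, iterations=50, alpha=1.0, beta=2.0,
            evaporation=0.5, seed=0):
    """Minimal ant system: construct tours probabilistically, evaporate,
    then deposit pheromone in proportion to tour quality."""
    rng = random.Random(seed)
    n = len(dist)
    pheromone = [[1.0] * n for _ in range(n)]
    best_tour, best_len = None, math.inf

    def tour_length(tour):
        return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

    for _ in range(iterations):
        tours = []
        for _ in range(ants):
            start = rng.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                i = tour[-1]
                choices = list(unvisited)
                # desirability = pheromone^alpha * (1/distance)^beta
                weights = [pheromone[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                           for j in choices]
                j = rng.choices(choices, weights=weights)[0]
                tour.append(j)
                unvisited.discard(j)
            tours.append((tour_length(tour), tour))
        for i in range(n):           # evaporation on every edge
            for j in range(n):
                pheromone[i][j] *= (1.0 - evaporation)
        for length, tour in tours:   # shorter tours deposit more pheromone
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                pheromone[a][b] += 1.0 / length
                pheromone[b][a] += 1.0 / length
            if length < best_len:
                best_len, best_tour = length, tour
    return best_tour, best_len
```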
Article
We consider a variant of the classical symmetric traveling salesman problem in which the nodes are partitioned into clusters and the salesman has to visit at least one node for each cluster. This NP-hard problem is known in the literature as the symmetric generalized traveling salesman problem (GTSP), and finds practical applications in routing, scheduling and location-routing. In a companion paper [Networks 26, No. 2, 113-123 (1995; Zbl 0856.90116)], we modeled GTSP as an integer linear program, and studied the facial structure of two polytopes associated with the problem. Here we propose exact and heuristic separation procedures for some classes of facet-defining inequalities, which are used within a branch-and-cut algorithm for the exact solution of GTSP. Heuristic procedures are also described. Extensive computational results for instances taken from the literature and involving up to 442 nodes are reported.
Conference Paper
Heuristic search techniques are known for their efficiency and effectiveness in solving NP-hard problems. However, there has been limited success so far in constructing a software toolkit which is dedicated to these methods and can fully support all the stages and aspects of researching and developing a system based on these techniques. Some of the reasons for that include the lack of problem modelling facilities and domain-specific frameworks which specifically suit the operations of heuristic search, the tedious code optimisations which are often required to achieve efficient implementations of these methods, and the large number of available algorithms - both local search and population-based - which make it difficult to implement and evaluate a range of techniques to find the most efficient one for the problem at hand. The iOpt Toolkit, presented in this article, attempts to address these issues by providing problem modelling facilities well-matched to heuristic search operations, a generic framework for developing scheduling applications, and a logically structured heuristic search framework allowing the synthesis and evaluation of a variety of algorithms. In addition to these, the toolkit incorporates interactive graphical components for the visualisation of problem and scheduling models, and also for monitoring the run-time behaviour and configuring the parameters of heuristic search algorithms.
Conference Paper
Many heuristic approaches have been explored in the last two decades to tackle large optimization problems. These include parallel metaheuristics, hybrid metaheuristics, and cooperative search algorithms. Different taxonomies have been proposed in the literature for parallel and hybrid metaheuristics, and in these taxonomies one can see that cooperative search algorithms lie somewhere in between. This paper looks at cooperative search algorithms as a stand-alone area. Two different taxonomies of cooperative search algorithms are proposed based on two different criteria. Different implementations in this area are reported and classified using these taxonomies.
Conference Paper
This paper describes a racing procedure for finding, in a limited amount of time, a configuration of a metaheuristic that performs as well as possible on a given instance class of a combinatorial optimization problem. Taking inspiration from methods proposed in the machine learning literature for model selection through cross-validation, we propose a procedure that empirically evaluates a set of candidate configurations, discarding bad ones as soon as statistically sufficient evidence is gathered against them. We empirically evaluate our procedure using as an example the configuration of an ant colony optimization algorithm applied to the traveling salesman problem. The experimental results show that our procedure is able to quickly reduce the number of candidates and allows us to focus on the most promising ones.
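The discard-as-evidence-accumulates idea can be sketched as follows. This uses a crude z-style cutoff rather than the statistical tests of the actual racing procedure, and all names and thresholds are illustrative assumptions:

```python
import random
import statistics

def race(candidates, evaluate, instances, min_results=5, z=2.0):
    """Simplified racing: evaluate surviving configurations instance by
    instance; drop those whose mean cost is clearly worse than the
    current leader's (crude z-style rule, not F-Race's statistical test)."""
    alive = list(candidates)
    results = {c: [] for c in alive}
    for inst in instances:
        for c in alive:
            results[c].append(evaluate(c, inst))
        if len(results[alive[0]]) < min_results:
            continue  # wait until enough evidence has accumulated
        means = {c: statistics.mean(results[c]) for c in alive}
        leader = min(alive, key=means.get)
        survivors = []
        for c in alive:
            se = statistics.stdev(results[c]) / len(results[c]) ** 0.5
            # keep c only if its mean is within z standard errors of the leader
            if means[c] - means[leader] <= z * (se + 1e-12):
                survivors.append(c)
        alive = survivors
        if len(alive) == 1:
            break  # a single configuration remains; the race is over
    return alive
```

Because bad configurations are eliminated early, the evaluation budget concentrates on the promising ones, which is the point of the procedure.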
Conference Paper
This paper describes the recently released DREAM (Distributed Resource Evolutionary Algorithm Machine) framework for the automatic distribution of evolutionary algorithm (EA) processing through a virtual machine built from large numbers of individual machines linked by standard Internet protocols. The framework allows five different user entry points which depend on the knowledge and requirements of the user. At the highest level, users may specify and run distributed EAs simply by manipulating graphical displays. At the lowest level the framework becomes a P2P (peer-to-peer) mobile agent system that may be used for the automatic distribution of a class of processes including, but not limited to, EAs.
Article
In studies of genetic algorithms, evolutionary computing, and ant colony mechanisms, it is recognized that higher-order forms of collective intelligence play an important role in metaheuristic computing and computational intelligence. Collective intelligence is an integration of the collective behaviors of individuals in social groups or the collective functions of components in computational intelligent systems. This paper presents the properties of collective intelligence and their applications in metaheuristic computing. A social psychological perspective on collective intelligence is elaborated toward studies on the structure, organization, operation, and development of collective intelligence. The collective behaviors underpinning collective intelligence in groups and societies are analyzed via the fundamental phenomenon of basic human needs. A key question, how collective intelligence is constrained by social environment and group settings, is explained by a formal motivation/attitude-driven behavioral model. Then, a metaheuristic computational model for a generic cognitive process of human problem solving is developed. This work helps to explain the cognitive and collective intelligent foundations of metaheuristic computing and its engineering applications.
Article
The identification of performance-optimizing parameter settings is an important part of the development and application of algorithms. We describe an automatic framework for this algorithm configuration problem. More formally, we provide methods for optimizing a target algorithm's performance on a given class of problem instances by varying a set of ordinal and/or categorical parameters. We review a family of local-search-based algorithm configuration procedures and present novel techniques for accelerating them by adaptively limiting the time spent for evaluating individual configurations. We describe the results of a comprehensive experimental evaluation of our methods, based on the configuration of prominent complete and incomplete algorithms for SAT. We also present what is, to our knowledge, the first published work on automatically configuring the CPLEX mixed integer programming solver. All the algorithms we considered had default parameter settings that were manually identified with considerable effort. Nevertheless, using our automated algorithm configuration procedures, we achieved substantial and consistent performance improvements.
Book
Planning systems generate partially ordered sequences of actions (or plans) that solve a goal. They start from a specification of the valid actions (also called operators), which includes both the conditions under which an action applies (the preconditions) ...
Article
The Future of Experimental Research. We present a comprehensive, effective and very efficient methodology for the design and experimental analysis of search heuristics such as evolutionary algorithms, differential evolution, pattern search or even classical methods such as the Nelder-Mead simplex algorithm. Our approach extends the sequential parameter optimization (SPO) method that has been successfully applied as a tuning procedure to numerous heuristics for practical and theoretical optimization problems. The benefit of combining modern and classical statistical methods is demonstrated. Optimization practitioners receive valuable hints for choosing an adequate heuristic for their optimization problems; theoreticians receive guidelines for testing results systematically on real problem instances. We demonstrate how SPO improves the performance of many search heuristics significantly. However, this performance gain is not available for free. Therefore, the costs of this tuning process are discussed. Several examples from theory and practice are used to illustrate typical pitfalls in experimentation. Software tools implementing the procedures described in this tutorial are freely available.
Article
There is a deep and useful connection between statistical mechanics (the behavior of systems with many degrees of freedom in thermal equilibrium at a finite temperature) and multivariate or combinatorial optimization (finding the minimum of a given function depending on many parameters). A detailed analogy with annealing in solids provides a framework for optimization of the properties of very large and complex systems. This connection to statistical mechanics exposes new information and provides an unfamiliar perspective on traditional optimization problems and methods.
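Computationally, the annealing analogy reduces to Metropolis acceptance under a decreasing temperature: always accept improvements, accept uphill moves with probability exp(-Δ/T), and cool T over time. A minimal sketch, with an assumed geometric cooling schedule and illustrative parameter names:

```python
import math
import random

def simulated_annealing(f, x0, neighbor, t0=1.0, cooling=0.995,
                        steps=5000, seed=0):
    """Metropolis-style annealing: accept worse solutions with
    probability exp(-delta / T) while T is cooled geometrically."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, best_f = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = f(y)
        delta = fy - fx
        # downhill moves always accepted; uphill with Boltzmann probability
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x, fx = y, fy
            if fx < best_f:
                best, best_f = x, fx
        t *= cooling  # geometric cooling schedule
    return best, best_f
```

At high temperature the walk is nearly random (exploration); as T approaches zero it degenerates into greedy descent, mirroring the slow-cooling requirement of physical annealing.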
Article
The undirected Selective Travelling Salesman Problem (STSP) is defined on a graph G=(V, E) with positive profits associated with vertices, and distances associated with edges. The STSP consists of determining a maximal profit Hamiltonian cycle over a subset of V whose length does not exceed a preset limit L. We describe a tabu search (TS) heuristic for this problem. The algorithm iteratively inserts clusters of vertices in the current tour or removes a chain of vertices. Tests performed on randomly generated instances with up to 300 vertices show that the algorithm consistently yields near-optimal solutions.
Article
A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving. A number of “no free lunch” (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class. These theorems result in a geometric interpretation of what it means for an algorithm to be well suited to an optimization problem. Applications of the NFL theorems to information-theoretic aspects of optimization and benchmark measures of performance are also presented. Other issues addressed include time-varying optimization problems and a priori “head-to-head” minimax distinctions between optimization algorithms, distinctions that result despite the NFL theorems' enforcing of a type of uniformity over all algorithms
Article
In this paper we will introduce the Memetic Algorithms FRAmework, a general purpose evolutionary computation framework. MAFRA allows the construction of complex evolutionary ...
Article
The Distributed Computing Column covers the theory of systems that are composed of a number of interacting computing elements. These include problems of communication and networking, databases, distributed shared memory, multiprocessor architectures, ...
Chapter
This chapter presents the fundamental concepts of Tabu Search (TS) in a tutorial fashion. Special emphasis is put on showing the relationships with classical Local Search methods and on the basic elements of any TS heuristic, namely, the definition of the search space, the neighborhood structure, and the search memory. Other sections cover other important concepts such as search intensification and diversification and provide references to significant work on TS. Recent advances in TS are also briefly discussed.
Chapter
Metaheuristics are truly diverse in nature—under the overarching theme of performing operations to escape local optima, algorithms as different as ant colony optimization, tabu search, harmony search, and genetic algorithms have emerged. Due to the unique functionality of each type of metaheuristic, comparison of metaheuristics is in many ways more difficult than other algorithmic comparisons. In this chapter, we discuss techniques for meaningful comparison of metaheuristics. We discuss how to create and classify instances in a new testbed and how to make sure other researchers have access to the problems for future metaheuristic comparisons. Further, we discuss the disadvantages of large parameter sets and how to measure complicating parameter interactions in a metaheuristic’s parameter space. Last, we discuss how to compare metaheuristics in terms of both solution quality and runtime.
Article
This article discusses the design of computational experiments to test heuristic methods and provides reporting guidelines for such experimentation. The goal is to promote thoughtful, well-planned, and extensive testing of heuristics, full disclosure of experimental conditions, and integrity in and reproducibility of the reported results.
Article
In this paper, some new quality metrics for evaluating the performance of biobjective optimization methods with respect to generating the Pareto frontier are presented. A new metric for calculating the running speed of a multiobjective optimization method is also presented. These metrics are tested on two biobjective scalarization functions (weighted sum and Tchebychev aggregation of objective functions) handled by a metaheuristic (simulated annealing).
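The two scalarizations mentioned (weighted sum and Tchebychev aggregation) can be written down directly. The function names, and the use of an ideal point z* in the Tchebychev form, are standard formulations assumed here rather than taken from the paper:

```python
def weighted_sum(objs, weights):
    """Weighted-sum scalarization: sum_i w_i * f_i(x)."""
    return sum(w * f for w, f in zip(weights, objs))

def tchebychev(objs, weights, ideal):
    """Weighted Tchebychev scalarization: max_i w_i * |f_i(x) - z*_i|,
    where z* is the ideal (utopia) point."""
    return max(w * abs(f - z) for w, f, z in zip(weights, objs, ideal))
```

A standard observation motivating the comparison: minimizing a weighted sum can only reach points on the convex hull of the Pareto frontier, whereas varying the weights of the Tchebychev form can reach non-convex Pareto-optimal points as well.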
Article
We consider heuristic algorithms for the resource-constrained project scheduling problem. Starting with a literature survey, we summarize the basic components of heuristic approaches. We briefly describe so-called X-pass methods, which are based on priority rules, as well as metaheuristic algorithms. Subsequently, we present the results of our in-depth computational study. Here, we evaluate the performance of several state-of-the-art heuristics from the literature on the basis of a standard set of test instances and point out the most promising procedures. Moreover, we analyze the behavior of the heuristics with respect to their components, such as priority rules and metaheuristic strategy. Finally, we examine the impact of problem characteristics such as project size and resource scarceness on the performance.
Article
This paper contains the description of a traveling salesman problem library (TSPLIB) which is meant to provide researchers with a broad set of test problems from various sources and with various properties. For every problem a short description is given along with known lower and upper bounds. Several references to computational tests on some of the problems are given. INFORMS Journal on Computing, ISSN 1091-9856, was published as ORSA Journal on Computing from 1989 to 1995 under ISSN 0899-1499.
Article
Heuristic optimization algorithms seek good feasible solutions to optimization problems in circumstances where the complexity of the problem or the limited time available for solution does not allow exact solution. Although worst-case and probabilistic analyses of algorithms have produced insight on some classic models, most heuristics developed for large optimization problems must be evaluated empirically, by applying the procedures to a collection of specific instances and comparing the observed solution quality and computational burden. This paper focuses on the methodological issues that must be confronted by researchers undertaking such experimental evaluations of heuristics, including experimental design, sources of test instances, measures of algorithmic performance, analysis of results, and presentation in papers and talks. The questions are difficult, and there are no clear right answers. We seek only to highlight the main issues, present alternative ways of addressing them under different circumstances, and caution about pitfalls to avoid.
Article
Three open-source codes that have the potential to provide the basis for research into evolutionary program-code generation approaches to machine learning are discussed. The codes are lil-gp, ECJ and grammatical evolution (GE). Lil-gp provides a clear and concise ANSI C implementation for tree-structured genetic programming (GP), with a strong emphasis on the original Koza formulation. ECJ emphasizes and supports the latest GP developments directly, but it takes longer to become familiar with. GE is the least developed of the three systems but represents a specific instance of an ongoing research theme.
Article
Researchers and practitioners frequently spend more time fine-tuning algorithms than designing and implementing them. This is particularly true when developing heuristics and metaheuristics, where the right choice of values for search parameters has a considerable effect on the performance of the procedure. When testing metaheuristics, performance typically is measured considering both the quality of the solutions obtained and the time needed to find them. In this paper, we describe the development of CALIBRA, a procedure that attempts to find the best values for up to five search parameters associated with a procedure under study. Because CALIBRA uses Taguchi's fractional factorial experimental designs coupled with a local search procedure, the best values found are not guaranteed to be optimal. We test CALIBRA on six existing heuristic-based procedures. These experiments show that CALIBRA is able to find parameter values that either match or improve the performance of the procedures resulting from using the parameter values suggested by their developers. The latest version of CALIBRA can be downloaded for free from the website that appears in the online supplement of this paper at http://or.pubs.informs.org/Pages.collect.html.
Article
In this paper, we propose a procedure, based on statistical design of experiments and gradient descent, that finds effective settings for parameters found in heuristics. We develop our procedure using four experiments. We use our procedure and a small subset of problems to find parameter settings for two new vehicle routing heuristics. We then set the parameters of each heuristic and solve 19 capacity-constrained and 15 capacity-constrained and route-length-constrained vehicle routing problems ranging in size from 50 to 483 customers. We conclude that our procedure is an effective method that deserves serious consideration by both researchers and operations research practitioners.
Book
A unified view of metaheuristics. This book provides a complete background on metaheuristics and shows readers how to design and implement efficient algorithms to solve complex optimization problems across a diverse range of applications, from networking and bioinformatics to engineering design, routing, and scheduling. It presents the main design questions for all families of metaheuristics and clearly illustrates how to implement the algorithms under a software framework to reuse both the design and code. Throughout the book, the key search components of metaheuristics are considered as a toolbox for: Designing efficient metaheuristics (e.g. local search, tabu search, simulated annealing, evolutionary algorithms, particle swarm optimization, scatter search, ant colonies, bee colonies, artificial immune systems) for optimization problems. Designing efficient metaheuristics for multi-objective optimization problems. Designing hybrid, parallel, and distributed metaheuristics. Implementing metaheuristics on sequential and parallel machines. Using many case studies and treating design and implementation independently, this book gives readers the skills necessary to solve large-scale optimization problems quickly and efficiently. It is a valuable reference for practicing engineers and researchers from diverse areas dealing with optimization or machine learning; and graduate students in computer science, operations research, control, engineering, business and management, and applied mathematics.
Article
The development of successful metaheuristic algorithms such as local search for a difficult problem such as satisfiability testing (SAT) is a challenging task. We investigate an evolutionary approach to automating the discovery of new local search heuristics for SAT. We show that several well-known SAT local search algorithms such as Walksat and Novelty are composite heuristics that are derived from novel combinations of a set of building blocks. Based on this observation, we developed CLASS, a genetic programming system that uses a simple composition operator to automatically discover SAT local search heuristics. New heuristics discovered by CLASS are shown to be competitive with the best Walksat variants, including Novelty+. Evolutionary algorithms have previously been applied to directly evolve a solution for a particular SAT instance. We show that the heuristics discovered by CLASS are also competitive with these previous, direct evolutionary approaches for SAT. We also analyze the local search behavior of the learned heuristics using the depth, mobility, and coverage metrics proposed by Schuurmans and Southey.
Article
Local search is one of the fundamental approaches to tackling large combinatorial optimization problems. Yet relatively little support is available to facilitate the design and implementation of local search algorithms. This paper introduces Localizer++, an extensible object-oriented library for local search. Localizer++ supports both declarative abstractions to describe the neighborhood and high-level search constructs to specify local moves and meta-heuristics. It also supports a variety of features typically found only in modeling languages, and its extensibility allows for an easy integration of new, user-defined abstractions. Of particular interest are the conciseness and readability of Localizer++ statements and the efficiency of the Localizer++ implementation.
Conference Paper
This paper introduces a text-based interface (PISA) that makes it possible to separate the algorithm-specific part of an optimizer from the application-specific part. These parts are implemented as independent programs forming freely combinable modules. It is therefore possible to provide these modules as ready-to-use packages. As a result, an application engineer can easily exchange the optimization method and try different variants, while an algorithm designer has the opportunity to test a search algorithm on various problems without additional programming effort.
Article
The landscapes of Traveling Salesman Problems are investigated by random walk techniques. The autocorrelation functions for different metrics on the space of tours are calculated. The landscape turns out to be AR(1) for symmetric TSPs. For asymmetric problems there can be a random contribution superimposed on an AR(1) behaviour. The traveling salesman problem (TSP) [1] is the most prominent classical example of an NP-complete [2] combinatorial optimization problem. Given a distribution of cities, the task is to find the shortest tour visiting each city once and returning to the starting point, with prescribed costs c_ij for traveling from city i to city j. The symmetric problem (c_ij = c_ji) has applications in X-ray crystallography...
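The random-walk technique amounts to recording tour lengths along a walk of random moves and estimating their autocorrelation; for an AR(1) landscape the autocorrelation decays geometrically with the lag. A sketch under the assumption of a simple city-swap move (the paper considers several metrics; this is just one):

```python
import random

def autocorrelation(series, lag):
    """Empirical autocorrelation r(lag) of a sequence of fitness values."""
    n = len(series)
    mean = sum(series) / n
    var = sum((v - mean) ** 2 for v in series) / n
    cov = sum((series[i] - mean) * (series[i + lag] - mean)
              for i in range(n - lag)) / (n - lag)
    return cov / var

def random_walk_lengths(dist, steps=2000, seed=0):
    """Random walk in tour space via random city swaps, recording the
    tour length after every move."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    lengths = []
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        tour[i], tour[j] = tour[j], tour[i]  # one random swap per step
        lengths.append(sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n)))
    return lengths
```

Since a single swap changes only a few of the tour's edges, consecutive lengths are strongly correlated, and r(lag) falls off roughly geometrically, which is the AR(1) signature the abstract refers to.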
The mixed integer programming library: MIPLIB
  • T. Achterberg
  • T. Koch
  • A. Martin
Measuring the performance of parallel metaheuristics
  • E. Alba
  • G. Luque
OR-Library: distributing test problems by electronic mail
  • J. E. Beasley
Tuning Metaheuristics: A Machine Learning Perspective
  • M. Birattari
A taxonomy of cooperative search algorithms
  • M. El-Abd
  • M. Kamel
The Design of Experiments
  • R. A. Fisher
Computers and Intractability
  • M. R. Garey
  • D. S. Johnson
Design and Analysis of Experiments
  • D. C. Montgomery
MetaYourHeuristic V. 1.3, Intelligence Computing Laboratory
  • P. Y. Yin
CIlib: A collaborative framework for computational intelligence algorithms - Part I
  • T. Cloete
  • A. P. Engelbrecht
  • G. Pampara