Journal of Heuristics

Published by Springer Nature
Online ISSN: 1572-9397
Print ISSN: 1381-1231
Recent publications
An example of the interventions having (left) positive and (right) negative slopes. Given a time t and $\tau = 0.8$, in the left figure, we have $Q^{t}_{\tau} = 128$ and $\widehat{Q}_{\tau}^{t} = Q^{t}_{1,\tau} + Q^{t}_{2,\tau} = 69 + 59 = 128$. In the right figure, we have $Q^{t}_{\tau} = 128$ and $\widehat{Q}_{\tau}^{t} = Q^{t}_{1,\tau} + Q^{t}_{2,\tau} = 69 + 59 = 128$
An example of the interventions having both positive and negative slopes. Given a time t and $\tau = 0.8$, we have $Q^{t}_{\tau} = 107$ but $\widehat{Q}_{\tau}^{t} = Q^{t}_{1,\tau} + Q^{t}_{2,\tau} = 59 + 73 = 132$
One-shift evaluation
Pairwise interchange evaluation
Convergence of the approximate $Z_2$ to the true $Z_2$
This paper considers the planning problem arising in the maintenance of a power distribution grid. Maintenance works require the corresponding parts of the grid to be shut down for the entire duration of the maintenance, which can range from one day to several weeks. The plan specifies the starting times of the required outages for maintenance and should take into account the constrained resources as well as the uncertainty involved in the maintenance works, which is characterized by the risk values provided by the grid operator. The problem was presented by the French company Réseau de Transport d’Électricité for the 2020 ROADEF/EURO challenge. Several approaches were developed during the competition, and all of them are reported in this paper. We evaluate our approaches on the benchmark instances proposed for the competition; the iterated local search metaheuristic with self-adaptive perturbation performed the best.
 
We propose a novel technique for algorithm selection, applicable to optimisation domains in which there is implicit sequential information encapsulated in the data, e.g., in online bin-packing. Specifically, we train two types of recurrent neural networks (RNNs) to predict a packing heuristic in online bin-packing, selecting from four well-known heuristics. As input, the RNN methods only use the sequence of item sizes. This contrasts with typical approaches to algorithm selection, which require a model to be trained using domain-specific instance features that must first be derived from the input data. The RNN approaches are shown to be capable of achieving within 5% of the oracle performance on between 80.88% and 97.63% of the instances, depending on the dataset. They are also shown to outperform classical machine learning models trained using derived features. Finally, we hypothesise that the proposed methods perform well when the instances exhibit some implicit structure that results in discriminatory performance with respect to a set of heuristics. We test this hypothesis by generating fourteen new datasets with increasing levels of structure, and show that there is a critical threshold of structure required before algorithm selection delivers benefit.
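As a concrete illustration of the setting above, the following minimal Python sketch packs an item-size sequence with four classical online heuristics and measures the gap of a selected heuristic to the per-instance oracle. The specific heuristics (first-fit, best-fit, worst-fit and a last-opened-bin stand-in for next-fit) and the bin capacity of 100 are illustrative assumptions, not necessarily the four heuristics used in the paper.

def pack(items, choose_bin, capacity=100):
    """Pack the item sizes online; 'bins' stores the remaining space of each bin."""
    bins = []
    for size in items:
        feasible = [i for i, space in enumerate(bins) if space >= size]
        if feasible:
            bins[choose_bin(feasible, bins)] -= size
        else:
            bins.append(capacity - size)  # open a new bin
    return len(bins)

HEURISTICS = {
    "first_fit": lambda idx, bins: idx[0],
    "best_fit": lambda idx, bins: min(idx, key=lambda i: bins[i]),
    "worst_fit": lambda idx, bins: max(idx, key=lambda i: bins[i]),
    "last_fit": lambda idx, bins: idx[-1],  # simple stand-in for next-fit
}

def oracle_gap(items, selected):
    """Relative gap of the selected heuristic to the best of the four on this instance."""
    results = {name: pack(items, h) for name, h in HEURISTICS.items()}
    best = min(results.values())
    return (results[selected] - best) / best

instance = [42, 69, 31, 55, 12, 90, 27, 63, 48, 8]
print({name: pack(instance, h) for name, h in HEURISTICS.items()})
print("best_fit gap to oracle:", oracle_gap(instance, "best_fit"))  # 0.0 on this instance

An algorithm-selection model such as the RNNs described above would read the raw item-size sequence and output one of the heuristic names per instance; the oracle gap is the quantity reported in the abstract.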
 
The intelligent management of available resources is one of the greatest challenges of any organization: finding the balance between the size of the stock, the production and transport capacity, and the need to ensure quality service to suppliers and customers. This type of challenge is also very common in port terminals, where efficient and effective operations are fundamental to reduce fines, avoid accidents, and build customer loyalty. This paper considers an integrated planning, scheduling, yard allocation, and berth allocation problem in dry bulk port terminals. The integrated problem consists of planning and scheduling the flow of products between the supply and demand nodes, allocating the products to the storage yards, and determining the loading sequence and the berth time and position of each vessel. A mixed-integer linear programming model is proposed, connecting the problems and generating an integrated solution. To solve the integrated problem more efficiently, we developed an algorithm that combines the column generation method with a diving heuristic with limited backtracking, a relax-and-fix heuristic, and an exact algorithm from a commercial solver. The mathematical formulation and the proposed algorithm are tested and validated on large-scale instances. Computational experiments show that the proposed solution approach outperforms the commercial solver and is very effective in finding strong bounds for large instances.
 
The Capacitated Vehicle Routing Problem (CVRP) has been subject to intense research efforts for more than sixty years, yet significant algorithmic improvements are still being made. The most competitive heuristic solution algorithms of today utilize, and often combine, strategies and elements from evolutionary algorithms, local search, and ruin-and-recreate based large neighborhood search. In this paper we propose a new hybrid metaheuristic for the CVRP, where the education phase of the hybrid genetic search (HGS) algorithm proposed by Vidal (Hybrid Genetic Search for the CVRP: Open-Source Implementation and SWAP* Neighborhood, 2020) is extended by applying large neighborhood search (LNS). By performing a series of computational experiments, we attempt to answer the following research questions: 1) Is it possible to gain performance by adding LNS as a component in the education phase of HGS? 2) How does the addition of LNS change the relative importance of the local search neighborhoods of HGS? 3) What is the effect of devoting computational effort to the creation of an elite solution in the initial population of HGS? Through a set of computational experiments we answer these research questions, while at the same time obtaining a good configuration of global parameter settings for the proposed heuristic. Testing the heuristic on benchmark instances from the literature with limited computing time, it outperforms existing algorithms, both in terms of the final gap and the primal integral.
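The sketch below shows one ruin-and-recreate step of the kind an LNS component applies to a CVRP solution: remove a few customers at random, then reinsert each at its cheapest capacity-feasible position. The route representation, removal size and greedy repair are illustrative assumptions, not the authors' exact operators.

import random

def route_cost(route, dist, depot=0):
    """Cost of a single route that starts and ends at the depot."""
    tour = [depot] + route + [depot]
    return sum(dist[a][b] for a, b in zip(tour, tour[1:]))

def ruin(routes, n_remove, rng):
    """Randomly remove n_remove customers from the solution."""
    customers = [c for r in routes for c in r]
    removed = rng.sample(customers, min(n_remove, len(customers)))
    kept = [[c for c in r if c not in removed] for r in routes]
    return [r for r in kept if r], removed

def recreate(routes, removed, demand, capacity, dist):
    """Greedy cheapest-insertion repair respecting vehicle capacity."""
    for c in removed:
        best = None
        for r in routes:
            if sum(demand[v] for v in r) + demand[c] > capacity:
                continue
            for pos in range(len(r) + 1):
                cand = r[:pos] + [c] + r[pos:]
                delta = route_cost(cand, dist) - route_cost(r, dist)
                if best is None or delta < best[0]:
                    best = (delta, r, pos)
        if best is None:
            routes.append([c])  # no feasible insertion: open a new route
        else:
            _, r, pos = best
            r.insert(pos, c)
    return routes

def lns_step(routes, demand, capacity, dist, n_remove=3, rng=random):
    """One ruin-and-recreate move; an acceptance criterion would normally follow."""
    partial, removed = ruin(routes, n_remove, rng)
    return recreate(partial, removed, demand, capacity, dist)

Within the hybrid described above, such a step would be applied to offspring during the education phase, alongside the usual HGS local search neighborhoods.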
 
This paper deals with a multi-depot mixed vehicle routing problem under uncertain travel times (MDMVRP-UT), where there are several different depots and a number of identical vehicles. A vehicle can return to any of the depots after its service is completed. A light-robust-optimization model is set up to keep the total travel time within a preset value and to minimize the total travel time as much as possible. An effective evolutionary algorithm (EA) is then proposed to solve the light-robust-optimization model. In the proposed EA, two constructive heuristics, namely a random customer sequence-based heuristic and a minimum spanning tree-based heuristic, are designed according to problem-specific knowledge to generate a high-quality initial population with a certain level of diversity. A destruction and construction-based reproduction operator is provided to generate high-quality feasible offspring. A pairwise interchange based local search method is proposed to enhance the local exploitation capability. A hybrid selection operator and a population updating method are employed to maintain the diversity of the population. The effectiveness of the proposed EA is verified by comprehensive experiments on well-known benchmark instances from the literature.
 
The cyclic cutwidth minimization problem (CCMP) is a graph layout problem that involves embedding a graph onto a circle to minimize the maximum cutwidth of the graph. In this paper, we present breakout local search (BLS) for solving CCMP, which combines a dedicated local search procedure to discover high-quality local optimal solutions and an adaptive diversification strategy to escape from local optima. Extensive computational results on a wide set of 179 publicly available benchmark instances show that the proposed BLS algorithm has excellent performance with respect to the best-performing state-of-the-art approaches in terms of solution quality and computational time. In particular, it reports improved best-known solutions for 31 instances, while finding matching best-known results on 139 instances.
 
Number of tests required to reach a certain coverage percentage for the tt-open-wbo-inc approach
Comparison of the required number of tests for different methods, relative to the number of tests used by $\simeq T(N,t,S)$ (as base), to cover each number of tuples
Comparison of the required number of tests for different methods to cover as many tuples at each test as $\simeq T(N,t,S)$ (as base)
Partial MaxSAT formula size for RL-B in literals as a function of test suite size
We present a Satisfiability (SAT)-based approach for building Mixed Covering Arrays with Constraints of minimum length, referred to as the Covering Array Number problem. This problem is central in Combinatorial Testing for the detection of system failures. In particular, we show how to apply Maximum Satisfiability (MaxSAT) technology by describing efficient encodings for different classes of complete and incomplete MaxSAT solvers to compute optimal and suboptimal solutions, respectively. Similarly, we show how to solve through MaxSAT technology a closely related problem, the Tuple Number problem, which we extend to incorporate constraints. For this problem, we additionally provide a new MaxSAT-based incomplete algorithm. The extensive experimental evaluation we carry out on the available Mixed Covering Arrays with Constraints benchmarks and the comparison with state-of-the-art tools confirm the good performance of our approaches.
 
Drones have been getting more and more popular in many sectors of the economy. Both scientific and industrial communities aim at making the impact of drones even more disruptive by empowering collaborative autonomous behaviors—also known as swarming behaviors—within fleets of multiple drones. In swarming-powered 3D mapping missions, unmanned aerial vehicles typically collect the aerial pictures of the target area whereas the 3D reconstruction process is performed in a centralized manner. However, such approaches do not leverage computational and storage resources from the swarm members. We address the optimization of a swarm-powered distributed 3D mapping mission for a real-life humanitarian emergency response application through the exploitation of a swarm-powered ad hoc cloud. Producing the relevant 3D maps in a timely manner, even when cloud connectivity is not available, is crucial to increase the chances of success of the operation. In this work, we present a mathematical programming heuristic based on decomposition and a variable neighborhood search heuristic to minimize the completion time of the 3D reconstruction process necessary in such missions. Our computational results reveal that the proposed heuristics either quickly reach optimality or improve the best known solutions for almost all tested realistic instances comprising up to 1000 images and fifteen drones.
 
Industrial software often has many parameters that critically impact performance. Frequently, these are left in a sub-optimal configuration for a given application because searching over possible configurations is costly and, except for developer instinct, the relationships between parameters and performance are often unclear and complex. While there have been significant advances in automated parameter tuning approaches recently, they are typically black-box. The high-quality solutions produced are returned to the user without explanation. The nature of optimisation means that, often, these solutions are far outside the well-established settings for the software, making it difficult to accept and use them. To address the above issue, a systematic approach to software parameter optimization is presented. Several well-established techniques are followed in sequence, each underpinning the next, with rigorous analysis of the search space. This allows the results to be explainable to both end users and developers, improving confidence in the optimal solutions, particularly where they are counter-intuitive. The process comprises statistical analysis of the parameters; single-objective optimization for each target objective; functional ANOVA to explain trends and inter-parameter interactions; and a multi-objective optimization seeded with the results from the single-objective stage. A case study demonstrates application to business-critical software developed by the international airline Air France-KLM for measuring flight schedule robustness. A configuration is found with a run-time of 80% that of the tried-and-tested configuration, with no loss in predictive accuracy. The configuration is supplemented with detailed analysis explaining the importance of each parameter, how they interact with each other, how they influence run-time and accuracy, and how the final configuration was reached. In particular, this explains why the configuration included some parameter settings that were outwith the usually recommended range, greatly increasing developer confidence and encouraging adoption of the new configuration.
 
In this article, we study an Inventory Routing Problem with deterministic customer demand in a two-tier supply chain. The supply chain network consists of a supplier using a single vehicle with a given capacity to deliver a single product type to multiple customers. We are interested in population-based algorithms to solve our problem. A Memetic Algorithm (MA) is developed based on the Genetic Algorithm (GA) and Variable Neighborhood Search methods. The proposed meta-heuristics are tested on small and large reference benchmarks. The results of the MA are compared to those of the classical GA and to the optimal solutions in the literature. The comparison shows the efficiency of using MA and its ability to generate high quality solutions in a reasonable computation time.
 
Local search algorithms are frequently used to handle complex optimization problems involving binary decision variables. One way of implementing a local search procedure is by using a mixed-integer programming solver to explore a neighborhood defined through a constraint that limits the number of binary variables whose values are allowed to change in a given iteration. Recognizing that not all variables are equally promising to change when searching for better neighboring solutions, we propose a weighted iterated local branching heuristic. This new procedure differs from similar existing methods since it considers groups of binary variables and associates with each group a limit on the number of variables that can change. The groups of variables are defined using weights that indicate the expected contribution of flipping the variables when trying to identify improving solutions in the current neighborhood. When the mixed-integer programming solver fails to identify an improving solution in a given iteration, the proposed heuristic may force the search into new regions of the search space by utilizing the group of variables that are least promising to flip. The weighted iterated local branching heuristic is tested on benchmark instances of the optimum satisfiability problem, and computational results show that the weighted method is superior to an alternative method without weights.
 
Venn diagram of sets in the proof of Lemma 2
Representation of the basins of attraction in the solution space
Solutions explored by DS in an example problem
Best performing three-component CMCS configuration obtained by Karapetyan and Goldengorin (2018a)
Average relative error vs. time obtained for DS, variants, POPSTAR and CMCS3
In this paper we propose a novel heuristic search for solving combinatorial optimization problems which we call Diverse Search (DS). Like beam search, this constructive approach expands only a selected subset of the solutions in each level of the search tree. However, instead of selecting the solutions with the best values, we use an efficient method to select a diverse subset, after filtering out uninteresting solutions. DS also distinguishes solutions that do not produce better offspring, and applies a local search process to them. The intuition is that the combination of these strategies allows the search to reach more, and more diverse, local optima, increasing the chances of finding the global optimum. We test DS on several instances of the Körkel–Ghosh (KG) and K-median benchmarks for the Simple Plant Location Problem. We compare it with a state-of-the-art heuristic for the KG benchmark and with the relatively old POPSTAR solver, which also relies on the idea of maintaining a diverse set of solutions and, surprisingly, reached comparable performance. With the use of a Path Relinking post-optimization step, DS can achieve results of the same quality as the state-of-the-art in similar CPU times. Furthermore, DS proved to be slightly better on average for large-scale problems with small solution sizes, proving to be an efficient algorithm that delivers a set of good and diverse solutions.
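A minimal sketch of the kind of diverse-subset selection step described above is given below: a greedy max-min rule that repeatedly adds the candidate farthest from everything already selected. The symmetric-difference distance and the toy Simple Plant Location solutions are illustrative assumptions, not the paper's exact selection method.

def select_diverse(solutions, k, distance):
    """Greedy max-min selection: repeatedly add the solution farthest
    from everything already selected."""
    if not solutions or k <= 0:
        return []
    selected = [solutions[0]]
    while len(selected) < min(k, len(solutions)):
        best = max(
            (s for s in solutions if s not in selected),
            key=lambda s: min(distance(s, t) for t in selected),
        )
        selected.append(best)
    return selected

# Toy example: a solution is a set of open facilities, and the distance
# between two solutions is the size of their symmetric difference.
sols = [{1, 2, 3}, {1, 2, 4}, {7, 8, 9}, {1, 8, 9}, {2, 3, 4}]
print(select_diverse(sols, 2, lambda a, b: len(a ^ b)))  # [{1, 2, 3}, {7, 8, 9}]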
 
This paper proposes two heuristic algorithms for finding fixed-length circuits and cycles in undirected edge-weighted graphs. It focusses particularly on a largely unresearched practical application where we are seeking attractive round trips for pedestrians and joggers in urban street networks. Our first method is based on identifying suitable pairs of paths that are combined to form a solution; our second is based on local search techniques. Both algorithms display high levels of accuracy, producing solutions within just a few meters of the target. Run times for the local search algorithm are also short, with solutions in large cities often being found in less than one second.
 
Overall framework of the solution methodology
Tour divergence to add a new node to the vehicle’s service area
Network augmentation to determine the service area of each vehicle
Implementation of the rollback procedure
City of Dallas core area network
On-Demand Mobility Services (ODMS) have gained considerable popularity over the past few years. Travelers use mobile phone applications to easily request a ride, update a trip itinerary and pay the ride fare. This paper describes a novel methodology for integrated ride matching and vehicle routing for ODMS with ridesharing and transfer options. The methodology adopts a hybrid heuristic approach, which enables solving medium to large problem instances in near real-time. The solution of this problem is a set of routes for vehicles and a ride match for each passenger. The heuristic (1) promptly responds to individual ride requests, and (2) periodically re-evaluates the generated solutions and recommends modifications to enhance the overall solution quality by increasing the number of served passengers and the total profit of the system. The results of a set of experiments considering hypothetical and real-world networks show that the methodology can provide efficient solutions while satisfying the real-time execution requirements. In addition, the results show that the Transportation Network Company (TNC) could serve more passengers and achieve higher profitability if more passengers were willing to rideshare or transfer. Also, activating a rollback procedure increases the number of served passengers and the associated profits.
 
Illustration of generating a fixed set. The input is $\mathcal{P}_{k}$ (top left), a set of four randomly selected solutions from the six solutions in the PF, and a base solution B (bottom left). The value on a node of B represents the number of occurrences of that node in elements of $\mathcal{P}_{k}$. The nodes on the right-hand side present the corresponding fixed set of size four
Approximations of Pareto fronts obtained by the MONSD, GRASP and FSS for different problem instances
Illustration of the hypervolume indicator. The value of the indicator is equal to the area of the dotted region
Illustration of the convergence speed of the GRASP and FSS algorithms. The convergence is shown based on the number of generated solutions and the value of the hypervolume indicator. The convergence speed is shown for different values of the FSS parameters
The Fixed Set Search (FSS) is a novel metaheuristic that adds a learning mechanism to the Greedy Randomized Adaptive Search Procedure (GRASP). In recent publications, its efficiency has been shown on different types of combinatorial optimization problems like routing, machine scheduling and covering. In this paper the FSS is adapted to multi-objective problems for finding Pareto front approximations. This adaptation is illustrated for the bi-objective Minimum Weighted Vertex Cover Problem (MWVCP). In this work, a simple and effective bi-objective GRASP algorithm for the MWVCP is developed in the first stage. One important characteristic of the proposed GRASP is that it avoids the use of weighted sums of objective functions in the local search and the greedy algorithm. In the second stage, the bi-objective GRASP is extended to the FSS by adding a learning mechanism adapted to multi-objective problems. The conducted computational experiments show that the proposed FSS and GRASP algorithms significantly outperform existing methods for the bi-objective MWVCP. To fully evaluate the learning mechanism of the FSS, it is compared to the underlying GRASP algorithm on a wide range of performance indicators related to convergence, distribution, spread and cardinality.
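The fixed-set construction illustrated in the figure above can be sketched as follows: count how often each element occurs in the randomly selected solutions and fix the most frequent elements of the base solution. This is a rough sketch of the FSS learning mechanism under assumed set-based solutions; the paper's multi-objective adaptation involves further details.

from collections import Counter

def generate_fixed_set(base_solution, selected_solutions, size):
    """Fix the elements of the base solution that occur most frequently
    in the randomly selected high-quality solutions (tie-breaking here
    is arbitrary)."""
    counts = Counter()
    for sol in selected_solutions:
        counts.update(sol)
    ranked = sorted(base_solution, key=lambda e: counts[e], reverse=True)
    return set(ranked[:size])

# Toy example with vertex-cover-like solutions given as sets of nodes.
P_k = [{1, 2, 5}, {2, 3, 5}, {1, 2, 4}, {2, 4, 5}]
base = {1, 2, 4, 5}
# Nodes 2 and 5 are most frequent; the tie between 1 and 4 is broken arbitrarily.
print(generate_fixed_set(base, P_k, 3))

Each subsequent GRASP construction then keeps the fixed elements and completes the solution greedily and randomly around them, which is what turns GRASP into the FSS.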
 
The quadratic unconstrained binary optimization (QUBO) problem belongs to the NP-hard complexity class of problems and has been the subject of intense research since the 1960s. Many problems in various areas of research can be reformulated as QUBO problems, and several reformulated instances have sparse matrices. Thus, speeding up implementations of methods for solving the QUBO problem can benefit all of those problems. Among such methods, Tabu Search (TS) has been particularly successful. In this work, we propose data structures to speed up TS implementations when the instance matrix is sparse. Our main result consists in employing a compressed sparse row representation of the instance matrix, and priority queues for conducting the search over the solution space. While our literature review indicates that current TS procedures for QUBO take linear time on the number of variables to execute one iteration, our proposed structures may allow better time complexities than that, depending on the sparsity of the instance matrix. We show, by means of extensive computational experiments, that our techniques can significantly decrease the processing time of TS implementations, when solving QUBO problem instances with matrices of relatively high sparsity. To assess the quality of our results regarding more intricate procedures, we also experimented with a Path Relinking metaheuristic implemented with the TS using our techniques. This experiment showed that our techniques can allow such metaheuristics to become more competitive.
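A minimal sketch of the compressed-sparse-row idea is shown below: the one-flip objective change for every variable is computed by touching only the stored nonzeros of the instance matrix. The convention that each pairwise coefficient is stored in both triangles of the matrix, and the omission of the priority queues used for the actual tabu search, are simplifications for illustration.

import numpy as np
from scipy.sparse import csr_matrix

def flip_deltas(Q_csr, x):
    """Objective change of flipping each bit of x for a QUBO whose linear
    coefficients sit on the diagonal and whose pairwise coefficient c_ij
    is stored at both (i, j) and (j, i) of the sparse matrix."""
    n = len(x)
    deltas = np.empty(n)
    for i in range(n):
        start, end = Q_csr.indptr[i], Q_csr.indptr[i + 1]
        cols, vals = Q_csr.indices[start:end], Q_csr.data[start:end]
        diag, acc = 0.0, 0.0
        for j, q in zip(cols, vals):
            if j == i:
                diag = q            # linear term of variable i
            else:
                acc += q * x[j]     # only nonzero couplings are visited
        deltas[i] = (1 - 2 * x[i]) * (diag + acc)
    return deltas

# Toy instance: f(x) = 2*x0 + 3*x1 - x2 - 4*x0*x1 under the convention above.
Q = csr_matrix(np.array([[2., -4., 0.],
                         [-4., 3., 0.],
                         [0., 0., -1.]]))
x = np.array([1, 0, 0])
print(flip_deltas(Q, x))  # [-2. -1. -1.]

A tabu search iteration would then pick the best non-tabu flip from these deltas and update only the entries of variables adjacent to the flipped one, which is where the sparsity pays off.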
 
Document vectorization with an appropriate encoding scheme is an essential component in various document processing tasks, including text document classification, retrieval, or generation. Training a dedicated document embedding for a specific domain may require sufficiently large data and resources. This motivates us to propose a novel document representation scheme with two main components. First, we train TD2V, a generic pre-trained document embedding for English documents, from more than one million tweets on Twitter. Second, we propose a domain adaptation process with adversarial training to adapt TD2V to different domains. To classify a document, we use the ranked list of its similar documents using query expansion techniques, either Average Query Expansion or Discriminative Query Expansion. Experiments on datasets from different online sources show that by using TD2V only, our method can classify documents with better accuracy than existing methods. By applying the adversarial adaptation process, we can further boost accuracy on the BBC, BBCSport, Amazon4, and 20NewsGroup datasets. We also evaluate our method on the specific domain of sensitivity classification and achieve accuracy higher than 95% even with a short text fragment of 1024 characters on 5 datasets: Snowden, Mormon, Dyncorp, TM, and Enron.
 
Paraphrase identification plays an important role in various natural language processing tasks such as machine translation, bilingual information retrieval, plagiarism detection, etc. With the development of information technology and the Internet, the requirement for textual comparison is not only within the same language but also across many different language pairs. Especially in Vietnamese, detecting paraphrases in English–Vietnamese sentence pairs is in high demand because English is one of the most popular foreign languages in Vietnam. However, in-depth studies on cross-language paraphrase identification between English and Vietnamese are still limited. Therefore, in this paper, we propose a method to identify English–Vietnamese cross-language paraphrase cases using hybrid feature classes. These classes are calculated using a fuzzy-based method as well as a siamese recurrent model, and then combined to get the final result with a mathematical formula. The experimental results show that our model achieves an F-measure of 87.4%.
 
Examples of intention and non-intention posts
Method CroDoNB for cross-domain intention detection
Training time comparison
In this paper, we present a method to identify forum posts expressing user intentions in online discussion forums. The results of this task, for example buying intentions, can be exploited for targeted advertising or other marketing tasks. Our method utilizes labeled data from other domains to help the learning task in the target domain by using a Naive Bayes (NB) framework to combine the data statistics. Because the distributions of data vary from domain to domain, it is important to adjust the contributions of different data sources when constructing the learning model in order to achieve accurate results. Here, we propose to adjust the parameters of the NB classifier by optimizing an objective, which is equivalent to maximizing the between-class separation, using stochastic gradient descent. Experimental results show that our method outperforms several competitive baselines on a benchmark dataset consisting of forum posts from four domains: Cellphone, Electronics, Camera, and TV. In addition, we explore the possibility of combining NB posteriors computed during the optimization process with another classifier, namely Support Vector Machines (SVMs). Experimental results show the usefulness of the optimized NB class posteriors when used as features for SVMs in the cross-domain setting.
 
a An example of a social network as a directed edge-labeled graph; b a simple regular path query as an automaton
Comparison of the true cost and the estimated cost. a #Query number on Alibaba graph, b #Query number on Yago graph, c #Query number on Freebase graph, d #Query number on Synthetic graph
Accuracy evaluation with varied query path length
Comparing the response time of parallel RPQs evaluation on large graphs
Evaluating the response time of parallel answering RPQs with varied graph size
Regular path queries (RPQs) are widely used on graphs; the answer to an RPQ is a set of tuples of nodes connected by paths corresponding to a given regular expression. The traditional automata-based approach for evaluating RPQs is limited by the explosion of graph size, which makes graph searching costly in memory space and response time. Recently, a cost-based optimization technique using rare labels has been proved effective when applied to large graphs. However, there is still room for improvement, because the rare labels in the graph and/or the query are coarse information which cannot guarantee the minimum searching cost all the time. This motivates us to find a new approach using fine-grained information to correctly estimate the searching cost, which helps improve the performance of RPQ evaluation. For example, by using the estimated searching cost, we can decompose an RPQ into small subqueries or separate multiple RPQs into small batches of queries for efficient parallel evaluation. In this paper, we present a novel approach for estimating the searching cost of RPQs on large graphs with cost functions based on combinations of the searching cost of unit-subqueries (i.e. every smallest possible query). We extensively evaluated our method on real-world datasets including Alibaba, Yago, and Freebase as well as synthetic datasets. Experimental results show that our estimation method obtains high accuracy, approximately 87% on average. Moreover, two comparisons with automata-based and rare-label-based approaches demonstrate that our approach outperforms traditional ones.
 
Example instance with 4 elements and 5 features (a), and two feasible solutions for it: $S_1$ (b), and (c). Selected elements and features in common are highlighted with solid background color
The selection of individuals with similar characteristics from a given population has always been a matter of interest in several scientific areas: data privacy, genetics, art, among others. This work is focused on the maximum intersection of k-subsets problem (kMIS). This problem tries to find a subset of k individuals with the maximum number of features in common from a given population and a set of relevant features. The research presents a Greedy Randomized Adaptive Search Procedure (GRASP) where the local improvement is replaced by a complete Tabu Search metaheuristic with the aim of further improving the quality of the obtained solutions. Additionally, a novel representation of the solution is considered to reduce the computational effort. The experimental comparison carefully analyzes the contribution of each part of the algorithm to the final results and performs a thorough comparison with the state-of-the-art method. Results, supported by non-parametric statistical tests, confirm the superiority of the proposal.
 
Updating the piecewise linear cost function $f'_t$
A piecewise linear cost function
This paper considers the problem of scheduling a set of time- and energy-constrained preemptive tasks on a discrete time horizon. At each time period, the total energy required by the tasks that are in process can be provided by two energy sources: a reversible one and a non-reversible one. The non-reversible energy source can provide an unlimited amount of energy for a given period but at the expense of a time-dependent piecewise linear cost. The reversible energy source is a storage resource. The goal is to schedule each task preemptively inside its time window and to dispatch the required energy to the sources at each time period, while satisfying the reversible source capacity constraints and minimizing the total cost. We propose a mixed integer linear program of pseudo-polynomial size to solve this NP-hard problem. Acknowledging the limits of this model for problem instances of modest size, we propose an iterative decomposition matheuristic to compute an upper bound. The method relies on an efficient branch-and-price method or on a local search procedure to solve the scheduling problem without storage. The energy source allocation problem for a fixed schedule can in turn be solved efficiently by dynamic programming as a particular lot-sizing problem. We also propose a lower bound obtained by solving the linear programming relaxation of a new extended formulation by column generation. Experimental results show the quality of the bounds compared to the ones obtained using the mixed integer linear program.
 
Cumulative distributions of running times of the a exact and b heuristic algorithms for DIMACS instances
Given graph G, a k-vertex-critical subgraph (k-VCS) H⊆G is a subgraph with chromatic number χ(H)=k, for which no vertex can be removed without decreasing its chromatic number. The main motivation for finding a k-VCS is to prove k is a lower bound on χ(G). A graph may have several k-VCSs, and the k-Vertex-Critical Subgraph Problem asks for one with the least possible vertices. We propose a new heuristic for this problem. Differently from typical approaches that modify candidate subgraphs on a vertex-by-vertex basis, it generates new subgraphs by a heuristic that optimizes for maximum edges. We show this strategy has several advantages, as it allows a greater focus on smaller subgraphs for which computing χ is less of a bottleneck. Experimentally the proposed method matches or improves previous results in nearly all cases, and more often finds solutions that are provenly k-VCSs. We find new best k-VCSs for several DIMACS instances, and further improve known lower bounds for the chromatic number in two open instances, also fixing their chromatic numbers by matching existing upper bounds.
 
This paper addresses the vehicle routing and driver scheduling problem of finding a low-cost route and stoppage schedule for long-haul point-to-point full-load trips with intermediate stops due to refueling needs and driver hours-of-service (HOS) regulatory restrictions. This is an important problem for long-haul truck drivers because, in practice, regulatory driving limits often do not coincide with the availability of stoppage alternatives for quick rest, meal, overnight, or weekly downtime required stops. The paper presents a methodology and algorithm to pick routes that optimize stoppages within the HOS constraints, an important factor for both highway safety and driver productivity. A solution for this variant of the vehicle routing and truck driver scheduling problem (VRTDS-HOS) that is fast enough to potentially be used in real time is proposed by modeling possible stoppage configurations as nodes in an iteratively built multi-dimensional state-space graph and by using heuristics to decrease processing time when searching for the lowest-cost path in that graph. Individual nodes in the graph are characterized by spatial, temporal, and stoppage attributes, and are expanded sequentially to search for low-cost paths between the origin and the destination. Within this multi-dimensional state-space graph, the paper proposes two heuristics applied to a shortest-path algorithmic solution based on the $A^*$ algorithm to increase processing speed enough to potentially permit real-time usage. An illustrative application to Brazilian regulations is provided. Results were successful and are reported together with sensitivity analyses comparing alternative routes and the processing speeds of the different heuristics.
 
Road intersections are one of the main causes of traffic jams, since vehicles need to stop and wait for their turn to go. Scenarios that only consider autonomous vehicles can minimize this problem using intelligent systems that manage the time at which each vehicle will pass across the intersection. This paper proposes a mathematical model and a heuristic that optimize this management. The efficiency of this approach is demonstrated using traffic simulations, with scenarios of different complexities, and metrics representing the arrival time, $\mathrm{CO}_2$ emission and fuel consumption. The results show that the present approach is scalable, maintaining its performance even in complex real scenarios. Moreover, its execution time stays within milliseconds, which suggests this approach as a candidate for dealing with real-time and dynamic scenarios.
 
The generation of interest in business education among students can take place at all levels of our educational system. This paper seeks to discuss issues of technology integration in teaching business education courses, technology and business education, and factors militating against an effective and efficient teaching process. The paper also outlines issues and benefits of business education, with a focus on the instructional materials and aids for teaching business education courses, and on the problems of the business education curriculum and its implementation in the Nigerian education system.
 
Combinatorial Auctions (CAs) allow the participants to bid on a bundle of items and can result in more cost-effective deals than traditional auctions if the goods are complementary. However, solving the Winner Determination Problem (WDP) in CAs is an NP-hard problem. Since Evolutionary Algorithms (EAs) can find good solutions in polynomial time within a huge search space, the use of EAs has become quite suitable for solving this type of problem. In this paper, we introduce a new Constraint-Guided Evolutionary Algorithm (CGEA) for the WDP. It employs a penalty component to represent each constraint in the fitness function and introduces new variation operators that consider each package value and each type of violated constraint to induce the generation of feasible solutions. CGEA also presents a survivor selection operator that maintains the exploration versus exploitation balance in the evolutionary process. The performance of CGEA is compared with that of three other evolutionary algorithms to solve a WDP in a Combinatorial Reverse Auction (CRA) of electricity generation and transmission line assets. Each of the algorithms compared employs different methods to deal with constraints. They are tested and compared on several problem instances. The results show that CGEA is competitive and results in better performance in most cases.
 
Unsplittable flow problems cover a wide range of telecommunication and transportation problems and their efficient resolution is key to a number of applications. In this work, we study algorithms that can scale up to large graphs and important numbers of commodities. We present and analyze in detail a heuristic based on the linear relaxation of the problem and randomized rounding. We provide empirical evidence that this approach is competitive with state-of-the-art resolution methods either by its scaling performance or by the quality of its solutions. We provide a variation of the heuristic which has the same approximation factor as the state-of-the-art approximation algorithm. We also derive a tighter analysis for the approximation factor of both the variation and the state-of-the-art algorithm. We introduce a new objective function for the unsplittable flow problem and discuss its differences with the classical congestion objective function. Finally, we discuss the gap in practical performance and theoretical guarantees between all the aforementioned algorithms.
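A minimal sketch of the randomized-rounding step is given below: given a path decomposition of each commodity's fractional LP flow, one path is drawn per commodity with probability proportional to its flow value, and the whole demand is routed on it. Solving the linear relaxation and decomposing it into paths are assumed to be done beforehand; the data layout is illustrative.

import random
from collections import defaultdict

def randomized_rounding(fractional_paths, rng=random):
    """fractional_paths maps each commodity to a list of (path_edges, flow) pairs
    whose flow values sum to the commodity's demand (from the LP relaxation).
    One path is kept per commodity, drawn with probability proportional to its flow."""
    chosen, load = {}, defaultdict(float)
    for commodity, paths in fractional_paths.items():
        weights = [flow for _, flow in paths]
        path, _ = rng.choices(paths, weights=weights, k=1)[0]
        chosen[commodity] = path
        demand = sum(weights)
        for edge in path:
            load[edge] += demand  # the whole demand is unsplittable
    return chosen, dict(load)

# Toy data on a hypothetical graph with nodes s, a, b, t.
frac = {
    "k1": [((("s", "a"), ("a", "t")), 0.7), ((("s", "b"), ("b", "t")), 0.3)],
    "k2": [((("s", "b"), ("b", "t")), 1.0)],
}
routes, edge_load = randomized_rounding(frac)
print(routes)
print(edge_load)

The resulting edge loads can then be compared against the capacities to measure congestion, which is the quantity the heuristic and the approximation analysis discussed above seek to control.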
 
Metaheuristics are a class of approximate methods, which are designed to attack hard combinatorial optimization problems. In metaheuristics, a neighborhood is defined by the specified move operation for a solution. The neighborhood plays an essential role in the performance of its algorithms. It is important to capture the statistical properties of neighborhoods. In this paper, we present a theoretical analysis of neighborhoods for a wide class of combinatorial optimization problems, instead of just for restricted instances. First, we give a probabilistic model which allows us to compute statistics for various types of neighborhoods. Here we introduce an approach in which the solution space (the landscape) for a wide class of combinatorial optimization problems can be approximated by an AR(1) process, which can be used to capture the statistics of the solution space. The theoretical results obtained from our proposed model closely match empirically observed behavior. Second, we present an application in which we use our probabilistic model of neighborhoods.
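For reference, the AR(1) view of a landscape sampled by a random walk can be written as follows; these are the standard relations, shown with generic notation that may differ from the paper's:

$$f_{t+1} \;=\; \bar{f} + \rho\,(f_t - \bar{f}) + \varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}(0,\sigma_\varepsilon^2),$$

$$\rho(s) \;=\; \rho(1)^{s}, \qquad \ell \;=\; -\frac{1}{\ln \rho(1)},$$

where $f_t$ is the objective value at step $t$ of a random walk over the neighborhood graph, $\rho(s)$ the autocorrelation of the walk at lag $s$, and $\ell$ the correlation length of the landscape. A large $\ell$ indicates a smooth landscape with respect to the chosen move operator, which is the kind of neighborhood statistic the probabilistic model above aims to predict.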
 
This paper introduces a multi-level (m-lev) mechanism into Evolution Strategies (ESs) in order to address a class of global optimization problems that could benefit from fine discretization of their decision variables. Such problems arise in engineering and scientific applications, which possess a multi-resolution control nature, and thus may be formulated either by means of low-resolution variants (providing coarser approximations with presumably lower accuracy for the general problem) or by high-resolution controls. A particular scientific application concerns practical Quantum Control (QC) problems, whose targeted optimal controls may be discretized to increasingly higher resolution, which in turn carries the potential to obtain better control yields. However, state-of-the-art derivative-free optimization heuristics for high-resolution formulations nominally call for an impractically large number of objective function calls. Therefore, an effective algorithmic treatment for such problems is needed. We introduce a framework with an automated scheme to facilitate guided-search over increasingly finer levels of control resolution for the optimization problem, whose on-the-fly learned parameters require careful adaptation. We instantiate the proposed m-lev self-adaptive ES framework by two specific strategies, namely the classical elitist single-child (1+1)-ES and the non-elitist multi-child derandomized $(\mu_W,\lambda)$-sep-CMA-ES. We first show that the approach is suitable by simulation-based optimization of QC systems which were heretofore viewed as too complex to address. We also present a laboratory proof-of-concept for the proposed approach on a basic experimental QC system objective.
 
The schematic diagram of a coal mine
Framework of the two-phase approach
We consider a coal mine that extracts raw coal by a set of coal mining equipment (CME), separates out multiple products by a set of coal washing equipment, and delivers the products through a fleet of trains over a multi-period horizon. The equipment requires a daily preventive maintenance (PM) and each CME is subject to random failures and repairs. We study a joint PM, production, and delivery problem that determines when to perform the PM and how to manage coal production and delivery in each period, to minimize the expected total cost. We formulate a multi-period stochastic optimization model that delicately integrates the static PM decisions with the adaptive production-delivery decisions, which is extremely difficult to solve due to CME’s decision-dependent operating status. We propose a novel two-phase solution approach to overcome this difficulty. Phase 1 firstly determines the PM decisions using a scenario-based variable neighborhood search algorithm. Using the PM solution and the resultant set of scenarios as input parameters, Phase 2 adaptively determines the production-delivery decisions using a forward-looking algorithm in a rolling horizon manner. We show numerically that our approach consistently produces good-quality and robust solutions while preserving tractability for varying problem instances.
 
The variable ordering heuristic is an important module in algorithms dedicated to solving Constraint Satisfaction Problems (CSP), as it impacts the efficiency of exploring the search space and the size of the search tree. It also exploits, often implicitly, the structure of the instances. In this paper, we propose Conflict-History Search (CHS), a dynamic and adaptive variable ordering heuristic for CSP solving. It is based on the search failures and considers the temporality of these failures throughout the solving steps. The exponential recency weighted average is used to estimate the evolution of the hardness of constraints throughout the search. The experimental evaluation on XCSP3 instances shows that integrating CHS into solvers based on MAC (Maintaining Arc Consistency) and BTD (Backtracking with Tree Decomposition) achieves competitive results and improvements compared to the state-of-the-art heuristics. Beyond the decision problem, we show empirically that the solving of the constraint optimization problem (COP) can also take advantage of this heuristic.
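The exponential recency weighted average mentioned above has a simple form; the sketch below applies it to per-constraint hardness scores. The unit reward and fixed step size are illustrative assumptions, not the exact reward and decay scheme of CHS.

def erwa_update(score, reward, alpha):
    """Exponential recency weighted average: recent observations weigh more."""
    return (1.0 - alpha) * score + alpha * reward

# Hypothetical use inside a CSP solver: when constraint c causes a failure,
# refresh its hardness estimate; variables are later ordered by the summed
# scores of the constraints in which they appear (a rough sketch of CHS).
hardness = {"c1": 0.0, "c2": 0.0}
alpha = 0.4
for failed in ["c1", "c1", "c2", "c1"]:
    hardness[failed] = erwa_update(hardness[failed], reward=1.0, alpha=alpha)
print(hardness)  # {'c1': 0.784, 'c2': 0.4}: c1 has failed more recently and more often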
 
Local search is a fundamental tool in the development of heuristic algorithms. A neighborhood operator takes a current solution and returns a set of similar solutions, denoted as neighbors. In best improvement local search, the best of the neighboring solutions replaces the current solution in each iteration. On the other hand, in first improvement local search, the neighborhood is only explored until any improving solution is found, which then replaces the current solution. In this work we propose a new strategy for local search that attempts to avoid low-quality local optima by selecting in each iteration the improving neighbor that has the fewest possible attributes in common with local optima. To this end, it uses inequalities previously used as optimality cuts in the context of integer linear programming. The novel method, referred to as delayed improvement local search, is implemented and evaluated using the travelling salesman problem with the 2-opt neighborhood and the max-cut problem with the 1-flip neighborhood as test cases. Computational results show that the new strategy, while slower, obtains better local optima compared to the traditional local search strategies. The comparison is favourable to the new strategy in experiments with fixed computation time or with a fixed target.
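A minimal sketch of the selection rule, for a 1-flip neighborhood over binary variables, is given below: among the improving flips, pick the one whose new value agrees with the fewest stored local optima. Counting agreements this way is an illustrative simplification of the optimality-cut-based rule described above.

def delayed_improvement_flip(x, gains, local_optima):
    """Among improving 1-flips, pick the one whose resulting value of the
    flipped variable agrees with the fewest stored local optima
    (a sketch of the 'fewest common attributes' idea)."""
    improving = [i for i, g in enumerate(gains) if g > 0]
    if not improving:
        return None  # the current solution is a local optimum

    def agreement(i):
        new_val = 1 - x[i]
        return sum(1 for opt in local_optima if opt[i] == new_val)

    return min(improving, key=agreement)

x = [0, 1, 1, 0]
gains = [2.0, -1.0, 0.5, 3.0]        # objective gain of flipping each bit
opts = [[1, 1, 1, 1], [1, 0, 1, 1]]  # previously found local optima
print(delayed_improvement_flip(x, gains, opts))  # 2: that flip disagrees with both optima

Note that the rule deliberately ignores the size of the improvement, which is why it tends to be slower per iteration but, as reported above, ends in better local optima.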
 
This paper focuses on designing a diameter-constrained network where the maximum distance between any pair of nodes is bounded. The objective considered is to minimise a weighted sum of the total length of the links followed by the total length of the paths between the pairs of nodes. First, the problem is formulated in terms of Mixed Integer Linear Programming and Constraint Programming to provide two alternative exact approaches. Then, an adaptive large neighbourhood search (LNS) is proposed to overcome memory and runtime limitations of the exact methods on large instances. This approach is based on computing an initial solution and repeatedly improving it by solving relatively small subproblems. We investigate various alternatives for finding an initial solution and propose two different heuristics for selecting subproblems. We also introduce a tighter lower bound, which demonstrates the quality of the solutions obtained by the proposed approach. The performance of the proposed approach is assessed using three real-world network topologies from Ireland, the UK and Italy, which are taken from national telecommunication operators and are used to design a transparent optical core network. Our results demonstrate that the LNS approach is scalable to large networks and can compute very high quality solutions that are close to optimal.
 
Illustration of a feasible solution to the MMpLHP on TSPLIB instance eil51 with $n = 51$, $p = 11$ and $U = 5$
Framework of selection hyper-heuristic based on Cowling et al. (2000) and Drake et al. (2012)
Representation of a given solution according to the $(p+1)$-cycles encoding
Solutions obtained by HH_GREEDY for instance eil51 with $\alpha = 0.8$ under different scenarios
Average distribution of heuristics in HH_GREEDY (in terms of percentage)
This paper addresses a variant of the many-to-many hub location-routing problem. Given an undirected edge-weighted complete graph $G = (V, E)$, this problem consists in finding a subset of V designated as hub nodes, partitioning all the nodes of V into cycles such that each cycle has exactly one hub node, and determining a Hamiltonian cycle on the subgraph induced by the hub nodes. The objective is to minimize the total cost resulting from all these cycles. This problem is referred to as the Many-to-Many p-Location-Hamiltonian Cycle Problem (MMpLHP) in this paper. To solve this problem, one has to deal with aspects of subset selection, grouping, and permutation. The characteristics of MMpLHP change according to the values of its constituent parameters. Hence, this problem can be regarded as a general problem which encompasses a diverse set of problems originating from different combinations of values of its constituent parameters. Such a general problem can be tackled effectively by suitably selecting and combining several different heuristics, each of which caters to a different characteristic of the problem. Keeping this in mind, we have developed a simple multi-start hyper-heuristic approach for MMpLHP. Further, we have investigated two different selection mechanisms within the proposed approach. Experimental results and their analysis clearly demonstrate the superiority of our approach over the best approaches known so far for this problem.
 
Comparison of revenues
In this paper we present a novel approach to the dynamic pricing problem for hotel businesses. It includes disaggregation of the demand into several categories, forecasting, elastic demand simulation, and a mathematical programming model with concave quadratic objective function and linear constraints for dynamic price optimization. The approach is computationally efficient and easy to implement. In computer experiments with a hotel data set, the hotel revenue is increased by about 6% on average in comparison with the actual revenue gained in a past period, where the fixed price policy was employed, subject to an assumption that the demand can deviate from the suggested elastic model. The approach and the developed software can be a useful tool for small hotels recovering from the economic consequences of the COVID-19 pandemic.
 
The multiple knapsack problem with grouped items aims to maximize rewards by assigning groups of items among multiple knapsacks, without exceeding knapsack capacities. Either all items in a group are assigned or none at all. We study the bi-criteria variation of the problem, where capacities can be exceeded and the second objective is to minimize the maximum exceeded knapsack capacity. We propose approximation algorithms that run in pseudo-polynomial time and guarantee that rewards are not less than the optimal solution of the capacity-feasible problem, with a bound on exceeded knapsack capacities. The algorithms have different approximation factors, where no knapsack capacity is exceeded by more than 2, 1, and 1/2 times the maximum knapsack capacity. The approximation guarantee can be improved to 1/3 when all knapsack capacities are equal. We also prove that for certain cases, solutions obtained by the approximation algorithms are always optimal: they never exceed knapsack capacities. To obtain capacity-feasible solutions, we propose a binary-search heuristic combined with the approximation algorithms. We test the performance of the algorithms and heuristics in an extensive set of experiments on randomly generated instances and show they are efficient and effective, i.e., they run reasonably fast and generate good quality solutions.
 
Platelets are valuable, but highly perishable, blood components used in the treatment of, among others, viral dengue fever, blood-related illness, and post-chemotherapy following cancer. Given the short shelf-life of 3–5 days and a highly volatile supply and demand pattern, platelet inventory allocation is a challenging task. This is especially prevalent in emerging economies where demand variability is more pronounced due to neglected tropical diseases, and a perpetual shortage of supply. The consequences of which have given rise to an illegal ‘red market’. Motivated by experience at a regional hospital in India, we investigate the problem of platelet allocation among three priority-differentiated demand streams. Specifically we consider a central hospital which, in addition to internal emergency and non-emergency requests, faces external demand from local clinics. We analyze the platelet allocation decision from a social planner’s perspective and propose an allocation heuristic based on revenue management (RM) principles. The objective is to maximize total social benefit in a highly supply-constrained environment. Using data from the aforementioned Indian hospital as a case study, we conduct a numerical simulation and sensitivity analysis to evaluate the allocation heuristic. The performance of the RM-based policy is evaluated against the current sequential first come, first serve policy and two fixed proportion-based rationing policies. It is shown that the RM-based policy overall dominates, serves patients with the highest medical urgency better, and can curtail patients’ need to procure platelets from commercial sources.
 
Ribonucleic acid (RNA) molecules play informational, structural, and metabolic roles in all living cells. RNAs are chains of nucleotides containing bases {A, C, G, U} that interact via base pairings to determine higher order structure and functionality. The RNA folding problem is to predict one or more secondary RNA structures from a given primary sequence of bases. From a mathematical modeling perspective, solutions to the RNA folding problem come from minimizing the thermodynamic free energy of a structure by selecting which bases will be paired, subject to a set of constraints. Here we report on a Quadratic Unconstrained Binary Optimization (QUBO) modeling paradigm that fits naturally with the parameters and constraints required for RNA folding prediction. Three QUBO models are presented along with a hybrid metaheuristic algorithm. Extensive testing results show a strong positive correlation with benchmark results.
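To make the modeling idea concrete, here is a deliberately tiny QUBO sketch (not one of the paper's three models): a binary variable per candidate base pair earns a negative diagonal reward, and any two pairs sharing a base receive a large positive penalty so each base pairs at most once. Crossing pairs (pseudoknots) and minimum loop lengths, which a realistic model must handle, are ignored here, and the sequence is hypothetical.

```python
from itertools import combinations, product

seq = "GCAU"
complement = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G")}
pairs = [(i, j) for i, j in combinations(range(len(seq)), 2)
         if (seq[i], seq[j]) in complement]

P = 10.0  # assumed penalty weight, larger than any single pairing reward
Q = {}
for a in range(len(pairs)):
    Q[(a, a)] = -1.0                       # reward for forming a valid pair
    for b in range(a + 1, len(pairs)):
        if set(pairs[a]) & set(pairs[b]):  # two pairs share a base: penalize
            Q[(a, b)] = P

def energy(x):
    # QUBO objective: sum of Q[a,b] * x_a * x_b over all stored entries
    return sum(v * x[a] * x[b] for (a, b), v in Q.items())

best = min(product([0, 1], repeat=len(pairs)), key=energy)
print([pairs[a] for a, bit in enumerate(best) if bit], energy(best))
```

Brute force is used only because the toy instance has a handful of variables; the paper instead applies a hybrid metaheuristic to the QUBO.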
 
In this paper, we describe a matheuristic for the stochastic facility location problem that determines the location and size of storage facilities, the quantities of various types of supplies stored in each facility, and the assignment of demand locations to the open facilities, so as to minimize unmet demand and response time in lexicographic order. We assume uncertainty about demands, inventory spoilage, and transportation network availability. A good example where such a formulation makes sense is the problem of pre-positioning emergency supplies, which aims to increase disaster preparedness by making relief items readily available to people in need. The matheuristic employs iterated local search techniques to look for good location and inventory configurations and uses CPLEX to optimize the assignments. Numerical experiments on a number of case studies and random instances of the pre-positioning problem demonstrate the effectiveness and efficiency of the matheuristic, which is shown to be particularly useful for tackling larger instances that are intractable for exact solvers. The matheuristic therefore contributes to the literature on heuristic approaches to facility location under uncertainty, can be used to further study this particular variant of the facility location problem, and can also support humanitarian logisticians in their planning of pre-positioning strategies.
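A minimal ILS skeleton under stated assumptions: the open-facility decisions are perturbed and locally searched, while the assignment subproblem is delegated to a solver; here a nearest-open-facility rule and a fixed opening cost stand in for the CPLEX assignment model and the actual lexicographic objective. All data and the OPEN_COST parameter are hypothetical.

```python
import random

OPEN_COST = 3.0  # hypothetical fixed cost per opened facility

def total_cost(open_set, dist):
    # stand-in for the exact assignment solve: each demand uses its closest open facility
    if not open_set:
        return float("inf")
    assign = sum(min(dist[d][f] for f in open_set) for d in range(len(dist)))
    return OPEN_COST * len(open_set) + assign

def local_search(sol, dist, n_fac):
    improved = True
    while improved:
        improved = False
        for f in range(n_fac):
            trial = sol ^ {f}                       # open or close one facility
            if total_cost(trial, dist) < total_cost(sol, dist):
                sol, improved = trial, True
    return sol

def iterated_local_search(dist, n_fac, iters=100, seed=0):
    rng = random.Random(seed)
    best = local_search({rng.randrange(n_fac)}, dist, n_fac)
    for _ in range(iters):
        cand = local_search(best ^ {rng.randrange(n_fac)}, dist, n_fac)  # perturb + descend
        if total_cost(cand, dist) < total_cost(best, dist):
            best = cand
    return best, total_cost(best, dist)

dist = [[1, 4, 6], [5, 2, 3], [4, 4, 1], [2, 5, 3]]  # hypothetical demand-to-facility distances
print(iterated_local_search(dist, n_fac=3))
```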
 
A general enhancement of the Benders’ decomposition (BD) algorithm can be achieved through the improved use of large neighbourhood search heuristics within mixed-integer programming solvers. While mixed-integer programming solvers are endowed with an array of large neighbourhood search heuristics, few, if any, have been designed for BD. Further, the use of large neighbourhood search heuristics is typically limited to finding solutions to the BD master problem. Given the lack of general frameworks for BD, only ad hoc approaches have been developed to enhance the ability of BD to find high-quality primal feasible solutions through the use of large neighbourhood search heuristics. The general BD framework of SCIP has been extended with a trust-region-based heuristic and a general enhancement for large neighbourhood search heuristics. The general enhancement employs BD to solve the auxiliary problems of all large neighbourhood search heuristics in order to improve the quality of the identified solutions. The computational results demonstrate that the trust-region heuristic and the general large neighbourhood search enhancement technique accelerate the improvement of the primal bound when applying BD.
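A minimal sketch of the trust-region restriction that such a heuristic imposes around an incumbent 0-1 master solution x*: the Hamming distance to x* is bounded by a radius Delta via the linear constraint sum_{i: x*_i=0} x_i + sum_{i: x*_i=1} (1 - x_i) <= Delta. The code only builds the constraint as coefficients and a right-hand side; wiring it into a BD master problem is solver-specific and not shown.

```python
def trust_region_cut(incumbent, delta):
    """Return (coeffs, rhs) for sum_i coeffs[i] * x_i <= rhs, the trust-region cut
    around a 0-1 incumbent (constants of the (1 - x_i) terms moved to the RHS)."""
    coeffs = {i: (1.0 if v == 0 else -1.0) for i, v in enumerate(incumbent)}
    rhs = delta - sum(incumbent)
    return coeffs, rhs

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

incumbent = [1, 0, 1, 1, 0]
coeffs, rhs = trust_region_cut(incumbent, delta=1)
x = [1, 0, 0, 1, 0]                              # one flip away from the incumbent
lhs = sum(coeffs[i] * x[i] for i in coeffs)
print(lhs <= rhs, hamming(x, incumbent) <= 1)    # both True: the cut accepts x
```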
 
This work investigates different Bayesian network structure learning techniques by thoroughly studying several variants of the Hybrid Multi-objective Bayesian Estimation of Distribution Algorithm (HMOBEDA), applied to the MNK Landscape combinatorial problem. In the experiments, we evaluate performance considering three different aspects: optimization ability, robustness, and learning efficiency. Results for instances of multi- and many-objective MNK-landscapes show that score-based structure learning algorithms appear to be the best choice. In particular, HMOBEDA$$_{k2}$$ was capable of producing results comparable with the other variants in terms of convergence runtime and coverage of the final Pareto front, with the additional advantage of providing solutions that are less sensitive to noise while the variability of the corresponding Bayesian network models is reduced.
 
Motivated by the celebrated paper of Hooker (J Heuristics 1(1): 33–42, 1995) published in the first issue of this journal, and by the relative lack of progress of both approximation algorithms and fixed-parameter algorithms for the classical decision and optimization problems related to covering edges by vertices, we aimed at developing an approach centered on augmenting our intuition about what is indeed needed. We present a case study of a novel design methodology in which algorithm weaknesses are identified by computer-based and fixed-parameter tractable algorithmic challenges to their performance. Comprehensive benchmarking on all instances of small size then becomes an integral part of the design process. Subsequent analyses of cases where human intuition “fails”, supported by computational testing, then lead to the development of new methods by avoiding the traps of relying only on human perspicacity and ultimately improve the quality of the results. Consequently, the computer-aided design process is seen as a tool to augment human intuition. It aims at accelerating and fostering theory development in areas such as graph theory and combinatorial optimization, since some safe reduction rules for pre-processing can be mathematically proved via theorems. This approach can also lead to the generation of new interesting heuristics. We test our ideas on a fundamental problem in graph theory that has attracted the attention of many researchers over decades, but for which a certain stagnation seems to have occurred. The lessons learned are certainly beneficial, suggesting that we can bridge the increasing gap between theory and practice by a more concerted approach that would fuel human imagination from a data-driven discovery perspective.
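As an example of the kind of provably safe pre-processing rule referred to (a textbook rule for vertex cover, not claimed to be the paper's contribution): if a vertex u has exactly one neighbour v, then some optimal cover contains v, so v can be forced into the cover and both vertices removed before recursing.

```python
def degree_one_reduction(adj):
    """adj: dict vertex -> set of neighbours (undirected graph).
    Repeatedly applies the degree-1 rule; returns (forced_cover, reduced_adj)."""
    adj = {u: set(nbrs) for u, nbrs in adj.items()}
    forced = set()
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            if u in adj and len(adj[u]) == 1:
                (v,) = adj[u]
                forced.add(v)
                for w in adj.pop(v, set()):   # remove v and all edges it covers
                    adj[w].discard(v)
                adj.pop(u, None)              # u is now isolated, drop it too
                changed = True
    return forced, adj

graph = {1: {2}, 2: {1, 3, 4}, 3: {2, 4}, 4: {2, 3}}
print(degree_one_reduction(graph))   # vertices 2 and 4 are forced into the cover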
 
Proximity search is an iterative method to solve complex mathematical programming problems. At each iteration, the objective function of the problem at hand is replaced by the Hamming distance function to a given solution, and a cutoff constraint is added to impose that any newly obtained solution improves the objective function value. A mixed-integer programming solver is used to find a feasible solution to this modified problem, yielding an improved solution to the original problem. This paper introduces the concept of weighted Hamming distance, which allows the design of a new method called weighted proximity search. In this new distance function, low weights are associated with the variables whose values in the current solution are promising to change in order to find an improved solution, while high weights are assigned to variables that are expected to remain unchanged. The weights help to distinguish between alternative solutions in the neighborhood of the current solution and provide guidance to the solver when trying to locate an improved solution. Several strategies to determine the weights are presented, including both static and dynamic strategies. The proposed weighted proximity search is compared with the classic proximity search on instances from three optimization problems: the p-median problem, the set covering problem, and the stochastic lot-sizing problem. The results show that a suitable choice of weights allows the weighted proximity search to obtain better solutions than proximity search in 75% of the cases, and in 96% of the cases the solutions are better than the ones obtained by running a commercial solver with a time limit.
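A minimal sketch of the weighted Hamming objective around a 0-1 incumbent x*: flipping variable i in either direction costs w_i, so the objective is sum_{i: x*_i=0} w_i x_i + sum_{i: x*_i=1} w_i (1 - x_i), and a cutoff constraint c'x <= z* - theta (not built here) forces improvement. The weights and vectors below are illustrative, and the specific weighting strategies are the paper's, not reproduced here.

```python
def weighted_proximity_objective(x_star, weights):
    """Return (linear coefficients, constant) so that the weighted Hamming
    distance of any 0-1 vector x to x_star equals const + sum_i lin[i] * x[i]."""
    lin = {i: (w if x == 0 else -w) for i, (x, w) in enumerate(zip(x_star, weights))}
    const = sum(w for x, w in zip(x_star, weights) if x == 1)
    return lin, const

x_star = [1, 0, 1, 0]
weights = [0.2, 1.0, 5.0, 0.5]     # low weight => variable we expect to flip
lin, const = weighted_proximity_objective(x_star, weights)

x_new = [0, 1, 1, 0]               # flips variables 0 and 1
dist = const + sum(lin[i] * x_new[i] for i in lin)
print(dist)                        # 0.2 + 1.0 = 1.2
```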
 
In this paper, we propose a heuristic search algorithm based on maximum conflicts to find a weakly stable matching of maximum size for the stable marriage problem with ties and incomplete lists. The key idea of our approach is to define a heuristic function based on the information extracted from undominated blocking pairs from the men’s point of view. By choosing a man corresponding to the maximum value of the heuristic function, we aim to not only remove all the blocking pairs formed by the man but also reject as many blocking pairs as possible for an unstable matching from the women’s point of view to obtain a solution of the problem as quickly as possible. Experiments show that our algorithm is efficient in terms of both execution time and solution quality for solving the problem.
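A minimal sketch of the "most conflicted man" idea under simplifying assumptions (strict, complete preference lists rather than ties and incomplete lists, and plain blocking pairs rather than undominated ones): count the blocking pairs each man participates in and return the man with the most, i.e., the one the heuristic would repair next. The instance is hypothetical.

```python
def blocking_pairs(matching, men_pref, women_pref):
    """matching: dict man -> woman (perfect matching). A pair (m, w) blocks if m
    prefers w to his partner and w prefers m to her partner."""
    partner_of_w = {w: m for m, w in matching.items()}

    def prefers(prefs, a, over):
        return prefs.index(a) < prefs.index(over)

    blocks = []
    for m, cur_w in matching.items():
        for w in men_pref[m]:
            if w != cur_w and prefers(men_pref[m], w, cur_w) \
               and prefers(women_pref[w], m, partner_of_w[w]):
                blocks.append((m, w))
    return blocks

def most_conflicted_man(matching, men_pref, women_pref):
    counts = {}
    for m, _ in blocking_pairs(matching, men_pref, women_pref):
        counts[m] = counts.get(m, 0) + 1
    return max(counts, key=counts.get) if counts else None

men_pref = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women_pref = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
matching = {"m1": "w1", "m2": "w2"}
print(blocking_pairs(matching, men_pref, women_pref),
      most_conflicted_man(matching, men_pref, women_pref))
```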
 
This paper presents an iterated local search (ILS) algorithm for the single machine total weighted tardiness batch scheduling problem. To our knowledge, this is one of the first attempts to apply ILS to a batch scheduling problem. The proposed algorithm contains a local search procedure that explores five neighborhood structures, and we show how to implement them efficiently. Moreover, we compare the performance of our algorithm with dynamic programming-based implementations for the problem, including one from the literature and two others inspired by biased random-key genetic algorithms and ILS. We also demonstrate that finding the optimal batching for the problem given a fixed sequence of jobs is NP-hard, and we provide an exact pseudo-polynomial time dynamic programming algorithm for solving this subproblem. Extensive computational experiments were conducted on newly proposed benchmark instances, and the results indicate that our algorithm yields highly competitive results when compared to other strategies. Finally, it was also observed that the methods that rely on dynamic programming tend to be time-consuming, even for small-sized instances.
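A minimal sketch of an exact dynamic program for the fixed-sequence batching subproblem under a simplified serial-batching assumption (one setup s per batch, every job in a batch completes when the batch does); this is a stand-in to illustrate the subproblem, not the paper's pseudo-polynomial DP. The job data are hypothetical.

```python
import functools

def optimal_batching(jobs, setup):
    """jobs: list of (processing time, weight, due date) in the fixed sequence.
    Returns the minimum total weighted tardiness over all contiguous batchings."""
    n = len(jobs)
    prefix_p = [0.0]
    for p, _, _ in jobs:
        prefix_p.append(prefix_p[-1] + p)

    @functools.lru_cache(maxsize=None)
    def dp(i, b):
        # min cost of scheduling jobs[:i] using exactly b batches
        if i == 0:
            return 0.0 if b == 0 else float("inf")
        if b == 0:
            return float("inf")
        best = float("inf")
        for j in range(b - 1, i):                 # last batch is jobs[j:i]
            finish = prefix_p[i] + b * setup      # completion time of that batch
            tard = sum(w * max(0.0, finish - d) for _, w, d in jobs[j:i])
            best = min(best, dp(j, b - 1) + tard)
        return best

    return min(dp(n, b) for b in range(1, n + 1))

jobs = [(2, 1, 5), (3, 2, 6), (1, 1, 7)]          # hypothetical (p, w, d) triples
print(optimal_batching(jobs, setup=1.0))          # 2.0 for this toy instance
```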
 
The maximum k-plex problem is an important, computationally complex graph-based problem. In this study, an effective k-plex local search (KLS) is presented for solving this problem on a wide range of graph types. KLS uses data structures suited to the graph being analysed and has mechanisms for preventing search cycling and promoting search diversity. State-of-the-art results were obtained on 121 dense graphs and 61 large real-life (sparse) graphs. Comparisons with three recent algorithms on the more difficult graphs show that KLS performed as well as or better than them on 93% of the 332 significant k-plex problem instances investigated, achieving either larger average k-plex sizes (including some new results) or, when these were equivalent, lower CPU requirements.
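For readers unfamiliar with the structure being searched: a vertex set S is a k-plex if every vertex of S is adjacent to at least |S| - k other vertices of S. The snippet below is only a feasibility check plus a naive greedy extension on a hypothetical graph, not the KLS algorithm.

```python
def is_kplex(S, adj, k):
    # every vertex of S must have at least |S| - k neighbours inside S
    return all(len(adj[v] & S) >= len(S) - k for v in S)

def greedy_extend(S, adj, k):
    """Illustrative move: repeatedly add any vertex that keeps S a k-plex."""
    S = set(S)
    improved = True
    while improved:
        improved = False
        for v in set(adj) - S:
            if is_kplex(S | {v}, adj, k):
                S.add(v)
                improved = True
    return S

adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2}, 4: {2}}   # hypothetical adjacency sets
print(greedy_extend({2}, adj, k=2))                  # {1, 2, 3} is a 2-plex here
```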
 
We propose a solution approach for stochastic network design problems with uncertain demands. We investigate how to efficiently use reduced cost information as a means of guiding variable fixing to define a restriction that reduces the complexity of solving the stochastic model without sacrificing the quality of the solution obtained. We then propose a matheuristic approach that iteratively defines and explores restricted regions of the global solution space that have a high potential of containing good solutions. Extensive computational experiments show the effectiveness of the proposed approach in obtaining high-quality solutions, while reducing the computational effort to obtain them.
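The paper uses reduced-cost information heuristically to guide the restriction; the classical exact fixing rule below only illustrates the information being exploited, and is not the authors' procedure. For a minimization problem with LP relaxation value z_LP and incumbent value z_UB, any nonbasic 0-1 variable at zero whose reduced cost exceeds z_UB - z_LP can be fixed to zero without losing all optimal solutions.

```python
def reduced_cost_fixing(reduced_costs, z_lp, z_ub):
    """Return the variables that can be fixed to 0: those whose reduced cost
    exceeds the optimality gap z_ub - z_lp (minimization, nonbasic at 0)."""
    gap = z_ub - z_lp
    return {j for j, rc in reduced_costs.items() if rc > gap}

reduced_costs = {"x1": 0.4, "x2": 3.7, "x3": 1.2}   # hypothetical values
print(reduced_cost_fixing(reduced_costs, z_lp=10.0, z_ub=12.0))   # {'x2'}
```

The matheuristic's restrictions relax this logic, using small reduced costs as a signal that a variable is worth keeping free in the restricted region rather than as a proof of fixability.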
 
In this paper, we propose a method to solve a bi-objective variant of the well-studied traveling thief problem (TTP). The TTP is a multi-component problem that combines two classic combinatorial problems: the traveling salesman problem and the knapsack problem. We address the BI-TTP, a bi-objective version of the TTP, where the goal is to minimize the overall traveling time and to maximize the profit of the collected items. Our proposed method is based on a biased random-key genetic algorithm with customizations addressing problem-specific characteristics. We incorporate domain knowledge through a combination of near-optimal solutions of each subproblem in the initial population and use a custom repair operator to avoid the evaluation of infeasible solutions. The bi-objective aspect of the problem is addressed through an elite population extracted based on non-dominated rank and crowding distance. Furthermore, we provide a comprehensive study showing the influence of each parameter on the performance. Finally, we discuss the results of the BI-TTP competitions at the EMO-2019 and GECCO-2019 conferences, where our method won first and second place, respectively, demonstrating its ability to find high-quality solutions consistently.
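A minimal sketch of two generic biased random-key GA building blocks (not the authors' exact operators): a random-key vector is decoded into a tour by sorting the keys, and biased crossover copies each key from the elite parent with probability rho. The instance size and rho value are arbitrary.

```python
import random

def decode_tour(keys):
    # the permutation is the order of the cities when sorted by their key values
    return sorted(range(len(keys)), key=keys.__getitem__)

def biased_crossover(elite, non_elite, rho=0.7, rng=random):
    # each gene is inherited from the elite parent with probability rho
    return [e if rng.random() < rho else n for e, n in zip(elite, non_elite)]

rng = random.Random(1)
elite = [rng.random() for _ in range(5)]
non_elite = [rng.random() for _ in range(5)]
child = biased_crossover(elite, non_elite, rho=0.7, rng=rng)
print(decode_tour(child))
```

In the BI-TTP setting a second block of keys would typically be decoded into the packing plan, and the elite set would be maintained by non-dominated rank and crowding distance rather than a single fitness value.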
 
Top-cited authors
Karl F. Doerner
  • University of Vienna
Marc Reimann
  • Karl-Franzens-Universität Graz
Richard F. Hartl
  • University of Vienna
Michael Polacek
  • University of Vienna
Michel Gendreau
  • Polytechnique Montréal