Science topic

Evolutionary Algorithms - Science topic

Explore the latest questions and answers in Evolutionary Algorithms, and find Evolutionary Algorithms experts.
Questions related to Evolutionary Algorithms
  • asked a question related to Evolutionary Algorithms
Question
7 answers
Evolutionary algorithm, convergence curve, iteration count, maximum function evaluations, minimization objective.
Relevant answer
Answer
Definitely use the number of evaluations if you can't guarantee that the algorithms consume the same number of evaluations per generation/iteration.
  • asked a question related to Evolutionary Algorithms
Question
2 answers
I am applying a multi-objective algorithm, specifically NSGA-II. In some instances of my problem, I obtain a Pareto front, but in others, I only get a single solution or two solutions. Does this mean that my objectives are not conflicting? What method can I apply to verify the conflict between two objectives? I’m not sure what to do since sometimes I get a Pareto front, but in many cases, I don’t.
Relevant answer
Answer
Hello Sofia,
It is indeed possible that for some instances of your problem, the objectives are not conflicting. For example, in a shortest path problem, the path with the shortest distance may also be the fastest in terms of travel time, resulting in no trade-off between the objectives.
However, the lack of a diverse Pareto front in some instances does not always imply non-conflicting objectives. It could also be due to factors such as:
  1. The nature of the specific problem instance (e.g., highly constrained or trivial).
  2. Insufficient diversity in the solutions explored by the algorithm.
  3. Potential issues with the parameter settings of NSGA-II.
If you describe your problem, we can help you more easily, and as Roy already stated, you can verify whether your objectives are conflicting by calculating the correlation coefficient.
There was a similar question already asked here:
I hope this helps!
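If you want a quick numerical check for conflict, a minimal sketch (assuming you have the objective values of a sample of solutions, e.g. the final population, in a NumPy array; the values below are made up) is to compute the correlation between the two objectives:

```python
import numpy as np

# Hypothetical example: each row is one solution from the final population,
# columns are the two objective values (both to be minimized).
F = np.array([
    [1.0, 9.0],
    [2.5, 7.5],
    [4.0, 5.0],
    [6.0, 2.0],
])

# Pearson correlation between the two objectives over the sampled solutions.
corr = np.corrcoef(F[:, 0], F[:, 1])[0, 1]

# A strongly negative correlation hints at conflicting objectives (improving
# one tends to worsen the other); a value close to +1 suggests the objectives
# are largely aligned for this particular instance.
print(f"Correlation between objectives: {corr:.2f}")
```

Keep in mind this is only a heuristic on the sampled solutions, not a proof of (non-)conflict.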
  • asked a question related to Evolutionary Algorithms
Question
2 answers
Hi, I want to implement evolutionary algorithms in the Ryu SDN controller in Mininet, and I have some challenges. How can I run a large-scale topology with one SDN controller? Another question: I need to generate a realistic dataset (for example, for 12,000 nodes) with real CPU, RAM, and bandwidth values. Do you have any suggestions for such a dataset, or how I can obtain one with real values?
Thanks...
Relevant answer
Answer
Mastering the increasing complexity of current and future networks, while reducing the operational and investment costs, is one of the major challenges faced by network operators (NOs). This explains in large part the recent enthusiasm of NOs towards Software Defined Networking (SDN) and Network Function Virtualization (NFV). Indeed, on the one hand, SDN makes it possible to get rid of the control plane distribution complexity, by centralizing it logically, while allowing its programmability. On the other hand, NFV allows virtualizing the network functions, which considerably facilitates the deployment and the orchestration of the network resources.
Regards,
Shafagat
  • asked a question related to Evolutionary Algorithms
Question
1 answer
International Journal of Complexity in Applied Science and Technology. Topics covered include:
  • Evolutionary algorithms
  • Particle swarm optimisation
  • Single-/multi-/many-objective optimisation
  • Constrained optimisation
  • Multi-modal optimisation
  • Dynamic optimisation
  • Data-driven optimisation
  • Large-scale optimisation
  • Engineering design optimisation
  • Applications associated with intelligent computation
  • Machine learning
  • Big data
Relevant answer
  • asked a question related to Evolutionary Algorithms
Question
3 answers
International Journal of Complexity in Applied Science and Technology.
Topics covered include:
  • Evolutionary algorithms
  • Particle swarm optimisation
  • Single-/multi-/many-objective optimisation
  • Constrained optimisation
  • Multi-modal optimisation
  • Dynamic optimisation
  • Data-driven optimisation
  • Large-scale optimisation
  • Engineering design optimisation
  • Applications associated with intelligent computation
  • Machine learning
  • Big data
Relevant answer
Answer
Neuroevolution is a paradigm that leverages evolutionary algorithms to train neural networks. Unlike traditional gradient-based optimization methods, which rely on backpropagation, neuroevolution evolves neural network architectures and parameters over generations.
Regards,
Shafagat
  • asked a question related to Evolutionary Algorithms
Question
1 answer
I am working on the flexible job shop scheduling problem, which I want to solve using a hybrid algorithm such as VNS-based NSGA-II or TS/SA + NSGA. Can I use pymoo to implement this problem? pymoo has a specific structure for calling its built-in algorithms, but in the case of customization and combining two algorithms, how can I use the pymoo framework?
Relevant answer
I recommend studying the following systematic review.
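As for the pymoo part of the question, a minimal, hedged sketch of wrapping a problem and running the built-in NSGA-II is given below; the problem class and objectives are placeholders, not a real FJSP model. A hybrid such as VNS + NSGA-II is usually built on top of this by plugging a local search into a custom mutation/repair operator or a callback, rather than by calling two built-in algorithms directly.

```python
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

class MySchedulingProblem(ElementwiseProblem):
    """Placeholder problem: replace n_var, bounds, and _evaluate with the
    actual FJSP encoding (e.g. operation sequence + machine assignment)."""
    def __init__(self):
        super().__init__(n_var=10, n_obj=2, xl=0.0, xu=1.0)

    def _evaluate(self, x, out, *args, **kwargs):
        # Dummy objectives standing in for makespan and total workload.
        f1 = float(np.sum(x ** 2))
        f2 = float(np.sum((x - 1.0) ** 2))
        out["F"] = [f1, f2]

algorithm = NSGA2(pop_size=50)   # custom sampling/crossover/mutation can be passed here
res = minimize(MySchedulingProblem(), algorithm, ("n_gen", 100), seed=1, verbose=False)
print(res.F[:5])                 # a few points of the obtained front
```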
  • asked a question related to Evolutionary Algorithms
Question
2 answers
2024 3rd International Conference on Artificial Intelligence, Internet and Digital Economy (ICAID 2024) will be held in Bangkok, Thailand on April 19-21, 2024.
Important Dates:
Full Paper Submission Date: March 19, 2024
Registration Deadline: March 29, 2024
Final Paper Submission Date: April 09, 2024
Conference Dates: April 19-21, 2024
---Call For Papers---
The topics of interest for submission include, but are not limited to:
- The development of artificial intelligence (AI tools, artificial intelligence and evolutionary algorithms, intelligent user interfaces, intelligent information fusion, etc.) and their applications in economic and social development.
- The development of mobile Internet, artificial intelligence, big data and other technologies and their application in economic and social development.
- Artificial intelligence and digital economy development frontier in the Internet era and typical cases.
- Technology, methods and applications of the integration and development of digital economy and artificial intelligence.
- Other topics related to artificial intelligence, Internet and digital economy can be contributed.
All accepted papers will be published in the AHIS-Atlantis Highlights in Intelligent Systems (2589-4919), and submitted to EI Compendex, Scopus for indexing.
For More Details please visit:
Relevant answer
Answer
I am from Ethiopia and happy to participate in this conference.
  • asked a question related to Evolutionary Algorithms
Question
13 answers
I have 'N' inputs (which correspond to temperature) for calculating a specific output parameter, and I get 'N' output values based on these inputs.
However, my goal is to select an optimum number from all the output data and use it in another calculation.
'N' number of input data --> Output parameter calculation --> Identification of an optimized output parameter --> Use that value for another calculation.
How can I find an optimal value among the 'N' output values? Can we employ any algorithm or process for this?
Relevant answer
Answer
Hello,
I think it would be better if you gave more complete information in your question. For example, is your output data one-dimensional (i.e., a vector) or two-dimensional (a matrix)? And is the optimal value, in your view, the lowest value or the highest value?
However, I can offer a general approach. If your output dataset is one-dimensional (and your goal is to find the minimum or maximum value), you can use MATLAB's max or min functions. But if your output data is multi-dimensional, although the problem can still be solved with MATLAB code, I suggest you use optimization algorithms such as PSO. The larger the search space, the more the need for optimization-based algorithms is felt.
  • asked a question related to Evolutionary Algorithms
Question
3 answers
To start with, I believe that for a modification of an algorithm such as an evolutionary algorithm to be considered novel, it has to offer some benefit (like automation) and should not perform worse than the manually designed algorithm.
Based on the above, if I want to implement an evolutionary algorithm to design another evolutionary algorithm, I would not know beforehand whether the proposed method will perform as well as, or better than, the manual method or some other similar method.
I can only tell how the method performs after the completion of the implementation.
Is there a way to know beforehand that a proposal will not bring some benefit, before putting energy and time into developing it?
Please, what do you advise in this situation from your experience?
Thank you in anticipation of your kind response.
Relevant answer
Answer
Usually the code can be optimised: same algorithm but less computing time, less memory (well, sometimes _far_ less). Now, for the quality of the results: if you find a _sure_ way to improve it (for a given search effort, of course, like a number of evaluations), publish it quickly!
Note that a different initialisation method might improve the algorithm on some problems, but not always. See for example
Omran, Mahamed G. H., Salah al-Sharhan, Ayed Salman, and Maurice Clerc. ‘Studying the Effect of Using Low-Discrepancy Sequences to Initialize Population-Based Optimization Algorithms’. Computational Optimization and Applications, 2013, 1–24.
Sometimes the problem can be reformulated, for example by using different formulae to take constraints into account or by rewriting a discrete problem as a binary one.
  • asked a question related to Evolutionary Algorithms
Question
4 answers
The set of optimal solutions obtained in the form of a Pareto front includes all equally good trade-off solutions. But I was wondering whether these solutions are global optima, local optima, or a mix of both. In other words, does an evolutionary algorithm like NSGA-II guarantee globally optimal solutions?
Thank you in anticipation.
Relevant answer
Answer
No, a Pareto front produced by an evolutionary algorithm does not necessarily include both global and local optima. The Pareto front represents the set of non-dominated solutions in multi-objective optimization problems. These solutions are not dominated by any other solution in terms of all the objective functions simultaneously.
In a multi-objective optimization problem, there can be multiple optimal solutions, known as Pareto optimal solutions, that represent trade-offs between conflicting objectives. These solutions lie on the Pareto front and are considered efficient solutions because improving one objective would require sacrificing performance in another objective.
The Pareto front typically contains a mixture of global and local optima. Global optima are solutions that provide the best performance across all objectives in the entire search space. Local optima, on the other hand, are solutions that are optimal within a specific region of the search space but may not be globally optimal.
The evolutionary algorithm aims to explore the search space and find a diverse set of Pareto optimal solutions across the entire front, which may include both global and local optima. However, the algorithm's ability to discover global optima depends on its exploration and exploitation capabilities, the problem complexity, and the specific settings and parameters of the algorithm.
It's important to note that the distribution and representation of global and local optima on the Pareto front can vary depending on the problem and algorithm used. Analyzing the Pareto front and its solutions can provide valuable insights into the trade-offs and optimal solutions available in multi-objective optimization problems.
  • asked a question related to Evolutionary Algorithms
Question
17 answers
These datasets will be used for data classification and predicting new information
Relevant answer
Answer
The Aidos-x system is an excellent intelligent system for diagnosing and classifying human diseases. The methods of using the Aidos-x system for diagnosing human diseases are described in the narrated lectures "Using automated system-cognitive analysis for the classification of human organ tumors" and "Intelligent system for diagnosing early stages of chronic kidney disease", which can be downloaded right now from the website https://www.patreon.com/user?u=87599532 (creator's title: "Lectures on Electronic Medicine"). After subscribing to this site, you will receive databases for medical research to identify the diseases that you will read about in the lectures. The acquired skills of working in the Aidos-x system will allow you to apply for grants to carry out scientific research in the field of medicine.
To subscribe to the site https://www.patreon.com/user?u=87599532 you do not need to go to the bank; you can do it using PayPal. Send a transfer in your currency, and the conversion to dollars will be done automatically. After subscribing on the site, you will receive the Aidos-x system with an English user interface for free.
Thank you.
  • asked a question related to Evolutionary Algorithms
Question
3 answers
How to initialize the population size and maximum number of iterations in the evolutionary algorithms? Is there any hard and fast rule for that?
Relevant answer
Answer
I would be surprised if you found any "hard" rule on this, since the hyperparameters of optimization routines are typically quite application specific. For meta-heuristic algorithms, you can often find the rule of thumb of using 10 search entities (e.g., 10 particles for PSO) per optimization variable in some textbooks, but this is definitely not a hard rule.
In terms of required iterations, you might just want to monitor the cost curve during optimization. If it stalls for a certain number of function evaluations, you can stop the optimization. Otherwise, or additionally, you might want to consider a simple time constraint, allowing your algorithm to optimize for as long as you like.
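A minimal sketch of such a stall-based stopping rule (the ask/tell/evaluate hooks below are placeholders for whatever optimizer interface you actually use):

```python
def run_with_stall_stop(ask, tell, evaluate, max_evals=10000,
                        stall_window=500, tol=1e-8):
    """Stop when the best cost has not improved by more than `tol`
    within the last `stall_window` function evaluations."""
    best = float("inf")
    evals_since_improvement = 0
    n_evals = 0
    while n_evals < max_evals:
        x = ask()                      # candidate proposed by the optimizer
        cost = evaluate(x)             # (expensive) objective function
        tell(x, cost)                  # feed the result back to the optimizer
        n_evals += 1
        if cost < best - tol:
            best = cost
            evals_since_improvement = 0
        else:
            evals_since_improvement += 1
        if evals_since_improvement >= stall_window:
            break                      # the cost curve has stalled
    return best
```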
  • asked a question related to Evolutionary Algorithms
Question
1 answer
Researchers are using different algorithms like GA, PSO, and other variants of evolutionary algorithms. My question is: why do so, when the Lagrange method itself can provide a solution?
Relevant answer
Answer
In fact, for a given problem we can propose many approaches to its resolution, and the Lagrange method cannot solve every kind of problem. Another consideration is the computation time a method needs to solve the problem: is that computation time good or appropriate? Some problems can easily be solved in a short time using evolutionary computation such as GA, PSO, ABC, SA, etc.
  • asked a question related to Evolutionary Algorithms
Question
9 answers
Although the genetic algorithm is very helpful for finding a good solution, it is very time-consuming. I've recently read an article in which the authors used the genetic algorithm for pruning a neural network on a very small dataset. They pruned a third of their network and reduced the size of the model from 90 MB to 60 MB. Their pruning also decreased the inference time of their model by around 0.5 seconds. Although, unlike other methods, pruning with the genetic algorithm did not deteriorate their model's performance, finding the redundant parts of the network with the genetic algorithm is an overhead. I wonder why they consider their method useful and why people still use the genetic algorithm.
Relevant answer
Answer
Genetic algorithms have several characteristics that distinguish them from other methods:
  1. Natural selection and natural genetics are at the heart of the genetic algorithm.
  2. They are simple to comprehend and use.
  3. They are frequently encountered in optimization problems.
  4. They are based on a population of points rather than a single point.
  5. They use probabilistic transition rules instead of deterministic rules.
  6. They can handle a large number of variables efficiently.
  7. They require no derivative information.
  8. They are more successful when dealing with complicated problems than with basic ones.
  • asked a question related to Evolutionary Algorithms
Question
3 answers
Hello, I want to solve a bilevel optimization problem using evolutionary algorithms for my research project. Can somebody tell me where I can find code for such problems?
Relevant answer
Answer
It's a good idea to learn how to code. On the other hand ``bilevel optimization problem'' doesn't provide any relevant information, so it would be better to just ask about how to code evolutionary algorithms. This: https://towardsdatascience.com/introduction-to-evolutionary-algorithms-a8594b484ac might be a good place to start.
  • asked a question related to Evolutionary Algorithms
Question
5 answers
Hello, I've been focusing on an evolutionary algorithm based on prediction. It has good exploration ability, but unfortunately its exploitation ability is very poor. What strategies can improve its exploitation ability?
Relevant answer
Answer
@Helei Kang please search the concept "Global neighborhood algorithm". I cannot find the paper at the moment, but that is more or less what you are looking for.
Good luck
  • asked a question related to Evolutionary Algorithms
Question
4 answers
To solve multi-objective problems using the lexicographic method, can we use a different algorithm for each step?
For instance, when we have two objectives to be minimized, can we first use the genetic algorithm to optimize the first objective, and then use the particle swarm optimization algorithm to solve the second objective while taking the result of the genetic algorithm as a constraint?
Relevant answer
Answer
You most certainly can. In fact, there may be an advantage in employing such a method since we know that some algorithms are better suited for some problems over others (NFL theorem). Just be sure to properly select the correct MH for any given objective function. This may require some experimentation on your part, but whatever you do, don't let the algorithm selection process be randomized. A methodological approach needs to be adopted here.
Personally, I would advise using Pareto-based techniques instead of conventional non-Pareto-based approaches like Lexicographic or sum weighted optimization. Nonetheless, good luck!
  • asked a question related to Evolutionary Algorithms
Question
7 answers
I need MATLAB code to reproduce the attached research paper.
Relevant answer
Answer
Dear Anil,
In this paper, four optimisation methods were applied, including GA, DE, ES, and BBO. I know there are many different available source codes for these methods. Which one is most interesting for you to use?
Yarpiz (2022). Biogeography-Based Optimization (BBO) (https://www.mathworks.com/matlabcentral/fileexchange/52901-biogeography-based-optimization-bbo), MATLAB Central File Exchange. Retrieved March 18, 2022.
  • asked a question related to Evolutionary Algorithms
Question
17 answers
Dear Researchers,
I'm looking for the implementation of advanced metaheuristic algorithms in Python. I would be incredibly thankful if someone could assist me with the execution of evolutionary algorithms or can provide me with the necessary codes in Python.
Thank you very much.
Relevant answer
Answer
Dear Muhammad,
On the website of Professor Mirjalili, who is one of the pivotal pioneers in swarm intelligence, you can find the source code of several modern meta-heuristics, developed in different languages such as Python, MATLAB, R, and C++. Please find the codes on the website below:
Good luck
  • asked a question related to Evolutionary Algorithms
Question
6 answers
Hello
Can anyone help me with the meaning of chaotic maps and opposition-based learning for population initialization, and the difference between them? Are these strategies used for population initialization in all evolutionary algorithms, and when are they efficient?
Thanks so much.
Relevant answer
Answer
(Chaotic initialization)
(Opposition-based initialization)
Of the two, initialization with chaotic maps is superior. Distributing the population evenly across the search space is key for the algorithm to have the best start possible. Opposition-based techniques have been found, at least in my personal experience, to reduce population diversity in this crucial phase.
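For concreteness, here is a minimal sketch of both initialization schemes for a box-constrained, real-valued search space (the logistic map is just one common choice of chaotic map; in practice the fitter of each point/opposite pair is kept for opposition-based initialization):

```python
import numpy as np

def chaotic_init(pop_size, dim, lb, ub, seed=0.7):
    """Initialization with the logistic map x_{k+1} = 4 x_k (1 - x_k)."""
    pop = np.empty((pop_size, dim))
    x = seed
    for i in range(pop_size):
        for j in range(dim):
            x = 4.0 * x * (1.0 - x)          # chaotic sequence in (0, 1)
            pop[i, j] = lb + x * (ub - lb)   # map into the search bounds
    return pop

def opposition_init(pop_size, dim, lb, ub, seed=42):
    """Opposition-based initialization: random points plus their opposites
    lb + ub - x; normally you would keep the fitter of each pair."""
    rng = np.random.default_rng(seed)
    half = pop_size // 2
    P = rng.uniform(lb, ub, size=(half, dim))
    return np.vstack([P, lb + ub - P])
```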
  • asked a question related to Evolutionary Algorithms
Question
5 answers
Population size = N
fi represents the fitness of ith candidate solution
fmax = max {fi : 1 <= i <= N}
fmin = min {fi : 1 <= i <= N}
Find jth solution whose fitness value is the mean of all fitness values.
Relevant answer
Answer
Dear Dr Sajid,
In each iteration, after computing the average fitness of all solutions, the difference between each solution's fitness and the average fitness can be calculated. Next, the minimum difference shows the solution with the highest similarity with the average fitness.
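A small sketch of that selection rule (hypothetical fitness values):

```python
import numpy as np

fitness = np.array([3.2, 7.5, 1.1, 5.0, 4.8])   # fitness values f_i of the N solutions
mean_f = fitness.mean()

# Index j of the solution whose fitness is closest to the mean fitness;
# an exact match is unlikely, so the minimum absolute difference is used.
j = int(np.argmin(np.abs(fitness - mean_f)))
print(j, fitness[j], mean_f)
```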
  • asked a question related to Evolutionary Algorithms
Question
10 answers
Dear Experts,
I am looking for a solution to a convex optimization problem (linear programming) and I am thinking about evolutionary algorithms. Do you think they would be helpful?
Relevant answer
Answer
Most LPs arising in practice can be solved quickly using simplex. For very large and/or degenerate instances, interior point methods (IPMs) may be faster. Many solvers (e.g., CPLEX, Gurobi, Xpress, MOSEK) allow the user to try several methods.
  • asked a question related to Evolutionary Algorithms
Question
3 answers
Hello RG Community,
As we have ZDT, DTLZ, WFG test functions for continuous multiobjective optimization problems, are there any test functions for mixed integer multiobjective optimization problems?
Thanks!
Relevant answer
Answer
I think that these papers may help you
1- Burachik, R. S., Kaya, C. Y., & Rizvi, M. M. (2021). Algorithms for generating Pareto fronts of multi-objective integer and mixed-integer programming problems. Engineering Optimization, 1–13. doi:10.1080/0305215x.2021.1939695
2- Sujeet Kumar Singh & Mark Goh (2018): Multi-objective mixed integer programming and an application in a pharmaceutical supply chain, International Journal of Production Research, DOI: 10.1080/00207543.2018.1504172
3- Tušar, T., Brockhoff, D., & Hansen, N. (2019). Mixed-integer benchmark problems for single- and bi-objective optimization. Proceedings of the Genetic and Evolutionary Computation Conference on - GECCO ’19. doi:10.1145/3321707.3321868
  • asked a question related to Evolutionary Algorithms
Question
7 answers
Dear All,
I have successfully applied NSGA-II on a multi-objective optimization problem.
While I was observing the results during the optimization process, I noticed that some variables (genes) reached very good values that match my objectives, while others haven't. However, during the optimization, the good genes are frequently changed and swapped to undesired values (due to the genetic operations, mutation and crossover) until the algorithm reaches a good local optimum or hits a stopping condition.
My question is this:
Can I exclude some genes from the optimization process, since they have already reached my condition, and finally combine them with the remaining genes in the final chromosome?
Relevant answer
Answer
All the genes play an important role in the optimization process. The likely good genes contribute information for improving the quality of new ones, and the likely bad genes can help to avoid infeasible solutions and to redirect the search to new areas of the domain. So I think that all the genes must participate in the solution search process.
  • asked a question related to Evolutionary Algorithms
Question
5 answers
I want to compare metaheuristics on the optimization of Lennard-Jones clusters. There are many papers available that optimize Lennard-Jones clusters. Unfortunately, none of them provide the upper and lower bounds of the search space. In order to conduct a fair comparison, all metaheuristics should search within the same bounds of the search space. I found the global minima here: http://doye.chem.ox.ac.uk/jon/structures/LJ/tables.150.html but the search space is not defined.
Can anyone please tell me what are the recommended upper and lower bounds of the search space?
Relevant answer
Answer
Miha Ravber : for me, [-2, 2] was enough because I fixed the first atom at (0, 0, 0), the second at (>= 0, 0, 0), etc. If you don't, you get free coordinates between your bounds.
You can definitely start at [-10, 10] and see what the results are, then adjust.
  • asked a question related to Evolutionary Algorithms
Question
4 answers
Why Particle Swarm Optimization works better for this classification problem?
Can anyone give me any strong reasons behind it?
Thanks in advance.
Relevant answer
Answer
Arash Mazidi PSO is also used in various classification problems. I particularly use it for phishing website datasets.
  • asked a question related to Evolutionary Algorithms
Question
45 answers
Hi everyone,
We have implemented four metaheuristic algorithms to solve an optimization problem. Each algorithm is repeated 30 times for an instance of the problem, and we have stored the best objective function values for 30 independent runs for each algorithm.
We want to compare these four algorithms. Apart from maximum, minimum, average, and standard deviation, is there any statistical measure for comparison?
Alternatively, we have four independent samples each of size 30, and we want to test the null hypothesis that the means (or, medians) of these four samples are equal against an alternative hypothesis that they are not. What kind of statistical test should we perform?
Regards,
Soumen Atta
Relevant answer
In the following article, the performance of a new metaheuristic algorithm named QANA was compared with ten comparative algorithms using three statistical tests, Wilcoxon signed-rank sum, analysis of variance (ANOVA), and mean absolute error (MAE).
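As a complement, here is a small sketch of how such tests can be run in Python with SciPy on the 4 x 30 results (the run data below is synthetic):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical best objective values from 30 independent runs of 4 algorithms.
runs = [rng.normal(loc=m, scale=1.0, size=30) for m in (10.0, 10.2, 11.0, 9.8)]

# Parametric one-way ANOVA (assumes normality and equal variances).
f_stat, p_anova = stats.f_oneway(*runs)

# Non-parametric Kruskal-Wallis test (compares the samples with fewer assumptions).
h_stat, p_kruskal = stats.kruskal(*runs)
print(f"ANOVA p = {p_anova:.4f}, Kruskal-Wallis p = {p_kruskal:.4f}")

# If the omnibus test rejects equality, pairwise post-hoc comparisons (e.g.
# Mann-Whitney U with a Holm/Bonferroni correction) show which algorithms differ.
u_stat, p_pair = stats.mannwhitneyu(runs[0], runs[2], alternative="two-sided")
print(f"Algorithm 1 vs 3: Mann-Whitney p = {p_pair:.4f}")
```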
  • asked a question related to Evolutionary Algorithms
Question
3 answers
I used uniform crossover to solve the eight queens problem. It is taking more than 3 hours to get the output. Is there any way to reduce the running time, or a better crossover for the eight queens problem? Any answer regarding this will be appreciated.
Thank you
Relevant answer
Answer
  • asked a question related to Evolutionary Algorithms
Question
7 answers
In the optimization of the Sphere function we already know that the optimal point is the origin; this is found by analytical optimization procedures, and when we run many-/multi-objective evolutionary algorithms, the results are compared with the true PF.
Now I want to know how to compute the true PF of normalized test functions such as DTLZ, ZDT, etc.
Relevant answer
Answer
First you may check two sources of the optimal fronts of 3-D DTLZ instances:
<< Coello, Lamont, & Van Veldhuizen, Evolutionary Algorithms for Solving Multi-Objective Problems: Appendix E >> http://www.cs.cinvestav.mx/~emoobook/apendix-e/apendix-e.html
<< Durillo & Nebro's jMetal Framework >> http://jmetal.sourceforge.net/problems.html
Those could help you to evaluate your MOEAs on the 3-D instances.
As for more than three objectives, we can generate the optimal fronts by ourselves. In this case, let us check the problem descriptions in the original paper:
<< Deb, Thiele, Laumanns, & Zitzler, Scalable Test Problems for Evolutionary Multiobjective Optimization >> http://link.springer.com/content/pdf/10.1007%2F1-84628-137-7_6.pdf
According to the descriptions, the optimal front of DTLZ4 is the same as those of DTLZ2 and DTLZ3. A simple way to generate the optimal front is to generate a set of decision vectors by setting the values of the first M-1 variables randomly and set the remaining k variables by 0.5. Evaluate these decision vectors by DTLZ2, and the obtained objective vectors could serve as the reference optimal front (,though not very evenly distributed). Another way is to generate the objective vectors directly by making the sum of the square of objective values as one (f1^2 + f2^2 + ... + fM^2 = 1). The DTLZ5 and DTLZ6 instances have the same optimal front. You can generate a set of decision vectors by setting the first variable randomly and set all remaining variables by 0.5. Evaluate these decision vectors by DTLZ5, and the obtained objectives will form a reference optimal front. For the implementation of DTLZ5, you could check the PISA implementation or jMetal implementation.
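Following the second way described above, a minimal sketch for generating a reference front of DTLZ2/3/4 (points on the non-negative part of the unit sphere; the sampling is usable but not perfectly uniform):

```python
import numpy as np

def dtlz2_reference_front(n_points=5000, n_obj=3, seed=1):
    """Reference points satisfying f1^2 + ... + fM^2 = 1 with all f_i >= 0,
    i.e. the optimal front shared by DTLZ2, DTLZ3 and DTLZ4."""
    rng = np.random.default_rng(seed)
    F = np.abs(rng.normal(size=(n_points, n_obj)))
    F /= np.linalg.norm(F, axis=1, keepdims=True)   # project onto the unit sphere
    return F

# For DTLZ1 the optimal front is instead the simplex f1 + ... + fM = 0.5,
# so one would normalise by the row sum and multiply by 0.5.
front = dtlz2_reference_front()
print(front.shape, np.allclose(np.linalg.norm(front, axis=1), 1.0))
```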
MOEAFramework provides the PF of DTLZ1-4 and DTLZ7 with 2, 3, 4, 6, and 8 objectives. You can download them here:
If you still need to generate the PF of DTLZ1-4 by yourself, the paper of Li et al. [1] describes one method in detail in Section IV-B. Just check equations (14) and (17). You may also use the PF generator included in nsga3cpp [2], which is an unofficial implementation of NSGA-III by me. Note that for 8 and more objectives, the number of Pareto optimal solutions can be very large.
In this case, the 2-layer method mentioned in [3] is adopted. Li et al. [4] proposed another, more elaborate method to generate the PF of the DTLZ functions.
If you want to generate PF of other test functions, MOEAFramework provides a way to do so. Please check the discussions here [5].
I guess that you are generating PF of DTLZ and other functions since you want to measure algorithm performance by the inverted generational distance (IGD) indicator. In this regard, you may want to check Ishibuchi et al.' study [6] on an improved indicator called IGD+, which is weakly Pareto compliant (but IGD is not).
Finally, Ishibuchi et al. [7] indicated that DTLZ1-4 and WFG4-9 are similar functions with respect to the shape of PF and mentioned the necessity of more general test problems. Cheng et al. [8] proposed a new set of test functions with an open-source software platform PlatEMO [9], which may also be interesting to you.
Hope the above information is useful to you. :)
-------------------------------------
[1] Li et al., An evolutionary many-objective optimization algorithm based on dominance and decomposition, t-EC 2015, https://doi.org/10.1109/TEVC.2014.2373386
[3] Deb and Jain, An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, Part I: solving problems with box constraints, tEC 2014. https://doi.org/10.1109/TEVC.2013.2281535
[4] Li et al., A method for distributing reference points uniformly along the Pareto front of DTLZ test functions in many-objective evolutionary optimization, ICIST 2015. https://doi.org/10.1109/ICIST.2015.7289031
[6] Ishibuchi et al, Modified distance calculation in generational distance and inverted generational distance, EMO 2015, http://doi.org/10.1007/978-3-319-15892-1_8
[7] Ishibuchi et al., Performance of decomposition-based many-objective algorithms strongly depends on Pareto front shapes, tEC 2017. https://doi.org/10.1109/TEVC.2016.2587749
[8] Cheng et al., A benchmark test suite for evolutionary many-objective optimization, CIS 2017. https://doi.org/10.1007/s40747-017-0039-7
Kind Regards
Qamar Ul Islam
  • asked a question related to Evolutionary Algorithms
Question
8 answers
We are trying to breed some parameter configurations controlling the search of a deduction system. Some parameters are integers, some are reals, some are boolean, and the most complex one is a variable length list of different elements, where each of the elements has its own (smallish) sub-set of parameters. Since we have Python competence and Python is already used in the project, that looks like a good fit. I've found DEAP and PyEvolve as already existing frameworks for genetic algorithms. Does anybody have experience with these and can tell me about the strengths and weaknesses of the two (or any other appropriate) systems?
If it helps: In our application, determining the fitness of the individual is likely the most expensive part - it will certainly be minutes per generation, and if we are not careful and/or rich (to buy compute clusters), could be hours per individual. So time taken by the rest of the GA is probably not a major factor - think "several generations per day", not "several generations per second".
Relevant answer
Answer
It is an open-source library for general-purpose optimization using the genetic algorithm. It is very easy to learn.
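Whichever framework is chosen, a minimal DEAP sketch for a setting like the one described in the question (expensive fitness, so the evaluation map is parallelised) might look like this; the genome, its length, and the toy fitness below are placeholders for the real parameter configuration and deduction-system run:

```python
import random
from multiprocessing import Pool
from deap import base, creator, tools, algorithms

def evaluate(individual):
    # Stand-in for the expensive deduction-system run; DEAP expects a tuple.
    return (sum(x * x for x in individual),)

creator.create("FitnessMin", base.Fitness, weights=(-1.0,))
creator.create("Individual", list, fitness=creator.FitnessMin)

toolbox = base.Toolbox()
toolbox.register("attr_float", random.uniform, -1.0, 1.0)
toolbox.register("individual", tools.initRepeat, creator.Individual,
                 toolbox.attr_float, n=8)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("evaluate", evaluate)
toolbox.register("mate", tools.cxTwoPoint)
toolbox.register("mutate", tools.mutGaussian, mu=0.0, sigma=0.2, indpb=0.2)
toolbox.register("select", tools.selTournament, tournsize=3)

if __name__ == "__main__":
    # Since fitness evaluation dominates the run time, parallelising the map
    # over individuals is usually the first optimisation worth doing.
    with Pool() as pool:
        toolbox.register("map", pool.map)
        pop = toolbox.population(n=40)
        pop, log = algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2,
                                       ngen=20, verbose=False)
    print(tools.selBest(pop, 1)[0].fitness.values)
```

Mixed genomes (integers, reals, booleans, variable-length lists) are handled in DEAP by writing your own init, crossover, and mutation functions and registering them in the toolbox in the same way.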
  • asked a question related to Evolutionary Algorithms
Question
7 answers
There is an idea to design a new algorithm for the purpose of improving the results of software operations in the fields of communications, computers, biomedical, machine learning, renewable energy, signal and image processing, and others.
So what are the most important ways to test the performance of smart optimization algorithms in general?
Relevant answer
Answer
I'm not keen on calling anything "smart". Any method will fail under some circumstances, such as for some outlier that no-one has thought of.
  • asked a question related to Evolutionary Algorithms
Question
1 answer
Hi,
Is there any Python package that implements the epsilon-constraint method for solving multiobjective optimization problems using evolutionary algorithms?
Relevant answer
Answer
  • asked a question related to Evolutionary Algorithms
Question
3 answers
Genetic algorithms can be implemented using the ga tool in MATLAB. Please suggest a tool to implement other evolutionary or nature-inspired algorithms.
Relevant answer
Answer
You can find a lot of ready algorithms from this website.
  • asked a question related to Evolutionary Algorithms
Question
5 answers
I'm looking to compare an evolutionary algorithm to a state-of-the-art algorithm using a set of test functions. I'd like to use modern test functions instead of those defined in Deb's (2002) work.
Relevant answer
Answer
  • asked a question related to Evolutionary Algorithms
Question
31 answers
Various metaheuristic optimization algorithms with different sources of inspiration have been proposed in recent decades. Unlike mathematical methods, metaheuristics do not require any gradient information and do not depend on the starting point. Furthermore, they are suitable for complex, nonlinear, and non-convex search spaces, especially when near-global optimum solutions are sought with limited computational effort. However, some of these metaheuristics get trapped in a local optimum and are not able to escape, for example. For this purpose, numerous researchers focus on adding efficient mechanisms for enhancing the performance of the standard versions of the metaheuristics. Some of them are addressed in the following references:
I will be grateful If anyone can help me to find other efficient mechanisms.
Thanks in advance.
Relevant answer
I recommend you to check also the CCSA algorithm implemented by a Conscious Neighborhood-based approach which is an effective mechanism to improve other metaheuristic algorithms as well. The CCSA and its full source code are available here:
  • asked a question related to Evolutionary Algorithms
Question
5 answers
If yes, could anyone provide me with some references/resources on how to compute the theoretical performance guarantee (i.e., the approximation ratio) for a population-based differential evolution algorithm?
Relevant answer
Answer
Yes, there are, but most of them focus on very simple optimisation problems, such as:
  • asked a question related to Evolutionary Algorithms
Question
12 answers
For my research, and to compare the results with those of a swarm algorithm, I need an evolutionary algorithm, and I consider GA to be one of them.
Relevant answer
Answer
Also, if you prefer using predefined functions in Matlab, check this link:
  • asked a question related to Evolutionary Algorithms
Question
6 answers
Hi,
I just want to make sure that I understand the mechanics of NSGA-II (the non-dominated sorting genetic algorithm) for a multiobjective optimization problem, since I am not satisfied with the resources I have (I would be grateful if anyone could recommend a good paper or source to read more about NSGA-II).
Here is what I got so far:
1- We start with a random population lets call P_0 of N individuals
2- Generate an offspring population Q_0 of size N from P_0 by using binary tournament selection, crossover and mutation
3- Let R_0=P_0 U Q_0
While (itr<= maxitr) do,
5- Identify the non-dominated fronts in R_0, (F_1, ..., F_j)
6- create P_1 (of size N) as follows:
for i=1:j
if |P_1| + |F_i| <= N
set P_1=P_1 U F_i
else,
add the least crowded N - |P_1| solutions from F_i to P_1
end
end
7- set P_0 = P_1;
8- generate an off-spring Q_0 from P_0 and set R_0=Q_0 U P_0
9- itr=itr+1;
end(do)
My question (assuming the previous algorithm is correct): how do I generate Q_0 from P_0 in step 8?
Do I randomly choose any 2 solutions from P_0 and mate them, or is it better to select the parents according to some condition, like letting those with the highest rank mate?
Also, if you can leave me some well-written papers on NSGA-II I would be grateful.
Thanks
Relevant answer
Answer
I am a bit muddled up with respect to tournament selection.
Should the offspring population be the same size as the parent population?
If that is the case, why do we need a tournament selection strategy to form a mating pool? We might as well directly use the entire population as a whole.
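Regarding the original question of generating Q_0 from P_0 in step 8: in the standard NSGA-II description the mating pool is filled by binary tournaments using the crowded-comparison operator (lower non-domination rank wins; ties are broken by larger crowding distance), and crossover/mutation are then applied to that pool. A minimal sketch, assuming each individual carries rank and crowding attributes from the last sort:

```python
import random

def crowded_binary_tournament(population, rng=random.Random(0)):
    """Pick len(population) parents; each parent is the winner of a
    binary tournament under the crowded-comparison operator."""
    def better(a, b):
        if a.rank != b.rank:
            return a if a.rank < b.rank else b      # lower rank (front) wins
        return a if a.crowding > b.crowding else b  # tie: larger crowding wins

    mating_pool = []
    for _ in range(len(population)):
        a, b = rng.sample(population, 2)
        mating_pool.append(better(a, b))
    return mating_pool
```

Purely random mating also works, but it loses the selection pressure that the tournament provides.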
  • asked a question related to Evolutionary Algorithms
Question
5 answers
In most MOEA publications, researchers will use IGD or hyper-volume as their evaluation metrics. However, in real-world problems, both nadir points and ideal points are unknown. What's worse, in many cases, the domain knowledge is unavailable. Thus, in this case, how do I compare the performance between different algorithms? For example, how do I compare the performance of different MOEAs on a real-world TSP problem without any domain knowledge?
Relevant answer
Answer
Iago Augusto Carvalho sorry for the late response. I saw this approach in some papers. I can't remember all of them, but I found at least one where they used this method:
  • asked a question related to Evolutionary Algorithms
Question
5 answers
Hi,
Ok, so I'm quite new to computer science and metaheuristics, and need to implement multi-objective optimization for an environmental economics problem with real-world data.
The problem goes as this :
I have a large set of possible locations for an infrastructure within a city. Each location provides two benefits A and B, as well as costs (more benefits may be added in the future, depending on further research). Data points are unrelated, cost and benefit functions are not linear and not continuous.
I need a method for selecting a handful of locations that maximizes simultaneously A and B, under the constraint that the total costs < a budget parameter.
I went towards a genetic algorithm (GA) for this problem, as it is highly combinatorial, but I am facing the fact that most "traditional" GAs I've looked at have fixed chromosome lengths, and ultimately only give out final individuals of n items. In my case, I am quite flexible on the number of best locations, as long as the algorithm either minimizes total costs or handles them as a constraint. As a matter of fact, it would be quite interesting to have as a final result a Pareto front of individuals of different sizes (for example: in my problem, locations in the city center have more "value" than locations in periurban areas, so a few central locations could be as Pareto-optimal as more numerous periurban locations). So I see the problem as a knapsack problem, where costs would be the "weight"; however, there can't be any repetition of items here (the same location cannot be used twice).
Is there a way to handle the cost constraint so as to make a knapsack-style genetic algorithm that can provide a Pareto front of individuals of heterogeneous length? I have been trying it with the DEAP library but don't really find details in its documentation.
Many thanks
Georges Farina
Relevant answer
Answer
Knapsack problems of the economic type can easily be solved by a genetic algorithm; see the following sites for examples of knapsack business problems:
Knapsack Problem Using Genetic Algorithm With Source Code | by Muzamil Shahbaz | Medium
Genetic algorithm for knapsack problem in Matlab - Free Open Source Codes - CodeForge.com
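In addition to the links above, variable-length individuals are reasonably natural in DEAP if each individual is a set of location indices; a hedged sketch (all data, sizes, and the death-penalty constraint handling below are illustrative choices, not the only way):

```python
import random
from deap import base, creator, tools

NUM_LOCATIONS = 50
BUDGET = 100.0
random.seed(1)
# Hypothetical per-location data: (benefit A, benefit B, cost).
DATA = [(random.random(), random.random(), random.uniform(1.0, 10.0))
        for _ in range(NUM_LOCATIONS)]

creator.create("FitnessMax2", base.Fitness, weights=(1.0, 1.0))
creator.create("Individual", set, fitness=creator.FitnessMax2)

def init_individual():
    # Variable-length individual: a random subset of location indices.
    return creator.Individual(random.sample(range(NUM_LOCATIONS),
                                            random.randint(1, 10)))

def evaluate(ind):
    a = sum(DATA[i][0] for i in ind)
    b = sum(DATA[i][1] for i in ind)
    cost = sum(DATA[i][2] for i in ind)
    if cost > BUDGET:         # budget handled as a death penalty here;
        return (0.0, 0.0)     # a repair operator or penalty term is an alternative
    return (a, b)

def cx_set(ind1, ind2):
    # Set crossover: intersection and symmetric difference, so children
    # can have different lengths than their parents.
    temp = set(ind1)
    ind1 &= ind2
    ind2 ^= temp
    return ind1, ind2

def mut_set(ind):
    if random.random() < 0.5 and len(ind) > 1:
        ind.remove(random.choice(sorted(ind)))      # drop a location
    else:
        ind.add(random.randrange(NUM_LOCATIONS))    # add a location
    return (ind,)

toolbox = base.Toolbox()
toolbox.register("individual", init_individual)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("evaluate", evaluate)
toolbox.register("mate", cx_set)
toolbox.register("mutate", mut_set)
toolbox.register("select", tools.selNSGA2)   # Pareto-based environmental selection
```

A standard (mu + lambda) loop (e.g. deap.algorithms.eaMuPlusLambda) can then drive these operators and will return a front containing individuals of heterogeneous length.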
  • asked a question related to Evolutionary Algorithms
Question
18 answers
In the field of soft computing, many researchers have written about premature convergence. If we observe the standard deviation, can we tell whether an algorithm has converged prematurely? How can we determine whether an algorithm suffers from premature convergence?
Relevant answer
Premature convergence of a genetic algorithm arises when the population for an optimization problem converges too early. In order to avoid this situation, the following link may be helpful in this context:
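A simple numerical way to watch for this is to track population diversity over the generations; a hedged sketch (the thresholds and what counts as "too early" remain problem specific):

```python
import numpy as np

def diversity_report(population, fitness):
    """Return (fitness spread, genotypic spread). If both collapse towards zero
    while the best fitness is still far from a known or estimated optimum, the
    population has very likely converged prematurely."""
    fitness = np.asarray(fitness, dtype=float)
    population = np.asarray(population, dtype=float)
    fitness_std = fitness.std()
    centroid = population.mean(axis=0)
    genotypic_spread = np.linalg.norm(population - centroid, axis=1).mean()
    return fitness_std, genotypic_spread
```

The standard deviation of the fitness alone can be misleading (a flat plateau also gives a small value), which is why the spread in decision space is worth tracking as well.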
  • asked a question related to Evolutionary Algorithms
Question
23 answers
Bat-inspired algorithm is a metaheuristic optimization algorithm developed by Xin-She Yang in 2010. This bat algorithm is based on the echolocation behaviour of microbats with varying pulse rates of emission and loudness.
The idealization of the echolocation of microbats can be summarized as follows: Each virtual bat flies randomly with a velocity vi at position (solution) xi with a varying frequency or wavelength and loudness Ai. As it searches and finds its prey, it changes frequency, loudness and pulse emission rate r. Search is intensified by a local random walk. Selection of the best continues until certain stop criteria are met. This essentially uses a frequency-tuning technique to control the dynamic behaviour of a swarm of bats, and the balance between exploration and exploitation can be controlled by tuning algorithm-dependent parameters in bat algorithm. (Wikipedia)
What are the applications of bat algorithm? Any good optimization papers using bat algorithm? Your views are welcome! - Sundar
Relevant answer
Answer
The bat algorithm (BA) is a bio-inspired algorithm developed by Xin-She Yang in 2010, and BA has been found to be very efficient.
  • asked a question related to Evolutionary Algorithms
Question
6 answers
Is using algorithms to find regions of interest in images fruitful?
Relevant answer
Answer
Please read these articles:
Medical Image Registration Using Evolutionary Computation: An Experimental Survey By: S. Damas; O. Cordón; J. Santamaría
Analyzing Evolutionary Algorithms: The Computer Science Perspective
By: Thomas Jansen
  • asked a question related to Evolutionary Algorithms
Question
12 answers
Relevant answer
Metaheuristics can be categorized by their common features into at least nine basic categories:
  1. Evolution-inspired algorithms: These algorithms attempt to imitate the rules and laws of the natural evolution of the biological world. Regardless of their nature, these evolution-based optimization algorithms are regarded as generic population-based metaheuristic algorithms. The search process of this class of algorithms has two focal stages: exploration and exploitation. The exploration phase precedes the exploitation phase, which can be regarded as the process of exploring the search space in detail. At the exploration stage, the search process is launched with a randomly generated population, which is then evolved over a number of subsequent generations. The most notable point of these heuristics is that the next generation of individuals is formed by collecting the best individuals and then integrating them together. Through this integration, the population is enhanced over the succeeding generations. On this basis, the optimizer of the exploration stage includes some design parameters that have to be randomized as much as possible to globally explore the promising regions of the search space.
.......................................
to complete the discussion, please refer to page 39 and figure 5 of the following reference:
  • asked a question related to Evolutionary Algorithms
Question
40 answers
Recently, many metaheuristic algorithms, mostly based on swarm intelligence, have been published. A good future for this field lies in applying these algorithms to solve real problems in different sectors such as business, marketing, management, intelligent traffic systems, engineering, health care, and medicine. Please let's discuss their applications in the real world and share our case studies.
Relevant answer
More info can be found here:
N. K. T. El-Omari, "Sea Lion Optimization Algorithm for Solving the Maximum Flow Problem", International Journal of Computer Science and Network Security (IJCSNS), e-ISSN: 1738-7906, DOI: 10.22937/IJCSNS.2020.20.08.5, 20(8):30-68, 2020.
Or simply refer to the same paper at the following address:
  • asked a question related to Evolutionary Algorithms
Question
8 answers
I want to solve some nonlinear optimisation problems, like minimising/maximising f(x)=x^2+sinx*y + 1/xy over the solution space 3<=x<=7, 4<=y<=11, using an artificial neural network. Is it possible to solve this?
Relevant answer
Answer
Dear Harish Garg
Here is you are:
Ali Sadollah, Hassan Sayyaadi, Anupam Yadav (2018) “A dynamic metaheuristic optimization model inspired by biological nervous systems: Neural network algorithm”, Applied Soft Computing, 71, pp. 747-782
  • asked a question related to Evolutionary Algorithms
Question
9 answers
How can we measure the quality of an element (cell) in a candidate solution in evolutionary algorithms, for example a gene in a chromosome? In other words, the objective function calculates the quality of each candidate solution, while I want to find a method to measure the quality of each element of the candidate solution.
Relevant answer
Answer
There are some technical programming points you should take care of, regardless of the above answers. First, if you follow what has been proposed above directly, your solution may become infeasible when you keep a single cell fixed, so your analysis would rest on a misleading chain of fitness feedback. Another mistake can come from monitoring a weakly defined fitness function instead of a loss function that is sensitive to every gene independently. So you can also redefine the fitness function to be another valid function, one that tracks (senses) each cell instead of the whole solution block. If you protect your model from these flaws, or if you are not able to redefine the objective function, you can focus on the way you programmed your code (functional programming or not). Then you can simply eliminate the selected elements one by one (using a conditional selection scheme) and re-run the evaluations in a loop.
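One crude, solution-specific way to score individual genes, as discussed above, is a perturbation sensitivity test; a minimal sketch with a hypothetical objective:

```python
import numpy as np

def gene_sensitivity(solution, objective, delta=1e-2):
    """Perturb one gene at a time and record how much the objective changes.
    Genes whose perturbation barely changes the objective contribute little
    on this particular solution; large changes mark influential genes.
    This is a local measure, not a global statement about the gene."""
    base = objective(solution)
    scores = np.zeros(len(solution))
    for j in range(len(solution)):
        perturbed = np.array(solution, dtype=float)
        perturbed[j] += delta
        scores[j] = abs(objective(perturbed) - base)
    return scores

objective = lambda x: float(np.sum(np.asarray(x) ** 2))   # placeholder objective
print(gene_sensitivity([1.0, 0.1, 3.0], objective))
```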
  • asked a question related to Evolutionary Algorithms
Question
7 answers
Local search method helps to increase the exploitation capability of optimization and meta-heuristic algorithm. It can help to avoid local optima also.
Relevant answer
Answer
A hint: it is not OK to be ignorant of nonlinear mathematical optimisation. First you absolutely need to understand optimality criteria - as those conditions are the goal to fulfill. Mathematical optimisation tools are DESIGNED such that accumulation points from the iterative scheme WILL satisfy the optimality criteria, and in most cases you will be able to see the "fault" in the optimality criterion shrinking.
In contrast metaheuristics have no such "guiding light" at all - they fumble in the dark! Few sensible scholars with knowledge in mathematical optimisation would use these tools.
  • asked a question related to Evolutionary Algorithms
Question
8 answers
Are there any global optimization algorithms that support categorical input variables?
Evolutionary algorithms generally need to perform local mutation; however, unlike for real or integer variables, it is hard to define similarity between categorical variables.
Relevant answer
Answer
We can optimise mixed continuous and categorical variables with Bayesian Optimisation. Please check out our paper Bayesian Optimisation over Multiple Continuous and Categorical Inputs and the code below.
  • asked a question related to Evolutionary Algorithms
Question
79 answers
I am interested in solving a mathematical problem (MILP) using evolutionary algorithms, but I am confused about which one to choose as a beginner in programming. Please suggest an algorithm that is easy to implement and gives good results.
Thanks
Relevant answer
Answer
The response from those who know about both - such as me - is that PSO and other meta things will very unlikely be as robust and accurate as a modern branch-and-bound code. I have always been puzzled why people seem to not be interested in what the optimum actually is - I thought that finding an optimum was the whole idea with using optimisation in the first place!
And yet there are more people here that seem to be content with non-optimal points. I would not be, and therefore I do not use meta things, and instead use tools that guarantee that the final output always is optimal. I have always thought that that is the whole purpose with the use of optimisation tools. But for some I guess not. I can't understand why - it seems to be a waste of time.
  • asked a question related to Evolutionary Algorithms
Question
13 answers
I am looking for single-objective problems for which most evolutionary algorithms can find the global optimum. For most of the new benchmark problems (i.e. CEC2015) algorithms are not able to converge to the global optimum. For my experiments, I need simpler problems such as Sphere, so that most algorithms can find the global optimum. Can anyone recommend some problems or a benchmark that I could use?
Thank you in advance!
Relevant answer
Following
  • asked a question related to Evolutionary Algorithms
Question
4 answers
I am working on comparing the performance of evolutionary algorithms in optimizing a suspension model. The suspension model is built using ADAMS/Car. I need the best way to implement these algorithms so that they work with our model as fast as possible.
Relevant answer
Answer
MATLAB and Adams are easily coupled to each other. You should just write an appropriate objective function in MATLAB which can use the data in Simulink, then use this function in the GA tool. Your objective function should go to Simulink, run the Adams/Car block in Simulink, and then return the response as the objective function output.
If you write such an objective function, you can use the GA tool in MATLAB and optimize your problem easily.
  • asked a question related to Evolutionary Algorithms
Question
8 answers
Evolutionary algorithms, namely the genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE), are used to solve optimization problems. In photovoltaics (PV), the I-V characteristic of PV cells is non-linear, which requires a resolution method. So, among the three methods cited (GA, PSO, and DE), what is the best method for solving this non-linear problem, according to the strengths and weaknesses of each?
Relevant answer
Dear Rachid,
The optimization problem in the PV system, where it is required to locate the maximum power point, is characterized by a relatively broad maximum rather than a sharp one, so it will not be very critical for the resolution issue. I think the speed of convergence to the global maximum is important and distinguishing for such a problem. The oscillations around the steady-state solution are another performance parameter of the optimization method.
Best wishes
  • asked a question related to Evolutionary Algorithms
Question
4 answers
I am working on a resource allocation problem using an SPEA2 evolutionary algorithm. The problem involves decision variables where each variable has a different domain, e.g. Ei < di, where Ei is the allocation to a user and di is the individual demand. The problem involves a linear constraint such that sum(Ei) = total resource. The probability of creating feasible offspring after the crossover and mutation operators is extremely low, so we need a repair operator for this purpose. I need guidance on the selection of suitable repair operators and on how I should apply them; I mean, should we repair all solutions or only some percentage? I would appreciate guidance, comments, or any literature reference.
Relevant answer
Answer
Saifullah Khalid What is the objective function? I do not find it in your description; that would otherwise be very handy to have. Also, you need to restrict E_i to be non-negative.
I can - with almost certainty - say that this problem is a standard resource allocation problem, for which there are solvers that guarantee to provide an optimal solution, within a second. (My papers on it describe a great variety of models and approaches.) Hence, skip the metaheuristic, please - I do not think it is going to provide an optimum, while using the simple tools described in my papers will always yield optimal solutions.
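If the SPEA2 route is kept anyway, one common option for the equality constraint in the original question is a proportional repair after variation; a hedged sketch (it assumes sum(d) >= total so a feasible allocation exists, and treats the bounds as 0 <= E_i <= d_i):

```python
import numpy as np

def repair(E, d, total, sweeps=100, eps=1e-9):
    """Clip the allocation to [0, d_i] and redistribute the surplus/deficit
    proportionally to the remaining slack (or current allocation) until
    sum(E_i) = total is met."""
    E = np.clip(np.asarray(E, dtype=float), 0.0, d)
    for _ in range(sweeps):
        gap = total - E.sum()
        if abs(gap) < eps:
            break
        if gap > 0:                       # hand out the missing resource
            slack = d - E
            E += gap * slack / slack.sum()
        else:                             # take back the excess resource
            E += gap * E / E.sum()
        E = np.clip(E, 0.0, d)
    return E

d = np.array([4.0, 6.0, 10.0])
print(repair([5.0, 1.0, 1.0], d, total=12.0))   # -> feasible allocation summing to 12
```

Repairing every offspring keeps the whole population feasible; repairing only a fraction (and penalising the rest) is a common compromise when repair itself biases the search too strongly.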
  • asked a question related to Evolutionary Algorithms
Question
3 answers
I'm working on a project and I wanted to know about the time and space complexity analysis of MOEAs. I have searched Google Scholar, but there isn't any valuable information on this matter. Please help me and let me know if there is a good comparative study or article on this subject.
Thanks
  • asked a question related to Evolutionary Algorithms
Question
18 answers
What is the best way to find the optimal solution of a non-convex, non-linear optimization problem that really amounts to finding the optimal/best combination of one member each from a number of groups or subpopulations?
Each group or subpopulation has a number of members that are sequences, i.e. arrays of elements.
The optimal solution is given by best matching a sequence (as a member) from each group or subpopulation.
It is similar to recombination of evolutionary algorithms (but not entirely identical to evolutionary algorithms).
Relevant answer
Answer
To me, the best solutions to such problems are to use:
1) Conventional neural networks.
2) Multivariate exponential families of distributions.
  • asked a question related to Evolutionary Algorithms
Question
5 answers
I'm searching for some good tools that offer an easy way to apply evolutionary/genetic algorithms for selecting the best features from a dataset. I was wondering if this task can be performed in KNIME, WEKA, or Orange?
Relevant answer
Answer
Use the WEKA software.
  • asked a question related to Evolutionary Algorithms
Question
3 answers
Is it possible to specify upper bounds on the coefficients other than a universal or global constraint?
I am using the Excel Solver evolutionary algorithm at present. I do not know whether this is the same for other evolutionary algorithms for non-convex nonlinear optimization problems.
The evolutionary algorithm requires both a lower and an upper bound on the optimization problem coefficients.
If I don't specify "all coefficients <= x", but specify an upper bound per coefficient instead (c1 <= x1, c2 <= x2, c3 <= x3), it complains and refuses to run. So I implement and use both.
But specifying individual upper bounds on individual coefficients implies a (significantly) smaller solution space.
I also don't want the algorithm to search outside these individual coefficient upper boundaries, yet I get the impression that it does. Can someone confirm or refute this? Also, if this does occur, I am of the impression that it can impact the solution I get.
Is there a way to specify individual coefficient (upper) boundaries and ensure that the algorithm only searches within these?
Relevant answer
Answer
With nonlinear problems there may be multiple local solutions which will "attract" the algorithm, preventing it from finding the optimum global solution. Your boundaries will impact the starting point and also the first few provisional steps, so that the outcome may depend on the boundaries. Even if they include the global optimum, this doesn't assure that the algorithm will find it. This problem is too complex for a simple solution that will work in any case. I have solved many such problems. When I don't get an adequate solution with Excel's Solver, I resort to C code, which I have tons of. If you have a particular problem or data set, I'd be glad to solve it for you and send you the solution. I wrote a book on the subject, https://www.amazon.com/dp/B07FL7JR1J, which contains many examples. It will be free on 1/7/2020.
  • asked a question related to Evolutionary Algorithms
Question
6 answers
Hi everyone,
Is it reasonable to apply a hybrid model based on an artificial neural network (ANN) and evolutionary algorithms as a classifier for intrusion detection?
In addition, would we be able to divide the data sets into training, validation, and test sets? I have never seen this in papers; I mean, researchers usually consider only training and test records to build their models.
My last question is whether we can generate data for intrusion detection using simulation of chemical operations or not.
Thanks you in advance.
Relevant answer
Answer
  • asked a question related to Evolutionary Algorithms
Question
9 answers
I am using a genetic algorithm (GA) on different test functions with different hyperparameters to determine the effects of the hyperparameters on the algorithm.
My criterion: is the minimum found by the GA close enough to the true answer?
"Close enough" is determined by a "level of accuracy", or LOA.
The problem is that different functions have different input ranges, and using a static LOA for all test functions does not seem right.
The question is how I should decide on the LOA value. Should it be related to the input range of the function being tested?
Example: the Schwefel test function has an input range of (-500, 500) for all inputs and a minimum of 0. If the minimum found by the GA is 0.08 and the LOA is 0.1, then this answer is correct because |0 - 0.08| < 0.1; but if the answer is 0.12 it is considered wrong.
The Rastrigin test function has a range of (-5.12, 5.12) for all inputs. Using the same LOA for Rastrigin does not seem right, since it has a much smaller range and obviously the GA will do better with the same LOA.
Should the LOA be related to the range? For example, should an LOA of 0.001 be used for Rastrigin, since its range is 1/100 of the Schwefel range?
Relevant answer
Answer
Every optimization problem (of any type) has the same components:
1- Upper and lower bounds for the search space.
2- Number of dimensions (D), or the size of the problem (number of Xs).
3- Type of the problem (discrete, continuous, binary).
4- One (or more) objective function.
5- An optimal solution.
All solutions, i.e. all generated Xs within the range (LowerBound, UpperBound), are valid. However, the better solution is the one whose cost/fitness value is closer to the optimal solution (component no. 5). Therefore (in my opinion, and based on many research papers) you cannot specify a range for the LOA. Instead, we say that algorithm (A) is better than algorithm (B) when its cost/fitness is closer than B's to the optimal solution of that optimization problem.
You gave two examples (Rastrigin and Schwefel); these two problems have been widely tested. What about the "Quartic" test function? It is a noisy function (i.e., it contains a random part). This is a very challenging and interesting test problem: the average solutions for most evolutionary algorithms and swarm intelligence algorithms reach something around 10^-5, while the optimal solution is 0. You cannot say there is a specific LOA range for the Quartic test function; you just check how close your algorithm gets to 0.
One more thing: some research papers say, "Any algorithm that reaches a cost value less than 10^-12 will be considered as zero, because it is a very low value". In my opinion this is not fair. For example, if algorithm (A) reaches 10^-150 in 500 iterations while algorithm (B) reaches 10^-30 in 500 iterations, both of them will be counted as zero, even though A was faster and better than B at approaching the optimal solution.
Comparing an algorithm with others should be done based on different evaluation measurements, such as the following:
Note: after executing the algorithms for R independent runs, where R could be 10, 20 or 30:
1- The best (B) and worst (W) results.
2- The mean (Avg) and median (Med).
3- Standard deviation (Std.) .
4- Mean absolute error (MAE) between the algorithms.
5- Statistical analysis (Wilcoxon, Friedman, ...) between the algorithms.
6- Convergence analysis between the algorithms.
7- Time Complexity.
8- Execution Time (if you have executed all algorithms on the exact same machine).
Based on the above 8 measurements, you can argue that the (B, W) range of your algorithm is better than the (B, W) range of another algorithm. A low Std. value means that your algorithm is more stable than the others. Statistical analysis methods show how significantly your algorithm differs from the others. The rest show how fast your algorithm is compared to the others in terms of iterations/fitness evaluations, algorithmic time, and execution time.
For each run, you should specify your stopping condition:
1- A fixed number of iterations.
2- A fixed number of fitness evaluations.
3- A value indicating that the algorithm is trapped (sometimes called "tolerance"). For example, Tolerance = 100 means that after 100 iterations the algorithm has not improved / has not found a new better solution.
Hope I answered your question and did not explain something outside what you needed.
Regards ..
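As a rough illustration of the multi-run comparison described above (my own sketch, not part of the answer), the summary statistics and a Wilcoxon test can be computed as follows; `run_algorithm_A` and `run_algorithm_B` are hypothetical placeholders for full runs of the two algorithms being compared.

```python
# Run each algorithm R times, then report best/worst/mean/median/std and a
# paired Wilcoxon signed-rank test over the two sets of final fitness values.
import numpy as np
from scipy.stats import wilcoxon

def run_algorithm_A(seed):
    # hypothetical placeholder for one full run of algorithm A; returns its best fitness
    return np.random.default_rng(seed).random()

def run_algorithm_B(seed):
    # hypothetical placeholder for one full run of algorithm B
    return np.random.default_rng(seed + 1000).random()

def summarize(results):
    r = np.asarray(results)
    return {"best": r.min(), "worst": r.max(), "mean": r.mean(),
            "median": np.median(r), "std": r.std(ddof=1)}

R = 30                                   # number of independent runs per algorithm
results_A = [run_algorithm_A(s) for s in range(R)]
results_B = [run_algorithm_B(s) for s in range(R)]

print("A:", summarize(results_A))
print("B:", summarize(results_B))
stat, p = wilcoxon(results_A, results_B)  # paired non-parametric test over the R runs
print("Wilcoxon p-value:", p)
```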
  • asked a question related to Evolutionary Algorithms
Question
5 answers
With evolutionary algorithms, I understand the concept of recombination and mutation, and that offspring may essentially constitute, approximate, or be seen as sub-populations.
But what if your optimization problem really consists of, and the initial population is best composed from, a combination of sub-populations?
That is, creating a number of sub-populations and combining them.
I suppose one can see this notion of sub-populations as a characteristic to recombine and mutate on, or according to.
Relevant answer
Answer
Hi, if you run the algorithm using all the samples of the different sub-populations, you increase the size of the initial population and mutations will occur between the different samples. Alternatively, you can perform different runs using a different sub-population for each run. Mutations between sub-populations still need to be defined in your approach.
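A minimal island-model sketch of the second option (evolving each sub-population separately and occasionally migrating individuals between them) is shown below; `evolve_one_generation` and the sphere `fitness` are placeholder stand-ins for your own EA step and objective.

```python
# Island model: several sub-populations evolve independently, with periodic
# migration of each island's best individual to the next island.
import random

def fitness(ind):
    return sum(g * g for g in ind)          # placeholder objective (sphere)

def evolve_one_generation(island):
    # placeholder EA step: keep the better half and refill with mutated copies
    island.sort(key=fitness)
    survivors = island[:len(island) // 2]
    children = [[g + random.gauss(0, 0.1) for g in random.choice(survivors)]
                for _ in range(len(island) - len(survivors))]
    return survivors + children

islands = [[[random.uniform(-5, 5) for _ in range(10)] for _ in range(20)]
           for _ in range(4)]               # 4 sub-populations of 20 individuals

for gen in range(100):
    islands = [evolve_one_generation(isl) for isl in islands]
    if gen % 10 == 0:                       # migration every 10 generations
        for i, isl in enumerate(islands):
            best = min(isl, key=fitness)
            islands[(i + 1) % len(islands)].append(list(best))

best_overall = min((min(isl, key=fitness) for isl in islands), key=fitness)
print(best_overall, fitness(best_overall))
```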
  • asked a question related to Evolutionary Algorithms
Question
6 answers
What is the best/next step if you suspect the evolutionary algorithm recombines and mutates in an inferior way for your non-convex non-linear optimization problem?
Write your own heuristic?
How do you implement it?
Relevant answer
Answer
Hi, instead of modifying the mutation rule, it would be better as a first step to generate a larger initial population. Indeed, a new mutation rule would be adapted to one particular problem only.
  • asked a question related to Evolutionary Algorithms
Question
4 answers
How good are evolutionary algorithms with equality constraints?
I must add that the objective function is specified as a sum of absolute values.
I need/want to use equality constraints to guide the solution.
Some of the optimization coefficients (variable cells), when summed, must equal a certain value.
Relevant answer
Answer
Thank you for the question and replies so far. My experience is that evolutionary algorithms (metaheuristics) are good for inequality constraints but not so for equality constraints. So, it is better to eliminate equality constraints to reduce the problem dimension before using evolutionary algorithms. When one equality constraint is eliminated to calculate one (dependent) variable, then inequality constraints may have to be added if there are lower and/or upper bounds on the (dependent) variable.
See the following book chapter for our study on using differential evolution for solving optimization problems with equality constraints.
Sharma S. and Rangaiah G.P., Improved Constraint Handling Technique for Multi-Objective Optimization with Application to Two Fermentation Processes in “Multi-Objective Optimization in Chemical Engineering: Developments and Applications” edited by G.P. Rangaiah and A. Bonilla-Petriciolet, John Wiley, 2013.
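As a small sketch of the elimination idea (my own illustration, not taken from the cited chapter): if c1 + c2 + c3 must equal a value S, optimize only c1 and c2 and compute c3 = S - c1 - c2, penalizing candidates where the dependent c3 violates its own bounds. The sum-of-absolutes objective below is hypothetical.

```python
# Eliminating one equality constraint by solving for a dependent variable,
# then enforcing its bounds with a penalty.
from scipy.optimize import differential_evolution

S = 1.0                       # required sum of the three coefficients
C3_BOUNDS = (0.0, 1.0)        # bounds on the eliminated (dependent) variable

def objective_free(free_vars):
    c1, c2 = free_vars
    c3 = S - c1 - c2                          # equality constraint eliminated
    if not (C3_BOUNDS[0] <= c3 <= C3_BOUNDS[1]):
        return 1e9                            # penalty: dependent variable out of bounds
    # hypothetical sum-of-absolutes objective, as in the question
    return abs(c1 - 0.2) + abs(c2 - 0.5) + abs(c3 - 0.3)

result = differential_evolution(objective_free, bounds=[(0.0, 1.0), (0.0, 1.0)], seed=0)
c1, c2 = result.x
print(c1, c2, S - c1 - c2, result.fun)
```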
  • asked a question related to Evolutionary Algorithms
Question
3 answers
I run a nested or multi-level optimization, because the nonlinearity is simply too much for the evolutionary algorithm.
I use optimization to search for an initial solution, which I then search and interrogate further at the second level.
It essentially helps to cut up the solution space, to help the algorithm.
Yet, in a sense, at the second level of optimization the evolutionary algorithm still descends too quickly, which causes it to miss the better solution.
I have seen something similar with ordinary nonlinear programming/optimization based on second-order partial derivatives, Newton's step, etc.
Is there a way to prevent the evolutionary algorithm from descending too quickly?
It could rather descend more slowly and recalculate at that point.
What are the implications when this phenomenon is possible with nonlinear optimization and evolutionary algorithms? Still too much/too fine nonlinearity?
Relevant answer
Answer
If I understand correctly, you are facing the "exploration vs exploitation" problem. In essence you should experiment with the parameters that affect "exploration". Parameters depend on the algorithm you use. For example you could try with (listed in no particular order)
- representation
- population size (increase)
- mutation operators
- crossover operators
- selection operators
...
Probably you should run a lot of experiments to find the set of parameters that suits you. Also, you should be aware that none of the metaheuristic algorithms in practice (limited execution time) guarantees convergence to the optimal solution. However, with the proper parameters you might get the solutions that are good enough.
I hope that this helps!
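A rough sketch of such parameter experiments (only an illustration, with a toy placeholder EA and objective standing in for the real nested problem): sweep population size and mutation rate and record the best fitness reached, so you can see which settings keep enough exploration.

```python
# Grid of parameter settings; each combination is run once and its best fitness printed.
import itertools
import random

def run_ea(pop_size, mutation_rate, generations=200, seed=0):
    # placeholder EA on a toy sphere objective; replace with your own second-level problem
    rng = random.Random(seed)
    f = lambda x: sum(v * v for v in x)
    pop = [[rng.uniform(-5, 5) for _ in range(10)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f)
        parents = pop[:pop_size // 2]
        pop = parents + [[v + rng.gauss(0, 1) if rng.random() < mutation_rate else v
                          for v in rng.choice(parents)]
                         for _ in range(pop_size - len(parents))]
    return min(f(x) for x in pop)

for pop_size, mut in itertools.product([50, 100, 200], [0.05, 0.2, 0.5]):
    print(pop_size, mut, run_ea(pop_size, mut))
```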
  • asked a question related to Evolutionary Algorithms
Question
5 answers
What is the best commercially available evolutionary solver/algorithm for non-linear optimization problems?
It must be freely or easily obtainable.
Not necessarily free.
Relatively easy to install and use.
Anything better than Excel Solver's evolutionary algorithm?
Relevant answer
Answer
This question, like many others you have posted in the last few days, can be answered in the same way: it depends on the problems you are wanting to solve.
I would suggest looking at the results of the Black Box Optimisation Benchmarks (BBOB).
There are many different algorithms that are compared, and many are freely available via GitHub or from the authors of the codes. The tremendous advantage is that you can see how they perform on many different types of problems, some of which might be similar to what you are wanting to solve in your own work. Furthermore, you won't have to trawl through reams of cloying self-assessment and self-appraisal of many commercial products which, not surprisingly, are always fulsome in praise of their own product.
  • asked a question related to Evolutionary Algorithms
Question
3 answers
Can it happen that the grid size or resolution (to use such language) of the local solutions of a non-linear optimization problem is, or becomes, too fine for the evolutionary algorithm to pick up?
Is this just a matter of proper sampling? Will the algorithm conduct the appropriate sampling in all cases?
Relevant answer
Answer
I am not sure if I understand your question clearly, but if your problem is that your evolutionary algorithm (EA) gets stuck in a local optimum, this is called "premature convergence", and yes, this can happen if the problem is too complex, a local optimum is too good, or your algorithm is designed badly.
You may try to redesign/change your genetic operators (selection, crossover, mutation). For example, the selection operator of the Excel EA may allow only good individuals to survive, which decreases diversity, so that after a number of generations the algorithm gets stuck in the same region.
Using a selection operator which also allows a small number of bad individuals to survive across generations can increase the diversity of the population. You may also increase the mutation rate, but since mutation is a random process, it may take too long to leave the solution region you are stuck in. Moreover, increasing randomness will slow down convergence.
I recommend you use Python or R, which have well-designed EA packages, or any programming language you are familiar with. This will give you the chance to modify the operators when necessary.
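A small standalone sketch of the selection idea above (not Excel-specific, only an illustration): keep the best individuals, but also let a few randomly chosen, possibly bad, individuals survive, which preserves diversity and can delay premature convergence.

```python
# Selection with a diversity safeguard: elites plus a few random "lucky" survivors.
import random

def select_with_diversity(population, fitness, n_survivors, n_random=2):
    ranked = sorted(population, key=fitness)                 # minimization
    elites = ranked[:n_survivors - n_random]
    leftovers = ranked[n_survivors - n_random:]
    lucky = random.sample(leftovers, min(n_random, len(leftovers)))
    return elites + lucky

# usage on a toy population of 1-D candidates
pop = [[random.uniform(-5, 5)] for _ in range(30)]
survivors = select_with_diversity(pop, fitness=lambda x: x[0] ** 2, n_survivors=15)
print(len(survivors), min(survivors, key=lambda x: x[0] ** 2))
```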
  • asked a question related to Evolutionary Algorithms
Question
9 answers
What is the best way to choose initial solutions for a non-linear optimization problem sensitive to the initial solution?
If you have a significant number of optimization problem coefficients (50 in this case), and your optimization problem is such (non-linear with multiple possible local solutions) that it is sensitive to the initial solution, what are the best ways to choose (a set of) initial solutions?
Will the Excel Solver evolutionary algorithm do this?
Relevant answer
Answer
Optimization algorithms are really pretty simple. Just keep climbing up the hill until you get to the top. Take some precautions to make sure your step size isn't too big, and try many different starting points, spread far apart, to make sure you have found the true optimum and not just a local optimum.
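A minimal multi-start sketch of this advice, under assumptions of my own: sample many widely spread starting points (a Latin hypercube here), run a local optimizer from each, and keep the best result. The 50-dimensional objective below is only a stand-in for a problem with multiple local minima.

```python
# Multi-start local optimization from Latin-hypercube starting points.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import qmc

def objective(x):
    # placeholder multimodal objective in 50 dimensions
    return np.sum(x ** 2) + 5.0 * np.sum(1.0 - np.cos(2.0 * x))

dim, n_starts = 50, 20
lower, upper = -10.0, 10.0
sampler = qmc.LatinHypercube(d=dim, seed=0)
starts = lower + (upper - lower) * sampler.random(n_starts)   # n_starts x dim

results = [minimize(objective, x0) for x0 in starts]          # local search from each start
best = min(results, key=lambda r: r.fun)
print(best.fun)
```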
  • asked a question related to Evolutionary Algorithms
Question
4 answers
Why would an (the Excel) evolutionary algorithm be better at, or simply able to, solve what seems to be a linear problem, when the LP (and also NLP) algorithms struggle, or simply fail, to do so?
The problem is of the form:
Max X; X = x1 + x2 + x3 + ...
x1 = (k1 * z1) + (k2 * z2) + (k3 * z3) + ...
x2 = (k1 * y1) + (k2 * y2) + (k3 * y3) + ...
x3 = (k1 * q1) + (k2 * q2) + (k3 * q3) + ...
The k are the coefficients of the problem to optimize; z, y, q are factors and are given/constant.
The evolutionary algorithm can find a (better) solution (than the initial solution). The LP algorithm occasionally finds a solution, not better (mostly worse) than the evolutionary algorithm's. The NLP algorithm does not really find a solution at all.
Relevant answer
Answer
Your problem is equivalent to (if I understand your notation):
X = k1*(z1 + y1 + q1) + k2*(z2 + y2 + q2) + k3*(z3 + y3 + q3) -> max
If z, y, q are given constants, then their sums are also given constants, say K1, K2, K3:
X = k1*K1 + k2*K2 + k3*K3 -> max
There are no constraints on the decision variables k1, k2, k3.
This is a linear form that grows without bound as these variables take arbitrarily large values, hence there is no finite optimal solution. Something is missing in your problem statement.
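As a quick numerical check of this point (my own illustration, with hypothetical constants K1, K2, K3), an LP solver reports the unconstrained maximization as unbounded:

```python
# Maximizing k1*K1 + k2*K2 + k3*K3 with free variables and no constraints
# is reported as unbounded by SciPy's linprog.
import numpy as np
from scipy.optimize import linprog

K = np.array([3.0, 2.0, 5.0])                    # hypothetical summed constants K1, K2, K3
res = linprog(c=-K, bounds=[(None, None)] * 3)   # maximize K.k == minimize -K.k
print(res.status, res.message)                   # status 3 indicates an unbounded problem
```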
  • asked a question related to Evolutionary Algorithms
Question
4 answers
Dear colleagues,
I would like to announce that we have started a Special Issue on AI Journal (MDPI), where you can share your latest research regarding evolutionary computations. Both theoretical contributions and application papers are welcome.
Papers accepted during 2019 will get APCs waived (published free of charge).
Please, find further info on the SI website: https://www.mdpi.com/journal/ai/special_issues/EAIAA
Best regards.
Relevant answer
Answer
Sounds great!
  • asked a question related to Evolutionary Algorithms
Question
8 answers
If we have multiple classifiers and we need to know which one is under-fitting and which one is over-fitting, based on performance factors (classification accuracy and model complexity):
Is there any method to select the dominant classifier (optimal fitting) that balances the two above-mentioned factors?
Relevant answer
Thank you Salah Mortada for your explanation. However, I would like to draw a figure that compares the different methods used in the experimental results and clearly shows which methods over-fit and which under-fit. If you have an example of such an experimental result, I would be grateful.
  • asked a question related to Evolutionary Algorithms
Question
9 answers
My problem consists of:
1. More than a thousand constraints and variables.
2. It is purely 0-1 programming, i.e. all variables are binary.
3. Kindly note that I am not a good programmer.
Please provide me with some links to books or videos discussing the application of GA in MATLAB for solving 0-1 programming with a large number of variables and constraints.
I have gone through many YouTube videos, but they use examples with only two or three variables and without integer restrictions.
Relevant answer
Answer
Simple:
1) Open the Optimization Toolbox.
2) Select the GA solver (and enter your objective function and constraint file details).
3) Set the lower bounds to 0 and the upper bounds to 1 (one pair per variable).
4) Under integer variables, put the indices of all the variables whose values must be either 0 or 1.
5) Run.
You will get the result.
You may require basic knowledge of how to use the Optimization Toolbox.
Hope that you can do it now.
Sincerely,
Alam
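For readers without the MATLAB toolbox, the same idea can be sketched in Python: a binary GA in which infeasible individuals are penalized by their total constraint violation. The constraint matrix A, right-hand side b and cost vector c below are randomly generated placeholders (scaled down for the sketch) standing in for the real problem data.

```python
# Penalty-based binary GA for a 0-1 program: minimize c.x subject to A x <= b, x binary.
import numpy as np

rng = np.random.default_rng(0)
n_vars, n_cons = 300, 300                     # placeholder sizes; thousands work, just slower
A = rng.random((n_cons, n_vars))
b = 0.6 * A.sum(axis=1)
c = rng.random(n_vars)

def fitness(x):
    violation = np.maximum(A @ x - b, 0.0).sum()
    return c @ x + 1e3 * violation            # penalized objective (to be minimized)

pop = rng.integers(0, 2, size=(100, n_vars))  # random binary population
for _ in range(100):
    scores = np.array([fitness(x) for x in pop])
    parents = pop[np.argsort(scores)[:50]]                        # keep the better half
    pairs = parents[rng.integers(0, 50, size=(50, 2))]
    cuts = rng.integers(1, n_vars, size=50)
    children = np.where(np.arange(n_vars) < cuts[:, None],
                        pairs[:, 0], pairs[:, 1])                 # one-point crossover
    children ^= rng.random((50, n_vars)) < 0.01                   # bit-flip mutation
    pop = np.vstack([parents, children])

best = pop[np.argmin([fitness(x) for x in pop])]
print(fitness(best))
```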
  • asked a question related to Evolutionary Algorithms
Question
2 answers
Dear All,
I want to know how we can predict the resonant frequency of an antenna with a Support Vector Machine by giving the antenna parameters as input to the model.
Relevant answer
Answer
Thank you very much, Raael Sir. I am doing the same and trying to minimize the error.
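A minimal scikit-learn sketch of the idea discussed in this thread: fit a support vector regressor that maps antenna parameters (e.g. patch length/width, substrate height, permittivity) to resonant frequency, then query it for a new design. The training data here is a synthetic placeholder; in practice it would come from measurements or EM simulations.

```python
# SVR surrogate: antenna parameters in, resonant frequency out.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# synthetic training set: [L (mm), W (mm), h (mm), eps_r]
X = rng.uniform([10, 10, 0.5, 2.0], [50, 50, 3.0, 5.0], size=(200, 4))
# toy resonant frequency in GHz (placeholder for measured/simulated values)
y = 300.0 / (2.0 * X[:, 0] * np.sqrt(X[:, 3])) + rng.normal(0, 0.01, 200)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X, y)

new_design = np.array([[30.0, 25.0, 1.6, 4.4]])   # hypothetical antenna parameters
print(model.predict(new_design))                  # predicted resonant frequency
```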
  • asked a question related to Evolutionary Algorithms
Question
6 answers
I am currently working on a project which requires self-tuning of the BELBIC controller parameters, i.e. the PID gains and the learning rates of the Amygdala and Orbitofrontal Cortex. I need some suggestions as to how I could integrate an optimization algorithm, i.e. PSO, to tune these parameters for a 3rd-order non-linear system. I know how PSO works; the only thing is, I am having difficulties in linking BELBIC and PSO together.
Relevant answer