Science topic

Global Optimization - Science topic

Explore the latest questions and answers in Global Optimization, and find Global Optimization experts.
Questions related to Global Optimization
  • asked a question related to Global Optimization
Question
12 answers
I am studying global optimization algorithms now, and it seems that there are many different versions of each kind of algorithm. For particle swarm optimization (PSO) alone, there are many variants such as APSO, CPSO, PSOPC, FAPSO, ARPSO, and DGHPSOGS (see Han and Liu, 2014, Neurocomputing). In addition, the families of genetic algorithms (GA), differential evolution (DE), ant colony optimization (ACO), simulated annealing (SA), and so on can also be used to solve the problem. When we develop a new global algorithm, it is worthwhile to compare the performance of these different methods on some benchmark functions (like the Rosenbrock function and the Rastrigin function) in a fair way. For example, I would say the average number of cost-function evaluations, the success rate, and the average runtime are good measurements for comparison.
So my question is: has any source code (MATLAB) been developed for comparing different kinds of global optimization (GO) methods? The code should be easy to use, convenient for fairly comparing enough advanced GO methods, and should also provide enough benchmark functions (the gradient of each function would be a plus, so that we can also compare gradient-based global optimization algorithms).
I am looking forward to your answer and greatly appreciate your help.
Best wishes,
Bin She.
Relevant answer
Answer
Dear Bin She,
I'll share my thoughts (I'm not an expert); perhaps they will help you:
If you want to compare performance between optimizers, you shouldn't limit yourself to one programming language.
I think a good way forward is to use each author's code regardless of the language (as Luis Gerardo de la Fraga pointed out) and instead try an asymptotic analysis (Big O). As you said, each strategy is quite different, and when comparing by time it can happen that, even within the same language (MATLAB), one strategy is efficiently implemented (e.g. vectorized) while the others are implemented naively, which is also unfair.
Note that some contests (like the CEC contests, as Stephen Oladipo pointed out) use a time measurement regardless of the language, which is understandable in a contest environment, but I think that is not your case.
Fortunately, there are many frameworks that integrate population-based metaheuristics and Newton-based methods; perhaps this list will help you in your research (I've used some of them in my short testing experience):
1) PlatEMO (MATLAB): https://github.com/BIMK/PlatEMO — I've read several papers that use this framework, which is nice.
2) pymoo (Python): https://pymoo.org/
3) Shark (C++): I like this one because it is extremely fast and is ready to be tested in parallel as well.
4) MOEA Framework (Java): http://moeaframework.org/
5) You might also look at the code of some contests; check Ponnuthurai Nagaratnam Suganthan's GitHub: https://github.com/P-N-Suganthan
As Abubakar Bala said, you can also look for code on GitHub; the disadvantage is that the source code might be wrong.
I suggest you compare them in both quality and complexity.
Good luck! :)
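As a minimal sketch (in Python rather than MATLAB, with SciPy's built-in global optimizers standing in for the methods under test) of the kind of fair comparison the question asks about: the benchmark is wrapped so that cost-function evaluations are counted, and each method reports its best value together with the budget it actually used.

```python
import numpy as np
from scipy.optimize import differential_evolution, dual_annealing

# Rastrigin benchmark (global minimum 0 at the origin).
def rastrigin(x):
    x = np.asarray(x)
    return 10 * x.size + float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

# Wrapper that counts cost-function evaluations, one of the fairness
# measurements suggested in the question.
class Counted:
    def __init__(self, f):
        self.f, self.evals = f, 0
    def __call__(self, x):
        self.evals += 1
        return self.f(x)

bounds = [(-5.12, 5.12)] * 5
for solver in (differential_evolution, dual_annealing):
    f = Counted(rastrigin)
    res = solver(f, bounds, seed=1)
    print(solver.__name__, round(res.fun, 4), f.evals)
```

Counting evaluations in the wrapper, rather than trusting each solver's own bookkeeping, keeps the comparison on the same cost basis even when the solvers define "iteration" differently.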
  • asked a question related to Global Optimization
Question
4 answers
I have published the following 20 BREAKTHROUGH articles as Elsevier SSRN preprints:
1) AI++ : Artificial Intelligence Plus Plus
2) Artificial Excellence - A New Branch of Artificial Intelligence
and 18 others.
You can find the 20 articles at the Elsevier SSRN preprints link below:
Is publishing 20 BREAKTHROUGH articles as Elsevier SSRN preprints equivalent to publishing 20 BREAKTHROUGH articles in ELSEVIER journals?
Please kindly explain your answer.
Relevant answer
Answer
It may be a matter of opinion. Mine is that even though preprints can gain a lot of attention and citations, they are less valuable than journal or conference papers (especially in Computer Science)...
  • asked a question related to Global Optimization
Question
7 answers
I'm working on some optimal strategies for an environmental surveillance network. My solution is mostly based on metaheuristics. I need to know the advantages and disadvantages of heuristic and metaheuristic optimization.
Relevant answer
  • asked a question related to Global Optimization
Question
32 answers
Various metaheuristic optimization algorithms with different sources of inspiration have been proposed in recent decades. Unlike mathematical methods, metaheuristics do not require any gradient information and are not dependent on the starting point. Furthermore, they are suitable for complex, nonlinear, and non-convex search spaces, especially when near-global optimum solutions are sought with limited computational effort. However, some of these metaheuristics get trapped in a local optimum and are unable to escape. For this reason, numerous researchers focus on adding efficient mechanisms to enhance the performance of the standard versions of the metaheuristics. Some of them are addressed in the following references:
I will be grateful if anyone can help me find other efficient mechanisms.
Thanks in advance.
Relevant answer
Answer
I recommend you also check the CCSA algorithm, implemented with a conscious neighborhood-based approach, which is an effective mechanism for improving other metaheuristic algorithms as well. The CCSA and its full source code are available here:
  • asked a question related to Global Optimization
Question
6 answers
I am trying to solve a MINLP problem using genetic algorithm (from MATLAB's global optimization toolbox).
My number of decision variables is 168.
  • 96 of these decision variables are binary [0 1]
  • The remaining 72 variables can take the integer values of [1 2 3].
The problem is accurately formulated and there is no doubt about it.
Following are my doubts:
  1. What is an appropriate population size? I am trying 2*168, 3*168, and 4*168, but these seem large. Since all the decision variables are integers, what do you suggest for the population size?
  2. For different initial guesses, I get different optimized solutions. I am using 20, 50, and 60% of the population size as the initial population matrix. Of course, I know that we cannot guarantee a global optimum with a GA; still, what can you suggest for getting closer to the global optimum? Running the solver multiple times and keeping the lowest fval doesn't feel satisfactory.
  3. The mutants are taken as 10% of the total population. Can you suggest an appropriate fraction?
Finally, when the initial population matrix is not defined at all, the linear inequality constraints are not satisfied. With some initial population matrices they are, but I think the optima found are local, not global.
  1. Other than the genetic algorithm, are there other optimizers for such problems (I do not think surrogate optimization is a good idea) that are free and can handle such a large MINLP?
  2. Are there other toolboxes (apart from the Global Optimization Toolbox) that are free and can be used to handle a large MINLP?
Thanks
Relevant answer
Answer
Theodor: Perhaps the objective function has been decided by someone else - for example the company that you work for. Then you simply have to use what is given.
Personally I would not use a metaheuristic - as it is not even devised to find an optimum - something that everyone should know.
  • asked a question related to Global Optimization
Question
15 answers
I want to maximize my function which is Y = (a1 * log(x1) - x1 + b1) + (a2 * log(x2) - x2 + b2)
(a1,a2,b1,b2 constants)
subject to:
1) value <= x1 <= value
2) value <= x2 <= value
3) x1 + x2 <= 100k
I have tried a lot of techniques and methods but am not getting satisfactory results. Can someone guide me on how to solve this, and point me to some Python package references?
Relevant answer
Answer
I would use
with Mosek (or Ecos) as the underlying optimizer.
You will get the global optimum in that case.
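For completeness, a minimal sketch of why a convex solver settles this reliably: each term a*log(x) - x + b is concave, so maximizing the sum subject to linear constraints is a convex problem and any local maximum is global. The constants and bounds below are made up for illustration, and SciPy's SLSQP stands in for Mosek/Ecos.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical constants, for illustration only.
a1, b1 = 10.0, 2.0
a2, b2 = 5.0, 1.0

def neg_Y(x):
    # Negate, because SciPy minimizes and we want to maximize Y.
    x1, x2 = x
    return -((a1 * np.log(x1) - x1 + b1) + (a2 * np.log(x2) - x2 + b2))

res = minimize(
    neg_Y,
    x0=[1.0, 1.0],
    bounds=[(0.5, 100.0), (0.5, 100.0)],                 # value <= xi <= value
    constraints=[{"type": "ineq", "fun": lambda x: 100.0 - x[0] - x[1]}],
    method="SLSQP",
)
# Each term's unconstrained maximum is at x = a (set d/dx = a/x - 1 = 0),
# and concavity makes that the unique global optimum here.
print(res.x)  # close to [10, 5]
```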
  • asked a question related to Global Optimization
Question
12 answers
When dealing with Large-Scale Global Optimization (LSGO), which approach is better: decomposition-based algorithms or non-decomposition-based algorithms?
Relevant answer
  • asked a question related to Global Optimization
Question
8 answers
I am working on a global optimization algorithm and am thinking of using my own benchmark function. The function is provided in the attached images. Please take a look and share your thoughts about it.
Relevant answer
Answer
It looks very interesting, and I would include it as part of a suite (send me the citation once it is published). What I would suggest is to include it with the standard functions such as:
De Jong's function, the axis-parallel hyper-ellipsoid function, the rotated hyper-ellipsoid function, the moved-axis parallel hyper-ellipsoid function, Langermann's function,
Michalewicz's function, Branin's rcos function, Rosenbrock's valley, Rastrigin's function, Schwefel's function, Easom's function, the Goldstein-Price function, the six-hump camel back function, Griewank's function, the sum of different powers function, and Ackley's path function.
Each of those tests a particular anomaly, and I would suggest characterising your anomaly and comparing it with the rest of the functions to set it apart from the others.
Very nice
Regards
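For reference, here are a few of the listed functions in their usual forms (a Python sketch; the definitions follow the standard literature), each with a known global minimum against which a new benchmark can be characterised:

```python
import numpy as np

# Rastrigin: highly multimodal, regularly spaced local minima.
def rastrigin(x):
    x = np.asarray(x, dtype=float)
    return 10 * x.size + float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

# Ackley's path: nearly flat outer region with a deep central hole.
def ackley(x):
    x = np.asarray(x, dtype=float)
    n = x.size
    return float(-20 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
                 - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)

# Rosenbrock's valley: unimodal but with a long, narrow, curved valley.
def rosenbrock(x):
    x = np.asarray(x, dtype=float)
    return float(np.sum(100 * (x[1:] - x[:-1]**2)**2 + (1 - x[:-1])**2))

# Global minima: rastrigin and ackley at the origin, rosenbrock at (1,...,1).
print(rastrigin(np.zeros(5)), ackley(np.zeros(5)), rosenbrock(np.ones(5)))
```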
  • asked a question related to Global Optimization
Question
3 answers
When comparing two optimization methods on a function, should we use a two-sample t-test or a paired t-test? I would say the latter, since a paired t-test is used for correlated observations, and in our case we can consider the unit of observation to be the function and the two methods as two treatments. Am I right?
Thank you in advance
Relevant answer
Answer
A novel statistical approach for comparing meta-heuristic stochastic optimization algorithms according to the distribution of the solutions in the search space has been introduced, known as extended Deep Statistical Comparison. This approach is an extension of the recently proposed Deep Statistical Comparison approach used for comparing meta-heuristic stochastic optimization algorithms according to solution values. Its main contribution is that the algorithms are compared not only according to the obtained solution values, but also according to the distribution of the obtained solutions in the search space. The information it provides can additionally help to identify the exploitation and exploration powers of the compared algorithms. This is important when dealing with a multimodal search space, where there are many local optima with similar values. The benchmark results show that the proposed approach gives promising results and can be used for a statistical comparison of meta-heuristic stochastic optimization algorithms according to solution values and their distribution in the search space. For more information, you can refer to the following paper.
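A small synthetic illustration of the pairing argument in the question: when both methods are run on the same set of functions, the per-function difficulty is shared between them, and the paired test removes it. All numbers below are made up solely to show the effect.

```python
import numpy as np
from scipy import stats

# Synthetic scores: each of 10 benchmark functions has its own difficulty,
# shared by both methods; method_b is worse by about 1.0 on every function.
difficulty = np.array([3., 8., 1., 15., 6., 10., 2., 12., 7., 4.])
method_a = difficulty + np.array([0.1, -0.2, 0.0, 0.2, -0.1,
                                  0.1, 0.0, -0.1, 0.2, -0.2])
method_b = method_a + 1.0 + np.array([0.05, -0.05, 0.1, -0.1, 0.0,
                                      0.05, -0.05, 0.1, -0.1, 0.0])

paired = stats.ttest_rel(method_a, method_b)    # uses per-function differences
unpaired = stats.ttest_ind(method_a, method_b)  # ignores the pairing
# The shared difficulty swamps the unpaired test but cancels in the paired one.
print(paired.pvalue, unpaired.pvalue)
```

Here the paired test detects the consistent gap of about 1.0 easily, while the two-sample test does not, because the between-function variance dominates its pooled variance estimate.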
  • asked a question related to Global Optimization
Question
6 answers
Hello, I currently solve the optimization problem (please see the attached figure).
Basically, this problem is equivalent to finding the confidence interval for logistic regression. The objective function is linear (no second derivative), while the constraint is nonlinear. Specifically, I used n = 1, alpha = 0.05, and theta = logit of p, where p is in [0,1] (for details, see the binomial distribution). Thus, I have closed-form expressions for the gradient of the objective and the Jacobian of the constraints.
In R, I first tried the alabama::auglag function, which uses the augmented Lagrangian method with BFGS (as a default), and the nloptr::auglag function, which uses the augmented Lagrangian method with SLSQP (i.e. SLSQP as the local minimizer). Although they were able to find the (global) minimizer most of the time, sometimes they failed and produced a far-off solution. In the end, I obtained the best (most stable) results with the SLSQP method (nloptr::nloptr with algorithm=NLOPT_LD_SLSQP).
Now, my question is why SLSQP produced better results in this setting than the first two methods, and why the first two (augmented Lagrangian with BFGS, and with SLSQP as the local optimizer) did not perform well. Another question: considering my problem setting, what would be the best method to find the optimizer?
Any comments and suggestions would be much appreciated. Thanks.
Relevant answer
Answer
Suyoung Park My advice is to first look at several methodologies from mathematical optimisation, since they are supported by theory.
A metaheuristic, or for that matter any heuristic, is not designed to find an optimum but a "ballpark figure" - that is, perhaps in a nearby neighbourhood, but never very near an optimum. The best option is to look at papers or books that describe solid, convergent methods from the class of mathematical optimisation.
  • asked a question related to Global Optimization
Question
13 answers
I am looking for single-objective problems for which most evolutionary algorithms can find the global optimum. For most of the new benchmark problems (e.g. CEC2015), algorithms are not able to converge to the global optimum. For my experiments, I need simpler problems, such as Sphere, so that most algorithms can find the global optimum. Can anyone recommend some problems or a benchmark that I could use?
Thank you in advance!
Relevant answer
Following
  • asked a question related to Global Optimization
Question
4 answers
Deterministic global optimization relies on convex relaxations of non-convex problems. Certain nonlinearities are duly converted into linear underestimators to be solved by efficient MILP solvers (e.g. signomial functions / bilinear terms).
Most nonlinearities are approximated by piecewise linearizations. However, I am wondering whether these linearizations guarantee that the approximations are underestimators of the original nonconvex problem (i.e. for all x in dom f, f(x) >= u(x), where u is the underestimator),
because otherwise the underestimator may miss the global optimum during the branch-and-bound process.
Can the solver still converge even if the relaxation is not an underestimator?
Relevant answer
Answer
That's right. More precisely, if an inequality is of "<=" type, we use convex under-estimators for the left-hand side. If the inequality is of ">=" type, we use concave over-estimators (or you can just multiply the inequality by -1).
McCormick's original paper (from 1972) explains this very well. So does this paper:
Ryoo, H. S., & Sahinidis, N. V. (1996). A branch-and-reduce approach to global optimization. Journal of global optimization, 8(2), 107-138.
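As a quick numeric sanity check (in Python; the box bounds are arbitrary) of the under/over-estimator property for McCormick's envelopes of a bilinear term w = x*y:

```python
import numpy as np

# Bilinear term w = x*y on the box [xL, xU] x [yL, yU]; bounds are arbitrary.
xL, xU, yL, yU = -1.0, 2.0, 0.5, 3.0
rng = np.random.default_rng(0)
x = rng.uniform(xL, xU, 1000)
y = rng.uniform(yL, yU, 1000)
w = x * y

# McCormick's two linear under-estimators and two over-estimators.
under = np.maximum(xL * y + x * yL - xL * yL, xU * y + x * yU - xU * yU)
over = np.minimum(xU * y + x * yL - xU * yL, xL * y + x * yU - xL * yU)

# For example w - (xL*y + x*yL - xL*yL) = (x - xL)(y - yL) >= 0 on the box,
# and similarly for the other three, so the envelopes never cross w.
print(bool(np.all(under <= w + 1e-12)), bool(np.all(w <= over + 1e-12)))
```

This is exactly the property branch-and-bound relies on: the relaxation bounds the true function from the correct side everywhere in the current box, so tightening the box tightens the bound without ever cutting off the global optimum.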
  • asked a question related to Global Optimization
Question
7 answers
Hello everyone,
What are typical reasons for an unstable behaviour of the CMA-ES?
I want to optimize a problem with a solution vector of dimension 3300 using the CMA-ES to find a (preferably global) minimum. The problem can be solved using PSO, but applying the CMA-ES results in non-convergent behaviour. I have found several up-to-date articles on applying the CMA-ES to large-scale problems, with a focus on increasing speed and reducing memory usage. But I have found no information on the CMA-ES being unstable in large dimensions, which is my main concern.
The (repeated) behaviour is convergent (decreasing) in the beginning, after which the objective increases again and continues increasing. I use the Python code published by Nikolaus Hansen on GitHub (https://github.com/CMA-ES/pycma) with the suggested settings for lambda etc.
I found the advice to increase the population size lambda and thereby increase stability. Until then I had used the suggested lambda=4+int(3*np.log(n))=28, and I increased it to lambda=88, with the same outcome (initially convergent behaviour followed by divergent behaviour).
What are the typical reasons for an unstable behaviour of the CMA-ES?
Thank you!
Daniel Wrede
Relevant answer
Answer
In Chapters 2 and 7
  • asked a question related to Global Optimization
Question
12 answers
I'm using a genetic-algorithm-based global optimisation package, USPEX, to search for stable structures of non-stoichiometric transition metal compounds. The code has a limitation: we aren't able to specify the number of k-points explicitly; instead, only a value called 'kresolution' can be entered. The higher this value, the lower the density of the k-grid. The problem is that it uses the same value of kresolution for all of the structures it searches. I get somewhere around 600 k-points in the irreducible Brillouin zone for triclinic structures, which for the most part are just slightly distorted versions of other crystal systems (cubic, hexagonal, monoclinic, etc.), whereas I get anywhere from 30 to 70 k-points for structures other than triclinic, all for the same value of kresolution. How reliable are these calculations? Can you please shed some light on what we may call an optimum number of k-points in the irreducible Brillouin zone?
Relevant answer
Answer
Dear Hitanshu,
The reason gamma-centred even grids sometimes have better convergence is because they avoid the gamma point. The gamma point is an extremely high symmetry point, meaning that almost all states will have an energy maximum or minimum there -- in other words, it is highly atypical and hence a poor sampling point.
However if you have hexagonal symmetry then you *must* use the gamma-point, i.e. odd grids if you're using a gamma-centred grid. This is because the hexagonal symmetry operations applied to an even grid generate points outside the Brillouin zone (see the papers of Chadi and Cohen for details).
In practice, for most simulations it is usually sufficient to plot the k-point convergence for all the odd grids separately from all the even grids, and you'll see that both converge to the same value but at different rates.
The k-point convergence depends crucially on two things:
1) the size of the space being sampled (the Brillouin zone)
The larger the real-space cell, the smaller the Brillouin zone and so the fewer k-points you need. This is why a k-point spacing (or "resolution") is usually a better guide to the sampling quality than just the number of points. You need to be careful to check the units: some codes use units of 1/A and some use 2pi/A (and some use inverse Bohrs);
2) the smoothness of the Fermi surface
The Fermi-surface is smooth and well-behaved for an insulator, but for a metal it is discontinuous at 0 K. This means metals typically need much higher k-point sampling than insulators, not to get the band-energies accurately, but to determine the band occupancies. Most codes allow some kind of smoothing function to be applied to the Fermi level, usually a Gaussian, Fermi-Dirac distribution, or a Methfessel-Paxton scheme. This is essentially like heating the electrons up, so the Fermi-surface is smoothed out and isn't so ill-behaved, allowing calculations with fewer k-points.
Hope that helps,
Phil Hasnip
(CASTEP developer)
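As a toy illustration of the smearing point in Phil's answer (a sketch with arbitrary numbers, not CASTEP's actual implementation): at 0 K a metal's occupancy is a sharp step at the Fermi level, which is what makes k-point sums ill-behaved, and a Fermi-Dirac smearing function smooths it out.

```python
import numpy as np

# Band energies in eV relative to the Fermi level E_F = 0; values arbitrary.
energies = np.linspace(-1.0, 1.0, 2001)
sigma = 0.1                               # smearing width in eV

step = (energies < 0).astype(float)               # 0 K occupancies
fermi_dirac = 1.0 / (1.0 + np.exp(energies / sigma))

# The smeared occupancy is smooth across E_F but agrees with the step
# far from it, so integrals over the Brillouin zone converge much faster.
print(bool(np.abs(step - fermi_dirac).max() <= 0.5))
```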
  • asked a question related to Global Optimization
Question
2 answers
I have a few embarrassingly basic questions about USPEX and job submission. Here goes...
  1. What are USPEX and USPEX.m? What is the difference between them?
  2. Where and how does the code make use of MATLAB?
  3. qsub VS mpirun VS llsubmit = Me pulling my hair out and going crazy!
  4. Can someone be kind enough to guide me (step by step) in running EX 01 (Si). I am able to do VASP calculations on my own for which I use the following file (multiple.cmd) for submission via the terminal command: llsubmit multiple.cmd
    #!/bin/bash
    #@ output     = test.out
    #@ error      = test.err
    #@ job_type   = MPICH
    #@ node       = 1
    #@ tasks_per_node = 4
    #@ class      = Small
    #@ environment = COPY_ALL
    #@ queue
    mpirun -n 4 vasp > log
    Thanks in advance.
Relevant answer
Answer
Hi.,
Which version of USPEX are u using? I would say both USPEX (in new version) and USPEX.m (old version) are same. In brief, USPEX uses the MATLAB to generate the structures based powerful algorithms (Evolutionary, META, PSO, etc ..) and variation operators (see the manual for more details) and it uses the external (ab-initio) codes vasp (in your case) to do local optimization to find out the global minimum energy and meta stable structures. I am not aware of llsubmit. If your using latest version of USPEX then you can just check using the following command USPEX -r and it will perform first generation of structures using MATLAB and you can realize the same by looking at the OUTPUT.txt and log files based on your INPUT.txt. Hope this will be helpful.
  • asked a question related to Global Optimization
Question
14 answers
I am trying to optimize a function that is non-linear in its parameters, three in number. I am using genetic algorithms (GA) for this purpose. Thus, I have a function of time, as time-series data, that is non-linear in three parameters. I am using the ga() function of the GA package in the R language. However, as I see it, the initial values that I set for the parameters heavily influence the parameters computed by the ga() function. I also read the following article:
Scrucca, L. (2013). GA: a package for genetic algorithms in R. Journal of Statistical Software, 53(4), 1-37.
In Section 4.4, Curve Fitting, if I use the following initial values (min, max): a(1000, 10000), b(0, 10), and c(0.5, 10) instead of the ones used in the paper, that is, a(3000, 4000), b(0, 1), and c(2, 4), I get completely different results from those in the paper. I get a=2772, b=0.0235, c=4.07, as against a=3534.741, b=0.01575454, c=2.800797 in the paper.
My understanding is that global optimization techniques such as GA should be able to find the global optimum irrespective of the initial values, although it might take more or fewer iterations depending on them. Why is this not happening for my function and also for the example I cited?
Thanking you all in advance.
Relevant answer
Answer
There is no guarantee of obtaining a globally optimal solution with any metaheuristic, unless you run a near-infinite number of iterations. Also, because GAs are stochastic, the solution obtained may differ from run to run. If you can use another method to obtain a good approximation to the optimal solution, a GA can then improve it.
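A small illustration (in Python, with a 1-D Rastrigin function and gradient-based local search standing in for the GA example) of the start-dependence described above: each run settles into a basin determined by its starting point, and restarting from several points and keeping the best result is the usual, if unglamorous, remedy.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # 1-D Rastrigin: global minimum 0 at x = 0, local minima near each integer.
    x = float(np.asarray(x).ravel()[0])
    return x**2 + 10 - 10 * np.cos(2 * np.pi * x)

# Hypothetical starting points; each local run ends in some nearby basin.
starts = [-3.8, -2.1, 0.3, 4.6]
finishes = [minimize(f, x0=[s]).x[0] for s in starts]
best = min(finishes, key=f)
print([round(v, 3) for v in finishes], round(f(best), 3))
```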
  • asked a question related to Global Optimization
Question
1 answer
I have studied and understood the Moment-SOS hierarchy proposed by Lasserre, where a sequence of semidefinite programs is solved and a rank condition is invoked to check whether the global solution has been found. I was not able to find such conditions for its dual viewpoint (also known as Putinar's Positivstellensatz). Alternatively, is there a similar rank condition for Parrilo's sum-of-squares relaxation?
Relevant answer
Answer
Dear
I don't understand your question. Please explain it in more detail. I suggest that you read my research paper by following me, because my research work is similar to your problem. It may be helpful in solving your problem.
  • asked a question related to Global Optimization
Question
5 answers
(Code Optimization) of Compiler
Relevant answer
Answer
Thank you so much, Peter Breuer and George Filatov.
Your reply is very much appreciated!
  • asked a question related to Global Optimization
Question
12 answers
Is there a deterministic method to find the global optimum of a convex function subject to non-convex constraints?
Please download the attachment for an example.
It's only an example; I am looking for a general method to find the optimal point for any convex function with non-convex constraints.
Relevant answer
Answer
While it is correct in principle, you don't know a priori how many critical points there are or where they are. For non-convex problems with no known Lipschitz constants for the functions involved, there is NO methodology that can guarantee finding the optimum before the Sun swallows the Earth.
  • asked a question related to Global Optimization
Question
13 answers
I am looking for the codes for all 20 large-scale global optimization problems. They should not be confused with the Constrained Real-Parameter Optimization problems from the same year. I looked for the codes on the provided link but the folder is empty.
Relevant answer
Answer
Very good. Now all researchers can obtain those problems.
  • asked a question related to Global Optimization
Question
5 answers
Are there versions of CMA-ES specifically designed for high dimensional search spaces? Are there any implementations available (preferable in MATLAB)?
Relevant answer
Answer
you can check this:
Loshchilov, I. (2014, July). A computationally efficient limited memory CMA-ES for large scale optimization. In Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation (pp. 397-404). ACM.
  • asked a question related to Global Optimization
Question
2 answers
I am using NSGA-II to carry out a study, and for this I intend to stabilize the algorithm. When I test the algorithm with the ZDT-4 test function, I am not getting satisfactory results. I am using "https://in.mathworks.com/matlabcentral/fileexchange/10429-nsga-ii--a-multi-objective-optimization-algorithm" as the source code, and I have made some modifications to it according to my needs. ZDT-4 is converging to a local Pareto-optimal front. How can I get the global Pareto-optimal front? All other functions are working fine.
Relevant answer
Answer
Dear Abhinav Kumar Sharma,
The code of NSGA-II is quite complicated, and in each iteration you should verify the correctness of the new matrices. There may be a defect in this respect.
  • asked a question related to Global Optimization
Question
11 answers
Hello! For global optimization I often use random-search-oriented methods, such as genetic algorithms, the cross-entropy algorithm, etc. I always see that accuracy and speed of convergence depend strongly on the structure and parameters of the goal function. For example, for MLE optimization we can write the Weibull CDF as
{1-exp(-a*t^b)} or {1-exp(-(t/w)^b)}. Have you perhaps read any generic recommendations on how to select the structure and parameter set of the goal function?
Relevant answer
Answer
So, you are asking which parametrization of the otherwise well-defined goal function (better known under the name of objective function) is better in terms of how quickly you will arrive at the final solution. In the example shown, the choice is between the pair (a,b) and (w,b). They are mathematically equivalent, but in computational practice one of them may be better than the other. Being a physicist, I prefer the second form, in which it is clear that b is dimensionless while w is expressed in the same units as t. The other reason for my preference is that it is probably easier to estimate initial values for w and b than for the (a,b) pair. Needless to say, a good initial guess never harms.
In a more general setting: the objective function should be close to a quadratic form of the unknown parameters, at least when considered near the optimum (or equivalently: its gradient should be close to a multilinear expression in the unknowns). This, however, is easy to check only when you already know the solution.
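A quick check of the claim that the two parametrizations are mathematically equivalent: 1-exp(-a*t^b) equals 1-exp(-(t/w)^b) exactly when a = w^(-b). The values of w and b below are arbitrary.

```python
import numpy as np

# The (w, b) form is often better scaled for the optimizer: w carries the
# units of t and b is dimensionless, as noted above.
t = np.linspace(0.1, 10, 50)
w, b = 2.5, 1.7
a = w ** (-b)                      # the mapping between the two forms

cdf_ab = 1 - np.exp(-a * t**b)
cdf_wb = 1 - np.exp(-(t / w) ** b)
print(bool(np.allclose(cdf_ab, cdf_wb)))  # True
```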
  • asked a question related to Global Optimization
Question
3 answers
Can SQP solve a 6-variable nonlinear global optimization problem? The objective function has 6 variables, including forces and angles. I have tried the toolbox in MATLAB but cannot get the optimal angle solution.
Relevant answer
Answer
Please explain your problem in more detail.
  • asked a question related to Global Optimization
Question
9 answers
I want to find an algorithm other than PSO to solve a 6-variable real-time global optimization problem. Can anyone provide some ideas?
Thank you, everyone!
The objective function is:
fun = (k1*(v1-((uu(1)*cos(uu(5)) - FL(1)*sin(uu(5)) +...
uu(2)*cos(uu(6)) - FL(2)*sin(uu(6)) +...
uu(3)*cos(uu7) - FL(3)*sin(uu7) +...
uu(4)*cos(uu8) - FL(4)*sin(uu8)) /M))^2 + ...
k2*(v2-((uu(1)*sin(uu(5)) + FL(1)*cos(uu(5)) +...
uu(2)*sin(uu(6)) + FL(2)*cos(uu(6)) +...
uu(3)*sin(uu7) + FL(3)*cos(uu7) +...
uu(4)*sin(uu8) + FL(4)*cos(uu8)) /M))^2 + ...
k3*(v3-(((uu(2)*cos(uu(6)) - FL(2)*sin(uu(6)) - uu(1)*cos(uu(5)) + FL(1)*sin(uu(5)) + ...
uu(3)*cos(uu7) - FL(3)*sin(uu7) - uu(4)*cos(uu8) + FL(4)*sin(uu8)) *Lh + ...
(uu(1)*sin(uu(5)) + FL(1)*cos(uu(5)) + uu(2)*sin(uu(6)) + FL(2)*cos(uu(6))) *Lf +...
(-uu(3)*sin(uu7) - FL(3)*cos(uu7) - uu(4)*sin(uu8) - FL(4)*cos(uu8)) *Lr) /J))^2);
in which, uu(i) is the variables while others are parameters.
I want to calculate the optimal solution of the function above.
I have used fmincon() in MATLAB to solve this problem, but the result for the angle variable uu(5) is always improper.
The range of uu(i), i=1..4, is from -500 N to 500 N, while the range of uu(5) is from -40 to 40 degrees. M=200, J=100, v is a random number in the range [-1, 1], and FL is also random, in [0, 200].
Relevant answer
As people mentioned above, PSO is probably not the way to go, because it is slow compared to deterministic methods and cannot guarantee global optimality. Do you know whether your problem is convex or nonconvex? Do you know of any particular structure it may have?
How big is your domain, and what accuracy do you want? As an example, if you have a domain from 0-10 and you are only interested in an accuracy of +-0.5, you can very easily find an answer. However, if your variables can take very broad values and you need very high accuracy, then you might be in trouble.
  • asked a question related to Global Optimization
Question
4 answers
In the first paper I linked, it states that the activation function for the XOR problem is a step function, and in the second paper I linked, the activation function is a logistic function. So I was wondering: is the activation function problem-specific? I am asking because I am trying to figure out why my implementation fails to learn the solution to XOR. The second paper states "the search range is set from -2 to +2", and I haven't figured out the search range of what - the weights?
Thank you
Relevant answer
Answer
No, the activation function isn't problem-specific: it is possible to use either step functions or logistic functions for the network; it's the architecture of the network that affects its ability to represent a rule. Of course, the tuning of weights necessary to realize a given rule depends on the activation function. What is sought is a point in the space of weights, so the interval refers to them, of course.
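To make the architecture point concrete, here is a 2-2-1 network computing XOR with hand-picked weights and a step activation (a sketch; a sufficiently steep logistic with the same architecture and suitably scaled weights works too):

```python
import numpy as np

def step(z):
    return (z >= 0).astype(float)

def xor_net(x, act):
    W1 = np.array([[1.0, 1.0], [1.0, 1.0]])  # both hidden units see both inputs
    b1 = np.array([-0.5, -1.5])              # thresholds: unit 1 = OR, unit 2 = AND
    W2 = np.array([1.0, -2.0])               # output = OR minus twice AND
    b2 = -0.5
    h = act(x @ W1 + b1)
    return act(h @ W2 + b2)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
print(xor_net(X, step))  # [0. 1. 1. 0.]
```

The hidden layer is essential: with no hidden units, no choice of activation function makes a single unit represent XOR, which is exactly the sense in which the architecture, not the activation, limits what rules are representable.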
  • asked a question related to Global Optimization
Question
13 answers
Hi There,
What is the performance of NSGA-II (the non-dominated sorting genetic algorithm)? Since it is a heuristic algorithm, it cannot guarantee global optimality. What, then, is the difference between the solutions obtained by NSGA-II and the globally optimal solutions?
Thanks
Relevant answer
Answer
Dear Arash Olia,
NSGA-II offers a range of solutions, while single-objective heuristic and metaheuristic algorithms search for only a single solution. NSGA-II tries to find a good trade-off between several conflicting objectives. More precisely, it provides a set of solutions, and the user can choose one of them according to some criteria (it is also possible to use computational methods to do so). The performance of a multi-objective algorithm such as NSGA-II strongly depends on its diversification components.
  • asked a question related to Global Optimization
Question
4 answers
Dear researchers,
I read an abstract here (http://www.dissertationtopic.net/doc/176801), and I do not really understand what the 'lookout algorithm' is, nor the overlook points.
Thank you.
Relevant answer
Answer
Dear Waqas;
I have seen it in other papers. It is related to global optimization. The dissertation is not available freely; I may have to buy it :)
  • asked a question related to Global Optimization
Question
7 answers
Hi,
What is a fair and relevant stopping criterion for comparing the performance of different algorithms such as PSO, GA, CMA-ES, DE, etc.? The optimization problem is a large-scale one where we want to minimize a nonlinear least-squares objective function.
1) Max number of iterations/generations (maybe not suitable for a large-scale problem?)
2) Max number of function evaluations
3) Max CPU time (how do we know whether it will converge?)
4) Minimum value of the cost function (this setting may lead to very high CPU times if an algorithm does not perform well)
Thanks in advance
Relevant answer
Answer
If each function evaluation is costly (i.e., evaluating the function is the bottleneck), you should use a maximum number of evaluations as the stopping criterion. Otherwise, and in case you do not have any lower bound on the optimal solution value with which to compute some type of gap, CPU time can be a good choice. If you want to compare the results of different algorithms on a set of instances, you can then use data/performance profiles (see http://www.mcs.anl.gov/~more/dfo/).
Regards,
Alberto 
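An evaluation budget is easy to enforce uniformly across algorithms with a counting wrapper around the objective; this sketch is a generic illustration (class and exception names are made up, not from any library):

```python
class BudgetExceeded(Exception):
    pass

class CountedObjective:
    """Wrap an objective so every algorithm gets the same evaluation budget."""
    def __init__(self, f, max_evals):
        self.f = f
        self.max_evals = max_evals
        self.evals = 0

    def __call__(self, x):
        if self.evals >= self.max_evals:
            raise BudgetExceeded(f"budget of {self.max_evals} evaluations spent")
        self.evals += 1
        return self.f(x)

obj = CountedObjective(lambda x: (x - 3) ** 2, max_evals=100)
# ... pass `obj` to PSO, GA, DE, etc. and catch BudgetExceeded ...
```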
  • asked a question related to Global Optimization
Question
4 answers
Hi Researchers/Experts,
I am starting a journey towards understanding and working out problems in optimization (local and global optimization). Should I start from operations research (OR) basics upwards, or from calculus (derivatives and so on), or both? You can also point me to materials/papers that can help me realize this objective.
Thank you 
Relevant answer
Answer
That depends on your stamina, I would say, and mathematics background. There are several good monographs/books on the subject to start with, in order to get a feel for the difficulty of the problem, and the importance of actually having to work rather hard to find a globally optimal solution - and to KNOW that you have found one.
Here are some suggested books on the subject:
1. First, a site with several example books and links to other sites.
2. Another site with example books - in fact, a whole book series!
3. One of the better overviews of the problem and how to attack it: http://www.amazon.com/Global-Optimization-Algorithms-Applications-Mos-Siam/dp/1611972663
It is a fascinating problem, and one that probably never will be solved completely.
  • asked a question related to Global Optimization
Question
4 answers
Human eyes need very little time to tell apart objects in terms of size, height, etc., but a machine would have to do pairwise comparisons, which wastes time. Scheduling VMs onto appropriate PMs among a number of PMs in a cloud center is actually a global optimization problem; if we can get around it by emulating human vision, it could be a great success.
What do I need to know or read (books or anything else) to embark on this research?
  • asked a question related to Global Optimization
Question
3 answers
TOOL used Cadence virtuoso ADE gxl
analog circuit sizing using optimization
 
Thanks for your reply, but when I run the optimization over these values in 65 nm technology:
Wn -from 0.4u to 10u
Ln -from 60n to 300n
Wp -from 0.4u to 10u
Lp -from 60n to 300n
and
Vg1=300mv to 700mv
Vg2=600mv to 900mv
it took more than one day to optimize and in the end did not give the optimal solution. My workstation specs are an i7 with 8 GB RAM, so please guide me on making this faster - as we move to larger circuits it will be even more time-consuming.
 
Relevant answer
Answer
The time depends mainly on the type of minimum sought and on the platform you run your program on: finding a local minimum takes a few minutes, while finding a global one takes much longer.
  • asked a question related to Global Optimization
Question
2 answers
In the Real-Parameter Black-Box Optimization Benchmark (BBOB) multi-modal functions are divided into two sets: “Multi-modal functions with adequate global structure” and “Multi-modal functions with weak global structure”. What is the formal definition of “adequate” and “weak” global structure?
Relevant answer
Answer
The definition of a local solution matters in continuous minimization problems. There can exist points that are simultaneously local minima and local maxima, in the interior of a flat region of the objective function. For example, the point 0 of the convex function max{x² - 1, 0} is both a local minimum and a local maximum. This duplicated classification of a single point is cleared up by our definition of the local minimal value set. For an objective function, if the set of all local minimal values has exactly one connected component, the function is weakly unimodal; otherwise it is multimodal.
Our original paper was written in Japanese; for a translated version see: H. Kanemitsu, M. Miyakosi and M. Shimbo, "Properties of Unimodal and Multimodal Functions Defined by Local Minimal Values Set", Electronics and Communications (Part III: Fundamental Electronic Science), Vol. 81, No. 1, pp. 42-51, Jan. 1998.
Moreover, the following article is available on ResearchGate: Hideo Kanemitsu et al., "Definitions and Properties of (Local) Minima and Multimodal Functions using Level Set for Continuous Optimization Problems", Proc. of the 2013 International Symposium on Nonlinear Theory and its Applications (NOLTA 2013), pp. 94-97, Sep. 2013, Santa Fe, U.S.A.
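The flat-region example can be checked numerically: for f(x) = max{x² - 1, 0}, every point of [-1, 1] attains the minimal value 0, so the minimizers form a single flat interval and the point 0 is a local minimum and a local maximum at once (a minimal sketch, not from the cited papers):

```python
def f(x):
    """The convex example from the answer: flat at 0 on [-1, 1]."""
    return max(x * x - 1.0, 0.0)

# f is identically 0 on a grid over [-1, 1] ...
flat = all(f(-1.0 + 2.0 * k / 100) == 0.0 for k in range(101))
# ... and strictly positive outside that interval.
outside = f(1.1) > 0 and f(-1.1) > 0
print(flat, outside)
```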
  • asked a question related to Global Optimization
Question
10 answers
Problem statement: optimize the scheduling of a device (an air conditioner) in an RTP price environment (where prices change every hour) so that the cost over a day is minimized. This can be called a case of individual optimization. Likewise, I have to optimize 3 ACs separately, where A = cost1 + cost2 + cost3.
Global optimization instead considers the three houses (ACs) together, minimizing their total cost. This means I now have to solve for a consumption matrix consisting of 3 AC consumption vectors: B = cost_total (not the summation of the 3 individual costs!). Do you think the cost savings from global optimization could in any sense be less than from individual optimization (B < A)? If so, why?
Explanations needed, please :)
Relevant answer
Answer
Dependence in the sense that q_i(t) is summed over the 3 ACs and then minimized - so don't you think it could somehow balance out the consumption of the 3 ACs at every instant t? Their temperature settings are purely independent of each other. Say the settings are 1: 20 °C, 2: 22 °C, 3: 25 °C, and the outdoor temperature is the same for all. If individual optimization gives q_1(56) = 3.5 kW, q_2(56) = 2.5 kW, q_3(56) = 1.5 kW, could it not become q_1(56) = 3.6 kW, q_2(56) = 2.3 kW, q_3(56) = 1.3 kW under global optimization (56 being any time slot)? Is there any possibility? :)
  • asked a question related to Global Optimization
Question
7 answers
I know that Branch and Bound can solve linear and integer optimization problems.
Relevant answer
Answer
Integer programming with quadratic constraints is "worse" than NP-hard.  In 1973, Robert G. Jeroslow proved that it is "undecidable", which means that there cannot exist any algorithm that is guaranteed to terminate in finite time.
Nevertheless, in practice, most problems have bounded variables.  In that case, the problem becomes "merely" NP-hard.
You might find this survey paper useful:
S. Burer & A.N. Letchford (2012) Non-convex mixed-integer nonlinear programming: a survey. Surveys in Oper. Res. and Mgmt. Sci., 17(2), 97-106.
  • asked a question related to Global Optimization
Question
3 answers
I need test instances for a snack pack problem to test a simulated annealing algorithm.
Relevant answer
Answer
If you mean the knapsack problem, you should check David Pisinger's work using exact algorithms. Generators for test instances are available at http://www.diku.dk/~pisinger/codes.html
  • asked a question related to Global Optimization
Question
12 answers
Is there any simple way of obtaining the global optimum when the optimization problem has nonlinear conditions? (The Matlab toolbox is useful; however, it has some disadvantages.)
Relevant answer
Answer
If you want to compute a proven globally optimal solution to an optimization problem with nonlinear conditions, then it heavily depends on the type of nonlinearity: in case you have continuous variables only (that is, no integrality conditions) and convex constraints, then any local optimum is already global. In case you have integer variables and/or non-convex constraints, you have to use branch-and-bound methods. Here the original problem is split into a sequence of subproblems, each being continuous and convex (maybe even linear).
So much for the theory. Now you're asking for "simple ways" in terms of tools. If you have some money, you can buy a commercial license of the solver "Baron". A cheaper variant is the solver "SCIP" from scip.zib.de, which is free for academic purposes, and open-source. It comes with a built-in modeling language "Zimpl", which is rather easy to use, so you can type in your problems in this language and solve them right away.
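As a toy illustration of the branch-and-bound idea (a sketch only - not how Baron or SCIP work internally): split the domain into boxes, compute a cheap lower bound per box - here a Lipschitz bound, with the constant L assumed to be known - and discard boxes whose bound cannot beat the best point found so far.

```python
import heapq

def minimize_bb(f, a, b, L, tol=1e-3):
    """Toy branch-and-bound for a 1-D function with Lipschitz constant L.
    Lower bound on [lo, hi]: f(midpoint) - L * (hi - lo) / 2."""
    best_x, best_f = a, f(a)
    heap = [(f((a + b) / 2) - L * (b - a) / 2, a, b)]
    while heap:
        lb, lo, hi = heapq.heappop(heap)
        if lb > best_f - tol:            # cannot improve enough: prune
            continue
        mid = (lo + hi) / 2
        if f(mid) < best_f:              # update incumbent
            best_x, best_f = mid, f(mid)
        for l, h in ((lo, mid), (mid, hi)):   # branch into two halves
            m = (l + h) / 2
            bound = f(m) - L * (h - l) / 2
            if bound < best_f - tol:
                heapq.heappush(heap, (bound, l, h))
    return best_x, best_f

# Non-convex example: x^4 - 3x^2 on [-2, 2]; minimum -2.25 at x = ±sqrt(1.5).
x, fx = minimize_bb(lambda x: x**4 - 3 * x**2, -2.0, 2.0, L=44.0)
print(x, fx)
```

On termination the incumbent value is within `tol` of the true global minimum, because every pruned box had a lower bound at least `best_f - tol`.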
  • asked a question related to Global Optimization
Question
5 answers
I have read the papers of W.B. Dolan and G. Athier; other ideas may be useful. Thanks a lot.
Relevant answer
Answer
What's your current programming skill in C?
  • asked a question related to Global Optimization
Question
14 answers
Most search techniques for continuous search spaces use some type of gradient -- one solution is better than another, so solutions move in that direction.  A specific example is the attraction vector in Particle Swarm Optimization. 
Further, Particle Swarm Optimization begins with a cornfield vector/sphere function (see Section 3.2 in [J. Kennedy and R.C. Eberhart. (1995) “Particle swarm optimization,” IEEE ICNN, pp 1942-1948]), Differential Evolution builds its foundation from a simple unimodal cost function (see Figure 1 in [R. Storn and K. Price. (1997) “Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces.” J. Global Optimization, 11:341-359]), and Evolution Strategies (ES) also explains its functionality starting with a mutation cloud shown against a unimodal search space (see Figure 2(b) in [H.-G. Beyer and H.-P. Schwefel. (2002) “Evolution strategies: a comprehensive introduction.” Natural Computing, 1:3-52]).
Are there any examples of metaheuristics that are explicitly and initially designed for multi-modal search spaces?
Relevant answer
Answer
How about Estimation of Distribution Algorithms? The idea of learning a probability distribution that generates optimal solutions is general enough to capture the multimodal case, although arguably multimodality isn't an explicit aim of many of these approaches. However, there are certainly a number of EDAs that attempt to solve multimodal problems, e.g.:
Pena et al, Globally Multimodal Problem Optimization Via an Estimation of Distribution Algorithm Based on Unsupervised Learning of Bayesian Networks
Yang et al, Globally Multimodal Problem Optimization Via an Estimation of Distribution Algorithm Based on Unsupervised Learning of Bayesian Networks
Also there seem to be quite a few swarm algorithms that are well suited to multimodal problems by merit of limiting communication between parts of the swarm - either explicitly via multiple swarms or implicitly via interactions that decay rapidly with distance (e.g. firefly, gravitational search). However, it's often unclear whether this is an explicit aim of the algorithm designer, or just a consequence of mimicking real world processes. You could also argue that these are derivatives of PSO, so 'initially designed' may also be up for debate.
  • asked a question related to Global Optimization
Question
2 answers
Hello,
For those of you who are familiar with Pincus' theorem, which is about finding the global maximum of ANY objective function:
Can anyone kindly point me to another, similar theorem for finding the global maximum of any objective function (no constraints here)?
Relevant answer
Answer
I hope this can be helpful: this paper deals with a similar problem and indeed cites Pincus' theorem and compares its main results with it:
James E. Falk, "Conditions for Global Optimality in Nonlinear Programming", Operations Research, Vol. 21, No. 1, Mathematical Programming and Its Applications (Jan.-Feb. 1973), pp. 337-340.
  • asked a question related to Global Optimization
Question
1 answer
Is it possible to perform global optimization (GLOBOP) with effective fragment potential (EFP) in the latest version of GAMESS?
If anyone has tried, could you please let me see some example inputs?
Relevant answer
Answer
Dear friend,
Yes, it is possible to use it.
Start with smaller input values and then gradually increase them.
All the best.
Best regards
Dr.Indrajit Mandal
  • asked a question related to Global Optimization
Question
6 answers
Hello,
I would like to know how I can find the number of variables (especially the integer ones) in GAMS (General Algebraic Modeling System) code.
Does the GAMS platform have any option to show the number of variables?
Any help would be appreciated.
Regards,
Morteza
Relevant answer
Answer
Hi,
If you are working with the GAMS IDE (the Integrated Development Environment of GAMS that runs on Windows), then you can find the number of variables (and their types) in the log file - the window that pops up when you run your code. Look for the number of columns (i.e., the variables) and the number of integer columns (i.e., the number of integer variables).
That is the easiest way for me to identify the number of variables, but maybe someone else has a better way to do it.
Regards,
Laura
  • asked a question related to Global Optimization
Question
20 answers
Heuristics, hyper-heuristics and meta-heuristics are most often used with machine learning techniques and global optimization methods. First, I want to know the clear difference between these terms. Second, I want to know how one can recognize which of the three categories a particular machine learning or optimization technique falls under. E.g., as I studied in the literature, genetic algorithms fall under meta-heuristics, and swarm intelligence uses meta-heuristics and hyper-heuristics too, etc.
But at the same time I read that genetic algorithms are a machine learning method, and that machine learning methods use hyper-heuristics.
Thank you in advance.  
Relevant answer
Answer
When the problem to be solved is intractable (cannot be solved to optimality in polynomial time, or takes a long time to solve), we start thinking about alternative solutions, and mostly we relax our requirements (i.e., accept near-optimal solutions).
Here many approaches enter the scene to attack such hard problems, among them the ones you mentioned: heuristics, meta-heuristics and hyper-heuristics.
A heuristic simply uses domain knowledge (knowledge about the problem) to speed up the solution. For instance, if you are solving the traveling salesman problem (TSP) and you are at city A, your heuristic could be "next, take the closest city to A by aerial distance". This usually provides a very quick solution (very fast convergence); however, it can easily get stuck at a local optimum.
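The nearest-neighbour rule just described can be sketched in a few lines (a generic illustration with made-up coordinates, using plain Euclidean rather than aerial distance):

```python
import math

def nearest_neighbor_tour(cities, start=0):
    """Greedy TSP heuristic: repeatedly visit the closest unvisited city."""
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        last = cities[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour  # fast, but can be far from the optimal tour

cities = [(0, 0), (1, 0), (5, 5), (1, 1), (6, 5)]
print(nearest_neighbor_tour(cities))
```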
Now meta-heuristics. As already mentioned, heuristics (1) use domain-specific knowledge and therefore cannot easily be reused for other problems (they are designed for a specific problem), and (2) can easily get stuck at local optima. Meta-heuristics address both of these drawbacks. Genetic algorithms (GAs) are a kind of meta-heuristic that can be used to solve the TSP, scheduling problems, and many other problems. This is because meta-heuristics do not require problem-domain knowledge (they are like general heuristics that fit many problems). GAs, for instance, use mutation, which helps in escaping from local optima. Meta-heuristics can also use heuristics inside them, which usually helps, but this is not a must. Compared to heuristics, meta-heuristics converge more slowly.
So meta-heuristics fit many problems and are therefore general. Example meta-heuristics are GAs, ant colony optimization and simulated annealing.
Research and experiments indicate that some meta-heuristics perform better on some types of problems. In addition, for the same problem, different meta-heuristics perform better on different instances - and even at different stages of the solution process for the same instance. So if we are solving an instance of the TSP, it could turn out that the best solution results from running first a GA, then simulated annealing, then ant colony. Here come hyper-heuristics: they decide what sequence of (meta-)heuristics to use to solve the problem at hand. They can also be used to learn which meta-heuristic fits which problem better.
For online queries, I would use heuristics.
Hope that helps.
  • asked a question related to Global Optimization
Question
11 answers
I am working on image processing research that uses global optimization techniques. We used Matlab's genetic algorithm and simulated annealing, and we were planning to use particle swarm as well. Although the Matlab documentation refers to particle swarm, I did not find it in Matlab 13 or 14.
Relevant answer
Answer
Dear Ahmed Hassan Yousef 
You can send an email to 
Xin-She Yang
asking for the code of his book about Nature Inspired Algorithms.
Not only will you immediately receive an implementation of particle swarm in Matlab, but you will also get code for efficient soft computing techniques and sample chapters of the book describing these wonderful techniques.
Good Luck
  • asked a question related to Global Optimization
Question
6 answers
The gamma selection method proposed in this paper looks promising, but there is little information about the selection of nu.
As I understand it, nu and gamma are closely interdependent, so how can I know that the gamma selected by the method in this paper is globally optimal? I guess it is just a locally optimal gamma for one fixed nu.
I feel very confused about the tuning of nu and gamma. Could anyone give me a hint?
Relevant answer
Answer
You can try to use tuning packages like Optunity or HyperOpt to automate the tuning process. Such packages use general-purpose black-box function optimizers to efficiently find a good set of hyperparameters (much more efficient than grid search, manual search or random search).
Optunity has an example of tuning an SVM in Python using scikit-learn in the link I provided (lines 30-36, a C-SVC is used; nu-SVM would be a straightforward modification). 
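The black-box tuning these packages perform can be illustrated with plain random search over (nu, gamma). Here the cross-validation score is replaced by a stand-in function, since the point is only the joint search loop (the names, ranges and toy score are illustrative, not Optunity's API):

```python
import math
import random

def cv_score(nu, gamma):
    """Stand-in for a real cross-validation score of a nu-SVM;
    this toy score peaks at nu = 0.3, gamma = 1.0."""
    return -((nu - 0.3) ** 2 + (math.log10(gamma)) ** 2)

def random_search(n_trials, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        nu = rng.uniform(0.05, 0.95)          # nu must lie in (0, 1]
        gamma = 10 ** rng.uniform(-3, 3)      # gamma searched on a log scale
        score = cv_score(nu, gamma)
        if best is None or score > best[0]:
            best = (score, nu, gamma)
    return best

score, nu, gamma = random_search(2000)
print(nu, gamma)
```

Because nu and gamma are searched jointly, the result is not conditioned on one fixed nu, which addresses the worry in the question.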
  • asked a question related to Global Optimization
Question
61 answers
Hello,
As far as I know, meta-heuristic algorithms such as GA, PSO, GSA, etc. generally find the optimal solution of 'unconstrained' optimization problems. If I have some constraints (equality and/or inequality equations), how can I incorporate and model them in these kinds of algorithms?
I would greatly appreciate it if you kindly help me in this matter.
Regards,
Morteza Shabanzadeh
Relevant answer
Answer
Hi Morteza, I am new to participating in the Q&A. I hope I have understood your question appropriately - you are asking about "good" methods of constraint handling? If I have got the gist of your question, then this is perfect, because I am very interested in this topic myself.
I put "good" in quotation marks above because I am convinced that what is judged to be good is problem-dependent, unfortunately. To me, this is just a manifestation of Wolpert and Macready's No Free Lunch theorem.
For a recent overview see: Mallipeddi et al. (2010): Ensemble of Constraint Handling Techniques, IEEE Transactions on Evolutionary Computation, 14(4).
There are of course a number of other reviews of constrained optimisation techniques available in the literature. Prof. Carlos Coello Coello's repository on the web at http://www.cs.cinvestav.mx/~constraint/index.html is a useful starting point.
Recognising that the definition of a "good" method might be problem-specific, I give a brief summary of what I have found to be efficient in my problem setting. Note that these algorithms can be used with population-based stochastic heuristics (like GA/DE/EP/ES/PSO); I am not so sure about using them with simulated annealing (SA), as canonical SA operates with a single trial point rather than a population.
The epsilon constraint-handling technique as proposed by T. Takahama and S. Sakai in their paper "Constrained Optimization by the ε Constrained Differential Evolution with an Archive and Gradient-Based Mutation", pp. 1680-1688 (winner of the CEC 2010 competition on constrained single-level optimisation).
The stochastic ranking method of Runarsson and Yao, with code in MATLAB and C: https://notendur.hi.is/tpr/index.php?page=software/sres/sres. It is easy to get the wrong idea that it only works with evolutionary programming (Prof. Runarsson and Yao's EA of choice), but I have managed to use it as a constraint-handling method with differential evolution.
Interestingly, one of the simplest and most effective methods I have tried is the one given in Kim et al. (2010): T.-H. Kim, I. Maruta and T. Sugie, "A simple and efficient constrained particle swarm optimization and its application to engineering design problems", Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, Vol. 224, No. C2, pp. 389-400, 2010.
See particularly Equation 2 in that paper. It should be easy to program these in any language of your choice.
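A more basic (and older) alternative to the methods above is a static penalty: add a term proportional to the squared constraint violation to the objective, then run any unconstrained metaheuristic on the penalized function. A minimal sketch, with seeded random search standing in for the metaheuristic (the toy problem and penalty weight are illustrative):

```python
import random

def penalized(f, constraints, weight):
    """Turn a constrained problem (each g_i(x) <= 0) into an unconstrained one."""
    def fp(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) + weight * violation
    return fp

# Toy problem: minimize x^2 subject to x >= 1 (i.e., 1 - x <= 0); optimum x = 1.
fp = penalized(lambda x: x * x, [lambda x: 1.0 - x], weight=1000.0)

rng = random.Random(42)
best_x = min((rng.uniform(0.0, 2.0) for _ in range(20000)), key=fp)
print(best_x)
```

The known drawback, which motivates the epsilon and stochastic-ranking methods above, is that the result is sensitive to the penalty weight: too small and infeasible points win, too large and the search landscape becomes badly conditioned.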
  • asked a question related to Global Optimization
Question
6 answers
How does one compare two methods for solving a real-world optimization problem? Is the canonical evaluation on instances of this problem sufficient? Is it more accurate to test their complexity or convergence, or something else? If yes, how?
Relevant answer
Answer
Generally speaking, you should compare different methods simultaneously on two criteria - accuracy and calculation time. That isn't easy, so in practice it is recommended to fix the calculation time (e.g., 1 h or 10 h) and compare only by accuracy. Please see: Dirk P. Kroese, Sergey Porotsky and Reuven Y. Rubinstein, "The Cross-Entropy Method for Continuous Multi-Extremal Optimization", Methodology and Computing in Applied Probability, 8(3):383-407, 2006.
  • asked a question related to Global Optimization
Question
8 answers
I already have the codes for CMA-ES, Nelder-Mead, Pattern Search.
Relevant answer
Answer
You could refer to the Matlab Optimization Toolbox, in which many gradient-based (and non-gradient-based) methods are available. Note that CMA-ES is an evolutionary algorithm, not a local search algorithm. Regards, Behrouz
  • asked a question related to Global Optimization
Question
4 answers
Some optimization problems, such as Molecular Docking which includes 1D and 3D rotations, have non-Euclidean search spaces. Does this influence or bias the results and search behavior of global optimizers (e.g. GA, PSO, DE, ES, MA...)? Are there suggestions on which search strategies or design decisions are more effective when solving this kind of problems?
Relevant answer
Answer
I had encountered a somewhat similar problem (although at a much, much more modest scale!) when faced with the apparently obvious problem of averaging rotations, so as to implement some kind of k-means clustering on rotations. I never completed this study, but I had dug up a few references (attached below).
  • asked a question related to Global Optimization
Question
6 answers
Let f: X -> Z and g: Y -> Z be functions of the variables x and y, respectively. What relation is there between the solution of min F(x,y) = min (f(x) + g(y)) and the pair (min f(x), min g(y))?
For example, let f be the cost function of input 1 (the variable x represents input 1), and g be the cost function of input 2 (the variable y represents input 2). I want to minimize the total cost. My question is: is there any relation between the total-cost minimizer and the minimizers of f(x) and g(y)?
Relevant answer
Answer
@Warren: your elegant derivation is only valid when both x0 and y0 are global minimizers of f and g, respectively (the catch is in 'for all x' and 'for all y').
@Yilmaz: you are most likely thinking of (rediscovering) the Gauss-Seidel minimization procedure: one variable at a time. It is convergent, but not necessarily in a single step.
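For global minimizers the relation is in fact exact, since the objective is separable (a standard derivation, spelling out the point made above):

```latex
% For all x and y:  f(x) + g(y) >= \min_x f(x) + \min_y g(y),
% and equality is attained at any pair (x^*, y^*) of individual minimizers.
% Hence the sum is minimized coordinate-wise:
\min_{x \in X,\; y \in Y} \bigl( f(x) + g(y) \bigr)
  \;=\; \min_{x \in X} f(x) \;+\; \min_{y \in Y} g(y)
```

So the minimizer of the total cost is exactly a pair of individual minimizers; coupling only arises if the two costs share variables or constraints.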
  • asked a question related to Global Optimization
Question
24 answers
I'm working with global optimization algorithms like genetic algorithms and differential evolution. In order to test and evaluate the quality of these optimization strategies, I would like to use some standard benchmark functions, but only in a two-dimensional space. Can anyone suggest some?
Relevant answer
Answer
I think that many of the 2-D standard benchmark problems express the same general features but are not comprehensive in the surface aberrations they reveal: generally no sharp valleys, no flat sections, no hard-to-find minima, and nothing stochastic. I've generated a collection of 2-D challenges for my graduate optimization applications course, attached.
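Two classic 2-D benchmarks, for reference (standard formulas; both have global minimum value 0, Rosenbrock at (1, 1) and Rastrigin at (0, 0)):

```python
import math

def rosenbrock(x, y, a=1.0, b=100.0):
    """Banana-shaped valley; the narrow curved valley is hard to traverse."""
    return (a - x) ** 2 + b * (y - x * x) ** 2

def rastrigin(x, y, A=10.0):
    """Highly multimodal: a grid of local minima on a parabolic bowl."""
    return 2 * A + (x * x - A * math.cos(2 * math.pi * x)) \
                 + (y * y - A * math.cos(2 * math.pi * y))

print(rosenbrock(1.0, 1.0), rastrigin(0.0, 0.0))
```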
  • asked a question related to Global Optimization
Question
5 answers
Do you think that using a large number of ineffective parameters in a hydrological model hinders rapid convergence to the optimum solution when using a global optimization method (like PSO or GAs)? Or is movement through the search space actually controlled by the effective parameters?
Relevant answer
Answer
The problem with having a large number of parameters is not having ineffectual parameters (it is rare for a parameter to have no effect on the value of the objective function), but interaction between parameters. Generally, the more parameters you have, the more interaction there is. This means that any search algorithm has difficulty in deciding which parameter values to adopt, leading to slower convergence. Consider the case where two parameters interact, forming a ridge in the objective function surface. Any algorithm will have difficulty finding the global optimum, as the gradient is very small along the ridge line, so it will take a long time to reach it. Generally, the calibration will be stopped before the global optimum is found, leading to a result that depends on the starting point. Note that this is the "equifinality" issue discussed by Keith Beven (also known as identifiability in other disciplines).
  • asked a question related to Global Optimization
Question
3 answers
Optimization has been done in many ways to decrease the number of transistors in a design. Some such trends follow a certain pattern or law. Please suggest an article that describes this topic.
Relevant answer
Answer
Karnaugh maps (K-maps) and the Quine-McCluskey algorithm remain the best ways to reduce digital designs to their minimal forms (sum of products or product of sums). However, I don't like the use of don't-care terms as arbitrarily logic 1 or logic 0: don't-care terms create spurious outputs if they are clubbed with 1s in K-map minimization. Various computer programs have been developed to perform logic reduction via K-maps or Quine-McCluskey.
  • asked a question related to Global Optimization
Question
3 answers
I am using a genetic algorithm to solve a multivariable optimization problem. The difficulty in exploring all the solutions is that the permissible set for each variable is of the form {0} U [a, b], where 0 < a < b (the magnitudes are around a = 4 and b = 15). "Solutions" that do not satisfy the constraints get a low fitness, so when the genetic algorithm explores the search space, it rarely tries solutions with a variable at 0 (zero). I can try to enlarge the interval around 0 and modify the fitness of variables close to zero. Does anybody know how to treat this kind of constraint? By the way, I am using the DEAP genetic algorithms, more precisely this one: http://deap.gel.ulaval.ca/doc/default/examples/ga_onemax.html.
Relevant answer
Answer
Dear Ledesma,
your code is fine.
Good luck.
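One common way to handle such semi-continuous domains ({0} U [a, b]) is to decode each gene rather than penalize infeasible values: let the raw gene range over [a - w, b] for some margin w, and map anything below a to exactly 0. This decoder is a generic sketch (not DEAP-specific; the margin w, which controls how often 0 is tried, is an assumed tuning knob):

```python
def decode(raw, a=4.0, b=15.0, w=2.0):
    """Map a raw gene in [a - w, b] onto the feasible set {0} U [a, b].
    Raw values falling in the margin [a - w, a) snap to 0, so the GA can
    reach 0 without crossing an infeasible gap."""
    if raw < a:
        return 0.0
    return min(raw, b)

# Every decoded value is feasible by construction:
print([decode(v) for v in (2.0, 3.9, 4.0, 10.0, 15.0)])
```

The GA then evolves raw genes freely; feasibility is guaranteed by the decoder, so no fitness penalty is needed for this particular constraint.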