Science topic

# Global Optimization - Science topic

Explore the latest questions and answers in Global Optimization, and find Global Optimization experts.

Questions related to Global Optimization

I am studying global optimization algorithms now, and it seems that there are many different versions of each kind of algorithm. For particle swarm optimization (PSO) alone there are many variants, such as APSO, CPSO, PSOPC, FAPSO, ARPSO, and DGHPSOGS (see Han and Liu, 2014, Neurocomputing). In addition, whole classes of algorithms such as genetic algorithms (GA), differential evolution (DE), ant colony optimization (ACO), and simulated annealing (SA) can also be used to solve such problems. When we develop a new global optimization algorithm, it is worthwhile to compare the performance of these different methods on some benchmark functions (like the Rosenbrock function and the Rastrigin function) in a fair way. For example, I would say the average number of cost-function evaluations, the success rate, and the average runtime are good measurements for comparison.

So my question is: has any source code (Matlab) been developed for comparing different kinds of global optimization (GO) methods? The code should be easy to use, convenient for fairly comparing enough advanced GO methods, and should also provide enough benchmark functions (supplying the gradient of each function would be even better, so that we could also compare gradient-based global optimization algorithms).
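Although the request is for Matlab, the bookkeeping behind the metrics mentioned above (success rate, average evaluations, average runtime) is tiny in any language. A minimal Python sketch, not an existing package; the record fields and the success tolerance are my own assumptions:

```python
# Summarize benchmark runs into the comparison metrics discussed above.
# Each run record is a dict with 'best_f', 'n_evals', 'seconds' (assumed names).

def summarize_runs(runs, f_target, tol=1e-8):
    """Return success rate, mean evaluations (over successful runs, a common
    convention), and mean wall-clock time for a list of run records."""
    successes = [r for r in runs if r['best_f'] - f_target <= tol]
    return {
        'success_rate': len(successes) / len(runs),
        'mean_evals': (sum(r['n_evals'] for r in successes) / len(successes)
                       if successes else float('inf')),
        'mean_seconds': sum(r['seconds'] for r in runs) / len(runs),
    }
```

Comparing algorithms on the same per-function records like this keeps the comparison fair, since every method is charged in function evaluations rather than iterations.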

I am looking forward to your answer and greatly appreciate your help.

Best wishes,

Bin She.

I have published the following 20 BREAKTHROUGH articles as Elsevier SSRN pre-prints:

1) AI++ : Artificial Intelligence Plus Plus

2) Artificial Excellence - A New Branch of Artificial Intelligence

and 18 others.

You can find all 20 articles at the following Elsevier SSRN pre-prints link:

Is publishing 20 BREAKTHROUGH articles as Elsevier SSRN pre-prints equivalent to publishing 20 BREAKTHROUGH articles in ELSEVIER journals?

Please kindly explain your answer.

I'm working on some optimal strategies for an environmental surveillance network. My solution is largely based on meta-heuristics. I need to know the advantages and disadvantages of heuristic and meta-heuristic optimization.

Various metaheuristic optimization algorithms with different sources of inspiration have been proposed in recent decades. Unlike mathematical methods, metaheuristics do not require any gradient information and do not depend on the starting point. Furthermore, they are suitable for complex, nonlinear, and non-convex search spaces, especially when near-global-optimum solutions are sought with limited computational effort. However, some of these metaheuristics get trapped in a local optimum and are unable to escape from it. For this reason, numerous researchers focus on adding efficient mechanisms that enhance the performance of the standard versions of the metaheuristics. Some of these are addressed in the following references:

I will be grateful if anyone can help me find other efficient mechanisms.

Thanks in advance.

I am trying to solve a MINLP problem using the genetic algorithm (from MATLAB's Global Optimization Toolbox).

My number of decision variables is 168.

- 96 of these decision variables are binary [0 1]
- The remaining 72 variables can take the integer values of [1 2 3].

The problem is accurately formulated and there is no doubt about it.

Following are my doubts:

- What is the appropriate population size to use? I am trying 2*168, 3*168, and 4*168, but these sizes seem large. Since all the decision variables are integers, what do you suggest for the population size?
- For different initial guesses, I get different optimized solutions. I am using 20, 50, and 60% of the population size as the initial population matrix. Of course, I know that we cannot guarantee a global optimum with a GA; but still, what can you suggest for reaching the global optimum? Trying multiple times and taking the lowest fval doesn't sound good to me.
- The mutants are taken as 10% of the total population. Can you suggest an appropriate size for this?

Finally, when the initial population matrix is not defined at all, the linear inequality constraints are not satisfied. With some initial populations they are satisfied, but I think the optimum values are local rather than global.

- Other than the genetic algorithm, are there other optimizers for such problems (I do not think surrogate optimization is a good idea) that are free and can handle such a large MINLP problem?
- Are there other toolboxes (apart from the global optimization toolbox), which are free but can be used to handle large MINLP?

Thanks

I want to maximize the function Y = (a1 * log(x1) - x1 + b1) + (a2 * log(x2) - x2 + b2)

(a1,a2,b1,b2 constants)

subject to:

1) value <= x1 <= value

2) value <= x2 <= value

3) x1 + x2 <= 100k

I have tried a lot of techniques and methods but am not getting satisfactory results. Can someone guide me on how to solve this and point me to some Python packages?

When dealing with Large-Scale Global Optimization (LSGO), which approach works best: decomposition-based algorithms or non-decomposition-based algorithms?

I am working on a global optimization algorithm and I am thinking of using my own benchmark function. The function is provided in the attached images. Please take a look and share your thoughts about it.

When comparing two optimization methods on a set of functions, should we use a two-sample t-test or a paired t-test? I would say we should use the latter, since a paired t-test is used for correlated observations, and in our case we can consider the unit of observation to be the function and the two methods to be two treatments. Am I right?
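A small numerical illustration of the difference (the scores below are invented): when both methods are run on the same functions, pairing removes the large between-function variance, so the paired test can detect a small but consistent gap that the two-sample test misses.

```python
from scipy import stats

# Best objective values of methods A and B on the same five benchmark functions.
method_a = [1.00, 2.00, 3.00, 4.00, 5.00]
method_b = [1.12, 2.09, 3.11, 4.10, 5.08]  # consistently slightly worse

t_ind, p_ind = stats.ttest_ind(method_a, method_b)  # two-sample: ignores pairing
t_rel, p_rel = stats.ttest_rel(method_a, method_b)  # paired: tests the differences

# The paired test sees five differences near 0.1 with tiny spread, so p_rel is
# very small; the two-sample test drowns the gap in between-function variance.
```

This supports using the paired test, provided each pair of scores really comes from the same function (and, ideally, the same random seed or starting condition).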

Thank you in advance

Hello, I am currently solving the following optimization problem (please see the attached figure).

Basically, this problem is equivalent to finding the confidence interval for logistic regression. The objective function is **linear** (no second derivative), while the constraint is non-linear. Specifically, I used n = 1, alpha = 0.05, and theta = logit of p, where p is in [0, 1] (for details, please see the binomial distribution). Thus, I have closed-form solutions for the gradient and Jacobian of the objective and constraints, respectively.

In R, I first tried the alabama::auglag function, which uses the augmented Lagrangian method with BFGS (as a default), and the nloptr::auglag function, which uses the augmented Lagrangian method with SLSQP (i.e. SLSQP as a local minimizer). Although they were able to find the (global) minimizer most of the time, they sometimes failed and produced a far-off solution. In the end, I obtained the best (most stable) results using the SLSQP method directly (nloptr::nloptr with algorithm=NLOPT_LD_SLSQP).

Now, my question is why SLSQP produced better results in this setting than the first two methods, and why those methods (augmented Lagrangian with BFGS, and with SLSQP as a local optimizer) did not perform well. Another question: considering my problem setting, what would be the best method to find the optimizer?

Any comments and suggestions would be much appreciated. Thanks.

I am looking for single-objective problems for which most evolutionary algorithms can find the global optimum. For most of the newer benchmark problems (e.g. CEC2015), algorithms are not able to converge to the global optimum. For my experiments I need simpler problems, such as the Sphere function, so that most algorithms can find the global optimum. Can anyone recommend some problems or a benchmark that I could use?

Thank you in advance!

Deterministic global optimization relies on convex relaxations of the non-convex problem. Certain nonlinearities (e.g. signomial functions and bilinear terms) are converted into linear underestimators, which can then be handled by efficient MILP solvers.

Most nonlinearities are approximated by piecewise linearization. However, I am wondering whether such linearizations guarantee that the approximations are underestimators of the original non-convex problem (i.e. for all x in dom f, f(x) >= u(x), where u is the underestimator),

because otherwise the relaxation may cause the branch-and-bound process to miss the global optimum.

**Can the solver still converge even if the relaxation is not an underestimator?**
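For reference, the classic relaxation for a bilinear term is the McCormick envelope, which is a valid underestimator by construction; a piecewise linearization built from such valid envelopes on subintervals inherits that guarantee, whereas an arbitrary interpolation-based linearization does not. A quick numerical check of the McCormick lower envelope of f(x, y) = x*y, with bounds chosen arbitrarily:

```python
# Verify numerically that the McCormick lower envelope of x*y never exceeds
# x*y on the box [xL, xU] x [yL, yU] (bounds picked purely for illustration).

xL, xU = -2.0, 3.0
yL, yU = 1.0, 4.0

def mccormick_lb(x, y):
    # Lower envelope: pointwise max of the two under-estimating planes,
    # derived from (x - xL)(y - yL) >= 0 and (xU - x)(yU - y) >= 0.
    return max(xL * y + x * yL - xL * yL,
               xU * y + x * yU - xU * yU)

n = 50
violations = 0
for i in range(n + 1):
    for j in range(n + 1):
        x = xL + (xU - xL) * i / n
        y = yL + (yU - yL) * j / n
        if mccormick_lb(x, y) > x * y + 1e-9:
            violations += 1
# violations == 0: the relaxation is a true underestimator, so branch and
# bound cannot prune away the global optimum because of it.
```

If a linearization is not a guaranteed underestimator, the lower bounds it produces can exceed the true optimum and the solver may fathom the node containing the global solution, so convergence to the global optimum is no longer guaranteed.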

Hello everyone,

What are typical reasons for an unstable behaviour of the CMA-ES?

I want to optimize a problem with a solution vector of dimension 3300 using CMA-ES to find a (preferably global) minimum. The problem can be solved using PSO, but applying CMA-ES results in non-convergent behaviour. I have found several up-to-date articles on applying CMA-ES to large-scale problems, with a focus on increasing speed and reducing memory usage, but I have found no information about CMA-ES being unstable in large dimensions, which is my main concern.

The (repeated) behaviour is initially convergent and decreasing, but the objective then increases again and keeps increasing. I use the Python code published by Nikolaus Hansen on GitHub (https://github.com/CMA-ES/pycma) with the suggested settings for lambda etc.

I found the advice to increase the population size lambda to improve stability. Until then I used the suggested lambda = 4 + int(3*np.log(n)) = 28, and I increased it to lambda = 88, with the same outcome (initially convergent behaviour followed by divergence).
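For reference, the default population size quoted above can be reproduced directly; this only restates the formula from the question:

```python
import math

# Default CMA-ES population size lambda = 4 + floor(3 * ln(n)) for n = 3300.
n = 3300
lam = 4 + int(3 * math.log(n))
# This gives 28, matching the value in the question; note how slowly it grows
# with n, which is one reason large-dimensional runs often need a larger lambda.
```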

What are the typical reasons for an unstable behaviour of the CMA-ES?

Thank you!

Daniel Wrede

I'm using a genetic-algorithm-based global optimisation package, USPEX, to search for stable structures of non-stoichiometric transition metal compounds. The code has a limitation: we cannot specify the number of k-points explicitly; instead, only a value called 'kresolution' can be entered. The higher this value, the lower the density of the k-grid. The problem is that it uses the same value of kresolution for all of the structures it searches. I get around 600 k-points in the irreducible Brillouin zone for triclinic structures, which for the most part are just slightly distorted versions of other crystal systems (cubic, hexagonal, monoclinic, etc.), whereas I get around 30 to 70 k-points for structures other than triclinic, all for the same value of kresolution. How reliable are these calculations? Can you please shed some light on what we may call an optimum number of k-points in the irreducible Brillouin zone?

I have a few embarrassingly basic questions about USPEX and job submission. Here goes...

- What are USPEX and USPEX.m, and what is the difference between them?
- Where and how does the code make use of MATLAB?
- qsub vs. mpirun vs. llsubmit = me pulling my hair out and going crazy!
- Can someone be kind enough to guide me (step by step) in running EX01 (Si)? I am able to do VASP calculations on my own, for which I use the following file (multiple.cmd) for submission via the terminal command: llsubmit multiple.cmd

```shell
#!/bin/bash
#@ output = test.out
#@ error = test.err
#@ job_type = MPICH
#@ node = 1
#@ tasks_per_node = 4
#@ class = Small
#@ environment = COPY_ALL
#@ queue
mpirun -n 4 vasp > log
```

Thanks in advance.

I am trying to optimize a function that is non-linear in its parameters, three in number. I am using **Genetic Algorithms** (GA) for this purpose. Thus, I have a function of time that is non-linear in three parameters, fitted to time-series data. I am using the **ga() function** of the **GA package in R** for this purpose. However, as I see it, the **initial values** that I set for the parameters **heavily influence** the parameters computed by the ga() function. I also read the following article: Scrucca, L. (2013). GA: a package for genetic algorithms in R. *Journal of Statistical Software*, *53*(4), 1-37.

In Section **4.4 Curve Fitting**, if I use the initial values (min, max) **a(1000, 10000), b(0, 10), and c(0.5, 10)** instead of the ones used in the paper, that is a(3000, 4000), b(0, 1) and c(2, 4), I get **completely different results** from the ones obtained in the paper: **a = 2772, b = 0.0235, c = 4.07** as against a = 3534.741, b = 0.01575454, c = 2.800797 in the paper.

My understanding is that global optimization techniques such as GA should be able to **find the global optimum irrespective of the initial values**, although it might take more or fewer iterations depending on the initial parameter values. Why is this not happening in the case of my function, and also in the example that I cited? Thanking you all in advance.

I have studied and understood the Moment-SOS hierarchy proposed by Lasserre where a sequence of semi-definite programs are required to be solved and a rank condition is invoked in order to check if the global solution is found. I was not able to find such conditions for its dual viewpoint ( also known as the Putinar's Positivstellensatz). Alternatively, is there a similar rank condition for Parrilo's sum-of-squares relaxation?

(Code optimization) in compilers

Is there a **deterministic** method to find the **global optimum** of a convex function subject to **non-convex constraints**? Please download the attachment for an example.

It's only an example; I am looking for a general method to find the optimal point of any convex function with non-convex constraints.

I am looking for the codes for all 20 large-scale global optimization problems. They should not be confused with the Constrained Real-Parameter Optimization problems from the same year. I looked for the codes on the provided link but the folder is empty.

Are there versions of CMA-ES specifically designed for high-dimensional search spaces? Are there any implementations available (preferably in MATLAB)?

I am using NSGA-II for carrying out a study, and for this I intend to stabilize the algorithm. When I test the algorithm with the ZDT-4 test function, I do not get satisfactory results. I am using "https://in.mathworks.com/matlabcentral/fileexchange/10429-nsga-ii--a-multi-objective-optimization-algorithm" as the source code, and I have made some modifications to it according to my needs. ZDT-4 is converging to a local Pareto-optimal front. How can I get the global Pareto-optimal solutions? All other functions are working fine.

Hello! For global optimization I often use random-search-oriented methods, such as genetic algorithms, the cross-entropy algorithm, etc. I always see that accuracy and speed of convergence depend heavily on the structure and parameters of the goal function. For example, for MLE optimization we can write the Weibull CDF as {1 - exp(-a*t^b)} or {1 - exp(-(t/w)^b)}. Perhaps you have read some generic recommendations on how to select the structure and parameter set of the goal function?
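One observation on the two Weibull forms above: they are the same family under the reparameterization a = (1/w)^b, so the choice affects the scaling and conditioning of the search space rather than the model itself. A quick check, with parameter values chosen arbitrarily:

```python
import math

# Two parameterizations of the Weibull CDF; they coincide when a = (1/w)**b.
w, b = 2.0, 1.5
a = (1.0 / w) ** b

def cdf_ab(t):
    return 1.0 - math.exp(-a * t ** b)

def cdf_wb(t):
    return 1.0 - math.exp(-((t / w) ** b))

# Both forms agree for any t >= 0, since a * t**b == (t/w)**b.
```

For a random-search method, the practical question is then which parameterization gives the better-scaled search box; the (w, b) form is often preferred because w has the units of t.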

Can SQP solve a 6-variable nonlinear global optimization problem in which the objective function has 6 variables, including forces and angles? I have tried the toolbox in MATLAB but cannot get the optimal angle solution.

I want to find an algorithm other than PSO to solve a 6-variable real-time global optimization problem. Can anyone provide some ideas?

Thank you, everyone!

The objective function is:

```matlab
fun = (k1*(v1 - ((uu(1)*cos(uu(5)) - FL(1)*sin(uu(5)) + ...
                  uu(2)*cos(uu(6)) - FL(2)*sin(uu(6)) + ...
                  uu(3)*cos(uu(7)) - FL(3)*sin(uu(7)) + ...
                  uu(4)*cos(uu(8)) - FL(4)*sin(uu(8))) / M))^2 + ...
       k2*(v2 - ((uu(1)*sin(uu(5)) + FL(1)*cos(uu(5)) + ...
                  uu(2)*sin(uu(6)) + FL(2)*cos(uu(6)) + ...
                  uu(3)*sin(uu(7)) + FL(3)*cos(uu(7)) + ...
                  uu(4)*sin(uu(8)) + FL(4)*cos(uu(8))) / M))^2 + ...
       k3*(v3 - (((uu(2)*cos(uu(6)) - FL(2)*sin(uu(6)) - uu(1)*cos(uu(5)) + FL(1)*sin(uu(5)) + ...
                   uu(3)*cos(uu(7)) - FL(3)*sin(uu(7)) - uu(4)*cos(uu(8)) + FL(4)*sin(uu(8))) * Lh + ...
                  (uu(1)*sin(uu(5)) + FL(1)*cos(uu(5)) + uu(2)*sin(uu(6)) + FL(2)*cos(uu(6))) * Lf + ...
                  (-uu(3)*sin(uu(7)) - FL(3)*cos(uu(7)) - uu(4)*sin(uu(8)) - FL(4)*cos(uu(8))) * Lr) / J))^2);
```

in which uu(i) are the variables, while the others are parameters.

I want to calculate the optimal solution of the following function.

I have used fmincon() in Matlab to solve this problem, but the result for the angle variable uu(5) is never proper.

The range of uu(i), i = 1..4, is from -500 N to 500 N, while the range of uu(5) is from -40 degrees to 40 degrees. M = 200, J = 100, v is a random number in the range [-1, 1], and FL is also random, from [0, 200].

In the first paper I linked, it states that the activation function for the XOR problem is a step function, and in the second paper I linked, the activation function is a logistic function. So I was wondering: is the activation function problem-specific? I am asking because I am trying to figure out why my implementation fails to learn the solution to XOR. The second paper states that "the search range is set from -2 to +2", and I haven't figured out what this is the search range of. The weights?

Thank you

Hi There,

What is the performance of NSGA-II (the Non-dominated Sorting Genetic Algorithm II)? Since it is a heuristic algorithm, it cannot guarantee global optimality. So what is the difference between the solutions obtained by NSGA-II and the globally optimal solutions?

Thanks

**Dear researchers,**

I read an abstract here (http://www.dissertationtopic.net/doc/176801), and I do not really understand what the 'lookout algorithm' is, or what the overlook points are.

**Thank you.**

Hi,

What is a fair and relevant stopping criterion for comparing the performance of different algorithms such as PSO, GA, CMA-ES, DE, etc.? The optimization problem is a large-scale one in which we want to minimize a nonlinear least-squares objective function.

1) Maximum number of iterations/generations (maybe not suitable for a large-scale problem?)

2) Maximum number of function evaluations

3) Maximum CPU time (how do we know whether it will converge or not?)

4) Minimum value of the cost function (this may lead to very high CPU times if an algorithm does not perform well)

Thanks in advance

Hi Researchers/Experts,

I am starting a journey towards understanding and working on problems in optimization, both local and global. Should I start from operational research (OR) basics upwards, or from calculus (derivatives and so on), or both? You can also suggest materials, references, or papers that can help me realize this objective.

**Thank you**

Human eyes take very little time to tell objects apart in terms of size, height, etc., but a machine would have to do pairwise comparisons, and this seems to waste time. Scheduling VMs onto appropriate PMs among a number of PMs in a cloud data center is actually a global optimization problem, and if we can get around that, it would be a great success. So if we can emulate human eyes, it would be great.

What do I need to know or read (books or anything else) to embark on this research?

Tool used: Cadence Virtuoso ADE GXL

Analog circuit sizing using optimization.

Thanks for your reply, but when I run the optimization for these values in 65 nm technology:

Wn: from 0.4u to 10u

Ln: from 60n to 300n

Wp: from 0.4u to 10u

Lp: from 60n to 300n

and

Vg1: from 300 mV to 700 mV

Vg2: from 600 mV to 900 mV

it took more than one day to optimize and in the end did not give the optimal solution. My workstation specs are an i7 with 8 GB RAM, so please guide me on how to make this faster, as larger circuits will be even more time-consuming.

In the Real-Parameter Black-Box Optimization Benchmark (BBOB) multi-modal functions are divided into two sets: “Multi-modal functions with adequate global structure” and “Multi-modal functions with weak global structure”. What is the formal definition of “adequate” and “weak” global structure?

**Problem statement**: to optimize the scheduling of a device (an air conditioner) in an RTP price environment (where the prices change every hour) so that the cost over a day is minimized. This can be called a case of *individual optimization*. Likewise, I have to optimize 3 ACs separately, where

**A** = cost1 + cost2 + cost3.

*Global optimization* considers, for instance, three houses (ACs), to minimize the total cost of the three ACs together. This means I now have to solve for a consumption matrix consisting of 3 AC consumption vectors, where

**B** = cost_total (not the summation of the 3 costs!).

*Do you think the cost savings from global optimization will be smaller in any sense than from individual optimization (**B < A**)? If so, why?*

Explanations needed, please :)

I know that Branch and Bound can solve linear and integer optimization problems.

I need test instances for a knapsack problem to test a simulated annealing algorithm.

Is there any **simple** way of **obtaining the global optimum** when you have nonlinear conditions in an optimization problem? (The Matlab toolbox is useful; however, it has some disadvantages.) I have read the papers of W.B. Dolan and G. Athier; other ideas may be useful. Thanks a lot.

Most search techniques for continuous search spaces use some type of gradient -- one solution is better than another, so solutions move in that direction. A specific example is the attraction vector in Particle Swarm Optimization.

Further, Particle Swarm Optimization begins with a cornfield vector/sphere function (see Section 3.2 in [J. Kennedy and R.C. Eberhart. (1995) “Particle swarm optimization,” IEEE ICNN, pp 1942-1948]), Differential Evolution builds its foundation from a simple unimodal cost function (see Figure 1 in [R. Storn and K. Price. (1997) “Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces.” J. Global Optimization, 11:341-359]), and Evolution Strategies (ES) also explains its functionality starting with a mutation cloud shown against a unimodal search space (see Figure 2(b) in [H.-G. Beyer and H.-P. Schwefel. (2002) “Evolution strategies: a comprehensive introduction.” Natural Computing, 1:3-52]).

Are there any examples of metaheuristics that are explicitly and initially designed for multi-modal search spaces?

Hello,

For those of you who are familiar with the Pincus theorem, which gives a way of finding the global maximum of ANY objective function:

Refer to page 63 here, http://www.iis.sinica.edu.tw/page/jise/2001/200101_04.pdf

Can anyone kindly provide me with another similar theorem that finds the global maximum of any objective function (no constraints here)?

Is it possible to perform global optimization (GLOBOP) with effective fragment potential (EFP) in the latest version of GAMESS?

If anyone has tried, could you please let me see some example inputs?

Hello,

I would like to know how I can find the number of variables (especially the integer ones) in GAMS (General Algebraic Modeling System) code.

Does GAMS platform have any options to show the number of variables?

Any help would be appreciated.

Regards,

Morteza

Heuristics, hyper-heuristics, and metaheuristics are often used with machine learning techniques and global optimization methods. Firstly, I want to know the clear difference between these terms. Secondly, I want to know how one can recognize which of the three categories a particular machine learning or optimization technique falls under. For example, as I read in the literature, the genetic algorithm falls under metaheuristics, and swarm intelligence uses metaheuristics and hyper-heuristics too, etc.

But at the same time, I read that the genetic algorithm is a machine learning method and that machine learning methods use hyper-heuristics.

Thank you in advance.

I am working on image processing research that uses global optimization techniques. We used Matlab's genetic algorithm and simulated annealing, and we were planning to use particle swarm as well. Although the Matlab documentation refers to particle swarm, I did not find it in Matlab 2013 or 2014.

The gamma selection method proposed in this paper looks promising, but there is little information about the selection of nu.

As I understand it, nu and gamma are closely dependent, so how can I know that the gamma selected by the method in this paper is globally optimal? I guess it is just a locally optimal gamma for one fixed nu.

I feel very confused by the tuning of nu and gamma. Could anyone give me a hint?

Hello,

As far as I know, meta-heuristic algorithms such as GA, PSO, GSA, etc. generally find the optimal solution of 'unconstrained' optimization problems. If I have some constraints (equality and/or inequality equations), how will I be able to consider and model them in these kinds of algorithms?
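The most common device is a static penalty function: the constrained problem becomes an unconstrained one that any GA/PSO/GSA can handle, by adding a term that grows with the constraint violation. A minimal sketch on a toy problem; the penalty weight and the grid search standing in for the metaheuristic are illustrative choices only:

```python
# Minimize f(x) = x^2 subject to g(x) = 1 - x <= 0 (i.e. x >= 1) by penalizing
# violations; a plain grid search stands in for the metaheuristic here.

RHO = 1e6  # penalty weight (an assumption; often increased over generations)

def f(x):
    return x * x

def g(x):
    return 1.0 - x  # feasible when g(x) <= 0

def penalized(x):
    # Quadratic exterior penalty: zero when feasible, large when infeasible.
    return f(x) + RHO * max(0.0, g(x)) ** 2

# Stand-in "optimizer": evaluate the penalized objective on a grid over [-2, 2].
xs = [-2.0 + 4.0 * k / 4000 for k in range(4001)]
best_x = min(xs, key=penalized)
# best_x lands at the constrained optimum x* = 1.
```

Equality constraints h(x) = 0 are handled the same way with a term RHO * h(x)**2; other common options are repair operators and feasibility-first selection rules (e.g. Deb's constraint-handling rules for GAs).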

I would greatly appreciate it if you kindly help me in this matter.

Regards,

Morteza Shabanzadeh

How can one compare two methods for solving a real-world optimization problem? Is the canonical evaluation, using instances of this problem, sufficient? Or is it more accurate to test their complexity, convergence, or something else? If so, how?

I already have the codes for CMA-ES, Nelder-Mead, Pattern Search.

Some optimization problems, such as Molecular Docking which includes 1D and 3D rotations, have non-Euclidean search spaces. Does this influence or bias the results and search behavior of global optimizers (e.g. GA, PSO, DE, ES, MA...)? Are there suggestions on which search strategies or design decisions are more effective when solving this kind of problems?

Let f: X -> Z and g: Y -> Z be functions of the variables x and y, respectively. What kind of relation is there between the solution of min F(x,y) = min (f(x) + g(y)) and the pair (min f(x), min g(y))?

For example, let f be the cost function of input 1 (the variable x represents input 1), and let g be the cost function of input 2 (the variable y represents input 2). I want to minimize the total cost. My question is the following: is there any relation between the total-cost minimizer and the minimizers of f(x) and g(y)?

I'm working with global optimization algorithms like genetic algorithms and differential evolution. In order to test and evaluate the quality of these optimization strategies, I would like to use some standard benchmark functions, but only in a two-dimensional space. Can anyone suggest some?
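For what it's worth, the classic 2-D test functions are short enough to implement directly; a self-contained sketch with their known global minima noted in comments:

```python
import math

def sphere(x, y):
    # Unimodal bowl; global minimum f(0, 0) = 0.
    return x * x + y * y

def rosenbrock(x, y):
    # Narrow curved valley; global minimum f(1, 1) = 0.
    return 100.0 * (y - x * x) ** 2 + (1.0 - x) ** 2

def rastrigin(x, y):
    # Highly multimodal with a regular grid of local minima; f(0, 0) = 0.
    return 20.0 + (x * x - 10.0 * math.cos(2 * math.pi * x)) \
                + (y * y - 10.0 * math.cos(2 * math.pi * y))

def ackley(x, y):
    # Multimodal with a nearly flat outer region; global minimum f(0, 0) = 0.
    return (-20.0 * math.exp(-0.2 * math.sqrt(0.5 * (x * x + y * y)))
            - math.exp(0.5 * (math.cos(2 * math.pi * x)
                              + math.cos(2 * math.pi * y)))
            + 20.0 + math.e)
```

These four already cover the usual regimes (unimodal, ill-conditioned valley, regular multimodality, deceptive flatness), and in two dimensions each can be plotted to visualize how a population moves across the landscape.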

Do you think using a large number of ineffective parameters in a hydrological model affects rapid convergence to the optimum solution when you are using a global optimization method (like PSO or a GA)? Or is movement through the search space actually controlled by the effective parameters?

There are many ways in which optimization has been done to decrease the number of transistors in a design. Some such trends follow a certain pattern or law. Please suggest an article that describes this topic.

I am using a genetic algorithm to solve a multivariable optimization problem. The difficulty in exploring all the solutions is that the permissible set of each variable of the solution is of the form {0} U [a, b], where 0 < a < b (the magnitudes are around a = 4 and b = 15). "Solutions" that do not satisfy the constraints get a low fitness, so when the genetic algorithm explores the search space, it rarely tries solutions with a variable at 0 (zero). I could try to enlarge the interval around 0 and modify the fitness of variables close to zero. Does anybody know how to treat this kind of constraint? By the way, I am using the DEAP genetic algorithms, more precisely this one: http://deap.gel.ulaval.ca/doc/default/examples/ga_onemax.html.
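One alternative to penalizing infeasible individuals for a disconnected domain like {0} U [a, b] is a repair operator applied after crossover and mutation, so the GA only ever evaluates feasible individuals and the value 0 is actually reachable. A sketch; the a/2 snapping threshold is just one simple choice, not something prescribed by DEAP:

```python
# Repair a gene whose feasible set is {0} U [a, b]: values in the forbidden gap
# (0, a) snap to 0 or to a, whichever is nearer; values outside are clipped.

def repair(gene, a=4.0, b=15.0):
    if gene <= 0.0:
        return 0.0        # clip from below onto the isolated point 0
    if gene >= b:
        return b          # clip from above onto the interval's upper bound
    if gene < a:
        # Inside the gap: snap to the nearest feasible endpoint.
        return 0.0 if gene < a / 2.0 else a
    return gene           # already inside [a, b]
```

In DEAP this could be applied by decorating the registered mutation and crossover operators (so every offspring is repaired before evaluation), or simply at the start of the evaluation function.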