Science topic

Optimization Algorithms - Science topic

Explore the latest questions and answers in Optimization Algorithms, and find Optimization Algorithms experts.
Questions related to Optimization Algorithms
  • asked a question related to Optimization Algorithms
Question
1 answer
Which tuning method is optimal for adjusting PID controller parameters: Ziegler-Nichols (ZN), Genetic Algorithms (GA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), or the Whale Optimization Algorithm (WOA)? Can you provide an overview of each tuning technique for the manipulator?
Relevant answer
The choice of method depends on factors such as the complexity of the system, availability of computational resources, desired level of optimization, and familiarity with the optimization technique. For simple systems with known dynamics, heuristic methods like Ziegler-Nichols may suffice. However, for complex or nonlinear systems, metaheuristic optimization algorithms like GA, PSO, ACO, or WOA may offer better performance and convergence properties. Experimentation and comparative analysis can help determine the most suitable method for a particular application.
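To make the "experimentation and comparative analysis" concrete, here is a hypothetical Python sketch (the thread is MATLAB-oriented, so treat it purely as an illustration): it tunes PID gains for an assumed toy second-order plant by minimizing the integral of squared error (ISE) of the closed-loop step response, with SciPy's differential_evolution standing in for the GA/PSO/ACO/WOA family. The plant, the cost function and the gain bounds are all assumptions, not anything prescribed by the question.
# Minimal sketch, all assumptions: toy plant y'' + 3y' + 2y = u, unit step
# reference, ISE cost, and differential_evolution as the metaheuristic tuner.
import numpy as np
from scipy.optimize import differential_evolution

def closed_loop_ise(gains, dt=0.01, t_end=10.0):
    kp, ki, kd = gains
    y, ydot = 0.0, 0.0            # plant states
    integ, prev_err = 0.0, 1.0    # PID integrator; first error equals the step
    ise = 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integ + kd * deriv      # PID control law
        yddot = u - 3.0 * ydot - 2.0 * y            # assumed plant dynamics
        ydot += yddot * dt                          # explicit Euler step
        y += ydot * dt
        ise += err ** 2 * dt                        # integral of squared error
    return ise

bounds = [(0.0, 20.0), (0.0, 10.0), (0.0, 5.0)]     # assumed Kp, Ki, Kd ranges
result = differential_evolution(closed_loop_ise, bounds, seed=1, maxiter=50)
print("Kp, Ki, Kd =", result.x, " ISE =", result.fun)
Swapping in a Ziegler-Nichols starting point or a different metaheuristic only changes the optimizer call, so several tuners can be compared on exactly the same cost function.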
  • asked a question related to Optimization Algorithms
Question
4 answers
I am working on fuzzy graphs and their applications.
Relevant answer
Answer
Sir, I need articles related to domination in fuzzy graphs and their applications.
  • asked a question related to Optimization Algorithms
Question
2 answers
I want to use it to optimize the PID controller's parameters, but I do not know how to implement and coordinate them using MATLAB.
Relevant answer
Answer
Well, apparently, there is at least one paper doing something similar:
"Design of PID Controller for Magnetic Levitation System using Harris Hawks Optimization"
where pages 2 and 6 state that "All the experimental investigations are implemented using MATLAB".
Just try to find other similar papers - good luck.
  • asked a question related to Optimization Algorithms
Question
3 answers
In the mathematical model of whales encircling prey in (2.2)-(2.3) (paper in ), the search agents update their locations according to the best agent. That is, the search agents approach the prey by the amount A·D when A>0 or move away by the same amount when A<0. But I could not understand the role of the coefficient C. If there were no coefficient C, wouldn't there still be approach or retreat?
Thanks
Relevant answer
Answer
1) If you plot f1 and f2 you can see the curves are exactly the same. Therefore a good iterative algorithm must have the same behaviour on both.
If the sequence of sampled points on f1 is x1, x2, ... then the sequence of sampled points on f2 must be x1+1000, x2+1000, ...
If not, it is biased. WOA is biased for translation.
Just try with an unbiased algorithm, even a rudimentary one like, say, gradient descent.
2) It is perfectly possible that WOA has other biases (assuming, of course, your code is correct). There is a simple way to see that, by plotting a signature:
f(x)=1 on [0,1]^2 (i.e. a Plateau function)
Run ten times with N sampled points, say N=10,000
Plot the 10*N sampled points. If there is no bias the result should look like the one produced by uniform random search.
I read the paper you quote. According to the formulae of WOA I guess such a signature is not very good and probably contains straight lines starting from (0,0).
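As a hypothetical illustration of the signature test described above (in Python): sample the plateau function f(x) = 1 on [0,1]^2 and plot every sampled point, using plain uniform random search as the unbiased reference; a WOA implementation (not shown here) would be plugged in as another sampler and its scatter compared against this baseline.
# Minimal sketch of a signature plot; uniform random search is the unbiased reference.
import numpy as np
import matplotlib.pyplot as plt

def plateau(x):
    return 1.0                           # constant fitness: no gradient information

def uniform_random_search(f, n_points, rng):
    pts = rng.random((n_points, 2))      # unbiased sampler on [0,1]^2
    for p in pts:
        f(p)                             # evaluations are irrelevant on a plateau
    return pts

def signature(sampler, n_runs=10, n_points=10_000, seed=0):
    rng = np.random.default_rng(seed)
    return np.vstack([sampler(plateau, n_points, rng) for _ in range(n_runs)])

pts = signature(uniform_random_search)   # swap in a WOA sampler to test it
plt.scatter(pts[:, 0], pts[:, 1], s=0.2)
plt.title("Signature on the plateau function (should look uniform)")
plt.show()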
  • asked a question related to Optimization Algorithms
Question
7 answers
How do I evaluate the accuracy of different metaheuristic algorithms, such as the Chimp Optimization Algorithm and the Spider Monkey Optimization Algorithm? I came across a term called the Bonferroni-Dunn test, but I am unable to find any related tutorials on how to perform that test. Can someone please shed some light on this?
Relevant answer
Answer
You are required to perform statistical tests such as:
1. the Wilcoxon rank test,
2. the Quade test, and
3. the Bonferroni-Dunn test.
(A minimal sketch of the rank-based workflow follows below.)
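As a hedged sketch of how these rank-based tests are typically run (following Demsar's 2006 methodology, with a synthetic results matrix standing in for real benchmark data): a Friedman test over a problems-by-algorithms matrix, then a Bonferroni-Dunn critical difference against a chosen control algorithm.
# Minimal sketch, synthetic data: rows = benchmark problems, columns = algorithms.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
results = rng.random((20, 4))            # 20 problems x 4 algorithms (lower = better)

# Friedman test: do the algorithms differ at all in their rank distributions?
fried_stat, fried_p = stats.friedmanchisquare(*results.T)
print(f"Friedman p-value: {fried_p:.4f}")

# Average ranks (rank 1 = best on a problem)
ranks = np.apply_along_axis(stats.rankdata, 1, results)
avg_ranks = ranks.mean(axis=0)
print("Average ranks:", avg_ranks)

# Bonferroni-Dunn critical difference versus algorithm 0 as the control
k, n = results.shape[1], results.shape[0]
alpha = 0.05
q = stats.norm.ppf(1.0 - alpha / (2 * (k - 1)))      # Bonferroni-adjusted z value
cd = q * np.sqrt(k * (k + 1) / (6.0 * n))
print(f"Critical difference: {cd:.3f}")
for j in range(1, k):
    diff = abs(avg_ranks[j] - avg_ranks[0])
    verdict = "significantly different" if diff > cd else "not significantly different"
    print(f"algorithm {j} vs control: |rank diff| = {diff:.3f} -> {verdict}")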
  • asked a question related to Optimization Algorithms
Question
4 answers
How do people propose a new optimization algorithm? I mean, what is the baseline? Is there any intuition or mathematical foundation behind it?
Relevant answer
Answer
The question is too general. To generate the basic idea of the algorithm, you need to know the detailed statement of the problem. Typically, the process of developing an algorithm starts with the identification of your problem complexity class. If there is no evidence or clear feeling that the problem is NP-hard, then it is reasonable to try to develop a polynomial algorithm for solving it. Such algorithms are usually based on the use of specific properties of the problem. Sometimes it is possible to construct a polynomial algorithm based on the general scheme of dynamic programming, taking into account the specific properties of the problem.
If the problem is known to be NP-hard, then branch-and-bound methods, dynamic programming, and their modifications often work well for a relatively small problem dimension. Sometimes it is possible to build a successful formulation of the problem in the form of an integer programming model, followed by the use of appropriate methods or ready-made software. For high-dimensional problems, you can either use well-known metaheuristics, or develop your own approximate algorithm. In the latter case, success is usually based on the use of the problem properties. As you can see, in any case, it is useful to start by studying the specific properties of your particular problem.
  • asked a question related to Optimization Algorithms
Question
6 answers
After getting the results of any Topology Optimization Algorithm, how could we generate the CAD/CAM model?
Knowing that the algorithm gives the optimum shape in the form of a matrix containing the pseudo-densities (numbers between 0 and 1).
Relevant answer
Answer
You might be interested in the free education edition of PERMAS.
Check out the Design Wizard in VisPER. You can easily generate .stl files from the result of your topology optimization.
  • asked a question related to Optimization Algorithms
Question
37 answers
I have the following paper:
Nidhal El-Omari, “Sea Lion Optimization Algorithm for Solving the Maximum Flow Problem”, International Journal of Computer Science and Network Security (IJCSNS), e-ISSN: 1738-7906, DOI: 10.22937/IJCSNS.2020.20.08.5, 20(8):30-68, 2020.
Nidhal El-Omari, “An Efficient Two-level Dictionary-based Technique for Segmentation and Compression Compound Images”, Modern Applied Science, published by the Canadian Center of Science and Education, Canada, p-ISSN: 1913-1844, e-ISSN: 1913-1852, DOI: 10.5539/mas.v14n4p52, 14(4):52-89, 2020.
How can I add them to the Google Scholar?
Relevant answer
What is the difference between "Google Scholar" and "Google"?
Both are Web search engines. While Google searches for public Web content, Google Scholar searches for scholarly literature and academic resources.
While Google searches the entire Web, Google Scholar limits its searches to only academic journal articles produced by commercial publishers or scholarly societies. Google Scholar eliminates material from the following:
  • corporations,
  • non-scholarly organizations, and
  • individuals.
The latter (Google Scholar) is very useful when the topic being researched is multidisciplinary.
  • asked a question related to Optimization Algorithms
Question
5 answers
I am working on the development of a PMMS model. To select the best-performing tools and models, several models need to be developed and validated. Can this be replaced by some optimization algorithms?
Relevant answer
Answer
1. Use the downhill simplex method to minimize a function.
2. Use the BFGS algorithm to minimize a function.
3. Use the nonlinear conjugate gradient technique to minimize a function.
4. Using the Newton-CG approach, minimize the function f.
5. Use a modified Powell's approach to minimize a function (see the sketch below).
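All five methods listed above are exposed through scipy.optimize.minimize. The following minimal sketch runs them on the Rosenbrock test function purely as an illustration; a PMMS objective would be dropped in instead.
# Minimal sketch: the five listed methods via scipy.optimize.minimize.
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
for method in ["Nelder-Mead",   # 1. downhill simplex
               "BFGS",          # 2. quasi-Newton BFGS
               "CG",            # 3. nonlinear conjugate gradient
               "Newton-CG",     # 4. Newton conjugate gradient (needs a gradient)
               "Powell"]:       # 5. modified Powell's method
    jac = rosen_der if method in ("BFGS", "CG", "Newton-CG") else None
    res = minimize(rosen, x0, method=method, jac=jac)
    print(f"{method:12s} f* = {res.fun:.3e}  evaluations = {res.nfev}")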
  • asked a question related to Optimization Algorithms
Question
17 answers
Metaheuristics are a class of Artificial Intelligence algorithms. There exist several metaheuristics to solve a variety of optimization problems, and the list is lengthy. Researchers are in search of more efficient algorithms and hence keep proposing newer metaheuristics. The Sea Lion Optimization Algorithm is one such metaheuristic. The algorithm is claimed to outperform Particle Swarm Optimization (PSO), the Whale Optimization Algorithm (WOA), Grey Wolf Optimization (GWO), the Sine Cosine Algorithm (SCA) and the Dragonfly Algorithm (DA). Can anyone please provide some sources for implementing the Sea Lion Optimization Algorithm in any programming language?
Relevant answer
  • asked a question related to Optimization Algorithms
Question
16 answers
(A)What is Exploitation and Exploration in Optimization Algorithms? (B) Describe the local and global search for the PSO, GA, ABC and ACO algorithms and compare them.
Relevant answer
Dear Prof. Tishko N. Muhamad,
I have the following paper:
In this paper I wrote the following paragraphs, which may answer your valuable question about exploitation and exploration:
Fig. 2 shows that the searching process revolves around four pivot points: stochastic, exploration versus exploitation, iterated (i.e. iterative), and guided:
  • The search has a stochastic nature when a set of random variables is employed in extracting a new generation, called a potential solution [9][16]. In each generation, the new values of these variables are chosen stochastically in harmony with the general paradigm. On the other hand, some problems themselves evolve stochastically at different points in time, which makes the optimization hard to grasp and solve. For instance, airline passenger numbers occur stochastically, which calls for the airlines to apply statistical theory, including stochastic analysis and probability distributions, in forecasting the numbers of passengers.
  • Exploration and exploitation are two complementary activities used to traverse the search space. The first is the activity through which the search algorithm tries to explore as broad a part of the search space as possible, to avoid falling into local-optima traps. The second is the activity in which the search algorithm tries to refine the best discovered solutions through targeted approaches. While exploitation builds on the outcomes of exploration, the search for more solutions continues in both activities in order to find better ones. [10][17][18]
  • To exploit the former iteration (also called a trial or time step) to the greatest possible extent, the outcomes of a guided search are progressively improved over the course of the iterations, with the newly generated trials influenced by the older ones. The basic idea is to capture knowledge and extract patterns from former good-quality iterations in order to guide the search process in the subsequent iterations, hopefully in the direction of the optimal solution; hence the notion of "guided": the goal is continuously approximated by replacing the old solutions with new, more successful ones. More generally, the useful information related to the optima of the preceding iterations is stored so that it can be exploited in the succeeding ones. That is, the key to deriving more successful trials is building up a well-stocked knowledge store of the past trials. [10][17][16]
  • Unlike conventional local search, which stops when it gets stuck in a local optimum, guided local search makes the best use of the available features of the current optimum to escape from it and then form another, better feasible solution. In the absence of guided search, an optimization algorithm (OA) inevitably takes a longer exploratory time that is mainly driven by trial and error. [10][17][18]
  • The search proceeds iteratively with the same searching procedure until one or more predefined evaluation functions, called termination conditions or stopping criteria, imposed by the user are reached, and accordingly the best candidate solution found so far is returned. Without them, the searching mechanism would obviously run in an infinite loop [10][17]. The following are some of the possible termination conditions that may be imposed by the user to terminate the series of iterations: the previously defined maximum number of iteration steps (i.e. number of runs) is reached; the maximum allowable CPU run time (i.e. max CPU time) is exceeded; significant evidence is found that an optimal solution has been achieved; the maximum number of iterations allowed between two successive improvements is reached; the maximum number of iterations is reached without observing any difference or forward progress on the problem of interest; or there is no way to obtain further improvement for the problem of interest (i.e. there is no progress) [10][17]. Related to the first termination condition, applications of Artificial Neural Networks (ANNs) use the name "epoch" for an iteration step (i.e. time step) [10][18]. In line with the second termination condition, it is crucial to balance the quality of the feasible solutions obtained against the overall computation time used to generate them, and to set the timeout bound accordingly [10][18]. The last termination condition, on the other hand, represents the case where, after many generations, the solutions start approaching each other, in the hope that this convergence is a good indicator that the final solution is close to the ideal solution of the problem under study [10][18]. In light of this, suitable criteria are needed to define the quality of an acceptable solution and to decide accordingly whether to stop the search or not [10][17][18]. If the procedure fails to reach a visible solution or a practical compromise within the timeout bound, it is inevitably stopped [10][18].
  • asked a question related to Optimization Algorithms
Question
14 answers
It is about optimization.
Relevant answer
Answer
Exploitation and exploration are two important phases in every optimization algorithm; they help the algorithm stay balanced during the search process.
  1. Exploitation: the later phase, in which the algorithm refines the best solutions found so far.
  2. Exploration: the earlier phase, in which the algorithm searches a large area of the space. (A small sketch of this balance follows below.)
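As a hypothetical illustration of that balance, here is a bare-bones simulated annealing sketch in Python: the early high-temperature phase takes large steps and accepts many worse moves (exploration), while the cooled-down phase takes small steps and keeps only improvements around the best region found (exploitation). The Rastrigin function and the cooling schedule are arbitrary choices for the demo.
# Minimal sketch: exploration early (hot), exploitation late (cold).
import numpy as np

def rastrigin(x):
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

rng = np.random.default_rng(0)
x = rng.uniform(-5.12, 5.12, size=2)
fx = rastrigin(x)
best_x, best_f = x.copy(), fx
temp, cooling = 10.0, 0.995

for _ in range(5000):
    step = 0.3 * temp                                 # step size shrinks with temperature
    cand = np.clip(x + rng.normal(0.0, step, size=2), -5.12, 5.12)
    fc = rastrigin(cand)
    # Always accept improvements; accept worse moves with a probability that
    # decays with the temperature (exploration first, exploitation later).
    if fc < fx or rng.random() < np.exp((fx - fc) / temp):
        x, fx = cand, fc
        if fx < best_f:
            best_x, best_f = x.copy(), fx
    temp *= cooling

print("best point:", best_x, "fitness:", best_f)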
  • asked a question related to Optimization Algorithms
Question
4 answers
If anyone has, please send it to me or share it with me.
  • asked a question related to Optimization Algorithms
Question
4 answers
I would like to compare my algorithm (the improved LPA) with the Louvain, Infomap and CNM (fast greedy) algorithms (available in the MATLAB community toolbox), which have been run on the LFR dataset.
I ran into a problem: I cannot use the outputs of the algorithms for the NMI criterion.
I would be grateful to anyone who could guide me on this matter!
Relevant answer
Answer
Please provide me with the complete solution to your problem; I am also stuck on the same problem.
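For the NMI part specifically, a minimal sketch (assuming both the LFR ground truth and the detected communities can be expressed as one community label per node, in the same node order; if your algorithms output lists of communities, convert them to such a label vector first) is to feed the two label vectors to scikit-learn:
# Minimal sketch with made-up labels; replace with your real partitions.
from sklearn.metrics import normalized_mutual_info_score

ground_truth = [0, 0, 0, 1, 1, 1, 2, 2]      # community id of each node (LFR file)
detected     = [1, 1, 1, 0, 0, 2, 2, 2]      # labels produced by LPA/Louvain/...

nmi = normalized_mutual_info_score(ground_truth, detected)
print(f"NMI = {nmi:.3f}")                    # 1.0 means identical partitions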
  • asked a question related to Optimization Algorithms
Question
7 answers
Dear All,
I have successfully applied NSGA-II on a multi-objective optimization problem.
While I was observing the results during the optimization process, I noticed that some variables (genes) have reached very good values that match my objectives, while others haven't. However, during the optimization, the good genes are frequently changed and swapped to undesired values (due to the genetic operations, mutation and crossover) until the algorithm reaches a good local optimum or hits a stopping condition.
My question is this:
Can I exclude some genes from the optimization process, since they have already reached my conditions, and finally combine them with the remaining genes in the final chromosome?
Relevant answer
Answer
All the genes play an important role in the optimization process. The likely good genes contribute information for improving the quality of new ones, and the likely bad genes can help to avoid infeasible solutions and to redirect the search to new areas of the domain. So I think that all the genes must participate in the solution search process.
  • asked a question related to Optimization Algorithms
Question
7 answers
There is an idea to design a new algorithm for the purpose of improving the results of software operations in the fields of communications, computers, biomedical, machine learning, renewable energy, signal and image processing, and others.
So what are the most important ways to test the performance of smart optimization algorithms in general?
Relevant answer
Answer
I'm not keen on calling anything "smart". Any method will fail under some circumstances, such as on some outlier that no one has thought of.
  • asked a question related to Optimization Algorithms
Question
4 answers
I have executed optimization algorithm 'A' on an objective; after 50 independent runs I have 50 values. I have executed optimization algorithm 'B' on the same objective; after 50 independent runs I have another 50 values. I need to compare the performance of the two algorithms. Is the Wilcoxon rank-sum or the Wilcoxon signed-rank test best to apply?
Relevant answer
Answer
@Ibrahim Aljarah Could you tell me how I can select the null hypothesis?
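A hedged sketch of both options in SciPy, with synthetic run results standing in for the real 50-value samples: the rank-sum test treats the two sets of runs as independent samples, while the signed-rank test assumes they are paired (e.g. both algorithms were run on the same seeds or instances). In both cases the null hypothesis is that there is no difference between the algorithms' result distributions.
# Minimal sketch with synthetic data; replace runs_a/runs_b with the real values.
import numpy as np
from scipy.stats import ranksums, wilcoxon

rng = np.random.default_rng(42)
runs_a = rng.normal(10.0, 1.0, size=50)      # 50 final objective values of A
runs_b = rng.normal(10.5, 1.0, size=50)      # 50 final objective values of B

stat, p = ranksums(runs_a, runs_b)           # unpaired (independent runs)
print(f"rank-sum p-value:    {p:.4f}")

stat, p = wilcoxon(runs_a, runs_b)           # paired (same seeds/instances)
print(f"signed-rank p-value: {p:.4f}")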
  • asked a question related to Optimization Algorithms
Question
7 answers
I have to solve an optimization problem. I already found the PuLP library in Python, but I want to solve it with a metaheuristic algorithm. My problem involves discrete decision variables, constraints, and a minimization objective function. I can't decide which algorithm goes better with my problem. Also, I need similar code for it.
Relevant answer
You can check the CCSA algorithm, implemented with a conscious neighborhood-based approach, which is an effective mechanism for improving other metaheuristic algorithms as well. The CCSA and its full source code are available here:
  • asked a question related to Optimization Algorithms
Question
17 answers
I am using an ANN for a product reliability assurance application, i.e. picking some samples within the production process and then estimating the overall quality of the production line output. What kind of optimization algorithm do you think works best for training the ANN in such a problem?
Relevant answer
Answer
Optimization algorithm in neural network
The process of minimizing (or maximizing) any mathematical expression is called optimization. Optimizers are algorithms or methods used to change the attributes of the neural network, such as the weights and the learning rate, to reduce the losses. Optimizers are used to solve optimization problems by minimizing the function.
Regards,
Shafagat
  • asked a question related to Optimization Algorithms
Question
8 answers
I observed that some algorithms obtain very small fitness values on the given optimization problems, like 10^-150, and some of the proposed methods obtain 10^-190. They are very small numbers. What is the significant difference between 10^-150 and 10^-190?
Relevant answer
The quality measures, also called fitness functions, objective functions, or goodness levels, are quantifiable and revolve around maximizing some desired features and/or minimizing the undesirable ones until a predefined optimization goal is achieved [6]. Generally speaking, when the fitness values are small, it means that the values of the control variables are close enough to their optimum solutions. This means that the goodness levels are high; this is, unfortunately, at the expense of time.
  • asked a question related to Optimization Algorithms
Question
9 answers
As we know, the computational complexity of an algorithm is the amount of resources (time and memory) required to run it.
If I have an algorithm that implements mathematical equations, how can I estimate or calculate the computational complexity of these equations, the number of computational operations, and the amount of memory used?
Relevant answer
Answer
Muhammad Ali, thank you for the answer.
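Besides counting operations by hand, one pragmatic (and admittedly rough) option is to estimate the empirical complexity: time the implementation at several input sizes and fit the slope on a log-log scale. A hypothetical sketch, with a matrix product standing in for the actual equations:
# Minimal sketch: empirical time-complexity estimate from timings at several sizes.
import time
import numpy as np

def routine(n):
    a = np.random.rand(n, n)
    return a @ a                                 # stand-in for the real equations

sizes = [100, 200, 400, 800]
times = []
for n in sizes:
    t0 = time.perf_counter()
    for _ in range(5):                           # average over a few repeats
        routine(n)
    times.append((time.perf_counter() - t0) / 5)

slope = np.polyfit(np.log(sizes), np.log(times), 1)[0]
print("estimated exponent:", round(slope, 2))    # matrix product grows roughly as n^3
# Memory can be profiled analogously by wrapping the call with the tracemalloc module.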
  • asked a question related to Optimization Algorithms
Question
4 answers
I have two questions, sir. First: kindly give the design procedure for a FOPID controller, and give a sample m-file program.
Second: how can FOPID controller tuning methods be designed using optimization algorithms?
Please reply to me, sir.
Relevant answer
  • asked a question related to Optimization Algorithms
Question
14 answers
I am preparing a comparison between a couple of metaheuristics, but I would like to hear some points of view on how to measure an algorithm's efficiency. I have thought of using some standard test functions and comparing the convergence time and the value of the evaluated objective function. However, any comments are welcome, and appreciated.
Relevant answer
The 7th section, namely "Results, Data Analysis, and Comparison", of the following current-state-of-the-art research paper has a sufficient answer to this question:
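In addition to that reference, a minimal fixed-budget harness is often enough to get started: run each algorithm several times on standard test functions, record the best-so-far value after every evaluation, and compare the resulting convergence curves and final statistics. A hypothetical sketch with two deliberately simple searchers and the sphere function as placeholders for the real metaheuristics and benchmarks:
# Minimal sketch: fixed evaluation budget, best-so-far curves, mean +/- std over runs.
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def random_search(f, dim, budget, rng):
    best, curve = np.inf, []
    for _ in range(budget):
        best = min(best, f(rng.uniform(-5.0, 5.0, dim)))
        curve.append(best)                        # best-so-far after each evaluation
    return curve

def hill_climber(f, dim, budget, rng, step=0.5):
    x = rng.uniform(-5.0, 5.0, dim)
    best, curve = f(x), []
    for _ in range(budget):
        cand = x + rng.normal(0.0, step, dim)
        fc = f(cand)
        if fc < best:
            x, best = cand, fc
        curve.append(best)
    return curve

budget, dim, runs = 2000, 10, 20
for name, algo in [("random search", random_search), ("hill climber", hill_climber)]:
    finals = [algo(sphere, dim, budget, np.random.default_rng(r))[-1] for r in range(runs)]
    print(name, "mean best:", np.mean(finals), "+/-", np.std(finals))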
  • asked a question related to Optimization Algorithms
Question
16 answers
There is much research on metaheuristic optimization, e.g. Particle Swarm Optimization, Genetic Algorithms, etc. Some studies show that they are good for clustering tasks, but I cannot find any comparison of them.
Which one is the best to apply for optimizing the clustering process?
Relevant answer
The following current-state-of-the-art paper on optimization algorithms gives you the answer:
N. K. T. El-Omari, "Sea Lion Optimization Algorithm for Solving the Maximum Flow Problem", International Journal of Computer Science and Network Security (IJCSNS), e-ISSN: 1738-7906, DOI: 10.22937/IJCSNS.2020.20.08.5, 20(8):30-68, 2020.
It has a complete discussion about your question.
Or refer to the same paper at the following link:
  • asked a question related to Optimization Algorithms
Question
25 answers
Since the early 90s, metaheuristic algorithms have been continually improved in order to solve a wider class of optimization problems. To do so, different techniques such as hybridized algorithms have been introduced in the literature. I would appreciate it if someone could help me find some of the most important techniques used in these algorithms.
- Hybridization
- Orthogonal learning
- Algorithms with dynamic population
Relevant answer
The following current-state-of-the-art paper has the answer for this question:
N. K. T. El-Omari, "Sea Lion Optimization Algorithm for Solving the Maximum Flow Problem", International Journal of Computer Science and Network Security (IJCSNS), e-ISSN: 1738-7906, DOI: 10.22937/IJCSNS.2020.20.08.5, 20(8):30-68, 2020.
Or simply refer to the same paper at the following address:
  • asked a question related to Optimization Algorithms
Question
7 answers
I am working on a closed-loop system which has a PI controller. I have made a Simulink environment for the overall scenario, but now I want to optimize the coefficients of the PI controller through an optimization algorithm implemented in MATLAB code. So, can anybody help in this regard?
Relevant answer
Answer
This is a simple program for tuning a PID controller based on PSO.
  • asked a question related to Optimization Algorithms
Question
11 answers
I am trying to implement the grey wolf optimizer for analyzing data of different brain tumour patients across a specific region, but I am unable to exploit the algorithm for my work. I intuitively feel that a swarm algorithm can provide me with a good data analysis.
Relevant answer
Answer
Dear @Shantanu Panwar
I have compared the performance of many of the newest optimization algorithms on some benchmark problems in high-dimensional space. I suggest the whale optimization algorithm (WOA). Its performance is really awesome.
  • asked a question related to Optimization Algorithms
Question
7 answers
Local search methods help to increase the exploitation capability of optimization and metaheuristic algorithms. They can also help to avoid local optima.
Relevant answer
Answer
A hint: it is not OK to be ignorant of nonlinear mathematical optimisation. First you absolutely need to understand optimality criteria - as those conditions are the goal to fulfill. Mathematical optimisation tools are DESIGNED such that accumulation points from the iterative scheme WILL satisfy the optimality criteria, and in most cases you will be able to see the "fault" in the optimality criterion shrinking.
In contrast metaheuristics have no such "guiding light" at all - they fumble in the dark! Few sensible scholars with knowledge in mathematical optimisation would use these tools.
  • asked a question related to Optimization Algorithms
Question
5 answers
I am trying to schedule VMs onto host machines using one of the optimization algorithms (ant colony, GA, ...) to find an optimal, SLA-aware solution for energy consumption. Can someone suggest ideas?
Relevant answer
Answer
Hi, dear researcher.
I want to use a GA for VM placement in hosts in CloudSimSDN, and I have to change the VmAllocationPolicy, but I do not know how I should do it!
  • asked a question related to Optimization Algorithms
Question
7 answers
Hello, 
I have some more multi-objective optimization questions about the exhaustive search (brute-force) method; help would be much appreciated. An answer to all or just one of the questions is very welcome.
I have 3 different multi-objective optimization problems with 3 discrete variables.
Optimization Problem 1) 2 objective functions
Optimization Problem 2) 2 objective functions + 1 constraint
Optimization Problem 3)  3 objective functions
The ranges for the variables are 15~286, 2~15, 2~6, respectively.
I have been told that the search space is small enough that exhaustive search is my best bet and that it is easy to implement, so I wanted to try it.
My questions are
1) Is it possible to apply the exhaustive search method to all three optimization problems?
2) How would I go about doing this using a computer software?
I was thinking that for
Optimization Problem 1 and 3 
I would find all the function values first and then move on to finding and plotting the Pareto fronts -> Is this the right approach?
Also, is there any example code I could follow (especially for optimization with three objectives)?
For Optimization Problem 2 with a constraint:
How would I incorporate this constraint?
Just make the program give me the function values that do not violate the constraint (i.e. ignoring solutions that do) and then use them in plotting the Pareto front?
Are there example codes/programs for doing this?
I would in particular find any Matlab and R codes helpful.
If there is a specific program/software or what not that does this, I would be very grateful as well.
Thank you to everyone in advance!
Relevant answer
Answer
I don't think brute-force or exhaustive search is convenient for solving a complicated problem such as multi-objective optimisation. You can use brute-force search on a small scale to examine the design variables and explore the search domain.
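Since the question explicitly asks for example code, here is a hypothetical sketch (in Python rather than MATLAB/R) of the exhaustive approach for this search space: enumerate all combinations of the three discrete variables, drop infeasible points for Problem 2, evaluate the objectives, and keep the non-dominated points as the Pareto front. The objective functions and the constraint below are made up and must be replaced by the real ones; the dominance filter is the O(n^2) textbook version, which is acceptable at this problem size.
# Minimal sketch: full enumeration, constraint filter, Pareto-front extraction.
import itertools
import numpy as np

def objectives(a, b, c):                  # placeholder objectives, both minimised
    return a / c + b, (300 - a) * c / b

def feasible(a, b, c):                    # placeholder constraint (Problem 2 only)
    return a + 10 * b <= 350

points = []
for a, b, c in itertools.product(range(15, 287), range(2, 16), range(2, 7)):
    if not feasible(a, b, c):             # skip this test for Problems 1 and 3
        continue
    points.append(((a, b, c), objectives(a, b, c)))

objs = np.array([f for _, f in points])

def pareto_mask(objs):
    # Point i is dominated if some other point is no worse in every objective
    # and strictly better in at least one.
    mask = np.ones(len(objs), dtype=bool)
    for i in range(len(objs)):
        dominators = np.all(objs <= objs[i], axis=1) & np.any(objs < objs[i], axis=1)
        if dominators.any():
            mask[i] = False
    return mask

front = [points[i] for i in np.where(pareto_mask(objs))[0]]
print(len(points), "feasible points,", len(front), "on the Pareto front")
For three objectives (Problem 3) the same dominance filter applies unchanged; only the objectives tuple grows by one entry.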
  • asked a question related to Optimization Algorithms
Question
4 answers
Hi Guys,
I need to write an N-dimensional downhill simplex algorithm. Does one of you happen to have an implementation for MATLAB which I could use as a reference?
Thanks a lot in advance!
Relevant answer
Answer
Adaptive Nelder-Mead Simplex Method Matlab Code
  • asked a question related to Optimization Algorithms
Question
18 answers
I have learnt that to implement different task scheduling policies (like RR, FCFS, ACO, SJF), I need to make changes to the SubmitCloudlet method in the DataCentreBroker class. However, I am having trouble coding the round-robin task scheduling algorithm. For the VMs, do we have to include a time quantum, or do we just assign cloudlets to VMs in a round-robin way?
Could you please send me the code or help me out with it?
Relevant answer
Answer
It will differ on a case-by-case basis.
  • asked a question related to Optimization Algorithms
Question
7 answers
I'm struggling with my qualification work. Excel's solver methods solve the math problem easily, but I don't grasp how it does it. I need to understand it in order to create the code, or just to describe the algorithm with the help of block diagrams.
The issue is that I have a mix of a continuous function and discrete variables.
I attach two files: the first contains the function that needs to be minimized and the constraints; the second one is an Excel file with the solution.
Relevant answer
Answer
GRG stands for “Generalized Reduced Gradient”. In its most basic form, this solver method looks at the gradient or slope of the objective function as the input values (or decision variables) change and determines that it has reached an optimum solution when the partial derivatives equal zero.
The GRG nonlinear solver is based on work published by L. Lasdon in 1973 and by Alan Waren, and on their code implementation published in 1975.
At best, the GRG Solving method alone – like virtually all “classical” nonlinear optimization algorithms – can find a locally optimal solution to a reasonably well-scaled, non-convex model. At times, Solver will stop before finding a locally optimal solution, when it is making very slow progress (the objective function is changing very little from one trial solution to another) or for other reasons.
When the message “Solver found a solution” appears, it means that the GRG method has found a locally optimal solution – there is no other set of values for the decision variables close to the current values that yields a better value for the objective function.  This means that Solver has found a “valley” (if minimizing) – but if the model is non-convex, there may be other deeper valleys far away from the current solution. 
Also, in your Excel file with the solution, решение.xlsx, there is a misprint: $B$3>=0.72 rather than the correct $B$3<=0.72.
  • asked a question related to Optimization Algorithms
Question
3 answers
Multiple metaheuristic optimization algorithms, like the Grey Wolf Optimizer, face a problem of shift invariance: when the optimum of an optimization model is at (0,0), the algorithm performs quite well; however, when the same model is shifted by some offset, the performance of the same algorithm goes down the drain.
An example might be taken from f1 & f6 of standard Benchmark Functions (CEC2005).
Relevant answer
Answer
Hi
It is usually easier for an optimization algorithm to obtain the optimal solution when the optimal point sits symmetrically in the search space. When the optimal point of a test function is shifted, the search space loses that symmetry; therefore, the power of the algorithm in detecting optimal solutions may be reduced.
  • asked a question related to Optimization Algorithms
Question
10 answers
Hello ... the objective here is maximizing Z for each product (i)
function [ Zi ] = myfitness( X )
P=X(1);
C=X(2);
Q=X(3);
% Zi= fitness value
% C,P,Q = variables vectors
for i=1:10;
Zi = P(i).*Q(i)-C(i).*Q(i);
end
end
The output should be a 1*n matrix.
When I run the function it works, but I get only one value, and it doesn't work with the GA toolbox; I keep getting the same error (Index exceeds matrix dimensions). How can I fix this error?
Any help would be appreciated. Thank you.
Relevant answer
Answer
Aicha Ghedamsi, you need to call myfitness with the full X as the input, e.g.
X=rand(3,10)
myfitness(X)
Also note that inside the function P=X(1), C=X(2) and Q=X(3) pick out single elements, so P(i) fails for i>1 (hence "Index exceeds matrix dimensions"). Index whole rows instead, P=X(1,:), C=X(2,:), Q=X(3,:), and either store each result as Zi(i) inside the loop or simply vectorise it as Zi = P.*Q - C.*Q, so that a 1*10 vector is returned.
  • asked a question related to Optimization Algorithms
Question
4 answers
The Quantum Approximate Optimization Algorithm is a very promising way to trade off result correctness against speedup. My question is how to implement this practically, with the closed-loop feedback between a quantum and a classical processor. Can a simulator be used instead of a quantum device?
Relevant answer
Answer
Yes, you can refer to the GitHub repo of the Qiskit QAOA to find the code and implement it using Qiskit.
  • asked a question related to Optimization Algorithms
Question
11 answers
How can I implement the swarm optimization algorithm (preferably Java code) to maximize an objective function with three variables, as shown below?
Y= 2+2.3*X1-.3*X2+.1*X3
1< X1 <3
10< X2 <100
30< X3 <100
Relevant answer
Answer
I do not see what the problem is at all, or why you need swarm optimization or any other optimization algorithm, Java code or any other code!
This is a simple linear objective function with simple variables domain constraints.
It is obvious that the maximum value of your objective function is attained when X1 and X3 take their maximum allowed values and X2 takes its minimum allowed value, i.e. the optimal solution is
X1=3, X2=10, and X3=100
The max objective function is Y=15.9
If your constraints do not include the boundary values (i.e. the domain is strictly < or >), then
x1=2.99999999999...., x2=9.999999999999, and x3=99.9999999999 depending on the needed accuracy, and Y max= 15.8999976930.......
  • asked a question related to Optimization Algorithms
Question
4 answers
Why is a dual algorithm used in OptiStruct to solve topology optimization? Why are they not using the Optimality Criteria method?
Relevant answer
Answer
I guess that the proposal was based on the fact that it worked well in the project - it wouldn't be recommended otherwise. :-) I have also had fun with dual methods in this domain of problems.
  • asked a question related to Optimization Algorithms
Question
2 answers
Hi,
We developed a subpixel image registration algorithm for finding sub-pixel displacements, and I want to test it against existing methods. I have compared it with the subpixel image registration algorithm by Guizar et al. and also with the algorithm developed by Foroosh et al. Does anyone know any other accurate algorithm for subpixel image registration (preferably with open-source code)?
Thank you.
Relevant answer
Answer
Hi Amir,
Hope you have found the solution.
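One readily available reference implementation worth adding to the comparison: the Guizar-Sicairos cross-correlation method mentioned in the question is implemented in scikit-image as phase_cross_correlation. A hypothetical sketch that applies a known sub-pixel shift to a test image and recovers it:
# Minimal sketch: known sub-pixel shift recovered with scikit-image.
import numpy as np
from scipy.ndimage import fourier_shift
from skimage.data import camera
from skimage.registration import phase_cross_correlation

reference = camera().astype(float)
true_shift = (-2.4, 1.32)                           # known sub-pixel displacement
shifted = np.fft.ifftn(fourier_shift(np.fft.fftn(reference), true_shift)).real

# upsample_factor controls the sub-pixel resolution (1/100 of a pixel here).
est_shift, error, _ = phase_cross_correlation(reference, shifted,
                                              upsample_factor=100)
# Compare against true_shift, minding the sign convention documented by skimage.
print("estimated shift:", est_shift, "applied shift:", true_shift)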
  • asked a question related to Optimization Algorithms
Question
4 answers
"Sperm Swarm Optimization Algorithm"
Relevant answer
Answer
Dear Mohamed-Mourad Lafifi,
Thank you sir for your valuable response.
  • asked a question related to Optimization Algorithms
Question
6 answers
I am currently working on a project which requires self-tuning of the BELBIC controller parameters, i.e. the PID gains and the learning rates of the amygdala and the orbitofrontal cortex. I need some suggestions as to how I could integrate an optimization algorithm, i.e. PSO, to tune these parameters for a 3rd-order nonlinear system. I know how PSO works; the only thing is, I am having difficulties in linking BELBIC and PSO together.
Relevant answer
  • asked a question related to Optimization Algorithms
Question
6 answers
The goal is to develop general guidelines that can lead to the improved performance of (population-based) metaheuristics on (continuous domain) multi-modal fitness functions. Assuming a fixed or constrained number of function evaluations, our plan is to use multiple restarts of a metaheuristic tuned to converge relatively fast.
Our current focus is to identify and restart at the "critical" search scale. In general, on (benchmark) multi-modal fitness functions, there are rapid improvements in fitness at the beginning (as the search scale is larger than the attraction basins of the local optima and the search process rapidly explores the overall global structure of the search space), an "elbow" (which we believe could be around the "critical" search scale), and then slow improvement as mostly local search occurs to find a local optimum.
We are looking for features that we can identify in real-time that indicate this "elbow" so that we can restart at it and thus have the new metaheuristic procedure spend more time and effort at this "critical" search scale. We believe that this transition from coarse global search to the specific selection of an attraction basin to exploit will heavily affect the performance of metaheuristics on multi-modal fitness functions.
Any ideas?
Relevant answer
Answer
All the answers are great.
  • asked a question related to Optimization Algorithms
Question
6 answers
I want to use the differential evolution technique in order to obtain the rate constants at given reaction conditions.
Relevant answer
Answer
The main thing is to translate the problem into an optimization task.
  • asked a question related to Optimization Algorithms
Question
13 answers
I am using differential evolution (DE) for estimating the parameters of a model. If a parameter value lies between 0 and 1, the optimization algorithm ideally has infinitely many values between 0 and 1 to look at. If I were to restrict it to just one decimal place, there would be just nine values the algorithm needs to visit. I am not sure whether rounding the parameters in each iteration, in mutation, crossover, etc., is going to be efficient. Please advise me if you have any ideas.
Thanks,
Ahammed
Relevant answer
Answer
You can use the accuracy as a termination criterion.
  • asked a question related to Optimization Algorithms
Question
11 answers
I want to use Particle Swarm Optimization (PSO) for finding the hyperparameters of a support vector regression problem. Initially I tried to find them using the grid search method, but the MATLAB code is taking too long to produce results. Even after reading a lot on PSO, I am still not clear on how to apply it. Can anybody help me understand, or refer me to a good text which outlines step by step how PSO can be used in my case?
Relevant answer
Answer
It is necessary to identify the decision variables and the way to evaluate the quality of each solution. Then the algorithm can be directly implemented.
  • asked a question related to Optimization Algorithms
Question
4 answers
Which way do I proceed? Nothing that I read helps me make that decision
Relevant answer
Answer
Hi All
The ACO metaheuristic was defined by its authors (Dorigo et al.) when they tried to generalize their Ant System heuristic, first used for the TSP, to other kinds of problems, under four conditions:
  1. To see the problem as a search graph;
  2. To have an auto-catalytic process (a positive reinforcement);
  3. To have a heuristic which allows a constructive definition of the solution (the greedy force);
  4. To have a constraint satisfaction method (stopping criteria).
The ACO metaheuristic can be and has been used for the QAP and for WSNs (both the QAP and WSN applications satisfy the four previous conditions).
Your confusion is therefore very obvious and inevitable:
you are trying to apply the ACO metaheuristic to WSN applications using one of two combinatorial optimization problems, the TSP (Travelling Salesman Problem) or the QAP (Quadratic Assignment Problem)!
Applying a metaheuristic to a problem using a problem!
Am I missing something, or have I misunderstood your problem?
Sincerely, Djamel.
  • asked a question related to Optimization Algorithms
Question
17 answers
It will be really good if the suggested journal doesn't spend much time in revision cycles, because I submitted this algorithm to the "Applied Soft Computing" journal 1 year ago, and after 6 revision cycles they just rejected it with no real reason.
Relevant answer
Answer
Hello, I suggest IJIETAP, Scopus indexed.
  • asked a question related to Optimization Algorithms
Question
6 answers
In the MATLAB GA solver, when I try to solve my objective function it displays "Too many output arguments". How do I resolve this error? I have tried different problems with different variables, and each time it displays the same message. I would like some suggestions from anybody.
With regards,
L. Mamundi Azaath
Relevant answer
Answer
Thanks for your information..
  • asked a question related to Optimization Algorithms
Question
5 answers
Cuckoo search clustering algorithm.
Relevant answer
Answer
This might be helpful
function [z] = levy(n,m,beta)
% Input parameters
% n    -> Number of steps
% m    -> Number of Dimensions
% beta -> Power law index
% Note: 1 < beta < 2
% Output
% z    -> 'n' levy steps in 'm' dimension
num = gamma(1+beta)*sin(pi*beta/2);           % used for Numerator
den = gamma((1+beta)/2)*beta*2^((beta-1)/2);  % used for Denominator
sigma_u = (num/den)^(1/beta);                 % Standard deviation
u = random('Normal',0,sigma_u^2,n,m);
v = random('Normal',0,1,n,m);
z = u./(abs(v).^(1/beta));
end
  • asked a question related to Optimization Algorithms
Question
3 answers
Backtracking Search Optimization Algorithm (BSA) is one of the recent meta-heuristic algorithms. In spite of its success, it might have some drawbacks. What are the main drawbacks of BSA?
Relevant answer
Answer
As with any metaheuristic algorithm, BSA has advantages and disadvantages. Based on your request, the drawbacks of BSA are:
it easily falls into local optima, it lacks learning from the optimal individual, and its control parameter F is difficult to adjust.
  • asked a question related to Optimization Algorithms
Question
5 answers
Can anyone help me with the Collective Animal Behavior (CAB) optimization algorithm in MATLAB?
Relevant answer
Answer
Thank you all for your valuable comments.
  • asked a question related to Optimization Algorithms
Question
4 answers
Hi, I want to know if there is a way to combine various ensembles using multi-objective optimization algorithms in MATLAB. Can somebody please point me in the right direction?
Thank you in advance.
Relevant answer
Answer
I know Python:
Random Multimodel Deep Learning (RMDL): a new ensemble, deep learning approach for classification. Deep learning models have achieved state-of-the-art results across many domains. RMDL addresses the problem of finding the best deep learning structure and architecture while simultaneously improving robustness and accuracy through ensembles of deep learning architectures. RMDL can accept a variety of data types as input, including text, video, images, and symbolic data.
  • asked a question related to Optimization Algorithms
Question
8 answers
Please refer to this paper to see the problems on which they have performed the test.
Relevant answer
Answer
Dear Amit,
I suggest you see the links and attached files in the topic.
-Solving Rotated Multi-objective Optimization Problems Using ...
- An improved NSGA-III procedure for... (PDF Download Available)
- U-NSGA-III - MSU College of Engineering - Michigan State University
https://www.egr.msu.edu/~kdeb/papers/c2014022.pdf -Multi-Objective Test Problems, Linkages, and Evolutionary ...
Best regards
  • asked a question related to Optimization Algorithms
Question
3 answers
How can a recommendation system be implemented with an optimization algorithm like the ant colony algorithm?
Relevant answer
Answer
Dear M. Sandeep,
I suggest you see the links and attached files in the topic.
-Nature Inspired Recommender Algorithms for ... - Semantic Scholar
https://pdfs.semanticscholar.org/.../23dc564fd88e5f813b8bf96c4c... -A Semantic Recommender System Based on Ant Colony ... - SERSC
- Enhancing Collaborative Filtering Recommendation by ... - Scientific.net
- Recommender System Based on Collaborative Behavior of Ants ...
- Comparative analysis of a recommender system based on ant colony ...
- An Improved Ant Colony Optimization Algorithm for Recommendation ...
Best regards
  • asked a question related to Optimization Algorithms
Question
5 answers
I'm trying to optimize a simple 2D truss structure using the optimization solver.
The objective function is the total strain energy of the structure (mod1.truss.Ws_tot) and the constraint is the total weight (mod1.mass1.mass).
The control variables are a set of three length parameters for the positions of the truss members, plus the cross-section area.
I read everything available in the COMSOL documentation about optimization, but I still couldn't figure out how to apply derivative-based optimization algorithms to this model.
I think the problem is that my objective and constraint are only indirectly related to the control variables, but I don't know how to connect them in the right way.
  • asked a question related to Optimization Algorithms
Question
1 answer
1- In NEB, does the force act on each image or on each atom in the image?
2- If the force acts on each atom, is it the same for all atoms in each image?
3- Is the interpolation between images linear? If yes, how can we tell that the path is the best one when there are different paths between the initial and final images? (Because different MEPs exist between the initial and final images.)
4- How does the optimization algorithm determine the best path? Is there any criterion?
Relevant answer
Answer
Dear Maedeh Zahedifar,
I too have some similar doubts about NEB. If you become clear about NEB, can you please answer your own questions here, so that it will help me and others to understand better?
  • asked a question related to Optimization Algorithms
Question
5 answers
Hi,
Can anyone explain to me what velocity is in PSO and how to calculate it? An explanation with a small example would be appreciated!
Regards
Muddsair
Relevant answer
Answer
Dear Muddsair Sharif,
The "velocity equation" differs between optimization algorithms, but the concept is similar for all.
Consider the following example.
I ask 10 people (including you) to find the x value that produces the maximum y:
y=2*x^3-3*x^2-12*x+4 , -50<=x<=50
All 10 students give 10 different x values, and each x value has one y value.
Next, I give them one more chance (i.e. a second iteration) to give a better answer. Now all 10 students will give different x values. The change in x from the 1st attempt to the 2nd attempt is called the velocity increment.
I.e., initially you said x is 5; in the second attempt, you change your x value toward the best one after seeing the other students' y values for their x values.
(The student whose x produced the best value of y among all 10 students may be considered the leader.)
In the second attempt, assume that you say x is 8 (partly at random), after considering the other students' x values; you have changed your x value from 5 to 8, i.e. the velocity increment is +3. This "magical" way of producing a velocity increment is written as an equation in every PSO program, where your own best and the others' (global) best are combined with some randomness.
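A hypothetical Python sketch of exactly this idea, using the example function above: each particle's velocity update blends its previous velocity (inertia), a pull toward its personal best (the "self best") and a pull toward the swarm's best (the "leader"), each scaled by a random factor; the new position is simply x + v. The particular weights below are common defaults, not anything prescribed.
# Minimal 1-D PSO sketch maximising y = 2x^3 - 3x^2 - 12x + 4 on [-50, 50].
import numpy as np

def f(x):
    return 2 * x**3 - 3 * x**2 - 12 * x + 4

rng = np.random.default_rng(0)
n, iters = 10, 100
w, c1, c2 = 0.7, 1.5, 1.5                    # inertia, cognitive, social weights

x = rng.uniform(-50, 50, n)                  # positions of the 10 "students"
v = np.zeros(n)
pbest, pbest_f = x.copy(), f(x)              # personal bests
gbest = pbest[np.argmax(pbest_f)]            # global best (the "leader")

for _ in range(iters):
    r1, r2 = rng.random(n), rng.random(n)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
    x = np.clip(x + v, -50, 50)
    fx = f(x)
    better = fx > pbest_f
    pbest[better], pbest_f[better] = x[better], fx[better]
    gbest = pbest[np.argmax(pbest_f)]

print("best x:", gbest, "best y:", f(gbest))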
  • asked a question related to Optimization Algorithms
Question
2 answers
Hello
I would like to ask about chaotic optimization algorithms. Why do most chaotic optimization algorithms use an initial value of the chaotic map equal to 0.7?
Relevant answer
Answer
What do you mean by chaotic optimization algorithms?
  • asked a question related to Optimization Algorithms
Question
4 answers
Hi there.
Recently I have been focusing on the link scheduling problem in wireless sensor networks, especially in IEEE 802.15.4e Time Slotted Channel Hopping (TSCH) networks. The key to the scheduling is deciding the order of the links; different orders result in different performance, such as average delay time.
The problem may be related to permutation optimization; some suggest stochastic optimization (like simulated annealing), genetic algorithms, or swarm optimization, but these methods require a high computational cost.
Any other feasible ideas?
Relevant answer
Answer
Try stochastic optimization like GA, ACO, PSO and fireworks optimization. They are not that slow; it depends on the scale of your problem.
  • asked a question related to Optimization Algorithms
Question
4 answers
I have installed the Contiki framework for my experiment. What are the best techniques to implement a query optimization algorithm?
Relevant answer
Answer
 Dear Dr. Me Strange
Thank you very much for the link. I have already accomplished that task.
  • asked a question related to Optimization Algorithms
Question
6 answers
I am using an accelerated Benders decomposition algorithm in which the lower and upper bounds converge before the second iteration; is this normal?
As shown in the attached file, the lower and upper bounds converge after iteration 1. What happened? In fact, when I use classic Benders decomposition, the LB and UB converge at iteration 2, even for large problem sizes. I have tried increasing the size of my problem, but classic BD still converges in the second iteration, so is it worth using accelerated BD in this case?
Relevant answer
Answer
I think something is wrong when the upper bound becomes smaller than the lower bound, as can be seen in the attached figure.
  • asked a question related to Optimization Algorithms
Question
6 answers
Under which criteria do the SSO and DE algorithms fall?
Relevant answer
Answer
There are many papers that discuss the differences between optimization algorithms, for example, check the following paper:
  • asked a question related to Optimization Algorithms
Question
6 answers
Can anyone suggest some references (preferably papers or articles) that discuss the sensitivity of computational intelligence optimization algorithms, more specifically soft computing techniques, to the initial solution?
It seems that, regardless of the type of technique, e.g. evolutionary, swarm, network-based, etc., the quality of the ultimate solution of some techniques is affected by the initial solutions, while others show less sensitivity. Please let me know if you have any comments, suggestions or information on this topic.
Relevant answer
Answer
you can check the following paper:
Amini, M. M., Racer, M., & Ghandforoush, P. (1998). Heuristic sensitivity analysis in a combinatoric environment: An exposition and case study. European journal of operational research, 108(3), 604-617.
  • asked a question related to Optimization Algorithms
Question
3 answers
Scenario: Randomly deployed sensor nodes with finite energy source each sensing some information from close surrounding and transmitting it to base station for further processing.
Problem formulation: Select some node as a cluster head such that they are energy rich and well distributed in the field. Well distributed so that they minimize the inter-cluster distance and energy rich so that they can be Cluster head for a long duration of time (highest energy node should be selected as cluster head in order to reduce the frequency of re-clustering because if the energy of cluster head goes below a threshold we will change the cluster head ).
I need a centralized algorithm which will be executed on Base Station and given the current location and energy state of all nodes it will select some node as cluster head, such that the cluster heads are well distributed and energy rich. The algorithm should balance both the factor in such a way that the energy of nodes must be saved up to the maximum possible level.
Relevant answer
Answer
Usually the placement of the cluster head is also very crucial, along with its energy content, since if it is placed in a hard-to-reach spot, say at the edge or far from the base station, we waste energy trying to send data to the cluster head. Hence a combination of placement and energy should be considered while selecting a cluster head.
  • asked a question related to Optimization Algorithms
Question
3 answers
In the SIM reconstruction algorithm it is necessary to separate the measurement results, but the separation needs to know the exact translation phase of the illumination stripes. I wrote a phase calibration procedure according to the method in the literature, but the calibration results are not accurate. I would like to know whether it is really necessary to calibrate the phase. If so, what needs attention in the calibration algorithm? And how can the measurement error superimposed on the results be removed?
Relevant answer
Answer
Dear Luo,
I suggest some links and attached files on this topic.
-SIMcheck: a Toolbox for Successful Super-resolution Structured ...
-QSIM: quantitative structured illumination microscopy image ... - NCBI
https://www.ncbi.nlm.nih.gov › ... › PubMed Central (PMC)
-Optimal 2D-SIM reconstruction by two filtering steps with Richardson ...
www.nature.com › Scientific Reports › Articles
-System Calibration — SIMcheck 1.0.0 documentation
-Electronic Microwave Imaging with Planar Multistatic Arrays
Best regards
  • asked a question related to Optimization Algorithms
Question
10 answers
Hello everyone,
I am trying to remove this absolute value operator for CPLEX:
Zi = (1/2) | Σ_{j=1}^{J} (-1)^j (x_{i,j} - x_{i+1,j}) |
where Zi has to be in {0,1}.
But if I remove the absolute value, the values I get are in {0, 1, -1}.
x_{i,j} is a binary decision variable.
How can I replace the absolute value operator so that the expression becomes a valid constraint for mixed-integer linear programming?
Please find attached a Word file to clearly understand the equation.
Any suggestions, please?
Thank you.
Relevant answer
Answer
Min | v^T x | is equivalent to
Min z
z >= v^T x
z >= - v^T x
where x is a vector of decision variables, v is a vector and z is a new continuous variable. 
As Ricardo said, this only works when you are minimising.
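A hypothetical sketch of that linearisation in PuLP (the same two constraints carry over directly to a CPLEX model): minimise z subject to z >= expr and z >= -expr, so z takes the value |expr| at the optimum. The J = 4 binary variables and the 0.5 factor mirror the expression in the question, but the surrounding model is made up, and, as noted above, the trick is only valid when z is being pushed down by the objective.
# Minimal sketch of |expr| linearisation in a MILP (PuLP, default CBC solver).
from pulp import LpProblem, LpVariable, LpMinimize, lpSum

J = 4                                        # number of j terms (made up)
x_i  = [LpVariable(f"x_i_{j}",  cat="Binary") for j in range(1, J + 1)]
x_i1 = [LpVariable(f"x_i1_{j}", cat="Binary") for j in range(1, J + 1)]
z = LpVariable("z", lowBound=0)              # equals |expr| at the optimum

expr = lpSum((-1) ** j * (x_i[j - 1] - x_i1[j - 1]) for j in range(1, J + 1))

prob = LpProblem("abs_linearisation", LpMinimize)
prob += 0.5 * z                              # objective: minimise Zi = 0.5 * z
prob += z >= expr                            # z bounds the expression from above...
prob += z >= -expr                           # ...and its negation, so z >= |expr|
prob.solve()
print("Zi =", 0.5 * z.value())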
  • asked a question related to Optimization Algorithms
Question
9 answers
I am doing an optimization with a genetic algorithm. I have bounds and variable constraints. From the literature, I found that we have to add the constraints as a penalty function, but I could not clearly understand what a penalty function is. And how do I add a penalty function to a GA optimization?
Relevant answer
Answer
Dear Gabour Amina,
Penalty function approaches penalize candidate solutions that violate constraints or that come close to violating constraints. Assume that your constrained problem is formulated as follows:
Max   f(x)
s.t.       g(x) ≤ 0    and    h(x)=0
So, to avoid violation of constraints, infeasible solutions should be adapted into feasible solutions. To do this, a penalty function is employed to convert the constrained optimization problem into an unconstrained one. Therefore, based on the penalty function method, the above formulation is converted to the following formulation:
Max F(x) = f(x) - λ ∑ max(0, g(x))
λ represents the penalty coefficient.
I hope my explanations help you.
Regards,
Soheila.
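A hypothetical sketch of the recipe above in Python: a made-up objective with one inequality and one equality constraint is folded into a single penalized fitness F(x), which any GA (here a crude random search stands in for it) can then optimise without knowing about the constraints.
# Minimal sketch: penalized fitness for a made-up constrained maximisation problem.
import numpy as np

def f(x):                      # example objective (to maximise)
    return -(x[0] - 1) ** 2 - (x[1] - 2) ** 2

def g(x):                      # inequality constraint g(x) <= 0
    return x[0] + x[1] - 2.0

def h(x):                      # equality constraint h(x) = 0
    return x[0] - x[1]

def penalized(x, lam=1000.0):
    # Subtract a penalty for constraint violation; feasible points are untouched.
    return f(x) - lam * max(0.0, g(x)) - lam * h(x) ** 2

# Plug `penalized` in as the GA fitness; a crude random search stands in here.
rng = np.random.default_rng(0)
cands = rng.uniform(-3, 3, size=(20000, 2))
best = max(cands, key=penalized)
print("best x:", best, "F:", penalized(best), "g:", g(best), "h:", h(best))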
  • asked a question related to Optimization Algorithms
Question
4 answers
Hello All
I'm using Genetic algorithm to solve multi objective problem optimization. Like any optimization algorithm the solution should be the minimum of the objective function value, but if I want to add a constrain on the minimum value. Assume I have two objectives Y1 and Y2, I want to make the stopping criteria like this
*if Y1<z and Y2 is the minimum value
stop*
Is it possible?
Thanks in advance
Regards
Mansour Alramlawi
Relevant answer
Answer
Dear Mansour,
I think it is possible; to make sure it works, please review the following information:
Please see similar questions posed through ResearchGate with the following links:
-Multiobjective Optimization - ResearchGate
-Can you help me in solving an objective function in optimization ...
Also see links and attached files in topics.
-Constrained Minimization Using the Genetic Algorithm - MATLAB ...
https://www.mathworks.com › Examples Home › Genetic Algorithm
-Constraint-Handling in Genetic Algorithms Through the Use of ...
- Multi-Objective Optimization using Genetic Algorithms - DiVA
Best regards
  • asked a question related to Optimization Algorithms
Question
10 answers
Papers are still appearing that use the weighted-sum approach for solving bi-objective problems, even though Pareto-optimal algorithms such as NSGA-II are popular.
How can the concept of the Pareto front be explained using the weighted-sum approach? Is it possible to draw Pareto front curves using the weighted-sum approach?
Relevant answer
Answer
Good morning,
You could find some answers to your interesting question in these two papers:
1) Fonseca, Carlos M., and Peter J. Fleming. "An overview of evolutionary algorithms in multiobjective optimization." Evolutionary computation 3.1 (1995): 1-16.
2) Khan, Salman A., and Shafiqur Rehman. "Iterative non-deterministic algorithms in on-shore wind farm design: A brief survey." Renewable and Sustainable Energy Reviews 19 (2013): 370-384.
They underline four main drawbacks:
1. The optimal solution may not be acceptable, either because the function used excludes aspects of the problem or because of an inappropriate setting of the coefficients (Fonseca and Fleming, 1995).
2. If the solution is not acceptable, new runs of the optimizer are required.
3. A small change in the weights may result in big changes in the objective vectors (Khan and Rehman, 2013).
4. Significantly different weights may produce nearly similar objective vectors (Khan and Rehman, 2013).
(A small sketch of sweeping the weights on a toy bi-objective problem follows below.)
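As a hypothetical illustration of drawing a front this way: sweep the weight w over [0, 1], solve the scalarised problem for each w, and plot the resulting objective pairs. On a convex bi-objective toy problem (the one below is made up) this traces the Pareto curve; on non-convex fronts, whole regions are unreachable by any choice of weights, which is one more reason Pareto-based algorithms such as NSGA-II are preferred there.
# Minimal sketch: weighted-sum sweep on a convex toy problem.
import numpy as np
from scipy.optimize import minimize_scalar
import matplotlib.pyplot as plt

f1 = lambda x: x ** 2                 # both objectives are minimised
f2 = lambda x: (x - 2) ** 2

front = []
for w in np.linspace(0.0, 1.0, 41):
    res = minimize_scalar(lambda x: w * f1(x) + (1 - w) * f2(x),
                          bounds=(-1, 3), method="bounded")
    front.append((f1(res.x), f2(res.x)))

front = np.array(front)
plt.plot(front[:, 0], front[:, 1], "o-")
plt.xlabel("f1")
plt.ylabel("f2")
plt.title("Weighted-sum approximation of the Pareto front")
plt.show()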
  • asked a question related to Optimization Algorithms
Question
6 answers
I require an explanation, supported with steps, of how to evaluate the RSA and AES algorithms in cloud computing using Google's engine or any cloud engine: the procedure to encrypt the data (a text file) with the algorithms, upload it to and download it from the Google engine, and then the methods to evaluate the performance of the algorithms on different metrics.
Relevant answer
Answer
The following paper might help you in this regard, sir:
  • asked a question related to Optimization Algorithms
Question
8 answers
Is there any methodology to find proper parameter settings for a given metaheuristic algorithm, e.g. the Firefly Algorithm or Cuckoo Search? Is this an open issue in optimization? Are extensive experimentation, measurements and intuition the only way to figure out which are the best settings?
Relevant answer
Answer
A good way to optimize/adjust parameters for any metaheuristic/heuristic algorithm is to use Design of Experiments (DOE). You must define ranges for each parameter where you think the best values lie. Among the advantages of this technique, I can point out that your experiments are run systematically, so you avoid wasting time on meaningless experiments. You can also identify trends in the parameter values and find out whether the best value is inside your range or whether you need further experiments. Another important feature is that you can analyse relations between parameters, so you are able to find the best combination of parameters instead of only isolated values. My personal advice is to start with a reasonably wide range, to get an idea of the behaviour of your algorithm for different settings, and then, based on these results, perform a fine tune. It is important to mention that you are fitting your algorithm to the specific problem that you are evaluating.
  • asked a question related to Optimization Algorithms
Question
5 answers
I am using PSO for document clustering. I implemented the simple PSO algorithm, but I have a problem implementing the objective function of PSO for document clustering, and then I want to extend the PSO algorithm to multi-view document clustering. Can anybody help me with a simple objective function of PSO for document clustering?
  • asked a question related to Optimization Algorithms
Question
4 answers
I am looking for a method to compare the dataset represented in blue with the one in red, and I need to extract one single value from this comparison. The idea is to compare datasets generated with different combinations of parameters to an 'optimal' dataset and get a 'score' for each one, so I can see which combination of parameters is the closest to the optimal model. I came up with a few options like the Fréchet distance, the Hausdorff distance and MSE, but I don't know which one would work best for me.
Does anybody have any suggestion or another method that could work?
Thank you.
Relevant answer
Answer
You already said:
"I came up with a few options like Fréchet distance, Hausdorff distance and MSE but I don't know which one would work best for me."
You have all the basic tools needed. Which one, and which variant, might work best for you depends on the application: classification, recognition, etc. I think the best way is to create a setup where you can run multiple experiments using these different distances and improve based on your findings (or stop when it's "good enough").