Science topic
Evolutionary Algorithms - Science topic
Explore the latest questions and answers in Evolutionary Algorithms, and find Evolutionary Algorithms experts.
Questions related to Evolutionary Algorithms
Evolutionary algorithm, convergence curve, iteration count, maximum function evaluations, minimization objective.
I am applying a multi-objective algorithm, specifically NSGA-II. In some instances of my problem, I obtain a Pareto front, but in others, I only get a single solution or two solutions. Does this mean that my objectives are not conflicting? What method can I apply to verify the conflict between two objectives? I’m not sure what to do since sometimes I get a Pareto front, but in many cases, I don’t.
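A simple sanity check (a sketch, not a formal test): sample the decision space, evaluate both objectives on the sample, and look at the correlation between the two objective-value vectors. A clearly negative correlation suggests the objectives conflict over the sampled region; a strong positive correlation suggests they are largely aligned, which would explain a degenerate front. The `conflict_check` helper and its threshold below are illustrative assumptions, not a standard method:

```python
import random
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def conflict_check(f1, f2, sample, threshold=-0.3):
    """Evaluate both objectives on a sample of solutions; a clearly
    negative correlation suggests the objectives conflict."""
    v1 = [f1(x) for x in sample]
    v2 = [f2(x) for x in sample]
    r = pearson(v1, v2)
    return r, r < threshold

# Toy example: f1 = x^2 increases on [0, 2] while f2 = (x - 2)^2 decreases,
# so the two objectives conflict there.
random.seed(0)
sample = [random.uniform(0, 2) for _ in range(200)]
r, conflicting = conflict_check(lambda x: x ** 2, lambda x: (x - 2) ** 2, sample)
```

If the correlation is strongly positive on every sampled region, a single solution (or a very short front) is the expected NSGA-II outcome rather than a bug.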
Hi, I want to implement evolutionary algorithms in the Ryu SDN controller with Mininet, and I have some challenges. How can I run a large-scale topology with one SDN controller? Another question: I need to generate a realistic dataset (for example, for 12,000 nodes) with real CPU, RAM, and bandwidth values. Do you have any suggestions for this dataset? How can I obtain a dataset with real values?
Thanks...
International Journal of Complexity in Applied Science and Technology. Topics covered include:
- Evolutionary algorithms
- Particle swarm optimisation
- Single-/multi-/many-objective optimisation
- Constrained optimisation
- Multi-modal optimisation
- Dynamic optimisation
- Data-driven optimisation
- Large-scale optimisation
- Engineering design optimisation
- Applications associated with intelligent computation
- Machine learning
- Big data
I am working on the flexible job shop scheduling problem, which I want to solve using a hybrid algorithm such as VNS-based NSGA-II or TS/SA + NSGA. Can I use pymoo to implement this problem? Pymoo has a specific structure for calling its built-in algorithms, but in the case of customization that combines two algorithms, how can I use the pymoo framework?
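For what it is worth, the hybrid loop itself can be sketched independently of any framework (recent pymoo versions also let you drive an algorithm step by step in your own loop, which is one way to interleave a second method; check the pymoo docs for the current interface). Below is a minimal, framework-free memetic sketch on a toy single-objective problem: a generational GA loop where each offspring is refined by a simple local search standing in for VNS/TS/SA. All names and parameter values are illustrative assumptions:

```python
import random

def sphere(x):
    """Toy objective to minimize."""
    return sum(xi * xi for xi in x)

def local_search(x, f, step=0.1, iters=20):
    """Simple hill climber used as a stand-in for the embedded VNS/TS/SA."""
    best, fbest = x[:], f(x)
    for _ in range(iters):
        cand = [xi + random.uniform(-step, step) for xi in best]
        fc = f(cand)
        if fc < fbest:
            best, fbest = cand, fc
    return best

def memetic_ga(f, dim=3, pop_size=20, gens=30, seed=1):
    random.seed(seed)
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)                      # elitist: keep the better half
        offspring = []
        for _ in range(pop_size // 2):
            a, b = random.sample(pop[:pop_size // 2], 2)
            child = [(ai + bi) / 2 + random.gauss(0, 0.1)
                     for ai, bi in zip(a, b)]  # crossover + mutation
            offspring.append(child)
        # hybrid step: refine every offspring with the local search
        offspring = [local_search(c, f) for c in offspring]
        pop = pop[:pop_size // 2] + offspring
    return min(pop, key=f)

best = memetic_ga(sphere)
```

In the multi-objective case the same structure applies, with NSGA-II's survival (non-dominated sorting + crowding) replacing the sort, and the local search applied to selected offspring.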
2024 3rd International Conference on Artificial Intelligence, Internet and Digital Economy (ICAID 2024) will be held in Bangkok, Thailand on April 19-21, 2024.
Important Dates:
Full Paper Submission Date: March 19, 2024
Registration Deadline: March 29, 2024
Final Paper Submission Date: April 09, 2024
Conference Dates: April 19-21, 2024
---Call For Papers---
The topics of interest for submission include, but are not limited to:
- The development of artificial intelligence (AI tools, artificial intelligence and evolutionary algorithms, intelligent user interfaces, intelligent information fusion, etc.) and their applications in economic and social development.
- The development of mobile Internet, artificial intelligence, big data and other technologies and their application in economic and social development.
- Artificial intelligence and digital economy development frontier in the Internet era and typical cases.
- Technology, methods and applications of the integration and development of digital economy and artificial intelligence.
- Other topics related to artificial intelligence, Internet and digital economy can be contributed.
All accepted papers will be published in Atlantis Highlights in Intelligent Systems (AHIS, 2589-4919) and submitted to EI Compendex and Scopus for indexing.
For More Details please visit:
I have 'N' inputs (which correspond to temperatures) used to calculate a specific output parameter, so I obtain 'N' output values.
However, my goal is to select an optimum value from all the output data and use it in another calculation.
'N' input data --> output parameter calculation --> identification of an optimized output parameter --> use that value in another calculation.
How can I find an optimal value among the 'N' output data? Can we employ any algorithm or process?
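If "optimum" simply means the output closest to some target (or the smallest/largest value), no evolutionary machinery is needed: Python's built-in `min`/`max` with a key function is enough. A toy sketch with assumed data:

```python
# Hypothetical example: N temperature inputs produce N output values;
# pick the output closest to a desired target for the next calculation.
inputs = [20, 40, 60, 80, 100]           # temperatures (assumed data)
outputs = [0.8, 1.5, 2.9, 4.1, 6.0]      # computed output parameter

target = 3.0
best_index = min(range(len(outputs)), key=lambda i: abs(outputs[i] - target))
best_output = outputs[best_index]        # 2.9, from input 60
```

An optimization algorithm only becomes necessary when the input space is too large to enumerate or the outputs are expensive to compute.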
To start with, I want to believe that for a modification of an algorithm such as an evolutionary algorithm to be considered novel, it has to offer some benefit (like automation) and should not perform worse than the manually designed algorithm.
Based on the above, if I want to implement an evolutionary algorithm that designs another evolutionary algorithm, I would not know beforehand whether the proposed method will perform as well as, or even better than, the manual method or some other similar method.
I can only tell how the method performs after completing the implementation.
Is there a way to know beforehand that a proposal will not bring some benefit, before putting energy and time into developing it?
Please, what do you advise in this situation from your experience?
Thank you in anticipation of your kind response.
The set of optimal solutions obtained in the form of a Pareto front includes all equally good trade-off solutions. But I was wondering whether these solutions are global optima, local optima, or a mix of both. In other words, does an evolutionary algorithm like NSGA-II guarantee globally optimal solutions?
Thank you in anticipation.
These datasets will be used for data classification and predicting new information
How should the population size and the maximum number of iterations be initialized in evolutionary algorithms? Is there any hard and fast rule for that?
Researchers are using different algorithms such as GA, PSO, and other variants of evolutionary algorithms. My question is: why do so, when the Lagrange method itself can provide a solution?
Although the genetic algorithm is very helpful for finding a good solution, it is very time-consuming. I recently read an article in which the authors used the genetic algorithm for pruning a neural network on a very small dataset. They pruned a third of their network and reduced the size of the model from 90 MB to 60 MB. Their pruning also decreased the inference time of their model by around 0.5 seconds. Although, unlike other methods, pruning with the genetic algorithm does not deteriorate the model's performance, finding redundant parts of the network with the genetic algorithm is an overhead. I wonder why they think their method is useful and why people still use the genetic algorithm.
Hello, I want to solve a bilevel optimization problem using evolutionary algorithms for my research project. Can somebody tell me where can I find code for such problems?
Hello, I've been focusing on an evolutionary algorithm based on prediction. It has good exploration ability, but unfortunately its exploitation ability is very poor. What strategies can improve its exploitation ability?
To solve multi-objective problems using the lexicographic method.
Can we use for each step different algorithms? For instance, when we have two objectives to be minimized, can we use firstly the Genetic Algorithm to optimize the first objective, then use Particle swarm optimization Algorithm to solve the second objective while taking the result of the Genetic Algorithm as a constraint?
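The lexicographic scheme itself does not care which optimizer handles each stage, so mixing GA and PSO is possible in principle: stage 1 minimizes f1 alone, and stage 2 minimizes f2 subject to f1(x) <= f1* + eps, where f1* is the stage-1 optimum and eps is a small relaxation. A framework-free sketch using a plain random search for both stages (purely illustrative; substitute GA and PSO as desired, all names and values are assumptions):

```python
import random

def random_search(f, bounds, feasible=lambda x: True, iters=5000, seed=0):
    """Minimal random-search optimizer used as a stand-in for GA/PSO."""
    rng = random.Random(seed)
    best, fbest = None, float("inf")
    for _ in range(iters):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        if feasible(x) and f(x) < fbest:
            best, fbest = x, f(x)
    return best, fbest

bounds = [(-5, 5)]
f1 = lambda x: (x[0] - 1) ** 2          # first-priority objective
f2 = lambda x: (x[0] + 1) ** 2          # second-priority objective

# Stage 1: minimize f1 alone (a GA would plug in here).
_, f1_star = random_search(f1, bounds)

# Stage 2: minimize f2 subject to f1(x) <= f1* + eps (PSO could plug in here).
eps = 0.05
x2, _ = random_search(f2, bounds,
                      feasible=lambda x: f1(x) <= f1_star + eps, seed=1)
```

The eps relaxation matters: with stochastic optimizers, an exact equality constraint f1(x) = f1* is almost never satisfiable, so the stage-1 result is handed to stage 2 as an inequality.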
I need matlab code to reproduce the attached research paper
Dear Researchers,
I'm looking for the implementation of advanced metaheuristic algorithms in Python. I would be incredibly thankful if someone could assist me with the execution of evolutionary algorithms or can provide me with the necessary codes in Python.
Thank you very much.
Hello
Can anyone help me with the meaning of chaotic maps and opposition-based learning for population initialization, and the difference between them? Are these strategies used for population initialization in all evolutionary algorithms, and when are they efficient?
Thanks so much.
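As a rough sketch of the two ideas (names and parameter values are illustrative): a chaotic-map initializer iterates a map such as the logistic map and scales the iterates into the search domain, while opposition-based learning (OBL) generates each random individual together with its "opposite" lo + hi - x and keeps the better half of the union. They are independent strategies, and both can be used with most population-based algorithms:

```python
import random

def logistic_init(pop_size, dim, lo, hi, r=4.0, seed=0.7):
    """Chaotic (logistic-map) initialization: iterate c <- r*c*(1-c)
    and scale each chaotic value in (0, 1) into [lo, hi]."""
    pop, c = [], seed
    for _ in range(pop_size):
        ind = []
        for _ in range(dim):
            c = r * c * (1.0 - c)
            ind.append(lo + c * (hi - lo))
        pop.append(ind)
    return pop

def opposition_init(pop_size, dim, lo, hi, fitness, rng=random.Random(1)):
    """Opposition-based learning: for each random individual x, also form
    its opposite lo + hi - x, then keep the better half of the union
    (assumes minimization)."""
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    opposites = [[lo + hi - xi for xi in ind] for ind in pop]
    merged = sorted(pop + opposites, key=fitness)
    return merged[:pop_size]

sphere = lambda x: sum(xi * xi for xi in x)
chaotic_pop = logistic_init(10, 2, -5.0, 5.0)
obl_pop = opposition_init(10, 2, -5.0, 5.0, sphere)
```

Chaotic maps aim for a well-spread, non-repeating coverage of the domain; OBL aims for a better-than-random starting fitness by hedging each sample against its mirror point.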
Population size = N
f_i represents the fitness of the i-th candidate solution
f_max = max {f_i : 1 <= i <= N}
f_min = min {f_i : 1 <= i <= N}
Find the j-th solution whose fitness value is the mean of all fitness values.
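Read literally, a solution with exactly the mean fitness rarely exists in the population, so implementations usually take the solution whose fitness is closest to the mean. A small sketch with assumed values:

```python
# With fitness values f_i for N candidates, locate the solution whose
# fitness is closest to the population mean (an exact match rarely exists,
# so "closest" is the practical reading). Values below are made up.
fitness = [3.0, 7.5, 1.2, 5.3, 9.0]     # f_i, assumed example values
N = len(fitness)

f_max = max(fitness)
f_min = min(fitness)
f_mean = sum(fitness) / N               # 5.2 for this data

# index j of the solution whose fitness is nearest to the mean
j = min(range(N), key=lambda i: abs(fitness[i] - f_mean))
```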
Dear Experts,
I am looking for a solution for a convex optimization problem (linear programming) and I am thinking about evolutionary algorithms. Do you think it would be helpful?
Hello RG Community,
As we have ZDT, DTLZ, WFG test functions for continuous multiobjective optimization problems, are there any test functions for mixed integer multiobjective optimization problems?
Thanks!
Dear All,
I have successfully applied NSGA-II on a multi-objective optimization problem.
While observing the results during the optimization process, I noticed that some variables (genes) reach very good values that match my objectives, while others do not. However, during the optimization, the good genes are frequently changed and swapped to undesired values (due to the genetic operations, mutation and crossover) until the algorithm reaches a good local optimum or hits a stopping condition.
My question is this:
Can I exclude some genes from the optimization process once they have already reached my condition, and finally combine them with the remaining genes in the final chromosome?
I want to compare metaheuristics on the optimization of Lennard-Jones clusters. There are many papers available that optimize Lennard-Jones clusters. Unfortunately, none of them provides the upper and lower bounds of the search space. For a fair comparison, all metaheuristics should search within the same bounds. I found the global minima here: http://doye.chem.ox.ac.uk/jon/structures/LJ/tables.150.html but the search space is not defined.
Can anyone please tell me what are the recommended upper and lower bounds of the search space?
Why does Particle Swarm Optimization work better for this classification problem?
Can anyone give me strong reasons behind it?
Thanks in advance.
Hi everyone,
We have implemented four metaheuristic algorithms to solve an optimization problem. Each algorithm is repeated 30 times for an instance of the problem, and we have stored the best objective function values for 30 independent runs for each algorithm.
We want to compare these four algorithms. Apart from maximum, minimum, average, and standard deviation, is there any statistical measure for comparison?
Alternatively, we have four independent samples each of size 30, and we want to test the null hypothesis that the means (or, medians) of these four samples are equal against an alternative hypothesis that they are not. What kind of statistical test should we perform?
Regards,
Soumen Atta
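A common choice for k independent samples is the non-parametric Kruskal-Wallis test (followed by pairwise post-hoc tests with a multiple-comparison correction, e.g. Holm); the Friedman test applies instead when runs are paired across algorithms. As an illustration, here is a minimal Kruskal-Wallis H statistic computed from scratch (average ranks for ties, no tie-correction factor, three made-up samples rather than your four of size 30):

```python
def kruskal_wallis_h(samples):
    """Kruskal-Wallis H statistic for k independent samples
    (ties receive average ranks; tie correction omitted for brevity)."""
    pooled = sorted((v, g) for g, s in enumerate(samples) for v in s)
    n_total = len(pooled)
    rank_sums = [0.0] * len(samples)
    i = 0
    while i < n_total:
        j = i
        while j < n_total and pooled[j][0] == pooled[i][0]:
            j += 1                           # [i, j) is a block of ties
        avg_rank = (i + 1 + j) / 2.0         # mean of 1-based ranks i+1 .. j
        for k in range(i, j):
            rank_sums[pooled[k][1]] += avg_rank
        i = j
    return 12.0 / (n_total * (n_total + 1)) * sum(
        r * r / len(s) for r, s in zip(rank_sums, samples)
    ) - 3 * (n_total + 1)

# Three hypothetical algorithms' best objective values over repeated runs:
a = [10.1, 10.4, 10.2, 10.3]
b = [12.0, 11.8, 12.1, 11.9]
c = [10.2, 10.5, 10.1, 10.4]
h = kruskal_wallis_h([a, b, c])
# Compare h against the chi-squared critical value with k-1 degrees of freedom.
```

For this toy data h is about 7.47, which exceeds 5.99, the 5% chi-squared critical value for k - 1 = 2 degrees of freedom, so equal medians would be rejected here. In practice `scipy.stats.kruskal` (or `friedmanchisquare` for paired runs) returns the p-value directly.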
I used uniform crossover to solve the eight queens problem. It is taking more than 3 hours to get the output. Is there any way to reduce the running time, or a better crossover for the eight queens problem? Any answer will be appreciated.
Thank you
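One common speedup is to switch to permutation encoding: one queen per column, with the rows forming a permutation, so row and column conflicts are impossible by construction and the fitness only counts diagonal conflicts. The search space shrinks from 8^8 to 8!, and a swap mutation preserves feasibility without repair (uniform crossover does not preserve permutations, which is one reason it performs poorly here; order-preserving operators such as PMX do). A sketch:

```python
import random

def diagonal_conflicts(perm):
    """Fitness for permutation encoding: queen i sits at (col i, row perm[i]);
    rows and columns are conflict-free by construction, so only diagonal
    conflicts are counted. Zero means a valid solution."""
    n = len(perm)
    return sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if abs(perm[i] - perm[j]) == j - i
    )

def swap_mutation(perm, rng):
    """Swap two rows; keeps the permutation property, so no repair is needed."""
    child = perm[:]
    i, j = rng.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    return child

# A known 8-queens solution: zero diagonal conflicts.
solution = [0, 4, 7, 5, 2, 6, 1, 3]
```

With this encoding even a plain GA (or a simple min-conflicts hill climber) typically solves 8 queens in well under a second.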
For the sphere function we already know that the optimal point is the origin; this is obtained by analytical optimization procedures, and as we run many-/multi-objective evolutionary algorithms, the results are compared with the true PF.
Now I want to know how to compute the true PF of normalized test functions such as DTLZ, ZDT, etc.
We are trying to breed some parameter configurations controlling the search of a deduction system. Some parameters are integers, some are reals, some are boolean, and the most complex one is a variable length list of different elements, where each of the elements has its own (smallish) sub-set of parameters. Since we have Python competence and Python is already used in the project, that looks like a good fit. I've found DEAP and PyEvolve as already existing frameworks for genetic algorithms. Does anybody have experience with these and can tell me about the strengths and weaknesses of the two (or any other appropriate) systems?
If it helps: In our application, determining the fitness of the individual is likely the most expensive part - it will certainly be minutes per generation, and if we are not careful and/or rich (to buy compute clusters), could be hours per individual. So time taken by the rest of the GA is probably not a major factor - think "several generations per day", not "several generations per second".
There is an idea to design a new algorithm for the purpose of improving the results of software operations in the fields of communications, computers, biomedicine, machine learning, renewable energy, signal and image processing, and others.
So what are the most important ways to test the performance of smart optimization algorithms in general?
Hi,
Is there any Python package that implements the epsilon-constraint method for solving multiobjective optimization problems using evolutionary algorithms?
A genetic algorithm can be implemented using the GA tool in MATLAB. Please suggest a tool to implement other evolutionary or nature-inspired algorithms.
I'm looking to compare an evolutionary algorithm to a state-of-the-art algorithm using a set of test functions. I'd like to use modern test functions instead of those defined in Deb (2002) work.
Various metaheuristic optimization algorithms with different sources of inspiration have been proposed in recent decades. Unlike mathematical methods, metaheuristics do not require any gradient information and do not depend on the starting point. Furthermore, they are suitable for complex, nonlinear, and non-convex search spaces, especially when near-global optimum solutions are sought with limited computational effort. However, some of these metaheuristics get trapped in a local optimum and are unable to escape. For this reason, numerous researchers focus on adding efficient mechanisms to enhance the performance of the standard versions of the metaheuristics. Some of them are addressed in the following references:
I will be grateful If anyone can help me to find other efficient mechanisms.
Thanks in advance.
If yes, could anyone provide me with some references/resources on how to compute the theoretical performance guarantee (i.e. approximation ratio) for a population-based differential evolutionary algorithm?
For my research purposes, and to compare the results with those of a swarm algorithm, I need an evolutionary algorithm, and I consider GA to be one of them.
Hi,
I just want to make sure that I understand the mechanics of NSGA-II (the non-dominated sorting genetic algorithm) for a multiobjective optimization problem, since I am not satisfied with the resources I have. I would be grateful if anyone could recommend a good paper or source to read more about NSGA-II.
Here is what I got so far:
1- Start with a random population P_0 of N individuals.
2- Generate an offspring population Q_0 of size N from P_0 using binary tournament selection, crossover, and mutation.
3- Let R_0 = P_0 U Q_0.
While (itr <= maxitr) do
4- Identify the non-dominated fronts (F_1, ..., F_j) in R_itr.
5- Create P_{itr+1} (of size N) as follows:
for i = 1:j
    if |P_{itr+1}| + |F_i| <= N
        set P_{itr+1} = P_{itr+1} U F_i
    else
        add the N - |P_{itr+1}| least crowded solutions from F_i to P_{itr+1}
    end
end
6- Generate an offspring population Q_{itr+1} from P_{itr+1} and set R_{itr+1} = P_{itr+1} U Q_{itr+1}.
7- itr = itr + 1
end (do)
My question (assuming the previous algorithm is correct): how do I generate Q_{itr+1} from P_{itr+1} in step 6?
Do I choose any two solutions from P_{itr+1} at random and mate them, or is it better to select the parents according to some condition, e.g. that those with the highest rank should mate?
Also, if you can point me to some well-written papers on NSGA-II, I would be grateful.
Thanks
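Regarding parent selection: in Deb et al.'s NSGA-II, parents are chosen by binary tournament using the crowded-comparison operator, not uniformly at random. Lower non-domination rank wins, and ties in rank are broken by larger crowding distance. A sketch (representing individuals as dicts with `rank` and `distance` keys is an assumption for illustration):

```python
import random

def crowded_compare(a, b):
    """NSGA-II crowded-comparison operator: lower front rank wins; on equal
    rank, the solution with the larger crowding distance wins."""
    if a["rank"] != b["rank"]:
        return a if a["rank"] < b["rank"] else b
    return a if a["distance"] >= b["distance"] else b

def binary_tournament(population, rng):
    """Pick two individuals at random, keep the crowded-comparison winner."""
    a, b = rng.sample(population, 2)
    return crowded_compare(a, b)

# Parents are selected by repeated tournaments, then crossed over and mutated:
rng = random.Random(42)
pop = [
    {"rank": 0, "distance": 1.0},
    {"rank": 0, "distance": 0.2},
    {"rank": 1, "distance": 3.0},
    {"rank": 2, "distance": 0.5},
]
p1 = binary_tournament(pop, rng)
p2 = binary_tournament(pop, rng)
```

So the selection pressure comes from rank and crowding, not from restricting mating to the best front only; every member of the surviving population can in principle be picked.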
In most MOEA publications, researchers will use IGD or hyper-volume as their evaluation metrics. However, in real-world problems, both nadir points and ideal points are unknown. What's worse, in many cases, the domain knowledge is unavailable. Thus, in this case, how do I compare the performance between different algorithms? For example, how do I compare the performance of different MOEAs on a real-world TSP problem without any domain knowledge?
Hi,
Ok, so I'm new to computer science and metaheuristics, and I need to implement multi-objective optimization for an environmental economics problem with real-world data.
The problem goes as this :
I have a large set of possible locations for an infrastructure within a city. Each location provides two benefits A and B, as well as costs (more benefits may be added in the future, depending on further research). Data points are unrelated, cost and benefit functions are not linear and not continuous.
I need a method for selecting a handful of locations that maximizes simultaneously A and B, under the constraint that the total costs < a budget parameter.
I went towards a genetic algorithm (GA) for this problem, as it is highly combinatorial, but most "traditional" GAs I've looked at have fixed chromosome lengths and ultimately only give out final individuals of n items. In my case, I am quite flexible on the number of selected locations, as long as the method either minimizes total costs or handles them as a constraint. As a matter of fact, it would be quite interesting to have as a final result a Pareto front of individuals of different sizes (for example: in my problem, locations in the city center have more "value" than locations in periurban areas, so a few central locations could be as Pareto-optimal as more numerous periurban locations). So I see the problem as a knapsack problem, where costs would be the "weight"; however, there can't be any repetition of items here (the same location cannot be used twice).
Is there a way to handle the cost constraint so as to make a knapsack genetic algorithm that can provide a Pareto front of individuals of heterogeneous length? I have been trying it with the DEAP library but can't really find details in its documentation.
Many thanks
Georges Farina
In the field of soft computing, many researchers have written about premature convergence. If we observe the standard deviation, can we tell whether an algorithm suffers from premature convergence? How can we determine whether an algorithm has premature convergence or not?
Bat-inspired algorithm is a metaheuristic optimization algorithm developed by Xin-She Yang in 2010. This bat algorithm is based on the echolocation behaviour of microbats with varying pulse rates of emission and loudness.
The idealization of the echolocation of microbats can be summarized as follows: Each virtual bat flies randomly with a velocity vi at position (solution) xi with a varying frequency or wavelength and loudness Ai. As it searches and finds its prey, it changes frequency, loudness and pulse emission rate r. Search is intensified by a local random walk. Selection of the best continues until certain stop criteria are met. This essentially uses a frequency-tuning technique to control the dynamic behaviour of a swarm of bats, and the balance between exploration and exploitation can be controlled by tuning algorithm-dependent parameters in bat algorithm. (Wikipedia)
What are the applications of bat algorithm? Any good optimization papers using bat algorithm? Your views are welcome! - Sundar
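As a rough illustration of the update rules described above, here is a deliberately simplified bat-algorithm sketch for minimization (constant loudness and pulse rate and a fixed local-walk step, whereas the full method decreases A_i and increases r_i over time; all parameter values are illustrative):

```python
import random

def bat_algorithm(obj, dim=2, n_bats=15, iters=100, fmin=0.0, fmax=2.0,
                  loudness=0.5, pulse_rate=0.5, seed=3):
    """Simplified bat algorithm: frequency-tuned velocities plus a local
    random walk around the current best solution."""
    rng = random.Random(seed)
    x = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_bats)]
    v = [[0.0] * dim for _ in range(n_bats)]
    best = min(x, key=obj)
    for _ in range(iters):
        for i in range(n_bats):
            beta = rng.random()
            f = fmin + (fmax - fmin) * beta            # frequency tuning
            v[i] = [vi + (xi - bi) * f
                    for vi, xi, bi in zip(v[i], x[i], best)]
            cand = [xi + vi for xi, vi in zip(x[i], v[i])]
            if rng.random() > pulse_rate:              # local walk near best
                cand = [bi + 0.1 * rng.gauss(0, 1) for bi in best]
            if obj(cand) < obj(x[i]) and rng.random() < loudness:
                x[i] = cand                            # accept improvement
            if obj(x[i]) < obj(best):
                best = x[i][:]
    return best

sphere = lambda x: sum(xi * xi for xi in x)
best = bat_algorithm(sphere)
```

The frequency f_i = f_min + (f_max - f_min) * beta controls the step scale (exploration), while the pulse-rate-gated random walk around the best solution provides the intensification the quoted description mentions.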
Is using such algorithms to find regions of interest in images fruitful?
Recently, many metaheuristic algorithms have been published, mostly based on swarm intelligence. A good future for this field lies in applying these algorithms to solve real problems in different sectors such as business, marketing, management, intelligent traffic systems, engineering, health care, and medicine. Let's discuss their applications in the real world and share our case studies.
I want to solve some nonlinear optimisation problems, like minimising/maximising f(x, y) = x^2 + sin(x)*y + 1/(xy) over the solution space 3 <= x <= 7, 4 <= y <= 11, using an artificial neural network. Is it possible to solve this?
How can we measure the quality of an element (cell) in a candidate solution in evolutionary algorithms, for example, a gene in a chromosome? In other words, the objective function calculates the quality of each candidate solution, while I want a method to measure the quality of each element of a candidate solution.
Local search methods help to increase the exploitation capability of optimization and metaheuristic algorithms. They can also help to avoid local optima.
Are there any global optimization algorithms that support categorical input variables?
Evolutionary algorithms generally need to perform local mutation; however, unlike real or integer variables, it is hard to define similarity between categorical variables.
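For the mutation side specifically, a common workaround is random-reset mutation: since categories have no ordering or distance, a mutated gene is simply resampled uniformly from the other values in its category set. A sketch with made-up category sets:

```python
import random

def mutate_categorical(genome, categories, p=0.2, rng=random.Random(0)):
    """Random-reset mutation for categorical genes: each gene is, with
    probability p, resampled uniformly from the other values in its
    category set (there is no notion of a 'nearby' category)."""
    child = []
    for value, choices in zip(genome, categories):
        if rng.random() < p and len(choices) > 1:
            value = rng.choice([c for c in choices if c != value])
        child.append(value)
    return child

# Hypothetical hyperparameter-style category sets:
categories = [["relu", "tanh", "sigmoid"], ["adam", "sgd"], ["small", "large"]]
parent = ["relu", "adam", "small"]
child = mutate_categorical(parent, categories)
```

Beyond plain EAs, Bayesian-optimization-style methods with tree or categorical kernels also handle such variables, but inside an EA the uniform reset above is the usual default.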
I am interested in solving a mathematical problem (MILP) using evolutionary algorithms but am confused about which one to choose as a beginner in programming. Please suggest an algorithm that is easy to implement with good results.
Thanks
I am looking for single-objective problems for which most evolutionary algorithms can find the global optimum. For most of the new benchmark problems (i.e. CEC2015) algorithms are not able to converge to the global optimum. For my experiments, I need simpler problems such as Sphere, so that most algorithms can find the global optimum. Can anyone recommend some problems or a benchmark that I could use?
Thank you in advance!
I am working on comparing the performance of evolutionary algorithms in optimizing a suspension model. The suspension model is built using ADAMS/Car. I need the best way to implement these algorithms so they work in the fastest way with our model.
Evolutionary algorithms, namely the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Differential Evolution (DE), are used to solve optimization problems. In photovoltaics (PV), the I-V characteristic of PV cells is non-linear, which requires a resolution method. So, among the three methods cited (GA, PSO, and DE), which is the best for solving this non-linear problem, according to the strengths and weaknesses of each?
I am working on a resource allocation problem using an SPEA 2 evolutionary algorithm. The problem involves a decision variable where each variable has a different domain e.g. Ei < di where Ei is the allocation to a user and di is individual demands. The problem involves a linear constraint such that sum(Ei) = total resource. The probability of the creation of feasible off-springs after crossover and mutation operators is extremely low. So, we need a repair operator for this purpose. I need guidance for the selection of suitable repair operators and how should I apply that, I mean should we repair all solutions or some percentage. I would appreciate the guidance, comments, or any literature reference.
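One simple repair family for sum(E_i) = total with 0 <= E_i <= d_i is scale-clip-redistribute: rescale the offspring toward the target sum, clip to the demand bounds, and spread any residual over users that still have slack, repeating until the sum matches. It is usually applied to every infeasible offspring, since the equality constraint makes random feasibility essentially impossible. A sketch (assumes total <= sum(d_i), i.e. the instance is feasible; all names are illustrative):

```python
def repair(alloc, demands, total, max_rounds=20, tol=1e-9):
    """Repair operator: clip each E_i to [0, d_i], then repeatedly spread the
    residual (total - sum(E)) proportionally over users with slack (if short)
    or over current allocations (if over), re-clipping each round."""
    e = [min(max(a, 0.0), d) for a, d in zip(alloc, demands)]
    for _ in range(max_rounds):
        residual = total - sum(e)
        if abs(residual) < tol:
            break
        if residual > 0:
            slack = [d - ei for ei, d in zip(e, demands)]
            s = sum(slack)                       # > 0 while total <= sum(d)
            e = [ei + residual * sl / s for ei, sl in zip(e, slack)]
        else:
            s = sum(e)
            e = [ei + residual * ei / s for ei in e]
        e = [min(max(ei, 0.0), d) for ei, d in zip(e, demands)]
    return e

demands = [4.0, 2.0, 6.0]
fixed = repair([5.0, 5.0, 5.0], demands, total=10.0)
```

An alternative that avoids repair entirely is a decoder encoding: evolve non-negative weights w_i and decode them to E_i = total * w_i / sum(w), with a clip-and-redistribute pass only when a demand bound binds.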
I'm working on a project and I want to know the time and space complexity analysis of MOEAs. I have searched Google Scholar but there isn't any valuable information on this matter. Please help me and let me know if there is a good comparative study or article on this subject.
thanks
What is the best way to find the optimal solution of a non-convex, non-linear optimization problem that really constitutes finding the best combination of one member each from a number of groups or sub-populations?
Each group or sub-population has a number of members that are sequences: arrays of elements.
The optimal solution is given by the best-matching sequence (member) from each group or sub-population.
It is similar to the recombination step of evolutionary algorithms (but not entirely identical to evolutionary algorithms).
I'm searching for good tools that offer an easy way to apply evolutionary/genetic algorithms for selecting the best features from a dataset. I was wondering if this task can be performed in KNIME, WEKA, or Orange?
Is it possible to specify upper bounds on the coefficients other than a universal or global constraint?
I am using the Excel Solver evolutionary algorithm at present. I do not know whether this is the same for other evolutionary algorithms for non-convex non-linear optimization problems.
The evolutionary algorithm requires both a lower and an upper bound on the optimization problem coefficients.
If I don't specify "all coefficients <= x" but specify an upper bound per coefficient instead (c1 <= x1, c2 <= x2, c3 <= x3), it complains and refuses to run. So I implement and use both.
But specifying individual upper bounds on individual coefficients implies a (significantly) smaller solution space.
I also don't want the algorithm to search outside these individual coefficient upper boundaries, yet I get the impression that it does. Can someone confirm or refute this? If it does occur, I am of the impression that it can affect the solution I get.
Is there a way to specify individual coefficient (upper) boundaries and ensure that the algorithm only searches within them?
Hi everyone,
Is it reasonable to apply a hybrid model based on Artificial Neural Network (ANN) and evolutionary algorithms as a classifier applicable to intrusion detection?
In addition, can we divide the data sets into training, validation, and test sets? I have never seen this in these papers; researchers usually consider only training and test records to build their models.
My last question is whether we can generate data for intrusion detection using simulation of chemical operations.
Thank you in advance.
I am using a genetic algorithm (GA) on different test functions with different hyperparameters to determine the effects of the hyperparameters on the algorithm.
My criterion: is the answer (minimum) guessed by the GA close enough to the real answer?
"Close enough" is determined by a "level of accuracy", or LOA.
The problem is that different functions have different input ranges, and using a static LOA for all test functions does not seem right.
The question is, how should I decide on the LOA value? Should it be related to the input range of the function being tested?
Example: the Schwefel test function has an input range of (-500, 500) for all inputs and a minimum of 0. If the minimum guessed by the GA is 0.08 and the LOA is 0.1, then this guess is correct because |0 - 0.08| < 0.1, but if the guessed answer is 0.12, it is considered wrong.
The Rastrigin test function has a range of (-5.12, 5.12) for all inputs. Using the same LOA for Rastrigin does not seem right, since it has a much smaller range, and obviously the GA will do better with the same LOA.
Should the LOA be related to the range? For example, should an LOA of 0.001 be used for Rastrigin, since its range is 1/100 of Schwefel's?
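One pragmatic option is exactly what the question suggests: define the LOA as a relative tolerance scaled by each function's input range, so every function faces the same normalized accuracy demand. This is an assumption rather than a standard convention (scaling by the objective's value range near the optimum is an equally defensible choice):

```python
# Range-relative accuracy level: instead of one absolute LOA for every
# function, scale a single relative tolerance by each function's input range.
def is_close_enough(guess, true_min, input_range, rel_loa=1e-4):
    lo, hi = input_range
    return abs(guess - true_min) <= rel_loa * (hi - lo)

# Schwefel: range (-500, 500)   -> absolute tolerance 0.1
# Rastrigin: range (-5.12, 5.12) -> absolute tolerance ~0.001
ok_schwefel = is_close_enough(0.08, 0.0, (-500, 500))     # True
ok_rastrigin = is_close_enough(0.08, 0.0, (-5.12, 5.12))  # False
```

With rel_loa = 1e-4, the Schwefel case reproduces your LOA of 0.1 while Rastrigin is held to roughly 0.001, matching the 1/100 range ratio.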
With evolutionary algorithms, I understand the concepts of recombination and mutation, and that offspring may essentially constitute or approximate (be seen as) sub-populations.
But what if your optimization problem really constitutes, and the initial population is best composed of, a combination of sub-populations?
That is, creating a number of sub-populations and combining them.
I suppose one can see this notion of sub-populations as a characteristic to recombine and mutate on, or according to.
What is the best next step if you suspect the evolutionary algorithm recombines and mutates in an inferior way for your non-convex non-linear optimization problem?
Write your own heuristic?
How do you implement it?
How good are evolutionary algorithms with equality constraints?
I must add that the objective function is specified as a sum of absolute values.
I need/want to use equality constraints to guide the solution.
Some of the optimization coefficients (variable cells), when summed, must equal a certain value.
I run a nested or multi-level optimization problem, because the nonlinearity is simply too much for the evolutionary algorithm.
I use optimization to find an initial solution, which I then search and interrogate further at the second level.
It essentially helps to cut up the solution space, to help the algorithm.
Yet, in a sense, at the second level of optimization the evolutionary algorithm still descends too quickly, which causes it to miss the better solution.
I have seen something similar with ordinary nonlinear programming/optimization based on second-order partial derivatives, Newton steps, etc.
Is there a way to prevent the evolutionary algorithm from descending too quickly?
It could instead descend more slowly and recalculate at that point.
What are the implications when this phenomenon is possible with nonlinear optimization and evolutionary algorithms? Still too much/too fine nonlinearity?
What is the best commercially available evolutionary solver/algorithm for non-linear optimization problems?
It must be freely or easily obtainable.
Not necessarily free.
Relatively easy to install and use.
Anything better than Excel Solver's evolutionary algorithm?
Can it happen that the grid size or resolution (to use such language) of the local solutions of a non-linear optimization problem is, or becomes, too fine for the evolutionary algorithm to pick up?
Is this just a matter of proper sampling? Will the algorithm conduct the appropriate sampling in all cases?
What is the best way to choose initial solutions for a non-linear optimization problem sensitive to the initial solution?
If you have a significant number of optimization problem coefficients (50 in this case), and your optimization problem is such (non-linear with multiple possible local solutions) that it is sensitive to the initial solution, what are the best ways to choose (a set of) initial solutions?
Will the excel solver evolutionary algorithm do this?
Why would an evolutionary algorithm (Excel's) be better at, or simply able to, solve what seems to be a linear problem, when the LP (and also the NLP) algorithm struggles or simply fails to?
The problem is of the form:
Max X; X = x1 + x2 + x3 + ...
x1 = (k1 * z1) + (k2 * z2) + (k3 * z3) + ...
x2 = (k1 * y1) + (k2 * y2) + (k3 * y3) + ...
x3 = (k1 * q1) + (k2 * q2) + (k3 * q3) + ...
k are the coefficients of the problem to optimize; z, y, q are factors and are given/constant.
The evolutionary algorithm can find a solution (better than the initial one). The LP algorithm occasionally finds a solution, but not a better one (mostly worse) than the evolutionary algorithm's. The NLP algorithm does not really find a solution at all.
Dear colleagues,
I would like to announce that we have started a Special Issue in the AI journal (MDPI), where you can share your latest research on evolutionary computation. Both theoretical contributions and application papers are welcome.
Papers accepted during 2019 will get APCs waived (published free of charge).
Please, find further info on the SI website: https://www.mdpi.com/journal/ai/special_issues/EAIAA
Best regards.
If we have multiple classifiers and need to know which one is underfitting and which one is overfitting based on performance factors (classification accuracy and model complexity):
Is there any method to select the dominant classifier (optimal fitting) that balances the two factors mentioned above?
My problem consists of:
1. More than a thousand constraints and variables.
2. It is purely 0-1 programming, i.e., all variables are binary.
3. Kindly note that I am not a good programmer.
Please provide me with links to books or videos discussing the application of GA in MATLAB for solving 0-1 programming with a large number of variables and constraints.
I have gone through many YouTube videos, but they use examples with only two or three variables, without integer restrictions.
Dear All,
I want to know: how do we predict the resonant frequency of an antenna with a Support Vector Machine, by giving antenna parameters as input to the model?
I am currently working on a project which requires self-tuning of a BELBIC controller's parameters, i.e., the PID gains and the learning rates of the amygdala and orbitofrontal cortex models. I need some suggestions on how to integrate an optimization algorithm, i.e., PSO, to tune these parameters for a 3rd-order non-linear system. I know how PSO works; the only thing is, I am having difficulties linking BELBIC and PSO together.