Questions related to Swarm Intelligence
Over the last few decades, numerous metaheuristic optimization algorithms have been developed with varying sources of inspiration. However, most of these metaheuristics have one or more weaknesses that affect their performance, for example:
- Getting trapped in a local optimum with no ability to escape.
- No trade-off between exploration and exploitation.
- Poor exploitation.
- Poor exploration.
- Premature convergence.
- Slow convergence rate.
- High computational demands.
- High sensitivity to the choice of control parameters.
Metaheuristics are frequently improved by adding efficient mechanisms aimed at increasing their performance like opposition-based learning, chaotic function, etc. What are the best efficient mechanisms you suggest?
Hi, I am implementing path planning using PSO but I have no idea what would be the max and min values.
I have done some tests with arbitrary values but they only work for some cases.
Can you help me?
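One common heuristic, not a universal rule, is to clamp each velocity component to a fraction of the search range: v_max = k * (x_max - x_min) with k typically between 0.1 and 0.5. A minimal sketch (the value k = 0.2 here is an illustrative assumption, not a recommendation for your specific map):

```python
import random

def clamp_velocity(v, x_min, x_max, k=0.2):
    """Clamp each velocity component to [-v_max, v_max], where
    v_max = k * (x_max - x_min). k ~ 0.1-0.5 is a common heuristic."""
    v_max = [k * (hi - lo) for lo, hi in zip(x_min, x_max)]
    return [max(-vm, min(vm, vi)) for vi, vm in zip(v, v_max)]

# Example: a 2-D workspace of 100 x 100 units
x_min, x_max = [0.0, 0.0], [100.0, 100.0]
v = [55.0, -3.0]
print(clamp_velocity(v, x_min, x_max))  # -> [20.0, -3.0]
```

Deriving the bounds from the workspace size this way tends to generalize better than hard-coded values, which may explain why arbitrary values only work for some cases.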
Hi, I have been researching about swarm algorithms and I need to implement some of them in path planning. I understand the algorithms, but I don't understand the process in which they are applied to path planning.
I have found some projects that do this, but I have not fully understood the logic behind them. It is a bit confusing for me as a beginner.
Do you have any resources where you explain how to implement these algorithms in path planning? Can you share some codes?
Thanks for your help.
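A common way these algorithms are applied to path planning (one approach among several, not the only one) is to encode a path as a fixed number of intermediate waypoints, so each candidate solution is just a flattened vector of coordinates, and the fitness is the total path length plus penalties for obstacles. A minimal sketch of that encoding:

```python
import math

def path_length(waypoints, start, goal):
    """Fitness for path planning: total Euclidean length of the path
    start -> waypoints... -> goal. A candidate solution IS the flattened
    list of intermediate waypoint coordinates; lower fitness = shorter
    path. Obstacle handling (e.g. a collision penalty) would be added here."""
    pts = [start] + waypoints + [goal]
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

# A candidate solution with two intermediate waypoints in 2-D:
particle = [1.0, 1.0, 2.0, 2.0]          # flattened (x1, y1, x2, y2)
wps = [tuple(particle[i:i+2]) for i in range(0, len(particle), 2)]
print(path_length(wps, (0.0, 0.0), (3.0, 3.0)))
```

With this encoding, any swarm algorithm that optimizes a real-valued vector can plan paths; the algorithm itself never needs to "know" about paths, only about the fitness value.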
Suppose we compare two metaheuristics X and Y on a given real problem, and X returns a better solution than Y, while on global optimization benchmark problems the same Y returns a better solution than X. Does this make sense? What is the reason?
In recent years, the field of combinatorial optimization has witnessed a true tsunami of so-called "novel" metaheuristic methods, most of them based on a metaphor of some natural or man-made process. The behavior of virtually any species of insects, the flow of water, musicians playing together -- it seems that no idea is too far-fetched to serve as an inspiration to launch yet another metaheuristic.
Hello scientific community,
Have you noticed the following?
When a new algorithm is proposed, many researchers rush to improve it and apply it to the same (and other) problems. This raises a question: if the original algorithm suffers from weaknesses, why was it proposed at all? And why do we need a new algorithm when existing ones already solve the same problems? If the new algorithm solves a previously unsolved problem, then it is welcome; otherwise, why?
Therefore, I ask: does the scientific community need novel metaheuristic algorithms (MHs) rather than the existing ones?
I think we need to organize the existing metaheuristic algorithms and document the pros and cons of each, along with the problems each one has solved.
Redundant algorithms should disappear, as should overly complex ones.
Derivative algorithms should disappear as well.
We need to benchmark the MHs, much as we benchmark with standard test suites.
We also need to identify the unsolved problems; if you would like to propose a novel algorithm, please aim it at an unsolved problem; otherwise, please stop.
Thanks, and I look forward to a reputable discussion.
The article Simoiu, C., Sumanth, C., Mysore, A., & Goel, S. (2019). Studying the "Wisdom of Crowds" at Scale. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 7(1), 171-179. Retrieved from https://ojs.aaai.org/index.php/HCOMP/article/view/5271 notes (p. 172) that it is:
"... one of the most comprehensive studies of the wisdom-of-crowds effect to date ..."
Are there any other comparable studies? If so, can you please provide the citations?
A different approach is to use emergent, collectively solved problem sets, such as the English lexicon, or rates measuring efficiency gains in such problem sets, for example:
1) The rate of improvement in domestic lighting: Nordhaus, W. D. (1994). Do real-output and real-wage measures capture reality? The history of lighting suggests not. Technical Report 1078, Cowles Foundation for Research in Economics, Yale University;
2) The rate of increase in average IQs: A theory of intelligence, arXiv:0909.0173v8;
3) By assuming a general collective problem-solving rate, and finding a formula connecting the collective rate with the average individual rate of problem solving: preprint, The rate of thought.
Various metaheuristic optimization algorithms with different sources of inspiration have been proposed in recent decades. Unlike mathematical methods, metaheuristics do not require any gradient information and do not depend on the starting point. Furthermore, they are suitable for complex, nonlinear, and non-convex search spaces, especially when near-global optimum solutions are sought with limited computational effort. However, some of these metaheuristics can, for example, become trapped in a local optimum and fail to escape. For this reason, numerous researchers focus on adding efficient mechanisms to enhance the performance of the standard versions of these metaheuristics. Some of them are addressed in the following references:
I will be grateful If anyone can help me to find other efficient mechanisms.
Thanks in advance.
What are the standard parameter values of the commonly used classifiers such as Support-vector machine, k-nearest neighbors, Decision tree, Random forest?
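In practice, "standard" values usually mean the library defaults. For reference, a summary of the defaults documented by scikit-learn for these classifiers (version-dependent, so always verify against the documentation of the release you use):

```python
# Common scikit-learn defaults (as of recent releases; defaults do
# change between versions, so treat this as a starting point only):
common_defaults = {
    "SVC":                    {"C": 1.0, "kernel": "rbf", "gamma": "scale"},
    "KNeighborsClassifier":   {"n_neighbors": 5, "weights": "uniform"},
    "DecisionTreeClassifier": {"criterion": "gini", "max_depth": None},
    "RandomForestClassifier": {"n_estimators": 100, "criterion": "gini"},
}
for name, params in common_defaults.items():
    print(name, params)
```

Defaults are rarely optimal for a given dataset; they are mainly a fair common baseline for comparisons before any tuning.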
Swarm intelligence (SI) is the collective behavior of decentralized, self-organized systems, natural or artificial. The concept is employed in work on artificial intelligence. The expression was introduced by Gerardo Beni and Jing Wang in 1989, in the context of cellular robotic systems.
I am preparing a comparison between a couple of metaheuristics, but I would like to hear some points of view on how to measure an algorithm's efficiency. I have thought of using some standard test functions and comparing the convergence time and the value of the evaluated objective function. However, any comments are welcome, and appreciated.
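Since these algorithms are stochastic, a single run says little; the usual protocol is many independent runs with a fixed evaluation budget, reporting mean and standard deviation of the best objective value (ideally followed by a statistical test such as Wilcoxon). A minimal harness sketch, with random search standing in for the metaheuristics being compared:

```python
import random
import statistics

def sphere(x):
    """Standard test function; minimum 0 at the origin."""
    return sum(v * v for v in x)

def random_search(f, dim, bounds, evals, rng):
    """Placeholder optimizer; swap in the metaheuristics under comparison."""
    lo, hi = bounds
    return min(f([rng.uniform(lo, hi) for _ in range(dim)])
               for _ in range(evals))

# 30 independent seeded runs with the same evaluation budget per run.
results = [random_search(sphere, 5, (-5.0, 5.0), 1000, random.Random(seed))
           for seed in range(30)]
print(f"best f: {statistics.mean(results):.4f} +/- {statistics.stdev(results):.4f}")
```

Fixing the budget in function evaluations (rather than iterations or wall-clock time) keeps the comparison fair when algorithms do different amounts of work per iteration.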
Recently, many metaheuristic algorithms have been published, mostly based on swarm intelligence. A promising future for this field lies in applying these algorithms to real problems in different sectors such as business, marketing, management, intelligent traffic systems, engineering, health care, and medicine. Let's discuss their applications in the real world and share our case studies.
What kind of small mobile robots are available in market to undertake multi-robot research? Would you have anything to be recommended?
Chaomin Luo, Ph.D.
Associate Editor of IEEE Transactions on Cognitive and Developmental Systems
Associate Editor of International Journal of Robotics and Automation (SCI-indexed)
Associate Editor of International Journal of Swarm Intelligence Research (IJSIR)
Department of Electrical and Computer Engineering
Mississippi State University
312 Simrall Bldg., 406 Hardy Rd., Box 9571
Mississippi State, MS 39762
Local search methods help to increase the exploitation capability of optimization and metaheuristic algorithms. They can also help to avoid local optima.
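As a concrete illustration, here is a minimal hill-climbing local search that could be applied to the best individual after each swarm iteration to sharpen exploitation around promising regions (a generic sketch, not tied to any particular metaheuristic):

```python
import random

def hill_climb(f, x, step=0.1, iters=50, rng=random):
    """Simple local search: perturb one coordinate at a time and keep
    improvements only, so the result is never worse than the input."""
    best, fbest = list(x), f(x)
    for _ in range(iters):
        cand = list(best)
        i = rng.randrange(len(cand))
        cand[i] += rng.uniform(-step, step)
        fc = f(cand)
        if fc < fbest:
            best, fbest = cand, fc
    return best, fbest

f = lambda x: sum(v * v for v in x)        # sphere function
x, fx = hill_climb(f, [1.0, -1.0], rng=random.Random(0))
print(fx)  # strictly no worse than f([1.0, -1.0]) = 2.0
```

Because only improvements are accepted, this step can only refine a solution; escaping local optima still relies on the global (exploration) component of the host algorithm.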
What is the effect of increasing or decreasing population size and the number of iterations on the quality of solutions and the computational effort required by the Swarm Intelligence algorithms?
Multiple metaheuristic optimization algorithms, such as the Grey Wolf Optimizer, face a problem of shift invariance: when the optimum of an optimization model is at (0,0), the algorithm performs quite well; however, when the same model is shifted by some offset, the performance of the same algorithm deteriorates sharply.
An example might be taken from f1 & f6 of standard Benchmark Functions (CEC2005).
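Diagnosing this is straightforward: wrap any benchmark so its optimum moves away from the origin, f_s(x) = f(x - o), and rerun the algorithm. A minimal sketch:

```python
def sphere(x):
    """Unshifted benchmark; minimum 0 at the origin."""
    return sum(v * v for v in x)

def shifted(f, offset):
    """Shift the optimum from the origin to `offset`: f_s(x) = f(x - o).
    An algorithm that only performs well with the optimum at the origin
    (e.g. one biased toward the centre of the search space) will degrade
    noticeably on the shifted version."""
    return lambda x: f([xi - oi for xi, oi in zip(x, offset)])

f_shifted = shifted(sphere, [3.0, -2.0])
print(f_shifted([3.0, -2.0]))  # -> 0.0 (optimum now at the offset)
```

This is exactly the construction the CEC shifted benchmark suites use, which is why comparing f1 against its shifted counterpart exposes centre-bias.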
New hybrid techniques that help to reduce energy consumption in routing need to be identified. I am reviewing this area.
I'm doing some research on dimensionality reduction using swarm intelligence algorithms. By the no-free-lunch theorem, there is no algorithm that best suits all problems. So, to be able to find the best subset, I need to determine whether the problem is unimodal or multimodal. The data has 300 features and 1000 instances. Are there any visualization methods that can help in this regard?
Opposition-based learning (OBL) is a concept in machine learning inspired by the opposite relationship among entities. It is often applied to the randomly generated initial population.
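A minimal sketch of opposition-based initialization: generate a random population, compute the opposite of each point via x_opp = lo + hi - x, evaluate both sets, and keep the fittest half.

```python
import random

def opposition_init(pop_size, dim, lo, hi, f, rng=random):
    """Opposition-based initialization: for each random point x, also
    evaluate its opposite x_opp = lo + hi - x, then keep the pop_size
    fittest candidates from the combined set."""
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    opp = [[lo + hi - xi for xi in x] for x in pop]
    return sorted(pop + opp, key=f)[:pop_size]

f = lambda x: sum(v * v for v in x)  # sphere, to be minimized
pop = opposition_init(10, 3, -5.0, 5.0, f, random.Random(1))
print(len(pop), f(pop[0]) <= f(pop[-1]))
```

The intuition is that if a random guess is far from the optimum, its opposite has a fair chance of being closer, so the starting population is better on average at the cost of one extra evaluation per individual.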
What is the most efficient way to measure the impact of the adjustment of each hyperparameter of a given Evolutionary Algorithm with many hyperparameters? Is there any way to graphically visualize it? If not, how can we do it numerically?
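The simplest systematic approach is a one-at-a-time (OAT) sensitivity sweep: vary each hyperparameter over a grid while holding the others at a baseline, average over repeated runs, and plot one curve per parameter (the spread of a curve indicates that parameter's impact). A sketch, with a toy stand-in for the actual EA run (more thorough alternatives exist, e.g. full factorial designs or fANOVA):

```python
import random
import statistics

def run_algorithm(params, seed):
    """Stand-in for one run of the EA; returns the best fitness found.
    Replace with the real algorithm under study."""
    rng = random.Random(seed)
    # Toy behaviour: fitness improves with population size, plus noise.
    return 1.0 / params["pop_size"] + rng.gauss(0, 0.01)

def one_at_a_time(base, grid, runs=10):
    """Vary one hyperparameter at a time over its grid, others at
    baseline; report mean fitness per value. Easy to visualize as one
    line (or box plot) per parameter."""
    report = {}
    for name, values in grid.items():
        report[name] = {}
        for v in values:
            params = dict(base, **{name: v})
            scores = [run_algorithm(params, s) for s in range(runs)]
            report[name][v] = statistics.mean(scores)
    return report

base = {"pop_size": 20, "mutation_rate": 0.1}
grid = {"pop_size": [10, 20, 40], "mutation_rate": [0.05, 0.1, 0.2]}
print(one_at_a_time(base, grid))
```

Note that OAT ignores interactions between hyperparameters; if interactions matter for your algorithm, a factorial design over pairs of parameters (visualized as a heatmap) is the natural next step.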
Swarm Intelligence (SI) techniques are more and more used by researchers in a wide range of areas: physics, economy, materials, biomedical sciences, computer sciences, engineering...
Recently published papers propose enhanced variants of SI techniques to improve the optimal solutions and achieve better results than conventional optimization techniques such as ACO, PSO, and differential evolution (DE)...
Has anyone worked on these enhanced variants of SI techniques?
Switching topology means that the topology of the agents switches into a different topology over time. Time-varying formation happens when the formation pattern changes over time for several different reasons. Can we interpret topology switching as an unavoidable consequential step in time-varying formation?
I want to calculate inverted generational distance (IGD) to evaluate the performance of a multi-objective optimization algorithm. I have the approximate Pareto fronts. But, I could not find the true Pareto front for the structural engineering problems, such as, welded beam, spring, gear design problems,... etc. Any one has the data of true PF for them?
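For real-world problems like welded beam or spring design, the true Pareto front is generally unknown; common practice is to build a reference front by merging the non-dominated solutions from many runs of several algorithms, then compute IGD against that reference. The metric itself is simple; a minimal sketch:

```python
import math

def igd(reference_pf, approx_front):
    """Inverted Generational Distance: the average, over reference
    front points, of the distance to the nearest point in the obtained
    approximation. Lower is better. The reference front stands in for
    the true PF when the latter is unknown."""
    return sum(min(math.dist(t, a) for a in approx_front)
               for t in reference_pf) / len(reference_pf)

reference_pf = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
approx       = [(0.1, 1.0), (0.6, 0.5), (1.0, 0.1)]
print(round(igd(reference_pf, approx), 4))  # -> 0.1
```

Since IGD values depend entirely on the reference set, any comparison between algorithms is only meaningful when all of them are measured against the same reference front.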
I am looking forward to start my Master's thesis and I am not sure it is a very good idea to combine Blockchain with either swarm intelligence or machine learning !
Ant colony optimization (ACO) was first proposed by Dorigo et al. as a multi-agent approach applied to the classical traveling salesman problem (TSP) [Dorigo M., Maniezzo V. and Colorni A. Positive feedback as a search strategy. Report No.: 91-016. 1991]. This is relevant to deep learning and deep neural networks, where hundreds to thousands of collaborating neural layers consume billions of operations and cannot operate without the efficient, optimized cooperation of a multiprocessor or multicore CPU with a thousand-core GPU [Sze, V., Chen, Y. H., Yang, T. J., & Emer, J. S. Efficient processing of deep neural networks: A tutorial and survey. Proceedings of the IEEE, 105(12), 2295-2329, 2017].
I am working on medical data prediction using evolutionary algorithms and am stuck on data classification. I am now seeking help with this.
Pareto solutions are found below the true PF in the case of the 3D DTLZ1 problem, whereas they are found above the true PF in the case of the 2D DTLZ1 problem.
I have checked and confirmed the problem formulation and variable bounds (3 variables are used).
The algorithm converges on DTLZ-2, 3, and 4.
What might be the cause, or which area of the MO algorithm should be improved?
Reference: Deb K., Thiele L., Laumanns M., Zitzler E. (2005) Scalable Test Problems for Evolutionary Multiobjective Optimization. In: Abraham A., Jain L., Goldberg R. (eds) Evolutionary Multiobjective Optimization. Advanced Information and Knowledge Processing. Springer, London
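One quick sanity check, based on the DTLZ1 definition in the reference above: the true Pareto front of DTLZ1 is the linear hyperplane where the objectives sum to 0.5 (with all f_i >= 0). Solutions "below" the front would have objective sums under 0.5, which should be infeasible, so that usually points to a bug in the objective evaluation or the problem formulation rather than in the optimizer. A small helper to check obtained fronts:

```python
def dtlz1_pf_deviation(front):
    """For DTLZ1 the true Pareto front satisfies sum(f_i) = 0.5.
    Positive deviation: point lies above the front (not yet converged).
    Negative deviation: point is below the front, which should be
    infeasible, suggesting an evaluation/formulation error."""
    return [sum(f) - 0.5 for f in front]

front = [(0.25, 0.25, 0.0), (0.1, 0.1, 0.31)]
print(dtlz1_pf_deviation(front))
```

Running this on your 2D and 3D results should tell you immediately whether the issue is convergence (all deviations positive) or evaluation (any deviation clearly negative).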
Swarm intelligence (SI) techniques are population-based algorithms inspired by the social behavior of animals.
I am looking for new research direction in cooperative control of multi-agent systems. What are the latest trends in this field of study? any comment is much appreciated.
I would like to find out any novel way to design an algorithm that can be used to manage and monitor intermodal transportation and supply chain
I proposed an improved Firefly algorithm, compared it with Firefly variants, and submitted it to a journal. After major and minor revisions, one of the four reviewers said I should also compare it with other metaheuristics.
In the papers I have seen, when authors propose a new modified algorithm, they compare it with variants of the same algorithm (like Firefly), not with other metaheuristics (swarm intelligence and evolutionary algorithms).
On the other hand, another reviewer said that comparing swarm intelligence and evolutionary algorithms is confusing, and that we cannot compare them with each other.
Should I make the comparison or not?
I have two data sets, and I want to use them to tune the parameters of the particle swarm optimization (PSO) algorithm using a machine learning method.
Blind source separation includes many methods to separate mixed signals, such as independent component analysis.
There are also many methods used to enhance the separated signals, such as particle swarm optimization.
Is it possible to use Quantum Particle Swarm Optimization in this field?
Two years of observations in oil palm tree plantations in Malaysia have revealed novel nesting behavior of the Asian weaver ant Oecophylla smaragdina (Fabricius) that had not previously been reported.
The trees contained an average of 3.98 ± 1.74 (mean ± SD, range 1-13) nests per tree, with only an odd number of nests in each surveyed tree.
The phenomenon persists during both the dry and rainy seasons of the year. In biological systems, only one comparable case is clearly reported, in North America, where cicadas with life cycles of 13 or 17 years are eaten by birds; it still remains a mystery. The ants exhibited polydomous nesting behaviour, as reported by other authors (Debout et al. 2007), with multiple nests in a single palm tree, and multiple queens were sometimes observed in the main nest, suggesting polygyny (Exélis Pierre and Azarae, 2012, in press).
Four experimental design tests all gave positive results, demonstrating that there are factors, originating from the queens, regulating the mechanism. How and why? That is yet to be found out...
I would like to know whether a system of modeling equations could help to explain the underlying biological mechanism regulating this, besides the swarm intelligence of these ants. Any ideas or suggestions are welcome.
I am working on fitness-case importance for symbolic regression and found the paper "Step-wise Adaptation of Weights for Symbolic Regression with Genetic Programming", which assigns weights to fitness cases to give importance to points that have not yet been solved, both to boost performance and to get GP out of local optima.
This publication is quite old, and I am looking for newer work on fitness-case importance, but I have not been able to find any such publication. Instead, I find publications on sampling and random selection in various forms.
So, can someone point me towards research relevant to weighting data points, like the SAW (stepwise adaptation of weights) technique?
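For readers unfamiliar with the technique, the core of SAW fits in a few lines. This is a simplified sketch of my understanding of the scheme (the update rule and delta are illustrative): periodically increase the weight of every fitness case the current best individual still gets wrong, so hard cases come to dominate the weighted fitness.

```python
def saw_weights(errors, weights, delta=1.0):
    """Stepwise Adaptation of Weights (SAW) sketch: bump the weight of
    each fitness case the best individual still gets wrong (error > 0),
    pulling the search toward unsolved cases and out of local optima.
    errors[i] is the best individual's error on case i."""
    return [w + delta if e > 0 else w for w, e in zip(weights, errors)]

def weighted_fitness(errors, weights):
    """Fitness seen by the GP: weighted sum of per-case errors."""
    return sum(w * e for w, e in zip(errors, weights))

weights = [1.0, 1.0, 1.0]
errors  = [0.0, 0.5, 2.0]          # cases 2 and 3 still unsolved
weights = saw_weights(errors, weights)
print(weights, weighted_fitness(errors, weights))  # -> [1.0, 2.0, 2.0] 5.0
```

Modern work on related ideas often appears under the headings of "curriculum learning", "boosting" (AdaBoost reweights hard examples in much the same spirit), or "lexicase selection" in the GP literature, which may be useful search terms.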
I use nature-inspired intelligent algorithms a lot in my research. One problem I have faced so far is correcting solutions so that they stay within an interval [a,b].
For example, when testing the performance of such an algorithm on a benchmark function like the Goldstein-Price function, which has search domain [-2,2], algorithms (mostly swarm intelligence) violate this constraint.
One solution is to use a penalty function.
But what I am looking for is some kind of normalization that maps the values back into the right search domain.
Note that a solution is a vector of values over different characteristics of the problem, so I cannot use a simple array normalization (http://stackoverflow.com/questions/10364575/normalization-in-variable-range-x-y-in-matlab).
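Besides penalty functions, the standard repair strategies operate component-wise, so they work regardless of what each dimension represents. A sketch of the three most common ones (clamping, reflection, and random reinitialization):

```python
import random

def clamp(x, lo, hi):
    """Absorb: stick each violating component to the violated bound."""
    return [min(hi, max(lo, xi)) for xi in x]

def reflect(x, lo, hi):
    """Reflect: mirror the overshoot back inside (single bounce;
    assumes the overshoot is smaller than the domain width)."""
    return [lo + (lo - xi) if xi < lo else hi - (xi - hi) if xi > hi else xi
            for xi in x]

def random_reinit(x, lo, hi, rng=random):
    """Re-sample violating components uniformly inside the domain."""
    return [xi if lo <= xi <= hi else rng.uniform(lo, hi) for xi in x]

# Goldstein-Price domain is [-2, 2]; a particle that stepped outside:
x = [2.7, -1.0, -2.3]
print(clamp(x, -2, 2))    # -> [2, -1.0, -2]
print(reflect(x, -2, 2))  # -> [~1.3, -1.0, -1.7]
```

Clamping is simplest but piles particles up on the boundary; reflection and random reinitialization preserve more diversity, which often matters for multimodal functions like Goldstein-Price. In PSO it is also common to zero or invert the velocity component when its position component is repaired.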
I am trying to build a Neural Network to study one problem with a continuous output variable. A schematic representation of the neural network used is described in the figure below.
[Figure 1: Schematic representation of neural network: input layer size = 1; hidden layer size = 8; output layer size = 1.]
Is there any reason why I should use the tanh() activation function instead of the sigmoid() activation function in this case? I have been using in the past the sigmoid() activation function to solve logistic regression problems using neural networks, and it is not clear to me whether I should use the tanh() function when there is a continuous output variable.
Does it depend on the values of the continuous output variable? For example:
(i) Use sigmoid() when the output variable is normalized from 0 to 1.
(ii) Use tanh() when the output variable has negative values.
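The output ranges are what motivate rules (i) and (ii): sigmoid maps to (0, 1) and tanh to (-1, 1), so a squashing output unit can only represent targets scaled into its range. A small numeric check, plus a note that for a continuous (regression) output a linear output unit, with no squashing at all, is also a common choice that avoids rescaling the targets:

```python
import math

def sigmoid(z):
    """Logistic function: maps any real z into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# sigmoid(z) in (0, 1); tanh(z) in (-1, 1). A squashing output unit
# can never produce targets outside its range, hence rules (i)/(ii).
for z in (-2.0, 0.0, 2.0):
    print(f"z={z:+.1f}  sigmoid={sigmoid(z):.3f}  tanh={math.tanh(z):+.3f}")
```

So the choice does depend on the target values, but only for the output layer; in the hidden layer tanh vs sigmoid is largely a matter of training dynamics (tanh's zero-centered outputs often train a bit faster), not of the target range.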
Thanks in advance
I have recently opted for the COCO framework for comparing and analysing continuous algorithms to deeply analyse an algorithm I am working on.
The paper by Hansen et al. (COCO: performance assessment) was of great help for understanding the rationale of the adopted result analysis procedure in the case of a multi-algorithm comparison. However, some details still remain unclear, hence:
1. Could anyone cite some references talking about the categories of functions cited in the benchmarks, namely: separable fcts, moderate fcts, ill-conditioned fcts, multi-modal fcts, weakly structured multi-modal fcts ?
2. The three types of graphs, offered when comparing only two algorithms, have not been presented in the above-mentioned reference, see attached file.
Hence, I would be really thankful for any clarifications.
What behavior of which animals/birds/insects shows Swarm intelligence. And what are the practical aspects of using them to solve different problems?
I am trying to hybridize two swarm intelligence algorithms into a new algorithm, to be applied to task scheduling integrated with the CloudSim toolkit.
I hope someone could help me and recommend some papers or source code.
Apart from commercial frameworks like Oracle or SQL Server, I am looking for academic business intelligence data to conduct experimental research, such as applying data mining or optimization algorithms. Can you suggest links to such data?
In recent times, a number of research works have been published on enhancing the search capability of evolutionary and swarm intelligence based algorithms by employing chaos-based local search techniques. How does this chaotic local search actually affect the performance of the algorithm in terms of exploration and exploitation? Also, what key factors should be kept in mind before designing such a local search process?
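For concreteness, here is a minimal sketch of the kind of chaotic local search these papers typically describe (the logistic map and the 10%-of-range neighbourhood are common but illustrative choices): a logistic map z <- 4z(1 - z) generates a deterministic, non-repeating sequence in (0, 1), and each value is mapped into a small neighbourhood of the current best solution.

```python
def chaotic_local_search(f, x, lo, hi, steps=50, z0=0.7):
    """Chaotic local search around the current best solution. The
    ergodic, non-repeating logistic-map sequence probes the
    neighbourhood more thoroughly than uniform random sampling of the
    same size, aiding escape from local optima; only improvements
    are kept, so exploitation is never harmed."""
    best, fbest, z = list(x), f(x), z0
    radius = 0.1 * (hi - lo)                 # neighbourhood size (assumed)
    for _ in range(steps):
        z = 4.0 * z * (1.0 - z)              # logistic map, chaotic at r=4
        cand = [min(hi, max(lo, xi + (2.0 * z - 1.0) * radius))
                for xi in best]
        fc = f(cand)
        if fc < fbest:
            best, fbest = cand, fc
    return best, fbest

f = lambda x: sum(v * v for v in x)          # sphere function
x, fx = chaotic_local_search(f, [0.5, -0.5], -5.0, 5.0)
print(fx)  # no worse than f([0.5, -0.5]) = 0.5
```

Key design factors visible even in this sketch: the chaotic map (logistic, tent, ...), the initial seed z0 (avoid the map's fixed points), the neighbourhood radius (often shrunk over time to shift from exploration toward exploitation), and when to trigger the search (every iteration vs. only on stagnation).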
Let's assume we have a standard feedforward ANN with just a single hidden layer. It is standard practice to normalize the input data, usually into [0,1] or [-1,1]. Let's assume min-max normalization. If we have a sigmoid activation function, wouldn't it be more sensible to normalize into a range like [-4,4] or [-5,5]? The sigmoid function is essentially linear in [-2,2], so if we normalize into [-1,1], the approximated function is linear for the most part. It might be argued that for certain weights the input to the sigmoid activation function can fall outside the normalized range, but that is generally rare (depending also on what values the weights take).
As for how to initialize the weights: A common formula is in the range [-b,b] where b = 1 / sqrt(Ninput+Nhidden) (assuming sigmoid function).
It is usually said that small weights are preferable (see also regularized error functions, which penalize large weights), since large weights have a higher chance of leading to overfitting.
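The initialization formula quoted above is a one-liner in code; a quick sketch that also makes the bound explicit:

```python
import math
import random

def init_weights(n_in, n_hidden, rng=random):
    """Initialize a hidden layer's weights uniformly in [-b, b] with
    b = 1 / sqrt(n_in + n_hidden). Small initial weights keep sigmoid
    units in their near-linear region early in training, avoiding
    saturation and (per the argument above) reducing overfitting risk."""
    b = 1.0 / math.sqrt(n_in + n_hidden)
    return [[rng.uniform(-b, b) for _ in range(n_in)]
            for _ in range(n_hidden)]

w = init_weights(8, 4, random.Random(0))
b = 1.0 / math.sqrt(12)
print(all(-b <= wij <= b for row in w for wij in row))  # -> True
```

This is one member of a family of fan-in-based schemes; later variants (e.g. Glorot/Xavier initialization) use the same idea with slightly different constants derived from keeping activation variance stable across layers.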
Actually, a plethora of multi-objective metaheuristics is used in the available literature. Among these optimization approaches, the newly proposed swarm intelligence techniques (MOPSO, MOACO, MOBFO...) and the evolutionary algorithms (MOGA, NSGA-II, AMOSA...) are the most used.
- How can we compare Pareto fronts of each metaheuristic?
- What types of methods or mathematical metrics can we use to classify these metaheuristics?
I recently read an excellent paper detailing the problems with metaphors in the design of metaheuristics:
The components are relatively easy: a solution representation, a fitness function to evaluate solutions, and methods to create new solutions.
Key strategic components include means to avoid and/or escape from local optima, balance exploration and exploitation, etc.
Where do people stand on these? I'm particularly interested in insights into cooperative coevolution that involves solving multiple problems in lower dimensions. How does this work for non-separable problems? Other than "some optimization is better than no optimization", what's the justification/insight?
I am looking for Java code for ACO (Ant colony Optimization)- based Feature extraction. I want to adopt the strategy on my Dataset which has real (Numeric) attributes and a binary nominal class attribute. Does anyone know how can I find such a code?
In recent years, many papers have been published on ant algorithm and swarm intelligence applications in logistics. But are there any further findings for logistics provided by biomimetics?
I am a PhD student in computer science. The subject of my thesis is "solving optimization problems in cellular networks". It consists of solving optimization problems that exist in mobile radio networks using bio-inspired algorithms called "metaheuristics". Any comments regarding this?
Following the success of the Artificial Bee Colony algorithm on optimization tasks, artificial neural networks are among the most attractive research areas in the field of soft computing. Furthermore, researchers have developed appealing algorithms based on nature-inspired intelligent behavior, like CS, BA, DSA, etc. A research article, "Honey Bees Inspired Learning Algorithm: Nature Intelligence Can Predict Natural Disaster", is going to be published by Springer; it was successfully used to predict earthquake hazards with high efficiency. Can we say that nature-inspired intelligence algorithms can solve optimization problems?
I am working on an idea of complexity reduction for unconstrained optimization problems.
I want to test the idea on some real-world unconstrained optimization problems, but it seems hard to find such problems.
Can someone suggest any?
I am trying to use the improved version of the said algorithm, but I have experienced some difficulties. Any ideas, engineers?
The context is the analysis of various optimization techniques based on particle swarms, together with methods like granular computing that could be applied with their advantages in view.
I'm currently revising my optimization algorithm for a specific part of a problem. I have trouble in wrapping my head around a new approach and my mind is having this tunnel-vision of ideas. I could really use some fresh perspectives.
I'll try my best to simplify the explanation.
(Please see attached file "Example-1.png" and "Example-2.png")
Say, we are given 3 distinct persons.
Each of these persons has a specific Supply (an item that he/she possesses) and a Need (an item that needs to be acquired). Now, if this Supply-Need is reversed, and the reverse can be found in another person, they can trade.
Moreover, the pairs have a numerical value called Gravity that specifies the importance of the pair to the person. We can treat it as a weight on how much a person can be "satisfied" if the Supply-Need pair is met. Each person can only allocate and distribute 100-points of Gravity among all of his/her Supply-Need pairs.
Now, the Total Satisfaction of this process can be computed by getting the sum of all the Persons' individual satisfactions.
The objective is to have the group of Persons trade their Supply-Need pair/s in different combinations such that we can acquire the largest Total Satisfaction as possible.
In PSO, each Particle represents a candidate solution based on its location in the Search Space.
Given the attached examples in this post, we can say that Example-1.PNG is a distinct candidate solution to this problem, as well as Example-2.PNG.
What's the best approach in how this problem can be represented and evaluated by a Fitness/Objective Function?
How would you characterize this problem in PSO?
Do you have any recommendations of published work with the same problem as this?
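One way to frame this (a hedged sketch; all names here are illustrative, not part of your problem statement): a candidate solution is the set of trades executed, and the fitness is the sum of gravities of all Supply-Need pairs satisfied by those trades. In PSO terms, a particle could then be a binary vector over the feasible trade pairings (as in binary PSO), with Example-1 and Example-2 corresponding to two different such vectors.

```python
def total_satisfaction(persons, trades):
    """Fitness sketch for the trading problem. `persons` maps a person
    to a list of Supply-Need pairs, each with its gravity points.
    `trades` lists executed pairings ((person, pair_idx), (person,
    pair_idx)); a pairing contributes both gravities when the pairs are
    exact reverses of each other."""
    score = 0.0
    for (p1, i1), (p2, i2) in trades:
        a, b = persons[p1][i1], persons[p2][i2]
        if a["supply"] == b["need"] and b["supply"] == a["need"]:
            score += a["gravity"] + b["gravity"]
    return score

persons = {
    "A": [{"supply": "wood", "need": "iron", "gravity": 60.0}],
    "B": [{"supply": "iron", "need": "wood", "gravity": 40.0}],
}
print(total_satisfaction(persons, [(("A", 0), ("B", 0))]))  # -> 100.0
```

Because the search space is combinatorial (which trades to execute) rather than continuous, this sits closer to assignment/matching problems than to classical continuous PSO, so discrete PSO variants, or a different metaheuristic such as a GA with a permutation/matching encoding, may be the more natural fit.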
Hi, I am trying to implement Particle Swarm Optimization (or a genetic variant of it). However, I am already stuck at the first step...
I am confused about how to initialise the particles, and what these particles are in terms of code.
Can anyone explain, please?
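In code, a particle is just a record holding a position (the candidate solution vector), a velocity, and the best position/fitness it has personally seen. A minimal initialization sketch (plain dicts for clarity; a class works equally well):

```python
import random

def init_swarm(n_particles, dim, lo, hi, f, rng=random):
    """Create the initial swarm: positions uniformly random in the
    search domain, velocities zero (small random values also work),
    and each particle's personal best set to its starting point."""
    swarm = []
    for _ in range(n_particles):
        pos = [rng.uniform(lo, hi) for _ in range(dim)]
        swarm.append({
            "pos": pos,                 # candidate solution
            "vel": [0.0] * dim,         # per-dimension velocity
            "best_pos": list(pos),      # personal best so far
            "best_fit": f(pos),         # fitness of personal best
        })
    return swarm

f = lambda x: sum(v * v for v in x)      # objective to minimize
swarm = init_swarm(20, 3, -5.0, 5.0, f, random.Random(42))
print(len(swarm), len(swarm[0]["pos"]))  # -> 20 3
```

The main loop then repeatedly updates each particle's velocity from its personal best and the swarm's global best, moves the position by the velocity, and refreshes the bests; initialization is the only step that touches the domain bounds directly.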
I want a MATLAB code that shows how to integrate any SI algorithm (like Chicken Swarm Optimization) with a genetic algorithm as a wrapper feature selection method, and then use the reduced subset for data classification using SVM, kNN, C4.5, and so on.
The problem is in the hybridization step (how to combine the two algorithms: sequentially, in parallel, or otherwise).
Examples such as those found in ants, bees, bacteria etc. point to the fact that collective decision making is also an important aspect of decentralized decision-making process. Most bacteria utilize quorum sensing to maintain population density, ants use it to find new nests, and recently, robots and self-organizing systems are using this signalling mechanism for decision-making. This forms the basis of Swarm intelligence. Incorporating some of these signalling aspects of SI into AI systems would make them more efficient and 'intelligent', do you agree?
A textbook or classic are both fine. It is much better if it has pedagogical approach with a focus on optimization and algorithms implementation.
Want to have implementation details (code if possible) for solving job shop scheduling using bio-inspired algorithms like ant-colony optimization, genetic algorithm, cat swarm opt. etc. Especially want to know how to set initial parameters like number of iterations & other parameters like number of ants in case of ant-colony optimization etc.
I have no idea about this, researchers. Is it possible to solve the network-size issue in cloud computing using swarm intelligence?
In the domain of UAV swarming, many experimental set-ups may require at the same time accuracy and reliability of an off-the-shelf autopilot, plus some extra computational power that those autopilots cannot (yet) provide. If the experiment can be made cost-efficient, that would be even better. Those are currently my own user requirements, and I therefore would like to easily interconnect a Raspberry Pi and a regular autopilot (preferably Arduino-based, but any other combination is worth considering too) to steer a multi-copter and perform some computational intensive jobs such as computer vision.
I am currently in the early stages of testing an APM 2.6 + Raspberry Pi combination, notably following the following design guidelines: http://dev.ardupilot.com/wiki/companion-computers/raspberry-pi-via-mavlink/ (well, this tutorial is intended for Pixhawk autopilots but using MAVLink w/ APM should be ok too).
Now, I would be grateful for any practical feedback about this combination, or any other Raspberry/autopilot dual layout, so I can consider other design approaches if this one fails. I have also heard about Raspberry shields like Navio/Navio+, but I have not yet found well-founded opinions about the ability of such set-ups to correctly handle the stringent real-time constraints of a multi-copter design. So here again, any practical experience on the subject will be greatly appreciated!