Cat Swarm Optimization
Shu-Chuan Chu¹, Pei-wei Tsai², and Jeng-Shyang Pan²
¹ Department of Information Management, Cheng Shiu University
² Department of Electronic Engineering, National Kaohsiung University of Applied Sciences
Abstract. In this paper, we present a new swarm-intelligence algorithm, namely, Cat Swarm Optimization (CSO). CSO was devised by observing the behaviors of cats and is composed of two sub-models, the seeking mode and the tracing mode, which model two major behaviors of cats. Experimental results on six test functions demonstrate that CSO performs much better than Particle Swarm Optimization (PSO).
1 Introduction
In the field of optimization, many algorithms have been proposed in recent years, e.g., the Genetic Algorithm (GA) [1-2], Ant Colony Optimization (ACO) [6-7], Particle Swarm Optimization (PSO) [3-5], and Simulated Annealing (SA) [8-9]. Some of these optimization algorithms were developed based on swarm intelligence. Cat Swarm Optimization (CSO), the algorithm proposed in this paper, is motivated by PSO [3] and ACO [6].
According to the literature, PSO with a weighting factor [4] usually finds a better solution faster than pure PSO, but our experimental results show that Cat Swarm Optimization (CSO) performs better still.
Observing the behavior of creatures can suggest ideas for solving optimization problems: studying the behavior of ants led to ACO, and examining the movements of flocking gulls led to PSO. By inspecting the behavior of cats, we arrive at the Cat Swarm Optimization (CSO) algorithm.
2 Behaviors of Cats
According to biological classification, the feline family contains about thirty-two species, e.g., the lion, tiger, leopard, and cat. Although they live in different environments, most felines share many behaviors.
Hunting skill is not innate in felines; it is acquired through training. For wild felines, hunting ensures the survival of their species, while for indoor cats it shows up as a strong natural curiosity toward anything that moves.
Although all cats have this strong curiosity, they are inactive most of the time. If you spend some time observing cats, you will easily find that they spend most of their waking hours resting.
Cats are highly alert; they stay vigilant even while resting. Thus a cat often looks lazy, lying somewhere with its eyes wide open and looking around; at that moment it is observing its environment. Cats may seem idle, but they are in fact smart and deliberate.
Of course, a careful examination of cat behavior would reveal far more than the two remarkable properties discussed above.
3 Proposed Algorithm
In our proposed Cat Swarm Optimization, we first model the two major behaviors of cats as two sub-models, namely, the seeking mode and the tracing mode. By mixing these two modes in a user-defined proportion, CSO achieves better performance.
3.1 The Solution Set in the Model -- Cat
Whatever the optimization algorithm, the solution set must be represented in some way. For example, GA uses chromosomes to represent solution sets; ACO uses ants as agents, and the paths made by the ants depict the solution sets; PSO uses the positions of particles to delineate the solution sets. In our proposed algorithm, we use cats, together with a model of their behaviors, to solve optimization problems, i.e., the cats portray the solution sets.
In CSO, we first decide how many cats to use, then apply those cats to the problem.
Every cat has its own position composed of M dimensions, a velocity for each dimension, a fitness value representing how well the cat fits the fitness function, and a flag identifying whether the cat is in seeking mode or tracing mode. The final solution is the best position found by any of the cats, since CSO keeps the best solution found so far until the end of the iterations.
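For concreteness, the per-cat state just described can be collected into a single record. The following minimal Python sketch is our own illustration, not part of the original paper; names such as Cat and make_cat are hypothetical:

```python
from dataclasses import dataclass
import random

@dataclass
class Cat:
    """State of one cat: an M-dimensional candidate solution."""
    position: list        # one coordinate per dimension (the solution)
    velocity: list        # one velocity per dimension
    fitness: float = float("inf")  # fitness of the current position
    seeking: bool = True  # flag: True = seeking mode, False = tracing mode

def make_cat(m, lo, hi, v_max):
    """Scatter one cat at a random point of the M-dimensional space."""
    return Cat(position=[random.uniform(lo, hi) for _ in range(m)],
               velocity=[random.uniform(-v_max, v_max) for _ in range(m)])
```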
3.2 Seeking Mode
This sub-model represents the cat that is resting, looking around, and seeking the next position to move to. In seeking mode, we define four essential factors: seeking memory pool (SMP), seeking range of the selected dimension (SRD), counts of dimension to change (CDC), and self-position considering (SPC).
SMP defines the size of the seeking memory of each cat, i.e., the number of points sought by the cat. The cat picks a point from this memory pool according to the rules described later.
SRD declares the mutative ratio for the selected dimensions. In seeking mode, if a dimension is selected to mutate, the difference between the new value and the old one will not exceed the range defined by SRD.
CDC discloses how many dimensions will be varied. All of these factors play important roles in the seeking mode.
SPC is a Boolean variable that decides whether the point where the cat is already standing will be one of the candidate points to move to. Whether SPC is true or false, the value of SMP is not influenced. The seeking mode can be described in 5 steps as follows:
Step 1: Make j copies of the present position of cat_k, where j = SMP. If the value of SPC is true, let j = (SMP - 1) and retain the present position as one of the candidates.
Step 2: For each copy, according to CDC, randomly add or subtract SRD percent of the present values, replacing the old values.
Step 3: Calculate the fitness values (FS) of all candidate points.
Step 4: If the FS values are not all exactly equal, calculate the selection probability of each candidate point by equation (1); otherwise, set the selection probability of every candidate point to 1.
Step 5: Randomly pick the point to move to from the candidate points, and replace the position of cat_k.
\[ P_i = \frac{\left| FS_i - FS_b \right|}{FS_{max} - FS_{min}}, \quad \text{where } 0 < i < j \tag{1} \]

If the goal of the fitness function is to find the minimum solution, FS_b = FS_max; otherwise, FS_b = FS_min.
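The five steps and equation (1) can be summarized in the Python sketch below. This is our illustrative reading of the procedure, assuming a minimization objective (so FS_b = FS_max); it reuses the hypothetical Cat record from Section 3.1, srd is given as a fraction (0.2 for 20 percent), and smp, srd, cdc, and spc stand for the four factors defined above:

```python
import random

def seeking_mode(cat, fitness, smp, srd, cdc, spc):
    # Step 1: make j copies of the present position; when SPC is true
    # the present position itself is kept as one of the candidates.
    j = smp - 1 if spc else smp
    candidates = [list(cat.position) for _ in range(j)]

    # Step 2: in each copy, mutate CDC randomly chosen dimensions by
    # randomly adding or subtracting SRD percent of the present value.
    m = len(cat.position)
    for cand in candidates:
        for d in random.sample(range(m), cdc):
            cand[d] += random.choice((-1, 1)) * srd * cand[d]
    if spc:
        candidates.append(list(cat.position))

    # Step 3: evaluate the fitness FS of every candidate point.
    fs = [fitness(c) for c in candidates]
    fs_min, fs_max = min(fs), max(fs)

    # Step 4: selection probabilities from equation (1), with
    # FS_b = FS_max for minimization. If all FS are exactly equal,
    # every probability is set to 1.
    if fs_max == fs_min:
        probs = [1.0] * len(candidates)
    else:
        probs = [abs(f - fs_max) / (fs_max - fs_min) for f in fs]

    # Step 5: randomly pick the point to move to, weighted by the
    # selection probabilities, and replace the cat's position.
    cat.position = random.choices(candidates, weights=probs, k=1)[0]
```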
3.3 Tracing Mode
Tracing mode is the sub-model for the case of a cat tracing some target. Once a cat goes into tracing mode, it moves according to its own velocity in every dimension. The action of tracing mode can be described in 3 steps as follows:
Step 1: Update the velocity of every dimension (v_{k,d}) according to equation (2).
Step 2: Check whether each velocity is within the maximum-velocity range. If a new velocity is out of range, set it equal to the limit.
Step 3: Update the position of cat_k according to equation (3).
\[ v_{k,d} = v_{k,d} + r_1 c_1 \left( x_{best,d} - x_{k,d} \right), \quad \text{where } d = 1, 2, \ldots, M \tag{2} \]

x_{best,d} is the position of the cat with the best fitness value; x_{k,d} is the position of cat_k. c_1 is a constant and r_1 is a random value in the range [0, 1].

\[ x_{k,d} = x_{k,d} + v_{k,d} \tag{3} \]
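In code, the three steps reduce to a velocity update clamped to the maximum-velocity range, followed by a position update. The sketch below is again our own illustration, with c1 and v_max supplied by the user:

```python
import random

def tracing_mode(cat, x_best, c1, v_max):
    for d in range(len(cat.position)):
        # Step 1: velocity update of equation (2).
        r1 = random.random()
        cat.velocity[d] += r1 * c1 * (x_best[d] - cat.position[d])
        # Step 2: clamp an over-range velocity to the limit.
        cat.velocity[d] = max(-v_max, min(v_max, cat.velocity[d]))
        # Step 3: position update of equation (3).
        cat.position[d] += cat.velocity[d]
```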
3.4 Cat Swarm Optimization
As described in the subsections above, CSO includes two sub-models, the seeking mode and the tracing mode. To combine the two modes into one algorithm, we define a mixture ratio (MR) that joins the seeking mode with the tracing mode.
Observing the behaviors of cats, we notice that cats spend most of their waking time resting. While resting, they move their position carefully and slowly, and sometimes stay in the original position. We use the seeking mode to represent this behavior in CSO.
The cat's behavior of running after targets is applied to the tracing mode. Therefore, it is clear that MR should be a tiny value, in order to guarantee that the cats spend most of their time in seeking mode, just as in the real world.
The process of CSO can be described in 6 steps as follows:
Step 1: Create N cats.
Step 2: Randomly scatter the cats in the M-dimensional solution space, and randomly assign each cat velocities within the maximum-velocity range. Then, according to MR, randomly pick a number of cats and set them into tracing mode, and set the others into seeking mode.
Step 3: Evaluate the fitness value of each cat by applying its position to the fitness function, which represents the criterion of our goal, and keep the best cat in memory. Note that we only need to remember the position of the best cat (x_best), because it represents the best solution so far.
Step 4: Move the cats according to their flags: if cat_k is in seeking mode, apply the seeking-mode process to it; otherwise, apply the tracing-mode process. Both processes are presented above.
Step 5: According to MR, re-pick a number of cats and set them into tracing mode, then set the other cats into seeking mode.
Step 6: Check the termination condition; if it is satisfied, terminate the program, otherwise repeat Steps 3 to 5.
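Putting the pieces together, the six steps can be sketched as a single loop. This sketch reuses the hypothetical make_cat, seeking_mode, and tracing_mode helpers from the earlier sketches, and it terminates on a fixed iteration budget; any detail beyond the six steps above is our own assumption:

```python
import random

def cso(fitness, n_cats, m, lo, hi, v_max, mr,
        smp, srd, cdc, spc, c1, n_iters):
    # Steps 1-2: create N cats and scatter them randomly in the
    # M-dimensional space with in-range random velocities.
    cats = [make_cat(m, lo, hi, v_max) for _ in range(n_cats)]

    x_best, f_best = None, float("inf")
    for _ in range(n_iters):
        # Steps 2 and 5: (re-)pick cats into tracing mode according
        # to MR; the others go into seeking mode.
        for cat in cats:
            cat.seeking = random.random() >= mr

        # Step 3: evaluate every cat and keep the best position.
        for cat in cats:
            cat.fitness = fitness(cat.position)
            if cat.fitness < f_best:
                f_best, x_best = cat.fitness, list(cat.position)

        # Step 4: move each cat according to its flag.
        for cat in cats:
            if cat.seeking:
                seeking_mode(cat, fitness, smp, srd, cdc, spc)
            else:
                tracing_mode(cat, x_best, c1, v_max)

    # Step 6: the termination condition here is the iteration budget.
    return x_best, f_best
```

A call such as cso(lambda x: sum(v * v for v in x), n_cats=160, m=30, lo=-100, hi=100, v_max=1.0, mr=0.02, smp=5, srd=0.2, cdc=24, spc=True, c1=2.0, n_iters=500) would then minimize a sphere function; all of these parameter values are illustrative, not settings reported in the paper.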
4 Experimental Results
We applied CSO, PSO, and PSO with a weighting factor to six test functions to compare their performance. All the experiments demonstrate that the proposed Cat Swarm Optimization (CSO) is superior to both PSO and PSO with a weighting factor. Due to the space limit of this paper, only the experimental results of test function one are shown in Fig. 1.
Fig. 1. The experimental result of test function 1
References
1. Goldberg, D.E.: Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley (1989)
2. Pan, J.S., McInnes, F.R., Jack, M.A.: Application of Parallel Genetic Algorithm and Property of Multiple Global Optima to VQ Codevector Index Assignment. Electronics Letters 32(4) (1996) 296-297
3. Eberhart, R., Kennedy, J.: A New Optimizer Using Particle Swarm Theory. Sixth International Symposium on Micro Machine and Human Science (1995) 39-43
4. Shi, Y., Eberhart, R.: Empirical Study of Particle Swarm Optimization. Congress on Evolutionary Computation (1999) 1945-1950
5. Chang, J.F., Chu, S.C., Roddick, J.F., Pan, J.S.: A Parallel Particle Swarm Optimization Algorithm with Communication Strategies. Journal of Information Science and Engineering 21(4) (2005) 809-818
6. Dorigo, M., Gambardella, L.M.: Ant Colony System: A Cooperative Learning Approach to the Traveling Salesman Problem. IEEE Trans. on Evolutionary Computation 1(1) (1997) 53-66
7. Chu, S.C., Roddick, J.F., Pan, J.S.: Ant Colony System with Communication Strategies. Information Sciences 167 (2004) 63-76
8. Kirkpatrick, S., Gelatt, C.D., Jr., Vecchi, M.P.: Optimization by Simulated Annealing. Science 220 (1983) 671-680
9. Huang, H.C., Pan, J.S., Lu, Z.M., Sun, S.H., Hang, H.M.: Vector Quantization Based on Genetic Simulated Annealing. Signal Processing 81(7) (2001) 1513-1523