An improved wolf colony search algorithm based on mutual communication by a sensor perception of wireless networking
- Chenghai Wu^{1},
- Kaiyu Qin^{1},
- Penghui He^{2} and
- Houbiao Li^{2}
https://doi.org/10.1186/s13638-018-1171-9
© The Author(s). 2018
Received: 2 May 2018
Accepted: 30 May 2018
Published: 15 June 2018
Abstract
Intelligence optimization algorithms have important applications in engineering calculation. Swarm intelligence optimization algorithms belong to the class of intelligent algorithms derived from simulating natural biological evolution or foraging behaviour. However, these algorithms sometimes converge slowly, and for multi-peak problems, they tend to fall into local optima in the later stages. To address these problems, this paper combines the advantages of these algorithms with the wolf colony search algorithm based on the strategy of the leader (LWCA) to propose an improved wolf colony search algorithm with the strategy of the leader (ILWCA). ILWCA is based on mutual communication by the sensor perception of wireless networking, adding a global update strategy and a step acceleration technique. In addition, by introducing the concept of individual density to describe the distribution density of the wolves, the problem of excessive input parameters in the traditional wolf pack algorithm is solved. Moreover, by adding a mode of mutual migration, the algorithm increases communication between wolves and strengthens the overall performance of the optimization process. Finally, experimental results show that the ILWCA algorithm achieves higher solution accuracy and better robustness than particle swarm optimization (PSO), the gravitational search algorithm (GSA), the swarm optimization algorithm (SOA), the grey wolf optimizer (GWO), and genetic algorithms (GA).
Keywords
1 Introduction
Intelligent optimization techniques have become very popular over the past few decades. In engineering practice, “novel” algorithms or theories are often encountered. For example, simulated annealing, genetic algorithms, tabu search, and neural networks are representative of these innovations. These algorithms and theories have some common features, such as simulation of natural processes, which cast them as “intelligent algorithms”. They are very useful in solving some complex engineering problems.
Swarm intelligence optimization algorithms are a class of intelligent algorithms derived from the simulation of natural biological evolution or foraging behaviour. In recent years, intelligent algorithms have become very popular, including genetic algorithms (GA) [1, 2], the particle swarm algorithm (PSA) [3, 4], the monkey search algorithm [5], and the artificial bee colony algorithm (ABC) [6]. However, these algorithms sometimes converge slowly, and for multi-peak problems, they are prone to fall into a local optimum in the later period of the search.
The wolf swarm algorithm is proposed on the basis of the hunting behaviour of the wolf pack and offers both good robustness and global search characteristics. The prey-hunting behaviour of wolves consists of wandering, attack, and siege, and many algorithms are now related to this behaviour. For example, the LWCA [7] added a leader strategy to the original wolf pack algorithm (WPA) [8]. The GWPA [9, 10] approach added a dual chaos direction update strategy. The MWCA [11] method introduced an interactive movement mode for the walking of raiding wolves in the foraging process and increased exchanges to enhance the overall algorithm. Additionally, MWCA [8] is aimed at multiple-peak problems, and MO-WCA [12] extends WCA to multi-objective problems. This paper primarily uses the double Gauss function update method from the small wolf pack algorithm (SWPA) [12], which updates the wolves to produce better individuals. Although the wolf swarm algorithm was proposed on the basis of bionic wolves in 2007, it has been widely developed in recent years [13, 14]. Building on WPA, GWO [15, 16] and the grey wolf optimization algorithm based on strengthening the hierarchy of wolves (GWOSH) [17] divided the wolves into four grades, with different grades of wolves using different walking and running behaviours. In this paper, the advantages of these algorithms are extended to reduce the number of input parameters of the wolf swarm algorithm, add an interactive walking movement, change the manner of walking, and improve the update strategy, yielding the new ILWCA algorithm. Experiments are performed on 22 test functions and compared with GWO, PSO, the gravitational search algorithm (GSA) [18], the swarm optimization algorithm (SOA) [19], and GA. The experimental results show that the ILWCA algorithm attains higher accuracy and better robustness.
The biological background of wolves is closely related to information science, as wolves in nature have a strict hierarchy. A pack usually has a leader wolf, explorer wolves, and fierce wolves, as well as other wolves outside this hierarchy. As top predators at the peak of the natural food chain, wolves exhibit typical system-level characteristics such as relevance and integrity [20, 21]. The wolf that finds the best prey howls to attract the other wolves to round it up. Together they capture the prey, ensure their collective survival, safeguard the integrity of the pack, and maintain a clear division of labour among individuals. Through their call behaviour and mutual communication [22], the leader wolf, the explorer wolves, and the fierce wolves communicate with each other in real time. Although they act independently, they interact as a pack and hunt together, accomplishing what a single wolf cannot. With behaviours such as visual observation, collaborative predatory detection, sense of smell, psychological induction, and body language, they establish an intelligent sensor network of real-time mutual communication. Their communication methods are concise and efficient [23], and the pack has strong abilities for seeking, coordinating, and controlling. These habits and characteristics make wolves a fitting basis for a search algorithm.
The rest of the paper is organised as follows: Section 2 outlines the proposed ILWCA algorithm. The experimental design and the results and discussion on the benchmark functions are presented in Sections 3 and 4, respectively. Finally, Section 5 concludes the work and suggests some directions for future studies.
2 Problem formulation and methods
In this section, a new wolf pack algorithm called ILWCA is presented, based on step acceleration technology and a global mutual-communication update strategy, combining the advantages of other algorithms. ILWCA also adds an interactive way of walking, which improves communication between wolves and the optimization process. The algorithm further introduces the double Gauss update strategy, which produces better individuals while optimising the entire pack.
2.1 Initialization of the wolf group
2.2 Wolves move toward the three leading wolves (raid)
The farther the α wolf is from the X_{ i } wolf, the smaller its effect; the better the α wolf’s fitness value, the greater its influence on the X_{ i } wolf. The same holds for the β and γ wolves. The factor \( {e}^{{\left(t/t\max \right)}^2} \) is introduced to increase the influence of the α wolf as the number of iterations grows and to accelerate the convergence of the algorithm. If the wolf X_{ i } moves to a location \( {Z}_i\left({z}_i^1,{z}_i^2,\cdots, {z}_i^D\right) \) whose fitness is better than that of its current location, the position of the X_{ i } wolf is updated to Z_{ i }; otherwise, the X_{ i } wolf does not move. The fitness values of the X_{ i } wolves after updating are compared with the fitness values of the α, β, and γ wolves, and the best three wolves are selected as the new α, β, and γ wolves, completing their update.
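The raid step described above can be sketched in Python as follows. This is a minimal sketch, not the authors' exact update rule: the precise step-size weighting, the random scaling, and the bound handling are assumptions, since the text gives only the qualitative dependence on fitness, distance, and the factor e^{(t/tmax)^2}.

```python
import numpy as np

def raid_step(X, fitness, t, t_max, lb, ub):
    """Sketch of the raid move: each wolf is pulled toward the three leading
    wolves (alpha, beta, gamma); nearer leaders pull harder, and the pull is
    amplified by exp((t/t_max)**2) as iterations progress."""
    order = np.argsort(fitness)            # minimisation: smaller is better
    leaders = X[order[:3]]                 # positions of alpha, beta, gamma
    amp = np.exp((t / t_max) ** 2)         # grows from 1 toward e
    Z = X.copy()
    for i in range(len(X)):
        pull = np.zeros(X.shape[1])
        for L in leaders:
            d = np.linalg.norm(L - X[i]) + 1e-12
            pull += (L - X[i]) / d         # farther leaders contribute less
        Z[i] = np.clip(X[i] + amp * np.random.rand() * pull, lb, ub)
    return Z
```

A greedy acceptance (keep Z_i only when its fitness improves on X_i) then completes the update, mirroring the rule that otherwise the X_i wolf does not move.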
2.3 Surrounding the prey (siege)
2.4 Updated decision
In the early stage, the wolves tend to search globally and should explore more unknown areas. If the wolves are densely distributed in a region and each wolf there is well adapted, some of the poorly adapted wolves in the pack can be selected to search in other regions. Because that region is already being searched by other wolves, this helps avoid falling into a local optimum. In the later stage, the tendency is to search carefully for better prey, so most of the wolves can be concentrated in a certain area for a detailed search. Each time the wolves are updated, the poorly adapted individuals tend to be eliminated.
Next, we introduce the method of the authors in [13] to depict the individual density of the X_{ i } wolf.
The truncation distance dc is a parameter that determines whether one wolf is in the vicinity of another: two wolves are considered close if the distance between them is smaller than dc. The truncation distance is determined by the parameter λ, which is set by the user in advance. After sorting all pairwise distances between the wolves, dc is taken at the λ% position of the sorted distances.
The local density of the X_{ i } wolf is the number of wolves whose distance from the X_{ i } wolf is less than dc; the larger this number, the greater the local density of the X_{ i } wolf. Here, d_{ ij } denotes the Euclidean distance between the X_{ i } wolf and the X_{ j } wolf, and P = {1, 2, ⋯, n}, where n is the total number of wolves.
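As a minimal sketch of this density measure, assuming the "cut-off distance λ%" means the λ-th percentile of all pairwise distances (an interpretation, not stated explicitly in this excerpt), the local density ρ_i can be computed as:

```python
import numpy as np

def local_density(X, lam=3.0):
    """Compute each wolf's local density: the number of other wolves lying
    within the truncation distance dc, where dc is taken as the lam-th
    percentile of all pairwise Euclidean distances d_ij."""
    n = len(X)
    # all pairwise Euclidean distances d_ij
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    pairs = D[np.triu_indices(n, k=1)]      # each unordered pair once
    dc = np.percentile(pairs, lam)          # truncation distance
    rho = np.array([np.sum(D[i] < dc) - 1 for i in range(n)])  # exclude self
    return rho, dc
```

Wolves in a tight cluster receive a high ρ_i, while isolated wolves receive a low one, which is exactly the signal the elimination step below relies on.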
The wolves at the front of set C have large local density and poor fitness, so wolves to be eliminated can be selected from the front of set C.
Random numbers in the interval [0, 1] are generated by the rand function. The condition \( \operatorname{rand}>{e}^{{\left(\frac{t}{t\max}\right)}^2} \) holds with larger probability in the early stage; as the iteration count increases, this probability becomes smaller and smaller. Hence, the algorithm tends to search globally at first and locally later.
N_{1} is the collection of eliminated wolves. C[1 : m] denotes the first m elements of the collection C. (A/C)[1 : m − l] denotes the first m − l elements remaining in set A after removing its intersection with C; (B/C)[1 : m − l] is defined analogously. \( \operatorname{rand}>{e}^{{\left(\frac{t}{t\max}\right)}^2} \) holds with high probability in the early stage of the algorithm, and this probability decreases as the iterations proceed. Thus, the ILWCA algorithm tends to eliminate individuals with larger local density in the earlier stage, which facilitates the exploration of more unknown areas; in the later stage, it tends to eliminate individuals with poor fitness, which increases local search speed, accelerates convergence, and helps find better prey.
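A simplified sketch of this elimination decision follows. Two caveats: the threshold e^{(t/tmax)^2} as written is never below 1, so for the comparison against rand ∈ [0, 1] to be meaningful the sketch uses the shifted variant e^{(t/tmax)^2 − 1}, which decays from about 0.63 acceptance probability early on to 0 at the final iteration; this shift, and the collapse of the A/B/C set logic into two sorted orderings, are assumptions of the sketch, not the paper's exact formulation.

```python
import numpy as np

def eliminate(fitness, rho, m, t, t_max):
    """Pick the indices of m wolves to eliminate.  Early on (the random draw
    is likely to exceed the decaying threshold) remove high-density wolves to
    push exploration; later remove the worst-fitness wolves to sharpen local
    search.  Fitness is minimised, so larger values are worse."""
    if np.random.rand() > np.exp((t / t_max) ** 2 - 1.0):
        # densest wolves first; ties broken by worse fitness
        order = np.lexsort((-fitness, -rho))
    else:
        order = np.argsort(-fitness)       # worst fitness first
    return order[:m]
```

At t = t_max the threshold equals 1, so the worst-fitness branch is always taken, matching the paper's late-stage behaviour.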
If the d-th dimensional position of the new-generation artificial wolf \( {W}_i^{\ast } \) is beyond the scope of hunting, the new position is regenerated with the double Gauss function method. The update process does not stop until the new artificial wolf’s d-th dimensional position lies within the hunting space.
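A hedged sketch of this boundary repair is shown below. The exact way SWPA combines the two Gaussian draws is not specified in this excerpt, so the mixing rule (a tanh squash of the sum of two draws onto the hunting interval) is an assumption chosen only to illustrate the regenerate-until-inside loop.

```python
import numpy as np

def regen_double_gauss(x, lb, ub, mu=0.0, sigma=1.0, max_tries=100):
    """Repair out-of-bounds dimensions of a new wolf: any coordinate outside
    [lb, ub] is regenerated from two Gaussian draws until it falls back into
    the hunting space; in-range coordinates are left untouched."""
    x = np.asarray(x, dtype=float).copy()
    for d in range(x.size):
        tries = 0
        while not (lb <= x[d] <= ub) and tries < max_tries:
            g1, g2 = np.random.normal(mu, sigma, size=2)
            centre = 0.5 * (lb + ub)
            span = 0.5 * (ub - lb)
            x[d] = centre + span * np.tanh(g1 + g2)   # assumed mixing rule
            tries += 1
        x[d] = np.clip(x[d], lb, ub)                  # safety net
    return x
```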
3 Experimental design
Multimodal benchmark functions
| Function (Fn) | Dim | Range | f_{min} |
|---|---|---|---|
| \( {F}_4(X)={\sum}_{i=1}^n{\left({x}_i-1/i\right)}^2 \) | 30 | [− 10, 10] | 0 |
| \( {F}_{10}(X)={\sum}_{i=1}^n-{x}_i\sin \left(\sqrt{\left|{x}_i\right|}\right) \) | 30 | [− 500, 500] | − 12,569.4 |
| \( {F}_{12}(X)=\frac{\pi }{n}\left\{10{\sin}^2\left(\pi {y}_1\right)+{\sum}_{i=1}^{n-1}{\left({y}_i-1\right)}^2\left[1+{\sin}^2\left(\pi {y}_{i+1}\right)\right]+{\left({y}_n-1\right)}^2\right\}+{\sum}_{i=1}^nu\left({x}_i,10,100,4\right) \) | 30 | [− 50, 50] | 0 |
| \( {F}_{16}(X)=\frac{1}{n}{\sum}_{i=1}^n\left({x}_i^4-16{x}_i^2+5{x}_i\right) \) | 100 | [− 5, 5] | − 78.3323 |
The initialization scale of ILWCA and of GWO, GSA, SOA, PSO, and GA is 30, and the maximum number of iterations is set to 800. For the remaining parameters, the number of eliminated wolves is m = 5, and the truncated distance parameter is λ = 3. The double Gauss update uses Gauss functions with a mean of 0 and a variance of 1. For GWO, the parameter a = 2. For GSA, the initial gravitational constant is G = 100, α = 20, and the speed range is [− 1, 1]. For SOA, the maximum membership degree is Umax = 0.9500 and the minimum membership degree is Umin = 0.0111. For PSO, the learning factors are C_{1} = C_{2} = 2. For GA, the crossover probability is 0.75 and the mutation probability is 0.05.
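The best/average/worst statistics reported in the next section can be produced by a harness of the following shape. The `optimiser` interface and the number of independent runs are assumptions for illustration; the excerpt does not state how many runs each statistic is taken over.

```python
import numpy as np

def benchmark(optimiser, f, runs=20, pop=30, iters=800):
    """Run a placeholder optimiser several times on objective f with the
    set-up above (population 30, 800 iterations) and report the best,
    average, and worst final values over the independent runs."""
    results = np.array([optimiser(f, pop, iters) for _ in range(runs)])
    return {"Average": results.mean(),
            "Best": results.min(),
            "Worst": results.max()}
```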
4 Results and discussion
4.1 Results of benchmark functions
Results of benchmark functions
| F | | ILWCA | GWO | PSO | GSA | SOA | GA |
|---|---|---|---|---|---|---|---|
| F4 | Average | 2.3098e−07 | 1.6809e−01 | 5.9244e+00 | 2.5968e+03 | 8.7209e+00 | 3.7533e−01 |
| | Best | 2.0231e−08 | 6.5745e−02 | 2.7091e+00 | 2.3267e+03 | 3.7315e+00 | 1.5339e−02 |
| | Worst | 1.0011e−06 | 8.6775e−01 | 2.2001e+01 | 2.7596e+03 | 1.7043e+01 | 1.2087e+00 |
| F10 | Average | − 8.3013e+03 | − 5.9874e+03 | − 5.6100e+03 | − 2.7100e+03 | − 2.6426e+03 | − 2.0209e+03 |
| | Best | − 1.0709e+04 | − 7.1086e+03 | − 8.4041e+03 | − 4.0804e+03 | − 3.4663e+03 | − 3.3395e+03 |
| | Worst | − 7.0223e+03 | − 4.7540e+03 | − 2.8426e+03 | − 1.7474e+03 | − 2.0591e+03 | − 1.3446e+03 |
| F12 | Average | 1.0207e+00 | 4.1775e−02 | 3.4556e−03 | 4.5266e+08 | 1.8131e+00 | 2.2822e+00 |
| | Best | 3.8368e−21 | 6.7666e−03 | 3.2972e−11 | 1.2343e+08 | 1.3187e−01 | 3.4683e−01 |
| | Worst | 9.8116e+00 | 1.2092e−01 | 1.0366e−01 | 6.5390e+08 | 3.8369e+00 | 3.9012e+00 |
| F16 | Average | − 6.7145e+01 | − 4.6695e+01 | − 6.5902e+01 | − 3.0162e+01 | − 5.1201e+01 | − 2.0334e+01 |
| | Best | − 7.0698e+01 | − 5.3634e+01 | − 6.9157e+01 | − 3.7542e+01 | − 6.4863e+01 | − 2.7265e+01 |
| | Worst | − 6.4195e+01 | − 3.9875e+01 | − 6.3337e+01 | − 2.6260e+01 | − 3.8281e+01 | − 1.5495e+01 |
The above six algorithms were analysed in terms of the following two aspects: the best value and the average value.
For function F4, the best value found by the ILWCA algorithm is the best among the algorithms, reaching the order of 1E−08. The optimal values for GWO and GA reach only the order of 1E−02, and the other three algorithms reach no better than 1E+00. For the functions F10, F12, and F16, ILWCA is likewise superior to the other algorithms in finding the optimal values.
Comparing mean values: for functions F4, F10, and F16, the average values found by ILWCA are better than those of the other algorithms. For function F12, the average values found by GWO and PSO are the best, with ILWCA ranking next.
To make a clearer comparison of these algorithms, some of the twenty-two test functions were chosen; the convergence contrast graphs and the track of the first-dimension variable in the search process of the ILWCA algorithm are shown in Figs. 3, 4, 5, 6, 7, and 8.
4.2 Convergence behaviour analysis
For the selected functions, we draw the convergence trend diagrams of the six algorithms and the convergence contrast diagrams (see Figs. 3, 4, 5, 6, 7, and 8), as well as the change in the first-dimension variable of the α wolf position during the ILWCA optimization process (see Figs. 5 and 8).
From the convergence diagrams of the ILWCA algorithm, we can see that as the number of iterations increases, the function value decreases monotonically and approaches the global optimal solution. To further observe the convergence behaviour of the algorithm, we also drew the search history of the first search agent in the first search dimension. For these functions, the first-dimension trajectory of the wolf changes sharply in the early period of the search but tends to be stable in the later iterations. This behaviour helps guarantee that a swarm intelligence algorithm eventually converges to a point in the search space and is a good indication of the stability of the ILWCA algorithm. For F12 in Fig. 8, the trajectory remains unstable even in the later stage, because the function has many overlapping ridges near the global optimum. Overall, the optimal prey found by the ILWCA algorithm is generally better than that found by the other algorithms.
5 Conclusions
This article presented a swarm intelligence algorithm based on the predation behaviour of wolves. The survival rule of the wolf pack, survival of the fittest, was used to find the global optimal solution. ILWCA combined the characteristics of other algorithms and added an interactive walk to strengthen the overall algorithm. To reduce the probability of the leading wolves drawing the entire pack into local solutions, an improved manner of wandering was adopted.
In addition, this paper introduced the wolves with the second-best and third-best fitness values. Together with the leader wolf, they led the entire pack to the global best value. The other wolves were influenced by these three wolves, and the influence was closely related to each leading wolf’s fitness value and its distance: the greater the fitness value and the closer the distance, the greater the effect.
To make the wolves explore more unknown areas, if the wolves were densely distributed in a particular region and each wolf there had a good fitness value, some of the poorly adapted wolves were selected to search other regions (elimination). Since there were already wolves in that area, falling into a local optimum could be avoided. In the later period of the algorithm, the search became more careful to facilitate finding better prey, so most of the wolves were concentrated in a certain area for a detailed search. Each time the wolves were updated, the poorly adapted individuals tended to be eliminated. In addition, the double Gauss function update method was introduced to generate better individuals.
Declarations
Acknowledgements
The authors would like to thank Dr. Seyedali Mirjalili and other authors for providing us some codes during the research and preparation of the manuscript.
Funding
The authors are partially supported by NSFC (61702083), The Doctoral Programme Foundation of State Education Ministry Grant (20130185110023) and Fundamental Research Funds for the Central Universities (ZYGX2016J131, ZYGX2016J138).
Authors’ contributions
CW and PH contributed to the conception and algorithm design of the study. PH and KQ contributed to the acquisition of simulation. CW and HL contributed to the analysis of the simulation data and approved the final manuscript. All authors read and approved the final manuscript.
Authors’ information
Chenghai Wu received the M.S. and PhD degrees from the University of Electronic Science and Technology of China in 2010 and 2012, respectively. He is a Senior Engineer at the Beijing Special Vehicle Institute, Beijing, China. His research interests lie in Multi-Agent Systems, Coordinated Control, Vehicle Formation, Modelling and Simulation, Computational Intelligence, Communication and Electronics, Wireless Sensor Networks, Software Engineering and the Internet of Things. (e-mail: uestchwu@126.com).
Kaiyu Qin received the M.S. and PhD degrees from University of Electronic Science and Technology of China, in 1994 and 1999, respectively. He is a professor in the School of Astronautics and Aeronautics, University of Electronic Science and Technology of China, Chengdu, Sichuan, China. His research interests lie in the Simulation, Testing, Verification and Evaluation of Complex Electronic Systems, Space Electronic Systems, Microwave Modules and Component Technologies. (e-mail:kyqin@uestc.edu.cn).
Penghui He is currently pursuing the M.S. degree with the School of Mathematical Science, University of Electronic Science and Technology of China, Chengdu, China. His research interests lie in artificial intelligence and image restoration. (e-mail: m18380132550@163.com).
Houbiao Li received the M.S. and PhD degrees in computational mathematics from the School of Mathematical Science, University of Electronic Science and Technology of China (UESTC), Chengdu, China, in 2004 and 2007, respectively. He is currently an Associate Professor with the School of Mathematical Sciences, UESTC. He has authored over 40 scientific papers. His research interests lie in numerical algebra and parallel computing. (e-mail:lhb0189@uestc.edu.cn).
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
- PM Pardalos, HE Romeijn, Recent developments and trends in global optimization. J. Comput. Appl. Math. 124(1–2), 209–228 (2000). https://doi.org/10.1016/S0377-0427(00)00425-8
- X Yan, D Peng, J Ma, Analog circuit diagnosis based on wolf pack algorithm radial basis function network. Comput. Eng. Appl. 53(19), 152–156 (2017). https://doi.org/10.3778/j.issn.1002-8331.1607-0182
- JH Holland, Outline for a logical theory of adaptive systems. J. Assoc. Comput. Mach. 9(3), 297–314 (1962)
- S Mirjalili, SM Mirjalili, A Lewis, Grey wolf optimizer. Adv. Eng. Softw. 69, 46–61 (2014). https://doi.org/10.1016/j.advengsoft.2013.12.007
- C Liu, X Yan, C Liu, et al., The wolf colony algorithm and its application. Chin. J. Electron. 20(2), 212–216 (2011)
- D Karaboga, B Basturk, A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. J. Glob. Optim. 39(3), 459–471 (2007). https://doi.org/10.1007/s10898-007-9149-x
- Q Zhou, Y Zhou, Wolf colony search algorithm based on leader strategy. Appl. Res. Comput. 30(9), 2629–2632 (2013). https://doi.org/10.3969/j.issn.1001-3695.2013.09.018
- J Bin, Z Jin, Multi-objective sink nodes coverage algorithm based on quantum wolf pack evolution. J. Electron. Inf. 39(5), 1–7 (2017). https://doi.org/10.11999/JEIT160693
- X Hui, G Qing, Z Yu, et al., An improved wolf pack algorithm. Control and Decision 32(7), 1164–1172 (2017). https://doi.org/10.13195/j.kzyjc.2016.0690
- X Zhang, Q Tu, Q Kang, et al., Grey wolf optimization algorithm with double-hunting modes and its application to multi-threshold image segmentation. J. Shanxi Univ. 39(3), 378–385 (2016). https://doi.org/10.13451/j.cnki.shanxi.univ(nat.sci.).2016.03.006
- Y Liu, J Yong, W Song, et al., Track planning for unmanned aerial vehicles based on wolf pack algorithm. J. Syst. Simul. 27(8), 1838–1843 (2015). https://doi.org/10.16182/j.cnki.joss.2015.08.027
- J Xue, W Ying, J Xiao, et al., A small wolf pack algorithm and its convergence analysis. Control and Decision 31(12), 2131–2139 (2016). https://doi.org/10.13195/j.kzyjc.2016.0690
- H Zheng, K Chen, An incremental dynamic clustering method based on the representative points and the density peaks. J. Zhejiang Univ. Technol. 45(4), 427–433 (2017). https://doi.org/10.3969/j.issn.1006-4303.2017.04.014
- T Wu, F Zhang, L Wu, New swarm intelligence algorithm: wolf pack algorithm. Syst. Eng. Electron. 35(11), 2431–2438 (2013). https://doi.org/10.3969/j.issn.1001-506X.2013.11.33
- Q Zhou, Y Zhou, Wolf colony search algorithm based on leader strategy. Appl. Res. Comput. 30(9), 2630–2632 (2013). https://doi.org/10.3969/j.issn.1001-3695.2013.09.018
- PC Pinto, TA Runkler, JM Sousa, Wasp swarm algorithm for dynamic MAX-SAT problems, in Adaptive and Natural Computing Algorithms (Springer, 2007), pp. 350–357. https://doi.org/10.1007/978-3-540-71618-1_39
- X Zhang, Q Tu, Q Kang, et al., Grey wolf optimization algorithm based on strengthening hierarchy of wolves. J. Data Acquis. Process. 32(5), 880–889 (2017). https://doi.org/10.16337/j.1004-9037.2017.05.004
- NM Sabri, M Puteh, MR Mahmood, A review of gravitational search algorithm. Int. J. Adv. Soft Comput. Appl. 5(3), 1–39 (2013)
- KK Seo, Content-based image retrieval technique by combining swarm optimization algorithm and support vector machines. J. Comput. Theor. Nanosci. 10(8), 1693–1700 (2013)
- X Lu, Y Zhou, A novel global convergence algorithm: bee collecting pollen algorithm, in Advanced Intelligent Computing Theories and Applications. With Aspects of Artificial Intelligence (Springer, 2008), pp. 518–525. https://doi.org/10.1007/978-3-540-85984-0_62
- AH Gandomi, AH Alavi, Krill herd: a new bio-inspired optimization algorithm. Commun. Nonlinear Sci. Numer. Simul. 17(12), 4831–4845 (2012)
- X-S Yang, Test problems in optimization. arXiv preprint arXiv:1008.0549 (2010)
- J Huang, Z Liang, Q Zang, Dynamics and swing control of double-pendulum bridge cranes with distributed-mass beams. Mech. Syst. Signal Process. 54–55, 357–366 (2015)
- X-S Yang, Firefly algorithm, stochastic test functions and design optimisation. Int. J. Bio-Inspired Comput. 2(2), 78–84 (2010). https://doi.org/10.1504/IJBIC.2010.032124