CHEMICAL ENGINEERING TRANSACTIONS VOL. 51, 2016
A publication of The Italian Association of Chemical Engineering
Online at www.aidic.it/cet
Guest Editors: Tichun Wang, Hongyang Zhang, Lei Tian
Copyright © 2016, AIDIC Servizi S.r.l., ISBN 978-88-95608-43-3; ISSN 2283-9216

An Improved Ant Colony Algorithm for Continuous Optimization Based on Levy Flight

Hongzhang Wang*, Conggang Liang
School of Mathematics and Information Science, Pingdingshan University, Pingdingshan, Henan 467000, China
13903759297@126.com

Ant colony optimization (ACO) was proposed on the basis of studies of the foraging behaviour of ants and is widely used in optimization. However, it has shortcomings such as long search time, difficult implementation and a tendency to become trapped in local optima. To overcome these shortcomings, and drawing on the characteristics of Levy flight, an ant colony optimization based on Levy flight (LFACO) is proposed, in which Levy flight replaces the random local search in order to improve search efficiency. To test the performance of the new algorithm, we apply it to 15 benchmark functions and compare it with the GA, PSO and ACO algorithms. The comparison results show that LFACO is far better than the other three algorithms in solution quality.

1. Introduction
With the development of computing technology, it has become possible to carry out random search in the solution space. According to the search method used, random search can be divided into blind random search and guided random search. Blind random search samples feasible solutions completely at random in the solution space, and the method becomes inefficient when the scale of the problem is large. Guided random search concentrates the random search in the solution space according to a certain strategy and accumulates experience during the search, which makes the search intelligent; commonly used intelligent methods of this kind are the genetic algorithm (Goldberg, 2002) and particle swarm optimization (Kennedy and Eberhart, 1995).
Through extensive research, biologists have found that the reason an ant colony can find the shortest path between a food source and its nest during foraging is that individual ants deposit a special substance, the pheromone, along the paths they travel, and this substance becomes a clue for other ants searching for food or for the nest. Subsequent ants that encounter pheromone can not only detect its content but also determine their direction of travel according to its concentration. The pheromone gradually evaporates over time, so the length of a path and the amount of pheromone left by the ants that have used it strongly influence the choices of later ants. Within a given period, the more ants that traverse a path, the greater the probability that later ants select that path; when an ant chooses a path, the pheromone concentration on that path is reinforced, encouraging still more ants to select it. Through this simple exchange of information, a large number of ants realise a positive-feedback learning mechanism and can quickly find the shortest path from the food source to the nest. A typical representative of such biological swarm intelligence algorithms is the ant colony algorithm: Ant Colony Optimization (ACO) was proposed on the basis of studies of the foraging behaviour of ants (Dorigo and Caro, 1999).
After years of development, the model construction of the algorithm has become relatively mature and complete; with characteristics such as strong robustness, positive feedback and distributed computation, it has been successfully applied in the field of discrete combinatorial optimization and has become one of the most effective optimization tools. However, ACO also has the following shortcomings: (1) the algorithm searches for the optimum only under the guidance of pheromone, so the search time is long; (2) when a practical problem is solved by the ant colony algorithm, the problem must first be described, and the algorithm is not powerful enough to describe complex problems; (3) during the search, the algorithm is prone to stagnation.
In fact, in nature the flight or foraging behaviour of an individual is not entirely random; rather, it obeys a Levy flight (Yang and Deb, 2009). The so-called Levy flight is a random walk whose step size obeys the Levy distribution. Specifically, individuals generally fly or forage only within a small area, but a small fraction of individuals will suddenly fly to a distant place. This behaviour is very conducive to the search process and is widely used in the CS algorithm (Yang and Deb, 2010) and the CSB algorithm (Yin and Liu, 2015). In order to shorten the search time and avoid search stagnation, this paper proposes an ant colony algorithm based on Levy flight. To apply Levy flight to more practical problems, this paper addresses function optimization problems in continuous space and, combined with the ACO algorithm, proposes an ant colony optimization algorithm based on Levy flight (LFACO). This algorithm uses Levy flight to perform the local search, unlike the random search of the ACO algorithm, so the local search ability of the algorithm is greatly improved. In order to test the performance of the new algorithm, we apply it to 15 benchmark functions and compare it with the GA, PSO and ACO algorithms.

2. The description of LFACO
2.1 ACO algorithm
Assume that the scale of the problem to be solved is n, that the total number of ants in the colony is m, and that the pheromone concentration on path (i, j) at time t is \tau_{ij}(t). When ant k moves, it selects its path according to the pheromone concentration on each path and the heuristic information of the path, from which a state transition probability is computed and used to choose the next node. Let p_{ij}^{k}(t) denote the probability that at time t ant k moves from node i to node j; it is computed as

p_{ij}^{k}(t) = \begin{cases} \dfrac{[\tau_{ij}(t)]^{\alpha}\,[\mu_{ij}(t)]^{\beta}}{\sum_{s \in allowed_k} [\tau_{is}(t)]^{\alpha}\,[\mu_{is}(t)]^{\beta}}, & \text{if } j \in allowed_k \\ 0, & \text{otherwise} \end{cases}    (1)

In the above equation, allowed_k is the set of nodes that ant k is allowed to select in the next step, \alpha is the pheromone heuristic factor, \beta is the expected heuristic factor, and \mu_{ij}(t) is the heuristic function.
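As an illustration of how the state transition rule in Eq. (1) can be applied, the following is a minimal sketch in Python (for illustration only; the paper's experiments were implemented in R). The function names and the default values alpha = 1.0 and beta = 2.0 are our own assumptions, and tau and mu stand for \tau_{ij}(t) and \mu_{ij}(t):

import random

def transition_probabilities(i, allowed, tau, mu, alpha=1.0, beta=2.0):
    # Eq. (1): the weight of each candidate node j combines pheromone tau[i][j]
    # and heuristic information mu[i][j], normalised over the allowed set.
    weights = {j: (tau[i][j] ** alpha) * (mu[i][j] ** beta) for j in allowed}
    total = sum(weights.values())
    return {j: w / total for j, w in weights.items()}

def choose_next_node(i, allowed, tau, mu, alpha=1.0, beta=2.0):
    # Roulette-wheel selection: node j is chosen with probability p_ij^k(t).
    probs = transition_probabilities(i, allowed, tau, mu, alpha, beta)
    r, acc = random.random(), 0.0
    for j, p in probs.items():
        acc += p
        if r <= acc:
            return j
    return max(probs, key=probs.get)  # fallback guarding against rounding error

For example, with tau and mu stored as n-by-n lists of floats, choose_next_node(i, allowed, tau, mu) returns the next node for an ant currently located at node i.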
In order to prevent early pheromone residues from accumulating so much that they drown out the heuristic information, the residual pheromone is updated after every ant completes a step or finishes a traversal of all n cities. Thus at time t+1 the pheromone concentration on path (i, j) is adjusted and updated according to the following rules:

\tau_{ij}(t + 1) = (1 - \rho)\,\tau_{ij}(t) + \Delta\tau_{ij}(t)    (2)

\Delta\tau_{ij}(t) = \sum_{k=1}^{m} \Delta\tau_{ij}^{k}(t)    (3)

In the above equations, \rho is the pheromone evaporation coefficient; to prevent unlimited accumulation of pheromone, the range of \rho is [0, 1). \Delta\tau_{ij}(t) denotes the pheromone increment on path (i, j), and \Delta\tau_{ij}^{k}(t) denotes the pheromone deposited on path (i, j) by the k-th ant. The pseudo code of ACO is shown in Figure 1.

Figure 1: The pseudo code of ACO

2.2 Levy Flight
The foraging path of animals in nature was long regarded as a random or quasi-random walk. However, various studies have shown that the flight behaviour of many animals and insects exhibits the typical characteristics of Levy flights. Broadly speaking, Levy flights are a random walk whose step length is drawn from the Levy distribution, often expressed by the simple power law L(s) ~ |s|^{-1-\beta} with 0 < \beta < 2. Obviously, generating step-size samples from a Levy distribution is not trivial. A simple scheme, discussed in detail by Yang (2010), can be summarised as

L(s) ~ \dfrac{u}{|v|^{1/\beta}}    (4)

in which u ~ N(0, \sigma_u^2) and v ~ N(0, \sigma_v^2) are normally distributed.

2.3 The description of LFACO
A possible way to improve the performance of ACO is to replace the pheromone evaporation coefficient \rho, which is a constant in ACO, with one drawn from the Levy distribution. We therefore propose a new improved ACO algorithm based on Levy flight (LFACO), in which each movement of each individual ant obeys the Levy distribution. In LFACO, Eq. (4) is introduced into Eq. (2), giving the following equation for updating the pheromone associated with each individual:

\tau_{ij}(t + 1) = (1 - \rho_{ij})\,\tau_{ij}(t) + \Delta\tau_{ij}(t)    (5)

In Eq. (5), \rho_{ij} ~ levy(\beta), where levy(\beta) ~ \dfrac{u}{|v|^{1/\beta}}\,(\tau_{ij}(t + 1) - \tau_{ij}(t)), u ~ N(0, \sigma_u^2), v ~ N(0, 1), and

\sigma_u = \left[\dfrac{\Gamma(1+\beta)\,\sin(\pi\beta/2)}{\Gamma\!\left(\frac{1+\beta}{2}\right)\,\beta\,2^{(\beta-1)/2}}\right]^{1/\beta}

Combining the above analysis, the steps of LFACO are shown in Figure 2: initialise the population; construct the pheromone matrix; perform the moving operator for each individual; perform the Levy flight as a local search for each individual; select the best individual according to fitness; update the pheromone matrix; repeat until the stopping criterion is met.

Figure 2: The description of LFACO
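As a concrete illustration of Eqs. (4) and (5), the following is a minimal Python sketch (for illustration only; the paper's experiments were implemented in R). The function names, the default beta = 1.5 and the clipping of rho_ij into [0, 1) are our own assumptions, and the sketch omits the (\tau_{ij}(t+1) - \tau_{ij}(t)) scaling of the levy(\beta) term for simplicity:

import math
import random

def levy_step(beta=1.5):
    # Mantegna-style sampling of Eq. (4): s = u / |v|^(1/beta),
    # with u ~ N(0, sigma_u^2) and v ~ N(0, 1).
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
               (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0.0, sigma_u)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def lfaco_pheromone_update(tau, delta_tau, beta=1.5):
    # Eq. (5): tau_ij(t+1) = (1 - rho_ij) * tau_ij(t) + delta_tau_ij(t),
    # where each rho_ij is drawn from the Levy distribution instead of being constant.
    n = len(tau)
    new_tau = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            rho_ij = min(abs(levy_step(beta)), 0.99)  # assumed clipping into [0, 1)
            new_tau[i][j] = (1 - rho_ij) * tau[i][j] + delta_tau[i][j]
    return new_tau

The occasional large Levy step makes the per-edge evaporation rate fluctuate strongly from time to time, which is what allows LFACO to escape the stagnation described above.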
3. Function Optimization
A function optimization problem is often expressed in the following form:

min f(X)   subject to   L \le X \le U    (6)

in which X = {x_1, x_2, x_3, ..., x_n} is a vector, L = {l_1, l_2, l_3, ..., l_n} is the lower bound of X, and U = {u_1, u_2, u_3, ..., u_n} is the upper bound of X.
In this work, we test and verify the effectiveness of the proposed LFACO algorithm by using the following 15 benchmark functions, whose definitions, domains and optimal values are given below.
(1) The definition of f1 is as follows:

f_1(x) = \sum_{i=1}^{n} x_i^2    (7)

Its domain is -5.12 \le x_i \le 5.12, i = 1, 2, 3, ..., n. Its optimal argument and value are x^* = (0, 0, 0, ..., 0), f(x^*) = 0.
(2) The definition of f2 is as follows:

f_2(x) = \sum_{i=1}^{n} i x_i^2    (8)

Its domain is -5.12 \le x_i \le 5.12, i = 1, 2, 3, ..., n. Its optimal argument and value are x^* = (0, 0, 0, ..., 0), f(x^*) = 0.
(3) The definition of f3 is as follows:

f_3(x) = \sum_{i=1}^{n-1} [100(x_i^2 - x_{i+1})^2 + (x_i - 1)^2]    (9)

Its domain is -2.048 \le x_i \le 2.048, i = 1, 2, 3, ..., n. Its optimal argument and value are x^* = (1, 1, 1, ..., 1), f(x^*) = 0.
(4) The definition of f4 is as follows:

f_4(x) = 10n + \sum_{i=1}^{n} (x_i^2 - 10\cos(2\pi x_i))    (10)

Its domain is -5.12 \le x_i \le 5.12, i = 1, 2, 3, ..., n. Its optimal argument and value are x^* = (0, 0, 0, ..., 0), f(x^*) = 0.
(5) The definition of f5 is as follows:

f_5(x) = \sum_{i=1}^{n} \frac{x_i^2}{4000} - \prod_{i=1}^{n} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1    (11)

Its domain is -1 \le x_i \le 1, i = 1, 2, 3, ..., n. Its optimal argument and value are x^* = (0, 0, 0, ..., 0), f(x^*) = 0.
(6) The definition of f6 is as follows:

f_6(x) = \sum_{i=1}^{n} |x_i|^{i+1}    (12)

Its domain is -32.768 \le x_i \le 32.768, i = 1, 2, 3, ..., n. Its optimal argument and value are x^* = (0, 0, 0, ..., 0), f(x^*) = 0.
(7) The definition of f7 is as follows:

f_7(x) = 20 + e - 20\exp\left(-\frac{1}{5}\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n} \cos(2\pi x_i)\right)    (13)

Its domain is -32.768 \le x_i \le 32.768, i = 1, 2, 3, ..., n. Its optimal argument and value are x^* = (0, 0, 0, ..., 0), f(x^*) = 0.
(8) The definition of f8 is as follows:

f_8(x) = \max_i(|x_i|), i = 1, 2, 3, ..., n    (14)

Its domain is -100 \le x_i \le 100, i = 1, 2, 3, ..., n. Its optimal argument and value are x^* = (0, 0, 0, ..., 0), f(x^*) = 0.
(9) The definition of f9 is as follows:

f_9(x) = \max_i(|x_i|), i = 1, 2, 3, ..., n    (15)

Its domain is -100 \le x_i \le 100, i = 1, 2, 3, ..., n. Its optimal argument and value are x^* = (0, 0, 0, ..., 0), f(x^*) = 0.
(10) The definition of f10 is as follows:

f_{10}(x) = \sum_{i=1}^{n} x_i^2 + \left(\sum_{i=1}^{n} 0.5 i x_i\right)^2 + \left(\sum_{i=1}^{n} 0.5 i x_i\right)^4    (16)

Its domain is -5 \le x_i \le 10, i = 1, 2, 3, ..., n. Its optimal argument and value are x^* = (0, 0, 0, ..., 0), f(x^*) = 0.
(11) The definition of f11 is as follows:

f_{11}(x) = \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 (1 + 10\sin^2(\pi y_i + 1)) + (y_n - 1)^2 (1 + \sin^2(2\pi y_n)), \quad y_i = 1 + \frac{x_i - 1}{4}, i = 1, 2, 3, ..., n    (17)

Its domain is -10 \le x_i \le 10, i = 1, 2, 3, ..., n. Its optimal argument and value are x^* = (1, 1, 1, ..., 1), f(x^*) = 0.
(12) The definition of f12 is as follows:

f_{12}(x) = \sum_{i=1}^{n} \left(\sum_{j=1}^{i} x_j^2\right)    (18)

Its domain is -65.536 \le x_i \le 65.536, i = 1, 2, 3, ..., n. Its optimal argument and value are x^* = (0, 0, 0, ..., 0), f(x^*) = 0.
(13) The definition of f13 is as follows:

f_{13}(x) = 1 - \cos\left(2\pi\sqrt{\sum_{i=1}^{n} x_i^2}\right) + 0.1\sqrt{\sum_{i=1}^{n} x_i^2}    (19)

Its domain is -100 \le x_i \le 100, i = 1, 2, 3, ..., n. Its optimal argument and value are x^* = (0, 0, 0, ..., 0), f(x^*) = 0.
(14) The definition of f14 is as follows:

f_{14}(x) = 0.1 + \sum_{i=1}^{n} \sin^2(x_i) - 0.1\prod_{i=1}^{n} \exp(-x_i^2)    (20)

Its domain is -10 \le x_i \le 10, i = 1, 2, 3, ..., n. Its optimal argument and value are x^* = (0, 0, 0, ..., 0), f(x^*) = 0.
(15) The definition of f15 is as follows:

f_{15}(x) = \sum_{i=1}^{n} (x_i - 1)^2 - \sum_{i=2}^{n} x_i x_{i-1} + \frac{n(n+4)(n-1)}{6}    (21)

Its domain is -n^2 \le x_i \le n^2, i = 1, 2, 3, ..., n. Its optimal argument and value are x_i^* = i(n - i + 1), i = 1, 2, 3, ..., n, f(x^*) = 0.

4. Experiment Result
In order to compare the results of the LFACO algorithm, the GA, PSO, ACO and LFACO algorithms were all implemented in the R programming language, and all programs were run on a PC with a 3.3 GHz Core Duo processor, 4 GB of RAM and the Windows 7 operating system. For the 15 benchmark functions, each of the four algorithms was run 100 times, and the averaged experimental results are shown in Table 1 and Table 2.
As Table 1 shows, over all 15 benchmark test functions the convergence speed of PSO is the best and that of LFACO is better than those of GA and ACO, which shows that LFACO does indeed search faster than ACO during the optimization process. Next we focus on comparing the results of LFACO with those of the other three algorithms. Table 2 shows that LFACO obtains the best optimization results; although the execution time of the PSO algorithm is the best, as shown in Table 1, Table 2 shows that the PSO results are the worst of the four algorithms, which demonstrates that no single algorithm is best on every index.

Table 1: Runtime of GA, PSO, ACO and LFACO on the 15 benchmark functions
Function   GA      PSO     ACO     LFACO
f1         10.41   2.21    10.21   7.34
f2         10.74   2.28    10.53   7.57
f3         11.04   22.64   10.82   7.78
f4         11.67   2.48    11.45   8.23
f5         11.49   23.58   11.27   8.10
f6         10.39   2.44    10.73   8.10
f7         11.84   2.51    11.62   8.35
f8         10.71   2.27    10.50   7.55
f9         10.84   2.30    10.63   7.64
f10        11.18   2.37    10.97   7.89
f11        11.50   2.44    11.28   8.11
f12        10.82   2.30    10.61   7.63
f13        10.96   2.33    10.75   7.73
f14        11.49   2.44    11.27   8.10
f15        11.42   2.42    11.20   8.05

5. Conclusion
In order to improve the performance of the ACO algorithm in function optimization, in this paper Levy flight is introduced into the ACO algorithm to obtain a new individual update equation. Based on the above discussion, we propose a new ACO algorithm based on Levy flight (LFACO) and describe its steps and process in detail. To verify the LFACO algorithm, 15 benchmark functions are used to test LFACO, which is compared with the GA, PSO and ACO algorithms. The comparison results show that, for the function optimization problem, LFACO is far better than the other three algorithms in effectiveness, and its run time is also lower than those of GA and ACO.
Table 2: The average and standard deviation of GA, PSO, ACO and LFACO on the 15 benchmark functions
Function   GA               PSO              ACO              LFACO
f1         0.00±0.06        0.00±8.06        2.05±1.29        0.00±0.00
f2         0.16±0.37        5.43±16.97       87.13±20.54      0.00±0.00
f3         28.71±0.96       44.12±34.10      84.24±32.06      12.67±2.97
f4         15.80±4.69       97.41±32.9       99.27±13.52      3.01±1.48
f5         0.67±0.19        0.03±2.67        208.24±39.01     0.00±0.00
f6         0.00±0.00        0.00±0.00        0.00±0.00        0.00±0.00
f7         1.86±0.53        0.01±3.41        8.25±0.84        0.04±0.03
f8         12.43±2.46       17.91±5.35       24.08±2.52       4.90±1.11
f9         0.25±0.30        80.00±23.41      46.99±8.60       0.00±0.00
f10        27.32±4.18       86.19±14.73      94.91±101.01     0.68±0.60
f11        0.01±0.35        1.85±5.09        4.17±13.83       0.00±0.00
f12        11.32±77.17      258.88±28.96     139.01±31.99     0.00±0.00
f13        3.30±0.50        0.82±2.06        6.00±0.97        1.80±0.32
f14        0.13±0.05        3.36±0.75        0.23±0.08        0.10±0.00
f15        222.33±163.57    192.27±125.16    122.36±43.15     82.83±33.85

Acknowledgment
This work was supported by a 2011 science and technology foundation project of the Henan Provincial Science and Technology Department, "Exact solutions of higher order nonlinear partial differential equations", project approval number 112300410199.

References
Dorigo M., Caro G.D., 1999, Ant colony optimization: A new meta-heuristic, Proceedings of the 1999 Congress on Evolutionary Computation, 2, 1470-1477.
Goldberg D.E., 2002, The Design of Innovation: Lessons from and for Competent Genetic Algorithms, Norwell, MA: Kluwer Academic Publishers, ISBN 978-1402070983.
Kennedy J., Eberhart R., 1995, Particle swarm optimization, IEEE International Conference on Neural Networks, 4, 1942-1948, DOI: 10.1109/ICNN.1995.488968.
Yang X.S., Deb S., 2009, Cuckoo Search via Lévy flights, Proceedings of the World Congress on Nature & Biologically Inspired Computing, India, 210-214.
Yang X.S., Deb S., 2010, Engineering optimization by Cuckoo Search, Int. J. Mathematical Modelling and Numerical Optimization, 1(4), 330-343.
Yin L., Liu Y.G., 2015, Biclustering of the gene expression data by coevolution cuckoo search, International Journal Bioautomation, 19(2), 161-176.