SHI Jianping, LI Peisheng, LIU Guoping, LIU Peng
(1. School of Mechatronics Engineering, Nanchang University, Nanchang 330031; 2. School of Electronics and Communication Engineering, Guiyang University, Guiyang 550005; 3. School of Gemmology and Materials Technology, Hebei GEO University, Shijiazhuang 050031)
Optimization problems are often encountered in the fields of scientific research, engineering technology, and management decision-making. How to obtain the optimal or suboptimal solutions of these problems at the minimum cost has long been a hot topic for researchers. In recent years, swarm intelligence optimization algorithms, such as the genetic algorithm[1], particle swarm optimization[2], the ant colony algorithm[3], the differential evolution algorithm[4], and the artificial bee colony algorithm[5], have been widely used to solve such optimization problems. In most cases, swarm intelligence optimization algorithms achieve more satisfactory results than traditional optimization methods, so they have attracted more and more attention from scholars.
Inspired by the foraging behavior of fruit flies, Ref.[6] proposed a novel swarm intelligence-based meta-heuristic algorithm, namely the fruit fly optimization algorithm (FOA). Compared with other swarm intelligence-based algorithms, FOA has a simple algorithmic framework with few tuning parameters, which makes it easy to understand and implement. As a novel method for finding the global optimum, the FOA technique has received much attention and has been widely applied to real-world problems, such as financial distress model solving[6], the multidimensional knapsack problem[7], neural network training[8], continuous mathematical function optimization[9], power load forecasting[10-11], PID controller parameter tuning[12-13], semiconductor final testing[14], joint replenishment problems[15-16], as well as many other problems in scientific and engineering fields[17-21].
However, FOA also has its own shortcomings. In particular, the taste concentration judgment value cannot take a negative value, so FOA cannot deal with optimization problems with negative decision variables. In addition, as the algorithm iterates, the diversity of the swarm is rapidly lost, making the algorithm prone to falling into local optima. Furthermore, the weak local search ability of FOA leads to low convergence accuracy. To improve the convergence performance of FOA, a series of improvement studies have been carried out by many scholars at home and abroad. Ref.[22] proposed an improved FOA by adding an escaping parameter to the taste concentration judgment value, in which each fruit fly individual searches in three-dimensional space. Ref.[23] used a linear generation mechanism to produce candidate solutions, instead of the nonlinear generation mechanism of the basic FOA, and proposed a new improved fruit fly algorithm based on this linear mechanism. Ref.[24] presented a novel multi-swarm fruit fly optimization algorithm, whose performance was greatly improved by employing multi-swarm behavior. By introducing a new control parameter and an effective solution generating method, Ref.[9] also proposed an improved fruit fly optimization algorithm for solving continuous optimization problems. Ref.[25] introduced a novel parameter integrated with chaos into the basic FOA and put forward a chaotic fruit fly algorithm. Ref.[26] modified the expression of the smell concentration judgment value in FOA and introduced a differential vector to replace the stochastic search mechanism. Ref.[27] proposed an improved fruit fly optimization algorithm based on intelligently selecting the evolutionary direction. Ref.[28] introduced a multi-scale cooperative mutation mechanism and proposed a multi-scale cooperative mutation fruit fly optimization algorithm.
Ref.[29] added two sign parameters to the original FOA so that it can deal with not only the positive side of the search space but the whole space. In order to better balance the global and local search abilities, Ref.[30] proposed an improved FOA based on a hybrid location information exchange mechanism. By embedding a trend search strategy into the original FOA and combining it with a co-evolution mechanism, Ref.[31] developed a novel fruit fly optimization algorithm. Ref.[16] proposed an improved fruit fly optimization algorithm with a level probability policy and a new mutation parameter, aiming at expanding the search space and skipping local optima. Ref.[32] introduced a normal cloud model into fruit fly osphresis foraging and proposed a normal cloud model based FOA. Based on historical memory and an elite strategy mechanism, Ref.[33] proposed an adaptive fruit fly optimization algorithm.
The afore-mentioned variants of FOA are all based on a single evolutionary strategy. However, this single evolutionary model often lacks an effective mechanism for maintaining the diversity of the swarm, and it is difficult to reasonably balance the global exploration and local exploitation of the algorithm, so the optimization process is prone to premature convergence. In addition, according to the foraging behavior of an actual fruit fly swarm, the visual foraging behavior of each individual should follow its olfactory foraging behavior. To overcome the shortcomings mentioned above, this article presents an improved FOA based on multi-strategy evolution and dynamic updating of swarm optimal information (MDFOA). The MDFOA is applied to various standard optimization problems and compared with the original FOA, six variants of FOA, and two state-of-the-art algorithms.
The rest of the paper is organized as follows. Section 1 introduces the basic FOA. The MDFOA is described in detail in Section 2. The experimental design and comparisons are presented in Section 3. Finally, Section 4 gives the concluding remarks.
The basic FOA is a swarm intelligence-based meta-heuristic algorithm inspired by the foraging behavior of fruit flies. The fruit fly is superior to many other species in osphresis (smell) and vision. The osphresis organs of fruit flies can sense odors floating in the air even 40 km away. When hunting for food, a fruit fly first uses its olfactory sense to find the direction of the food source. Then, after getting close to the food location, it flies toward the food source using its visual sense. The procedure of the basic FOA can be described as follows[6]:
1) Initialize related parameters, including the population size, the maximum number of iterations, and the location of the fruit fly swarm (X_axis, Y_axis).
2) For each fly in the swarm, a random fly direction and distance are provided, and the new location is generated by using Eq. (1):
where i denotes the ith fly in the swarm, and RandomValue represents the fly range parameter.
3) Given that the food source is unknown, calculate the distance between each fly and the origin by using Eq. (2), and then calculate the smell concentration judgment value by using Eq. (3):
4) Substitute the smell concentration judgment value (S_i) into the smell concentration judgment function (also called the fitness function), and calculate the smell concentration value by using Eq. (4):
5) Identify the best smell concentration value in the fruit fly swarm (finding the minimum value) by using Eq. (5):
6) Keep the best smell concentration value and the corresponding X and Y coordinates, and at this moment, the fruit fly swarm moves toward the location by using their vision:
7) Repeat steps 2)~6). Stop when the iteration number reaches the maximum number of iterations.
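The steps above can be sketched in a short program. This is only a minimal illustration of the basic FOA on a toy fitness function, with names (`basic_foa`, `rand_range`) chosen here rather than taken from the paper; note that S = 1/Dist is always positive, which is exactly the limitation discussed later.

```python
import math
import random

def basic_foa(fitness, max_iter=200, pop_size=30, rand_range=1.0):
    # Step 1): initialize the swarm location (X_axis, Y_axis).
    x_axis, y_axis = random.random(), random.random()
    best_smell = float("inf")
    for _ in range(max_iter):
        smells, coords = [], []
        for _ in range(pop_size):
            # Step 2), Eq.(1): random fly direction and distance around the swarm.
            xi = x_axis + rand_range * (2 * random.random() - 1)
            yi = y_axis + rand_range * (2 * random.random() - 1)
            # Step 3), Eqs.(2)-(3): distance to the origin, then S = 1/Dist.
            dist = math.sqrt(xi ** 2 + yi ** 2)
            s = 1.0 / dist
            # Step 4), Eq.(4): smell concentration from the fitness function.
            smells.append(fitness(s))
            coords.append((xi, yi))
        # Step 5), Eq.(5): best (minimum) smell concentration in this generation.
        idx = min(range(pop_size), key=lambda k: smells[k])
        # Step 6): keep the best value; the swarm flies to that location by vision.
        if smells[idx] < best_smell:
            best_smell = smells[idx]
            x_axis, y_axis = coords[idx]
    return best_smell

# Example: minimize (S - 0.5)^2, whose optimum lies at S = 0.5.
result = basic_foa(lambda s: (s - 0.5) ** 2)
```

Because the candidate S values are derived only from 1/Dist, the sketch makes visible why negative decision variables are out of reach for the basic FOA.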
Due to its simple evolutionary mechanism, FOA is unable to effectively balance its global and local search abilities, which makes it difficult to achieve ideal optimization results on complex optimization problems. Inspired by other state-of-the-art swarm intelligence-based algorithms, such as the particle swarm optimization algorithm (PSO)[34-35] and the differential evolution algorithm (DE)[36], an improved version of FOA (MDFOA) is developed in this study. We detail the MDFOA as follows.
Multi-strategy evolution refers to selecting one strategy from a strategy base as the iterative equation for the current individual's olfactory search. Different fly individuals of the same generation may use different evolutionary strategies, so the diversity of the swarm is improved and the global search ability of the algorithm is enhanced. The selection of strategies is a key factor affecting the optimization performance of the algorithm: with more strategies in the strategy base, the algorithm is more powerful in solving complex optimization problems, whereas with too few strategies it may be difficult for the algorithm to achieve the expected convergence quality. After repeated numerical simulation experiments, the strategy base in this study consists of the following strategies:
In the basic FOA, the search radius is fixed and cannot be changed during iterations. This leads to a low convergence speed and a tendency to fall into local optimal solutions. In the early iterations, the fruit fly swarm location is usually far from the optimal solution, so the perturbation component should take values in a larger range to find a promising region. In the final generations, the swarm location is close to the optimal solution, and the perturbation component should be taken in a small scope to further improve the convergence accuracy of the algorithm. Therefore, the parameter ω in the above evolutionary strategies is changed dynamically with iterations as follows:
where α and β are two adjustment coefficients; for the sake of simplicity, we take the same value for both in this paper, that is, α = β. Iteration is the current iteration number, and Iteration_max is the maximum iteration number.
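Since Eq. (12) itself is not reproduced in this excerpt, the following is only an illustrative stand-in that matches the qualitative behavior described above (a large perturbation range early, a small one late, governed by α and β with α = β = 6 as in the experiments); the paper's actual Eq. (12) may differ.

```python
import math

def omega(iteration, iteration_max, alpha=6.0, beta=6.0):
    # Hypothetical decaying schedule, NOT the paper's exact Eq.(12):
    # with alpha == beta this reduces to a simple exponential decay
    # from 1 at the first iteration toward exp(-alpha) at the last.
    return math.exp(-alpha * (iteration / iteration_max) ** (alpha / beta))

early, late = omega(10, 500), omega(490, 500)
```

Any schedule with this shape lets the swarm explore widely at first and then concentrate the perturbations near the optimum in the final generations.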
In the basic FOA, the visual search of each fly is based on the premise that the olfactory search of the whole fruit fly swarm has been completed, which delays the update of the swarm's optimal position information and thus reduces the convergence speed of the algorithm. To solve this problem, we propose a mechanism of real-time dynamic updating in this paper: after a fly completes its olfactory search, the new position is immediately evaluated. If the new position is better than the current optimal position of the swarm, the optimal location and its corresponding information are updated at once, so the swarm information is kept up to date in real time.
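A minimal sketch of this real-time updating mechanism, assuming a minimization problem; all names here (`olfactory_generation`, `step`) are illustrative rather than from the paper:

```python
import random

def olfactory_generation(fitness, pop_size, swarm_best, swarm_best_pos, step=0.1):
    # Each fly searches around the *current* swarm optimum; a better position
    # replaces the optimum immediately, so flies evolved later in the same
    # generation already exploit the fresh information.
    for _ in range(pop_size):
        candidate = [x + step * (2 * random.random() - 1) for x in swarm_best_pos]
        f = fitness(candidate)
        if f < swarm_best:
            swarm_best, swarm_best_pos = f, candidate  # update in real time
    return swarm_best, swarm_best_pos

# Example on the 2-D sphere function, starting from (1, 1).
sphere = lambda v: sum(x * x for x in v)
best_pos = [1.0, 1.0]
best = sphere(best_pos)
for _ in range(50):
    best, best_pos = olfactory_generation(sphere, 20, best, best_pos)
```

In the basic FOA, by contrast, the comparison against the swarm optimum would happen only once per generation, after all flies had moved.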
The detailed steps of implementing the MDFOA are described as follows.
1) Initialize related parameters, including the population size ps, the maximum number of iterations Iteration_max, and the adjustment coefficients α and β. The position of each fruit fly is randomly generated and evaluated, and the best position is selected as the current optimal position of the swarm.
2) For the ith fly, a strategy selected from Eqs. (7)~(11) is used as the evolutionary strategy for its olfactory search, and the value of the parameter ω is calculated by using Eq. (12).
3) According to Eq. (13) and Eq. (14), the best historical position of the fly individual and the best historical position of the fruit fly swarm are updated in real time, respectively. Here, we abandon the calculation of the distance Dist_i and directly take the position vector X_i as the smell concentration judgment value, i.e., S_i = X_i, which removes the shortcoming that S_i cannot take a negative value in the basic FOA:
4) If all flies of the current generation have completed the evolution operation, the next step is executed; otherwise, return to step 2).
5) If the iteration number reaches the maximum number of iterations, the algorithm ends; otherwise, return to step 2).
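The whole procedure can be sketched as follows. Since the concrete strategy base of Eqs. (7)~(11) and the exact forms of Eqs. (12)~(14) are not reproduced in this excerpt, the two-strategy base and the ω schedule below are simplified stand-ins for illustration only:

```python
import math
import random

def mdfoa(fitness, dim, lb, ub, pop_size=50, max_iter=500, alpha=6.0):
    # Step 1): random initialization and evaluation of the swarm.
    swarm = [[lb + random.random() * (ub - lb) for _ in range(dim)]
             for _ in range(pop_size)]
    fits = [fitness(x) for x in swarm]
    k = min(range(pop_size), key=lambda i: fits[i])
    best_pos, best_fit = list(swarm[k]), fits[k]

    for it in range(max_iter):
        # Stand-in for Eq.(12): the search radius shrinks with iterations.
        w = (ub - lb) * math.exp(-alpha * it / max_iter)
        for i in range(pop_size):
            # Step 2): randomly pick a strategy from the (simplified) base:
            # search either around the swarm optimum or around the fly itself.
            center = best_pos if random.random() < 0.5 else swarm[i]
            cand = [c + w * (2 * random.random() - 1) for c in center]
            # Step 3): S_i = X_i, so negative decision variables are allowed;
            # individual and swarm optima are updated in real time.
            f = fitness(cand)
            if f < fits[i]:
                swarm[i], fits[i] = cand, f
            if f < best_fit:
                best_pos, best_fit = list(cand), f
        # Steps 4)-5): continue until the maximum iteration number is reached.
    return best_pos, best_fit

# Example: 2-D sphere function over [-10, 10]^2, a domain with negative values.
pos, val = mdfoa(lambda x: sum(v * v for v in x), dim=2, lb=-10.0, ub=10.0)
```

The example deliberately uses a search domain containing negative values, which the basic FOA (with S_i = 1/Dist_i > 0) could not handle.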
For the purpose of verifying the performance of MDFOA, 29 well-known benchmark functions are considered in this test. For a detailed description of these benchmark functions, please refer to Ref.[9]. Among these test functions, the first 15 problems are unimodal functions and the remaining 14 are multimodal functions. The effectiveness of the algorithm is evaluated on these benchmark functions with the dimension varying among 30, 60, 100, 200, 300, and 400. The results of the MDFOA are compared with those of the basic FOA, LGMS-FOA[23], AE-LGMS-FOA[19], MFOA[22], IFFO[9], SFOA[29], IFOA[15], PSO, and DE. All the compared algorithms are coded in MATLAB R2013a. The computation is conducted on a personal computer (PC) with an Intel(R) Core(TM) i7-7700 3.6 GHz CPU, 16 GB RAM, and the Windows 10 operating system.
The experimental parameters of the other nine algorithms are set according to the corresponding papers. The parameter settings of the different algorithms are described below.
In LGMS-FOA, we set ω0 = 1, α = 0.95, and n = 0.005. The parameters of AE-LGMS-FOA are p = 0.005, ω0 = 1, and n = 10, and 80% of the best population is used to generate X_Av. For FOA and MFOA, the random initialization zone of the fruit fly swarm location is [0, 10], and the random direction and distance range of iterative fruit fly food searching is [−1, 1]. The parameters of IFFO are λ_max = (UB − LB)/2 and λ_min = 10^−5, where UB and LB are the upper and lower bounds of the independent variables, respectively. In IFOA, 50% of the individuals in a swarm fly toward the local optimal solution and the others fly randomly, and the perturbation amplification factor (ω) is 0.3. For PSO, ω_max = 0.9, ω_min = 0.4, c1 = c2 = 2, and v_max is set to be 20% of x_max. The mutation operator of DE is best/1, and the differentiation factor (F) and crossover probability (Cr) are both set to 0.5. The parameters of MDFOA are α = β = 6. The population size of each algorithm is set to 50, and the number of iterations is set to 500.
In this section, we present the optimization results of different algorithms.
In order to show the reliability, stability, and robustness of the results, each function is optimized over 30 independent runs, and the mean value (Mean) and standard deviation (Std) are reported in Tables 1 and 2 for all functions with dimensions equal to 30 and 60, respectively. The average convergence curves of the optimization processes for some typical functions by the different algorithms are illustrated in Fig.1 and Fig.2 (to facilitate evaluation and observation, the fitness of the objective function in the graphs is plotted as a base-10 logarithm).
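The reporting convention above can be made concrete with a small illustration; the run values here are hypothetical, not taken from the tables:

```python
import math
import statistics

# Hypothetical final fitness values from repeated independent runs
# (the paper uses 30 runs; 5 made-up values shown for brevity).
runs = [1e-8, 2e-8, 5e-9, 1.5e-8, 3e-8]
mean = statistics.mean(runs)       # the "Mean" column of the tables
std = statistics.stdev(runs)       # the "Std" column (sample deviation)
log10_mean = math.log10(mean)      # the value plotted on convergence curves
```

Plotting log10 of the averaged fitness is what makes differences between algorithms near the optimum (e.g. 1e-8 vs. 1e-20) visible on the same axes.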
Fig.1 Average convergence curves on some 30-dimensional functions
From Tables 1 and 2, we can find that MDFOA outperforms the other algorithms in many respects:
1) For the unimodal functions (F1-F15), MDFOA obtains almost all the best mean results (except on F5). In terms of algorithm stability, MDFOA gets 13 best results (except for F2 and F5). For F2 with dimension 30, AE-LGMS-FOA gets the smallest standard deviation, while for F2 with dimension 60, SFOA gets the smallest standard deviation. For F5 with dimensions 30 and 60, IFOA obtains the best mean result and the smallest standard deviation, followed by MDFOA. Eight algorithms, including LGMS-FOA, AE-LGMS-FOA, MFOA, IFFO, SFOA, IFOA, DE, and MDFOA, converge to the theoretical optimal value of function F11 with dimension 30. When the dimension is increased to 60, six algorithms, including LGMS-FOA, AE-LGMS-FOA, MFOA, SFOA, IFOA, and MDFOA, converge to the theoretical optimal value of function F11.
Fig.2 Average convergence curves on some 60-dimensional functions
2) For the multimodal functions (F16-F29), MDFOA obtains all the best mean results. In terms of algorithm stability, MDFOA also gets 13 best results (except on F23). For F23 with dimensions 30 and 60, SFOA obtains the smallest standard deviation.
3) Increasing the problem dimension has little effect on the convergence performance of MDFOA, whereas the convergence performance of the other algorithms decreases obviously as the dimension of the problem increases.
Table 1 Comparison with other algorithms on 30-dimensional functions
Table 2 Comparison with other algorithms on 60-dimensional functions
4) For most of the 29 complex benchmark functions, MDFOA converges to the theoretical optimal value, which shows that MDFOA has strong global and local search abilities, as well as strong stability.
From Fig.1 and Fig.2, it can be observed that the evolution curves of MDFOA descend much faster and reach lower levels than those of the other nine algorithms, indicating that MDFOA has the advantages of fast convergence speed and high convergence accuracy. The results in Fig.1 and Fig.2 also show that several of the other algorithms are easily trapped in local optima.
To test the ability of MDFOA to deal with high-dimensional complex optimization problems, the dimensions of the benchmark functions are increased to 100, 200, 300, and 400, respectively. The parameter setting and operating environment of MDFOA are kept the same as described above. The mean value and standard deviation are calculated over 30 independent replications, and the results are shown in Table 3. The average convergence curves of the optimization processes for some typical functions by MDFOA are illustrated in Fig.3.
From Table 3, we can find that MDFOA is still highly effective and reliable in solving high-dimensional complex problems. The change in the dimension of the problems has little effect on the optimization performance of MDFOA. Among the 29 100-dimensional benchmark functions, MDFOA converges to the theoretical optimal value on 23 functions. The same conclusion is reached when the dimension is increased to 200, 300, and 400. MDFOA also shows high convergence accuracy and excellent stability on the benchmark functions for which it fails to converge to the theoretical optimal value. The above results are further verified by Fig.3. It should be noted that as the scale of the problem increases, the running time of the algorithm also increases.
Table 3 High-dimensional benchmark functions optimization results of MDFOA
Fig.3 Average convergence curves on some high-dimensional functions
To overcome the shortcomings of the basic FOA, such as low convergence accuracy and a tendency to fall into local optima, we propose a novel improved fruit fly optimization algorithm, namely the multi-strategy dynamic fruit fly optimization algorithm (MDFOA). A strategy randomly selected from the strategy base is used as the current evolution operator of each fruit fly individual, and the different evolutionary strategies complement each other to effectively improve the overall optimization performance of the algorithm. To further improve the convergence speed, the algorithm employs a mechanism of dynamically updating the swarm information in real time, so the not-yet-evolved individuals of the current generation can obtain the latest global optimal information in time. Tests on 29 benchmark functions indicate that the proposed MDFOA performs better than FOA, LGMS-FOA, AE-LGMS-FOA, MFOA, IFFO, SFOA, IFOA, PSO, and DE in terms of convergence accuracy, convergence speed, and convergence stability. MDFOA also shows good performance in solving high-dimensional function optimization problems.
In this paper, an evolutionary strategy is chosen from the strategy base by random selection. Adaptive strategy selection may be a better choice, and it will be one of the topics of our future research. In addition, the parameters α and β in Eq. (12) are two key parameters affecting the convergence performance of the algorithm, and their reasonable settings also need to be further studied in future work.