## Peng Wang*, Jiyun Bai* and Jun Meng*

| | [TeX:] $$G_{1}$$ | [TeX:] $$G_{2}$$ | ⋯ | [TeX:] $$G_{i}$$ | ⋯ | [TeX:] $$G_{n}$$ | | |
|---|---|---|---|---|---|---|---|---|
| [TeX:] $$s_{1}$$ | [TeX:] $$s_{11}$$ | [TeX:] $$s_{12}$$ | ⋯ | [TeX:] $$s_{1 i}$$ | ⋯ | [TeX:] $$s_{1 n}$$ | [TeX:] $$f\left(s_{1}\right)$$ | [TeX:] $$\omega_{1}$$ |
| ⋮ | ⋮ | ⋮ | | ⋮ | | ⋮ | ⋮ | ⋮ |
| [TeX:] $$s_{l}$$ | [TeX:] $$s_{l 1}$$ | [TeX:] $$s_{l 2}$$ | ⋯ | [TeX:] $$s_{l i}$$ | ⋯ | [TeX:] $$s_{l n}$$ | [TeX:] $$f\left(s_{l}\right)$$ | [TeX:] $$\omega_{l}$$ |
| ⋮ | ⋮ | ⋮ | | ⋮ | | ⋮ | ⋮ | ⋮ |
| [TeX:] $$s_{k}$$ | [TeX:] $$s_{k 1}$$ | [TeX:] $$s_{k 2}$$ | ⋯ | [TeX:] $$s_{k i}$$ | ⋯ | [TeX:] $$s_{k n}$$ | [TeX:] $$f\left(s_{k}\right)$$ | [TeX:] $$\omega_{k}$$ |

The Gaussian kernel PDF used in [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ is a weighted combination of Gaussian functions defined as follows:

[TeX:] $$G_{i}(x)=\sum_{l=1}^{k} \omega_{l} g_{l i}(x)=\sum_{l=1}^{k} \omega_{l} \frac{1}{\sigma_{l i} \sqrt{2 \pi}} e^{-\frac{\left(x-\mu_{l i}\right)^{2}}{2 \sigma_{l i}^{2}}} \quad (1)$$

where [TeX:] $$g_{l i}(x)$$ is a Gaussian function, [TeX:] $$\mu$$ is the vector of means, [TeX:] $$\sigma$$ is the vector of standard deviations and [TeX:] $$\omega$$ is the vector of weights. Each solution [TeX:] $$s_{l i}$$ has an associated Gaussian function [TeX:] $$g_{l i}(x)$$, and [TeX:] $$\sigma_{l i}$$ can be calculated by:

[TeX:] $$\sigma_{l i}=\xi \sum_{e=1}^{k} \frac{\left|s_{e i}-s_{l i}\right|}{k-1} \quad (2)$$

where [TeX:] $$\xi$$ is a parameter of the [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ algorithm which needs to be set by experience; it is a positive number, and the larger the value of [TeX:] $$\xi$$, the lower the convergence speed of the algorithm. In the solution archive, the solutions [TeX:] $$s_{l}$$ are sorted according to the value of [TeX:] $$f\left(s_{l}\right)$$. [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ gives good solutions a relatively larger weight to increase their pheromone, so the next solutions are more likely to be generated near the previous good solutions. Thus, the weight [TeX:] $$\omega_{l}$$ can be defined by the following equation:

[TeX:] $$\omega_{l}=\frac{1}{q k \sqrt{2 \pi}} e^{-\frac{(l-1)^{2}}{2 q^{2} k^{2}}} \quad (3)$$

where q is another weight parameter, usually set by experience. When q is small, the best-ranked solutions are strongly preferred, and when it is large, the probability becomes more uniform. A more detailed analysis of the parameters [TeX:] $$\xi$$ and q is presented in Section 2.2.

In the initialization steps, the solution archive is filled with solutions generated by uniform random sampling and sorted according to [TeX:] $$f(s)$$; [TeX:] $$\xi$$ and q are set by experience, and [TeX:] $$\mu_{l i}$$ in Eq. (1) is initialized to [TeX:] $$s_{l i}$$. In the optimization steps, [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ generates new solutions by sampling the Gaussian kernel PDF and adds them to the archive. In the Gaussian kernel PDF, good solutions have larger weights, so new solutions are likely to fall around the good solutions. This strategy has an effect similar to pheromone deposition. When new solutions are added to the solution archive, the same number of worst solutions are removed from the archive. This strategy has an effect similar to pheromone evaporation. In this way, the solution archive keeps accumulating better solutions, and the optimization steps run until the termination conditions are met.
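The archive-based loop described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the function and parameter names are our own, minimization is assumed, and the sampling uses the standard [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ formulas (rank-based weights, distance-scaled standard deviations).

```python
import numpy as np

def aco_r(f, dim, bounds, k=45, m=70, xi=1.0, q=1e-4, iters=400, rng=None):
    """Minimal ACO_R sketch: solution archive + Gaussian kernel sampling.
    f is minimized; k = archive size, m = ants per iteration."""
    rng = rng or np.random.default_rng(0)
    lo, hi = bounds
    # Initialization: fill the archive by uniform random sampling, sort by f.
    archive = rng.uniform(lo, hi, size=(k, dim))
    fitness = np.apply_along_axis(f, 1, archive)
    order = np.argsort(fitness)
    archive, fitness = archive[order], fitness[order]
    # Rank-based weights: good (low-rank) solutions weigh more.
    ranks = np.arange(k)
    w = np.exp(-ranks ** 2 / (2 * q ** 2 * k ** 2)) / (q * k * np.sqrt(2 * np.pi))
    p = w / w.sum()
    for _ in range(iters):
        new = np.empty((m, dim))
        for a in range(m):
            for i in range(dim):
                l = rng.choice(k, p=p)      # pick a guiding solution by its weight
                mu = archive[l, i]          # mean = chosen solution component
                # sigma = xi * mean distance to the other archive solutions
                sigma = xi * np.abs(archive[:, i] - mu).sum() / (k - 1)
                new[a, i] = rng.normal(mu, sigma + 1e-12)
        new = np.clip(new, lo, hi)
        new_fit = np.apply_along_axis(f, 1, new)
        # Pheromone update: merge, keep the k best (the worst "evaporate").
        all_s = np.vstack([archive, new])
        all_f = np.concatenate([fitness, new_fit])
        best = np.argsort(all_f)[:k]
        archive, fitness = all_s[best], all_f[best]
    return archive[0], fitness[0]

# Sketch usage on a 2-D sphere function (hypothetical test objective).
x, fx = aco_r(lambda v: float(np.sum(v ** 2)), dim=2, bounds=(-5, 5), iters=50)
```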

By shifting from the discrete PDF to the Gaussian kernel function, [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ extends the ACO algorithm to the continuous domain. However, initializing the solution archive by uniform random sampling may not be the best option: a bad initialization may mislead the algorithm into converging to a local optimum, the convergence speed is slow, and, moreover, the optimization result is influenced by the parameters. In this section, we use an example to show these limitations. The testing function f is defined as follows:

The extreme point of function f is (0,0) and f(0,0) = 1. Around the extreme point there is a ring-shaped ridge with a height of 0.9903 (a local maximum). Fig. 1 shows the optimization results of [TeX:] $$\mathrm{ACO}_{\mathrm{R}}.$$ We ran [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ 10 times with 400 generations each time; from the results we found that [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ converged to the global optimum only twice, and most of the time it converged to 0.9903. The reason might be that uniform random sampling contains little information about the optimal solution. When the search reaches a certain number of cycles, the pheromone concentrates around the local optimum and [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ can hardly find a new path.

The parameters [TeX:] $$\xi$$ and q influence the optimization result, and it is difficult to set these parameters manually. [TeX:] $$\xi$$ scales the standard deviation [TeX:] $$\sigma_{l i}$$: when [TeX:] $$\xi$$ is small, the Gaussian function has a small standard deviation, so the newly generated solutions stay close to [TeX:] $$s_{l i}$$. This weakens the ability to explore new feasible solutions and makes the algorithm tend to converge to a local optimum. When [TeX:] $$\xi$$ is large, newly generated solutions can be far from [TeX:] $$s_{l i}$$, which enhances the ability to explore new feasible solutions but also slows down the convergence speed. Parameter q defines the weights of the Gaussian functions. When q approaches 0, only the Gaussian function associated with the current best solution is used for generating new solutions. For instance, when the length of the solution archive is 50 and q = 0.01, new solutions have nearly a 90% probability of being generated using only the Gaussian function associated with the current best solution. Thus, when q is large, the algorithm samples the search space using a larger number of good solutions in the archive; when q is small, the algorithm tends to sample the search space using only the best solution in the archive.
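The "nearly 90%" figure can be checked numerically. The sketch below assumes the standard [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ rank weight [TeX:] $$\omega_{l} \propto e^{-(l-1)^{2} / 2 q^{2} k^{2}}$$; the helper name is our own.

```python
import math

def weights(k, q):
    # Rank weights: omega_l proportional to exp(-(l-1)^2 / (2 q^2 k^2))
    return [math.exp(-(l - 1) ** 2 / (2 * q * q * k * k)) for l in range(1, k + 1)]

k, q = 50, 0.01
w = weights(k, q)
p_best = w[0] / sum(w)          # probability of picking the best-ranked solution
print(f"P(best-ranked solution chosen) = {p_best:.3f}")
```

With k = 50 and q = 0.01 the best-ranked solution is selected with probability about 0.88, matching the "nearly 90%" claim in the text.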

To evaluate the influence of the parameters, we ran a test on f with different parameters ([TeX:] $$\xi$$ in {0.5, 1, 1.4} and q in {0.0001, 0.05, 0.5}). In each test, we ran [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ 10 times with 400 generations each time, and we found the results were not stable. The experiment using [TeX:] $$\xi$$ of 1 and q of 0.0001 had the best performance, and the experiment using [TeX:] $$\xi$$ of 1.4 and q of 0.5 had the worst performance.

To overcome the limitations of [TeX:] $$\mathrm{ACO}_{\mathrm{R}},$$ in this section we present CG-ACO. In CG-ACO, we use a genetic algorithm to generate the initial solutions, and we embed a cloud model in the updating loop to adaptively set the parameters [TeX:] $$\xi$$ and q.

Similar to ACO, the genetic algorithm (GA) is also a metaheuristic optimization algorithm, inspired by the process of natural selection [27]. GA has the ability of rapid global convergence, and recently some work attempting to combine GA and ACO has been presented [22,23,28-30]. Xiong et al. [29] compared the convergence speed of ACO and GA and presented convergence curves. The typical convergence behavior of GA and ACO is shown in Fig. 2.

From Fig. 2 we can see that at the beginning GA converges quickly, but it becomes slower and slower over time, because GA does not have a positive feedback mechanism. ACO shows the opposite trend: its convergence is slow at first, because there is not yet enough pheromone, but with the positive feedback mechanism (the accumulation of pheromone) the convergence speed grows faster and faster.

Fig. 2 has a crucial point: the junction of the two convergence curves (point a). Before [TeX:] $$t_{a}$$, GA is more efficient, and after [TeX:] $$t_{a}$$, ACO is more efficient. Therefore, if we combine the two methods, [TeX:] $$t_{a}$$ is the best time to shift between the algorithms. In CG-ACO, the iteration number t of GA satisfies [TeX:] $$t_{b}<t<t_{c}$$, where [TeX:] $$t_{b}$$ and [TeX:] $$t_{c}$$ are the minimum and maximum iterations, respectively. To achieve this, after each iteration we calculate the evolution rate R by:

[TeX:] $$R=\frac{\left|\bar{f}_{i}(s)-\bar{f}_{i-1}(s)\right|}{\left|\bar{f}_{i-1}(s)\right|}$$

where [TeX:] $$\bar{f}_{i}(s)$$ is the average fitness of the population at the i-th iteration. If R stays small enough ([TeX:] $$R<R_{\min },$$ where [TeX:] $$R_{\min }$$ is the shifting threshold) for three successive iterations, we stop GA and use its result as the initial solutions of ACO.
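The shifting test can be sketched as below. The exact form of the evolution rate is an assumption here (the relative change of the average fitness between successive iterations); the function and parameter names are our own.

```python
def should_shift(avg_fitness_history, r_min, t_min, t_max):
    """Decide whether to hand GA's population over to ACO.
    Assumes R_t = |f_bar_t - f_bar_{t-1}| / (|f_bar_{t-1}| + eps),
    a hypothetical form of the evolution rate consistent with the text."""
    t = len(avg_fitness_history) - 1   # iterations completed so far
    if t >= t_max:                     # hard cap on GA iterations
        return True
    if t < t_min or t < 3:             # minimum GA iterations not yet done
        return False
    eps = 1e-12
    rates = [abs(avg_fitness_history[i] - avg_fitness_history[i - 1])
             / (abs(avg_fitness_history[i - 1]) + eps)
             for i in range(t - 2, t + 1)]
    # Shift only when R stayed below the threshold for three successive iterations.
    return all(r < r_min for r in rates)
```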

The cloud model is a cognitive model proposed by Li et al. [31]. It can synthetically describe the randomness and fuzziness of concepts and implement the uncertain transformation between a qualitative concept and its quantitative instantiations. The cloud model has been successfully applied in intelligent control systems, data mining and other fields [32-38]. Qi and Yang [32] embedded the cloud model into the particle swarm optimization algorithm by introducing two conditional cloud generators into the hybrid operation and a basic cloud generator into the mutation operation. Wei et al. [33] embedded the cloud model into the artificial fish swarm algorithm by using the stable tendency of a basic cloud generator to modify the behavior of the fishes. In this section, we briefly introduce the cloud model and use it for adaptive parameter selection.

[TeX:] $$X=\{x\}$$ is a universe of discourse described by a qualitative concept [TeX:] $$\tilde{A};$$ [TeX:] $$x \in X$$ is a random instantiation of the concept [TeX:] $$\tilde{A},$$ and [TeX:] $$\mu(x)$$ is the certainty degree of x belonging to [TeX:] $$\tilde{A},$$ which corresponds to a random number with a steady tendency. Then the distribution of x in the universe X can be defined as a cloud, and x can be called a cloud drop.

The cloud model can effectively integrate the randomness and fuzziness of concepts and describe the overall quantitative property of a concept using its expectation [TeX:] $$\left(E_{x}\right),$$ entropy [TeX:] $$\left(E_{n}\right),$$ and hyper entropy [TeX:] $$\left(H_{e}\right).$$ There are two kinds of cloud generators, i.e., the forward generator and the backward generator. The forward generator generates cloud drops according to its numerical characteristics and the backward generator calculates the numerical characteristics from the existing cloud drops.

Fig. 3 illustrates a typical normal cloud model. From Fig. 3, we can see that the entropy represents the uncertainty measurement of a qualitative concept and reflects the dispersing extent of the cloud drops. 99.7% of cloud drops are in the range of [TeX:] $$\left[E_{x}-3 E_{n}, E_{x}+3 E_{n}\right]$$ and the hyper entropy represents the uncertain degree of entropy.
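The two generators described above can be sketched in a few lines. This is an illustrative implementation of the common normal cloud model: the forward generator draws each drop from [TeX:] $$N\left(E_{x}, E_{n}^{\prime 2}\right)$$ with [TeX:] $$E_{n}^{\prime} \sim N\left(E_{n}, H_{e}^{2}\right)$$, and the backward generator uses the usual first-order moment estimator for the entropy; function names are our own.

```python
import math
import random

def forward_cloud(Ex, En, He, n, rng=None):
    """Forward normal cloud generator: produce n cloud drops."""
    rng = rng or random.Random(0)
    drops = []
    for _ in range(n):
        En_prime = rng.gauss(En, He)       # the entropy itself is uncertain
        drops.append(rng.gauss(Ex, abs(En_prime)))  # drop around the expectation
    return drops

def backward_cloud(drops):
    """Backward generator: recover (Ex, En, He) from existing cloud drops."""
    n = len(drops)
    Ex = sum(drops) / n
    # First-order estimator: En = sqrt(pi/2) * mean absolute deviation
    En = math.sqrt(math.pi / 2) * sum(abs(x - Ex) for x in drops) / n
    S2 = sum((x - Ex) ** 2 for x in drops) / (n - 1)   # sample variance
    He = math.sqrt(max(S2 - En ** 2, 0.0))             # hyper entropy
    return Ex, En, He

# Round-trip check: generate drops, then recover the numerical characteristics.
drops = forward_cloud(Ex=0.0, En=1.0, He=0.1, n=20000)
Ex, En, He = backward_cloud(drops)
```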

In this work, the solutions in the solution archive are regarded as cloud drops and the solution archive is regarded as a cloud. If the cloud is random and fuzzy, it means we still lack pheromone; thus, we should increase [TeX:] $$\xi$$ and q to search for more solutions on a wider scale. If the cloud is no longer fuzzy, we should decrease [TeX:] $$\xi$$ and q to narrow the search scale and increase the convergence speed.

We used the backward cloud generator to calculate the numerical characteristics of the cloud (the solution archive) and used the entropy to measure its randomness and fuzziness. The entropy [TeX:] $$E_{n}$$ of the initial solution archive was used as the standard. We compared the entropy of the i-th iteration [TeX:] $$E_{n i}$$ with [TeX:] $$E_{n}$$ and defined a fuzzy ratio r as follows:

[TeX:] $$r=\frac{E_{n i}}{E_{n}} \quad (7)$$

In the (i+1)-th iteration, [TeX:] $$\xi$$ and q were set adaptively by the following equations.

The metaheuristic framework is presented in Algorithm 1 and in the following we will explain these steps in detail.

**GAInitialization():** randomly generate an initial population of M individuals and initialize the crossover rate [TeX:] $$P_{c},$$ mutation rate [TeX:] $$P_{m},$$ maximum iterations [TeX:] $$I_{\max },$$ minimum iterations [TeX:] $$I_{\min }$$ and minimum evolution rate [TeX:] $$R_{\min }.$$

After the parameter initialization, GA is used to generate initial solutions until the shifting conditions are met. The shifting conditions are: the number of iterations is greater than [TeX:] $$I_{\min }$$ and the evolution rate is smaller than [TeX:] $$R_{\min }$$ for three successive iterations, or the number of iterations reaches [TeX:] $$I_{\max }.$$

**ACOInitialization():** initialize the size of the solution archive K and the initial values of [TeX:] $$\xi$$ and q, fill half of the solution archive with the K/2 best solutions found by GA, and fill the other half with K/2 solutions generated by uniform sampling.

The ACO optimization contains three parts: solution construction, pheromone update, and the updating of [TeX:] $$\xi$$ and q.

**SolutionConstruction():** similar to [TeX:] $$\mathrm{ACO}_{\mathrm{R}},$$ a solution with n variables is constructed in n steps. In each step we construct one variable; e.g., for [TeX:] $$s_{l i},$$ [TeX:] $$\mu_{l i}$$ is initialized to [TeX:] $$s_{l i},$$ and [TeX:] $$\sigma_{l i}$$ and [TeX:] $$\omega_{l}$$ are calculated according to Eqs. (2) and (3). Then one of the Gaussian functions composing the Gaussian kernel is chosen with probability [TeX:] $$p_{l},$$ and [TeX:] $$p_{l}$$ is calculated by:

[TeX:] $$p_{l}=\frac{\omega_{l}}{\sum_{r=1}^{k} \omega_{r}}$$

Generate the solution variable [TeX:] $$s_{l i}$$ by sampling the Gaussian function [TeX:] $$g_{l i}(x)$$ in Eq. (1), and generate the other solution variables in the same way.

**PheromoneUpdate():** since the pheromone is stored as a solution archive, the pheromone update is accomplished by adding the set of newly generated solutions to the archive, sorting the solution archive, and removing the same number of worst solutions.
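The pheromone update step amounts to simple archive maintenance, which can be sketched as follows (illustrative only; minimization is assumed and the function name is our own, with `key` standing in for the fitness function).

```python
def pheromone_update(archive, new_solutions, key):
    """Pheromone update as archive maintenance (minimization assumed):
    add the newly generated solutions, sort, then drop the same number
    of worst solutions so the archive size stays fixed."""
    k = len(archive)
    merged = archive + new_solutions
    merged.sort(key=key)      # best solutions first
    return merged[:k]         # "evaporate" the worst len(new_solutions) entries
```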

**ACOParameterUpdate():** the fuzzy ratio r is calculated according to Eq. (7), and the parameters [TeX:] $$\xi$$ and q are updated according to Eqs. (8) and (9).

In this experiment, CG-ACO was compared with its parent algorithms, i.e., [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ and the simple genetic algorithm (SGA). To facilitate the comparison, the testing functions in [5] were used in this experiment:

The global minimum point of [TeX:] $$f_{1} \text { is } x_{i}=0,$$ and the minimum value is 0. In the experiment, the algorithm was considered to have successfully found the global minimum of [TeX:] $$f_{1}$$ when the output was smaller than 0.0001. The global maximum point of [TeX:] $$f_{2} \text { is } x_{i}=5$$, and the maximum value is 1; the algorithm was considered to have successfully found the global maximum of [TeX:] $$f_{2}$$ when the output was greater than 0.999 [5].

The parameter configuration of [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ is the same as the best parameters used in [5]: the ant number m=70, the size of the solution archive K=45, and the parameters [TeX:] $$\xi=1 \text { and } q=0.0001.$$ The parameter configuration of SGA is: the population size M=50; one-point crossover, one-point mutation and roulette wheel selection were used; the crossover rate [TeX:] $$P_{c}=0.9$$ and the mutation rate [TeX:] $$P_{m}=0.1.$$ The parameter configuration of CG-ACO shares most of the values of [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ and SGA, i.e., m=70, K=45, the initial value of [TeX:] $$\xi=1,$$ the initial value of q=0.0001, the crossover rate [TeX:] $$P_{c}=0.9,$$ and the mutation rate [TeX:] $$P_{m}=0.1.$$ The other CG-ACO parameters are the maximum iterations [TeX:] $$I_{\max }=400$$, minimum iterations [TeX:] $$I_{\min }=100,$$ and minimum evolution rate [TeX:] $$R_{\min }=10.$$ Each optimization algorithm was tested ten times, and the results are presented in Table 2. From Table 2, we can see that CG-ACO achieves better performance than [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ and SGA: the average output of CG-ACO is closest to the global optimum. Moreover, we found the hybrid of GA and ACO is also helpful in avoiding local optima: CG-ACO reached the global optimum 19 of 20 times, far more often than the other methods. In the experiment, CG-ACO usually shifted to the ACO part after around 100 GA iterations, and most of the time the ACO part reached the global optimum within 50 iterations thanks to the adaptive parameter configuration strategy. On average, CG-ACO used no more than 150 iterations (total iterations in the GA and ACO parts) to reach the global optimum, which is faster than [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ (more than 200 iterations).

Table 2.

| Test function | Algorithm | Best output | Average output | Average iterations to reach the global optimal | Success rate |
|---|---|---|---|---|---|
| [TeX:] $$f_{1}$$ | SGA | 0.002 | 0.0895 | - | 0/10 |
| | [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ | 0 | 0.0267 | 267 | 3/10 |
| | CG-ACO | 0 | 0.0051 | 131 | 9/10 |
| [TeX:] $$f_{2}$$ | SGA | 1 | 0.9621 | 354 | 2/10 |
| | [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ | 1 | 0.9865 | 211 | 6/10 |
| | CG-ACO | 1 | 1 | 147 | 10/10 |

In the second experiment, we compared CG-ACO with some optimization algorithms that also embed a cloud model, i.e., CQPSO [32] and CAFSA [33]. The function reported in [20] was tested in this section:

The global minimum point of [TeX:] $$f_{3}$$ is at (0,0) and the minimum value is 0. [TeX:] $$f_{3}$$ has an infinite number of local minima. In the experiment, the algorithm was considered to have successfully found the global minimum of [TeX:] $$f_{3}$$ only when the output was 0.

The parameters of CG-ACO are the same as in the previous experiment, and the parameters of CQPSO and CAFSA are configured as in [32] and [33], with a maximum of 400 iterations. Each optimization algorithm was tested 10 times, and the results are presented in Table 3. From Table 3, we can see that CG-ACO also has an advantage over CQPSO and CAFSA in reaching the global optimum.

Table 3.

| Test function | Algorithm | Best output | Average output |
|---|---|---|---|
| [TeX:] $$f_{3}$$ | CQPSO [32] | [TeX:] $$8.1531 \times 10^{-8}$$ | [TeX:] $$3.9251 \times 10^{-7}$$ |
| | CAFSA [33] | [TeX:] $$8.1531 \times 10^{-8}$$ | [TeX:] $$3.9254 \times 10^{-7}$$ |
| | CG-ACO | 0 | 0 |

In this section, the parameter stability was evaluated by testing the proposed method with different parameters. To evaluate the influence of the parameters effectively, we designed the experiment according to the uniform design method [39]. In the experiment, the ant number m and the size of the solution archive K varied from 40 to 60, the parameter ξ varied from 0.4 to 1.5, and q varied from 0.0001 to 0.5. These parameters were mixed according to the uniform design table [TeX:] $$U^{*}\left(6^{4}\right)$$ and are listed in Table 4. From Table 4, we can see the parameters are almost randomly distributed.

Table 4.

| Parameter set | m | K | [TeX:] $$\xi$$ | q |
|---|---|---|---|---|
| 1 | 40 | 40 | 0.8 | 0.5 |
| 2 | 40 | 50 | 1.5 | 0.2 |
| 3 | 50 | 60 | 0.6 | 0.1 |
| 4 | 50 | 40 | 1.2 | 0.01 |
| 5 | 60 | 50 | 0.4 | 0.001 |
| 6 | 60 | 60 | 1 | 0.0001 |

In this section we also used function [TeX:] $$f_{3}$$ since it is more complicated than [TeX:] $$f_{1} \text { and } f_{2}.$$ Each parameter set was tested 10 times and the maximum iteration number is 400. The experiment results are listed in Table 5. From Table 5, we can see that the algorithm not only has good performance but also is robust to ACO parameter variations.

Table 5.

| Test | Best output | Average output | Success rate |
|---|---|---|---|
| 1 | 0 | 0.0044 | 8/10 |
| 2 | 0 | 0.0077 | 8/10 |
| 3 | 0 | 0 | 10/10 |
| 4 | 0 | 0 | 10/10 |
| 5 | 0 | 0 | 10/10 |
| 6 | 0 | 0 | 10/10 |

Combining GA with ACO can increase the convergence speed and the possibility of reaching the global optimal solution; however, it also gives the algorithm more parameters to optimize, i.e., the population size M, the crossover rate [TeX:] $$P_{c}$$ and the mutation rate [TeX:] $$P_{m}.$$ In this work, to facilitate the comparison, the population size M was set to 50, the crossover rate [TeX:] $$P_{c}$$ to 0.9 and the mutation rate [TeX:] $$P_{m}$$ to 0.1, the same as the optimal GA parameters in the compared literature. To evaluate the parameter stability, we also did experiments on the sensitivity of the GA parameters in the hybrid algorithm.

In the experiments, M was set between 40 and 60 with a step of 1, [TeX:] $$P_{c}$$ between 0.8 and 0.9 with a step of 0.01, and [TeX:] $$P_{m}$$ between 0.05 and 0.15 with a step of 0.01; thus, we got 1,000 sets of GA parameters. We reran each previous test with the different parameter sets, and we found the results were the same as those in Table 5. One reason may be that GA is introduced to obtain some better initial solutions for ACO, not the best solution; thus the performance of GA does not need to be tuned very high, some typical parameters are sufficient, and the ACO algorithm with the cloud model will further improve these solutions.

From the parameter stability experiments, we can see that the performance of CG-ACO is relatively stable under changes in both the ACO and GA parameters. First, by introducing the cloud model, the parameters of the ant colony can be set adaptively. Second, after the genetic algorithm initializes the solution archive with some good solutions, these solutions are updated to better solutions by the ant colony algorithm. Thus, we can get good performance without taking too much effort to tune these parameters.

In this paper, we proposed a new scheme of [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ in which the genetic algorithm and the cloud model were introduced. Generally, GA converges toward the global optimum faster than [TeX:] $$\mathrm{ACO}_{\mathrm{R}};$$ thus, for the initialization of the solution archive, half of the solutions were filled using the best solutions found by the genetic algorithm and half using uniform sampling. This strategy helps increase the convergence speed and decrease the probability of falling into local minima. We also embedded a cloud model to adaptively configure the parameters of ACO according to the fuzziness of the current solution archive.

The experiments show that CG-ACO can achieve better performance than [TeX:] $$\mathrm{ACO}_{\mathrm{R}},$$ SGA, CQPSO and CAFSA. Moreover, CG-ACO reached the global optimum more often than the other algorithms. The parameter stability test showed that CG-ACO was also robust to parameter variations.

- 1 M. Dorigo, "Optimization, learning and natural algorithms," Ph.D. dissertation, Dipartimento di Elettronica, Politecnico di Milano, Milan, Italy, 1992.
- 2 M. Dorigo and L. M. Gambardella, "Ant colony system: a cooperative learning approach to the traveling salesman problem," *IEEE Transactions on Evolutionary Computation*, vol. 1, no. 1, pp. 53-66, 1997.
- 3 M. Dorigo and T. Stutzle, *Ant Colony Optimization*. Cambridge, MA: MIT Press, 2004.
- 4 M. Guntsch and M. Middendorf, "A population based approach for ACO," in *Applications of Evolutionary Computing*. Heidelberg, Germany: Springer, 2002, pp. 72-81.
- 5 K. Socha and M. Dorigo, "Ant colony optimization for continuous domains," *European Journal of Operational Research*, vol. 185, no. 3, pp. 1155-1173, 2008.
- 6 M. de Paula Marques, F. R. Durand, and T. Abrao, "WDM/OCDM energy-efficient networks based on heuristic ant colony optimization," *IEEE Systems Journal*, vol. 10, no. 4, pp. 1482-1493, 2016.
- 7 C. Liu, "Optimal design of high-rise building wiring based on ant colony optimization," *Cluster Computing*, vol. 22, pp. 3479-3486, 2018.
- 8 S. N. Sabri and R. Saian, "Predicting flood in Perlis using ant colony optimization," *Journal of Physics: Conference Series*, vol. 855, article no. 012040, 2017.
- 9 L. M. Gambardella and M. Dorigo, "Solving symmetric and asymmetric TSPs by ant colonies," in *Proceedings of IEEE International Conference on Evolutionary Computation*, Nagoya, Japan, 1996, pp. 622-627.
- 10 E. D. Taillard, "FANT: fast ant system," Istituto Dalle Molle di Studi sull'Intelligenza Artificiale (IDSIA), Manno, Switzerland, Technical Report IDSIA-46-98, 1998.
- 11 T. Stutzle and H. Hoos, "MAX-MIN Ant System and local search for the traveling salesman problem," in *Proceedings of 1997 IEEE International Conference on Evolutionary Computation (ICEC)*, Indianapolis, IN, 1997, pp. 309-314.
- 12 K. Doerner, R. F. Hartl, and M. Reimann, "Are COMPETants more competent for problem solving? The case of a multiple objective transportation problem," *Central European Journal of Operations Research*, vol. 11, no. 2, pp. 115-141, 2003.
- 13 K. Doerner, W. J. Gutjahr, R. F. Hartl, C. Strauss, and C. Stummer, "Pareto ant colony optimization: a metaheuristic approach to multiobjective portfolio selection," *Annals of Operations Research*, vol. 131, no. 1-4, pp. 79-99, 2004.
- 14 S. Iredi, D. Merkle, and M. Middendorf, "Bi-criterion optimization with multi colony ant algorithms," in *Evolutionary Multi-Criterion Optimization*. Heidelberg, Germany: Springer, 2001, pp. 359-372.
- 15 M. Guntsch and M. Middendorf, "Solving multi-criteria optimization problems with population-based ACO," in *Evolutionary Multi-Criterion Optimization*. Heidelberg, Germany: Springer, 2003, pp. 464-478.
- 16 X. M. Hu, J. Zhang, and Y. Li, "Orthogonal methods based ant colony search for solving continuous optimization problems," *Journal of Computer Science and Technology*, vol. 23, no. 1, pp. 2-18, 2008.
- 17 X. M. Hu, J. Zhang, H. S. H. Chung, Y. Li, and O. Liu, "SamACO: variable sampling ant colony optimization algorithm for continuous optimization," *IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)*, vol. 40, no. 6, pp. 1555-1566, 2010.
- 18 S. H. Pourtakdoust and H. Nobahari, "An extension of ant colony system to continuous optimization problems," in *Ant Colony Optimization and Swarm Intelligence*. Heidelberg, Germany: Springer, 2004, pp. 294-301.
- 19 F. O. de Franca, G. P. Coelho, F. J. Von Zuben, and R. R. de Faissol Attux, "Multivariate ant colony optimization in continuous search spaces," in *Proceedings of the 10th Annual Conference on Genetic and Evolutionary Computation*, Atlanta, GA, 2008, pp. 9-16.
- 20 T. Liao, K. Socha, M. A. Montes de Oca, T. Stutzle, and M. Dorigo, "Ant colony optimization for mixed-variable optimization problems," *IEEE Transactions on Evolutionary Computation*, vol. 18, no. 4, pp. 503-518, 2014.
- 21 L. M. Gambardella and M. Dorigo, "Ant-Q: a reinforcement learning approach to the traveling salesman problem," in *Machine Learning Proceedings 1995*. San Francisco, CA: Morgan Kaufmann, 1995, pp. 252-260.
- 22 M. Mahi, O. K. Baykan, and H. Kodaz, "A new hybrid method based on particle swarm optimization, ant colony optimization and 3-opt algorithms for traveling salesman problem," *Applied Soft Computing*, vol. 30, pp. 484-490, 2015.
- 23 S. Nemati, M. E. Basiri, N. Ghasem-Aghaee, and M. H. Aghdam, "A novel ACO-GA hybrid algorithm for feature selection in protein function prediction," *Expert Systems with Applications*, vol. 36, no. 10, pp. 12086-12094, 2009.
- 24 W. Alsaeedan, M. E. B. Menai, and S. Al-Ahmadi, "A hybrid genetic-ant colony optimization algorithm for the word sense disambiguation problem," *Information Sciences*, vol. 417, pp. 20-38, 2017.
- 25 R. Goel and R. Maini, "A hybrid of ant colony and firefly algorithms (HAFA) for solving vehicle routing problems," *Journal of Computational Science*, vol. 25, pp. 28-37, 2018.
- 26 I. Karakonstantis and A. Vlachos, "Hybrid ant colony optimization for continuous domains for solving emission and economic dispatch problems," *Journal of Information and Optimization Sciences*, vol. 39, no. 3, pp. 651-671, 2018.
- 27 M. Mitchell, *An Introduction to Genetic Algorithms*. Cambridge, MA: MIT Press, 1996.
- 28 B. Ge, J. H. Han, Z. Wei, L. Cheng, and Y. Han, "Dynamic hybrid ant colony optimization algorithm for solving the vehicle routing problem with time windows," *Pattern Recognition and Artificial Intelligence*, vol. 28, no. 7, pp. 641-650, 2015.
- 29 Z. H. Xiong, S. K. Li, and J. H. Chen, "Hardware/software partitioning based on dynamic combination of genetic algorithm and ant algorithm," *Journal of Software*, vol. 16, no. 4, pp. 503-512, 2005.
- 30 J. Li, "Combination of genetic & ant colony algorithms for multi-project resource leveling problem," *Computer Integrated Manufacturing Systems*, vol. 16, no. 3, pp. 643-649, 2010.
- 31 D. Li, C. Liu, and W. Gan, "A new cognitive model: cloud model," *International Journal of Intelligent Systems*, vol. 24, no. 3, pp. 357-375, 2009.
- 32 M. Qi and A. Yang, "Quantum particle swarm optimization based on cloud model cloud droplet strategy," *Computer Engineering and Applications*, vol. 48, no. 24, pp. 49-52, 2012.
- 33 X. Wei, H. Zeng, and Y. Zhou, "Cloud theory-based artificial fish swarm algorithm," *Computer Engineering and Applications*, vol. 46, no. 22, pp. 26-29, 2010.
- 34 P. Wang, X. Xu, S. Huang, and C. Cai, "A linguistic large group decision making method based on the cloud model," *IEEE Transactions on Fuzzy Systems*, vol. 26, no. 6, pp. 3314-3326, 2018.
- 35 X. Shang, P. Ma, and T. Chao, "Performance evaluation of electromagnetic railgun exterior ballistics based on cloud model," *IEEE Transactions on Plasma Science*, vol. 45, no. 7, pp. 1614-1621, 2017.
- 36 J. Cui, Q. Zheng, Y. Xin, C. Zhou, Q. Wang, and N. Zhou, "Feature extraction and classification method for switchgear faults based on sample entropy and cloud model," *IET Generation, Transmission & Distribution*, vol. 11, no. 11, pp. 2938-2946, 2017.
- 37 S. Yang, R. Jiang, H. Wang, and S. S. Ge, "Road constrained monocular visual localization using Gaussian-Gaussian cloud model," *IEEE Transactions on Intelligent Transportation Systems*, vol. 18, no. 12, pp. 3449-3456, 2017.
- 38 Y. Lin, L. Zhao, H. Li, and Y. Sun, "Air quality forecasting based on cloud model granulation," *EURASIP Journal on Wireless Communications and Networking*, vol. 2018, article no. 1, 2018.
- 39 K. T. Fang, D. K. J. Lin, P. Winker, and Y. Zhang, "Uniform design: theory and application," *Technometrics*, vol. 42, no. 3, pp. 237-248, 2000.