**A Hybrid Genetic Ant Colony Optimization Algorithm with an Embedded Cloud Model for Continuous Optimization**

Peng Wang*, Jiyun Bai*, and Jun Meng*


## Abstract

The ant colony optimization (ACO) algorithm is a classical metaheuristic optimization algorithm. However, conventional ACO is liable to become trapped in local minima and has an inherently slow rate of convergence. In this work, we propose a novel combinatorial ACO algorithm (CG-ACO) to alleviate these limitations. A genetic algorithm and a cloud model are embedded into the ACO to find better initial solutions and the optimal parameters. In the experiment section, we compare CG-ACO with state-of-the-art methods and discuss its parameter stability. The results show that CG-ACO achieves better performance than ACO, the simple genetic algorithm (SGA), CQPSO, and CAFSA, and is more likely to reach the global optimal solution.

**Keywords:** Ant Colony Algorithm, Cloud Model, Genetic Algorithm

## 1. Introduction

The ant colony optimization (ACO) algorithm is a metaheuristic optimization algorithm inspired by the foraging behavior of real ants. When ants walk to or from a food source, they deposit pheromone on the ground. Other ants can smell the pheromone and follow the path with the greater concentration of pheromone to find the food source. The ACO algorithm borrows a similar idea: when the artificial ants find better solutions, they update the pheromone to increase the probability that subsequent ants search the promising regions of the search space. The ACO algorithm was first introduced by the Italian scholar Dorigo [1] in his Ph.D. thesis to solve combinatorial optimization problems, and it has been successfully applied to many discrete optimization tasks [2,3].

In respect of updating the pheromone, Guntsch and Middendorf [4] proposed a population-based ACO (PB-ACO). They keep track of all good solutions up to a certain age in a solution archive to update the pheromone. Instead of using pheromone evaporation, the pheromone associated with the oldest solutions is removed by performing a negative update on the pheromone table.

Based on PB-ACO and the Gaussian probability density function (PDF), Socha and Dorigo [5] proposed a new extension of the ACO algorithm [TeX:] $$\left(\mathrm{ACO}_{\mathrm{R}}\right)$$ in 2008. They used a mixture of Gaussian PDFs, named the Gaussian kernel PDF, together with a rank-based solution archive. [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ obtains higher precision than existing metaheuristic algorithms and has been applied in many areas, such as the construction of energy-efficient networks, the design of high-rise building wiring, and flood forecasting [6-8]. However, [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ also has some limitations: the random initial solutions may mislead the algorithm into converging to a local optimum, the convergence speed is slow, and the parameters are usually determined by experience.

In this work, to overcome these limitations we present a hybrid genetic ACO algorithm with an embedded cloud model for continuous optimization. The genetic algorithm was employed to find better initial solutions and the cloud model was employed to set these ACO parameters adaptively.

This paper is organized as follows: Section 2 discusses the recent improvements and applications of the ant colony algorithm. Section 3 discusses the principle of ACOR and gives the limitation analysis. In Section 4, we propose the genetic ACO with an embedded cloud model (CG-ACO). In Section 5, we present experiments and analysis of the performance. Section 6 concludes this paper.

## 2. Related Work

The improvements on the ant colony algorithm can be divided into two categories, i.e., method improvement and method fusion.

##### 2.1 Method Improvement

After ACO was proposed, many enhanced versions were reported. Gambardella and Dorigo [9] introduced a local pheromone update rule to help ACO escape from local minima. Taillard [10] used a shared memory and a Queen to enable the ants to cooperate with each other and increase the convergence speed. Stutzle and Hoos [11] used pheromone bounds to prevent premature stagnation and proposed the MAX-MIN ant system. Guntsch and Middendorf [4] proposed PB-ACO, which uses a solution archive to track all good solutions instead of pheromone evaporation.

Among these improvements, extending ACO's capability to tackle multiobjective problems and continuous problems are two major directions. For multiobjective optimization, if the objectives can be ordered by importance, one can combine them into a single objective using a weighted sum. Doerner et al. [12,13] used two ant colonies of variable sizes with different priority rules over the objectives. If the objectives conflict with each other, one has to find Pareto-optimal solutions. Iredi et al. [14] used multiple ant colonies with multiple pheromone trail matrices for different objectives. Guntsch and Middendorf [4,15] used an external population to extend PB-ACO to a multiobjective version.

Extending the ACO algorithm to the continuous domain is another important enhancement. One way to accomplish this goal is to discretize the search space: Hu et al. [16,17] sampled candidate variable values from the search space and proposed SamACO. The other way is shifting the discrete PDF to a continuous one (a Gaussian PDF in most cases). Pourtakdoust and Nobahari [18] designed a continuous pheromone model and a new pheromone update mechanism. Socha and Dorigo [5] used a mixed Gaussian PDF and a ranked solution archive and proposed [TeX:] $$\mathrm{ACO}_{\mathrm{R}}.$$ De Franca et al. [19] used a multivariate Gaussian PDF instead of independent Gaussian PDFs to model dependencies between the dimensions of the search space. Liao et al. [20] further presented ACOMV, which includes a continuous optimization mechanism, a continuous relaxation mechanism, and a categorical optimization mechanism to extend ACO's capability to mixed-variable problems.

##### 2.2 Method Fusion

Method fusion is another important branch of ACO improvement. Ant-Q, proposed by Gambardella and Dorigo [21], may be the first hybrid ant colony algorithm for the traveling salesman problem (TSP); it embeds Q-learning into the ant colony algorithm to estimate whether a move to a given city is useful. Mahi et al. [22] combined particle swarm optimization (PSO) and the 3-opt algorithm with the ant colony algorithm to solve the TSP: PSO optimizes the parameters of the ant colony algorithm and 3-opt improves the city selection operations. Nemati et al. [23] combined a genetic algorithm with ant colony algorithms to search in parallel for a better solution. Alsaeedan et al. [24] combined a genetic algorithm with the MAX-MIN ant system and used the genetic algorithm to optimize the parameters of the ant colony algorithm. Goel and Maini [25] used a similar idea to [22]: they combined the firefly algorithm with the ant colony system and used the firefly algorithm to search the unexplored solution space. Karakonstantis and Vlachos [26] embedded pattern search into [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ and proposed PSACO to solve emission and economic dispatch problems.

To sum up, the improvements generally aim to alleviate the limitations of the ACO algorithm, such as slow convergence, the probability of falling into local minima, and the lack of support for multiobjective or continuous-domain optimization. They include modifying the search framework, introducing new rules to ACO, and borrowing good properties from other algorithms. However, except for the algorithm in [26], most hybrid ant colony algorithms are based on the traditional discrete ant colony algorithm, which targets combinatorial optimization problems like the TSP. In this work, we propose a novel hybrid ant colony algorithm based on the continuous ant colony algorithm.

## 3. Ant Colony Algorithm for Continuous Optimization

##### 3.1 The [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ Algorithm

Typically, the optimization problem is to find the minimum value of a function f(x) in a search space S subject to the constraints [TeX:] $$\Omega.$$ The [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ algorithm repeatedly generates new solutions [TeX:] $$s_{l}$$ from S by sampling a Gaussian kernel PDF and stores them in a solution archive. It uses two strategies, with effects similar to pheromone deposition and pheromone evaporation, to find better solutions.

The solution archive is constructed as in Table 1, where [TeX:] $$s_{l i}$$ is the i-th variable of the l-th solution and K is the size of the archive T.

The Gaussian kernel PDF used in [TeX:] $$\mathrm{ACO}_{\mathbb{R}}$$ is a weighted combination of Gaussian functions defined as follows:

##### (1)

[TeX:] $$G_{i}(x)=\sum_{l=1}^{k} \omega_{l} g_{l i}(x)=\sum_{l=1}^{k} \omega_{l} \frac{1}{\sigma_{l i} \sqrt{2 \pi}} e^{-\frac{\left(x-\mu_{l i}\right)^{2}}{2 \sigma_{l i}^{2}}}$$

where [TeX:] $$g_{l i}(x)$$ is a Gaussian function, [TeX:] $$\mu$$ is the vector of means, [TeX:] $$\sigma$$ is the vector of standard deviations, and [TeX:] $$\omega$$ is the vector of weights. Each solution variable [TeX:] $$s_{l i}$$ has an associated Gaussian function [TeX:] $$g_{l i}(x),$$ and [TeX:] $$\sigma_{l i}$$ can be calculated by:

##### (2)

[TeX:] $$\sigma_{l i}=\xi \sum_{e=1}^{k} \frac{\left|s_{e i}-s_{l i}\right|}{k-1}$$

where [TeX:] $$\xi>0$$ is a parameter of the [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ algorithm that needs to be set by experience; the larger the value of [TeX:] $$\xi,$$ the lower the convergence speed of the algorithm. In the solution archive, the solutions [TeX:] $$s_{l}$$ are sorted according to the value of [TeX:] $$f\left(s_{l}\right).$$ [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ gives good solutions a relatively larger weight to increase their pheromone, so that new solutions are more likely to be generated near the previous good solutions. The weight [TeX:] $$\omega_{l}$$ is defined by the following equation:

##### (3)

[TeX:] $$\omega_{l}=\frac{1}{q k \sqrt{2 \pi}} e^{-\frac{(l-1)^{2}}{2 q^{2} k^{2}}}$$
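These two quantities can be sketched in Python as follows (a minimal sketch under the definitions above; the function names, and storing the archive as a rank-sorted list of variable lists, are our assumptions):

```python
import math

def weight(l, k, q):
    """Rank-based weight of the l-th ranked solution (l = 1 is best);
    smaller q concentrates the weight on the best-ranked solutions."""
    return math.exp(-((l - 1) ** 2) / (2.0 * (q * k) ** 2)) / (q * k * math.sqrt(2 * math.pi))

def sigma(archive, l, i, xi):
    """Standard deviation for variable i of archive solution l:
    xi times the average distance from s_li to the same variable
    in the other archived solutions."""
    k = len(archive)
    return xi * sum(abs(archive[e][i] - archive[l][i]) for e in range(k)) / (k - 1)
```

With a small q, `weight` decays quickly with rank, so sampling concentrates on the best archived solutions.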

where q is another parameter controlling the weights and is usually set by experience. When q is small, the best-ranked solutions are strongly preferred, and when it is large, the probability becomes more uniform. A more detailed analysis of the parameters [TeX:] $$\xi$$ and q is presented in Section 3.2.

In the initialization steps, the solution archive is filled with solutions generated by uniform random sampling and sorted according to [TeX:] $$f(s); \xi$$ and q are set by experience, and [TeX:] $$\mu_{l i}$$ in Eq. (1) is initialized to [TeX:] $$s_{l i}.$$ In the optimization steps, [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ generates new solutions by sampling the Gaussian kernel PDF and adds them to the archive. In the Gaussian kernel PDF good solutions have larger weights, so the new solutions are likely to fall around the good solutions; this strategy has an effect similar to pheromone deposition. When adding new solutions to the solution archive, the same number of worst solutions are removed from the archive; this strategy has an effect similar to pheromone evaporation. In this way, the solution archive steadily accumulates better solutions, and the optimization steps run until the termination conditions are met.
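The optimization loop described above — sample the Gaussian kernel, add the new solutions, drop the worst — can be sketched as follows (a sketch under the paper's definitions; the function names and archive layout are our assumptions):

```python
import math
import random

def gaussian_kernel_sample(archive, fitness, xi, q, minimize=True):
    """Construct one new solution as in ACO_R: for each variable,
    pick a guiding archive solution by its rank-based weight, then
    sample from the associated Gaussian. The archive is a list of
    n-dimensional lists."""
    k, n = len(archive), len(archive[0])
    archive.sort(key=fitness, reverse=not minimize)  # best-ranked first
    # Rank-based weights (index 0 corresponds to rank l = 1)
    w = [math.exp(-(l ** 2) / (2.0 * (q * k) ** 2)) / (q * k * math.sqrt(2 * math.pi))
         for l in range(k)]
    total = sum(w)
    probs = [x / total for x in w]
    new = []
    for i in range(n):
        l = random.choices(range(k), weights=probs)[0]
        mu = archive[l][i]
        sd = xi * sum(abs(archive[e][i] - mu) for e in range(k)) / (k - 1)
        new.append(random.gauss(mu, sd))
    return new

def update_archive(archive, new_solutions, fitness, minimize=True):
    """Pheromone deposition/evaporation analogue: add the new
    solutions, re-sort, and remove the same number of worst ones."""
    k = len(archive)
    archive.extend(new_solutions)
    archive.sort(key=fitness, reverse=not minimize)
    del archive[k:]
    return archive
```

Here `update_archive` keeps the archive size constant, which is the evaporation-like step described in the text.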

##### 3.2 Limitations of [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$

By shifting the discrete PDF to a Gaussian kernel function, [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ extends the ACO algorithm to the continuous domain. However, initializing the solution archive by uniform random sampling may not be the best option: a bad initialization may mislead the algorithm into converging to a local optimum and slow its convergence; moreover, the optimization result is influenced by the parameters. In this section, we use an example to show these limitations. The testing function f is defined as follows:

##### (4)

[TeX:] $$f(x, y)=0.5-\frac{\sin ^{2} \sqrt{x^{2}+y^{2}}-0.5}{\left(1+0.001\left(x^{2}+y^{2}\right)\right)^{2}}, \quad x, y \in[-100,100]$$

The extreme point of function f is (0,0) and f(0,0) = 1. Beyond the extreme point there is a ring-shaped ridge with a height of 0.9903 (a local maximum). Fig. 1 shows the optimization results of [TeX:] $$\mathrm{ACO}_{\mathrm{R}}.$$ We ran [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ 10 times with 400 generations each time; it converged to the global optimum only twice and most of the time converged to 0.9903. The reason might be that uniform random sampling contains little information about the optimal solution: when the search reaches a certain number of cycles, the pheromone concentrates around the local optimum and [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ can hardly find a new path.
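For reference, Eq. (4) can be written directly in Python and checked at the origin (the function name is ours):

```python
import math

def f(x, y):
    """Test function of Eq. (4): global maximum f(0,0) = 1,
    surrounded by a ring-shaped ridge of height about 0.9903."""
    r2 = x * x + y * y
    return 0.5 - (math.sin(math.sqrt(r2)) ** 2 - 0.5) / (1 + 0.001 * r2) ** 2

print(f(0.0, 0.0))  # 1.0
```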

The parameters [TeX:] $$\xi$$ and q influence the optimization result, and it is difficult to set them manually. [TeX:] $$\xi$$ scales the standard deviation [TeX:] $$\sigma_{l i}:$$ when [TeX:] $$\xi$$ is small, the Gaussian function has a small standard deviation, so the newly generated solutions stay close to [TeX:] $$s_{l i};$$ this weakens the ability to explore new feasible solutions and makes the algorithm tend to converge to a local optimum. When [TeX:] $$\xi$$ is large, newly generated solutions can be far from [TeX:] $$s_{l i},$$ which enhances exploration but also slows down convergence. Parameter q defines the weights of the Gaussian functions. When q approaches 0, only the Gaussian function associated with the current best solution is used for generating new solutions; for instance, when the length of the solution archive is 50 and q = 0.01, new solutions have nearly a 90% probability of being generated using only the Gaussian function associated with the current best solution. Thus, when q is large the algorithm samples the search space using a larger number of good solutions in the archive, and when q is small it tends to sample using only the best solution in the archive.

To evaluate the influence of the parameters, we ran a test on f with different parameters ([TeX:] $$\xi$$ in {0.5, 1, 1.4} and q in {0.0001, 0.05, 0.5}). In each test, we ran [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ 10 times with 400 generations each time, and we found the results were not stable. The experiment with [TeX:] $$\xi$$ of 1 and q of 0.0001 had the best performance, and the experiment with [TeX:] $$\xi$$ of 1.4 and q of 0.5 had the worst performance.

## 4. Genetic ACO with an Embedded Cloud Model

To overcome the limitations of [TeX:] $$\mathrm{ACO}_{\mathrm{R}},$$ in this section we present CG-ACO. In CG-ACO we use a genetic algorithm to generate the initial solutions, and we embed a cloud model in the updating loop to adaptively set the parameters [TeX:] $$\xi$$ and q.

##### 4.1 Initialize the Solution Archives using Genetic Algorithm

Similar to ACO, the genetic algorithm (GA) is also a metaheuristic optimization algorithm, inspired by the process of natural selection [27]. GA has the ability of rapid global convergence, and recently some work attempting to combine GA and ACO has been presented [22,23,28-30]. Xiong et al. [29] compared the convergence speed of ACO and GA and presented convergence curves. Generally, the convergence behavior of GA and ACO is as shown in Fig. 2.

From Fig. 2 we can see that at the beginning the convergence of GA was fast but became slower and slower over time, because GA lacks a positive feedback mechanism. ACO shows the opposite trend: its convergence was slow at first, since there was not enough pheromone at the beginning, but with the positive feedback mechanism (the accumulation of pheromone) the convergence grew faster and faster over time.

Fig. 2 has a crucial point: the junction of the two convergence curves (point a). Before [TeX:] $$t_{a},$$ GA is more efficient, and after [TeX:] $$t_{a},$$ ACO is more efficient. Therefore, if we combine the two methods, [TeX:] $$t_{a}$$ is the best time to switch algorithms. In CG-ACO the iteration count t of GA satisfies [TeX:] $$t_{b}<t<t_{c},$$ where [TeX:] $$t_{b}$$ and [TeX:] $$t_{c}$$ are the minimum and maximum iterations, respectively. To achieve this, after each iteration we calculate the evolution rate R by:

where [TeX:] $$\bar{f}_{i}(s)$$ is the average of the fitness function in the i-th iteration. If R is small enough [TeX:] $$\left(R<R_{\min },\right.$$ where [TeX:] $$R_{\min }$$ is the shifting threshold) for three successive iterations, we stop GA and use its result as the initial solutions of ACO.

##### 4.2 Embedding Cloud Model for Adaptive Parameter Selection

The cloud model is a cognitive model proposed by Li et al. [31]. It can synthetically describe the randomness and fuzziness of concepts and implement the uncertain transformation between a qualitative concept and its quantitative instantiations. The cloud model has been successfully applied in intelligent control systems, data mining, and other fields [32-38]. Qi and Yang [32] embedded the cloud model into the particle swarm optimization algorithm by introducing two conditional cloud generators into the hybrid operation and a basic cloud generator into the mutation operation. Wei et al. [33] embedded the cloud model into the artificial fish swarm algorithm by using the stable tendency of a basic cloud generator to modify the behavior of the fishes. In this section, we briefly introduce the cloud model and use it for adaptive parameter selection.

[TeX:] $$X=\{x\}$$ is a universe of discourse defined by a qualitative concept [TeX:] $$\tilde{A}; x \in X$$ is a random instantiation of the concept [TeX:] $$\tilde{A},$$ and [TeX:] $$\mu_{\tilde{A}}(x)$$ is the certainty degree of x belonging to [TeX:] $$\tilde{A},$$ which corresponds to a random number with a steady tendency. Then the distribution of x in the universe X is called a cloud, and each x is called a cloud drop.

The cloud model can effectively integrate the randomness and fuzziness of concepts and describe the overall quantitative property of a concept using its expectation [TeX:] $$\left(E_{x}\right),$$ entropy [TeX:] $$\left(E_{n}\right),$$ and hyper entropy [TeX:] $$\left(H_{e}\right).$$ There are two kinds of cloud generators, i.e., the forward generator and the backward generator. The forward generator generates cloud drops according to its numerical characteristics and the backward generator calculates the numerical characteristics from the existing cloud drops.

Fig. 3 illustrates a typical normal cloud model. From Fig. 3, we can see that the entropy represents the uncertainty measurement of a qualitative concept and reflects the dispersing extent of the cloud drops. 99.7% of cloud drops are in the range of [TeX:] $$\left[E_{x}-3 E_{n}, E_{x}+3 E_{n}\right]$$ and the hyper entropy represents the uncertain degree of entropy.
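The numerical characteristics can be estimated from a set of drops with a backward normal cloud generator. The paper does not show its estimator, so the formulas below are one standard moment-based formulation, given here only as a sketch:

```python
import math

def backward_cloud(drops):
    """Estimate (Ex, En, He) of a normal cloud from its drops using
    a common moment-based backward generator: Ex is the sample mean,
    En is sqrt(pi/2) times the mean absolute deviation, and He is
    derived from the excess of the sample variance over En^2."""
    n = len(drops)
    ex = sum(drops) / n
    en = math.sqrt(math.pi / 2.0) * sum(abs(x - ex) for x in drops) / n
    s2 = sum((x - ex) ** 2 for x in drops) / (n - 1)
    he = math.sqrt(max(s2 - en * en, 0.0))
    return ex, en, he
```

A widely dispersed archive yields a large entropy estimate, matching the "3En rule" dispersion described above.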

In this work, the solutions in the solution archive are regarded as cloud drops and the solution archive is regarded as a cloud. If the cloud is random and fuzzy, it means we still lack pheromone, so we should increase [TeX:] $$\xi$$ and q to search for more solutions on a wider scale; if the cloud is no longer fuzzy, we should decrease [TeX:] $$\xi$$ and q to narrow the search scale and increase the convergence speed.

We used the backward cloud generator to calculate the numerical characteristics of the cloud (the solution archive) and used the entropy to measure its randomness and fuzziness. The entropy [TeX:] $$E_{n}$$ of the initial solution archive was used as the standard. We compared the entropy of the i-th iteration [TeX:] $$E_{n i}$$ with [TeX:] $$E_{n}$$ and defined a fuzzy ratio r as follows:

In the (i+1)-th iteration, [TeX:] $$\xi$$ and q were set adaptively by the following equations.

##### (8)

[TeX:] $$\xi_{i+1}=\left\{\begin{array}{lc} \xi_{0} & r_{i}>0.6 \\ 2 \xi_{0} & 0.2<r_{i}<0.6 \\ 5 \xi_{0} & r_{i}<0.2 \end{array}\right.$$

##### (9)

[TeX:] $$q_{i+1}=\left\{\begin{array}{cc} q_{0} & r_{i}>0.6 \\ 10 q_{0} & 0.2<r_{i}<0.6 \\ 100 q_{0} & r_{i}<0.2 \end{array}\right.$$

##### 4.3 The Steps of CG-ACO

The metaheuristic framework is presented in Algorithm 1 and in the following we will explain these steps in detail.

**GAInitialization():** generate an initial population of M individuals randomly and initialize the crossover rate [TeX:] $$P_{c},$$ mutation rate [TeX:] $$P_{m},$$ maximum iterations [TeX:] $$I_{\max },$$ minimum iterations [TeX:] $$I_{\min },$$ and minimum evolution rate [TeX:] $$R_{\min }.$$

After the parameter initialization, GA was used to generate initial solutions until the shifting conditions were met. The shifting conditions are: the number of iterations is greater than [TeX:] $$I_{\min }$$ and the evolution rate is smaller than [TeX:] $$R_{\min }$$ for three successive iterations, or the number of iterations reaches [TeX:] $$I_{\max }.$$
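The shifting conditions can be written as a small predicate (a sketch; the function and argument names are ours):

```python
def should_shift(t, recent_r, i_min, i_max, r_min):
    """Return True when GA should hand its population over to ACO:
    either the iteration count t reached I_max, or t exceeded I_min
    and the evolution rate R stayed below R_min for three successive
    iterations (recent_r holds the R history, newest last)."""
    if t >= i_max:
        return True
    return t > i_min and len(recent_r) >= 3 and all(r < r_min for r in recent_r[-3:])
```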

**ACOInitialization():** initialize the size of the solution archive K and the initial values of [TeX:] $$\xi$$ and q, fill half of the solution archive with the K/2 best solutions found by GA, and fill the other half with K/2 solutions generated by uniform sampling.

The ACO optimization contains three parts: solution construction, pheromone update, and the updating of [TeX:] $$\xi$$ and q.

**SolutionConstruction():** as in [TeX:] $$\mathrm{ACO}_{\mathrm{R}},$$ a solution with n variables is constructed in n steps. In each step we construct one variable: for [TeX:] $$s_{l i}, \mu_{l i}$$ is initialized to [TeX:] $$s_{l i},$$ and [TeX:] $$\sigma_{l i}$$ and [TeX:] $$\omega_{l}$$ are calculated according to Eqs. (2) and (3). Then one of the Gaussian functions constituting the Gaussian kernel is chosen with probability [TeX:] $$p_{l},$$ calculated by:

##### (10)

[TeX:] $$p_{l}=\frac{\omega_{l}}{\sum_{r=1}^{k} \omega_{r}}$$

The solution variable [TeX:] $$s_{l i}$$ is generated by sampling the chosen Gaussian function [TeX:] $$g_{l i}(x)$$ in Eq. (1), and the other solution variables are generated in the same way.

**PheromoneUpdate(): ** since the pheromone is stored as a solution archive, the pheromone update is accomplished by adding the set of newly generated solutions to the archive, sorting the solution archive, and removing the same number of worst solutions.

**ACOParameterUpdate():** the fuzzy ratio r is calculated according to Eq. (7), and the parameters [TeX:] $$\xi$$ and q are updated according to Eqs. (8) and (9).
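The update in Eqs. (8) and (9) is a simple piecewise map from the fuzzy ratio r to the next (ξ, q) pair, which can be sketched as follows (the function name is ours; the paper leaves the boundary cases r = 0.2 and r = 0.6 unspecified):

```python
def update_parameters(r, xi0, q0):
    """Adaptive parameter setting of Eqs. (8) and (9): the smaller
    the fuzzy ratio r, the larger the multipliers on the initial
    xi0 and q0, widening the subsequent search."""
    if r > 0.6:
        return xi0, q0
    if r > 0.2:
        return 2 * xi0, 10 * q0
    return 5 * xi0, 100 * q0
```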

## 5. Experiments and Results

##### 5.1 Comparison with [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ and SGA

In this experiment, CG-ACO was compared with its parents, i.e., [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ and the simple genetic algorithm (SGA). To facilitate the comparison, the testing functions from [5] were used:

##### (11)

[TeX:] $$f_{1}=\sum_{i=1}^{4}\left[100\left(x_{i}^{2}-x_{i+1}\right)^{2}+\left(x_{i}-1\right)^{2}\right], \quad x_{i} \in[-10,10]$$

##### (12)

[TeX:] $$f_{2}=\sin \left(\sum_{i=1}^{5}\left|x_{i}-5\right|\right) / \sum_{i=1}^{5}\left|x_{i}-5\right|, \quad x_{i} \in[1,10]$$

The global minimum of [TeX:] $$f_{1}$$ is at [TeX:] $$x_{i}=1,$$ where the minimum value is 0. In the experiment, the algorithm was considered to have successfully found the global minimum of [TeX:] $$f_{1}$$ when the output was smaller than 0.0001. The global maximum of [TeX:] $$f_{2}$$ is at [TeX:] $$x_{i}=5,$$ where the maximum value is 1; the algorithm was considered to have successfully found the global maximum of [TeX:] $$f_{2}$$ when the output was greater than 0.999 [5].
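As a quick sanity check on Eq. (11) (the function name is ours), evaluating it at the all-ones point gives the global minimum, since every squared term vanishes there:

```python
def f1(x):
    """Eq. (11): a generalized Rosenbrock function over five
    variables; its global minimum value 0 is attained at x_i = 1."""
    return sum(100 * (x[i] ** 2 - x[i + 1]) ** 2 + (x[i] - 1) ** 2 for i in range(4))

print(f1([1.0] * 5))  # 0.0
```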

The parameter configuration of [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ is the same as the best parameters used in [5]: the ant number m = 70, the size of the solution archive K = 45, and the parameters [TeX:] $$\xi=1 \text { and } q=0.0001.$$ The configuration of SGA is as follows: population size M = 50, one-point crossover, one-point mutation, and roulette wheel selection, with crossover rate [TeX:] $$P_{c}=0.9$$ and mutation rate [TeX:] $$P_{m}=0.1.$$ The configuration of CG-ACO shares most of these values, i.e., m = 70, K = 45, the initial value of [TeX:] $$\xi=1,$$ the initial value of q = 0.0001, [TeX:] $$P_{c}=0.9,$$ and [TeX:] $$P_{m}=0.1;$$ its other parameters are maximum iterations [TeX:] $$I_{\max }=400,$$ minimum iterations [TeX:] $$I_{\min }=100,$$ and minimum evolution rate [TeX:] $$R_{\min }=10.$$ Each optimization algorithm was tested ten times, and the results are presented in Table 2. From Table 2, we can see that CG-ACO achieves better performance than [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ and SGA: the average output of CG-ACO is closest to the global optimum. Moreover, we found the hybrid of GA and ACO also helps to avoid local optima; CG-ACO reached the global optimal value 19 out of 20 times, far more often than the other methods. In the experiment, CG-ACO usually shifted to its ACO part after around 100 GA iterations, and most of the time the ACO part reached the global optimum within 50 iterations thanks to the adaptive parameter configuration strategy. On average, CG-ACO used no more than 150 iterations (GA part plus ACO part) to reach the global optimum, faster than [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ (more than 200 iterations).

##### 5.2 Comparison with Other Fusion Methods

In the second experiment, we compared CG-ACO with optimization algorithms that also embed a cloud model, i.e., CQPSO [32] and CAFSA [33]. The function reported in [20] was tested in this section:

##### (13)

[TeX:] $$f_{3}(x, y)=0.5+\frac{\sin ^{2} \sqrt{x^{2}+y^{2}}-0.5}{\left(1+0.001\left(x^{2}+y^{2}\right)\right)^{2}}, \quad x, y \in[-100,100]$$

The global minimum point of [TeX:] $$f_{3}$$ is at (0,0) and the minimum value is 0. [TeX:] $$f_{3}$$ has an infinite number of local minima. In the experiment, the algorithm was considered to have successfully found the global minimum of [TeX:] $$f_{3}$$ only when the output equaled 0.
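Eq. (13) can likewise be checked at the origin (the function name is ours):

```python
import math

def f3(x, y):
    """Eq. (13): global minimum f3(0,0) = 0, with infinitely many
    ring-shaped local minima around the origin."""
    r2 = x * x + y * y
    return 0.5 + (math.sin(math.sqrt(r2)) ** 2 - 0.5) / (1 + 0.001 * r2) ** 2

print(f3(0.0, 0.0))  # 0.0
```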

The parameters of CG-ACO are the same as in the previous experiment, and the parameters of CQPSO and CAFSA are configured as in [32] and [33], with a maximum of 400 iterations. Each optimization algorithm was tested 10 times, and the results are presented in Table 3. From Table 3, we can see that CG-ACO also has an advantage over CQPSO and CAFSA in reaching the global optimum.

##### 5.3 Parameter Stability

##### 5.3.1 ACO parameters

In this section, the parameter stability was evaluated by testing the proposed method with different parameters. To evaluate the influence of the parameters effectively, we designed the experiment according to the uniform design method [39]. In the experiment, the ant number m and the size of the solution archive K varied from 40 to 60, the parameter ξ varied from 0.4 to 1.5, and q varied from 0.0001 to 0.5. These parameters were mixed according to the uniform design table [TeX:] $$U^{*}\left(6^{4}\right)$$ and are listed in Table 4. From Table 4, we can see the parameter combinations are almost randomly distributed.

In this section we also used function [TeX:] $$f_{3}$$ since it is more complicated than [TeX:] $$f_{1} \text { and } f_{2}.$$ Each parameter set was tested 10 times and the maximum iteration number is 400. The experiment results are listed in Table 5. From Table 5, we can see that the algorithm not only has good performance but also is robust to ACO parameter variations.

##### 5.3.2 GA parameters

Combining GA with ACO can increase the convergence speed and the possibility to reach the global optimal solution; however, it also makes the algorithm have more parameters to optimize, i.e., the population size M, the crossover rate [TeX:] $$P_{c}$$ and the mutation rate [TeX:] $$P_{m}.$$ In this work to facilitate the comparison the population size M was set to 50, the crossover rate [TeX:] $$P_{c}$$ was set to 0.9 and the mutation rate [TeX:] $$P_{m}$$ was set to 0.1 which are the same as the optimal GA parameters in the compared literature. To evaluate the parameter stability, we did experiments to evaluate the sensitivity of GA parameters in the hybrid algorithm.

In the experiments, M was set between 40 and 60 with a step of 1, [TeX:] $$P_{c}$$ was set between 0.8 and 0.9 with a step of 0.01, and [TeX:] $$P_{m}$$ was set between 0.05 and 0.15 with a step of 0.01; thus, we got 1,000 sets of GA parameters. We reran each previous test with the 1,000 different parameter sets and found the results were the same as those in Table 5. One reason may be that GA is introduced to provide some better initial solutions for ACO, not the best solution; thus GA's performance does not need to be highly tuned, typical parameters are sufficient, and the ACO algorithm with the cloud model further improves these solutions.

From the parameter stability experiment we can see the performance of CG-ACO is relatively stable to some changes in both ACO and GA parameters. First, by introducing the cloud model the parameters of ant colony can be set adaptively. Second, after we used a genetic algorithm to initialize the solution archive with some good solutions, these solutions would be updated to better solutions by the ant colony algorithm. Thus, we can get good performance without taking too much effort to tune these parameters.

## 6. Conclusion

In this paper, we proposed a new scheme of [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ in which a genetic algorithm and a cloud model are introduced. Generally, GA converges toward the global optimum faster than [TeX:] $$\mathrm{ACO}_{\mathrm{R}}$$ at the start, so for the initialization of the solution archive, half of the solutions were filled with the best solutions found by the genetic algorithm and half were filled by uniform sampling. This strategy helps increase the convergence speed and decrease the probability of falling into local minima. We also embedded a cloud model to adaptively configure the parameters of ACO according to the fuzziness of the current solution archive.

The experiments show that CG-ACO achieves better performance than [TeX:] $$\mathrm{ACO}_{\mathrm{R}},$$ SGA, CQPSO, and CAFSA. Moreover, CG-ACO reached the global optimum more often than the other algorithms. The parameter stability test showed that CG-ACO is also robust to parameter variations.

## References

- [1] M. Dorigo, "Optimization, learning and natural algorithms," Ph.D. dissertation, Dipartimento di Elettronica, Politecnico di Milano, Milan, Italy, 1992.
- [2] M. Dorigo and L. M. Gambardella, "Ant colony system: a cooperative learning approach to the traveling salesman problem," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 53-66, 1997.
- [3] M. Dorigo and T. Stutzle, Ant Colony Optimization. Cambridge, MA: MIT Press, 2004.
- [4] M. Guntsch and M. Middendorf, "A population based approach for ACO," in Applications of Evolutionary Computing. Heidelberg, Germany: Springer, 2002, pp. 72-81.
- [5] K. Socha and M. Dorigo, "Ant colony optimization for continuous domains," European Journal of Operational Research, vol. 185, no. 3, pp. 1155-1173, 2008.
- [6] M. de Paula Marques, F. R. Durand, and T. Abrao, "WDM/OCDM energy-efficient networks based on heuristic ant colony optimization," IEEE Systems Journal, vol. 10, no. 4, pp. 1482-1493, 2016.
- [7] C. Liu, "Optimal design of high-rise building wiring based on ant colony optimization," Cluster Computing, vol. 22, pp. 3479-3486, 2018.
- [8] S. N. Sabri and R. Saian, "Predicting flood in Perlis using ant colony optimization," Journal of Physics: Conference Series, vol. 855, article no. 012040, 2017.
- [9] L. M. Gambardella and M. Dorigo, "Solving symmetric and asymmetric TSPs by ant colonies," in Proceedings of the IEEE International Conference on Evolutionary Computation, Nagoya, Japan, 1996, pp. 622-627.
- [10] E. D. Taillard, "FANT: fast ant system," Istituto Dalle Molle di Studi sull'Intelligenza Artificiale (IDSIA), Manno, Switzerland, Technical Report IDSIA-46-98, 1998.
- [11] T. Stutzle and H. Hoos, "MAX-MIN Ant System and local search for the traveling salesman problem," in Proceedings of the 1997 IEEE International Conference on Evolutionary Computation (ICEC), Indianapolis, IN, 1997, pp. 309-314.
- [12] K. Doerner, R. F. Hartl, and M. Reimann, "Are COMPETants more competent for problem solving? The case of a multiple objective transportation problem," Central European Journal of Operations Research, vol. 11, no. 2, pp. 115-141, 2003.
- [13] K. Doerner, W. J. Gutjahr, R. F. Hartl, C. Strauss, and C. Stummer, "Pareto ant colony optimization: a metaheuristic approach to multiobjective portfolio selection," Annals of Operations Research, vol. 131, no. 1-4, pp. 79-99, 2004.
- [14] S. Iredi, D. Merkle, and M. Middendorf, "Bi-criterion optimization with multi colony ant algorithm," in Evolutionary Multi-Criterion Optimization. Heidelberg, Germany: Springer, 2001, pp. 359-372.
- [15] M. Guntsch and M. Middendorf, "Solving multi-criteria optimization problems with population-based ACO," in Evolutionary Multi-Criterion Optimization. Heidelberg, Germany: Springer, 2003, pp. 464-478.
- [16] X. M. Hu, J. Zhang, and Y. Li, "Orthogonal methods based ant colony search for solving continuous optimization problems," Journal of Computer Science and Technology, vol. 23, no. 1, pp. 2-18, 2008.
- [17] X. M. Hu, J. Zhang, H. S. H. Chung, Y. Li, and O. Liu, "SamACO: variable sampling ant colony optimization algorithm for continuous optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 40, no. 6, pp. 1555-1566, 2010.
- [18] S. H. Pourtakdoust and H. Nobahari, "An extension of ant colony system to continuous optimization problems," in Ant Colony Optimization and Swarm Intelligence. Heidelberg, Germany: Springer, 2004, pp. 294-301.
- [19] F. O. de Franca, G. P. Coelho, F. J. Von Zuben, and R. R. de Faissol Attux, "Multivariate ant colony optimization in continuous search spaces," in Proceedings of the 10th Annual Conference on Genetic and Evolutionary Computation, Atlanta, GA, 2008, pp. 9-16.
- [20] T. Liao, K. Socha, M. A. Montes de Oca, T. Stutzle, and M. Dorigo, "Ant colony optimization for mixed-variable optimization problems," IEEE Transactions on Evolutionary Computation, vol. 18, no. 4, pp. 503-518, 2014.
- [21] L. M. Gambardella and M. Dorigo, "Ant-Q: a reinforcement learning approach to the traveling salesman problem," in
*Machine Learning Proceedings 1995. San Francisco*, CA: Morgan Kaufmann Publishers, 1995;pp. 252-260. custom:[[[-]]] - 22 M. Mahi, O. K. Baykan, H. Kodaz, "A new hybrid method based on particle swarm optimization, ant colony optimization and 3-opt algorithms for traveling salesman problem,"
*Applied Soft Computing*, vol. 30, pp. 484-490, 2015.doi:[[[10.1016/j.asoc.2015.01.068]]] - 23 S. Nemati, M. E. Basiri, N. Ghasem-Aghaee, M. H. Aghdam, "A novel ACO–GA hybrid algorithm for feature selection in protein function prediction,"
*Expert Systems with Applications*, vol. 36, no. 10, pp. 12086-12094, 2009.custom:[[[-]]] - 24 W. Alsaeedan, M. E. B. Menai, S. Al-Ahmadi, "A hybrid genetic-ant colony optimization algorithm for the word sense disambiguation problem,"
*Information Sciences*, vol. 417, pp. 20-38, 2017.doi:[[[10.1016/j.ins.2017.07.002]]] - 25 R. Goel, R. Maini, "A hybrid of ant colony and firefly algorithms (HAFA) for solving vehicle routing problems,"
*Journal of Computational Science*, vol. 25, pp. 28-37, 2018.custom:[[[-]]] - 26 I. Karakonstantis, A. Vlachos, "Hybrid ant colony optimization for continuous domains for solving emission and economic dispatch problems,"
*Journal of Information and Optimization Sciences*, vol. 39, no. 3, pp. 651-671, 2018.custom:[[[-]]] - 27 M. Mitchell,
*An Introduction to Genetic Algorithms, Cambridge, MA: MIT Press, 1996*, CambridgeMA: MIT Press, An Introduction to Genetic Algorithms, 1996.custom:[[[-]]] - 28 B. Ge, J. H. Han, Z. Wei, L. Cheng, Y. Han, "Dynamic hybrid ant colony optimization algorithm for solving the vehicle routing problem with time windows,"
*Pattern Recognition And Artificial Intelligence*, vol. 28, no. 7, pp. 641-650, 2015.custom:[[[-]]] - 29 Z. H. Xiong, S. K. Li, J. H. Chen, "Hardware/Software partitioning based on dynamic combination of genetic algorithm and ant algorithm,"
*Journal of Software*, vol. 16, no. 4, pp. 503-512, 2005.custom:[[[-]]] - 30 J. Li, "Combination of genetic & ant colony algorithms for multi-project resource leveling problem,"
*Computer Integrated Manufacturing Systems*, vol. 16, no. 3, pp. 643-649, 2010.custom:[[[-]]] - 31 D. Li, C. Liu, W. Gan, "A new cognitive model: cloud model,"
*International Journal of Intelligent Systems*, vol. 24, no. 3, pp. 357-375, 2009.doi:[[[10.1002/int.20340]]] - 32 M. Qi, A. Yang, "Quantum particle swarm optimization based on cloud model cloud droplet strategy,"
*Computer Engineering and Applications*, vol. 48, no. 24, pp. 49-52, 2012.custom:[[[-]]] - 33 X. Wei, H. Zeng, Y. Zhou, "Cloud theory-based artificial fish swarm algorithm,"
*Computer Engineering and Applications*, vol. 46, no. 22, pp. 26-29, 2010.custom:[[[-]]] - 34 P. Wang, X. Xu, S. Huang, C. Cai, "A linguistic large group decision making method based on the cloud model,"
*IEEE Transactions on Fuzzy Systems*, vol. 26, no. 6, pp. 3314-3326, 2018.custom:[[[-]]] - 35 X. Shang, P. Ma, T. Chao, "Performance evaluation of electromagnetic railgun exterior ballistics based on cloud model,"
*IEEE Transactions on Plasma Science*, vol. 45, no. 7, pp. 1614-1621, 2017.custom:[[[-]]] - 36 J. Cui, Q. Zheng, Y. Xin, C. Zhou, Q. Wang, N. Zhou, "Feature extraction and classification method for switchgear faults based on sample entropy and cloud model,"
*IET GenerationTransmission & Distribution*, vol. 11, no. 11, pp. 2938-2946, 2017.custom:[[[-]]] - 37 S. Y ang, R. Jiang, H. Wang, S. S. Ge, "Road constrained monocular visual localization using Gaussian-Gaussian Cloud model,"
*IEEE Transactions on Intelligent Transportation Systems*, vol. 18, no. 12, pp. 3449-3456, 2017.doi:[[[10.1109/TITS.2017.2685436]]] - 38 Y. Lin, L. Zhao, H. Li, Y. Sun, "Air quality forecasting based on cloud model granulation,"
*EURASIP Journal on Wireless Communications and Networking*, vol. 2018, no. 1, 2018.custom:[[[-]]] - 39 K. T. Fang, D. K. J. Lin, P. Winker, Y. Zhang, "Uniform design: theory and application,"
*Technometrics*, vol. 42, no. 3, pp. 237-248, 2000.doi:[[[10.1080/00401706.2000.10486045]]]