
Sinusoidal Map Jumping Gravity Search Algorithm Based on Asynchronous Learning

Xinxin Zhou and Guangwei Zhu

Abstract: To address the tendency of the gravitational search algorithm (GSA) to converge prematurely and fall into local solutions when solving single-objective optimization problems, a sinusoidal map jumping gravity search algorithm based on asynchronous learning (SIN-GSA) is proposed. First, a learning mechanism is introduced into the GSA: agents keep learning from the excellent agents of the population while they evolve, which maintains the memory and sharing of evolution information, remedies the shortcoming that an agent's evolution depends only on its current position information, improves the diversity of the population, and avoids premature convergence. Second, a sine function is used to map the change of particle velocity to a position-change probability, improving the convergence accuracy. Third, a Levy flight strategy is introduced to prevent particles from falling into local optima. Finally, the proposed algorithm and other intelligent algorithms are simulated on 18 benchmark functions. The simulation results show that the proposed algorithm achieves better performance.

Keywords: Asynchronous Learning, Gravitational Search Algorithm, Levy Flight, Sinusoidal Map

1. Introduction

Optimization theory has made great progress in recent years [1], and swarm intelligence algorithms have attracted extensive attention. Their common goal is to seek the optimal solution of a problem [2]. In 2009, the gravitational search algorithm (GSA) was proposed by Rashedi et al. [3], inspired by Newtonian gravity. Through the gravitational interaction between agents in the population, swarm intelligence is generated and the optimization search is completed. The algorithm has strong exploitation ability, and its convergence accuracy and convergence rate are significantly superior to those of other algorithms [4-6]. Because of its simple concept, few parameters, and easy implementation, it has attracted more and more scholars' attention and is widely used in many engineering fields, such as engineering production scheduling [7].

Many improvements have been proposed to further increase the efficiency of the GSA. Rashedi et al. [8] combined binary coding with the gravitational search algorithm and proposed a binary gravitational search algorithm, in which the particle velocity is related to the probability of a particle position change; this expanded the application scope of the GSA. The authors of [9] and [10] combined the particle swarm optimization (PSO) algorithm with the gravitational algorithm, improving the performance of the GSA. Yang et al. [11] proposed an immune GSA based on the basic framework of the GSA combined with the immune information processing mechanism of the immune system. To increase population diversity and avoid premature convergence, several researchers introduced the idea of chaos into the GSA: cat chaotic mapping was introduced into the GSA in [12], which changed the way the original GSA population was generated, replacing random initialization with cat chaotic initialization and adopting a small chaotic perturbation to jump out of local optima. Gao et al. [13] replaced the original random sequence with a chaotic sequence generated by logistic mapping and used chaos as the population's local search method. A universal GSA based on adaptive chaotic mutation was proposed in [14]; the concepts of average particle distance and chaotic search mutation were introduced into the algorithm, boundary mutation constraint processing was adopted, and the local exploration ability of the algorithm was enhanced. Xu and Wang [15] proposed a weighted gravitational search algorithm, in which a weight related to the mass of contemporary particles is added to the inertial mass of each particle during the iterative process, effectively improving the accuracy of the algorithm. Zhang and Gong [16] and Li et al. [17] introduced a differential mutation strategy when updating individual particle positions, both showing that the optimization performance of the algorithm is improved by the differential evolution strategy.

Although the GSA shows good performance compared with some traditional methods, it still confronts some problems when solving single-objective optimization problems. In this paper, a sinusoidal map jumping gravity search algorithm based on asynchronous learning (SIN-GSA) is proposed. The main contributions of this paper are as follows:

(1) By introducing learning factors, the diversity of the population is improved and the premature convergence of this algorithm is avoided.

(2) An improved mapping method based on the sine function is proposed. The sine value of the particle velocity is mapped to the probability of a particle position change, which enhances the convergence accuracy of the algorithm.

(3) A particle jumping mechanism based on Levy flight is adopted. This jumping strategy prevents particles from falling into local optimal solutions.

The remainder of this article is organized as follows. Section 2 reviews the GSA. The improved SIN-GSA is presented in Section 3. Simulation experiments and results analysis are presented in Section 4. Finally, conclusions and future research directions are given in Section 5.

2. Gravity Search Algorithm

The GSA has four elements: agent position, active gravitational mass, passive gravitational mass, and inertial mass. Consider a system with $N$ agents (masses). The position of the $i$-th agent is defined as:

(1)
$$X_{i}=\left(x_{i}^{1}, x_{i}^{2}, \ldots, x_{i}^{D}\right)$$

where $i=1,2, \ldots, N$ and $x_{i}^{d}$ is the position of the $i$-th agent in the $d$-th dimension. At time $t$, the force exerted by agent $j$ on agent $i$ in the $d$-th dimension is defined as:

(2)
$$F_{i j}^{d}(t)=G(t) \frac{M_{a j}(t) \times M_{b i}(t)}{R_{i j}(t)+\varepsilon}\left(x_{j}^{d}(t)-x_{i}^{d}(t)\right)$$

where $\varepsilon$ is a small constant, $R_{i j}(t)$ is the Euclidean distance between agents $i$ and $j$, and $G(t)$ is the gravitational constant at time $t$, which decays with the age of the universe as follows:

(3)
$$G(t)=G_{0} \times e^{-\alpha t / T}$$

where $G_{0}$ is the initial gravitational constant, $\alpha$ is a given constant, $t$ is the current iteration number, and $T$ is the maximum iteration number.

With the assumption that the gravitational mass and the inertial mass are equal, the mass of each agent can be updated according to appropriate rules. The individual mass and inertial mass are calculated as follows:

(4)
$$\left\{\begin{array}{c} M_{a i}=M_{b i}=M_{i i}, \quad i=1,2, \ldots, N \\ m_{i}(t)=\frac{f i t_{i}(t)-\operatorname{worst}(t)}{\operatorname{best}(t)-\operatorname{worst}(t)} \\ M_{i}(t)=m_{i}(t) / \sum_{j=1}^{N} m_{j}(t) \end{array}\right.$$

where $M_{i i}$ is the inertial mass of the $i$-th agent, $f i t_{i}(t)$ is the fitness value of agent $i$ at time $t$, $\operatorname{worst}(t)$ is the worst fitness value in the population, and $\operatorname{best}(t)$ is the best fitness value. Taking the global minimization problem as an example, $\operatorname{best}(t)$ and $\operatorname{worst}(t)$ are defined as follows:

(5)
$$\left\{\begin{array}{c} \operatorname{worst}(t)=\max f i t_{i}(t) \\ \operatorname{best}(t)=\min f i t_{i}(t) \end{array}, \quad \text { for } i \in\{1,2, \ldots, N\}\right.$$
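As an illustration, a minimal NumPy sketch of this mass update for a minimization problem is given below; the uniform-mass fallback for a population with identical fitness values is our own safeguard, not part of the original formulation.

```python
import numpy as np

def compute_masses(fitness):
    """Mass update of Eqs. (4)-(5) for a minimization problem."""
    best, worst = fitness.min(), fitness.max()          # Eq. (5)
    if best == worst:                                   # degenerate population:
        return np.full(len(fitness), 1.0 / len(fitness))  # equal masses (our guard)
    m = (fitness - worst) / (best - worst)              # m_i(t) in [0, 1]
    return m / m.sum()                                  # M_i(t) = m_i / sum_j m_j
```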

On the basis of Newton's second law, the acceleration of agent $i$ in the $d$-th dimension at time $t$ is computed from the total force $F_{i}^{d}(t)$ acting on it:

(6)
$$a_{i}^{d}(t)=\frac{F_{i}^{d}(t)}{M_{i i}(t)}$$

During the iterations, the velocity and position of agent $i$ are updated as follows:

(7)
$$v_{i}^{d}(t+1)=\operatorname{rand}_{i} \times v_{i}^{d}(t)+a_{i}^{d}(t)$$

(8)
$$x_{i}^{d}(t+1)=x_{i}^{d}(t)+v_{i}^{d}(t+1)$$

where $\operatorname{rand}_{i}$ is a random value in the interval $[0,1]$, used in the velocity update to increase the randomness of the agent search.
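The force accumulation behind Eq. (6) can be sketched as follows. The random weighting of the summed pairwise forces and the cancellation of the passive mass $M_{b i}$ with the inertial mass $M_{i i}$ follow the standard GSA formulation [3]; the paper leaves the summation over $j$ implicit, so this is a sketch under that assumption.

```python
import numpy as np

def acceleration(X, M, G, eps=1e-10):
    """Accelerations a_i^d of Eq. (6) from the pairwise forces of Eq. (2).

    X : (N, D) agent positions; M : (N,) masses; G : gravitational constant.
    Since M_bi = M_ii, the passive mass cancels in a_i = F_i / M_ii.
    """
    N = X.shape[0]
    A = np.zeros_like(X)
    for i in range(N):
        diff = X - X[i]                       # x_j^d - x_i^d for every j
        R = np.linalg.norm(diff, axis=1)      # Euclidean distance R_ij(t)
        w = G * M / (R + eps)                 # G(t) M_aj(t) / (R_ij(t) + eps)
        w[i] = 0.0                            # an agent exerts no force on itself
        A[i] = (np.random.rand(N, 1) * w[:, None] * diff).sum(axis=0)
    return A

# Eqs. (7)-(8), vectorized over all agents:
#   V = np.random.rand(N, 1) * V + acceleration(X, M, G)
#   X = X + V
```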

3. Sinusoidal Map Jumping Gravity Search Algorithm based on Asynchronous Learning

3.1 Asynchronous Learning Factors

The update formulas for agent velocity and displacement in the iterative process of the GSA are given by formulas (7) and (8). Each agent relies only on the gravity between agents for optimization and is affected only by current position information, which indicates that the algorithm lacks memory. At the beginning of the iteration, the agents are evenly distributed in the search space. As the iteration progresses, the surrounding agents gather toward any better solution that is found. As the agents continue to gather, in the later stages of the iteration the agents clustered around a local optimal solution have almost the same inertial mass, the forces they exert on each other are nearly balanced, the population diversity disappears, and the algorithm stagnates.

To alleviate the deficiency of the GSA in which the diversity of the population is reduced in the late stage of iterations, the concept of a learning factor is introduced into the optimization process of the GSA. By adjusting the learning factors, the memory and information-sharing capabilities of the population agents during the evolution process are tuned. Through the retention of each agent's own position information and the exchange and sharing of elite individual information during the population iteration process, the population diversity is improved and premature convergence is avoided. Learning factor $c_{1}$ represents the agent's learning from its own evolutionary mechanism, which is called memory; it retains the agent's own information as much as possible, so the diversity of the population is maintained and the overall exploration capability is enhanced. Individuals should also enhance their ability to learn from and communicate with the best individuals, that is, to share information and strengthen local exploitation. Therefore, learning factor $c_{2}$ represents the agent's learning from the population's evolutionary mechanism and can effectively alleviate the stagnation of the GSA. The agents approach the optimal solution through memory and information sharing: SIN-GSA uses the currently obtained optimal solution to guide agents with large inertial mass toward the global optimal direction, thus preventing all agents from prematurely clustering around a single solution.

The two learning factors change differently over time during the optimization process, so they are called asynchronously changing learning factors. In the early stage of evolution, the self-learning ability should be stronger to avoid losing promising solutions; in the later stage, the population learning ability should be stronger to escape local optima. The learning factors are therefore defined as follows:

(9)
$$c_{1}=c_{1,ini}+\left(c_{1,fin}-c_{1,ini}\right) \times t / T$$

(10)
$$c_{2}=c_{2,ini}+\left(c_{2,fin}-c_{2,ini}\right) \times t / T$$

where $c_{ini}$ is the initial value of a learning factor, $c_{fin}$ is its value at the end of the iterations, $t$ is the current iteration number, and $T$ is the maximum iteration number.
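The schedules of Eqs. (9) and (10) reduce to straight-line interpolation between the initial and final values. In the sketch below, the boundary values (2.5 to 0.5 for $c_{1}$, 0.5 to 2.5 for $c_{2}$) are illustrative assumptions, since the paper does not list them; likewise, how $c_{1}$ and $c_{2}$ enter the velocity update, for example $v \leftarrow \operatorname{rand} \cdot v+c_{1} a+c_{2} \cdot \operatorname{rand} \cdot\left(x_{best}-x\right)$ by analogy with PSO-GSA, is our reading rather than a formula stated in the paper.

```python
def learning_factors(t, T, c1_ini=2.5, c1_fin=0.5, c2_ini=0.5, c2_fin=2.5):
    """Asynchronous learning factors of Eqs. (9)-(10).

    c1 (memory / self-learning) decays while c2 (population learning)
    grows, so self-learning dominates early and population learning late.
    Boundary values are assumed; the paper does not specify them.
    """
    c1 = c1_ini + (c1_fin - c1_ini) * t / T
    c2 = c2_ini + (c2_fin - c2_ini) * t / T
    return c1, c2
```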

3.2 Sine Function Mapping

To further improve the convergence performance of the GSA, a sine function mapping strategy is proposed. With the use of the sine function, the sine value of the agent velocity is mapped to the probability that the agent position will change, and the performance of the algorithm is improved.

The search speed of an agent changes from fast to slow during the optimization process. When an agent's speed is fast, its current position has not yet reached the optimal position, so the optimal value needs to be approached as quickly as possible; when an agent's speed is slow, its position is close to the optimal position; and when the optimal position is reached, the speed of the agent becomes zero. On the basis of these observations, a sine function mapping strategy is proposed to improve the convergence performance of the GSA. The sine mapping function is shown in formula (11):

(11)
$$f(v)=\left\{\begin{array}{ll} 1, & v<-\frac{\pi}{2} \text { or } v>\frac{\pi}{2} \\ |\sin (v)|, & v \in\left[-\frac{\pi}{2}, \frac{\pi}{2}\right] \end{array}\right.$$

where $v$ is the velocity value of the agent, and $f(v)$ is the probability that the agent's position vector will change, obtained from the sine value of the velocity. When the velocity is within the interval $\left[-\frac{\pi}{2}, \frac{\pi}{2}\right]$, the sine value of the velocity is mapped to the probability of a position change, and a mandatory position update strategy is adopted. When the absolute value of the velocity is large, a larger probability value is given to the agent, which increases the convergence speed of the algorithm; when the absolute value of the velocity is small, a smaller probability is given, which improves the convergence accuracy. When the velocity is outside the interval $\left[-\frac{\pi}{2}, \frac{\pi}{2}\right]$, the probability of a position change is 1.
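A sketch of one plausible reading of this strategy follows: each dimension keeps the usual update $x+v$ with probability $f(v)$ and stays put otherwise. The paper does not spell out what a "position change" means in the continuous domain, so the Bernoulli gating below is an assumption.

```python
import numpy as np

def sine_map_update(X, V):
    """Position update gated by the sine map of Eq. (11)."""
    prob = np.where(np.abs(V) > np.pi / 2, 1.0, np.abs(np.sin(V)))  # f(v)
    move = np.random.rand(*V.shape) < prob    # change position with prob f(v)
    return np.where(move, X + V, X)
```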

3.3 Jumping Mechanism

In the GSA, each agent adjusts its own velocity and position according to the gravity it receives, so it is constrained by itself and by the global optimum. Multimodal problems contain multiple extreme points, which is why agents gather at a local optimum when they are close to it. Once agents fall into a local optimum, it is difficult for them to jump out of that region and explore new unknown regions. Therefore, the Levy flight mechanism is introduced to give the locally optimal agent the ability to explore new areas.

Levy flight [18] is a random-walk search method whose step lengths occasionally change drastically during the search, enabling the algorithm to jump out of local optima. After the $t$-th iteration, the position of the best agent of the population is retained, and a Levy flight search is performed around it. The path of the Levy flight search is calculated as follows:

(12)
$$x=u /|v|^{1 / \beta}$$

where $x$ is the Levy flight search path, and $u$ and $v$ follow normal distributions, $u \sim N\left(0, \sigma^{2}\right)$ and $v \sim N(0,1)$, with $\sigma$ given as follows:

(13)
$$\sigma=\left\{\frac{\Gamma(1+\beta) \sin (\pi \beta / 2)}{\Gamma[(1+\beta) / 2]\, \beta\, 2^{(\beta-1) / 2}}\right\}^{1 / \beta}$$

where $\beta$ takes a value in $(0,2)$ and $\Gamma$ is the gamma function. The Levy flight search path $\operatorname{Levy}(\varepsilon)$ can be determined from the above two formulas.
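Equations (12) and (13) correspond to Mantegna's algorithm for generating Levy-stable step lengths. A minimal sketch follows, assuming the common setting $\beta=1.5$ (the paper only restricts $\beta$ to $(0,2)$):

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5):
    """Levy flight step of Eqs. (12)-(13) (Mantegna's algorithm)."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)  # Eq. (13)
    u = np.random.normal(0.0, sigma, dim)     # u ~ N(0, sigma^2)
    v = np.random.normal(0.0, 1.0, dim)       # v ~ N(0, 1)
    return u / np.abs(v) ** (1 / beta)        # Eq. (12)
```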

3.4 Pseudo-Code of SIN-GSA

Algorithm 1 is the pseudo-code of the SIN-GSA.

Algorithm 1. Sinusoidal map jumping gravity search algorithm based on asynchronous learning (pseudo-code figure not reproduced here)
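Since the pseudo-code figure is not reproduced here, the following sketch assembles the components of Sections 3.1-3.3 in the order the text implies. The parameter defaults $G_{0}=100$ and $\alpha=20$ are the usual GSA settings and, like the velocity update with $c_{1}$ and $c_{2}$, are assumptions rather than values stated in the paper. It reuses compute_masses, acceleration, learning_factors, sine_map_update, and levy_step from the sketches above.

```python
import numpy as np

def sin_gsa(f, lo, hi, dim, N=50, T=1000, G0=100.0, alpha=20.0):
    """Sketch of SIN-GSA (Algorithm 1) for minimizing f over [lo, hi]^dim."""
    X = np.random.uniform(lo, hi, (N, dim))   # uniform initial population
    V = np.zeros((N, dim))
    best_x, best_f = X[0].copy(), np.inf
    for t in range(1, T + 1):
        fit = np.apply_along_axis(f, 1, X)    # evaluate all agents
        if fit.min() < best_f:
            best_f, best_x = fit.min(), X[fit.argmin()].copy()
        M = compute_masses(fit)               # Eqs. (4)-(5)
        G = G0 * np.exp(-alpha * t / T)       # Eq. (3)
        A = acceleration(X, M, G)             # Eqs. (2) and (6)
        c1, c2 = learning_factors(t, T)       # Eqs. (9)-(10)
        # Velocity update with asynchronous learning (our reading, Section 3.1)
        V = (np.random.rand(N, 1) * V + c1 * A
             + c2 * np.random.rand(N, 1) * (best_x - X))
        X = np.clip(sine_map_update(X, V), lo, hi)   # Eq. (11)
        # Levy jump around the retained best agent (Section 3.3)
        trial = np.clip(best_x + levy_step(dim), lo, hi)
        if f(trial) < best_f:
            best_f, best_x = f(trial), trial
    return best_x, best_f

# Example: minimize the sphere function F1 in 30 dimensions
# xb, fb = sin_gsa(lambda x: np.sum(x ** 2), -100.0, 100.0, 30)
```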

4. Experiment and Analysis

4.1 Test Functions and Evaluation Criteria

To evaluate the performance of SIN-GSA, 18 test functions with different characteristics are selected. The details of the test functions are shown in Table 1. The functions are divided into three groups: $F_{1}-F_{7}$ are high-dimensional unimodal functions, which are used to test the optimization accuracy of the algorithms; $F_{8}-F_{13}$ are high-dimensional multimodal functions, which are used to test the global search performance of the algorithms and their ability to avoid premature convergence; and $F_{14}-F_{18}$ are low-dimensional multimodal functions, which are used to test the robustness of the algorithms.

The following performance indicators are mainly involved:

(1) Solution accuracy: the best accuracy obtained when the algorithm reaches a given number of evaluations; the closer the solution is to the theoretical optimal value, the better.

(2) Convergence speed: measured by the optimal solution obtainable under the same number of evaluations, or by the number of evaluations required to reach the optimal solution.

Each algorithm was executed 30 times on each test function to obtain statistical results. The best, mean, and standard deviation (Std) of the solutions at the maximum iteration number of 1000 are reported.

4.2 Comparison of Convergence Accuracy

The proposed SIN-GSA is compared with GSA [3] and PSO-GSA [10]. In this experiment, the best, mean, and Std values obtained by GSA, PSO-GSA, and SIN-GSA are reported in Table 2, and the best results are highlighted in bold.

Table 1. Test functions

| Benchmark function | Dimension | Range | Optimal |
|---|---|---|---|
| $F_{1}(X)=\sum_{i=1}^{n} x_{i}^{2}$ | 30 | $[-100,100]^{n}$ | 0 |
| $F_{2}(X)=\sum_{i=1}^{n}\vert x_{i}\vert+\prod_{i=1}^{n}\vert x_{i}\vert$ | 30 | $[-10,10]^{n}$ | 0 |
| $F_{3}(X)=\sum_{i=1}^{n}\left(\sum_{j=1}^{i} x_{j}\right)^{2}$ | 30 | $[-100,100]^{n}$ | 0 |
| $F_{4}(X)=\max \left\{\vert x_{i}\vert, 1 \leq i \leq n\right\}$ | 30 | $[-100,100]^{n}$ | 0 |
| $F_{5}(X)=\sum_{i=1}^{n-1}\left[100\left(x_{i+1}-x_{i}^{2}\right)^{2}+\left(x_{i}-1\right)^{2}\right]$ | 30 | $[-30,30]^{n}$ | 0 |
| $F_{6}(X)=\sum_{i=1}^{n}\left(\left[x_{i}+0.5\right]\right)^{2}$ | 30 | $[-100,100]^{n}$ | 0 |
| $F_{7}(X)=\sum_{i=1}^{n} i x_{i}^{4}+\operatorname{random}[0,1)$ | 30 | $[-1.28,1.28]^{n}$ | 0 |
| $F_{8}(X)=\sum_{i=1}^{n}-x_{i} \sin \left(\sqrt{\vert x_{i}\vert}\right)$ | 30 | $[-500,500]^{n}$ | -1.2570e+04 |
| $F_{9}(X)=\sum_{i=1}^{n}\left[x_{i}^{2}-10 \cos \left(2 \pi x_{i}\right)+10\right]$ | 30 | $[-5.12,5.12]^{n}$ | 0 |
| $F_{10}(X)=-20 \exp \left(-0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_{i}^{2}}\right)-\exp \left(\frac{1}{n} \sum_{i=1}^{n} \cos \left(2 \pi x_{i}\right)\right)+20+e$ | 30 | $[-32,32]^{n}$ | 0 |
| $F_{11}(X)=\frac{1}{4000} \sum_{i=1}^{n} x_{i}^{2}-\prod_{i=1}^{n} \cos \left(\frac{x_{i}}{\sqrt{i}}\right)+1$ | 30 | $[-50,50]^{n}$ | 0 |
| $F_{12}(X)=\frac{\pi}{n}\left\{10 \sin ^{2}\left(\pi y_{1}\right)+\sum_{i=1}^{n-1}\left(y_{i}-1\right)^{2}\left[1+10 \sin ^{2}\left(\pi y_{i+1}\right)\right]+\left(y_{n}-1\right)^{2}\right\}+\sum_{i=1}^{n} u\left(x_{i}, 10,100,4\right)$ | 30 | $[-600,600]^{n}$ | 0 |
| $F_{13}(X)=0.1\left\{\sin ^{2}\left(3 \pi x_{1}\right)+\sum_{i=1}^{n}\left(x_{i}-1\right)^{2}\left[1+\sin ^{2}\left(3 \pi x_{i}+1\right)\right]+\left(x_{n}-1\right)^{2}\left[1+\sin ^{2}\left(2 \pi x_{n}\right)\right]\right\}+\sum_{i=1}^{n} u\left(x_{i}, 5,100,4\right)$ | 30 | $[-50,50]^{n}$ | -1.15044 |
| $F_{14}(X)=\left(\frac{1}{500}+\sum_{j=1}^{25} \frac{1}{j+\sum_{i=1}^{2}\left(x_{i}-a_{i j}\right)^{6}}\right)^{-1}$ | 2 | $[-65.53,65.53]^{2}$ | 0.998 |
| $F_{15}(X)=\sum_{i=1}^{11}\left[a_{i}-\frac{x_{1}\left(b_{i}^{2}+b_{i} x_{2}\right)}{b_{i}^{2}+b_{i} x_{3}+x_{4}}\right]^{2}$ | 4 | $[-5,5]^{4}$ | 3.075e-04 |
| $F_{16}(X)=\left[1+\left(x_{1}+x_{2}+1\right)^{2}\left(19-14 x_{1}+3 x_{1}^{2}-6 x_{1} x_{2}+3 x_{2}^{2}\right)\right] \times\left[30+\left(2 x_{1}-3 x_{2}\right)^{2}\left(18-32 x_{1}+12 x_{1}^{2}+48 x_{2}-36 x_{1} x_{2}+27 x_{2}^{2}\right)\right]$ | 2 | $[-5,5]^{2}$ | 3.00 |
| $F_{17}(X)=-\sum_{i=1}^{4} c_{i} \exp \left(-\sum_{j=1}^{3} a_{i j}\left(x_{j}-p_{i j}\right)^{2}\right)$ | 3 | $[0,1]^{3}$ | -3.86 |
| $F_{18}(X)=-\sum_{i=1}^{4} c_{i} \exp \left(-\sum_{j=1}^{6} a_{i j}\left(x_{j}-p_{i j}\right)^{2}\right)$ | 6 | $[0,1]^{6}$ | -3.3220 |

Table 2. Experimental results of convergence accuracy

| Function | GSA Best | GSA Mean | GSA Std | PSO-GSA Best | PSO-GSA Mean | PSO-GSA Std | SIN-GSA Best | SIN-GSA Mean | SIN-GSA Std |
|---|---|---|---|---|---|---|---|---|---|
| $F_{1}$ | 1.85e-16 | 4.16e-16 | 1.22e-16 | 2.31e-19 | 1.02e+03 | 4.03e+03 | 0 | 0 | 0 |
| $F_{2}$ | 5.50e-08 | 7.09e-08 | 6.10e-09 | 10.00 | 5.05 | 17.12 | 0 | 0 | 0 |
| $F_{3}$ | 171.77 | 378.68 | 128.76 | 805.41 | 5.52e+03 | 6.19e+03 | 0 | 0 | 0 |
| $F_{4}$ | 7.45e-09 | 0.03 | 0.11 | 24.46 | 41.62 | 24.59 | 0 | 0 | 0 |
| $F_{5}$ | 27.07 | 59.63 | 112.92 | 12.55 | 28.46 | 17.98 | 24.18 | 25.19 | 0.36 |
| $F_{6}$ | 1.67e-18 | 0 | 0 | 2.13e-19 | 990.023 | 3.02e+03 | 0 | 0 | 0 |
| $F_{7}$ | 0.01 | 0.02 | 0.01 | 0.02 | 0.05 | 0.02 | 1.09e-05 | 5.25e-04 | 5.80e-07 |
| $F_{8}$ | -3585.80 | -2734.80 | 406.16 | -7.12e+03 | -7.77e+03 | 700.57 | -1.44e+04 | -1.50e+04 | 328 |
| $F_{9}$ | 7.96 | 14.16 | 4.74 | 95.52 | 149.57 | 40.34 | 3.65e-15 | 6.78e-14 | 8.26e-20 |
| $F_{10}$ | 9.58e-09 | 1.35e-08 | 1.87e-09 | 17.11 | 8.18 | 7.46 | 8.88e-16 | 8.88e-16 | 0 |
| $F_{11}$ | 1.89 | 4.95 | 1.71 | 0.01 | 21.08 | 38.82 | 8.32e-16 | 5.79e-15 | 3.31e-17 |
| $F_{12}$ | 2.81e-18 | 0.11 | 0.23 | 0.52 | 3.76 | 4.43 | 0.10 | 0.21 | 0.06 |
| $F_{13}$ | 4.36e-17 | 0.01 | 0.03 | 11.86 | 9.38 | 7.87 | 1.42 | 1.95 | 0.19 |
| $F_{14}$ | 0.998 | 3.995 | 3.318 | 0.998 | 2.682 | 4.301 | 0.998 | 1.792 | 0.989 |
| $F_{15}$ | 9.89e-04 | 2.8e-03 | 1.4e-03 | 0.0012 | 0.0039 | 0.0075 | 3.075e-04 | 3.426e-04 | 1.747e-04 |
| $F_{16}$ | 3.00 | 3.00 | 6.84e-15 | 3.00 | 3.00 | 1.27e-15 | 3.00 | 3.00 | 1.79e-16 |
| $F_{17}$ | -3.86 | -3.86 | 1.44e-04 | -3.86 | -3.86 | 2.48e-15 | -3.86 | -3.86 | 2.05e-15 |
| $F_{18}$ | -3.3220 | -3.3220 | 5.71e-16 | -3.3220 | -3.2546 | 5.99e-02 | -3.3220 | -3.3220 | 3.17e-18 |

The best results are highlighted in bold.

To display the optimization process of the algorithms intuitively, the convergence curves of some of the test functions are provided in Fig. 1. In Fig. 1, the abscissa represents the number of iterations, and the ordinate represents the average fitness value (logarithm with base e).

The analysis of Table 2 and Fig. 1 indicates that the high-dimensional unimodal functions $\left(F_{1}-F_{7}\right)$ examine the algorithms' global search capabilities. Among the seven test functions, SIN-GSA makes five functions converge to the theoretical value of zero. Although $F_{5}$ and $F_{7}$ did not converge to the theoretical value of 0, the best, mean, and standard deviation values reached within 1000 iterations are significantly improved. As shown in Table 2, the proposed SIN-GSA has the strongest global optimization ability.

$F_{8}-F_{13}$ are high-dimensional multimodal functions with many local extreme points, which are used to test the ability of the algorithms to avoid premature convergence. $F_{9}$ is a typical nonlinear multimodal function: its search space contains many local extreme points and its peaks show a jump shape, which increases the search difficulty. Table 2 shows that better values were obtained by the proposed algorithm for the benchmark functions $F_{9}-F_{11}$. In comparison, neither the GSA nor the PSO-GSA can obtain ideal values for these test functions.

The low-dimensional multimodal functions $\left(F_{14}-F_{18}\right)$ have relatively few local extrema, and the global search is easier. As shown in Table 2, when the low-dimensional multimodal functions are solved, every index of SIN-GSA is the minimum value and the theoretical optimal values are obtained.

Fig. 1. Test functions convergence diagram (figure not reproduced here).

4.3 Comparison of Convergence Speed

When considering the convergence rate, we compare the accuracy of the solutions at the same number of iterations. $F_{1}-F_{13}$ are high-dimensional functions, whose optimal solutions were recorded at 500, 1000, and 1500 iterations. The optimal solutions of the low-dimensional functions $\left(F_{14}-F_{18}\right)$ were recorded when the number of function evaluations was 400, 600, and 800. Tables 3 and 4 show the experimental results.

Under the same number of evaluations, the convergence accuracy of the proposed SIN-GSA is significantly improved compared with the GSA and PSO-GSA, and SIN-GSA converges to the theoretical optimal value faster than the original algorithm. The convergence speed of the proposed algorithm is significantly improved, especially when the population size is increased.

Table 3. Experimental results of convergence speed ($F_{1}-F_{13}$)

| Function | GSA 500 | GSA 1000 | GSA 1500 | PSO-GSA 500 | PSO-GSA 1000 | PSO-GSA 1500 | SIN-GSA 500 | SIN-GSA 1000 | SIN-GSA 1500 |
|---|---|---|---|---|---|---|---|---|---|
| $F_{1}$ | 2.01e-15 | 3.37e-16 | 1.93e-16 | 3.5563 | 2.48e-19 | 2.30e-19 | 0 | 0 | 0 |
| $F_{2}$ | 7.34e-08 | 8.15e-08 | 5.70e-08 | 6.53e-09 | 2.07e-09 | 1.95e-09 | 0 | 0 | 0 |
| $F_{3}$ | 7.37e+02 | 2.27e+02 | 2.98e+02 | 1.73e+04 | 5.72e+03 | 1.00e+04 | 0 | 0 | 0 |
| $F_{4}$ | 6.62 | 9.77e-09 | 8.05e-09 | 31.57 | 27.55 | 24.68 | 0 | 0 | 0 |
| $F_{5}$ | 27.93 | 27.39 | 27.08 | 91.23 | 23.91 | 23.69 | 26.96 | 25.02 | 23.34 |
| $F_{6}$ | 2 | 0 | 0 | 3.26e-02 | 1.94e-19 | 1.71e-19 | 0 | 0 | 0 |
| $F_{7}$ | 7.58e-02 | 2.96e-02 | 6.13e-03 | 1.23e-01 | 4.50e-02 | 7.16e-02 | 3.41e-03 | 5.53e-04 | 5.28e-05 |
| $F_{8}$ | -3.3e+03 | -2.3e+03 | -2.7e+03 | -7.8e+03 | -7.2e+03 | -7.1e+03 | -5.8e+03 | -4.8e+03 | -5.9e+03 |
| $F_{9}$ | 1.89e+01 | 1.29e+01 | 1.69e+01 | 1.02e+02 | 1.24e+02 | 1.28e+02 | 5.42e-14 | 5.42e-14 | 5.42e-14 |
| $F_{10}$ | 1.70e-08 | 1.51e-08 | 1.14e-08 | 16.67 | 16.74 | 17.64 | 8.88e-16 | 8.88e-16 | 8.88e-16 |
| $F_{11}$ | 19.54 | 3.66 | 0.99 | 1.04 | 3.84e-09 | 1.11e-16 | 0 | 0 | 0 |
| $F_{12}$ | 0.57 | 1.24e-05 | 2.31e-18 | 2.81 | 2.19 | 1.63 | 0.21 | 0.17 | 0.17 |
| $F_{13}$ | 3.65e-08 | 1.60e-16 | 4.69e-17 | 41.12 | 25.44 | 3.22e-20 | 1.92 | 2.08 | 1.98 |

The best results are highlighted in bold.

Table 4. Experimental results of convergence speed ($F_{14}-F_{18}$)

| Function | GSA 400 | GSA 600 | GSA 800 | PSO-GSA 400 | PSO-GSA 600 | PSO-GSA 800 | SIN-GSA 400 | SIN-GSA 600 | SIN-GSA 800 |
|---|---|---|---|---|---|---|---|---|---|
| $F_{14}$ | 13.69 | 3.97 | 4.32 | 0.998 | 0.998 | 0.998 | 0.998 | 0.998 | 0.998 |
| $F_{15}$ | 8.03e-03 | 2.92e-03 | 8.92e-04 | 7.26e-04 | 5.36e-04 | 6.46e-04 | 3.07e-04 | 3.07e-04 | 3.07e-04 |
| $F_{16}$ | 3.00 | 3.00 | 3.00 | 3.00 | 3.00 | 3.00 | 3.00 | 3.00 | 3.00 |
| $F_{17}$ | -3.80 | -3.85 | -3.86 | -3.82 | -3.84 | -3.86 | -3.86 | -3.86 | -3.86 |
| $F_{18}$ | -3.3220 | -3.3220 | -3.3220 | -3.3220 | -3.3220 | -3.3220 | -3.3220 | -3.3220 | -3.3220 |

The best results are highlighted in bold.

5. Conclusion

The SIN-GSA based on asynchronous learning is proposed to address the insufficient convergence accuracy of the GSA. The main work of this paper is summarized as follows. (1) With the introduction of a learning mechanism into the GSA, particles keep learning from outstanding particles in the population while they evolve, and they remember their own evolution information and the optimal particle's evolution information, which maintains the memory and sharing of evolutionary information, improves population diversity, and avoids premature convergence. (2) The concept of sine function mapping is introduced into the GSA: the sine function maps the change in particle velocity to the probability of a position change, giving the particles strong position-change information and improving the convergence accuracy and speed of the algorithm. (3) With the introduction of Levy flight into the GSA, the Levy flight strategy makes particles jump during the search, changes the search path, strengthens the algorithm's local search, and enables it to jump out of local optimal areas rather than falling into local optimal solutions. (4) Simulation experiments on representative test functions with different peak shapes, and comparisons with other improved algorithms, show that SIN-GSA has better optimization performance.

In future work, SIN-GSA can be extended to handle combinatorial optimization and constrained optimization problems. In addition, we can also employ SIN-GSA for solving more complex real-world problems.

Acknowledgement

This research is funded by the Jilin City Project of Scientific and Technological Innovation Development (No. 20190302202).

Biography

Xinxin Zhou
https://orcid.org/0000-0003-2209-2164

She received her Ph.D. degree from China University of Mining and Technology (Beijing). She is currently an associate professor in the School of Computer Science, Northeast Electric Power University. Her current research interests include intelligent algorithms and intelligent information processing.

Biography

Guangwei Zhu
https://orcid.org/0000-0003-2617-7982

He received his master's degree from the Department of Computer Science, Northeast Electric Power University, China. His research interests include intelligent algorithms and their applications. He now works at Guangdong Yudean Jinghai Power Generation Co. Ltd., where his research direction is smart power plants.

References

  • 1 F. B. Ozsoydan, A. Baykasoglu, "A swarm intelligence-based algorithm for the set-union knapsack problem," Future Generation Computer Systems, vol. 93, pp. 560-569, 2019. doi: 10.1016/j.future.2018.08.002
  • 2 S. Liu, Y. Yang, Y. Zhou, "A swarm intelligence algorithm: lion swarm optimization," Pattern Recognition and Artificial Intelligence, vol. 31, no. 5, pp. 431-441, 2018. doi: 10.16451/j.cnki.issn1003-6059.201805005
  • 3 E. Rashedi, H. Nezamabadi-Pour, S. Saryazdi, "GSA: a gravitational search algorithm," Information Sciences, vol. 179, no. 13, pp. 2232-2248, 2009. doi: 10.1016/j.ins.2009.03.004
  • 4 C. Liu, P. Niu, G. Li, X. You, Y. Ma, W. Zhang, "A hybrid heat rate forecasting model using optimized LSSVM based on improved GSA," Neural Processing Letters, vol. 45, no. 1, pp. 299-318, 2017. doi: 10.1007/s11063-016-9523-0
  • 5 F. Van den Bergh, A. P. Engelbrecht, "A study of particle swarm optimization particle trajectories," Information Sciences, vol. 176, no. 8, pp. 937-971, 2006. doi: 10.1016/j.ins.2005.02.003
  • 6 D. Karaboga, B. Akay, "A comparative study of artificial bee colony algorithm," Applied Mathematics and Computation, vol. 214, no. 1, pp. 108-132, 2009. doi: 10.1016/j.amc.2009.03.090
  • 7 V. Brunner, L. Klockner, R. Kerpes, D. U. Geier, T. Becker, "Online sensor validation in sensor networks for bioprocess monitoring using swarm intelligence," Analytical and Bioanalytical Chemistry, vol. 412, no. 9, pp. 2165-2175, 2020. doi: 10.1007/s00216-019-01927-7
  • 8 E. Rashedi, H. Nezamabadi-Pour, S. Saryazdi, "BGSA: binary gravitational search algorithm," Natural Computing, vol. 9, no. 3, pp. 727-745, 2010. doi: 10.1007/s11047-009-9175-3
  • 9 H. C. Tsai, Y. Y. Tyan, Y. W. Wu, Y. H. Lin, "Gravitational particle swarm," Applied Mathematics and Computation, vol. 219, no. 17, pp. 9106-9117, 2013. doi: 10.1016/j.amc.2013.03.098
  • 10 S. Mirjalili, S. Z. M. Hashim, "A new hybrid PSOGSA algorithm for function optimization," in Proceedings of 2010 International Conference on Computer and Information Application, Tianjin, China, 2010, pp. 374-377. doi: 10.1109/iccia.2010.6141614
  • 11 J. Yang, F. Li, P. Di, "Research and simulation of the gravitational search algorithms with immunity," Acta Armamentarii, vol. 33, no. 12, pp. 1533-1538, 2012.
  • 12 X. Han, X. Xiong, F. Duan, "A new method for image segmentation based on BP neural network and gravitational search algorithm enhanced by cat chaotic mapping," Applied Intelligence, vol. 43, no. 4, pp. 855-873, 2015. doi: 10.1007/s10489-015-0679-5
  • 13 S. Gao, C. Vairappan, Y. Wang, Q. Cao, Z. Tang, "Gravitational search algorithm combined with chaos for unconstrained numerical optimization," Applied Mathematics and Computation, vol. 231, pp. 48-62, 2014. doi: 10.1016/j.amc.2013.12.175
  • 14 P. Luo, W. Liu, S. Zhou, "Gravitation search algorithm of adaptive chaotic mutation," Journal of Guangdong University of Technology, vol. 33, no. 4, pp. 57-61, 2016.
  • 15 Y. Xu, S. Wang, "Enhanced version of gravitational search algorithm: weighted GSA," Computer Engineering and Applications, vol. 47, no. 35, pp. 188-192, 2011.
  • 16 Y. Zhang, Z. Gong, "Hybrid differential evolution gravitation search algorithm based on threshold statistical learning," Journal of Computer Research and Development, vol. 51, no. 10, pp. 2187-2194, 2014.
  • 17 X. Li, M. Yin, Z. Ma, "Hybrid differential evolution and gravitation search algorithm for unconstrained optimization," International Journal of Physical Sciences, vol. 6, no. 25, pp. 5961-5981, 2011.
  • 18 X. Zhang, X. Wang, Q. Tu, Q. Kang, "Particle swarm optimization algorithm based on combining global-best operator and Levy flight," Journal of University of Electronic Science and Technology of China, vol. 47, no. 3, pp. 421-429, 2018.