## Xiuye Yin and Liyong Chen

Parameter | Value
---|---
Computing capability of mobile devices (cycles/s) | $6 \times 10^2$
Standby power of mobile devices (W) | $1.0 \times 10^{-3}$
Transmitting power of mobile devices (W) | $0.5 \times 10^{-1}$
Computing capability of edge devices (cycles/s) | $3 \times 10^3$
Edge offloading delay (ms) | 1.2
Cloud computing capability (cycles/s) | $4 \times 10^3$
Cloud offloading delay (ms) | 20
Channel bandwidth (kB/s) | $1.0 \times 10^2$
Computing power of mobile devices (W) | $6 \times 10^{-1}$
Gaussian white noise power (W) | $1.0 \times 10^{-9}$
Number of users | [10, 50], uniform distribution
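To make the parameters concrete, the following is a minimal sketch of a per-task cost model built from the table above. The model itself (delay = cycles / capability, energy = power × time, upload time = data size / bandwidth) is an assumption for illustration, not the paper's exact formulation, and the helper names are hypothetical.

```python
# Simulation parameters from the table above.
F_LOCAL = 6e2        # mobile device computing capability (cycles/s)
F_EDGE = 3e3         # edge device computing capability (cycles/s)
F_CLOUD = 4e3        # cloud computing capability (cycles/s)
P_TX = 0.5e-1        # transmitting power of mobile device (W)
P_CPU = 6e-1         # computing power of mobile device (W)
P_IDLE = 1.0e-3      # standby power of mobile device (W)
BW = 1.0e2 * 1e3     # channel bandwidth (kB/s -> bytes/s)
D_EDGE = 1.2e-3      # edge offloading delay (s)
D_CLOUD = 20e-3      # cloud offloading delay (s)

def local_cost(cycles):
    """Delay (s) and energy (J) when the task runs on the mobile device."""
    t = cycles / F_LOCAL
    return t, P_CPU * t

def offload_cost(cycles, data_bytes, f_remote, extra_delay):
    """Delay (s) and mobile-side energy (J) when the task is offloaded.
    The device transmits during upload, then idles while waiting."""
    t_up = data_bytes / BW
    t_exec = cycles / f_remote
    total = t_up + extra_delay + t_exec
    energy = P_TX * t_up + P_IDLE * (extra_delay + t_exec)
    return total, energy
```

Under this assumed model, a task of 600 cycles costs 1.0 s and 0.6 J locally, while edge offloading trades upload time and a small waiting energy for a much faster remote execution.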

The choice of crossover and mutation probabilities directly affects the convergence of a genetic algorithm. Fig. 3 shows the total user overhead under different crossover and mutation probabilities when the number of users is 30.

As shown in Fig. 3, when fixed crossover and mutation probabilities are used, the algorithm settles into a local optimal solution and the search process lengthens. Adaptive crossover and mutation probabilities, by contrast, are dynamically adjusted according to the fitness value, which prevents the search from becoming trapped in a local optimum. Compared with fixed crossover and mutation probability settings, the proposed algorithm therefore has an obvious advantage and a fast convergence speed. At 50 iterations, the total user overhead converges to approximately 32 J.
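One way to realize fitness-adaptive probabilities is the classic Srinivas–Patnaik style rule sketched below; this is a common formulation from the genetic-algorithm literature, not necessarily the paper's exact update, and the bounds k1–k4 are illustrative assumptions.

```python
def adaptive_pc(f_better, f_avg, f_max, k1=0.9, k2=0.6):
    """Crossover probability: shrink toward k2 for above-average parent
    pairs (protect good solutions), stay at k1 for below-average ones
    (keep exploring). f_better is the larger parent fitness."""
    if f_better >= f_avg and f_max > f_avg:
        return k2 + (k1 - k2) * (f_max - f_better) / (f_max - f_avg)
    return k1

def adaptive_pm(f, f_avg, f_max, k3=0.1, k4=0.01):
    """Mutation probability: small (k4) for the fittest individuals,
    larger (up to k3) for below-average ones."""
    if f >= f_avg and f_max > f_avg:
        return k4 + (k3 - k4) * (f_max - f) / (f_max - f_avg)
    return k3
```

The best individual gets the lowest mutation rate, so it survives intact, while poor individuals are perturbed aggressively, which is what keeps the search from stalling at a fixed-probability local optimum.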

The result of the iteration is shown in Fig. 4.

As shown in Fig. 4, as the number of iterations increases, the total user overhead of the improved genetic algorithm is approximately 7 J lower than that of the standard genetic algorithm. Because the improved genetic algorithm uses the NDX operator to update the crossover and mutation probabilities, it avoids falling into a local optimum; it therefore converges gradually by 50 iterations and finally approaches approximately 32 J. The standard genetic algorithm, in contrast, easily falls into a local optimum, and its convergence process fluctuates significantly; at 100 iterations, its total user overhead is 39 J.
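For reference, a minimal sketch of a normal-distribution crossover (NDX) operator on one real-valued gene is given below. The 1.481 spread factor follows a common NDX formulation in the literature; the paper's exact variant may differ.

```python
import random

def ndx_crossover(p1, p2, rng=random):
    """Normal-distribution crossover for one real-valued gene.
    Offspring are placed symmetrically about the parents' midpoint,
    with a spread proportional to the parent distance, so the search
    can reach beyond the interval [p1, p2]."""
    mid = (p1 + p2) / 2.0
    spread = 1.481 * abs(p1 - p2) / 2.0
    z = abs(rng.gauss(0.0, 1.0))
    if rng.random() < 0.5:
        return mid + spread * z, mid - spread * z
    return mid - spread * z, mid + spread * z
```

Because the two offspring straddle the midpoint symmetrically, their sum always equals the parents' sum, while the Gaussian spread lets the search escape regions where fixed-point crossover would stagnate.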

The relationship between the total number of network users and the number of offloading users is shown in Fig. 5.

Fig. 5 shows that when all computing tasks are processed locally, no task scheduling occurs, so the number of offloading users is always 0. Under the all-offloading strategy, once the number of users exceeds 30, the system bandwidth is exhausted, and the remaining users who cannot be offloaded must process their tasks locally. Zhou et al. [13] did not consider the limitation of system bandwidth: when the number of users exceeds 35, the system bandwidth approaches saturation and users can no longer choose to offload tasks. Compared with the other three strategies, the proposed strategy makes full use of the system bandwidth and appropriately selects the number of offloaded users, thereby improving system utility. When the number of network users reaches 45, the number of offloading users is only 18, so the system retains spare capacity for task offloading and resource management.
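The bandwidth-limited admission behavior described above can be sketched as a simple greedy selection; the per-user bandwidth demand and the largest-saving-first order are assumptions for illustration, not the paper's scheduling rule.

```python
def select_offloaders(savings, demands, total_bw):
    """Greedily admit offloading users until the channel bandwidth is
    exhausted; everyone else computes locally.
    savings[i]: overhead reduction if user i offloads (skip if <= 0).
    demands[i]: bandwidth user i would consume.
    Returns the indices of admitted users."""
    order = sorted(range(len(savings)), key=lambda i: savings[i], reverse=True)
    chosen, used = [], 0.0
    for i in order:
        if savings[i] > 0 and used + demands[i] <= total_bw:
            chosen.append(i)
            used += demands[i]
    return chosen
```

With this rule, once the cumulative demand reaches `total_bw`, later users fall back to local execution, which mirrors the saturation behavior in Fig. 5.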

To demonstrate the performance of the proposed strategy, we compared the total system overhead obtained by the local calculation of all network users, all offloading users, the strategy in [13], and our proposed strategy in terms of different numbers of users. The results are shown in Fig. 6.

As shown in Fig. 6, when there are more than 30 users in the system, channel congestion and inter-user interference increase so rapidly that the system overhead of the all-offloading strategy exceeds the local computing overhead. Therefore, designing an edge cloud computing environment requires configuring the hardware of the base stations and MEC servers according to the number of edge users to meet the needs of mobile users. In addition, Zhou et al. [13] realized task offloading and resource sharing in two stages by combining contract theory and computational intelligence: an incentive mechanism based on contract theory encourages servers to share their remaining computing resources, and a multi-armed bandit algorithm performs online learning to complete distributed task offloading. This effectively reduces system energy consumption; however, the offloading process is complicated, which inevitably increases computation and communication energy consumption to some extent. The proposed strategy performs task scheduling based on edge cloud computing and obtains the best resource management plan through an improved genetic algorithm. The whole process is simple and efficient, resulting in low energy consumption: the total system overhead is less than 50 J even when the number of users is 50.

Similarly, for different numbers of users, we compared the average task-execution delay of mobile users under all-local computing, all offloading, the strategy in [13], and the proposed strategy, as shown in Fig. 7.

Fig. 7 clearly shows that as the number of mobile devices increases, the average delay of the proposed strategy grows slowly, and its advantage becomes more evident; the average delay is approximately 23.2 ms when the number of devices is 180. This is because the proposed cloud-edge collaboration model is used to construct the task scheduling model and an improved genetic algorithm is selected to solve it, yielding the best resource management plan. The strategy comprehensively considers local and cloud computing resources to achieve both minimum energy consumption and minimum delay. The average delay of the local computing strategy is constant at approximately 28.1 ms: because all computing tasks are performed locally, there is no offloading delay. Under the full offloading strategy, the more mobile devices there are, the closer the communication bandwidth comes to exhaustion, so the resulting offloading delay continues to increase; when the number of devices reaches 180, the average delay is close to 28 ms. Zhou et al. [13] proposed a two-stage resource-sharing and task-offloading method in which an incentive mechanism and contract theory spur servers to share their remaining computing resources. Although it achieves better task scheduling, the complex contract theory and offloading process increase the system delay; therefore, the average delay increases rapidly with the number of devices, exceeding 24 ms.

In recent years, with the continuous popularization of cloud computing technology and ongoing advances in mobile network technology, the number of mobile Internet users with mobile smart terminals has increased. Edge cloud computing has emerged to meet business requirements for ultralow latency and power consumption and ultrahigh reliability and density. On this basis, an edge cloud computing task scheduling and resource management strategy using an improved genetic algorithm was proposed. A user task scheduling system model was constructed based on edge cloud computing, and an improved genetic algorithm was selected to handle the multi-objective optimization function covering time delay and energy consumption; the optimal solution of the algorithm is the best resource management plan. A simulation experiment of the suggested strategy was implemented in MATLAB, and the experimental results confirm the following conclusions:

(1) Optimizing the crossover and mutation operations of the genetic algorithm using the NDX operator increases the convergence speed and optimization performance of the algorithm. Convergence was achieved at 50 iterations, and the total system overhead was reduced by approximately 7 J compared with the traditional genetic algorithm.

(2) The proposed strategy combines edge cloud computing and an intelligent algorithm for task scheduling. The energy consumption during scheduling was less than 50 J, and the average delay was 23.2 ms. The experimental results indicate that the overall performance of the proposed strategy is better than that of the comparison strategies.

To reduce the difficulty of deriving a theoretical formula, some system model parameters were set to constant values in the simulation experiment. In future work, we will impose fewer restrictions on the model parameters and give more consideration to dynamic changes in the parameter weights of the model. Moreover, adding more simulation samples will bring the proposed model and algorithm closer to real-life applications.

He was born in 1982 in China. He received his master's degree from the School of Computer Science and Technology, Faculty of Electronic Information, Liaoning University of Science and Technology, China, in 2010, and has been teaching at Zhoukou Normal University since 2010. His research interests include artificial intelligence and data mining.

- 1 D. Madeo, S. Mazumdar, C. Mocenni, and R. Zingone, "Evolutionary game for task mapping in resource constrained heterogeneous environments," *Future Generation Computer Systems*, vol. 108, pp. 762-776, 2020. https://doi.org/10.1016/j.future.2020.03.026
- 2 E. H. Lee and S. Lee, "Task offloading algorithm for mobile edge computing," *Journal of Korean Institute of Communications and Information Sciences*, vol. 46, no. 2, pp. 310-313, 2021. https://doi.org/10.7840/kics.2021.46.2.310
- 3 A. R. Arunarani, D. Manjula, and V. Sugumaran, "Task scheduling techniques in cloud computing: a literature survey," *Future Generation Computer Systems*, vol. 91, pp. 407-415, 2019. https://doi.org/10.1016/j.future.2018.09.014
- 4 P. P. Hung, M. G. R. Alam, H. Nguyen, T. Quan, and E. N. Huh, "A dynamic scheduling method for collaborated cloud with thick clients," *International Arab Journal of Information Technology*, vol. 16, no. 4, pp. 633-643, 2019.
- 5 G. Lou and Z. Cai, "A cloud computing oriented neural network for resource demands and management scheduling," *International Journal of Network Security*, vol. 21, no. 3, pp. 477-482, 2019. https://doi.org/10.6633/IJNS.201905_21(3).14
- 6 X. Huang, C. Li, H. Chen, and D. An, "Task scheduling in cloud computing using particle swarm optimization with time varying inertia weight strategies," *Cluster Computing*, vol. 23, pp. 1137-1147, 2020. https://doi.org/10.1007/s10586-019-02983-5
- 7 Y. Li and C. Jiang, "Distributed task offloading strategy to low load base stations in mobile edge computing environment," *Computer Communications*, vol. 164, pp. 240-248, 2020. https://doi.org/10.1016/j.comcom.2020.10.021
- 8 S. Luo, X. Chen, Z. Zhou, X. Chen, and W. Wu, "Incentive-aware micro computing cluster formation for cooperative fog computing," *IEEE Transactions on Wireless Communications*, vol. 19, no. 4, pp. 2643-2657, 2020. https://doi.org/10.1109/TWC.2020.2967371
- 9 S. Josilo and G. Dan, "Decentralized algorithm for randomized task allocation in fog computing systems," *IEEE/ACM Transactions on Networking*, vol. 27, no. 1, pp. 85-97, 2019. https://doi.org/10.1109/TNET.2018.2880874
- 10 G. Sakarkar, N. Purohit, N. S. Gour, and S. B. Meshram, "A review of computational task offloading approaches in mobile computing," *International Journal of Scientific Research in Science, Engineering and Technology*, vol. 6, no. 2, pp. 381-387, 2019. https://doi.org/10.32628/IJSRSET
- 11 W. Li, S. Cao, K. Hu, J. Cao, and R. Buyya, "Blockchain-enhanced fair task scheduling for cloud-fog-edge coordination environments: model and algorithm," *Security and Communication Networks*, vol. 2021, article no. 5563312, 2021. https://doi.org/10.1155/2021/5563312
- 12 X. Xu, Q. Liu, Y. Luo, K. Peng, X. Zhang, S. Meng, and L. Qi, "A computation offloading method over big data for IoT-enabled cloud-edge computing," *Future Generation Computer Systems*, vol. 95, pp. 522-533, 2019. https://doi.org/10.1016/j.future.2018.12.055
- 13 Z. Zhou, H. Liao, B. Gu, S. Mumtaz, and J. Rodriguez, "Resource sharing and task offloading in IoT fog computing: a contract-learning approach," *IEEE Transactions on Emerging Topics in Computational Intelligence*, vol. 4, no. 3, pp. 227-240, 2020. https://doi.org/10.1109/TETCI.2019.2902869
- 14 J. Liu, S. Wang, J. Wang, C. Liu, and Y. Yan, "A task oriented computation offloading algorithm for intelligent vehicle network with mobile edge computing," *IEEE Access*, vol. 7, pp. 180491-180502, 2019. https://doi.org/10.1109/ACCESS.2019.2958883