1. Introduction
The Internet of Vehicles (IoV) is one of the main application scenarios of 5G. In practical applications, task offloading and resource allocation between multiple devices and multiple servers are usually affected by factors such as limited server resources, interference between devices, and the location and mobility of IoV devices [1-3]. Therefore, properly optimizing task offloading and resource allocation is an important challenge for ensuring stable and efficient IoV communication [4-6].
At present, certain research results have been obtained on the problem of IoV communication resource allocation, such as delay and energy-consumption optimization and the allocation of limited resources [7,8]. As one of the key 5G technologies, mobile edge computing (MEC) can dynamically obtain real-time information in the wireless access network and promote the development of caching technology in a more intelligent direction [9,10]. Jiang et al. [11] proposed a new bandwidth-link resource cooperative allocation strategy; based on the predictable relative position of the mobile transceiver, a collaborative strategy for mobile cellular networks and dedicated short-range communications is formulated. Chen et al. [12] used an asynchronous advantage actor-critic learning algorithm to deal with the computation offloading problem in IoV scenarios; the experimental results show that this method effectively improves the performance of the virtual wireless network. Wang et al. [13] proposed a deep reinforcement learning based solution for computation offloading in IoV scenarios; the scheme maximizes the average cumulative reward, that is, it minimizes the total cost, but it adapts poorly to complex networks. Tang et al. [14] proposed a general framework for sharing and utilizing communication, computing, and storage resources in the Internet of Things environment, which achieves more efficient resource allocation to a certain extent, but the scheme ignores the data transmission delay. Guan et al. [15] integrated the radar cross-section of small-scale structures into a simulator based on a high-frequency prediction technology framework. Li et al. [16] proposed a two-layer algorithm based on independent-power greedy outer approximation and unified power to ensure the communication rate and service quality of the IoV. Wei et al. [17] used a deep reinforcement learning algorithm to solve the computation offloading problem of mobile users in wireless cellular networks with MEC, and the strategy achieved good experimental results. Wang et al. [18] proposed a federated DRL-based collaborative edge caching framework to deal with the content delivery latency and the cache hit ratio when offloading repetitive traffic. However, in practical applications, the complexity of the state-action space causes Q-learning-based algorithms to take a long time to converge.
The processing power of IoV devices can hardly meet the needs of delay-sensitive and computation-intensive applications, and most resource allocation strategies suffer from uneven load distribution. This paper proposes a communication resource allocation strategy for the IoV based on edge computing. The main innovations of this study are summarized as follows:
(1) Because the in-vehicle cloud processor has limited computing power and limited power storage, an MEC server is introduced to build a cloud-edge collaborative caching and resource allocation system model that strengthens the communication capability of the system.
(2) To improve resource utilization, the proposed strategy encodes communication tasks and uses a genetic algorithm (GA) to find the optimal allocation plan. Offloading computing tasks to a suitable MEC server or to neighboring vehicles further reduces the total energy consumption.
The rest of this paper is arranged as follows. The second section introduces the system model and modeling process. The third section introduces the resource allocation strategy of IoV based on GA. The fourth section designs experiments to verify the strategy. The fifth section is the conclusion.
2. System Model and Modeling
For ease of reading, the abbreviations used in this paper are listed in Table 1.
2.1 System Model
The system scenario is shown in Fig. 1.
Fig. 1. Model architecture of cloud-edge collaboration for the Internet of Vehicles.
We can see from Fig. 1 that in this model, L roadside units (RSUs) are deployed along the road, denoted as [TeX:] $$R=\left\{r_{1}, r_{2}, \cdots, r_{L}\right\}$$, and an MEC server is placed next to each RSU. There are N vehicles with identifiers [TeX:] $$n=\{1,2, \cdots, N\}$$; thus, the vehicles are denoted as [TeX:] $$V=\left\{v_{1}, v_{2}, \cdots, v_{N}\right\}$$. MEC servers and neighboring vehicles are collectively referred to as service nodes [TeX:] $$\psi=\left\{\varphi_{1}, \varphi_{2}, \cdots, \varphi_{M}\right\}$$.
The vehicles are randomly distributed, and the collection of vehicles in cell j is [TeX:] $$\phi_{j}=\left\{v_{1}, v_{2}, \cdots, v_{n}\right\}$$. The spectrum is equally divided into K sub-channels, denoted as [TeX:] $$k=\{1,2, \cdots, K\}$$, and the bandwidth of each sub-channel is B. The set of vehicle offloading decisions is denoted as [TeX:] $$A=\left\{a_{1}, a_{2}, \cdots, a_{N}\right\}$$. If [TeX:] $$a_{i}=1$$, vehicle [TeX:] $$v_{i}$$ offloads its task to the service node [TeX:] $$\varphi_{m}$$ for calculation; if [TeX:] $$a_{i}=0$$, [TeX:] $$v_{i}$$ performs the calculation task locally.
2.2 Communication Model
Communication consumption occurs when the task is offloaded to the server and when the server downloads the calculation content required to complete the task. Communication consumption is related not only to the amount of data transmitted but also to the bandwidth and the actual transmission rate of the wireless network. It is assumed that the communication link between the vehicle and the MEC node is a frequency-flat fast-fading Rayleigh channel. The rate [TeX:] $$v_{V 2 I}$$ at which the vehicle offloads the task to the MEC node and the MEC node returns the calculation result can be expressed as:
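A plausible explicit form of this rate, consistent with the frequency-flat Rayleigh-fading assumption above and the variables defined below, is the standard Shannon-capacity expression:

[TeX:] $$v_{V 2 I}=B \log _{2}\left(1+\frac{P_{s} d^{-\delta} \tau^{2}}{N_{0}}\right)$$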
where B represents the bandwidth of the channel between the vehicle and the MEC node, [TeX:] $$P_{s}$$ represents the transmit power of the transmitting device, [TeX:] $$d^{-\delta}$$ represents the path loss between the vehicle and the server node, [TeX:] $$\delta$$ is the path-loss factor, [TeX:] $$\tau^{2}$$ is the channel fading factor of the upload link, and [TeX:] $$N_{0}$$ is the Gaussian white noise power. Therefore, the upload delay [TeX:] $$T_{i}^{u p}$$ required when offloading the task to the server is:
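The upload delay is then plausibly the ratio of the task size to the transmission rate, consistent with the offloading-delay term used later in Section 2.3:

[TeX:] $$T_{i}^{u p}=\frac{S_{i}}{v_{V 2 I}}$$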
where [TeX:] $$S_{i}$$ represents the size of the task.
Assume that the distance between MEC server nodes is [TeX:] $$d_{M E C 2}$$ and the distance between the edge server and the cloud server is [TeX:] $$d_{C M E C}$$; correspondingly, the transmission speed between MEC server nodes is [TeX:] $$v_{M E C 2}$$, and the transmission speed from the cloud server to the edge server is [TeX:] $$v_{C M E C}$$. When task [TeX:] $$\Omega_{i}$$ is sent to the MEC server numbered [TeX:] $$l$$, which has cached the content [TeX:] $$G_{s}^{i}$$, the edge server needs to request the missing computing content from the cloud. Therefore, the time [TeX:] $$T_{i}^{\text {down }}$$ required for the serving server to download the missing content can be approximately expressed as:
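One plausible reconstruction, treating the missing content as the difference between the total calculation content [TeX:] $$\vartheta_{\Sigma}^{i}$$ and the content [TeX:] $$\vartheta_{m}^{i}$$ already available on the MEC server, divided by the aggregate transmission speed [TeX:] $$v_{\Sigma}=v_{M E C 2}+v_{C M E C}$$, is:

[TeX:] $$T_{i}^{\text {down }} \approx \frac{\vartheta_{\Sigma}^{i}-\vartheta_{m}^{i}}{v_{\Sigma}}$$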
In the formula, [TeX:] $$\vartheta_{\Sigma}^{i}$$ represents the calculation content involved in the calculation process, [TeX:] $$\vartheta_{m}^{i}$$ represents the calculation content that has been offloaded to the MEC server, and [TeX:] $$v_{\Sigma}$$ is the sum of the transmission speed between MEC server nodes and that from the cloud server to the edge server.
Assuming that the energy consumption due to transmission per unit time is [TeX:] $$E_{\text {tran }}$$, the total transmission energy consumption [TeX:] $$E_{\text {tran }}^{\Sigma}$$ is:
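A plausible form multiplies the per-unit-time transmission energy by the total transmission time:

[TeX:] $$E_{\text {tran }}^{\Sigma}=E_{\text {tran }}\left(T_{i}^{u p}+T_{i}^{\text {down }}\right)$$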
2.3 Calculation Model
Assume that each vehicle has a calculation task [TeX:] $$\Omega_{i}=\left\{s_{i}, c_{i}, t_{i}^{\max }\right\}, i \in N$$ to be processed, where [TeX:] $$s_{i}$$ is the input size of task [TeX:] $$\Omega_{i}$$, [TeX:] $$c_{i}$$ is the number of CPU cycles required to calculate task [TeX:] $$\Omega_{i}$$, and [TeX:] $$t_{i}^{\max }$$ is the maximum delay that calculation task [TeX:] $$\Omega_{i}$$ can tolerate. The task can be offloaded to the MEC server or to neighboring vehicles through the RSU, or it can be executed on the local vehicle.
Offload calculation
Define the cost incurred when the task-requesting vehicle [TeX:] $$v_{i}$$ offloads its task to the service node [TeX:] $$\varphi_{m}$$ as the weighted combination of delay and energy consumption, expressed as:
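Writing this offloading cost as [TeX:] $$C_{i}^{o f f}$$ (a symbol introduced here for readability), the weighted combination plausibly reads:

[TeX:] $$C_{i}^{o f f}=\alpha t_{i}^{o f f}+\beta e_{i}^{o f f}$$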
where [TeX:] $$\alpha$$ and [TeX:] $$\beta$$ represent the weighting factors of delay and energy consumption, respectively, and satisfy [TeX:] $$\alpha+\beta \leq 1$$. [TeX:] $$t_{i}^{o f f}=\frac{s_{i}}{v_{V 2 I}}+\frac{c_{i}}{f_{m}^{i}}$$ is the sum of the offloading delay and the calculation delay, where [TeX:] $$f_{m}^{i}$$ is the computing resource allocated by the service node [TeX:] $$\varphi_{m}$$ to vehicle [TeX:] $$v_{i}$$. [TeX:] $$e_{i}^{o f f}=P_{s} \frac{s_{i}}{v_{V 2 I}}$$ is the energy consumption of the transmission process.
Local calculation
Assuming that the computing power of vehicle [TeX:] $$v_{i}$$ is [TeX:] $$F_{i}^{1}$$ (different vehicles have different computing power), when [TeX:] $$\Omega_{i}$$ is calculated locally, the delay is [TeX:] $$t_{i}^{1}=c_{i} / F_{i}^{1}$$, and the cost that vehicle [TeX:] $$v_{i}$$ needs to bear is:
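By analogy with the offloading cost, and writing the local computing energy as [TeX:] $$e_{i}^{1}$$ (a symbol introduced here, not defined in the source text), the local cost plausibly takes the form:

[TeX:] $$C_{i}^{1}=\alpha t_{i}^{1}+\beta e_{i}^{1}$$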
2.4 Optimization
A binary task offloading scheme is adopted; that is, tasks are only allowed to be processed locally or offloaded to a server. In the network, the role of the vehicle is only to collect data or offload tasks to the server, so all offloaded tasks can only be processed in cloud servers or edge servers. If the total number of tasks requested by vehicles is [TeX:] $$N$$, there is a task set [TeX:] $$\Omega=\left\{\Omega_{i} \mid i=1,2, \cdots, N\right\}$$. Each task is characterized by [TeX:] $$\Omega_{i}=\left\{a_{i}, s_{i}, Q_{c}^{i}, \vartheta_{\Sigma}^{i}, t_{\max }^{i}\right\}$$, where [TeX:] $$s_{i}$$ represents the size of the task, [TeX:] $$Q_{c}^{i}$$ is the CPU resource required for execution, [TeX:] $$\vartheta_{\Sigma}^{i}$$ is the calculation content involved in the calculation process, [TeX:] $$t_{\max }^{i}$$ is the maximum execution time of the task, and [TeX:] $$a_{i}$$ is the offloading decision of the task.
The main concern in this paper is how to realize a joint optimization of the time delay and the energy consumption required by the offloading task. For any task [TeX:] $$\Omega_{i}$$, its energy consumption [TeX:] $$E_{\Sigma}$$ is composed of transmission energy consumption [TeX:] $$E_{\text {tran }}^{\sum}$$ and calculation energy consumption [TeX:] $$E_{\text {cal }}$$. The delay consumption [TeX:] $$T_{\Sigma}$$ is composed of calculation delay [TeX:] $$T_{c a l}$$ and transmission delay [TeX:] $$T_{t r a n}$$. Then the optimization objective function can be expressed as:
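A plausible statement of this objective and of the constraints explained below, writing [TeX:] $$F_{m}^{\max }$$ for the maximum computing resource of MEC server [TeX:] $$\varphi_{m}$$ (a symbol introduced here), is:

[TeX:] $$\min _{A} \lambda E_{\Sigma}+(1-\lambda) T_{\Sigma} \quad \text { s.t. } \quad \mathrm{C} 1: T_{\Sigma} \leq t_{\max }^{i}, \quad \mathrm{C} 2: a_{i} \in\{0,1\}, \forall i, \quad \mathrm{C} 3: Q_{c}^{i} \leq F_{m}^{\max }$$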
where [TeX:] $$\lambda$$ represents the weight balancing energy consumption and delay consumption. Constraint C1 states that the delay consumption of the task cannot exceed its maximum tolerable delay. Constraint C2 states that tasks can only be offloaded to cloud servers or MEC servers. Constraint C3 states that the computing resources required by the task cannot exceed the maximum computing resources of the MEC server.
3. Resource Allocation Strategy of IoVs Based on GA
The optimization problem is transformed into a knapsack problem, and a GA is used to solve it to meet the computation offloading requirements. Different MEC servers have different computing capabilities. When an MEC server with insufficient computing resources undertakes a large number of computing tasks, it cannot guarantee their completion and will become overloaded, affecting the processing of other services. At the same time, random offloading of computing tasks leads to an excessive number of iterations when searching for the optimal offloading strategy. Therefore, a task offloading strategy based on the GA is used, with suitable initial chromosomes reserved. We combine the greedy algorithm with the GA to find the optimal offloading strategy for each MEC server, thus speeding up the iterative process.
3.1 Coding and Initial Population Setting
Binary coding is used, and each chromosome encodes an offloading strategy [TeX:] $$A$$. [TeX:] $$A_{(q)}$$ represents the value of the [TeX:] $$q$$-th bit in strategy [TeX:] $$A$$, and [TeX:] $$A_{(q)}$$ takes the value 0 or 1. Let [TeX:] $$\left[X_{i M}, X_{i M+1}, \cdots, X_{(i+1) M}\right]$$ denote the offloading result of computing task [TeX:] $$\Omega_{i}$$; then the number [TeX:] $$a_{i}$$ of the MEC server to which the task is offloaded is:
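A plausible decoding rule, interpreting the [TeX:] $$\kappa$$ bits assigned to task [TeX:] $$\Omega_{i}$$ as the binary representation of the server index, is:

[TeX:] $$a_{i}=\sum_{q=1}^{\kappa} 2^{q-1} X_{i M+q}$$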
where [TeX:] $$\kappa$$ represents the number of code bits.
Fig. 2 shows the encoding process of the computing-task strategy when there are no more than 8 MEC servers.
Fig. 2. Mapping of strategy code and MEC server.
The relevant parameters of the GA are initialized, including the maximum number of iterations [TeX:] $$I$$, chromosome length [TeX:] $$N \times M$$, mutation probability [TeX:] $$P_{m}$$, crossover probability [TeX:] $$P_{c}$$, population size [TeX:] $$Z$$, and reserved population size [TeX:] $$Z^{\prime}$$. Most GAs select the initial population randomly. The proposed strategy instead combines pre-defined chromosomes with randomly generated chromosomes for population initialization. In the pre-definition process, the chromosomes with the highest weight are identified first; that is, the on-board computing task encoding is set as follows:
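As an illustration of this mixed initialization, the following minimal Python sketch reserves [TeX:] $$Z^{\prime}$$ greedy ("highest-weight") chromosomes, each assigning every task to the MEC server with the most remaining capacity, and fills the rest of the population of size [TeX:] $$Z$$ randomly; all identifiers and data structures are illustrative assumptions rather than names from the original implementation.

```python
import random

def greedy_chromosome(tasks, servers, kappa):
    """Pre-defined chromosome: assign each task to the server with the most
    remaining computing capacity (a greedy 'highest-weight' choice)."""
    remaining = {m: s["capacity"] for m, s in enumerate(servers)}
    bits = []
    for task in tasks:
        m = max(remaining, key=remaining.get)               # least-loaded server
        remaining[m] -= task["cpu"]                         # reserve its resources
        bits += [int(b) for b in format(m, f"0{kappa}b")]   # kappa-bit server index
    return bits

def random_chromosome(num_tasks, kappa):
    """Random chromosome: each task gets a random kappa-bit server index."""
    return [random.randint(0, 1) for _ in range(num_tasks * kappa)]

def init_population(tasks, servers, kappa, pop_size, num_reserved):
    """Reserve `num_reserved` greedy chromosomes (identical seed solutions that
    later diverge through crossover and mutation) and fill the rest randomly."""
    population = [greedy_chromosome(tasks, servers, kappa)
                  for _ in range(num_reserved)]
    population += [random_chromosome(len(tasks), kappa)
                   for _ in range(pop_size - num_reserved)]
    return population
```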
3.2 Fitness Function
The fitness [TeX:] $$f_{i}$$ of each offloading scheme in the proposed strategy is the reciprocal of the objective function. The higher the fitness value, the smaller the value of the objective function; that is, an offloading scheme with a lower overall cost has better fitness. The specific calculation formula of [TeX:] $$f_{i}$$ is as follows:
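Since the fitness is the reciprocal of the objective function of Section 2.4, a plausible explicit form is:

[TeX:] $$f_{i}=\frac{1}{\lambda E_{\Sigma}+(1-\lambda) T_{\Sigma}}$$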
After the fitness is calculated, chromosomes with poor fitness (i.e., poor offloading plans) are eliminated according to the calculated values, and chromosomes with higher fitness are retained. In this way, the quality of the chromosomes becomes increasingly higher over the iterations; in other words, the overall cost of the offloading scheme keeps decreasing.
3.3 Genetic Operator Operation
(1) Selection operation: Chromosomes are selected according to the selection probability, and the selected individuals are regarded as the first generation. The roulette-wheel selection method, in which the selection probability is proportional to fitness, is used. Assuming the fitness of individual [TeX:] $$i$$ is [TeX:] $$f_{i}$$, the probability of [TeX:] $$i$$ being selected is:
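For a population of size [TeX:] $$Z$$, the roulette-wheel selection probability proportional to fitness is:

[TeX:] $$p_{i}=\frac{f_{i}}{\sum_{j=1}^{Z} f_{j}}$$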
For the initialized population, the fitness value of each chromosome and its selection probability are calculated and compared, and the chromosome with the lowest probability is eliminated. The chromosome with the highest probability is copied to replace the eliminated one.
(2) Crossover operation: A one-point crossover method is used, with crossover probability [TeX:] $$P_{c}$$.
(3) Mutation operation: For the proposed optimization problem, the mutation operation flips the mutated bit of the chromosome from 1 to 0 or from 0 to 1, while the other bits remain unchanged; the mutation probability is [TeX:] $$P_{m}$$. A mutation position is selected and mutated, and the resulting fitness is compared with the original fitness. If it is not greater than or equal to the original fitness, another mutation position is selected for mutation.
3.4 Termination Condition
The GA requires the setting of termination conditions, so the fitness value needs to be calculated after each chromosome mutation. If the set maximum number of iterations has been reached or the fitness value has remained unchanged for a long time, the entire GA process is terminated, and the global optimal solution that meets the conditions is obtained; otherwise, the iteration continues.
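Putting Sections 3.1-3.4 together, the following Python sketch outlines the complete GA loop under the assumptions above; the function names and the stagnation threshold `stall_limit` are illustrative choices, and `fitness(chromosome)` is assumed to return the reciprocal-of-cost value of Section 3.2.

```python
import random

def roulette_select(population, fitnesses):
    """Roulette-wheel selection proportional to fitness (Section 3.3)."""
    total = sum(fitnesses)
    pick = random.uniform(0, total)
    acc = 0.0
    for chrom, fit in zip(population, fitnesses):
        acc += fit
        if acc >= pick:
            return chrom
    return population[-1]

def one_point_crossover(a, b, pc):
    """One-point crossover with probability pc."""
    if random.random() < pc and len(a) > 1:
        point = random.randint(1, len(a) - 1)
        return a[:point] + b[point:], b[:point] + a[point:]
    return a[:], b[:]

def guarded_mutation(chrom, pm, fitness):
    """Flip one bit with probability pm; keep the flip only if fitness does not
    decrease, otherwise try another position (Section 3.3). Gives up after all
    positions have been tried."""
    if random.random() >= pm:
        return chrom
    base = fitness(chrom)
    positions = list(range(len(chrom)))
    random.shuffle(positions)
    for q in positions:
        trial = chrom[:]
        trial[q] ^= 1
        if fitness(trial) >= base:
            return trial
    return chrom

def run_ga(init_pop, fitness, pc, pm, max_iter, stall_limit=50):
    """Iterate until max_iter or until the best fitness stalls (Section 3.4)."""
    population = [c[:] for c in init_pop]
    best, best_fit, stall = None, -1.0, 0
    for _ in range(max_iter):
        fits = [fitness(c) for c in population]
        top = max(fits)
        if top > best_fit:
            best, best_fit, stall = population[fits.index(top)][:], top, 0
        else:
            stall += 1
            if stall >= stall_limit:      # fitness unchanged for a long time
                break
        next_pop = []
        while len(next_pop) < len(population):
            p1 = roulette_select(population, fits)
            p2 = roulette_select(population, fits)
            c1, c2 = one_point_crossover(p1, p2, pc)
            next_pop += [guarded_mutation(c1, pm, fitness),
                         guarded_mutation(c2, pm, fitness)]
        population = next_pop[:len(population)]
    return best, best_fit
```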
4. Experiment and Analysis
The simulation of the proposed strategy is carried out on the MATLAB platform (MATLAB R2012a). There are 5 cells along the roadside, and each cell is configured with an RSU and an MEC server. The specific simulation parameters are shown in Table 2.
4.1 Effect of Iteration Number on the Task Completion Rate
First, the impact of the number of iterations of the proposed strategy on the task completion rate is verified. The task completion rate is defined as the proportion of successfully completed computing tasks among all computing tasks. Through repeated experiments, the effect of the number of iterations on the task completion rate under different numbers of vehicle-mounted terminals is studied. The results are shown in Fig. 3.
It can be seen from Fig. 3 that the proposed strategy reaches a plateau after a sufficient number of iterations, converging at about 250 iterations. In addition, the larger the number [TeX:] $$N$$ of vehicle-mounted terminals, the lower the task completion rate: when [TeX:] $$N$$ increases from 10 to 50, the task completion rate drops from 100% to 73%. As the number of vehicle-mounted terminals increases, the amount of computing tasks increases; with limited computing resources, the data processing capability declines, and the task completion rate therefore drops.
Fig. 3. Influence of the number of iterations on the task completion rate.
4.2 Performance Comparison with Other Methods
In order to demonstrate the performance of the proposed strategy, it is compared with the strategies in [11,12,17]. The relationship between the number of vehicles and the total cost is shown in Fig. 4.
Fig. 4. Relationship between the number of vehicles and the total cost.
It can be seen from Fig. 4 that the total cost increases as the number of vehicles increases. The proposed strategy makes full use of the surrounding idle resources and reasonably allocates computing and communication resources, and its cloud-edge collaborative offloading effectively improves resource utilization. The caching mechanism avoids duplicate calculations and reduces network congestion, thereby reducing system overhead. When the number of vehicles is 60, the total calculation cost is only about 180 J.
Similarly, the relationship between the amount of task data and the total cost of different strategies is shown in Fig. 5.
Fig. 5. Relationship between task data volume and total cost.
As can be seen from Fig. 5, the total overhead increases with the amount of task data. Among the compared strategies, the total cost of the proposed strategy is the smallest, and its cost grows relatively slowly; when the task volume is 30 MB, the total overhead is about 300 J. Because the proposed strategy adopts cloud-edge collaborative offloading and the GA selects the best resource allocation mode, resource utilization is improved, and low computational overhead can be guaranteed even with a large number of tasks. Wei et al. [17] used a deep reinforcement learning algorithm to solve the communication resource allocation problem of mobile users in a wireless cellular network with MEC; the learning algorithm is relatively complicated, so the computational overhead increases significantly as the number of tasks increases, exceeding 600 J. Chen et al. [12] only used the asynchronous advantage actor-critic learning algorithm for communication resource allocation, but it relies too much on the processing power of the vehicle itself, so the total overhead is higher when the number of tasks increases. Jiang et al. [11] allocated resources through the cooperation of the mobile cellular network and dedicated short-range communications, lacking optimization of the allocation scheme, so the overall cost remains high, exceeding 800 J.
In order to verify the delay and energy consumption performance of the proposed model, an experiment is designed to compare the performance of different strategies with the number of tasks as the variable. The experimental results are shown in Fig. 6.
Fig. 6. Comparison of the number of tasks and the average delay of different strategies.
It can be seen from Fig. 6 that the delay of the strategy in [11] is higher than that of the other strategies. It relies on a dedicated short-range communication network, but vehicles move over a wide range, and this network architecture cannot meet the communication needs of the IoV; therefore, its delay is as high as 280 ms. The strategies in [12,17] both use learning algorithms to optimize the communication resource allocation plan, but the strategy in [17] is based on MEC deployment and offloads tasks to MEC servers with stronger processing capability, which reduces the delay by about 20 ms. The proposed strategy is based on the cloud-edge collaborative IoV and uses the GA to solve for the optimal resource allocation plan. It not only makes full use of the powerful processing capability of MEC but also combines it with the optimization capability of the GA. Therefore, its average delay is relatively small, at most 210 ms.
Similarly, for different numbers of tasks, the average energy consumption of the four strategies is shown in Fig. 7.
Fig. 7. Comparison of the number of tasks and the average energy consumption of different strategies.
It can be seen from Fig. 7 that the average energy consumption of the proposed strategy is the smallest among the compared strategies; when the number of tasks is 60, its average energy consumption is 300 J. Its cloud-edge collaborative offloading effectively improves MEC resource utilization, and the caching mechanism avoids repeated calculations and reduces network congestion, thereby reducing system energy consumption. The strategy in [11] only uses changes in the network to optimize resource allocation and cannot adapt to an IoV environment with a large amount of data, so its energy loss is large. The strategies in [12,17] both use learning algorithms to achieve communication resource allocation, but the learning algorithms are relatively complicated and their computational energy loss is large.
5. Conclusion
The IoV is facing the impact of massive data traffic. In order to realize real-time response of IoV services and reduce energy consumption, a strategy for IoV communication resource allocation based on edge computing is proposed. Based on a cloud-edge collaborative cache and resource allocation system for the IoV, a GA is used to solve the optimization objective function that minimizes delay and energy consumption and to find the best resource allocation plan. Experimental analysis based on the built IoV model shows that, under different system parameters, the system overhead and average delay of the proposed strategy are 300 J and 210 ms, respectively, which are superior to those of the other strategies and provide a certain theoretical basis for practical applications. In future work, we will continue to study the MEC-based V2X (vehicle-to-everything) communication resource allocation strategy. At the same time, the situation in which the link is disconnected and task backhaul fails due to high-speed vehicle movement should also be addressed; we suggest building a V2X resource allocation framework based on vehicle location prediction.
Acknowledgement
This work is supported by application and research of teaching diagnosis and improvement based on information system fragmentation in the construction of “Shuang Gao Xiao” (No. 2019SJGLX704).