
Changhao Piao, Xiaoyue Ding, Jia He, Soohyun Jang, and Mingjie Liu

Implementation of Image Transmission Based on Vehicle-to-Vehicle Communication

Abstract: Weak over-the-horizon perception and blind spots are the main problems of intelligent connected vehicles (ICVs). In this paper, a V2V image transmission-based road condition warning method is proposed to solve them. Road emergency images collected and encoded by an ICV are transmitted to its on-board unit (OBU) through Ethernet. The OBU broadcasts the fragmented image information, together with the location and clock of the vehicle, to other OBUs. To adapt to the varying channel quality of V2X communication over time, the OBU selects the optimal fragment length for processing the image information. Then, according to the position and clock information of the remote vehicles, the OBU of the receiver selects valid messages and decodes the image information, which extends the receiver's perceptual field. The experimental results show that our method has an average packet loss rate of 0.5% and a transmission delay of about 51.59 ms in low-speed driving scenarios, which can provide drivers with timely and reliable warnings of road conditions.

Keywords: Internet of Vehicles, Real-Time Image Transmission, Road Condition Warning, V2X

1. Introduction

In recent years, the rapid growth of national car ownership has made people's travel more convenient, but it has also brought problems such as traffic congestion and traffic accidents. The casualties and economic losses caused by traffic accidents are particularly severe. Data from the National Bureau of Statistics of the People's Republic of China show that traffic accidents caused about 62,000 deaths in 2020, with direct property losses as high as 1.346 billion yuan [1].

The development of technologies such as wireless communication and sensor networks has provided solutions to these problems. These technologies are gradually being applied to intelligent transportation systems (ITS), laying the foundation for the development of Internet of Vehicles (IoV) technology [2]. IoV is similar to the Internet of Things (IoT) [3]. The communication objects of IoV include vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), and vehicle-to-pedestrian (V2P). The communication equipment of IoV mainly consists of the on-board unit (OBU) and the road side unit (RSU). The data exchanged over IoV mainly include vehicle speed, heading angle, longitude, latitude, traffic light information, road sign information, etc. This wireless communication capability improves the perception of ICVs: it avoids the sensor detection failures caused by external factors such as bad weather or a view blocked by the preceding vehicle. Relevant studies have shown that IoV, which supports information interaction between vehicles, can provide real-time traffic information, entertainment information, and other applications for ICVs and drivers [4-6], effectively improving road safety and traffic efficiency.

According to the "Cooperative intelligent transportation system; vehicular communication; application layer specification and data exchange standard" [7], detection methods used by ICVs for road emergencies in over-the-horizon scenarios mainly include the following applications: hazardous location warning (HLW), blind spot warning (BSW), and vulnerable road user collision warning (VRUCW). The warning messages of these application scenarios are all delivered as text.

(1) Hazardous location warning: HLW refers to the application that warns the vehicle or driver when the ICV is traveling on a potentially dangerous road section. It can improve the vehicle's ability to perceive dangerous road conditions over the horizon and reduce the risk of accidents. The application adopts the V2I communication method and gives a warning by judging the positional relationship between the vehicle and the dangerous road condition through the roadside information (RSI) broadcast by the RSU.

(2) Blind spot warning: BSW means that when a remote vehicle (RV) is traveling in the blind spot of the host vehicle, the application alerts the vehicle or driver to the possible collision risk. The application adopts the V2V communication method and gives a warning by judging the positions of the two ICVs according to the longitude, latitude, heading angle, vehicle size, acceleration, and other information of the RVs.

(3) Vulnerable road user collision warning: VRUCW means that the application warns the vehicle or driver when there is a risk of collision with surrounding pedestrians, bicycles, and other small vehicles while driving. It helps vehicles and drivers reduce the risk of collision with surrounding pedestrians, improving the driving safety of vehicles on urban roads and the safety of pedestrians. The application adopts the V2I or V2P communication methods and predicts dangerous collisions by acquiring the vehicle's own data and status information about surrounding pedestrians.

As the above methods show, traditional warning messages are delivered in the form of text. After a vehicle recognizes road condition information through its sensors, it broadcasts warning messages to the vehicles behind it as text [8]. The messages received by a rear vehicle are thus all pre-processed warnings, which carries a risk of unreliability. Images can convey far more detail than text: through image information, ICVs and drivers can obtain accurate information about the road conditions ahead, including the type of traffic accident, its severity, casualties, and the surrounding environment, and can then take corresponding measures to avoid secondary accidents or aggravated road congestion [9,10].

Therefore, this paper proposes a forward road condition detection method based on V2V image transmission. The method introduces image information into the traditional road warning message to present the road conditions ahead in more detail. In addition, geographic location and clock information are integrated into the warning message, so that a vehicle only accepts road condition images from vehicles ahead of it, filtering out redundant information and improving the efficiency of data reception. This also enables the ICV to obtain the time and location of the road condition and issue a more accurate warning.

2. Image Transmission based on V2V

2.1 Overall Structure

In the autonomous driving scenario, ICVs mainly rely on sensors such as millimeter-wave radar, cameras, and LIDAR for road condition detection [11]. Although these sensors are accurate and technologically mature, they often cannot give reliable detection results in extreme weather or when visual information is limited [12]. In these cases, the detection results of the sensors must be combined with V2X technology to ensure the safe driving of ICVs.

The overall framework of the V2V image transmission scheme proposed in this paper is shown in Fig. 1. It is mainly divided into two parts: the image processing module and the communication module. Consider the situation where a sudden dangerous condition occurs on the road, but the view of the rear vehicle is obscured by the front vehicle, so the rear vehicle cannot issue a danger warning through its sensors in time. In this situation, the RV encodes the collected images and transfers the information through OBU1. After receiving the image coding information from OBU1, OBU2 of the host vehicle (HV) decodes it to recover the original image. The HV analyzes the acquired image immediately and gives the corresponding warning.

Since the communication method adopted in this paper is PC5, which transmits messages as broadcasts, the method described in Fig. 1 is also applicable to one-to-many, many-to-one, and many-to-many modes.

Fig. 1. Overall framework of image transmission based on V2V.

In Fig. 2, the algorithm flowchart on the left depicts the behavior of the sender. During image segmentation at the sender, U bytes is first set as the segmentation unit, and the total length of the encoded image is calculated and denoted as P. We then perform division and remainder on P to obtain the total number of segments and the length of the last segment, distinguishing the cases where the length of the last segment is 0 and nonzero. Finally, we generate the JSON packets and send them, as sketched below.
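
As a minimal C++ sketch of this fragmentation step (names such as Fragment and fragmentImage are illustrative, not the authors' implementation; the JSON serialization and OBU interface are omitted), the division and remainder on P determine the fragment count and the length of the last fragment:

    // Hypothetical sketch of the sender-side fragmentation described above.
    #include <cstddef>
    #include <string>
    #include <vector>

    struct Fragment {
        std::string flag;   // "Start", "Middle", or "End"
        int seq;            // fragment sequence number
        std::string data;   // up to U bytes of encoded image data
    };

    std::vector<Fragment> fragmentImage(const std::string& encoded, std::size_t U) {
        std::vector<Fragment> out;
        std::size_t P    = encoded.size();  // total length of the encoded image
        std::size_t full = P / U;           // number of full U-byte fragments
        std::size_t rem  = P % U;           // length of the last fragment (0 if P is a multiple of U)
        std::size_t total = full + (rem ? 1 : 0);
        for (std::size_t i = 0; i < total; ++i) {
            Fragment f;
            f.seq  = static_cast<int>(i);
            f.flag = (i == 0) ? "Start" : (i == total - 1) ? "End" : "Middle";
            f.data = encoded.substr(i * U, (i == total - 1 && rem) ? rem : U);
            out.push_back(f);   // in the real system each fragment is packed
                                // into a JSON message, together with the
                                // position and clock of the vehicle, and
                                // broadcast by the OBU
        }
        return out;
    }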

The algorithm flowchart on the right depicts the behavior of the receiver. First, the JSON string is parsed and the extracted information is stored in variables. Second, the receiver filters out invalid image information through the positional relationship and the timestamps of the HV and RV. In the decoding process, reassembly of the segmented image starts when a fragment with the flag "Start" arrives, at which point the flag bit p is set to 1. A counter n is compared with the seq field in the JSON string to avoid the reassembly failures caused by packet loss, so a fragment is appended only when p = 1 and n = seq. When a fragment with the flag "End" is received, both p and n are reset to 0, indicating that the last packet has been received and the image can be decoded. Finally, the HV gives the corresponding warning.
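
This reassembly logic can be sketched as follows (illustrative C++, not the authors' exact implementation; JSON parsing and the position/timestamp filter are assumed to have run already). The flag bit p and the counter n implement the fault tolerance described above: a lost packet makes n disagree with seq, so the partial image is discarded instead of being decoded incorrectly.

    // Hypothetical sketch of the receiver-side reassembly.
    #include <string>

    struct Reassembler {
        int p = 0;            // 1 after a "Start" fragment has been received
        int n = 0;            // next expected sequence number
        std::string buffer;   // reassembled encoded image

        // Returns true when a complete image has been reassembled.
        bool onFragment(const std::string& flag, int seq, const std::string& data) {
            if (flag == "Start") { p = 1; n = 0; buffer.clear(); }
            if (p != 1 || n != seq) {          // packet lost or out of order:
                p = 0; n = 0; buffer.clear();  // discard and wait for the next "Start"
                return false;
            }
            buffer += data;
            ++n;
            if (flag == "End") {               // last packet received and appended;
                p = 0; n = 0;                  // buffer now holds the encoded image,
                return true;                   // ready to be decoded for the warning
            }
            return false;
        }
    };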

Fig. 2. Algorithm flowchart of sender and receiver.

2.2 Location Algorithm

The location information obtained by the RTK receiver in this experiment uses the WGS84 coordinate system, a geocentric space Cartesian coordinate system. The coordinates (B, L, H) under WGS84 can be obtained directly through GPS: B is the latitude, L is the longitude, and H is the geodetic height, i.e., the height above the WGS84 ellipsoid. The Mercator projection imagines the earth wrapped in a cylinder touching it at the equator; projecting the surface from the center of the sphere onto the cylinder and unrolling the cylinder yields the familiar Mercator world map. The positioning algorithms in this paper all operate on coordinates converted from WGS84 to the Mercator projection. Let (x, y) be the coordinates of a point in the Mercator projection. The conversion formula is as follows:

(1)
[TeX:] $$\begin{aligned} x &= \frac{L \times 20037508.342789}{180} \\ y &= \frac{\ln \left(\tan \left(\frac{(90+B) \times \pi}{360}\right)\right)}{\pi / 180} \times \frac{20037508.342789}{180} \end{aligned}$$
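
A direct C++ transcription of Eq. (1) (a sketch; the function and type names are illustrative) is:

    // WGS84 (B = latitude, L = longitude, in degrees) to Mercator (x, y) in
    // meters, following Eq. (1); 20037508.342789 is pi times the WGS84
    // equatorial radius of 6378137 m.
    #include <cmath>

    struct MercatorXY { double x, y; };

    MercatorXY wgs84ToMercator(double B, double L) {
        const double PI = 3.14159265358979323846;
        const double HALF_CIRCUMFERENCE = 20037508.342789;
        MercatorXY m;
        m.x = L * HALF_CIRCUMFERENCE / 180.0;
        m.y = std::log(std::tan((90.0 + B) * PI / 360.0))
              / (PI / 180.0) * HALF_CIRCUMFERENCE / 180.0;
        return m;
    }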

The positioning procedure of this paper is as follows. First, to determine whether the vehicle is on the road and driving forward, we calculate the relationship between the road centerline points and the vehicle. Two adjacent points are selected from the set of road centerline points and converted into the Mercator projection together with the vehicle's latitude and longitude. Then the triangle formed by the three converted points is examined: if it is an acute triangle, the vehicle is located between the two centerline points, and its specific position can be determined. Finally, after performing this calculation for the HV and the RVs, the HV selects the messages of the RVs located ahead of it on the same road and performs the image decoding operation only on those messages.
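
The acute-triangle test reduces to checking the two angles at the centerline points: the vehicle's projection falls between points A and B exactly when the angles of the triangle at A and at B are both acute, i.e., when both dot products below are positive. A minimal sketch, reusing the MercatorXY type from the sketch above:

    // True if vehicle V lies between adjacent centerline points A and B
    // (all coordinates already converted to the Mercator projection).
    bool isBetween(const MercatorXY& A, const MercatorXY& B, const MercatorXY& V) {
        double dotA = (B.x - A.x) * (V.x - A.x) + (B.y - A.y) * (V.y - A.y);
        double dotB = (A.x - B.x) * (V.x - B.x) + (A.y - B.y) * (V.y - B.y);
        return dotA > 0.0 && dotB > 0.0;
    }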

2.3 Experimental Equipment Construction
2.3.1 Hardware platform

As shown in Fig. 3, the hardware platform chosen for this experiment is an embedded development board based on the ZTE ZM8350 C-V2X module, an industrial-grade C-V2X wireless communication module in an LGA package based on the LTE-V protocol. The module supports data links on the B46D/B47 bands and runs an embedded Linux operating system.

The verification platform of this experiment is a connected unmanned vehicle designed by us, equipped with CTI inertial navigation. The experimental site is a straight road section on the southern campus of Chongqing University of Posts and Telecommunications, with a lane width of 4 m.

2.3.2 Software platform

The kernel of the embedded development board is Linux 4.14.78, the cross-compilation toolchain is arm-poky-linux-gnueabi-gcc, and the development language is C/C++.

Fig. 3. Hardware platform: (a) imx6q, (b) ZM8350 C-V2X module, and (c) experimental unmanned vehicle.

3. Performance Analysis

The experimental results of image transmission are analyzed in terms of transmission delay, packet loss rate, and the accuracy of the positioning algorithm. First, the transmission efficiency of different image fragment sizes is compared and the optimal fragment size is selected. Then, the packet loss rate of image transmission is analyzed at a given distance between the two vehicles. Finally, the experimental vehicle passes through test points at 150 m, 100 m, 50 m, and 0 m at speeds of 10 km/hr, 20 km/hr, and 30 km/hr; the distance calculated by the algorithm and the actual distance are recorded to obtain the precision of the positioning algorithm.

3.1 Image Transmission Delay

We assume that the size of one image frame is P bytes, that the fragment length is U bytes, and that the transmission delay of a fragment of U bytes is [TeX:] $$\operatorname{Delay}_{U}.$$ This paper uses formula (2) to measure the transmission efficiency [TeX:] $$\eta$$ of a fragment:

(2)
[TeX:] $$\frac{1}{\eta}=\frac{P}{U} \times \operatorname{Delay}_{U}=\frac{\operatorname{Delay}_{U}}{U} \times P$$

It follows from formula (2) that the smaller the ratio of [TeX:] $$\operatorname{Delay}_{U}$$ to U, the higher the transmission efficiency of the fragment. In this experiment, through the comparison of multiple sets of data, the five fragment lengths with the smallest transmission delay and largest [TeX:] $$\eta$$ were selected: 1500 bytes, 1200 bytes, 1050 bytes, 1030 bytes, and 1000 bytes; they are analyzed below. Their [TeX:] $$\eta$$ at each moment was obtained from formula (2), and the selection over the candidates is sketched below. The distribution of [TeX:] $$\operatorname{Delay}_{U}$$/U is shown in Fig. 4, and the distribution of the transmission delay is shown in Fig. 5.
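
Since P is fixed for a given image, minimizing 1/η amounts to minimizing the measured ratio Delay_U/U over the candidate lengths, for example (a sketch with illustrative names):

    // Pick the fragment length whose measured average delay per byte is
    // smallest, i.e., whose transmission efficiency eta is largest.
    #include <cstddef>
    #include <vector>

    struct Candidate { std::size_t U; double avgDelayMs; };  // measured per length

    std::size_t bestFragmentLength(const std::vector<Candidate>& cs) {
        std::size_t bestU = 0;
        double bestRatio = 1e300;
        for (const auto& c : cs) {
            double ratio = c.avgDelayMs / static_cast<double>(c.U);
            if (ratio < bestRatio) { bestRatio = ratio; bestU = c.U; }
        }
        return bestU;  // e.g., over {1500, 1200, 1050, 1030, 1000} bytes
    }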

Fig. 4. Distribution of [TeX:] $$\operatorname{Delay}_{U} / U.$$

Fig. 5. Distribution of [TeX:] $$\operatorname{Delay}_{U}.$$

As shown in Fig. 4, the ratio is randomly distributed, high in the middle and low on both sides, consistent with a normal distribution. When the fragment length is 1030 bytes, the normal distribution curve is thin and narrow and its mean μ is the smallest, indicating that the transmission efficiency is the highest; the average ratio is 0.050085. According to normal distribution theory, data points far from the mean are small-probability events, so a more concentrated distribution curve represents a more stable transmission. Therefore, this experiment selects 1030 bytes as the fragment length of the image to ensure the best transmission efficiency.

As shown in Fig. 5, the V2V image transmission delay is also randomly distributed: high in the middle, low at both ends, and basically symmetrical, which likewise conforms to a normal distribution. Analysis of the waveform in Fig. 5 shows that when the fragment size is 1030 bytes, the average transmission delay [TeX:] $$\mu$$ is 51.59 ms. By normal distribution theory, data points far from the mean are small-probability events, and events with a probability lower than 5% are almost impossible.

3.2 Image Transmission Packet Loss Rate

To test the packet loss rate of V2V communication, a "Count" field is added to the JSON message of the above experiment. "Count" cycles from 0 to 99, incrementing by 1 after each message is sent. A total of 3,000 data samples were collected for this experiment. The collated experimental data are plotted in Fig. 6, and the measured packet loss rate of image transmission is 0.5%. As shown in Fig. 6, when there is no packet loss, the points traced by this heartbeat counter form an unbroken line; when a packet is lost, a gap appears. The losses can be accumulated as sketched below.
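
Counting losses from the cyclic counter can be sketched as follows (an illustrative sketch assuming no duplicated or reordered messages; the names are hypothetical):

    // Each received "Count" that jumps by more than 1 (modulo 100) indicates
    // lost packets; the gap minus one is added to the loss total.
    struct LossCounter {
        int  last = -1;      // last received Count, -1 before the first message
        long received = 0;
        long lost = 0;

        void onCount(int count) {
            if (last >= 0) {
                int gap = (count - last + 100) % 100;  // cyclic distance
                lost += gap - 1;                       // 0 when nothing was lost
            }
            last = count;
            ++received;
        }
        double lossRate() const {
            long total = received + lost;
            return total > 0 ? static_cast<double>(lost) / total : 0.0;
        }
    };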

3.3 Positioning Algorithm Accuracy

In this experiment, four test points were set at 0 m, 50 m, 100 m, and 150 m from the test starting point. The connected unmanned vehicle passed through the 150 m, 100 m, 50 m, and 0 m test points at speeds of 10 km/hr, 20 km/hr, and 30 km/hr, respectively. The recorded data are shown in Table 1. The analysis shows that the average positioning error is 1.19 m, which meets the requirements of cooperative communication standards and ensures good positioning accuracy at low speed or in a stationary state. Although the positioning accuracy degrades somewhat in higher-speed scenarios, it remains within the allowable error range, so the positioning algorithm used in this paper can be applied to various driving scenarios.

Fig. 6. Packet loss rate test result.

Table 1. Positioning accuracy at different speeds

Actual distance (m)   Speed (km/hr)   Calculated distance (m)   Error (m)
150                   10              147.56                    2.44
150                   20              147.78                    2.22
150                   30              146.33                    3.67
100                   10              101.33                    1.33
100                   20               99.00                    1.00
100                   30              102.00                    2.00
 50                   10               50.00                    0.00
 50                   20               48.67                    1.33
 50                   30               50.33                    0.33
  0                   10                0.00                    0.00
  0                   20                0.00                    0.00
  0                   30                0.00                    0.00

4. Conclusion

The forward road condition detection scheme based on V2V image transmission proposed in this paper provides ICVs and drivers with more detailed and accurate road condition information and improves the accuracy of road condition warnings. The method sends road condition images through the OBU to the rear vehicles with a low packet loss rate and low transmission delay. When the size of the transferred image exceeds the number of bytes the OBU can send in a single transfer, an optimal segment length is chosen to fragment the image, and a fault-tolerance mechanism ensures that the received fragments are accurately reconstructed into the original image. Vehicle status information is integrated into the image message, which filters out invalid messages for the ICVs and improves the communication efficiency of the IoV.

Acknowledgement

This work was supported by the Office of Science and Technology of Chongqing (No. cstc2019jscx-mbdxX0052, Development and Application of L4-Level Autonomous Driving).

Biography

Changhao Piao
https://orcid.org/0000-0002-0576-5032

He received his B.E. degree in electrical engineering and automation from Xi'an Jiaotong University in 2001, and his M.S. and Ph.D. degrees from Inha University, South Korea, in 2007. He is currently a professor in the School of Automation, Chongqing University of Posts and Telecommunications, Chongqing, China. His research interests include automobile electronics and energy electronics.

Biography

Xiaoyue Ding
https://orcid.org/0000-0001-6358-9594

She received her B.E. degree in Internet of Things Engineering from Chongqing University of Posts and Telecommunications in 2020. Since September 2020, she has been an M.S. candidate in the School of Automation, Chongqing University of Posts and Telecommunications, Chongqing, China. Her current research interests include the Internet of Vehicles and video transmission.

Biography

Jia He
https://orcid.org/0000-0002-0612-1984

He received his B.E. degree in Petroleum Engineering from Southwest Petroleum University in 2020. Since September 2020, he has been an M.S. candidate in the School of Automation, Chongqing University of Posts and Telecommunications, Chongqing, China. His current research interests include smart transportation.

Biography

Soohyun Jang
https://orcid.org/0000-0003-2852-0318

He received his B.S., M.S., and Ph.D. degrees from the School of Electronics and Information Engineering, Korea Aerospace University, Goyang, Korea, in 2009, 2011, and 2015, respectively. He is currently a principal researcher in the Mobility Platform Research Center, Korea Electronics Technology Institute, Seongnam, Korea. His research interests include signal processing algorithms and VLSI implementation for wireless communication systems.

Biography

Mingjie Liu
https://orcid.org/0000-0003-0464-675X

He received his M.S. degree from Chongqing University of Posts and Telecommunications in 2012 and his Ph.D. degree from Inha University, South Korea, in 2019. He is currently a lecturer in the School of Automation, Chongqing University of Posts and Telecommunications, Chongqing, China. His research interests include computer vision and automobile electronics.

References

  • 1 National Bureau of Statistics of the People's Republic of China, China Statistical Yearbook 2020. Beijing: China Statistics Press, 2020.
  • 2 B. He and T. Li, "An offloading scheduling strategy with minimized power overhead for Internet of vehicles based on mobile edge computing," Journal of Information Processing Systems, vol. 17, no. 3, pp. 489-504, 2021.
  • 3 M. M. Rad, A. M. Rahmani, A. Sahafi, and N. N. Qader, "Social Internet of Things: vision, challenges, and trends," Human-centric Computing and Information Sciences, vol. 10, article no. 52, 2020. doi: 10.1186/s13673-020-00254-6
  • 4 J. K. Park and T. M. Chung, "Boundary-RRT* algorithm for drone collision avoidance and interleaved path re-planning," Journal of Information Processing Systems, vol. 16, no. 6, pp. 1324-1342, 2020.
  • 5 L. Tang and C. Chen, "Integration development trend of the internet of vehicles industry," Telecommunications Science, vol. 35, no. 11, pp. 96-100, 2019.
  • 6 T. Kim, I. Y. Jung, and Y. C. Hu, "Automatic, location-privacy preserving dashcam video sharing using blockchain and deep learning," Human-centric Computing and Information Sciences, vol. 10, article no. 36, 2020. doi: 10.1186/s13673-020-00244-8
  • 7 J. Li, Y. Zhang, M. Shi, Q. Liu, and Y. Chen, "Collision avoidance strategy supported by LTE-V-based vehicle automation and communication systems for car following," Tsinghua Science and Technology, vol. 25, no. 1, pp. 127-139, 2020. doi: 10.26599/tst.2018.9010143
  • 8 A. Khalil, N. Minallah, I. Ahmed, K. Ullah, J. Frnda, and N. Jan, "Robust mobile video transmission using DSTS-SP via three-stage iterative joint source-channel decoding," Human-centric Computing and Information Sciences, vol. 11, article no. 42, 2021. doi: 10.22967/HCIS.2021.11.042
  • 9 D. F. Chu, Z. E. Wang, and Y. J. Qiu, "Research on transmission strategy for real-time video stream under LTE-V2X," Automotive Engineering, vol. 43, no. 6, pp. 815-824, 2021.
  • 10 F. Hui, M. Xing, J. Guo, and S. Tang, "Forward collision warning strategy based on vehicle-to-vehicle communication," Journal of Computer Applications, vol. 41, no. 2, pp. 498-503, 2021.
  • 11 W. Song, S. Zou, Y. Tian, S. Sun, S. Fong, K. Cho, and L. Qiu, "A CPU-GPU hybrid system of environment perception and 3D terrain reconstruction for unmanned ground vehicle," Journal of Information Processing Systems, vol. 14, no. 6, pp. 1445-1456, 2018. doi: 10.3745/JIPS.02.0099
  • 12 R. H. Rasshofer, M. Spies, and H. Spies, "Influences of weather phenomena on automotive laser radar systems," Advances in Radio Science, vol. 9, pp. 49-60, 2011. doi: 10.5194/ars-9-49-2011