

Younghwan Byun* , Sechang Oh** and Min Choi*

ICT Agriculture Support System for Chili Pepper Harvesting

Abstract: In this paper, an unmanned automation system for harvesting chili peppers through image recognition in the color space is proposed. The system is a cutting-edge convergence of information and communication technology (ICT) and agriculture, an industry that requires substantial manpower and hard physical labor. We developed an autonomous application that obtains the head coordinates of a chili pepper using image recognition based on the OpenCV library. As an alternative solution to labor shortages in rural areas, a robot-based chili pepper harvester is proposed as a convergence of ICT and labor-intensive agriculture. Although agriculture currently depends heavily on human workers, we expect that, in the future, robots will be capable of harvesting chili peppers autonomously.

Keywords: Agriculture Supporting System , Automation System , Chili Harvest , ICT Convergence

1. Introduction

The area of information and communication technology (ICT) and its role in changing agricultural practices has attracted increasing attention owing to its wide applicability. ICT convergence has been considered for future technologies in numerous industrial environments. As a result of ICT convergence, an automatic chili harvesting system can make cultivation easier, thus resolving the manpower shortage in rural areas [1]. Such a robot can eliminate the simple repetitive tasks required in agriculture. For this purpose, we adopted an automation strategy that first locates a chili pepper one step ahead of the chili tree and then harvests the pepper using a robot arm as part of the automated harvesting system [2,3]. To find a chili pepper on a chili tree, we apply image processing and look for ripened peppers with a red color in the camera image. After a frame is captured, an algorithm processes the camera input to obtain the coordinates of the pepper tap (the top of the stem). Previous studies have focused on certain automation components while still requiring human labor [1,4-6], which makes automation cumbersome and limits its application. To solve these problems, we developed an automation system capable of harvesting chili peppers, in which the robot arm reaches the actual hardware coordinates obtained from a frame of an actual camera image. The automatic chili harvesting system is equipped with an ultrasonic sensor at the front, which calculates the distance between the system and its targets. In this study, we thereby developed an ICT convergence agriculture support system.

To this end, we propose a set of algorithms for image preprocessing, noise removal, finding the center coordinates of the image frame containing a chili pepper, and controlling an Atmel AVR module based on the center coordinates of the input image, among other functions. Through this study, we show that pepper harvesting can be achieved without human labor through an ICT convergence system, resulting in automated chili pepper harvesting.

In Section 2, we describe previous related studies. In Section 3, we demonstrate the system architecture and implementation of an ICT convergence agriculture support system for chili pepper harvesting, and an algorithm for finding the top of a chili pepper, calculating the noise of the coordinates, and determining the coordinates of the center section. In Section 4, we present the experimental results. Finally, we provide some concluding remarks regarding this study in Section 5.

2. Related Work

Fig. 1 shows an end effector that automatically gathers the peppers one by one. This invention includes an end effector for harvesting peppers and a cylindrical immersion tube, with cuts installed at the upper end of the tube that allow the chili peppers above the nipple area to be injected into the tube. The cut area is provided on the side surface of the immersion tube, and the drive link includes a drive motor. The pepper is fed into the input tap of the pipe, and the position of the region provided on the upper side of the cutout is detected; the device is configured so that a detection signal drives the motor using a small and light mechanism. In addition, applying such automation to the harvest work of capsicum growers makes pepper harvest operations easier and simultaneously increases labor savings [7].

Fig. 1.

End effector for harvesting peppers one by one automatically.

Although this invention is intended for pepper harvesters, it can be applied to other agricultural products for easier harvesting. The device in Fig. 2 includes a cutting blade to cut the tap of the pepper. The pepper cutter is provided at one end and allows a flexible transfer of the pepper into the transfer tube for conveying. A suction force connects the pepper transfer tube at one end, while the other end forms at least one cut through the pepper transfer pipe toward the engine cover. In addition, the cover is integrally connected through a hinge joint to collect the peppers transferred through the transfer tube, and the lower part has a collector mounted on castors [8-11].

Fig. 2.

Cutting tool with a blade for cutting the body of the red pepper.

3. System Architecture and Implementation

The overall robot architecture has a size of 340 mm × 250 mm and includes stepper motors (3KH56KM-902 and PM55L-048-HHD0) driven at 24 V 2 A and 24 V 600 mA, respectively. An HS-485HB servo motor provides the drive torque for the forceps, and four DC motors with a drive torque of 1.3 are used. The input voltages of the robot are 12 V (motors) and 5 V (AVR and camera), and the current draw exceeds 4 A. Video is captured by a wireless camera and transmitted wirelessly using a CAM control board with ZigBee communication to a receiving PC with a USB communication port and a data board. An ATmega128 handles the motor control and sensors, and the wireless camera and the ATmega128 are connected over UART. The data processed on the PC are therefore transmitted to the ATmega128 through the wireless camera's CAM control board.

3.1 H/W Frame and XYZ Coordinates of the Automatic Harvesting System

The automatic chili harvesting system has a device arm capable of linear motion along three axes, as shown in Fig. 3. First, on the x-axis, the arm moves left or right along a straight line; thus, a DC motor is controlled with forward and reverse rotations. Second, on the y-axis, the height of the pepper varies from case to case, which is handled by the rising or falling motion of the step motor. Third, on the z-axis, once the x and y coordinates match the target, we harvest the chili pepper by moving the clamp at the end of the rail toward the pepper.

The aluminum frame of the automated chili harvesting system was machined using a CNC process. It consists of three parts: upper, middle, and bottom. The overall design and 3D models are shown in Fig. 3.

Fig. 3.

Three-dimensional (3D) design of the automatic chili harvesting robot.

We installed four polished rods on the upper part. In the middle of the system, the step motor runs a timing belt to conduct the harvesting motion along a forward belt rail. At the end of the rail tongs, ultrasonic sensors and a wireless camera are mounted. At the bottom of the system, four DC motors and two step motors are mounted. The four polished rods at the top prevent unwanted motion and stably hold the parallel portion, while the two step motors carry a bolt up or down.

On the z-axis, as shown in Fig. 4, step motors are attached to the profile to turn the timing belt that drives the harvesting motion. We implemented a system connected to the forward belt rail; at the end of the rail tongs, ultrasonic sensors and a wireless camera are mounted. Modifying the arm length through the cooperative control of the motors shown in Fig. 5 allows the tap of the pepper to be reached efficiently.

Fig. 4.

Rail frame for the z-axis.

Fig. 5.

Detail of stepping motors applied in this system.

The aluminum frame was made using a CNC machine. We designed the robot frame in three parts: top, middle, and bottom. At the top of the robot, as shown on the left side of Fig. 6, four polished rods are set to prevent unwanted motion by stably holding the parallel portion. In the middle of the robot, a step motor with a timing belt connected to a belt rail moves the robot arm back and forth, which enables the actual harvesting motion. At the end of the rail tongs, ultrasonic sensors and a wireless camera are mounted.

At the bottom of the robot, four DC motors are mounted together with the two step motors. These two step motors carry the arm up or down by rotating a threaded bolt.

Fig. 6.

Aluminum frame for robot.
3.2 Image Processing

We used a VRC 3.0 two-way wireless camera. The frame size is 160×120 pixels, and JPEG is used as the format. When an image is input to the PC, the frame is converted into the YCbCr color model, as shown in Fig. 7. The range of YCbCr values for binarization is obtained from the pixel at the mouse location when the user clicks the red part of a red pepper on the PC screen; to obtain a more optimized value, the user can adjust the range using a slide bar. Next, to obtain the outline of the red part of the pepper, we run the Canny edge detection algorithm to obtain the contour of the object and then label the contour result. Finally, areas smaller than 100 pixels are treated as noise and removed from the frame.

Fig. 7.

Flow chart of image processing.

Fig. 8.

Labeling process of the input image frame and edge detection.

Most peppers hang downward from the tap (stem). Peppers are also curved and significantly bent at the lower part, whereas the upper part is usually less bent and often nearly straight. Focusing on these points, we applied an algorithm that obtains the head coordinates from the chili images, as shown in Fig. 8. If noise-free labels are available, we obtain the center coordinates from the noise-removed labels. When the y coordinate is larger than that of the center, we calculate a second coordinate for y. We then obtain a third center coordinate and compute the slope through the second and third points using the algorithm in Fig. 9.

We also obtain the equation of a straight line through the third point. By substituting five lower points of the y-axis into the straight-line equation, we obtain the final x and y coordinates of the target, which are the coordinates of the pepper tap. Once the barycenter coordinates of the tap are recovered, the error value with respect to the frame can be calculated. The robot compensates for this error and then sends a signal to move along the z-axis. If the absolute value of the first x error is greater than 20, it sends a signal requesting movement of the DC motor at a relatively fast speed.
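The center-slope-line procedure above can be sketched as follows. The exact definitions of the second and third center points and of the five-point substitution are not fully specified in the text, so this is an interpretive sketch: successive centroids of the upper portion of the label are used to fit the line, and the line is extrapolated 5 px above the topmost pixel (an assumption).

```python
def estimate_tap_point(points):
    """Estimate the tap (stem-top) pixel of a pepper label.

    points: (x, y) pixels of the label, y increasing downward, so the
    tap is near the smallest y. Interpretive sketch, not the exact
    algorithm of Fig. 9.
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    c1 = (sum(xs) / len(xs), sum(ys) / len(ys))          # first centroid
    up1 = [p for p in points if p[1] < c1[1]] or points  # upper half
    c2 = (sum(x for x, _ in up1) / len(up1), sum(y for _, y in up1) / len(up1))
    up2 = [p for p in up1 if p[1] < c2[1]] or up1        # upper quarter
    c3 = (sum(x for x, _ in up2) / len(up2), sum(y for _, y in up2) / len(up2))
    # Straight line through c2 and c3, expressed as x(y); guard dy == 0.
    dy = c3[1] - c2[1]
    k = (c3[0] - c2[0]) / dy if dy else 0.0
    y_tap = min(ys) - 5                   # extrapolate above the topmost pixel
    x_tap = c3[0] + k * (y_tap - c3[1])
    return (x_tap, y_tap)
```

For a perfectly vertical pepper the fitted line is vertical, so the estimated tap sits directly above the label.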

Fig. 9.

Algorithm for detecting the center coordinates of a chili pepper.

[TeX:] $$\text{(Center Coordinate Point)} > 0$$

When the x error is smaller than the second error value, the system reduces the speed for precision as the center coordinates come close. The absolute value of the y error is divided into six steps; if it is greater than 10, the distance is calculated and a signal is sent. The signal value is calculated as follows:

[TeX:] $$8+\left(\frac{|err_Y|}{10}\right) \text { and } 8-\left(\frac{|err_Y|}{10}\right)$$

These numbers correspond to the signal sent to the AVR, which rotates the wheel using a step number of 200. When the error is less than 10 and the center coordinates are available, the algorithm halves the speed to a step value of 100 for precise control, allowing real-time control of y until the final error value reaches 0.
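The signal rule above can be sketched as follows. The mapping of the error sign to the plus and minus branches, and the use of integer division for the six-step quantization, are assumptions for illustration.

```python
def y_axis_signal(err_y):
    """Compute the y-axis step signal from the pixel error.

    Sketch of the rule in the text: when |errY| > 10 the signal is
    8 + |errY|/10 in one direction and 8 - |errY|/10 in the other;
    at |errY| <= 10 control hands over to the precise 100-step mode.
    """
    if abs(err_y) <= 10:
        return 0                      # close enough: precise mode takes over
    mag = abs(err_y) // 10            # one of the six quantized error steps
    return 8 + mag if err_y > 0 else 8 - mag
```

For example, an error of +35 pixels yields a signal of 11, while -35 yields 5, placing the command symmetrically around the base value of 8.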

4. Experimental Results

This section presents the experimental results for the chili pepper harvesting robot. For DC motor control, we used an AM-DC2-2D motor driver, which supports both high-speed and low-speed control of the DC motor. The high-speed control operates through fast pulse width modulation (PWM) using an interrupt service routine on the timers of the ATmega128. Low-speed control occurs when data are received over UART, resulting in delayed PWM control. For step motor control, we use a MAI-STM-LC V2.0, which drives the unipolar step motor at up to 3 A. The step motor operates with acceleration and deceleration controlled by a delay parameter.
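The acceleration and deceleration scheduling via a delay parameter can be illustrated with a simple trapezoidal ramp. All numeric values (delays in microseconds, ramp length) are assumptions for illustration, not parameters of the MAI-STM-LC board.

```python
def step_delays(n_steps, d_max=2000, d_min=600, ramp=50):
    """Inter-step delay schedule (in µs) for a step motor move.

    The delay ramps down from d_max to d_min over `ramp` steps
    (acceleration), holds at d_min, and ramps back up at the end
    (deceleration). Values are illustrative assumptions.
    """
    delays = []
    for i in range(n_steps):
        # Distance to the nearer end of the move, capped at the ramp length.
        k = min(i, n_steps - 1 - i, ramp)
        delays.append(max(d_min, d_max - (d_max - d_min) * k // ramp))
    return delays
```

A 200-step move (one revolution at the paper's step number of 200) starts and ends slowly and runs at full speed in the middle.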

To check the distance between the robot arm and the chili peppers, we use ultrasonic sensors, which provide analog outputs. The sensor triggers an ADC interrupt every second when the potential difference is larger than 3 V, and it detects a chili pepper within 7 cm of the robot, as shown in Fig. 10.
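The 3 V threshold check can be sketched as follows. The ATmega128 has a 10-bit ADC; the 5 V reference voltage here is an assumption, as is the helper name.

```python
def pepper_in_range(adc_value, v_ref=5.0, bits=10, threshold=3.0):
    """Return True when the ultrasonic sensor voltage exceeds the threshold.

    Converts a raw ADC reading (0..2**bits - 1) to a voltage against an
    assumed 5 V reference and compares it with the 3 V level that the
    text associates with a pepper within 7 cm.
    """
    voltage = adc_value * v_ref / ((1 << bits) - 1)
    return voltage > threshold
```

With a 5 V reference, readings above roughly 614 counts (≈3 V) indicate a pepper in range.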

Fig. 10.

Output measurement of the ultrasonic sensor.

In Fig. 11, the robot finds the coordinates of the red pepper tap from the input image through image processing. Here, the left-most label is recognized as the first crop. The jaws are positioned through camera-guided movement along the x- and y-axes until the robot is in front of the pepper. Finally, the robot moves forward on the z-axis rail to harvest the pepper.

Fig. 11.

Coordinates of the robot arm movement and the algorithm for detecting the knob position of the chilies.

In Fig. 8, the first row shows the original input frames. The frames in the second row show where the center position was detected through image processing. Thereafter, we determine the edges using the edge detection algorithm.

The operation through which the robot harvests the chili peppers is depicted in Fig. 12. The robot moves until it reaches the crop, in front of the peppers. Then, the web camera mounted on top of the robot takes an image, which is processed with the ATmega128 processor. Next, the robot arm approaches the chili peppers and cuts the knob of the pepper at the center head position calculated through image processing.

Fig. 12.

Actual demo operation of harvesting chilies automatically.

5. Conclusions

In this paper, an automatic unmanned robot that harvests chili peppers by recognizing the red color within the color space was proposed. The robot applies a fusion of ICT and labor-intensive agriculture as an alternative solution to labor shortages in rural areas. Its image processing module obtains the head coordinates of a chili pepper based on the red color using the OpenCV library. After comparing the obtained coordinates with the center coordinates, the image processing module sends a signal to the robot and controls the DC motor and a step motor through the AVR. Finally, the robot is able to harvest the chili peppers. Agriculture will remain an extremely important industry in the future and will increasingly depend on robots and automated production systems; the technologies presented in this work will be necessary for realizing such systems.

Biography

Younghwan Byun
https://orcid.org/0000-0003-3485-6272

He received his B.S. degree in Information and Communication Engineering from Chungbuk National University, Korea, in 2014. Since 2014, he has been working for LG Innotek, a global materials and components company. His current research interests include optical solutions, substrate materials, and automotive electronic parts.

Biography

Sechang Oh
https://orcid.org/0000-0003-0899-7207

He received his M.S. and Ph.D. degrees in computer science from Korea Advanced Institute of Science and Technology (KAIST) in 1990 and 1997, respectively. He worked at LG Corporation Institute of Technology for 4 years and at Ajou University for 3 years. After that, he worked as a professor in the Department of Computer Software at Sejong Cyber University for 14 years. He is now working for Saltlux, a leading company in artificial intelligence. His current research interests include deep learning, data science, and computer vision.

Biography

Min Choi
https://orcid.org/0000-0002-8031-1022

He received his B.S. degree in Computer Science from Kwangwoon University, Korea, in 2001, and the M.S. and Ph.D. degrees in Computer Science from Korea Advanced Institute of Science and Technology (KAIST) in 2003 and 2009, respectively. From 2008 to 2010, he worked for Samsung Electronics as a Senior Engineer. Since 2011, he has been a faculty member of the Department of Information and Communication of Chungbuk National University. His current research interests include high performance computing, cloud computing, interconnection network, and embedded computing.

References

  • 1 S. Lee, Micro-controller AVR ATmega128, Seoul: Hanbit Media Pub., 2013.
  • 2 J. Ahn, C. Kim, CNC Machine Tool and Programming, Seoul: Bookshill Pub., 2014.
  • 3 P. Sharma, G. Singh, A. Kaur, "Different techniques of edge detection in digital image processing," International Journal of Engineering Research and Applications, vol. 3, no. 3, pp. 458-461, 2013.
  • 4 P. Ravoor, S. Rupanagudi, R. Bs, "Novel algorithm for finger tip blob detection using image processing," International Journal of Signal Processing Systems, vol. 1, no. 1, pp. 11-16, 2012.
  • 5 D. Kim, OpenCV Programming, Seoul: Kame Publishing, 2011.
  • 6 H. Kang, Y. Shin, Perfect C, Seoul: Infinity Books Pub., 2007.
  • 7 Q. L. Liu, D. H. Oh, "Performance evaluation of multi-hop communication based on a mobile multi-robot system in a subterranean laneway," Journal of Information Processing Systems, vol. 8, no. 3, pp. 471-482, 2012. doi: 10.3745/JIPS.2012.8.3.471
  • 8 A. P. James, S. Dimitrijev, "Ranked selection of nearest discriminating features," Human-Centric Computing and Information Sciences, vol. 2, no. 12, 2012.
  • 9 Y. Zhao, L. Gong, Y. Huang, C. Liu, "Robust tomato recognition for robotic harvesting using feature images fusion," Sensors, vol. 16, no. 2, 2016. doi: 10.3390/s16020173
  • 10 R. Fernandez, C. Salinas, H. Montes, J. Sarria, "Multisensory system for fruit harvesting robots: experimental testing in natural scenarios and with different kinds of crops," Sensors, vol. 14, no. 12, pp. 23885-23904, 2014.
  • 11 S. T. Namin, L. Petersson, "Classification of materials in natural scenes using multi-spectral images," in Proceedings of 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura, Portugal, 2012, pp. 1393-1398.