
Chengjuan Ren*,**, Dae-Kyoo Kim***, and Dongwon Jeong*

A Survey of Deep Learning in Agriculture: Techniques and Their Applications

Abstract: With promising results and enormous capability, deep learning has attracted increasing attention in both theoretical research and applications across a variety of image processing and computer vision tasks. In this paper, we investigate 32 research contributions that apply deep learning techniques to the agriculture domain. We survey the types of deep neural network architectures used in agriculture and summarize the current state-of-the-art methods. The paper ends with a discussion of the advantages and disadvantages of deep learning and of future research topics. The survey shows that deep learning-based research achieves superior accuracy, outperforming standard machine learning techniques.

Keywords: Deep Learning, Agriculture, State-of-the-Art, Survey

1. Introduction

In the 1980s, several countries such as the United States and Canada pioneered precision agriculture, which uses remote sensing, geographic information systems, global satellite positioning, and automatic computer control to monitor and manage agriculture in real time. These technologies bring many conveniences to agriculture; for example, the optimal amount of fertilizer application can be determined.

Furthermore, production can be increased and costs reduced while also reducing pollution. Early detection and prevention can effectively slow the spread of crop diseases, and fewer chemicals are needed to prevent or control diseases at an early stage, which reduces environmental pollution. Timely and accurate crop information is therefore of great significance for both the economy and the environment.

Since precision farming was proposed, opening a new research field in agriculture, it has raised many problems and challenges, such as environmental effects, plant diseases, crop yield, food safety, and health. Meanwhile, with the emergence of big data technology, machine learning (ML), which supports farm production by reducing environmental impact and maintaining sustainability, has been used to address these challenges. Moreover, because of its computing power, ML enables better quantitative and qualitative analysis of data in smart farming operational environments. ML has been applied to many areas, such as agriculture [1-3], medicine [4,5], and human-robot interaction [6-8]. ML is the practice of having computers simulate human learning, acquire new knowledge, continuously improve performance, and achieve intelligent self-improvement. However, designing the feature extractor for conventional ML requires careful engineering and considerable domain expertise, which is time-consuming and demanding in human, material, and financial resources.

In 2006, Geoffrey Hinton proposed that deep belief networks (DBN) can be trained with an unsupervised layer-by-layer greedy algorithm [9], which brought hope for training deep neural networks. Since then, deep learning (DL) has spread widely. The most typical and representative DL models are the restricted Boltzmann machine (RBM) [10], autoencoder (AE) [11], convolutional neural network (CNN) [12], and recurrent neural network (RNN) [13]. DL learns data representations with multiple levels of abstraction through computational models composed of multiple processing layers. The dimensionality of the data is reduced, and a concise description is created by extracting features from the input data. All samples are labeled step by step; in other words, a DL network trains on all of the sample data. Compared with traditional ML, DL has attracted full attention from researchers because of its advantages in various application fields. Many scholars have made remarkable achievements in image classification [14,15], speech recognition [16], and image recognition [17,18] using DL networks.

DL can effectively extract various features from images and structured data. Hence, DL may be combined with agricultural machinery to support the development of intelligent agricultural equipment. In recent years, DL-based research results have kept emerging in the field of agriculture. This paper investigates the applications and techniques of DL in agriculture, aiming to provide agricultural researchers with a reference to DL methods and to help them retrieve literature related to their research problems quickly and accurately.

2. Scope

In recent years, applications of DL in agriculture have achieved remarkable results. We first searched for papers using the keywords agriculture, deep learning, farming, and convolutional neural network in databases such as Web of Science, IEEE Xplore, Google Scholar, and Baidu Scholar. We then filtered out articles that involved DL but were not applied to agriculture, leaving 54 papers. Out of these 54, we selected 32 papers published in the last 5 years, sorted by citation index from high to low. This ranking ignores some areas that receive less attention but are still meaningful. The papers are studied in terms of research problems, proposed solutions, data sources, and results. This paper aims to survey DL techniques and their applications in agriculture to provide a guideline and timely reference for the research community. The in-depth technical analysis of each paper is the main difference between this paper and other surveys.

Several interesting reviews have been published on the subject of DL in agriculture, including a survey of DL in agriculture [19] and a review of crop yield prediction and nitrogen status estimation using ML in agriculture [20]. Studies close to the topic of this paper are listed in Table 1. According to this survey of relevant review articles in the last 5 years, 2018 produced the most articles (Table 1), and since 2018 researchers have been paying more and more attention to the applications of DL in agriculture.

The structure of this paper is as follows. Section 3 gives a brief introduction to DL. Section 4 presents a survey of the selected papers. Section 5 analyzes the DL techniques used in those papers, Section 6 discusses the findings, and Section 7 concludes the paper.

Table 1. Studies close to the topic of this paper

Study | Year | Title | Content
Kamilaris and Prenafeta-Boldu [19] | 2018 | Deep learning in agriculture: a survey | A survey of DL in agriculture
Chlingaryan et al. [20] | 2018 | Machine learning approaches for crop yield prediction and nitrogen status estimation in precision agriculture: a review | A review of crop yield prediction and nitrogen status estimation using ML in agriculture
Kamilaris and Prenafeta-Boldu [21] | 2018 | A review of the use of convolutional neural networks in agriculture | A review of the use of CNN models in agriculture
Patricio and Rieder [22] | 2018 | Computer vision and artificial intelligence in precision agriculture for grain crops: a systematic review | A review of AI for grain crops
Gongal et al. [23] | 2015 | Sensors and systems for fruit detection and localization: a review | A review of ML systems for fruit detection and localization
Liakos et al. [24] | 2018 | Machine learning in agriculture: a review | A synthesized review of ML in agriculture
Mishra et al. [25] | 2016 | Application of machine learning techniques in agriculture crop production: a review | A review of crop production using ML techniques
Zhu et al. [26] | 2018 | Deep learning for smart agriculture: concepts, tools, applications, and opportunities | A review of concepts, tools, applications, and opportunities for DL in agriculture
Weiss et al. [27] | 2020 | Remote sensing for agricultural applications: a meta-review | A review of remote sensing in agriculture

3. A Brief Introduction to Deep Learning

DL discovers the distributed characteristics of data by combining low-level features to form more abstract high-level representations of attribute categories or features. Its motivation is to build neural networks that simulate the human brain for analytical learning. DL interprets data (e.g., text, images, video, and sound) by mimicking the way the human brain works.

3.1 Artificial Neural Network

The concept of DL was derived from the study of artificial neural networks (ANN). A DL structure consists of a multi-layer perceptron with multiple hidden layers. An ANN is a set of neurons connected in an acyclic graph. During ANN training, the gradient becomes increasingly sparse and tends to converge to a local minimum; back-propagation (BP) is not ideal when only a few layers of the network are trained with the typical algorithms of traditional multi-layer network training. A simple neural network consists of three parts: input layer (i), hidden layer (j), and output layer (k), as shown in Fig. 1.

Fig. 1. ANN architecture.

Fig. 2. Process of the BP neural network model.

Variables are fed into the input layer, computation is performed in the hidden layer, and output is produced at the output layer. The hidden layer contains neurons that rely on activation functions to execute operations.

The transfer function f of each node must be differentiable everywhere; the most common choice is the sigmoid. If $y_{k}^{*}$ denotes the desired output of the k-th output neuron, the squared error function of the network is as follows:

(1)
$$\mathrm{E}=\frac{1}{2} \sum_{k=1}^{n} e_{k}^{2}=\frac{1}{2} \sum_{k=1}^{n}\left(y_{k}-y_{k}^{*}\right)^{2}$$

The BP algorithm modifies the weights according to the negative gradient of the error function in (1). The weight update formula is expressed as follows, where l denotes the layer index:

(2)
$$w^{l}=w^{l}+\Delta w^{l}$$

The BP neural network has good nonlinear mapping ability and can automatically learn features from training datasets. The training of BP includes forward propagation of the signal and backward propagation of the error. The error is computed in the forward direction from input to output, while the weights and thresholds are adjusted in the backward direction from output to input. The training process of the BP neural network model is shown in Fig. 2.
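As a minimal illustration of Eqs. (1) and (2), the following NumPy sketch trains a one-hidden-layer network with sigmoid activations by back-propagating the squared error. The layer sizes, toy data, and learning rate are arbitrary choices for the example and are not taken from any surveyed paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))            # toy inputs
Y = rng.normal(size=(100, 2))            # toy desired outputs y_k^*

W1 = rng.normal(scale=0.1, size=(4, 8))  # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(8, 2))  # hidden -> output weights
eta = 0.1                                # learning rate

for epoch in range(1000):
    # forward propagation of the signal
    H = sigmoid(X @ W1)                  # hidden activations
    Y_hat = sigmoid(H @ W2)              # network outputs y_k
    E = 0.5 * np.sum((Y_hat - Y) ** 2)   # squared error, Eq. (1)

    # backward propagation of the error (negative gradient direction)
    d_out = (Y_hat - Y) * Y_hat * (1 - Y_hat)
    d_hid = (d_out @ W2.T) * H * (1 - H)

    # weight update w^l = w^l + Delta w^l, with Delta w^l = -eta * dE/dw^l, Eq. (2)
    W2 += -eta * H.T @ d_out
    W1 += -eta * X.T @ d_hid
```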

3.2 Convolutional Neural Network

CNN is a DL algorithm built on a deep feedforward ANN. With shared weights, a deep layered structure, and strong learning ability, CNNs can resolve more complex problems with larger models and produce gratifying results. CNNs have also made significant breakthroughs in many applications, such as speech recognition [28-30], language translation [31-33], image recognition [34-37], and information retrieval [38-41]. Conventional ANNs, however, are not comparable with CNNs in solving large-scale problems.

A CNN is usually made up of three kinds of layers: convolutional, pooling, and fully connected layers, as shown in Fig. 3. From left to right, Fig. 3 shows the input, a convolutional layer, a rectified linear unit (ReLU) layer, a pooling layer, another convolutional layer, ReLU layer, and pooling layer, a fully connected layer, and a softmax layer. The convolutional layer applies a set of filters to extract various features from an image. Several pooling methods exist, such as max pooling and average pooling. Max pooling, which is widely used, reduces the size of the feature maps while retaining the corresponding features; it is therefore mainly used for dimension reduction. The convolutional layer and the pooling layer are usually used together for feature extraction from input images. After multiple convolutions, the highly abstracted features in the fully connected layer are integrated and can be normalized to output a probability for each class. The classifier then assigns a class according to the probability produced by the fully connected layer. In general, low-level convolutions describe objects such as lines and textures.

Fig. 3. CNN architecture.

High-level convolutions represent detailed features, which are obtained from combinations of low-level features. With weight sharing and no pressure to process high-dimensional data, CNNs can extract features automatically and perform exceptionally well on classification and prediction. However, CNN training employs gradient descent, which often falls into local minima and overfits, and the pooling layer also loses much valuable information.
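The layout described in Fig. 3 can be sketched, for example, in PyTorch as follows. The channel counts, kernel sizes, input resolution, and the 10-class output are illustrative assumptions, not values taken from the surveyed papers.

```python
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """Conv-ReLU-Pool twice, then a fully connected layer and softmax, as in Fig. 3."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # convolutional layer
            nn.ReLU(),                                     # ReLU layer
            nn.MaxPool2d(2),                               # pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1),   # convolutional layer
            nn.ReLU(),                                     # ReLU layer
            nn.MaxPool2d(2),                               # pooling layer
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)  # fully connected layer

    def forward(self, x):
        x = self.features(x)                 # feature extraction
        x = torch.flatten(x, 1)
        logits = self.classifier(x)
        return torch.softmax(logits, dim=1)  # softmax layer -> class probabilities

# Example: a batch of four 224x224 RGB images
probs = SimpleCNN()(torch.randn(4, 3, 224, 224))
```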

Before training, a CNN needs several hyperparameters to be set, such as the number and size of filters, the pooling stride, the amount of zero padding, the batch size, and the learning rate. Once the hyperparameters are set, they do not change during training. Training images can be fed into the CNN in batches. After training, a new image is input to the CNN; the network performs the forward propagation process again and calculates the probability that the image belongs to each category. The training process of a CNN is shown in Fig. 4, and a minimal loop corresponding to it is sketched below.
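The following sketch assumes the SimpleCNN class from the previous example; the random dataset, the batch size of 8, and the learning rate of 0.01 are placeholders chosen only to show where each hyperparameter enters the training loop.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset: random images with integer class labels
data = TensorDataset(torch.randn(64, 3, 224, 224), torch.randint(0, 10, (64,)))
loader = DataLoader(data, batch_size=8, shuffle=True)      # batch size is a hyperparameter

model = SimpleCNN()                                        # defined in the sketch above
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)   # learning rate is a hyperparameter
criterion = nn.NLLLoss()                                   # expects log-probabilities

for epoch in range(5):
    for images, labels in loader:
        probs = model(images)                        # forward propagation
        loss = criterion(torch.log(probs + 1e-8), labels)
        optimizer.zero_grad()
        loss.backward()                              # error back-propagation
        optimizer.step()                             # weight update
```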

Fig. 4. The training process of CNN.

As shown in Fig. 2, the idea of CNN originated from the BP neural network. The BP neural network is a multi-layer feedforward network trained with the error back-propagation algorithm; it has strong nonlinear mapping ability and a flexible network structure. A CNN, in addition, has the ability of representation learning and can classify input information with translation invariance according to its hierarchical structure.

4. Applications of Deep Learning in Agriculture

This section describes the surveyed papers related to applications of deep learning in agriculture; Table 2 summarizes the relevant papers.

4.1 Plant Domain

With the development of agricultural modernization, the area under large-scale cultivation keeps increasing. DL has a wide range of applications in crop planting, such as the detection of plant diseases, species classification, and prediction of crop yield.

In agricultural production, crop diseases in particular need to be detected to improve productivity. There are many plant species to be inspected, and just as many disease types. Relying on professionals to visually observe the disease status of crops in the planting area requires a huge amount of human labor and is inefficient and imprecise. Therefore, automated computer vision technologies are desired to help solve the problem of disease identification in agricultural production. Several works apply DL to crop disease classification or detection. Ha et al. [42] proposed a highly accurate system to detect radish disease (Fusarium wilt), classifying radish as diseased or healthy with a deep convolutional neural network (DCNN). Ma et al. [43] developed a DCNN to recognize four types of cucumber diseases; compared to conventional methods (e.g., RF, SVM, and AlexNet), the DCNN detects cucumber diseases better, with 93.41% accuracy. Similar to [43], Lu et al. [44] proposed CNNs to identify ten types of rice diseases with 95.48% accuracy, demonstrating the superiority of the CNN-based model in identifying rice diseases. Liu et al. [45] presented a novel AlexNet-based model to detect four types of common apple leaf diseases; the approach achieved 97.6% accuracy and improved the robustness of the CNN model in their experiments. Considering food security issues, Mohanty et al. [46] identified 26 types of diseases and 14 crop species using a CNN model, which demonstrated excellent performance and proved feasible and robust for disease detection. Tran et al. [47] presented a system for monitoring the growth process and increasing tomato production; it classifies nutrient deficiencies and pathologies during growth, and based on the system's output, agricultural experts take corresponding measures to resolve symptoms. Fuentes et al. [48] used three DL meta-architectures, faster region-based convolutional neural network (Faster R-CNN), region-based fully convolutional network (R-FCN), and single shot multibox detector (SSD), combining each of them with VGG and ResNet feature extractors to detect plant diseases and pests; the developed models can effectively detect nine types of diseases and pests in complex surroundings. Wang et al. [49] diagnosed disease severity by fine-tuning CNNs with transfer learning on the PlantVillage dataset; the best model produced 90.4% accuracy.

Crop classification and identification are critical initial stages of an agricultural monitoring system. Precise identification of various crop types not only allows accurate estimation of crop planting area, structure, and spatial distribution but also provides input parameters for crop yield estimation models. Zhong et al. [50] presented a Conv1D-based classification framework for identifying crop growth patterns and crop types from time-series remotely sensed data; the framework was effective in representing time series for multi-temporal classification tasks. Milioto et al. [51] presented a system to detect and classify sugar beets and weeds with outstanding performance. Ghazi et al. [52] combined transfer learning with popular CNN architectures, including VGGNet, AlexNet, and GoogLeNet, to recognize plant types; they analyzed and adjusted the network parameters to improve performance, and their model placed third in PlantCLEF2016. Zhu et al. [53] used an improved Inception V2 architecture to identify plant species; experiments with real scenes showed that the proposed method was more accurate than Faster R-CNN in identifying leaf species in a complex environment. Finally, to boost fruit production and quality, Dias et al. [54] developed a robust system to recognize apple flowers using a CNN.

Prediction of crop yield, which estimates production in advance of harvest, is another area of study in planting. It provides forecast data by region and crop from multiple surveys at different growth stages. To observe apple growth at every stage, Tian et al. [55] put forward a YOLOv3-dense model to detect apple growth and estimate yield, using data augmentation to avoid overfitting. The orchard in their study involved varying lighting, complex backgrounds, and overlapping fruits, and their approach was concluded to be valid for real-time application in apple orchards. Rahnemoonfar and Sheppard [56] used an improved Inception-ResNet model to estimate fruit yield by counting fruits; the model was accurate even under complex conditions.

4.2 Animal Domain

As concern for animals grows, DL technologies have been adopted in the animal domain for monitoring and improving animal breeding environments and the quality of meat products. Research on DL-based face recognition and behavior analysis of pigs and cows is very active. To develop an automatic recognition method for nursing interactions on animal farms using DL techniques, Yang et al. [57] showed that a fully convolutional network combining spatial and temporal information was able to detect nursing behaviors, which was great progress in identifying nursing behaviors on pig farms. Qiao et al. [58] presented a Mask R-CNN architecture to address cattle contour extraction and instance segmentation in a complex feedlot environment; the method was trained and tested on a challenging dataset. Kumar et al. [59] used DL techniques based on muzzle (nose) pattern characteristics to identify cattle, addressing the loss or exchange of animals and inaccurate insurance claims. Inspired by work on face recognition, Hansen et al. [60] proposed a CNN-based model to recognize pigs. In order to predict the commercial value of sheep, Jwade et al. [61] built an automatic system to recognize sheep breeds in a farm environment and reached 95.8% accuracy. Tian et al. [62] proposed a counting CNN to estimate the number of pigs and achieved a mean absolute error (MAE) of 1.67 per image.

4.3 Land Cover

Land cover change is an active area of research in global change. Land cover changes affect not only the natural basis of human survival and development, such as climate, soil, vegetation, water resources, and biodiversity, but also the structure and function of the Earth's biogeochemical cycles as well as the energy and material circulation of the Earth system. A fundamental task in land cover change research is cover classification. Kussul et al. [63] presented a multi-level DL technique that classified crop types and land cover from Landsat-8 and Sentinel-1A remote sensing satellite imagery with nineteen multitemporal scenes. Gaetano et al. [64] proposed a two-branch end-to-end model called MultiResoLCC, which extracts characteristics of land covers and classifies them by combining their attributes at the PAN resolution. Scott et al. [65] trained DCNN models with transfer learning and data augmentation to classify land cover in remote sensing imagery. Xing et al. [66] used improved VGG16, ResNet-50, and AlexNet architectures to validate land cover; the results showed that the proposed method was effective, with 83.80% accuracy. Mahdianpari et al. [67] presented a survey of DL tools for classification of wetland classes and evaluated seven deep networks using multispectral remote sensing imagery.

4.4 Other Domains

The development of smart agriculture inevitably requires automated machines. To operate safely without supervision, such machines must be able to detect and avoid obstacles. Christiansen et al. [68] detected unusual surrounding areas or unknown target types, including distant and occluded targets, using DeepAnomaly, which combines DL algorithms with background subtraction. Compared to Faster R-CNN and most CNN models, DeepAnomaly had better performance and accuracy and requires less computation and fewer parameters for image processing, making it suitable for real-time systems. In contrast to [68], the work by Steen et al. [69] detects obstacles with high accuracy in row crops and grass mowing, but it cannot recognize people and other distant objects. Khan et al. [70] used popular DL networks to estimate vegetation indices from RGB images, with a modified AlexNet deep CNN and Caffe as the base framework for implementation. Kaneda et al. [71] presented a novel prediction system for plant water stress to support tomato cultivation. Song et al. [72] combined DBN and MCA to predict soil moisture in the Zhangye oasis, Northwest China. Wang et al. [73] used CNN, ResNet, and the modified ResNeXt architecture to detect internally damaged blueberries. Saggi and Jain [74] employed an H2O model to estimate evapotranspiration in Northern India and achieved better performance than three other learning methods: generalized linear model (GLM), random forest (RF), and gradient boosting machine (GBM).

Table 2. Applications of deep learning in agriculture

Ref. | Research problem | Proposed model | Data source | Results
[42] | Classification of healthy radish and Fusarium wilt of radish | CNN-based | Author-collected UAV images from Hongchun-gun and Jungsun-gun, Kangwon-do, Korea | Compared with standard ML, the improved CNN achieved accuracy higher than 97.4% in identifying radish and 93.3% in detecting Fusarium wilt of radish
[43] | Recognition of cucumber diseases | DCNN | PlantVillage, Forest Image | 93.4% accuracy
[44] | Recognition of rice diseases | CNN-based | 500 natural images collected by the authors from rice fields | 95.48% accuracy, compared with conventional ML models
[45] | Identification of apple leaf diseases | AlexNet-based | 13,689 diseased apple leaf images collected from two apple experiment fields | 97.62% accuracy, a 10.83% improvement, compared with standard AlexNet, GoogLeNet, ResNet-20, VGGNet-16
[46] | Plant disease detection | AlexNet-based, GoogLeNet-based | PlantVillage | The model achieved an accuracy of 99.35%
[47] | Classification and prediction of nutritional deficiencies of the tomato plant | Improved Inception-ResNet V2, autoencoder | Author-collected dataset | Top-3 accuracy of 87.273% and 79.091%; ensemble averaging 91% validity
[48] | Detection of tomato diseases and pests | (Faster R-CNN, R-FCN, SSD)-based | Author-collected dataset | The proposed models were valid for detecting diseases and pests
[49] | Prediction of disease severity and yield loss | Modified CNN models of different depths | PlantVillage | The fine-tuned VGG16 model performed best, reaching 90.4% accuracy
[50] | Crop classification | Improved LSTM and Conv1D | Yolo County, California, 2014 | Compared with XGBoost, RF, SVM, the Conv1D-based model achieved the highest accuracy of 85.54% and an F1 score of 0.73
[51] | Classification of sugar beet and weed | Improved CNN | Author-collected datasets A and B | Greater than 90% accuracy
[52] | Plant identification | Fine-tuned VGGNet, AlexNet, and GoogLeNet | LifeCLEF2015 | The best system obtained 80% accuracy, a 15% improvement over the LifeCLEF2015 campaign results
[53] | Plant leaf recognition | Inception V2 + BN | Author-collected dataset | Higher identification accuracy than Faster R-CNN in a real-world complex-background application
[54] | Apple flower detection | CNN-based | Author-collected dataset | Compared with HSV+SVM, HSV, HSV+Bh, the proposed approach obtained recall and precision rates higher than 90%
[55] | Identification of apple growth stages, yield prediction | YOLOv3-dense | Author-collected dataset | Outperformed the YOLOv3 model and Faster R-CNN with VGG16; average detection time of 0.304 s per frame at 3000×3000
[56] | Fruit yield estimation | Modified Inception-ResNet | Author-collected dataset | 91% average test accuracy on real images, 93% on synthetic images
[57] | Recognition of sow nursing behavior | Fully convolutional network combining geometric properties | Author-collected dataset | Accuracy, sensitivity, and specificity of 96.4%, 92.0%, and 98.5%, respectively
[58] | Cattle segmentation and contour extraction | Improved Mask R-CNN | Real environment | 0.92 MPA and ADE of 33.56 pixels
[59] | Cattle recognition | Deep CNN-based framework | Real environment | Rank-1 accuracy of 98.99%
[60] | Pig face recognition | CNN-based | Farm environment | 96.7% accuracy
[61] | Sheep recognition | CNN | Author-collected dataset | 95.8% accuracy with 1.7 deviation
[62] | Pig counting | Counting CNN | Real environment | MAE of 1.67 per image
[63] | Land cover and crop type classification | AlexNet-based | Landsat-8 and Sentinel-1A RS satellites | Overall accuracy of 97.62%, compared with standard AlexNet, GoogLeNet, ResNet-20, VGGNet-16
[64] | Land cover classification | MultiResoLCC | PAN and MS imagery | MultiResoLCC obtained the best accuracy under different conditions
[65] | Land cover classification | DCNNs | UC Merced data | Accuracy rates of 97.8%±2.3%, 97.6%±2.6%, and 98.5%±1.4%
[66] | Land cover validation | Improved VGG16, ResNet-50, AlexNet | A heterogeneous area in western California | 83.8% accuracy, close to 80.45% validation
[67] | Land cover classification | DenseNet121, InceptionV3, VGG16, VGG19, Xception, ResNet50, InceptionResNetV2 | Experiment field in Canada | The top three, InceptionResNetV2, ResNet50, and Xception, reached 96.17%, 94.81%, and 93.57% accuracy, respectively
[68] | Obstacle detection | DeepAnomaly | Author-collected dataset | Detected humans at longer ranges (45-90 m) than RCNN with fewer parameters; better accuracy than YOLO, RCNN, SS, LCF
[69] | Obstacle detection | Fine-tuned AlexNet | Author-collected dataset | 99.9% accuracy in row crops and 90.8% in grass mowing
[70] | Estimation of vegetation index | Fine-tuned DNN | Real environment | DNN-RGB 0.019, DNN-GRAY 0.045
[71] | Prediction of plant water stress | Multi-modal SW-SVR | Real environment | More precise and stable water stress prediction
[72] | Prediction of soil moisture | MCA+DBN | Real environment | RMSE decreased by 18%
[73] | Detection of damaged blueberries | CNN, ResNet, ResNeXt | Author-collected dataset | More precise, higher F1 score
[74] | Prediction of evapotranspiration | H2O model | Real environment | RMSE = 0.1921-0.2691, ACC = 85-95, NSE = 0.95-0.98

5. Techniques of Deep Learning in Agriculture

The surveyed papers used numerous DL techniques to address their concerned issues. CNN was adopted most often as the backbone [42-44,54,60,62-64,68,70], especially AlexNet and VGG. A number of works applied improved versions of these models [45,49,69,70], and a few combined CNN with other approaches [54,62,68,71,72] to achieve better experimental results. Table 3 classifies the papers into several groups according to DL techniques.

Table 3. Papers grouped by DL backbone

Backbone | Ref.
CNN | [42] [43] [44] [51] [54] [59] [60] [61] [62] [63] [64] [65] [68] [73]
AlexNet | [45] [46] [52] [69]
Fully convolutional network | [57]
DNN | [70] [71]
Mask R-CNN | [58]
YOLOv3 | [55]
ResNet | [47] [49] [56] [67] [73]
GoogLeNet | [46] [52]
VGG | [49] [52] [66] [67]
Inception | [47] [49] [53] [56] [67]
LSTM | [50]
Faster R-CNN, R-FCN, SSD | [48]
DenseNet | [67]

Caffe is a clear and efficient framework and is widely used in DL due to its expressive power, high speed, and openness. Caffe was the framework employed most often among the surveyed works, including [42,52,54,65,69,70], followed by TensorFlow [56,64,65]; MATLAB [68] is also commonly used. Table 4 classifies the papers according to framework.

Table 4. Papers grouped by framework

Framework | Function | Ref.
Caffe | Applications in video and image processing | [42] [49] [52] [55] [56] [64] [65] [70]
MATLAB | Mainly used for mathematical modeling | [68]
TensorFlow | Applied to the realization of various ML algorithms | [56] [64] [65]

Another important step is data preprocessing, which includes data cleaning, data conversion, and dimensionality reduction. Data cleaning is mainly used to ensure the integrity of specific characteristics of the data. Data transformation is performed to meet the requirements of the DL model and converts data from one format or structure to another. Dimensionality reduction removes irrelevant and redundant variables, reduces the complexity of analysis and of the generated model, and improves modeling efficiency. The most common preprocessing method is image resizing, along with image segmentation, scaling, and normalization. In the surveyed papers, each image was resized to a particular size, such as 256×256 [49], 32×32 [73], or 200×200 [42].
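As a minimal sketch of this kind of preprocessing, the following uses torchvision to resize and normalize a single image. The 256×256 target size matches [49], while the file name and the mean/std values (the usual ImageNet statistics) are assumptions for illustration, not values taken from the surveyed papers.

```python
from torchvision import transforms
from PIL import Image

# Resize and normalize an image, as commonly done before feeding it to a CNN
preprocess = transforms.Compose([
    transforms.Resize((256, 256)),                       # image resizing, as in [49]
    transforms.ToTensor(),                               # scale pixel values to [0, 1]
    transforms.Normalize(mean=[0.485, 0.456, 0.406],     # assumed ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("leaf.jpg").convert("RGB")            # hypothetical input image
tensor = preprocess(image)                               # shape: (3, 256, 256)
```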

A DL model with a relatively complex architecture is generally composed of multiple layers of nonlinear learners, and the data to be analyzed come from the natural environment. To give the DL model better generalization performance, it is necessary to increase the training sample size as much as possible. The most widely used data augmentation techniques include random image rotation, cropping, translation, and horizontal and vertical flipping. Data augmentation was used to improve model performance in [43,54,55,58,60,65,73]. Another technique to avoid overfitting is dropout, which sets the activation values of randomly selected neurons to zero during training; this is a very efficient way of performing model averaging with neural networks [50,51].
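Both ideas can be sketched with torchvision and PyTorch as follows; the rotation angle, crop size, dropout probability, and layer sizes are illustrative choices rather than settings reported in the surveyed papers.

```python
import torch.nn as nn
from torchvision import transforms

# Data augmentation: random rotation, cropping, and flips applied at training time
augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.ToTensor(),
])

# Dropout: randomly zeroes activations during training to reduce overfitting
classifier_head = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # half of the activations are dropped at each training step
    nn.Linear(256, 10),
)
```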

To evaluate the effectiveness of DL, accuracy [46,50,51,55,64,73], recall [54], root mean square error (RMSE) [70,72,74], F1 score [46,55,68,73], and other evaluation indexes were adopted, as shown in Table 2. From these results, we can see that DL-based methods are superior to other implementations. In plant disease and pest detection and in plant identification and classification, DL has shown outstanding identification accuracy, fast identification speed, strong robustness, and improved generalization; in particular, DL showed more than 95% identification accuracy.
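These metrics can be computed, for instance, with scikit-learn; the labels, predictions, and regression targets below are toy placeholders rather than data from any surveyed study.

```python
import numpy as np
from sklearn.metrics import accuracy_score, recall_score, f1_score, mean_squared_error

# Toy classification labels and predictions
y_true = np.array([0, 1, 1, 0, 1, 2, 2, 0])
y_pred = np.array([0, 1, 0, 0, 1, 2, 1, 0])

print("accuracy:", accuracy_score(y_true, y_pred))
print("recall:  ", recall_score(y_true, y_pred, average="macro"))
print("F1 score:", f1_score(y_true, y_pred, average="macro"))

# Toy regression targets, e.g., soil moisture or evapotranspiration estimates
t_true = np.array([0.42, 0.55, 0.31])
t_pred = np.array([0.40, 0.58, 0.35])
print("RMSE:    ", np.sqrt(mean_squared_error(t_true, t_pred)))
```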

Transfer learning transfers a trained model and its parameters to another model for reuse; its purpose is to address the difficulty of data acquisition. Many researchers [49,52,54,65] incorporated transfer learning techniques.
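For example, a CNN pretrained on ImageNet can be fine-tuned for a new task as in the following sketch (requires a recent torchvision). The choice of ResNet-18, the frozen feature extractor, and the 4-class head are assumptions for illustration and do not reproduce any particular surveyed model.

```python
import torch.nn as nn
from torchvision import models

# Load a CNN pretrained on ImageNet and reuse its parameters
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor so only the new head is trained
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer for the new task, e.g., 4 disease classes
model.fc = nn.Linear(model.fc.in_features, 4)
```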

The learning rate is an essential hyperparameter in supervised learning and DL: it determines when the objective function converges to a local minimum. Different learning rates were used in the surveyed works, such as 0.01 [42] and 0.001 [43]. Choosing an appropriate learning rate is vital for the objective function to converge to a local minimum within a reasonable time.
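A small sketch of where the learning rate is set, assuming the fine-tuned model from the example above; the momentum value and the step-decay schedule are illustrative choices, not settings reported in the surveyed papers.

```python
import torch

# The learning rate (lr) is set when the optimizer is created, e.g., 0.01 as in [42]
optimizer = torch.optim.SGD(model.fc.parameters(), lr=0.01, momentum=0.9)

# Optionally decay the learning rate during training, here by 10x every 10 epochs
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)

for epoch in range(30):
    # ... one pass over the training data would go here ...
    scheduler.step()   # update the learning rate after each epoch
```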

6. Discussion

6.1 Advantages/Disadvantages of Deep Learning

Manual feature engineering is still not an easy task; traditional feature extraction methods require significant human effort. DL, however, not only improves performance in classification and detection, as shown in Table 2, but also reduces the effort spent on feature research. Besides, to deal with real-world issues, DL may stimulate the creation of more databases for training networks [42,47,48,50-62,69-74]. DL also shows good generalization performance [46,50,51,56,61,62,65,67,68,70,72]. However, DL cannot estimate the underlying distribution of the data without bias; therefore, to achieve higher accuracy, much data support is needed. Although the data augmentation techniques mentioned in Section 5 can increase the size of a dataset, a significant number of real images is still needed. Another notable disadvantage is data annotation, which requires expertise to annotate accurately; in some areas, experts or labeling volunteers are limited. Moreover, training is time-consuming in DL, especially when the input image size is large.

6.2 Future of Deep Learning in Agriculture

In this paper, the applications of DL in agriculture are categorized into crop type classification, disease detection, weed detection, fruit counting, yield prediction, land cover classification, water stress estimation, and others. From the above analysis, we observe that CNN has better performance in terms of precision. When DL-based methods are compared with other techniques in the literature, the premise is that the experimental environments are the same; in reality this is very difficult, because each paper employed different datasets, techniques, models, and metrics. Nevertheless, DL is observed to outperform traditional methods such as ANN, SVM, and RF, and automatic feature extraction using DL models is more efficient than conventional feature extraction. To improve classification and prediction performance, more techniques will be adopted to solve practical agricultural problems in the future. Long short-term memory (LSTM) [75] and RNN models [76] can mine the time dimension and memory; thus, they can be used to estimate plant and animal growth from previously recorded data and to assess fruit yield or water needs. These two models can also be applied to the environment, for instance, to predict climate change and related phenomena. Using infrared thermal imaging and hyperspectral imaging technologies [77] to provide data for early detection of crop diseases is a promising direction.

7. Conclusion

In this paper, we have surveyed deep learning-based research efforts in the agriculture domain over the last 5 years. We analyzed 32 works on the applications of deep learning and the technical details of their implementation. Each work was compared with existing techniques for performance, and deep learning was found to outperform other technologies. Moreover, with advances in computer hardware, we expect deep learning to receive more attention and broader applications in future research. This paper aims to encourage more researchers to apply deep learning to agricultural issues such as recognition, classification, prediction, image analysis, and data analysis, or to more general computer vision tasks.

Acknowledgement

This work was supported by the Ministry of Education of the Republic of Korea (No. 2019R1I1A3A01060826).

Biography

Chengjuan Ren
https://orcid.org/0000-0002-9958-0476

She received a B.S. from the Computer Science and Technology Department of Chongqing Three Gorges University, China, in 2007 and an M.S. from the School of Computer Science and Engineering, Chongqing, China, in 2010. Since March 2019, she has been studying for a doctoral degree in the Department of Software Convergence Engineering, Kunsan National University, Gunsan, Korea.

Biography

Dae-Kyoo Kim
https://orcid.org/0000-0002-7133-9111

He is a professor in the Department of Computer Science and Engineering at Oakland University. He received a Ph.D. in computer science from Colorado State University in 2004. During his Ph.D., he worked as a technical specialist at the NASA Ames Research Center.

Biography

Dongwon Jeong
https://orcid.org/0000-0001-9881-5336

He received his Ph.D. in computer science from Korea University, Korea, in 2004. He was a research assistant professor at Korea University, Korea, from 2004 to 2005, and a visiting scholar at Oakland University, USA, from 2013 to 2014 and from 2019 to 2020. He is a professor in the Department of Software Convergence Engineering at Kunsan National University, Gunsan, Korea.

References

  • 1 A. Singh, B. Ganapathysubramanian, A. K. Singh, S. Sarkar, "Machine learning for high-throughput stress phenotyping in plants," Trends in Plant Science, vol. 21, no. 2, pp. 110-124, 2016.custom:[[[-]]]
  • 2 S. Park, J. Im, E. Jang, J. Rhee, "Drought assessment and monitoring through blending of multi-sensor indices using machine learning approaches for different climate regions," Agricultural and Forest Meteorology, vol. 216, pp. 157-169, 2016.custom:[[[-]]]
  • 3 D. C. Duro, S. E. Franklin, M. G. Dube, "A comparison of pixel-based and object-based image analysis with selected machine learning algorithms for the classification of agricultural landscapes using SPOT -5 HRG imagery," Remote Sensing of Environment, vol. 118, pp. 259-272, 2012.custom:[[[-]]]
  • 4 R. Lang, R. Lu, C. Zhao, H. Qin, G. Liu, "Graph-based semi-supervised one class support vector machine for detecting abnormal lung sounds," Applied Mathematics and Computation, vol. 364, no. 124487, 2020.custom:[[[-]]]
  • 5 Y. Liu, S. Zhou, W. Han, W. Liu, Z. Qiu, C. Li, "Convolutional neural network for hyperspectral data analysis and effective wavelengths selection," Analytica Chimica Acta, vol. 1086, pp. 46-54, 2019.custom:[[[-]]]
  • 6 J. Greeff, T. Belpaeme, "Why robots should be social: enhancing machine learning through social human-robot interaction," PLoS ONE, vol. 10, no. 9, 2015.custom:[[[-]]]
  • 7 E. Senft, P. Baxter, J. Kennedy, S. Lemaignan, T. Belpaeme, "Supervised autonomy for online learning in human-robot interaction," Pattern Recognition Letters, vol. 99, pp. 77-86, 2017.doi:[[[10.1016/j.patrec.2017.03.015]]]
  • 8 G. Canal, S. Escalera, C. Angulo, "A real-time Human-Robot Interaction system based on gestures for assistive scenarios," Computer Vision and Image Understanding, vol. 149, pp. 65-77, 2016.doi:[[[10.1016/j.cviu.2016.03.004]]]
  • 9 M. S. Hinton, The State on the Streets: Police and Politics in Argentina and Brazil, CO: Lynne Rienner Publishers, Boulder, 2006.custom:[[[-]]]
  • 10 I. Sutskever, G. E. Hinton, G. W. Taylor, "The recurrent temporal restricted Boltzmann machine," Advances in Neural Information Processing System, vol. 21, pp. 1601-1608, 2009.custom:[[[-]]]
  • 11 X. Lu, Y. Tsao, S. Matsuda, C. Hori, "Speech enhancement based on deep denoising autoencoder," in Proceedings of the 14th Annual Conference of the International Speech Communication Association, Lyon, France, 2013;pp. 436-440. custom:[[[-]]]
  • 12 Y. Kim, 2014 (Online). Available: https://arxiv.org/abs/1408.5882
  • 13 A. Graves, A. Mohamed, G. Hinton, "Speech recognition with deep recurrent neural networks," in Proceedings of 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, Canada, 2013;pp. 6645-6649. custom:[[[-]]]
  • 14 Y. Chen, Z. Lin, X. Zhao, G. Wang, Y. Gu, "Deep learning-based classification of hyperspectral data," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 6, pp. 2094-2107, 2014.doi:[[[10.1109/jstars.2014.2329330]]]
  • 15 C. Kalyoncu, O. Toygar, "Geometric leaf classification," Computer Vision and Image Understanding, vol. 133, pp. 102-109, 2015.doi:[[[10.1016/j.cviu.2014.11.001]]]
  • 16 A. Hannun, C. Case, J. Casper, B. Catanzaro, G. Diamos, E. Elsen, et al., 2014 (Online). Available: https://arxiv.org/abs/1412.5567
  • 17 T. Chan, K. Jia, S. Gao, J. Lu, Z. Zeng, Y. Ma, "PCANet: a simple deep learning baseline for image classification?," IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 5017-5032, 2015.custom:[[[-]]]
  • 18 J. B. Robinson, D. M. Silburn, D. Rattray, D. M. Freebairn, A. Biggs, D. McClymont, N. Christodoulou, "Modelling shows that the high rates of deep drainage in parts of the Goondoola Basin in semi-arid Queensland can be reduced with changes to the farming systems," Australian Journal of Soil Research, vol. 48, no. 1, pp. 58-68, 2010.custom:[[[-]]]
  • 19 A. Kamilaris, F. X. Prenafeta-Boldu, "Deep learning in agriculture: a survey," Computers and Electronics in Agriculture, vol. 147, pp. 70-90, 2018.doi:[[[10.1016/j.compag.2018.02.016]]]
  • 20 A. Chlingaryan, S. Sukkarieh, B. Whelan, "Machine learning approaches for crop yield prediction and nitrogen status estimation in precision agriculture: a review," Computers and Electronics in Agriculture, vol. 151, pp. 61-69, 2018.doi:[[[10.1016/j.compag.2018.05.012]]]
  • 21 A. Kamilaris, F. X. Prenafeta-Boldu, "A review of the use of convolutional neural networks in agriculture," The Journal of Agricultural Science, vol. 156, no. 3, pp. 312-322, 2018.custom:[[[-]]]
  • 22 D. I. Patricio, R. Rieder, "Computer vision and artificial intelligence in precision agriculture for grain crops: a systematic review," Computers and Electronics in Agriculture, vol. 153, pp. 69-81, 2018.doi:[[[10.1016/j.compag.2018.08.001]]]
  • 23 A. Gongal, S. Amatya, M. Karkee, Q. Zhang, K. Lewis, "Sensors and systems for fruit detection and localization: a review," Computers and Electronics in Agriculture, vol. 116, pp. 8-19, 2015.doi:[[[10.1016/j.compag.2015.05.021]]]
  • 24 K. G. Liakos, P. Busato, D. Moshou, S. Pearson, D. Bochtis, "Machine learning in agriculture: a review," Sensors, vol. 18, no. 8, pp. 1-29, 2018.doi:[[[10.3390/s18082674]]]
  • 25 S. Mishra, D. Mishra, G. H. Santra, "Applications of machine learning techniques in agricultural crop production: a review paper," Indian Journal of Science and Technology, vol. 9, no. 38, pp. 1-14, 2016.custom:[[[-]]]
  • 26 N. Zhu, X. Liu, Z. Liu, K. Hu, Y. Wang, J. Tan, et al., "Deep learning for smart agriculture: concepts, tools, applications, and opportunities," International Journal of Agricultural and Biological Engineering, vol. 11, no. 4, pp. 32-44, 2018.custom:[[[-]]]
  • 27 M. Weiss, F. Jacob, G. Duveiller, "Remote sensing for agricultural applications: a meta-review," Remote Sensing of Environment, vol. 236, article no. 111402, 2020.custom:[[[-]]]
  • 28 T. Mikolov, A. Deoras, D. Povey, L. Burget, J. Cernocky, "Strategies for training large scale neural network language models," in Proceedings of 2011 IEEE Workshop on Automatic Speech Recognition & Understanding, Waikoloa, HI, 2011;pp. 196-201. custom:[[[-]]]
  • 29 G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. Mohamed, N. Jaitly, et al., "Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups," IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 82-97, 2012.doi:[[[10.1109/MSP.2012.2205597]]]
  • 30 T. N. Sainath, A. Mohamed, B. Kingsbury, B. Ramabhadran, "Deep convolutional neural networks for LVCSR," in Proceedings of 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 2013;pp. 8614-8618. custom:[[[-]]]
  • 31 P. Y. Huang, F. Liu, S. R. Shiang, J. Oh, C. Dyer, "Attention-based multimodal neural machine translation," in Proceedings of the 1st Conference on Machine Translation, Volume 2: Shared Task Papers, Berlin, Germany, 2016;pp. 639-645. custom:[[[-]]]
  • 32 J. Bastings, I. Titov, W. Aziz, D. Marcheggiani, K. Sima`an, "Graph convolutional encoders for syntax-aware neural machine translation," in Proceedings of the 2017 Conference on Empirical Methods Natural Language Processing, Copenhagen, Denmark, 2017;pp. 1957-1967. custom:[[[-]]]
  • 33 T. Shen, T. Zhou, G. Long, J. Jiang, S. Pan, C. Zhang, "DiSAN: directional self-attention network for RNN/CNN-free language understanding," in Proceedings of the 32nd AAAI Conference on Artificial Intelligence, Palo Alto, CA, 2018;pp. 5446-5455. custom:[[[-]]]
  • 34 A. Krizhevsky, I. Sutskever, G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Advances in Neural Information Processing Systems, vol. 25, pp. 1097-1105, 2012.doi:[[[10.1145/3065386]]]
  • 35 C. Farabet, C. Couprie, L. Najman, Y. LeCun, "Learning hierarchical features for scene labeling," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 1915-1929, 2013.doi:[[[10.1109/TPAMI.2012.231]]]
  • 36 J. Tompson, A. Jain, Y. LeCun, C. Bregler, "Joint training of a convolutional network and a graphical model for human pose estimation," Advances in Neural Information Processing Systems, vol. 27, pp. 1799-1807, 2014.custom:[[[-]]]
  • 37 C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, A. Rabinovich, "Going deeper with convolutions," in Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, 2015;pp. 1-9. custom:[[[-]]]
  • 38 A. N. Lam, A. T. Nguyen, H. A. Nguyen, T. N. Nguyen, "Combining Deep Learning with Information Retrieval to Localize Buggy Files for Bug Reports (N)," in Proceedings of 2015 30th IEEE/ACM International Conference on Automated Software Engineering (ASE), Lincoln, NE, 2015;pp. 476-481. custom:[[[-]]]
  • 39 P. S. Huang, X. He, J. Gao, L. Deng, A. Acero, L. Heck, "Learning deep structured semantic models for web search using clickthrough data," in Proceedings of the 22nd ACM international conference on Information Knowledge Management, San Francisco, CA, 2013;pp. 2333-2338. custom:[[[-]]]
  • 40 P. Hamel, D. Eck, "Learning features from music audio with deep belief networks," in Proceedings of the11th International Society for Music Information Retrieval Conference (ISMIR), Utrecht, The Netherlands, 2010;pp. 339-344. custom:[[[-]]]
  • 41 N. Srivastava, R. R. Salakhutdinov, "Multimodal learning with deep Boltzmann machines," Advances in Neural Information Processing Systems, vol. 25, pp. 2222-2230, 2012.custom:[[[-]]]
  • 42 J. G. Ha, H. Moon, J. T. Kwak, S. I. Hassan, M. Dang, O. N. Lee, H. Y. Park, "Deep convolutional neural network for classifying Fusarium wilt of radish from unmanned aerial vehicles," Journal of Applied Remote Sensing, vol. 11, no. 4, 2017.custom:[[[-]]]
  • 43 J. Ma, K. Du, F. Zheng, L. Zhang, Z. Gong, Z. Sun, "A recognition method for cucumber diseases using leaf symptom images based on deep convolutional neural network," Computers and Electronics in Agriculture, vol. 154, pp. 18-24, 2018.doi:[[[10.1016/j.compag.2018.08.048]]]
  • 44 Y. Lu, S. Yi, N. Zeng, Y. Liu, Y. Zhang, "Identification of rice diseases using deep convolutional neural networks," Neurocomputing, vol. 267, pp. 378-384, 2017.doi:[[[10.1016/j.neucom.2017.06.023]]]
  • 45 B. Liu, Y. Zhang, D. He, Y. Li, "Identification of apple leaf diseases based on deep convolutional neural networks," Symmetry, vol. 10, no. 1, pp. 1-16, 2017.doi:[[[10.3390/sym10010011]]]
  • 46 S. P. Mohanty, D. P. Hughes, M. Salathe, "Using deep learning for image-based plant disease detection," Frontiers in Plant Science, vol. 7, no. 1419, 2016.custom:[[[-]]]
  • 47 T. T. Tran, J. W. Choi, T. T. H. Le, J. W. Kim, "A comparative study of deep CNN in forecasting and classifying the macronutrient deficiencies on development of tomato plant," Applied Sciences, vol. 9, no. 8, 2019.custom:[[[-]]]
  • 48 A. Fuentes, S. Yoon, S. C. Kim, D. S. Park, "A robust Deep-Learning-based detector for real-time tomato plant diseases and pests recognition," Sensors, vol. 17, no. 9, 2017.doi:[[[10.3390/s17092022]]]
  • 49 G. Wang, Y. Sun, J. Wang, "Automatic image-based plant disease severity estimation using deep learning," Computational Intelligence and Neuroscience, vol. 2017, no. 2917536, 2017.doi:[[[10.1155/2017/2917536]]]
  • 50 L. Zhong, L. Hu, H. Zhou, "Deep learning based multi-temporal crop classification," Remote Sensing of Environment, vol. 221, pp. 430-443, 2019.custom:[[[-]]]
  • 51 A. Milioto, P. Lottes, C. Stachniss, "Real-time blob-wise sugar beets vs weeds classification for monitoring fields using convolutional neural networks," in Proceedings of ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Bonn, Germany, 2017;pp. 41-48. custom:[[[-]]]
  • 52 M. M. Ghazi, B. Yanikoglu, E. Aptoula, "Plant identification using deep neural networks via optimization of transfer learning parameters," Neurocomputing, vol. 235, pp. 228-235, 2017.doi:[[[10.1016/j.neucom.2017.01.018]]]
  • 53 X. Zhu, M. Zhu, H. Ren, "Method of plant leaf recognition based on improved deep convolutional neural network," Cognitive Systems Research, vol. 52, pp. 223-233, 2018.custom:[[[-]]]
  • 54 P. A. Dias, A. Tabb, H. Medeiros, "Apple flower detection using deep convolutional networks," Computers in Industry, vol. 99, pp. 17-28, 2018.doi:[[[10.1016/j.compind.2018.03.010]]]
  • 55 Y. Tian, G. Yang, Z. Wang, H. Wang, E. Li, Z. Liang, "Apple detection during different growth stages in orchards using the improved YOLO-V3 model," Computers and Electronics in Agriculture, vol. 157, pp. 417-426, 2019.custom:[[[-]]]
  • 56 M. Rahnemoonfar, C. Sheppard, "Deep count: fruit counting based on deep simulated learning," Sensors, vol. 17, no. 4, 2017.doi:[[[10.3390/s17040905]]]
  • 57 A. Yang, H. Huang, X. Zhu, X. Yang, P. Chen, S. Li, Y. Xue, "Automatic recognition of sow nursing behaviour using deep learning-based segmentation and spatial and temporal features," Biosystems Engineering, vol. 175, pp. 133-145, 2018.custom:[[[-]]]
  • 58 Y. Qiao, M. Truman, S. Sukkarieh, "Cattle segmentation and contour extraction based on Mask R-CNN for precision livestock farming," Computers and Electronics in Agriculture, vol. 165, no. 104958, 2019.custom:[[[-]]]
  • 59 S. Kumar, A. Pandey, K. S. R. Satwik, S. Kumar, S. K. Singh, A. K. Singh, A. Mohan, "Deep learning framework for recognition of cattle using muzzle point image pattern," Measurement, vol. 116, pp. 1-17, 2018.custom:[[[-]]]
  • 60 M. F. Hansen, M. L. Smith, L. N. Smith, M. G. Salter, E. M. Baxter, M. Farish, B. Grieve, "Towards on-farm pig face recognition using convolutional neural networks," Computers in Industry, vol. 98, pp. 145-152, 2018.doi:[[[10.1016/j.compind.2018.02.016]]]
  • 61 S. A. Jwade, A. Guzzomi, A. Mian, "On farm automatic sheep breed classification using deep learning," Computers and Electronics in Agriculture, vol. 167, article no. 105055, 2019.custom:[[[-]]]
  • 62 M. Tian, H. Guo, H. Chen, Q. Wang, C. Long, Y. Ma, "Automated pig counting using deep learning," Computers and Electronics in Agriculture, vol. 163, no. 104840, 2019.custom:[[[-]]]
  • 63 N. Kussul, M. Lavreniuk, S. Skakun, A. Shelestov, "Deep learning classification of land cover and crop types using remote sensing data," IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 5, pp. 778-782, 2017.doi:[[[10.1109/LGRS.2017.2681128]]]
  • 64 R. Gaetano, D. Ienco, K. Ose, R. Cresson, "A two-branch CNN architecture for land cover classification of PAN and MS imagery," Remote Sensing, vol. 10, no. 11, 2018.custom:[[[-]]]
  • 65 G. J. Scott, M. R. England, W. A. Starms, R. A. Marcum, C. H. Davis, "Training deep convolutional neural networks for land–cover classification of high-resolution imagery," IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 4, pp. 549-553, 2017.custom:[[[-]]]
  • 66 H. Xing, Y. Meng, Z. Wang, K. Fan, D. Hou, "Exploring geo-tagged photos for land cover validation with deep learning," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 141, pp. 237-251, 2018.custom:[[[-]]]
  • 67 M. Mahdianpari, B. Salehi, M. Rezaee, F. Mohammadimanesh, Y. Zhang, "Very deep convolutional neural networks for complex land cover mapping using multispectral remote sensing imagery," Remote Sensing, vol. 10, no. 7, article no. 1119, 2018.doi:[[[10.3390/rs10071119]]]
  • 68 P. Christiansen, L. N. Nielsen, K. A. Steen, R. N. Jorgensen, H. Karstoft, "DeepAnomaly: combining background subtraction and deep learning for detecting obstacles and anomalies in an agricultural field," Sensors, vol. 16, no. 11, 2016.doi:[[[10.3390/s16111904]]]
  • 69 K. A. Steen, P. Christiansen, H. Karstoft, R. N. Jorgensen, "Using deep learning to challenge safety standard for highly autonomous machines in agriculture," Journal of Imaging, vol. 2, no. 1, 2016.doi:[[[10.3390/jimaging2010006]]]
  • 70 Z. Khan, V. Rahimi-Eichi, S. Haefele, T. Garnett, S. J. Miklavcic, "Estimation of vegetation indices for high-throughput phenotyping of wheat using aerial imaging," Plant Methods, vol. 14, no. 20, 2018.custom:[[[-]]]
  • 71 Y. Kaneda, S. Shibata, H. Mineno, "Multi-modal sliding window-based support vector regression for predicting plant water stress," Knowledge-Based Systems, vol. 134, pp. 135-148, 2017.doi:[[[10.1016/j.knosys.2017.07.028]]]
  • 72 X. Song, G. Zhang, F. Liu, D. Li, Y. Zhao, J. Yang, "Modeling spatio-temporal distribution of soil moisture by deep learning-based cellular automata model," Journal of Arid Land, vol. 8, no. 5, pp. 734-748, 2016.custom:[[[-]]]
  • 73 Z. Wang, M. Hu, G. Zhai, "Application of deep learning architectures for accurate and rapid detection of internal mechanical damage of blueberry using hyperspectral transmittance data," Sensors, vol. 18, no. 4, article no. 1126, 2018.doi:[[[10.3390/s18041126]]]
  • 74 M. K. Saggi, S. Jain, "Reference evapotranspiration estimation and modeling of the Punjab Northern India using deep learning," Computers and Electronics in Agriculture, vol. 156, pp. 387-398, 2019.custom:[[[-]]]
  • 75 A. Graves, J. Schmidhuber, "Framewise phoneme classification with bidirectional LSTM and other neural network architectures," Neural Networks, vol. 18, no. 5-6, pp. 602-610, 2005.doi:[[[10.1016/j.neunet.2005.06.042]]]
  • 76 A. Jain, A. R. Zamir, S. Savarese, A. Saxena, "Structural-RNN: deep learning on spatio-temporal graphs," in Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, 2016;pp. 5308-5317. custom:[[[-]]]
  • 77 Y. R. Chen, K. Chao, M. S. Kim., "Machine vision technology for agricultural applications," Computers and Electronics in Agriculture, vol. 36, no. 2-3, pp. 173-191, 2002.custom:[[[-]]]