Provisional chapter
Agricultural Robot for Intelligent Detection of
Pyralidae Insects
Zhuhua Hu, Boyi Liu and Yaochi Zhao
Additional information is available at the end of the chapter
Abstract
The Pyralidae insects are among the main pests of economic crops. However, manual
detection and identification of Pyralidae insects are labor-intensive and inefficient, and
subjective factors can influence recognition accuracy. To address these shortcomings, an
insect-monitoring robot and a new method to recognize Pyralidae insects are presented
in this chapter. Firstly, the robot acquires images by performing a fixed action and detects
whether there are Pyralidae insects in the images. The recognition method obtains a total
probability image by using reverse mapping of histograms over multiple template images,
and the image contours can then be extracted quickly and accurately by using constrained
Otsu. Finally, according to the Hu moment, perimeter, and area features, the contours can
be filtered, and recognition results with a triangle mark can be obtained. According to the
recognition results, the speeds of the robot car and mechanical arm can be adjusted
adaptively. Theoretical analysis and experimental results show that the proposed scheme
offers good real-time performance and high recognition accuracy in natural planting scenes.
Keywords: pest detection and recognition, Pyralidae insects, reverse mapping,
multi-template matching, agricultural robot
1. Introduction
The timely detection and identification of corn pests and diseases is one of the major tasks of
agriculturists facing social and environmental challenges, such as maintaining the stability of
grain output and reducing the environmental pollution caused by pesticide use. Pyralidae
insects are among the most common pests of maize [1], and they do great harm to the quality
and yield of maize. Traditional manual monitoring not only requires a large amount of
labor but also means that detection is often not timely because of human omission. With the
rapid development of computer technology, the monitoring of diseases and insect pests based
on computer vision has become feasible, which can greatly improve the real-time detection
and recognition of pests [2].

© 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative
Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use,
distribution, and reproduction in any medium, provided the original work is properly cited.
Currently, several methods exist to detect plant diseases or insects with image processing
and computer vision technologies [3]. For example, Ali et al. used color histograms and
textural descriptors to detect citrus diseases [4]; they used color differences to separate the
areas affected by disease. Lu et al. used spectroscopy to detect anthracnose crown rot in
strawberry [5]. Xie et al. employed hyperspectral images to detect whether there was gray
mold disease in tomato leaves [6]. In addition, researchers constructed an automated
detection and monitoring system for small pests in the greenhouse, such as whitefly, which
can effectively monitor tiny insects and their densities [7–10]. Meanwhile, computer vision
technology was also used for aphid detection and monitoring of aphid populations [11]. For
parasites on strawberry plants, a support vector machine (SVM) method combined with image
processing succeeded in detecting thrips with an error of less than 2.5% in the greenhouse
environment [12]. K-means clustering incorporated with image processing was used to
segment pests or other objects from images [13]. Dai and Man used a convolutional Riemannian
texture with differential entropic active contours to distinguish background regions and
expose pest regions [14]. Zhao et al. obtained accurate contours of crop diseases and insect
pests for subsequent recognition, using texture difference and an active contour guided by
the texture difference [15]. In further research, they also proposed an image segmentation
method for fruits with diseases based on constrained Otsu and a level-set active contour [16].
However, they did not investigate identification.
As for the recognition of insects and diseases, recent research advances can be classified
into two categories. The first category focuses on image processing and computer vision
technologies that require no data training. A pest recognition method based on sparse
representation and multi-feature fusion was proposed, mainly to identify beetles [17]. Four
methods for the diagnosis and classification of corn leaf diseases were presented using
image processing and machine vision techniques [18]. Martin et al. proposed an extended
region-grow algorithm, which can identify and count pests to predict the amount of
pesticide to be used [19]. Przybyłowicz et al. developed a technique based on wing
measurements, which can be an effective tool for monitoring the European corn borer [20].
The second category concentrates on the training of data models, mainly using machine
learning and neural network technology. A method based on a difference-of-Gaussian filter
and the local configuration pattern algorithm was used to extract invariant features of pest
images, which were then fed to a linear SVM (support vector machine) for pest recognition,
with a recognition rate of 89% [21]. Kohonen's Self-Organizing Maps neural network was
used to identify extracted insect pests caught by a sticky trap [22]. In addition, Boniecki
et al. proposed a classification neural model using optimized learning sets acquired from
encoded information, which can accurately identify the six most common apple pests [23].
Based on the combination of an image processing algorithm and artificial neural networks,
Espinoza et al. proposed an algorithm to detect and monitor adult-stage whitefly (Bemisia
tabaci) and thrips (Frankliniella occidentalis) in greenhouses, with a correct recognition rate
above 0.92 [24]. Zhu et al. combined the color histogram with the dual-tree complex wavelet
transform [25] and SVM [26] to recognize insects, which can improve the recognition rate.
Li et al. proposed a red spider recognition method based on k-means clustering, which
transformed the image into the Lab color space for clustering [27]. This method had a high
accuracy in identifying red spiders with obvious red features. However, it can only be
applied when there is high color contrast between the objects and the scene.
In addition, a device for image acquisition is also necessary [7]. Johannes et al. presented a
scheme to diagnose wheat diseases automatically by using mobile capture devices [28]. In
their research, a novel image processing algorithm based on candidate hot-spot detection,
in combination with statistical inference methods, was proposed to tackle disease
identification in wild conditions.
From the literature of recent years, image processing and computer vision technology have
been widely used for the detection and recognition of diseases and pests and have achieved
good results. Generally, researchers combine image processing techniques with existing
methods such as clustering, neural networks, texture analysis, wavelet transforms, and
level-set methods. However, it is difficult to find a universal method that detects and
identifies all pests; in general, each algorithm targets one pest or one class of pests.
Moreover, most existing studies focus on the greenhouse environment, and the researchers
usually do not build a practical verification system. Deep learning can achieve high
recognition accuracy, but this training-based approach makes it difficult to guarantee
real-time performance and requires a large amount of existing data to train the model.
At present, there are still relatively few studies on the detection and identification of Pyralidae
insects. In order to detect and identify Pyralidae insects automatically, accurately, and in
real time, we have carried out research in the following aspects. Firstly, a robot platform for
pest monitoring is designed and fabricated. Then, a recognition scheme for Pyralidae insects is
presented, in which the color features of the image are used. The histogram reverse mapping
method is applied with multiple template images, and the resulting probability images are
superposed into a total probability image. Next, the image is segmented with constrained
Otsu. Finally, contours and Hu moments are used to automatically screen and identify the
contours; thus, the contour of a Pyralidae insect can be recognized. The scheme proposed in
this chapter can recognize a single target and also has good recognition ability for multiple
targets.
The rest of this chapter is organized as follows. Section 2 describes the data acquisition
equipment and its structure and gives a detailed description of the detection and recognition
algorithm. In Section 3, we verify the monitoring robot's operation and the proposed
detection and recognition scheme; we also evaluate the scheme and discuss the experimental
results. Finally, Section 4 concludes the chapter.
2. Materials and methods
2.1. Acquisition of Pyralidae insect data source
The image data used in this study are collected by an automatic detection and identification
system for pests and diseases. The system has been installed in the zone of technology
application and demonstration of Hainan University in Hainan province, China. The system
prototype and structure diagram are shown in Figure 1. The basic structure of the system can
be divided into five major parts: the camera sensor (automatic focusing, resolution 1600 × 1200,
camera model KS2A01AF) and display unit, the trap unit, the power delivery unit, the
intelligent detection and recognition unit, and the hardware bearer unit.
2.2. Description of proposed scheme
In this chapter, the recognition scheme for Pyralidae insects based on reverse mapping of
histogram and contour template matching is mainly divided into input module, reference
image processing module, image segmentation module, contour extraction module, and target
recognition module. The input module firstly converts the experimental image into a matrix
and initializes the parameters such as contour recognition threshold and the binarization
threshold of the probability image. Then, the reference image processing module makes space
conversion for the reference image, transforms the image from RGB space to HSV space, and
extracts the histogram of the color layer (H layer). After that, the image segmentation module
extracts the color histogram of the experimental image. After normalization, the total
probability image is obtained by the principle of histogram reverse mapping, using the
H-layer histograms of multiple template images; the module then binarizes the probability
image.

Figure 1. The intelligent recognition robot car for Pyralidae insects. (1) Deep grooved wheel, (2) shell, (3) guardrail, (4)
screen display, (5) camera, (6) mechanical arm, (7) vertical thread screw, (8) screw guardrail, (9) solar panels, (10) sensor
integrator, (11) horizontal screw motor, (12) trap lamp, (13) the hardcore, (14) crossbar, (15) insect collecting board, (16)
vertical thread screw-driven motor, (17) chassis, (18) car control buttons, (19) horizontal thread screw, and (20) trap top cover.
Subsequently, the contour extraction module obtains the contours of the binary image with
the help of the OpenCV function findContours(). The contours of internal holes are removed
by morphological methods, and the remaining contours are screened according to perimeter
and area features. Finally, the target recognition module recognizes a contour by calculating
the similarity between the contour obtained in the previous steps and the template contour.
Contours whose similarity is larger than the threshold are considered target contours, and
from them we obtain the recognition result. The pseudo-code of the scheme is shown in
Table 1.
2.3. Probability image acquisition based on color histogram reverse projection and multi-template matching
The adults of the Pyralidae insects are yellowish brown. The male moths are 10–13 mm long,
and the wingspan can reach 20–30 mm. The back of the Pyralidae insect is yellowish brown,
and the end of the abdomen is relatively thin and pointed. Usually, they have a pair of
filamentous antennae, which are grayish brown. Meanwhile, the forewing is tan, with two
brown wavy stripes, and there are two yellowish-brown short patterns between the two
lines. In addition,
Algorithm: Recognition scheme of Pyralidae insects
Input: S (target image); M_x (reference images)
Output: three vertices of the triangular marking on the Pyralidae insect: (α1, β1), (α2, β2), (α3, β3)
1: Initialize: (R, G, B) ← S, M_x
2: Set the threshold of Hu moments and the reference contour image Y_image
3: Convert to HSV:
   V = max(R, G, B)
   S = (V − min(R, G, B)) × 255 ÷ V if V ≠ 0, else 0
   H = (G − B) × 60 ÷ S          if V = R
   H = 180 + (B − R) × 60 ÷ S    if V = G
   H = 240 + (R − G) × 60 ÷ S    if V = B
4: for i = 0:1:255
       compute the color histogram of each image: H_Xi = H_pi ÷ (H_m × H_n)
       normalize H
   end for
5: for i = 0:1:m
       for j = 0:1:n
           G_ij = Similarity(H of the image block the same size as M_x, H of M_x)
           /* Similarity() calculates the histogram similarity */
       end for
   end for
6: R = Otsu(G)  /* binarize the probability image by the Otsu method */
7: C = findContours(R)  /* findContours() extracts contours from the binary image */
8: real_match ← similarity between C and the template contour, based on the Hu moment features
9: if real_match > match:
       approximate the contour by a triangle
       output the vertex coordinates
   else: delete R

Table 1. The pseudo-code description of the proposed scheme.
the hind wings of the Pyralidae insects are grayish brown; female moths are similar in shape
to male moths, with lighter shades, yellowish veins, lightly brown texture, and an obese
abdomen. From these characteristics, the color features of adult Pyralidae insects are
distinctive, and it is very effective to recognize them by color. Color histograms are often
used to describe color features and are particularly useful for describing images that are
difficult to segment automatically.
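Step 3 of the pseudo-code in Table 1, the RGB-to-HSV conversion, can be sketched in Python as follows. This is a minimal illustration of the chapter's formulas, not the authors' implementation; returning a hue of 0 for gray pixels (where the hue is undefined) is an assumption of convenience.

```python
def rgb_to_hsv(r, g, b):
    """Convert 8-bit R, G, B values with the formulas of step 3 in Table 1.

    V and S are scaled to [0, 255]; H is in degrees."""
    v = max(r, g, b)
    s = (v - min(r, g, b)) * 255 / v if v != 0 else 0
    if s == 0:
        return 0, 0, v  # hue is undefined for gray pixels; 0 by convention
    if v == r:
        h = (g - b) * 60 / s
    elif v == g:
        h = 180 + (b - r) * 60 / s
    else:
        h = 240 + (r - g) * 60 / s
    return h, s, v
```

Only the H channel is kept for the back-projection step that follows.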
The inverse projection of the histogram was proposed by Michael J. Swain and Dana H.
Ballard [29]; it records how well each pixel or pixel block fits the distribution of a histogram
model. It can be used to segment an image or to find interesting content in it. The output of
the algorithm is an image of the same size as the input image, in which the value of each
pixel represents the probability that it belongs to the target image. Therefore, a probability
image can be obtained by mapping the histogram of a template image of the Pyralidae
insects onto the target image. Considering the salient color features of the Pyralidae insects
and the characteristics of the histogram back-projection algorithm, the scheme proposed in
this chapter applies grayscale processing based on the back-projection of the color histogram
in the color feature extraction step. After the target image and the template image are
converted into HSV space and the color layer (i.e., the H component) is extracted, the image
is converted to grayscale by histogram mapping. The gray image obtained in this way is a
probability image that reflects the degree of similarity to the target color; thus, it realizes
color-distribution screening of the target image. The algorithm flow is as follows:
1. Convert the reference image into HSV space, extract the H-space matrix, compute its
histogram, and normalize it.
2. Starting from the first pixel (x, y) of the experimental image, cut a temporary image of
the same size as the reference image, with (x, y) as its center pixel. Extract the H-space
matrix, compute its histogram, and normalize it.
3. Calculate the similarity between the color histogram H_1 of the detected image and the
color histogram H_2 of the reference image. The result is Similarity(H_1, H_2):

\[ H'_k(i) = H_k(i) - \frac{1}{N}\sum_{j=1}^{N} H_k(j) \tag{1} \]

\[ \mathrm{Similarity}(H_1, H_2) = \frac{\sum_{i=1}^{N} H'_1(i)\,H'_2(i)}{\sqrt{\sum_{i=1}^{N} H'^2_1(i)\,\sum_{i=1}^{N} H'^2_2(i)}} \tag{2} \]

In Eqs. (1) and (2), k ∈ {1, 2}, i, j ∈ {1, 2, 3, …, N}, N is the number of intervals in the
histogram, and H_k(i) is the value of the i-th interval in the k-th histogram.
Similarity(H_1, H_2) is the similarity between histograms H_1 and H_2; the degree of
similarity reflects the probability that the color characteristics of the pixel match those of
the Asian Pyralidae insects.
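Eqs. (1) and (2) amount to the normalized correlation coefficient between two histograms (the same quantity OpenCV's compareHist computes in correlation mode). A minimal sketch, assuming plain Python lists as histograms:

```python
import math

def similarity(h1, h2):
    """Normalized correlation between two histograms, per Eqs. (1) and (2)."""
    n = len(h1)
    assert n == len(h2)
    # Eq. (1): subtract each histogram's mean value
    m1 = sum(h1) / n
    m2 = sum(h2) / n
    d1 = [v - m1 for v in h1]
    d2 = [v - m2 for v in h2]
    # Eq. (2): correlation coefficient of the mean-centered histograms
    num = sum(a * b for a, b in zip(d1, d2))
    den = math.sqrt(sum(a * a for a in d1) * sum(b * b for b in d2))
    return num / den if den else 0.0
```

Identical histograms score 1.0 and perfectly anti-correlated ones score -1.0, so the value can be thresholded directly as a per-pixel probability.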
Agricultural Robots - Fundamentals and Applications6
In addition, because of the differences in color and texture between different Pyralidae
insects in natural scenes, it is necessary to use several template images for the histogram
reverse projection, which avoids depending on a single template that cannot adapt to a
variety of scenes. As shown in Table 3, three template images are given. The total
probability image is obtained as in Eq. (3), where M represents the number of template
images. The results obtained are shown in Table 4.

\[ \mathrm{Similarity}(H_1) = \sum_{m=1}^{M} \mathrm{Similarity}(H_1, H_m) \tag{3} \]
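The per-pixel computation behind Eq. (3) can be sketched as a sliding-window back-projection. The window half-size, the 256-bin H histogram, and the injected `sim` function are illustrative assumptions, not the authors' exact implementation:

```python
def backproject(image_h, templates, window, sim):
    """Total probability image per Eq. (3), assuming sim() implements Eq. (2).

    image_h: 2-D list of H-channel values in [0, 255]; templates: list of
    template histograms; window: half-size of the local patch at each pixel."""
    rows, cols = len(image_h), len(image_h[0])
    prob = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            # local histogram of the patch centred at (i, j)
            hist = [0] * 256
            for di in range(-window, window + 1):
                for dj in range(-window, window + 1):
                    if 0 <= i + di < rows and 0 <= j + dj < cols:
                        hist[image_h[i + di][j + dj]] += 1
            # Eq. (3): sum the similarities over all M template histograms
            prob[i][j] = sum(sim(hist, t) for t in templates)
    return prob
```

In practice OpenCV's calcBackProject plus a summation over templates replaces this explicit double loop.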
2.4. Otsu image segmentation based on constrained space
The Otsu algorithm is also known as maximum between-class variance method [30], some-
times called the Otsu algorithm, which is considered to be the best algorithm of selecting the
threshold in image segmentation. For the image Gx;yðÞ, the split threshold is set as T,ω
1
is
the proportion of foreground pixels, μ
1
is the average grayscale of foreground image, ω
2
is the
proportion background pixels, μ
2
is the average grayscale of background image, μis the total
average grayscale of background image, and g is the maximum between-class variance. pmin
and pmax are, respectively, the minimum and maximum values of the pixel values in the image.
Then, we can get
μ¼μ1ω1þμ2ω2s:t:ω1þω2¼1 (4)
gotsu ¼argmax ω1μμ1

2þω2μμ2

2
no
(5)
Substitute Eq. (4) into Eq. (5), and then the Otsu solution expression for threshold is as below:
gotsu ¼argmax ω1ω2μ1μ2

2
no
pmin Tpmax (6)
Finally, by traversing all candidate thresholds, the threshold that maximizes the
between-class variance of the image is obtained. As noted in [16], the variance of the
similarity values in the background area is smaller than in the Pyralidae insect area, because
of the diversity of the natural scene; in addition, the similarity values of the Pyralidae
insects are larger than those of the background. Therefore, the Otsu threshold is biased
toward the background, which leads to a smaller threshold than the actual optimal one.
To compensate, the constrained-space Otsu method first obtains g_otsu and then finds the
threshold maximizing the between-class variance within the constrained space between
g_otsu and p_max, as in Eq. (7). Here g_otsu can be obtained by a simple calculation [31],
namely g_otsu = (μ_1 + μ_2)/2, which indicates that for an image whose two class variances
differ greatly, the Otsu threshold is biased toward the class with the larger variance:

\[ g_{\mathrm{optimal}} = \arg\max_T \left\{ \omega_1\omega_2(\mu_1 - \mu_2)^2 \right\}, \quad g_{\mathrm{otsu}} \le T \le p_{\max} \tag{7} \]
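A sketch of Eqs. (6) and (7) over a 256-bin histogram. Computing μ_1 and μ_2 at the ordinary Otsu threshold before forming g_otsu = (μ_1 + μ_2)/2 is our reading of the simplified calculation attributed to [31], and is therefore an assumption:

```python
def otsu(hist, lo=0, hi=255):
    """Threshold in [lo, hi] maximizing omega1*omega2*(mu1 - mu2)^2, Eq. (6)."""
    total = sum(hist)
    best_t, best_var = lo, -1.0
    for t in range(lo, hi + 1):
        w1 = sum(hist[:t + 1])
        w2 = total - w1
        if w1 == 0 or w2 == 0:
            continue  # both classes must be non-empty
        mu1 = sum(k * hist[k] for k in range(t + 1)) / w1
        mu2 = sum(k * hist[k] for k in range(t + 1, 256)) / w2
        var = (w1 / total) * (w2 / total) * (mu1 - mu2) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def constrained_otsu(hist):
    """Eq. (7): redo the search in the constrained space [g_otsu, p_max]."""
    t0 = otsu(hist)                                   # ordinary Otsu, Eq. (6)
    w1 = sum(hist[:t0 + 1])
    w2 = sum(hist) - w1
    mu1 = sum(k * hist[k] for k in range(t0 + 1)) / w1
    mu2 = sum(k * hist[k] for k in range(t0 + 1, 256)) / w2
    g = int((mu1 + mu2) / 2)                          # simplified g_otsu [31]
    p_max = max(k for k in range(256) if hist[k] > 0)
    return otsu(hist, lo=g, hi=p_max)
```

For a histogram with mass at bins 10 and 200, the ordinary search settles on the lowest maximizing threshold while the constrained search starts from the midpoint of the class means, illustrating the bias correction the chapter describes.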
Agricultural Robot for Intelligent Detection of Pyralidae Insects 7
2.5. Target contour recognition based on Hu moments
The moment feature mainly characterizes the geometric characteristics of the image area, also
known as the geometric moment. Because it has the invariant characteristic of the rotation,
translation, scale, and so on, so it is also called the invariant moment. In image processing,
geometric invariant moments can be used as an important feature to represent objects, which
can be used to classify an image. Among them, the invariant moments commonly used in
humanoid recognition are mainly composed of Hu moments, Zernike moments, and so on. Hu
moment is first proposed by M.K. Hu [32], and he gave the definition of Hu moments, the
basic properties, and seven invariant moments with translation, rotation, and scaling invari-
ance.
Specifically, we assume that the gray distribution in the target region D is f(x, y). In order to
describe the target, the gray distribution outside the region D is taken to be 0. The geometric
moment and the central moment of order p + q are then, respectively,

\[ m_{pq} = \iint_D x^p y^q f(x, y)\,dx\,dy \tag{8} \]

\[ \mu_{pq} = \iint_D (x - \bar{x})^p (y - \bar{y})^q f(x, y)\,dx\,dy \tag{9} \]

In the equations above, m_pq represents the geometric moment of order p + q of the image,
and μ_pq represents the central moment of order p + q. Computing these two kinds of
features for the reference contour image and the experimental contour image, we can use
them to represent the contours. The similarity between the experimental contour and the
reference contour is then compared, and contours whose similarity is less than the threshold
are removed; the remaining contours are the contours of the Pyralidae insects. Finally, using
the OpenCV function approxPolyDP() and other contour approximation functions, each
contour is approximated by a triangle and marked. The marked contours are the desired
result.
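Eqs. (8) and (9) can be discretized over a binary region given as a set of pixel coordinates (f = 1 on the region). The first Hu invariant, built from the normalized central moments, is included as an example; this is a minimal sketch, not the authors' implementation, which uses OpenCV's moment functions:

```python
def raw_moment(pixels, p, q):
    """Eq. (8): m_pq = sum of x^p * y^q over the region (f = 1 there)."""
    return sum((x ** p) * (y ** q) for x, y in pixels)

def central_moment(pixels, p, q):
    """Eq. (9): mu_pq, taken about the centroid (x_bar, y_bar)."""
    m00 = raw_moment(pixels, 0, 0)
    xb = raw_moment(pixels, 1, 0) / m00
    yb = raw_moment(pixels, 0, 1) / m00
    return sum(((x - xb) ** p) * ((y - yb) ** q) for x, y in pixels)

def hu1(pixels):
    """First Hu invariant, eta20 + eta02, where eta_pq = mu_pq / m00^(1+(p+q)/2)."""
    m00 = raw_moment(pixels, 0, 0)
    eta = lambda p, q: central_moment(pixels, p, q) / m00 ** (1 + (p + q) / 2)
    return eta(2, 0) + eta(0, 2)
```

Comparing such invariants between an extracted contour and the reference contour is what OpenCV's matchShapes() does internally.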
2.6. Recognition algorithm combined with robot control
Combining with robot operations is one of the innovations of this chapter. Depending on the
result of the similarity detection, the robot arm can adjust the speed. When the similarity is greater
than 0.9, the robot arm will stop moving; meanwhile, camera sensors continue to collect image
data, and the robot will give an alarm. When the similarity is between 0.7 and 0.9, the movement
of the robot will slow down. Using robot and image recognition in a coordinated manner, we can
reduce the false alarm rate and missed detection rate. Meanwhile, when there is interference of
other insects, the robot arm will stop or slow down, which can reduce the probability of false
positives. Only when the similarity of five consecutive insect images is greater than 0.9, we can
make the final decision on the presence of Pyralidae insects. Using this method, it can be
prevented from being mistaken for other insects, so as to improve the correct recognition rate.
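The control rule above can be sketched as a small decision function. The action names, the inclusive 0.7 boundary, and the return values are illustrative assumptions:

```python
def arm_action(similarity, history):
    """Sketch of the speed-adaptation rule of Section 2.6. `history` holds the
    most recent similarity scores; names and return values are illustrative."""
    history.append(similarity)
    if similarity > 0.9:
        # stop the arm and keep sampling; raise the alarm only once five
        # consecutive images exceed 0.9
        confirmed = len(history) >= 5 and all(s > 0.9 for s in history[-5:])
        return ("stop", "alarm" if confirmed else "keep-sampling")
    if similarity >= 0.7:
        return ("slow", "keep-sampling")
    return ("normal", "keep-sampling")
```

The five-in-a-row confirmation is what suppresses one-off false positives from other insects.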
3. Results and discussions
The hardware environment of this scheme includes a PC (Intel(R) Core(TM) i3-2500 CPU
@ 3.30 GHz, 4.00 GB RAM), an embedded master development board (NVIDIA Jetson TX1),
embedded auxiliary control boards (two Raspberry Pi B+ and six Arduino Uno R3 expansion
boards), a camera module (KS2A01AF), etc. The software environment includes the
Windows 7 operating system, Python 2.7, OpenCV 2.4.13, and an embedded Linux operating
system. The images used in the experiment are collected from the cameras on the robot arm.
We gathered more than 200 photos of the Pyralidae insects for the experiments. Some
detection results are shown in Table 4. The robot can perform a well-designed motion,
capture images well, and identify Pyralidae insect objects in the images. The main parts and
functions of the robot are shown in Table 2.
3.1. Probabilistic image acquisition experiment and analysis
After the image is converted to HSV space, histogram reverse mapping is conducted on the
target images using the three template images, yielding the probability images. The
probability images obtained in the experiment are shown in Table 3.
As shown in Table 3, the probability images are obtained by histogram reverse mapping of
the original images with each of the three template images. The image in the first column of
rows 2–4 is the template image used by that row. The first row of the table shows the five
original images containing Asian Pyralidae insects. The second, third, and fourth rows show
the probability images obtained by histogram reverse mapping with template images 1, 2,
and 3, respectively. The last row shows the total probability image obtained by a logical OR
operation and image erosion applied to the three probability images above.

Table 2. Image acquisition equipment: the pest identification and environmental monitoring robot. The chassis supports
and secures the sliding rails on the vehicle and can also be used to move the equipment; the sliding guide is used to move
the robot arm; the part marked by the white circle is the robotic arm; the camera is used to acquire images; and the
display screen shows the acquired images in real time.
As can be seen in Table 3, the proposed method avoids the situation in which a single
template image cannot adapt to a variety of scenes. The results after the erosion operation
show that the total probability image obtained by the logical operation over multiple
template images gives a better effect.
3.2. Experiment and analysis of maize borer
After obtaining the probability image, the contour extraction, matching, screening, and
recognition experiments are carried out. A triangle mark, matching the shape characteristics
of Pyralidae insects, is used to indicate the recognition results, which are shown in Table 4.
Table 3. The original images and the probability images obtained after histogram reverse mapping.

As can be seen from Table 4, the proposed scheme identifies the targets in images containing
Pyralidae insects well. The number marked on each picture indicates the similarity. Marking
the identification results with a triangle gives good results. According to the different
recognition results, the speed of the robot arm can be adjusted adaptively to improve the
detection accuracy. We also compiled statistics on time consumption and other indicators of
the experimental results: the processing time is about 1 s per image, so the method proposed
in this chapter can achieve real-time processing.
3.3. Comparison and analysis
Currently, recognition method based on ELM and deep learning has a rapid development. In
theory, the use of these methods can get a higher correct rate. Unfortunately, the capture and
establishment of such pest images of maize borers are very difficult. By now, there are few
useful pictures we can take, which are far less than the minimum requirement for the number
of image to be trained. Certainly, we also try to collect images through the trap. However, the
background of the resulting images is single, which cannot meet the requirements. In addition,
ELM and deep learning all have relatively high computational complexity and cannot meet the
needs of real-time detection. So, based on the two reasons mentioned above, they are not
feasible. Conversely, through the artificial summary of the characteristics of Pyralidae insects,
the robot adaptively adjusts the sampling frequency to detect, which can achieve better accu-
racy and good practicability.
Table 4. The recognition results and the robot arm action.
Finally, the proposed method is compared with the multi-structural element-based crop pest
identification method proposed in [33] and with the general histogram reverse mapping
method. The experimental results are shown in Table 5. As can be seen from Table 5, the
maize borer recognition scheme proposed in this chapter has a higher recognition rate, a
lower false alarm rate, and good application prospects. Moreover, it does not require a large
amount of data analysis, which ensures that the average time consumption is not
significantly increased.
In Table 5, the recognition rate and the false alarm rate are calculated as follows:

\[ \beta = \frac{\sum_{i=1}^{n} r_{ij}}{n}, \quad r_{ij} = 0\ \text{or}\ r_{ij} = 1 \tag{10} \]

\[ \delta = \frac{x - \sum_{i=1,\,j=1}^{n,\,m} r_{ij}}{x}, \quad r_{ij} = 0\ \text{or}\ r_{ij} = 1 \tag{11} \]

In formulae (10) and (11), β represents the recognition rate and δ represents the false alarm
rate. r_ij is the j-th contour of the i-th Pyralidae insect (1 if it exists, else 0). n represents the
number of real Pyralidae insects in the image, x represents the total number of contours
marked by the algorithm, and m represents the total number of contours marked by the
algorithm for the i-th Pyralidae insect in the image. Thus, the recognition rate reflects the
ability of the algorithm to identify maize borers, and the false alarm rate reflects the
proportion of erroneous contours among all marked contours. Note that the sum of these
two probabilities is not necessarily equal to 1.
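Eqs. (10) and (11) can be sketched as follows; representing r_ij as per-insect 0/1 flags is an assumption about the bookkeeping, not the authors' code:

```python
def recognition_rate(detected_flags):
    """Eq. (10): beta = (number of real insects with a marked contour) / n."""
    return sum(detected_flags) / len(detected_flags)

def false_alarm_rate(correct_contours_per_insect, x):
    """Eq. (11): delta = (x - total correct contours) / x, where x is the
    total number of contours the algorithm marked."""
    return (x - sum(correct_contours_per_insect)) / x
```

For example, 9 of 10 insects found gives beta = 0.9, and 11 correct contours out of 12 marked gives delta = 1/12, confirming that beta and delta need not sum to 1.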
Our scheme and two other algorithms are used to test more than 200 images containing the
Pyralidae insects, respectively. Then, we conducted a statistical analysis for the average time
consumption, the recognition accuracy, and the false alarm rate. The results of the statistics are
shown in Table 5.
4. Conclusions
Pyralidae insects greatly affect the quality and yield of maize and other crops. In order to
solve the problem of maize borer detection, this chapter presents a scheme for the detection and
identification of Pyralidae insects using the robot we designed. Firstly, mathematical
morphology is used to preprocess the acquired image, and the image is then binarized by
histogram reverse mapping. Next, the binary image is processed by contour extraction and
preliminary screening. Then, combined with the reference contour image, the contours with
Asian Pyralidae insect characteristics are selected using the Hu moment features. In the end,
a statistical analysis of the experimental results shows that the correct recognition rate based
on multi-template matching can reach nearly 94.3%. Compared with other methods, the
time complexity of this scheme is basically the same, which can meet the requirement of
real-time detection.

Schemes                                                        Recognition rate (%)   False alarm rate (%)   Average time consumption (s)
Our proposed scheme in this chapter                            94.3                   6.5                    1.12
Histogram reverse mapping method                               65.2                   60.8                   1.01
Multi-structural element-based crop pest identification [33]   78.8                   16.9                   1.10

Table 5. Comparison results of different schemes.
Acknowledgements
The contents of this chapter were supported by the Key R&D Project of Hainan Province
(Grant no. ZDYF2018015), the Hainan Province Natural Science Foundation of China (Grant
no. 617033), the Open Sub-project of State Key Laboratory of Marine Resource Utilization in
South China Sea (Grant no. 2016013B), and the Oriented Project of State Key Laboratory of
Marine Resource Utilization in South China Sea (Grant no. DX2017012).
Conflict of interest
The authors declare that there is no conflict of interest regarding the publication of this
chapter.
Author details
Zhuhua Hu¹,²*, Boyi Liu¹,³ and Yaochi Zhao¹
*Address all correspondence to: yaochizi@163.com
1 College of Information Science and Technology, Hainan University, Haikou, China
2 State Key Laboratory of Marine Resource Utilization in South China Sea, Hainan University,
Haikou, China
3 University of Chinese Academy of Sciences, Beijing, China
References
[1] Wei TS, Zhu WF, Pang MH, Liu YC, Wang ZY, Dong JG. Influence of the damage of cotton
bollworm and corn borer to ear rot in corn. Journal of Maize Sciences. 2013;21(4):116-118
(in Chinese)
Agricultural Robot for Intelligent Detection of Pyralidae Insects 13
Speech and Signal Processing (ICASSP), New Orleans, LA, USA: IEEE; 5–9 March, 2017.
pp. 1028-1032
[15] Zhao Y, Hu Z, Bai Y, Cao F. An accurate segmentation approach for disease and pest
based on DRLSE guided by texture difference. Transactions of the Chinese Society for
Agricultural Machinery. 2015;46(2):14-19 (in Chinese)
[16] Zhao Y, Hu Z. Segmentation of fruit with diseases in natural scenes based on logarithmic
similarity constraint Otsu. Transactions of the Chinese Society for Agricultural Machinery.
2015;46(11):9-15 (in Chinese)
[17] Hu Y, Song L, Zhang J, Xie C, Li R. Pest image recognition of multi-feature fusion based on
sparse representation. International Journal of Pattern Recognition and Artificial Intelligence.
2014;27(11):985-992 (in Chinese)
[18] Bayat M, Abbasi M, Yosefi A. Improvement of pest detection using histogram adjustment
method and Gabor wavelet. Journal of Asian Scientific Research. 2016;6(2):24-33
[19] Martin A, Sathish D, Balachander C, Hariprasath T, Krishnamoorthi G. Identification and
counting of pests using extended region grow algorithm. In: 2015 2nd International
Conference on Electronics and Communication Systems (ICECS), Coimbatore, India:
IEEE; 26–27 February, 2015. pp. 1229-1234
[20] Przybyłowicz Ł, Pniak M, Tofilski A. Semiautomated identification of European corn
borer (Lepidoptera: Crambidae). Journal of Economic Entomology. 2015;109(1):195-199
[21] Deng L, Yu R. Pest recognition system based on bio-inspired filtering and LCP features.
In: 2015 12th International Computer Conference on Wavelet Active Media Technology and
Information Processing (ICCWAMTIP), Chengdu, China: IEEE; 18–20 December, 2015. pp.
202-204
[22] Miranda JL. Pest identification using image processing techniques in detecting image
pattern through neural network. International Journal of Advances in Image Processing
Techniques. 2014;1(4):4-9
[23] Boniecki P, Koszela K, Piekarska-Boniecka H, Weres J, Zaborowicz M, Kujawa S,
Majewskic A, Rabaa B. Neural identification of selected apple pests. Computers and
Electronics in Agriculture. 2015;110:9-16
[24] Espinoza K, Valera DL, Torres JA, López A, Molina-Aiz FD. Combination of image
processing and artificial neural networks as a novel approach for the identification of
Bemisia tabaci and Frankliniella occidentalis on sticky traps in greenhouse agriculture.
Computers and Electronics in Agriculture. 2016;127:495-505
[25] Zhu L, Zhang Z, Zhang P. Image identification of insects based on color histogram and
dual tree complex wavelet transform (DTCWT). Acta Entomologica Sinica. 2010;53(1):
91-97 (in Chinese)
[26] Zhu L, Zhang Z. Automatic insect classification based on local mean colour feature and
supported vector machines. Journal of Oriental Insects. 2012;46(3-4):260-269
[27] Li Z, Hong T, Zeng X, Zheng J. Citrus red mite image target identification based on K-
means clustering. Transactions of the Chinese Society of Agricultural Engineering. 2013;
28(23):147-153 (in Chinese)
[28] Johannes A, Picon A, Alvarez-Gila A, Echazarra J, Rodriguez-Vaamonde S, Navajas AD,
Ortiz-Barredo A. Automatic plant disease diagnosis using mobile capture devices, applied
on a wheat use case. Computers and Electronics in Agriculture. 2017;138:200-209
[29] Swain MJ, Ballard DH. Color indexing. International Journal of Computer Vision. 1991;
7(1):11-32
[30] Otsu N. A threshold selection method from gray-level histograms. IEEE Transactions on
Systems, Man, and Cybernetics. 1979;9(1):62-66
[31] Xu X, Song E, Jin L. Characteristic analysis of threshold based on Otsu criterion. Acta
Entomologica Sinica. 2009;37(12):2716-2719 (in Chinese)
[32] Doyle W. Operations useful for similarity-invariant pattern recognition. Journal of the
ACM. 1962;9(2):259-267
[33] Liu J, Geng G, Ren Z. Plant pest recognition system based on multi-structure element
morphology. Journal of Computational Design and Engineering. 2009;30(6):1488-1490
(in Chinese)