2012 8th International Conference on Natural Computation (ICNC 2012)
Image Recognition of Plant Diseases Based on
Principal Component Analysis and Neural Networks
Haiguang Wang, Guanlin Li, Zhanhong Ma, Xiaolong Li
Department of Plant Pathology,
China Agricultural University,
Beijing 100193, China
Abstract—Plant disease identification based on image processing
could quickly and accurately provide useful information for the
prediction and control of plant diseases. In this study, 21 color
features, 4 shape features and 25 texture features were extracted
from the images of two kinds of wheat diseases (wheat stripe rust
and wheat leaf rust) and two kinds of grape diseases (grape
downy mildew and grape powdery mildew), principal component
analysis (PCA) was performed for reducing dimensions in feature
data processing, and then neural networks including
backpropagation (BP) networks, radial basis function (RBF)
neural networks, generalized regression networks (GRNNs) and
probabilistic neural networks (PNNs) were used as the classifiers
to identify wheat diseases and grape diseases, respectively. The
results showed that these neural networks could be used for
image recognition of these diseases based on reducing dimensions
using PCA and acceptable fitting accuracies and prediction
accuracies could be obtained. For the two kinds of wheat
diseases, the optimal recognition result was obtained when image
recognition was conducted based on PCA and BP networks, and
the fitting accuracy and the prediction accuracy were both 100%.
For the two kinds of grape diseases, the optimal recognition
results were obtained when GRNNs and PNNs were used as the
classifiers after reducing the dimensions of feature data with
PCA, and the prediction accuracies were 94.29% with the fitting
accuracies equal to 100%.
Keywords- image recognition; plant diseases; principal
component analysis; neural networks
I. INTRODUCTION
There are many kinds of plant diseases in the world. The
diseases could cause quality decline of agricultural products
and serious yield losses, and even threaten food security.
Timely recognition and diagnosis of plant diseases is the basis for taking disease control measures. Recognition and diagnosis of plant diseases usually relies on in-field visual identification by agricultural technicians. This approach requires extensive professional knowledge and rich experience, and depends on a large number of professional and technical personnel. Disease diagnosis via pathogen detection requires suitable laboratory conditions and even more professional knowledge. Pathogen detection methods based on molecular biological techniques have developed rapidly and can provide more accurate diagnoses, but they must be performed by professional and technical personnel, cannot be carried out in the field, and are time-consuming and costly. Therefore, it is necessary to develop a simple, fast and accurate method for plant disease identification.
With the rapid development of information technology and agricultural informatization, computer technology has played an important role in the acquisition, processing and communication of plant disease information [1]. Image recognition of plant diseases has attracted widespread attention as a result of the development of visual technologies and the popularization of digital products. Studies on the recognition and automatic assessment of plant diseases based on image processing have been reported [2], [3], [4], [5], [6], [7], [8], [9], [10]. Computer-based automatic recognition and diagnosis from symptom images of plant diseases could quickly and accurately provide disease information for agricultural technicians and farmers and thus reduce the dependence on agricultural technicians.
Image recognition of plant diseases extracts characteristic feature information from the diseased regions in the acquired images using image processing techniques, and then achieves disease recognition using pattern recognition methods such as discriminant analysis [11], [12], neural networks [9], [13], [14], [15], [16] and support vector machines [17], [18], [19], [20], [21]. Generally, the features extracted from the images of plant diseases include color features [12], [19], [22], [23], shape features [24], texture features [25], and so on. It is very important to extract effective characteristic features for the image recognition of plant diseases. However, sometimes an excessive number of features are extracted from the images for the recognition of plant diseases. This increases the requirements for computer hardware and software, increases the complexity of disease recognition, and greatly extends the computation time. It is therefore necessary to reduce the dimensions of the feature data when many features are used for disease recognition based on image processing. Commonly used methods for reducing the dimensions of feature data include principal component analysis (PCA) [26], [27], the stepwise linear regression method [12], and so on.
In order to find out a method for plant disease identification based on image processing, image recognition of four important plant diseases, namely wheat stripe rust caused by Puccinia striiformis f. sp. tritici, wheat leaf rust caused by Puccinia recondita f. sp. tritici, grape downy mildew caused by Plasmopara viticola and grape powdery mildew caused by Uncinula necator, was conducted based on color features, shape features and texture features extracted from the disease images using PCA and neural networks including backpropagation (BP) networks, radial basis function (RBF) neural networks, generalized regression networks (GRNNs) and probabilistic neural networks (PNNs).
This work was supported in part by the Special Fund for Agro-scientific Research in the Public Interest (200903004 and 200903035).
Corresponding author: Haiguang Wang, E-mail: wanghaiguang@cau.edu.cn.
II. MATERIALS AND METHODS
In this study, 185 digital images of plant diseases were obtained using a common digital camera. The images were divided into two groups according to the types of plants: one group included 50 images of wheat stripe rust and 50 images of wheat leaf rust, and the other group included 50 images of grape downy mildew and 35 images of grape powdery mildew. The original plant disease images were 2592×1944 pixels in 24-bit JPG format. To improve the running speed of the computer programs, the images of wheat diseases were cropped to 400×300 pixels, and the images of grape diseases were proportionally compressed from 2592×1944 to 800×600 pixels (keeping the aspect ratio) using the nearest neighbor interpolation method. The plant disease images were then denoised with a median filter algorithm. After image preprocessing, the K_means clustering algorithm was used to segment the plant disease images [28]. In MATLAB 7.6, 50 features, including 21 color features, 4 shape features and 25 texture features, were extracted from the segmented disease images [29].
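As an illustration of the preprocessing and segmentation pipeline described above, the following is a minimal Python sketch. The authors worked in MATLAB 7.6; the OpenCV and scikit-learn calls, the clustering of the a*b* chroma channels, and the assumption that the smaller cluster corresponds to the lesion region are illustrative choices, not the authors' implementation.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def preprocess_and_segment(path, target_size=(400, 300), k=2):
    """Resize, median-filter and K-means-segment a disease image (illustrative sketch)."""
    img = cv2.imread(path)                                   # BGR image
    img = cv2.resize(img, target_size, interpolation=cv2.INTER_NEAREST)
    img = cv2.medianBlur(img, 3)                             # median filtering to remove noise
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)               # cluster on the a*b* chroma channels
    ab = lab[:, :, 1:3].reshape(-1, 2).astype(np.float32)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(ab)
    labels = labels.reshape(img.shape[:2])
    # Assume the smaller cluster corresponds to the diseased (lesion) region.
    lesion_label = np.argmin(np.bincount(labels.ravel()))
    mask = (labels == lesion_label).astype(np.uint8) * 255
    return img, mask
```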
The image recognition of plant diseases was carried out
using BP networks, RBF neural networks, GRNNs and PNNs
as the classifiers, respectively. For wheat diseases, 30 images
of wheat stripe rust and 30 images of wheat leaf rust were
randomly selected as the training set, and the remaining wheat
disease images including 20 images of wheat stripe rust and 20
images of wheat leaf rust were regarded as the testing set. For
grape diseases, 30 images of grape downy mildew and 20
images of grape powdery mildew were randomly selected as
the training set, and the remaining grape disease images
including 20 images of grape downy mildew and 15 images of
grape powdery mildew were regarded as the testing set. For the
disease images of each kind of plant, seven groups of feature
combinations were obtained and were recorded as Col, Sha,
Tex, ColSha, ColTex, ShaTex and CST, respectively. Col
referred to 21 color features, Sha referred to 4 shape features,
Tex referred to 25 texture features, ColSha referred to 21 color
features and 4 shape features, ColTex referred to 21 color
features and 25 texture features, ShaTex referred to 4 shape
features and 25 texture features, and CST referred to 50
features. These seven groups of feature combinations were
processed by using PCA, respectively.
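The following Python sketch shows one way the feature groups and the per-class random training/testing split could be organized. The 21/4/25 feature counts and the 30-image training sets follow the text; the array names, column ordering and random data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_per_class(features, labels, n_train_per_class):
    """Randomly pick a fixed number of training samples per class; the rest form the testing set."""
    train_idx, test_idx = [], []
    for cls, n_train in n_train_per_class.items():
        idx = rng.permutation(np.flatnonzero(labels == cls))
        train_idx.extend(idx[:n_train])
        test_idx.extend(idx[n_train:])
    return features[train_idx], labels[train_idx], features[test_idx], labels[test_idx]

# Hypothetical feature matrix: 100 wheat images x 50 features
# (columns 0-20 color, 21-24 shape, 25-49 texture).
X = rng.normal(size=(100, 50))
y = np.array([1] * 50 + [2] * 50)          # 1 = stripe rust, 2 = leaf rust

combos = {
    "Col":    X[:, :21],
    "Sha":    X[:, 21:25],
    "Tex":    X[:, 25:],
    "ColSha": X[:, :25],
    "ColTex": np.hstack([X[:, :21], X[:, 25:]]),
    "ShaTex": X[:, 21:],
    "CST":    X,
}
Xtr, ytr, Xte, yte = split_per_class(combos["CST"], y, {1: 30, 2: 30})
```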
PCA converts a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables using an orthogonal transformation. It is usually used for reducing dimensions in data processing. In this study, PCA was implemented using the PRINCOMP procedure in the SAS System for Windows 8.2. A principal component was selected for further processing if its eigenvalue was approximately 1 or larger and the cumulative contribution rate reached 85%. The selected principal components were normalized
and then were regarded as network inputs. The data on disease
types were regarded as target outputs. When BP networks, RBF neural networks and GRNNs were used as the classifiers, wheat stripe rust and wheat leaf rust were expressed as (0, 1) and (1, 0), respectively, and grape downy mildew and grape powdery mildew were expressed as (0, 1) and (1, 0), respectively. When RBF neural networks, GRNNs and PNNs were used as the classifiers, wheat stripe rust and wheat leaf rust were expressed as 1 and 2, respectively, and grape downy mildew and grape powdery mildew were expressed as 1 and 2,
respectively. The fitting recognition results and the prediction
recognition results were compared with the actual disease
types, and then the fitting accuracies and the prediction
accuracies were obtained, respectively.
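A minimal Python sketch of the dimension-reduction step described above follows. The authors used the PRINCOMP procedure of SAS 8.2; for simplicity this sketch keeps only enough leading components to reach the 85% cumulative contribution rate and then normalizes the component scores for use as network inputs. The function and array names are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler

def select_principal_components(X, cum_contribution=0.85):
    """Project X onto the leading principal components that reach the
    target cumulative contribution rate (85%, as in the paper)."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)        # correlation-matrix PCA
    pca = PCA().fit(Xs)
    cum_ratio = np.cumsum(pca.explained_variance_ratio_)
    n_keep = int(np.searchsorted(cum_ratio, cum_contribution)) + 1
    scores = pca.transform(Xs)[:, :n_keep]
    return MinMaxScaler().fit_transform(scores)              # normalized network inputs
```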
In MATLAB 7.6, BP networks, RBF neural networks, GRNNs and PNNs were designed with newff, newrbe, newgrnn and newpnn, respectively. One-hidden-layer BP networks were constructed for the image recognition of plant diseases. The transfer function tansig was used in the hidden layer and the log-sigmoid transfer function logsig was used in the output layer. The Levenberg-Marquardt algorithm (trainlm) was used as the training function, and learngdm was used as the learning function. For the BP networks, the maximum number of training epochs was 5000 and the training performance goal was 0.01.
The number of neurons in the hidden layer n was calculated using the following formula:

n = √(n1 + n2) + a    (1)

in which n1 is the number of neurons in the input layer, n2 is the number of neurons in the output layer, and a is a constant between 1 and 10. For RBF neural networks, GRNNs and
PNNs, spreads of radial basis functions were assumed to be 0.1
to 2.0 with step size 0.1.
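To make the classifier setup concrete, here is a hedged Python sketch: an MLP standing in for the one-hidden-layer BP network (hidden size from Eq. (1)) and a tiny Gaussian-kernel probabilistic neural network with the 0.1–2.0 spread sweep. The authors used the MATLAB Neural Network Toolbox functions newff, newrbe, newgrnn and newpnn; the code below only illustrates the idea, and the data, parameter values and class names are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def bp_hidden_size(n_inputs, n_outputs, a=4):
    """Hidden-layer size from Eq. (1): n = sqrt(n1 + n2) + a, with a between 1 and 10."""
    return int(round(np.sqrt(n_inputs + n_outputs))) + a

class SimplePNN:
    """Minimal probabilistic neural network: Gaussian kernels centered on training samples."""
    def __init__(self, spread=0.3):
        self.spread = spread
    def fit(self, X, y):
        self.X_, self.y_, self.classes_ = X, y, np.unique(y)
        return self
    def predict(self, X):
        preds = []
        for x in X:
            d2 = np.sum((self.X_ - x) ** 2, axis=1)
            k = np.exp(-d2 / (2.0 * self.spread ** 2))        # kernel response per training sample
            scores = [k[self.y_ == c].sum() for c in self.classes_]
            preds.append(self.classes_[int(np.argmax(scores))])
        return np.array(preds)

# Hypothetical PCA-reduced inputs with class labels 1 and 2.
rng = np.random.default_rng(0)
Xtr, Xte = rng.normal(size=(60, 5)), rng.normal(size=(40, 5))
ytr, yte = rng.integers(1, 3, 60), rng.integers(1, 3, 40)

bp = MLPClassifier(hidden_layer_sizes=(bp_hidden_size(Xtr.shape[1], 2),),
                   max_iter=5000, random_state=0).fit(Xtr, ytr)
print("BP prediction accuracy:", (bp.predict(Xte) == yte).mean())

for spread in np.arange(0.1, 2.01, 0.1):                      # spread sweep 0.1 to 2.0, step 0.1
    pnn = SimplePNN(spread).fit(Xtr, ytr)
    print(f"spread={spread:.1f}  accuracy={(pnn.predict(Xte) == yte).mean():.3f}")
```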
III. RESULTS AND ANALYSIS
The recognition results of the four kinds of neural networks
with fitting accuracy ≥ 75% and prediction accuracy ≥ 75% were shown in TABLE I, TABLE II, TABLE III, TABLE IV and TABLE V, respectively. After reducing the dimensions of the
data of the different feature combinations using PCA,
acceptable fitting accuracies and prediction accuracies for
image recognition could be obtained using the neural networks
as the classifiers. Overall, the recognition results based on the
combinations of different features were better than those based on the individual features. The recognition results obtained by using BP networks, GRNNs and PNNs were better than those obtained by using RBF neural networks. Whether the types of
the plant diseases were expressed as (0, 1) and (1, 0) or
expressed as 1 and 2 when RBF neural networks or GRNNs
were used as the classifiers, the recognition results were not
affected. The recognition results for wheat diseases and grape
diseases obtained by using RBF neural networks were listed in
TABLE III. When GRNNs and PNNs were used as the
classifiers for the image recognition of wheat diseases, there
were some differences between the recognition results of these
two kinds of methods, and the recognition results were shown
in TABLE II with the note describing the differences.
However, the recognition results when GRNNs were used as
the classifiers for the image recognition of grape diseases were
the same as that when PNNs were used, and the corresponding
results were shown in the same table (TABLE V).
When BP networks were used as the classifiers, the
recognition results for wheat diseases were shown in TABLE I. The fitting accuracy and the prediction accuracy were both
100% when CST was used as inputs with the number of
neurons in the hidden layer equal to 6. The fitting accuracy was
98.33% and the prediction accuracy was 100% when ShaTex
was used as inputs with the number of neurons in the hidden
layer equal to 11. When ShaTex was used as inputs with the
number of neurons in the hidden layer equal to 5 and CST was
used as inputs with the number of neurons in the hidden layer
equal to 9, the fitting accuracies were 100% and the prediction
accuracies were 97.50%. When CST or ColTex was used as
inputs with the number of neurons in the hidden layer equal to
8, the fitting accuracy was 100% and the prediction accuracy
was 95%.
As shown in TABLE II, with some exceptions, most of the
recognition results for wheat diseases obtained by using
GRNNs and PNNs were the same. The best recognition results
were obtained when Tex was used as inputs with the value of
spread equal to 0.3; the fitting accuracy was 96.67% when
GRNN was used, the fitting accuracy was 95% when PNN was
used, and the prediction accuracies obtained by using these two
kinds of neural networks were both 100%. Using GRNNs as
the classifiers based on Sha, the fitting accuracy was 90% and
the prediction accuracy was 95% when the value of spread was
equal to 1.0, the fitting accuracy was 88.33% and the prediction
accuracy was 95% when the value of spread was equal to 1.6,
and the fitting accuracy was 88.33% and the prediction
accuracy was 92.50% when the value of spread was equal to
1.9 or 2.0. Using PNNs as the classifiers based on Sha, the
fitting accuracy was 88.33% and the prediction accuracy was
95% when the value of spread was equal to 1.0, the fitting
accuracy was 88.33% and the prediction accuracy was 92.50%
when the value of spread was equal to 1.6, and the fitting
accuracy was 88.33% and the prediction accuracy was 95%
when the value of spread was equal to 1.9 or 2.0. When Tex
was used as inputs with the value of spread equal to 1.6, the fitting
accuracy was 91.67% and the prediction accuracy was 97.50%
using GRNNs, however, the fitting accuracy was 91.67% and
the prediction accuracy was 100% using PNNs. The prediction
accuracies were 97.50% when ShaTex was used as inputs with
the values of spread equal to 0.3~0.7. The fitting accuracies
were 91.67% and the prediction accuracies were 100% when
ShaTex was used as inputs with the values of spread equal to
0.8~2.0. The fitting accuracies were 91.67% and the prediction
accuracies were 100% when Tex was used as inputs with the
values of spread equal to 0.6~1.5. The fitting accuracies were
91.67% and the prediction accuracies were 97.50% when Tex
was used as inputs with the values of spread equal to 0.4, 0.5
and 1.7~2.0.
The recognition results using RBF neural networks as the
classifiers were shown in TABLE III. For wheat diseases, acceptable recognition results were obtained when ColSha, ColTex, ShaTex, Tex or CST was used as inputs. For grape diseases, acceptable recognition results were obtained only when ColTex or CST was used as inputs. For
wheat diseases, the fitting accuracy was 100% and the
prediction accuracy was 95% when ColTex was used as inputs
with the value of spread equal to 0.5, the fitting accuracy was
100% and the prediction accuracy was 92.50% when CST was
used as inputs with the value of spread equal to 0.5, and the
fitting accuracy was 100% and the prediction accuracy was
90% when ColTex was used as inputs with the value of spread
equal to 0.4 or when Tex was used as inputs with the value of
spread equal to 0.2. For grape diseases, the best prediction
accuracy was 80% with the fitting accuracy equal to 100%
when CST was used as inputs with the value of spread equal to
0.5.
When BP networks were used as the classifiers, the recognition results for grape diseases were shown in TABLE IV.
TABLE I. FITTING RESULTS AND PREDICTION RESULTS FOR IMAGE RECOGNITION OF WHEAT DISEASES USING BP NETWORKS
Features  Number of neurons in the hidden layer  Fitting accuracy  Prediction accuracy    Features  Number of neurons in the hidden layer  Fitting accuracy  Prediction accuracy
Col 5 98.33% 80% ShaTex 3, 9 98.33% 82.50%
Col 8 98.33% 82.50% ShaTex 10 98.33% 92.50%
Col 10 100% 77.50% ShaTex 11 98.33% 100%
ColSha 3 98.33% 87.50% ShaTex 6 100% 80%
ColSha 5, 11 100% 75% ShaTex 7 100% 90%
ColSha 7, 8 100% 77.50% ShaTex 5 100% 97.50%
ColSha 10 100% 82.50% Tex 3 98.33% 77.50%
ColSha 6, 9 100% 85% Tex 12 98.33% 82.50%
ColSha 4, 12 100% 87.50% Tex 4, 6 98.33% 85%
ColTex 4 98.33% 82.50% Tex 8, 10 98.33% 90%
ColTex 10 100% 82.50% Tex 7 100% 77.50%
ColTex 6, 9 100% 85% Tex 5 100% 87.50%
ColTex 3, 7, 11 100% 87.50% CST 3 95% 77.50%
ColTex 12 100% 90% CST 10 98.33% 77.50%
ColTex 8 100% 95% CST 7 98.33% 80%
Sha 3 95% 92.50% CST 4, 12 98.33% 92.50%
Sha 4, 5, 11 98.33% 75% CST 11 100% 82.50%
Sha 7, 9 98.33% 77.50% CST 5 100% 87.50%
Sha 6 98.33% 85% CST 8 100% 95%
Sha 8 98.33% 90% CST 9 100% 97.50%
ShaTex 12 96.67% 85% CST 6 100% 100%
The fitting accuracies were 100% and the prediction accuracies were 91.43% when ColSha was used as inputs with the number of neurons in the hidden layer equal to 4 and when ShaTex was used as inputs with the number of neurons in the hidden layer equal to 5, 8 and 9. The fitting accuracy was 98% and the prediction accuracy was 91.43% when ColSha was used as inputs with the number of neurons in the hidden layer equal to 3. In the remaining cases, the prediction accuracies were less than 90%.
For grape diseases, the recognition results obtained by
using GRNNs and PNNs as the classifiers were shown in
TABLE V. The best prediction accuracy was 94.29% with
the fitting accuracy equal to 100% when ColSha was used as
inputs with the value of spread equal to 0.2. The prediction
accuracies were less than 90% in the remaining cases.
TABLE II. FITTING RESULTS AND PREDICTION RESULTS FOR IMAGE RECOGNITION OF WHEAT DISEASES USING GRNNS AND PNNS
Features  Spread  Fitting accuracy  Prediction accuracy    Features  Spread  Fitting accuracy  Prediction accuracy
Col 1.3~2.0 88.33% 82.50%    Sha 0.3, 0.4, 0.5, 1.1~1.5, 1.6a, 1.0b, 1.9b, 2.0b 88.33% 95%
Col 1.2 90% 82.50% Sha 0.6, 0.7 90% 92.50%
Col 0.5~1.1 90% 85% Sha 0.2, 0.9, 1.0a 90% 95%
Col 0.4 90% 87.50% Sha 0.1 93.33% 95%
Col 0.3 91.67% 87.50% ShaTex 0.5, 0.6, 0.7 91.67% 97.50%
Col 0.2 96.67% 85% ShaTex 0.8~2.0 91.67% 100%
Col 0.1 100% 82.50% ShaTex 0.3, 0.4 95% 97.50%
ColSha 0.7~2.0 88.33% 85% ShaTex 0.2 96.67% 92.50%
ColSha 0.6 90% 85% ShaTex 0.1 100% 92.50%
ColSha 0.4, 0.5 91.67% 87.50% Tex 0.4, 0.5, 1.7~2.0, 1.6a 91.67% 97.50%
ColSha 0.3 95% 90% Tex 0.6~1.5, 1.6b 91.67% 100%
ColSha 0.2 98.33% 87.50% Tex 0.2 95% 95%
ColSha 0.1 100% 80% Tex 0.3 96.67%/95%c 100%
ColTex 0.5~2.0 93.33% 95% Tex 0.1 100% 92.50%
ColTex 0.4 96.67% 95% CST 0.5~2.0 93.33% 95%
ColTex 0.3 98.33% 92.50% CST 0.4 96.67% 95%
ColTex 0.1 100% 92.50% CST 0.3 98.33% 92.50%
ColTex 0.2 100% 95% CST 0.1 100% 90%
Sha 0.8, 1.7, 1.8, 1.9a, 2.0a, 1.6b 88.33% 92.50%    CST 0.2 100% 95%
a. The fitting accuracy and the prediction accuracy were obtained by using GRNN as the classifier. b. The fitting accuracy and the prediction accuracy were
obtained by using PNN as the classifier. c. The fitting accuracy was 96.67% by using GRNN as the classifier and the fitting accuracy was 95% by using PNN as
the classifier.
TABLE III. FITTING RESULTS AND PREDICTION RESULTS FOR IMAGE RECOGNITION OF WHEAT DISEASES AND GRAPE DISEASES USING RBF NEURAL NETWORKS
Features Spread Fitting accuracy Prediction accuracy Features Spread Fitting accuracy Prediction accuracy
ColShaa 0.2, 0.3 100% 77.50% Texa 0.2 100% 90%
ColTexa 0.8, 1.1~1.6 100% 75% CSTa 0.7 100% 75%
ColTexa 0.6, 0.7 100% 80% CSTa 0.4, 0.6 100% 87.50%
ColTexa 0.4 100% 90% CSTa 0.5 100% 92.50%
ColTexa 0.5 100% 95% ColTexb 0.5 100% 77.14%
ShaTexa 0.2 100% 77.50% CSTb 0.6 100% 77.14%
ShaTexa 0.3 100% 82.50% CSTb 0.5 100% 80%
a. The results were for wheat diseases. b. The results were for grape diseases.
TABLE IV. FITTING RESULTS AND PREDICTION RESULTS FOR IMAGE RECOGNITION OF GRAPE DISEASES USING BP NETWORKS
Features  Number of neurons in the hidden layer  Fitting accuracy  Prediction accuracy    Features  Number of neurons in the hidden layer  Fitting accuracy  Prediction accuracy
Col 5 98% 77.14% Sha 6, 7, 10 98% 85.71%
Col 6, 9 100% 77.14% Sha 12 100% 82.86%
Col 4 100% 80% ShaTex 3 96% 82.86%
ColSha 9 98% 82.86% ShaTex 4 98% 88.57%
ColSha 3 98% 91.43% ShaTex 7 100% 80%
ColSha 10, 12 100% 82.86% ShaTex 11, 12 100% 88.57%
ColSha 7 100% 85.71% ShaTex 5, 8, 9 100% 91.43%
ColSha 5 100% 88.57% Tex 3 92% 85.71%
ColSha 4 100% 91.43% Tex 4 94% 85.71%
ColTex 13 100% 77.14% Tex 5, 8 100% 88.57%
ColTex 11 100% 80% CST 8, 10 98% 80%
ColTex 10 100% 82.86% CST 13 100% 77.14%
Sha 3 86% 82.86% CST 5 100% 82.86%
Sha 5 90% 82.86% CST 11 100% 88.57%
Sha 9, 11 98% 82.86%
TABLE V. FITTING RESULTS AND PREDICTION RESULTS FOR IMAGE RECOGNITION OF GRAPE DISEASES USING GRNNS AND PNNS
Features Spread Fitting accuracy Prediction accuracy Features Spread Fitting accuracy Prediction accuracy
Col 0.3 86% 82.86% Sha 0.1 90% 85.71%
Col 0.2 92% 80% ShaTex 0.4 76% 80%
ColSha 0.3 90% 85.71% ShaTex 0.3 90% 85.71%
ColSha 0.1 100% 82.86% ShaTex 0.2 92% 85.71%
ColSha 0.2 100% 94.29% ShaTex 0.1 98% 82.86%
ColTex 0.6 76% 77.14% Tex 0.3 82% 80%
ColTex 0.5 88% 80% Tex 0.2 88% 80%
ColTex 0.4 94% 77.14% CST 0.5 86% 77.14%
ColTex 0.3 96% 77.14% CST 0.4 96% 80%
ColTex 0.1 100% 82.86% CST 0.3 98% 82.86%
ColTex 0.2 100% 85.71% CST 0.1 100% 80%
Sha 0.3 80% 77.14% CST 0.2 100% 88.57%
Sha 0.2 84% 85.71%
IV. CONCLUSIONS AND DISCUSSION
In this study, PCA and neural networks were used to implement the image recognition of plant diseases based on the color features, shape features and texture features extracted from disease images and their combinations. As the results
showed, reducing the dimensions of the feature data extracted
from the images of plant diseases could reduce the running
time of the neural networks and acceptable recognition results
could be obtained. The method used in this study could also be
used for the image recognition of other plant diseases. In the
practical application, PCA could be used to reduce the
dimensions of the data extracted from the plant disease images
and then optimal neural networks could be constructed for
plant disease identification.
The optimal results for wheat diseases and grape diseases
obtained by using SVM based on the same images used in this
study were reported by Li [29]. The optimal result for wheat
stripe rust and wheat leaf rust was that the fitting accuracy was
96.67% and the prediction accuracy was 100%, and the optimal
result for grape downy mildew and grape powdery mildew was
that the fitting accuracy was 100% and the prediction accuracy
was 91.43% [29]. In this study, for the two kinds of wheat
diseases, the optimal result based on PCA and BP networks
was that the fitting accuracy and the prediction accuracy were
both 100%, that based on PCA and GRNNs was the same as
that obtained by Li, that based on PCA and RBF neural
networks was that the fitting accuracy was 100% and the
prediction accuracy was 95%, and that based on PCA and
PNNs was that the fitting accuracy was 95% and the prediction
accuracy was 100%. In this study, for the two kinds of grape
diseases, the optimal result based on PCA and BP networks
was the same as that obtained by Li, that based on PCA and
GRNNs was that the fitting accuracy was 100% and the
prediction accuracy was 94.29%, that based on PCA and PNNs
was the same as that based on PCA and GRNNs, and that based
on PCA and RBF neural networks was that the fitting accuracy
was 100% and the prediction accuracy was 80%.
PCA could reduce the dimensions of the obtained data while retaining most of the information in the data, reduce the number of neurons in the input layer and increase the speed of the neural networks. However, PCA is a kind of linear projection and cannot correctly handle non-linear data. In the image recognition of plant diseases, other methods should be used to reduce the dimensions of the feature data if the extracted features are non-linear.
If the image recognition of plant diseases is carried out on personal computers, it may not be necessary to reduce the dimensions of the obtained data, and the recognition speed would not be affected significantly because of increasing computer performance and the strong problem-solving ability of neural networks. However, when image recognition of plant diseases based on many extracted features is carried out via the internet, it is better to reduce the dimensions of the obtained data first and then conduct image recognition using neural networks.
REFERENCES
[1] H. G. Wang, Z. H. Ma, M. R. Zhang, and S. D. Shi, “Application of
Computer Technology in Plant Pathology”, Agriculture Network
Information. vol. 19, pp. 31–34, October 2004.
[2] C. C. Tucker and S. Chakraborty, “Quantitative assessment of lesion
characteristics and disease severity using digital image processing”, J.
Phytopathology, vol. 145, pp. 273–278, July 1997.
[3] I. S. Ahmad, J. F. Reid, M. R. Paulsen, and J. B. Sinclair, “Color
classifier for symptomatic soybean seeds using image processing”, Plant
Disease, vol. 83, pp. 320–327, April 1999.
[4] J. W. Olmstead, A. Gregory, and G. A. Lang, “Assessment of severity of
powdery mildew infection of sweet cherry leaves by digital image
analysis”, Hortscience, vol. 36, pp. 107–111, January 2001.
[5] J. C. Lai, S. K. Li, B. Ming, N. Wang, K. R. Wang, R. Z. Xie, and S. J.
Gao, “Advances in research on computer-vision diagnosis of crop
diseases”, Scientia Agricultura Sinica, vol. 42, pp. 1215–1221, April
2009.
[6] Z. X. Guan, Q. Yao, B. J. Yang, J. Hu, and J. Tang, “Application of
digital image processing technology in recognizing the diseases, pests,
and weeds from crops”, Scientia Agricultura Sinica, vol. 42, pp. 2349–
2358, July 2009.
[7] C. H. Bock, A. Z. Cook, P. E. Parker, and T. R. Gottwald, “Automated
image analysis of the severity of foliar citrus canker symptoms”, Plant
Disease, vol. 93, pp. 660–665, June 2009.
[8] C. H. Bock, G. H. Poole, P. E. Parker, and T. R. Gottwald, “Plant
disease severity estimated visually, by digital photography and image
analysis, and by hyperspectral imaging”, Critical Reviews in Plant
Sciences, vol. 29, pp. 59–107, March 2010.
[9] H. Al-Hiary, S. Bani-Ahmad, M. Reyalat, M. Braik, and Z.
ALRahamneh, “Fast and accurate detection and classification of plant
diseases”, International Journal of Computer Applications, vol. 17, pp.
31–38, March 2011.
[10] G. L. Li, Z. H. Ma, and H. G. Wang, “An automatic grading method of
severity of single leaf infected with grape downy mildew based on
image processing”, Journal of China Agricultural University, vol. 16, pp.
88–93, December 2011.
[11] N. Wang, K. R. Wang, R. Z. Xie, J. C. Lai, B. Ming, and S. K. Li,
“Maize leaf disease identification based on Fisher discrimination
analysis”, Scientia Agricultura Sinica, vol. 42, pp. 3836–3842,
November 2009.
[12] Z. X. Cen, B. J. Li, Y. X. Shi, H. Y. Huang, J. Liu, N. F. Liao, J. Feng,
“Discrimination of cucumber anthracnose and cucumber brown speck
base on color image statistical characteristics”, Acta Horticulturae
Sinica, vol. 34, pp. 1425–1430, June 2007.
[13] D. W. Zhang and J. Wang, “Design on image features recognition
system of cucumber downy mildew based on BP algorithm”, Journal of
Shenyang Jianzhu University (Natural Science), vol. 25, pp. 574–578,
May 2009.
[14] L. B. Liu and G. M. Zhou, “Identification method of rice leaf blast using
multilayer perception neural network”, Transactions of the CSAE, vol.
25(Supp.2), pp. 213–217, October 2009.
[15] Z. R. Li and D. J. He, “Research on identify technologies of apple’s
disease based on mobile photograph image analysis”, Computer
Engineering and Design, vol. 31, pp. 3051–3053, 3095, July 2010.
[16] D. T. Zhao, Y. H. Chai, and C. L. Zhang, “Inspection of soybean frogeye
spot based on image procession”, Journal of Northeast Agricultural
University, vol. 41, pp. 119–124, April 2010.
[17] Y. W. Tian, T. L. Li, C. H. Li, Z. L. Piao, G. K. Sun, and B. Wang,
“Method for recognition of grape disease based on support vector
machine”, Transactions of the CSAE, vol. 23, pp. 175–180, June 2007.
[18] D. Ren, H.Y. Yu, and J. H. Wang, “Research on plant disease
recognition based on linear combination of the kernel function support
vector machine”, Journal of Agricultural Mechanization Research, vol.
29, pp. 41–43, September 2007.
[19] Y. W. Tian, and Y. Niu, “Applied research of support vector machine on
recognition of cucumber disease”, Journal of Agricultural Mechanization
Research, vol. 31, pp. 36–39, March 2009.
[20] A. Camargo and J. S. Smith, “Image pattern classification for the
identification of disease causing agents in plants”, Computers and
Electronics in Agriculture, vol. 66, pp. 121–125, February 2009.
[21] G. L. Li, Z. H. Ma, and H. G. Wang, “Image recognition of grape downy
mildew and grape powdery mildew based on support vector machine”,
CCTA 2011, Part III, IFIP AICT 370, 2012, pp. 151–162.
[22] Y. L. Cui, P. F. Cheng, X. Z. Dong, Z. H. Liu, and S. X. Wang, “Image
processing and extracting color features of greenhouse diseased leaf”,
Transactions of the CSAE. vol. 21(Supp.), pp. 32–35, December 2005.
[23] P. Sanyala, and S. C. Patel, “Pattern recognition method to detect two
diseases in rice plants”, Imaging Science Journal, vol. 56, pp. 319–325,
December 2008.
[24] Y. X. Zhao, K. R. Wang, Z. Y. Bai, S. K. Li, R. Z. Xie, and S. J. Gao,
“Research of maize leaf disease identifying system based image
recognition”, Scientia Agricultura Sinica, vol. 40, pp. 698–703, April
2007.
[25] R. Pydipati, T. F. Burks, and W. S. Lee, “Identification of citrus disease
using color texture features and discriminant analysis”, Computers and
Electronics in Agriculture, vol. 52, pp. 49–59, June 2006.
[26] Z. Y. Liu, J. F. Huang, R. X. Tao, and H. Z. Zhang, “Estimating rice
brown spot disease severity based on principal component analysis and
radial basis function neural network”, Spectroscopy and Spectral
Analysis, vol. 28, pp. 2156–2160, September 2008.
[27] B. Li, Z. Y. Liu, J. F. Huang, L. L. Zhang, W. Zhou, and J. J. Shi,
“Hyperspectral identification of rice diseases and pests based on
principal component analysis and probabilistic neural network”,
Transactions of the CSAE, vol. 25, pp. 143–147, September 2009.
[28] G. L. Li, Z. H. Ma, C. Huang, Y. W. Chi, and H. G. Wang,
“Segmentation of color images of grape diseases using K_means
clustering algorithm”, Transactions of the CSAE, vol. 26(Supp.2), pp.
32–37, December 2010.
[29] G. L. Li, “Preliminary study on automatic diagnosis and classification
method of plant diseases based on image recognition technology”,
Beijing: China Agricultural University, 2011, pp. 1–64.