Diagnosis and Classification of Grape Leaf
Diseases using Neural Networks
Sanjeev S Sannakki1, Vijay S Rajpurohit2, V B Nargund3, Pallavi Kulkarni4*
1,2,4 Dept. of Computer Science, Gogte Institute of Technology Belgaum, Karnataka, India
3Department of Plant Pathology, University of Agricultural Sciences, Dharwad, India
Abstract—Plant diseases cause significant damage and economic losses in crops; reducing them through early diagnosis therefore substantially improves product quality. Erroneous diagnosis of a disease and its severity leads to inappropriate use of pesticides. The goal of the proposed work is to diagnose disease by applying image processing and artificial intelligence techniques to images of grape plant leaves. In the proposed system, a grape leaf image with a complex background is taken as input. Thresholding is deployed to mask green pixels, and the image is denoised using anisotropic diffusion. Grape leaf disease segmentation is then performed using K-means clustering, and the diseased portion is identified from the segmented images. The best results were observed when a feed-forward back-propagation neural network was trained for classification.
Index Terms—plant disease identification, Feed forward
neural network, image processing, k-means, co-occurrence
matrix, feature extraction
Grape (Vitis vinifera) cultivation, or viticulture, is one of the most remunerative farming enterprises in India [1]. Grapes originated in Western Asia and Europe. The fruit is eaten fresh or made into juice, fermented into wine and brandy, and dried into raisins [2]. Grapes also have medicinal properties used to treat many ailments.
Grapes generally require a hot and dry climate during their growth and fruiting periods. They are successfully grown in areas where the temperature ranges from 15°C to 40°C. High temperatures above 40°C during fruit growth and development reduce fruit set and consequently the berry size. Low temperatures below 15°C followed by forward pruning impair budbreak, leading to crop failure. Grapes can be cultivated in a variety of soils, including sandy loams, red sandy soils, sandy clay loams, shallow to medium black soils and red loams [16].
Grape suffers from huge crop losses on account of
downy mildew, powdery mildew and anthracnose [1]. In
case of downy mildew, the losses are very high when the
clusters are attacked before fruit set. Entire clusters decay,
dry and drop down [16]. Plant disease is one of the crucial causes of reduced quantity and degraded quality of the product. Naked-eye observation by experts is the main approach adopted in practice for detection and identification of plant diseases, but it is prohibitively expensive and time consuming in large farms [3]. Further, in some
developing countries, farmers may need to go long distances
to contact experts. Diseases are managed by adjusting the
pruning time and using various fungicides [2]. Observations
during research at NRCG, Pune show that precision farming
i.e. using information technology for decision making has
improved the yield and quality of crops.
Various researchers have proposed image-processing and pattern-recognition techniques for agricultural applications such as detecting weeds in a field, sorting fruits and vegetables, and detecting diseases. Automatic detection of plant diseases is an essential research topic, as it can benefit the monitoring of large fields of crops by detecting the symptoms of diseases as soon as they appear on plant leaves. A fast, inexpensive and accurate method of detecting plant disease is therefore of great significance. Excessive use of pesticides for plant disease treatment increases costs and raises the danger of toxic residue levels on agricultural products. This requires that the disease, and the stage it has reached, be identified accurately. Hence an efficient disease identification and diagnosis model is required.
Al-Bashish, Braik and Bani-Ahmad [4] proposed leaf disease detection and classification for early scorch, cottony mold, late scorch and tiny whiteness, using K-means-based segmentation, the color co-occurrence method (CCM) for texture features, and a BPNN to classify diseases into one of six disease classes. Camargo and Smith [5] convert the RGB image of the diseased plant or leaf into the H, I3a and I3b color transformations; the transformed image is then segmented by analyzing the distribution of intensities in a histogram, the extracted region is post-processed to remove pixel regions not considered part of the target, and the neighborhood of each pixel and its gradient are analyzed. Sannakki, Rajpurohit and Nargund [6] propose K-means clustering for segmentation and disease grading by fuzzy logic, where the intensity of the disease is decided by the area of the diseased portion. Al-Hiary, Bani-Ahmad and Reyalat [7] propose masking green pixels before feature extraction, which results in more accurate classification.
IEEE - 31661
4th ICCCNT 2013
July 4-6, 2013, Tiruchengode, India
All these researchers gathered leaf samples and took images in a controlled environment, i.e. a plain background, good lighting conditions and a constant camera distance. Meunkaewjinda, Kumsawat et al. [8] propose SOFM and BPNN to recognize grape leaf color, MSOFM for segmentation, and GA and SVM for classification; they do process images with complex backgrounds, but use many machine learning algorithms, which makes the system complex.
Figure 1: Decision Support System
The proposed system aims at processing images with complex backgrounds, varying lighting conditions and varying camera distances. This makes the system robust enough to work under various field conditions. Figure 1 shows the corresponding block diagram, an image-processing pipeline for grape leaves in which the following steps are undertaken.
A. Image acquisition: The first step is to collect the sample images needed to train the system; in the working system, this step provides the input or query image. Leaf images are captured using a Nikon Coolpix P510 16.1-megapixel digital camera and are used for training and testing the system. All images are stored in standard JPG format. In the present study, images were captured in different regions such as Pune, Bijapur and Sangli, and expert advice was taken for identification. Some images were downloaded from the internet to diversify the imaging conditions. The images gathered include leaves affected by the two major diseases found in India, downy mildew and powdery mildew.
B. Background removal: In this step, the input image is resized to a standard size of 300x300. Then, the mostly green colored pixels are identified: if the green component of a pixel is below a threshold (70 in the present work), its red, green and blue values are set to zero and its green channel is then set to 255. This is called masking green pixels. It speeds up the processing in the next step and also improves accuracy [7].
A mask is a black and white image of the same dimensions as the original image (or the region of interest being worked on). Each pixel in the mask therefore has a value of either 0 (black) or 1 (white), as shown in Figure 2. When executing operations on the image, the mask restricts the result to the pixels that are 1 (selected, active or white), so the operation applies only to the selected parts of the image.
Figure 2: The query image, RGB channels, mask and resulting image
after masking.
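The masking rule described above can be sketched in NumPy as follows. This is a minimal sketch, not the authors' code: the function name is illustrative, the resize to 300x300 is assumed to have happened already, and the threshold of 70 is the one stated in the text.

```python
import numpy as np

def mask_green_pixels(rgb, threshold=70):
    """Mask pixels whose green component is below the threshold:
    zero all three channels, then set the green channel to 255,
    following the masking rule described in the text."""
    out = rgb.copy()
    weak_green = rgb[:, :, 1] < threshold   # green component below threshold
    out[weak_green] = 0                     # zero R, G and B of those pixels
    out[weak_green, 1] = 255                # then set their green channel to 255
    return out
```

Pixels that survive the mask keep their original values, so later steps operate only on the unmasked regions.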
C. Preprocessing: The masked image is then enhanced by five iterations of Anisotropic Diffusion [9] to preserve the information of the affected portion. The diffused image is shown in Figure 3. Anisotropic diffusion is a generalization of isotropic diffusion: it produces a family of parameterized images, each of which is a combination of the original image and a filter that depends on the local content of the original image. As a consequence, anisotropic diffusion is a non-linear and space-variant transformation of the original image.
Non-linear, space-variant method: Equation 1 gives the gradient of the brightness function, which serves as the edge estimator E of Equation 2:
∇I = (∂I/∂x, ∂I/∂y)   (1)
E(x, y, t) = ∇I(x, y, t)   (2)
The diffusion of Equation 3,
I_t = div( g(||∇I||) ∇I )   (3)
will not only preserve but also sharpen the edges if g(·) is chosen properly. The first of the two conductance functions proposed by Perona and Malik, g(||∇I||) = exp(−(||∇I||/K)²), is used. The H component of the HSV color space is extracted to reduce the illumination effect.
Figure 3: Filtered image after Anisotropic diffusion.
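A minimal NumPy sketch of Perona-Malik diffusion with the first conductance function is given below. The kappa and lam values are illustrative choices not stated in the text, and wrap-around borders (via np.roll) are used for brevity.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=5, kappa=30.0, lam=0.2):
    """Perona-Malik diffusion using the first conductance function
    g(x) = exp(-(x/kappa)^2). Five iterations match the text; kappa
    and lam are illustrative. Borders wrap around for simplicity."""
    img = img.astype(np.float64)
    g = lambda d: np.exp(-(d / kappa) ** 2)   # conductance (edge-stopping) term
    for _ in range(n_iter):
        # nearest-neighbour brightness differences (N, S, E, W)
        dN = np.roll(img, 1, axis=0) - img
        dS = np.roll(img, -1, axis=0) - img
        dE = np.roll(img, -1, axis=1) - img
        dW = np.roll(img, 1, axis=1) - img
        # explicit update: diffuse more where the gradient is small
        img = img + lam * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return img
```

In the paper's pipeline, this would be applied to the H channel extracted from the masked leaf image.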
D. Segmentation: Clustering is a method by which large sets of data are grouped into clusters of smaller sets of similar data. In the present work, k-means clustering [10] is used to segment the image into six groups, which was found to be optimum, as shown in Figure 4. One or more clusters (more when several diseases are present at a time) contain the diseased portion of the leaf. The a and b components of the L*a*b color space are extracted before clustering.
K-means clustering algorithm:
x1, …, xN are the observed data points (vectors). Each observation xi is assigned to exactly one cluster, and C(i) denotes the cluster number of the i-th observation.
For a given cluster assignment C of the data points, compute the cluster means mk:
mk = ( Σ_{i: C(i)=k} xi ) / Nk,   k = 1, …, K,
where Nk is the number of observations assigned to cluster k.
For the current set of cluster means, each observation is assigned to its nearest mean:
C(i) = argmin_{1≤k≤K} ||xi − mk||²
Iterate the last two steps until the assignments no longer change.
Figure 4: Six clusters formed by K-means clustering.
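The alternating two-step loop above can be sketched in plain NumPy. This is a sketch, not the authors' implementation: the cluster count, iteration budget and seed are illustrative, and the input would be the (a, b) pixel values flattened to an N x 2 array.

```python
import numpy as np

def kmeans(points, k=6, n_iter=20, seed=0):
    """Plain NumPy k-means, alternating the two steps from the text:
    compute cluster means, then reassign each point to its nearest mean."""
    rng = np.random.default_rng(seed)
    # initialize the means with k distinct observed points
    means = points[rng.choice(len(points), size=k, replace=False)].astype(np.float64)
    labels = np.zeros(len(points), dtype=int)
    for _ in range(n_iter):
        # assignment step: C(i) = argmin_k ||x_i - m_k||^2
        dists = ((points[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # update step: m_k = mean of the points assigned to cluster k
        for j in range(k):
            if np.any(labels == j):
                means[j] = points[labels == j].mean(axis=0)
    return labels, means
```

Reshaping the returned labels back to the image dimensions yields the six segments of Figure 4.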
E. Extract lesion: Once the image is divided into six clusters, the mean of each cluster is calculated and the means are sorted in ascending order. It is observed that the downy-affected lesion appears in the second cluster of this ordering (Figure 5) and the powdery-affected lesion in the sixth. This also holds for leaves carrying lesions of both diseases at the same time.
Figure 5: (a) Downy affected region (b) Powdery affected region
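The lesion-selection rule, sorting clusters by their mean value and picking a fixed rank, can be sketched as follows (the function name is illustrative; the text reports the downy lesion at rank 2 and the powdery lesion at rank 6 of the ascending ordering):

```python
import numpy as np

def rank_clusters_by_mean(values, labels, k=6):
    """Return cluster indices ordered by ascending mean value, so that
    order[1] selects the second cluster (downy in the text) and
    order[5] the sixth (powdery)."""
    cluster_means = np.array([values[labels == j].mean() for j in range(k)])
    return np.argsort(cluster_means)   # ascending order of cluster means
```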
F. Feature extraction: The next step is to extract texture features of the extracted diseased portions. This is done by calculating the Gray Level Co-occurrence Matrix (GLCM) [12]. The color co-occurrence texture analysis method was developed through the use of spatial gray-level dependence matrices (SGDMs). Co-occurrence matrices measure the probability that a pixel of one particular gray level will occur at a given distance and orientation from a pixel of a second particular gray level.
The SGDMs are represented by the function P(i, j, d, θ), where i represents the gray level at location (x, y) in the image and j represents the gray level of the pixel at distance d from (x, y) at orientation angle θ; i is the row index and j the column index of the SGDM matrix P(i, j, d, θ). Figure 6 shows the nearest-neighbour mask, where the reference pixel (x, y) appears as an asterisk. All eight neighbors shown are one pixel away from the reference pixel '*' and are numbered clockwise from one to eight. The neighbors at positions 1 and 5 are both considered to be at an orientation angle of 0°, while positions 8 and 4 are considered to be at an angle of 45°.
6 7 8
5 * 1
4 3 2
Figure 6: Directions considered in Co-occurrence matrix
The co-occurrence matrix is then normalized as
p(i, j, 1, 0) = P(i, j, 1, 0) / Σ_{i=1}^{Ng} Σ_{j=1}^{Ng} P(i, j, 1, 0)
where P(i, j, 1, 0) is the intensity co-occurrence matrix and Ng represents the total number of intensity levels.
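Building and normalizing P(i, j, d, θ) for the 0° direction can be sketched as below. This is a sketch: the number of gray levels is an assumed parameter, and the input is expected to be already quantized to integer levels.

```python
import numpy as np

def glcm(gray, d=1, offset=(0, 1), levels=8):
    """Build and normalize a gray-level co-occurrence matrix.
    offset=(0, 1) is the 0-degree neighbour (one pixel to the right);
    (-1, 1) would give the 45-degree neighbour of Figure 6."""
    dr, dc = offset
    rows, cols = gray.shape
    P = np.zeros((levels, levels), dtype=np.float64)
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr * d, c + dc * d
            if 0 <= r2 < rows and 0 <= c2 < cols:
                P[gray[r, c], gray[r2, c2]] += 1   # count the (i, j) pair
    return P / P.sum()                             # normalize to probabilities
```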
Texture features are useful discriminators when target images do not have well-defined color or shape [16]. The nine texture features listed in Table I are extracted for each image and used in the next step to train the neural network for classification.
Classification: A feed-forward back-propagation neural network classifier [13] [15] consisting of three layers, namely an input layer, a hidden layer and an output layer, is used.
Training the back-propagation neural network: The network is trained with the available data. Initially, the network predicts an output for one input vector whose true output is known; this combination of known output and input vector is called a training sample. The predicted output is compared to the known value, and the weights on the arcs are adjusted depending on how the prediction compares with the actual result. A sigmoid transfer function is used for generating the output at each stage. The input layer has 9 nodes, corresponding to the 9 texture features of the H band of the lesion area listed in Table I: contrast, uniformity, maximum probability, homogeneity, inverse difference moment, difference variance, diagonal variance, entropy and correlation. The output layer contains two neurons; this module assigns the appropriate disease class, i.e. downy or powdery.
Table I. Mathematical formulations of texture features [14]
No. | Feature | Formula
1. | Contrast | Σ_{i,j} |i − j|² p(i, j, d, θ)
2. | Uniformity (Energy) | Σ_{i,j} p(i, j, d, θ)²
3. | Maximum probability | max_{i,j} p(i, j, d, θ)
4. | Homogeneity | Σ_{i,j} p(i, j, d, θ) / (1 + |i − j|)
5. | Inverse difference moment of order 2 | Σ_{i,j} p(i, j, d, θ) / (1 + (i − j)²)
6. | Difference variance | Variance of Σ_{i,j} |i − j| p(i, j, d, θ)
7. | Diagonal variance | Variance of p(i, j, d, θ)
8. | Entropy | −Σ_{i,j} p(i, j, d, θ) log p(i, j, d, θ)
9. | Correlation | Σ_{i,j} (i − µ_i)(j − µ_j) p(i, j, d, θ) / (σ_i σ_j)
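A subset of the Table I features can be computed from a normalized co-occurrence matrix p as follows. This is a sketch: difference variance and diagonal variance are omitted for brevity, and the epsilon guard is an implementation detail not in the table.

```python
import numpy as np

def texture_features(p):
    """Texture features of a normalized co-occurrence matrix p,
    following Table I (a representative subset)."""
    i, j = np.indices(p.shape)                 # row and column index grids
    eps = 1e-12                                # guards log(0) and divide-by-zero
    mu_i, mu_j = (i * p).sum(), (j * p).sum()  # marginal means
    sd_i = np.sqrt((((i - mu_i) ** 2) * p).sum())
    sd_j = np.sqrt((((j - mu_j) ** 2) * p).sum())
    return {
        "contrast": (np.abs(i - j) ** 2 * p).sum(),
        "uniformity": (p ** 2).sum(),
        "max_probability": p.max(),
        "homogeneity": (p / (1 + np.abs(i - j))).sum(),
        "inverse_difference_moment": (p / (1 + (i - j) ** 2)).sum(),
        "entropy": -(p * np.log(p + eps)).sum(),
        "correlation": ((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j + eps),
    }
```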
The data consisted of 16 images of powdery mildew (class 1) and 17 images of downy mildew (class 2). The MATLAB 7.1 Neural Network Pattern Recognition tool was used for training. Of the 33 images, 29 were used for training and 2 each for testing and validation. The data were given in two files, input and target: the input file has 9 rows representing the texture features and 33 columns representing the sample images, and the target file has two rows with binary values, [0 1] for downy and [1 0] for powdery. Training gives good validation results, as shown in Figure 7. The confusion matrix in Figure 8(a) shows that images of class 1 and class 2 are classified properly, and Figure 8(b) shows the true positive (sensitivity) and false positive rates of the system.
Figure 7: Validation results after training
Figure 8: Sensitivity and specificity of system and confusion matrix
The green line meeting the upper left corner of the ROC plot in Figure 8 shows the perfect result of a binary classification test. During training the system gave 100% correct results, which indicates it will work almost accurately. In the confusion matrix, green boxes indicate correct classifications and red boxes indicate wrong ones. The model that used hue features gives accurate results, reaching 100%.
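Although the paper trains with MATLAB's nprtool, the underlying back-propagation loop of a one-hidden-layer sigmoid network can be sketched in NumPy as below. Layer sizes, learning rate and epoch count are illustrative choices, and bias terms are omitted to keep the sketch minimal.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_bpnn(X, T, hidden=10, lr=0.2, epochs=3000, seed=0):
    """One-hidden-layer feed-forward network trained by back-propagation
    of the squared error, with a sigmoid transfer function at each stage
    as in the text (9 inputs and 2 outputs in the paper's setup)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))
    W2 = rng.normal(0.0, 0.5, (hidden, T.shape[1]))
    for _ in range(epochs):
        H = sigmoid(X @ W1)               # hidden-layer activations
        Y = sigmoid(H @ W2)               # network outputs
        dY = (Y - T) * Y * (1.0 - Y)      # output-layer delta
        dH = (dY @ W2.T) * H * (1.0 - H)  # hidden-layer delta
        W2 -= lr * (H.T @ dY)             # gradient-descent weight updates
        W1 -= lr * (X.T @ dH)
    return W1, W2

def predict(X, W1, W2):
    return sigmoid(sigmoid(X @ W1) @ W2)
```

With the paper's encoding, a [0 1] output row would indicate downy and [1 0] powdery; taking the argmax of the two output neurons gives the predicted class.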
Figure 9: GUI developed using the GUIDE toolbox; downy mildew is recognized by the system
The study involved collecting leaf samples from different regions. Work was carried out to investigate the use of computer vision for classifying grape leaf diseases. Two classes of grape leaves, i.e. downy mildew and powdery mildew, were considered in the experiments. Algorithms based on image-processing techniques, feature extraction and classification were deployed. The feature extraction process used the color co-occurrence methodology, which uses the texture of an image to derive unique features that represent that image. The nprtool of MATLAB 7.1 was used to train the neural network for pattern recognition, achieving a training accuracy of 100% when using hue features alone.
Future work can extend these techniques to samples of healthy leaves and other grape diseases such as anthracnose. Instead of K-means, other segmentation techniques can be used to extract the lesion more accurately. Besides hue, models can be constructed with combinations of other color components such as saturation and intensity, and the results compared.
ACKNOWLEDGMENT
The authors thank Dr. S. D. Sawant, National Research Centre for Grapes, Manjri Farm, Solapur Road, Pune, and Dr. A. M. Sheikh, Regional Agriculture Research Station, Hitnalli Farm, Bijapur.
REFERENCES
[1] A report of the expert consultation on viticulture in Asia and the
Pacific. May 2000, Bangkok, Thailand. RAP
[2] K. Soytong, W. Srinon, Application of antagonistic fungi to
control anthracnose disease of grape. Journal of Agricultural
Technology, May 2005.
[3] Weizheng, S., Yachun, W., Zhanliang, C., and Hongda, W.
(2008). Grading Method of Leaf Spot Disease Based on Image
Processing. In Proceedings of the 2008 international Conference
on Computer Science and Software Engineering - Volume 06
(December 12 - 14, 2008). CSSE. IEEE Computer Society,
Washington, DC, 491-494. DOI=
[4] Al-Bashish, D., M. Braik and S. Bani-Ahmad, (2011). Detection
and classification of leaf diseases using K-means-based
segmentation and neural-networks-based classification. Inform.
Technol. J., 10: 267-275. DOI: 10.3923/itj.2011.267.275
[5] Camargo, A. and Smith, J. S., (2009). An image processing based
algorithm to automatically identify plant disease visual
symptoms, Biosystems Engineering, Volume 102, Issue 1,
January 2009, Pages 9-21, ISSN 1537-5110, DOI:
[6] Sanjeev S Sannakki, Vijay S Rajpurohit, V B Nargund,
(2011) Leaf Disease Grading by Machine Vision and Fuzzy
Logic, Int. J. Comp. Tech. Appl., Vol 2 (5), 1709-1716
[7] H. Al-Hiary, S. Bani-Ahmad, M. Reyalat, M. Braik and Z.
ALRahamneh. Fast and Accurate Detection and Classification
of Plant Diseases. International Journal of Computer
Applications (0975 – 8887) Volume 17– No.1, March 2011
[8] A. Meunkaewjinda, P. Kumsawat, K. Attakitmongcol, Grape
leaf disease detection from color imagery using hybrid
intelligent system. Proceedings of ECTI-CON 2008.
[9] Pietro Perona, Jitendra Malik, Scale-space and edge
detection using anisotropic diffusion, IEEE Transactions on
Pattern Analysis and Machine Intelligence, vol. 12, no. 7, July 1990.
[10] MacQueen, J.B., 1967. Some methods for classification and
analysis of multivariate observations. Proc. Berkeley Symp.
Math. Statist. Prob., 1:281-297.
[11] R. Pydipati, T.F. Burks, W.S.Lee. Identification of citrus
disease using color texture features and discriminant analysis.
Computers and Electronics in Agriculture 52 (2006) 49–59.
[12] I.A. Basheer, M. Hajmeer , Artificial neural networks
fundamentals, computing, design, and application, Journal of
Microbiological Methods 43 (2000) 3–31.
[13] Huang, K.Y., Application of artificial neural network for detecting
Phalaenopsis seedling diseases using color and texture features.
Computers and Electronics in Agriculture, pp. 3-11, 2007.
[14] Prasad Babu, M. S. and Srinivasa Rao, B. (2010) Leaves
recognition using back-propagation neural network - advice for
pest and disease control on crops. Technical report, Department
of Computer Science & Systems Engineering, Andhra
University, India, on May 2010.
[15] A. Camargo, J.S. Smith, Image pattern classification for the
identification of disease causing agents in plants, Computers
and Electronics in Agriculture 66 (2009) 121-125.
[16], gra002.pdf
IEEE - 31661
4th ICCCNT 2013
July 4-6, 2013, Tiruchengode, India
... Sannakki et al. [37] pre-processed images using anisotropic diffusion to produce space-variant and non-linear changes to the original images. Khirade et al. [57] utilized various image pre-processing techniques, including image smoothing, clipping, image enhancement, color conversion, and histogram equalization, to eliminate noise from the images. ...
... Dubey et al. [3] utilized the color coherence vector, global color histogram, complete local binary pattern, and local binary pattern methods for retrieving/extracting features. Sannakki et al. [37] utilized the color co-occurrence method for extracting texture features. Rastogi et al. [58] utilized the gray level co-occurrence matrix (GLCM) for extracting features. ...
... Mahlein et al. [72] examined the leaves of sugar beet plants to identify three different plant illnesses using spectral disease indices. Sannakki et al. [37] utilized a feed-forward back propagation neural network (BPNN) for identifying powdery mildew and downy mildew from grape leaves. Es-saddy et al. [38] employed a serial combination of two support vector machines for identifying different types of damage to leaves by Tuta absoluta, leaf miners, and thrips (pest insects), along with late blight and powdery mildew (pathogen symptoms). ...
Full-text available
A significant majority of the population in India makes their living through agriculture. Different illnesses that develop due to changing weather patterns and are caused by pathogenic organisms impact the yields of diverse plant species. The present article analyzed some of the existing techniques in terms of data sources, pre-processing techniques, feature extraction techniques, data augmentation techniques, models utilized for detecting and classifying diseases that affect the plant, how the quality of images was enhanced, how overfitting of the model was reduced, and accuracy. The research papers for this study were selected using various keywords from peer-reviewed publications from various databases published between 2010 and 2022. A total of 182 papers were identified and reviewed for their direct relevance to plant disease detection and classification, of which 75 papers were selected for this review after exclusion based on the title, abstract, conclusion, and full text. Researchers will find this work to be a useful resource in recognizing the potential of various existing techniques through data-driven approaches while identifying plant diseases by enhancing system performance and accuracy.
... It uses many processing steps and region-based classification but only recognizes the disease kind. To segregate the leaf diseases obtained from the augmented leaf dataset, a combination technique of regional-based CNNs and UNet(CRUN) is suggested in this research [18][19][20]. The proposed algorithm determines the severity of the disease in the leaf, a morphological technique is used for the segmented pictures. ...
Full-text available
In the fast-growing agricultural world, early detection of plant diseases is crucial for maintaining crop health and ensuring successful harvests. Advancements in computer vision technology have led to the development of advanced methods for diagnosing plant diseases. However, factors like lighting, weather, and the number of diseases in a single image can make it difficult to detect plant diseases. Traditional deep learning-based algorithms have drawbacks, such as high hardware investment, inference speed, and generalization. This research article aims to raise awareness among farmers about cutting-edge technology for detecting plant leaf disease in nightshade crops. The Enhance-Nightshade-CNN model was used to enhance the quality of nightshade crop leaf disease samples, achieving good accuracy compared to existing algorithms. The model accurately identified healthy and unhealthy leaves in the real environment, with ground truth results showing a 95–100% accuracy rate.
Grape farming is one of the most lucrative agricultural enterprises in India. There are several biotic and abiotic stress conditions that may adversely affect the yield if not tackled at the right time. It is crucial that the farmer can correctly identify and monitor the type of stress so that steps can be taken to prevent undesirable outcomes. We have gathered a dataset of these stress conditions on grape berries and categorized them into eight classes, necrosis, shriveling, and honeydew by mealybug, mealybug incidence, spray injury, thrips scarring, pink berry, and powdery mildew. Transfer learning was used to test the performance of six major deep learning image classification architectures (namely MobileNet-v2, Inception-v3, Inception-ResNet-v2, ResNet-v2, NASNet, and PNASNet) with variations in training conditions and hyper parameters. The results were compared to determine the most feasible and accurate deep learning architecture and its hyper parameters for the given problem statement. The experiment shows that Inception-ResNet-v2 obtained maximum classification accuracy of 88.75% when learning rate of 0.035 and minibatch size of 10 were applied using 8000 training steps. This result will act as a pre-requisite for the development of an application for mapping vineyard stress conditions on berries and give automated advisory.
Crop diseases pose a serious deathtrap to food safety, but their prompt disease diagnosis remains burdensome in many parts of the world due to the lack of the necessary foundation. These days deep learning models have shown better performance than hi-tech machine learning techniques in several fields, with computer vision being one of the most noticeable cases. Agronomy is one of the domains in which deep learning concepts have been used for disease identification on different parts of the plants. Having a disease is very normal and common but prompt disease recognition and early avoidance of crop diseases are crucial for refining production. Though the standard convolutional neural network models identified the disease very accurately but require a higher computation cost and a large number of parameters. This requires a model to be developed which should be efficient and need to generate less number of parameters. This research work proposed a model to identify the diseases of the plant leaves with greater accuracy and efficiency compared to the existing approaches. The standard models like Alex Net, VGG, and Google Net along with the proposed model were trained with the Lycopersicon plant leaf which is available in plant village. It has 9 categorical classes of diseases and healthy plant leaves. A range of parameters, including batch size, dropout, learning rate, and activation function were used to evaluate the models’ performance or achievement. The proposed model identifies the Target Spot, Mosaic Virus, Yellow Leaf Curl Virus, Bacterial Spot, Early Blight, Healthy, Late Blight, Leaf Mold, Septoria Leaf Spot, and Two Spotted Spider Mite accurately of the real dataset. The proposed model achieved a disease classification of accuracy rate of 93% to 95%. According to the findings of the accuracy tests, the suggested model is promising and may have a significant influence on the speed and accuracy with which disease-infected leaves are identified.
Full-text available
Agriculture plays a significant role in every nation's economy by producing crops. Plant disease identification is one of the most important aspects of maintaining an agriculturally developed nation. The timely and efficient detection of plant diseases is essential for a healthy and productive agricultural sector and to prevent wasting money and other resources. Various diseases that could affect a plant cause crop farmers to lose a substantial sum yearly. Deep learning can play a crucial role in helping farmers prevent crop failure by early disease detection in plant leaves. In the experiment, we examined CNN, VGG-16, VGG-19 and ResNet-50 models on plant-village 10000 image dataset to detect crop infection and got the accuracy rate of 98.60%, 92.39%, 96.15%, and 98.98% for CNN, VGG-16, VGG-19 and ResNet-50 respectively. The study indicates that ResNet-50 outperforms the other models with an accuracy of 98.98%. So, the ResNet50 model was chosen to be developed into a smart web application for real-life crop disease prediction. The proposed web application aims to assist farmers in identifying diseases of plants by analyzing photos of the plant leaves. The proposed application uses the ResNet50 transfer learning model at its heart to distinguish healthy and infected leaves and classify the present disease type. The goal is to help farmers save resources and prevent economic loss by detecting plant diseases early and applying the appropriate treatment.
Full-text available
Plant pests and diseases are a significant threat to almost all major types of plants and global food security. Traditional inspection across different plant fields is time-consuming and impractical for a wider plantation size, thus reducing crop production. Therefore, many smart agricultural practices are deployed to control plant diseases and pests. Most of these approaches, for example, use vision- based artificial intelligence (AI) or deep and machine learning methods to provide perfect solutions. However, existing open issues must be considered and addressed before AI methods can be used. In this study, we conduct a systematic literature review and present a detailed survey of the studies employing data collection techniques and publicly available datasets. To begin the review, 1349 papers were chosen from five academic databases. After deploying a comprehensive screening process, the review considered 176 studies based on the importance of the method. Convolutional neural network (CNN) methods are typically trained on small datasets and are only intended for a few selected plant diseases. Finally, the lack of large-scale, publicly available datasets from the plant field is one of the main obstacles to solving plant disease identification and related problems, among others.
Full-text available
Plant diseases represent one of the critical issues which lead to a major decrease in the quantity and quality of crops. Therefore, the early detection of plant diseases can avoid any losses or damage to these crops. This paper presents an image processing and a deep learning-based automatic approach that classifies the diseases that strike the apple leaves. The proposed system has been tested using over 18,000 images from the Apple Diseases Dataset by PlantVillage, including images of healthy and affected apple leaves. We applied the VGG-16 architecture to a pre-trained unlabeled dataset of plant leave images. Then, we used some other deep learning pre-trained architectures, including Inception-V3, ResNet-50, and VGG-19, to solve the visualization-related problems in computer vision, including object classification. These networks can train the images dataset and compare the achieved results, including accuracy and error rate between those architectures. The preliminary results demonstrate the effectiveness of the proposed Inception V3 and VGG-16 approaches. The obtained results demonstrate that Inception V3 achieves an accuracy of 92.42% with an error rate of 0.3037%, while the VGG-16 network achieves an accuracy of 91.53% with an error rate of 0.4785%. The experiments show that these two deep learning networks can achieve satisfying results under various conditions, including lighting, background scene, camera resolution, size, viewpoint, and scene direction.
The income of farmers is very low, mostly all over the world, as compared to other professions. They are the primary producers of our food, but they still live in poverty. There are many reasons for this, but one of these is damage caused to their crops by different diseases, either caused by some bacteria, viruses, or some mineral deficiency diseases. They have a deleterious impact on crops, lowering both quality and quantity. In this paper, we have used image processing and k-nearest neighbours (KNN)-based algorithm to detect plant disease with an implementation on the Raspberry Pi. At first, the leaf image is segmented, and later features such as area, entropy, and aspect ratio are extracted. These features are fed to the KNN classifier to get the desired result. Using this technology, it will have a significant impact on farming. The cost of fertilizers and excessive or wrong use of pesticides can be reduced, and hence, it can lead to an increase in productivity. The accuracy we have achieved is 96%.
Around 40% of the world's cotton is produced in India, but the crop is prone to various leaf diseases that are very difficult to detect with the naked eye. This research focuses on building an application that uses different convolutional networks to improve the process of identifying the health of cotton crops. To prevent major losses in cotton production, crop health must be checked frequently to reduce the spread of disease. Cotton crops are damaged by pests, fungi, and bacteria; about 90% of cotton cultivation is prone to being affected by them. The proposed application aims to save cotton crops from disease and farmers from heavy losses. In the proposed work, ResNet50, ResNet152v2, and Inception V3 models are used for predicting disease. The dataset, i.e., the images of cotton leaves and plants used for training and testing the models, was captured manually. The proposed approach shows a higher classification accuracy with low computation time, demonstrating the practical value of this application for real-time disease detection. Keywords: Image, TRIM, Classification, Color, Texture, Disease, Feature, Prediction, Cotton
The present paper introduces an innovative approach to automatically grade the disease on plant leaves. The system effectively incorporates Information and Communication Technology (ICT) in agriculture and hence contributes to Precision Agriculture. Presently, plant pathologists mainly rely on naked-eye prediction and a disease scoring scale to grade the disease. This manual grading is not only time consuming but also infeasible at scale. Hence the current paper proposes an image processing based approach to automatically grade the disease spread on plant leaves by employing Fuzzy Logic. The results prove to be accurate and satisfactory in contrast with manual grading.
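A minimal sketch of fuzzy-logic disease grading of the kind described above: the percentage of infected leaf area is mapped to a grade through triangular membership functions, and the grade with the highest membership wins. The breakpoints and grade names below are illustrative assumptions, not the paper's actual scoring scale.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def grade_disease(percent_infected):
    """Return the fuzzy grade with the highest membership for the given
    percentage of infected leaf area (hypothetical breakpoints)."""
    memberships = {
        "low":    tri(percent_infected, -1, 0, 25),
        "medium": tri(percent_infected, 10, 35, 60),
        "high":   tri(percent_infected, 45, 100, 101),
    }
    return max(memberships, key=memberships.get)

print(grade_disease(5.0))   # → low
print(grade_disease(70.0))  # → high
```

A production fuzzy system would defuzzify over the full rule base rather than taking a simple argmax, but the idea of overlapping graded categories is the same.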
We propose and experimentally evaluate a software solution for automatic detection and classification of plant leaf diseases. The proposed solution improves on the one proposed in [1], as it is faster and more accurate. The developed processing scheme consists of four main phases as in [1], with two steps added after the segmentation phase. In the first step, the mostly green-colored pixels are identified and then masked out based on threshold values computed using Otsu's method. In the second step, pixels with zero red, green, and blue values, as well as pixels on the boundaries of the infected cluster (object), are completely removed. The experimental results demonstrate that the proposed technique is robust for the detection of plant leaf diseases. The developed algorithm can detect and classify the examined diseases with a precision between 83% and 94%, and achieves a 20% speedup over the approach proposed in [1].
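The Otsu thresholding step mentioned above can be illustrated with a self-contained implementation that picks the gray level maximising between-class variance. The pixel values below are synthetic; a real pipeline would feed in the green-channel intensities of the leaf image.

```python
def otsu_threshold(pixels, levels=256):
    """Return the threshold that maximises between-class variance
    (Otsu's method) for a flat list of integer gray levels."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_bg = sum_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]          # background weight: pixels <= t
        if w_bg == 0:
            continue
        w_fg = total - w_bg      # foreground weight: pixels > t
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two well-separated synthetic clusters of gray values
pixels = [10] * 50 + [12] * 50 + [200] * 50 + [210] * 50
t = otsu_threshold(pixels)
print(t)  # falls between the two clusters
```

Pixels at or below the returned threshold would then be treated as the masked (mostly green) class.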
This study describes an image-processing based method that identifies the visual symptoms of plant diseases, from an analysis of coloured images. The processing algorithm developed starts by converting the RGB image of the diseased plant or leaf, into the H, I3a and I3b colour transformations. The I3a and I3b transformations are developed from a modification of the original I1I2I3 colour transformation to meet the requirements of the plant disease data set. The transformed image is then segmented by analysing the distribution of intensities in a histogram. Rather than using the traditional approach of selecting the local minimum as the threshold cut-off, the set of local maximums are located and the threshold cut-off value is determined according to their position in the histogram. This technique is particularly useful when the target in the image data set is one with a large distribution of intensities. In tests, once the image was segmented, the extracted region was post-processed to remove pixel regions not considered part of the target region. This procedure was accomplished by analysing the neighbourhood of each pixel and the gradient of change between them. To test the accuracy of the algorithm, manually segmented images were compared with those segmented automatically. Results showed that the developed algorithm was able to identify a diseased region even when that region was represented by a wide range of intensities.
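A simplified sketch of the peak-driven cut-off idea described above: instead of selecting a local minimum, the threshold is derived from the positions of the dominant local maxima of the histogram. The toy histogram and the midpoint rule are illustrative assumptions, not the paper's exact procedure.

```python
def peak_based_threshold(hist):
    """Locate the two tallest local maxima of a histogram and place the
    threshold midway between them (simplified peak-driven cut-off)."""
    # Interior bins that are higher than both neighbours count as peaks.
    peaks = [
        i for i in range(1, len(hist) - 1)
        if hist[i] > hist[i - 1] and hist[i] >= hist[i + 1]
    ]
    # Keep the two tallest peaks, then order them by bin position.
    top_two = sorted(sorted(peaks, key=lambda i: hist[i])[-2:])
    return (top_two[0] + top_two[1]) // 2

# Toy bimodal histogram with peaks at bins 2 and 8
hist = [0, 2, 8, 3, 1, 0, 1, 4, 9, 5, 1]
print(peak_based_threshold(hist))  # → 5
```

Real histograms are noisy, so the paper's method would be preceded by smoothing and followed by the neighbourhood/gradient post-processing described in the abstract.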
This study reports a machine vision system for the identification of the visual symptoms of plant diseases, from coloured images. Diseased regions shown in digital pictures of cotton crops were enhanced, segmented, and a set of features were extracted from each of them. Features were then used as inputs to a Support Vector Machine (SVM) classifier and tests were performed to identify the best classification model. We hypothesised that given the characteristics of the images, there should be a subset of features more informative of the image domain. To test this hypothesis, several classification models were assessed via cross-validation. The results of this study suggested that: texture-related features might be used as discriminators when the target images do not follow a well defined colour or shape domain pattern; and that machine vision systems might lead to the successful discrimination of targets when fed with appropriate information.
The main goal of this paper is to develop a software model to suggest remedial measures for pest or disease management in agricultural crops. Using this software, the user can scan an infected leaf to identify the species of the leaf and the pest or disease incidence on it, and can obtain solutions for its control. The software system is divided into modules: Leaves Processing, Network Training, Leaf Recognition, and Expert Advice. In the first module, the edge of the leaf is extracted and token values are computed. The second module deals with training the neural network on the leaf data and plotting the error graph. The third and fourth modules recognize the species of the leaf and identify the pest or disease incidence. The last module matches the recognized pest or disease sample against a database in which pest and disease image samples, together with the corresponding remedial measures for their management, are stored.
In this study, we present an application of neural network and image processing techniques for detecting and classifying Phalaenopsis seedling diseases, including bacterial soft rot (BSR), bacterial brown spot (BBS), and Phytophthora black rot (PBR). The lesion areas with BSR, PBR, and BBS of Phalaenopsis seedlings were segmented by an exponential transform with an adjustable parameter and image processing techniques. The gray level co-occurrence matrix (GLCM) was further used to evaluate the texture features of the lesion area. These texture features and three color features (the mean gray level of lesion area on the R, G, and B bands) were used in the classification procedure. A back-propagation neural network classifier was employed to classify BSR, BBS, PBR, and OK (uninfected area of leaf) of Phalaenopsis seedlings. The methodology presented herein effectively detected and classified these Phalaenopsis seedling lesions to an accuracy of 89.6%. The detection capability of the system, without classifying the disease type, is as high as 97.2%.
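The gray level co-occurrence matrix (GLCM) features used above can be sketched as follows: the code builds a normalised co-occurrence matrix for one pixel offset and derives two classic texture descriptors, contrast and energy. The tiny image patch is a toy example, and a real system would also compute the other GLCM statistics and the color features mentioned in the abstract.

```python
from collections import defaultdict

def glcm_features(image, dx=1, dy=0):
    """Compute a normalised gray-level co-occurrence matrix for the
    given pixel offset and derive contrast and energy from it."""
    counts = defaultdict(int)
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                counts[(image[r][c], image[r2][c2])] += 1
    total = sum(counts.values())
    p = {pair: n / total for pair, n in counts.items()}
    contrast = sum(((i - j) ** 2) * v for (i, j), v in p.items())
    energy = sum(v * v for v in p.values())
    return contrast, energy

# A perfectly uniform patch has zero contrast and maximal energy.
flat = [[1, 1, 1], [1, 1, 1]]
print(glcm_features(flat))  # → (0.0, 1.0)
```

Contrast grows with local intensity variation across the lesion, while energy is highest for homogeneous regions, which is why the pair is useful for separating lesion types.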
The citrus industry is an important constituent of Florida's overall agricultural economy. Proper disease control measures must be undertaken in citrus groves to minimize losses. Technological strategies using machine vision and artificial intelligence are being investigated to achieve intelligent farming, including early detection of diseases in groves, selective fungicide application, etc. This research used the color co-occurrence method (CCM) to determine whether texture-based hue, saturation, and intensity (HSI) color features in conjunction with statistical classification algorithms could be used to identify diseased and normal citrus leaves under laboratory conditions. Normal and diseased citrus leaf samples with greasy spot, melanose, and scab were evaluated. The leaf sample discriminant analysis using CCM textural features achieved classification accuracies of over 95% for all classes when using hue and saturation texture features. Data models that relied on intensity features suffered a reduction in classification accuracy when categorizing leaf fronts, due to the darker pigmentation of the leaf fronts. This reduction was not experienced on the leaf backs, where the lighter pigmentation clearly revealed the disease discoloration. Although high accuracies were achieved when using an unreduced dataset consisting of all HSI texture features, the overall best performer was determined to be a reduced data model that relied on hue and saturation features. This model was selected due to reduced computational load and the elimination of intensity features, which are not robust in the presence of ambient light variation.
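The HSI colour representation underlying the color co-occurrence method is obtained from RGB by a standard geometric conversion; a minimal sketch (the formulation below is the common textbook one, not necessarily the exact variant used in the study):

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert 8-bit RGB to HSI: hue in degrees, saturation and
    intensity in [0, 1] (standard geometric formulation)."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    intensity = (r + g + b) / 3.0
    minimum = min(r, g, b)
    saturation = 0.0 if intensity == 0 else 1.0 - minimum / intensity
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    hue = 0.0 if den == 0 else math.degrees(math.acos(num / den))
    if b > g:               # reflect into the 180-360 degree half
        hue = 360.0 - hue
    return hue, saturation, intensity

print(rgb_to_hsi(255, 0, 0))  # pure red → hue 0, full saturation
```

Working in hue and saturation rather than intensity is exactly why the reduced model in the abstract is robust to ambient light variation: intensity absorbs most of the illumination change.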