Knauer et al. Plant Methods (2017) 13:47
DOI 10.1186/s13007-017-0198-y
METHODOLOGY
Improved classication accuracy
ofpowdery mildew infection levels
ofwine grapes byspatial-spectral analysis
ofhyperspectral images
Uwe Knauer1* , Andrea Matros2, Tijana Petrovic3, Timothy Zanker3, Eileen S. Scott3 and Udo Seiffert1
Abstract
Background: Hyperspectral imaging is an emerging means of assessing plant vitality, stress parameters, nutrition status, and diseases. Extraction of target values from the high-dimensional datasets either relies on pixel-wise processing of the full spectral information, appropriate selection of individual bands, or calculation of spectral indices. Limitations of such approaches are reduced classification accuracy, reduced robustness due to spatial variation of the spectral information across the surface of the objects measured, as well as a loss of information intrinsic to band selection and use of spectral indices. In this paper we present an improved spatial-spectral segmentation approach for the analysis of hyperspectral imaging data and its application for the prediction of powdery mildew infection levels (disease severity) of intact Chardonnay grape bunches shortly before veraison.
Results: Instead of calculating texture features (spatial features) for the huge number of spectral bands independently, dimensionality reduction by means of Linear Discriminant Analysis (LDA) was applied first to derive a few descriptive image bands. Subsequent classification was based on modified Random Forest classifiers and selective extraction of texture parameters from the integral image representation of the image bands generated. Dimensionality reduction, integral images, and the selective feature extraction led to improved classification accuracies of up to 0.998 ± 0.003 for detached berries used as a reference sample (training dataset). Our approach was validated by predicting infection levels for a sample of 30 intact bunches. Classification accuracy improved with the number of decision trees of the Random Forest classifier. These results corresponded with qPCR results. An accuracy of 0.87 was achieved in classification of healthy, infected, and severely diseased bunches. However, discrimination between visually healthy and infected bunches proved to be challenging for a few samples, perhaps due to colonized berries or sparse mycelia hidden within the bunch, or airborne conidia on the berries that were detected by qPCR.
Conclusions: An advanced approach to hyperspectral image classification based on combined spatial and spectral image features, potentially applicable to many available hyperspectral sensor technologies, has been developed and validated to improve the detection of powdery mildew infection levels of Chardonnay grape bunches. The spatial-spectral approach improved especially the detection of light infection levels compared with pixel-wise spectral data analysis. This approach is expected to improve the speed and accuracy of disease detection once the thresholds for fungal biomass detected by hyperspectral imaging are established; it can also facilitate monitoring in plant phenotyping of grapevine and additional crops.
Keywords: Grapevine, Powdery mildew, Hyperspectral, Image analysis, Infection
© The Author(s) 2017. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License
(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium,
provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license,
and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/
publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
*Correspondence: uwe.knauer@iff.fraunhofer.de
1 Biosystems Engineering, Fraunhofer IFF, Sandtorstr. 22,
39106 Magdeburg, Germany
Full list of author information is available at the end of the article
Background
Hyperspectral imaging
Hyperspectral imaging is a remote sensing technology
that is becoming widely used in plant breeding, smart
farming, material sorting, and quality control in food
production [1], as well as identification of grapevine
varieties from the air, detection and diagnosis of stresses
caused by disease or nutrient imbalances and other
applications in viticulture [2]. e generic behavior of
the material to reflect, absorb, or transmit light is used
to characterize its identity and even molecular composi-
tion. A hyperspectral camera records a narrowly sampled
spectrum of reflected or transmitted light in a certain
wavelength range and produces a high-dimensional pat-
tern of highly correlated spectral bands per image pixel.
Often, the direct relationship between this pattern and
the target value, for example a nutritional or infection
value, is unknown. In the simple case, exact spectral
bands are known to correlate with the presence of certain
chemical compounds. If such direct knowledge is una-
vailable, machine learning algorithms are used to learn
a classification or regression task from labeled reference
data [3].
Current sensor technology enables hyperspectral imag-
ing at different scales. For imaging of small objects such
as leaf lesions or seeds, frame-based hyperspectral cam-
eras can be mounted on a microscope or line-scanning
cameras can be equipped with macro lenses [4]. A com-
mon set-up for monitoring plants in the laboratory is a
hyperspectral camera mounted to the side or above a
conveyor belt or a translation stage [5]. While these set-
ups have been partially adapted for outdoor measure-
ments, for hyperspectral imaging of field trials, typically,
vehicle-mounted hyperspectral cameras are used, for
example on unmanned aerial vehicles (UAVs) [6]. e
current limitations of this approach relate to the availabil-
ity of lightweight sensors and loss of spectral and spatial
resolution. Airborne and spaceborne hyperspectral imag-
ing are options for the monitoring of production areas
and large scale assessment of vegetation parameters.
Approaches foranalysis ofhyperspectral data
Typically, the extraction of relevant information from
hyperspectral datasets consists of the following steps.
First, the hyperspectral data is normalized with respect
to sensor parameters and illumination. Second, map-
ping between image pixels and known object positions is
established, either by annotation of the acquired images
or by automatically assigning coordinates (e.g. GPS
measurements) to the image pixels. ird, preprocess-
ing of images ensures extraction of meaningful entities
by segmentation of objects (e.g. individual plants, leaves,
fruits). As it is not possible to reliably detect individual
objects in all cases, preprocessing can be restricted to
suppression of the background information (e.g. soil sur-
face). Low spatial-resolution of the hyperspectral dataset
may require additional steps such as separation of the
spectral information into components which character-
ize the mixture of different materials within the same
pixel. In the remote sensing literature, this is known as
spectral unmixing or endmember extraction [7].
Finally, the hyperspectral data (or derived measures
such as indices) of a certain object or pixel is mapped to
a target category/value provided by experts or laboratory
analysis. Common indices such as Normalized Differ-
ence Vegetation Index (NDVI), Photochemical Reflec-
tion Index (PRI), Anthocyanin Reflectance Index (ARI)
and others are sensitive [8, 9] but are not specific for
plant diseases, which has necessitated the development
of spectral disease indices (SDI) [10]. Disease indices are
developed for specific host-pathogen combinations based
on clearly defined reference data, but typically utilize a
limited number of wavelengths and normalized wave-
length differences. For example, in [10] a Powdery Mil-
dew Index (PMI) for sugar beet has been proposed as
PMI = (R520 − R584) / (R520 + R584) + R724, where the Rxxx denote normalized reflectances for certain wavelengths. Indices may fail due to changes in the properties of the biochemical background matrix.
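For illustration, the following minimal sketch (Python/NumPy; variable and function names are assumptions) evaluates such an index per pixel from a normalized reflectance cube, using for each required wavelength the nearest available band centre:

import numpy as np

def powdery_mildew_index(cube, wavelengths):
    """Evaluate PMI = (R520 - R584) / (R520 + R584) + R724 for every pixel.

    cube        -- normalized reflectance, shape (rows, cols, bands)
    wavelengths -- band centre wavelengths in nm, shape (bands,)
    """
    def band(nm):
        # reflectance image of the band closest to the requested wavelength
        return cube[:, :, np.argmin(np.abs(np.asarray(wavelengths) - nm))]

    r520, r584, r724 = band(520), band(584), band(724)
    return (r520 - r584) / (r520 + r584) + r724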
Spectral Angle Mapping (SAM, [11]) takes all wave-
length bands into account and is capable of discrimi-
nating between healthy tissue and tissue with powdery
mildew disease symptoms at the microscopic scale. How-
ever, differentiation between sparse and dense mycelium
remains difficult. As SAM does not weight the different
wavelengths, the spectral angle is also sensitive to all
changes in appearance even if they are unrelated to the
symptoms (background matrix). In addition, large data-
sets for the dynamics of the pathogenesis of powdery
mildew on barley have been investigated with data min-
ing techniques [4]. Simplex volume maximization has
been effectively used to automatically extract traces of
the hyperspectral signatures that differ significantly for
inoculated and healthy barley genotypes. While manual
annotation of hyperspectral data by experts, as used in
our study, provides accurate reference data, the approach
of Kuska [4] effectively addresses the problem of large,
automatically recorded hyperspectral datasets in time
series analysis.
Spatial-spectral segmentation with random forest classifiers
This paper addresses common challenges for the analysis
of hyperspectral imaging data by investigating the classi-
fication performance of a novel approach to hyperspec-
tral image segmentation. It is based on the tight coupling
of Random Forest classifiers [12] with the integral image
representation [13] of a dimensionality-reduced hyper-
spectral image.
There are two reasons for this approach. First, Random
Forest classifiers are well established and combine fast
and robust classification. Second, dimensionality reduc-
tion can bridge the gap between traditional pixel-wise
classification of spectral information and texture-based
image processing approaches for single band and color
image segmentation which takes neighboring pixels into
account and typically increases the accuracy of the image
segmentation.
In general, image segmentation approaches can
be roughly divided into pixel- [14] and region-based
approaches [15]. Numerous approaches have been pre-
sented which treat image segmentation as a classifica-
tion problem using different strong classifiers [16–19].
Other methodologies have been biologically motivated
by principles of the human visual system [20, 21]. How-
ever, classification in high-dimensional feature spaces
with the most sophisticated classification algorithms may
not be an option for some approaches. For example, for
many real-time image segmentation problems (online
processing), either the number of features used must be
limited to a few that are meaningful [22], a rather weak
classification technique must be used, or both limitations
are accepted in combination to meet the processing time
constraints [23, 24]. Even if online processing of the data
is not required, often the analysis results must be pro-
vided within a certain period to enable decision making
in precision farming, disease control, nutrition manage-
ment, and other applications.
Tree-based image segmentation has been reported [25],
but for several years application seems to have been lim-
ited to certain fields, such as the segmentation of aerial
or satellite imagery to identify land use. In recent years,
Random Forest classifiers have been identified as a valu-
able tool in these fields as well as for related fields such as
object detection [26]. New and demanding applications
have led to several modifications and improvements of
the original Random Forest approach to further improve
the method and to match the application requirements.
For example, Rotation Forest classifiers have been pro-
posed as a method for improved classification of hyper-
spectral data [27] by adding transformations of the input
feature space and hence contributing to the diversity of
ensemble decisions. Also, semi-supervised sampling has
been reported to improve the segmentation performance
of conventional Random Forest classifiers [28].
Feature relevance
Identification of relevant features for classification is
a crucial task for effective processing as well as for a
better understanding of the problems and their solutions.
In [29] the performance of different feature selection
approaches and classifiers for tree species classification
from hyperspectral data obtained at different locations
and with different sensors was reported. e authors
conclude that the selection of 15–20 bands provides the
best classification results and that the location of the
selected bands strongly depends on the classification
method. However, best classification results for all data-
sets have been obtained with Minimum Noise Fraction
(MNF) transformation and selection of the first 10–20
principal components of MNF as input features for clas-
sification. In [30] the input feature space is extended by
parallel extraction of spectral and spatial features. en,
a so-called hybrid feature vector is created and used for
training of a Random Forest classifier. Finally, results
are improved by imposing a label constraint which is
based on majority voting. Other recent developments in
hyperspectral image classification are reviewed in [31].
The authors present a Statistical Learning Theory (SLT) based framework for analysis of hyperspectral data. They highlight the ability of SLT to identify relevant feature subspaces to enable the application of more efficient algorithms. The review categorizes existing spa-
tial-spectral classification approaches into spatial filters
extraction, spatial-spectral segmentation, and advanced
spatial-spectral classification.
Scope ofthe spatial‑spectral segmentation approach
In this paper we present an improved texture-based
spatial-spectral approach to hyperspectral image classifi-
cation which can potentially be applied to images from
all available scales. is approach addresses the prob-
lem that pixel-wise processing of spectral data, even of
derived information such as SDI, does not incorporate
information about the spatial variation of the spectral
properties of healthy and diseased material. Hence, tak-
ing this variation into account aims to improve classifica-
tion accuracies for prediction of disease severity.
As a model system we selected the classification of
powdery mildew infection levels of Chardonnay grape
bunches, because the current approach of visual assess-
ment of infection levels (% of surface area affected of a
bunch) is subjective. Many Australian wineries use a
rejection threshold of 3–5% surface area affected by
powdery mildew based on visual assessment [32]. us,
objective assessment of disease-affected bunches and
quantification of pathogen (Erysiphe necator) biomass
are required. Hyperspectral imaging was investigated as a
means of detecting powdery mildew-affected bunches at
the beginning of bunch closure, after routine assessment
of disease in the field. Powdery mildew is more read-
ily assessed by visual inspection at this stage than later
in bunch development, providing a proof of concept for
subsequent investigation of the disease on bunches closer
to harvest.
We acquired hyperspectral images from powdery
mildew affected and non-affected Chardonnay grape
bunches. After preprocessing, the data sets were reduced
in dimensionality by means of Linear Discriminant Anal-
ysis (LDA) to retain only a few highly descriptive image
bands. Subsequent application of Random Forest classi-
fiers and selective extraction of texture parameters led to
improved classification accuracies for powdery mildew
infection levels and, hence, disease severity level predic-
tion (SLP) of wine grapes.
Methods
Plant material andfungal biomass
Grapes from a non-commercial vineyard (Waite Cam-
pus, University of Adelaide, South Australia) (E 138°38′3.844″, S 34°58′3.111″) were used in this study. In-field
assessment of powdery mildew on vines was conducted
according to [33]. Subsequently, 10 visually healthy and
20 bunches naturally infected by Erysiphe necator with
no signs of other diseases and/or abiotic/biotic damage,
were selected from Chardonnay vines (Vitis vinifera L.,
clone I10V1). Bunches were collected at the lag phase of
berry development (i.e. when berry growth is halted and
the seed embryos grow rapidly), otherwise described as
growth stage E-L 30-33 (beginning of bunch closure) [34]
when total soluble solids had reached 5° Brix (December
4, 2014).
Bunches were assessed in laboratory conditions using a
magnifying lamp and assigned to three categories: visu-
ally healthy, infected, and severely diseased. Bunches des-
ignated severely diseased were considered likely to have
been infected at E-L 23-26 when grape clusters are highly
susceptible to the pathogen. Berries on those bunches
were significantly lighter (0.53 ± 0.045 g, p = 0.03) and slightly smaller (9.92 ± 0.34 mm) than berries on healthy bunches (weight 0.75 ± 0.045 g; diameter … mm). However, morphology of all bunches was similar,
regardless of powdery mildew status. After hyperspectral
imaging of the upper and lower surface of each bunch,
bunches were stored at −20 °C. Each surface of the frozen bunch was matched with the corresponding annotated reference image (Fig. 10) and berries were detached and grouped according to bunch and surface (30 bunches × 2 surfaces). The 60 samples were homogenized sepa-
rately, then DNA was extracted using a Macherey-Nagel
NucleoSpin® Plant II Kit and quantified using a Quanti-
Fluor® dsDNA System. A modified duplex quantitative
polymerase chain reaction(qPCR) assay using a TaqMan®
MGB probe (FAM™ dye-labelled) was used to quantify
E. necator biomass [35]. Reaction efficiency was assessed
by generating a standard curve for E. necator and abso-
lute quantification of E. necator biomass was achieved
using the standard curve. e number of copies of the
amplified E. necator DNA fragment per conidium was
calculated based on the DNA extracted from a known
number of E. necator conidia. Consequently, the num-
ber of copies of the E. necator DNA fragment obtained
for the DNA extracted from 100 mg of berry tissue was
expressed as number of E. necator conidia and then cor-
rected for the average weight of berries for each bunch.
Log-transformed data is presented (Fig.4).
Hyperspectral imaging
Figure1 provides an overview of the measurement set-up
and the experimental design. For the hyperspectral image
acquisition, samples of grapes were positioned along with
a standard optical PTFE (polytetrafluoroethylene) cali-
bration pad on a translation table. Spectra were acquired
either from the visible and near-infrared range (VNIR)
of 400–1000nm at 3.7nm resolution or from the short-
wave infra-red range (SWIR) of 970–2500 nm at 6nm
resolution yielding a 160dimensional or 256dimensional
spectral vector per pixel, respectively. Hyperspectral
images were recorded using HySpex VNIR 1600 (VNIR
camera) and HySpex SWIR-320m-e (SWIR camera) line
cameras (Norsk Elektro Optikk A/S). The VNIR camera line has 1600 spatial pixels. Spectral data along this line can be recorded with a maximum frame rate of 135 frames per second (fps). The SWIR camera line has 320 spatial pixels. Spectral data can be recorded with a maximum frame rate of 100 fps. Radiometric calibration
was performed using the vendor’s software package and
the PTFE reflectance measure.
As part of the controlled environment, artificial broad-
band illumination was used as the only light source.
Before the recordings started, two custom made lamps
were adjusted to focus the light to a line overlapping the
fields of view (FOV) of the hyperspectral cameras.
Two hyperspectral images containing either only visu-
ally healthy or only severely diseased detached berries,
manually dissected from two bunches, were recorded.
Those images alone were used for SLP model devel-
opment. Next, 60 images of two sides of 30 complete
bunches were recorded, comprising 10 visually healthy
bunches, 10 powdery mildew infected bunches, and 10
severely diseased bunches. ese images were used to
assess the accuracy of the SLP method under realistic
conditions. Results of qPCR analysis of berries detached
from all bunches served as reference values. Figure2 illus-
trates the scanning result. It shows the hyperspectral data
cube with two spatial and the spectral dimension. Each
horizontal slice corresponds to a single wavelength image.
e 1000nm band of the VNIR camera is plotted on top.
Development ofdisease severity level prediction models
Figure3 summarizes the approach for the development
of models for SLP based on pixel-wise powdery mildew
detection. For the development of prediction models and
initial tests of parameters, only the small subset of images
obtained from detached berries was used. First, this facil-
itates the generation of class information as the image
contains either severely diseased or healthy berries.
Second, the derived models can later be tested with the
Fig. 1 Overview of the measurement set-up and the experimental design. The measurement set-up consists of two hyperspectral line scanning
cameras for VNIR (a) and SWIR (b) wavelength range, artificial broadband illumination (c), and translation stage with stepper motor (d). Hyperspec-
tral images of PTFE reference plate (e) and 30 bunches (f), visually assigned to three categories (visually healthy, infected and severely diseased, blue
shading represents powdery mildew), were recorded in laboratory conditions. Berries of two bunches were detached to be used as reference data
for classifier training
Fig. 2 Hyperspectral image. Visualization of a hyperspectral image
cube with grape bunch, PTFE reference plate, and background
materials. The hyperspectral image consists of different layers which
directly correspond to the reflection of narrow wavelength bands.
The PTFE reference plate is calibrated and used for data normalization
Fig. 3 Systematic approach and development of powdery mildew
detection models. Based on hyperspectral images of visually healthy
and severely diseased detached berries a dataset containing spectra
of both classes is generated. Two different feature spaces are investi-
gated for classification of spectral data; first, dimensionality reduction
with subsequent spatial-spectral feature extraction and second,
classification of complete spectral signatures. The path on the right
corresponds to the first row of Table 1, whereas the left hand side cor-
responds to the remaining rows
complete set of hyperspectral images. is ensures inde-
pendent samples for validation of the approach. Preproc-
essing of the spectral data was undertaken to compensate
for the specific contributions of the sensor as well as the
illumination to the measured signal.
Image preprocessing
The preprocessing of hyperspectral images consists of the
following steps:
1. Conversion from raw images (photon count, digital
numbers) to radiance (at sensor)
2. Conversion from radiance (at sensor) to reflectance
(at surface)
3. L2-normalization (spectra are treated as vectors and normalized to have equal length)
4. Dimensionality reduction
Dimensionality reduction aims to achieve the following
goals:
1. Reduction of computational costs
2. Avoidance of problems inherent in high dimensionality (known
as the curse of dimensionality [36] and particularly
Hughes phenomenon [37] in machine learning and
computational intelligence)
We implemented different options for dimensionality
reduction:
1. Canonical band selection (inspired by human per-
ception and bands of other existing imaging sensors),
2. Relevance-based band selection based on importance
histograms,
3. Synthesis of orthogonal bands based on Principal
Component Analysis (PCA),
4. Target class specific synthesis based on adapted data
sampling before PCA,
5. Synthesis of orthogonal bands based on LDA.
Depending on the classification task at hand, each option
provides a different trade-off between transformation speed
and discriminative power of the original spectral data.
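As an illustration of preprocessing steps 2 and 3 listed above, the following sketch converts raw counts to reflectance by a simple flat-field correction against dark and PTFE white-reference frames and then applies the L2-normalization. This is a simplification, not the vendor calibration used in this study, and the names are assumptions.

import numpy as np

def to_reflectance(raw, dark, white):
    """Flat-field conversion of raw digital numbers to reflectance using a
    dark frame and the PTFE reference measurement (all arrays same shape)."""
    return (raw - dark) / np.clip(white - dark, 1e-6, None)

def l2_normalize(cube):
    """Treat each pixel spectrum as a vector and scale it to unit length."""
    norm = np.linalg.norm(cube, axis=-1, keepdims=True)
    return cube / np.clip(norm, 1e-12, None)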
For canonical band selection the image bands used by
the software PARGE (ReSe Software) were selected. For
VNIR cameras such as NEO HySpex VNIR 1600, the red-
channel of the resulting RGB-image was mapped to the
651 nm band, the green-channel to 549 nm, and the blue-
channel to 440 nm. Another option for canonical band
selection is color infrared (CIR), where the three channels
were mapped to the 811, 640, and 498 nm bands, respec-
tively. In the short-wave infrared, the following mapping
was used: (1081, 1652, 2253 nm).
The relevance-based band selection was based on
supervised pixel-wise classification of spectral informa-
tion with Random Forest classifiers. During the construc-
tion of a decision tree, many different optimizations (with
respect to a measure of information gain) take place for
feature selection. Hence, for each classification, the tree nodes visited were checked to determine which feature (band) was used, yielding a histogram of band importance. Finally, the three highest ranked bands were selected.
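A sketch of relevance-based band selection along these lines is given below, using scikit-learn's impurity-based feature importances as a stand-in for counting the tree nodes visited, and enforcing a minimum spacing between the selected bands (the custom selection described in the Results uses a minimum distance of 20 bands). Function and variable names are illustrative.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def select_relevant_bands(spectra, labels, n_bands=3, min_gap=20):
    """Rank bands by Random Forest importance and keep n_bands maxima that
    are at least min_gap bands apart. spectra: (n_pixels, n_bands)."""
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    rf.fit(spectra, labels)
    order = np.argsort(rf.feature_importances_)[::-1]   # most relevant first
    selected = []
    for band in order:
        if all(abs(int(band) - s) >= min_gap for s in selected):
            selected.append(int(band))
        if len(selected) == n_bands:
            break
    return sorted(selected)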
PCA was used to derive a new orthogonal base of the
original feature space. e resulting bands represent lin-
ear combinations of all original bands. Random subsets
of spectra were used to calculate the projection matrices.
For target class-specific PCA the input spectra were sam-
pled from predefined pixels only. Closely related is the
application of LDA for deriving a task-specific projection.
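A minimal sketch of the LDA-based synthesis of descriptive image bands follows (PCA is used analogously on unlabelled spectra); the reference spectra and labels would come from the detached-berry images, and all names are assumptions.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_lda_projection(reference_spectra, reference_labels, n_components=1):
    """Fit a task-specific projection on labelled reference spectra
    (e.g. healthy versus severely diseased pixels)."""
    return LinearDiscriminantAnalysis(n_components=n_components).fit(
        reference_spectra, reference_labels)

def project_cube(cube, projector):
    """Apply the fitted projection to every pixel spectrum, yielding an
    image reduced to a few descriptive bands."""
    rows, cols, bands = cube.shape
    return projector.transform(cube.reshape(-1, bands)).reshape(rows, cols, -1)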
Spatial-spectral classification
Our approach for texture-based classification (spatial component) relies on the data structure of integral images [13]. This representation enables a cache-like fast
look-up of feature values for arbitrary rectangular image
regions of a single image band. ree base features are
used, which require calculation of three integral images
per image band:
1. Mean intensity
2. Standard deviation
3. Homogeneity
The choice of the base features is motivated by their
known support for the integral image representation [13,
38].
These base features are calculated for 25 differently sized square image blocks centered on the current pixel and all image channels (of the dimensionality reduced hyperspectral image) separately. Here, a 225-dimensional (3 × 3 × 25) feature vector is used per pixel. Even if the
dimension of the feature vector is approximately the
same as for the spectral data, each feature now consists
of a spatial (mean, standard deviation or homogeneity of
rectangular image area) and a spectral component (from
PCA, LDA or band selection).
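The integral-image machinery behind the first two base features can be sketched as follows (mean and standard deviation of a square block; homogeneity is handled analogously). The sketch assumes a single projected 2-D band and ignores blocks that extend beyond the image border.

import numpy as np

def integral_images(band):
    """Summed-area tables of the band and of its square, zero-padded so that
    any block sum can be read with four look-ups."""
    ii  = np.pad(band, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
    ii2 = np.pad(band ** 2, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
    return ii, ii2

def block_sum(ii, r0, c0, r1, c1):
    """Sum of band[r0:r1, c0:c1] obtained from the padded integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def block_mean_std(ii, ii2, row, col, size):
    """Mean and standard deviation of a size x size block centred on (row, col)."""
    h = size // 2
    r0, c0 = row - h, col - h
    r1, c1 = r0 + size, c0 + size
    n = size * size
    mean = block_sum(ii, r0, c0, r1, c1) / n
    var = block_sum(ii2, r0, c0, r1, c1) / n - mean ** 2
    return mean, np.sqrt(max(var, 0.0))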
In the training phase, feature vectors were selected at
random locations within the image. Class labels were
assigned based on given reference data. Next, a modified
Random Forest classifier was trained. In contrast to the
default Random Forest classifier, each tree node holds
additional information which is needed to quickly access
the tested feature from the set of integral images. Hence,
there is no need to calculate a full feature vector in the
application phase of the model. For each pixel only a sub-
set of dimensions of the feature space must be calculated.
This speeds up the classification process. A significant reduction in the time needed for calculation of features can be obtained for single decision trees (in the order of log2(N), where N is the total number of considered features) and Random Forests with a small number of trees.
A related investigation of the trade-off between classifi-
cation accuracies, ensemble size, and number of features
used for different hyperspectral classification tasks can
be found in [39].
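The selective feature extraction can be pictured as follows: each tree node stores which projected band, base feature, and block size it tests, so that prediction touches only the features on the visited path, each read from the precomputed integral images. This is an illustrative data structure, not the authors' implementation, and it reuses block_mean_std from the sketch above.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    band: int = 0            # index of the projected image band
    feature: str = "mean"    # "mean" or "std"; homogeneity handled analogously
    size: int = 5            # block size of the spatial feature
    threshold: float = 0.0
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    label: Optional[int] = None   # set only for leaf nodes

def predict_pixel(root, tables, row, col):
    """Classify one pixel with a single decision tree; tables[band] holds the
    (ii, ii2) integral images of each projected band (see previous sketch)."""
    node = root
    while node.label is None:
        ii, ii2 = tables[node.band]
        mean, std = block_mean_std(ii, ii2, row, col, node.size)
        value = mean if node.feature == "mean" else std
        node = node.left if value <= node.threshold else node.right
    return node.label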
Cross‑validation procedure
N-fold cross-validation was used to calculate an estimate
for the classification accuracy (N = 10 was used for all
experiments). e training data was randomly parti-
tioned into 10 groups (folds) of equal size. is means
that each feature vector was assigned to only one of the
folds. While N − 1 folds were used to train a classification model, the remaining fold was used to test the accuracy of the resulting model. This was repeated N times. The average accuracy and the standard deviation of the N
classification models were then compared.
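A sketch of this estimate with scikit-learn, assuming the spatial-spectral feature vectors and labels have already been extracted (stratified 10-fold splitting is used here, which likewise assigns each feature vector to exactly one fold):

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def cv_accuracy(features, labels, n_folds=10, n_trees=10):
    """Mean and standard deviation of the accuracy over n_folds folds."""
    rf = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    scores = cross_val_score(rf, features, labels, cv=n_folds)
    return scores.mean(), scores.std()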
Results
Fungal biomass
The differentiation between visually healthy, infected
and severely diseased bunches proved to be accurate for
the majority of bunches (75%) based on fungal biomass
(via qPCR) as reference (Fig.4). Of the visually healthy
bunches, four were negative in the qPCR assay so the
fungus was not detected on either side of the bunch.
However, the fungal biomass among the remaining six
visually healthy bunches varied considerably. Fungal
biomass from infected and severely diseased bunches
showed less variation. Maximum fungal biomass for
visually healthy and infected bunches overlapped with
biomass for infected and severely diseased bunches,
respectively (Fig.4). Overlap in fungal biomass was more
evident for visually healthy and infected bunches than for
infected and severely diseased bunches. is indicates
that bunches visually assessed to be healthy had colo-
nized berries hidden within the bunch, sparse mycelial
growth missed under the magnifying lamp or that air-
borne conidia had landed on the berry surface. Uneven
distribution and density of E. necator mycelium and con-
idiophores on berries in infected bunches is likely to have
caused the overlap in fungal biomass between infected
and severely diseased bunches (Fig.4).
Dataset
The dataset consists of 60 hyperspectral images cor-
responding to two scans (top and bottom view) of 30
bunches (see Fig. 1). From two of these bunches, 128
visually healthy and 136 severely diseased berries were
selected and detached for recording of an additional data-
set for classifier training and initial validation. Detached
berries were arranged in Petri dishes and two additional
hyperspectral images were recorded which contained
either severely diseased or healthy berries. Furthermore,
the small time gap between the two recordings ensured
constant conditions for the measurements. Figure 5
shows the mean spectra as well as the standard devia-
tions obtained from these reference images for healthy
and severely diseased detached berries. Here, the spectral
signatures of each image pixel have been normalized with
respect to the reflectance of the PTFE calibration pad.
For validation of the proposed spatial-spectral
approach these spectral signatures have been used to
train a reference Random Forest classifier. Figure6 shows
the relevance profile derived for individual wavelengths
within the classification process. For the dimensional-
ity reduction step in spatial-spectral segmentation, one
option is to select the most relevant bands from this
result. Additionally, a number of low-dimensional repre-
sentations of the hyperspectral images have been derived
to investigate the classification performance of the spa-
tial-spectral image segmentation approach in different
feature spaces.
Classication models
In order to maintain speed of the proposed segmentation
algorithm, dimensionality reduction is the first process-
ing step. e fastest and simplest approach is focusing
Fig. 4 Quantitation of Erysiphe necator biomass in Chardonnay grape
bunches. Boxplot of E. necator biomass as measured by an E. necator-
specific qPCR assay of bunches assigned to three visual categories
(visually healthy, infected, and severely diseased). Four bunches or
40% of scanned bunch profiles of visually healthy bunches were
confirmed to be pathogen-free according to qPCR
Content courtesy of Springer Nature, terms of use apply. Rights reserved.
Page 8 of 15
Knauer et al. Plant Methods (2017) 13:47
on a few (typically three) predefined image bands and
skipping processing of all the others. Several such selec-
tions, for both VNIR and SWIR wavelength ranges, are
compared to the more sophisticated reduction methods
in Table1. e mean accuracy values and their standard
deviations are given for 10-fold cross-validation experi-
ments for random sets of 4000 pixels from two training
images (containing healthy and infected grapes). Results
indicate that successful classification is possible in both
wavelength ranges. However, with an accuracy of 0.98,
pixel-wise spectral classification in VNIR performs sig-
nificantly better than in SWIR (accuracy 0.85). e
introduction of texture features by the spatial-spectral
classification approach can nearly compensate for the
effects of dimensionality reduction for all variants and
improve classification accuracy to 0.99 (especially in
the SWIR region this is a significant improvement). e
transformations investigated for reduction of dimension-
ality (PCA, LDA, adaptive PCA) incorporate all image
bands, potentially minimizing the loss of information
inherent in dimensionality reduction, while band selec-
tions (Custom, RGB, CIR, SWIR) have been tested to
exploit the potential of less expensive standard (RGB,
SWIR, CIR) or customized (Custom) camera systems.
The customized band selection was based on the analysis
of the relevance of individual bands for a Random Forest
classifier. To obtain a measure of relevance, during clas-
sification all nodes visited in the decision trees within
the Random Forest voted for the corresponding feature.
Three local maxima of the relevance curve were then
Fig. 5 Illustration of reflectance spectra. Spectral signatures of
healthy detached berries and detached berries with severe powdery
mildew infection (a) and the differences between mean spectra
of healthy and diseased berries (b). The standard deviations of the
spectral signatures are shown as error bars in a. Spectrally localized
differences are observed in the green peak region (550 nm) of the
spectra and just above the red edge region (680–730 nm). Throughout
the shortwave infrared region a shift between the mean spectral
signatures occurs due to higher reflectance of the diseased berries
Fig. 6 Relevance spectrum. Relevance of the individual spectral bands was derived from the structure of the Random Forest classifiers. More
relevant wavelength features are used more often and hence contribute more to the final decision. The images of the two hyperspectral cam-
eras have been processed independently and result in the blue and the red relevance profile, respectively. For each camera a number of highly
relevant bands are found. Three local maxima in the relevance profiles are highlighted. Limiting classification to only the three highlighted relevant
wavelengths, in combination with textural features extracted from these image bands, yields mean accuracies of 0.98 (VNIR camera) and 0.99 (SWIR camera) for detached berries
selected. A threshold ensures a minimum distance of 20
bands between selected local maxima.
Table2 shows the investigation of block size (of spatial-
spectral features) vs classification accuracy. LDA-based
reduction and two predefined band selections (denoted
as RGB and SWIR) have been compared. e results,
especially for SWIR, indicate that good performance
is already achieved with small maximum block sizes.
e baseline accuracy for individual pixel classification
(block size 1 pixel) is 0.78 for the VNIR camera and 0.94
for SWIR camera. is result shows the value of using
disease-specific LDA based projection to constitute a
low-dimensional representation for further processing.
Classification accuracies for a representation by three
default bands from the VNIR camera (RGB) or SWIR
camera are 0.76 and 0.62 (block size 1 pixel), respec-
tively. By increasing the maximum block size, additional
features (mean, standard deviation, and homogeneity of
intensity distribution) are taken into account which are
not defined for a single pixel. For a maximum block size
of 100×100 pixels in the VNIR camera image, which
corresponds to the approximate size of a single berry,
an accuracy of 0.99 is achieved. For image blocks of
20×20 pixels of the SWIR camera, an accuracy of 0.99 was also achieved. As the sample in this experiment consists
of detached berries which are covered by mycelium, the
block size and classification performance can be further
increased. However, in practice early detection of a pow-
dery mildew-affected surface requires the use of small
block sizes (to detect small infection spots).
Classification results correspond to the mean spectra plotted in Fig. 5 and to results from the literature [10]. Especially in the SWIR domain, the mycelium leads to a shift of the spectral signatures due to a higher reflectance over the complete wavelength range between 1000 and 2500 nm. While such a shift has been reported for powdery mildew-affected sugar beet in VNIR, the mean spectra show a different behaviour for grapes. We observed a reduced reflectance at the green peak region (550 nm) as well as in the plateau region after the red edge (750–900 nm). This is due to the high reflectance of healthy grapes compared to the reflectance of healthy leaves, which was the subject investigated in previous studies [10, 11].
Severity level prediction
With an automated inspection system for quality control or plant phenotyping in mind, it is not feasible to scan detached berries, and the scanning of complete bunches is much more challenging. An automated inspection system would deliver a score corresponding to the severity level or surface area affected by powdery mildew. Despite the promising results of cross-validation experiments within the training datasets (detached berries), the spatial-spectral classification of the complete bunch images yields different results. Their 3D structure
Table 1 Classication accuracy using dierent dimension-
ality reduction methods
Principal Component Analysis (PCA) and standard band selections (RGB, CIR,
SWIR) are compared to adaptive reduction methods. Adaptive PCA is based
on stratied sampling based on class labels, custom band selection is based
on relevance proles and uses only three most relevant individual bands,
while Linear Discriminant Analysis (LDA) is used to nd an optimal subspace
projection of the data
* Pixel-based segmentation of normalized spectra as reference, all other are
spatial-spectral-based
Feature space Bands VNIR SWIR
Normalized spectral* All 0.980 ± 0.006 0.853 ± 0.027
PCA All 0.968 ± 0.008 0.999 ± 0.002
Adaptive PCA All 0.969 ± 0.007 0.996 ± 0.004
Custom 3 0.981 ± 0.008 0.997 ± 0.003
RGB 3 0.972 ± 0.009
CIR 3 0.971 ± 0.009
SWIR 3 0.999 ± 0.003
LDA All 0.998 ± 0.003 0.998 ± 0.005
Table 2 Classication accuracy versusmaximum block size forspatial feature extraction
With increasing maximum block size (from left to right) a gain in accuracy was achieved by introducing additional spatial-spectral features. Due to the dierent
resolution of the cameras for the VNIR and SWIR domains, 100×100 pixels in the VNIR camera image match 20×20 pixels in the SWIR camera image of the same
bunch. These two block sizes correspond to the approximate size of a single berry in the measurement set-up used. The rows RGB and SWIR refer to spatial features
derived from selected bands, while rows LDA VNIR and LDA SWIR refer to texture features derived from projected images. For the VNIR wavelength range the spatial
component contributes most to the accuracy gain, while in the SWIR wavelength range classication of spatial features from projected images outperformed
classication based on spatial features from selected bands. Even by introducing only a few spatial features (maximum block size 5 pixels), a signicant gain in
classication accuracy was observed. Due to the dierent spatial resolution of VNIR and SWIR images, which is related to the dierent number of pixels and pixel sizes,
the increase of the block size was limited to the approximate size of a single Chardonnay berry (VNIR 100×100, SWIR 20×20 pixels)
1 5 20 50 100
RGB 0.767 ± 0.013 0.938 ± 0.014 0.952 ± 0.011 0.964 ± 0.01 0.972 ± 0.009
LDA VNIR 0.782 ± 0.016 0.865 ± 0.016 0.951 ± 0.015 0.984 ± 0.007 0.998 ± 0.003
SWIR 0.617 ± 0.027 0.729 ± 0.047 0.872 ± 0.019
LDA SWIR 0.948 ± 0.017 0.986 ± 0.009 0.993 ± 0.006
imposes some additional problems with shadow and the
focal plane compared to the recording of selected indi-
vidual berries which were used for model generation. So far, successful classification in the SWIR wavelength range has not been possible using the independently generated models for detached berries. Obviously, the observed shift in the hyperspectral signatures (Fig. 5) is the dominant discriminating feature and is impossible to detect in the presence of the aforementioned factors.
Figure7 shows the results of severity level prediction
for the VNIR camera. e severity level is estimated by
the surface area which is classified as powdery mildew-
affected. e results are presented from the aforemen-
tioned application perspective. e most relevant 3 cases
are shown. First, segmentation results solely based on
pixel-wise classification of the hyperspectral data are
shown. In practice, this represents the default approach
to hyperspectral image segmentation. e images have
been grouped according to the expert’s decision about
the infection level. For each of the groups of healthy,
infected, and severely diseased grapes a boxplot of auto-
matically estimated infection level is given in the upper
diagram (A). While severely diseased biological material
can be detected, detection of low infection states is not
possible at a statistically significant level. Surprisingly,
a Random Forest classifier cannot reliably handle the
detection of healthy material as indicated by the mean
offset for the estimated infection level if only normal-
ized spectral data is used as feature vector. However,
this is also related to the chosen training strategy. Train-
ing data comprised a sample from an independent set of
two images from selected infected and healthy grapes.
By recording the complete bunches, occlusions, shadows
and blurring of image regions occur.
Given the same set of hyperspectral images, the pro-
posed spatial-spectral segmentation of a projected hyper-
spectral image performs much better. Using LDA, a
projection can be found which keeps the most relevant
spectral information for the detection of powdery mil-
dew infection. By calculating spatial features of the pro-
jected images a better discrimination between healthy
bunches and bunches with only a few infected grapes is
possible (middle diagram, B). Fig.7c shows the improve-
ments made by increasing the ensemble size to 50 ran-
dom decision trees. Separation between healthy and
infected bunches was further improved.
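The severity score underlying these boxplots can be expressed as a simple fraction. The sketch below assumes a binary mask of pixels classified as powdery mildew affected and a mask of the bunch (foreground) pixels; both names are hypothetical.

import numpy as np

def affected_fraction(diseased_mask, bunch_mask):
    """Severity estimate for one scan: share of bunch pixels classified
    as powdery mildew affected."""
    return float(np.mean(diseased_mask[bunch_mask]))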
Figure 8 shows a different visualization of the clas-
sification performance for the complete dataset of 60
grape bunch images. Receiver Operating Characteris-
tic (ROC) curves [40] are used to highlight the different
trade-offs between true positive and false positive rates
that exist for different threshold values. resholds are
applied to the calculated fraction of diseased pixels to
differentiate between healthy, infected, and severely dis-
eased bunches. As the dataset contains two images of
each bunch (top and bottom view), the mean of the two
scores was calculated prior to application of thresholds.
ROC curves and derived index values are often used for
comparison of diagnostic tests [41] and can be used for
optimal selection of operating points [42]. Diagrams
ROC-1 correspond to the classification performances for
Fig. 7 Classification accuracy of intact bunches depending on
random forest classifier complexity. Boxplot of the predicted surface
area affected for the three main categories of the experiment based
on pixel-wise segmentation of LDA projected hyperspectral images
(VNIR only). a Pixel-wise pure spectral classification with Random For-
est, b texture-based spatial-spectral segmentation with 10 trees ver-
sus c Random Forest with 50 trees. Severely diseased bunches can be
detected with high accuracy, while discrimination between healthy
and infected is challenging in a few cases. Classification accuracy
increases with the complexity (number of decision trees) of the Ran-
dom Forest classifier. Results of the analysis of hyperspectral images
are comparable and correspond well to qPCR results (see Fig. 4)
the detection of healthy bunches versus overall infected
(infected and severely diseased) based on spectral fea-
tures (top row) and spatial-spectral features (bottom
row), respectively. For each threshold the fraction of cor-
rectly classified healthy bunches is plotted against the
false positive rate for the same threshold. For example,
using spatial-spectral features, a successful detection of >80% of all healthy bunches (true positive rate >0.8) was achieved with a lower misclassification of infected bunches compared to using spectral features. This mis-
classification (error) directly corresponds to the contami-
nation level when used for sorting a tranche of bunches.
Diagrams in column ROC-2 show the inverse problem
to separate any infected bunch (infected + severely dis-
eased) from the group of healthy bunches. Obviously,
in ROC-1 and ROC-2 diagrams the axes are exchanged.
This illustrates the trade-off for the threshold-based deci-
sion, because the false positive rate now corresponds
to the loss of healthy bunches (e.g. when the classifier
is used in a sorting-machine). ROC-3 diagrams show
the easier detection of severely diseased versus healthy
and infected bunches. Both ROC-3 curves show that
a higher fraction of severely diseased bunches can be
detected with lower error compared to ROC-1 (healthy)
and ROC-2 (overall infected). e last column shows the
color coded classification accuracies as a 2-dimensional
function of the thresholds for separating between the
three classes (healthy, infected, severely diseased). e
gain in classification accuracy for detection of infected
bunches by using spatial-spectral features is clearly visi-
ble in diagrams ROC-1 and ROC-2, where the area under
curve (AUC), which is related to classification accu-
racy, is increased. ese improvements led to a signifi-
cant gain in the overall classification accuracy from 0.76
(using only spectral data) to 0.86 (using spatial-spectral
features). A detailed analysis of the performance gain is
given in Fig.9. e spatial-spectral approach significantly
improves the ability to separate the three classes, espe-
cially for the difficult detection of infected bunches with
little fungal biomass.
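The ROC analysis and the two-threshold decision can be reproduced along the following lines (sketch with scikit-learn; the per-bunch scores are the diseased-pixel fractions averaged over the two views, classes are assumed to be coded 0 = healthy, 1 = infected, 2 = severely diseased, and all names are assumptions).

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def healthy_roc(scores, classes):
    """ROC-1: detecting healthy bunches. Low scores indicate health, so the
    negated score serves as the decision value."""
    is_healthy = (classes == 0).astype(int)
    fpr, tpr, thresholds = roc_curve(is_healthy, -scores)
    return fpr, tpr, roc_auc_score(is_healthy, -scores)

def best_two_thresholds(scores, classes):
    """Grid search over threshold pairs (A <= B): score < A -> healthy,
    A <= score < B -> infected, score >= B -> severely diseased."""
    candidates = np.unique(scores)
    best_acc, best_pair = 0.0, None
    for i, a in enumerate(candidates):
        for b in candidates[i:]:
            predicted = np.where(scores < a, 0, np.where(scores < b, 1, 2))
            acc = float(np.mean(predicted == classes))
            if acc > best_acc:
                best_acc, best_pair = acc, (a, b)
    return best_acc, best_pair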
Fig. 8 Receiver operating characteristic curves and dependence of classification accuracy on selected thresholds. ROC curves visualize the trade-off
between successful detection of healthy versus infected and severely diseased (ROC-1), infected and severely diseased versus healthy (ROC-2), and
severely diseased versus all other bunches (ROC-3) and the corresponding error rates. ROC curves are calculated for the complete dataset of 60
images. Class decision for each bunch is based on the average fraction of diseased pixels of two images (top and bottom view of the bunch). This
combined score was calculated for each bunch prior to application of a threshold. The top row shows the results for classification based on spectral
features, while the bottom row shows the results for spatial-spectral features with Random Forest classifiers (50 trees each). A true positive rate of
1 means that all bunches of the corresponding class have been successfully assigned to the correct class. This is achieved at the price of a certain
false positive rate, which denotes the fraction of bunches of the other classes falsely assigned to the same class. ROC-1 and ROC-2 are significantly
improved by using spatial-spectral features. As two thresholds are needed to separate the 3 classes, the last column visualizes the accuracy as a
function of the selected thresholds A and B. The optimal combination of thresholds is highlighted for both feature spaces and shows a significant
gain in overall classification accuracy for our spatial-spectral approach
Figure10 illustrates the general segmentation perfor-
mance of the proposed method. e comparison with the
manually annotated reference image highlights the capa-
bility of the Random Forest based segmentation approach
to successfully detect powdery mildew affected grapes
in VNIR hyperspectral images. Results for both spectral
and spatial-spectral segmentation contain a number of
pixels classified as false positive. As these pixels repre-
sent mainly background pixels which were not present in
the original training dataset (detached berries only), the
effect on the calculation of fractions of diseased/healthy
pixels is comparable for all bunches of grapes. For this
reason, we improved the approach by adding random
samples from typical background regions (PTFE-plate,
translation stage surface, paper labels, stem) of three
additional hyperspectral grape bunch images to the train-
ing dataset. e pixels detected were then excluded from
the count of diseased pixels. e accuracy values pre-
sented are based on the classification with background
regions suppressed.
Discussion
Hyperspectral imaging and data analysis based on spec-
tral as well as spatial-spectral features have been applied
here to test automated detection of powdery mildew
infection of Chardonnay grape bunches within 12 h of
routine in-field disease assessment. Hyperspectral imag-
ing has already been used to develop spectral indices
for detection of plant diseases [10], quantification of the
spatial proportions within leaf lesions [43] and quantifi-
cation of the intensity of sporulation and leaf coloniza-
tion [9]. Several host-pathogen model systems, such as
sugar beet and barley powdery mildew and grapevine
leaf downy mildew, have been studied previously and,
Fig. 9 Classification results. Confusion matrices for thresholds corresponding to the operating points with maximum accuracy (see Fig. 8) of
spectral (left) and spatial-spectral classification (right). For spatial-spectral classification, thresholds are found which allow perfect detection of
healthy and severely diseased grape bunches. Also, the false detections of infected bunches as healthy and as severely infected are reduced by the
spatial-spectral approach. The best automatically obtained decisions differ from visual assessment by experts only for 4 of the 10 infected bunches,
with 3 classified as healthy and 1 classified as severely diseased. In addition, operating points can be adjusted according to application demands to
provide a lower total accuracy but higher specificity/sensitivity for a certain class as needed
Fig. 10 Visual representation of the results from the various data analysis approaches. Images of a representative scanned Chardonnay grape
bunch: a example of a manually annotated grape bunch with visually identified infection sites shown as red dots, b disease specific visualization of
VNIR hyperspectral image based on LDA coefficients, c powdery mildew detection results based on spatial-spectral approach (Table 1, row 8), d
detection results based on classification of hyperspectral signatures (Table 1, row 1)
to our knowledge, we are the first to report results for powdery mildew on grape bunches and individual berries in a controlled environment (Fig. 1). The approach presented in [10] requires exhaustive testing of the possible combinations of two wavelengths to find the best disease-specific index. Those indices (e.g. PSSR, PRI) along
with a change of reflectance in a particular spectral range
are useful as they may indicate the degree of reaction of
the disease-affected cells in the resistant and susceptible
genotypes [44]. However, the use of only two wavelengths
can be a major drawback and the incorporation of more
wavelengths would drastically increase the amount of
time required to find a solution. In our spatial-spectral
approach, a disease-specific projection based on LDA
is used instead. is approach can be easily transferred
to any other model system. e main advantage is that
the resulting projection is a linear combination of all
wavelengths.
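A minimal sketch of such a disease-specific LDA projection is given below, using scikit-learn and randomly generated placeholder data in place of annotated pixel spectra; the array shapes and class labels are assumptions for illustration only, not the published pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Placeholder training data standing in for annotated pixel spectra:
# X has one row per pixel and one column per wavelength band.
rng = np.random.default_rng(0)
X = rng.random((1000, 200))            # assumed: 1000 pixels, 200 bands
y = rng.integers(0, 2, size=1000)      # assumed labels: 0 = healthy, 1 = diseased

# With two classes, LDA yields a single component, i.e. one linear combination of all wavelengths.
lda = LinearDiscriminantAnalysis(n_components=1).fit(X, y)

# Apply the projection to a (rows, cols, bands) hypercube to obtain one descriptive image band.
cube = rng.random((50, 60, 200))       # assumed hypercube dimensions
band = lda.transform(cube.reshape(-1, 200)).reshape(50, 60)
```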
However, the motivation for incorporating fewer
wavelengths is often to enable the application of simpler and
cheaper sensor systems. For this, it is important to iden-
tify the most relevant wavelengths from the hyperspec-
tral dataset. In [10] the RELIEF-F algorithm is used prior
to exhaustive testing to constrain the search space for the
final solution for computational reasons. We have shown
that similar information can be derived from the struc-
ture of the Random Forest classifier. We also showed for
an adapted selection of three relevant wavelengths that a
gain in classification accuracy (for detached berries) can
be achieved when used in combination with textural fea-
tures of image blocks instead of single pixels (Table 2).
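The following sketch illustrates, under assumed data shapes, how a ranking of wavelength bands can be read from the structure of a trained Random Forest via its impurity-based feature importances; it is an illustrative approximation, not the exact selection procedure used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((2000, 200))            # assumed pixel spectra (n_pixels, n_bands)
y = rng.integers(0, 3, size=2000)      # assumed labels: healthy / infected / background

# Train the ensemble and read a band ranking off its structure via impurity-based importances.
rf = RandomForestClassifier(n_estimators=50, n_jobs=-1, random_state=0).fit(X, y)
top_three_bands = np.argsort(rf.feature_importances_)[::-1][:3]
print("Most informative band indices:", top_three_bands)
```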
An alternative approach for identifying the most relevant
spectral features was reported in [45]. Here, Support
Vector Machines (SVM) and Random Forest classifiers
were coupled for classification of pine trees. An impor-
tant aspect of this work was the utilization of Random
Forest variable importance to identify the most relevant
wavelength bands. Importance is based on ‘out-of-bag’
error and measures the average loss of accuracy when a
single variable is not used. Experiments reported in [46]
also include dimensionality reduction of hyperspectral
data. e authors concluded that identifying the most
relevant wavelength bands prior to classification yielded
results similar to classification based on the complete
spectral data. ese findings showed that feature reduc-
tion was possible without significant loss of accuracy. An
alternative approach to incorporate feature relevance into
the training of Random Forest classifiers was proposed in
[47]. Here, the randomness was induced in a guided way
by selecting features based on a learned non-uniform
distribution.
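For comparison, permutation importance (sketched below with placeholder data and illustrative parameters) estimates the average loss of accuracy when a single band is made uninformative, which is close in spirit to the 'out-of-bag' importance used in [45].

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((2000, 200))            # assumed spectra (n_pixels, n_bands)
y = rng.integers(0, 2, size=2000)      # assumed binary labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0).fit(X_tr, y_tr)

# Average accuracy loss when a single band is permuted on held-out data.
result = permutation_importance(rf, X_te, y_te, n_repeats=5, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]
```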
The promising results for intact bunches in the VNIR
wavelength range and from cross-validation experiments
within the training datasets (detached berries), in either
the VNIR or SWIR domain, warrant further testing in a
controlled environment and an industry setting to cor-
roborate these findings. Results showed that a Random
Forest with 50 random decision trees can be used to esti-
mate infection and discriminate healthy bunches from
infected ones. However, variation in hidden E. necator bio-
mass and/or airborne conidia on the surface of berries
in the visually healthy bunches indicates the need to set
thresholds for characterization of healthy bunches.
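One way to set such a threshold, sketched below with made-up numbers, is to choose the operating point on the per-bunch diseased-pixel fraction that maximises Youden's J statistic [41]; the values shown are purely illustrative and not drawn from the study data.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Illustrative per-bunch diseased-pixel fractions and reference labels (0 = healthy, 1 = infected).
fractions = np.array([0.00, 0.01, 0.03, 0.08, 0.20])
labels = np.array([0, 0, 1, 1, 1])

fpr, tpr, thresholds = roc_curve(labels, fractions)
best_threshold = thresholds[np.argmax(tpr - fpr)]   # operating point maximising Youden's J
print("Threshold on diseased-pixel fraction:", best_threshold)
```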
The proposed algorithm for predicting powdery mil-
dew severity needs to be validated in controlled condi-
tions similar to those described by [48] for grape berries
and bunches with intact conidia and during the latent
period of E. necator development (i.e. between germi-
nation of the conidium and sporulation of the colony).
This algorithm also needs to be validated using intact
bunches harvested by hand at maturity, such as may be
used for premium quality wines, small wineries, organic
or biodynamic wines and dried products (e.g. raisins).
Such validation will determine the sensitivity and preci-
sion of hyperspectral imaging under different conditions
to assess its usefulness as a method to improve objective
assessment of powdery mildew severity.
The proposed algorithm was developed for Chardon-
nay from a single vineyard at the beginning of bunch
closure (E-L 30-33), when visually healthy and infected
berries as well as the fungus differ in biochemical com-
position from that at harvest (E-L 38). Also, at harvest,
skin and berry defects may be present due to biotic (e.g.
other diseases and pests) and abiotic damage. It has been
shown that LDA using data collected for berry color with
an automated in-field phenotyping device (PHENObot)
could not predict red and rosé berries if RGB values were
used [49]. Consequently, it can be expected that addi-
tional adjustments, such as using grape bunches collected
at harvest from a range of white and black grape varieties
and growing regions, bunches with diverse compactness
and those affected by other economically important dis-
eases such as botrytis bunch rot [50], and validation in
uniform light conditions, will improve the accuracy of
hyperspectral imaging and prediction of powdery mildew
severity on intact bunches. is approach may expand
the application of hyperspectral discrimination of healthy
and infected hand-harvested bunches in an industry set-
ting. Implementation of hyperspectral imaging for sort-
ing healthy and infected hand-harvested bunches in a
single layer on a conveyor belt may be feasible.
Hyperspectral imaging has potential for real time
assessment. However, substantial modification would be
required to take into account differences between hand-
and machine-harvested grapes. Machine-harvested
grapes delivered to wineries comprise mainly individual
detached and damaged berries plus material other than
grapes (e.g. leaves, fragments of canes and vine bark).
These detached berries can be either completely or par-
tially covered with juice [51]. The presence of juice con-
taining E. necator mycelia and conidia that are washed
from the surface of infected berries during machine-
harvesting might confound assessment due to reflec-
tion/scattering/shadow and the focal plane might differ
from the recording of selected berries used for model
generation. erefore, classification models would need
to be developed using detached berries covered with
juice. High spatial resolution and variability within the
juice-berry matrix make it necessary to define the most
important characteristics of berry skin, where E. neca-
tor resides, to increase the reliability and sensitivity of
the analysis. Consequently, sensitivity and accuracy of
hyperspectral imaging will need to be tested in these
conditions.
The qPCR results showed a need to establish thresh-
olds for fungal biomass in visually healthy bunches and
the same approach applies for hyperspectral imaging of
those bunches. In the future, fungal biomass thresholds
might be tentatively proposed for white and black vari-
eties from different regions and validated through the
perception of specific sensory characters in the resulting
wine [32, 52].
Conclusions
In this paper an approach to fast image segmentation has
been adapted for segmentation of hyperspectral image
data. Especially for automated plant phenotyping facili-
ties, fast and robust algorithms are crucial for the analysis
of imaging data from high-throughput experiments. Dif-
ferent dimensionality reduction methods have been tested
to study the performance of spatial-spectral segmentation
using Random Forest classifiers. e experimental results
for the estimation of various powdery mildew infection
levels on intact grape bunches show that the proposed
spatial-spectral segmentation approach outperforms tra-
ditional pixel-wise classification of normalized spectral
data by Random Forests. e use of a multiple classifier
system, namely Random Forest, enables easy improve-
ments in classification accuracy by increasing the ensem-
ble size, fast feature extraction by calculating only the
required features, as well as efficiency by parallel com-
putation of the trees within the ensemble. Altogether,
the application of the proposed image processing work-
flow has the potential to improve speed and accuracy in
disease detection and monitoring in plant phenotyping
applications. Also, it is applicable to all scales and, thus,
will broaden the scope for the application of hyperspectral
imaging technologies for the assessment of diseases, plant
vitality, stress parameters, and nutrition status.
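As a pointer to how such selective feature extraction can be kept fast, the sketch below shows the standard integral-image trick for a single image band: after one cumulative-sum pass, the sum (and hence mean) of any rectangular block is obtained from four array lookups, so block features need only be computed where a tree actually requests them. The function names are illustrative and not taken from the published code.

```python
import numpy as np

def integral_image(band: np.ndarray) -> np.ndarray:
    """Cumulative sum over rows and columns: ii[r, c] = sum of band[:r+1, :c+1]."""
    return band.cumsum(axis=0).cumsum(axis=1)

def block_sum(ii: np.ndarray, r0: int, c0: int, r1: int, c1: int) -> float:
    """Sum of band[r0:r1, c0:c1] from the integral image ii (upper bounds exclusive)."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return float(total)
```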
Authors’ contributions
UK, AM, US, TP, ES and TZ designed research; UK, US, TP and TZ acquired the
data; UK and TP performed research and analyzed the data; TZ and TP per-
formed field trials and provided plant material; UK, AM, ES and TP wrote the
paper. All authors read and approved the final manuscript.
Author details
1 Biosystems Engineering, Fraunhofer IFF, Sandtorstr. 22, 39106 Magdeburg,
Germany. 2 Leibniz-Institute of Plant Genetics and Crop Plant Research (IPK),
OT Gatersleben, Corrensstraße 3, 06466 Seeland, Germany. 3 School of Agricul-
ture, Food and Wine, The University of Adelaide, Waite Campus, PMB 1, Glen
Osmond, Adelaide, SA 5064, Australia.
Acknowledgements
The authors acknowledge the supply of the biological material by The Univer-
sity of Adelaide, School of Agriculture, Food & Wine through a grant from the
Australian Grape and Wine Authority (trading as Wine Australia, UA1202, E.S.
Scott). This work was partly supported by a grant of the German Federal Min-
istry of Education and Research (BMBF) under Contract Numbers 01DR14027A
and 01DR14027B.
Competing interests
The authors declare that they have no competing interests.
Availability of data and materials
The datasets used and/or analysed during the current study are available from the
corresponding author on reasonable request.
Funding
This work was partly supported by a grant of the German Federal Ministry of
Education and Research (BMBF) under Contract Numbers 01DR14027A and
01DR14027B. This grant covered the travel and accommodation expenses
of U.S. and A.M. as well as the shipping costs for the measurement equip-
ment. The general scope of the grant was the promotion of international
scientific collaboration. E.S. received a grant from the Australian Grape and
Wine Authority (trading as Wine Australia, UA1202, E.S. Scott) which allowed
the supply of the biological material, performance of DNA extraction and
real-time PCR.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in pub-
lished maps and institutional affiliations.
Received: 18 October 2016 Accepted: 7 June 2017
References
1. Dale LM, Thewis A, Boudry C, Rotar I, Dardenne P, Baeten V, Pierna
JAF. Hyperspectral imaging applications in agriculture and agro-food
product quality and safety control: a review. Appl Spectrosc Rev.
2013;48(2):142–59.
2. Jones HG, Grant OM. Remote sensing and other imaging technologies
to monitor grapevine performance. In: Gerós H, Chaves MM, Gil HM,
Delrot S, editors. Grapevine in a changing environment: a molecular and
ecophysiological perspective. West Sussex: Wiley; 2015. p. 179–201.
3. Villmann T, Kästner M, Backhaus A, Seiffert U. Processing hyperspectral
data in machine learning. In: European symposium on artificial neural
networks, computational intelligence and machine learning, 2013, p.
1–10.
4. Kuska M, Wahabzada M, Leucker M, Dehne H-W, Kersting K, Oerke E-C,
Steiner U, Mahlein A-K. Hyperspectral phenotyping on the microscopic
scale: towards automated characterization of plant–pathogen interac-
tions. Plant Methods. 2015;11(28):1–14.
5. Arens N, Backhaus A, Döll S, Fischer S, Seiffert U, Mock H-P. Non-invasive
presymptomatic detection of Cercospora beticola infection and
identification of early metabolic responses in sugar beet. Front Plant Sci.
2016;7:1377.
6. Aasen H, Burkart A, Bolten A, Bareth G. Generating 3D hyperspectral infor-
mation with lightweight UAV snapshot cameras for vegetation monitor-
ing: From camera calibration to quality assurance. ISPRS J Photogramm
Remote Sens. 2015;108:245–59.
7. Keshava N. A survey of spectral unmixing algorithms. Lincoln Lab J.
2003;14(1):55–78.
8. Bergsträsser S, Fanourakis D, Schmittgen S, Cendrero-Mateo MP, Jansen
M, Scharr H, Rascher U. HyperART: non-invasive quantification of leaf
traits using hyperspectral absorption-reflectance-transmittance imaging.
Plant Methods. 2015;11(1):1–17.
9. Oerke E-C, Herzog K, Töpfer R. Hyperspectral phenotyping of the
reaction of grapevine genotypes to Plasmopara viticola. J Exp Bot.
2016;67(18):5529–43.
10. Mahlein A-K, Rumpf T, Welke P, Dehne H-W, Plümer L, Steiner U, Oerke
E-C. Development of spectral indices for detecting and identifying plant
diseases. Remote Sens Environ. 2013;128:21–30.
11. Mahlein A-K, Steiner U, Hillnhütter C, Dehne H-W, Oerke E-C. Hyperspec-
tral imaging for small-scale analysis of symptoms caused by different
sugar beet diseases. Plant Methods. 2012;8(3):1–13.
12. Breiman L. Random forests. Mach Learn. 2001;45(1):5–32.
13. Viola P, Jones M. Rapid object detection using a boosted cascade of sim-
ple features. In: Proceedings of the IEEE conference on computer vision
and pattern recognition; 2001, p. 511–8.
14. Wang X-Y, Zhang X-J, Yang H-Y, Bu J. A pixel-based color image segmenta-
tion using support vector machine and fuzzy c-means. Neural Netw.
2012;33:148–59.
15. Gould S, Gao T, Koller D. Region-based segmentation and object detec-
tion. In: Advances in neural information processing systems; 2009, p.
655–63.
16. Wang X-Y, Wang T, Bu J. Color image segmentation using pixel wise sup-
port vector machine classification. Pattern Recogn. 2011;44(4):777–87.
17. Li J, Bioucas-Dias JM, Plaza A. Hyperspectral image segmentation using
a new bayesian approach with active learning. IEEE Trans Geosci Remote
Sens. 2011;49(10):3947–60.
18. Gong M, Liang Y, Shi J, Ma W, Ma J. Fuzzy c-means clustering with local
information and kernel metric for image segmentation. IEEE Trans Image
Process. 2013;22(2):573–84.
19. Pan C, Park DS, Yang Y, Yoo HM. Leukocyte image segmentation by
visual attention and extreme learning machine. Neural Comput Appl.
2012;21(6):1217–27.
20. Puranik P, Bajaj P, Abraham A, Palsodkar P, Deshmukh A. Human percep-
tion-based color image segmentation using comprehensive learning
particle swarm optimization. In: 2nd international conference on emerg-
ing trends in engineering and technology (ICETET), 2009, p. 630–5. IEEE
21. Lee C-Y, Leou J-J, Hsiao H-H. Saliency-directed color image segmentation
using modified particle swarm optimization. Sig Process. 2012;92(1):1–18.
22. Chen T-W, Chen Y-L, Chien S-Y. Fast image segmentation based on
k-means clustering with histograms in HSV color space. In: IEEE 10th
workshop on multimedia signal processing, 2008, p. 322–5. IEEE
23. Tobias OJ, Seara R. Image segmentation by histogram thresholding using
fuzzy sets. IEEE Trans Image Process. 2002;11(12):1457–65.
24. Zhang J, Hu J. Image segmentation based on 2D Otsu method with
histogram analysis. In: International conference on computer science and
software engineering, 2008, vol. 6, p. 105–08. IEEE
25. Bosch A, Zisserman A, Munoz X. Image classification using random forests
and ferns. In: International conference on computer vision, 2007. IEEE
26. Schroff F, Criminisi A, Zisserman A. Object class segmentation using random forests. In:
British machine vision conference; 2008.
27. Xia J, Du P, He X, Chanussot J. Hyperspectral remote sensing image
classification based on rotation forest. IEEE Geosci Remote Sens Lett.
2014;11(1):239–43.
28. Amini S, Homayouni S, Safari A. Semi-supervised classification of hyper-
spectral image using random forest algorithm. In: IEEE international
geoscience and remote sensing symposium; 2014, p. 2866–9. IEEE
29. Fassnacht F, Neumann C, Förster M, Buddenbaum H, Ghosh A, Clasen
A, Joshi PK, Koch B. Comparison of feature reduction algorithms
for classifying tree species with hyperspectral data on three central
european test sites. IEEE J Select Top Appl Earth Observ Remote Sens.
2014;7(6):2547–61.
30. Ren Y, Zhang Y, Wei W, Li L. A spectral-spatial hyperspectral data clas-
sification approach using random forest with label constraints. In: IEEE
workshop on electronics, computers and applications; 2014, p. 344–7.
IEEE
31. Camps-Valls G, Tuia D, Bruzzone L, Benedictsson JA. Advances in hyper-
spectral image classification. IEEE Signal Process Mag. 2014;31(1):45–54.
32. Iland P, Proffitt T, Dry P, Tyerman S. The grapevine: from the science to
the practice of growing vines for wine. Adelaide: Patrick Iland Wine Productions Pty
Ltd; 2011. p. 295.
33. Allan W. Winegrape assessment in the vineyard and at the winery. Wineti-
tles; 2003. p. 7–8.
34. Coombe BG. Adoption of a system for identifying grapevine growth
stages. Aust J Grape Wine Res. 1995;1(2):104–10.
35. Petrovic T, Zanker T, Perera D, Stummer BE, Cozzolino D, Scott ES.
Development of qPCR and mid-infra-red spectroscopy to aid objective
assessment of powdery mildew on grape bunches. In: Proceedings of the
7th international workshop on grapevine downy and powdery mildew;
2014, p. 122–4.
36. Bellman RE. Dynamic programming. Princeton: Princeton University Press;
1957.
37. Hughes GF. On the mean accuracy of statistical pattern recognizers. IEEE
Trans Inf Theory. 1968;14(1):55–63.
38. Knauer U, Meffert B. Fast computation of region homogeneity with appli-
cation in a surveillance task. In: ISPRS technical commission V symposium;
2010, p. 337–42. ISPRS
39. Knauer U, Backhaus A, Seiffert U. Fusion trees for fast and accurate clas-
sification of hyperspectral data with ensembles of γ-divergence-based
RBF networks. Neural Comput Appl. 2014;26(2):253–62.
40. Powers DMW. From precision, recall and F-measure to ROC, informedness,
markedness and correlation. J Mach Learn Technol. 2011;2(1):37–63.
41. Youden WJ. Index for rating diagnostic tests. Cancer. 1950;3(1):32–5.
42. Knauer U, Seiffert U. Cascaded reduction and growing of results set for
combining object detectors. In: Zhou Z-H, Roli F, Kittler J, editors. Multiple
classifier systems. LNCS, vol. 7872. Nanjing: Springer; 2013. p. 121–33.
43. Leucker M, Mahlein A-K, Steiner U, Oerke E-C. Improvement of lesion
phenotyping in Cercospora beticola—sugar beet interaction by hyper-
spectral imaging. Phytopathology. 2016;106(2):177–84.
44. Leucker M, Wahabzada M, Kersting K, Peter M, Beyer W, Mahlein A-K,
Oerke E-C. Hyperspectral imaging reveals the effect of sugar beet
quantitative trait loci on Cercospora leaf spot resistance. Funct Plant Biol.
2017;44(1):1–9.
45. Abdel-Rahman EM, Mutanga O, Adam E, Ismail R. Detecting Sirex noctilio
grey-attacked and lightning-struck pine trees using airborne hyperspec-
tral data, random forest and support vector machines classifiers. ISPRS J
Photogramm Remote Sens. 2014;88:48–59.
46. Dalponte M, Orka HO, Gobacken T, Gianelle D, Naesset E. Tree species
classification in boreal forests with hyperspectral data. IEEE Trans Geosci
Remote Sens. 2013;51(5):2632–45.
47. Montillo A, Shotton J, Winn J, Iglesias JE, Metaxas D, Criminisi A. Entangled
decision forests and their application for semantic segmentation of CT
images. Berlin: Springer; 2011. p. 184–96.
48. Ficke A, Gadoury DM, Seem RC, Dry IB. Effects of ontogenic resistance
upon establishment and growth of Uncinula necator on grape berries.
Phytopathology. 2003;93(5):556–63.
49. Kicherer A, Herzog K, Pflanz M, Wieland M, Rüger P, Kecke S, Kuhlmann
H, Töpfer R. An automated field phenotyping pipeline for application in
grapevine research. Sensors. 2015;15(3):4823–36.
50. Herzog K, Wind R, Töpfer R. Impedance of the grape berry cuticle as a
novel phenotypic trait to estimate resistance to Botrytis cinerea. Sensors.
2015;15(6):12498–512.
51. Hendrickson DA, Lerno LA, Hjelmeland AK, Ebeler SE, Heymann H, Hopfer
H, Block KL, Brenneman CA, Oberholster A. Effect of machine harvesting
with and without optical berry sorting on Pinot Noir grape and wine
composition. In: Beames KS, Robinson EMC, Dry PR, Johnson DL, editors.
Proceedings of the 16th Australian Wine Industry Technical Conference.
Adelaide, South Australia: Australian Wine Industry Technical Conference
Inc.; 2017. p. 160–4.
52. Scott ES, Dambergs RG, Stummer BE. Fungal contaminants in the vine-
yard and wine quality. In: Reynolds AG, editor. Managing wine quality:
viticulture and wine quality, vol 1. Cambridge: Woodhead Publishing;
2010. p. 481–514.