UAV-Based Crop and Weed Classification for Smart Farming
Philipp Lottes Raghav Khanna Johannes Pfeifer Roland Siegwart Cyrill Stachniss
Abstract— Unmanned aerial vehicles (UAVs) and other robots in smart farming applications offer the potential to monitor farm land on a per-plant basis, which in turn can reduce the amount of herbicides and pesticides that must be applied. Central information for the farmer as well as for autonomous agricultural robots is knowledge about the type and distribution of the weeds on the field, and UAVs are an excellent platform to address this challenge. In this paper, we address the problem of detecting value crops such as sugar beets as well as key weeds using a camera installed on a lightweight UAV. We propose a system that performs vegetation detection, plant-tailored feature extraction, and classification to obtain an estimate of the distribution of crops and weeds on the field. We implemented and evaluated our system using UAVs on two farms, one in Germany and one in Switzerland, and demonstrate that our approach allows for analyzing the field and classifying the individual plants.
I. INTRODUCTION
Herbicides can have several side effects on the biotic and abiotic environment and pose a risk to human health [7]. Therefore, reducing the amount of herbicides used in modern agriculture is a relevant step towards sustainable agriculture. In conventional weed control, the whole field is typically treated uniformly with a single herbicide dose, spraying the soil, crops, and weeds in the same way. This practice is simple for the user, as neither knowledge about the spatial distribution nor about the type of the weeds is required, and it is thus commonly used in conventional farming. The availability of such knowledge, however, offers the potential to reduce the agro-chemicals brought to the fields. One way to achieve this is to selectively spray different weed species, as they show variable sensitivities to different herbicides. Another option is mechanical or laser-based weed control, which aims at physically destroying the weeds from a moving platform.
Thus, modern approaches such as agricultural system management and smart farming typically require detailed knowledge about the current status of the fields. One popular way to monitor farm land is the use of aerial vehicles such as UAVs. Compared to ground vehicles, UAVs can cover large areas in a comparably short amount of time and do not impact the fields through soil compaction as ground vehicles do. For successful on-field intervention, it is important to know the type and spatial distribution of the weeds on the field at an early stage. The earlier mechanical or chemical weeding actions are executed, the higher the chances for obtaining a
Philipp Lottes and Cyrill Stachniss are with the University of Bonn, Germany, while Raghav Khanna, Johannes Pfeifer and Roland Siegwart are with the Swiss Federal Institute of Technology, Zurich, Switzerland. This work has partly been supported by the EC under contract number H2020-ICT-644227-FLOURISH.
Fig. 1: Low-cost UAV used for field monitoring (left) as well as an
example image analyzed by our approach (right).
high yield. A prerequisite for triggering weeding and intervention tasks is detailed knowledge about the spread of weeds. Therefore, it is important to have an automatic perception pipeline that monitors and analyzes the crops and provides a report about the status of the field.
In this paper, we address the problem of analyzing UAV imagery to inspect the status of a field in terms of weed types and the spatial crop and weed distribution. We focus on detection on a per-plant basis to estimate the amount of crops as well as of various weed species and to provide this information to the farmers. Thus, the main contribution of this paper is a vision-based classification system for identifying crops and weeds in both RGB-only and RGB combined with near-infrared (NIR) imagery of agricultural fields captured by a UAV. Our perception pipeline is capable of detecting plants in such images based only on their appearance. Our approach is able to exploit the geometry of the plant arrangement without requiring a known spatial distribution to be specified explicitly. Optionally, however, it can also utilize prior information regarding the crop row arrangement. In addition, it can deal with vegetation that is located within the intra-row space.
We implemented and tested our approach using different types of cameras and UAVs, see Figure 1 for an example, on real sugar beet fields. Our experiments suggest that our proposed system is able to classify sugar beets and different weed types in RGB images captured by a commercial low-cost UAV system. We show that the classification system achieves a good performance in terms of overall accuracy even in images where neither a row detection of the crops nor a regular pattern of the plantation can be exploited for geometry-based detection. Furthermore, we evaluate our system using RGB+NIR images captured with a comparably expensive camera system and compare the results to those obtained using the standard RGB camera mounted on a consumer quad-copter (DJI Phantom 4 and camera for around $1,500).
II. RELATED WORK
UAVs equipped with different sensors serve as an excellent platform to obtain fast and detailed information about agricultural field environments. Monitoring crop height, canopy cover, leaf area, nitrogen levels, or different vegetation indices over time can help to automate data interpretation and thus to improve crop management, see for example [9], [18], [20], [3]. Geipel et al. [3] as well as Khanna et al. [9] focus in their work on the estimation of crop height using UAV imagery. Both works apply a bundle adjustment procedure to compute a terrain model and perform a vegetation segmentation in order to estimate the crop height based on the obtained 3D information. Tokekar et al. [20] introduce a concept for a collaboration of an unmanned ground vehicle (UGV) and a UAV in order to measure nitrogen levels of the soil across a farm. The basic idea is to use the UAV for the measurements and the UGV for the transport of the UAV due to its limited energy budget.
Several works have been conducted in the context of vegetation detection using RGB as well as multispectral imagery of agricultural fields [5], [6], [21]. Hamuda et al. [6] present a comprehensive study about plant segmentation in field images using threshold-based methods and learning-based approaches. Torres-Sánchez et al. [21] investigate an automatic thresholding method based on the Normalized Difference Vegetation Index (NDVI) and the Excess Green Index (ExG) in order to separate the vegetation from the background. They achieve an accuracy of 90–100% for the vegetation detection based on their approach. In contrast, Guo et al. [5] apply a learning approach based on decision trees for vegetation detection. They use spectral features for the classification, exploiting different color spaces based on RGB images. We use a threshold-based approach built on the ExG and NDVI in order to separate the vegetation from the background, i.e., mostly soil.
The next level of data interpretation is the classification of the detected vegetation by separating it into the classes crop and weed. Several approaches have been proposed in this context. Peña et al. [14] introduced a method for the computation of weed maps in maize fields based on multispectral imagery. They extract super-pixels based on spatial and spectral characteristics, perform a segmentation of the vegetation, and detect crop rows in the images. Finally, they use the information about the detected crop rows to distinguish crops and weeds. In a follow-up work, Peña et al. [15] evaluated a similar approach to [14] for different flight altitudes and achieve the best performance, i.e., around 90% overall accuracy for crop/weed classification, using images captured at an altitude of around 40 m with a spatial resolution of 15 mm/px. Furthermore, they conclude that using additional near-infrared (NIR) information leads to better results for vegetation detection.
Machine learning techniques have also been applied to classify crops and weeds in UAV imagery of plantations [8], [4], [16], [17]. Perez-Ortiz et al. [16] propose a weed detection system based on the classification of image patches into the classes crop, weed, and soil. They use pixel intensities of multispectral images and geometric information about crop rows in order to build features for the classification. They evaluate different machine learning algorithms and achieve overall accuracies of 75–87% for the classification. Perez-Ortiz et al. [17] used a support vector machine classifier for crop/weed detection in RGB images of sunflower and maize fields. They present a method for both inter-row and intra-row weed detection by exploiting statistics of pixel intensities, textures, shape, and geometrical information as features. Guerrero et al. [4] propose a method for weed detection in images of maize fields, which allows identifying the weeds after their visual appearance has changed in image space due to rainfall, dry spells, or herbicide treatment. Garcia et al. [8] conducted a study for separating sugar beets and thistle based on multispectral images with a comparably large number of narrow bands. They applied a partial least squares discriminant analysis for the classification and achieved a recall of 84% for beet and 93% for thistle by using four narrow bands at 521, 570, 610, and 658 nm for the feature extraction. Another noteworthy approach is the one by Mortensen et al. [?]. They apply a deep convolutional neural network for classifying different types of crops to estimate individual biomass amounts. They use RGB images of field plots captured at 3 m above the soil and report an overall accuracy of 80% evaluated on a per-pixel basis.
Our approach extracts visual features as well as geometric features of the detected vegetation and uses a Random Forest [2] to further classify the vegetation into the classes crop and weed. Optionally, we perform a crop row detection and extract an additional feature to integrate this information into the classification system. In the past, we have proposed a classification pipeline [10], [11] for ground vehicles operating in farms. In this paper, we extend our pipeline in terms of (i) geometric features for the use on UAVs, (ii) the ability to operate without sunlight shields and 100% artificial lighting as used in both previous papers, and (iii) a way to handle RGB images without requiring NIR information.
III. PLANT/WEED CLASSIFICATION FOR UAVS
The primary objective of the proposed plant classification is to identify sugar beet crops and weeds in UAV imagery in order to provide a tool for an accurate monitoring of the plantation on real fields. The target is to determine a detailed map of the crops and weeds on a per-plant basis in each image, also including weeds located in the intra-row space. We furthermore target the detection of common weed species on sugar beet fields in Northern Europe, which is an important problem and a challenging task for precision farming systems in Germany. The input to the system is either a 4-channel RGB+NIR image or a regular RGB image, depending on the sensor setup of the UAV. The optional NIR information that is available on more advanced systems supports the vegetation analysis but is not essential.
The addressed task is strongly related to UGV-based plant classification such as our previous approaches [10], [11]. Using UAV data instead of data recorded with a UGV is more challenging, as the imagery is naturally exposed to varying lighting conditions and different scales. Furthermore, UGV-based systems can exploit more assumptions about the data and allow for controlled illumination, as the sunlight is typically shielded and artificial light sources are applied. The approach presented in this paper builds upon [10], [11] but extends the classification system, adds more relevant features, can work with RGB-only imagery, and is tailored to UAVs.

Fig. 2: Example image captured by a DJI Phantom 4 at 3 m altitude at different levels of progress within the classification pipeline. From left to right: normalized RGB image, computed Excess Green Index (ExG) according to Eq. (2), vegetation mask V, and multi-class classification output of our plant/weed classification system
The key steps of our system are the following: First, we apply a pre-processing step in order to normalize intensity values on a global scale and detect the vegetation in each image. Second, we extract features only for regions that correspond to vegetation, exploiting a combination of an object-based [11] and a keypoint-based [10] approach. Third, we apply a multi-class Random Forest classification and obtain a probability distribution for the predicted class labels. We adapt our previous perception pipeline for the crop/weed classification from the ground vehicle to the UAV and introduce the following innovations:
• multi-class detection for discrimination of different weed species,
• classification on RGB+NIR or RGB-only imagery, and
• geometric features for exploiting the plant arrangement and the crop rows if available.
A. Vegetation Detection

The goal of vegetation detection is to remove the background, i.e., the soil or other objects, given an image I by computing the vegetation mask

V(i, j) = \begin{cases} 1 & \text{if } I(i, j) \in \text{vegetation} \\ 0 & \text{otherwise} \end{cases} \quad (1)

for each pixel location (i, j). Depending on the input data (RGB+NIR or RGB images), we apply different vegetation indices to separate the vegetation parts from the background. In case NIR information is available, we exploit the spectral characteristics of plants in the NIR and RED channels through the normalized difference vegetation index (NDVI) according to [19]. In case we only have RGB data available, we rely on the Excess Green Index (ExG) given by

I_{\text{ExG}} = 2\, I_{\text{GREEN}} - I_{\text{RED}} - I_{\text{BLUE}} \quad (2)

and compute the masking based on a threshold.
Fig. 3: Pipeline of our plant/weed classification system and concept of the object-based (left) and keypoint-based (right) approach for the feature extraction. Green refers to sugar beet keypoints and objects, red refers to weed keypoints and objects, and yellow refers to mixed objects, which consist of both sugar beet and weed pixels
See Figure 2 for an illustration of the ExG and the vegetation mask obtained by thresholding. Compared to the NDVI, the ExG provides an appropriate index distribution for the identification of the vegetation in RGB images, confirming the results of [?], [13], [9]. However, this approach can lead to wrong segmentations in case of green objects in the image that do not correspond to vegetation. On agricultural fields, however, there are usually only few green objects except plants, so that the ExG index is a good choice for our application if no near-infrared channel is available.
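To make the masking step concrete, here is a minimal sketch in Python, assuming 8-bit channel-last image arrays; the concrete threshold value is an illustrative assumption, as the paper does not report one:

```python
import numpy as np

def vegetation_mask(rgb, nir=None, threshold=0.1):
    """Compute the binary vegetation mask V of Eq. (1).

    Uses the NDVI when a NIR channel is given and the ExG of Eq. (2)
    otherwise. The threshold of 0.1 is an assumption for illustration.
    """
    img = rgb.astype(np.float64) / 255.0
    red, green, blue = img[..., 0], img[..., 1], img[..., 2]
    if nir is not None:
        n = nir.astype(np.float64) / 255.0
        index = (n - red) / (n + red + 1e-8)     # NDVI [19]
    else:
        index = 2.0 * green - red - blue         # ExG, Eq. (2)
    return (index > threshold).astype(np.uint8)  # vegetation mask V, Eq. (1)
```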
B. Object-based vs. Keypoint-based Feature Extraction

Given the vegetation mask V, we extract features as the input for our classification system. Here, we consider two approaches. The first one computes a single feature vector per segmented object from the vegetation mask V, whereas the second approach computes the feature vector on a dense grid of keypoints within areas of vegetation, similar to our previous approach described in [11].

Initially, the object-based approach searches for objects, i.e., connected vegetation pixels, in V. Then, we compute a feature vector for each object O using all pixels that belong to the segment. In contrast, the keypoint-based approach makes no topological assumptions about the vegetation pixels. A keypoint K is given by its position (i, j) and a certain neighborhood N(K). We define the positions of the keypoints K on a grid with a constant lattice distance over all vegetation pixels in the image and extract a feature vector for K taking its neighborhood N(K) into account. In our current implementation, we use a lattice distance of 10 pixels by 10 pixels for the keypoints and choose 20 pixels by 20 pixels for the neighborhood N(K). Figure 3 depicts the concept of both approaches visually. The object-based approach has the advantage that features mostly describe whole plants and comparably fewer objects are needed to represent the vegetation. The keypoint-based approach has the advantage of dealing with under-segmented and overlapping plants. In [11], we showed that a cascade combining both approaches benefits from their individual advantages. In this work, we use the cascade approach.
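As an illustration of the two extraction views, the following sketch uses the stated grid parameters (10 px lattice, 20 px neighborhood); the helper names are hypothetical:

```python
import numpy as np
from scipy import ndimage

def objects_from_mask(mask):
    """Object-based view: each connected component of vegetation
    pixels in V becomes one object O."""
    labels, n_objects = ndimage.label(mask)
    return labels, n_objects

def keypoints_from_mask(mask, lattice=10, neigh=20):
    """Keypoint-based view: keypoints K on a regular grid with a
    10 px lattice distance, each with a 20 px square neighborhood
    N(K); only keypoints lying on vegetation pixels are kept."""
    half = neigh // 2
    keypoints = []
    for i in range(0, mask.shape[0], lattice):
        for j in range(0, mask.shape[1], lattice):
            if mask[i, j]:
                i0, j0 = max(i - half, 0), max(j - half, 0)
                patch = mask[i0:i0 + neigh, j0:j0 + neigh]
                keypoints.append(((i, j), patch))
    return keypoints
```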
For the feature extraction, we start with the same set of statistical and shape features as described in [11] and extract a set of features F for each object O as well as for each keypoint K. This set contains statistics regarding (i) the intensity values of the captured images, (ii) their gradient representations, (iii) different color space representations, and (iv) texture information.
C. Geometric Features

In addition to the features described in [11], we consider additional geometric features to exploit the field geometry for the image analysis. Usually, UAV images of a plantation capture larger areas compared to UGV images. Thus, they observe a sufficient number of crops within an image to perform a row detection and to measure spatial relationships among multiple individual plants. We investigate additional geometric features to exploit the fact that crops mostly have a regular spatial distribution, without explicitly specifying it. Note that weeds may also appear spatially in a systematic fashion, e.g., in spots or frequently in border regions of the field. First, we perform a line set detection to find parallel crop rows and use the distances from potential rows to O and K as a feature for the Random Forest classifier. Second, we compute distributions based on distances and angles in the local neighborhood around objects and keypoints and extract statistics from them to use as additional features.

In most agricultural field environments, the plants are arranged in rows, which share a constant inter-row space r, i.e., the distance between two neighboring crop rows. The main goal of the line feature

f_l = \frac{d}{r} \quad (3)

is to exploit the distance d of an object O or keypoint K to a crop row. We normalize d by the inter-row space r and use f_l as an additional feature for the classifier. The values d and r are measured in pixels and can be directly obtained in image space. From a mathematical point of view, crop rows can be represented as a finite set of parallel lines

L(\theta, \rho) = \{ l_i(\theta, \rho_i) \}_{i=1}^{I}, \quad (4)

where θ refers to the orientation of the line set and ρ_i are the distances from each line l_i to the origin of the image frame.
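For illustration, a sketch of how f_l from Eq. (3) could be computed for one pixel position, assuming the rows are parameterized in the usual Hough normal form ρ = x cos θ + y sin θ (this parameterization is our assumption for the sketch):

```python
import numpy as np

def line_feature(i, j, theta, rhos, inter_row_space):
    """Line feature f_l = d / r of Eq. (3) for pixel position (i, j).

    `rhos` holds the distances rho_i of the detected parallel crop rows
    in normal form rho = x*cos(theta) + y*sin(theta), all in pixels."""
    rho_point = j * np.cos(theta) + i * np.sin(theta)
    d = np.min(np.abs(np.asarray(rhos) - rho_point))  # distance to closest row
    return d / inter_row_space                        # normalized by r
```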
1) Line Feature for Crop Rows: Figure 4 depicts an exemplary result of a detected line set and illustrates the concept of our line-based feature. We introduce the constraint that the ρ_i are equidistant to exploit the fact that the inter-row space of crop rows is constant. Note that we do not make any assumptions about the size of r, i.e., the inter-row space. To detect the set L(θ, ρ) of parallel lines, we employ the Hough transform on the vegetation mask V. The Hough space accumulates the number of votes v_{ρ,θ}, i.e., the number of corresponding vegetation pixels for a given line with the parameters θ and ρ. To compute L(θ, ρ), we analyze the Hough space and perform the following three steps.

a) Step 1: Estimating the main direction of the crop rows: We compute the main direction θ_L of the vegetation in an image in order to estimate the direction of the crop rows. This direction can be estimated by considering the votes for parallel lines in Hough space. Here, we follow an approach similar to the one proposed by Midtiby and Rasmussen [12]. To obtain θ_L, they compute the response

E(\theta) = \sum_{\rho} v_{\rho, \theta}^2 \quad (5)

for each direction and select the maximum E(θ). The term v_{ρ,θ} refers to the number of votes for a certain line with the parameters θ, ρ. In contrast to [12], we do not only select the maximum of E(θ) but consider the N best values for E(θ), i.e., the N best-voted directions in Hough space given the vegetation, in the subsequent steps. In our implementation, we use N = 15. We consider the 15 best main directions supported by the vegetation in order to handle scenarios with large amounts of weeds. Tests under high weed pressure show that the maximum response is not always the correct choice, as many weed plants may lead to more votes for a false detection of the rows.
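A minimal sketch of this step, assuming the Hough accumulator is available as a (ρ, θ) vote matrix:

```python
import numpy as np

def best_row_directions(votes, n_best=15):
    """Step 1: score every direction theta by E(theta) = sum_rho v^2
    (Eq. (5)) on a Hough accumulator `votes` of shape (n_rho, n_theta)
    and return the indices of the N best-voted directions (N = 15)."""
    energy = (votes.astype(np.float64) ** 2).sum(axis=0)  # E(theta) per column
    return np.argsort(energy)[::-1][:n_best]
```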
Fig. 4: Left: Result of the line set detection. s (red) refers to the distance of the first line in the set L(θ, ρ) to the origin of the image frame. r (orange) refers to the inter-row space. Right: visual illustration of the line model feature f_l for the keypoint-based approach. The different colors refer to different lines of the detected line set. The value of f_l is encoded by the radius of a keypoint. Weeds located within the inter-row space get a higher value of f_l.

b) Step 2: Estimating the crop rows as a set of parallel lines: Given the N best-voted orientations of possible line sets from Step 1, we want to estimate in which direction we find the best set of parallel lines with equidistant spacing. We search for an unknown but constant spacing r between neighboring lines as well as for the offset s of the first potential crop row in image space, see Figure 4 for an illustration. Thus, by varying the size of r and s, we search for the maximum response of

E(\theta, r, s) = -P + \sum_{l=0}^{L_r - 1} v_{(s + l \cdot r),\, \theta}, \quad (6)

with the penalty term

P = L_r \, \bar{v}_\theta. \quad (7)

The term L_r refers to the number of lines that intersect with the image for a given r. The penalty term P is an additional cost term that is introduced for each line of the set in order to penalize an increasing number of lines. Here, v̄_θ is the mean response over the column corresponding to θ_L in the Hough space. This leads to the effect that E, according to Eq. (6), increases for lines which have a better response v_{(s+l·r),θ} > v̄_θ and decreases if the response is lower. The maximum response according to Eq. (6) provides the best-voted line set, which has a constant inter-row space.
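A sketch of the search in Step 2, following our reading of Eq. (6) and (7); the exhaustive search ranges over r and s are assumptions for illustration:

```python
import numpy as np

def best_line_set(votes, theta_idx):
    """Step 2: search over the inter-row space r and the offset s for
    the equidistant line set maximizing Eq. (6) with the penalty
    P = L_r * v_bar of Eq. (7). `votes[:, theta_idx]` is the Hough
    column for the chosen direction."""
    column = votes[:, theta_idx].astype(np.float64)
    v_bar = column.mean()                       # mean response of the column
    best_score, best_r, best_s = -np.inf, None, None
    for r in range(2, len(column)):             # candidate inter-row spaces
        for s in range(r):                      # candidate offsets
            rows = np.arange(s, len(column), r)             # lines in the image
            score = column[rows].sum() - len(rows) * v_bar  # Eq. (6) and (7)
            if score > best_score:
                best_score, best_r, best_s = score, r, s
    return best_r, best_s, best_score
```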
c) Step 3: Refitting the best line set to the data: Crops are commonly sown in fixed assemblages of a certain number of rows, called plots. It can happen that the inter-row space between plots slightly differs due to the limited positioning accuracy of the sowing machine. To account for this, we finally fit each line of the best set obtained from Step 2 to the data by using a robust estimator based on a Huber kernel and obtain a robust estimate L(θ, ρ) of the crop rows.
2) Spatial Relationship Features: In order to describe spatial relationships among individual plants, we first compute the distances and azimuths from a query object O_q or keypoint K_q to all other nearby objects or keypoints in world coordinates (which requires knowing the flying altitude of the UAV). We compute the differences of the measured distances between the query object and its neighbors to obtain a distribution in the form of a histogram. Similarly, we obtain the distribution over angles from the observed azimuths. From these distributions, we compute common statistical quantities such as min, max, range, mean, standard deviation, median, skewness, kurtosis, and entropy and use them as features for the classifier. In addition, we count the number of vegetation objects O or keypoints K in the neighborhood in object space.

Both the spatial relationship features and the line feature allow for encoding additional geometric properties and in this way improve the random forest classifier used to make the actual decision.
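A sketch of how such a feature vector could be assembled for one query element, assuming the neighbor positions are already available in world coordinates; the histogram bin count is our assumption:

```python
import numpy as np
from scipy.stats import entropy, kurtosis, skew

def spatial_relation_features(query_xy, neighbor_xy, n_bins=16):
    """Spatial relationship features for one query object/keypoint:
    statistics over the distances and azimuths to its neighbors."""
    offsets = np.asarray(neighbor_xy) - np.asarray(query_xy)
    distances = np.linalg.norm(offsets, axis=1)
    azimuths = np.arctan2(offsets[:, 1], offsets[:, 0])
    features = [len(offsets)]                 # neighbor count
    for values in (distances, azimuths):
        hist, _ = np.histogram(values, bins=n_bins)
        p = hist / max(hist.sum(), 1)         # normalized histogram
        features += [values.min(), values.max(), np.ptp(values),
                     values.mean(), values.std(), np.median(values),
                     skew(values), kurtosis(values), entropy(p + 1e-12)]
    return np.asarray(features)
```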
D. Random Forest Classification

We apply a rather standard random forest [2], which is a comparably robust ensemble method capable of solving multi-class problems. The central idea is to construct a large number of decision trees at training time by randomizing the use of features and elements from the training data. It also allows for implicitly estimating confidences for the class labels by considering the outputs of the individual decision trees.
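A minimal sketch of this stage with scikit-learn on synthetic data; the tree count and feature dimensionality are assumptions, as the paper does not report its training settings:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 30))     # feature vectors F (synthetic)
y_train = rng.integers(0, 4, size=200)   # e.g. beet/saltbush/chamomile/other
X_test = rng.normal(size=(10, 30))

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)
proba = forest.predict_proba(X_test)            # implicit per-class confidences
labels = forest.classes_[proba.argmax(axis=1)]  # maximum-confidence labeling
```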
IV. EXPERIMENTS
This evaluation is designed to illustrate the performance of our UAV-based plant/weed classification system and to support the main claims made in this paper. These claims are: (i) our classification system identifies sugar beets as well as common weed species on sugar beet farms in RGB imagery captured by a UAV, even if the crop is not sown in rows and the line detection cannot support the classification, (ii) our plant classification system is able to exploit an arrangement prior about crop rows using our proposed line model and in addition benefits from geometric features, which capture spatial relationships in a local neighborhood, and (iii) our method provides good results under challenging conditions such as overlapping plants and is capable of weed detection in the intra-row space. Finally, we analyze the effect of exploiting the additional NIR channel in terms of vegetation detection and classification performance.

Fig. 5: Zoomed view of images analyzed by our plant/weed classification system (left column) and corresponding ground truth images (right column). Top row: multi-class results for the PHANTOM-dataset. Center row: result for the JAI-dataset. Bottom row: result for the MATRICE-dataset. Arrows point to weeds detected in the intra-row space. Sugar beet (green), chamomile (blue), saltbush (yellow), and other weeds (red)
A. Evaluation Metric

We illustrate the performance of the classification results by ROC curves and precision-recall (PR) plots. For these plots, we vary the threshold for the class labeling with respect to the estimated confidences of the random forest. The parameters influencing the performance of the crop/weed classification are exclusively evaluated on the detected vegetation parts of the images in order to analyze the quality of the classification output. We labeled all plants in the images manually and perform the evaluation on object level, i.e., predicted object vs. ground truth object, in order to obtain a performance measure as close as possible to a per-plant basis. For the keypoint-based classification, we compute the class-wise ratios of predicted keypoints with respect to the ground truth object to keep the evaluation metric on the object level.
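The threshold sweep behind these plots corresponds to the standard one-vs-all curves, sketched here with synthetic confidences for one class:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve, roc_curve

# For one class (one-vs-all), vary the labeling threshold over the
# random forest confidences. The arrays here are synthetic placeholders.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)                  # ground truth objects
confidence = np.clip(y_true * 0.4 + rng.random(500) * 0.6, 0, 1)

fpr, tpr, _ = roc_curve(y_true, confidence)            # ROC operating points
precision, recall, _ = precision_recall_curve(y_true, confidence)
```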
B. Datasets
We collected three different datasets, with different characteristics and challenges, to thoroughly evaluate the performance of our plant/weed classification system.
Fig. 6: ROC curves (top) and PR plots (bottom) for each dataset based on a leave-one-out cross-validation scheme according to [1]. Left: multi-class performance evaluated on the PHANTOM-dataset. Middle: performance of our approach on the JAI-dataset. Right: performance of our approach on the MATRICE-dataset. The label "SRF" refers to the additional use of the spatial relationship features and "LM" to the line model feature.
TABLE I: Information about the datasets.

Parameter          | JAI     | MATRICE           | PHANTOM
# images           | 97      | 31                | 15
flight mode        | manual  | way-point mission | manual
weather conditions | sunny   | sunny/cloudy      | overcast
flight altitude    | 2–3 m   | 15 m              | 3 m
ground resolution  | 2 mm/px | 5 mm/px           | 0.2 mm/px
All the
recorded datasets represent real field conditions and contain
sugar beet crops and weeds.
The JAI-dataset is captured with a 4-channel JAI AD-130 GE camera on a sugar beet farm in Eschikon, Switzerland. The dataset contains 97 RGB+NIR images of sugar beets arranged in crop rows and weeds, captured on a weekly basis over 1.5 months in May 2016. The growth stage of the sugar beets ranges from the early 4-leaf up to the late 6-leaf stage. The camera was pointing downwards to the field at a height of approximately 2–3 m above the soil. The image resolution of 1296 × 966 pixels in combination with a Fujinon TF4DA 8 lens with 4 mm focal length yields a ground resolution of roughly 2 mm/px. The MATRICE-dataset is captured by the Zenmuse X3 camera of the DJI Matrice 100 UAV. The data has been recorded on two days with one week of temporal difference in May 2016 on the same field as the JAI-dataset. The dataset contains 31 RGB images where the crop is in the early 6-leaf growth stage. We flew the UAV roughly 15 m above the soil. The achieved ground resolution is around 5 mm/px. The JAI-dataset and the MATRICE-dataset provide the basis to evaluate the line feature and to analyze the performance of the crop/weed classification. Both datasets contain weeds that are located within both the inter- and intra-row space. In these datasets, the amount of weeds is much smaller compared to the crop. The PHANTOM-dataset is captured on a field in Bonn, Germany, where the sugar beets are not sown in crop rows. The dataset provides images obtained by an unmodified consumer DJI Phantom 4 UAV. The obtained ground resolution of 0.2 mm/px is comparably high, as the images were captured with a resolution of 4000 × 3000 pixels at a flight altitude of 3 m. Due to this resolution, we can visually identify typical weeds on common sugar beet fields, i.e., saltbush (Atriplex) as a common problem weed in terms of mass, chamomile (Matricaria chamomilla), and other weeds.
C. Classification of Crops and Different Weed Types

The first experiment in our evaluation is designed to demonstrate that our system is capable of classifying sugar beets and weed species that are common on sugar beet farms. Therefore, we analyze the performance on the PHANTOM-dataset for the classification of sugar beets, saltbush, chamomile, and other weeds. Figure 6 (left column) depicts the ROC and PR plots computed in a one-vs-all mode to illustrate the obtained performance.

The ROC curves show that for all explicitly specified classes a recall of around 90%, for saltbush even 95%, can be obtained depending on the selected threshold. The class labeling based on the predicted maximum confidences of the random forest leads to the following results. The system achieves a recall of 95% for saltbush and 87% for chamomile. Both species are predicted with a precision of 85%. For sugar beet, a recall of 78% with a precision of 90% is obtained. Generally, the precision suffers from the low recall of 45% obtained for the class other weeds. This result is affected by (i) the small number of examples within the dataset and (ii) probably a higher intra-class variance, since all other weeds that occur in this dataset are represented by this class. More datasets with different weed types are needed to clarify this. In terms of overall accuracy, 86% of the predicted objects and 93% of the area are classified correctly. If we only focus on crop vs. weed classification, the overall accuracy increases by 11% to 96% for the detection on object level. For sugar beets, the performance does not change, but the fusion of the weed species leads to a recall of 99% with an obtained precision of 97%.
D. Impact of Geometric Features

We designed the second experiment to illustrate the impact on performance when using geometric features. Therefore, we evaluated all datasets with all geometric features, i.e., the line feature and the spatial relationship features, and compare the results with the performance obtained when neglecting them. Figure 6 illustrates the results for each dataset when using all geometric features ("with SRF/LM") and when using only the spatial relationship features ("with SRF") without exploiting the line feature.

Our evaluation shows that the use of geometric features supports the classification based on visual features, as it improves the overall accuracy and the precision, especially for weeds, on all tested datasets. Even for the PHANTOM-dataset, which does not involve any regular pattern of the plants, the performance benefits from the use of the spatial relationship features. In numbers, the gain for saltbush is about 6% for both precision and recall. For all other classes, the recall rises by around 3% on average for a given precision.

For the JAI-dataset and the MATRICE-dataset, we measured the effect of the spatial relationship features and of all geometric features, including the line model feature, by an individual cross-validation on each whole dataset. The biggest gain in performance can be observed for the MATRICE-dataset. Here, the detection based only on visual appearance, i.e., features ignoring geometry, suffers from the comparably low ground resolution. Thus, geometric features strongly support the detection, as they are rather invariant to the image resolution. On average, the spatial relationship features are responsible for an increase in performance of around 10%, and in combination with the line feature of 13%, in terms of overall accuracy. The corresponding PR plot indicates that this gain is mainly caused by better detection. This statement also holds for the JAI-dataset. The PR plot illustrates a significant gain in recall of around 5% for weeds, even when the precision is greater than 85%. In sum, the effect of the geometric features is smaller than for the MATRICE-dataset and the PHANTOM-dataset, i.e., 5%. That is because the classification based on pure visual appearance already performs better and is more stable with respect to a varying threshold for the class labeling. We conclude that using geometric features for the classification task is an appropriate way to exploit the spatial characteristics of plantations in agricultural field environments.
Fig. 7: PR plot obtained by a leave-one-out cross-validation on the JAI-dataset using only RGB and RGB+NIR information.

Fig. 8: Left: detailed view of an RGB image containing sugar beets. Right: masking based on NDVI (white) and ExG (green). All vegetation pixels detected by the ExG-based approach are also detected by the NDVI-based approach
E. Intra-Row Weed Detection

Most existing approaches that exploit prior knowledge about the crop rows perform the following two steps sequentially: (i) detect the rows and (ii) use the geometry to declare all vegetation that does not correspond to the rows as weeds. In contrast to that, our approach is suited for both exploiting the geometry and identifying intra-row weeds. Using f_l as a feature in the random forest means that vegetation located within the intra-row space still has the chance to be detected as weed based on its visual features.

Figure 5 visually illustrates typical examples of the performance of our approach. A visual comparison of example images from the JAI-dataset (middle row) and the MATRICE-dataset (bottom row) against the corresponding ground truth images shows that weeds are correctly detected in the intra-row space. This result originates from the classification using also the geometric features.
F. Effect of the Availability of NIR Information

In our last experiment, we evaluate the effect of using additional NIR information for the vegetation detection as well as for the crop/weed classification. We separate this experiment into two parts. First, we evaluate the performance of the vegetation detection given RGB+NIR and RGB-only images and compare it with the ground truth. Second, we analyze the impact of the additional NIR information on the classification. For this part, we extract features based on RGB and RGB+NIR images respectively, using the mask obtained by the ExG in order to avoid effects of the masking on the considered shape features, see [11]. Finally, we neglect the geometric features in this experiment in order to rely only on visual information.
TABLE II: Recall for objects consisting of at least 50 pixels.

index  | NDVI | ExG | ExGR | NGRDI | CIVE
recall | 92%  | 86% | 83%  | 83%   | 81%
We tested several commonly applied vegetation indices as the basis for a threshold-based vegetation segmentation, see [6], [9], [21], on the JAI-dataset, where both RGB and NIR images are available. To evaluate the segmentation performance on object level, we compare the obtained vegetation masks V_index with the labeled data. We define an object as detected if 75% of its pixels are detected as vegetation pixels. Table II gives an overview of the detection rate for vegetation objects using the NDVI, ExG, Normalized Green-Red Difference Index (NGRDI), Excess Green minus Excess Red Index (ExGR), and Color Index of Vegetation Extraction (CIVE). Using the NIR information to exploit the NDVI for the detection outperforms all RGB-based methods. Figure 8 depicts a typical result of V when some parts of the image are shaded. Visually, the NDVI-based masking (white) gives better results compared to the ExG-based (green) vegetation detection. This is because the NIR channel observes a comparably high reflectance for vegetation pixels, even in shaded areas. Given only RGB data, we use the ExG, as it is the best choice given our data.
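A sketch of the object-level detection criterion used here (at least 50 pixels per object, 75% pixel coverage), assuming the ground truth is given as a labeled object image:

```python
import numpy as np

def object_detection_rate(pred_mask, gt_labels, min_pixels=50, coverage=0.75):
    """Object-level recall as used for Table II: a ground-truth object
    (a labeled region in `gt_labels` with at least 50 pixels) counts as
    detected if at least 75% of its pixels are vegetation in `pred_mask`."""
    detected, total = 0, 0
    for k in range(1, gt_labels.max() + 1):
        obj = gt_labels == k
        if obj.sum() < min_pixels:
            continue                          # skip too-small objects
        total += 1
        if pred_mask[obj].mean() >= coverage:
            detected += 1
    return detected / max(total, 1)
```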
Figure 7 depicts the precision-recall plots for the performance of the classification using the additional NIR information (RGB+NIR) as well as relying only on RGB. Using the NIR channel for the features increases the performance by around 3% in terms of overall accuracy. To conclude, using the additional NIR information leads to better classification results and provides more robustness in terms of vegetation segmentation in shaded image regions, but our approach is also operational if the NIR information is missing.
V. CONCLUSION

UAVs used in precision farming applications must be able to distinguish crops from weeds on the field to estimate the type and distribution of weeds. In this work, we focused on sugar beet plants and typical weeds observed on fields in Germany and Switzerland using a regular RGB camera as well as an RGB+NIR camera. We described a system for vegetation detection, feature extraction, and classification for aerial images relying on object features and keypoints in combination with a random forest. Our system can exploit the spatial arrangement of crops through geometric features. We implemented our approach, thoroughly evaluated it using UAVs on two farms, and illustrated that our approach allows for identifying the crops and weeds on the field, which is important information for several precision farming applications.
REFERENCES
[1] E. Alpaydin. Introduction to Machine Learning. MIT Press, 2004.
[2] L. Breiman. Random forests. Machine Learning, 45(1):5–32, 2001.
[3] J. Geipel, J. Link, and W. Claupein. Combined spectral and spatial modeling of corn yield based on aerial images and crop surface models acquired with an unmanned aircraft system. Remote Sensing, 6(11):10335, 2014.
[4] J. M. Guerrero, G. Pajares, M. Montalvo, J. Romeo, and M. Guijarro. Support vector machines for crop/weeds identification in maize fields. Expert Systems with Applications, 39(12):11149–11155, 2012.
[5] W. Guo, U. K. Rage, and S. Ninomiya. Illumination invariant segmentation of vegetation for time series wheat images based on decision tree model. Computers and Electronics in Agriculture, 96:58–66, 2013.
[6] E. Hamuda, M. Glavin, and E. Jones. A survey of image processing techniques for plant extraction and segmentation in the field. Computers and Electronics in Agriculture, 125:184–199, 2016.
[7] L. Horrigan, R. S. Lawrence, and P. Walker. How sustainable agriculture can address the environmental and human health harms of industrial agriculture. Environ Health Perspect, 110:445–456, 2002.
[8] G. R. F. Jose, D. Wulfsohn, and J. Rasmussen. Sugar beet (Beta vulgaris L.) and thistle (Cirsium arvensis L.) discrimination based on field spectral data. Biosystems Engineering, 139:1–15, 2015.
[9] R. Khanna, M. Möller, J. Pfeifer, F. Liebisch, A. Walter, and R. Siegwart. Beyond point clouds - 3D mapping and field parameter measurements using UAVs. In Proc. of the IEEE Int. Conf. on Emerging Technologies & Factory Automation (ETFA), pages 1–4, 2015.
[10] P. Lottes, M. Höferlin, S. Sander, M. Müter, P. Schulze-Lammers, and C. Stachniss. An effective classification system for separating sugar beets and weeds for precision farming applications. In Proc. of the IEEE Int. Conf. on Robotics & Automation (ICRA), 2016.
[11] P. Lottes, M. Höferlin, S. Sander, and C. Stachniss. Effective vision-based classification for separating sugar beets and weeds for precision farming. Journal of Field Robotics, 2016.
[12] H. S. Midtiby and J. Rasmussen. Automatic location of crop rows in UAV images. In NJF Seminar 477: Future Arable Farming and Agricultural Engineering, pages 22–25, 2014.
[13] M. Montalvo, G. Pajares, J. M. Guerrero, J. Romeo, M. Guijarro, A. Ribeiro, J. J. Ruz, and J. M. Cruz. Automatic detection of crop rows in maize fields with high weeds pressure. Expert Systems with Applications, 39(15):11889–11897, 2012.
[14] J. M. Peña, J. Torres-Sánchez, A. I. de Castro, M. Kelly, and F. López-Granados. Weed mapping in early-season maize fields using object-based analysis of unmanned aerial vehicle (UAV) images. PLoS ONE, 8, 2013.
[15] J. M. Peña, J. Torres-Sánchez, A. Serrano-Perez, A. I. de Castro, and F. López-Granados. Quantifying efficacy and limits of unmanned aerial vehicle (UAV) technology for weed seedling detection as affected by sensor resolution. Sensors, 15(3), 2015.
[16] M. Perez-Ortiz, J. M. Peña, P. A. Gutierrez, J. Torres-Sánchez, C. Hervás-Martínez, and F. López-Granados. A semi-supervised system for weed mapping in sunflower crops using unmanned aerial vehicles and a crop row detection method. Applied Soft Computing, 37:533–544, 2015.
[17] M. Perez-Ortiz, J. M. Peña, P. A. Gutierrez, J. Torres-Sánchez, C. Hervás-Martínez, and F. López-Granados. Selecting patterns and features for between- and within-crop-row weed mapping using UAV imagery. Expert Systems with Applications, 47:85–94, 2016.
[18] J. Pfeifer, R. Khanna, C. Dragos, M. Popovic, E. Galceran, N. Kirchgessner, A. Walter, R. Siegwart, and F. Liebisch. Towards automatic UAV data interpretation for precision farming. In Proc. of the International Conf. of Agricultural Engineering (CIGR), 2016.
[19] J. W. Rouse, R. H. Haas, J. A. Schell, and D. W. Deering. Monitoring vegetation systems in the Great Plains with ERTS. NASA Special Publication, 351:309, 1974.
[20] P. Tokekar, J. V. Hook, D. Mulla, and V. Isler. Sensor planning for a symbiotic UAV and UGV system for precision agriculture, pages 5321–5326. 2013.
[21] J. Torres-Sánchez, F. López-Granados, and J. M. Peña. An automatic object-based method for optimal thresholding in UAV images: Application for vegetation detection in herbaceous crops. Computers and Electronics in Agriculture, 114:43–52, 2015.
... Most of them rely on handcrafted distinct features of the object of interest, and it was shown that this principle is suitable for the agriculture as well [5]. Now almost all stateof-the-art approaches involve the usage of CNNs and even transformers. ...
... Dataset is available at www.x.com. 5 ...
... The GIMP Development Team. Available at https://www.gimp.org.5 It is the subject of double blind review.This article has been accepted for publication in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. ...
Article
Full-text available
In this article, we address the problem of Hogweed detection using a drone equipped with an RGB and multispectral cameras. We study two approaches (i) offline detection running on the orthophoto of the area scanned within the mission, and (ii) realtime scanning from the frames stream directly on the edge device performing the flight mission. We show that by fusing the information from an additional multispectral camera installed on the drone, there is an opportunity to boost the detection quality which then can be preserved even with a single RGB camera setup by introduction of additional Convolution Neural Network (CNN) trained with transfer learning to produce the fake multispectral images directly from the RGB stream. We show that this approach helps either eliminate the multispectral hardware from the drone or if only the RGB camera is at hand, to boost the segmentation performance by the cost of slight increase in computational budget. To support this claim we have performed an extensive study of networks performance in simulations of both realtime and offline modes, where we achieve at least 1.1% increase in terms of mean intersection over union (mIoU) metric when evaluated on RGB stream from the camera and 1.4% when evaluated on ortophoto data. Our results show that the proper optimization guarantees a complete elimination of the multispectral camera from the flight mission by adding a preprocessing stage to segmentation network without the loss of quality.
... Weed plant control is the botanical section of pest control that aims to end weeds, particularly harmful weeds, from contending with wanted flora and fauna comprising domesticated plants and livestock, and in natural situations stopping non-native species that compete with native species. Approaches for weed control comprise chemical attack with herbicides, hand cultivation with hoes, powered cultivation with cultivators, stifling with mulch or soil solarization, lethal wilting with extraordinary heat, or burning [16]. However, detection of the weed plant is necessary for weed control by herbicides [17]. ...
... Weed plant control is the botanical sectio of pest control that aims to end weeds, particularly harmful weeds, from contending wit wanted flora and fauna comprising domesticated plants and livestock, and in natural si uations stopping non-native species that compete with native species. Approaches fo weed control comprise chemical attack with herbicides, hand cultivation with hoes, pow ered cultivation with cultivators, stifling with mulch or soil solarization, lethal wiltin with extraordinary heat, or burning [16]. However, detection of the weed plant is nece sary for weed control by herbicides [17]. ...
Article
Full-text available
Smart agriculture is a concept that refers to a revolution in the agriculture industry that promotes the monitoring of activities necessary to transform agricultural methods to ensure food security in an ever-changing environment. These days, the role of technology is increasing rapidly in every sector. Smart agriculture is one of these sectors, where technology is playing a significant role. The key aim of smart farming is to use the technologies to increase the quality and quantity of agricultural products. IOT and digital image processing are two commonly utilized technologies, which have a wide range of applications in agriculture. IOT is an abbreviation for the Internet of things, i.e., devices to execute different functions. Image processing offers various types of imaging sensors and processing that could lead to numerous kinds of IOT-ready applications. In this work, an integrated application of IOT and digital image processing for weed plant detection is explored using the Weed-ConvNet model to provide a detailed architecture of these technologies in the agriculture domain. Additionally, the regularized Weed-ConvNet is designed for classification with grayscale and color segmented weed images. The accuracy of the Weed-ConvNet model with color segmented weed images is 0.978, which is better than 0.942 of the Weed-ConvNet model with grayscale segmented weed images.
... (c) DJI Phantom 4 UAV 9 [109]. [15,43,48,76]. ...
... • Between/within-row assumption: To increase classification performances of the classifier, one often assumes within-row plants as crops and between-row plants as weeds [71,76,107,109]. This assumption is however not always verified because weeds can grow within crop rows (see Fig. 1.4). ...
Thesis
PRECISION spraying aims to fight weeds in crop fields while reducing herbi- cide use by exclusive weed targeting. Among available imaging technologies, multispectral (multishot) cameras sample the scene radiance according to narrow spectral bands in the visible and/or near infrared domains and provide multispec- tral radiance images with many spectral channels. The main objective of this work is to develop an automatic recognition system of crop and weed plants in field conditions based on multispectral imaging. In this manuscript, we describe the formation of multispectral radiance images un- der the Lambertian surface assumption, and provide a formalization of the linescan multispectral camera used in this study. We then propose an original multispectral image formation model that takes illumination variation during image acquisition into account. From our image formation model, we propose a method to estimate the reflectance as an illumination-invariant spectral signature. The quality of reflectance estimated by our method is evaluated against state-of-the-art methods, and its contribution to supervised crop/weed recognition is demonstrated. As spectral bands associated to the acquired channels may be redundant or contain highly correlated spectral information, we select the best spectral bands for crop/weed identification. We then use them to specify a single-sensor (snapshot) camera model suited for outdoor crop/weed recognition. Finally, we propose an original approach based on a convolutional neural network for spatio–spectral feature extraction from multispec- tral images at reduced computation costs. Extensive experiments show the contribution of our approach to outdoor crop/weed recognition.
... Weed detection in crops is a challenging task of ML. Different automated weed monitoring and detection methods are being developed based on ground platforms or UAVs [35][36][37]. ...
Article
Full-text available
Weeds are a crucial threat to agriculture, and in order to preserve crop productivity, spreading agrochemicals is a common practice with a potential negative impact on the environment. Methods that can support intelligent application are needed. Therefore, identification and mapping is a critical step in performing site-specific weed management. Unmanned aerial vehicle (UAV) data streams are considered the best for weed detection due to the high resolution and flexibility of data acquisition and the spatial explicit dimensions of imagery. However, with the existence of unstructured crop conditions and the high biological variation of weeds, it remains a difficult challenge to generate accurate weed recognition and detection models. Two critical barriers to tackling this challenge are related to (1) a lack of case-specific, large, and comprehensive weed UAV image datasets for the crop of interest, (2) defining the most appropriate computer vision (CV) weed detection models to assess the operationality of detection approaches in real case conditions. Deep Learning (DL) algorithms, appropriately trained to deal with the real case complexity of UAV data in agriculture, can provide valid alternative solutions with respect to standard CV approaches for an accurate weed recognition model. In this framework, this paper first introduces a new weed and crop dataset named Chicory Plant (CP) and then tests state-of-the-art DL algorithms for object detection. A total of 12,113 bounding box annotations were generated to identify weed targets (Mercurialis annua) from more than 3000 RGB images of chicory plantations, collected using a UAV system at various stages of crop and weed growth. Deep weed object detection was conducted by testing the most recent You Only Look Once version 7 (YOLOv7) on both the CP and publicly available datasets (Lincoln beet (LB)), for which a previous version of YOLO was used to map weeds and crops. The YOLOv7 results obtained for the CP dataset were encouraging, outperforming the other YOLO variants by producing value metrics of 56.6%, 62.1%, and 61.3% for the mAP@0.5 scores, recall, and precision, respectively. Furthermore, the YOLOv7 model applied to the LB dataset surpassed the existing published results by increasing the mAP@0.5 scores from 51% to 61%, 67.5% to 74.1%, and 34.6% to 48% for the total mAP, mAP for weeds, and mAP for sugar beets, respectively. This study illustrates the potential of the YOLOv7 model for weed detection but remarks on the fundamental needs of large-scale, annotated weed datasets to develop and evaluate models in real-case field circumstances.
... The number of plant species is extremely huge, with about 450,000 plant species all over the world [1]. Currently, machine learning, a subfield of artificial intelligence (AI), is a popular and widely used technique, that has been applied in various domains including biology, medicine, computer vision, speech recognition, and others [2]. ...
Article
Plant species identification via plant leaves based on shape, color, and texture features using digital image processing techniques for classifying plant species using various machine learning algorithms. This research proposes a machine learning algorithm-based system to identify multiple plants of the same genus having different species in the form of an image using their leaf feature. The image processing technique is regarded as the primary method for classifying different plants based on various characteristics or specific portions or regions of the plant leaves, which will then be identified via image processing.
... Advances in drone technologies have enabled various types of drones to be used in numerous applications and missions including wildlife and environment monitoring (Kabir and Lee, 2021;Hodgson et al., 2016;Bolla et al., 2018;Duan et al., 2019), cellular network (Mozaffari et al., 2018;Zeng et al., 2018;Mozaffari et al., 2019), military (Samad et al., 2007;Ma'sum et al., 2013), planetary exploration (Bryson and Sukkarieh, 2008;Elfes et al., 2003;Balaram et al., 2021), entertainment (Brescianini et al., 2013;Hehn and D'Andrea, 2011;Augugliaro et al., 2013), smart farming (Lottes et al., 2017;Tripicchio et al., 2015;Popović et al., 2020), search and surveillance (Gu ´ et al., 2018;Semsch et al., 2009), drone services in healthcare (Hiebert et al., 2020;Ullah et al., 2019) and transportation (Chang and Lee, 2018;Goodchild and Toy, 2018), to list a few. Unlike fixed-wing drones, drones with a Vertical Take-Off and Landing (VTOL) feature have wide applicability since they can hover over a particular point or region and do not require runways. ...
... The studies [11]-[13] used an SVM and a traditional neural network classifier with a scale-invariant feature transform (SIFT) feature extractor. Alam et al. [14] used the grey level co-occurrence matrix (GLCM) with the Haralick descriptor as the texture characteristic and the NDI as the color feature for classification. ...
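For readers unfamiliar with the GLCM texture features mentioned in this excerpt, here is a minimal sketch using scikit-image's graycomatrix/graycoprops, which expose several classic Haralick-style statistics; the patch source and the distance/angle parameters are illustrative choices, not those of the cited studies.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_patch):
    """Texture feature vector from an 8-bit grayscale image patch."""
    glcm = graycomatrix(gray_patch,
                        distances=[1, 2],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

patch = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in plant patch
print(glcm_features(patch).shape)  # 5 properties x 2 distances x 4 angles = 40
```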
Article
Full-text available
Weeds compete with plants for sunlight, nutrients and water. Conventional weed management involves spraying herbicides over the entire crop, which increases the cost of cultivation and decreases the quality of the crop, in turn affecting human health. Precise automatic spraying of herbicides on weeds has been in research and use. This paper discusses automatic weed detection using hybrid features, generated by extracting deep features from a convolutional neural network (CNN) along with texture and color features. The color and texture features are extracted by color moments, the gray level co-occurrence matrix (GLCM), and the Gabor wavelet transform. The proposed hybrid features are classified by a Bayesian optimized support vector machine (BO-SVM) classifier. The experimental results show that the proposed hybrid features yield a maximum accuracy of 95.83%, with higher precision, sensitivity, and F-score. A performance analysis of the proposed hybrid features with the BO-SVM classifier in terms of the evaluation parameters is made using images from a crop/weed field image dataset.
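A minimal sketch of the hand-crafted half of such a hybrid feature vector, assuming per-channel color moments and Gabor filter energies feeding a standard RBF SVM; the CNN deep features and the Bayesian hyperparameter optimization are omitted, and the fixed SVM parameters are placeholders rather than the paper's tuned values.

```python
import numpy as np
from scipy.stats import skew
from skimage.filters import gabor
from sklearn.svm import SVC

def color_moments(rgb):
    """Mean, standard deviation and skewness of each color channel."""
    feats = []
    for c in range(3):
        ch = rgb[..., c].ravel().astype(float)
        feats += [ch.mean(), ch.std(), skew(ch)]
    return feats

def gabor_energy(gray):
    """Mean magnitude of Gabor responses at a few frequencies/orientations."""
    feats = []
    for freq in (0.1, 0.3):
        for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
            real, imag = gabor(gray, frequency=freq, theta=theta)
            feats.append(np.mean(np.hypot(real, imag)))
    return feats

def hybrid_vector(rgb):
    gray = rgb.mean(axis=2)
    return np.array(color_moments(rgb) + gabor_energy(gray))

# Training on precomputed vectors; labels 0 = crop, 1 = weed (illustrative).
X = np.stack([hybrid_vector(np.random.rand(32, 32, 3)) for _ in range(20)])
y = np.random.randint(0, 2, size=20)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)
```

In the paper's setup, deep CNN features would be concatenated onto each vector and the SVM's C and gamma would be chosen by Bayesian optimization rather than fixed.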
... Recent developments in UAVs have made them indispensable in modern society. UAVs can perform various tasks previously performed by humans in dangerous and complex environments, such as topographic surveying [1,2], power control [3], military operations [4,5], environmental monitoring and disaster response [6][7][8], smart agriculture [9,10], commercial transportation [11,12], and so on. As tasks become more complex and task environments more volatile, a single UAV can no longer handle the demands of complex tasks, and UAV cluster technology has emerged. ...
Article
Full-text available
Task planning involving multiple unmanned aerial vehicles (UAVs) is one of the main research topics in the field of cooperative unmanned aerial vehicle control systems. This is a complex optimization problem in which task allocation and path planning are usually dealt with separately. However, recalculating optimal results is too slow for real-time operation in dynamic environments due to the large amount of computation required, and traditional algorithms struggle to handle scenarios of varying scales. Meanwhile, the traditional approach confines task planning to a 2D environment, which deviates from the real world. In this paper, we design a 3D dynamic environment and propose a task-planning method based on the sequence-to-sequence multi-agent deep deterministic policy gradient (SMADDPG) algorithm. First, we formulate the task-planning problem as a multi-agent system based on the Markov decision process. Then, we combine DDPG with a sequence-to-sequence model so that the system learns to solve task assignment and path planning simultaneously according to the corresponding reward function. We compare our approach with a traditional reinforcement learning algorithm in this system. The simulation results show that our approach satisfies the task-planning requirements and can accomplish tasks more efficiently in competitive as well as cooperative scenarios with dynamic or constant scales.
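For contrast with the learned approach, the sketch below implements the decoupled baseline the abstract argues against: task allocation is solved first with the Hungarian algorithm on straight-line 3D distances, and the path cost is only evaluated afterwards. This illustrates the "separate allocation and planning" pipeline, not SMADDPG itself; positions are synthetic.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
uav_positions = rng.uniform(0, 100, size=(4, 3))   # 4 UAVs in a 3D workspace
task_positions = rng.uniform(0, 100, size=(4, 3))  # 4 task locations

# Cost matrix: Euclidean distance from each UAV to each task.
cost = np.linalg.norm(uav_positions[:, None, :] - task_positions[None, :, :], axis=2)

# Step 1: task allocation (Hungarian algorithm), ignoring path interactions.
uav_idx, task_idx = linear_sum_assignment(cost)

# Step 2: path cost evaluated only after allocation is fixed.
total_path = cost[uav_idx, task_idx].sum()
print(f"assignment: {list(zip(uav_idx, task_idx))}, total distance: {total_path:.1f}")
```

The joint learned policy aims to beat this kind of two-stage pipeline precisely because the allocation here never sees the path-planning consequences.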
Article
Full-text available
This study investigated crop and soil classification applying the Random Forest machine learning algorithm to red-green-blue (RGB) and multispectral sensor imagery acquired by an unmanned aerial vehicle (UAV). The study area covered two 10 x 10 m subsets of a maize-sown agricultural parcel near Koška. The highest overall accuracy was obtained with the combination of red edge (RE), near-infrared (NIR), and the normalized difference vegetation index (NDVI) in both subsets, with 99.8% and 91.8% overall accuracy, respectively. The analysis showed that the RGB camera achieved sufficient accuracy and was an acceptable solution for soil and vegetation classification. Additionally, a multispectral camera and spectral analysis allowed for a more detailed analysis, primarily of spectrally similar areas. This procedure thus represents a basis for both crop density calculation and weed detection with a UAV. To ensure the effectiveness of crop classification in practical applications, it is necessary to further separate the current vegetation class into distinct crop and weed classes.
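A minimal sketch of this kind of per-pixel classification, assuming co-registered band rasters: NDVI is derived from the red and NIR bands and stacked with the RE and NIR values (the best-performing combination reported above) before training a Random Forest. The arrays and labels below are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

red = np.random.rand(100, 100)       # stand-ins for co-registered band rasters
nir = np.random.rand(100, 100)
red_edge = np.random.rand(100, 100)

ndvi = (nir - red) / (nir + red + 1e-9)  # normalized difference vegetation index

# One feature vector per pixel: (RE, NIR, NDVI).
X = np.stack([red_edge.ravel(), nir.ravel(), ndvi.ravel()], axis=1)
y = (ndvi.ravel() > 0.2).astype(int)     # placeholder labels (1 = vegetation)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
pred_map = clf.predict(X).reshape(ndvi.shape)  # classified raster
```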
Chapter
Artificial Intelligence (AI) is a smart technology that can make decisions in real time with low effort in many fields, including agriculture. AI technologies are changing agriculture through sensors and cameras. This chapter reports on a literature review of global collaboration and trends on AI in the agriculture field (AIA). A total of 2143 documents published from 1984 to 2021 were retrieved and processed using the VOSviewer tool. The findings show an increasing trend in productivity on AIA, with 2013 as the starting point of an exponential growth in the number of publications. Fu, Z. is the top author by number of indexed publications, followed by Ampatzidis. China Agricultural University, the Chinese Academy of Sciences, and the University of Florida are the most influential affiliations. China (n = 364), the United States (n = 317), and India (n = 311) are the top three most productive countries on this topic. Research on AIA is turning toward agricultural robots, the Internet of Things (IoT), big data, deep learning, and the internet. The keywords estimated using the software were classified, based on field criteria, into four levels: resources (groundwater), farms and agriculture, tools and techniques, and decision and management. This classification was used to develop a framework that can be considered a decision-making tool to optimize agriculture using AI.
Keywords: Global collaboration, VOSviewer, Bibliometric analysis, Research trends, Predictive analytics
Article
Full-text available
The use of robots in precision farming has the potential to reduce the reliance on herbicides and pesticides through selectively spraying individual plants or through manual weed removal. A prerequisite for that is the ability of the robot to separate and identify the value crops and the weeds in the field. Based on the output of the robot's perception system, it can trigger the actuators for spraying or removal. In this paper, we address the problem of detecting sugar beet plants as well as weeds using a camera installed on a mobile field robot. We propose a system that performs vegetation detection, local as well as object-based feature extraction, random forest classification, and smoothing through a Markov random field to obtain an accurate estimate of crops and weeds. We implemented and thoroughly evaluated our system using a real farm robot in different sugar beet fields, and we illustrate that our approach allows for accurately identifying weeds in a field.
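As a hedged illustration of the vegetation-detection front end of such a pipeline, the sketch below uses the common Excess Green index (ExG = 2g - r - b on chromatic coordinates) with an Otsu threshold. The paper's exact index and threshold are not specified here, so ExG is a standard stand-in.

```python
import numpy as np
from skimage.filters import threshold_otsu

def vegetation_mask(rgb):
    """Binary plant/soil mask from an RGB image with values in [0, 1]."""
    total = rgb.sum(axis=2) + 1e-9
    r, g, b = (rgb[..., i] / total for i in range(3))  # chromatic coordinates
    exg = 2 * g - r - b                                # Excess Green index
    return exg > threshold_otsu(exg)                   # data-driven threshold

mask = vegetation_mask(np.random.rand(120, 160, 3))    # stand-in field image
```

Downstream, features would be computed on the masked vegetation regions, classified (e.g. by a random forest), and smoothed spatially before triggering the actuators.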
Conference Paper
Full-text available
Background: The EU project Flourish intends to establish an autonomously operating precision farming system based on the interaction between unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs). For effective mission planning and site-specific ground intervention by the UGV, the growth, mineral nutrition, weed, and health status of the crop field must be evaluated efficiently. In this regard, the survey capabilities of UAVs can substantially improve the economic performance and ecological sustainability of precision farming systems.

Methods: Our approach is based on 'sufficient performance ranges' (SPRs), which represent the expected optimal performance of phenotypic traits such as spectral indicators and growth parameters like canopy cover and crop height. Together with a priori data, such as the weather situation and soil fertility maps, the UAV-derived maps allow for the detection of deviations from sufficient crop development. Detected deviations are interpreted using decision tree models. Our models encompass the upstream and downstream decisions necessary for scheduling site-specific and efficient ground interventions during the whole management sequence of a growing season, such as fertilizer input or weed control.

Results: This contribution presents initial results for field monitoring via UAVs. The performance of our approach is supported by ground truth data, such as crop height, canopy cover, and spectral indices from sugar beets collected in 2015. Effects of variable soil fertility, weed pressure, and drought stress are presented.

Conclusion: The initial results support the proposed intention to derive appropriate management decisions for stabilizing crop yield and quality while minimizing farm inputs.

1. Introduction. Autonomous precision farming systems equipped with state-of-the-art sensor technology and sophisticated analysis pipelines promise to provide solutions for many aspects of modern crop production. These envisaged benefits encompass more efficient resource inputs (including fertilizers, pesticides, fuel, and labor), protection of the biotic and abiotic environment (such as avoidance of soil compaction), maintenance of yields, and control of harvest quality. The central aim of the Flourish project (homepage: http://www.flourish-project.eu/) is developing an autonomously operating precision farming system consisting of an unmanned ground vehicle (UGV) and an unmanned aerial vehicle (UAV). Both UGV- and UAV-derived data are used to evaluate the growth performance and status of the crop as well as weed infestation in the field, site-specifically and non-destructively, in order to derive management recommendations. The UGV will also be able to conduct crop management tasks such as mechanical or chemical weed control. In this context, UAVs can provide information to support and optimize crop management. For example, the optimal timing, location, and duration of UGV missions can be identified using previous UAV surveys. Moreover, using UAVs, the amounts and composition of pesticides and fertilizers to be applied can be determined before the UGV drives to the field. Last but not least, diseases and drought events could be detected in order to enable early and efficient intervention. The EU project Flourish started in March 2015. Seven European partner institutions are involved, covering work packages from robot design and machine vision to crop production.
In this work, our basic concept for automatic UAV data interpretation is presented, and three examples are chosen to demonstrate the derived growth performance parameters: plant height, canopy cover, and growth performance estimation using spectral indicators. The basic concept of Flourish consists of extracting quantitative data from the UAV (and UGV) imagery and comparing the present plant trait values with existing knowledge about the optimal performance of the crop cultivar growing in a given region. This empirical knowledge will be bundled in mathematical functions (hereafter called 'sufficient performance ranges', SPRs) describing the optimal development of a specific phenotypic trait as related to thermal time. Using the SPRs, deviations from the optimal performance can be detected. The next step of the pipeline consists of the integration of the SPRs into a framework of decision trees. The decision trees are envisaged to encompass the relevant upstream and downstream decisions necessary for planning the field
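A minimal sketch of the SPR idea under stated assumptions: an expected band of canopy cover over thermal time, linearly interpolated between invented anchor values, with observations outside the band flagged as deviations. The anchor numbers are purely illustrative, not the project's calibrated functions.

```python
import numpy as np

# SPR support points: thermal time (degree days) -> [lower, upper] canopy cover (%).
tt_anchor = np.array([100, 300, 600, 900])
cc_low = np.array([2, 15, 45, 70])
cc_high = np.array([8, 30, 65, 90])

def spr_deviation(thermal_time, observed_cover):
    """Signed deviation from the SPR band (0.0 if within the expected range)."""
    low = np.interp(thermal_time, tt_anchor, cc_low)
    high = np.interp(thermal_time, tt_anchor, cc_high)
    if observed_cover < low:
        return observed_cover - low    # negative: trait under-performing
    if observed_cover > high:
        return observed_cover - high   # positive: trait above expected range
    return 0.0

print(spr_deviation(450, 25.0))  # canopy cover below the expected band -> -5.0
```

Such deviation values would then feed the decision trees that schedule site-specific ground interventions.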
Conference Paper
Full-text available
Robots for precision farming have the potential to reduce the reliance on herbicides and pesticides through selectively spraying individual plants or through manual weed removal. To achieve this, the value crops and the weeds must be identified by the robot's perception system to trigger the actuators for spraying or removal. In this paper, we address the problem of detecting the sugar beet plants as well as weeds using a camera installed on a mobile robot operating on a field. We propose a system that performs vegetation detection, feature extraction, random forest classification, and smoothing through a Markov random field to obtain an accurate estimate of the crops and weeds. We implemented and thoroughly evaluated our system on a real farm robot on different sugar beet fields and illustrate that our approach allows for accurately identifying the weeds on the field.
Conference Paper
Full-text available
Recent developments in Unmanned Aerial Vehicles (UAVs) have made them ideal tools for remotely monitoring agricultural fields. Complementary advancements in computer vision have enabled automated post-processing of images to generate dense 3D reconstructions in the form of point clouds. In this paper, we present a monitoring pipeline that uses a readily available, low-cost UAV and camera to quickly survey a winter wheat field and generate a 3D point cloud from the collected imagery. We present methods for automated crop height estimation from the extracted point cloud and compare our estimates with those obtained using standardized techniques.
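A hedged sketch of percentile-based height estimation from such a point cloud: per grid cell, the ground level is taken as a low z-percentile and the canopy as a high one. The cell size and percentile choices are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def crop_height_map(points, cell=0.5, lo=5, hi=95):
    """points: (N, 3) array of x, y, z; returns {cell index: height estimate}."""
    heights = {}
    ij = np.floor(points[:, :2] / cell).astype(int)  # 2D grid cell per point
    for key in map(tuple, np.unique(ij, axis=0)):
        z = points[(ij == key).all(axis=1), 2]
        ground, canopy = np.percentile(z, [lo, hi])   # robust ground / canopy levels
        heights[key] = canopy - ground
    return heights

cloud = np.random.rand(5000, 3) * [10, 10, 0.8]  # stand-in field reconstruction (m)
heights = crop_height_map(cloud)
```

Using percentiles rather than min/max makes the estimate robust to stray points from reconstruction noise.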
Article
Full-text available
This paper approaches the problem of weed mapping for precision agriculture using imagery provided by Unmanned Aerial Vehicles (UAVs) over sunflower and maize crops. Precision agriculture applied to weed control is mainly based on the design of early post-emergence site-specific control treatments according to weed coverage, where one of the most important challenges is the spectral similarity of crop and weed pixels in early growth stages. Our work tackles this problem in the context of object-based image analysis (OBIA) by means of supervised machine learning methods combined with pattern and feature selection techniques, devising a strategy that reduces user intervention in the system without compromising accuracy. This work first proposes a method for choosing a set of training patterns via clustering techniques so as to obtain a representative sample of the whole field's data spectrum for the classification method. Furthermore, a feature selection method is used to obtain the best discriminating features from a set of statistics and measures of different natures. Results from this research show that the proposed method for pattern selection is suitable and leads to the construction of robust data sets. The exploitation of different statistical, spatial, and texture metrics represents a new avenue with great potential for between- and within-crop-row weed mapping via UAV imagery and shows good synergy when complemented with OBIA. Finally, some measures (especially those linked to vegetation indices) are of great influence for weed mapping in both sunflower and maize crops.
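The two ideas highlighted above can be sketched as follows, with KMeans and ANOVA-based SelectKBest standing in for the paper's specific clustering and feature-selection techniques; the data is synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import SelectKBest, f_classif

X = np.random.rand(1000, 20)        # candidate patterns x features (placeholder)
y = np.random.randint(0, 3, 1000)   # crop / weed / soil labels (placeholder)

# 1) Cluster the feature space and keep the pattern closest to each centroid,
#    so the training set spans the field's spectral variability.
km = KMeans(n_clusters=50, n_init=10, random_state=0).fit(X)
rep_idx = [np.argmin(np.linalg.norm(X - c, axis=1)) for c in km.cluster_centers_]
X_train, y_train = X[rep_idx], y[rep_idx]

# 2) Keep only the k most discriminating features on the representative set.
selector = SelectKBest(f_classif, k=8).fit(X_train, y_train)
X_reduced = selector.transform(X_train)
```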
Article
We study two new informative path planning problems that are motivated by the use of aerial and ground robots in precision agriculture. The first problem, termed sampling traveling salesperson problem with neighborhoods (SamplingTSPN), is motivated by scenarios in which unmanned ground vehicles (UGVs) are used to obtain time-consuming soil measurements. The input in SamplingTSPN is a set of possibly overlapping disks. The objective is to choose a sampling location in each disk and a tour to visit the set of sampling locations so as to minimize the sum of the travel and measurement times. The second problem concerns obtaining the maximum number of aerial measurements using an unmanned aerial vehicle (UAV) with limited energy. We study the scenario in which the two types of robots form a symbiotic system: the UAV lands on the UGV, and the UGV transports the UAV between deployment locations. This paper makes the following contributions. First, we present an $\mathcal{O}(r_{\max}/r_{\min})$ approximation algorithm for SamplingTSPN, where $r_{\min}$ and $r_{\max}$ are the minimum and maximum radii of the input disks. Second, we show how to model the UAV planning problem using a metric graph and formulate an orienteering instance to which a known approximation algorithm can be applied. Third, we apply the two algorithms to the problem of obtaining ground and aerial measurements in order to accurately estimate a nitrogen map of a plot. Along with theoretical results, we present results from simulations conducted using real soil data and preliminary field experiments with the UAV.
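To make the SamplingTSPN setting concrete, here is a naive greedy heuristic: repeatedly visit the nearest remaining disk and sample at its closest point. This is for intuition only and is not the $\mathcal{O}(r_{\max}/r_{\min})$ approximation algorithm from the paper.

```python
import numpy as np

def greedy_sampling_tour(start, centers, radii):
    """centers: (N, 2) disk centers; radii: (N,) disk radii.
    Returns the visit order with the chosen sampling point in each disk."""
    pos = np.asarray(start, dtype=float)
    order, remaining = [], list(range(len(radii)))
    while remaining:
        # Distance to each remaining disk boundary (0 if already inside it).
        d = [max(np.linalg.norm(centers[i] - pos) - radii[i], 0.0) for i in remaining]
        i = remaining.pop(int(np.argmin(d)))
        to_center = centers[i] - pos
        dist = np.linalg.norm(to_center)
        if dist > radii[i]:
            # Move to the closest point of the disk and sample there.
            pos = pos + to_center * (1 - radii[i] / dist)
        order.append((i, pos.copy()))
    return order

tour = greedy_sampling_tour([0, 0], np.random.rand(6, 2) * 50, np.full(6, 3.0))
```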
Article
In this review, we present a comprehensive and critical survey on image-based plant segmentation techniques. In this context, “segmentation” refers to the process of classifying an image into plant and non-plant pixels. Good performance in this process is crucial for further analysis of the plant such as plant classification (i.e. identifying the plant as either crop or weed), and effective action based on this analysis, e.g. precision application of herbicides in smart agriculture applications.
Article
This paper presents a system for weed mapping using imagery provided by unmanned aerial vehicles (UAVs). Weed control in precision agriculture is based on the design of site-specific control treatments according to weed coverage. A key component is precise and timely weed maps, and one of the crucial steps is weed monitoring, by ground sampling or remote detection. Traditional remote platforms, such as piloted planes and satellites, are not suitable for early weed mapping, given their low spatial and temporal resolutions. Nonetheless, the ultra-high spatial resolution provided by UAVs can be an efficient alternative. The proposed method for weed mapping partitions the image and complements the spectral information with other sources of information. Apart from the well-known vegetation indexes, which are commonly used in precision agriculture, a method for crop row detection is proposed. Given that crops are always organised in rows, this kind of information simplifies the separation between weeds and crops. Finally, the system incorporates classification techniques for the characterisation of pixels as crop, soil and weed. Different machine learning paradigms are compared to identify the best performing strategies, including unsupervised, semi-supervised and supervised techniques. The experiments study the effect of the flight altitude and the sensor used. Our results show that an excellent performance is obtained using very few labelled data complemented with unlabelled data (semi-supervised approach), which motivates the use of weed maps to design site-specific weed control strategies just when farmers implement the early post-emergence weed control.
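A minimal sketch of the semi-supervised angle reported above, assuming scikit-learn's SelfTrainingClassifier wrapped around a Random Forest: a handful of labelled pixels drive the classifier, while the bulk of the pixels are marked unlabelled with -1. Features and labels here are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

X = np.random.rand(2000, 5)      # per-pixel features (bands, indices, row cues)
y = np.full(2000, -1)            # -1 marks unlabelled pixels
labelled = np.random.choice(2000, 40, replace=False)
y[labelled] = np.random.randint(0, 3, 40)  # crop / soil / weed on a few pixels

# Self-training: confident pseudo-labels are added iteratively to the
# labelled pool until no prediction exceeds the confidence threshold.
model = SelfTrainingClassifier(RandomForestClassifier(n_estimators=50),
                               threshold=0.8).fit(X, y)
pred = model.predict(X)
```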
Article
Creeping thistle (Cirsium arvense (L.) Scop.) is a perennial weed that causes yield loss in sugar beet (Beta vulgaris L.) crops. The weeds are usually mapped for site-specific weed management because they tend to grow in patches. Remote sensing techniques have shown promising results in species discrimination and therefore provide potential for weed mapping. In this study, we examined the feasibility of high-resolution imaging for sugar beet and thistle discrimination and proposed a protocol for selecting multispectral camera filters. Spectral samples from sugar beet and thistle were acquired with a field-portable spectroradiometer under field conditions, and Partial Least Squares Discriminant Analysis (PLS-DA) classification models were developed with 211 and 36 spectral features of 1.56 and 10 nm bandwidths, respectively. The classification rates obtained using these models were regarded as the maximum obtainable. Then, the spectral responses of a multi-band camera equipped with the filter configuration proposed by the PLS-DA models were simulated. Finally, a simulation of crop-weed discrimination was made using small unmanned aerial vehicle (UAV)-based multispectral images. More than 95% of the thistles and 89% of the sugar beets were correctly classified when continuous spectral data were used with 1.56 and 10 nm bandwidths. Accuracy dropped to 93% of thistles and 84% of sugar beets correctly classified when only the four best bands were used. The validation based on aerial images showed that sugar beets and thistle plants could be discriminated in images if sufficient pure pixels containing leaf spectra were available, that is, with spatial resolutions of 6 mm per pixel or finer.
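A hedged sketch of PLS-DA-style band selection, assuming PLSRegression on a binary class target with bands ranked by the magnitude of the fitted coefficients; the wavelength grid, component count, and the "top four bands" rule are illustrative, not the paper's protocol.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

wavelengths = np.linspace(400, 900, 211)  # e.g. 211 narrow spectral features
X = np.random.rand(300, 211)              # reflectance spectra (samples x bands)
y = np.random.randint(0, 2, 300)          # 0 = sugar beet, 1 = thistle

# PLS-DA: PLS regression against the class label, used discriminatively.
pls = PLSRegression(n_components=5).fit(X, y)

# Rank bands by absolute coefficient magnitude; keep the best four as
# candidate filter center wavelengths for a multi-band camera.
ranking = np.argsort(-np.abs(pls.coef_).ravel())
best_four = np.sort(wavelengths[ranking[:4]])
print(best_four)
```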