fpls-08-00739 May 13, 2017 Time: 16:27 # 1
published: 16 May 2017
doi: 10.3389/fpls.2017.00739
Edited by:
John Doonan,
Aberystwyth University, UK
Reviewed by:
Eric R. Casella,
Forest Research Agency (Forestry
Commission), UK
Julia Christine Meitz-Hopkins,
Stellenbosch University, South Africa
Ankush Prashar,
Newcastle University, UK
*Correspondence: Shouyang Liu
Specialty section:
This article was submitted to
Technical Advances in Plant Science,
a section of the journal
Frontiers in Plant Science
Received: 20 September 2016
Accepted: 20 April 2017
Published: 16 May 2017
Liu S, Baret F, Andrieu B, Burger P
and Hemmerlé M (2017) Estimation
of Wheat Plant Density at Early
Stages Using High Resolution
Imagery. Front. Plant Sci. 8:739.
doi: 10.3389/fpls.2017.00739
Estimation of Wheat Plant Density at Early Stages Using High Resolution Imagery

Shouyang Liu1*, Fred Baret1, Bruno Andrieu2, Philippe Burger3 and Matthieu Hemmerlé4
1UMR EMMAH, INRA, UAPV, Avignon, France, 2UMR ECOSYS, INRA, AgroParisTech, Université Paris-Saclay,
Thiverval-Grignon, France, 3UMR AGIR, INRA, Castanet Tolosan, France, 4Hi-Phen, Avignon, France
Crop density is a key agronomical trait used to manage wheat crops and estimate
yield. Visual counting of plants in the field is currently the most common method
used. However, it is tedious and time consuming. The main objective of this work is
to develop a machine vision based method to automate the density survey of wheat
at early stages. RGB images taken with a high resolution RGB camera are classified
to identify the green pixels corresponding to the plants. Crop rows are extracted and
the connected components (objects) are identified. A neural network is then trained to
estimate the number of plants in the objects using the object features. The method was
evaluated over three experiments showing contrasted conditions with sowing densities
ranging from 100 to 600 seeds·m⁻². Results demonstrate that the density is accurately
estimated with an average relative error of 12%. The pipeline developed here provides
an efficient and accurate estimate of wheat plant density at early stages.
Keywords: plant density, RGB imagery, neural network, wheat, recursive feature elimination, Hough transform
Introduction

Wheat is one of the main crops cultivated around the world, with sowing density usually ranging from 150 to 400 seeds·m⁻². Plant population density may significantly impact the competition among plants as well as with weeds, and consequently affect the effective utilization of available resources including light, water, and nutrients (Shrestha and Steward, 2003; Olsen et al., 2006).
Crop density appears therefore as one of the important variables that drive the potential yield. This
explains why this information is often used for the management of cultural practices (Godwin and
Miller, 2003). Plant population density is still investigated most of the time by visually counting the
plants in the field over samples corresponding either to a quadrat or to a segment. This is achieved
at the stage when the majority of plants have just emerged and before the beginning of tillering
(Norman, 1995), which happens a few days to a few weeks after emergence. This method is time and
labor intensive and may be prone to human error.
Some efforts have been dedicated to the development of high-throughput methods for
quantifying plant density. This was mainly applied to maize using either capacitive sensors during
the harvest (Nichols, 2000; Li et al., 2009) or optical sensors including 2D cameras (Shrestha and Steward, 2003, 2005; Tang and Tian, 2008a,b) and range sensors (Jin and Tang, 2009; Nakarmi and Tang, 2012, 2014; Shi et al., 2013, 2015). However, quantifying the population density of maize
is much simpler than that of wheat since maize plants are normally bigger, with larger plant
spacing along the row and more evenly distributed. In wheat crops, leaves between neighboring
plants overlap rapidly, and tillers will also appear, making the plant identification very difficult
Frontiers in Plant Science | May 2017 | Volume 8 | Article 739
Liu et al. Wheat Density Estimation from RGB Imagery
when they have more than three leaves, even using visual
counting in the field. Most studies during these early stages report
results derived from estimates of the vegetation fraction coverage
measured using high resolution imagery (Guo et al., 2013) or
based on vegetation indices computed with either multispectral
(Sankaran et al., 2015) or hyperspectral (Liu et al., 2008)
reflectance measurements. However, none of these investigations
specifically addressed the estimation of plant density. Advances
in digital photography providing very high resolution images,
combined with the development of computer vision systems,
offer new opportunities to develop a non-destructive high-
throughput method for plant density estimation.
The objective of this study is to develop a system based on
high resolution imagery that measures wheat plant population
density at early stages. The methods used to acquire the RGB
images and the experimental conditions are first presented. Then
the pipeline developed to process the images is described. Finally,
the method is evaluated with emphasis on its accuracy and on its
corresponding domain of validity.
Materials and Methods

Field Experiments and Measurements
Three experiments were conducted in 2014 in France (Table 1): Avignon, Toulouse, and Paris. In Avignon, four sowing densities (100, 200, 300, and 400 seeds·m⁻²) with the same “Apache” cultivar were sampled. In Toulouse, five sowing densities (100, 200, 300, 400, and 600 seeds·m⁻²) with two different cultivars, “Apache” and “Caphorn”, were considered. In Paris, two cultivars with a single sowing density of 150 seeds·m⁻² were sampled. All measurements were taken around the 1.5 Haun stage, when most plants had already emerged. A total of 16 plots were therefore available over the three experiments, under contrasted conditions in terms of soil, climate, cultivars, and sowing densities. All the plots were at least 10 m long by 2 m wide.
In Toulouse and Avignon, images were acquired using
an RGB camera fixed on a light moving platform, termed
Phenotypette (Figure 1). The platform was driven manually at about 0.5 m·s⁻¹. For each plot, at least 10 images were collected to be representative of the population. For the Paris experiment, the camera was mounted on a monopod to take two pictures with no overlap. In all cases, the camera was oriented at a 45° inclination perpendicular to the row direction, pointing at the center row from a distance of 1.5 m, with a spatial resolution of around 0.2 mm (Figure 1 and Table 1). For each plot,
10 images were selected randomly among the whole set of images
acquired. The number of plants located in the two central rows
was then visually counted over each of the 10 selected images to
derive the reference plant density.
Image Processing
Each image was processed using the pipeline sketched on
Figure 2. It was mainly programmed using MATLAB and Image
Processing Toolbox R2016a (code available on request). To
facilitate the application, the corresponding MATLAB functions
used are also given in the text.
Classification of Green Elements
The images display green pixels corresponding to the emerged
plants, and brown pixels corresponding to the soil background.
The RGB color space was firstly transformed into Lab, to
enhance its sensitivity to variations in greenness (Philipp and
Rath, 2002). The Otsu automatic thresholding method (Otsu,
1975) was then applied to channel ‘a’ to separate the green
from the background pixels (function: graythresh). Results show
that the proposed method performs well (Figure 2b) under
the contrasted illumination conditions experienced (Table 1).
Further, this approach provides a better identification of the green
pixels (results not presented for the sake of brevity) as compared
to the use of supervised methods (Guo et al., 2013) based on
indices such as the excess green (Woebbecke et al., 1995) or more
sophisticated indices proposed by Meyer and Neto (2008).
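The thresholding step can be sketched outside MATLAB as well. Below is a minimal Python/NumPy version of Otsu's method applied to a synthetic 'a' channel; the conversion from RGB to Lab is assumed to have been done beforehand (e.g., with scikit-image's `rgb2lab`), and all values below are made up for illustration.

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Threshold maximizing between-class variance (Otsu, 1975),
    similar in spirit to MATLAB's graythresh."""
    hist, edges = np.histogram(values.ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                  # class-0 weight for each candidate cut
    w1 = 1.0 - w0
    mu = np.cumsum(p * centers)        # cumulative class-0 mean (unnormalized)
    valid = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros(nbins)
    sigma_b[valid] = (mu[-1] * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[int(np.argmax(sigma_b))]

# synthetic 'a'-channel values: vegetation near -25, soil near +15 (made up);
# in Lab, greener pixels have lower 'a' values
rng = np.random.default_rng(0)
a_channel = np.concatenate([rng.normal(-25, 4, 500), rng.normal(15, 4, 500)])
t = float(otsu_threshold(a_channel))
green_fraction = float((a_channel[:500] < t).mean())  # vegetation recovered
```

The threshold lands between the two modes, so nearly all of the synthetic vegetation pixels fall below it.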
Geometric Transformation
The perspective effect creates a variation of the spatial resolution
within the image: objects close to the lens appear large while
distant objects appear small. A transformation was therefore
applied to remap the image into an orthoimage where the
spatial resolution remains constant. The transformation matrix
was calibrated using an image of a chessboard for each camera
setup (Figure 2c). The chessboard covered the portion of the
image that was later used for plant counting. The corners of the
squares in the chessboard were identified automatically (function:
detectCheckerboardPoints). Then the transformation matrix can
be derived once the actual dimension of the squares of the
chessboard is provided (function: fitgeotrans) (Figure 2c). The
transformation matrix was finally applied to the whole image for a given camera setup (function: tformfwd) (Figure 2d). This allows
remapping the image into a homogeneously distributed domain
on the soil surface.
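As an illustration of what the calibration step computes, a projective transformation can be fitted from point correspondences with a direct linear transform (DLT). This is a sketch rather than the paper's code: the corner coordinates are invented, and a real chessboard would provide many more correspondences.

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 projective transform mapping src -> dst from >= 4
    point pairs, using the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A, float))
    H = Vt[-1].reshape(3, 3)           # null-space vector of the DLT system
    return H / H[2, 2]

def warp_points(H, pts):
    """Apply the projective transform to an (N, 2) array of points."""
    pts = np.asarray(pts, float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# made-up correspondences: chessboard corners in the image vs. on the ground
corners_img = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
corners_gnd = [(0.0, 0.0), (2.0, 0.0), (2.5, 1.5), (0.0, 1.0)]
H = fit_homography(corners_img, corners_gnd)
residual = float(np.abs(warp_points(H, corners_img) - np.array(corners_gnd)).max())
```

With exactly four correspondences the fit is exact, so the residual is at machine precision; with more corners the SVD gives a least-squares solution.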
Row Identification and Orientation
The plant density measurement for row crops such as wheat is
achieved by counting plants over a number of row segments
of given length. Row identification is therefore a mandatory
step as sketched in Figure 2e. Row identification methods
have been explored intensively mostly for the automation of
robot navigation in the field (Vidović et al., 2016). Montalvo et al.
(2012) reviewed the existing methods and found that the
Hough transform (Slaughter et al., 2008) is one of the most
common and reliable methods. It mainly involves computing
the co-distribution of the length (ρ) and orientation (θ) of
the segments defined by two green pixels (Figure 3A). The
Hough transform detects dominant lines even in the presence
of noise or discontinuous rows. The noise could include objects
between rows such as weeds or misclassified background pixels
such as stones (Marchant, 1996; Rovira-Más et al., 2005).
Although the Hough transform is computationally demanding,
its application on edge points of the green objects decreases this
constraint. Hence, the ‘Canny Edge Detector’ (Canny, 1986) was
consequently used to detect edges prior to the application of the
Hough transform. The Hough transform was conducted with orientations −90° < θ < 90° in 0.1° angular steps and a radius
TABLE 1 | Characteristics of the three experimental sites.

Sites    | Latitude | Longitude | Cultivars | Sowing density (seeds·m⁻²) | Measured density (plants·m⁻²) | Illumination | Camera
Toulouse | 43.5°N   | 1.5°E     | Apache    | 100, 200, 300, 400, 600    | 106, 187, 231, 350, 525       | Diffuse      | Sigma
         |          |           | Caphorn   | 100, 200, 300, 400, 600    | 118, 206, 250, 387, 431       |              |
Paris    | 48.8°N   | 1.9°E     | Premio    | 150                        | 154                           | Flash        | NIKON
         |          |           | Attlass   | 150                        | 182                           |              |
Avignon  | 43.9°N   | 4.8°E     | Apache    | 100, 200, 300, 400         | 54, 129, 232, …               | Direct       | Sigma

FIGURE 1 | The image acquisition in the field with the Phenotypette.
of −3000 < ρ < 3000 pixels in 1 pixel steps (function: hough) (Figure 3A).
Five main components show up in the image (Figure 3A),
corresponding to the five rows of the original image (Figure 2a).
As all rows are expected to be roughly parallel, their orientation could be inferred as the θ value, θrow (where θrow = 90° corresponds to the horizontal orientation in the images in Figure 2f), that maximizes the variance of ρ. The positions of the rows are derived from the peaks of frequency for θ = θrow (Figure 3B). Five lines in Figure 2e highlight the center of each row. Because of the uncertainty in the orientation of the camera along the row, the row lines drawn on the images are not exactly horizontal. This is illustrated in Figure 2f, where θrow = −88.2°. The images were therefore rotated according to θrow (function: imrotate), so that the rows are strictly horizontal in the displayed images (Figure 2g).
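A minimal NumPy sketch of the ρ–θ vote accumulation at the core of this step is given below; the edge points are synthetic stand-ins for two horizontal rows, not real data, and peak handling is reduced to picking the dominant orientation.

```python
import numpy as np

def hough_accumulator(points, thetas, rho_res=1.0):
    """Accumulate votes for rho = x*cos(theta) + y*sin(theta) over all points."""
    pts = np.asarray(points, float)
    rhos = np.outer(pts[:, 0], np.cos(thetas)) + np.outer(pts[:, 1], np.sin(thetas))
    rho_max = np.abs(rhos).max() + rho_res
    edges = np.arange(-rho_max, rho_max + rho_res, rho_res)
    # one histogram of rho per theta column -> accumulator of shape (n_rho, n_theta)
    return np.stack([np.histogram(rhos[:, j], bins=edges)[0]
                     for j in range(len(thetas))], axis=1)

# synthetic edge points on two horizontal "rows": y = 0 and y = 10
points = [(x, y) for y in (0.0, 10.0) for x in range(51)]
thetas = np.deg2rad(np.arange(-90.0, 90.0, 1.0))
acc = hough_accumulator(points, thetas)
# the row orientation concentrates the votes into the sharpest rho peaks
best_theta_deg = float(np.rad2deg(thetas[int(np.argmax(acc.max(axis=0)))]))
```

For horizontal rows the votes pile up at θ ≈ ±90°, mirroring how the row orientation emerges from the accumulator in Figure 3A.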
Object Identification and Feature Extraction
An object in a binary image refers to a set of pixels that form a connected group under eight-neighbor connectivity. Each object was associated to the closest row line and characterized by 10 main features (function: regionprops) (the top 10 features in Table 2). Three additional features were derived from the skeletonization of the object: the length of the skeleton and the numbers of its branch and end points (function: bwmorph) (the last three features in Table 2). More details on the feature extraction functions used can be found in the MATLAB Image Processing Toolbox documentation.
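As an illustration of the object definition used above, here is a small Python sketch of 8-connected component labeling together with two of the Table 2 features (Area and Extent); the binary image is a toy example, not field data.

```python
import numpy as np
from collections import deque

def label_objects(binary):
    """8-connected component labeling by BFS flood fill (cf. MATLAB's bwlabel)."""
    labels = np.zeros(binary.shape, dtype=int)
    count = 0
    H, W = binary.shape
    for i in range(H):
        for j in range(W):
            if binary[i, j] and labels[i, j] == 0:
                count += 1
                labels[i, j] = count
                queue = deque([(i, j)])
                while queue:
                    y, x = queue.popleft()
                    for dy in (-1, 0, 1):      # visit all 8 neighbors
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < H and 0 <= nx < W
                                    and binary[ny, nx] and labels[ny, nx] == 0):
                                labels[ny, nx] = count
                                queue.append((ny, nx))
    return labels, count

def area_and_extent(labels, k):
    """Two of the Table 2 features: Area (F1) and Extent (F5)."""
    ys, xs = np.nonzero(labels == k)
    area = int(len(ys))
    bbox = int((ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1))
    return area, area / bbox

# toy binary image with two separated "plants"
img = np.zeros((6, 9), dtype=bool)
img[0:2, 0:2] = True
img[3:5, 6:9] = True
labels, n_objects = label_objects(img)
area1, extent1 = area_and_extent(labels, 1)
```

The first blob fills its bounding box exactly, so its Extent is 1.0; elongated or branched plant objects would score lower.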
Estimation of the Number of Plants
Contained in Each Object
Machine learning methods were used to estimate the number
of plants contained in each object from the values of their 13
FIGURE 2 | The methodology involving image processing feature extraction. (a) Original image. (b) Binary image. (c) Image of a chessboard to derive the
transformation matrix. (d) Calibrated image. (e) Detecting rows in the image, corresponding to red dashed lines. (f) Labeling rows with different colors. (g) Correcting
row orientation to be horizontal.
FIGURE 3 | Hough transform to detect rows. (A) Hough transform. (B) Identification of the peaks of ρcorresponding to the rows.
associated features (Table 2). Artificial neural networks (ANNs) have been recognized as one of the most versatile and powerful methods to relate a set of input variables to one or more output variables. ANNs
are interconnected neurons characterized by a transfer function.
They combine the input values (the features of the object) to
best match the output values (number of plants in our case)
over a training database. The training process requires first to
define the network architecture (the number of hidden layers and
TABLE 2 | The 13 features extracted for each connected object.

#   | Name            | Meaning                                                          | Unit
F1  | Area            | Number of pixels of the connected component (object)             | Pixel
F2  | FilledArea      | Number of pixels of the object with all holes filled             | Pixel
F3  | ConvexArea      | Number of pixels within the associated convex hull               | Pixel
F4  | Solidity        | Ratio of the number of pixels in the region to that of the convex hull | Scalar
F5  | Extent          | Ratio of the number of pixels in the region to that of the bounding box | Scalar
F6  | EquivDiameter   | Diameter of a circle with the same area as the region            | Pixel
F7  | MajorAxisLength | Length of the major axis of the ellipse equivalent to the region | Pixel
F8  | MinorAxisLength | Length of the minor axis of the ellipse equivalent to the region | Pixel
F9  | Eccentricity    | Eccentricity of the ellipse equivalent to the region             | Scalar
F10 | Orientation     | Orientation of the major axis of the equivalent ellipse          | Degree
F11 | LengthSkelet    | Number of pixels of the skeleton                                 | Pixel
F12 | NumEnd          | Number of end points of the skeleton                             | Scalar
F13 | NumBranch       | Number of branch points of the skeleton                          | Scalar
nodes per layer and the type of transfer function of each neuron).
Then the synaptic weights and biases are tuned to get a good
agreement between the number of plants per object estimated
from the object’s features and the corresponding number of plants
per object in the training database. A one-layer feed-forward network with kn tangent sigmoid hidden neurons and one linear output neuron was used. The number of hidden nodes was varied between 1 ≤ kn ≤ 10 to select the best architecture. The weights and biases were initialized randomly. The training was performed independently over each site, considering 90% of the data set, corresponding to a total of 606 (Toulouse), 347 (Paris), and 476 (Avignon) objects. The remaining 10% of the objects of each site were used to evaluate the performance of the training. Note that the estimates of the number of plants per object were continuous, i.e., they actually represent the average probability of obtaining a given discrete number of plants.
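The architecture described above can be sketched in plain NumPy as follows. This is not the paper's training code: the feature matrix and target counts are synthetic stand-ins, and simple full-batch gradient descent replaces whatever MATLAB training algorithm was actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic stand-in for the training base: 3 object features -> plant count
X = rng.uniform(-1, 1, (200, 3))
y = X @ np.array([1.0, -2.0, 0.5])         # hypothetical target relation

kn = 5                                      # hidden tangent-sigmoid neurons
W1 = rng.normal(0.0, 0.5, (3, kn)); b1 = np.zeros(kn)
W2 = rng.normal(0.0, 0.5, kn);      b2 = 0.0

lr = 0.1
for _ in range(4000):                       # full-batch gradient descent
    h = np.tanh(X @ W1 + b1)                # tangent sigmoid hidden layer
    pred = h @ W2 + b2                      # single linear output neuron
    err = pred - y
    gW2 = h.T @ err / len(X); gb2 = err.mean()
    dh = np.outer(err, W2) * (1.0 - h**2)   # backpropagation through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

rmse = float(np.sqrt(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)))
```

The continuous output of the linear neuron is exactly why the estimated number of plants per object is a real number rather than an integer count.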
A compact, parsimonious and non-redundant subset of
features should contribute to speed up the learning process and
improve the generalization of predictive models (Tuv et al., 2009; Kuhn and Johnson, 2013; Louppe, 2014). Guyon et al.
(2002) proposed recursive feature elimination (RFE) to select the optimal subset of features. Specifically for ANNs, combinations of the absolute values of the weights were first used to rank the importance of the predictors (features) (Olden and Jackson, 2002; Gevrey et al., 2003). For a subset including n features, RFE presumes that the subset of the top n features outperforms the other possible combinations (Guyon et al., 2002; Granitto et al., 2006). Then 13 iterations, corresponding to the 13 features, need to be computed to select the optimal subset, defined as the smallest set providing an RMSEn lower than 1.02 × RMSEbest, where RMSEbest is the minimum RMSE value observed when using the 13 features. To minimize possible overfitting of the
training dataset, a cross-validation scheme was used (Seni and
Elder, 2010) with the training data set including 90% of the cases
and the test data set containing the remaining 10%. The process
was repeated five times with a random drawing of the training
and test data sets for each trial.
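The backward-elimination loop of RFE can be sketched as follows. As a stand-in for the paper's ranking by combinations of ANN weights, this sketch ranks features by the absolute weights of a least-squares linear model fitted on features assumed to be standardized; the data are synthetic.

```python
import numpy as np

def rfe_ranking(X, y):
    """Backward elimination: repeatedly fit a model, rank features by the
    magnitude of their weights, and drop the least important one.
    A linear least-squares fit stands in for the ANN weight ranking."""
    remaining = list(range(X.shape[1]))
    eliminated = []                        # least important first
    while len(remaining) > 1:
        w, *_ = np.linalg.lstsq(X[:, remaining], y, rcond=None)
        drop = remaining[int(np.argmin(np.abs(w)))]
        eliminated.append(drop)
        remaining.remove(drop)
    eliminated.append(remaining[0])
    return eliminated[::-1]                # most important first

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, (300, 4))
y = 5.0 * X[:, 0] + 2.0 * X[:, 2] + 0.01 * X[:, 1]   # feature 3 is pure noise
ranking = rfe_ranking(X, y)
```

The strongly weighted features come out on top while the noise feature is eliminated first, which is the behavior the 1.02 × RMSEbest stopping criterion then exploits.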
Results

Number of Plants per Object and Object Feature Selection
The number of plants per object resulted in a consistently right-skewed distribution over the three experimental sites (Figure 4).

FIGURE 4 | Number of plants per object over the three sites.

For all the plots, objects containing single plants have the highest
FIGURE 5 | The correlation among the 13 object features for the Toulouse site (∗∗: 0.01, ∗: 0.05). The abbreviations of the features refer to the names in Table 2.
probability of occurrence. However, objects generally contain more plants under high-density conditions than under low-density conditions. Note that 10–20% of the objects were classified as null, i.e., containing no plants. This corresponds to errors in separating plants from the background: objects such as straw residues, stones, or weeds may show colors difficult to separate in the classification step. Further, due to the variability of the illumination conditions, a plant may be misclassified into two disconnected objects. In this case, the larger part is considered as a plant while the smaller remaining part is considered as non-plant, i.e., set to 0.
Most of the 13 features described in Table 2 are closely related
as illustrated by the plot-matrix of the Toulouse site (Figure 5).
Correlations are particularly high between the four area related
features (F1, F2, F3, F6), between the skeleton derived features
(F11, F12, F13), and between the area and skeleton related
features. Similar correlations were observed over the Paris and
Avignon sites. These strong relationships indicate the presence
of redundancy among the 13 features, which may confuse the training of the ANN. However, this could be partly overcome by the RFE feature selection algorithm.
The estimation performances of the number of plants per
object were evaluated with the RMSE metrics as a function of the
number of features used (Figure 6 and Table 3). Note that the
RMSE value was calculated based on the visual identification of
the number of plants per object in the dataset. Figure 6 shows that
the RMSE decreases consistently when the number of features
used increases. However, after using the first four features, the
FIGURE 6 | RMSE associated to the estimates of the number of plants
per object as a function of the number of features used. The RMSE was
evaluated over the test data set for each individual site.
TABLE 3 | Performance of the estimation of the number of plants per object over the three experiments.

Sites    | Training objects | n_node | Number of features | R²   | RMSE | Bias
Toulouse | 606              | 2      | 10                 | 0.83 | 0.83 | 0.28
Paris    | 347              | 2      | 8                  | 0.79 | 0.47 | 0.077
Avignon  | 476              | 2      | 4                  | 0.61 | 0.87 | 0.45
improvement in estimation performances is relatively small when including the remaining features. The number of features required according to our criterion (1.02 × RMSEbest) varies from 10 (Toulouse) to 4 (Avignon). A more detailed inspection of the
main features used across the three sites (Table 4) shows the
importance of the area related features (F1, F2, F3, F4, and F6)
despite their high inter-correlation (Figure 5). The length of
the skeleton (F11) also appears important particularly for the
Avignon site, while the orientation and extent do not help much
(Table 4).
As expected, the model performs the best for the Paris site
(Table 3) where the situation is simpler because of the low
density inducing limited overlap between plants (Figure 4).
For sowing densities ≤ 300 seeds·m⁻², a better accuracy is reached at the Toulouse (RMSE = 0.51) and Avignon (RMSE = 0.68) sites. Conversely, the larger number of null objects (Figure 4), corresponding to misclassified objects or split plants at the Avignon site, explains the degraded performance (Table 3). The bias in the estimation of the number of plants per object appears relatively small, except for the Avignon site. Attention should be paid to the bias, since applying the neural network to a larger number of objects is not likely to improve the estimation of the total number of plants. The bias is mostly due to difficulties
associated to the misclassified objects (Figure 7). Note that the
TABLE 4 | Features selected and the corresponding rank over the three sites.

#   | Features        | Toulouse | Paris | Avignon
F1  | Area            | 2        | 1     | 3
F2  | FilledArea      | 1        | 3     | –
F3  | ConvexArea      | 4        | 4     | 2
F4  | Solidity        | 10       | 7     | –
F6  | EquivDiameter   | 3        | 2     | 4
F8  | MinorAxisLength | 5        | 5     | –
F9  | Eccentricity    | 8        | 8     | –
F10 | Orientation     | –        | –     | –
F11 | LengthSkelet    | 6        | 6     | 1
F12 | NumEnd          | 7        | –     | –
F13 | NumBranch       | 9        | –     | –
estimation performance degraded for larger numbers of plants per object (Figure 7), as a consequence of more ambiguities and the smaller samples used in the training process.
Performance of the Method for Plant
Density Estimation
The estimates of plant density were computed by summing the
number of plants in all the objects extracted from the row
segments identified in the images, divided by the segment area
(product of the segment length and the row spacing). The
reference density was computed from the visually identified
plants. Results show a good agreement between observations and predictions over sowing densities ranging from 100 to 600 plants·m⁻² (Figure 8). The performances slightly degrade for densities higher than 350 plants·m⁻². This may be explained by the difficulty of handling more complex situations when plant spacing decreases, with a higher probability of plant overlap (Figure 7). Note that the slight overestimation observed for the low densities at the Avignon site is mainly attributed to the bias in the estimation of the number of plants per object due to the classification problem already outlined.
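The density computation itself reduces to simple arithmetic; with hypothetical counts and geometry (all numbers below are assumed, not measured):

```python
# hypothetical example: plants summed over all objects assigned to the
# two central row segments of one image (assumed values)
n_plants = 93
segment_length_m = 3.0        # length of each counted row segment (assumed)
row_spacing_m = 0.175         # inter-row spacing (assumed)

sampled_area_m2 = 2 * segment_length_m * row_spacing_m   # two rows
density = n_plants / sampled_area_m2                     # plants per m^2

# relative error against an assumed visual reference count
reference_density = 80.0
rel_error = abs(density - reference_density) / reference_density
```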
Discussion

The method proposed in this study relies on the ability to identify plants or groups of plants in RGB images. Image classification is thus a critical step driving the accuracy of the plant density estimation. Wheat plants at emergence have a relatively
simple structure and color. The image quality is obviously very
important, including the optimal spatial resolution that should
be better than 0.4 mm as advised by Jin et al. (2016). Further,
the image quality should not be compromised by undesirable
effects due to image compression algorithms. As a consequence, when the resolution is between 0.2 and 0.5 mm, it would be preferable to record images in raw format to preserve their quality.
A known and fixed white balance should be applied to make the
series of images comparable in terms of color. Finally, the view
direction was chosen to increase the plant cross section by taking
FIGURE 7 | Comparison between the estimated number of plants per object with the value measured over the test dataset for each individual site.
FIGURE 8 | Performance of density estimation over the three sites.
images inclined at around a 45° zenith angle in a compass direction perpendicular to the row orientation. Note that overly inclined views may result in a large overlap of plants from adjacent rows, which will pose problems for row (and plant) identification.
Plants were separated from the background based on their green color. A single unsupervised method, based on the Lab transform followed by automatic thresholding, was used with success across a range of illumination conditions. However, the method should be tested under a much larger range of illumination and soil conditions before ensuring that it is actually applicable in all scenarios. Additionally, attention should be paid to weeds, which are generally green. Fortunately, weeds were well-controlled in our experiments. Although this is also generally the case during emergence, a weed detection algorithm could be integrated into the pipeline in case of significant infestation. Weeds may be identified by their position relative to the row (Woebbecke et al., 1995). However, for the particular observational configuration proposed (45°, perpendicular to the row), the application of such simple algorithms is likely to fail. Additional (vertical) images should be taken, or more refined methods based on color (Gée et al., 2008) or shape (Swain et al., 2011) should be implemented.
Once the binary images are computed from the original RGB ones, objects containing an uncertain number of plants can be easily identified. An ANN was used in this study to estimate the number of plants from the 13 features of each object. Alternative machine learning techniques were tested, including random forest (Breiman, 2001), multilinear regression (Tabachnick et al., 2001), and a generalized linear model (Lopatin et al., 2016). The ANN was demonstrated to perform best over the three sites (results not presented in this study for the sake of brevity). The RFE algorithm
used to select the minimum subset of features to best estimate
the number of plants per object (Granitto et al., 2006) resulted in
4–10 features depending on the data set considered. The features
selected are mainly related to the object area and the length of
the corresponding skeleton. Conversely, object orientation and
extent appear to contribute marginally to the estimation of the
number of plants per object. The RFE framework employed
here partly accounts for the strong co-dependency between the
13 features considered. The selection process could probably be
improved using a recursive scheme similar to the one employed
in stepwise regression, or a transformation of the space of the
input features.
The wheat population density was estimated with an average
of 12% relative error. The error increases with the population
density because of the increase of overlap between plants
creating larger objects, hence making it more difficult to accurately estimate the number of plants they contain. Likewise, a
degradation of the performances is also expected when plants
are well-developed. Jin and Tang (2009) found that the selection of the optimal growth stage is critical to get an accurate estimation of the plant density in maize crops. A timely observation between Haun stages 1.5 and 2, corresponding to 1.5–2 phyllochrons after emergence, appears optimal: plants are sufficiently developed to be well-identified, while the overlap between plants is minimized because of the low number of leaves (between 1 and 2) and their relatively erect orientation. However, in case of heterogeneous emergence, it is frequent to observe a delay of about one phyllochron (Jamieson et al., 2008; Hokmalipour, 2011) between the first and the last plant emerged. Observation between Haun stages 1.5 and 2 can thus ensure that the majority of plants have emerged. Since the phyllochron varies between 63 and 150 °C·d (McMaster and Wilhelm, 1995), the optimal time window of 0.5 phyllochron (between Haun stages 1.5 and 2) can last about 4–8 days under an average 10 °C air temperature. This short optimal time window for acquiring the images is thus a strong constraint when operationally deploying the proposed method.
The success of the method relies heavily on the estimation
of the number of plants per object. The machine learning
technique used in this study was trained independently for
each site. This provides the best performances because it takes
into account the actual variability of single plant structure that
depends on its development stage at the time of observation,
on the genotypic variability as well as on possible influence
of the environmental conditions, especially wind. Operational deployment of the method therefore requires the model to be re-calibrated over each new experimental site. However, a single training encompassing all possible situations may be envisioned in the near future. This requires a large enough training data set representing the variability of genotypes, development stages, and environmental conditions. This single training data base could also include other cereal crop species similar to wheat at emergence, such as barley, triticale, or
Several vectors could be used to take the RGB images,
depending mostly on the size of the experiment and the
resources available. A monopod and a light rolling platform, the
Phenotypette, were used in our study. More sophisticated vectors
with higher throughput could be envisioned in the next step,
based either on a semi-automatic (Comar et al., 2012) or fully
automatic rover (de Solan et al., 2015) or on a UAV platform as
recently demonstrated by Jin et al. (2016).
Author Contributions

The experiment and algorithm development were mainly accomplished by SL and FB. SL wrote the manuscript and FB made very significant revisions. BA also read and improved the final manuscript. All authors participated in the discussion of the experiment design. BA, PB, and MH significantly contributed to the field experiments in Paris, Toulouse, and Avignon, respectively.
Funding

This study was supported by “Programme d’investissement
d’Avenir” PHENOME (ANR-11-INBS-012) and Breedwheat
(ANR-10-BTR-03) with participation of France Agrimer and
“Fonds de Soutien à l’Obtention Végétale”. The grant of the
principal author was funded by the Chinese Scholarship Council.
Acknowledgments

We also thank the people from the Paris, Toulouse, and Avignon sites
who participated in the field experiments.
References

Breiman, L. (2001). Random forests. Mach. Learn. 45, 5–32. doi: 10.1023/A:
Canny, J. (1986). A computational approach to edge detection. IEEE Trans. Pattern
Anal. Mach. Intell. 8, 679–698. doi: 10.1109/TPAMI.1986.4767851
Comar, A., Burger, P., de Solan, B., Baret, F., Daumard, F., and Hanocq, J.-F. (2012).
A semi-automatic system for high throughput phenotyping wheat cultivars in-
field conditions: description and first results. Funct. Plant Biol. 39, 914–924.
doi: 10.1071/FP12065
de Solan, B., Baret, F., Thomas, S., Beauchêne, K., Comar, A., Fournier, A., et al.
(2015). “PHENOMOBILE V: a fully automated high throughput phenotyping
system,” in Poster at the Eucarpia, Montpellier.
Gée, C., Bossu, J., Jones, G., and Truchetet, F. (2008). Crop/weed discrimination
in perspective agronomic images. Comput. Electron. Agric. 60, 49–59.
doi: 10.1016/j.compag.2007.06.003
Gevrey, M., Dimopoulos, I., and Lek, S. (2003). Review and comparison of methods
to study the contribution of variables in artificial neural network models. Ecol.
Model. 160, 249–264. doi: 10.1016/S0304-3800(02)00257-0
Godwin, R., and Miller, P. (2003). A review of the technologies for mapping
within-field variability. Biosyst. Eng. 84, 393–407. doi: 10.1016/S1537-5110(02)
Granitto, P. M., Furlanello, C., Biasioli, F., and Gasperi, F. (2006). Recursive feature
elimination with random forest for PTR-MS analysis of agroindustrial products.
Chemometr. Intell. Lab. Syst. 83, 83–90. doi: 10.1016/j.chemolab.2006.01.007
Guo, W., Rage, U. K., and Ninomiya, S. (2013). Illumination invariant
segmentation of vegetation for time series wheat images based on decision
tree model. Comput. Electron. Agric. 96, 58–66. doi: 10.1016/j.compag.2013.
Guyon, I., Weston, J., Barnhill, S., and Vapnik, V. (2002). Gene selection for
cancer classification using support vector machines. Mach. Learn. 46, 389–422.
doi: 10.1023/A:1012487302797
Hokmalipour, S. (2011). The study of phyllochron and leaf appearance rate in three cultivars of maize (Zea mays L.) at nitrogen fertilizer levels. World Appl. Sci. J. 12, 850–856.
Jamieson, P., Brooking, I., Zyskowski, R., and Munro, C. (2008). The vexatious
problem of the variation of the phyllochron in wheat. Field Crops Res. 108,
163–168. doi: 10.1016/j.fcr.2008.04.011
Jin, J., and Tang, L. (2009). Corn plant sensing using real-time stereo vision. J. Field
Robot. 26, 591–608. doi: 10.1002/rob.20293
Jin, X., Liu, S., Baret, F., Hemerlé, M., and Comar, A. (2016). Estimates of plant
density from images acquired from UAV over wheat crops at emergence.
Remote Sens. Environ.
Kuhn, M., and Johnson, K. (2013). Applied Predictive Modeling. Berlin: Springer.
doi: 10.1007/978-1- 4614-6849-3
Li, H., Worley, S., and Wilkerson, J. (2009). Design and optimization of a
biomass proximity sensor. Trans. ASABE 52, 1441–1452. doi: 10.13031/2013.
Liu, J., Miller, J. R., Haboudane, D., Pattey, E., and Hochheim, K. (2008). Crop
fraction estimation from casi hyperspectral data using linear spectral unmixing
and vegetation indices. Can. J. Remote Sens. 34, S124–S138. doi: 10.5589/
Lopatin, J., Dolos, K., Hernández, H., Galleguillos, M., and Fassnacht, F. (2016).
Comparing generalized linear models and random forest to model vascular
plant species richness using LiDAR data in a natural forest in central Chile.
Remote Sens. Environ. 173, 200–210. doi: 10.1016/j.rse.2015.11.029
Louppe, G. (2014). Understanding random forests: from theory to practice. arXiv:1407.7502.
Marchant, J. (1996). Tracking of row structure in three crops using image analysis.
Comput. Electron. Agric. 15, 161–179. doi: 10.1016/0168-1699(96)00014-2
McMaster, G. S., and Wilhelm, W. W. (1995). Accuracy of equations predicting
the phyllochron of wheat. Crop Sci. 35, 30–36. doi: 10.2135/cropsci1995.
Meyer, G. E., and Neto, J. C. (2008). Verification of color vegetation indices for
automated crop imaging applications. Comput. Electron. Agric. 63, 282–293.
doi: 10.1016/j.compag.2008.03.009
Montalvo, M., Pajares, G., Guerrero, J. M., Romeo, J., Guijarro, M., Ribeiro, A.,
et al. (2012). Automatic detection of crop rows in maize fields with high weeds
pressure. Expert Syst. Appl. 39, 11889–11897. doi: 10.1016/j.eswa.2012.02.117
Nakarmi, A., and Tang, L. (2012). Automatic inter-plant spacing sensing at early
growth stages using a 3D vision sensor. Comput. Electron. Agric. 82, 23–31.
doi: 10.1016/j.compag.2011.12.011
Nakarmi, A. D., and Tang, L. (2014). Within-row spacing sensing of maize
plants using 3D computer vision. Biosyst. Eng. 125, 54–64. doi: 10.1016/j.
Nichols, S. W. (2000). Method and Apparatus for Counting Crops. US 6073427.
Norman, D. W. (1995). The Farming Systems Approach to Development and
Appropriate Technology Generation. Rome: Food Agriculture Organization.
Olden, J. D., and Jackson, D. A. (2002). Illuminating the “black box”: a
randomization approach for understanding variable contributions in artificial
neural networks. Ecol. Model. 154, 135–150. doi: 10.1016/S0304-3800(02)
Olsen, J., Kristensen, L., and Weiner, J. (2006). Influence of sowing density and
spatial pattern of spring wheat (Triticum aestivum) on the suppression of
different weed species. Weed Biol. Manag. 6, 165–173. doi: 10.1111/j.1445-6664.
Otsu, N. (1975). A threshold selection method from gray-level histograms.
Automatica 11, 23–27.
Philipp, I., and Rath, T. (2002). Improving plant discrimination in image
processing by use of different colour space transformations. Comput. Electron.
Agric. 35, 1–15. doi: 10.1016/S0168-1699(02)00050-9
Rovira-Más, F., Zhang, Q., Reid, J., and Will, J. (2005). Hough-transform-based
vision algorithm for crop row detection of an automated agricultural vehicle.
J. Automobile Eng. 219, 999–1010. doi: 10.1243/095440705X34667
Sankaran, S., Khot, L. R., and Carter, A. H. (2015). Field-based crop phenotyping:
multispectral aerial imaging for evaluation of winter wheat emergence and
spring stand. Comput. Electron. Agric. 118, 372–379. doi: 10.1016/j.compag.
Seni, G., and Elder, J. F. (2010). Ensemble methods in data mining: improving
accuracy through combining predictions. Synth. Lect. Data Mining Knowl.
Discov. 2, 1–126. doi: 10.1186/1471-2105-14-206
Shi, Y., Wang, N., Taylor, R., and Raun, W. (2015). Improvement of
a ground-LiDAR-based corn plant population and spacing measurement
system. Comput. Electron. Agric. 112, 92–101. doi: 10.1016/j.compag.2014.
Shi, Y., Wang, N., Taylor, R. K., Raun, W. R., and Hardin, J. A. (2013). Automatic
corn plant location and spacing measurement using laser line-scan technique.
Precis. Agric. 14, 478–494. doi: 10.1007/s11119-013-9311-z
Shrestha, D. S., and Steward, B. L. (2003). Automatic corn plant population
measurement using machine vision. Trans. ASAE 46, 559. doi: 10.13031/2013.
Shrestha, D. S., and Steward, B. L. (2005). Shape and size analysis of corn
plant canopies for plant population and spacing sensing. Appl. Eng. Agric. 21,
295–303. doi: 10.13031/2013.18144
Slaughter, D., Giles, D., and Downey, D. (2008). Autonomous robotic weed control
systems: a review. Comput. Electron. Agric. 61, 63–78. doi: 10.1016/j.compag.
Swain, K. C., Nørremark, M., Jørgensen, R. N., Midtiby, H. S., and Green, O.
(2011). Weed identification using an automated active shape matching (AASM)
technique. Biosyst. Eng. 110, 450–457. doi: 10.1016/j.biosystemseng.2011.
Tabachnick, B. G., Fidell, L. S., and Osterlind, S. J. (2001). Using Multivariate
Statistics. Glenview, IL: Harpercollins College Publishers.
Tang, L., and Tian, L. F. (2008a). Plant identification in mosaicked crop row images
for automatic emerged corn plant spacing measurement. Trans. ASABE 51,
2181–2191. doi: 10.13031/2013.25381
Tang, L., and Tian, L. F. (2008b). Real-time crop row image reconstruction
for automatic emerged corn plant spacing measurement. Trans. ASABE 51,
1079–1087. doi: 10.13031/2013.24510
Tuv, E., Borisov, A., Runger, G., and Torkkola, K. (2009). Feature selection with
ensembles, artificial variables, and redundancy elimination. J. Mach. Learn. Res.
10, 1341–1366.
Vidović, I., Cupec, R., and Hocenski, Ž. (2016). Crop row detection by global energy
minimization. Pattern Recognit. 55, 68–86. doi: 10.1016/j.patcog.2016.01.013
Woebbecke, D., Meyer, G., Von Bargen, K., and Mortensen, D. (1995). Color
indices for weed identification under various soil, residue, and lighting
conditions. Trans. ASAE 38, 259–269. doi: 10.13031/2013.27838
Conflict of Interest Statement: The authors declare that the research was
conducted in the absence of any commercial or financial relationships that could
be construed as a potential conflict of interest.
Copyright © 2017 Liu, Baret, Andrieu, Burger and Hemmerlé. This is an open-access
article distributed under the terms of the Creative Commons Attribution License
(CC BY). The use, distribution or reproduction in other forums is permitted, provided
the original author(s) or licensor are credited and that the original publication in this
journal is cited, in accordance with accepted academic practice. No use, distribution
or reproduction is permitted which does not comply with these terms.
... However, previous studies that utilized machine learning only focused on the number of plants of a single type of crop. These supervised models may not work well if they were directly migrated to another crop because plant stand counting was affected by many factors (crop type, plant size, leaf overlap, variable spacing, etc.) (Csillik et al., 2018;Jin et al., 2017;Liu et al., 2017aLiu et al., , 2017bZhao et al., 2018). When these factors change, the model needs to be re-trained, which requires extensive training data, considerable time, and space (Machefer, 2020). ...
... Zhao et al. (2018) demonstrated the same conclusion that a strong correlation existed between plant shape and number. This problem caused by the shape of plants was particularly prominent on small crops such as wheat (Liu et al., 2017a(Liu et al., , 2017b. In such cases, the supervised learning methods provide excellent counting performance (Varela et al., 2018). ...
... This may be because the template method needs high-resolution images to train maize image features and can be affected by weed in the field. Similarly, a neural network was trained to extract wheat features and count the number of plants with a relative error of 12% (Liu et al., 2017a(Liu et al., , 2017b. These methods were suitable at early growth stage when the distance between crops was large and there was no weed. ...
Full-text available
Acquiring the crop plant count is critical for enhancing field decision-making at the seedling stage. Remote sensing using unmanned aerial vehicles (UAVs) provide an accurate and efficient way to estimate plant count. However, there is a lack of a fast and robust method for counting plants in crops with equal spacing and overlapping. Moreover, previous studies only focused on the plant count of a single crop type. Therefore, this study developed a method to fast and non-destructively count plant numbers using high-resolution UAV images. A computer vision-based peak detection algorithm was applied to locate the crop rows and plant seedlings. To test the method’s robustness, it was used to estimate the plant count of two different crop types (maize and sunflower), in three different regions, at two different growth stages, and on images with various resolutions. Maize and sunflower were chosen to represent equidistant crops with distinct leaf shapes and morphological characteristics. For the maize dataset (with different regions and growth stages), the proposed method attained R2 of 0.76 and relative root mean square error (RRMSE) of 4.44%. For the sunflower dataset, the method resulted in R2 and RRMSE of 0.89 and 4.29%, respectively. These results showed that the proposed method outperformed the watershed method (maize: R2 of 0.48, sunflower: R2 of 0.82) and better estimated the plant numbers of high-overlap plants at the seedling stage. Meanwhile, the method achieved higher accuracy than watershed method during the seedling stage (2–4 leaves) of maize in both study sites, with R2 up to 0.78 and 0.91, respectively, and RRMSE of 2.69% and 4.17%, respectively. The RMSE of plant count increased significantly when the image resolution was lower than 1.16 cm and 3.84 cm for maize and sunflower, respectively. Overall, the proposed method can accurately count the plant numbers for in-field crops based on UAV remote sensing images.
... Unmanned aerial vehicles (UAV) have become more widely used for autonomous mission planning in precision agriculture (Duan et al., 2017;Li et al., 2020) as this method is completely nondestructive at all growth stages (Portz et al., 2012) and can be used under adverse field conditions with adjustable speeds (Chapman et al., 2014). Vegetation indices (VIs) calculated from UAV images demonstrated promising results in ground cover estimation and plant emergence for a variety of row crops (Chu et al., 2016;Duan et al., 2017;Jin et al., 2017;Koh et al., 2019;Li et al., 2019;Liu et al., 2017;Zhao et al., 2018). ...
... However, due to low spectral resolution, it was challenging to identify and segment plants from the background (Zhao et al., 2018). Jin et al. (2017) and Liu et al. (2017) advocated high-resolution multispectral images for improved accuracy under overlapped seedling conditions. ...
... One of the major limitations of the quality of the UAV image analysis was the ground resolution that might affect segmentation accuracy and repeatability (Jin et al., 2017;Liu et al., 2017;Weber et al., 2006). The RMSE and MAE in our data could have been significantly reduced if the spatial resolution was higher. ...
Full-text available
Abstract Plant density and canopy cover are key agronomic traits for cotton (Gossypium hirsutum L.) and sorghum [Sorghum bicolor (L.) Moench] phenotypic evaluation. The objective of this study was to evaluate utility of broadband red–green–blue (RGB) and narrowband green, red, red‐edge, and near‐infrared spectral data taken by an unmanned aerial vehicle (UAV), and RGB taken by a digital single‐lens reflex camera for assessing the cotton and sorghum stands. Support Vector Machine was used to analyze UAV images, whereas ImageJ was used for RGB images. Fifteen vegetation indices (VIs) were evaluated for their accuracy, predictability, and residual yield. All VIs had Cohen's k > .65, F score > .63, and User and Producer accuracy of more than 71 and 69%, respectively. Soil‐adjusted vegetation indices (SAVIs) among narrowband VIs and excess green minus excess red (ExG–ExR) among broadband VIs provided more agreeable estimates of cotton and sorghum density than the remaining VIs with R2 and index of agreement (IoA) up to .79 and .92, respectively. The estimated canopy cover explained up to 83 and 82% variability in leaf area index (LAI) of cotton and sorghum, respectively. The ImageJ produced R2 from .79 to .90 and .83 to .86 and IoA .89 to .97 and ∼.91 between estimated and observed cotton and sorghum density, respectively. ImageJ explained up to 82 and 79% variability in cotton and sorghum LAI, respectively. Although ImageJ can give close estimates of crop density and cover, UAV‐based narrowband VIs still can provide an agreeable, reliable, and time‐efficient estimate of these attributes.
... Plant density is a critical variable for crop growth and yield by influencing inter-and intraspecific competition for the available resources (e.g., water, nutrients, and radiation) [1]. During the growth season, sowing and planting densities should be similar in optimal conditions. ...
... This study presented a deep learning model for wheat plant density estimation. Unlike traditional research that estimated wheat plant density before tillering [1,13], this study challenged the wheat density after tillering when wheat plants were highly clustered. Specifically, this study's main research contributions are as follows. ...
Full-text available
Plant density is a significant variable in crop growth. Plant density estimation by combining unmanned aerial vehicles (UAVs) and deep learning algorithms is a well-established procedure. However, flight companies for wheat density estimation are typically executed at early development stages. Further exploration is required to estimate the wheat plant density after the tillering stage, which is crucial to the following growth stages. This study proposed a plant density estimation model, DeNet, for highly accurate wheat plant density estimation after tillering. The validation results presented that (1) the DeNet with global-scale attention is superior in plant density estimation, outperforming the typical deep learning models of SegNet and U-Net; (2) the sigma value at 16 is optimal to generate heatmaps for the plant density estimation model; (3) the normalized inverse distance weighted technique is robust to assembling heatmaps. The model test on field-sampled datasets revealed that the model was feasible to estimate the plant density in the field, wherein a higher density level or lower zenith angle would degrade the model performance. This study demonstrates the potential of deep learning algorithms to capture plant density from high-resolution UAV imageries for wheat plants including tillers.
... The relationship between VIs and canopy status was found to be strongly influenced by the phenological stage of maize plants (Burkart et al., 2018). During the initial stage when the maize leaves did not cover the whole plots, the VI values calculated from UAV images may be influenced by the background such as soil and other disturbances even though methods had been adopted to eliminate the impacts (Meyer and Neto, 2008;Jin et al., 2017;Liu et al., 2017). Therefore, it was suggested to integrate the VI acquired at important growth stages of maize which can reflect more precisely the temporal dynamic changes of growth conditions and achieve the highest precision for yield prediction (Guo et al., 2020). ...
Full-text available
Recovery of biobased fertilizers derived from manure to replace synthetic fertilizers is considered a key strategy to close the nutrients loop for a more sustainable agricultural system. This study evaluated the nitrogen (N) fertilizer value of five biobased fertilizers [i.e., raw pig manure (PM), digestate (DIG), the liquid fraction of digestate (LFD), evaporator concentrate (EVA) and ammonia water (AW)] recovered from an integrated anaerobic digestion–centrifugation–evaporation process. The shoot and root growth of maize (Zea mays L.) under biobased fertilization was compared with the application of synthetic mineral N fertilizer, i.e., calcium ammonium nitrate (CAN). The non-invasive technologies, i.e., minirhizotron and unmanned aerial vehicle (UAV) based spectrum sensing, were integrated with the classic plant and soil sampling to enhance the in-season monitoring of the crop and soil status. Results showed no significant difference in the canopy status, biomass yield or crop N uptake under biobased fertilization as compared to CAN, except a lower crop N uptake in DIG treatment. The total root length detected by minirhizotron revealed a higher early-stage N availability at the rooting zone under biobased fertilization as compared to CAN, probably due to the liquid form of N supplied by biobased fertilizers showing higher mobility in soil under dry conditions than the solid form of CAN. Given a high soil N supply (averagely 70–232 kg ha−1) in the latter growing season of this study, the higher N availability in the early growing season seemed to promote a luxury N uptake in maize plants, resulting in significantly (p < 0.05) higher N concentrations in the harvested biomass of PM, LFD and AW than that in the no-N fertilized control. 
Therefore, the biobased fertilizers, i.e., PM, LFD, EVA and AW have a high potential as substitutes for synthetic mineral N fertilizers, with additional value in providing easier accessible N for crops during dry seasons, especially under global warming which is supposed to cause more frequent drought all over the world.
... VF, GF, and SF can be also computed using very high spatial resolution images with pixel sizes from a fraction of mm to cm, i.e., significantly smaller than the typical dimension of the objects (plants, organs). RGB cameras with few to tens of millions of pixels are currently widely used as noninvasive high-throughput techniques applied to plant breeding, farm management, and yield prediction [14][15][16]. These cameras are borne on multiple platforms, including drones [17], ground vehicles [18], and handheld systems [19], or set on a fixed pod [16]. ...
Full-text available
Pixel segmentation of high-resolution RGB images into chlorophyll-active or nonactive vegetation classes is a first step often required before estimating key traits of interest. We have developed the SegVeg approach for semantic segmentation of RGB images into three classes (background, green, and senescent vegetation). This is achieved in two steps: A U-net model is first trained on a very large dataset to separate whole vegetation from background. The green and senescent vegetation pixels are then separated using SVM, a shallow machine learning technique, trained over a selection of pixels extracted from images. The performances of the SegVeg approach is then compared to a 3-class U-net model trained using weak supervision over RGB images segmented with SegVeg as groundtruth masks. Results show that the SegVeg approach allows to segment accurately the three classes. However, some confusion is observed mainly between the background and senescent vegetation, particularly over the dark and bright regions of the images. The U-net model achieves similar performances, with slight degradation over the green vegetation: the SVM pixel-based approach provides more precise delineation of the green and senescent patches as compared to the convolutional nature of U-net. The use of the components of several color spaces allows to better classify the vegetation pixels into green and senescent. Finally, the models are used to predict the fraction of three classes over whole images or regularly spaced grid-pixels. Results show that green fraction is very well estimated ( R 2 = 0.94 ) by the SegVeg model, while the senescent and background fractions show slightly degraded performances ( R 2 = 0.70 and 0.73 , respectively) with a mean 95% confidence error interval of 2.7% and 2.1% for the senescent vegetation and background, versus 1% for green vegetation. We have made SegVeg publicly available as a ready-to-use script and model, along with the entire annotated grid-pixels dataset. 
We thus hope to render segmentation accessible to a broad audience by requiring neither manual annotation nor knowledge or, at least, offering a pretrained model for more specific use.
... In this study, cultivation density had a significant effect on the content of anisodamine ( Figure 2B) and anisodine ( Figure 2D), i.e., medium density (40 cm × 50 cm) had significantly higher levels than other cultivation densities in terms of alkaloid content. Therefore, cultivation density affected the synthesis of anisodamine and anisodine to some extent [40,41]. Cultivation density further affected the secondary metabolites of medicinal plants by influencing the primary metabolism of plants, and it has been shown that high density not only inhibits glycolysis, the tricarboxylic acid cycle, and sugar and starch metabolic processes in the young leaves of Ginkgo biloba, but also inhibits the flavonoid biosynthesis pathway [42]; the increase in glucose-1-phosphate levels under medium density conditions promoted ginsenoside synthesis in ginseng [9]. ...
Full-text available
Anisodus tanguticus (Maxim.) Pascher, a medicinal plant growing in the Tibetan Plateau region with various medicinal values, is mainly used for the extraction of tropane alkaloids (TAs), and the increased demand for A. tanguticus has triggered its overexploitation. The cultivation of this plant is necessary for the quality control and conservation of wild resources. During 2020 and 2021, a split-plot experiment with three replicates was used to study different planting densities (D1: 30 × 50 cm; D2: 40 × 50 cm; D3: 50 × 50 cm; D4: 60 × 50 cm) and different growth periods (first withering period: October 2020; greening period: June 2021; growth period: August 2021; second withering period: October 2021) on the yield and alkaloid content (atropine, scopolamine, anisodamine, anisodine) of A. tanguticus. The results showed that the mass per plant of A. tanguticus was higher at low density, while the yield per unit area of the underground parts (25288.89 kg/ha) was greater at high density, and the mass of the aboveground parts (14933.33 kg/ha) was higher at low density. The anisodamine (0.0467%) and anisodine (0.1201%) content of D2 (40 cm × 50 cm) was significantly higher than that of the other densities during the green period. The content of all four alkaloids was highest during the greening period, and the scopolamine, anisodamine, and anisodine content was higher in the aboveground parts than in the underground parts. The total alkaloid accumulation per unit area of the whole plant reached its maximum value (1.08%, 139.48 kg/ha) in the growth period of D2; therefore, for economic efficiency and the selection of the best overall quality, it was concluded that the aboveground parts also had medicinal value, the growth period was the best harvesting period, and D2 (40 cm × 50 cm) was the best planting density for A. tanguticus.
... In this setting, investigating non-destructive highthroughput approaches to estimate spike DM through volume or size could be valuable in screening for this trait, as discussed above. Considering the other numerical component of FE (grain number, a product of spike number per unit area and grains per spike) (Slafer et al., 1996), neural network approaches to count plants (Liu et al., 2017), or spikes per unit area (Velumani et al., 2017;Fernandez-Gallego et al., 2018;Sadeghi-Tehran et al., 2019) from RGB images in wheat plots have shown promising results. On the contrary, grain counting (per spike or unit area) in the field represents a bigger challenge and is best estimated or derived from samples taken after harvest (Dreccer et al., 2019). ...
As the global population moves towards 10 billion people, record high temperatures are being set on an annual basis, while agricultural soils and water resources are experiencing serious attrition. Clearly the challenge of food security is ever more urgent. Increasing genetic gains of staple foods is a key part of the solution, since seed-embedded technologies are readily adopted by farmers. Improved breeding technologies facilitate the identification of complementary parents for hybridization and the selection of improved progeny. This chapter outlines a historical basis for plant selection and examines the areas in which sensor technology has evolved from our eyes to the application of proximal and remote sensing. The use of none-invasive methods are presented that can aid with selection of crop characteristics critical for improving yield potential, such as photosynthesis and partitioning-related traits, as well as the detection of traits that help protect yield, related to disease and lodging resistance.
... However, the current application of the method is limited to plant counting under the condition of uniform planting in the field (Fig. 9). Liu, Baret, et al. (2017) proposed that there was redundancy between morphological features, which may confuse the training of the discriminative model. The morphological characteristics of maize seedlings were compared here with weeds and the contribution of various morphological characteristics were then assessed to improve the accuracy of the discriminative model at V2-V4 leaf stage ( Table 2). ...
Full-text available
Accurate maize plant counting plays an essential role in prediction of leaf area index (LAI), aboveground biomass (AGB) and yield. Plant counting of maize inbred lines at early growth stage will result in counting bias caused by death and growth of small seedlings. Therefore, the estimation of LAI and AGB might be negatively affected by plant counting bias at early growth stage. In this study, morphologic discrimination model (MDM) and interpolation discriminant model (IDM) were proposed for plant counting of maize inbred lines at second to fourth (V2–V4) leaf and fourth to sixth (V4–V6) leaf stages with different uncrewed aerial vehicles (UAV) flight heights. Automatic optimum angle calculation of each row, location-based plant cluster segmentation and mosaic method were presented to improve the estimation accuracy of plant counting. Then, the impact of accurate plant counting was evaluated in LAI and AGB prediction at the two growth stages. The results indicated that germination rate difference of some inbred lines could reach up to 38% between V2–V4 and V4–V6 leaf stages. The proposed method accurately estimated the plant counting in the UAV images during V2–V4 leaf stage (R² = 0.98, RMSE = 7.7, rRMSE = 2.6%) and V4–V6 leaf stage (R² = 0.86, RMSE = 2.0, rRMSE = 5.5%). The estimated LAI and AGB with plant numbers calculated at V4–V6 leaf stage correlated better with the field measurements (R² = 0.85 and R² = 0.9, respectively) compared with those estimated at V2–V4 leaf stage (R² = 0.8 and R² = 0.86, respectively). This research indicates that better estimation of LAI and AGB in the field were obtained by accurate plant counting in the late growth stage using UAV images and provides valuable insight for more accurate prediction of yield and crop management and breeding.
... So far, there has been a lot of research on the practical application of digital camera images to crop growth status monitoring (Hunt et al., 2005;Yu et al., 2013), including identification and monitoring of crop seedlings under field conditions (Buters et al., 2019). Liu et al. (2017) developed a field mobile platform equipped with a digital camera to estimate plant density. The results showed that the wheat seeding plant density was between 100 and 600 seeds·m −2 and the average relative error was 12%. ...
Full-text available
Accurately identifying the quantity of maize seedlings is useful in improving maize varieties with high seedling emergence rates in a breeding program. The traditional method is to calculate the number of crops manually, which is labor-intensive and time-consuming. Recently, observation methods utilizing a UAV have been widely employed to monitor crop growth due to their low cost, intuitive nature and ability to collect data without contacting the crop. However, most investigations have lacked a systematic strategy for seedling identification. Additionally, estimating the quantity of maize seedlings is challenging due to the complexity of field crop growth environments. The purpose of this research was to rapidly and automatically count maize seedlings. Three models for estimating the quantity of maize seedlings in the field were developed: corner detection model (C), linear regression model (L) and deep learning model (D). The robustness of these maize seedling counting models was validated using RGB images taken at various dates and locations. The maize seedling recognition rate of the three models were 99.78% (C), 99.9% (L) and 98.45% (D) respectively. The L model can be well adapted to different data to identify the number of maize seedlings. The results indicated that the high-throughput and fast method of calculating the number of maize seedlings is a useful tool for maize phenotyping.
... VF, GF, and SF can be also computed using very high spatial resolution images with pixel sizes from a fraction of mm to cm, i.e., significantly smaller than the typical dimension of the objects (plants, organs). RGB cameras with few to tens of millions of pixels are currently widely used as noninvasive high-throughput techniques applied to plant breeding, farm management, and yield prediction [14][15][16]. These cameras are borne on multiple platforms, including drones [17], ground vehicles [18], and handheld systems [19], or set on a fixed pod [16]. ...
This paper presents a new, efficient method for crop row detection that uses dynamic programming to combine image evidence with prior knowledge about the geometric structure sought in the image. The proposed approach consists of three steps: (i) vegetation detection, (ii) detection of regular patterns, and (iii) determination of an optimal crop model. The method accurately detects both straight and curved crop rows. It is experimentally evaluated on a set of 281 real-world camera images of maize, celery, potato, onion, sunflower, and soybean crops, and compared to two Hough-transform-based methods and one linear-regression-based method using a novel approach for evaluating crop row detection methods. The experiments demonstrate that the proposed method outperforms the other three methods in straight crop row detection and that it detects curved crop rows accurately.
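The vegetation-detection step (i) is commonly implemented with a color index such as excess green (ExG), computed on chromatic (sum-normalized) coordinates. The abstract does not name the exact index or threshold used, so both are illustrative assumptions here.

```python
def excess_green(r, g, b):
    """ExG index on chromatic coordinates: (2g - r - b) / (r + g + b)."""
    s = r + g + b
    if s == 0:
        return 0.0
    return (2 * g - r - b) / s

def is_vegetation(pixel, thresh=0.1):
    """Classify an RGB pixel as vegetation when its ExG exceeds a threshold."""
    return excess_green(*pixel) > thresh
```

Applied pixel-wise, this yields the binary vegetation mask on which the regular-pattern and crop-model steps then operate.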
Image processing algorithms for identifying individual corn plants and locating plant stem centers were developed and applied to mosaicked crop row images to automatically measure corn plant spacing at early growth stages. The algorithms combined multiple sources of information for plant detection and plant center estimation, including plant color, plant morphological features, and the crop row centerline. The system was tested over two 41 m (134.5 ft) long corn rows using video acquired twice in each direction, and had a mean plant misidentification ratio of 3.7%. Compared with manual plant spacing measurements, it achieved an overall spacing error (RMSE) of 1.7 cm and an overall R² of 0.96 between the manual measurements and the system estimates. The developed image processing algorithms were effective for automated corn plant spacing measurement at early growth stages; interplant spacing errors were mainly due to crop damage and sampling platform vibration that caused mosaicking errors. © 2008 American Society of Agricultural and Biological Engineers.
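The accuracy figures quoted (an RMSE of 1.7 cm and an R² of 0.96 between manual and estimated spacings) follow the standard definitions, sketched here for clarity; the paired sample data in the test are illustrative.

```python
import math

def rmse(observed, estimated):
    """Root-mean-square error between paired measurements."""
    return math.sqrt(sum((o - e) ** 2 for o, e in zip(observed, estimated))
                     / len(observed))

def r_squared(observed, estimated):
    """Coefficient of determination of estimates against observations."""
    mean_o = sum(observed) / len(observed)
    ss_res = sum((o - e) ** 2 for o, e in zip(observed, estimated))
    ss_tot = sum((o - mean_o) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot
```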
The physical growing environment of winter wheat can be critically affected by micro-climatic and seasonal changes in a given agroclimatic zone. Therefore, winter wheat breeding efforts across the globe focus heavily on emergence and winter survival, as these traits must be achieved before yield potential can be evaluated. In this study, multispectral imaging from an unmanned aerial vehicle was investigated for evaluating seedling emergence and spring stand (an estimate of winter survival) of three winter wheat market classes in Washington State: soft white club, hard red, and soft white winter wheat varieties. Strong correlations between the ground-truth and aerial image-based emergence (Pearson correlation coefficient r = 0.87) and spring stand (r = 0.86) estimates were established. Overall, the aerial sensing technique can be a useful tool to evaluate emergence and spring stand phenotypic traits. The image database can also serve as a virtual record during winter wheat variety development and may be used to evaluate variety performance over the study years.
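The agreement statistic reported here, the Pearson correlation coefficient between ground-truth and image-based counts, is computed as below; the paired values in the test are illustrative, not the study's data.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```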
In-field variations in corn plant spacing and population can lead to significant yield differences. To minimize these variations, seeds should be placed at a uniform spacing during planting. Since the ability to achieve this uniformity is directly related to planter performance, intensive field evaluations are vitally important prior to design of new planters and currently the designers have to rely on manually collected data that is very time consuming and subject to human errors. A machine vision-based emerged crop sensing system (ECSS) was developed to automate corn plant spacing measurement at early growth stages for planter design and testing engineers. This article documents the first part of the ECSS development, which was the real-time video frame mosaicking for crop row image reconstruction. Specifically, the mosaicking algorithm was based on a normalized correlation measure and was optimized to reduce the computational time and enhance the frame connection accuracy. This mosaicking algorithm was capable of reconstructing crop row images in real-time while the sampling platform was traveling at a velocity up to 1.21 m·s⁻¹ (2.73 mph). The mosaicking accuracy of the ECSS was evaluated over three 40 to 50 m long crop rows. The ECSS achieved a mean distance measurement error ratio of -0.11% with a standard deviation of 0.74%. © 2008 American Society of Agricultural and Biological Engineers.
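The frame-connection step rests on a normalized correlation measure. A minimal 1-D sketch (real mosaicking operates on 2-D image strips) that finds the best overlap offset of a new frame within a reference row profile:

```python
import math

def ncc(a, b):
    """Normalized cross-correlation between two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a)
                    * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def best_offset(ref, frame):
    """Offset of `frame` within `ref` that maximizes normalized correlation."""
    best_s, best_score = 0, -2.0
    for s in range(len(ref) - len(frame) + 1):
        score = ncc(ref[s:s + len(frame)], frame)
        if score > best_score:
            best_s, best_score = s, score
    return best_s
```

Because the correlation is normalized by the local mean and variance, the match is insensitive to uniform brightness changes between frames, which matters for outdoor video.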
A biomass population sensor can benefit producers, seed companies, and researchers. The primary advantage of this sensor is the ability to generate better site-specific population density maps. The primary objective of this study was to develop a capacitance-based biomass proximity sensor with the performance characteristics necessary to detect the presence of biomass (e.g., corn stalks). The detection and quantification of corn stalks under harvest conditions was chosen as an example application of this technology. In this study, a non-intrusive, capacitive, single-sided biomass proximity sensor was developed, and the suitability of this sensor to biomass population quantification was evaluated. A number of capacitive sensor patterns were simulated using the finite element method, and then the patterns were fabricated and tested in the laboratory. The design, modeling, and laboratory testing resulted in a high-sensitivity, low-noise sensing system that utilized a capacitive sensor, Wien bridge oscillator, phase-locked loop, and operational amplifier that could detect stalk presence and transform sensor capacitance change into a change in electrical potential. Sensor operating parameters were optimized to detect corn stalks in this application. This sensor system was then evaluated under field harvest conditions. The signal-to-noise ratio of this system was greater than 10 in both laboratory and proof-of-concept field tests. When compared to hand counts obtained before harvest, the sensor count error was less than 5% for five of the six rows harvested and less than 2% averaged over the six rows harvested. Future work will involve comprehensive field evaluation of the moisture-based biomass proximity sensor to quantify corn stalk population.© 2009 American Society of Agricultural and Biological Engineers ISSN 0001-2351.
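As a rough illustration of the sensing chain described above: the oscillation frequency of a Wien bridge oscillator is f = 1/(2πRC), so a capacitance increase from a nearby stalk lowers the frequency that the phase-locked loop then tracks. The component values below are illustrative assumptions, not the authors' design parameters.

```python
import math

def wien_bridge_freq(r_ohms, c_farads):
    """Wien bridge oscillator frequency: f = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# Illustrative: a ~1 pF capacitance increase (stalk present) lowers the frequency
f_free = wien_bridge_freq(100e3, 10e-12)   # sensor alone
f_stalk = wien_bridge_freq(100e3, 11e-12)  # sensor plus stalk capacitance
```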
Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to Adaboost (Y. Freund & R. Schapire, Machine Learning: Proceedings of the Thirteenth International conference, ∗∗∗, 148–156), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression.
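A minimal, standard-library-only sketch of the two ingredients the abstract describes: a bootstrap sample per tree and a random feature subset per split. For brevity each "tree" here is a one-split stump, so the per-tree feature subset plays the per-node role; a real random forest grows deep trees and re-draws the subset at every node.

```python
import math
import random
from collections import Counter

def fit_stump(X, y, feats):
    """Best single split over the allowed features (misclassification count)."""
    best = None
    for f in feats:
        for t in sorted({row[f] for row in X}):
            left = [lab for row, lab in zip(X, y) if row[f] <= t]
            right = [lab for row, lab in zip(X, y) if row[f] > t]
            if not left or not right:
                continue
            lmaj = Counter(left).most_common(1)[0][0]
            rmaj = Counter(right).most_common(1)[0][0]
            err = sum(v != lmaj for v in left) + sum(v != rmaj for v in right)
            if best is None or err < best[0]:
                best = (err, f, t, lmaj, rmaj)
    if best is None:  # bootstrap sample was pure: fall back to a constant
        maj = Counter(y).most_common(1)[0][0]
        return lambda row: maj
    _, f, t, lmaj, rmaj = best
    return lambda row: lmaj if row[f] <= t else rmaj

def random_forest(X, y, n_trees=25, seed=0):
    """Bag stumps on bootstrap samples; predict by majority vote."""
    rng = random.Random(seed)
    n_feats = max(1, int(math.sqrt(len(X[0]))))
    trees = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]  # bootstrap
        feats = rng.sample(range(len(X[0])), n_feats)         # feature subset
        trees.append(fit_stump([X[i] for i in idx],
                               [y[i] for i in idx], feats))
    return lambda row: Counter(t(row) for t in trees).most_common(1)[0][0]
```

Because each tree sees a different bootstrap sample and feature subset, the trees are decorrelated, which is what drives the robustness-to-noise property the abstract highlights.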