METHODS
published: 27 November 2017
doi: 10.3389/fpls.2017.02002
Edited by:
Yann Guédon,
Centre de Coopération Internationale
en Recherche Agronomique pour le
Développement (CIRAD), France
Reviewed by:
Barbara George-Jaeggli,
The University of Queensland,
Australia
Andreas Bolten,
University of Cologne, Germany
*Correspondence:
Simon Madec
simon.madec@inra.fr
Specialty section:
This article was submitted to
Technical Advances in Plant Science,
a section of the journal
Frontiers in Plant Science
Received: 22 August 2017
Accepted: 09 November 2017
Published: 27 November 2017
Citation:
Madec S, Baret F, de Solan B,
Thomas S, Dutartre D, Jezequel S,
Hemmerlé M, Colombeau G and
Comar A (2017) High-Throughput
Phenotyping of Plant Height:
Comparing Unmanned Aerial Vehicles
and Ground LiDAR Estimates.
Front. Plant Sci. 8:2002.
doi: 10.3389/fpls.2017.02002
High-Throughput Phenotyping
of Plant Height: Comparing
Unmanned Aerial Vehicles and
Ground LiDAR Estimates
Simon Madec1*, Fred Baret1, Benoît de Solan2, Samuel Thomas2, Dan Dutartre3, Stéphane Jezequel2, Matthieu Hemmerlé3, Gallian Colombeau1 and Alexis Comar3
1 INRA, UMR EMMAH, Avignon, France; 2 ARVALIS – Institut du végétal, Avignon, France; 3 HIPHEN, Avignon, France
The capacity of LiDAR and Unmanned Aerial Vehicles (UAVs) to provide plant height estimates as a high-throughput plant phenotyping trait was explored. An experiment over wheat genotypes grown under well-watered and water-stress modalities was conducted. Frequent LiDAR measurements were performed along the growth cycle using a phénomobile unmanned ground vehicle. A UAV equipped with a high-resolution RGB camera flew over the experiment several times to retrieve the digital surface model from structure from motion techniques. Both techniques provide a 3D dense point cloud from which plant height can be estimated. Plant height was first defined as the z-value below which 99.5% of the points of the dense cloud lie. This definition provides good consistency with manual measurements of plant height (RMSE = 3.5 cm) while minimizing the variability along each microplot. Results show that LiDAR and structure from motion plant height values are always consistent. However, a slight underestimation is observed for structure from motion techniques, in relation with the coarser spatial resolution of UAV imagery and the limited penetration capacity of structure from motion as compared to LiDAR. Very high heritability values (H² > 0.90) were found for both techniques when lodging was not present. The dynamics of plant height show that it carries pertinent information regarding the period and magnitude of plant stress. Further, the date when the maximum plant height is reached was found to be very heritable (H² > 0.88) and a good proxy of the flowering stage. Finally, the capacity of plant height as a proxy for total above-ground biomass and yield is discussed.
Keywords: plant height, high throughput, unmanned aerial vehicles, dense point cloud, LiDAR, phenotyping,
broad-sense heritability
INTRODUCTION
Plant height is recognized as a good proxy of biomass (Yin et al., 2011; Bendig et al., 2014; Ota et al., 2015; Tilly et al., 2015). Stem height, which defines plant height, appears to be sensitive to the stresses experienced by the crop (Rawson and Evans, 1971). It is also one of the inputs of models used to evaluate water stress (Blonquist et al., 2009). Plant height is known to make the crop
Abbreviations: DaS, day after sowing; Dflowering, date of flowering; Dmax(PH), date of maximum plant height; GDD, growing degree day; GSD, ground sampling distance; H², broad-sense heritability; LiDAR, light detection and ranging; RMSE, root mean square error; Rp, rank percentile; UAV, unmanned aerial vehicle; WS, water stress modality; WW, well-watered modality.
more sensitive to lodging (Berry et al., 2003). Plant height thus appears to be a highly appealing trait for plant breeders within phenotyping experiments, particularly under natural field conditions. Current methods based on manual evaluation with a ruler on a limited sample size for each microplot are labor intensive, low throughput, and prone to errors in the sampling, ruler adjustment, reading, and recording of the data. Alternative methods have been developed based either on LiDAR (Light Detection And Ranging), often called laser scanning (Hoffmeister et al., 2015), on ultrasonic sensors, also called sonar (Turner et al., 2007), on depth cameras, also called time-of-flight cameras (Chéné et al., 2012; Schima et al., 2016), or finally on RGB high-resolution imagery associated with structure from motion algorithms. Depth cameras are limited to close-range applications (Schima et al., 2016). Ultrasonic systems are considered a relatively low-cost and user-friendly solution. However, LiDAR measurements have generally been preferred for their increased spatial resolution, higher throughput, and independence from air temperature and wind (Tumbo et al., 2002; Escolà et al., 2011; Llorens et al., 2011). LiDAR scanning can be performed from the ground with terrestrial laser scanners. However, terrestrial laser scanners are conical scanners that are well suited for vertically developed objects such as buildings or forests. Their application to crops with limited vertical extent and a canopy volume densely populated by leaves, stems, and other organs appears limited (Zhang and Grift, 2012; Bareth et al., 2016): the system needs to be moved over a high number of places for large phenotyping platforms. Further, the microplots may be seen from different distances and angles, with an impact on the spatial resolution and an associated bias introduced between microplots. It therefore seems preferable to observe crops from near-nadir directions.
Several manned or semi-autonomous GPS (Global Positioning System) navigated vehicles have been developed in recent years, on which vertically scanning LiDARs have been set up. LiDARs provide a full description of the profile of interception, either with single-echo systems (Lisein et al., 2013; James and Robson, 2014) when the resolution is fine enough, with full-waveform systems (Mallet and Bretar, 2009), or with an approximation of the latter using multi-echo systems (Moras et al., 2010). Because of the penetration of the laser beam into most canopies, nadir-looking LiDAR techniques provide at the same time the digital surface model corresponding to the top envelope of the crop (also called the crop surface model) and the elevation of the background surface called the digital terrain model. Plant height is then simply computed as the difference between the digital surface model and the digital terrain model. The accuracy of plant height measurement using such LiDAR techniques was reported to be better than a few centimeters (Deery et al., 2014; Virlet et al., 2016). Because of their high accuracy, their independence from the illumination conditions, and therefore their high repeatability, these LiDAR-based techniques are expected to be more accurate than traditional manual ruler measurements of height in the field.
RGB image-based retrieval of crop height remains, however, the most widely used approach (Bendig et al., 2013) because of its low cost and high versatility (Remondino and El-Hakim, 2006). Further, the advances in sensors (smaller, lighter, and cheaper, with increased resolution and sensitivity) and improvements in computer performance, along with advances in algorithms, have contributed to the recent success of such techniques (Remondino et al., 2014). The 3D dense point clouds are generated by using a large set of high-resolution overlapping images. They are processed using structure from motion algorithms implemented either in commercial software (Smith et al., 2015) such as Pix4D1 and Agisoft Photoscan2, or in open-source software including MicMac (IGN, France) and Bundler (Snavely et al., 2006). Nevertheless, accurate retrieval of the 3D characteristics of the canopy from structure from motion algorithms requires careful completion of the image acquisition, which should provide enough view directions for each point of the scene, with crisp high-resolution images to identify the tie points used for the 3D reconstruction of the surface (Turner et al., 2014; Smith et al., 2015). Several factors will thus influence the quality and accuracy of the dense point cloud, including flight configuration (altitude, speed, frequency of acquisitions, trajectory design, and sensor orientation), camera settings (resolution, field of view, image quality), illumination and wind conditions, the distribution of ground control points, as well as the parameters used to run the structure from motion algorithm (Dandois and Ellis, 2013; Remondino et al., 2014).
Because of the spatial resolution of the images used by the structure from motion algorithms, and more importantly because of the occlusions observed when a single point must be seen from two distinct directions, structure from motion algorithms do not penetrate deeply into dense canopies (Lisein et al., 2013; Grenzdörffer, 2014; Ota et al., 2015). Structure from motion techniques generally provide a good description of the digital surface model, but accessing the digital terrain model is only possible when the ground is clearly visible (Khanna et al., 2015). This is the case for low canopy coverage or for phenotyping platforms where the ground is visible in the alleys and between the plots (Holman et al., 2016). The identification of ground points can be done directly by the photogrammetric software such as Agisoft Photoscan (Geipel et al., 2014). However, this method will depend on the choice of the classification parameters and the type and stage of vegetation. Khanna et al. (2015) used the green index (Gitelson, 2004) and applied the Otsu automatic thresholding method (Otsu, 1979) over green crops. For senescent vegetation this approach will not provide good results because of confusion between the senescent crop and bare soil. Therefore, the generation of the digital terrain model from the dense point cloud still appears to be a challenge in many situations. The problem could be solved by using a digital terrain model derived from an independent source of information (Bendig et al., 2014; Geipel et al., 2014; Grenzdörffer, 2014), assuming that the digital terrain model does not vary significantly during the growing season.
1 www.pix4d.com
2 www.agisoft.com
The objective of this study is to develop a methodology for estimating the plant height of wheat crops from an RGB camera aboard a UAV or a LiDAR aboard a phénomobile (a fully automatic rover) in the context of high-throughput field phenotyping. For this purpose, a comprehensive experiment was set up where the field phenotyping platform was sampled several times during the growing season with the UAV and the phénomobile. A definition of plant height is first provided from the dense point cloud derived from the LiDAR, which will constitute the reference. The UAV-derived plant height based on the structure from motion algorithm will then be compared with the LiDAR reference plant height, with emphasis on the way the digital terrain model is computed. The flowering date of wheat was estimated from the dynamics of plant height. Finally, the broad-sense heritability of plant height and its correlation with yield and biomass were evaluated.
MATERIALS AND METHODS
Study Area
The field phenotyping platform (Figure 1B) is located in Gréoux-les-Bains (France, 43.7° latitude North, 5.8° longitude East, Figure 1A). The platform is approximately 200 m by 250 m in size and is mainly flat, with a 1 m maximum elevation difference. Wheat was sown on October 29th, 2015 with a row spacing of 17.5 cm and a seed density of 300 seeds·m⁻². It was harvested on July 6th, 2016. A total of 1173 microplots of 1.9 m width (11 rows) by 10 m length was considered, each of them corresponding to a given genotype among a total of 550 genotypes grown under contrasting irrigation modalities: about half of the platform was irrigated (WW modality) while the other part was subjected to water stress (WS modality). A moderate water stress took place in the 2015–2016 season. The cumulated water deficit was 126 mm for the WS modality and 18 mm for the WW modality. A subset of 19 contrasting genotypes was considered here to evaluate the plant height heritability. Each of those genotypes was replicated three times over the WW and WS modalities, organized in an alpha-plan experimental design.
Plant Height, Biomass and Flowering
Stage Ground Measurements
Plant height was manually measured on 12 microplots: on each microplot, the average of 20 height measurements was calculated; each individual sample measurement corresponds to the highest point of the representative plant within an area of 30 cm radius, corresponding either to a leaf or to an ear.
The above-ground biomass was measured over three segments of 2 m length by two adjacent rows. The first two rows located at the border of the microplots were not considered in the sampling, to minimize border effects. The samples were weighed fresh, and a subsample of around 30 plants was taken to measure the water content by weighing it fresh and drying it in an oven for 24 h at 80°C. Around stage Zadoks 26, 6 microplots were sampled. At stage Zadoks 32, 54 microplots were sampled, corresponding to one replicate of 27 genotypes in both the WW and WS modalities. Finally, at the flowering stage (Zadoks 50), 80 microplots were sampled, corresponding to one replicate of 40 genotypes grown under the two irrigation modalities. However, due to measurement errors, the biomass measurement of one microplot was missing. The invasive measurements were taken within less than 4 days of the closest LiDAR survey.
The yield of all the microplots corresponding to 19 genotypes
times the three replicates in the two irrigation modalities was
measured during the harvest: the weight of harvested grain was
divided by the microplot area and the grain fresh weight was
normalized to 12% relative moisture.
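This normalization amounts to converting the harvested grain weight to a common 12% moisture basis. A worked form of the correction, where h denotes the measured grain moisture fraction (the symbol is ours, introduced for illustration):

$$Y_{12\%} = Y_{\mathrm{fresh}} \times \frac{1 - h}{1 - 0.12}$$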
The flowering date was eventually scored visually every 3 days on one replicate of the 19 genotypes grown under both irrigation modalities. The usual scoring system was used: the flowering stage corresponds to the date when 50% of the ears have their stamens visible.
LIDAR Reference Measurements
The LiDAR on the Phénomobile
The phénomobile, a ground-based high-throughput phenotyping rover, is equipped with a measurement head (Figure 1C) that is maintained automatically at a constant distance from the top of the canopy. The system steps over the microplots with a maximum clearance of 1.35 m and an adjustable width of 2 m ± 0.5 m. The phénomobile automatically follows a predefined trajectory in the experimental field using a centimetric-accuracy real-time kinematic (RTK) GPS and accelerometers.
FIGURE 1 | (A) Localization of the platform in France; (B) aerial view of the experimental field; (C) the phénomobile rover robot on which the LiDARs are fixed.
The measurement head is equipped with several instruments including two LMS400 LiDARs (SICK, Germany) operating at 650 nm and scanning downward within a ±35° zenith angle in a direction perpendicular to the rows, at a frequency of 290 scans per second (Lefsky et al., 2002). The two LiDARs allow a denser sampling of the scene. As the platform moves forward (Figure 1C) at a speed of 0.3 m·s⁻¹, as recorded from the GPS information, the distance between two consecutive scans of a LiDAR along the row direction is around 1 mm. Measurements are taken every 0.2° along the scanning direction. The size of the footprint depends on the distance to the sensor: it varies from 2.4 mm × 5 mm at the 0.7 m minimum measuring distance up to 10.5 mm × 5 mm at the 3 m maximum measuring distance. The distance between the sensor and the target is measured from the phase-shift principle (Neckar and Adamek, 2011). The intensity of the reflected signal and the distance are recorded at the same time. When the target in the LiDAR footprint is not horizontal or is made of elements placed at several heights, the distance and the intensity computed by the LiDAR are approximately the average values over the LiDAR footprint. The nominal error on the distance is 4 mm under our experimental conditions. The scan of one microplot takes about 30 s, during which about 3 million points are recorded with associated intensity and x-y-z coordinates. Each plot was sampled 14 times during the growth cycle to describe the whole season.
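The roughly 1 mm along-track spacing follows directly from the forward speed and the scan rate:

$$\Delta x = \frac{v}{f_{\mathrm{scan}}} = \frac{0.3\ \mathrm{m\,s^{-1}}}{290\ \mathrm{scans\,s^{-1}}} \approx 1.03\ \mathrm{mm}$$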
Data Processing and Height Definition
A strip of 0.6 m width located in the center of the microplot was extracted from the 3D point cloud (Figure 2A). This corresponds roughly to three rows and limits possible border effects while increasing the probability of getting points reflected by the soil, by limiting the scan angle. The resulting points were then filtered for noise using the Matlab implementation of the method proposed by Rusu et al. (2008). This process removed about 1% of the points, mainly located in the upper and lower parts of the regions of interest.
The 0.6 m width strip was further divided into 20 consecutive non-overlapping elementary cells of 0.5 m length where the canopy height was assessed (Figure 2A). This allows accounting for possible variation of the digital terrain model if the microplot is not perfectly flat. This cell size was large enough to get a good description of the z profile (Figure 2C), including enough points corresponding to the ground level used to define the digital terrain model. The k-means clustering method (Seber, 1984) with two classes was applied to separate the ground from the vegetation, using both the distance and the intensity values (Figure 2B). The maximum peak in the z-distribution of the resulting non-vegetation points was assigned as the ground level. The ground distance was subtracted from the distance of the 3D point cloud for each elementary cell in the microplot, resulting in a distribution of height values. The height of the canopy is then defined as the height value corresponding to a given rank percentile (Rp) of the cumulative height distribution of the vegetation points. Rp = 99.5% was selected here to define the vegetation height at the elementary cell level. When considering the later stages, where a large heterogeneity of the height is observed at the top layer because of the presence of ears, this corresponds roughly to the area covered by 50 ears per unit ground area, considering an ear diameter of 1 cm and a typical ear density. The sensitivity of the height to this percentile value will be discussed later in the Results section. Finally, the median value over the elementary cells of the microplot was considered as the plant height.
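As an illustration of this processing chain, a minimal Python sketch is given below, assuming the points of one elementary cell arrive as an (n, 4) array of x, y, z, and intensity values; the noise filtering of Rusu et al. (2008) is omitted, and the function names are ours, not the authors' code.

```python
import numpy as np
from sklearn.cluster import KMeans

def cell_plant_height(points, rp=99.5):
    """Plant height for one 0.5 m x 0.6 m elementary cell.

    points: (n, 4) array of x, y, z, intensity LiDAR returns.
    k-means on standardized (z, intensity) separates ground from vegetation,
    the ground level is the mode of the ground-class z-distribution, and
    plant height is the rp-th percentile of the vegetation heights.
    """
    z, intensity = points[:, 2], points[:, 3]
    feats = np.column_stack([(z - z.mean()) / z.std(),
                             (intensity - intensity.mean()) / intensity.std()])
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(feats)
    # take the cluster with the lower mean z as ground
    ground = labels == np.argmin([z[labels == k].mean() for k in (0, 1)])
    # ground level = peak of the histogram of ground-class z values
    hist, edges = np.histogram(z[ground], bins=50)
    i = np.argmax(hist)
    z_ground = 0.5 * (edges[i] + edges[i + 1])
    heights = z[~ground] - z_ground
    return np.percentile(heights, rp)

def microplot_height(cells):
    """Median plant height over the 20 elementary cells of one microplot."""
    return np.median([cell_plant_height(c) for c in cells])
```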
Plant Height Estimates from the UAV
RGB Camera and UAV Flight
A Sony ILCE-6000 digital camera with a 6000 × 4000 pixel sensor was carried by a hexacopter with approximately 20 min autonomy. The camera was fixed on a 2-axis gimbal that maintained the nadir view direction during the flight. The larger dimension of the image was oriented across track to get a larger swath. The camera was set to a shutter priority of 1/1250 s to avoid motion blur. The aperture and ISO were thus automatically adjusted by the camera. The camera was triggered by an intervalometer set at a 1 Hz frequency, which corresponds to the maximum frequency with which RGB images can be recorded on the flash memory card of the camera. The images were recorded in jpg format. Two different focal lengths were used, 19 and 30 mm, with ±31.0° and ±21.5° field of view across track, respectively. The flight altitude was designed to get around 1 cm GSD for both focal lengths (Table 1). Five measurements were completed from tillering to flowering (Table 1).
The speed of the UAV was set to 2.5 m/s to provide 90 and 94% overlap between images along the track for the 30 mm and 19 mm focal lengths, respectively. The distance between tracks was set to 9 and 11.8 m for the 19 and 30 mm focal lengths, respectively, to provide 70% overlap across track. Two flights of 10–15 min each were necessary to cover the full area of interest. No images were acquired during the UAV stabilization over the waypoints. In addition, images corresponding to the takeoffs and landings were not used. This resulted in about 600 images for each date. The typical flight plan is shown in Figure 3.
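A rough geometric check of these settings is sketched below. The 23.5 mm sensor width assumed for the ILCE-6000 (APS-C format) comes from the camera specification, not from the paper, and the computed along-track overlap is only indicative since mission planning typically rounds conservatively.

```python
# Sketch: recompute GSD and along-track overlap from the flight settings.
SENSOR_WIDTH_M = 23.5e-3   # assumed APS-C sensor width of the Sony ILCE-6000
PIXELS_ACROSS = 6000       # larger image dimension, oriented across track
PIXELS_ALONG = 4000
PIXEL_PITCH = SENSOR_WIDTH_M / PIXELS_ACROSS   # ~3.9 micrometers

def gsd(altitude_m, focal_m):
    """Ground sampling distance (m/pixel) for a nadir view."""
    return altitude_m * PIXEL_PITCH / focal_m

def along_track_overlap(altitude_m, focal_m, speed_ms=2.5, interval_s=1.0):
    """Fraction of the along-track footprint shared by consecutive images."""
    footprint_m = PIXELS_ALONG * gsd(altitude_m, focal_m)
    return 1.0 - speed_ms * interval_s / footprint_m

for focal_mm, altitude in ((30, 75), (19, 50)):
    f = focal_mm * 1e-3
    print(f"{focal_mm} mm at {altitude} m: GSD = {100 * gsd(altitude, f):.2f} cm, "
          f"along-track overlap = {100 * along_track_overlap(altitude, f):.0f}%")
```

With these assumptions, the GSD values of Table 1 (0.98 and 1.04 cm) are reproduced closely.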
Ground Targets and Georeferencing Accuracy
A total of 19 ground targets was evenly distributed over the platform, with fixed positions for all the flights. They were made of painted PVC disks of 60 cm diameter, where the central 40 cm diameter disk was a 20% gray level and was surrounded by an external crown with a 60% gray level. These gray levels were selected to avoid saturation and allow automatic target detection on the images. Their location was measured with a real-time kinematic (RTK) GPS device ensuring 1 cm horizontal and vertical accuracy for every flight. Among the 19 targets, 14 were used in the generation of the dense point cloud (ground control points) while the five additional ones were used to evaluate the accuracy of the geo-referencing (check points). The spatial distribution of the targets was designed to ensure even coverage of the field considered (Figure 3).
FIGURE 2 | (A) UAV RGB image of one microplot where a 0.6 m × 0.5 m elementary cell is identified; (B) the 3D LiDAR points for an elementary cell, each point colored with the intensity value of the returned signal, from blue (low intensity) to yellow (high intensity); (C) the corresponding z-distribution of the 3D points.
Generation of the 3D Dense Point Cloud from the
RGB Images
The ensemble of RGB images was processed with Agisoft Photoscan Professional (V 1.2.6) software. The first step consists in the image alignment, performed using the scale-invariant feature transform algorithm (Lowe, 2004). An "on-the-job calibration" was applied to adjust the camera parameters within the structure from motion process. The application of this method was possible because of the high overlap between images (Turner et al., 2014) and the suitable distribution of the ground control points (James and Robson, 2014; Harwin et al., 2015). The Agisoft software first generates a set of tie points, each point being associated with a projection error. As advised by Agisoft, tie points with a projection error higher than 0.3 ground sample distance were removed. A bundle adjustment is then applied (Granshaw, 1980; Triggs et al., 1999). Further, points with a high reconstruction uncertainty (points reconstructed from nearby photos with a small baseline) were then removed. These points are generally observed for small overlapping fractions between images along with a large view zenith angle resulting in a larger ground sample distance. The ground control points used in this process were automatically identified using a custom developed pipeline. The check points were not used in the bundle adjustment; the average accuracy on the check points reported in Table 1 (σx, σy, and σz) was in agreement with the recommendations of Vautherin et al. (2016): 1–2 times the ground sample distance in the x and y directions, and 2–3 times the ground sample distance in the z direction. The dense point cloud is generated from dense-matching photogrammetry using a moderate depth filtering option and the full image resolution, as implemented in Photoscan 1.2.6. This filtering process results in a more variable density of points in the dense cloud; the mean density of points in the vegetated part of the study area was 2300 points/m².
TABLE 1 | Characteristics of the five flights completed over the Gréoux experiment in 2016.

Date (DaS) | Illumination conditions | Wind speed (km/h) | Focal length (mm) | Altitude (m) | GSD (cm) | Overlap along (%) | Overlap across (%) | σx (cm) | σy (cm) | σz (cm)
139 | Covered | 8 | 30 | 75 | 0.98 | 90 | 70 | 2.4 | 3.1 | 5.5
152 | Sunny | 6 | 30 | 75 | 0.98 | 90 | 70 | 4.5 | 1.3 | 3.3
194 | Sunny | 10 | 19 | 50 | 1.04 | 94 | 70 | 5.1 | 1.3 | 3.9
216 | Cloudy | 7 | 19 | 50 | 1.04 | 94 | 70 | 2.1 | 2.9 | 2.8
225 | Sunny | 5 | 30 | 75 | 0.98 | 90 | 70 | 5.0 | 2.6 | 3.9

σx, σy, σz correspond to the standard deviation of the localization of the check points used to quantify the geometric accuracy.
FIGURE 3 | The flight plan with ground control points (yellow circles with red outline) and check points (yellow circles).
Derivation of the Digital Terrain Model
Two methods were used to derive the digital terrain model. The first one is simply based on the collection of the coordinates of the points recorded during sowing by the sowing machine equipped with a centimetric-accuracy real-time kinematic (RTK) GPS. The second approach is based on the extraction of ground points from the dense point cloud and interpolation between them to generate the digital terrain model. The phenotyping platform (Figure 3) was split into 13 m × 13 m cells with 75% overlap (50% in both the x and y directions). The cell size is a compromise between a small one, which allows a finer description of digital terrain model variations, and a large one, which ensures getting at least a few background points from the dense point cloud. Similarly to the LiDAR processing, a k-means clustering (Seber, 1984) with two classes is applied using the z-value and the red and green colors associated with each point of the dense cloud. This k-means clustering is iterated over the previous background class as long as the standard deviation of the background class, σb, exceeds 0.14 m. If σb > 0.14 m still holds after the 4th iteration, the process is stopped and no background z-value is assigned to the considered cell. The 0.14 m value corresponds approximately to the background roughness expected over a 13 m × 13 m cell and was defined after several trial-and-error tests. Then, the ground point cloud was filtered using the algorithm of Rusu et al. (2008) to regularize the z-values over each cell. Finally, a natural neighbor interpolation (Owen, 1992) was applied to compute the z-value for each microplot. Note that here the microplot is assumed to be flat.
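The extraction step can be sketched as below in Python, assuming each 13 m × 13 m cell arrives as an (n, 5) array of x, y, z, red, green values. The function names are illustrative, the Rusu et al. (2008) filtering is omitted, and a linear interpolant stands in for the natural neighbor interpolation, which SciPy does not ship.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.interpolate import LinearNDInterpolator

SIGMA_MAX = 0.14  # m, expected ground roughness over a 13 m x 13 m cell

def cell_ground_z(points, max_iter=4):
    """Iterative k-means on (z, red, green) to isolate ground points in a cell.

    points: (n, 5) array of x, y, z, red, green.
    Returns the mean ground z, or None if the background class never
    becomes tight enough (cell rejected, as in the text above).
    """
    cand = points
    for _ in range(max_iter):
        feats = cand[:, 2:5]
        feats = (feats - feats.mean(axis=0)) / feats.std(axis=0)
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(feats)
        # background = cluster with the lower mean z
        bg = labels == np.argmin([cand[labels == k, 2].mean() for k in (0, 1)])
        cand = cand[bg]
        if cand[:, 2].std() < SIGMA_MAX:
            return cand[:, 2].mean()
    return None

def build_dtm(cell_centers, ground_zs):
    """Interpolate per-cell ground levels; query it at microplot positions."""
    ok = [i for i, z in enumerate(ground_zs) if z is not None]
    xy = np.asarray([cell_centers[i] for i in ok])
    z = np.asarray([ground_zs[i] for i in ok])
    return LinearNDInterpolator(xy, z)
```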
Plant Height Estimation
For each microplot, the z-value of the digital terrain model assigned to the microplot was subtracted from the z-values of the dense cloud points. The microplot is then divided into 20 consecutive non-overlapping elementary cells of 50 cm × 60 cm, similarly to what was done for the LiDAR data. Plant height in each cell is defined as the value corresponding to a given Rp of the cumulative height distribution, and the median over the cells is finally taken as the microplot plant height. The selection of the Rp value used to define plant height will be discussed later in the Results section.
Date When the Maximum Plant Height Is
Reached
Flowering occurs roughly when the vegetative growth is completed, i.e., when the stems have reached their maximum height. This stage could thus be tentatively estimated from the plant height time course. This obviously requires frequent observations, as achieved in this study with the LiDAR, while the plant height monitoring with the UAV was too sparse. As a consequence, only the LiDAR measurements were used here for estimating the flowering stage. When expressing time in GDDs, the plant height temporal profile can be approximated by a vegetative growth phase followed by a plateau during the reproductive phase. The plant height corresponding to the plateau was simply defined as the maximum plant height value over the whole cycle. A second-order polynomial regression was used to describe the plant height during the vegetative growth. The vegetative growth period was assumed to start at GDD = 1000 °C·day. It was then incrementally extended by including additional observation dates for GDD > 1500 °C·day as long as the corresponding plant height elongation rate did not decrease by more than 60% relative to the previous value. The intersection of the elongation curve with the plateau provides the date when plant height reaches its maximum.
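A minimal sketch of this procedure is given below, assuming gdd and ph are time-sorted arrays of thermal time and LiDAR plant height for one microplot; the 60% rule is applied here as a simple consecutive-rate test, which is our reading of the procedure rather than the authors' exact code.

```python
import numpy as np

def dmax_ph(gdd, ph):
    """Estimate the thermal time (degC.day) at which plant height plateaus."""
    plateau = ph.max()
    # growth window starts at 1000 degC.day and is extended date by date
    idx = np.where(gdd >= 1000)[0]
    keep = [idx[0]]
    for i in idx[1:]:
        rate = (ph[i] - ph[keep[-1]]) / (gdd[i] - gdd[keep[-1]])
        if gdd[i] > 1500 and len(keep) > 1:
            prev = (ph[keep[-1]] - ph[keep[-2]]) / (gdd[keep[-1]] - gdd[keep[-2]])
            if rate < 0.4 * prev:   # elongation rate dropped by more than 60%
                break
        keep.append(i)
    # 2nd-order polynomial fit of the vegetative growth phase
    a, b, c = np.polyfit(gdd[keep], ph[keep], 2)
    # intersection of the fitted parabola with the plateau
    roots = np.roots([a, b, c - plateau])
    real = roots[np.isreal(roots)].real
    cand = real[real >= gdd[keep[0]]]
    return cand.min() if cand.size else np.nan
```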
RESULTS AND DISCUSSION
LiDAR Measurements of Plant Height
The LiDAR plant height defined using Rp = 99.5% was compared with the available manual measurements in the field. Results show a strong agreement, with a low RMSE of 3.47 cm and a small bias (bias = 1.41 cm) (Figure 4).
FIGURE 4 | Comparison between plant height derived from LiDAR measurements and plant height measured manually in the field (n = 14). The solid line is the 1:1 line.
The impact of the Rp value on plant height was further investigated using the difference ΔPH = PHx − PH99.5, where PHx and PH99.5 represent the plant height values for Rp = x% and Rp = 99.5%, respectively. Results (Figure 5A) show that the very high value Rp = 99.99% increases plant height by more than ΔPH = +5 cm in most situations. Conversely, Rp = 99.0% decreases plant height by more than 5 cm (ΔPH = −5 cm). The absolute difference ΔPH increases rapidly with plant height for PH < 0.1 m (Figure 5A). Then, ΔPH increases only slightly with plant height (Figure 5A), with, however, significant scatter for the larger plant height values and when Rp differs from the nominal value (Rp = 99.5%). The variability of plant height across the 20 elementary cells within a microplot (Figure 5B) shows that it is minimal for Rp = 99.5%, with STD = 3.1 cm. It increases rapidly either for Rp < 99.5% or for Rp > 99.5%, although the STD value remains relatively small (STD < 3.7 cm for Rp = 99.99% or for Rp = 90%). The use of the median value computed over the 20 elementary cells provides in addition a better representativeness of the plant height of a microplot. This appears most important at the tillering stage, where the plant height variability within a microplot is the largest. These results confirm that Rp = 99.5% provides an accurate and precise plant height estimate.
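This sensitivity analysis can be reproduced with a few lines of Python; a sketch, assuming cell_heights holds the vegetation-point heights of the 20 elementary cells of one microplot, computing the two quantities plotted in Figure 5.

```python
import numpy as np

def rp_sensitivity(cell_heights, rps=(90, 99.0, 99.5, 99.9, 99.99)):
    """Offset vs Rp = 99.5% and between-cell STD for several Rp values.

    cell_heights: list of 1-D arrays, one per elementary cell.
    """
    ref = np.median([np.percentile(h, 99.5) for h in cell_heights])
    out = {}
    for rp in rps:
        per_cell = np.array([np.percentile(h, rp) for h in cell_heights])
        out[rp] = {"delta_PH_cm": 100 * (np.median(per_cell) - ref),
                   "std_intraplot_cm": 100 * per_cell.std()}
    return out
```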
Derivation of the Digital Terrain Model
with Structure from Motion Algorithm
The digital terrain models extracted from the dense point cloud for each of the five flights were compared. In addition, the digital terrain model generated from the real-time kinematic GPS placed on the sowing machine during sowing was also used. A mean altitude value of the ground level for the 1173 microplots was then computed for the six digital terrain models. Results show that the correlation between the altitudes computed from all the digital terrain model combinations is always very high, with R² > 0.97 (Table 2). This indicates that all the digital terrain models consistently captured the general topography of the experimental platform.
Results show further that the RMSE values are between 2.6 and 6.8 cm (Table 2), except for DaS 152, which shows larger values. No clear explanation was found for the degraded performances of DaS 152. However, better consistency seems to be observed when using the shorter focal length (comparison between DaS 194, DaS 216, and Sowing).
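The pairwise comparison behind Table 2 is straightforward to sketch; here dtms maps a DTM name to its vector of 1173 mean microplot altitudes (names and layout are ours).

```python
import numpy as np

def r2_rmse(a, b):
    """R2 and RMSE (same units as the inputs) between two altitude vectors."""
    r2 = np.corrcoef(a, b)[0, 1] ** 2
    rmse = np.sqrt(np.mean((a - b) ** 2))
    return r2, rmse

def compare_dtms(dtms):
    """Build the Table 2 matrix: R2 in the lower triangle, RMSE (cm) above."""
    names = list(dtms)
    n = len(names)
    mat = np.full((n, n), np.nan)
    for i in range(n):
        for j in range(i + 1, n):
            r2, rmse = r2_rmse(dtms[names[i]], dtms[names[j]])
            mat[j, i] = r2
            mat[i, j] = 100 * rmse  # metres to centimetres
    return names, mat
```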
Comparison of Plant Height Derived
from Structure from Motion and LiDAR
The LiDAR sampled the platform along the growing season more frequently than the UAV flights (Table 1). Plant height values derived from the LiDAR were thus interpolated to the dates of the UAV flights. However, if the LiDAR acquisition of a microplot differed by more than a week from that of the UAV flight, the corresponding microplot was not considered in the comparison. This resulted in a total of 2076 pairs of structure from motion and LiDAR plant height values.
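This temporal matching can be sketched as follows for one microplot, with a linear interpolation on day-after-sowing time and the one-week rejection rule (a sketch; variable names are ours).

```python
import numpy as np

def lidar_height_at_uav_dates(lidar_das, lidar_ph, uav_das, max_gap_days=7):
    """Interpolate LiDAR plant heights to the UAV flight dates.

    lidar_das, lidar_ph: sorted acquisition dates (day after sowing) and
    the matching LiDAR plant heights for one microplot.
    Flights with no LiDAR pass within max_gap_days are skipped.
    """
    lidar_das = np.asarray(lidar_das, dtype=float)
    matched = {}
    for das in uav_das:
        if np.abs(lidar_das - das).min() > max_gap_days:
            continue  # no LiDAR acquisition close enough to this flight
        matched[das] = np.interp(das, lidar_das, lidar_ph)
    return matched
```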
structure from motion was first derived using the same Rpas that
used for the LiDAR (Rp=99.5%). Results (Figure 6) show that
structure from motion plant height are strongly correlated with
LiDAR reference plant height across the 5 UAV flights available.
This corroborates previous results reported (Bareth et al., 2016;
Fraser et al., 2016;Holman et al., 2016). The same level of
consistency is observed for plant height derived from a digital
terrain model computed from the same dense cloud (R2=0.97,
RMSE =7.7 cm) as compared to using the digital terrain model
derived from the sowing (R2=0.98, RMSE =8.4 cm). The
correlations are generally weaker for the early stages due to the
limited range of variation of plant height (DaS 139, DaS 152).
Further, using the 30 mm camera focal length (DaS 139 DaS
152 DaS 225) tends to decrease the plant height consistency
with the reference LiDAR derived plant height as compared to
the 19 mm focal length (Table 3). The 19 mm focal length
increases the disparity in the view configurations which may
help the structure from motion algorithm to get more accurate
estimates of the z component in the dense cloud as earlier
FIGURE 5 | (A) Difference in plant height estimates using several Rp values against the estimate using Rp = 99.5%; (B) standard deviation of plant height (STD intraplot, in cm) computed within a microplot between the 20 elementary cells as a function of the Rp value selected to define plant height.
TABLE 2 | Correlation (R², bottom triangle) and RMSE (top triangle, cm) values between the digital terrain models computed over the 1173 microplots for the five flights, as well as that derived from the real-time kinematic GPS on the sowing machine.

 | Sowing | DaS 139∗ | DaS 152∗ | DaS 194 | DaS 216 | DaS 225∗
Sowing | – | 2.6 | 7.2 | 3.4 | 2.6 | 6.0
DaS 139∗ | 1.00 | – | 7.2 | 4.5 | 2.9 | 5.0
DaS 152∗ | 0.96 | 0.95 | – | 9.2 | 7.4 | 9.8
DaS 194 | 0.99 | 0.99 | 0.95 | – | 3.8 | 6.8
DaS 216 | 0.99 | 0.99 | 0.96 | 0.99 | – | 6.0
DaS 225∗ | 0.97 | 0.97 | 0.91 | 0.97 | 0.96 | –

∗ indicates that the camera was equipped with the 30 mm focal length instead of the 19 mm one.
FIGURE 6 | (A) Plant height computed from the background points identified at each date; (B) plant height computed from the digital terrain model derived from the sowing machine. Each color corresponds to a flight date (DaS). ∗ indicates that the camera was equipped with the 30 mm focal length instead of the 19 mm one.
TABLE 3 | Agreement between LiDAR- and structure-from-motion-derived plant height when the digital terrain model used comes either from the same dense cloud or from the sowing.

 | Digital terrain model from the dense cloud | | | Digital terrain model from sowing | |
DaS | R² | RMSE (cm) | Bias (cm) | R² | RMSE (cm) | Bias (cm)
139∗ | 0.76 | 5.0 | 4.4 | 0.50 | 6.8 | 5.6
152∗ | 0.31 | 9.2 | 8.6 | 0.45 | 9.0 | 9.0
194 | 0.84 | 11.0 | 9.4 | 0.80 | 9.9 | 7.7
216 | 0.92 | 5.1 | 3.9 | 0.91 | 6.2 | 5.0
225∗ | 0.59 | 8.7 | 0.38 | 0.63 | 9.8 | 5.4
All | 0.97 | 7.7 | 5.1 | 0.98 | 8.4 | 6.5

∗ indicates that the camera was equipped with the 30 mm focal length instead of the 19 mm one.
This result also confirms the ability of Agisoft to model the radial lens distortion of wide field-of-view lenses. However, the calibration of the camera from the bundle adjustment requires an even distribution of a sufficient number of ground control points (James and Robson, 2014; Harwin et al., 2015) and a high overlap between images, as done in this study.
A systematic underestimation of the plant height derived from structure from motion is observed as compared to the reference plant height derived from the LiDAR. This agrees with results from other studies (Grenzdörffer, 2014; Bareth et al., 2016; van der Voort, 2016), which found that structure from motion lacked the ability to accurately reconstruct the top of the canopy. This is partly due to the difference in spatial resolution between the LiDAR (3–5 mm) and the RGB camera (10 mm) as compared to the size of the objects at the top of the canopy (on the order of a centimeter). However, increasing the spatial resolution will lead to a noisier dense cloud with more gaps over vegetated areas, as reported by Brocks et al. (2016) and as was also experienced in this study (results not shown for the sake of brevity).
FIGURE 7 | Impact of the rank percentile (Rp) used to define plant height from the dense cloud derived from structure from motion on RMSE and bias (left y-axis) and on the variability of plant height along the microplot (right y-axis). The reference plant height used here is that derived from the LiDAR with Rp = 99.5%.
The principles of height measurement are very different between LiDAR and structure from motion: the structure from motion algorithm uses two different directions to build the dense cloud, limiting the penetration capacity because of possible occlusion; conversely, the LiDAR uses only a single direction with much better penetration into the canopy. As a consequence, the z profiles are expected to differ between LiDAR and structure from motion. The impact of the Rp value used to define plant height from the dense cloud derived from structure from motion was thus further investigated on the 2076 pairs of measurements. As expected, increasing the Rp value decreases the bias, and thus the RMSE, with respect to the reference LiDAR plant height (Figure 7). However, the decrease appears limited beyond Rp > 99%, reaching an 8 cm difference for Rp = 99.99%. Note that the 99.99% percentile corresponds to very few points in the dense point cloud, since a cell of 0.5 m × 0.6 m contains around 1000 points. Increasing Rp reduces the variability of plant height between the 20 elementary cells within a microplot up to Rp = 99.9% (Figure 7). This simple sensitivity analysis shows that the best consistency with the LiDAR reference plant height is obtained for 99.5% < Rp < 99.99%, with actually small improvement for Rp larger than 99.5%. This justifies a posteriori the Rp = 99.5% value used for plant height estimation from structure from motion.
Plant Height as a Reliable Trait for Wheat
Phenotyping
Broad Sense Heritability
The broad-sense heritability H², quantifying the repeatability of the plant height trait estimation, was computed as the ratio of the genotypic to the total variance (Holland et al., 2002). A linear mixed-effects statistical model was applied at each date to quantify the genetic variance. The 'lme4' R package applied to our experimental design (alpha design) was used here (Bates et al., 2014). The soil water holding capacity, which was carefully documented, was used as a fixed effect in the model. We write the model (random terms underlined) as:
Y = µ + S + G + L + C + L:C + ε
where S is the soil water holding capacity, G is the random effect of the genotypes, L and C are, respectively, the random line and column effects in our alpha design plan, and L:C is the random sub-block effect. µ is the intercept term (fixed) and ε the random residual error.
FIGURE 8 | Heritability (H²) of plant height for different environmental conditions and methods along the growth cycle. The two modalities (WW and WS) and plant height derived from LiDAR and structure from motion are presented individually.
FIGURE 9 | Dynamics of the LiDAR plant height for two genotypes (red and blue lines and symbols) with three replicates in the WW environment ('+', solid lines) and in the WS environment (dashed lines); the date when the maximum plant height is reached is indicated by the vertical line. Time is expressed in Growing Degree Days (GDD, °C·day).
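From the variance components of such a fit, H² follows directly. A minimal sketch is given below; whether the residual variance is divided by the number of replicates depends on the basis of the estimate (plot vs. entry-mean, see Holland et al., 2002), which is an assumption here rather than a detail spelled out in the text.

```python
def broad_sense_h2(var_genotype, var_residual, n_replicates=3):
    """Broad-sense heritability on an entry-mean basis.

    var_genotype, var_residual: variance components extracted from the
    fitted mixed model (e.g., lme4's VarCorr output).
    """
    return var_genotype / (var_genotype + var_residual / n_replicates)
```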
The plant height trait derived from the LiDAR shows a high H² up to DaS 210 (Figure 8) for the WW modality. It drops dramatically at the end of the growth cycle in relation to lodging, which affected the replicates differently. Conversely, the WS modality remains relatively stable during the whole growth cycle because no lodging was observed. However, when the water stress starts to impact crop growth around DaS 180, a small decrease of the H² is observed: residual environmental effects not accounted for by the alpha experimental plan and the soil water holding capacity slightly degraded the H² value.
The H² values computed over the WW modality from structure from motion are close to those observed for the LiDAR, with, however, a slight degradation of the performances. Conversely, the H² values computed on the WS modality from structure from motion are the smallest. On DaS 194, the H² is low for the WS modality. A detailed inspection shows a noisy dense point cloud in the WS part of the field that impacted the height computation and thus the H². At this specific date and location, the phénomobile was operating during the UAV flight, which induced artifacts and problems in the dense point cloud generation from structure from motion.
FIGURE 10 | Comparison between the date of the maximum plant height and the visually scored flowering date, both expressed in Growing Degree Days (GDD) (n = 114).
Plant Height as a Proxy of the Flowering Stage
Due to the reduced observation frequency of the UAV, flowering time was only assessed using the LiDAR plant height. The date when the maximum plant height is reached, Dmax(PH), is considered as a proxy of the flowering stage. Figure 9 shows that Dmax(PH) is well identified by the simple algorithm presented in the Methods section. Further, it appears that Dmax(PH) depends little on the environmental conditions: the WW and WS modalities are very close, and within the WS modality there is no difference due to the soil water holding capacity, although differences in max(PH) are observed.
The flowering dates are well correlated with Dmax(PH) (Figure 10) (R² = 0.24, RMSE = 76; Dmax(PH) = 0.7 · Dflowering + 541). However, the best linear fit shows that the earlier genotypes reach the maximum plant height about 100 GDD after the flowering stage, which corresponds approximately
to 7 days. The late genotypes show smaller differences, around 20 GDD, corresponding to 1 or 2 days after flowering. Dmax(PH) thus appears to be a reasonable proxy of the flowering stage, considering that the accuracy of its visual scoring is around 2–3 days. Nevertheless, some genotypes show significant deviations from the main relationship, as illustrated in Figure 10. The heritability of Dmax(PH) was very high, with H² = 0.96 and H² = 0.88 for the WW and WS modalities, respectively. This confirms the small influence of the environment on the genetic expression of this trait.
FIGURE 11 | (Left) Regression between plant height derived from LiDAR and biomass (n = 139); (right) regression between plant height derived from structure from motion and biomass (n = 86). Blue and red symbols correspond respectively to the WW and WS modalities. Unfortunately, no UAV acquisition was conducted for the Zadoks 32 stage. The Z32 and flowering stages are indicated by the corresponding green envelopes of the points in the figures.
Relationship with Above Ground Biomass and Yield
Correlations between plant height and biomass along the growing season are very strong (Figure 11), both for the LiDAR (R² = 0.88, RMSE = 112.2 g/m²) and structure from motion (R² = 0.91, RMSE = 98.0 g/m²). These good relationships confirm observations by several authors (Yin et al., 2011; Bendig et al., 2014; Ota et al., 2015; Tilly et al., 2015). However, these correlations are mainly driven by the variability across stages along the growth cycle. For a given stage, plant height shows little power to predict biomass (Figure 11). The correlation at the flowering stage is relatively low (R² = 0.5) for both methods. Other variables such as the basal area should be used to improve the correlations. Yield is poorly correlated with the maximum plant height, both when derived from LiDAR (R² = 0.22, RMSE = 149.6 g·m⁻²) and from structure from motion (R² = 0.13, RMSE = 152.3 g·m⁻²). This is consistent with the poor correlation with biomass observed for a given growth stage, assuming that the harvest index varies within a small range.
DISCUSSION AND CONCLUSION
Since the crop surface is very rough, an important point addressed in this study was to propose a definition of plant height from the 3D point cloud retrieved from LiDAR or structure from motion techniques. The 99.5% percentile of the cumulative z-distribution was found to be optimal for comparison with ground ruler measurements while minimizing the spatial variability over each microplot. However, this definition will probably depend slightly on the canopy surface roughness. As a consequence, the 99.5% percentile used as a reference for wheat should be checked, and possibly adapted, for other crops as well as a function of the spatial resolution used. LiDAR measurements are based on a single source/view configuration, allowing the beam to penetrate into the canopy and reach the ground reference surface. Plant height could then be directly measured because of the availability of ground reference points within a microplot. Conversely, the penetration capacity of structure from motion methods, based on the combination of distinct view directions from the UAV, is limited because of possible occlusions that increase as the canopy closes. Under these conditions, two strategies were compared: (1) find ground reference points over the whole 3D dense point cloud and interpolate between them to get the digital terrain model; or (2) use an ancillary digital terrain model, which was in this study derived from real-time kinematic GPS data acquired during the sowing of the crop. The first approach might be limited in the case of a terrain presenting a complex topography when only a few ground points are identified. Note that the ground control points could be used as ground level points if their distance to the ground is precisely known. Results show that both methods reach the same level of accuracy. For the two approaches investigated here to define the digital terrain model and extract the plant height of each microplot, the methods presented were designed to process the original imagery automatically. This includes the automatic and direct extraction of the microplots as well as of the digital terrain model from the dense cloud, as opposed to earlier studies where plant height was derived from a crop surface model generated from the dense cloud (Bareth et al., 2016).
The comparison between plant height derived from LiDAR and structure from motion shows a very high consistency, with a strong correlation (R² ≈ 0.98) and small RMSE values (RMSE = 8.4 cm). Most of the RMSE was explained by a significant bias, the plant height being underestimated. This may be partly due to the differences in the spatial resolution of the two systems (about 4 mm for LiDAR and 10 mm for UAV imagery) as well as to differences in canopy penetration capacity. However, plant height derived from structure from motion is systematically lower than that of the LiDAR. Our results further indicate that a larger field of view with shorter focal lengths would generate more accurate 3D dense point clouds from structure from motion, and thus more accurate plant height, because of the increased disparity between the several view points. However, complementary studies should investigate this effect more deeply, as well as the impact of a degraded spatial resolution.
A high H² (repeatability) of plant height was observed both for LiDAR and structure from motion. The water stress experiment over which the LiDAR and structure from motion techniques were evaluated shows that plant height is a very pertinent trait to characterize the impact of drought before the flowering stage: plant height not only quantifies the magnitude of the stress, it also allows dating precisely when the stress started to impact plant growth, provided sufficiently frequent observations are available. In addition, the date when plant height reaches its maximum was demonstrated to be a reasonable proxy of the flowering date, with, however, some slight variability between genotypes. The heritability of Dmax(PH) was very high, since this trait was demonstrated to depend very little on the water stress experienced by the plants in this experiment. The phasing difference between the end of the vegetative growth period and the flowering date might be investigated by breeders as a new trait of interest. Finally, plant height obviously provides a very easy and convenient way to identify plant lodging, either based on the temporal evolution of the microplot or on the variance between the 20 elementary cells considered in each microplot. All these results make the plant height trait very interesting for plant breeders. However, very low correlations with total above-ground biomass and yield were observed for a given date of observation, while high correlations are found across stages. Additional variables should be used, such as the basal area, to derive a biovolume as a better proxy of the above-ground biomass at harvest.
Plant height derived from the UAV using structure from motion algorithms was demonstrated here to reach a degree of accuracy similar to that of the LiDAR observations from the phénomobile. The affordability and flexibility of UAV platforms and the constant improvement of cameras (better, smaller, lighter, cheaper) will probably make UAVs the basic vehicle for high-throughput field phenotyping of plant height. Further, the recent availability of centimetric knowledge of the camera position for each image, based on real-time kinematic techniques, will ease the structure from motion processing while possibly limiting the number of ground control points to be set up in the field.
AUTHOR CONTRIBUTIONS
SJ managed the field platform. SM and MH designed the flight plan. ST, BdS, FB, and AC managed the phénomobile acquisition. MH piloted the UAV. GC developed some routines for the processing of the images from the UAV. ST developed some routines for the preprocessing of the LiDAR. The algorithm development was mainly accomplished by SM, with advice and comments from FB, BdS, and DD. SM wrote the manuscript and FB made very significant revisions. All authors participated in the discussion.
FUNDING
This study was supported by “Programme d’Investissement
d’Avenir” PHENOME (ANR-11-INBS-012) and Breedwheat
(ANR-10-BTR-03) with participation of France Agrimer and
“Fonds de Soutien à l’Obtention Végétale.” The work was
completed within the UMT-CAPTE funded by the French
Ministry of Agriculture.
ACKNOWLEDGMENTS
We warmly thank Olivier Moulin, Guillaume Meloux, and Magalie Camous from the Arvalis experimental station in Gréoux for their kind support during the measurements. We also thank Florent Duyme and Emmanuelle Heritier from Arvalis for important discussions and help with the statistical analysis.
REFERENCES
Bareth, G., Bendig, J., Tilly, N., Hoffmeister, D., Aasen, H., and Bolten, A. (2016). A comparison of UAV- and TLS-derived plant height for crop monitoring: using polygon grids for the analysis of Crop Surface Models (CSMs). Photogramm. Fernerkund. Geoinf. 2016, 85–94. doi: 10.1127/pfg/2016/0289
Bates, D., Mächler, M., Bolker, B., and Walker, S. (2014). Fitting linear mixed-effects models using lme4. arXiv:1406.5823
Bendig, J., Bolten, A., and Bareth, G. (2013). UAV-based imaging for multi-temporal, very high resolution crop surface models to monitor crop growth variability. Photogramm. Fernerkund. Geoinf. 2013, 551–562. doi: 10.1127/1432-8364/2013/0200
Bendig, J., Bolten, A., Bennertz, S., Broscheit, J., Eichfuss, S., and Bareth, G. (2014).
Estimating biomass of barley using Crop Surface Models (CSMs) derived
from UAV-based RGB imaging. Remote Sens. 6, 10395–10412. doi: 10.3390/
rs61110395
Berry, P. M., Sterling, M., Baker, C. J., Spink, J., and Sparkes, D. L. (2003).
A calibrated model of wheat lodging compared with field measurements.
Agric. For. Meteorol. 119, 167–180. doi: 10.1016/S0168-1923(03)
00139-4
Blonquist, J. M., Norman, J. M., and Bugbee, B. (2009). Automated measurement
of canopy stomatal conductance based on infrared temperature. Agric. For.
Meteorol. 149, 2183–2197. doi: 10.1016/j.agrformet.2009.10.003
Brocks, S., Bendig, J., and Bareth, G. (2016). Toward an automated low-cost three-
dimensional crop surface monitoring system using oblique stereo imagery
from consumer-grade smart cameras. J. Appl. Remote Sens. 10, 046021–046021.
doi: 10.1117/1.JRS.10.046021
Chéné, Y., Rousseau, D., Lucidarme, P., Bertheloot, J., Caffier, V., Morel, P.,
et al. (2012). On the use of depth camera for 3D phenotyping of entire
plants. Comput. Electron. Agric. 82, 122–127. doi: 10.1016/j.compag.2011.
12.007
Dandois, J. P., and Ellis, E. C. (2013). High spatial resolution three-dimensional
mapping of vegetation spectral dynamics using computer vision. Remote Sens.
Environ. 136, 259–276. doi: 10.1016/j.rse.2013.04.005
Deery, D., Jimenez-Berni, J., Jones, H., Sirault, X., and Furbank, R. (2014). Proximal
remote sensing buggies and potential applications for field-based phenotyping.
Agronomy 4, 349–379. doi: 10.3390/agronomy4030349
Escolà, A., Planas, S., Rosell, J. R., Pomar, J., Camp, F., Solanelles, F., et al. (2011).
Performance of an ultrasonic ranging sensor in apple tree canopies. Sensors 11,
2459–2477. doi: 10.3390/s110302459
Fraser, R. H., Olthof, I., Lantz, T. C., and Schmitt, C. (2016). UAV photogrammetry
for mapping vegetation in the low-Arctic. Arctic Sci. 2, 79–102. doi: 10.1139/as-
2016-0008
Geipel, J., Link, J., and Claupein, W. (2014). Combined spectral and spatial
modeling of corn yield based on aerial images and crop surface models acquired
with an unmanned aircraft system. Remote Sens. 6, 10335–10355. doi: 10.3390/
rs61110335
Gitelson, A. A. (2004). Wide dynamic range vegetation index for remote
quantification of biophysical characteristics of vegetation. J. Plant Physiol. 161,
165–173. doi: 10.1078/0176-1617- 01176
Granshaw, S. I. (1980). Bundle adjustment methods in engineering
photogrammetry. Photogramm. Rec. 10, 181–207. doi: 10.1111/j.1477-
9730.1980.tb00020.x
Grenzdörffer, G. J. (2014). Crop height determination with UAS point clouds.
Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 1, 135–140. doi: 10.5194/
isprsarchives-XL-1-135-2014
Harwin, S., Lucieer, A., and Osborn, J. (2015). The impact of the calibration
method on the accuracy of point clouds derived using unmanned aerial
vehicle multi-view stereopsis. Remote Sens. 7, 11933–11953. doi: 10.3390/rs709
11933
Hoffmeister, D., Waldhoff, G., Korres, W., Curdt, C., and Bareth, G. (2015).
Crop height variability detection in a single field by multi-temporal
terrestrial laser scanning. Precis. Agric. 17, 296–312. doi: 10.1007/s11119-015-
9420-y
Holland, J. B., Nyquist, W. E., and Cervantes-Martínez, C. T. (2002). “Estimating
and Interpreting heritability for plant breeding: an update,” in Plant Breeding
Reviews, ed. J. Janick (Hoboken, NJ: John Wiley & Sons, Inc.), 9–112.
doi: 10.1002/9780470650202.ch2
Holman, F. H., Riche, A. B., Michalski, A., Castle, M., Wooster, M. J., and
Hawkesford, M. J. (2016a). High throughput field phenotyping of wheat plant
height and growth rate in field plot trials using UAV based remote sensing.
Remote Sens. 8:1031. doi: 10.3390/rs8121031
James, M. R., and Robson, S. (2014). Mitigating systematic error in topographic
models derived from UAV and ground-based image networks. Earth Surf.
Process. Landf. 39, 1413–1420. doi: 10.1002/esp.3609
Khanna, R., Moller, M., Pfeifer, J., Liebisch, F., Walter, A., and Siegwart, R. (2015).
“Beyond point clouds - 3D mapping and field parameter measurements using
UAVs,” in Proceedings of the IEEE 20th Conference on Emerging Technologies
& Factory Automation (ETFA), Luxembourg, 1–4. doi: 10.1109/ETFA.2015.
7301583
Lefsky, M. A., Cohen, W. B., Parker, G. G., and Harding, D. J. (2002). Lidar
remote sensing for ecosystem studies lidar, an emerging remote sensing
technology that directly measures the three-dimensional distribution of
plant canopies, can accurately estimate vegetation structural attributes and
should be of particular interest to forest, landscape, and global ecologists.
BioScience 52, 19–30. doi: 10.1641/0006-3568(2002)052[0019:LRSFES]
2.0.CO;2
Lisein, J., Pierrot-Deseilligny, M., Bonnet, S., and Lejeune, P. (2013).
A photogrammetric workflow for the creation of a forest canopy height
model from small unmanned aerial system imagery. Forests 4, 922–944.
doi: 10.3390/f4040922
Llorens, J., Gil, E., Llop, J., and Escolà, A. (2011). Ultrasonic and LIDAR sensors
for electronic canopy characterization in vineyards: advances to improve
pesticide application methods. Sensors 11, 2177–2194. doi: 10.3390/s1102
02177
Lowe, D. G. (2004). Distinctive image features from scale-invariant keypoints. Int.
J. Comput. Vis. 60, 91–110. doi: 10.1023/B:VISI.0000029664.99615.94
Mallet, C., and Bretar, F. (2009). Full-waveform topographic lidar: state-of-the-
art. ISPRS J. Photogramm. Remote Sens. 64, 1–16. doi: 10.1016/j.isprsjprs.2008.
09.007
Moras, J., Cherfaoui, V., and Bonnifait, P. (2010). “A lidar perception scheme
for intelligent vehicle navigation,” in Proceedings of the 11th International
Conference on Control, Automation, Robotics and Vision, Singapore, 1809–1814.
doi: 10.1109/ICARCV.2010.5707962
Neckar, P., and Adamek, M. (2011). Software and hardware specification for area
segmentation with laser scanner SICK LMS 400. J. Syst. Appl. Eng. Dev. 5,
674–681.
Ota, T., Ogawa, M., Shimizu, K., Kajisa, T., Mizoue, N., Yoshida, S., et al. (2015).
Aboveground biomass estimation using structure from motion approach
with aerial photographs in a seasonal tropical forest. Forests 6, 3882–3898.
doi: 10.3390/f6113882
Otsu, N. (1979). A threshold selection method from gray-level histograms.
IEEE Trans. Syst. Man Cybern. 9, 62–66. doi: 10.1109/TSMC.1979.431
0076
Owen, S. J. (1992). An Implementation of Natural Neighbor Interpolation in Three
Dimensions. Masters thesis, Brigham Young University, Provo, UT.
Rawson, H., and Evans, L. (1971). The contribution of stem reserves to grain
development in a range of wheat cultivars of different height. Aust. J. Agric. Res.
22, 851. doi: 10.1071/AR9710851
Remondino, F., and El-Hakim, S. (2006). Image-based 3D Modeling: A Review.
Available at: http://nparc.cisti-icist.nrc-cnrc.gc.ca/npsi/ctrl?action=rtdoc&an=
8913373 [accessed March 22, 2016].
Remondino, F., Grazia, M., Nocerino, E., Menna, F., and Nex, F. (2014). State
of the art in high density image matching. Photogramm. Rec. 29, 144–166.
doi: 10.1111/phor.12063
Rusu, R. B., Marton, Z. C., Blodow, N., Dolha, M., and Beetz, M.
(2008). Towards 3D point cloud based object maps for household
environments. Rob. Auton. Syst. 56, 927–941. doi: 10.1016/j.robot.2008.
08.005
Schima, R., Mollenhauer, H., Grenzdörffer, G., Merbach, I., Lausch, A., Dietrich, P.,
et al. (2016). Imagine all the plants: evaluation of a light-field camera
for on-site crop growth monitoring. Remote Sens. 8:823. doi: 10.3390/rs810
0823
Seber, G. A. F. (ed.) (1984). “Multivariate distributions,” in Multivariate
Observations, (Hoboken, NJ: John Wiley & Sons, Inc.), 17–58. doi: 10.1002/
9780470316641.ch2
Smith, M. W., Carrivick, J. L., and Quincey, D. J. (2015). Structure from
motion photogrammetry in physical geography. Prog. Phys. Geogr. 40, 247–275.
doi: 10.1177/0309133315615805
Snavely, N., Seitz, S. M., and Szeliski, R. (2006). Photo Tourism: Exploring Photo
Collections in 3D. in ACM Transactions on Graphics, 835–846. Available at:
http://dl.acm.org/citation.cfm?id=1141964 [accessed January 3, 2017].
Tilly, N., Aasen, H., and Bareth, G. (2015). Fusion of plant height and vegetation
indices for the estimation of barley biomass. Remote Sens. 7, 11449–11480.
doi: 10.3390/rs70911449
Triggs, B., McLauchlan, P. F., Hartley, R. I., and Fitzgibbon, A. W. (1999). “Bundle
adjustment—a modern synthesis,” in Proceedings of the International workshop
on Vision Algorithms, eds B. Triggs, A. Zisserman, and R. Szeliski (Berlin:
Springer), 298–372.
Tumbo, S. D., Salyani, M., Whitney, J. D., Wheaton, T. A., and Miller, W. M.
(2002). Investigation of laser and ultrasonic ranging sensors for measurements
of citrus canopy volume. Appl. Eng. Agric. 18, 367–372. doi: 10.13031/2013.
8587
Turner, D., Lucieer, A., and Wallace, L. (2014). Direct georeferencing of ultrahigh-
resolution UAV imagery. IEEE Trans. Geosci. Remote Sens. 52, 2738–2745.
doi: 10.1109/TGRS.2013.2265295
Turner, P., Tubana, B., Girma, K., Holtz, S., Kanke, Y., Lawles, K., et al. (2007).
Indirect Measurement of Crop Plant Height. Stillwater, OK: Oklahoma State
University.
Frontiers in Plant Science | www.frontiersin.org 13 November 2017 | Volume 8 | Article 2002
fpls-08-02002 November 23, 2017 Time: 16:0 # 14
Madec et al. High-Throughput Phenotyping of Plant Height
van der Voort, D. (2016). Exploring the Usability of Unmanned Aerial
Vehicles for Non-Destructive Phenotyping of Small-Scale Maize Breeding Trials.
Wageningen: Wageningen University and Research Centre.
Vautherin, J., Rutishauser, S., Schneider-Zapp, K., Choi, H. F., Chovancova, V.,
Glass, A., et al. (2016). Photogrammetric accuracy and modeling
of rolling shutter cameras. ISPRS Ann. Photogramm. Remote Sens.
Spat. Inf. Sci. 3, 139–146. doi: 10.5194/isprs-annals-III-3-139-
2016
Virlet, N., Sabermanesh, K., Sadeghi-Tehran, P., and Hawkesford, M. J. (2016).
Field Scanalyzer: an automated robotic field phenotyping platform for
detailed crop monitoring. Funct. Plant Biol 44, 143–153. doi: 10.1071/FP
16163
Yin, X., McClure, M. A., Jaja, N., Tyler, D. D., and Hayes, R. M. (2011). In-season
prediction of corn yield using plant height under major production systems.
Agron. J. 103:923. doi: 10.2134/agronj2010.0450
Zhang, L., and Grift, T. E. (2012). A LIDAR-based crop height measurement system
for Miscanthus giganteus.Comput. Electron. Agric. 85, 70–76. doi: 10.1016/j.
compag.2012.04.001
Conflict of Interest Statement: The authors declare that the research was
conducted in the absence of any commercial or financial relationships that could
be construed as a potential conflict of interest.
Copyright © 2017 Madec, Baret, de Solan, Thomas, Dutartre, Jezequel, Hemmerlé,
Colombeau and Comar. This is an open-access article distributed under the terms
of the Creative Commons Attribution License (CC BY). The use, distribution or
reproduction in other forums is permitted, provided the original author(s) or licensor
are credited and that the original publication in this journal is cited, in accordance
with accepted academic practice. No use, distribution or reproduction is permitted
which does not comply with these terms.