Article
A Comparative Study of RGB and Multispectral
Sensor-Based Cotton Canopy Cover Modelling Using
Multi-Temporal UAS Data
Akash Ashapure 1, Jinha Jung 1,*, Anjin Chang 2, Sungchan Oh 1, Murilo Maeda 3 and Juan Landivar 4

1 Lyles School of Civil Engineering, Purdue University, West Lafayette, IN 47907, USA; aashapur@purdue.edu (A.A.); oh231@purdue.edu (S.O.)
2 School of Engineering & Computing Science, Texas A&M University–Corpus Christi, Corpus Christi, TX 78412, USA; anjin.chang@tamucc.edu
3 Texas A&M AgriLife Extension, Lubbock, TX 79403, USA; mmaeda@ag.tamu.edu
4 Texas A&M AgriLife Research, Corpus Christi, TX 78406, USA; jalandivar@ag.tamu.edu
* Correspondence: jinha@purdue.edu; Tel.: +1-765-496-1267
Received: 22 October 2019; Accepted: 21 November 2019; Published: 23 November 2019


Abstract:
This study presents a comparison of multispectral and RGB (red, green, and blue) sensor-based cotton canopy cover modelling using multi-temporal unmanned aircraft systems (UAS) imagery. Additionally, a canopy cover model using an RGB sensor is proposed that combines an RGB-based vegetation index with morphological closing. The field experiment was established in 2017 and 2018, where the whole study area was divided into approximately 1 × 1 m grids. Grid-wise percentage canopy cover was computed using both RGB and multispectral sensors over multiple flights during the growing season of the cotton crop. Initially, the normalized difference vegetation index (NDVI)-based canopy cover was estimated, and this was used as a reference for the comparison with RGB-based canopy cover estimations. To test the maximum achievable performance of RGB-based canopy cover estimation, a pixel-wise classification method was implemented. Later, four RGB-based canopy cover estimation methods were implemented using RGB images, namely Canopeo, the excessive greenness index, the modified green red vegetation index and the red green blue vegetation index. The performance of RGB-based canopy cover estimation was evaluated against NDVI-based canopy cover estimation. The multispectral sensor-based canopy cover model proved more stable and accurate, whereas the RGB-based canopy cover model was unstable and failed to identify canopy once cotton leaves changed color after canopy maturation. The application of a morphological closing operation after thresholding significantly improved the RGB-based canopy cover modelling. The red green blue vegetation index turned out to be the most efficient vegetation index for extracting canopy cover, with a very low average root mean square error (2.94% for the 2017 dataset and 2.82% for the 2018 dataset) with respect to multispectral sensor-based canopy cover estimation. The proposed canopy cover model provides an affordable alternative to multispectral sensors, which are more sensitive and expensive.
Keywords: precision agriculture; canopy cover; UAS; image analysis; multispectral; crop mapping
1. Introduction
Numerous studies are being conducted on cotton crop growth monitoring for precision agriculture.
Cotton is an important crop in the state of Texas, which produces more than 50% of the total cotton
produced by the entire country, comprising a spatial coverage of around six million acres [1]. Recent advances in genetic engineering and genomics have significantly accelerated the breeding process of cotton [2]. There is a growing need for phenotyping to match this high-paced breeding process. Consequently, plant breeders and agriculture scientists have recognized the need for a high-throughput phenotyping (HTP) system that can efficiently measure phenotypic traits such as crop height, volume, canopy cover, and vegetation indices (VIs) with reasonable accuracy [3]. An accurate phenotyping process is critical for the reliable quantification of phenotypic traits to select the genotypes of interest. HTP has been extensively discussed; however, until recently, its implementation has been rather fragmentary [4]. The change in this situation has been mainly attributed to recent developments in unmanned aircraft systems (UAS). Lightweight platforms combined with consumer grade imaging sensors have provided an affordable system to perform the necessary remote sensing activities for precision agriculture, especially with low altitude flights that provide high temporal and spatial resolution data [5–8].
In this paper, canopy cover (CC), which is commonly expressed as the percentage of total ground area covered by the vertical projection of the plant canopy, is studied. Plant canopy cover is strongly related to crop growth, development, water use, and photosynthesis, which makes it an important trait to be observed throughout the growing season [9]. In addition, CC is an important ancillary variable in the estimation of the leaf area index (LAI) [10]. Various remote sensing techniques have been employed in the literature to compute CC, and these include satellite imagery with varying degrees of resolution [11–15], airborne imagery [16] and light detection and ranging (LiDAR) data [17,18]. Satellite imagery has the advantage of providing large spatial coverage. However, coarser spatial resolution limits its application in computing CC over small breeding fields where genotype screening is the objective. Moreover, the temporal resolution of satellite imagery is also not sufficient for phenotypic applications. Furthermore, satellite imagery is highly affected by cloud cover and other atmospheric conditions [19]. On the other hand, aerial imagery usually has a higher spatial resolution, but it has fewer spectral bands as compared to satellite imagery [20]. CC estimation using LiDAR data can be slightly biased in visual interpretation; however, in general, it is particularly useful in the estimation of vertical canopy cover and angular canopy closure, which are otherwise difficult to compute [21]. Terrestrial and airborne LiDAR data have been successfully used to compute CC in the literature [22,23]. However, data collection frequency has remained a significant issue, as LiDAR sensors and airborne imaging sensors are relatively expensive compared to UAS. Recently, UAS have emerged as an alternative to satellite, airborne imaging or LiDAR sensors for estimating CC, and this approach is more affordable and can provide higher temporal and spatial resolution [24–28]. UAS-based CC measurements have been efficiently used to estimate LAI [29,30] and have been used as one of the comparison parameters to quantify the difference between various crop management practices throughout the growing season [31]. Moreover, a recent study conducted over a maize field indicated that UAS-based CC is significantly correlated with grain yield [32].
CC computation using multispectral (MS) sensors has gained more popularity over RGB (red, green, and blue)-based CC, the primary reason being that the MS sensor is more stable over time and remains relatively unaffected by changes in environmental conditions (e.g., sunlight angle and cloud cover) throughout the crop growing season due to its irradiance sensor [3,7,33,34]. However, MS sensors are more sensitive and expensive compared to RGB sensors. RGB-based CC estimation methods can be divided into two categories, namely thresholding methods and pixel classification methods. Thresholding methods require the specification of color thresholds or ratios to identify canopy pixels. Pixel classification methods use a supervised or unsupervised pixel-wise classification method to identify canopy pixels. Though pixel classification methods are highly accurate, they are time consuming and computationally expensive. Supervised classification methods require training samples to be collected, which is expensive and prone to human error. However, pixel classification methods are particularly useful for calibrating thresholding methods [35]. There is an ample amount of work in the literature that has used RGB sensors to compute CC. Early work in this direction includes the quantification of turfgrass cover using digital image analysis by Richardson et al. (2001) [36]. Lee and Lee (2011) estimated canopy cover over a rice field using an RGB sensor [37]. Patrignani and Ochsner
(2015) developed the Canopeo algorithm to extract fractional green canopy cover [38]. Despite a significant amount of previous literature exploring RGB-based CC estimation, there is a scarcity of work that compares different CC estimations throughout the crop growing season. Torres-Sánchez et al. (2014) [39] developed a multitemporal CC framework for a wheat field using UAS-based RGB images. However, it was limited to early season CC estimation only. Moreover, the highest accuracy that they achieved in mapping CC was less than 92%. Fang et al. (2016) [40] presented a case study of CC estimation using UAS-based MS sensor data over oilseed rape. However, their study was aimed at providing CC estimation and flower fraction for crop species that have conspicuous non-green flowers or fruits. Moreover, they primarily used an MS sensor-based CC estimation methodology in their study, with only one RGB-based CC estimation approach that only worked efficiently during the vegetative period. Marcial-Pablo et al. (2019) [41] compared CC estimation using RGB and MS sensor-based vegetation indices over a maize field. Their results suggested that RGB-based CC estimation can be useful in the early-season growth stage of the crop, while later in the season, CC estimation using MS sensor-based indices was more accurate. Moreover, the accuracy of the CC estimation was also dependent on automatic thresholding using the Otsu method. Lima-Cueto et al. (2019) [42] used 11 VIs to quantify vegetation cover in olive groves, and they suggested that MS sensor-based CC had better accuracy as compared to RGB-based CC. A consistent observation in the aforementioned case studies was that RGB-based CC estimation was not efficient in the late season. Therefore, the objective of this study was not only to compare various RGB-based CC estimation methods with MS sensor-based CC estimation but also to improve RGB-based CC estimation to provide a more affordable option to breeders and agriculture scientists, particularly in the late season.
2. Materials and Methods
2.1. Study Area and Sensors
A field experiment was established at the Texas A&M AgriLife Research and Extension Center in
Corpus Christi, TX (latitude 27°46′59″ N, longitude 97°34′13″ W). The trial consisted of 5 cotton
genotypes from the Texas A&M AgriLife Cotton Breeding Program (presented in Figure 1). Genotypes
were planted 22 March 2017 in skip and solid row patterns (i.e., one- or two-row plots, respectively),
and each was replicated four times. For canopy cover estimation, another field experiment was
established at the same location in 2018. The trial consisted of 10 cotton genotypes from the Texas
A&M AgriLife Cotton Breeding Program. Genotypes were planted in the first week of April in skip
and solid row patterns. To maintain the integrity of the experiment, only the part of the field highlighted by yellow boxes in Figure 1 was considered for both 2017 and 2018, as the selected area was the only area which had an alternating pattern of skip and non-skip rows and a common variety. The selected area was divided into 1 × 1 m grids. The number of grids in the 2017 and 2018 experiments was 300 and 600, respectively.
RGB and MS sensors were used for this study, as presented in Figure 2. A DJI Phantom 4 Pro
(SZ DJI Technology Co., Ltd., Shenzhen, China) was used for RGB data collection. It comprised a
3-axis gimbal-stabilized RGB sensor with a resolution of 20 megapixels. MS data were collected using the DJI Matrice 100 platform (SZ DJI Technology Co., Ltd., Shenzhen, China) with a SlantRange 3p sensor (SlantRange Inc., San Diego, CA) equipped with an integrated solar spectrometer for frame-to-frame,
radiometrically accurate reflectance measurements. The sensor has a spatial resolution of 4.8 cm/pixel
at 120 m above ground level. It collects data using four spectral bands, namely green, red, red-edge,
and near infrared bands, for which the peak wavelengths are presented in Table 1.
Figure 1. Experimental field setup consisting of cotton in skip and solid row patterns in: (a) 2017 and (b) 2018. The experimental field setup is presented with an RGB (red, green, and blue) orthomosaic of the study area on June 7, 2017, and June 6, 2018.

Figure 2. RGB and multispectral sensors used for data collection. (a) The DJI Phantom 4 Pro for RGB and (b) the DJI Matrice 100 platform with the SlantRange 3p sensor for multispectral data collection.

Table 1. Peak wavelength and FWHM (full width at half maximum) for SlantRange 3p sensor bands.

SlantRange 3p Sensor Band    Peak Wavelength (nm)    FWHM (nm)
Green                        560                     40
Red                          655                     35
Red-edge                     710                     20
Near infrared                830                     110
2.2. Data Collection and Preprocessing
Data collection and preprocessing followed the method of Ashapure et al. (2019) [31]. UAS data (both MS and RGB) were collected over the experimental field on a weekly basis. Table 2 presents the flight specifications for both RGB and MS data collection. A total of eleven and ten flights were conducted in 2017 and 2018, respectively, using RGB and MS sensors. The overlap for MS sensor data collection was 70%, and it was 80%–85% for RGB sensor data collection. In 2017, the altitude was about 20 m for the RGB sensor and 25 m for the MS sensor. In 2018, the flight altitude for RGB and MS sensors was 35 and 47 m, respectively. Given that the experimental field was in a coastal area, wind speed and rain were potential factors to be considered before every flight. Most flights were conducted between 10:00 AM and 2:00 PM, except under unfavorable weather conditions, such as a wind speed greater than 15 mph or rain. Moreover, the temperature throughout the growing season varied between 79 and 96 °F.
Table 2. UAS data collection timeline and sensor-wise flight specifications.

Date             Flight Altitude (m)     Overlap (%)            Spatial Resolution (cm)
                 RGB    Multispectral    RGB    Multispectral   RGB    Multispectral
24 April 2017    20     30               85     75              0.51   0.93
5 May 2017       20     25               85     70              0.50   0.85
12 May 2017      20     25               85     70              0.51   0.81
20 May 2017      20     25               85     70              0.52   0.82
30 May 2017      20     25               85     70              0.51   0.85
7 June 2017      20     25               85     70              0.51   0.83
19 June 2017     20     25               85     70              0.52   0.81
5 July 2017      20     25               85     70              0.51   0.81
10 July 2017     20     25               85     70              0.50   0.83
18 July 2017     20     25               85     70              0.51   0.82
23 July 2017     20     25               85     70              0.51   0.82
23 April 2018    35     47               80     70              0.73   1.61
7 May 2018       35     47               80     70              0.69   1.65
14 May 2018      35     47               80     70              0.71   1.61
23 May 2018      35     47               80     70              0.71   1.64
1 June 2018      37     47               80     70              0.73   1.62
6 June 2018      35     47               80     70              0.72   1.61
13 June 2018     35     47               80     70              0.71   1.63
3 July 2018      35     47               80     70              0.71   1.61
9 July 2018      35     47               80     70              0.72   1.63
19 July 2018     35     47               80     70              0.70   1.62
Generally, UAS are equipped with a consumer grade global positioning system (GPS) that does not have satisfactory location accuracy for aerial mapping applications. To overcome this problem, semi-permanent ground control points (GCPs) with high reflectance were installed over the study area. The GCPs were surveyed using a dual frequency, post-processed kinematic (PPK) GPS system, the 20 Hz V-Map Air model (Micro Aerial Project L.L.C., Gainesville, FL). Images obtained from the UAS platform with significant overlaps, along with the 3D coordinates of the GCPs, were imported to Agisoft Photoscan Pro (Agisoft LLC, St. Petersburg, Russia), which uses structure from motion (SfM) photogrammetry algorithms to derive high density 3D point clouds, fine spatial resolution 2D orthomosaics, and digital surface models (DSM). SfM refers to the process of finding the three-dimensional structure of an object by analyzing local motion signals over time [43].
2.3. Canopy Cover Computation
The percentage CC was computed as the ratio of the canopy area to the total area of the grid using Equation (1), where GSD is the ground sampling distance. An RGB orthomosaic image was converted into a binary image, where zero represents non-canopy pixels and one represents canopy pixels. As the whole field was divided into square meter grids, a grid-wise percentage CC was computed using Equation (1) (as shown in Figure 3).

$$\mathrm{CC} = \frac{\sum_{\text{canopy pixels}} \mathrm{GSD}^2}{\sum_{\text{all pixels}} \mathrm{GSD}^2} \times 100, \quad (1)$$
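A minimal sketch of the grid-wise computation, assuming the binary canopy map is already available as a NumPy array (function and variable names below are illustrative, not from the paper):

```python
import numpy as np

def grid_canopy_cover(binary_mask: np.ndarray) -> float:
    """Percentage canopy cover for one 1 x 1 m grid cell, per Equation (1).

    binary_mask: 2D array of zeros (non-canopy) and ones (canopy) clipped
    to the grid cell. Since every pixel covers the same ground area
    (GSD^2), the GSD^2 terms cancel and CC reduces to the canopy pixel
    fraction.
    """
    return binary_mask.sum() / binary_mask.size * 100.0

# Hypothetical usage: a 200 x 200 pixel grid cell at ~0.5 cm GSD
mask = np.zeros((200, 200), dtype=np.uint8)
mask[50:150, 50:150] = 1           # a square canopy patch
print(grid_canopy_cover(mask))     # 25.0
```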
Figure 3. Canopy cover estimation from the orthomosaic images (red square: individual crop grid, and each grid is 1 × 1 m): an RGB orthomosaic image collected using the unmanned aerial systems (UAS) platform, followed by the binary classification results of the orthomosaic image, where white represents the canopy class and black represents the non-canopy class; the last image represents the grid-wise estimated canopy cover (CC).
As mentioned earlier, MS sensor-based CC estimation is considered, in the literature, to be the most reliable estimation technique; it uses the normalized difference vegetation index (NDVI) to separate the canopy from the non-canopy areas (computed using Equation (2) [44]). SlantView, the software developed for the SlantRange 3p MS sensor, was used for the radiometric calibration in order to accurately compare the crop conditions across datasets collected in varying lighting conditions throughout the day and growing season. A detailed visual inspection was performed to find a threshold NDVI value to separate the canopy area from the non-canopy area in the image throughout the growing season regardless of the growth stage.

$$\mathrm{NDVI} = \frac{\mathrm{NIR} - \mathrm{Red}}{\mathrm{NIR} + \mathrm{Red}}, \quad (2)$$
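As a sketch, assuming calibrated reflectance bands as NumPy arrays and the visually selected NDVI threshold of 0.6 from Table 4 (names are illustrative):

```python
import numpy as np

def ndvi_canopy_mask(nir: np.ndarray, red: np.ndarray,
                     threshold: float = 0.6) -> np.ndarray:
    """Binary canopy mask from Equation (2) and a fixed NDVI threshold.

    nir, red: reflectance bands as float arrays. The 0.6 default is the
    threshold the authors chose by visual inspection for this cotton
    dataset (Table 4); other crops may need a different value.
    """
    ndvi = (nir - red) / (nir + red + 1e-9)   # epsilon avoids divide-by-zero
    return (ndvi > threshold).astype(np.uint8)
```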
To investigate the maximum achievable accuracy of CC estimation using the RGB sensor, equivalent to CC estimation using the NDVI, a pixel-wise classification method was implemented; this is presented in Figure 4. As found in the literature, pixel classification methods are considered highly accurate for separating the canopy and non-canopy classes, and they are mainly used to calibrate RGB-based methods [35,38,45]. A pixel classification method based on K-means clustering was used to compare the RGB-based methods that use vegetation indices to separate canopy areas from non-canopy areas. Initially, K-means clustering with five classes was applied to the RGB orthomosaics, considering five potential classes representing soil, shadow, cotton bolls, green canopy and brown canopy, as presented in Figure 4. After assigning the class labels to the clustered map, it was validated using ground-truth samples collected over the RGB orthomosaics by visual inspection, and the overall classification accuracy was found to be at least 97%. Later, soil, shadow and cotton bolls were merged and assigned as the non-canopy class, while green and brown canopy were merged and identified as the canopy class. However, as pixel-wise classification-based CC estimation is computationally expensive, it was implemented solely to investigate the maximum achievable performance of the RGB sensor and to compare it with MS sensor-based CC estimation.
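A compact sketch of this step, assuming scikit-learn's KMeans and an (H, W, 3) RGB orthomosaic array; the cluster-to-class assignment is done by the analyst after inspecting the cluster means (all names are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_canopy_mask(rgb: np.ndarray, canopy_clusters: list) -> np.ndarray:
    """Pixel-wise K-means classification of an RGB orthomosaic (Figure 4).

    Five clusters approximate soil, shadow, cotton bolls, green canopy and
    brown canopy; the caller passes the indices of the clusters judged to
    be canopy (green + brown), and the rest are treated as non-canopy.
    """
    h, w, _ = rgb.shape
    pixels = rgb.reshape(-1, 3).astype(np.float32)
    labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(pixels)
    return np.isin(labels, canopy_clusters).reshape(h, w).astype(np.uint8)
```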
Figure 4. K-means clustering-based pixel classification method workflow, where the orthomosaic is classified into five classes and, later, the classes are merged into two clusters, namely canopy and non-canopy. The RGB orthomosaic presented was captured on 19 June 2017.
In this study, four different RGB-based methods were used, namely Canopeo, the excessive greenness index (ExG), the modified green red vegetation index (MGRVI) and the red green blue vegetation index (RGBVI), to generate the binary images separating canopy areas from non-canopy areas (Table 3).
Table 3. RGB image-based vegetation indices and their formulas.

Vegetation Index    Formula    Reference

Canopeo    canopy = (i1 < θ2) × (i2 < θ1) × (i3 > θ3), where i1 = Red/Green, i2 = Blue/Green, i3 = 2 × Green − Blue − Red; θ1 = 0.95, θ2 = 0.95, θ3 = 20    [38]

ExG    2Gn − Rn − Bn, where Rn = R/(R + G + B), Gn = G/(R + G + B), Bn = B/(R + G + B)    [46]

MGRVI    (G² − R²)/(G² + R²)    [47]

RGBVI    (G² − R × B)/(G² + R × B)    [47]
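A sketch of the four indices as array operations, assuming an 8-bit (R, G, B) orthomosaic; Canopeo's third condition (2G − B − R > 20) operates on raw 0–255 values, while ExG uses the normalized channels from Table 3 (function names are illustrative):

```python
import numpy as np

def rgb_vegetation_indices(rgb: np.ndarray) -> dict:
    """Grayscale VI maps plus the Canopeo binary map from Table 3.

    rgb: (H, W, 3) array of 8-bit values in channel order (R, G, B).
    """
    R, G, B = [rgb[..., i].astype(np.float64) for i in range(3)]
    eps = 1e-9
    Rn, Gn, Bn = (c / (R + G + B + eps) for c in (R, G, B))
    return {
        # Canopeo is directly binary: R/G < 0.95, B/G < 0.95, 2G - B - R > 20
        "Canopeo": ((R / (G + eps) < 0.95) &
                    (B / (G + eps) < 0.95) &
                    (2 * G - B - R > 20)).astype(np.uint8),
        "ExG": 2 * Gn - Rn - Bn,
        "MGRVI": (G**2 - R**2) / (G**2 + R**2 + eps),
        "RGBVI": (G**2 - R * B) / (G**2 + R * B + eps),
    }
```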
The overall procedure to compute the CC binary map using RGB vegetation indices is presented
in Figure 5. The Canopeo algorithm resulted in a binary map that separated canopy areas from
non-canopy areas; however, applying vegetation indices over the RGB mosaics resulted in a grayscale
image. Similar to the NDVI, an empirical evaluation was performed over all the grayscale vegetation index maps to decide on a threshold value for the ExG, the MGRVI and the RGBVI that could separate
canopy areas from non-canopy areas. A detailed visual inspection was performed to determine a
threshold value to separate canopy areas from non-canopy areas in the image throughout the growing
season, regardless of the growth stage. Considering the homogeneity of the crop, only a subset of
the image was used to determine the threshold, and the threshold chosen for each VI is presented in
Table 4. The demonstration of visual inspection is presented in Figure 6, where a subset of an early
stage and a mature stage RGB image of the same area is considered. Originally, for all the RGB images in the growing season, a range of threshold values with a step size of 0.01 was applied to a subset area in the images to generate binary maps from the grayscale VI map of the subset area. However, for the demonstration, only one VI (ExG) image was considered, and this was generated using one early stage (image taken on 7 June 2017) and one mature stage (image taken on 10 July 2017) RGB image with a range of threshold values and a step size of 0.02. It can be observed from Figure 6 that variation in threshold values did not affect the binarization of the early stage image, as most of the canopy was green. However, with the higher threshold (0.22), the binary image had some canopy pixels not classified as canopy due to their darker color. The effect of threshold variation was more significant in the mature stage image. A lower threshold value of 0.18 resulted in many non-canopy pixels classified as canopy, especially shadow pixels. A slightly higher threshold value of 0.22 resulted in a more conservative classification that omitted a substantial number of canopy pixels. Visual inspection suggested that a threshold value of 0.2 resulted in the most appropriate classification for the ExG. Similarly, the other VIs were also examined by visual inspection to select a single threshold value for all the images in the growing season.
Figure 5. Procedure to generate a binary map indicating canopy and non-canopy areas. Applying the vegetation index over the RGB orthomosaic resulted in a grayscale image. By applying thresholding, a binary image was generated. Lastly, morphological closing was applied over the binary image to improve the binary classification. The presented RGB orthomosaic was captured on 10 July 2017, and the excessive greenness index (ExG) was the VI used for the demonstration of the methodology.
Table 4. Thresholds chosen for vegetation indices (VIs) to separate canopy and non-canopy areas.

VI       Threshold    Range
NDVI     0.6          0 to 1
ExG      0.2          −2 to 2
MGRVI    0.15         −1 to 1
RGBVI    0.15         −1 to 1
RGB-based vegetation indices accurately identified healthy green canopy; however, later in the season, as the canopy started to change color, their ability to identify canopy deteriorated. To further improve the binary map (indicating canopy and non-canopy areas), a morphological closing operation was performed. The morphological closing operation is a combination of dilation and erosion, and it helps to remove small holes while keeping the separation boundary intact [48]. For this experiment, a 3 × 3 kernel window over one iteration was used to perform the closing operation.
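A sketch of thresholding followed by closing, assuming SciPy's ndimage and the paper's 3 × 3 kernel with one iteration (names are illustrative):

```python
import numpy as np
from scipy import ndimage

def vi_canopy_mask(vi_map: np.ndarray, threshold: float) -> np.ndarray:
    """Threshold a grayscale VI map, then apply morphological closing.

    Closing (dilation followed by erosion) with a 3 x 3 structuring
    element fills small holes in the canopy mask, e.g., where senescent
    leaves fell below the VI threshold, while keeping the canopy boundary
    largely intact.
    """
    binary = vi_map > threshold
    closed = ndimage.binary_closing(binary, structure=np.ones((3, 3)))
    return closed.astype(np.uint8)

# Usage with the Table 4 thresholds, e.g., for the RGBVI map:
# mask = vi_canopy_mask(indices["RGBVI"], threshold=0.15)
```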
Figure 6. Procedure to select an appropriate threshold value to generate a binary map indicating canopy and non-canopy areas. The first images are a subset of RGB images captured on 7 June 2017 and 10 July 2017. The second images are the result of applying the vegetation index (ExG) over the RGB images. The next three images are the result of applying varying threshold values, superimposed on the original RGB images (the pixels classified as canopy are shown in red, and non-canopy pixels were set to transparent).
3. Results
The CC grid maps at each flight for both 2017 and 2018 are presented in Figures 7 and 8, respectively. Visual inspection of the grid maps revealed that the percentage canopy cover increased during the growing season and reached its plateau right after the middle of the season (19 June for the 2017 experiment and 13 June for the 2018 experiment). Later in the season, it was observed that the percentage CC started to decrease slightly with canopy senescence. For the 2017 experiment, a rapid decay in CC was observed between 18 July and 23 July due to a common practice in cotton fields known as defoliation, which prepares the crop for harvesting. A similar effect was observed in the 2018 experiment between 9 July and 19 July.

Following the methodology presented in Figure 4 using RGB images, K-means clustering-based classification maps were generated considering five clusters, which were later labeled to represent soil, cotton boll, shadow, green canopy, and brown canopy. Later, binary maps were generated by merging the soil, cotton boll and shadow classes to indicate non-canopy pixels, while the brown canopy and green canopy classes were merged to indicate canopy pixels. The comparison of the average CC per grid using the NDVI and K-means (also referred to as the RGB reference) is presented in Figures 9 and 10 for the 2017 and 2018 experiments, respectively. From both the 2017 and 2018 experiments, it was observed that the NDVI-based and RGB reference-based average CC per grid followed the same trend throughout the growing season, and there was a one-to-one correspondence between the two, falling along a straight line with a slope of one, with very high R² values (0.98 for 2017 and 0.97 for 2018). The results indicated that it is possible to achieve the same level of performance using the RGB sensor as that of the MS sensor using the NDVI. However, the purpose of the RGB reference CC estimation was only to provide a comparison reference for the MS sensor-based CC estimation.
Figure 7. CC grid maps generated at each flight in the growing season using normalized difference vegetation index (NDVI) maps for the 2017 dataset.

Figure 8. CC grid maps generated at each flight in the growing season using NDVI maps for the 2018 dataset.
Figure 9. For the 2017 experiment: (a) the average NDVI and RGB reference-based percentage CC for each flight in the growing season; (b) a comparison of the NDVI and RGB reference-based percentage CC techniques with R².

Figure 10. For the 2018 experiment: (a) the average NDVI and RGB reference-based percentage CC for each flight in the growing season; (b) a comparison of the NDVI and RGB reference-based percentage CC techniques with R².
Using RGB images, four thresholding-based CC estimation methods were implemented, namely Canopeo, the ExG, the MGRVI and the RGBVI. Along with the NDVI-based average CC estimation per grid, the average CC estimation per grid in the growing season for each RGB-based CC estimation method before and after applying morphological closing (MC) is presented in Figures 11 and 12 for the 2017 and 2018 experiments, respectively. It can be observed from Figure 11 that, early in the growing season, the average CC estimated by all the methods was in agreement with very little variation and followed the same increasing trend as the NDVI-based estimation. However, the Canopeo and ExG methods reached their peak early in the season (7 June), as compared to the other methods. The main reason for this was that they were accurately identifying healthy green canopy, but later, as the canopy started to change color from green to yellow and eventually to brown, they were not as efficient as the other methods in identifying the mature canopy. Moreover, after they reached their peak, they rapidly started to decrease later in the season, commensurate with the rate of change of color in the canopy. As can be observed from Table 5, the average root mean square error (RMSE) of the percentage CC for these two methods turned out to be the highest amongst all (17.87% for Canopeo and 16.97% for the ExG). The MGRVI showed slightly better performance than Canopeo and the ExG, and it was able to identify mature canopy. However, in comparison to the NDVI, it also reached its peak relatively early. The RGBVI turned out to be the most efficient method to estimate CC, as, especially later in the season, it outperformed the other RGB-based methods. However, the RGBVI still could not match the NDVI-based CC estimation. It was noticed that morphological closing significantly improved the CC estimation, and the average RMSE with respect to the NDVI-based CC estimation was substantially reduced (Table 5).
Figure 11. For the 2017 experiment, the average CC estimation per grid using the NDVI-based CC estimation throughout the growing season, along with the average CC estimation using (a) Canopeo, (b) the ExG, (c) the modified green red vegetation index (MGRVI) and (d) the red green blue vegetation index (RGBVI), before and after applying the morphological closing (MC) operation.
Figure 12. For the 2018 experiment, the average CC estimation per grid using the NDVI-based CC estimation throughout the growing season, along with the average CC estimation using (a) Canopeo, (b) the ExG, (c) the MGRVI and (d) the RGBVI, before and after applying the morphological closing (MC) operation.
Table 5. Average RMSE of the thresholding-based CC estimation methods with respect to the NDVI-based CC estimation (%) throughout the growing season.

RGB-Based Method    Average RMSE with Respect to NDVI-Based CC Estimation (%)
                    2017 Experiment          2018 Experiment
                    Before MC    After MC    Before MC    After MC
Canopeo             17.87        13.34       15.56        9.73
ExG                 16.97        13.00       15.51        8.67
MGRVI               13.11        10.35       14.34        6.95
RGBVI               7.44         2.94        8.85         2.82
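The average RMSE values in Table 5 compare each RGB-based CC series against the NDVI reference. A minimal sketch, assuming the RMSE is computed per flight across grids and then averaged over the season (this aggregation order is an assumption, read from "average RMSE ... throughout the growing season"):

```python
import numpy as np

def mean_rmse(rgb_cc: np.ndarray, ndvi_cc: np.ndarray) -> float:
    """Season-average RMSE (%) between RGB- and NDVI-based grid CC.

    rgb_cc, ndvi_cc: (flights, grids) arrays of percentage CC. RMSE is
    computed per flight across all grids, then averaged across flights.
    """
    per_flight = np.sqrt(np.mean((rgb_cc - ndvi_cc) ** 2, axis=1))
    return float(per_flight.mean())
```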
A similar trend was observed in the performance of the thresholding-based CC estimation methods in the 2018 experiment (Figure 12). Canopeo and the ExG method had higher RMSEs compared to the other methods (Table 5). However, the CC estimation values of the 2018 experiment were slightly higher than those of the 2017 experiment. The RGBVI turned out to be the most efficient CC estimation method amongst all. The application of the morphological closing operation significantly reduced the RMSE. It was observed that the morphological closing operation resulted in slightly better performance in the 2018 experiment than in the 2017 experiment. It was noticed that in the 2018 experiment, the spatial resolution of the RGB images was slightly lower than in the 2017 experiment. However, the difference in spatial resolution was small (~0.2 cm), and a slight decrease of the GSD might contribute to better classification and later filtering performance, because a very high resolution leads to more details being observed, which can be undesirable when the information class under consideration is more general. Furthermore, a small difference in the CC growth pattern between the two years was observed, which was a function of weather conditions (such as daily temperature and precipitation) and the genotypes planted.
4. Discussion

Recent years have witnessed an upsurge in UAS and sensor technology, which has made it possible to collect high temporal and spatial resolution data over crops throughout the growing season. The main objective of this study was to provide a framework for comparing MS sensor-based CC estimation with RGB-based CC estimation, as little attention has been paid in the literature to exploring the different VIs generated from UAS-based sensors for computing canopy cover. As reported in the literature, MS sensor-based CC estimation is more accurate and stable because it accounts for the live canopy through its chlorophyll content, which produces a strong response in the near-infrared (NIR) band; hence, canopy varieties that are not green in color can be correctly accounted for, and changes in color as the season progresses can also be identified accurately. However, the accuracy of the NDVI is a function of the type and quality of the multispectral sensor used to collect the imagery. In this study, the temporal NDVI maps were comparable despite changing lighting conditions across flights throughout the season, as they were generated from multispectral data collected using the SlantRange 3p sensor, which performs radiometric calibration. RGB-based CC estimation performed inadequately when the plant color deviated from green, which was confirmed by the experiments in both 2017 and 2018.
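For context, the NDVI-based CC used as the reference here reduces to a simple band computation followed by a threshold; the sketch below assumes calibrated reflectance arrays and an illustrative threshold, not the values used in this study:

```python
# A minimal sketch of NDVI-based CC from calibrated band reflectances;
# the arrays and the 0.5 threshold are placeholders, not study values.
import numpy as np

nir = np.random.rand(512, 512)            # placeholder calibrated NIR reflectance
red = np.random.rand(512, 512)            # placeholder calibrated red reflectance

ndvi = (nir - red) / (nir + red + 1e-9)   # NDVI = (NIR - Red) / (NIR + Red)
canopy_mask = ndvi > 0.5                  # illustrative single-season threshold
cc_percent = 100.0 * canopy_mask.mean()   # percentage canopy cover for the grid
```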
Except for the Canopeo method, all the other RGB VIs considered in this study required a threshold to be applied in order to separate canopy areas from non-canopy areas. In previous studies, the Otsu method has mostly been used for automatic thresholding [41]. The Otsu method produced accurate thresholding early in the season, as the image histogram was mostly bimodal and the variances of the spectral clusters were small compared to the difference of their means. Later in the season, however, Otsu thresholding became questionable: as the season progressed, the variance of the canopy's spectral signature increased, and, close to senescence, the image no longer possessed a bimodal histogram due to the emergence of new spectral classes such as open cotton bolls. Consequently, in this study, the VIs were examined by visual inspection to select a single threshold value for all the images in the growing season, and the selected value successfully classified canopy pixels. Although the NDVI was less affected by senescence than the RGB-based VIs, the overall NDVI values still decreased in the late season; even so, the selected single threshold value classified canopy pixels with reasonable accuracy based on visual interpretation. Since this study was limited to the cotton crop, the threshold value might differ for other crops, and a different trend might be observed in response to senescence during the growing season. In the future, an efficient thresholding method that can classify canopy regardless of growth stage would help automate the process.
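As an illustration of the thresholding step discussed above, the following sketch applies Otsu's method to an ExG image using scikit-image; the input tile is a placeholder, and this is not the authors' implementation:

```python
# A minimal sketch of VI thresholding with Otsu's method on a placeholder tile.
import numpy as np
from skimage.filters import threshold_otsu

def excess_green(rgb):
    """ExG = 2g - r - b on normalized chromatic coordinates [46]."""
    total = rgb.sum(axis=2) + 1e-9                  # guard against black pixels
    r, g, b = (rgb[..., i] / total for i in range(3))
    return 2.0 * g - r - b

rgb = np.random.rand(512, 512, 3)                   # placeholder RGB image tile
exg = excess_green(rgb)

# Otsu assumes a roughly bimodal histogram (canopy vs. soil); as noted above,
# this holds early in the season but degrades near senescence.
canopy_mask = exg > threshold_otsu(exg)
cc_percent = 100.0 * canopy_mask.mean()
```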
As previous studies have affirmed that RGB-based CC estimation works efficiently early in the season [40,41], it was also observed in this study that, early in the season, the MS- and RGB-based CC estimations were in agreement and followed a similar increasing trend. Moreover, in previous studies, MS sensor-based CC estimation has been found to be more accurate later in the season than RGB-based CC estimation [41]. In this study, it was observed that, as the season progressed, MS sensor-based CC estimation kept increasing, whereas RGB-based CC estimation peaked early and started to drop relatively rapidly in both the 2017 and 2018 experiments. Nevertheless, the RGBVI outperformed the other RGB-based CC estimation methods, though it was still not close enough to match the MS sensor-based CC estimation. That led to the question: is it possible to achieve the same level of accuracy with RGB-based CC estimation as with NDVI-based CC estimation? With this objective, a K-means clustering-based CC estimation method was implemented and tested using ground truth samples for the canopy and non-canopy classes. The K-means clustering-based approach matched the accuracy level of the MS sensor-based CC estimation. However, there remained scope to improve the RGB-based CC estimation approach, because the K-means clustering-based approach, like any classification-based approach, is accurate but computationally expensive, especially in its parameter tuning and its demand for ground truth sample collection, which is labor-intensive and time consuming.
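A minimal version of such a K-means canopy classification is sketched below; the cluster count, the greenness rule for labeling clusters, and the input tile are illustrative assumptions rather than the authors' configuration:

```python
# A minimal sketch (assumptions, not the authors' pipeline) of K-means based
# canopy classification: cluster pixels by color, then label each cluster as
# canopy or non-canopy from the greenness of its center.
import numpy as np
from sklearn.cluster import KMeans

rgb = np.random.rand(256, 256, 3)                 # placeholder RGB image tile
pixels = rgb.reshape(-1, 3)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pixels)

# Label clusters whose centers are greener than red/blue as canopy; in the
# paper this assignment was validated against ground truth samples instead.
r, g, b = km.cluster_centers_.T
canopy_clusters = np.where(2 * g - r - b > 0)[0]
canopy_mask = np.isin(km.labels_, canopy_clusters).reshape(rgb.shape[:2])
cc_percent = 100.0 * canopy_mask.mean()
```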
A further objective of this study was therefore to improve the RGB-based CC estimation approach. It was noticed that, later in the season, the RGB-based indices were unable to capture canopy pixels that were not green, which resulted in CC maps with many small holes. The morphological closing operation proved to be a solution to this problem, as it filled the very small holes while keeping the boundary of the canopy intact. In both the 2017 and 2018 experiments, applying the morphological closing operation significantly improved the performance of the RGB-based CC estimation, and the RGBVI with morphological closing produced a CC estimation very close to the NDVI-based CC estimation. With the proposed approach, a more affordable alternative to the MS sensor can be provided for estimating CC.
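The closing step itself is a one-line operation on the binary canopy mask; the sketch below uses scipy.ndimage with an assumed 3 x 3 structuring element (the kernel size is illustrative, not a value taken from the paper):

```python
# A minimal sketch of the hole-filling step, assuming a boolean canopy mask
# produced by thresholding one of the RGB VIs.
import numpy as np
from scipy import ndimage

canopy_mask = np.random.rand(512, 512) > 0.4   # placeholder thresholded VI mask

# Closing = dilation followed by erosion: small holes inside the canopy are
# filled while the outer canopy boundary is largely preserved.
structure = np.ones((3, 3), dtype=bool)        # assumed 3 x 3 kernel
closed_mask = ndimage.binary_closing(canopy_mask, structure=structure)

cc_before = 100.0 * canopy_mask.mean()
cc_after = 100.0 * closed_mask.mean()          # CC (%) after morphological closing
```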
5. Conclusions

A comparative study was performed to evaluate CC estimation using an RGB sensor. In a multi-year CC analysis, MS sensor-based CC estimation was used as the reference, as it is considered a stable and accurate form of estimation. The correlation of the RGB reference-based CC estimation with the MS sensor-based CC estimation confirmed the feasibility of using an RGB sensor to match MS sensor-based CC estimation. An analysis of the RGB-based methods suggested that the RGBVI was the most tolerant to changes in canopy color once the canopy started to senesce. Moreover, when combined with the morphological closing operation, the RGBVI-based CC estimation was found to be as accurate as MS sensor-based CC estimation, with an average RMSE of less than three percent. CC is a good predictor variable for plant growth parameters. As multispectral sensors are more sensitive and expensive, the proposed RGB-based CC estimation could provide an affordable alternative for agricultural scientists and breeders. In the future, this methodology will be investigated on other crops, as a different trend might be observed in response to senescence during the growing season.
Author Contributions: Conceptualization, A.A. and J.J.; data curation, A.A., A.C. and S.O.; formal analysis, A.A.; investigation, M.M.; methodology, A.A. and J.J.; project administration, J.J., M.M. and J.L.; resources, J.L.; software, A.A.; supervision, J.J.; validation, A.A.; visualization, A.A.; writing—original draft, A.A.; writing—review & editing, J.J., A.C., S.O. and J.L.
Funding: This research received no external funding.
Acknowledgments: This research was supported by Texas A&M AgriLife Research, Corpus Christi.
Conflicts of Interest: The authors declare no conflict of interest.
References

1. Adhikari, P.; Gowda, P.; Marek, G.; Brauer, D.; Kisekka, I.; Northup, B.; Rocateli, A. Calibration and validation of CSM-CROPGRO-Cotton model using lysimeter data in the Texas High Plains. J. Contemp. Water Res. Educ. 2017, 162, 61–78. [CrossRef]
2. Phillips, R.L. Mobilizing science to break yield barriers. Crop Sci. 2010, 50, 99–108. [CrossRef]
3. Xu, R.; Li, C.; Paterson, A.H. Multispectral imaging and unmanned aerial systems for cotton plant phenotyping. PLoS ONE 2019, 14, e0205083. [CrossRef] [PubMed]
4. Pierpaoli, E.; Carli, G.; Pignatti, E.; Canavari, M. Drivers of precision agriculture technologies adoption: A literature review. Procedia Technol. 2013, 8, 61–69. [CrossRef]
5. Tokekar, P.; Vander Hook, J.; Mulla, D.; Isler, V. Sensor planning for a symbiotic UAV and UGV system for precision agriculture. IEEE Trans. Robot. 2016, 32, 1498–1511. [CrossRef]
6. Singh, K.K.; Frazier, A.E. A meta-analysis and review of unmanned aircraft system (UAS) imagery for terrestrial applications. Int. J. Remote Sens. 2018, 39, 5078–5098. [CrossRef]
7. Roth, L.; Streit, B. Predicting cover crop biomass by lightweight UAS-based RGB and NIR photography: An applied photogrammetric approach. Precis. Agric. 2018, 19, 93–114. [CrossRef]
8. Gevaert, C.M.; Suomalainen, J.; Tang, J.; Kooistra, L. Generation of spectral–temporal response surfaces by combining multispectral satellite and hyperspectral UAV imagery for precision agriculture applications. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 3140–3146. [CrossRef]
9. Trout, T.J.; Johnson, L.F.; Gartung, J. Remote sensing of canopy cover in horticultural crops. HortScience 2008, 43, 333–337. [CrossRef]
10. Nielsen, D.C.; Miceli-Garcia, J.J.; Lyon, D.J. Canopy cover and leaf area index relationships for wheat, triticale, and corn. Agron. J. 2012, 104, 1569–1573. [CrossRef]
11. Chopping, M. CANAPI: Canopy analysis with panchromatic imagery. Remote Sens. Lett. 2011, 2, 21–29. [CrossRef]
12. Halperin, J.; LeMay, V.; Chidumayo, E.; Verchot, L.; Marshall, P. Model-based estimation of above-ground biomass in the miombo ecoregion of Zambia. For. Ecosyst. 2016, 3, 14. [CrossRef]
13. Hansen, M.C.; DeFries, R.S.; Townshend, J.R.; Carroll, M.; DiMiceli, C.; Sohlberg, R.A. Global percent tree cover at a spatial resolution of 500 meters: First results of the MODIS vegetation continuous fields algorithm. Earth Interact. 2003, 7, 1–15. [CrossRef]
14. Korhonen, L.; Hovi, A.; Rönnholm, P.; Rautiainen, M. The accuracy of large-area forest canopy cover estimation using Landsat in boreal region. Int. J. Appl. Earth Obs. Geoinf. 2016, 53, 118–127.
15. Chemura, A.; Mutanga, O.; Odindi, J. Empirical modeling of leaf chlorophyll content in coffee (Coffea arabica) plantations with Sentinel-2 MSI data: Effects of spectral settings, spatial resolution, and crop canopy cover. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 5541–5550. [CrossRef]
16. Melin, M.; Korhonen, L.; Kukkonen, M.; Packalen, P. Assessing the performance of aerial image point cloud and spectral metrics in predicting boreal forest canopy cover. ISPRS J. Photogramm. Remote Sens. 2017, 129, 77–85. [CrossRef]
17. Korhonen, L.; Ali-Sisto, D.; Tokola, T. Tropical forest canopy cover estimation using satellite imagery and airborne lidar reference data. Silva Fenn. 2015, 49, 1–18. [CrossRef]
18. Li, W.; Niu, Z.; Huang, N.; Wang, C.; Gao, S.; Wu, C. Airborne LiDAR technique for estimating biomass components of maize: A case study in Zhangye City, northwest China. Ecol. Indic. 2015, 57, 486–496. [CrossRef]
19. Tattaris, M.; Reynolds, M.P.; Chapman, S.C. A direct comparison of remote sensing approaches for high-throughput phenotyping in plant breeding. Front. Plant Sci. 2016, 7, 1131. [CrossRef]
20. Chen, A.; Orlov-Levin, V.; Elharar, O.; Meron, M. Comparing satellite and high-resolution visible and thermal aerial imaging of field crops for precision irrigation management and plant biomass forecast. In Precision Agriculture'19; Wageningen Academic Publishers: Wageningen, The Netherlands, 2019; pp. 37–44.
21. Korhonen, L.; Korpela, I.; Heiskanen, J.; Maltamo, M. Airborne discrete-return LiDAR data in the estimation of vertical canopy cover, angular canopy closure and leaf area index. Remote Sens. Environ. 2011, 115, 1065–1080. [CrossRef]
22. Anderson, K.E.; Glenn, N.F.; Spaete, L.P.; Shinneman, D.J.; Pilliod, D.S.; Arkle, R.S.; McIlroy, S.K.; Derryberry, D.R. Estimating vegetation biomass and cover across large plots in shrub and grass dominated drylands using terrestrial lidar and machine learning. Ecol. Indic. 2018, 84, 793–802. [CrossRef]
23. Ma, Q.; Su, Y.; Guo, Q. Comparison of canopy cover estimations from airborne LiDAR, aerial imagery, and satellite imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 4225–4236. [CrossRef]
24. Holman, F.; Riche, A.; Michalski, A.; Castle, M.; Wooster, M.; Hawkesford, M. High throughput field phenotyping of wheat plant height and growth rate in field plot trials using UAV based remote sensing. Remote Sens. 2016, 8, 1031. [CrossRef]
25. Chianucci, F.; Disperati, L.; Guzzi, D.; Bianchini, D.; Nardino, V.; Lastri, C.; Rindinella, A.; Corona, P. Estimation of canopy attributes in beech forests using true colour digital images from a small fixed-wing UAV. Int. J. Appl. Earth Obs. Geoinf. 2016, 47, 60–68. [CrossRef]
26. Fernandez-Gallego, J.A.; Kefauver, S.C.; Kerfal, S.; Araus, J.L. Comparative canopy cover estimation using RGB images from UAV and ground. In Remote Sensing for Agriculture, Ecosystems, and Hydrology XX; International Society for Optics and Photonics: Berlin, Germany, 2018; p. 107830J.
27. Ashapure, A.; Oh, S.; Marconi, T.G.; Chang, A.; Jung, J.; Landivar, J.; Enciso, J. Unmanned aerial system based tomato yield estimation using machine learning. In Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping IV; International Society for Optics and Photonics: Baltimore, MD, USA, 2019.
28. Chu, T.; Chen, R.; Landivar, J.A.; Maeda, M.M.; Yang, C.; Starek, M.J. Cotton growth modeling and assessment using unmanned aircraft system visual-band imagery. J. Appl. Remote Sens. 2016, 10, 036018. [CrossRef]
29. Ballesteros, R.; Ortega, J.; Hernández, D.; Moreno, M. Applications of georeferenced high-resolution images obtained with unmanned aerial vehicles. Part II: Application to maize and onion crops of a semi-arid region in Spain. Precis. Agric. 2014, 15, 593–614. [CrossRef]
30. Ballesteros, R.; Ortega, J.F.; Hernandez, D.; Moreno, M.A. Onion biomass monitoring using UAV-based RGB imaging. Precis. Agric. 2018, 19, 840–857. [CrossRef]
31. Ashapure, A.; Jung, J.; Yeom, J.; Chang, A.; Maeda, M.; Maeda, A.; Landivar, J. A novel framework to detect conventional tillage and no-tillage cropping system effect on cotton growth and development using multi-temporal UAS data. ISPRS J. Photogramm. Remote Sens. 2019, 152, 49–64. [CrossRef]
32. Makanza, R.; Zaman-Allah, M.; Cairns, J.; Magorokosho, C.; Tarekegne, A.; Olsen, M.; Prasanna, B. High-throughput phenotyping of canopy cover and senescence in maize field trials using aerial digital canopy imaging. Remote Sens. 2018, 10, 330. [CrossRef]
33. Clevers, J.; Kooistra, L.; Van Den Brande, M. Using Sentinel-2 data for retrieving LAI and leaf and canopy chlorophyll content of a potato crop. Remote Sens. 2017, 9, 405. [CrossRef]
34. Pauly, K. Applying conventional vegetation vigor indices to UAS-derived orthomosaics: Issues and considerations. In Proceedings of the International Conference of Precision Agriculture (ICPA), Sacramento, CA, USA, 20–23 July 2014.
35. Booth, D.T.; Cox, S.E.; Berryman, R.D. Point sampling digital imagery with 'SamplePoint'. Environ. Monit. Assess. 2006, 123, 97–108. [CrossRef] [PubMed]
36. Richardson, M.; Karcher, D.; Purcell, L. Quantifying turfgrass cover using digital image analysis. Crop Sci. 2001, 41, 1884–1888. [CrossRef]
37. Lee, K.J.; Lee, B.W. Estimating canopy cover from color digital camera image of rice field. J. Crop Sci. Biotechnol. 2011, 14, 151–155. [CrossRef]
38. Patrignani, A.; Ochsner, T.E. Canopeo: A powerful new tool for measuring fractional green canopy cover. Agron. J. 2015, 107, 2312–2320. [CrossRef]
39. Torres-Sánchez, J.; Peña, J.M.; de Castro, A.I.; López-Granados, F. Multi-temporal mapping of the vegetation fraction in early-season wheat fields using images from UAV. Comput. Electron. Agric. 2014, 103, 104–113. [CrossRef]
40. Fang, S.; Tang, W.; Peng, Y.; Gong, Y.; Dai, C.; Chai, R.; Liu, K. Remote estimation of vegetation fraction and flower fraction in oilseed rape with unmanned aerial vehicle data. Remote Sens. 2016, 8, 416. [CrossRef]
41. Marcial-Pablo, M.D.J.; Gonzalez-Sanchez, A.; Jimenez-Jimenez, S.I.; Ontiveros-Capurata, R.E.; Ojeda-Bustamante, W. Estimation of vegetation fraction using RGB and multispectral images from UAV. Int. J. Remote Sens. 2019, 40, 420–438. [CrossRef]
42. Lima-Cueto, F.J.; Blanco-Sepúlveda, R.; Gómez-Moreno, M.L.; Galacho-Jiménez, F.B. Using vegetation indices and a UAV imaging platform to quantify the density of vegetation ground cover in olive groves (Olea europaea L.) in southern Spain. Remote Sens. 2019, 11, 2564. [CrossRef]
43. Westoby, M.J.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M. 'Structure-from-Motion' photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology 2012, 179, 300–314. [CrossRef]
44. Rouse, J.W., Jr.; Haas, R.; Schell, J.; Deering, D. Monitoring Vegetation Systems in the Great Plains with ERTS; Texas A&M Univ.: College Station, TX, USA, 1974.
45. Hulvey, K.B.; Thomas, K.; Thacker, E. A comparison of two herbaceous cover sampling methods to assess ecosystem services in high-shrub rangelands: Photography-based grid point intercept (GPI) versus quadrat sampling. Rangelands 2018, 40, 152–159. [CrossRef]
46. Woebbecke, D.M.; Meyer, G.E.; Von Bargen, K.; Mortensen, D. Color indices for weed identification under various soil, residue, and lighting conditions. Trans. ASAE 1995, 38, 259–269. [CrossRef]
47. Bendig, J.; Yu, K.; Aasen, H.; Bolten, A.; Bennertz, S.; Broscheit, J.; Gnyp, M.L.; Bareth, G. Combining UAV-based plant height from crop surface models, visible, and near infrared vegetation indices for biomass monitoring in barley. Int. J. Appl. Earth Obs. Geoinf. 2015, 39, 79–87. [CrossRef]
48. Dougherty, E.R. An Introduction to Morphological Image Processing; SPIE: Bellingham, WA, USA, 1992.
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).