Increasing the Accuracy and Automation of Fractional Vegetation Cover Estimation from Digital Photographs
André Coy 1,*, Dale Rankine 1, Michael Taylor 1, David C. Nielsen 2 and Jane Cohen 3

1 Department of Physics, The University of the West Indies, Mona, Jamaica; dralstonrankine@gmail.com (D.R.); michael.taylor@uwimona.edu.jm (M.T.)
2 Central Great Plains Research Station, USDA-ARS, Akron, CO 80720, USA; david.nielsen@ars.usda.gov
3 Department of Life Sciences, The University of the West Indies, Mona, Jamaica; jane.cohen@uwimona.edu.jm
* Correspondence: andre.coy@gmail.com; Tel.: +1-876-927-2480; Fax: +1-876-977-1595

Academic Editors: Sangram Ganguly, Parth Sarathi Roy and Prasad S. Thenkabail
Received: 4 April 2016; Accepted: 23 May 2016; Published: 23 June 2016
Abstract: The use of automated methods to estimate fractional vegetation cover (FVC) from digital photographs has increased in recent years given its potential to produce accurate, fast and inexpensive FVC measurements. Wide acceptance has been delayed because of the limitations in accuracy, speed, automation and generalization of these methods. This work introduces a novel technique, the Automated Canopy Estimator (ACE), that overcomes many of these challenges to produce accurate estimates of fractional vegetation cover using an unsupervised segmentation process. ACE is shown to outperform nine other segmentation algorithms, consisting of both threshold-based and machine learning approaches, in the segmentation of photographs of four different crops (oat, corn, rapeseed and flax) with an overall accuracy of 89.6%. ACE is similarly accurate (88.7%) when applied to remotely sensed corn, producing FVC estimates that are strongly correlated with ground truth values.
Keywords: fractional vegetation cover; automated canopy estimation; unsupervised image segmentation; digital photographs
1. Introduction

Fractional vegetation cover (FVC), which is defined as the vertical projection of foliage onto a horizontal surface, is an important measure of crop development [1]. FVC can be used as direct input to crop models or as a predictor of crop yield, above-ground biomass and plant nutritional status [2–7]. An advantage that FVC holds over other measures of crop development, such as Leaf Area Index (LAI), is that it can be estimated from the analysis of digital photographs. This holds the potential for a simple, low-cost approach to measuring crop development.
The segmentation of digital photographs is becoming increasingly important in agriculture. Applications include vegetation monitoring [8,9], estimation of LAI [10–12], plant nutritional status [6,7,13], fractional vegetation cover measurement [1,14–17], growth characteristics [18], weed detection [19] and crop identification [20]. Determining FVC from digital photographs is often simpler, faster and more economical than measuring LAI [1,15,17]. FVC values derived from ground and near-ground remotely sensed images are vital for validating, and thus ensuring the quality of, FVC estimates derived from satellite images [9,21–25]. However, there are often significant problems with current FVC estimation methodologies. Owing to a lack of automation, the approaches are sometimes very time-consuming. Additionally, they may require the user to have knowledge of image processing techniques. Most importantly, current techniques generally lack sufficient accuracy to be useful [16,26].
Remote Sens. 2016, 8, 474; doi:10.3390/rs8070474
Image segmentation for fractional vegetation cover estimation typically utilizes one of two approaches, either a threshold-based approach or a machine learning approach. Examples of threshold-based approaches include the Color Index of Vegetation Extraction (CIVE) [27], Excess Green and Otsu's method (ExG & Otsu) [28] and Green Crop Tracker [10]. Both CIVE and ExG & Otsu manipulate the values of the RGB channels of an image in order to distinguish pixels dominated by green crop canopy from soil and background pixels. Otsu's method [29] is then applied in order to extract the fractional vegetation cover. The Green Crop Tracker automatically derives estimates of LAI and FVC from digital photographs of growing crops. The software estimates fractional vegetation cover by segmenting green canopy from soil and other background material using a histogram-based threshold technique applied to the RGB values of the photographs. Limitations of these approaches include an underperformance of the segmentation algorithm when the canopy is close to closure and less accurate results in varying plot illumination [10,30]. SHAR-LABFVC [31] is an algorithm developed to deal with the effect of shadows in classification. The algorithm converts a brightness-enhanced image to the Commission Internationale de l'Éclairage (CIE) L*a*b* space and fits a log-normal distribution to the greenness values and a Gaussian to the background values. Threshold determination assumes that misclassification of vegetation and background is approximately equal; the segmentation threshold is chosen to ensure this.
Several machine learning approaches have been developed to estimate fractional vegetation cover. The environmentally adaptive segmentation algorithm (EASA) employed an adaptive, partially supervised clustering process coupled with a Bayesian classifier to achieve segmentation of RGB images [32]. The segmentation method proposed by [12] proceeds with the supervised selection of a region of an RGB image, which is converted to CIE L*a*b* space. K-means is used to cluster the sub-image into plant canopy and background. The clusters representing plant canopy are then used to train an artificial neural network to segment images into plant canopy and background. Bai et al. employed morphology modeling in their development of a crop model for rice [30]. Digital photographs of rice plants are divided into blocks and manually segmented into plant canopy and background. The manually segmented images are converted to CIE L*a*b* space, the distributions of color at different levels of illuminance are generated, and morphological operations (dilation and erosion) are applied, resulting in a color model that can be used to segment rice plants from photographs taken under variable lighting conditions.
While the machine learning approaches give good results, there are some limitations. Firstly, they are typically supervised methods requiring varying levels of human intervention, both for model training and for testing. For instance, the method of Bai et al. [30] requires that a separate model be developed for each crop type that is to be analyzed. Secondly, they tend to employ computationally intensive processes, which may limit the scope of their use.
The objective of this study was to evaluate the performance of a novel, unsupervised, threshold-based segmentation technique, the Automated Canopy Estimator (ACE), that overcomes many of the aforementioned challenges to produce accurate estimates of fractional vegetation cover for both terrestrial and remotely sensed photographs. The following section describes the technique. Section 3 evaluates the performance of ACE versus other methods using datasets also detailed in Section 2. Section 4 discusses the value of the new approach. Thereafter, a conclusion is offered in Section 5.
2. Materials and Methods
2.1. Data Collection
The digital photographs used in the study were taken during previously conducted field studies.
Four separate crops (seeding rates and row spacing given in parentheses) were included in the
analysis: corn: Zea mays L. (34,600 seeds/ha; 76 cm), oat: Avena sativa L. (94.0 kg/ha; 20 cm), rapeseed:
Brassica napus L. (7.4 kg/ha; 20 cm) and flax: Linum usitatissimum L. (39.2 kg/ha; 20 cm).
The oat, rapeseed, and flax data were collected from field studies conducted at the United States Department of Agriculture, Agricultural Research Service (USDA-ARS) Central Great Plains Research Station (40°09′N, 103°09′W, 1383 m elevation above sea level) located near Akron, CO, USA [33].
The crops were planted no-till into proso millet stubble on 4 April 2013. Plot sizes were 6.1 by 12.2 m. The plot area was sprayed with glyphosate prior to planting and fertilized with 34 kg N/ha. Hand-weeding was performed twice early in the growing season. Corn was grown in a separate study (details given in [10]) and planted in early May in 2010 and 2011 at both the Akron site and at Sidney, NE, USA (41°12′N, 103°0′W, 1315 m above mean sea level). Soil type at both locations was classified as silt loam (Aridic Argiustolls). Crops in both studies were grown under both dryland (rain-fed) and limited supplemental irrigation conditions.
Photographs were taken with a digital camera held level with the horizon and at arm's length to the south of the photographer at midday. Owing to the differing growth patterns and the varying stages of crop growth, the depth of shadow varies from none to deep shadow. Height above the soil surface was approximately 1.5 m. For photographs of corn taken after 12 July, the photographer climbed a stepladder to get above the canopy. Two inexpensive (<USD $175) digital cameras were used: for corn, a Panasonic Lumix DMC-FS3 and a Panasonic DMC-FP1 (image specifications for both Panasonic cameras were JPEG image format, 2048 by 1536 pixels, 180 ppi); for all other crops, the DMC-FP1 was used.
Additional photographs of corn were taken to compare the analyses of the hand-held digital photographs with analyses of images acquired from a true remote sensing platform. These additional images were obtained from irrigated and rain-fed corn plots at the USDA-ARS Limited Irrigation Research Farm (40°27′N, 104°38′W, 1427 m elevation above sea level) located near Greeley, CO, USA [34]. The corn was planted at 85,500 seeds/ha with 76-cm row spacing on 15 May 2015. Plot size was 9.0 by 43.0 m. A total of 139 kg N/ha fertilizer was applied throughout the growing season. The soil type at this site was classified as sandy loam (Ustic Haplargids). The camera used in this study was a Canon EOS 50D, which produced JPEG images (2352 by 1568 pixels). The camera was attached to a tractor-mounted boom and raised to about 7 m above the soil surface. Images were taken around solar noon on seven dates from 22 June to 1 October. Each image covered an area of about 5.9 m × 4.2 m over the six center rows of each plot.
2.2. Image Segmentation
This paper proposes a new approach to the automated estimation of FVC from digital photographs. The technique converts a digital photograph from RGB to the CIE L*a*b* color space. This color space was developed by the CIE based on human perception of color. A clear benefit of working in the CIE L*a*b* space is that it is device-independent [35]; therefore, a single approach can be used in the segmentation of images captured using different devices, without the negative effects of differing color representations. Colors are represented using three values: the L* value represents the brightness of the image; the a* value represents color on the red-green axis, while the b* value represents color along the blue-yellow axis. By converting to the L*a*b* space, the luminance of the image is separated from the color information, thus reducing the impact of excessive brightness on the segmentation process. The algorithm extracts and processes the a* channel in order to segment the image into green canopy and background (background crop residues or soil).
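The conversion step reduces to a few lines of code. The following sketch is illustrative only (the paper does not name an implementation; scikit-image is assumed here for the color-space conversion):

```python
from skimage import io, color

def extract_a_channel(path):
    """Load an RGB photograph and return its CIE L*a*b* a* channel."""
    rgb = io.imread(path)     # H x W x 3 array of RGB values
    lab = color.rgb2lab(rgb)  # device-independent L*a*b*; L*, a*, b* along the last axis
    return lab[:, :, 1]       # a* channel: position on the red-green axis
```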
Threshold Determination
Figure 1 shows a flowchart detailing the threshold determination of the ACE algorithm. The a* values for a file are compiled and a histogram is plotted. Histogram values are smoothed using a kernel-based method in order to remove spurious peaks from the distribution. Figure 2 shows the smoothed histogram for an image of flax (Figure 2a) in which the a* values representing green canopy and background are both well represented (Figure 2b). The distribution is bimodal, with distinct regions for green colored pixels (to the left) and for background pixels (to the right). In the ideal case, this bimodality is clearly seen in the distribution; however, this is not always so, for instance, when the canopy is near closed or where fractional vegetation cover is sparse. In these circumstances, a measure of the underlying bimodality of the distribution must be determined. This is achieved by fitting a Gaussian mixture model (GMM) to the smoothed data. The use of Gaussians is motivated by the observation that the distributions of vegetation and background colors are approximately normal.
The GMM is expressed as:

$$g(x \mid \mu_i, \Sigma_i) = \sum_{i=1}^{N} w_i \, \frac{1}{(2\pi)^{1/2} |\Sigma_i|^{1/2}} \, e^{-\frac{1}{2}(x-\mu_i)^{T} \Sigma_i^{-1} (x-\mu_i)},$$

where $x$ is a vector of a* color values, and $\mu_i$ and $\Sigma_i$ are the means and covariances of each Gaussian, respectively. The contribution of the $i$th Gaussian is determined by the weight $w_i$. For the bimodal distribution, $N$ is set to 2. The GMM is estimated using the non-linear iterative Nelder–Mead simplex optimization algorithm [36]. The Nelder–Mead algorithm is a direct search algorithm that does not use derivatives, so it can be used for discontinuous functions. The algorithm takes an initial solution set and forms a simplex. The function is evaluated at each point of the simplex and minimized by shifting (reflecting, shrinking or expanding) the vertices of the simplex. The algorithm iteratively seeks an improved estimate, based on a specified tolerance.
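As a concrete illustration of this fitting step, the sketch below builds the smoothed a* histogram and fits the two-component mixture with SciPy's Nelder-Mead implementation by minimizing the squared error between the mixture and the smoothed curve. This is a minimal reading of the description above, not the authors' code; the bin count, smoothing kernel width and initial guesses are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import minimize

def gmm_pdf(x, p):
    """Two-component 1-D Gaussian mixture with parameters (w1, mu1, s1, w2, mu2, s2)."""
    w1, mu1, s1, w2, mu2, s2 = p
    gauss = lambda mu, s: np.exp(-0.5 * ((x - mu) / s) ** 2) / (np.sqrt(2 * np.pi) * s)
    return w1 * gauss(mu1, s1) + w2 * gauss(mu2, s2)

def fit_bimodal(a_star, bins=256, smooth_sigma=3.0):
    """Histogram the a* values, smooth, and fit the two-Gaussian mixture by Nelder-Mead."""
    hist, edges = np.histogram(a_star.ravel(), bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    smoothed = gaussian_filter1d(hist, smooth_sigma)  # kernel-based smoothing
    # Initialize one mode on the green (left) side and one on the background side.
    x0 = np.array([0.5, np.percentile(a_star, 25), 5.0,
                   0.5, np.percentile(a_star, 75), 5.0])
    sse = lambda p: np.sum((gmm_pdf(centers, p) - smoothed) ** 2)
    fit = minimize(sse, x0, method="Nelder-Mead")
    return centers, smoothed, fit.x

def threshold_between_peaks(centers, params):
    """T_gb: the lowest point of the fitted mixture between the two component means."""
    _, mu1, _, _, mu2, _ = params
    lo, hi = sorted((mu1, mu2))
    between = (centers >= lo) & (centers <= hi)
    vals = gmm_pdf(centers[between], params)
    return centers[between][np.argmin(vals)]
```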
Figure 1. Flowchart of the ACE algorithm.
A threshold, Tgb, for segmentation is determined automatically by detecting the lowest point of intersection between the peaks of the distribution (see Figure 2). An image is segmented by marking the pixels with a* color values less than or equal to Tgb as belonging to the canopy and all other pixels as background. Fractional vegetation cover is estimated as the ratio of the number of pixels in the canopy to the total number of pixels in the photograph.
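In code, the segmentation rule and the FVC ratio reduce to a single comparison against the threshold (continuing the illustrative sketch above):

```python
import numpy as np

def fvc_from_threshold(a_star, t_gb):
    """Fraction of pixels at or below T_gb (green canopy) among all pixels."""
    canopy = np.asarray(a_star) <= t_gb
    return canopy.sum() / canopy.size
```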
Figure 2. An image of flax (a) and the smoothed probability distribution of its a* color values (b). Tgb is the threshold for segmentation in the ACE algorithm.
In some instances, the bimodality is not revealed by mixture modeling. For example, in Figure 3a, the shadowed areas beneath a corn canopy obscure some regions of soil. The histogram in Figure 3b reveals the lack of obvious bimodality in the distribution. A generalized thresholding technique would fail to discriminate between green canopy and shadow, leading to under-segmentation of the image and an overestimate of FVC. While the bimodality is not evident, the assumption that the distribution is in fact bimodal remains valid, as the shadowed sections of the image simply act to flatten the second peak of the distribution. In ACE, these occurrences are dealt with by finding the inflection point, that is, the point at which the two underlying peaks intersect. The inflection point is detected by first taking the difference between adjacent points of the distribution. The vector of differences shows a series of peaks and troughs, indicating the number and direction of the inflection points found in the color value distribution (see Figure 3c, for example). Inflection points mark the place where the direction of the distribution changes from clockwise to anti-clockwise, or vice versa. These directional changes are indicated by peaks and troughs in the difference vector; a trough represents a change from clockwise to anti-clockwise, while a peak represents the reverse. The second step involves the detection of the peak in the difference vector that identifies the correct inflection point, viz., the one indicative of the threshold. With an almost closed canopy, the inflection point is found to the right of the main peak in the distribution. The second peak in the difference vector, which corresponds to an anti-clockwise to clockwise direction change, indicates the position of the threshold (see Figure 3, for example). When the photo shows mostly background, the inflection point is found to the left of the main peak in the smoothed distribution. In this case, the first trough in the difference vector, which identifies the position of the clockwise to anti-clockwise direction change, indicates the position of the segmentation threshold.
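The sketch below is one plausible reading of this two-step search. It is illustrative only: the paper describes the peak and trough selection in prose, so the exact indexing used here is an assumption.

```python
import numpy as np
from scipy.signal import argrelextrema

def inflection_threshold(centers, smoothed, near_closed=True):
    """Locate T_gb from the difference vector when no second mode is visible."""
    diff = np.diff(smoothed)  # differences between adjacent points of the distribution
    if near_closed:
        # Peaks in the difference vector mark anti-clockwise to clockwise changes;
        # the text selects the second such peak for an almost closed canopy.
        peaks = argrelextrema(diff, np.greater)[0]
        idx = peaks[1] if peaks.size > 1 else peaks[-1]
    else:
        # Mostly background: the first trough (clockwise to anti-clockwise change).
        troughs = argrelextrema(diff, np.less)[0]
        idx = troughs[0]
    return centers[idx]
```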
Figure 3. An image of corn (a), its smoothed probability distribution (b), and the difference of
probability values (c). The inflection point in subfigure (b) (marked as Tgb) indicates the segmentation
value. Tgb is detected as the second peak in the difference curve (subfigure (c)).
3. Results
3.1. Image Segmentation
One of the aims of using image analysis techniques for segmentation is to develop an
automated, low cost, fast and replicable approach to the segmentation of vegetation cover from
digital photographs. Using ACE, segmentation was performed on all four crops over a range of
growing conditions and canopy development. Representative examples of segmentation for the four
crops are shown in Figure 4; Figure 4a,c,e,g show the original terrestrial photographs of corn, flax,
rapeseed and oat, respectively. The corresponding segmented images are shown in Figure 4b,d,f,h.
Remotely sensed corn and its segmentation are shown in Figure 4i,j. Examination of the segmentation
results shows the utility of ACE for performing accurate segmentation of vegetation from backgrounds
under varying conditions. The figures all have regions of shadow that can lead to misclassification of
background pixels, especially for ocular segmentation. The corresponding segmentations show the
accurate exclusion of the shadowed areas from the vegetation by ACE. ACE also successfully
compensated for variations in lighting and quality of individual images.
Automation of the segmentation process removes the component of human error and ensures
the replicability of the results. The technique works for the entire range of fractional vegetation
cover, from fully open to fully closed. Segmentation of each image takes less than a second on a
computer with a 2.2 GHz CPU and 8 GB of memory, compared with just under five seconds for
SHAR-LAB and approximately two minutes for a supervised technique such as SamplePoint [37]
(http://samplepoint.org/), in which 64 randomly selected points per image are classified by a human
observer. Though the images were captured with two separate cameras, the segmentation was
consistent across the dataset, a result that has not been achieved in previous studies (see [10,15] for
example).
Figure 4. (a–j) Image segmentation using ACE. Photographs taken with a handheld digital camera along with the respective segmentations performed by ACE. Corn (a,b); flax (c,d); rapeseed (e,f) and oat (g,h). Remotely sensed corn, taken from a platform 7 m above the soil surface (i); and its ACE segmentation (j).
3.2. Fractional Vegetation Cover Estimation

Twenty images of each crop, a total of 80 images, were selected for testing. An additional six per crop were used as a development set. The images cover the full range of fractional vegetation cover from open to near closed. Additionally, the crops present a challenge for segmentation as they have different leaf shapes, sizes and orientations and a variety of growth patterns while being photographed under varying lighting conditions.
3.2.1. Ground Truth Image Segmentation
Ground truth segmentation values were estimated by two individuals using Photoshop (Adobe Systems Incorporated, San Jose, CA, USA); each pixel was classified as either leaf or soil/crop residue. FVC percentage was calculated as the fraction of pixels classified as part of the canopy. These values of FVC were taken as ground truth despite the known limitations of the technique: user bias, age-related color perception challenges and other natural variation between users.
3.2.2. Comparison of Image Segmentation Techniques
In order to validate the accuracy of the segmentations generated by ACE, comparisons were made
with nine other segmentation algorithms. The algorithms include eight threshold-based approaches, CIVE [27], ExG & Otsu [28], visible vegetation index (VVI), Mean Shift (MS), MSCIVE, MSExG and MSVVI [38], and SHAR-LABFVC [31], and one machine learning approach [30]. Segmentation accuracy was determined using the following measure:

$$\text{accuracy} = 100 \times \left( \frac{|A \cap B|}{|A \cup B|} \right),$$

where $A$ represents the set of pixels in the ground truth image that is marked as crop canopy and $B$ represents the set of pixels in the segmentation that is marked as crop canopy. This measure of accuracy determines how closely the segmentation matches the ground truth, with 100% indicating an exact match and perfect segmentation. Table 1 lists mean accuracies and standard deviations for all ten segmentation algorithms.
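This measure is the Jaccard index (intersection over union) of the two canopy masks, expressed as a percentage. A direct implementation over boolean masks might look like the following sketch:

```python
import numpy as np

def segmentation_accuracy(truth_mask, pred_mask):
    """100 * |A intersect B| / |A union B| for boolean canopy masks A and B."""
    a = np.asarray(truth_mask, dtype=bool)
    b = np.asarray(pred_mask, dtype=bool)
    return 100.0 * np.logical_and(a, b).sum() / np.logical_or(a, b).sum()
```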
Table 1. Mean segmentation accuracy and standard deviations for the ten segmentation algorithms evaluated. Accuracies are shown for each crop and for all crops combined.
Algorithm Corn Oat Flax Rapeseed Overall
µ(%) σ(%) µ(%) σ(%) µ(%) σ(%) µ(%) σ(%) µ(%) σ(%)
CIVE 40.0 18.0 63.0 8.0 60.0 18.0 51.0 1.5 52.6 17.3
ExG 67.0 8.0 58.0 9.0 63.0 16.0 50.0 1.5 59.6 13.6
VVI 30.0 8.0 45.0 9.0 48.4 10.0 39.4 10.0 40.0 10.9
MS 35.0 9.0 54.0 7.0 48.0 10.0 43.1 13.0 44.2 11.7
MSCIVE 85.4 6.0 61.0 25.0 74.0 6.0 75.4 10.0 74.4 15.8
MSExG 85.0 7.0 62.0 25.0 73.0 6.0 76.0 8.0 74.4 15.2
MSVVI 32.3 9.0 55.0 7.0 44.0 10.0 42.0 12.0 42.5 11.9
Bai et al. 88.0 5.0 85.0 6.4 84.4 8.0 87.0 8.1 86.1 7.0
SHAR-LAB 87.0 5.0 82.3 11.6 81.3 6.0 85.0 10.0 82.3 9.0
ACE 89.2 2.6 89.1 4.3 89.8 5.1 90.4 4.5 89.6 4.5
ACE outperforms all nine segmentation algorithms evaluated on the current dataset. The improvement over Bai et al. (between 1.2% and 5.4%) and SHAR-LAB (between 2.2% and 8.5%) indicates the level of performance ACE achieves on a challenging dataset. One-way ANOVA analysis showed that the segmentation results for rapeseed, oat and flax were significantly (α = 0.05) better than those of all algorithms except for Bai et al. and SHAR-LAB. Standard deviations were also quite low; ACE had the lowest standard deviations across the test set for corn, oat and flax. For rapeseed, CIVE and ExG have lower standard deviations; however, both have very low accuracy, and their low standard deviations suggest that the accuracies are low across the test set. When accuracies of all the crops are combined, ANOVA testing confirms that ACE produces the most accurate segmentations across the dataset (p = 0.05). While ACE produced more accurate results than Bai et al. overall (average 89.6% compared to 86.0%), the two are statistically similar. ACE also produces estimates with the lowest variance, showing a high degree of consistency in the estimates across all crops. The boxplot (Figure 5) highlights this result graphically.
Figure 5. Boxplot of segmentation accuracy of all algorithms tested for the combined dataset. ACE has an overall higher segmentation accuracy than all other algorithms. Bai et al. is second best, with an overall accuracy of 86.1%. ACE also has the lowest overall standard deviation across all crop types.
3.2.3. Fractional Vegetation Cover Estimates
The accuracy of FVC values estimated by ACE and the other nine techniques is evaluated by
comparing the estimated FVC values with the corresponding ground truth values. Table 2 lists the
root mean square error (RMSE) and standard deviation of FVC for all algorithms. Over the wide
range of FVC values observed, there was no systematic error in the ACE estimation process, with
some values being overestimated and others underestimated. Overall, ACE produces the most
accurate estimates of FVC as evidenced by the RMSE and standard deviation values in Table 2. The
low standard deviation highlights the consistency of the FVC values estimated by ACE. ACE
outperforms the other algorithms for oat and rapeseed; the RMSE values are 5.8% (σ = 6.0%) and 5.3% (σ
= 4.3%), respectively. Ninety percent of the errors for oat and 85% for rapeseed fall within one standard
deviation of the mean, suggesting a high level of agreement between the estimates generated by
ACE and the ground truth values. The errors occur for both high and low fractional vegetation cover
values, thus reinforcing the overall lack of bias. ACE and Bai et al. have a similar RMSE for rapeseed,
but the standard deviation for ACE is lower, suggesting a more consistent FVC estimation. Bai et al.
produces estimates with marginally better RMSE values for corn and flax. The standard deviations
are identical for corn, while, for flax, the standard deviation is lower for Bai et al.
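For reference, the two error statistics reported in Table 2 can be computed from paired FVC estimates and ground truth values as in the sketch below (whether the paper uses the sample or population standard deviation is not stated, so ddof=1 is an assumption):

```python
import numpy as np

def fvc_error_stats(estimated, truth):
    """RMSE and standard deviation of the estimated-minus-truth FVC differences (%)."""
    d = np.asarray(estimated, dtype=float) - np.asarray(truth, dtype=float)
    rmse = float(np.sqrt(np.mean(d ** 2)))
    sigma = float(np.std(d, ddof=1))  # sample standard deviation (assumption)
    return rmse, sigma
```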
Table 2. RMSE and standard deviation of difference between estimated FVC values and the ground
truth FVC values for the ten algorithms evaluated.
Algorithm Corn Oat Flax Rapeseed Overall
RMSE σ (%) RMSE σ (%) RMSE σ (%) RMSE σ (%) RMSE σ (%)
CIVE 33.7 17.4 17.9 14.3 35.0 14.5 31.1 31.8 30.2 24.8
ExG 11.7 7.1 18.6 7.0 14.8 7.4 33.3 12.4 21.3 11.9
VVI 20.0 17.3 21.8 13.8 15.4 15.8 36.1 29.3 24.6 21.3
MS 16.8 16.0 15.2 15.3 19.4 18.9 34.3 31.4 22.7 22.8
MSCIVE 4.7 4.8 29.1 22.7 8.6 8.6 20.9 14.3 18.6 16.3
MSExG 5.7 5.7 28.4 22.8 8.2 8.1 19.7 12.8 18.0 15.6
MSVVI 16.6 16.2 15.4 15.4 18.0 18.4 34.7 31.2 22.6 22.5
Bai et al. 2.1 2.2 8.2 8.4 5.6 4.2 5.3 5.0 5.7 5.6
SHAR-LAB 9.2 6.4 16.0 11.4 11.6 6.0 8.7 4.7 11.7 7.7
ACE 2.7 2.2 5.8 6.0 5.7 5.8 5.3 4.3 5.0 4.9
3.2.4. Segmentation and FVC Estimates for Remotely Sensed Corn

The ten algorithms were used to perform segmentation on several images of corn taken by a camera suspended from a boom arm (see Section 2 for details of data collection). The mean segmentation accuracy of ACE was 88.7%, with a standard deviation of 5.4%, which is the highest accuracy and lowest standard deviation of the algorithms tested (see Table 3 and Figure 4). This high level of segmentation accuracy and low standard deviation is consistent with the accuracies obtained with terrestrial photographs of corn in Section 3.2.3. Compared to the ground truth values, the FVC estimates resulting from the ACE segmentation process had RMSE and standard deviation of 4.1 and 4.1, respectively (Table 3). The FVC values produced by ACE were only fractionally better than those produced by SHAR-LAB (by 0.4% and 0.1% RMSE and standard deviation, respectively), but much better than the other eight algorithms, including Bai et al. [30].
Table 3. Segmentation accuracy; RMSE and standard deviation of difference between estimated FVC values and the ground truth FVC values for the ten algorithms evaluated using a remotely sensed corn crop.
Algorithm Segmentation Accuracy FVC
µ(%) σ(%) RMSE σ(%)
CIVE 53.7 19.7 29.9 22.2
ExG 41.0 18.9 31.2 16.5
VVI 41.7 18.8 20.0 20.3
MS 44.2 17.6 22.6 23.2
MSCIVE 65.4 11.7 16.8 17.1
MSExG 65.5 11.5 15.3 15.6
MSVVI 43.2 17.7 22.3 22.9
Bai et al. 74.9 17.7 13.1 8.4
SHAR-LAB 81.1 11.4 4.5 4.2
ACE 88.7 5.4 4.1 4.1
4. Discussion
4.1. Comparative Advantages of ACE
ACE offers an accurate, low-cost approach to FVC estimation that automates the process
of segmentation, eliminates human subjectivity and provides a more efficient means of surface
characterization, which closely matches the ground truth. Consistency in estimation across the
four crop types evaluated suggests ease of application to other crops.
The current approach appears to overcome many of the limitations faced by other techniques that estimate FVC. For example, ACE is able to accurately estimate FVC at any stage of canopy development, from sparse cover to near closure. Unlike the other algorithms presented, ACE maintains its accuracy even when the photographs are taken with multiple cameras (which increases the options for inexpensive data capture) and in different lighting conditions with varying degrees of shadowing. Accuracy, flexibility, full automation and speed are properties of ACE that are improvements over state-of-the-art algorithms.
ACE performs well in cases where the distribution of color values is not bimodal because it is flexible enough to correctly estimate segmentation thresholds even where two peaks are not visible. The test set had several such examples, and, in those cases, ACE outperformed the other algorithms. For instance, the explicit reliance on bimodality and the assumption that errors in classification are equal for the vegetation and background [31] led to under-segmentation and the inclusion of shaded background regions by SHAR-LAB.
4.2. Implications for Remote Sensing
Testing with remotely sensed corn produced an accuracy of 88.7%, similar to the accuracy obtained
with terrestrial images and more accurate than the other techniques. The low standard deviation
points to the consistency of the segmentations produced by ACE. The FVC estimates were similarly
close to the ground truth values with approximately 4% error on average. ACE outperformed all the
other algorithms, though the RMSE and its standard deviation are both statistically similar to those
produced by SHAR-LAB.
The accuracy and flexibility of the proposed algorithm point to its potential for use in more
remote sensing applications than the one currently evaluated. In particular, the ability to accurately
segment images containing significant shadowing of the background is seen as beneficial for remotely
captured images. This functionality is important when validating the FVC values estimated from
airborne or spaceborne platforms. The fact that ACE has been shown to accurately estimate the FVC of
different crop types from photographs taken at heights of up to 7 m emphasizes its utility as more than
a ground-based FVC estimation tool. The use of near-ground remotely sensed images provides the
opportunity to increase sample collection, survey efficiency, spatial and temporal resolution as well as
decrease the unit cost of sampling [22,25,39,40]. ACE, therefore, makes a contribution to the existing
approach and offers the prospect of advancing the emerging approach of near-ground remote sensing.
The variety of crops tested and the resulting segmentation accuracy indicates that ACE can work
with a range of crops with different growth patterns and leaf geometries. This is a distinct advantage
when validating FVC derived from remotely sensed images. It has been shown that the validation
process is more time consuming and complex, requiring more than one estimation algorithm, when
the FVC estimators are limited in the range of crops for which they produce accurate results [41].
ACE is the most consistently accurate of the algorithms tested. The results of tests on both
terrestrial and remotely sensed crops show that the software produces similarly accurate segmentations
and FVC values. This suggests that ACE is able to overcome issues specific to remotely sensed crops,
and thus it shows great potential for use in remote sensing applications.
4.3. Implications for Senescent Crop Cover
After the onset of senescence, there are no easy ways to measure fractional vegetation cover
due to the dual coloration of leaves at this stage and intermingling [42]. Preliminary analyses (data
not presented) suggest that ACE can be used to distinguish between senescent and green vegetation,
thus obviating the need to employ expensive, time-consuming methods to first calculate LAI and
then convert to FVC. Further research is required to validate the use of ACE to reduce the difficulties
associated with measuring fractional vegetation cover during senescence.
4.4. Application of ACE to Crop Modelling
Accurate estimates of FVC are required for crop simulation models. For models such as AquaCrop (Food and Agriculture Organization of the United Nations, Rome, Italy, http://www.fao.org/nr/water/aquacrop.html), an FAO model largely developed in recognition of impending food crisis in developing countries, canopy development is characterized by FVC only and not the more common leaf area index (LAI) [2,42]. It has recently been shown that, because ACE provides more accurate estimates of FVC than segmentation tools previously used, improved model outputs are obtained [43]. There are, as a result, implications for the wider use of crop modelling as a tool in the prediction of the effects of climate change on crop production in the Caribbean and regions with similar small island developing states. With access to accurate values of FVC, existing relationships between FVC and LAI can be validated or refined [11], and new relationships can be developed for crop species that have not yet been studied. This is especially important for foods, such as sweet potato (Ipomoea batatas L.), that are central to food and nutritional security. As there are no existing databases of FVC values for many of these crops, ACE will be useful in providing either FVC or an estimate of LAI values for use with crop models. Therefore, the results of such studies would give access to currently unavailable parameters and could increase the use of crop models to optimize agricultural crop production and policy.
5. Conclusions
ACE is an accurate, automated method of segmenting both terrestrial and remotely sensed digital
photographs, and thus estimating fractional vegetation cover. Evaluations of the accuracy of the method were successfully conducted using four different crops (corn, rapeseed, oat and flax), and its performance was compared to that of nine other segmentation algorithms. The ACE algorithm was then applied to the
terrestrial and remotely sensed photographs to produce estimates of FVC from the segmentation.
For all the crops in the study, ACE either outperformed or matched the FVC estimation of all the other
algorithms, including the state-of-the-art machine learning algorithm.
Such an accurate, fast and automated method for estimating FVC from digital photographs is
potentially beneficial for many applications, including crop modelling. With improved accuracy,
existing relationships between FVC and LAI can be validated or refined and new relationships
developed for crops that are yet to be studied. Automation facilitates easier measurement of a crop
development parameter, thereby removing one of the perceived barriers to calibrating and using crop
models, particularly within regions like the Caribbean.
Software Availability: ACE is available online at http://173.230.158.211/.
Acknowledgments: This research has been produced with the financial assistance of the European Union. The contents of this document are the sole responsibility of the University of the West Indies and its partners in the Global-Local Caribbean Climate Change Adaptation and Mitigation Scenarios (GoLoCarSce) project and can under no circumstances be regarded as reflecting the position of the European Union. The authors would also like to thank X.D. Bai and Xihan Mu for providing their algorithms for comparison and K.C. DeJonge for providing the remotely sensed corn images. The authors would also like to thank the anonymous reviewers for their comments and suggestions, which have served to significantly improve the paper.
Author Contributions: André Coy developed ACE and wrote the first draft of the manuscript. Dale Rankine gave advice during the development of ACE and contributed to the development and revision of the manuscript. Michael Taylor and Jane Cohen supervised the project that led to the development of the algorithm and made significant contributions to the development of the manuscript. David Nielsen provided the images for analysis and contributed to the revision of the manuscript.
Conflicts of Interest: The authors declare no conflict of interest.
References

1. Fiala, A.C.S.; Garman, S.L.; Gray, A.N. Comparison of five canopy cover estimation techniques in the western Oregon Cascades. For. Ecol. Manag. 2006, 232, 188–197. [CrossRef]
2. Steduto, P.; Hsiao, T.C.; Raes, D.; Fereres, E. AquaCrop—The FAO model to simulate yield response to water. I. Concepts and underlying principles. Agron. J. 2009, 101, 426–437. [CrossRef]
3. Behrens, T.; Diepenbrock, W. Using digital image analysis to describe canopies of winter oilseed rape (Brassica napus L.) during vegetative developmental stages. J. Agron. Crop Sci. 2006, 192, 295–302. [CrossRef]
4. Pan, G.; Li, F.; Sun, G. Digital camera based measurement of crop cover for wheat yield prediction. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Barcelona, Spain, 23–28 July 2007; pp. 797–800.
5. Richardson, M.D.; Karcher, D.E.; Purcell, L.C. Quantifying turfgrass cover using digital image analysis. Crop Sci. 2001, 4, 1884–1888. [CrossRef]
6. Jia, L.; Chen, X.; Zhang, F. Optimum nitrogen fertilization of winter wheat based on color digital camera image. Commun. Soil Sci. Plant Anal. 2007, 38, 1385–1394. [CrossRef]
7. Li, Y.; Chen, D.; Walker, C.N.; Angus, J.F. Estimating the nitrogen status of crops using a digital camera. Field Crop. Res. 2010, 118, 221–227. [CrossRef]
8. Sakamoto, T.; Gitelson, A.A.; Nguy-Robertson, A.L.; Arkebauer, T.J.; Wardlow, B.D.; Suyker, A.E.; Verma, S.B.; Shibayama, M. An alternative method using digital cameras for continuous monitoring of crop status. Agric. For. Meteorol. 2012, 154–155, 113–126. [CrossRef]
9. Ding, Y.; Zheng, X.; Zhao, K.; Xin, X.; Liu, H. Quantifying the impact of NDVIsoil determination methods and NDVIsoil variability on the estimation of fractional vegetation cover in Northeast China. Remote Sens. 2016, 8, 29. [CrossRef]
10. Liu, J.; Pattey, E. Retrieval of leaf area index from top-of-canopy digital photography over agricultural crops. Agric. For. Meteorol. 2010, 150, 1485–1490. [CrossRef]
11. Nielsen, D.; Miceli-Garcia, J.J.; Lyon, D.J. Canopy cover and leaf area index relationships for wheat, triticale, and corn. Agron. J. 2012, 104, 1569–1573. [CrossRef]
12. Córcoles, J.I.; Ortega, J.F.; Hernández, D.; Moreno, M.A. Estimation of leaf area index in onion (Allium cepa L.) using an unmanned aerial vehicle. Biosyst. Eng. 2013, 115, 31–42. [CrossRef]
13. Wang, Y.; Wang, D.J.; Zhang, G.; Wang, J. Estimating nitrogen status of rice using the image segmentation of G-R thresholding method. Field Crop. Res. 2013, 149, 33–39. [CrossRef]
14. Booth, D.T.; Cox, S.E.; Fifield, C.; Phillips, M.; Williamson, N. Image analysis compared with other methods for measuring ground cover. Arid Land Res. Manag. 2005, 19, 91–100. [CrossRef]
15. Guevara-Escobar, A.; Tellez, J.; Gonzalez-Sosa, E. Use of digital photography for analysis of canopy closure. Agrofor. Syst. 2005, 65, 1–11. [CrossRef]
16. Laliberte, A.S.; Rango, A.; Herrick, J.E.; Fredrickson, E.L.; Burkett, L. An object-based image analysis approach for determining fractional cover of senescent and green vegetation with digital plot photography. J. Arid Environ. 2007, 69, 1–14. [CrossRef]
17. Lee, K.; Lee, B. Estimating canopy cover from color digital camera image of rice field. J. Crop Sci. Biotechnol. 2011, 14, 151–155. [CrossRef]
18. Yu, Z.-H.; Cao, Z.-G.; Wu, X.; Bai, X.; Qin, Y.; Zhuo, W.; Xiao, Y.; Zhang, X.; Xue, H. Automatic image-based detection technology for two critical growth stages of maize: Emergence and three-leaf stage. Agric. For. Meteorol. 2013, 174, 65–84. [CrossRef]
19. Sui, R.X.; Thomasson, J.A.; Hanks, J.; Wooten, J. Ground-based sensing system for weed mapping in cotton. Comput. Electron. Agric. 2008, 60, 31–38. [CrossRef]
20. Abbasgholipour, M.; Omid, M.; Keyhani, A.; Mohtasebi, S.S. Color image segmentation with genetic algorithm in a raisin sorting system based on machine vision in variable conditions. Expert Syst. Appl. 2011, 38, 3671–3678. [CrossRef]
21. Przeszlowska, A.; Trlica, M.; Weltz, M. Near-ground remote sensing of green area index on the shortgrass prairie. Rangel. Ecol. Manag. 2006, 59, 422–430. [CrossRef]
22. Zelikova, T.J.; Williams, D.G.; Hoenigman, R.; Blumenthal, D.M.; Morgan, J.A.; Pendall, E. Seasonality of soil moisture mediates responses of ecosystem phenology to elevated CO2 and warming in a semi-arid grassland. J. Ecol. 2015, 103, 1119–1130. [CrossRef]
23. Mu, X.; Hu, M.; Song, W.; Ruan, G.; Ge, Y.; Wang, J.; Huang, S.; Yan, G. Evaluation of sampling methods for validation of remotely sensed fractional vegetation cover. Remote Sens. 2015, 7, 16164–16182. [CrossRef]
24. Ding, Y.; Zheng, X.; Jiang, T.; Zhao, K. Comparison and validation of long time serial global GEOV1 and regional Australian MODIS fractional vegetation cover products over the Australian continent. Remote Sens. 2015, 7, 5718–5733.
25. Chianucci, F.; Disperati, L.; Guzzi, D.; Bianchini, D.; Nardino, V.; Lastri, C.; Rindinella, A.; Corona, P. Estimation of canopy attributes in beech forests using true colour digital images from a small fixed-wing UAV. Int. J. Appl. Earth Obs. Geoinf. 2016, 47, 60–68.
26. Booth, D.T.; Cox, S.E.; Meikle, T.W.; Fitzgerald, C. The accuracy of ground-cover measurements. Rangel. Ecol. Manag. 2006, 59, 179–188.
27. Kataoka, T.; Kaneko, T.; Okamoto, H.; Hata, S. Crop growth estimation system using machine vision. In Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM 2003), Kobe, Japan, 20–24 July 2003; Volume 2, pp. b1079–b1083.
28. Neto, J.C. A Combined Statistical-Soft Computing Approach for Classification and Mapping Weed Species in Minimum-Tillage Systems. Ph.D. Thesis, University of Nebraska, Lincoln, NE, USA, 2006.
29. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66.
30. Bai, X.D.; Cao, Z.G.; Wang, Y.; Yu, Z.H.; Zhang, X.F.; Li, C.N. Crop segmentation from images by morphology modeling in the CIE L*a*b* color space. Comput. Electron. Agric. 2013, 99, 21–34.
31. Song, W.; Mu, X.; Yan, G.; Huang, S. Extracting the green fractional vegetation cover from digital images using a shadow-resistant algorithm (SHAR-LABFVC). Remote Sens. 2015, 7, 10425–10443.
32. Tian, L.F.; Slaughter, D.C. Environmentally adaptive segmentation algorithm for outdoor image segmentation. Comput. Electron. Agric. 1998, 21, 153–168.
33. Nielsen, D.C.; Lyon, D.J.; Hergert, G.W.; Higgins, R.K.; Calderon, F.J.; Vigil, M.F. Cover crop mixtures do not use water differently than single-species plantings. Agron. J. 2015, 107, 1025–1038. [CrossRef]
34. DeJonge, K.C.; Taghvaeian, S.; Trout, T.J.; Comis, L.H. Comparison of canopy temperature-based water stress indices for maize. Agric. Water Manag. 2015, 156, 51–62.
35. Tkalčič, M.; Tasič, J.F. Colour spaces: Perceptual, historical and applicational background. In Proceedings of the IEEE Region 8 EUROCON 2003, Computer as a Tool, Ljubljana, Slovenia, 22–24 September 2003; Volume 1, pp. 304–308.
36. O'Haver, T. Peak Finding and Measurement. Custom Scripts for the MATLAB Platform, 2006. Available online: http://www.wam.umd.edu/~toh/spectrum/PeakFindingandMeasurement.htm (accessed on 15 June 2014).
37. Booth, D.T.; Cox, S.E.; Berryman, R.D. Point sampling digital imagery with 'SamplePoint'. Environ. Monit. Assess. 2006, 123, 97–108.
38. Ponti, M.P. Segmentation of low-cost remote sensing images combining vegetation indices and mean shift. IEEE Geosci. Remote Sens. Lett. 2013, 10, 67–70.
39. Seefeldt, S.S.; Booth, D.T. Measuring plant cover in sagebrush steppe rangelands: A comparison of methods. Environ. Manag. 2006, 37, 703–711.
40. Booth, D.T.; Cox, S.E.; Meikle, T.; Zuuring, H.R. Ground-cover measurements: Assessing correlation among aerial and ground-based methods. Environ. Manag. 2008, 42, 1091–1100.
41. Jia, K.; Liang, S.; Gu, X.; Baret, F.; Wei, X.; Wang, X.; Yao, Y.; Yang, L.; Li, Y. Fractional vegetation cover estimation algorithm for Chinese GF-1 wide field view data. Remote Sens. Environ. 2016, 177, 184–191.
42. Steduto, P.; Hsiao, T.C.; Fereres, E.; Raes, D. Crop Yield Response to Water; FAO Irrigation and Drainage Paper 66; Food and Agricultural Organization of the United Nations (FAO): Rome, Italy, 2012.
43. Rankine, D.R.; Cohen, J.E.; Taylor, M.A.; Coy, A.D.; Simpson, L.A.; Stephenson, T.; Lawrence, J.L. Parameterizing the FAO AquaCrop model for rainfed and irrigated field-grown sweet potato. Agron. J. 2015, 107, 375–387.
© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).
... The most common approaches for canopy cover estimation include thresholding and classification/segmentation methods. Single or multiple thresholds are set to RGB imagery * Corresponding author transformed color space, such as CIE (Commission Internationale d'Eclairage) L*a*b* and HSI (hue saturation intensity) color (Liu et al., 2012), as well as vegetation indices such as ExG (Excessive Green Index) and CIVE (Color Index of Vegetation Extraction) (Meyer and Neto, 2008) through manual (Grieder et al., 2015), semi-automated (Xu et al., 2020) or fully automated (Coy et al., 2016;Yu et al., 2017) methods to separate the plant pixels from background non-green vegetation pixels such as soil, plant residue, and non-photosynthetic plant leaves etc. The thresholding approach is simple and widely employed in many applications, but it is less effective with images captured at various illumination conditions, or from different plant growth stages, or plants with variable water, nutrient and health status (Banerjee et al., 2020;Sadeghi-Tehran et al., 2017). ...
... However, classification methods require a certain level of human intervention preventing automation. Additionally, sample selection is often time consuming and model training and application may also be computationally intensive (Coy et al., 2016;Xu et al., 2020). In addition to using RGB digital imagery for canopy cover estimation, multispectral (Yu et al., 2017) and hyperspectral imagery (Banerjee et al., 2020) were utilized for canopy cover estimation through thresholding, machine learningbased classification, as well as spectral angle mapping method. ...
... Those mixed pixels would possibly be classified as non-vegetation by the rule-based method, which explains the relatively lower canopy cover derived by this method for the UAV-sorghum scenes (Figure 3(n) and (r)). It is worth noting that the performance of SVM and RF depends more on the quality of the samples selected for model training, and the criteria by which those samples are defined/selected are somewhat subjective to user experience and knowledge, for instance, whether mixed pixels are treated as green vegetation or as background soil (Coy et al., 2016; Xu et al., 2020). However, the rule-based methods provided more objective and consistent results. ...
Article
Full-text available
Canopy cover is a key agronomic variable for understanding plant growth and crop development status. Estimating canopy cover rapidly, accurately and in a fully automated manner is significant for high-throughput plant phenotyping. In this work, we propose a simple, robust and fully automated approach, namely a rule-based method, that leverages the unique spectral pattern of green vegetation in the visible (VIS) and near-infrared (NIR) spectral regions to distinguish green vegetation from background (i.e., soil, plant residue, non-photosynthetic vegetation leaves, etc.) and then derive canopy cover. The proposed method was applied to high-resolution hyperspectral and multispectral imagery collected from gantry-based scanner and Unmanned Aerial Vehicle (UAV) platforms to estimate canopy cover. Additionally, machine learning methods, i.e., Support Vector Machine (SVM) and Random Forest (RF), were employed as benchmark methods. The results show that the rule-based method demonstrated classification accuracies comparable to SVM and RF for both hyperspectral and multispectral datasets. Although the rule-based method is more sensitive to mixed pixels and shaded canopy regions, which potentially resulted in classification errors and underestimation of canopy cover in some cases, it showed better performance in detecting smaller leaves than SVM and RF. Most importantly, the rule-based method substantially outperformed the machine learning methods with respect to processing speed, indicating its greater potential for high-throughput plant phenotyping applications.
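A rule of this general kind can be expressed in a few lines. The sketch below is a hedged reconstruction of the idea only: the cited work's actual rules are not reproduced, and the band inputs and thresholds here are illustrative assumptions.

```python
# Hedged sketch of a rule-based green-vegetation test using the VIS/NIR
# spectral pattern of vegetation: a strong NIR plateau (high NDVI) plus
# the chlorophyll green reflectance peak. Thresholds are assumptions.
import numpy as np

def rule_based_vegetation(blue, green, red, nir, ndvi_min=0.4):
    ndvi = (nir - red) / (nir + red + 1e-9)       # red absorption vs NIR plateau
    green_peak = (green > red) & (green > blue)   # green peak in the visible
    return (ndvi > ndvi_min) & green_peak         # boolean vegetation mask

# Canopy cover is then the fraction of pixels flagged as vegetation:
# cover = rule_based_vegetation(b, g, r, n).mean()
```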
... Two approaches based on Gaussian mixture models were proposed, in 2012 by Liu et al. (2012) and in 2016 by Coy et al. (2016), to segment the plant canopy from the background and estimate cover. In these studies, RGB images were converted to the CIE L*a*b* color space. ...
... Finally, based on the fitted Gaussian distribution parameters, a proper threshold was determined to estimate fractional vegetation cover. Compared with the earlier studies (Liu et al., 2012; Coy et al., 2016), Li et al.'s method excluded uncertain (mixed) pixels in the histogram, distributed between the bimodal peaks of background and vegetation, to avoid the negative influence of mixed pixels. In low-resolution, low-altitude remote-sensing images, the number of mixed pixels increases. ...
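For orientation, the sketch below shows the generic shape of this Gaussian-mixture idea: fit a two-component mixture to the a* (green-red) channel and place a decision threshold between the two component means. It is a minimal illustration, not the exact procedure of Liu et al., Coy et al. or Li et al., and "plot.jpg" is a placeholder.

```python
# Two-component Gaussian mixture on the a* channel of CIE L*a*b*;
# more negative a* values tend toward green (vegetation).
import numpy as np
from skimage import io, color
from sklearn.mixture import GaussianMixture

rgb = io.imread("plot.jpg")
a_star = color.rgb2lab(rgb)[..., 1].reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(a_star)
m1, m2 = np.sort(gmm.means_.ravel())
threshold = (m1 + m2) / 2.0        # crude midpoint; the cited papers refine
veg = a_star.ravel() < threshold   # this choice using the fitted parameters
print("FVC =", veg.mean())
```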
Article
Full-text available
Because of the increasing global population, changing climate, and consumer demands for safe, environmentally friendly, and high-quality food, plant breeders strive for higher-yield cultivars by monitoring specific plant phenotypes. Developing new crop cultivars and monitoring them through current methods is time-consuming, sometimes subjective, and based on subsampling of microplots. High-throughput phenotyping using unmanned aerial vehicle-acquired aerial orthomosaic images of breeding trials improves and simplifies this labor-intensive process. To perform per-microplot phenotype analysis from such imagery, it is necessary to identify and localize individual microplots in the orthomosaics. This paper reviews the key concepts of recent studies and possible future developments regarding vegetation segmentation and microplot segmentation. The studies are presented in two main categories: (a) general vegetation segmentation using vegetation-index-based thresholding, learning-based, and deep-learning-based methods; and (b) microplot segmentation based on machine learning and image processing methods. In this study, we performed a literature review to extract the algorithms that have been developed in vegetation and microplot segmentation studies. Based on our search criteria, we retrieved 92 relevant studies from five electronic databases. We investigated these selected studies carefully, summarized the methods, and provided some suggestions for future research. Algorithms commonly used for vegetation segmentation in the field are reviewed, and the state-of-the-art algorithms for both vegetation and microplot segmentation are presented. Challenges created by missing plots and by gaps between plots in microplot segmentation are analyzed, and recommendations are given on algorithms and directions for future research in vegetation and microplot segmentation.
... The data on urban activities and mining activities include the population and the coal production in Xilinhot. The specific description of the data is shown in Table 2. FVC refers to the ratio of the area of vegetation (including leaves, stems, branches, etc.) vertically projected on the ground in each pixel to the total area of the pixel [22,23]. The value of FVC can objectively and accurately reflect the spatial distribution of vegetation in the study area. ...
Article
Full-text available
Identifying the spatial range of mining disturbance on vegetation is of significant importance for planning environmental rehabilitation in mining areas. This paper proposes a method to identify the spatial range of mining disturbance (SRMD). First, a non-linear, quantitative relationship between driving factors and fractional vegetation cover (FVC) was constructed using a geographically weighted artificial neural network (GWANN). The driving factors include precipitation, temperature, topography, urban activities, and mining activities. Second, the contribution of mining activities (Wmine) to FVC was quantified using the differential method. Third, the virtual contribution of mining activities (V-Wmine) to FVC during the period without mining activity was calculated and taken as the noise in the contribution of mining activities. Finally, the SRMD in 2020 was identified by a significance test based on the Wmine and the noise. The results show that: (1) the mean RMSE and MRE over the 11 years of the GWANN in the whole study area are 0.0526 and 0.1029, illustrating the successful construction of the relationship between driving factors and FVC; (2) the noise in the contribution of mining activities obeys a normal distribution, and the critical value for the significance test is 0.085; (3) most of the SRMD lies inside the 3 km buffer, with an average disturbance distance of 2.25 km for the whole SRMD, and the SRMD possesses significant directional heterogeneity. In conclusion, the usability of the proposed method for identifying SRMD has been demonstrated, with the advantages of eliminating coupled impacts, spatial continuity, and threshold stability. This study can serve as an early environmental warning by identifying SRMD and also provides scientific data for developing environmental rehabilitation plans in mining areas.
... This postprocessing should be performed with caution because varying illumination conditions can make this task difficult and cause errors in estimating L (Hancock et al., 2014; Calders et al., 2018; Díaz et al., 2021). To automate and improve the accuracy of this postprocessing, several protocols have been proposed (e.g., Nobis and Hunziker 2005, Liu and Pattey 2010, Coy et al. 2016), and noncommercial and commercial software has been made available (e.g., CAN-EYE, Weiss et al., 2004). For this segmentation task, deep-learning-based techniques (i.e., convolutional neural networks and their derivatives) are an attractive option, as several recent studies have reported their effectiveness in differentiating leaf and non-leaf pixels (Zhou et al., 2019; Ayhan and Kwan, 2020; Díaz et al., 2021). ...
Article
Full-text available
In year-round horticultural fruit production, the leaf area index (LAI; L) is a useful indicator for proper crop management. In this study, wide-angle time-lapse digital cameras were applied to estimate the L of a row-planted eggplant canopy throughout an entire growth period. Six wide-angle time-lapse cameras were placed above the canopy; three cameras pointed to the nadir, and the other three pointed at 57.5°-tilted angles from the nadir with different azimuthal orientations (i.e., along the row, at a 40° angle from the row, and perpendicular to the row). L was estimated based on gap-fraction theory using three methods: (1) the crop method, in which only a part of each photograph was used to calculate L; (2) the whole-image method, in which the entirety of each photograph was used to calculate L; and (3) the contour method, in which each photograph was divided into zones according to the view-zenith-angle contours and L was estimated through weighted averaging of the zonal Ls. The L values estimated using the crop method were heavily biased depending on the camera position and orientation, whereas the values obtained with the other two methods were less affected by the camera position and orientation, indicating the advantage of a wide viewing angle. Compared to the whole-image method, the contour method improved the L estimation accuracy. Under the assumption of a spherical leaf-angle distribution, nadir-looking photography tended to overestimate L, whereas 57.5°-angled photography was more consistent with the allometrically obtained reference L value. The accuracy of the L estimations obtained using nadir-looking photography was improved considerably by applying the beta leaf-angle distribution function with optimized parameters. This study highlights the (1) applicability of wide-angle photography for estimating the L of a row-crop canopy, (2) advantage of the contour method, and (3) importance of using a proper leaf-angle distribution.
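The gap-fraction inversion underlying these methods is compact enough to state directly. The sketch below assumes a spherical leaf-angle distribution, for which the gap fraction is P(θ) = exp(−G·L/cos θ) with projection coefficient G ≈ 0.5; near θ = 57.5° the inversion is famously insensitive to the true leaf-angle distribution, which is why that view angle appears in the abstract above.

```python
# Minimal sketch of gap-fraction inversion under a spherical leaf-angle
# distribution: L = -cos(theta) * ln(P) / G, with G ~ 0.5.
import math

def lai_from_gap_fraction(gap_fraction, view_zenith_deg=57.5, G=0.5):
    theta = math.radians(view_zenith_deg)
    return -math.cos(theta) * math.log(gap_fraction) / G

# e.g., a 20% gap fraction observed at 57.5 degrees:
print(lai_from_gap_fraction(0.20))   # ~1.73
```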
... Tian et al. [19] applied machine vision technology to estimate weed density and size in real time, which can effectively reduce the amount of herbicide used in corn and soybean fields. Coy et al. [20] proposed an unsupervised segmentation algorithm for the accurate estimation of vegetation coverage, which converts the original image into an opponent color space (lightness, red-green, and blue-yellow); the results showed an accuracy rate of more than 85%. Hamuda et al. [21] summarized image-processing techniques for the extraction and segmentation of wild plants, including color-based, threshold-based, and learning-based segmentation. ...
Article
Full-text available
Nowadays, the advanced comprehensive utilization of straw and the complete prohibition of burning fully covered straw in croplands have become increasingly important in agricultural engineering. As a direct straw-mulching method in China, conservation tillage with straw smashing is an effective way to reduce pollution and enhance fertility. In view of the high straw-returning yields, the complicated manual operation, and the poor performance of straw detection with machine vision, this study introduces a novel form of uniformity detection for straw based on overlapping region analysis. An image-processing technique using this novel overlapping region analysis was proposed to overcome the inefficiency and low precision of manual identification of straw uniformity. In this study, debris in the grayscale map was removed according to region characteristics. By using morphological theory with overlapping region analysis in low-density cases, straws of appropriate length can be identified and uniformity detection can then be accomplished. Compared with traditional threshold segmentation methods, the advantages of accurate identification, fast operation, and high efficiency contribute to the better performance of the innovative overlapping region analysis. Finally, the proposed algorithm was verified by detecting uniformity in low-density cases, with an average accuracy rate of 97.69%, providing a novel image recognition solution for automatic straw-mulching systems.
... The model for NGI_60 with combined data can be used as a universal FVC estimation model for semi-arid grassland. Previous models, including LABFVC (Cinat et al., 2019), SHAR-LABFVC (Song et al., 2015), and ACE (Coy et al., 2016), require manual image segmentation. In contrast, our model only requires a VIC calculation to estimate grassland FVC, avoiding the need for manual intervention to a certain extent. ...
Article
Measurements of fractional vegetation cover (FVC) are important for monitoring grassland growth and predicting aboveground biomass. Thus, a method to rapidly estimate grassland FVC is highly desired. Although smartphones provide a faster and less expensive method for estimating grassland FVC than satellites and unmanned aerial vehicles, their accuracy is not well understood. Here, we evaluated the use of smartphones to accurately estimate grassland FVC by taking photos in direct and indirect sunlight at three times of day (8:00 a.m., 12:00 p.m. and 4:00 p.m.). We found that smartphone photography could be applied to grassland FVC estimation by extracting a specified percentile of a color vegetation index (VIC) to reduce the effect of bright lighting. The 60th percentile value was more suitable for estimating grassland FVC than the 90th percentile proposed in a previous study. We also found that extracting the 60th percentile VIC value from smartphone photos taken at noon under direct sunlight efficiently minimized the influence of lighting on estimated grassland FVC. We propose a model based on the VIC percentile method that can quickly estimate grassland FVC using smartphone photography. This model is very versatile for estimating FVC in semi-arid grasslands. Our findings show that using smartphone photos to quickly estimate grassland FVC is feasible and could provide a practical solution to quickly and accurately estimate grassland FVC.
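The percentile idea described above is easy to sketch: summarize the per-pixel vegetation index by its 60th percentile and feed that into an empirical regression. The index choice (ExG), the file name and the calibration coefficients below are placeholders, not values from the cited study.

```python
# Hedged sketch of the VI_C percentile approach: 60th percentile of a
# per-pixel color vegetation index, then an empirical linear FVC model.
import numpy as np
from skimage import io

rgb = io.imread("grassland.jpg").astype(np.float64)
s = rgb.sum(axis=2) + 1e-9
r, g, b = rgb[..., 0] / s, rgb[..., 1] / s, rgb[..., 2] / s
vic = 2.0 * g - r - b                 # ExG used here as the VI_C (assumption)

vic_p60 = np.percentile(vic, 60)      # 60th percentile, per the study
a_coef, b_coef = 2.5, 0.05            # hypothetical coefficients fitted to field data
fvc = a_coef * vic_p60 + b_coef
```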
... Thresholding methods are relatively simple but may not be sufficiently adaptive and robust for dynamic field environments and multitemporal cases [71], particularly for images captured under various illumination conditions. Unsupervised and supervised classification or segmentation-based approaches have also been employed to discriminate soil and vegetation pixels, such as the unsupervised K-means clustering method [72], supervised classifiers including the logistic function [73] and the support vector machine [74], and different segmentation methods [75], [76]. Classification and segmentation methods can provide accurate results but are often time-consuming and computationally expensive. ...
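As a point of comparison with the thresholding sketches above, the unsupervised K-means route mentioned here can be illustrated as follows; this is a generic two-cluster sketch with a hypothetical input file, not the implementation of any cited work.

```python
# Sketch of unsupervised K-means soil/vegetation discrimination: cluster
# pixel colors into two groups and label the greener cluster as vegetation.
import numpy as np
from skimage import io
from sklearn.cluster import KMeans

rgb = io.imread("field.jpg").astype(np.float64) / 255.0
pixels = rgb.reshape(-1, 3)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)

# Pick the cluster whose center shows green dominance: G - (R + B) / 2.
greenness = km.cluster_centers_[:, 1] - km.cluster_centers_[:, [0, 2]].mean(axis=1)
veg_label = int(np.argmax(greenness))
fvc = (km.labels_ == veg_label).mean()
print("FVC =", fvc)
```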
Article
Full-text available
Near-earth hyperspectral big data present both huge opportunities and challenges for spurring developments in agriculture and high-throughput plant phenotyping and breeding. In this article, we present data-driven approaches to address the calibration challenges of utilizing near-earth hyperspectral data for agriculture. A data-driven, fully automated calibration workflow was developed that includes a suite of robust algorithms for radiometric calibration, bidirectional reflectance distribution function (BRDF) correction and reflectance normalization, soil and shadow masking, and image quality assessment. An empirical method that utilizes predetermined models between camera photon counts (digital numbers) and downwelling irradiance measurements for each spectral band was established to perform radiometric calibration. A kernel-driven semiempirical BRDF correction method based on the Ross Thick-Li Sparse (RTLS) model was used to normalize the data for both changes in solar elevation and sensor view angle differences attributed to pixel location within the field of view. Following rigorous radiometric and BRDF corrections, novel rule-based methods were developed to conduct automatic soil removal, a newly proposed approach was used for image quality assessment, and shadow masking and plot-level feature extraction were carried out. Our results show that the automated calibration, processing, storage, and analysis pipeline developed in this work can effectively handle massive amounts of hyperspectral data and address the urgent challenges related to the production of sustainable bioenergy and food crops, targeting methods to accelerate plant breeding for improving yield and biomass traits.
... Particularly, change detection of FVC is widely applied to estimate changes in terrestrial ecosystems, including soil and water conservation [8,9], climate change [10][11][12], land use/cover change and applications [13][14][15], and ecosystem evaluation [16,17]. Moreover, the spatial distribution of FVC and its spatiotemporal changes are also investigated in energy exchange calculations in different application fields, including water cycle models, vegetation photosynthesis, soil water evaporation, urban expansion, urban environment monitoring, and forest fragmentation [18,19]. Therefore, an understanding of the dynamic changes in urban FVC contributes to the sustainable development of ecological civilization in the urbanization process. ...
Article
Full-text available
Vegetation measures are crucial for assessing changes in the ecological environment. Fractional vegetation cover (FVC) provides information on the growth status, distribution characteristics, and structural changes of vegetation. An in-depth understanding of the dynamic changes in urban FVC contributes to the sustainable development of ecological civilization in the urbanization process. However, dynamic change detection of urban FVC using multi-temporal remote sensing images is a complex process and challenge. This paper proposes an improved FVC estimation model that fuses the optimized dynamic range vegetation index (ODRVI) model. The ODRVI model improves sensitivity to water content, roughness degree, and soil type by minimizing the influence of bare soil in areas of sparse vegetation cover, and it enhances the stability of FVC estimation in the near-infrared (NIR) band in areas of both dense and sparse vegetation cover by introducing the vegetation canopy vertical porosity (VCVP) model. The verification results confirmed that the proposed model performs better than typical vegetation index (VI) models for multi-temporal Landsat images. The coefficient of determination (R²) between the ODRVI model and the FVC was 0.9572, which was 7.4% higher than the average R² of the other typical VI models. Moreover, annual urban FVC dynamics were mapped using the proposed model for Hefei, China (1999–2018). The total area of all FVC grades decreased by 33.08% over the past 20 years in Hefei. The areas of the extremely low, low, and medium FVC grades exhibited apparent inter-annual fluctuations. The maximum standard deviation of the area change of the medium FVC grade was 13.35%. For the other FVC grades, the order of the standard deviation of the change ratio was extremely low FVC > low FVC > medium-high FVC > high FVC. The dynamic mapping of FVC revealed the intensity and direction of the influence of urban sprawl on vegetation coverage, which contributes to the strategic development of sustainable urban management plans.
Article
Fractional vegetation cover (FVC) plays an important role in the study of vegetation growth state, and the key issue is accurately segmenting and extracting green vegetation from the background. However, the shadows generated by natural light produce extreme illuminance differences in images, which greatly reduces vegetation extraction accuracy. The polarization information of ground objects is independent of the physical state of their reflectivity; it can be used to eliminate the influence of strong reflections in images to a certain extent, reduce the illuminance differences under extreme sunlight conditions, and help improve vegetation recognition under shadow conditions. To improve the accuracy of vegetation segmentation under shadow conditions, this study introduces polarized reflection information for vegetation, and an improved semantic segmentation network, namely a double-input residual network based on DeepLabv3plus (DIR_DeepLabv3plus) with fusion strategies based on concatenation and addition, is proposed. The network extracts low-level and high-level features at different spatial scales from both light intensity (red-green-blue (RGB)) images and degree of linear polarization (DoLP) images independently, using a deep residual network and an atrous spatial pyramid pooling (ASPP) structure, effectively improving the accuracy of vegetation segmentation in shadow situations. The results show that the mean intersection over union (mIoU) values for vegetation without shadows, with light shadows, and with shadows are 94.01%, 92.508% and 90.969%, respectively. Compared with the color index method and green fractional vegetation cover extraction from digital images using a shadow-resistant algorithm (SHAR-LABFVC), the proposed method provides greatly improved extraction accuracy, with mIoU values 0.18%, 1.00% and 1.49% higher for vegetation under the different shadow conditions than the method without polarization information. This study provides a new approach for vegetation segmentation and improves the accuracy of FVC calculations under shadow conditions.
Article
Full-text available
Background: Fractional vegetation cover (FVC) is an important parameter for evaluating crop-growth status. Optical remote-sensing techniques combined with the pixel dichotomy model (PDM) are widely used to estimate cropland FVC with medium to high spatial resolution on the ground. However, PDM-based FVC estimation is limited by effects stemming from the variation of crop canopy chlorophyll content (CCC). To overcome this difficulty, we propose herein a "fan-shaped method" (FSM) that uses a CCC spectral index (SI) and a vegetation SI to create a two-dimensional scatter map in which the three vertices represent high-CCC vegetation, low-CCC vegetation, and bare soil. The FVC at each pixel is determined from the spatial location of the pixel in the two-dimensional scatter map, which mitigates the effects of CCC on the PDM. To evaluate the accuracy of FSM estimates of the FVC, we analyze the spectra obtained from (a) the PROSAIL model and (b) a spectrometer mounted on an unmanned aerial vehicle platform. Specifically, we use both the proposed FSM and traditional remote-sensing FVC-estimation methods (linear and nonlinear regression and the PDM) to estimate soybean FVC.
Results: Field soybean CCC measurements indicate that (a) the soybean CCC increases continuously from the flowering growth stage to the later-podding growth stage and then decreases with increasing crop growth stage, and (b) the coefficient of variation of soybean CCC is very large in the later growth stages (31.58–35.77%) and over all growth stages (26.14%). FVC samples with low CCC are underestimated by the PDM. Linear and nonlinear regression underestimates (overestimates) FVC samples with low (high) CCC. The proposed FSM depends less on CCC and is thus a robust method that can be used for multi-stage FVC estimation of crops with strongly varying CCC.
Conclusions: Estimates and maps of FVC based on the later growth stages and on multiple growth stages should consider the variation of crop CCC. The FSM mitigates the effect of CCC by conducting a PDM at each CCC level. The FSM is a robust method for estimating FVC over multiple growth stages where crop CCC varies greatly.
Article
Full-text available
Fractional vegetation cover (FVC) is one of the most critical parameters in monitoring vegetation status. Accurate estimates of FVC are crucial for its use in land surface models. The dimidiate pixel model is the most widely used method for retrieval of FVC. The normalized difference vegetation index (NDVI) of the bare soil endmember (NDVIsoil) is usually assumed to be invariant, without taking into account the spatial variability of soil backgrounds. Two NDVIsoil determination methods were compared for estimating FVC. The first method used an invariant NDVIsoil for all of Northeast China. The second method used the historical minimum NDVI along with information on soil types to estimate NDVIsoil for each soil type. We quantified the influence of variations of NDVIsoil derived from the second method on FVC estimation for each soil type and compared the differences in FVC estimated by the two methods. The analysis shows that the uncertainty in FVC estimation introduced by NDVIsoil variability can exceed 0.1 (root mean square error, RMSE), with the largest errors occurring in vegetation types with low NDVI. NDVIsoil with higher variation causes greater uncertainty in FVC. The difference between the two versions of FVC in Northeast China is about 0.07, with an RMSE of 0.07. Validation using fine-resolution FVC reference maps shows that the second approach yields better estimates of FVC than using an invariant NDVIsoil value. The accuracy of FVC estimates is improved from 0.1 to 0.07 (RMSE), on average, in the croplands and from 0.04 to 0.03 in the grasslands. Soil backgrounds have impacts not only on NDVIsoil but also on other VIsoil. Further work will focus on the selection of optimal vegetation indices and the modeling of the relationships between VIsoil and soil properties for predicting VIsoil.
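For reference, the dimidiate (pixel dichotomy) model discussed above has the standard linear-mixing form FVC = (NDVI − NDVIsoil) / (NDVIveg − NDVIsoil), clipped to [0, 1]. The sketch below uses illustrative endmember values only; the whole point of the study above is that NDVIsoil should in practice vary with soil type.

```python
# Standard dimidiate pixel model with illustrative endmembers.
import numpy as np

def dimidiate_fvc(ndvi, ndvi_soil=0.05, ndvi_veg=0.90):
    fvc = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return np.clip(fvc, 0.0, 1.0)     # pure soil -> 0, full canopy -> 1

print(dimidiate_fvc(np.array([0.02, 0.45, 0.95])))   # -> [0.   0.47 1.  ] approx.
```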
Article
Full-text available
Validation over heterogeneous areas is critical to ensuring the quality of remote sensing products. This paper focuses on the sampling methods used to validate the coarse-resolution fractional vegetation cover (FVC) product in the Heihe River Basin, where the patterns of spatial variation in and between land cover types vary significantly in the different growth stages of vegetation. A sampling method, called the mean of surface with non-homogeneity (MSN) method, and three other sampling methods are examined with real-world data obtained in 2012. A series of 15-m-resolution fractional vegetation cover reference maps were generated using regressions of field-measured and satellite data. The sampling methods were tested using the 15-m-resolution normalized difference vegetation index (NDVI) and land cover maps over a complete period of vegetation growth. Two scenes were selected to represent situations in which sampling locations were sparsely and densely distributed. The results show that the FVCs estimated using the MSN method have errors below approximately 0.03 in the two selected scenes. The validation accuracy of the sampling methods varies with variations in the stratified non-homogeneity in the different growing stages of the vegetation. The MSN method, which considers both heterogeneity and autocorrelation between strata, is recommended for determining samplings prior to the design of an experimental campaign. In addition, the slight scaling bias caused by the non-linear relationship between NDVI and FVC samples is discussed. The positive or negative trend of the biases predicted using a Taylor expansion is found to be consistent with that of the real biases.
Article
Full-text available
Taking photographs with a commercially available digital camera is an efficient and objective method for determining green fractional vegetation cover (FVC) for field validation of satellite products. However, classifying leaves under shadow when processing digital images remains challenging and results in classification errors. To address this problem, an automatic shadow-resistant algorithm in the Commission Internationale d'Eclairage L*a*b* color space (SHAR-LABFVC), based on a documented FVC estimation algorithm (LABFVC), is proposed in this paper. The hue-saturation-intensity (HSI) representation is introduced in SHAR-LABFVC to enhance the brightness of shaded parts of the image. A lognormal distribution is used to fit the frequency distribution of vegetation greenness and to classify vegetation and background. Real and synthesized images are used for evaluation, and the results are in good agreement with visual interpretation, particularly when the FVC is high and the shadows are deep, indicating that SHAR-LABFVC is shadow resistant. Without specific improvements to reduce the shadow effect, the underestimation of FVC can be up to 0.2 in the flourishing period of vegetation at a scale of 10 m. Therefore, the proposed algorithm is expected to improve the validation accuracy of remote sensing products.
Article
Full-text available
Fractional vegetation cover (FVC) is one of the most critical parameters in monitoring vegetation status. Comprehensive assessment of FVC products is critical for their improvement and use in land surface models. This study investigates the performance of two major long time-series FVC products: GEOV1 and Australian MODIS. The spatial and temporal consistencies of these products were compared during the 2000–2012 period over the main biome types across the Australian continent. Their accuracies were validated against 443 in-situ FVC measurements taken during the 2011–2012 period. Our results show that there are strong correlations between the GEOV1 and Australian MODIS FVC products over most of the Australian continent, while they exhibit large differences and uncertainties in the coastal regions covered by dense forests. GEOV1 and Australian MODIS describe similar seasonal variations over the main biome types, with differences in magnitude, while Australian MODIS exhibits unstable temporal variations over grasslands and shifted seasonal variations over evergreen broadleaf forests. The GEOV1 and Australian MODIS products overestimate FVC over biome types with high vegetation density and underestimate FVC in sparsely vegetated areas and grasslands. Overall, the GEOV1 and Australian MODIS FVC products agree with in-situ FVC values with an RMSE of around 0.10 over the Australian continent.
Article
Full-text available
Infrared thermal radiometers (IRTs) are an affordable tool for researchers to monitor canopy temperature. In this maize experiment, six treatments with regulated deficit irrigation levels were evaluated. The main objective was to evaluate these six treatments in terms of six indices (three previously proposed and three introduced in this study) used to quantify water stress. Three are point-in-time indices, where one daily reading is assumed representative of the day (Crop Water Stress Index, CWSI; Degrees Above Non-Stressed, DANS; Degrees Above Canopy Threshold, DACT), and three integrate the cumulative impact of water stress over time (Time Temperature Threshold, TTT; Integrated Degrees Above Non-Stressed, IDANS; Integrated Degrees Above Canopy Threshold, IDACT). Canopy temperature was highly correlated with leaf water potential (R² = 0.895). To avoid potential bias, the lowest observation from the non-stressed treatment was chosen as the baseline for the DANS and IDANS indices. Early afternoon temperatures showed the most divergence, and thus this is the ideal time to obtain spot index values. Canopy temperatures and stress indices were responsive to evapotranspiration-based irrigation treatments. DANS and DACT were highly correlated with CWSI above the 28 °C corn threshold used in the TTT method, and all indices showed a linear relationship with soil water deficit at high temperatures. Recommendations are given to consider soils with high water-holding capacity when choosing a site for the non-stressed reference crop used in the DANS method. The DACT may be the most convenient index, as it requires only a single canopy temperature measurement yet has strong relationships with other indices and crop water measurements.
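Read literally from their names, the two simplest point-in-time indices above reduce to temperature differences. The sketch below is a hedged reconstruction on that basis only (the cited paper's exact definitions are not reproduced): DANS as canopy temperature minus a non-stressed reference canopy temperature, and DACT as canopy temperature minus a fixed crop threshold, with 28 °C quoted for corn. All numeric values are illustrative.

```python
# Hedged sketch of DANS and DACT as their names suggest they are defined.
def dans(tc_canopy, tc_nonstressed):
    return max(0.0, tc_canopy - tc_nonstressed)   # degrees above non-stressed

def dact(tc_canopy, threshold_c=28.0):
    return max(0.0, tc_canopy - threshold_c)      # degrees above canopy threshold

# Early-afternoon canopy at 31.2 C, non-stressed reference at 28.4 C:
print(round(dans(31.2, 28.4), 1), round(dact(31.2), 1))   # 2.8 3.2
```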
Article
Vegetation greenness, detected using digital photography, is useful for monitoring the phenology of plant growth, carbon uptake, and water loss at the ecosystem level. Assessing ecosystem phenology by greenness is especially useful in spatially extensive, water-limited ecosystems such as the grasslands of the western United States, where productivity is moisture dependent and may become increasingly vulnerable to future climate change. We used repeat photography and a novel means of quantifying greenness in digital photographs to assess how the individual and combined effects of warming and elevated CO2 impact ecosystem phenology (greenness and plant cover) in a semi-arid grassland over an 8-year period. Climate variability within and among years was the proximate driver of ecosystem phenology. Individual and combined effects of warming and elevated CO2 were significant at times but were mediated by variation in both intra- and inter-annual precipitation. Specifically, warming generally enhanced plant cover and greenness early in the growing season but often had a negative effect during the middle of the summer, offsetting the early-season positive effects. The individual effects of elevated CO2 on plant cover and greenness were generally neutral. Opposing seasonal variations in the effects of warming, and less so elevated CO2, cancelled each other out over an entire growing season, leading to no net effect of the treatments on the annual accumulation of greenness. The main effect of elevated CO2 dampened quickly, but warming continued to affect plant cover and plot greenness throughout the experiment. The combination of warming and elevated CO2 had a generally positive effect on greenness, especially early in the growing season, and in later years of the experiment enhanced annual greenness accumulation. However, inter-annual precipitation variation had a larger effect on greenness, with 2–3 times greater greenness in wet years than in dry years. Synthesis: seasonal variation in the timing and amount of precipitation governs grassland phenology, greenness, and the potential for carbon uptake. Our results indicate that concurrent changes in precipitation regimes mediate vegetation responses to warming and elevated atmospheric CO2 in semi-arid grasslands. Even small changes in vegetation phenology and greenness in response to warming and rising atmospheric CO2 concentrations, such as those we report here, can have large consequences for the future of grasslands.
Article
This dissertation describes a combined statistical/soft-computing approach for classifying and mapping weed species using color images in minimum-tillage systems. A new unsupervised separation index (ExG−ExR) is introduced to distinguish plant canopies from different soil/residue backgrounds. Results showed that ExG−ExR was significantly improved for all species and all three weeks over the previously published excess green (ExG) index. ExG−ExR performed very well for separating both pigweed and velvetleaf from bare soil and corn stalk backgrounds during the first and second weeks after crop emergence. A new algorithm for individual leaf extraction was introduced based on fuzzy color clustering and a genetic algorithm. Images of green canopies were segmented into fragments of potential leaf regions using the clustering algorithm. Fragments were then reassembled into individual leaves using a genetic optimization algorithm. The algorithm's performance was evaluated by comparing the number of leaves automatically extracted with the number of potential leaves observed visually; an overall performance of 75% for leaves correctly extracted was obtained. The elliptic Fourier method was next tested for characterizing the shape of hand-selected young soybean, sunflower, redroot pigweed, and velvetleaf leaves. Discriminant analysis of these shape coefficients suggested that the third week after emergence was the best time to identify plant species, with a correct classification average of 89.4%. When leaves from the second and third weeks were analyzed, a correct classification average of 89.2% was reached. An unsupervised method for plant species identification was finally tested. Elliptic Fourier descriptors not only provided leaflet shape information but also a lamina boundary template controlling where textural features were computed. Each extracted lamina shape was corrected such that all leaflets had the same orientation for texture extraction. The SAS PROC DISCRIM procedure was used to build a species classification model using selected Fourier coefficients and local homogeneity and entropy texture features. An overall success rate of 86% was obtained for plant species classification.