sensors
Article
Extracting Fractional Vegetation Cover from Digital
Photographs: A Comparison of In Situ, SamplePoint, and
Image Classification Methods
Xiaolei Yu and Xulin Guo *


Citation: Yu, X.; Guo, X. Extracting Fractional Vegetation Cover from Digital Photographs: A Comparison of In Situ, SamplePoint, and Image Classification Methods. Sensors 2021, 21, 7310. https://doi.org/10.3390/s21217310
Academic Editor: Francesca Cigna
Received: 31 August 2021
Accepted: 1 November 2021
Published: 3 November 2021
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Department of Geography and Planning, University of Saskatchewan, Kirk Hall, 117 Science Place,
Saskatoon, SK S7N 5C8, Canada; yux051@mail.usask.ca
* Correspondence: xulin.guo@usask.ca; Tel.: +1-306-966-5853
Abstract:
Fractional vegetation cover is a key indicator of rangeland health. However, survey techniques such as line-point intercept transects, pin frame quadrats, and visual cover estimates can be time-consuming and are prone to subjective variation. For this reason, most studies focus only on overall vegetation cover, ignoring variation in live and dead fractions. In the arid regions of the Canadian prairies, grass cover is typically a mixture of green and senescent plant material, and it is essential to monitor both green and senescent vegetation fractional cover. In this study, we designed and built a camera stand to acquire close-range photographs of rangeland fractional vegetation cover. Photographs were processed by four approaches, SamplePoint software, object-based image analysis (OBIA), and unsupervised and supervised classification, to estimate the fractional cover of green vegetation, senescent vegetation, and background substrate. These estimates were compared to in situ surveys. Our results showed that the SamplePoint software is an effective alternative to field measurements, while the unsupervised classification lacked accuracy and consistency. Object-based image classification performed better than the other image classification methods. Overall, SamplePoint and OBIA produced mean values equivalent to those produced by in situ assessment. These findings suggest an unbiased, consistent, and expedient alternative to in situ estimation of grassland vegetation fractional cover, one that also provides a permanent image record.
Keywords: fractional vegetation cover; SamplePoint; image classification; OBIA; image analysis; Northern Mixed Grasslands
1. Introduction
Fractional vegetation cover (FVC) is defined as the percentage of the ground surface covered by vegetation elements when viewed from overhead [1]. This metric describes vegetation quality and composition, which contribute to ecosystem change and control transpiration, photosynthesis, and other terrestrial processes [2]. For this reason, analyses of fractional vegetation cover are widely used in land surface modelling [3], ecosystem monitoring [4], and natural resource management [5]. The ability to conduct systematic, accurate, and repeatable estimation of vegetation fractional cover is a fundamental part of studies of ecosystem biodiversity and function.
In arid and semiarid rangelands, live and senescent vegetation materials are often intermixed and difficult to discriminate [6]. These materials both play vital, but different, roles in ecological cycling and natural grassland conservation. Senescent vegetation consists largely of prostrate litter and standing dead grass but, like live vegetation, provides forage for grazers [7], alters the microclimate at a local scale [8], provides habitat for many species at risk [9], and is crucial in grassland fire dynamics [10,11]. Hence, the simultaneous estimation of both green and senescent vegetation cover is important to understand, manage, and conserve arid and semiarid rangelands.
Sensors 2021,21, 7310. https://doi.org/10.3390/s21217310 https://www.mdpi.com/journal/sensors
Many efforts have been made to measure vegetation fractional cover in the field. Conventional methods of ground FVC estimation include the line-point intercept transect (LPIT) [12] and the Daubenmire cover class method [13]. Both are labor-intensive and prone to overestimating standing vegetation cover due to field survey design [14].
At large spatial scales, satellite-based remote sensing can be effective in estimating FVC [15,16]. Various vegetation indices have been developed to estimate FVC in arid and semiarid grasslands [2,15], and spectral mixture analysis (SMA) has been applied to FVC modeling [17]. However, satellite remote sensing approaches need to be calibrated with in situ data to maintain accuracy and account for spatial–temporal heterogeneity [18,19]. FVC field measurement therefore remains critical for providing a baseline to improve inversion algorithms and validate remote sensing products [20].
Close-range photography has become a popular method to estimate FVC in the field. It offers a nondestructive approach that has the potential to be as accurate as, or more accurate than, in situ techniques, while also being faster and less biased [21–24]. Digital images of small plots processed using supervised [25] or unsupervised [26] classification, object-based image analysis (OBIA) [27], and rule-based decision or machine learning methods [28] have succeeded in objectively quantifying percent vegetation cover in a range of environments. In concert, many tools have been developed to process close-range digital images, including SamplePoint [29], VegMeasure [30], and Canopeo [23]. However, most research has focused on estimating live or green vegetation fractional cover, ignoring senescent vegetation, which is a significant component of arid and semiarid prairie grasslands.
In arid and semiarid regions, senescent plant material includes standing dead biomass, dormant grass, and litter from nondecomposed biomass. These are often intermixed with the green vegetation, making it challenging to differentiate the senescent and green vegetation fractional cover from the background [6]. While a few studies have compared the results of different image-processing methods for estimating green, nongreen, and senescent vegetation [6,31], the accuracy of different approaches involving close-range imagery has yet to be thoroughly evaluated. This is particularly the case under conditions where senescent and green vegetation are mixed in native grasslands.
The main objective of this research is to evaluate the accuracy and consistency of vegetation fractional cover extracted using different methods, including in situ assessment and image analysis. To fulfill this objective, we used a camera stand to acquire close-range digital photographs in Grasslands National Park (GNP), Saskatchewan, a typical Northern Mixed Grassland region. The digital images were then analyzed with SamplePoint software for visual interpretation and with Environment for Visualizing Images (ENVI) software [32] for pixel-based classification and object-based image analysis (OBIA) to obtain the fractional cover of senescent and green vegetation. We used in situ assessment to quantify the fractional vegetation cover (FVC) as a reference. FVC estimates from image processing were compared to field measurements to corroborate links and consistency.
2. Materials and Methods
2.1. Study Area
Fieldwork was conducted in the west block of Grasslands National Park (GNP; 49° N, 107° W), located in the semiarid, mixed grassland ecoregion of southern Saskatchewan, Canada [33], from 28 June to 3 July 2018. GNP lies in a region with a mean annual temperature of 4.1 °C and total annual precipitation of 352.5 mm [34]. Almost half of the annual precipitation falls as rain during the spring growing season, followed by a long, dry summer [35], during which annual plants die and perennial herbaceous plants wither above ground while their below-ground parts persist [36]. Therefore, there is a high proportion of brown and grey senescent vegetation (nonphotosynthetic vegetation, NPV) above ground in addition to green, photosynthetic vegetation (PV).
2.2. Field Experiment Design
We built a collapsible camera stand, two meters high with a one square meter (1 m × 1 m) base frame; each edge of the frame was marked by decile ticks to facilitate in situ measurement (Figure 1). A tripod with a pan-tilt head and independent axes and controls was attached to the top of the camera stand (Figure 1b). A NIKON D5500 camera with an AF-S DX 18–55 mm f/3.5–5.6 G lens was attached to the head (Figure 1c). The camera was held at a nadir position relative to the base frame by adjusting the pan-tilt head. An umbrella was used to shade the base frame from above. We used a remote release to control the shutter and a mobile phone connected to the camera by Bluetooth to check photos. The camera was preset to shutter-priority mode with a maximum exposure time of 1/200 s to avoid wind effects on the grass canopy imagery [37].
Figure 1. Illustration of field setup: (a) study-site sampling design, (b) camera stand, (c) NIKON D5500 camera, and (d)
overhead photo of stand base frame.
We surveyed nine sites across the study area’s dominant topographical features: valley, sloped, and upland grasslands. The distance between sites was at least 1.5 km to prevent spatial autocorrelation [38]. At each site, two perpendicular transects, 100 m long and intersecting at their centers, were surveyed, oriented in the cardinal directions (Figure 1a). Five images were taken along each arm at 10 m intervals, excluding the center; thus, 20 images were recorded per site (Figure 1d). At each plot, we recorded the percentage cover of grass, forb, shrub, standing dead material, litter, lichen, moss, bare soil, and rock by visual assessment within each base frame after taking the photo of that frame. Percent cover was estimated to the nearest 5% for cover values ranging from 10 to 90% and to the nearest 1% for values less than 10% and greater than 90% [39,40]. To limit subjective bias, two people independently assessed the in situ cover and their interpretations were averaged. We then aggregated the original per-plot records into PV (green grass, forb, shrub, green moss), NPV (standing dead material, litter), and BS (bare soil, rock, lichen).
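The rounding and binning protocol above can be sketched as a small helper. This is an illustrative sketch only; the function names and category keys below are our assumptions, not part of any field software used in the study.

```python
# Mapping of the nine field categories to the three aggregate classes
# (PV = photosynthetic vegetation, NPV = nonphotosynthetic vegetation,
# BS = background substrate), as described in the survey protocol.
CATEGORY_BINS = {
    "grass": "PV", "forb": "PV", "shrub": "PV", "moss": "PV",
    "standing_dead": "NPV", "litter": "NPV",
    "bare_soil": "BS", "rock": "BS", "lichen": "BS",
}

def round_cover(value: float) -> int:
    """Round a percent-cover estimate per the protocol: nearest 5% for
    values between 10 and 90%, nearest 1% otherwise."""
    if 10 <= value <= 90:
        return 5 * round(value / 5)
    return round(value)

def bin_plot_record(record: dict) -> dict:
    """Sum a nine-category cover record into PV, NPV, and BS fractions."""
    totals = {"PV": 0.0, "NPV": 0.0, "BS": 0.0}
    for category, cover in record.items():
        totals[CATEGORY_BINS[category]] += cover
    return totals
```

In practice, the two observers' independently rounded records would each pass through `bin_plot_record` before averaging.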
2.3. Image Analysis
We estimated the fractional cover of PV, NPV, and BS from the digital image using four
methods: (1) visual classification using SamplePoint software, (2) unsupervised image clas-
sification, (3) supervised image classification, and (4) object-based image analysis (OBIA).
The unsupervised and supervised classifications as well as the OBIA were conducted
using ENVI 5.5 (Harris Geospatial Inc. Broomfield, CO, USA) and ArcMap 10.6 (Esri Inc.
Redlands, CA, USA).
SamplePoint is a popular software package for visual inspection of ground cover in grassland and pasture research and management [29]. The software loads images listed by the user in an Excel spreadsheet and systematically or randomly identifies and locates a user-defined number of sample points in each image. It then moves from one point to the next so that the user can classify each point visually [22]. We identified 100 points (a 10 × 10 grid) systematically spread across each image for visual classification, using the same nine categories as in the field assessment, and grouped the results into PV, NPV, and BS for further analysis. Two independent assessments were performed on the 180 images using the SamplePoint software, and the results from the two assessments were averaged.
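A systematic point layout of the kind SamplePoint applies can be approximated as follows. This is a minimal sketch under our own assumptions (cell-center placement, illustrative function name); it is not SamplePoint's documented internal algorithm.

```python
def systematic_grid(width: int, height: int, nx: int = 10, ny: int = 10):
    """Return (col, row) pixel coordinates of an nx-by-ny systematic grid,
    placing each sample point at the center of its grid cell."""
    xs = [int((i + 0.5) * width / nx) for i in range(nx)]
    ys = [int((j + 0.5) * height / ny) for j in range(ny)]
    # Row-major ordering mimics moving point-by-point across the image.
    return [(x, y) for y in ys for x in xs]
```

For a 10 × 10 grid this yields the 100 classification points per image used in the study design.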
For the unsupervised classification, we followed the methods described by Smith, Hill, and Zhang [26,41]. Images were transferred from the original RGB (red, green, blue) color space into HSI (hue, saturation, intensity) space and divided into 14 classes using the ISODATA algorithm in ENVI 5.5 image analysis software. The original 14 classes were visually examined with reference to the original photos and merged to derive PV, NPV, and BS. For the supervised classification, we used the maximum likelihood classification algorithm, predefining regions of interest (ROIs) for PV, NPV, and BS. For each class, at least 50 ROIs were selected for training.
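The RGB-to-HSI transfer that precedes the ISODATA clustering can be illustrated with a standard geometric HSI formulation. This is a sketch of one common definition and may differ in detail from the conversion implemented in ENVI.

```python
import math

def rgb_to_hsi(r: float, g: float, b: float):
    """Convert RGB (each component in [0, 1]) to HSI: hue in degrees,
    saturation and intensity in [0, 1], using the angle formulation."""
    i = (r + g + b) / 3.0                      # intensity: channel mean
    if i == 0:
        return 0.0, 0.0, 0.0                   # black: hue/saturation undefined
    s = 1.0 - min(r, g, b) / i                 # saturation
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0                                # achromatic: hue undefined
    else:
        theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        h = theta if b <= g else 360.0 - theta
    return h, s, i
```

Green vegetation pixels map to hues near 120°, which is what makes the HSI space useful for separating PV from brown NPV and substrate.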
For OBIA, we used the feature extraction module in ENVI 5.5. In this method, an image is segmented into homogeneous areas based on two parameters: scale and merge level (spectral information). The scale parameter is unitless and controls the relative size of image objects (polygons or segments); a smaller scale parameter results in more image objects. Merging combines adjacent segments with similar spectral attributes; a larger merge parameter causes more adjacent segments with similar colors and border sizes to be combined. Images were segmented at a 40–70 scale level and a 10–30 merge level; a high scale level yields fewer, larger segments, while a high merge level aggregates more of the small segments within larger, textured areas. Specific parameter settings were adjusted interactively using the preview window in the feature extraction module because the texture and color features of individual images depended on the site-specific plant composition and background [6,26]. This also allowed us to predefine the training data for PV, NPV, and BS. Each image was classified using the support vector machine (SVM) algorithm with all available attributes (spatial, spectral, and textural).
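The effect of the merge level can be illustrated with a simplified, hypothetical merging pass over a segment adjacency graph. This sketch uses a single mean-intensity attribute per segment and does not recompute merged means; ENVI's actual feature extraction merging is more sophisticated.

```python
def merge_segments(means, adjacency, merge_level: float):
    """Greedily merge adjacent segments whose mean-intensity difference is
    at most merge_level; a higher merge_level merges more aggressively.
    `means` maps segment id -> mean intensity; `adjacency` is a set of
    (id_a, id_b) pairs. Returns a dict mapping each id to its root label."""
    parent = {sid: sid for sid in means}

    def find(x):
        # Union-find root lookup with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in sorted(adjacency):
        if abs(means[a] - means[b]) <= merge_level:
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[rb] = ra
    return {sid: find(sid) for sid in means}
```

With a low merge level, only spectrally similar neighbors fuse (e.g., two patches of litter); raising the level collapses progressively more of the segmentation into larger objects.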
A total of 180 images (20 images per site for 9 sites) were used to compare the nine categories inventoried for both the field assessment and the SamplePoint classification (Figure 2). We binned the nine categories into PV, NPV, and BS and compared results from both the field and SamplePoint assessments based on these bins. The unsupervised, supervised, and OBIA classification methods were applied to 36 selected images (4 images randomly selected per site) using the PV, NPV, and BS categories. Results were compared to the field assessment.
Sensors 2021, 21, x FOR PEER REVIEW 5 of 16
180 images (20 images per site for 9 sites) were used to compare the nine categories
inventoried for both the field assessment and SamplePoint classification (Figure 2). We
binned the nine categories into PV, NPV, and BS and compared results from both field
and SamplePoint based on these bins. The unsupervised, supervised, and OBIA classifi-
cation methods were applied to 36 selected images (4 images randomly per site) using PV,
NPV, and BS categories. Results were compared to the field assessment.
Figure 2. Methodology flowchart.
Coefficient of determination (R2), root-mean-square error (RMSE) [41–43], and the Bland-Altman plot (Tukey mean difference plot) [44] were used to evaluate the comparison.
The Bland-Altman plot allows the identification of any systematic differences between two measurements, or of possible outliers, by plotting the differences between the two methods against their averages. The Cartesian coordinates of a given sample S with values S1 and S2 are:

S(x, y) = ((S1 + S2)/2, S1 − S2)    (1)

The mean of the n sample pairs’ differences (S1 − S2) is the estimated bias, and the standard deviation (SD, σ) of the differences indicates the random fluctuation around this mean.
In the Bland-Altman plot, horizontal lines are drawn at the mean difference and at the limits of agreement, defined as the mean difference plus and minus 1.96 times the SD of the differences. Lines at the mean difference plus and minus 3.0 times the SD are defined as the extreme limits of agreement. A Wilcoxon test [45] of the paired samples was performed to compare paired data between the in situ assessments and the four other methods (SamplePoint estimation, unsupervised classification, supervised classification, and OBIA). The Wilcoxon test was chosen to assess whether the population mean ranks differed between two applied FVC estimation methods. The paired Student’s t-test was not used because the data violate the normality assumption [46]. In the paired-samples Wilcoxon test, a p-value of 0.05 was selected as the threshold for significance.
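Equation (1) and the bias and limits-of-agreement definitions above can be computed directly. A minimal sketch (the function name is ours, and the sample data in the usage note are illustrative, not study values):

```python
from statistics import mean, stdev

def bland_altman(s1, s2):
    """Bland-Altman quantities for paired measurements s1, s2:
    per-pair (average, difference) coordinates as in Equation (1),
    the bias (mean difference), and the 1.96*SD limits of agreement."""
    diffs = [a - b for a, b in zip(s1, s2)]
    coords = [((a + b) / 2, a - b) for a, b in zip(s1, s2)]
    bias = mean(diffs)
    sd = stdev(diffs)                    # sample standard deviation
    return coords, bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

For example, `bland_altman([10, 20, 30, 40], [12, 18, 33, 41])` returns a bias of −1.0 with the limits of agreement placed symmetrically at ±1.96 SD around it; the extreme limits simply substitute 3.0 for 1.96.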
3. Results and Discussion
3.1. Comparison of SamplePoint Estimation and In Situ Assessment
SamplePoint FVC estimation was consistent with the in situ assessment; however, the coefficient of determination (R2) varied among land surface-cover subcategories (Figure 3). Shrub and standing dead material had the highest R2 values (0.85 and 0.73), while forb
and lichen had the lowest R2 values (0.36 and 0.57). Standing dead material had the highest RMSE (9.93%) (Figure 3). Shrub, standing dead material, and litter estimates from SamplePoint and in situ assessments were similar, as indicated by their regression and identity lines (Figure 3). Grass, bare ground, and lichen tend to be overestimated by SamplePoint when the fractional cover is larger than 20% (Figure 3). For the upscaled categories, PV has the highest R2 (0.78), while the R2 values for NPV and BS are 0.65 and 0.70, respectively (Figure 4).
The SamplePoint and in situ estimates of PV are close (Figure 4). The BS from SamplePoint tends to be overestimated above 25% fractional cover, compared with the in situ assessment (Figure 4). Similarly, NPV from SamplePoint is overestimated above 55% fractional cover and underestimated below 55% (Figure 4). Most of the differences between the SamplePoint assessment and the in situ estimation of PV are within a ±3σ range, with several exceptions very close to the ±3σ threshold (Figure 5).
This indicates that the SamplePoint and in situ methods are very similar. For NPV and BS, almost all the differences are within the ±3σ range, although there are outliers that depart from it (Figure 5 and Table 1). These results are comparable to those reported in Booth et al. [47]. Since the theoretical basis of SamplePoint relies on a discrete classification of certain points (a 10 × 10 grid in this study) rather than on global image classification, there may be considerable bias for images of complex scenes. Like in situ assessment, which can be subjective, visual interpretation using SamplePoint software depends on the investigator’s experience.
Figure 3. Comparison of in situ assessment and SamplePoint estimation for nine FVC categories (red dashed line is the 1:1 relationship; dark solid line is the linear regression). The regression significance level (p-value) is 0.01.
Figure 4. Comparison of in situ assessment and SamplePoint estimation for PV, NPV, and BS fraction cover (red dashed line is the 1:1 relationship; dark solid line is the linear regression). The regression significance level (p-value) is 0.01.
Figure 5. Bland-Altman plot of the difference between SamplePoint estimation and in situ assessment for PV, NPV, and BS fraction covers. Refer to the left panel for threshold values. The red dotted line is the regression between the two-method mean and the two-method difference.
Table 1. Mean, standard deviation (SD), and mean ± 1.96 (or 3) SD of the differences between in situ assessment and four imagery methods (SamplePoint estimation, unsupervised classification, supervised classification, and OBIA).

Method                        Class   Mean (%)   SD (%)   Mean ± 1.96SD     Mean ± 3SD
SamplePoint                   PV       0.45       8.76    −16.7 ~ +17.6     −25.8 ~ +26.7
SamplePoint                   NPV     −0.37      12.90    −25.7 ~ +24.9     −39.1 ~ +38.3
SamplePoint                   BS      −0.52      11.78    −23.6 ~ +22.6     −35.9 ~ +34.8
Unsupervised classification   PV      −4.39      15.35    −34.5 ~ +25.7     −50.4 ~ +41.7
Unsupervised classification   NPV     −9.22      16.83    −42.2 ~ +23.8     −59.7 ~ +44.5
Unsupervised classification   BS      14.06      17.92    −21.1 ~ +49.2     −39.7 ~ +67.8
Supervised classification     PV      −2.64       8.22    −18.8 ~ +13.5     −27.3 ~ +22.0
Supervised classification     NPV     −5.36      11.70    −28.3 ~ +17.6     −40.5 ~ +29.7
Supervised classification     BS       8.08      11.65    −14.8 ~ +30.9     −26.9 ~ +43.0
OBIA                          PV       1.31       7.59    −13.6 ~ +16.2     −21.5 ~ +24.1
OBIA                          NPV     −2.75       9.41    −21.2 ~ +15.7     −31.0 ~ +25.5
OBIA                          BS       1.75       8.72    −15.3 ~ +18.8     −24.4 ~ +27.9
We compared the quantile–quantile plots for the in situ assessment and SamplePoint classification of grass and NPV fractional cover (Figure 6). The in situ assessment of green grass cover based on the nine FVC categories had a clear clustering pattern (step curves) (Figure 6a). The piecewise polyline indicated that the in situ assessment had a categorical trend in the fractional cover estimation. This was caused by the protocol used in situ (as described in Section 2.2), as the fractional cover was estimated to the nearest 5% for values ranging from 10 to 90% and to the nearest 1% for values less than 10% and greater than 90% [39,40]. SamplePoint results resembled a normal distribution with a slight right skew (Figure 6b). This indicated that SamplePoint can achieve a continuous estimate of detailed ground fractional cover, even when inputs are discrete points (a 10 × 10 grid in this study).
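The quantile–quantile comparison itself amounts to pairing equal-rank order statistics of the two distributions. A minimal sketch, under our simplifying assumption of equally sized samples (the function name is illustrative):

```python
def qq_pairs(sample_a, sample_b):
    """Pair equal-rank quantiles of two equally sized samples, i.e. the
    points plotted in a quantile-quantile (qq) comparison."""
    if len(sample_a) != len(sample_b):
        raise ValueError("samples must have equal size in this sketch")
    # Sorting each sample puts matching ranks at matching indices.
    return list(zip(sorted(sample_a), sorted(sample_b)))
```

If the paired points fall on a straight line, the two estimation methods produce similarly shaped cover distributions; step patterns like those in Figure 6a reveal the categorical rounding of the in situ protocol.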
3.2. Comparison of Image Analysis Methods and In Situ Assessments

The image analysis methods used in this study (the unsupervised classification, the supervised classification, and the OBIA) performed differently from the in situ assessment (Figure 7). For all three categories (PV, NPV, and BS), the unsupervised classification had the lowest R2 (all below 0.5) and the largest RMSE, the OBIA had the highest R2 (all above 0.7) and a relatively smaller RMSE, and the supervised classification was intermediate (Figure 7). For the 36 images tested, the differences between the image analysis methods and the in situ estimates fell within the ±3σ range (Figure 8); however, the range varied among methods (Table 1 and Figure 8). We found larger SDs (>15%) for the unsupervised classification (Table 1). Hence, the ±1.96σ and ±3σ ranges were smallest for the OBIA, moderate for the supervised method, and largest for the unsupervised method (Figure 8).
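The agreement bands reported in Table 1 and drawn in Figure 8 are standard Bland–Altman limits of agreement, mean difference ± 1.96 SD (or ± 3 SD). A minimal sketch with invented paired values (not the study's data):

```python
import numpy as np

def bland_altman_limits(in_situ, method, k=1.96):
    """Limits of agreement between two paired series of cover estimates (%).
    Returns (mean difference, SD of differences, lower limit, upper limit)."""
    d = np.asarray(method, dtype=float) - np.asarray(in_situ, dtype=float)
    mean_d = d.mean()
    sd_d = d.std(ddof=1)  # sample SD, as is conventional for Bland-Altman
    return mean_d, sd_d, mean_d - k * sd_d, mean_d + k * sd_d

# Illustrative values only, not the study's measurements
in_situ = [12, 35, 50, 22, 8, 61]
obia    = [14, 33, 52, 25, 7, 60]
mean_d, sd_d, lo, hi = bland_altman_limits(in_situ, obia)
```

As a consistency check against Table 1: for OBIA PV (mean 1.31, SD 7.59), the same formula gives limits of roughly −13.6 and +16.2, matching the tabulated ±1.96SD range.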
Figure 7. Comparison of in situ assessment and three image analysis methods (unsupervised, supervised, and OBIA) for PV, NPV, and BS fractional covers (the red dashed line is the 1:1 relationship and the dark solid line is the linear regression). The regression significance level (p-value) is 0.01.
Figure 8. Bland–Altman plot of the difference between image analysis methods (unsupervised, supervised, and OBIA) and in situ assessment for PV, NPV, and BS fractional covers. Refer to the top-left panel for threshold values. The red dotted line is the regression between the two-method mean and the two-method difference.
Mean differences for both PV and NPV were negative for the supervised and unsupervised classifications (Table 1), indicating that these two methods underestimated PV and NPV compared with the in situ assessment. In the OBIA, PV and BS were overestimated (mean differences > 0), whereas NPV was underestimated (Table 1).
The unsupervised classification misclassified rock, moss/lichen, and high-reflectance regions in the background, leading to biased estimates of PV, NPV, and BS fractional cover (Figure 9). The supervised classification improved on the unsupervised method; however, the OBIA identified greater detail in the three fractional covers. Our findings are similar to those reported by Laliberte, Rango, Herrick, Fredrickson and Burkett [27], in which OBIA was also used to investigate the fractional cover of North American grassland. They suggested that shadow is the greatest problem in scene decomposition when applying OBIA to high-resolution, close-range digital photographs. This concern was also raised by Song, Mu, Yan and Huang [20] and was partially resolved with a shadow-resistant algorithm. We did not include a shadow-resistant method in our fractional cover estimation. However, we acknowledge that shadow is a problem for close-range photo processing, especially in heterogeneous grasslands with complex vertical structures and high biomass volumes. Shadow affects not only fractional cover estimation but also visual interpretation. This partially explains the outliers in our SamplePoint estimation (Figure 5): our original photos, despite being umbrella-shaded, still had shadow effects.
Figure 9. Example of image analyses: (a) original RGB image, (b) HIS image, (c) unsupervised image classification, (d)
supervised image classification, and (e) OBIA.
3.3. Differences between In Situ Assessment, Visual Classification with SamplePoint Software, and Image Classification Methods

We performed a paired-samples Wilcoxon test between the in situ assessment and the four remote methods for the PV, NPV, and BS fractional covers. Measurements from the SamplePoint software and the in situ sampling were not significantly different (threshold p = 0.05) for any of the three fractional covers, although the p-value for BS approached the threshold (Table 2). NPV and BS estimated from the unsupervised classification, and BS estimated from the supervised classification, were significantly different from the in situ assessment (Table 2). The OBIA assessment was not significantly different from the in situ assessment, although the p-values for NPV and BS were close to the threshold (Table 2). Thus, of the four methods, the SamplePoint assessment was the most consistent with the in situ assessment. Although its reliability would make the OBIA a suitable alternative to in situ methods, the OBIA requires sophisticated image processing and human training before it can be effective [19,27].
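A paired-samples Wilcoxon signed-rank test of the kind summarized in Table 2 can be run with SciPy. The cover values below are invented for illustration; `zero_method="pratt"` is one way to handle zero differences (cf. Pratt [46]), though the excerpt does not state which variant the study used:

```python
import numpy as np
from scipy.stats import wilcoxon

# Illustrative paired fractional-cover estimates (%), not the study's data
in_situ     = np.array([12, 35, 50, 22, 8, 61, 40, 15])
samplepoint = np.array([14, 33, 52, 25, 7, 60, 41, 13])

# Paired (one-sample-on-differences) Wilcoxon signed-rank test
stat, p = wilcoxon(in_situ, samplepoint, zero_method="pratt")
print(f"W = {stat}, p = {p:.3f}")
```

With these toy numbers the differences are small and mixed in sign, so the test does not reject the null hypothesis, mirroring the SamplePoint column of Table 2.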
Table 2. p-values of the paired-samples Wilcoxon test between in situ assessment and four other approaches (SamplePoint estimation, unsupervised classification, supervised classification, and object-based image analysis (OBIA)).

Method                          PV      NPV       BS
SamplePoint 1                   0.25    0.33      0.071
Unsupervised classification 2   0.11    0.003 *   0.0002 *
Supervised classification 2     0.08    0.10      0.0005 *
OBIA 2                          0.17    0.06      0.089

1 n = 180; 2 n = 36. * indicates p-value < 0.05.
We found that spatial scale and merging level had varying effects on OBIA processing within a single image. The vegetation-dominated part of the image was accurately assessed for green vegetation, dead and senescent materials, and background (Figure 10a,b). This explains why the OBIA had greater accuracy than the supervised and unsupervised classifications: it is based on relatively homogeneous segmented objects rather than pixels, whereas the training samples selected for the supervised classification were largely based on polygons containing numerous mixed pixels [48]. However, in bare-soil-dominated images, shrubs were incorrectly classified as background (Figure 10c,d, double arrow 1), as were portions of green leaf (Figure 10c,d, double arrow 2). Because shrub branches, green leaf, dead materials, and bare soil (as well as moss, lichen, and rocks) all had different morphologies, a global setting of scale and merging level was unable to segment a heterogeneous scene [49,50].
Different images with diverse species compositions required distinct scale and merging-level settings when using OBIA (Figures 10 and 11). Since juniper and needle-and-thread grass have different morphologies and community structures, the scale and merging level were 50 and 10 for site EC2, plot E3, and 40 and 5 for site UG2, plot S5. The latter image was more fragmented, with layers of green grass (top), senescent grass (middle), and dead material (bottom), whereas the former scene was relatively simple apart from the misclassification of shrub branches.
We tested the effect of spatial scale and merging level on OBIA classification (Figure 11). Larger scale and merging levels (60 and 10) caused misclassification of green grass (Figure 11b), while smaller scale and merging levels (40 and 5) produced better results (orange rectangle 1 in Figure 11a–c). A scaled-in view (Figure 11d,e, orange rectangles 2 and 3) revealed the pseudo-enlargement of green grass objects. This indicates that selecting segmentation and merging parameters appropriate to the scene composition is critical for accurate assessment with OBIA. As mentioned above, we used the preview window in the Feature Extraction Module of ENVI 5.5 for the interactive adjustment of these parameters; this process is time-consuming and requires knowledge of OBIA.
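ENVI's Feature Extraction Module is interactive, but the scale effect described above can be reproduced with any region-based segmenter. This sketch uses scikit-image's Felzenszwalb segmentation as an open-source stand-in (not the algorithm used in the study): a larger scale parameter yields fewer, larger objects, analogous to ENVI's scale level.

```python
import numpy as np
from skimage.segmentation import felzenszwalb

rng = np.random.default_rng(0)
# Synthetic 64x64 RGB "plot photo": a green patch on a bare-soil tone, plus noise
img = np.full((64, 64, 3), [0.5, 0.35, 0.2])   # invented bare-soil colour
img[16:48, 16:48] = [0.2, 0.6, 0.2]            # invented green-vegetation patch
img = np.clip(img + rng.normal(0, 0.02, img.shape), 0, 1)

# Larger scale -> coarser segmentation (fewer, bigger objects)
fine   = felzenszwalb(img, scale=10,  sigma=0.5, min_size=20)
coarse = felzenszwalb(img, scale=200, sigma=0.5, min_size=20)
print(len(np.unique(fine)), "segments vs", len(np.unique(coarse)), "segments")
```

Choosing the scale per scene, as done interactively in the study, amounts to picking the coarsest segmentation that still separates classes with different morphologies.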
As is apparent in these comparisons, the SamplePoint-processed results were highly related to the in situ estimation, even with nine different categories (Figure 3). The unsupervised image classification method was unable to discriminate PV, NPV, and BS with the desired accuracy, while the supervised image classification outperformed the unsupervised method. Compared with the in situ estimation, the OBIA had the highest accuracy among the three image classification methods.
Figure 10. Comparison between the original image and OBIA region means: (a) original image (vegetation (grass and shrub) dominated), (b) OBIA region mean of (a), (c) original image (bare-soil dominated), and (d) OBIA region mean of (c). (a,b) are subsets of site EC2, plot E3 (shrub (Juniperus) dominated). The scale level was 50 and the merging level was 10.
Figure 11. Illustration of spatial scale and merging-level effects on OBIA classification: (a) original image, (b) OBIA region mean with scale 60 and merging level 10, (c) OBIA region mean with scale 40 and merging level 5, (d) subset of (a), and (e) subset of (b). (a) was from site UG2, plot S5 (needle-and-thread grass (Hesperostipa comata) dominated).
4. Conclusions

In this study, a mobile camera stand equipped with a NIKON D5500 camera was used to photograph vegetation plots in a typical northern mixed grassland, Grasslands National Park, Canada. This grassland type carries a large amount of dead and senescent vegetation material. The imagery was processed by SamplePoint, unsupervised, supervised, and object-based image classification approaches to derive the vegetation fractional covers, which were compared with the in situ visual assessment.
Our results demonstrated that imagery processing methods can accurately determine the fractional vegetation cover of mixed grassland sample plots, with accuracy comparable to in situ measurement. SamplePoint estimates corresponded closely to the in situ assessments, accurately distinguishing and quantifying PV, NPV, and BS fractional covers as well as the detailed vegetation community categories. The object-based image analysis method performed better than the unsupervised and supervised classification methods and produced reasonable coefficients of determination (>0.7) for PV, NPV, and BS relative to the in situ assessment, although it required sophisticated processing knowledge. The unsupervised classification method, meanwhile, lacked the accuracy needed to discriminate fractional cover in mixed grassland plots. These results suggest that the purely image-based SamplePoint approach is an accurate and consistent alternative to in situ estimation. Further research into image-based estimation approaches could resolve ongoing issues with shadow and varied image-scene compositions.
Author Contributions:
Study conceptualization, manuscript review and editing, X.G.; established
the methods, conducted fieldwork, prepared manuscript drafts and revisions, X.Y. All authors have
read and agreed to the published version of the manuscript.
Funding:
This study was supported by the Canadian Natural Sciences and Engineering Research
Council [No. RGPIN-2016-03960] (X.G.) and the China Scholarship Council scholarship (X.Y.).
Data Availability Statement: Data available on request due to restrictions.
Acknowledgments:
The authors thank Parks Canada for assisting us with the fieldwork. We also
thank Tengfei Cui and Thuy Chu from the University of Saskatchewan, and Yunpei Lu, for their
contributions to the field data collection. We are grateful to D. Terrance Booth from the USDA for
valuable suggestions regarding data analysis and the design of the camera frame.
Conflicts of Interest: The authors declare no conflict of interest.
References

1. Purevdorj, T.S.; Tateishi, R.; Ishiyama, T.; Honda, Y. Relationships between percent vegetation cover and vegetation indices. Int. J. Remote Sens. 1998, 19, 3519–3535. [CrossRef]
2. Jiapaer, G.; Chen, X.; Bao, A. A comparison of methods for estimating fractional vegetation cover in arid regions. Agric. For. Meteorol. 2011, 151, 1698–1710. [CrossRef]
3. Zeng, X.; Dickinson, R.E.; Walker, A.; Shaikh, M.; DeFries, R.S.; Qi, J. Derivation and evaluation of global 1-km fractional vegetation cover data for land modeling. J. Appl. Meteorol. 2000, 39, 826–839. [CrossRef]
4. Yu, X.; Guo, X.; Wu, Z. Land surface temperature retrieval from Landsat 8 TIRS—Comparison between radiative transfer equation-based method, split window algorithm and single channel method. Remote Sens. 2014, 6, 9829–9852. [CrossRef]
5. Asner, G.P.; Heidebrecht, K.B. Spectral unmixing of vegetation, soil and dry carbon cover in arid regions: Comparing multispectral and hyperspectral observations. Int. J. Remote Sens. 2002, 23, 3939–3958. [CrossRef]
6. Yu, X.; Guo, Q.; Chen, Q.; Guo, X. Discrimination of senescent vegetation cover from Landsat-8 OLI imagery by spectral unmixing in the northern mixed grasslands. Can. J. Remote Sens. 2019, 45, 1–17. [CrossRef]
7. Skidmore, A.K.; Ferwerda, J.G.; Mutanga, O.; Van Wieren, S.E.; Peel, M.; Grant, R.C.; Prins, H.H.; Balcik, F.B.; Venus, V. Forage quality of savannas—Simultaneously mapping foliar protein and polyphenols for trees and grass using hyperspectral imagery. Remote Sens. Environ. 2010, 114, 64–72. [CrossRef]
8. Asner, G.P.; Heidebrecht, K.B. Desertification alters regional ecosystem–climate interactions. Glob. Chang. Biol. 2005, 11, 182–194. [CrossRef]
9. Lucas, R.; Blonda, P.; Bunting, P.; Jones, G.; Inglada, J.; Arias, M.; Kosmidou, V.; Petrou, Z.I.; Manakos, I.; Adamo, M. The Earth Observation Data for Habitat Monitoring (EODHaM) system. Int. J. Appl. Earth Obs. Geoinform. 2015, 37, 17–28. [CrossRef]
10. Hill, M.J. Vegetation index suites as indicators of vegetation state in grassland and savanna: An analysis with simulated SENTINEL 2 data for a North American transect. Remote Sens. Environ. 2013, 137, 94–111. [CrossRef]
11. Daubenmire, R. Ecology of fire in grasslands. In Advances in Ecological Research; Elsevier: Amsterdam, The Netherlands, 1968; Volume 5, pp. 209–266.
12. Floyd, D.A.; Anderson, J.E. A comparison of three methods for estimating plant cover. J. Ecol. 1987, 75, 221–228. [CrossRef]
13. Hanley, T.A. A comparison of the line-interception and quadrat estimation methods of determining shrub canopy coverage. J. Range Manag. 1978, 60–62. [CrossRef]
14. Jonasson, S. Evaluation of the point intercept method for the estimation of plant biomass. Oikos 1988, 52, 101–106. [CrossRef]
15. Jia, K.; Liang, S.; Gu, X.; Baret, F.; Wei, X.; Wang, X.; Yao, Y.; Yang, L.; Li, Y. Fractional vegetation cover estimation algorithm for Chinese GF-1 wide field view data. Remote Sens. Environ. 2016, 177, 184–191. [CrossRef]
16. Hill, M.J.; Zhou, Q.; Sun, Q.; Schaaf, C.B.; Palace, M. Relationships between vegetation indices, fractional cover retrievals and the structure and composition of Brazilian Cerrado natural vegetation. Int. J. Remote Sens. 2017, 38, 874–905. [CrossRef]
17. Guerschman, J.P.; Hill, M.J.; Renzullo, L.J.; Barrett, D.J.; Marks, A.S.; Botha, E.J. Estimating fractional cover of photosynthetic vegetation, non-photosynthetic vegetation and bare soil in the Australian tropical savanna region upscaling the EO-1 Hyperion and MODIS sensors. Remote Sens. Environ. 2009, 113, 928–945. [CrossRef]
18. Karl, J.W.; McCord, S.E.; Hadley, B.C. A comparison of cover calculation techniques for relating point-intercept vegetation sampling to remote sensing imagery. Ecol. Indic. 2017, 73, 156–165. [CrossRef]
19. Liu, N.; Treitz, P. Modelling high arctic percent vegetation cover using field digital images and high resolution satellite data. Int. J. Appl. Earth Obs. Geoinform. 2016, 52, 445–456. [CrossRef]
20. Song, W.; Mu, X.; Yan, G.; Huang, S. Extracting the green fractional vegetation cover from digital images using a shadow-resistant algorithm (SHAR-LABFVC). Remote Sens. 2015, 7, 10425. [CrossRef]
21. Mu, X.; Hu, R.; Zeng, Y.; McVicar, T.R.; Ren, H.; Song, W.; Wang, Y.; Casa, R.; Qi, J.; Xie, D.; et al. Estimating structural parameters of agricultural crops from ground-based multi-angular digital images with a fractional model of sun and shade components. Agric. For. Meteorol. 2017, 246, 162–177. [CrossRef]
22. Booth, D.T.; Cox, S.E.; Berryman, R.D. Point sampling digital imagery with 'SamplePoint'. Environ. Monit. Assess. 2006, 123, 97–108. [CrossRef]
23. Patrignani, A.; Ochsner, T.E. Canopeo: A powerful new tool for measuring fractional green canopy cover. Agron. J. 2015, 107, 2312–2320. [CrossRef]
24. Louhaichi, M.; Johnson, M.D.; Woerz, A.L.; Jasra, A.W.; Johnson, D.E. Digital charting technique for monitoring rangeland vegetation cover at local scale. Int. J. Agric. Biol. 2010, 12, 406–410.
25. Liu, Y.; Mu, X.; Wang, H.; Yan, G. A novel method for extracting green fractional vegetation cover from digital images. J. Veg. Sci. 2012, 23, 406–418. [CrossRef]
26. Smith, A.M.; Hill, M.J.; Zhang, Y. Estimating ground cover in the mixed prairie grassland of southern Alberta using vegetation indices related to physiological function. Can. J. Remote Sens. 2015, 41, 51–66. [CrossRef]
27. Laliberte, A.S.; Rango, A.; Herrick, J.E.; Fredrickson, E.L.; Burkett, L. An object-based image analysis approach for determining fractional cover of senescent and green vegetation with digital plot photography. J. Arid Environ. 2007, 69, 1–14. [CrossRef]
28. Malenovský, Z.; Lucieer, A.; King, D.H.; Turnbull, J.D.; Robinson, S.A. Unmanned aircraft system advances health mapping of fragile polar vegetation. Methods Ecol. Evolut. 2017, 8, 1842–1857. [CrossRef]
29. Booth, D.T.; Cox, S.E. Image-based monitoring to measure ecological change in rangeland. Front. Ecol. Environ. 2008, 6, 185–190. [CrossRef]
30. Louhaichi, M.; Hassan, S.; Johnson, D.E. VegMeasure: Image processing software for grassland vegetation monitoring. In Advances in Remote Sensing and Geo Informatics Applications; Springer: Berlin/Heidelberg, Germany, 2019; pp. 229–230.
31. Wang, B.; Jia, K.; Liang, S.; Xie, X.; Wei, X.; Zhao, X.; Yao, Y.; Zhang, X. Assessment of Sentinel-2 MSI spectral band reflectances for estimating fractional vegetation cover. Remote Sens. 2018, 10, 1927. [CrossRef]
32. Canty, M.J. Image Analysis, Classification and Change Detection in Remote Sensing: With Algorithms for ENVI/IDL and Python; CRC Press: Boca Raton, FL, USA, 2014.
33. Shorthouse, J.D. Ecoregions of Canada's prairie grasslands. Arthropods Can. Grassl. 2010, 1, 53–81. [CrossRef]
34. Environment Canada. 1981–2010 climate normals and averages. Can. Clim. Norm. 2015. Available online: https://climate.weather.gc.ca/climate_normals/index_e.html (accessed on 26 October 2021).
35. Huang, J.; Ji, M.; Xie, Y.; Wang, S.; He, Y.; Ran, J. Global semi-arid climate change over last 60 years. Clim. Dyn. 2016, 46, 1131–1150. [CrossRef]
36. Fischer, R.; Turner, N.C. Plant productivity in the arid and semiarid zones. Ann. Rev. Plant Physiol. 1978, 29, 277–317. [CrossRef]
37. Booth, D.T.; Samuel, E.C.; Mounier, L.; Douglas, E.J. Technical note: Lightweight camera stand for close-to-earth remote sensing. J. Range Manag. 2004, 57, 675–678. [CrossRef]
38. He, Y.; Guo, X.; Wilmshurst, J.; Si, B.C. Studying mixed grassland ecosystems II: Optimum pixel size. Can. J. Remote Sens. 2006, 32, 108–115. [CrossRef]
39. Davidson, A.; Csillag, F. The influence of vegetation index and spatial resolution on a two-date remote sensing-derived relation to C4 species coverage. Remote Sens. Environ. 2001, 75, 138–151. [CrossRef]
40. Zhang, C.; Guo, X. Measuring biological heterogeneity in the northern mixed prairie: A remote sensing approach. Can. Geogr. 2007, 51, 462–474. [CrossRef]
41. Booth, D.T.; Cox, S.E.; Meikle, T.W.; Fitzgerald, C. The accuracy of ground-cover measurements. Rangel. Ecol. Manag. 2006, 59, 179–188. [CrossRef]
42. Yu, X.; Wu, Z.; Jiang, W.; Guo, X. Predicting daily photosynthetically active radiation from global solar radiation in the contiguous United States. Energy Convers. Manag. 2015, 89, 71–82. [CrossRef]
43. Yu, X.; Guo, X. Hourly photosynthetically active radiation estimation in midwestern United States from artificial neural networks and conventional regressions models. Int. J. Biometeorol. 2016, 60, 1247–1259. [CrossRef]
44. Kozak, M.; Wnuk, A. Including the Tukey mean-difference (Bland–Altman) plot in a statistics course. Teach. Stat. 2014, 36, 83–87. [CrossRef]
45. Wilcoxon, F. Individual comparisons by ranking methods. Biometr. Bull. 1945, 1, 80–83. [CrossRef]
46. Pratt, J.W. Remarks on zeros and ties in the Wilcoxon signed rank procedures. J. Am. Stat. Assoc. 1959, 54, 655–667. [CrossRef]
47. Booth, D.T.; Cox, S.E.; Fifield, C.; Phillips, M.; Williamson, N. Image analysis compared with other methods for measuring ground cover. Arid Land Res. Manag. 2005, 19, 91–100. [CrossRef]
48. Feizizadeh, B.; Blaschke, T.; Tiede, D.; Moghaddam, M.H.R. Evaluating fuzzy operators of an object-based image analysis for detecting landslides and their changes. Geomorphology 2017, 293, 240–254. [CrossRef]
49. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16. [CrossRef]
50. Hay, G.; Castilla, G. Object-based image analysis: Strengths, weaknesses, opportunities and threats (SWOT). In Proceedings of the 1st International Conference OBIA, Salzburg University, Salzburg, Austria, 4–5 July 2006; pp. 4–5.
... Models that perform automatic image segmentation to estimate percent ground cover using downward-facing digital photographs have reported promising results (e.g. Abdalla et al., 2019;Yu and Guo, 2021); however, most of this work has been conducted in agriculture or grassland environments with characteristically uniform ground cover morphologies (e.g. McCool et al., 2018), and few studies have attempted photobased measurement of the more complex and heterogenous ground cover environments characteristic of forest ecosystems. ...
Article
Ground cover and surface vegetation information are key inputs to wildfire propagation models and are important indicators of ecosystem health. Often these variables are approximated using visual estimation by trained professionals but the results are prone to bias and error. This study analyzed the viability of using nadir or downward photos from smartphones (iPhone 7) to provide quantitative ground cover and biomass loading estimates. Good correlations were found between field measured values and pixel counts from manually segmented photos delineating a pre-defined set of 10 discrete cover types. Although promising, segmenting photos manually was labor intensive and therefore costly. We explored the viability of using a trained deep convolutional neural network (DCNN) to perform image segmentation automatically. The DCNN was able to segment nadir images with 95% accuracy when compared with manually delineated photos. To validate the flexibility and robustness of the automated image segmentation algorithm, we applied it to an independent dataset of nadir photographs captured at a different study site with similar surface vegetation characteristics to the training site with promising results.
Article
The empirical retrieval method based on vegetation indices (VIs) has been extensively utilized to estimate the photosynthetic vegetation fractional cover (FPV) and non-photosynthetic vegetation fractional cover (FNPV). These indices, however, saturate in high biomass environments and are easily influenced by external factors. Three red edge (RE) bands (i.e., RE1, RE2, RE3) are available on the Sentinel-2 satellite, providing new options for estimating FPV and FNPV. Here, sensitivity analysis from PROSAIL-PRO simulations provided a theoretical foundation for developing new indices. Sentinel-2 images and field observations were collected at three growth stages to test the original and newly developed indices for FPV and FNPV estimations. Compared to the original photosynthetic vegetation indices (PVIs) containing the near-infrared (NIR) and red bands, the optimal combinations for FPV estimation in August were RE3 and red bands, while the combination of RE2 and RE1 bands performed best in April. By introducing RE3, RE2, RE1, and red bands in a certain proportion, 4-band red edge PVIs had the strongest correlation with FPV. Although the optimal 2-band non-photosynthetic vegetation indices (NPVIs) in November were two shortwave infrared (SWIR) bands, the combinations of NIR and RE3 bands performed best for FNPV estimation in April. Combining SWIR1, SWIR2, NIR, and RE3 bands, three 4-band red edge NPVIs were put forward that had the highest accuracy for estimating FNPV. The most prominent achievement of the red edge indices is to suppress undervaluation when vegetation cover is high while mitigating overestimation at low vegetation cover levels. However, the optimum weighting parameters of the improved VIs differed depending on growth stage. We found that the novel 4-band red edge indices are useful for estimating FPV and FNPV.
Preprint
Full-text available
A common requirement of plant breeding programs across the country is companion planting: growing different species of plants in close proximity so they can mutually benefit each other. However, the determination of companion plants requires meticulous monitoring of plant growth. The technique of ocular monitoring is often laborious and error-prone. Image processing techniques can address the challenge of plant growth monitoring and provide robust solutions that assist plant scientists to identify companion plants. This paper presents a new image processing algorithm to determine the amount of vegetation cover present in a given area, called fractional vegetation cover. The proposed technique draws inspiration from the trusted Daubenmire method for vegetation cover estimation and expands upon it. Briefly, the idea is to estimate vegetation cover from images containing multiple rows of plant species growing in close proximity separated by a multi-segment PVC frame of known size. The proposed algorithm applies a Hough Transform and Simple Linear Iterative Clustering (SLIC) to estimate the amount of vegetation cover within each segment of the PVC frame. The analysis, when repeated over images captured at regular intervals, provides crucial insights into plant growth. As a means of comparison, the proposed algorithm is compared with SamplePoint and Canopeo, two trusted applications used for vegetation cover estimation. The comparison shows a 99% similarity with both SamplePoint and Canopeo, demonstrating the accuracy and feasibility of the algorithm for fractional vegetation cover estimation.
Article
Full-text available
The mixed grasslands of North America are ecosystems with a high volume of dead biomass. This characteristic underlies key ecosystem features such as the rate of carbon and nutrient uptake, heat flux exchange between the surface and the atmosphere, and wildlife habitat. Senescent vegetation is an important forage resource for grazing animals and is related to natural fire frequency and intensity. Therefore, quantitative estimation of photosynthetic vegetation (PV), senescent vegetation (NPV), and bare soil (BS) fraction is important for natural resource management. The authors propose an approach for extracting PV, NPV, and BS endmembers from the normalized difference vegetation index–dead fuel index (NDVI–DFI) plane by using Landsat-8 imagery. The constrained linear spectral unmixing model was applied to discriminate NPV, PV, and BS using the original spectral bands, the NDVI–DFI indices, and the original spectral bands plus the NDVI and DFI indices. As a comparison, the traditional NDVI–SWIR32 was also investigated. Results showed that the DFI performed better than the SWIR32 in predicting NPV from spectral unmixing. Index selection has a significant effect on NPV and BS cover fraction estimation. Choice of equation setup has a significant effect on the PV estimation. The methods proposed here can be applied to grassland ecosystems across the northern mixed grasslands region.
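Constrained linear spectral unmixing of the kind described can be sketched in a few lines: solve for nonnegative endmember fractions that sum to one and best reproduce the observed index values. The endmember coordinates below are invented for illustration, not the paper's NDVI–DFI endmembers:

```python
import numpy as np
from scipy.optimize import nnls

# Illustrative endmember signatures in a 2-D index space (e.g., NDVI, DFI);
# the numbers are made up for the sketch, not taken from the study.
endmembers = np.array([
    [0.80, 0.10],   # PV:  high NDVI, low DFI
    [0.15, 0.60],   # NPV: low NDVI, high DFI
    [0.05, 0.05],   # BS:  low on both indices
]).T                # shape (n_indices, n_endmembers)

def unmix(pixel, E, weight=100.0):
    """Nonnegative least-squares unmixing with a sum-to-one constraint,
    enforced softly by appending a heavily weighted row of ones."""
    A = np.vstack([E, weight * np.ones(E.shape[1])])
    b = np.append(pixel, weight)
    fractions, _ = nnls(A, b)
    return fractions

# A pixel that is a 60/30/10 mixture of PV/NPV/BS, then recovered by unmixing
true_f = np.array([0.6, 0.3, 0.1])
pixel = endmembers @ true_f
f = unmix(pixel, endmembers)
```

The augmented-row trick is a common way to combine `nnls` (which handles only nonnegativity) with the abundance sum-to-one constraint; dedicated fully constrained solvers exist as well.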
Article
Full-text available
Fractional vegetation cover (FVC) is an essential parameter for characterizing the land surface vegetation conditions and plays an important role in earth surface process simulations and global change studies. The Sentinel-2 missions carrying multi-spectral instrument (MSI) sensors with 13 multispectral bands are potentially useful for estimating FVC. However, the performance of these bands for FVC estimation is unclear. Therefore, the objective of this study was to assess the performance of Sentinel-2 MSI spectral band reflectances on FVC estimation. The samples, including the Sentinel-2 MSI canopy reflectances and corresponding FVC values, were simulated using the PROSPECT + SAIL radiative transfer model under different conditions, and random forest regression (RFR) method was then used to develop FVC estimation models and assess the performance of various band reflectances for FVC estimation. These models were finally evaluated using field survey data. The results indicate that the three most important bands of Sentinel-2 MSI data for FVC estimation are band 4 (Red), band 12 (SWIR2) and band 8a (NIR2). FVC estimation using these bands has a comparable accuracy (root mean square error (RMSE) = 0.085) with that using all bands (RMSE = 0.090). The results also demonstrate that band 12 had a better performance for FVC estimation than the green band (RMSE = 0.097). However, the newly added red-edge bands, with low scores in the RFR model, have little significance for improving FVC estimation accuracy compared with the Red, NIR2 and SWIR2 bands.
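The RFR workflow described above (simulate reflectance/FVC pairs, then regress FVC on band reflectances) can be sketched with scikit-learn. The toy linear mixing below stands in for the PROSPECT + SAIL simulations; all spectra and noise levels are illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Toy stand-in for radiative-transfer simulations: mix fixed "vegetation"
# and "soil" spectra by FVC and add noise. Band values are illustrative only.
veg  = np.array([0.05, 0.45, 0.20])   # e.g., Red, NIR2, SWIR2 of vegetation
soil = np.array([0.25, 0.30, 0.35])   # e.g., Red, NIR2, SWIR2 of soil

fvc = rng.uniform(0, 1, size=1000)
X = fvc[:, None] * veg + (1 - fvc[:, None]) * soil
X += rng.normal(0, 0.01, X.shape)     # simulated sensor/model noise

# Fit the regression and score it on a held-in subset of the samples
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, fvc)
pred = model.predict(X[:100])
rmse = float(np.sqrt(np.mean((pred - fvc[:100]) ** 2)))
```

`model.feature_importances_` is the quantity the study uses to rank bands (Red, SWIR2, NIR2 scoring highest in their results).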
Article
Full-text available
This article presents a method of object-based image analysis (OBIA) for landslide delineation and landslide-related change detection from multi-temporal satellite images. It uses both spatial and spectral information on landslides, through spectral analysis, shape analysis, textural measurements using a gray-level co-occurrence matrix (GLCM), and fuzzy logic membership functionality. Following an initial segmentation step, particular combinations of various information layers were investigated to generate objects. This was achieved by applying multi-resolution segmentation to IRS-1D, SPOT-5, and ALOS satellite imagery in sequential steps of feature selection and object classification, and using slope and flow direction derivatives from a digital elevation model together with topographically-oriented gray level co-occurrence matrices. Fuzzy membership values were calculated for 11 different membership functions using 20 landslide objects from a landslide training data. Six fuzzy operators were used for the final classification and the accuracies of the resulting landslide maps were compared. A Fuzzy Synthetic Evaluation (FSE) approach was adapted for validation of the results and for an accuracy assessment using the landslide inventory database. The FSE approach revealed that the AND operator performed best with an accuracy of 93.87% for 2005 and 94.74% for 2011, closely followed by the MEAN Arithmetic operator, while the OR and AND (*) operators yielded relatively low accuracies. An object-based change detection was then applied to monitor landslide-related changes that occurred in northern Iran between 2005 and 2011. Knowledge rules to detect possible landslide-related changes were developed by evaluating all possible landslide-related objects for both time steps.
Article
Accurate and efficient in situ measurement methods of leaf area index (LAI) and leaf angle distribution (LAD) are needed to estimate the fluxes of water and energy in agricultural settings. However, available methods to estimate these two parameters, especially LAD, are limited. In this study, we propose a field measurement method using multi-angular digital images to estimate LAI and LAD simultaneously from the area proportions of: (i) sunlit soil; (ii) sunlit leaves; (iii) shaded soil; and (iv) shaded leaves. A new expression of the fraction of sunlit leaves is developed based on the radiative transfer theory. Coupling the measured and modeled fractions with an optimization scheme, LAI and the LAD parameters are derived from inverting a fractional model of sunlit and shaded leaves and soil. Through four tests using simulated scenes and in situ measurements for row crops, it is determined that our method performs well. The absolute error of LAI estimation is less than 0.1 when LAI is low (i.e., <1.2), and the absolute deviations of LAI estimates are approximately 0.5 when the reference LAI is 3.5. The estimation errors of LAI and the G function (a representative of LAD which quantifies the projection of unit foliage area) for in situ measurements are respectively less than 0.2 and 0.06 in general. In addition, the accuracy of estimation is even higher when leaves are simulated as randomly distributed disks or observations from multiple azimuth planes are used. One of the most interesting features of this method is its ability to estimate reasonable LAD directly from the fractions of sunlit and shaded leaves, even when LAI is high (i.e., >3), so little background soil is seen. The sensitivity and uncertainty analysis is consistent with the estimation errors.
Theoretically, the application of this method is not limited to row crops or to field measurement, as the derived formulae of sunlit and shaded components can be used for other types of vegetation by introducing the clumping index and can be used in the modeling of canopy vegetation parameters (e.g., canopy reflectance).
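The inversion idea can be illustrated with the classical single-angle gap-fraction relation P(θ) = exp(−G(θ)·LAI/cos θ), a textbook special case rather than the authors' full four-fraction sunlit/shaded model:

```python
import math

def lai_from_gap_fraction(gap_fraction, theta_deg, G=0.5):
    """Invert the Beer-Lambert gap-fraction relation
    P(theta) = exp(-G * LAI / cos(theta)) for LAI.
    G = 0.5 corresponds to a spherical leaf-angle distribution."""
    theta = math.radians(theta_deg)
    return -math.cos(theta) * math.log(gap_fraction) / G

# Forward-simulate a canopy of LAI = 2.0 viewed at 30 degrees, then invert
true_lai = 2.0
p = math.exp(-0.5 * true_lai / math.cos(math.radians(30.0)))
lai = lai_from_gap_fraction(p, 30.0)
```

The paper's method goes further by splitting the visible scene into four sunlit/shaded fractions, which is what allows LAD (the G function) to be estimated jointly with LAI instead of being assumed as it is here.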
Article
Plants like mosses can be sensitive stress markers of subtle shifts in Arctic and Antarctic environmental conditions, including climate change. Traditional ground-based monitoring of fragile polar vegetation is, however, invasive, labour intensive and physically demanding. High-resolution multispectral satellite observations are an alternative, but even their recent highest achievable spatial resolution is still inadequate, resulting in a significant underestimation of plant health due to spectral mixing and associated reflectance impurities. To resolve these obstacles, we have developed a new method that uses low-altitude unmanned aircraft system (UAS) hyperspectral images of sub-decimeter spatial resolution. Machine-learning support vector regressions (SVR) were employed to infer Antarctic moss vigour from quantitative remote sensing maps of plant canopy chlorophyll content and leaf density. The same maps were derived for comparison purposes from the WorldView-2 high spatial resolution (2.2 m) multispectral satellite data. We found SVR algorithms to be highly efficient in estimating plant health indicators with acceptable root mean square errors (RMSE). The systematic RMSEs for chlorophyll content and leaf density were 3.5-6.0 and 1.3-2.0 times smaller, respectively, than the unsystematic errors. However, application of correctly trained SVR machines on space-borne multispectral images considerably underestimated moss chlorophyll content, while stress indicators retrieved from UAS data were found to be comparable with independent field measurements, providing statistically significant regression coefficients of determination (median r² = .50, p (t test) = .0072). This study demonstrates the superior performance of a cost-efficient UAS mapping platform, which can be deployed even under the continuous cloud cover that often obscures optical high-altitude airborne and satellite observations.
Antarctic moss vigour maps of appropriate resolution could provide timely and spatially explicit warnings of environmental stress events, including those triggered by climate change. Since our polar vegetation health assessment method is based on physical principles of quantitative spectroscopy, it could be adapted to other short-stature and fragmented plant communities (e.g. tundra grasslands), including alpine and desert regions. It therefore shows potential to become an operational component of any ecological monitoring sensor network.
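A support vector regression of plant-health indicators on band reflectances, as employed in the study, looks like this in scikit-learn. The data, target relation, and hyperparameters below are synthetic stand-ins, not the study's hyperspectral training set:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Toy band reflectances and a synthetic "chlorophyll content" target;
# the linear relation and noise level are invented for illustration.
X = rng.uniform(0, 1, size=(300, 5))
y = 40 * X[:, 2] - 15 * X[:, 4] + 20 + rng.normal(0, 0.5, 300)

# Standardizing inputs before an RBF-kernel SVR is standard practice
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=0.1))
svr.fit(X, y)
pred = svr.predict(X)
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

In a real retrieval the model trained on spectra/indicator pairs would then be applied pixel-wise to the UAS hyperspectral cube to produce the vigour maps.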
Article
This study explores the use of the relationship between the normalized difference vegetation index (NDVI) and the shortwave infrared ratio (SWIR32) vegetation indices (VI) to retrieve fractional cover over the structurally complex natural vegetation of the Cerrado of Brazil using a time series of imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS). Data from the EO-1 Hyperion sensor with 30 m pixel resolution is used to sample geographic and seasonal variation in NDVI, SWIR32, and the hyperspectral cellulose absorption index (CAI), and to derive end-member values for photosynthetic vegetation (PV), non-photosynthetic vegetation (NPV), and bare soil (BS) from a suite of protected and/or natural vegetation sites across the Cerrado. The end-members derived from relatively pure 30 m pixels are then applied to a 500 m pixel resolution MODIS time series using linear spectral unmixing to retrieve PV, NPV, and BS fractional cover (FPV, FNPV, and FBS). The two-way interaction response of MODIS-equivalent NDVI and SWIR32 was examined for regions of interest (ROI) collected within protected areas and nearby converted lands. The MODIS NDVI, SWIR32 and retrieved FPV, FNPV, and FBS are then compared to detailed cover and structural composition data from field sites, and the influence of the structural and compositional variation on the VIs and cover fractions is explored. The Hyperion ROI analysis indicated that the two-way NDVI–SWIR32 response behaved as an effective surrogate for the two-way NDVI–CAI response for the campo limpo/grazed pasture to cerrado sensu stricto woody gradient. The SWIR32 sensitivity to the NPV and BS variation increased as the dry season progressed, but Cerrado savannah exhibited limited dynamic range in the NDVI–CAI and NDVI–SWIR32 two-way responses compared to the entire landscape, which also comprises fallow croplands and forests.
Validation analysis of MODIS retrievals with QuickBird-2 images produced an RMSE value of 0.13 for FPV. However, the RMSE values of 0.16 and 0.18 for FBS and FNPV, respectively, were large relative to the seasonal and inter-annual variation. Analysis of site composition and structural data in relation to the MODIS-derived NDVI, SWIR32 and FPV, FNPV, and FBS, indicated that the VI signal and derived cover fractions were influenced by a complex mix of structure and cover but included a strong year-to-year seasonal effect. Therefore, although the MODIS NDVI–SWIR32 response could be used to retrieve cover fractions across all Cerrado land covers including bare cropland, pastures and forests, sensitivity may be limited within the natural Cerrado due to sub-pixel heterogeneity and limited BS and NPV sensitivity.
Article
Accurate and timely spatial predictions of vegetation cover from remote imagery are an important data source for natural resource management. High-quality in situ data are needed to develop and validate these products. Point-intercept sampling techniques are a common method for obtaining quantitative information on vegetation cover that have been widely implemented in a number of local and national monitoring programs. The use of point-intercept data in remote sensing projects, however, is complicated due to differences in how vegetation cover indicators can be calculated. Decisions on whether to use plant intercepts from any canopy layer (i.e., any-hit cover) or only the first plant intercept at each point (i.e., top-hit cover) can result in discrepancies in cover estimates which are used to train remotely-sensed imagery. Our objective in this paper was to explore the theory of point-intercept sampling relative to training and testing remotely-sensed imagery, and to test the strength of relationships between top-hit and any-hit methods of calculating vegetation cover and high-resolution satellite imagery in two study areas managed by the Bureau of Land Management in northwestern Colorado and northeastern California. We modeled top-hit and any-hit percent cover for six vegetation indicators from 5m-resolution RapidEye imagery using beta regression. Model performance was judged using normalized root mean-squared error (RMSE) from a 5-fold cross validation. Any-hit cover estimates were significantly higher (α < 0.05) than top-hit cover estimates for forbs and grasses in the White River study area, but only marginally higher in Northern California. Pseudo-R² values for beta regression models of vegetation cover from RapidEye image information varied from 0.1525 to 0.7732 in White River and 0.2455 to 0.6085 in Northern California, with little pattern to whether any-hit or top-hit indicators produced better model fit. 
However, normalized RMSE was lower for any-hit cover (indicating better model performance) or minimally higher than top-hit cover for all indicators in each study area. Our results do not support the idea that top-hit cover estimates from point-intercept sampling are the most appropriate for remote sensing applications in arid and semi-arid shrub-steppe environments. In fact, having two sets of different indicators calculated from the same data may cause additional confusion in a situation where there is already considerable debate on how vegetation cover should be measured and used. Ultimately, selection of indicators to use for developing remote sensing classification or predictive models should be based first on the meaning or interpretation of the indicator in the ecosystem of interest, and second on how well the indicator performs in modeling applications.
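The any-hit versus top-hit distinction reduces to which canopy intercepts are counted at each sampling point. A minimal sketch with invented point-intercept records:

```python
# Each point records its ordered list of canopy hits, top layer first; an
# empty list means the pin intercepted only bare ground. Data are invented
# for illustration.
points = [
    ["grass", "forb"],
    ["grass"],
    ["shrub", "grass"],
    [],
    ["forb"],
]

def top_hit_cover(points, species):
    """Fraction of points whose FIRST (topmost) intercept is `species`."""
    return sum(1 for hits in points if hits and hits[0] == species) / len(points)

def any_hit_cover(points, species):
    """Fraction of points where `species` is intercepted in ANY canopy layer."""
    return sum(1 for hits in points if species in hits) / len(points)

# grass: top hit at 2 of 5 points, any hit at 3 of 5
```

Because lower-layer intercepts can only add points, any-hit cover is always greater than or equal to top-hit cover for the same species, which is exactly the discrepancy the abstract discusses.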
Article
In this study, digital images collected at a study site in the Canadian High Arctic were processed and classified to examine the spatial-temporal patterns of percent vegetation cover (PVC). To obtain the PVC of different plant functional groups (i.e., forbs, graminoids/sedges and mosses), field near infrared-green-blue (NGB) digital images were classified using an object-based image analysis (OBIA) approach. The PVC analyses comparing different vegetation types confirmed: (i) the polar semi-desert exhibited the lowest PVC with a large proportion of bare soil/rock cover; (ii) the mesic tundra cover consisted of approximately 60% mosses; and (iii) the wet sedge consisted almost exclusively of graminoids and sedges. As expected, the PVC and green normalized difference vegetation index (GNDVI; (RNIR − RGreen)/(RNIR + RGreen)), derived from field NGB digital images, increased during the summer growing season for each vegetation type: i.e., ∼5% (0.01) for polar semi-desert; ∼10% (0.04) for mesic tundra; and ∼12% (0.03) for wet sedge, respectively. PVC derived from field images was found to be strongly correlated with WorldView-2 derived normalized difference spectral indices (NDSI; (Rx − Ry)/(Rx + Ry)), where Rx is the reflectance of the red edge (724.1 nm) or near infrared (832.9 nm and 949.3 nm) bands; Ry is the reflectance of the yellow (607.7 nm) or red (658.8 nm) bands, with R² values ranging from 0.74 to 0.81. NDSIs that incorporated the yellow band (607.7 nm) performed slightly better than the NDSIs without, indicating that this band may be more useful for investigating Arctic vegetation that often includes large proportions of senescent vegetation throughout the growing season.
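The GNDVI formula quoted in the abstract, (RNIR − RGreen)/(RNIR + RGreen), applies per pixel to the NIR and green bands of an NGB image. A short sketch (the reflectance patches are illustrative):

```python
import numpy as np

def gndvi(nir, green, eps=1e-12):
    """Green NDVI as defined in the text: (NIR - Green) / (NIR + Green).
    A tiny eps guards against division by zero in dark pixels."""
    nir = np.asarray(nir, dtype=float)
    green = np.asarray(green, dtype=float)
    return (nir - green) / (nir + green + eps)

# Illustrative 2x2 NIR and green reflectance patches from an NGB image
nir   = np.array([[0.50, 0.40], [0.30, 0.10]])
green = np.array([[0.10, 0.20], [0.10, 0.10]])
g = gndvi(nir, green)
```

The NDSI used for the WorldView-2 comparison has the same normalized-difference form, with Rx and Ry swapped in for the band pair of interest.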