A Half-Gaussian Fitting Method for Estimating Fractional Vegetation
Cover of Corn Crops Using Unmanned Aerial Vehicle Images
Linyuan Lia,b, Xihan Mua,b*, Craig Macfarlanec,d, Wanjuan Songa,b, Jun Chena,b, Kai Yane, Guangjian Yana,b
a State Key Laboratory of Remote Sensing Science, Jointly Sponsored by Beijing Normal University and
Institute of Remote Sensing and Digital Earth of Chinese Academy of Sciences
b Beijing Engineering Research Center for Global Land Remote Sensing Products, Institute of Remote
Sensing Science and Engineering, Faculty of Geographical Science, Beijing Normal University, Beijing
100875, China
c CSIRO, 147 Brockway Rd, Floreat WA 6014, Australia
d School of Agriculture and Environment, Faculty of Science, The University of Western Australia,
Crawley WA, Australia
e School of Land Science and Techniques, China University of Geosciences, Beijing, China.
* Corresponding Author
E-Mail: muxihan@bnu.edu.cn (X. Mu); Tel/Fax: +86-10-5880-2041
Highlights
A half-Gaussian mixture model is proposed to extract FVC from LARS images (HAGFVC).
HAGFVC is robust to variations in spatial resolution, mixed pixels and vegetation coverage.
HAGFVC outperforms previous methods developed for proximal images.
Abstract
Accurate estimates of fractional vegetation cover (FVC) using remotely sensed images collected by unmanned aerial vehicles (UAVs) offer considerable potential for field measurement. However, most existing methods, which were originally designed to extract FVC from ground-based remotely sensed images (acquired a few meters above ground level), cannot be directly used to process aerial images because of the presence of large quantities of mixed pixels. To alleviate the negative effects of mixed pixels, we propose a new method for decomposing the Gaussian mixture model and estimating FVC, namely, the half-Gaussian fitting method for FVC estimation (HAGFVC). In this method, the histograms of pure vegetation pixels and pure background pixels are first fitted using two half-Gaussian distributions in the Commission Internationale d'Eclairage (CIE) L*a*b* color space. A threshold is then determined from the fitted Gaussian parameters to generate a more accurate FVC estimate. We acquired low-altitude remote-sensing (LARS) images at different flight altitudes over a cornfield in three vegetative growth stages. The HAGFVC method successfully fitted the half-Gaussian distributions and obtained stable thresholds for FVC estimation. The results indicate that the HAGFVC method can be used to effectively and accurately derive FVC images, with a small mean bias error (MBE) and a root mean square error (RMSE) of less than 0.04 in all cases. By comparison, the other methods we tested performed poorly (RMSE of up to 0.36) because of the abundance of mixed pixels in LARS images, especially at high altitudes above ground level (AGL) or at moderate vegetation coverage. The results demonstrate the importance of developing image-processing methods for LARS images that specifically account for mixed pixels. Simulations indicated that the theoretical accuracy of the HAGFVC method (assuming no errors in fitting the half-Gaussian distributions) corresponds to an RMSE of less than 0.07. This method thus provides a useful approach for efficiently estimating FVC over large areas using LARS images.
Keywords: fractional vegetation cover (FVC), unmanned aerial vehicle (UAV), low-altitude remote sensing (LARS), digital photography, half-Gaussian distribution, histogram threshold
1. Introduction
Fractional vegetation cover (FVC) plays a key role in land surface processes, including carbon and water cycles (Jung et al., 2006) and energy transfer (Sellers, 1997). It is also an important data product in numerical weather prediction (Gutman and Ignatov, 1997) and high-precision agricultural analysis (Hunt et al., 2014; Matese et al., 2015). To meet the requirements of FVC mapping and the validation of satellite products, rapid and accurate measurements of FVC are necessary (Mu et al., 2015; Song et al., 2017). Hence, various methods have been developed to measure FVC for these applications, including visual estimation, direct sampling and digital photography (Muir et al., 2011). Among these methods, photography provides the best performance in terms of efficient and accurate validation of satellite remote sensing products for high-precision applications (Yan et al., 2012).
Proximal (very close range, i.e., a few meters) sensing methods have a clear advantage over satellite remote sensing in terms of spatial resolution and flexibility. The data obtained from proximal sensing can provide highly accurate estimates of FVC directly from images (Liu and Pattey, 2010; Macfarlane and Ogden, 2012; Song et al., 2015), whereas satellite remote sensing images typically require that FVC be estimated from calibrations of vegetation indices against independent estimates of FVC obtained by proximal sensing (Carlson and Ripley, 1997). However, traditional proximal sensing methods lack the spatial coverage needed for mapping FVC over large regions, are potentially labor intensive even over medium-scale areas, and local conditions may limit site access. Recent technological innovations have led to an increase in the availability of unmanned aerial vehicles (UAVs) (Watts et al., 2012), which potentially overcome many limitations of both traditional proximal and satellite imagery platforms. Low-altitude remote-sensing (LARS) UAVs are advantageous because of their flexibility, operational ability in a variety of environmental conditions, and capacity for mapping at intermediate spatial scales. The application of UAVs has extended to crop monitoring, precision agriculture and other Earth science studies (Bhardwaj et al., 2016; Zarco-Tejada et al., 2012). Researchers widely agree that commercial cameras mounted on UAVs are powerful tools for assessing FVC (Chianucci et al., 2016; Torres-Sánchez et al., 2014).
UAVs are flexible in terms of their flight altitude, which facilitates the collection of imagery at a range of spatial scales (Mesas-Carrascosa et al., 2014). For example, Chapman et al. (2014) deployed UAVs fitted with a fixed, wide-angle lens at heights ranging from 20 m to 80 m to evaluate various plant breeding trials. Generally, when UAVs are required to map FVC rapidly over larger areas, the flight altitude must be increased, which reduces the spatial resolution. Spatial resolution could be maintained by increasing the camera focal length (narrowing the field of view) as altitude increases, but this would increase the number of flights required by UAVs that frequently have only short flight times, which would negate one of the main advantages of UAVs. As a result, UAVs are often flown at varied altitudes but constant focal length (e.g., Samseemoung et al., 2012). However, reducing the spatial resolution of LARS imagery increases the proportion of mixed pixels, which is likely to reduce the accuracy of medium-scale FVC mapping (Hsieh et al., 2001; Jones and Sirault, 2014).
Image analysis methods developed for proximal sensing are poorly suited to estimating FVC from LARS when mixed pixels are abundant in the images. Hsieh et al. (2001) established a simulation scheme to assess the effect of spatial resolution on classification accuracy and found that classification errors increased rapidly with decreasing spatial resolution. Jones and Sirault (2014) reported that a low spatial resolution has a significant negative influence on image classification. Torres-Sánchez et al. (2014) observed a decrease in the accuracy of FVC estimates in the early growth stages of wheat when the spatial resolution of LARS imagery was reduced.
Early image analysis methods depended on supervised classification, which requires human intervention, has low operational efficiency and produces noisy results. Later, automatic classification methods were based on unsupervised clustering algorithms, category tree methods and threshold-based methods (Yan et al., 2012). Researchers developed numerous threshold-based methods based on vegetation indices in the red-green-blue (RGB) color space; such indices include the excess green index (Woebbecke et al., 1995; Liu and Pattey, 2010), the normalized difference index (Pérez et al., 2000) and the green leaf algorithm (Chianucci et al., 2016). Other color spaces, such as the Commission Internationale d'Eclairage (CIE) L*a*b* and hue saturation intensity (HSI) spaces, have also been used for classification (Liu et al., 2012; Macfarlane and Ogden, 2012). These automatic algorithms have modestly improved the efficiency of validation. However, they were specifically developed for proximally sensed images and are unsuited to images containing many mixed pixels. In addition, previously reported studies tended to use UAV-based commercial cameras to collect images over sparse scenes, such as early crops and rangeland (Rango et al., 2009; Torres-Sánchez et al., 2014), while densely vegetated scenes (FVC > 0.7) have seldom been studied.
In this study, we propose an image analysis method for estimating FVC, HAGFVC, that is scale-invariant and specifically addresses the problem of large and variable numbers of mixed pixels in LARS images acquired at varying altitudes. The theory and implementation of the half-Gaussian fitting method for extracting FVC are described in Section 2. Three published methods, LAB2 (Macfarlane and Ogden, 2012), the Shadow-Resistant Algorithm for Extracting the Green FVC (SHAR-LABFVC; Song et al., 2015) and the excess green vegetation index (ExG; Woebbecke et al., 1995), are introduced for comparison as well. Section 3 describes the real and simulated data used to validate and analyze the HAGFVC method. In Section 4, the results of the HAGFVC method and the three other methods are compared, and an uncertainty analysis is presented. Sections 5 and 6 present the discussion and conclusions, respectively.
2. Methods
2.1. Gaussian mixture model for FVC
For vegetated surfaces, the CIE a* distribution of an image is usually modeled as a Gaussian mixture model (GMM) (Coy et al., 2016; Liu et al., 2012). In proximally sensed images, which are assumed to contain almost no mixed pixels, the GMM derives from the distributions of pure vegetation and pure background material and exhibits a distinctly bimodal distribution (Liu et al., 2012; Song et al., 2015). This mixture distribution function H(x) can be given by:

H(x) = w_v·N(x; μ_v, σ_v) + w_b·N(x; μ_b, σ_b)    (1)

where w, μ and σ are the weight, mean value and standard deviation, respectively; the subscripts v and b indicate vegetation and background, respectively; N(x; μ, σ) stands for a Gaussian distribution function; and x is the CIE a* value. Fig. 1a shows an example of the a* distribution for an image with negligible mixed pixels. The GMM distribution is characterized by two distinct peaks, representing vegetation and background. In this situation, it is straightforward to decompose the GMM and select a reasonable threshold to separate green vegetation from the background using automated thresholding methods (e.g., the T2 thresholding method in Liu et al., 2012).
In LARS images, as the spatial resolution decreases, many mixed pixels occur. As a result, the GMM consists of three components: pure vegetation, pure background and mixed pixels. The shape of the bimodal GMM is obscured because mixed pixels render the peaks of the vegetation and background less distinct. Generally, the GMM distribution becomes weakly bimodal or even unimodal. This mixture distribution function F(x) can be expressed as:

F(x) = w_v'·N(x; μ_v', σ_v') + w_b'·N(x; μ_b', σ_b') + w_m'·M(x)    (2)

where w', μ' and σ' are the weight, mean value and standard deviation after the resolution reduction, respectively. The subscript m refers to the mixed pixels. M(x) is an unknown probability density function of the mixed pixels which, in reality, is located between the vegetation and background components on the a* axis because a mixed pixel is a combination of these two pure components. Fig. 1b shows an example of the a* distribution for an image with many mixed pixels. Each Gaussian component is more indistinct due to the presence of mixed pixels. Furthermore, as the image resolution decreases, the difficulty of decomposing the GMM increases, leading to more errors if Eq. (1) is used.
Fig. 1. Schematic diagrams of the GMM distribution of CIE a* values at different spatial resolutions: (a) a* distribution at a high spatial resolution (i.e., proximal sensing); (b) a* distribution at a lower spatial resolution (i.e., low-altitude remote sensing).
2.2. HAGFVC method
To solve the problem caused by mixed pixels in the decomposition of the GMM, the HAGFVC method uses only pure pixels to estimate the Gaussian parameters of pure vegetation and background. Uncertain pixels distributed between the bimodal peaks of vegetation and background in the histogram are excluded. Therefore, M(x) is not used in the HAGFVC method. The pure pixels are distributed at the edges (ends) of the histogram (the green and orange shaded areas in Fig. 2). After fitting the shaded areas with two half-Gaussian distributions, we can obtain the Gaussian parameters of the pure vegetation and pure background, excluding the influence of mixed pixels. These Gaussian parameters are then used to determine the threshold that is applied for image segmentation and FVC estimation. LARS images are processed and analyzed using custom-written scripts via a graphical user interface (GUI) programmed in MATLAB R2013a (MathWorks, Inc., Natick, MA, USA).
The detailed steps of the HAGFVC method for estimating FVC from digital images are illustrated in Fig. 3 and are listed below; an illustrative code sketch follows each step. Steps (3) to (5) are the essential and novel steps of the HAGFVC method.
(1) Color space transformation. The first step of image processing is to convert RGB images to the L*a*b* color space. The L*a*b* color space is device independent and simplifies pixel classification based on greenness using a* values, which represent colors ranging from green to red.
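As an illustration, this conversion might look as follows in Python. The paper's implementation is a MATLAB GUI, so scikit-image is an assumed substitute here, and the file name is hypothetical.

```python
import numpy as np
from skimage import io, color

rgb = io.imread("uav_image.jpg")       # hypothetical input image
lab = color.rgb2lab(rgb[..., :3])      # L* in [0, 100]; a* and b* are signed
a_star = lab[..., 1].ravel()           # a*: negative toward green, positive toward red
```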
(2) Smoothing the histogram curve. Generally, the histogram of a* values from an image is noisy because of the complexity of vegetative cover and the variability of illumination. Therefore, we used a Gaussian kernel-based smoothing method (Cox et al., 1989) to smooth the histogram to reduce noise and improve the performance of subsequent processing.
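Continuing the running example, a sketch of this step with an assumed bin width of one a* unit and an assumed kernel width; the paper does not report these settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

edges = np.arange(-60.0, 61.0, 1.0)              # assumed 1-unit a* bins
hist, _ = np.histogram(a_star, bins=edges)
centers = 0.5 * (edges[:-1] + edges[1:])
smoothed = gaussian_filter1d(hist.astype(float), sigma=2.0)  # assumed kernel sigma
```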
(3) Determination of initial mean values. To detect the pure pixels distributed at the edges (ends) of the histogram, it is necessary to determine the values of μ_v^0 and μ_b^0, which are initial estimates of μ_v and μ_b. The shapes of the frequency distributions of a* values from vegetation and background are different: for vegetation, the distribution is typically flat and wide, whereas the histogram of background is sharp and narrow (Čugunovs et al., 2017) (Fig. 2). Thus, we use different methods to determine each mean value. For green vegetation, we calculate the second derivative of the smoothed curve and set μ_v^0 as the left-most local maximum of the absolute values of the negative second derivative. For background, we take the right-most local maximum of the frequency curve as μ_b^0. Pure vegetation pixels lie to the left of μ_v^0, and pure background pixels lie to the right of μ_b^0.
(4) Assessment of the modality of the distribution. In some cases, the a* histogram is unimodal. This occurs when mixed pixels account for a large proportion of the image or when the image largely consists of one type of component. Half-Gaussian fitting is inadequate for unimodal histograms; hence, we determine whether the histogram is bimodal before the half-Gaussian fitting. If the difference between μ_b^0 and μ_v^0 is larger than an empirical threshold (5 in this study), the distribution is considered to be bimodal. Otherwise, the histogram is treated as unimodal.
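One plausible reading of steps (3) and (4) in code, continuing the running example; the peak-finding details below are our assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy.signal import argrelmax

d2 = np.gradient(np.gradient(smoothed))        # second derivative of the smoothed curve

# Vegetation: left-most local maximum of |d2| where the curvature is negative
# (i.e., the left shoulder of the vegetation peak)
neg_curvature = np.where(d2 < 0.0, -d2, 0.0)
mu_v0 = centers[argrelmax(neg_curvature)[0][0]]

# Background: right-most local maximum of the smoothed frequency curve
mu_b0 = centers[argrelmax(smoothed)[0][-1]]

# Step (4): bimodal only if the initial means are more than 5 a* units
# apart (the paper's empirical threshold)
is_bimodal = (mu_b0 - mu_v0) > 5.0
```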
(5) Half-Gaussian fitting to estimate the Gaussian parameters. Half-Gaussian fitting is performed for bimodal and weakly bimodal distributions. All the pure pixels identified in step (3) are fitted with half-Gaussian distribution curves to re-estimate the means and obtain the final μ_v and σ_v as well as μ_b and σ_b. In the fitting, the distributions of pure pixels are normalized so that the weights equal 1. The half-Gaussian distribution function f(x) is expressed as:

f(x) = (2 / (√(2π)·σ))·exp(−(x − μ)² / (2σ²)),  with x ≤ μ for vegetation and x ≥ μ for background    (3)

Then, the weights w_v and w_b are obtained by calculating the ratio of each Gaussian component to the entire GMM.
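A compact stand-in for this step, continuing the running example. It assumes the initial means are kept as the half-Gaussian centers and estimates each σ by maximum likelihood from its tail; the authors describe a curve fit, so this is a simplification, and the weight computation is likewise a sketch of the ratio described in the text.

```python
import numpy as np

# Pure tails: vegetation pixels left of mu_v0, background pixels right of mu_b0
veg_tail = a_star[a_star <= mu_v0]
bg_tail = a_star[a_star >= mu_b0]

# Half-Gaussian ML estimate of sigma for a fixed center mu
mu_v, sigma_v = mu_v0, np.sqrt(np.mean((veg_tail - mu_v0) ** 2))
mu_b, sigma_b = mu_b0, np.sqrt(np.mean((bg_tail - mu_b0) ** 2))

# Each tail carries half of its component's probability mass, so the
# component weight is roughly twice the tail fraction, normalized to sum to 1
w_v = 2.0 * veg_tail.size / a_star.size
w_b = 2.0 * bg_tail.size / a_star.size
w_v, w_b = w_v / (w_v + w_b), w_b / (w_v + w_b)
```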
(6) Threshold computation. Once the Gaussian parameters are estimated, the threshold can be determined through the "T2" threshold computation method introduced by Liu et al. (2012). For bimodal cases, the threshold T can be derived by solving a complementary error function equation:

w_v·erfc((T − μ_v) / (√2·σ_v)) = w_b·erfc((μ_b − T) / (√2·σ_b))    (4)

where erfc is the complementary Gaussian error function. This computation method is based on the principle that the misclassification probabilities of vegetation and background are equal. The detailed derivation is given in Liu et al. (2012). Fig. 2 shows an example of the threshold (marked as a magenta solid line) after solving Eq. (4). For unimodal cases, an empirical threshold (−4; Liu et al., 2012) computed from many proximal images is applied.
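Eq. (4) has a single root between μ_v and μ_b (the left side decreases while the right side increases with T), so a bracketing solver suffices. A sketch, continuing the running example:

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import brentq

def t2_residual(T):
    veg_as_bg = w_v * erfc((T - mu_v) / (np.sqrt(2.0) * sigma_v))  # vegetation misclassified
    bg_as_veg = w_b * erfc((mu_b - T) / (np.sqrt(2.0) * sigma_b))  # background misclassified
    return veg_as_bg - bg_as_veg

# Bimodal case: solve Eq. (4); unimodal case: the empirical threshold of -4
threshold = brentq(t2_residual, mu_v, mu_b) if is_bimodal else -4.0
```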
(7) FVC calculation. The threshold is used to segment an image by classifying pixels with a* values less than or equal to the threshold as vegetation and all other pixels as background. Finally, FVC is estimated as the ratio of the number of vegetation pixels to the total number of pixels.
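The final step reduces to a mask and a mean, continuing the running example:

```python
# Segment the a* channel and take the vegetation fraction as the FVC estimate
veg_mask = lab[..., 1] <= threshold
fvc = float(veg_mask.mean())
```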
Fig. 2. An example of the half-Gaussian fit of a GMM from a UAV image taken 19 m above ground level (AGL) in a cornfield. μ_v and μ_b are the mean values of the two Gaussian components.
Fig. 3. Flowchart of FVC estimation using the HAGFVC method and LARS images. The modules highlighted in orange are the novel and essential steps of the HAGFVC method.
2.3. Assessment of performance
To highlight the improvement in FVC estimation achieved by the HAGFVC method, we compared it with two other methods (LAB2 and SHAR-LABFVC) that are also based primarily on the L*a*b* color space. To broaden this comparison and further generalize our results, a third method (ExG) was included.
(1) LAB2 (Macfarlane and Ogden, 2012): This method was developed for natural vegetation and uses the green leaf algorithm (Louhaichi et al., 2001) and the a* and b* values of each pixel in the CIE L*a*b* color space to segment green vegetation with a minimum-distance-to-means classifier. The reported RMSE of LAB2 was less than 0.05 (Macfarlane and Ogden, 2012).
(2) SHAR-LABFVC (Song et al., 2015): This method uses a lognormal GMM to characterize the CIE a* distribution of a vegetation-covered surface. In addition, the approach introduces the HSI color space to enhance the brightness of shaded parts of an image and improve the classification accuracy of ground-based images. The method is capable of detecting many small canopy gaps and partially overcomes the shadow effect; the authors reported a root mean square error (RMSE) of 0.025.
(3) ExG vegetation index (Woebbecke et al., 1995; Torres-Sánchez et al., 2014): This method was originally developed for weed identification and uses the green fraction of vegetation. The index is calculated in the RGB color space as ExG = 2G − R − B, where R, G and B are the red, green and blue color components, respectively. An automatic threshold based on Otsu's thresholding method (Otsu, 1979) was used to segment the ExG grayscale image and estimate FVC.
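For reference, the ExG baseline is only a few lines; scikit-image's Otsu implementation is an assumed stand-in for the thresholding used in the cited studies, and whether R, G and B are first normalized to chromaticity coordinates varies between studies (the raw form from the text is used here).

```python
import numpy as np
from skimage.filters import threshold_otsu

rgbf = rgb[..., :3].astype(float)
exg = 2.0 * rgbf[..., 1] - rgbf[..., 0] - rgbf[..., 2]   # ExG = 2G - R - B
fvc_exg = float((exg > threshold_otsu(exg)).mean())      # above threshold -> vegetation
```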
Three statistics were used to assess the performance of each FVC-extraction method.
(1) Root mean square error (RMSE): measures the accuracy of FVC estimates at different resolutions based on comparison with ground-based FVC observations. Since the errors are squared before they are averaged, the RMSE assigns a relatively high weight to large errors.

RMSE = √( (1/n)·Σ_{i=1}^{n} (FVC_i − FVC_t)² )    (5)

(2) Mean bias error (MBE): assesses the average bias of FVC estimates at different resolutions. In the MBE, the signs of the errors are not removed.

MBE = (1/n)·Σ_{i=1}^{n} (FVC_i − FVC_t)    (6)

(3) Standard deviation (STD): analyzes the consistency of FVC estimates at different spatial resolutions.

STD = √( (1/(n−1))·Σ_{i=1}^{n} (FVC_i − FVC_avg)² )    (7)

where FVC_i is the FVC value estimated at the i-th flight altitude over a plot and n denotes the number of observations. FVC_t is the ground-measured FVC and is treated as the true value. FVC_avg represents the average of the FVC estimates from the LARS images at different flight altitudes over a sampling plot.
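Eqs. (5) to (7) translate directly into small helper functions:

```python
import numpy as np

def rmse(fvc_est, fvc_true):
    e = np.asarray(fvc_est) - fvc_true
    return float(np.sqrt(np.mean(e ** 2)))          # Eq. (5)

def mbe(fvc_est, fvc_true):
    return float(np.mean(np.asarray(fvc_est) - fvc_true))  # Eq. (6)

def std(fvc_est):
    return float(np.std(np.asarray(fvc_est), ddof=1))  # Eq. (7): (n - 1) denominator
```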
3. Materials
3.1. Study area
The study area (42.24° N, 117.06° E) is located in Weichang County, Hebei Province, China (Fig. 4). Field campaigns were performed on 29 June, 11 July and 31 July 2015. These dates represent three vegetative growth stages of corn (Zea mays; Table 1). The three growth stages are denoted V4, V6 and V8, where Vn (n = 4, 6, 8) indicates n leaves with collars visible. We established a small sampling plot (10 m × 8 m) to measure FVC using UAV and ground-based photography.
Fig. 4. Study area located in Weichang County, Hebei Province, China (marked as the red point in the top-left frame). The orthophoto of the experimental site is in the bottom frame. The sampling plot is approximately 10 m × 8 m and located in a cornfield (top-right frame).
Table 1
Overview of field campaigns during three growth stages of corn in 2015. Vn (n = 4, 6, 8) indicates n leaves with collars visible. True FVC values are derived from ground-based images using the SHAR-LABFVC method.

Date  | Local time | Growth stage | Mean leaf width (cm) | Number of images | Flight height (m) | True FVC | Illumination
28/06 | 11:30 am   | V4           | 2.7                  | 14               | 3-29 (step = 2 m) | 0.22     | diffuse light (cloudy day)
11/07 | 06:30 pm   | V6           | 4.1                  | 26               | 3-53 (step = 2 m) | 0.35     | direct light (large sun zenith angle)
31/07 | 05:45 pm   | V8           | 8.8                  | 24               | 7-53 (step = 2 m) | 0.82     | diffuse light (cloudy day)
3.2. UAV flights and aerial images
We used an X-601 hexacopter (Docwell Corporation, Beijing, China), a vertical takeoff and landing aircraft with a maximum flight time of up to 20 minutes depending on weather conditions and payload. An autopilot system provides autonomous navigation based on a Global Positioning System (GPS) signal. The platform was equipped with a Sony NEX-5R digital camera and stabilized by a stability augmentation system (SAS). This hexacopter can operate from several meters to a few kilometers above ground level (AGL).
The flight pattern of the UAV ranged from a lowest flight altitude (e.g., 5 m AGL) to a highest flight altitude (e.g., 60 m AGL) at an interval of 2 m to acquire images. In our study, images were captured at different flight altitude ranges for each growth stage (see Table 1). Each waypoint was located over the center of the plot. The UAV hovered for two seconds at each sample point to ensure positional accuracy and to give the digital camera enough time to acquire an image. Flight parameters containing WGS-84 latitude/longitude waypoints were logged using a ground control station (GCS) computer.
The Sony NEX-5R digital camera was mounted on the hexacopter to acquire nadir images. The camera has a 23.5 × 15.6 mm sensor and a maximum image size of 4912 × 3264 pixels. The focal length of the camera lens was 16 mm; thus, the pixel ground resolution was 0.3 cm at 10 m AGL. The leaf widths at the different growth stages are listed in Table 1. The camera aperture and shutter speed were set manually according to the light conditions before takeoff.
3.3. Ground measurements
To obtain ground measurements of FVC for validation, we used a Nikon D3000 digital camera mounted on a portable pole via an angled steel bracket. The camera was set to aperture priority mode, automatic exposure, ISO 100 and an 18 mm focal length to produce fine-quality images in Joint Photographic Experts Group (JPEG) format. Images were captured looking vertically downward from approximately 1.5 m above the canopy. A surveyor walked along the two diagonals of the plot and took a photo every 2 m. We obtained 14 field images at times similar to those of the UAV flights in each stage. The images were uploaded to a computer for subsequent processing using both the LAB2 method and the SHAR-LABFVC method. The FVC of the entire sampling plot is the arithmetic average of the FVC extracted from each image.
3.4. Simulated images
A simulated image dataset was used to quantify the uncertainty of FVC estimation by the HAGFVC method. Images were generated using the large-scale emulation system (LESS) software developed by Qi et al. (2017) for realistic three-dimensional (3D) corn scene simulation. LESS is a ray-tracing-based radiative transfer simulation model, which is mainly designed for the radiometric simulation of forest canopies but can also simulate other types of scenes (such as crops). We simulated four binary images from four scenes with different FVCs (see Table 2). In each image, values of 1 represent vegetation (green areas in Fig. 5) and values of 0 stand for the background (black areas in Fig. 5). The binary images were aggregated to simulate the images obtained at varied flight altitudes. The image resolution decreased after aggregation, and mixed pixels (orange areas in Fig. 5) appeared, the values of which were between 0 and 1. The coarser the resolution, the greater the number of mixed pixels in the image.
Table 2
Overview of the four simulated images. μ and σ are the mean value and standard deviation of the Gaussian distribution, respectively.

FVC                    | Crop type | Gaussian parameters (μ_v, σ_v; μ_b, σ_b) | Image size [pixels] | Resolution [cm]
0.21, 0.30, 0.38, 0.59 | Corn      | −16, 4.48; 2, 2.24                       | 4912 × 3264         | 0.02
Fig. 5. Examples of image aggregation (simulated image with an FVC of 0.38): (a) the original binary image of the cornfield and the images aggregated by scale factors of (b) 4 × 4, (c) 8 × 8 and (d) 16 × 16 pixels.
To analyze the variation of the histogram with spatial resolution, we randomly assigned a* values to the pixels in these binary images based on the Gaussian parameters of the vegetation and background (see Table 2). These parameters were chosen based on the mean statistics derived from many real proximally sensed images through the HAGFVC method. Then, these images with a* values were linearly aggregated to different resolutions, simulating the process by which a UAV acquires images at different flight altitudes. The quantities of mixed pixels and pure pixels are precisely known, as are the Gaussian parameters of the two pure components of each image. Fig. 6 shows the simulated CIE a* distributions before and after image aggregation. In the aggregation process, the proportion of pure pixels decreased and the proportion of mixed pixels increased. The mean values of the vegetation and background were relatively constant, while the standard deviations of both decreased gradually as pixels were aggregated.
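The linear aggregation used here amounts to block averaging. A minimal sketch, where a_star_image is a hypothetical 2-D array of a* values and the image sides are cropped to a multiple of the scale factor:

```python
import numpy as np

def aggregate(img, factor):
    """Block-average an image by an integer scale factor (linear aggregation)."""
    h, w = img.shape
    h2, w2 = h - h % factor, w - w % factor          # crop to a multiple of factor
    blocks = img[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.mean(axis=(1, 3))

# Boundary blocks mix vegetation and background a* values, producing the
# intermediate histogram mass visible in Fig. 6b-d
coarse = aggregate(a_star_image, 8)
```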
Fig. 6. The CIE a* distributions of a simulated image with an FVC of 0.38: histograms of (a) the initial binary image of the cornfield and the images aggregated by scale factors of (b) 4 × 4, (c) 8 × 8 and (d) 16 × 16 pixels.
4. Results
4.1. Fraction of mixed pixels versus flight height for LARS images
We used a high-resolution classified LARS image to assess the fraction of mixed pixels versus flight height in each growth stage. The LARS images acquired at 7 m AGL were classified into vegetation and background at a pixel ground resolution of 0.2 cm (much smaller than the foliage width), so the pixels acquired at 7 m AGL were assumed to be 'pure' pixels. Then, the classified images were progressively aggregated to simulate varied flight altitudes, and the fractions of mixed pixels were calculated. Coarse-resolution pixels obtained at an AGL higher than 7 m were classified as mixed pixels if they did not entirely overlap with one type of 'pure' pixel in the 7 m AGL image (Fig. 7).
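This bookkeeping is a one-function computation. A sketch, where binary_mask is the 0/1 classification of the 7 m AGL image:

```python
import numpy as np

def mixed_fraction(binary_mask, factor):
    """Fraction of coarse pixels that mix both classes after aggregation."""
    h, w = binary_mask.shape
    h2, w2 = h - h % factor, w - w % factor
    blocks = binary_mask[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    veg_frac = blocks.mean(axis=(1, 3))          # per-block vegetation fraction
    mixed = (veg_frac > 0.0) & (veg_frac < 1.0)  # neither pure vegetation nor pure background
    return float(mixed.mean())
```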
The fraction of mixed pixels increased with flight altitude (Fig. 7) at all growth stages, and the fractions of mixed pixels in the V6 and V8 stages were markedly larger than in the V4 stage at each altitude. Hence, images acquired in the denser crop stages have more mixed pixels than those acquired in the early crop stage at the same flight level.
Fig. 7. The relationship between height above ground level (resolution) and the fraction of mixed pixels.
4.2. Comparison of FVC estimates
There were marked differences among the FVC estimates from the four methods, i.e., HAGFVC, LAB2, SHAR-LABFVC and ExG. In Fig. 8, the images captured at 25 m AGL in the three growth stages and cropped to an identical footprint illustrate the classification results of the HAGFVC (Fig. 8 V4b, V6b, V8b), LAB2 (Fig. 8 V4c, V6c, V8c), SHAR-LABFVC (Fig. 8 V4d, V6d, V8d) and ExG (Fig. 8 V4e, V6e, V8e) methods. The black areas represent the background, and the white areas represent green vegetation. Notably, only slight differences exist among the classified images in the V4 stage, while substantial differences can be observed among the four methods in the other two stages, especially the V6 stage.
Fig. 8. Image segmentation using the four methods: (V4a, V6a, V8a) UAV RGB images of the three growth stages at 25 m above ground level (AGL); (V4b, V6b, V8b) HAGFVC method; (V4c, V6c, V8c) LAB2 method; (V4d, V6d, V8d) SHAR-LABFVC method; and (V4e, V6e, V8e) ExG method.
The methods are compared at all flight altitudes and in the three growth stages in Fig. 9 and Table 3. Note that the true values of FVC derived using the LAB2 and SHAR-LABFVC methods differ by less than 0.05. In general, the FVC estimated using the HAGFVC method is closest to the true values (Fig. 9) for most flight altitudes and growth stages.
In the V4 stage, the FVC estimates of all four methods are close to the true FVC ("TrueValue" in Table 3, i.e., the FVC derived from ground-measured images with the SHAR-LABFVC method), with RMSEs of 0.02 for the LAB2 method, 0.03 for the SHAR-LABFVC method, 0.03 for the ExG method and 0.02 for the HAGFVC method (see Table 3). The results at different altitudes are consistent, with STDs of less than 0.03.
In the V6 stage, the HAGFVC method provided good results, with an RMSE of 0.02 and an MBE of 0.01, while the LAB2 and SHAR-LABFVC methods markedly overestimated FVC, with RMSEs of approximately 0.20 and MBEs of up to 0.19, and the ExG method underestimated FVC, with an RMSE of 0.09 (see Table 3). Moreover, the HAGFVC method yielded stable results at different flight altitudes, with an STD of 0.02. This finding suggests that the HAGFVC method has the potential to accurately map FVC at several dozen meters AGL. By contrast, the LAB2 and SHAR-LABFVC methods yielded continuously worsening results as the flight height increased; thus, only the results at the lowest flight altitude were trustworthy. The ExG method yielded stable results at different resolutions but underestimated FVC, with an MBE of -0.08 (see Table 3).
In the V8 stage, the HAGFVC method provided relatively good results, with an RMSE of 0.03 and an MBE of -0.02. The SHAR-LABFVC method produced results similar to those of HAGFVC. However, the LAB2 and ExG methods generated RMSEs of 0.16 and 0.36, respectively (see Table 3).
Fig. 9. FVC comparison among the four methods in the three vegetative growth stages: (a) V4, (b) V6 and (c) V8. Vn (n = 4, 6, 8) indicates n leaves with collars visible. TrueFVC-SHAR and TrueFVC-LAB2 respectively represent the FVC derived using the SHAR-LABFVC and LAB2 methods from field measurements.
Table 3
RMSEs, MBEs and STDs of the four FVC-estimation methods. SHAR denotes the SHAR-LABFVC method. TrueValue is the FVC derived using the SHAR-LABFVC method from field measurements.

          | V4 stage (TrueValue = 0.22)  | V6 stage (TrueValue = 0.35)  | V8 stage (TrueValue = 0.82)
Statistic | HAGFVC  LAB2   SHAR   ExG    | HAGFVC  LAB2   SHAR   ExG    | HAGFVC  LAB2   SHAR   ExG
RMSE      | 0.02    0.02   0.03   0.03   | 0.02    0.20   0.20   0.09   | 0.03    0.16   0.06   0.36
STD       | 0.02    0.03   0.01   0.02   | 0.02    0.08   0.06   0.02   | 0.03    0.06   0.05   0.03
MBE       | -0.01   -0.01  -0.03  -0.03  | 0.01    0.19   0.19   -0.08  | -0.02   -0.15  -0.03  -0.36
The observed FVCs at different flight heights differ because an angular effect arises with increasing flight altitude (the viewing geometry approaches parallel projection as the UAV height increases). Although the actual focal length of the lens and its field of view are fixed, cropping the image to the same region of interest at ground level narrows the effective field of view as the flight altitude increases. Correspondingly, the threshold changes slightly at different flight altitudes. Fig. 10 shows the a* distributions at two flight altitudes (5 m and 35 m AGL) in the V6 stage. The threshold, the mean values of the vegetation and background, and the fitted Gaussian curves were calculated using the HAGFVC method. Note that the valley between the vegetation and background histogram peaks becomes less pronounced as the flight altitude increases because more mixed pixels cause the overall distribution to become weakly bimodal. However, the threshold determined using the HAGFVC method is still located in the valley.
Fig. 10. Histograms of a* distributions at different flight heights in the V6 stage: (a) above ground level (AGL) of 5 m and (b) AGL of 35 m. The fitted curves, Gaussian parameters and corresponding thresholds were derived using the HAGFVC method.
4.3. Sensitivity analysis
The HAGFVC threshold is a function of the weights, mean values and standard deviations of the Gaussian distributions (see Eq. 4). We used the thresholds and FVC estimates of the simulated images to quantify the sensitivity of the HAGFVC method to different spatial resolutions and, therefore, different flight altitudes. The images at different flight altitudes were simulated as described in Section 3.4. The fraction of mixed pixels increases linearly as the spatial resolution of each simulated image decreases (Fig. 11a). Figs. 11b-d illustrate the weights, mean values and standard deviations against flight altitude. The mean values of the vegetation and background are almost constant, while the weights and standard deviations decrease as the flight altitude increases and the spatial resolution is reduced. The threshold derived from Eq. (4) is weakly affected by variations in the spatial resolution (Fig. 11e). Correspondingly, the FVC estimates closely agree with the true values (deviation of less than 0.07 at resolutions finer than 3.2 cm for all simulated images; Fig. 11f). These results suggest that the threshold used to segment green vegetation and the background is approximately scale invariant and that the uncertainty transferred to the FVC estimates is small.
Fig. 11. Uncertainty analysis using the four simulated images: the relationships between flight height and (a) the fraction of mixed pixels, (b) the weights, (c) the mean values, (d) the standard deviations, (e) the threshold, and (f) the FVC estimates.
5. Discussion
In this study, we have demonstrated that the HAGFVC method provides a solution for estimating FVC from remotely sensed LARS images that yields consistent and accurate results at different spatial resolutions. The method was developed based on a GMM, which describes the spectral characteristics of a land surface covered by vegetation (Coy et al., 2016; Song et al., 2015). The basic concept of the HAGFVC method is to use only pure pixels to derive the Gaussian parameters. We achieved this by fitting half-Gaussian distributions to the pure vegetation pixels and pure background pixels to avoid the negative influence of mixed pixels, which are located between pure vegetation and pure background in the histogram (Fig. 1). The HAGFVC method uses the pixels at the edges (ends) of the a* histogram, where pure pixels are mainly distributed, to reconstruct full GMMs from the half-Gaussian distributions and then generate a reasonable threshold value. The fact that the FVC estimates in this study were close to the reference values strongly suggests that the negative effect of mixed pixels on FVC estimation was suppressed by the HAGFVC method.
Compared with the other three methods, the HAGFVC method improved the FVC estimates and showed lower RMSEs, MBEs and STDs in the validation across different vegetation coverages. In the three growth stages of corn, the RMSEs and STDs of the FVC estimated by the HAGFVC method were less than 0.04, while LAB2 and SHAR-LABFVC yielded larger errors and inconsistencies (RMSEs of up to 0.20 in the V6 stage), and ExG yielded quite large errors (RMSE of 0.36) in the V8 stage (see Table 3). For sparse vegetation (V4 stage), when the background dominates the image, all four methods accurately estimated the FVC and exhibited similar performance. However, in the growth stages with medium and high vegetation coverage (the V6 and V8 growth stages in this study), LAB2, SHAR-LABFVC and ExG produced considerable errors in FVC estimation at high flight altitudes and low spatial resolutions (Fig. 9 and Table 3). This is the result of the number of mixed pixels in an image increasing as the fractions of vegetation and background pixels become similar (Fig. 7). As shown in Fig. 9b, the LAB2 and SHAR-LABFVC methods distinctly overestimate FVC, and the MBE increases with flight altitude (up to 0.19; Table 3), whereas the ExG method underestimated FVC with an MBE of -0.08. In the V8 stage (Fig. 9c), the LAB2 and SHAR-LABFVC methods exhibited better performance than in the V6 stage, but worse than in the V4 stage. ExG yielded a considerable underestimation, with an RMSE of up to 0.36 (see Table 3). Although the HAGFVC method was validated on a cornfield at one site, the method does not rely on the structure or spectral properties of crops; it only requires information from the histogram of a* values. Thus, we expect the HAGFVC method to apply to other crop types.
Conventional methods designed for proximal sensing are greatly constrained by the unreasonable decomposition of GMMs in the presence of large quantities of mixed pixels. LAB2 and SHAR-LABFVC were developed to extract FVC from high-resolution images with few mixed pixels. Although ExG has been used for estimating FVC from LARS images (Torres-Sánchez et al., 2014), the effect of mixed pixels was not fully investigated. Other classical image-processing methods that have been used to segment LARS images, such as K-means, artificial neural networks (ANNs), random forests and spectral index methods (Feng et al., 2015; Hu et al., 2017; Poblete-Echeverria et al., 2017), also do not specifically consider mixed pixels. However, mixed pixels occupy a large proportion of the image in some situations (at coarser resolutions and moderate FVC levels). The trend of increasing FVC with height in the LAB2 method results from bias in the training data set, while the trend in the SHAR-LABFVC method results from the weakly bimodal distribution of images acquired at high altitudes. More mixed pixels result in more blurring of foreground and background pixels, which leaves more pixels with enough 'greenness' to be automatically selected as foreground pixels for training the unsupervised classification used in the LAB2 method. This biases the nearest-neighbor classification used by the LAB2 method towards the foreground as the spatial resolution decreases, and the estimated FVC consequently increases with altitude. In the SHAR-LABFVC method, the trend results from its failure to find a reasonable initial cut-off of the a* histogram, which is used to make an initial segmentation. When this happens, SHAR-LABFVC falls back on an algorithm that uses a constant empirical threshold for classification; as the resolution decreases, this constant threshold biases the FVC estimates. For ExG, the consistent underestimation of FVC in the V6 and V8 stages is mainly due to reduced inter-class variability, which leads to poor segmentation using Otsu's method (Otsu, 1979). Our research demonstrates the need to develop mixed-pixel-resistant methods for analyzing images acquired from UAVs. It is worth noting that, although the method gives accurate estimates of FVC, the resulting classified image should not be used for purposes that require very accurate spatial information about the location of foliage within an image, e.g., Chen-Cihlar clumping corrections (Chen and Cihlar, 1995), because of the large proportion of mixed pixels in high-altitude images.
The HAGFVC method was not substantially affected by illumination conditions or flight altitude. The three UAV datasets were collected in distinctly different illumination environments, i.e., near noon and near nightfall on cloudy days and near nightfall on a sunny day (Table 1). The variations in illumination did not affect the HAGFVC method because the absolute values of a* are largely independent of illumination, and the method also does not depend on the absolute values of a*. A sensitivity analysis showed that the threshold was insensitive to variations in the weights and Gaussian parameters of the two pure components, despite the weights and standard deviations clearly decreasing with increasing flight altitude. This is strong evidence that our method is relatively insensitive to the level of green vegetation coverage and the quantity of mixed pixels. According to our analysis, the absolute error was less than 0.07 when the resolution was finer than 3.2 cm. Note that the HAGFVC method only applies as long as the UAV is sufficiently close to the ground for there to be clearly defined pure pixels of green vegetation and background. At very high altitudes, the histogram of a* values becomes unimodal and an empirical threshold is used to estimate FVC. In extreme cases, the images will come to resemble images from high-altitude remote sensing, from which only vegetation indices can be derived and pixel classification is challenging.
The complexity of the spatial distribution of vegetation, the variability in illumination conditions (Ponzoni et al., 2014) and the angular effect (Zhao et al., 2012) caused by perspective projection all affect the accuracy of FVC estimation using the HAGFVC method by reducing the precision of the search for the mean values of the Gaussian distributions. In practice, the accuracy of our method depends on the precision with which the mean values of the two components are determined. Fluctuations were observed in the FVC estimates at different spatial resolutions because of errors in determining the mean values. The relatively large fluctuations in FVC estimation at different flight altitudes in the V8 stage (RMSE of up to 0.03; Fig. 9c) are mainly caused by non-optimal mean values. Generally, non-optimal mean values derive from two sources. The first is the representativeness of the GMM for a vegetated surface; an alternative model might produce better results. The second is sub-optimal smoothing of the histogram; a better smoothing algorithm might achieve a more accurate determination of the initial mean values. Theoretically, more accurate estimation of these mean values is the key to improving the accuracy of FVC estimation based on GMM decomposition. Notwithstanding the opportunities for improvement, the HAGFVC method is a significant advance on existing methods in minimizing the effect of mixed pixels and yielding accurate estimates of FVC.
6. Conclusions
This paper proposed a half-Gaussian fitting method (HAGFVC) to decompose a Gaussian mixture model (GMM) and estimate fractional vegetation cover (FVC) from low-altitude remote-sensing (LARS) images. The algorithm uses only a portion of the pure pixels to derive the GMM in order to suppress the influence of mixed pixels, and it classifies mixed pixels as vegetation or background at nearly equal rates of misclassification. We compared three FVC estimation methods (LAB2, SHAR-LABFVC and ExG) with the HAGFVC method and found that the HAGFVC method generated accurate and robust FVC estimates for crop fields of high, moderate and low vegetation density. In particular, when the fraction of mixed pixels was high (when a corn plant has six visible leaf collars), HAGFVC exhibited good performance, with an RMSE of 0.02 and an MBE of 0.01 at flight altitudes from 3 m to 50 m above ground level (AGL). Although the LAB2, SHAR-LABFVC and ExG methods yielded good estimates (RMSEs of less than 0.04) for sparse vegetation, the large quantities of mixed pixels in moderate-density vegetation at coarse spatial resolutions reduced the accuracies of these conventional ground-based methods (RMSEs of up to 0.20). Simulations showed that the theoretical RMSE of the HAGFVC method was less than 0.07 at resolutions finer than 3.2 cm. Consequently, our approach demonstrates the potential for accurately estimating FVC over large areas using UAVs and LARS.
Acknowledgments
This work was supported by the National Science Foundation of China (Grant nos. 41331171 and 61227806). The authors thank Prof. Suhong Liu (Beijing Normal University) for her kind suggestions on image simulation. We also appreciate the help of Dr. Ronghai Hu in the organization of this manuscript and the help of Dr. Jianbo Qi in the field campaigns and image simulation.
References
Bhardwaj, A., Sam, L., Akanksha, Martín-Torres, F.J., Kumar, R., 2016. UAVs as remote sensing platform in glaciology: present applications and future prospects. Remote Sens. Environ. 175, 196-204.
Carlson, T.N., Ripley, D.A., 1997. On the relation between NDVI, fractional vegetation cover, and leaf area index. Remote Sens. Environ. 62(3), 241-252.
Chapman, S., et al., 2014. Pheno-Copter: a low-altitude, autonomous remote-sensing robotic helicopter for high-throughput field-based phenotyping. Agronomy 4(2), 279-301.
Chen, J.M., Cihlar, J., 1995. Plant canopy gap-size analysis theory for improving optical measurements of leaf-area index. Appl. Opt. 34(27), 6211-6222.
Chianucci, F., Disperati, L., Guzzi, D., Bianchini, D., 2016. Estimation of canopy attributes in beech forests using true colour digital images from a small fixed-wing UAV. Int. J. Appl. Earth Obs. Geoinf. 47, 60-68.
Cox, D.R., Hinkley, D.V., Rubin, D.B., Silverman, B.W., 1989. Monographs on statistics and applied probability. 2(2), 273-277.
Coy, A., Rankine, D., Taylor, M., Nielsen, D., Cohen, J., 2016. Increasing the accuracy and automation of fractional vegetation cover estimation from digital photographs. Remote Sens. 8(7), 474.
Čugunovs, M., Tuittila, E.-S., Mehtätalo, L., Pekkola, L., Sara-Aho, I., Kouki, J., 2017. Variability and patterns in forest soil and vegetation characteristics after prescribed burning in clear-cuts and restoration burnings. Silva Fenn. 51.
Feng, Q., Liu, J., Gong, J., 2015. UAV remote sensing for urban vegetation mapping using random forest and texture analysis. Remote Sens. 7(1), 1074-1094.
Gutman, G., Ignatov, A., 1997. Satellite-derived green vegetation fraction for the use in numerical weather prediction models. Satell. Data Appl. Weather Clim. 19, 477-480.
Hsieh, P.F., Lee, L.C., Chen, N.Y., 2001. Effect of spatial resolution on classification errors of pure and mixed pixels in remote sensing. IEEE Trans. Geosci. Remote Sens. 39(12), 2657-2663.
Hunt, E.R., Daughtry, C.S.T., Mirsky, S.B., Hively, W.D., 2014. Remote sensing with simulated unmanned aircraft imagery for precision agriculture applications. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 7(11), 4566-4571.
Jones, H., Sirault, X., 2014. Scaling of thermal images at different spatial resolution: the mixed pixel problem. Agronomy 4(3), 380-396.
Jung, M., Henkel, K., Herold, M., Churkina, G., 2006. Exploiting synergies of global land cover products for carbon cycle modeling. Remote Sens. Environ. 101(4), 534-553.
Liu, J., Pattey, E., 2010. Retrieval of leaf area index from top-of-canopy digital photography over agricultural crops. Agric. For. Meteorol. 150(11), 1485-1490.
Liu, Y., Mu, X., Wang, H., Yan, G., 2012. A novel method for extracting green fractional vegetation cover from digital images. J. Veg. Sci. 23(3), 406-418.
Louhaichi, M., Borman, M.M., Johnson, D.E., 2001. Spatially located platform and aerial photography for documentation of grazing impacts on wheat. Geocarto Int. 16, 65-70.
Macfarlane, C., Ogden, G.N., 2012. Automated estimation of foliage cover in forest understorey from digital nadir images. Methods Ecol. Evol. 3(2), 405-415.
Matese, A., Toscano, P., Di Gennaro, S., Genesio, L., Vaccari, F., 2015. Intercomparison of UAV, aircraft and satellite remote sensing platforms for precision viticulture. Remote Sens. 7(3), 2971-2990.
Mesas-Carrascosa, F.J., Notario-García, M.D., Meroño De Larriva, J.E., Sánchez De La Orden, M., García-Ferrer Porras, A., 2014. Validation of measurements of land plot area using UAV imagery. Int. J. Appl. Earth Obs. Geoinf. 33, 270-279.
Mu, X., Hu, M., Song, W., Ruan, G., Ge, Y., 2015. Evaluation of sampling methods for validation of remotely sensed fractional vegetation cover. Remote Sens. 7(12), 16164-16182.
Muir, J., Schmidt, M., Tindall, D., Trevithick, R., Scarth, P., Stewart, J.B., 2011. Field measurement of fractional ground cover: a technical handbook supporting ground cover monitoring for Australia. ABARES.
Otsu, N., 1979. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 9, 62-66.
Pérez, A.J., López, F., Benlloch, J.V., Christensen, S., 2000. Colour and shape analysis techniques for weed detection in cereal fields. Comput. Electron. Agric. 25(3), 197-212.
Poblete-Echeverria, C., Federico Olmedo, G., Ingram, B., Bardeen, M., 2017. Detection and segmentation of vine canopy in ultra-high spatial resolution RGB imagery obtained from unmanned aerial vehicle (UAV): a case study in a commercial vineyard. Remote Sens. 9(3), 268.
Ponzoni, F.J., Da Silva, C.B., Dos Santos, S.B., Montanher, O.C., Dos Santos, T.B., 2014. Local illumination influence on vegetation indices and plant area index (PAI) relationships. Remote Sens. 6(7), 6266-6282.
Qi, J., Xie, D., Guo, D., Yan, G., 2017. A large-scale emulation system for realistic three-dimensional (3-D) forest simulation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 10, 4834-4843.
Rango, A., Laliberte, A., Herrick, J.E., Winters, C., Havstad, K., Steele, C., Browning, D., 2009. Unmanned aerial vehicle-based remote sensing for rangeland assessment, monitoring, and management. J. Appl. Remote Sens. 3, 033542.
Samseemoung, G., Soni, P., Jayasuriya, H.P., Salokhe, V.M., 2012. Application of low altitude remote sensing (LARS) platform for monitoring crop growth and weed infestation in a soybean plantation. Precis. Agric. 13(6), 611-627.
Sellers, P.J., 1997. Modeling the exchanges of energy, water, and carbon between continents and the atmosphere. Science 275, 502-509.
Song, W., Mu, X., Ruan, G., Gao, Z., 2017. Estimating fractional vegetation cover and the vegetation index of bare soil and highly dense vegetation with a physically based method. Int. J. Appl. Earth Obs. Geoinf. 58, 168-176.
Song, W., Mu, X., Yan, G., Huang, S., 2015. Extracting the green fractional vegetation cover from digital images using a shadow-resistant algorithm (SHAR-LABFVC). Remote Sens. 7(8), 10425-10443.
Torres-Sánchez, J., Peña, J.M., de Castro, A.I., López-Granados, F., 2014. Multi-temporal mapping of the vegetation fraction in early-season wheat fields using images from UAV. Comput. Electron. Agric. 103, 104-113.
Watts, A.C., Ambrosia, V.G., Hinkley, E.A., 2012. Unmanned aircraft systems in remote sensing and scientific research: classification and considerations of use. Remote Sens. 4(12), 1671-1692.
Woebbecke, D.M., Meyer, G.E., Von Bargen, K., Mortensen, D.A., 1995. Color indices for weed identification under various soil, residue, and lighting conditions. Trans. ASAE 38, 259-269.
Yan, G., Mu, X., Liu, Y., 2012. Fractional vegetation cover, in: Advanced Remote Sensing. Elsevier, pp. 415-438.
Zarco-Tejada, P.J., González-Dugo, V., Berni, J.A.J., 2012. Fluorescence, temperature and narrow-band indices acquired from a UAV platform for water stress detection using a micro-hyperspectral imager and a thermal camera. Remote Sens. Environ. 117, 322-337.
Zhao, J., Xie, D., Mu, X., Liu, Y., Yan, G., 2012. Accuracy evaluation of the ground-based fractional vegetation cover measurement by using simulated images. In: IGARSS 2012, Munich, Germany, pp. 3347-3350.
... In particular, the "cover" attributes depicting nonphotosynthesis vegetation (NPV) organs and species refer to NPV cover and species cover, respectively. fCover is an essential structural variable that may be near-directly retrieved from ground-based observations or centimetric and decimetric-resolution remote sensing data (Armston et al., 2013;Fisher et al., 2020;Li et al., 2018;Song et al., 2015;White et al., 2018). For agricultural crop and grassland ecosystems that exhibit singlelayer structures in the vertical direction, vegetation elements generally refer to the photosynthetically active leaves, green stalks, and reproductive organs of plants, except for non-photosynthetic elements (Jay et al., 2019;Roth and Streit, 2018;Yan et al., 2020). ...
... By contrast, the remote sensing and phytophysiology communities insisted significant difference between effective canopy cover and crown cover probably because they determine discrepant radiative processes and physiological processes (Armston et al., 2013;Rautiainen et al., 2005;Tang et al., 2019), but, to some extent, interchangeably used effective canopy cover and canopy cover (Chianucci et al., 2016;Ma et al., 2017). In the agriculture and prataculture domain, the canopy cover, effective canopy cover, percentage cover, green cover, percent green cover, ground cover, foliage cover and vegetation cover are equivalent terms and essentially refer to the fCover due to the inexistence of woody parts (Hu et al., 2021;Jay et al., 2019;Li et al., 2018). It is, therefore, suggested to follow the recommended terminologies in the present article to minimize the confusion of related terms and ensure the compatibility of measures. ...
... As the grayscale features of green vegetation and soil are not specially considered in Otsu's thresholding, the binarization results may be prone to the shape of the grayscale histogram. Following a prior assumption that grayscale probability density functions of both vegetation and soil in a specific feature space follow Gaussian distributions, many researchers applied a two-component Gaussian Mixture Model (GMM) to characterize the histogram and then derive the threshold (Li et al., 2018;Liu et al., 2012). The two-component GMM (F(x)) of is expressed as ...
Article
Full-text available
Vegetation cover fraction (fCover) and related quantities are basic yet critical vegetation structure variables in various disciplines and applications. Ground- and aerial-based proximal and remote sensing techniques have been widely adopted across multiple spatial extents. However, the definitions of fCover-related nomenclatures have not yet been fully standardized, leading to confusing terms and making historic measures difficult to compare. With issues potentially arising from the increasing diversity of estimation methods for fCover and related quantities and their corresponding uncertainties, there is also a growing need to spread knowledge on the current advances, challenges, and perspectives, especially given that no such review exists for ground- and aerial-based estimation. This paper provides the current knowledge mainly concerning passive image-based methods and active light detection and ranging (LiDAR)-based methods. We first harmonized the definitions of fCover and its related quantities (e.g., effective canopy cover, crown cover, stratified vegetation cover, and canopy fraction). Secondly, the typical applications of fCover and related quantities over a range of scales, fields, and ecosystems were summarized. Third, and importantly, we offered a comprehensive review of traditional non-imaging methods, image-based methods (e.g., segmentation, unmixing, and spectral retrieval), point cloud-based methods (e.g., rasterization), and LiDAR return-based methods (e.g., return number index and return intensity retrieval) across different platforms (i.e., ground, unmanned aerial vehicle (UAV) and airplane). Our investigation of fCover and related quantities estimation touches upon various vegetation ecosystems, including agricultural cropland, grassland, wetland, and forest. Finally, the current challenges and future directions were discussed, such as image signal processing under complex heterogeneous surfaces and stratified cover and non-photosynthetic cover retrieval. We therefore expect that this review may offer insight into fCover and related quantities estimation and serve as a reference for remote sensing scientists, agronomists, silviculturists, and ecologists.
... In recent years, the swift progress of unmanned aerial systems (hereafter UAS) and photogrammetric techniques has unlocked fresh avenues of exploration in various research domains (Li et al., 2018; Riihimäki et al., 2019; Yan et al., 2019). Researchers have recognized the powerful collaborative potential of UAS and satellite systems, which is widely exploited in ecological and precision-agriculture applications (Yinka-Banjo and Ajayi, 2019; Alvarez-Vanhard et al., 2021). ...
... To obtain label data with the highest possible accuracy, we improved four methods for extracting FVC_UAS: random forest regression (RFR; Breiman, 2001), the Normalized Difference Vegetation Index image-based dichotomous model (hereafter NDVI-based; Xiao and Moody, 2005) and support vector regression (hereafter SVR; Lin et al., 2021) using multispectral data, and a half-Gaussian fitting method (hereafter HAGFVC; Li et al., 2018) using RGB data. RFR is an ensemble learning algorithm composed of multiple regression trees that models the relationship between band reflectance and FVC through a set of decision rules. ...
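For reference, the NDVI-based dichotomous (dimidiate pixel) model named above is conventionally written as

$$FVC = \frac{NDVI - NDVI_s}{NDVI_v - NDVI_s},$$

where NDVI_v and NDVI_s are the NDVI values of fully vegetated and bare-soil pixels, respectively.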
Article
Full-text available
Accurate estimation of fractional vegetation cover (FVC) is essential for crop growth monitoring. Currently, satellite remote sensing remains one of the most effective methods for estimating crop FVC. However, due to the significant difference in scale between the coarse resolution of satellite images and the scale of measurable data on the ground, there are significant uncertainties and errors in estimating crop FVC. Here, we adopt a Strategy of Upscaling-Downscaling operations for unmanned aerial system (UAS) and satellite data collected during two growing seasons of winter wheat, supported by backpropagation neural networks (BPNN), to fully bridge this scale gap and use the highly accurate UAS-derived FVC (FVC_UAS) to obtain accurate wheat FVC. Through validation with an independent dataset, the BPNN model predicted FVC with an RMSE of 0.059, which is 11.9% to 25.3% lower than the commonly used Long Short-Term Memory (LSTM), Random Forest Regression (RFR), and traditional Normalized Difference Vegetation Index-based (NDVI-based) models. Moreover, all of these models achieved improved estimation accuracy with the Strategy of Upscaling-Downscaling, compared to only upscaling the UAS data. Our results demonstrate that: (1) establishing a nonlinear relationship between FVC_UAS and satellite data enables accurate estimation of FVC over larger regions, with the strong support of machine learning capabilities; and (2) the Strategy of Upscaling-Downscaling is effective in improving the accuracy of FVC estimation when UAS and satellite data are used collaboratively, especially at the boundaries of wheat fields. This has significant implications for accurate FVC estimation for winter wheat and provides a reference for the estimation of other surface parameters and the collaborative application of multisource data.
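As an illustration of the nonlinear mapping this abstract describes, the following is a minimal sketch (not the authors' code) of a backpropagation neural network regression from satellite band reflectance to UAS-derived FVC labels; the synthetic data, network size, and variable names are placeholders.

```python
# Sketch of a BPNN regression mapping satellite band reflectance to
# UAS-derived FVC labels. All data below are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# Placeholder inputs: rows are satellite pixels, columns are band reflectances;
# fvc_uas holds the upscaled UAS-derived FVC for the same pixels.
bands = rng.uniform(0.0, 0.6, size=(1000, 4))
fvc_uas = np.clip(1.5 * bands[:, 3] - bands[:, 2] + rng.normal(0, 0.02, 1000), 0, 1)

x_tr, x_te, y_tr, y_te = train_test_split(bands, fvc_uas, test_size=0.3, random_state=0)
bpnn = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
bpnn.fit(x_tr, y_tr)
rmse = mean_squared_error(y_te, bpnn.predict(x_te)) ** 0.5
print(f"validation RMSE: {rmse:.3f}")
```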
... First, each image was cropped to fit each of the targeted stand extents. Second, after cropping to the targeted range, the Excess Green index (ExG) was calculated using the following formula (Equation (1)) to highlight the canopy information of the shelterbelt [35-37]: ...
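Equation (1) is not reproduced in the excerpt; the Excess Green index is commonly defined on normalized chromatic coordinates as ExG = 2g - r - b. A minimal sketch under that assumption, paired with Otsu thresholding as used in several of the studies discussed here (the file name is hypothetical):

```python
# Compute ExG = 2g - r - b from an RGB image and binarize it with
# Otsu's threshold to separate canopy from background.
import numpy as np
from skimage import io, filters

rgb = io.imread("plot.jpg").astype(np.float64)        # hypothetical file name
total = rgb.sum(axis=2) + 1e-9                        # avoid division by zero
r, g, b = (rgb[..., i] / total for i in range(3))     # chromatic coordinates
exg = 2.0 * g - r - b
canopy = exg > filters.threshold_otsu(exg)            # True where vegetation
fvc = canopy.mean()                                   # fraction of canopy pixels
print(f"FVC estimate: {fvc:.3f}")
```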
Article
Full-text available
The farmland shelterbelt, as a category of shelterbelt in forestry ecological engineering, has an important influence on sustainability in agricultural systems. Timely and accurate acquisition of farmland shelterbelt age is not only essential to understanding shelter effects but also directly relates to the adjustment of subsequent shelterbelt projects. In this study, we developed an identification method that uses growth patterns to extract shelterbelt age (i.e., years after planting) from Landsat time series images. The method was applied to a typical area of shelterbelt construction north of Changchun, China. The results indicated that the accuracy of age identification stabilized at approximately 90% once the permissible age error exceeded 3 years. Moreover, the accuracy across growth phases (1–3 years, 4–15 years, 16–30 years, and >30 years) decreased with increasing age, and the accuracy of each growth phase reached more than 80% when the permissible age error exceeded 7 years. Compared to deriving age from the typically weak statistical relationships between shelterbelt age and remote sensing features, this method identifies age directly and enables fine-scale age extraction for shelterbelts. It introduces a novel perspective for shelterbelt age identification and for assessing the advancement of shelterbelt projects at the regional scale.
... Soil and plant identification in the study of L. Li et al. (2018) was based on the left or right half of the Gaussian distribution. This approach allows two components to be identified in the presence of three components, but its effectiveness is questionable. ...
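For concreteness, the following is a simplified illustration of the half-Gaussian idea questioned above, not the published HAGFVC implementation: the outer half of each histogram peak, assumed free of mixed pixels, is mirrored about the peak to estimate the pure component's parameters. The peak locations and the equal-z-score threshold rule are assumptions of this sketch.

```python
# Simplified half-Gaussian fitting: estimate each pure component from the
# uncontaminated outer half of its histogram peak (peaks assumed known here;
# the real method locates them from the histogram).
import numpy as np

def fit_half_gaussian(values, peak, side):
    """Estimate (mu, sigma) from one uncontaminated half of a peak.

    side='left' uses values below the peak (outer half of the lower
    component); side='right' uses values above it.
    """
    half = values[values <= peak] if side == "left" else values[values >= peak]
    mirrored = np.concatenate([half, 2.0 * peak - half])  # reflect about the peak
    return mirrored.mean(), mirrored.std()

# Synthetic greenness feature: background near -5, vegetation near +15,
# plus mixed pixels spread between the two pure components.
rng = np.random.default_rng(1)
a_star = np.concatenate([rng.normal(-5.0, 2.0, 4000),
                         rng.normal(15.0, 3.0, 3000),
                         rng.uniform(-5.0, 15.0, 1500)])

mu_b, sd_b = fit_half_gaussian(a_star, peak=-5.0, side="left")
mu_v, sd_v = fit_half_gaussian(a_star, peak=15.0, side="right")
# A simple threshold choice: the point between the peaks with equal z-scores.
threshold = (mu_b * sd_v + mu_v * sd_b) / (sd_b + sd_v)
print(f"background ~N({mu_b:.1f},{sd_b:.1f}), vegetation ~N({mu_v:.1f},{sd_v:.1f})")
print(f"threshold: {threshold:.2f}")
```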
Article
Full-text available
Most vegetation indices for UAV data analysis were developed for low-resolution satellite platforms, which requires the use of other monitoring methods and agrochemical measurements to accurately determine the state of plantations, considering different stages of vegetation and spectral characteristics. The research aims to develop a methodology for assessing the suitability of remote sensing spectral data for energy crop nutrition management. The study was conducted on winter crops, including wheat and rapeseed. The results for winter wheat for the period from 2017 to 2020 were analysed. Stresses associated with nutrient deficiencies were studied in the fields of long-term stationary experiments at the National University of Life and Environmental Sciences of Ukraine. The results obtained from the Slantrange sensor and SlantView software were used. The studies confirmed that the pixel distribution in images of plantations (wheat and winter rape) can be described by a Gaussian distribution. The coefficient of determination for wheat was higher than for rape due to the peculiarities of plant leaf structure. For rapeseed, a higher coefficient of determination was found for the lognormal distribution, which is not convenient for automating fertilisation processes in precision farming technologies. The analysis of the distribution by spectral channel, in particular the presence of several maxima, may indicate foreign inclusions or transitional stages of vegetation, which makes such data unsuitable for crop management. It was established that if, after soil filtration, the maximum amplitude of the distribution exceeds the nearest one by more than 3 times, the growing season can be considered stable for a particular area, and the results of spectral monitoring are reliable for further analysis. It was confirmed that the vegetation indices GNDVI and RNDVI are not effective for assessing the reliability of data based on the standard deviation of the distribution. Reference values of the standard deviation of the distribution can be established at research stations with controlled stress factors, which will help in crop management.
... Research on vegetation based on UAVs has been fruitful and has driven progress in related research fields. Because of its unique advantages, UAV remote sensing is mainly used in medium- and small-scale, high-precision applications, such as vegetation index extraction, regional high-precision LAI [132-136] and vegetation coverage [137-139] estimation, field-scale crop yield estimation [140-142], fine classification and feature recognition [143-145], and high-throughput crop phenotyping research [142,146,147]. The Sentinel-2 data have three bands in the "red edge" region, which allow the intensity of the chlorophyll reflection peak to be calculated more accurately and improve the accuracy of atmospheric correction. ...
Article
Full-text available
The leaf area index (LAI) is widely used as an important indicator and ecological parameter of vegetation structure and growth status, but LAI research has lacked bibliometric analysis. To further understand the research status and frontier dynamics of the LAI, we used 75 years of data (1947–2021) from the Web of Science for scientific bibliometric analysis. The results showed that 22,276 LAI research papers were published from 1947 to 2021. According to the characteristics of the literature growth, LAI research can be divided into five stages: incubation, cultivation, acceleration, evolution, and outbreak periods. The research power at the different stages had different characteristics. The overall research power of the United States ranks first globally, followed by China, Canada, and France. The related disciplines are widely varied, involving agriculture (the most studied field of LAI research), environmental science and ecology, remote sensing, and other fields. The development of the Google Earth Engine, cloud computing platforms, and unmanned aerial vehicle technology will provide more critical support for LAI research. The results of this paper quantitatively show the development history, research hotspots, and applications of LAI research and provide a reference for understanding the current situation and development trends of global LAI research.
... (Segovia-Cardozo et al., 2019). Crop green or canopy cover photographs provide an indication of how much water the crop loses through evapotranspiration (Li et al., 2018). One of the most important technologies used in irrigation management and in measuring crop water requirements is the analysis of satellite images or aerial photography. ...
... FLIGHT (North 1996) and RAYTRAN (Govaerts and Verstraete 1998)) as well as field data, which revealed significant agreement. Simulated datasets from the LESS model have been used for different applications in remote sensing (Li et al. 2018; Pu et al. 2020). More specific information about the LESS model can be found on its website (http://lessrt.org/). ...
Article
Full-text available
Accurate estimation of fractional vegetation cover (FVC) is of great significance to agricultural production. Crop residue management affects crop residue cover (CRC) over croplands. Crops and crop residue on the soil surface both contribute to overall canopy reflectance. Few studies, however, have examined the effect of crop residue on vegetation indices (VIs) and estimated FVC. The present study evaluated the response of eight commonly used VIs to crop residues, and the FVC uncertainty caused by crop residue in the dimidiate pixel model (DPM), by using the reflectance of low-tilled cropland simulated with a three-dimensional radiative transfer model. The absolute difference (AD) was used to quantify the spectral difference between crop residues and soils in red and near-infrared wavelengths. Increases in the normalized difference VI (NDVI), ratio VI (RVI), transformed soil-adjusted VI (TSAVI), and normalized difference phenology index (NDPI) were observed when green crops were mixed with crop residue that had negative ADs with soils, but decreases in the enhanced VI (EVI), perpendicular VI (PVI), SAVI, and litter-soil-adjusted VI (L-SAVI) were observed when crop residue was present under medium and high vegetation cover. The presence of crop residue with a positive AD with soils reduced NDVI, RVI, TSAVI, and NDPI while increasing the other VIs. Crop residue had the least impact on EVI- and SAVI-based DPMs, with FVC estimation uncertainty of less than 0.1, followed by the NDPI- and L-SAVI-based models, while NDVI- and RVI-based DPMs performed poorly. Each VI-based DPM's estimation uncertainty was highly correlated with AD values. Furthermore, the majority of the VI-based models were sensitive to solar position, except for the NDPI-based model. Our findings highlight the need to consider the impact of crop residue on FVC retrieval over low-tilled cropland in future research.
... Model examples are the M-surface-based Raytran (Govaerts and Verstraete, 1998), RAPID (Huang et al., 2013), DART (facet mode) (Gastellu-Etchegorry et al., 2015), DIRSIG (Goodenough and Brown, 2017), LESS (Qi et al., 2019a) and FluorWPS (Zhao et al., 2016) models, as well as the S-crown-based FLiES (Kobayashi and Iwabuchi, 2008) and FLIGHT (Melendo-Vega et al., 2018; North, 1996) models. Due to their relatively high accuracy, they have been frequently used to analyze the influence of canopy structure on remotely sensed signals, validate simpler physically based models (Bian et al., 2021; Li et al., 2020; Yan et al., 2021b) and retrieval methods (Chen et al., 2019; Li et al., 2018), and develop inversion models (Ferreira et al., 2018; Levashova et al., 2018). Among all the canopy representations within these models, the M-surface is the most frequently adopted approach and has been demonstrated to be the most accurate for simulating canopy scattering; it is thus usually treated as the reference to validate other representations (Janoutová et al., 2019; Widlowski et al., 2014). ...
Article
Full-text available
Three-dimensional (3D) radiative transfer simulations are critical for studying the radiometric properties of canopies. Efficient and easy-to-use 3D radiative transfer models are required by remote sensing inversion and many validation applications. Extensive efforts have been made to improve the computational efficiency, accuracy, and useability of 3D radiative transfer models. This study focuses on the abstraction of canopies for 3D radiative transfer simulations by proposing a lightweight boundary-based description of leaf clusters (B-cluster) to ease the creation of 3D scenes while keeping the simulation as accurate as possible. B-cluster partitions a tree crown into sub-crown leaf clusters and abstracts each of them into a turbid medium enclosed by a complex and tight boundary, while terrain and branches are described with precise mesh surfaces. The radiative transfer simulation within B-cluster has been developed based on an efficient Monte Carlo path-tracing algorithm and implemented in the LargE-Scale remote sensing data and image Simulation framework (LESS) model by considering the presence of both turbid medium and surface scattering. The performance of the model was assessed by comparing with original LESS version, which describes all landscape elements with mesh surfaces (here called M-surface approach), and with a uniform voxel-based approach (U-voxel) in terms of the multiangle bidirectional reflectance factor (BRF) as well as with pixel-wise images. Results show that B-cluster is highly consistent with M-surface in abstract canopies (mean normalized absolute BRF differences δ¯ < 2%) and in realistic forest stand (δ¯ < 5% at 5-m resolution) with considerably reduced requirements for computational memory. Compared with U-voxel, B-cluster is also more robust and better at describing canopy structures with different levels of detail. B-cluster enables to quickly construct accurate 3D scenes with reduced requirements of computational resources. It is also a unified and scale-adaptive approach that can describe crowns as simple as geometric primitives and as complex as explicitly described meshes. The newly proposed approach has been released in new LESS versions at http://lessrt.org/.
Article
Full-text available
Plant growers need accessible and effective information about the state of crops to implement crop management. The purpose of this study was to develop a method for identifying plants in high-resolution multispectral images of continuously sown crops, using winter wheat as an example. The studies were conducted in the Left-Bank Forest-Steppe zone on industrial crops of winter wheat, Mulan variety. At the time of remote monitoring by UAV (17 March 2019), the plants were in the tillering stage. Monitoring from an altitude of 100 meters was conducted using the Slantrange 3p spectral system installed on a DJI Matrice 600 UAV. A full-screen copy of the snapshot window was made to extract reference graphic data from the SlantView programme. Statistical processing of the graphical data from the spectral monitoring results was performed in MathCad. It is noted that reliable determination of the spectral portrait of the soil, for its pixel filtration from multispectral images, is a difficult task, since soil colour depends substantially on moisture and may differ between open and shaded areas. A fundamentally new way to filter out random inclusions is to use a spectral portrait of plants based on the intensity ratios of their components. A promising parameter for assessing the condition of crops is the estimation of their horizontal surface area, which can be determined by pixel-by-pixel image analysis. A filtering option that requires debugging is suggested. In further studies, it is advisable to consider the methodological support for assessing the quality of filtered data from spectral monitoring of plantings.
Article
Full-text available
The realistic reconstruction and radiometric simulation of a large-scale three-dimensional (3-D) forest scene have potential applications in remote sensing. Although many 3-D radiative transfer models concerning forest canopies have been developed, they have mainly focused on homogeneous or relatively small heterogeneous scenes, which are not compatible with coarse-resolution remote sensing observations. Due to the huge complexity of forests and the inefficiency of collecting precise 3-D data over large areas, realistic simulation of large-scale forest areas remains challenging, especially in regions of complex terrain. In this study, a large-scale emulation system for realistic 3-D forest simulation is proposed. The 3-D forest scene is constructed from a representative single tree database (SDB) and airborne laser scanning (ALS) data. ALS data are used to extract tree height, crown diameter and position, which are linked to the individual trees in the SDB. To simulate the radiometric properties of the reconstructed scene, a radiative transfer model based on a parallelized ray-tracing code was developed. This model has been validated with an abstract and an actual 3-D scene from the radiation transfer model intercomparison website and showed comparable results with other models. Finally, a 1 km × 1 km scene with more than 100,000 realistic individual trees was reconstructed and a Landsat-like reflectance image was simulated, which kept the same spatial pattern as the actual Landsat 8 image.
Article
Full-text available
The use of Unmanned Aerial Vehicles (UAVs) in viticulture permits the capture of aerial Red-Green-Blue (RGB) images with an ultra-high spatial resolution. Recent studies have demonstrated that RGB images can be used to monitor the spatial variability of vine biophysical parameters. However, estimating these parameters requires accurate and automated segmentation methods to extract relevant information from RGB images. Manual segmentation of aerial images is a laborious and time-consuming process. Traditional classification methods have shown satisfactory results in the segmentation of RGB images for diverse applications and surfaces; however, in the case of commercial vineyards, it is necessary to consider some particularities inherent to canopy size in vertical trellis systems (VSP), such as shadow effects and different soil conditions in inter-rows (mixed information of soil and weeds). Therefore, the objective of this study was to compare the performance of four classification methods (K-means, Artificial Neural Networks (ANN), Random Forest (RForest) and Spectral Indices (SI)) for detecting canopy in a vineyard trained on VSP. Six flights were carried out from post-flowering to harvest in a commercial vineyard of cv. Carménère using a low-cost UAV equipped with a conventional RGB camera. The results show that the ANN and the simple SI method, complemented with the Otsu method for thresholding, presented the best performance for detecting the vine canopy, with high overall accuracy values for all study days. Spectral indices presented the best performance in detecting the Plant class (vine canopy), with an overall accuracy of around 0.99. However, considering the performance pixel by pixel, the spectral indices are not able to discriminate between the Soil and Shadow classes. The best performance in the classification of the three classes (Plant, Soil, and Shadow) in vineyard RGB images was obtained when the SI values were used as input data to the trained methods (ANN and RForest), reaching overall accuracy values of around 0.98 with high sensitivity values for all three classes.
Article
Full-text available
Forest ecological restoration by burning is widely applied to promote natural, early-successional sites and increase landscape biodiversity. Burning is also used as a forest management practice to facilitate forest regeneration after clearcutting. Besides the desired goals, restoration burnings also affect soil biogeochemistry, particularly soil organic matter (SOM) and related soil carbon stocks, but the long-term effects are poorly understood. In order to study these effects, however, a reliable estimate of spatial variability is first needed for effective sampling. Here we investigate the spatial variability of SOM and vegetation features 13 years after burnings combined with variable harvest levels. We sampled four experimental sites representing distinct management and restoration treatments, with an undisturbed control. While the variability of vegetation cover and biomass was generally higher in disturbed sites, soil parameter variability did not differ between the four sites. The joint ecological patterns of soil and vegetation parameters across the whole sample continuum support well the prior assumptions about the characteristic disturbance conditions within each of the study sites. We designed and employed statistical simulations as a means of planning prospective sampling. Based on the data obtained here and the statistical simulations, sampling six forest sites for each treatment type with 30 independent soil cores per site would provide enough statistical power to adequately capture the impacts of burning on SOM. In conclusion, we argue that an informed, design-based approach to documenting the ecosystem effects of forest burnings is worth applying, both through obtaining new data and meta-analysing the existing data.
Article
Full-text available
The use of automated methods to estimate fractional vegetation cover (FVC) from digital photographs has increased in recent years given its potential to produce accurate, fast and inexpensive FVC measurements. Wide acceptance has been delayed because of the limitations in accuracy, speed, automation and generalization of these methods. This work introduces a novel technique, the Automated Canopy Estimator (ACE) that overcomes many of these challenges to produce accurate estimates of fractional vegetation cover using an unsupervised segmentation process. ACE is shown to outperform nine other segmentation algorithms, consisting of both threshold-based and machine learning approaches, in the segmentation of photographs of four different crops (oat, corn, rapeseed and flax) with an overall accuracy of 89.6%. ACE is similarly accurate (88.7%) when applied to remotely sensed corn, producing FVC estimates that are strongly correlated with ground truth values.
Article
Full-text available
Validation over heterogeneous areas is critical to ensuring the quality of remote sensing products. This paper focuses on the sampling methods used to validate the coarse-resolution fractional vegetation cover (FVC) product in the Heihe River Basin, where the patterns of spatial variation within and between land cover types vary significantly across the growth stages of vegetation. A sampling method called the mean of surface with non-homogeneity (MSN) method and three other sampling methods are examined with real-world data obtained in 2012. A series of 15-m-resolution fractional vegetation cover reference maps were generated using regressions of field-measured and satellite data. The sampling methods were tested using the 15-m-resolution normalized difference vegetation index (NDVI) and land cover maps over a complete period of vegetation growth. Two scenes were selected to represent situations in which sampling locations were sparsely and densely distributed. The results show that the FVCs estimated using the MSN method have errors of less than approximately 0.03 in the two selected scenes. The validation accuracy of the sampling methods varies with the stratified non-homogeneity in the different growing stages of the vegetation. The MSN method, which considers both heterogeneity and autocorrelation between strata, is recommended for determining sampling schemes prior to the design of an experimental campaign. In addition, the slight scaling bias caused by the non-linear relationship between NDVI and FVC samples is discussed. The positive or negative trend of the biases predicted using a Taylor expansion is found to be consistent with that of the real biases.
Article
Full-text available
Taking photographs with a commercially available digital camera is an efficient and objective method for determining the green fractional vegetation cover (FVC) for field validation of satellite products. However, classifying shaded leaves when processing digital images remains challenging and results in classification errors. To address this problem, an automatic shadow-resistant algorithm in the Commission Internationale d'Eclairage L*a*b* color space (SHAR-LABFVC), based on a documented FVC estimation algorithm (LABFVC), is proposed in this paper. The hue-saturation-intensity (HSI) color space is introduced in SHAR-LABFVC to enhance the brightness of shaded parts of the image. The lognormal distribution is used to fit the frequency of vegetation greenness and to classify vegetation and the background. Real and synthesized images are used for evaluation, and the results are in good agreement with visual interpretation, particularly when the FVC is high and the shadows are deep, indicating that SHAR-LABFVC is shadow resistant. Without specific improvements to reduce the shadow effect, the underestimation of FVC can be up to 0.2 in the flourishing period of vegetation at a scale of 10 m. Therefore, the proposed algorithm is expected to improve the validation accuracy of remote sensing products.
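A rough sketch of the lognormal-fitting step described in this abstract (not the published SHAR-LABFVC code; the synthetic data and the quantile-based threshold are assumptions of this illustration):

```python
# Fit a lognormal distribution to an image's greenness values and use a low
# quantile of the fitted distribution as the vegetation/background boundary.
# The 5% quantile is an arbitrary choice for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
greenness = rng.lognormal(mean=2.0, sigma=0.5, size=5000)  # stand-in for greenness values

shape, loc, scale = stats.lognorm.fit(greenness, floc=0.0)
threshold = stats.lognorm.ppf(0.05, shape, loc=loc, scale=scale)
vegetation = greenness > threshold
print(f"fitted sigma={shape:.2f}, median={scale:.2f}, threshold={threshold:.2f}")
print(f"vegetation fraction above threshold: {vegetation.mean():.3f}")
```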
Article
Normalized difference vegetation index (NDVI) values of highly dense vegetation (NDVIv) and bare soil (NDVIs), identified as the key parameters for fractional vegetation cover (FVC) estimation, are usually obtained with empirical statistical methods. However, it is often difficult to obtain reasonable values of NDVIv and NDVIs at a coarse resolution (e.g., 1 km), or in arid, semiarid, and evergreen areas. The uncertainty of estimated NDVIs and NDVIv can cause substantial errors in FVC estimation when a simple linear mixture model is used. To address this problem, this paper proposes a physically based method. The leaf area index (LAI) and directional NDVI are introduced into a gap fraction model and a linear mixture model for FVC estimation to calculate NDVIv and NDVIs. The model incorporates the Moderate Resolution Imaging Spectroradiometer (MODIS) Bidirectional Reflectance Distribution Function (BRDF) model parameters product (MCD43B1) and the LAI product, which are convenient to acquire. Two types of evaluation experiments are designed: 1) with data simulated by a canopy radiative transfer model and 2) with satellite observations. The root-mean-square deviation (RMSD) for simulated data is less than 0.117, depending on the type of noise added to the data. In the real-data experiment, the RMSD is 0.127 for cropland, 0.075 for grassland, and 0.107 for forest. The experimental areas respectively lack fully vegetated and non-vegetated pixels at 1 km resolution. Consequently, a relatively large uncertainty is found when using the statistical methods, and the RMSD ranges from 0.110 to 0.363 based on the real data. The proposed method is convenient for producing NDVIv and NDVIs maps for FVC estimation at regional and global scales.
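In standard form, the two models coupled by this method are the gap fraction model, which links FVC to LAI, and the linear mixture model, which links pixel NDVI to the component values (the paper's exact directional parameterization may differ):

$$FVC = 1 - \exp(-k \cdot LAI), \qquad NDVI = FVC \cdot NDVI_v + (1 - FVC) \cdot NDVI_s,$$

where k is the nadir extinction coefficient; given LAI and NDVI observations, the pair can be solved for NDVI_v and NDVI_s.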
Article
Satellite remote sensing is an effective way to monitor the vast extents of global glaciers and snowfields. However, satellite remote sensing is limited by spatial and temporal resolutions and the high costs involved in data acquisition. Unmanned aerial vehicle (UAV)-based glaciological studies have gained pace in recent years due to their advantages over conventional remote sensing platforms. UAVs are easy to deploy, with the option of alternating sensors operating at visible, infrared, and microwave wavelengths. The high-spatial-resolution remote sensing data obtained from these UAV-borne sensors are a significant improvement over data obtained by traditional remote sensing. The cost involved in data acquisition is minimal, and researchers can acquire imagery according to their own schedule and convenience. We discuss significant glaciological studies involving UAVs as remote sensing platforms. This is the first review dedicated exclusively to highlighting UAVs as remote sensing platforms in glaciology. We examine polar and alpine applications of UAVs and their future prospects in separate sections and present an extensive reference list so that readers can delve into their topics of interest. Because the technology is still largely unexplored for snow and glaciers, we place special emphasis on discussing the future prospects of utilising UAVs for glaciological research.