A Half-Gaussian Fitting Method for Estimating Fractional Vegetation
Cover of Corn Crops Using Unmanned Aerial Vehicle Images
Linyuan Lia,b, Xihan Mua,b*, Craig Macfarlanec,d, Wanjuan Songa,b, Jun Chena,b, Kai Yane, Guangjian Yana,b
a State Key Laboratory of Remote Sensing Science, Jointly Sponsored by Beijing Normal University and
Institute of Remote Sensing and Digital Earth of Chinese Academy of Sciences
b Beijing Engineering Research Center for Global Land Remote Sensing Products, Institute of Remote
Sensing Science and Engineering, Faculty of Geographical Science, Beijing Normal University, Beijing
100875, China
c CSIRO, 147 Brockway Rd, Floreat WA 6014, Australia
d School of Agriculture and Environment, Faculty of Science, The University of Western Australia,
Crawley WA, Australia
e School of Land Science and Techniques, China University of Geosciences, Beijing, China.
* Corresponding Author
E-Mail: muxihan@bnu.edu.cn (X. Mu); Tel/Fax: +86-10-5880-2041
Highlights
A half-Gaussian mixture model is proposed to extract FVC from LARS images (HAGFVC).
HAGFVC is robust to variations in spatial resolution, mixed-pixel abundance and vegetation coverage.
HAGFVC outperforms previous methods developed for proximal images.
Abstract

Accurate estimates of fractional vegetation cover (FVC) from remotely sensed images collected by unmanned aerial vehicles (UAVs) offer considerable potential for field measurement. However, most existing methods, which were originally designed to extract FVC from ground-based remotely sensed images (acquired a few meters above the ground), cannot be directly used to process aerial images because of the presence of large quantities of mixed pixels. To alleviate the negative effects of mixed pixels, we propose a new method for decomposing the Gaussian mixture model and estimating FVC, namely, the half-Gaussian fitting method for FVC estimation (HAGFVC). In this method, the histograms of pure vegetation pixels and pure background pixels are first fitted using two half-Gaussian distributions in the Commission Internationale d'Eclairage (CIE) L*a*b* color space. A threshold is then determined from the fitted Gaussian parameters to generate a more accurate FVC estimate. We acquired low-altitude remote-sensing (LARS) images in three vegetative growth stages at different flight altitudes over a cornfield. The HAGFVC method successfully fitted the half-Gaussian distributions and obtained stable thresholds for FVC estimation. The results indicate that the HAGFVC method can be used to effectively and accurately derive FVC images, with a small mean bias error (MBE) and a root mean square error (RMSE) of less than 0.04 in all cases. By comparison, the other methods we tested performed poorly (RMSE of up to 0.36) because of the abundance of mixed pixels in LARS images, especially at high altitudes above ground level (AGL) or under moderate vegetation coverage. These results demonstrate the importance of developing image-processing methods that specifically account for mixed pixels in LARS images. Simulations indicated that the theoretical accuracy (no errors in fitting the half-Gaussian distributions) of the HAGFVC method corresponds to an RMSE of less than 0.07. Additionally, the method provides an efficient approach to estimating FVC over large areas using LARS images.

Keywords: fractional vegetation cover (FVC), unmanned aerial vehicle (UAV), low-altitude remote sensing (LARS), digital photography, half-Gaussian distribution, histogram threshold
1. Introduction 26
Fractional vegetation cover (FVC) plays a key role in land surface processes, including carbon and 27
water cycles (Jung et al., 2006) and energy transfer (Sellers, 1997). It is also an important data product 28
in numerical weather prediction (Gutman and Ignatov, 1997) and high-precision agricultural analysis 29
(Hunt et al., 2014; Matese et al., 2015). To meet the requirements of FVC mapping and validation using 30
satellite products, rapid and accurate measurements of FVC are necessary (Mu et al., 2015; Song et al., 31
2017). Hence, various methods have been developed to measure FVC for these applications including 32
visual estimation, direct sampling and digital photography (Muir et al., 2011). Among these methods, 33
photography provides the best performance in terms of efficient and accurate validation of satellite 34
remote sensing products for high-precision applications (Yan et al., 2012). 35
Proximal (very close range, i.e. a few meters) sensing methods have a clear advantage over satellite 36
remote sensing in terms of spatial resolution and flexibility. The data obtained from proximal sensing 37
can provide highly accurate estimates of FVC directly from images (Liu and Pattey, 2010; Macfarlane 38
and Ogden, 2012; Song et al., 2015) while the satellite remote sensing images will typically require that 39
FVC be estimated based on calibrations of vegetation indices against independent estimates of FVC 40
from proximal sensing methods (Carlson and Ripley, 1997). However, traditional proximal sensing 41
methods, lacking the spatial coverage needed for mapping FVC over large regions, are potentially labor 42
intensive even over medium-scale areas and local conditions may limit site access. Recent technological 43
innovations have led to an increase in the availability of unmanned aerial vehicles (UAVs) ( Watts et al., 44
2012), which potentially overcome many limitations of both traditional proximal and satellite imagery 45
platforms. Low-altitude remote-sensing (LARS) UAVs are advantageous because of their flexibility, 46
operational ability in a variety of environmental conditions, and capacity for mapping at intermediate 47
spatial scales. The application of UAVs has extended to crop monitoring, precision agriculture and other 48
Earth science studies (Bhardwaj et al., 2016; Zarco-Tejada et al., 2012). Researchers widely agree that 49
commercial cameras mounted on UAVs are powerful tools for assessing FVC (Chianucci et al., 2016; 50
Torres-Sánchez et al., 2014). 51
UAVs are flexible in terms of their flight altitude, which facilitates the collection of imagery at a 52
range of spatial scales (Mesas-Carrascosa et al., 2014). For example, Chapman et al., (2014) deployed 53
UAVs fitted with a fixed, wide angle lens at heights ranging from 20 m to 80 m, in order to evaluate 54
various plant breeding trials. Generally, as UAVs are required to map FVC rapidly over larger areas, the 55
flight altitude must be increased, which reduces the spatial resolution. Spatial resolution could be 56
maintained by narrowing the camera focal length as altitude increases but this would increase the number 57
of flights required by what are frequently UAVs with only short flight time, which would negate one of 58
the main advantages of UAVs. As a result, UAVs are often flown at varied altitudes but constant focal 59
length (e.g., Samseemoung et al., 2012). However, reducing the spatial resolution of LARS imagery 60
increases the proportion of mixed pixels, which is likely to reduce the accuracy of medium-scale FVC 61
mapping (Hsieh et al., 2001; Jones and Sirault, 2014). 62
Image analysis methods developed for proximal sensing methods are poorly suited to estimate FVC 63
from LARS when mixed pixels are abundant in the images. Hsieh et al. (2001) established a simulation 64
scheme to assess the effect of the spatial resolution on classification accuracy and found that the 65
classification errors increased rapidly with decreasing spatial resolution. Jones and Sirault (2014) 66
reported that a low spatial resolution has a significant negative influence on image classification. Torres-Sánchez et al. (2014) observed a decrease in the accuracy of FVC estimates in the early growth stages of wheat when the spatial resolution of LARS imagery was reduced.
Early image analysis methods depended on supervised classification, which requires human 70
intervention, has low operational efficiency and produces noisy results. Later, automatic classification 71
methods were based on unsupervised clustering algorithms, category tree methods and threshold-based 72
methods (Yan et al., 2012). Researchers developed numerous threshold-based methods based on 73
vegetation indices in the red-green-blue (RGB) color space; such indices include the excessive green 74
index (Woebbecke et al., 1995 ; Liu and Pattey, 2010), normalized difference index (Pérez et al., 2000), 75
green leaf algorithm (Chianucci et al., 2016), etc. Other color spaces, such as the Commission 76
Internationale d’Eclairage (CIE) L*a*b* and hue saturation intensity (HSI), have also been used for 77
classification (Liu et al., 2012; Macfarlane and Ogden, 2012). These automatic algorithms have modestly 78
improved the efficiency of validation. However, they were specifically developed for proximally-sensed 79
images and unsuited to images containing many mixed pixels. In addition, previously reported studies 80
tended to use UAV-based commercial cameras to collect images over sparse scenes, such as early crop 81
and rangeland areas (Rango, 2009; Torres-Sánchez et al., 2014), while densely vegetated scenes (FVC > 82
0.7) have seldom been studied. 83
In this study, we propose an image analysis method, HAGFVC for estimating FVC that is scale 84
invariant and specifically addresses the problem of large and variable numbers of mixed-pixels in LARS 85
images acquired from varying altitudes. The theory and implementation of the half-Gaussian fitting 86
method for extracting FVC are described in Section 2. Three published methods, LAB2 (Macfarlane and 87
Ogden, 2012), Shadow-Resistant Algorithm for Extracting the Green FVC (SHAR-LABFVC; Song et 88
al., 2015) and excess green vegetation index (ExG; Woebbecke et al., 1995) are introduced for 89
comparison as well. Section 3 describes the real data and simulated data used to validate and analyze the 90
HAGFVC method. In Section 4, the results of the HAGFVC method and the three other methods are 91
compared, and an uncertainty analysis is presented. Sections 5 and 6 present the discussion and 92
conclusions, respectively. 93
2. Methods
2.1. Gaussian Mixture Model for FVC

For vegetated surfaces, the CIE a* distribution of an image is usually considered to follow a Gaussian mixture model (GMM) (Coy et al., 2016; Liu et al., 2012). In proximally sensed images, which are assumed to contain almost no mixed pixels, the GMM derives from the distributions of pure vegetation and pure background material and exhibits a distinctly bimodal distribution (Liu et al., 2012; Song et al., 2015). This mixture distribution function $H(x)$ can be given by:

$$H(x) = \omega_v \, N(x;\, \mu_v, \sigma_v) + \omega_b \, N(x;\, \mu_b, \sigma_b) \quad (1)$$

where $\omega$, $\mu$ and $\sigma$ are the weight, mean value and standard deviation, respectively; the subscripts $v$ and $b$ indicate vegetation and background, respectively; $N(x;\, \mu, \sigma)$ stands for a Gaussian distribution function; and $x$ is the CIE a* value. Fig. 1a shows an example of the a* distribution for an image with negligible mixed pixels. The GMM distribution is characterized by two distinct peaks, representing vegetation and background. In this situation, it is straightforward to decompose the GMM and select a reasonable threshold to separate green vegetation from the background using automated thresholding methods (e.g. the T2 thresholding method in Liu et al., 2012).
In LARS images, as the spatial resolution decreases, many mixed pixels occur. As a result, the GMM consists of three components: pure vegetation, pure background and mixed pixels. The shape of the bimodal GMM is obscured because mixed pixels render the peaks of the vegetation and background less distinct. Generally, the GMM distribution becomes weakly bimodal or even unimodal. This mixture distribution function $H'(x)$ can be expressed as:

$$H'(x) = \omega'_v \, N(x;\, \mu'_v, \sigma'_v) + \omega'_b \, N(x;\, \mu'_b, \sigma'_b) + \omega'_m \, f_m(x) \quad (2)$$

where $\omega'$, $\mu'$ and $\sigma'$ are the weight, mean value and standard deviation after resolution reduction, respectively. The subscript $m$ refers to the mixed pixels. $f_m(x)$ is an unknown probability density function of mixed pixels which, in reality, is located between the vegetation and background components on the a* axis because a mixed pixel is a combination of these two pure components. Fig. 1b shows an example of the a* distribution for an image with a number of mixed pixels. Each Gaussian component is less distinct due to the presence of mixed pixels. Furthermore, as the image resolution decreases, the difficulty of decomposing the GMM increases, leading to larger errors if Eq. (1) is used.
Fig. 1. Schematic diagrams of the GMM distribution of CIE a* values at different spatial resolutions: (a) a* distribution at a high spatial resolution (i.e. proximal sensing); (b) a* distribution at a lower spatial resolution (i.e. low-altitude remote sensing).
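To make Eqs. (1) and (2) concrete, the short Python sketch below evaluates a two-component a* mixture of the kind shown in Fig. 1a. It is illustrative only (the authors' own implementation is a MATLAB GUI); the means and standard deviations are taken from Table 2, while the weights are hypothetical.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Gaussian density N(x; mu, sigma)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def gmm_proximal(x, w_v, mu_v, sd_v, w_b, mu_b, sd_b):
    """Eq. (1): mixture of a pure-vegetation and a pure-background component."""
    return w_v * gaussian_pdf(x, mu_v, sd_v) + w_b * gaussian_pdf(x, mu_b, sd_b)

# Means/standard deviations as in Table 2; the weights 0.4/0.6 are hypothetical.
a_star_axis = np.linspace(-40.0, 30.0, 701)
density = gmm_proximal(a_star_axis, 0.4, -16.0, 4.48, 0.6, 2.0, 2.24)
```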
2.2. HAGFVC method

To solve the problem caused by mixed pixels in the decomposition of the GMM, the HAGFVC method uses only pure pixels to estimate the Gaussian parameters of pure vegetation and background. Uncertain pixels distributed between the bimodal peaks of vegetation and background in the histogram are excluded. Therefore, $f_m(x)$ is not used in the HAGFVC method. These pure pixels are distributed at the edges (ends) of the histogram (the green and orange shaded areas in Fig. 2). After fitting the shaded areas with two half-Gaussian distributions, we can obtain the Gaussian parameters of the pure vegetation and pure background excluding the influence of mixed pixels. These Gaussian parameters are then used to determine the threshold applied for image segmentation and FVC estimation. LARS images are processed and analyzed using custom-written scripts via a graphical user interface (GUI) programmed in MATLAB R2013a (MathWorks, Inc., Natick, MA, USA).
The detailed steps of the HAGFVC method for estimating FVC from digital images are illustrated in Fig. 3 and are listed below. Steps (3) to (5) are the essential and novel steps of the HAGFVC method.
(1) Color space transformation. The first step of image processing is to convert RGB images to the L*a*b* color space. The L*a*b* color space is device independent and simplifies pixel classification based on greenness using a* values, which represent colors ranging from green to red.
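As a sketch of this step (assuming scikit-image is available; the file name is hypothetical, and this is not the authors' MATLAB pipeline), the conversion can be written as:

```python
import numpy as np
from skimage import io, color

rgb = io.imread("uav_nadir_image.jpg")   # hypothetical file name for a nadir UAV photo
lab = color.rgb2lab(rgb)                 # device-independent CIE L*a*b*
a_star = lab[:, :, 1].ravel()            # a* channel: negative toward green, positive toward red
```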
(2) Smoothing the histogram curve. Generally, the histogram of a* values from an image is noisy because of the complexity of vegetative cover and the variability of illumination. Therefore, we used a Gaussian kernel-based smoothing method (Cox et al., 1989) to smooth the histogram, reducing noise and improving the performance of subsequent processing.
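A minimal sketch of the histogram construction and Gaussian kernel smoothing, continuing from the conversion above (the bin width and kernel width are assumptions, not values reported by the authors):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

bin_width = 0.5                                       # assumed a* bin width
bins = np.arange(-60.0, 60.0 + bin_width, bin_width)
counts, edges = np.histogram(a_star, bins=bins)
centers = 0.5 * (edges[:-1] + edges[1:])

# Gaussian kernel smoothing to suppress noise before peak/derivative analysis
smoothed = gaussian_filter1d(counts.astype(float), sigma=2.0)
```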
(3) Determination of initial mean values. To detect the pure pixels distributed at the edges (ends) of the histogram, it is necessary to determine the values of $\mu_{v0}$ and $\mu_{b0}$, which are initial estimates of $\mu_v$ and $\mu_b$. The shapes of the frequency distributions of a* values from vegetation and background differ: for vegetation, the distribution is typically flat and wide, whereas the histogram of background is sharp and narrow (Čugunovs et al., 2017) (Fig. 2). Thus, we use different methods to determine each mean value. For green vegetation, we calculate the second derivative of the smoothed curve and set $\mu_{v0}$ as the left-most local maximum of the absolute values of the negative second derivative. For background, we take the right-most local maximum of the frequency as $\mu_{b0}$. Pure vegetation pixels lie to the left of $\mu_{v0}$, and pure background pixels to the right of $\mu_{b0}$.
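One way to implement the two initial-mean rules on the smoothed histogram (a sketch continuing from step (2); the exact peak-picking details of the original implementation are not given in the text):

```python
import numpy as np
from scipy.signal import argrelmax

# Vegetation: left-most local maximum of |negative second derivative| of the smoothed curve
second_deriv = np.gradient(np.gradient(smoothed, centers), centers)
neg_curvature = np.where(second_deriv < 0.0, -second_deriv, 0.0)
mu_v0 = centers[argrelmax(neg_curvature)[0][0]]   # initial vegetation mean

# Background: right-most local maximum of the smoothed frequency curve
mu_b0 = centers[argrelmax(smoothed)[0][-1]]       # initial background mean
```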
(4) Assessment of the modality of the distribution. In some cases, the a* histogram is unimodal. This occurs when mixed pixels account for a large proportion of the image or when the image largely consists of one type of component. Half-Gaussian fitting is inadequate for unimodal histograms; hence, we determine whether the histogram is bimodal before the half-Gaussian fitting. If the difference between $\mu_{b0}$ and $\mu_{v0}$ is larger than an empirical threshold (i.e., 5 in this study), the distribution is considered to be bimodal. Otherwise, the histogram is unimodal.
(5) Half-Gaussian fitting to estimate Gaussian parameters. Half-Gaussian fitting is performed for bimodal and weakly bimodal distributions. The pure pixels identified in step (3) are fitted with half-Gaussian distribution curves to obtain the final estimates of ($\mu_v$, $\sigma_v$) and ($\mu_b$, $\sigma_b$). In the fitting, the distributions of pure pixels are normalized so that the weights equal 1. The half-Gaussian distribution function $f(x)$ is expressed as:

$$f(x;\, \mu, \sigma) = \frac{2}{\sigma\sqrt{2\pi}} \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) \quad (3)$$

which is fitted only over the pure-pixel side of each mean ($x \le \mu_v$ for vegetation and $x \ge \mu_b$ for background). Then, the weights $\omega_v$ and $\omega_b$ are obtained by calculating the ratio of each Gaussian component to the entire GMM.
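A sketch of the half-Gaussian fitting with SciPy, continuing from the previous steps; the initial guesses and the way the weights are recovered from the tail pixel counts are assumptions consistent with the description above.

```python
import numpy as np
from scipy.optimize import curve_fit

def half_gaussian(x, mu, sigma):
    """Eq. (3): Gaussian density doubled so it integrates to 1 over one side of mu."""
    return (2.0 / (sigma * np.sqrt(2.0 * np.pi))) * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Pure vegetation tail (a* <= mu_v0), normalized so its area equals 1
veg = centers <= mu_v0
veg_density = smoothed[veg] / (smoothed[veg].sum() * bin_width)
(mu_v, sd_v), _ = curve_fit(half_gaussian, centers[veg], veg_density, p0=[mu_v0, 3.0])

# Pure background tail (a* >= mu_b0), normalized in the same way
bg = centers >= mu_b0
bg_density = smoothed[bg] / (smoothed[bg].sum() * bin_width)
(mu_b, sd_b), _ = curve_fit(half_gaussian, centers[bg], bg_density, p0=[mu_b0, 2.0])

# One way to recover the weights: a full Gaussian holds twice the pixels of its fitted half
n_pixels = a_star.size
w_v = 2.0 * np.count_nonzero(a_star <= mu_v) / n_pixels
w_b = 2.0 * np.count_nonzero(a_star >= mu_b) / n_pixels
```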
(6) Threshold computation. Once the Gaussian parameters are estimated, the threshold can be determined through the "T2" threshold computation method introduced by Liu et al. (2012). For bimodal cases, the threshold $T$ can be derived by solving a complementary error function equation:

$$\omega_v \, \mathrm{erfc}\!\left(\frac{T-\mu_v}{\sqrt{2}\,\sigma_v}\right) = \omega_b \, \mathrm{erfc}\!\left(\frac{\mu_b-T}{\sqrt{2}\,\sigma_b}\right) \quad (4)$$

where $\mathrm{erfc}$ is the complementary Gaussian error function. This computation is based on the principle that the misclassification probabilities of vegetation and background are equal. The detailed derivation is given in Liu et al. (2012). Fig. 2 shows an example of the threshold (marked as a magenta solid line) obtained by solving Eq. (4). For unimodal cases, an empirical threshold (i.e. -4; Liu et al., 2012) computed from many proximal images is applied.
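The balance condition in Eq. (4) can be solved numerically; a sketch with SciPy, continuing from step (5) (the root is bracketed by the two component means, where the balance function changes sign):

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import brentq

def misclassification_balance(t):
    """Eq. (4): difference between the two misclassification probabilities."""
    veg_as_bg = w_v * erfc((t - mu_v) / (np.sqrt(2.0) * sd_v))   # vegetation labelled background
    bg_as_veg = w_b * erfc((mu_b - t) / (np.sqrt(2.0) * sd_b))   # background labelled vegetation
    return veg_as_bg - bg_as_veg

threshold = brentq(misclassification_balance, mu_v, mu_b)
```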
(7) FVC calculation. The threshold is used to segment an image by classifying pixels with a* values less than or equal to the threshold as vegetation and all other pixels as background. Finally, FVC is estimated as the ratio of the number of vegetation pixels to the total number of pixels.
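The final segmentation and FVC estimate then reduce to a comparison against the threshold (continuing the sketch above):

```python
vegetation_mask = lab[:, :, 1] <= threshold   # a* at or below the threshold -> green vegetation
fvc = float(vegetation_mask.mean())           # vegetation pixels / all pixels
```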
Fig. 2. An example of the half-Gaussian fit of a GMM from a UAV image taken 19 m above ground level (AGL) in a cornfield. $\mu_v$ and $\mu_b$ are the mean values of the two Gaussian components.

Fig. 3. Flowchart of FVC estimation using the HAGFVC method and LARS images. The modules highlighted in orange are the novel and essential steps of the HAGFVC method.
2.3. Assessment of performance

To highlight the improvement in FVC estimation achieved by the HAGFVC method, we compared it with two other methods (i.e. LAB2 and SHAR-LABFVC) that are also based primarily on the L*a*b* color space. To broaden this comparison and further generalize our results, another method (i.e. ExG) was also included.
(1) LAB2 (Macfarlane and Ogden, 2012): This method was developed for natural vegetation and uses the green leaf algorithm (Louhaichi et al., 2001) and the a* and b* values of each pixel in the CIE L*a*b* color space to segment green vegetation with a minimum-distance-to-means classifier. The reported RMSE of LAB2 was less than 0.05 (Macfarlane and Ogden, 2012).
(2) SHAR-LABFVC (Song et al., 2015): This method uses a lognormal GMM to characterize the CIE a* distribution of a vegetation-covered surface. In addition, it introduces the HSI color space to enhance the brightness of shaded parts of an image and improve the classification accuracy of ground-based images. The method is capable of detecting many small canopy gaps and partially overcomes the shadow effect; the authors reported a root mean square error (RMSE) of 0.025.
(3) ExG vegetation index (Woebbecke et al., 1995; Torres-Sánchez et al., 2014): This method was originally developed for weed identification and uses the green fraction of vegetation. The index is calculated in the RGB color space as ExG = 2G - R - B, where R, G and B are the red, green and blue components, respectively. An automatic threshold based on Otsu's method (Otsu, 1979) was used to segment the ExG grayscale image and estimate FVC.
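For reference, a compact sketch of the ExG baseline with Otsu thresholding (scikit-image assumed; `rgb` is the RGB image loaded earlier, and the formula follows the text with raw digital numbers):

```python
import numpy as np
from skimage.filters import threshold_otsu

rgb_f = rgb.astype(float)
exg = 2.0 * rgb_f[:, :, 1] - rgb_f[:, :, 0] - rgb_f[:, :, 2]   # ExG = 2G - R - B
t_otsu = threshold_otsu(exg)                                   # automatic grey-level threshold
fvc_exg = float(np.mean(exg > t_otsu))                         # greener-than-threshold pixels taken as vegetation
```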
Three statistics were used to assess the performance of each FVC-extraction method.
(1) Root mean square error (RMSE): measures the accuracy of FVC estimates at different resolutions based on the comparison with ground-based FVC observations. Since the errors are squared before they are averaged, the RMSE assigns a relatively high weight to large errors.

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\mathrm{FVC}_i - \mathrm{FVC}_{\mathrm{true}}\right)^2} \quad (5)$$

(2) Mean bias error (MBE): assesses the average bias of FVC estimates at different resolutions. In the MBE, the signs of the errors are retained.

$$\mathrm{MBE} = \frac{1}{n}\sum_{i=1}^{n}\left(\mathrm{FVC}_i - \mathrm{FVC}_{\mathrm{true}}\right) \quad (6)$$

(3) Standard deviation (STD): analyzes the consistency of FVC estimates at different spatial resolutions.

$$\mathrm{STD} = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(\mathrm{FVC}_i - \overline{\mathrm{FVC}}\right)^2} \quad (7)$$

where $\mathrm{FVC}_i$ is the FVC value estimated for the $i$-th flight altitude over a plot and $n$ denotes the number of observations. $\mathrm{FVC}_{\mathrm{true}}$ is the ground-measured FVC and is treated as the true value. $\overline{\mathrm{FVC}}$ represents the average of the FVC estimated from the LARS images at different flight altitudes over a sampling plot.
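The three statistics map directly onto short NumPy helpers (Eqs. (5) to (7)):

```python
import numpy as np

def rmse(fvc_est, fvc_true):
    """Eq. (5): root mean square error against the ground-measured FVC."""
    return float(np.sqrt(np.mean((np.asarray(fvc_est) - fvc_true) ** 2)))

def mbe(fvc_est, fvc_true):
    """Eq. (6): mean bias error (signs of the errors are retained)."""
    return float(np.mean(np.asarray(fvc_est) - fvc_true))

def std_consistency(fvc_est):
    """Eq. (7): sample standard deviation across flight altitudes."""
    return float(np.std(np.asarray(fvc_est), ddof=1))
```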
3. Materials
3.1. Study area

The study area (42.24° N, 117.06° E) was located in Weichang County, Hebei Province, China (Fig. 4). Field campaigns were performed on 29 June, 11 July and 31 July 2015. These dates represent three vegetative growth stages of corn (Zea mays; Table 1). The three growth stages are denoted V4, V6 and V8, where Vn (n = 4, 6, 8) indicates n leaves with collars visible. We established a small sampling plot (10 m × 8 m) to measure FVC using UAV and ground-based photography.
Fig. 4. Study area located in Weichang County, Hebei Province, China (marked as the red point in the top-left frame). The orthophoto of the experimental site is shown in the bottom frame. The sampling plot is approximately 10 m × 8 m and located in a cornfield (top-right frame).
Table 1. Overview of field campaigns during three growth stages of corn in 2015. Vn (n = 4, 6, 8) indicates n leaves with collars visible. True FVC values are derived from ground-based images using the SHAR-LABFVC method.

| Date | Local Time | Growth Stage | Mean Leaf Width (cm) | Number of Images | Flight Height (m) | True FVC | Illumination |
|---|---|---|---|---|---|---|---|
| 28/06 | 11:30 am | V4 | 2.7 | 14 | 3 - 29 (step = 2 m) | 0.22 | diffuse light (cloudy day) |
| 11/07 | 06:30 pm | V6 | 4.1 | 26 | 3 - 53 (step = 2 m) | 0.35 | direct light (large sun zenith) |
| 31/07 | 05:45 pm | V8 | 8.8 | 24 | 7 - 53 (step = 2 m) | 0.82 | diffuse light (cloudy day) |
3.2. UAV flights and aerial images

We used the model X-601 hexacopter (manufactured by Docwell Corporation, Beijing, China), which is a vertical takeoff and landing aircraft with a maximum flying time of up to 20 minutes depending on weather conditions and payload. An autopilot system provides autonomous navigation based on a Global Positioning System (GPS) signal. The platform was equipped with a Sony NEX-5R digital camera and stabilized by a stability augmentation system (SAS). This hexacopter can operate from several meters to a few kilometers above ground level (AGL).
The flight pattern of the UAV ranged from a lowest flight altitude (e.g. 5 m AGL) to a highest flight altitude (e.g. 60 m AGL) with an interval of 2 m to acquire images. In our study, images were captured at different flight altitude ranges for each growth stage (see Table 1). Each waypoint was located over the center of the plot. The UAV hovered for two seconds at each sample point to satisfy the positional accuracy requirement and ensure that the digital camera had enough time to acquire an image. Flight parameters containing WGS-84 latitude/longitude waypoints were logged using a ground control station (GCS) computer.
The Sony NEX-5R digital camera was mounted on the hexacopter to acquire nadir images. This camera has a 23.5 mm × 15.6 mm sensor and a maximum image size of 4912 × 3264 pixels. The focal length of the camera lens was 16 mm; thus, the pixel ground resolution was 0.3 cm at 10 m AGL. The leaf widths in the different growth stages are listed in Table 1. The camera aperture and shutter speed were set manually according to the light conditions before takeoff.
3.3. Ground measurements

To obtain ground measurements of FVC for validation, we used a Nikon D3000 digital camera mounted on a portable pole via an angled steel bracket. The camera was set to aperture priority mode, automatic exposure, ISO 100 and an 18 mm focal length to produce fine-quality images in Joint Photographic Experts Group (JPEG) format. Images were captured looking vertically downward from approximately 1.5 m above the canopy. A surveyor walked along the two diagonals of the plot and took a photo every 2 m. We obtained 14 field images at times similar to the UAV flights in each stage. The images were uploaded to a computer for subsequent processing using both the LAB2 method and the SHAR-LABFVC method. The FVC of the entire sampling plot is the arithmetic average of the FVC extracted from each image.
3.4. Simulated images

A simulated image dataset was used to quantify the uncertainty of FVC estimation by the HAGFVC method. Images were generated using the large-scale emulation system (LESS) software developed by Qi et al. (2017) for realistic three-dimensional (3D) corn scene simulation. LESS is a ray-tracing-based radiative transfer simulation model, which is mainly designed for the radiometric simulation of forest canopies but can also simulate other types of scenes (such as crops). We simulated four binary images from four scenes with different FVCs (see Table 2). In each image, values of 1 represent vegetation (green areas in Fig. 5) and values of 0 represent the background (black areas in Fig. 5). The binary images were aggregated to simulate the images obtained at varied flight altitudes. The image resolution decreased after aggregation and mixed pixels (orange areas in Fig. 5) appeared, with values between 0 and 1. The coarser the resolution, the greater the number of mixed pixels in the image.
Table 2. Overview of the four simulated images. $\mu$ and $\sigma$ are the mean values and standard deviations of the Gaussian distributions, respectively.

| FVC | Crop type | $\mu_v$ | $\sigma_v$ | $\mu_b$ | $\sigma_b$ | Image size [pixels] | Resolution [cm] |
|---|---|---|---|---|---|---|---|
| 0.21, 0.30, 0.38, 0.59 | Corn | -16 | 4.48 | 2 | 2.24 | 4912 × 3264 | 0.02 |
Fig. 5. Examples of image aggregation (simulated image with FVC of 0.38): (a) the original binary image of the cornfield and the images aggregated by scale factors of (b) 4 × 4, (c) 8 × 8 and (d) 16 × 16 pixels.
To analyze the variation of the histogram with spatial resolution, we randomly assigned a* values to the pixels in these binary images based on the Gaussian parameters of the vegetation and background (see Table 2). These parameters were chosen based on the mean statistics derived from many real proximally sensed images using the HAGFVC method. Then, these images with a* values were linearly aggregated to different resolutions, simulating the process by which a UAV acquires images at different flight altitudes. The quantities of mixed pixels and pure pixels are precisely known, as are the Gaussian parameters of the two pure components of each image. Fig. 6 shows the simulated CIE a* distributions before and after image aggregation. In the aggregation process, the proportion of pure pixels decreased and the proportion of mixed pixels increased. The mean values of the vegetation and background were relatively constant, while the standard deviations of both decreased gradually as pixels were aggregated.
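A sketch of the linear (block-mean) aggregation used to mimic higher flight altitudes; `binary_scene` stands for one of the simulated binary images and is a hypothetical placeholder here:

```python
import numpy as np

def aggregate(image, factor):
    """Block-mean aggregation of a 2D array by an integer scale factor."""
    h, w = image.shape
    h2, w2 = h - h % factor, w - w % factor              # crop so the blocks tile evenly
    blocks = image[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.mean(axis=(1, 3))

coarse = aggregate(binary_scene.astype(float), 4)        # e.g. a 4 x 4 aggregation
mixed_fraction = float(np.mean((coarse > 0.0) & (coarse < 1.0)))   # neither pure vegetation nor pure background
```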
Fig. 6. The CIE a* distributions of a simulated image with FVC of 0.38. Histograms of (a) the initial binary image of the cornfield and the images aggregated by scale factors of (b) 4 × 4, (c) 8 × 8 and (d) 16 × 16 pixels.
4. Results
4.1. Fraction of mixed pixels versus flight height for LARS images

We used high-resolution classified LARS images to assess the fraction of mixed pixels versus flight height in each growth stage. The LARS images acquired at 7 m AGL were classified into vegetation and background with a pixel ground resolution of 0.2 cm (much smaller than the foliage width), so the pixels acquired at 7 m AGL were assumed to be 'pure pixels'. Then, the classified images were progressively aggregated to simulate varied flight altitudes, and the fractions of mixed pixels were calculated. Coarse-resolution pixels corresponding to an AGL higher than 7 m were classified as mixed pixels if they did not entirely overlap with one type of 'pure' pixel in the 7 m AGL image (Fig. 7).
The fraction of mixed pixels increased with flight altitude (Fig. 7) at all growth stages, and the fractions of mixed pixels in the V6 and V8 stages were markedly larger than in the V4 stage at each altitude. Hence, images acquired in the denser crop stages have more mixed pixels than those acquired in the early crop stage at the same flight altitude.
Fig. 7. The relationship between height above ground level (resolution) and the fraction of mixed pixels.
4.2. Comparison of FVC estimates 308
There were marked differences between FVC estimated from the four methods, i.e., HAGFVC, 309
LAB2, SHAR-LABFVC and ExG. In Fig. 8, the images captured at 25 m AGL in the three growth stages 310
and cropped using an identical footprint illustrate the classification results of the HAGFVC (Fig. 8 V4b, 311
V6b, V8b), LAB2 (Fig. 8 V4c, V6c, V8c), SHAR-LABFVC (Fig. 8 V4d, V6d, V8d) and ExG (Fig. 8 312
V4e, V6e, V8e) methods. The black area represents the background, and the white area represents green 313
vegetation. Notably, only slight differences exist among the classified images in the V4 stage, while 314
substantial differences can be observed among the four methods in the other two stages, especially the 315
V6 stage. 316
Fig. 8. Image segmentation using the four methods: (V4a, V6a, V8a) UAV RGB images of the three growth stages at 25 m above ground level (AGL); (V4b, V6b, V8b) HAGFVC method; (V4c, V6c, V8c) LAB2 method; (V4d, V6d, V8d) SHAR-LABFVC method; and (V4e, V6e, V8e) ExG method.
The methods are compared at all flight altitudes and in the three growth stages in Fig. 9 and Table 321
3. Note that the true values of FVC derived using LAB2 and SHAR-LABFVC methods differ by less 322
than 0.05. In general, FVC estimated using the HAGFVC method is closest to the true values (Fig. 9) 323
for most flight altitudes and growth stages. 324
In the V4 stage, the FVC estimates of all four methods are close to the true FVC (“TrueValue” in 325
Table 3), i.e., the FVC derived from ground-measured images with the SHAR-LABFVC method), with 326
RMSEs of 0.02 for the LAB2 method, 0.03 for the SHAR-LABFVC method, 0.03 for the ExG method 327
and 0.02 for the HAGFVC method (see Table 3). The results at different altitudes are consistent, with 328
STDs of less than 0.03. 329
In the V6 stage, the HAGFVC method provided good results, with an RMSE of 0.02 and an MBE of 0.01, while the LAB2 and SHAR-LABFVC methods clearly overestimated FVC, with RMSEs of approximately 0.20 and MBEs of up to 0.19, whereas the ExG method underestimated FVC with an RMSE of 0.09 (see Table 3). Moreover, the HAGFVC method yielded stable results at different flight altitudes, with an STD of 0.02. This finding suggests that the HAGFVC method has the potential to accurately map FVC at several dozen meters AGL. By contrast, the LAB2 and SHAR-LABFVC methods yielded progressively worse results as the flight height increased; thus, only the results at the lowest flight altitude were trustworthy. The ExG method yielded stable results at different resolutions but underestimated FVC with an MBE of -0.08 (see Table 3).
In the V8 stage, the HAGFVC method provided relatively good results, with an RMSE of 0.03 and an MBE of -0.02. The SHAR-LABFVC method yielded results similar to those of HAGFVC. However, the LAB2 and ExG methods generated RMSEs of more than 0.16 and up to 0.4 (see Table 3).
Fig. 9. FVC comparison among the four methods in three vegetative growth stages, i.e., (a) V4, (b) V6 and (c) V8. Vn (n = 4, 6, 8) indicates n leaves with collars visible. TrueFVC-SHAR and TrueFVC-LAB2 represent the FVC derived using the SHAR-LABFVC and LAB2 methods, respectively, from the field measurements.
Table 3. RMSEs, MBEs and STDs of the four FVC-estimation methods. SHAR denotes the SHAR-LABFVC method. TrueValue is the FVC derived using the SHAR-LABFVC method from the field measurements.

| Stage (TrueValue) | Statistic | HAGFVC | LAB2 | SHAR | ExG |
|---|---|---|---|---|---|
| V4 (0.22) | RMSE | 0.02 | 0.02 | 0.03 | 0.03 |
| V4 (0.22) | STD | 0.02 | 0.03 | 0.01 | 0.02 |
| V4 (0.22) | MBE | -0.01 | -0.01 | -0.03 | -0.03 |
| V6 (0.35) | RMSE | 0.02 | 0.20 | 0.20 | 0.09 |
| V6 (0.35) | STD | 0.02 | 0.08 | 0.06 | 0.02 |
| V6 (0.35) | MBE | 0.01 | 0.19 | 0.19 | -0.08 |
| V8 (0.82) | RMSE | 0.03 | 0.16 | 0.06 | 0.36 |
| V8 (0.82) | STD | 0.03 | 0.06 | 0.05 | 0.03 |
| V8 (0.82) | MBE | -0.02 | -0.15 | -0.03 | -0.36 |
The observed FVCs at different flight heights differ because an angular effect exists with increasing 351
flight altitude (closer to parallel viewing as the UAV height increases). Although the actual focal length 352
of the lens and its field of view is fixed, cropping the image to measure the same region of interest at 353
ground level narrows the field of view as flight altitude increases. Correspondingly, the threshold slightly 354
changes at different flight altitudes. Fig. 10 shows the a* distributions at two flight altitudes (5 m and 355
35 m AGL) in the V6 stage. The threshold, the mean values of vegetation and background and the fitted 356
Gaussian curves are calculated using the HAGFVC method. Note that the valley between the vegetation 357
and background histogram peaks becomes less pronounced as the flight altitude increases because more 358
mixed pixels cause the overall distribution to become weakly bimodal. However, the threshold 359
determined using the HAGFVC method is still located in the valley. 360
Fig. 10. Histograms of a* distributions at different flight heights in the V6 stage. (a) above ground level (AGL) 362
of 5 m and (b) AGL of 35 m. The HAGFVC method derives fitted curves, Gaussian parameters and the 363
corresponding thresholds. 364
4.3. Sensitivity analysis

The HAGFVC threshold is a function of the weights, mean values and standard deviations of the Gaussian distributions (see Eq. 4). We used the thresholds and FVC estimates of the simulated images to quantify the sensitivity of the HAGFVC method to different spatial resolutions and, therefore, different flight altitudes. The images at different flight altitudes were simulated as described in Section 3.4. The fraction of mixed pixels increases linearly as the spatial resolution of each simulated image decreases (Fig. 11a). Figs. 11b-d illustrate the weight, mean and standard deviation against flight altitude. The mean values of the vegetation and background are almost constant, while the weights and standard deviations decrease as the flight altitude increases and the spatial resolution is reduced. The threshold derived from Eq. (4) is weakly affected by variations in the spatial resolution (Fig. 11e). Correspondingly, the FVC estimates closely agree with the true values (deviation of less than 0.07 at resolutions of less than 3.2 cm for all simulated images; Fig. 11f). These results suggest that the threshold used to segment green vegetation and the background is approximately scale invariant and that the uncertainty transferred to the FVC estimates is small.
Fig. 11. Uncertainty analysis using four simulated images. Relationship between flight height and (a) the mean values, (b) standard deviations, (c) weights, (d) fraction of mixed pixels, (e) threshold, and (f) FVC estimates.
5. Discussion 381
In this study, we have demonstrated that the HAGFVC method provides a solution for estimating 382
FVC from remotely sensed LARS images that yields consistent and accurate results at different spatial 383
resolutions. This method was developed based on a GMM, which describes the spectral characteristics 384
of a land surface covered by vegetation (Coy et al., 2016; Song et al., 2015). The basic concept of the 385
HAGFVC method is to use only pure pixels to derive the Gaussian parameters. We achieved this by 386
fitting half-Gaussian distributions for pure vegetation pixels and pure background pixels to avoid the 387
negative influence of mixed pixels. Mixed pixels are located between pure vegetation and the pure 388
background in the histogram (Fig. 1). The HAGFVC method uses the pixels at the edges (ends) of the a* histogram, where pure pixels are mainly distributed, to reconstruct full GMMs from the half-Gaussian distributions and then generate a reasonable threshold value. The fact that the FVC estimates in this study were close to the reference values strongly suggests that the negative effect of mixed pixels on FVC estimation was suppressed by using the HAGFVC method.
Compared to the other three methods, the HAGFVC method improved the FVC estimates and showed lower RMSEs, MBEs and STDs in the validation across different vegetation coverages. In the three growth stages of corn, the RMSEs and STDs of the FVC estimated with the HAGFVC method were less than 0.04, while LAB2 and SHAR-LABFVC yielded larger errors and inconsistencies (RMSEs of up to 0.20 in the V6 stage), and ExG yielded quite large errors (RMSE of 0.36) in the V8 stage (see Table 3). For sparse vegetation (V4 stage), when the background dominates the image, all of the methods accurately estimated the FVC and exhibited similar performance. However, in the growth stages with medium and high vegetation coverage (the V6 and V8 growth stages in this study), LAB2, SHAR-LABFVC and ExG produced considerable errors in FVC estimation at high flight altitudes and low spatial resolutions (Fig. 9 and Table 3). This is the result of the number of mixed pixels in an image increasing as the fractions of vegetation and background pixels become similar (Fig. 7). As shown in Fig. 9b, the LAB2 and SHAR-LABFVC methods distinctly overestimate FVC and the MBE increases with flight altitude (up to 0.19, Table 3), whereas the ExG method underestimated FVC with an MBE of -0.08. In the V8 stage (Fig. 9c), the LAB2 and SHAR-LABFVC methods exhibited better performance than in the V6 stage, but worse than in the V4 stage. ExG yielded a considerable underestimation, with an RMSE of up to 0.36 (see Table 3). Although the HAGFVC method was validated on a cornfield at one site, the method does not rely on the structure or spectral properties of crops; it only requires information from the histogram of a* values. Thus, we expect the HAGFVC method to apply to other crop types.
Conventional methods designed for proximal sensing are greatly constrained by the unreasonable 412
decomposition of GMMs because of the large quantities of mixed pixels. LAB2 and SHAR-LABFVC 413
were developed to extract FVC from high-resolution images with few mixed pixels. Although ExG has 414
been used for estimating FVC from LARS images (Torres-Sánchez et al., 2014), the effect of mixed 415
pixels was not fully investigated. Other classical image-processing methods that have been used to 416
segment LARS images, such as K-means, Artificial Neural Networks (ANN), Random Forest and 417
Spectral Index methods (Feng et al., 2015; Hu et al., 2017; Poblete-Echeverria et al., 2017), also do not 418
specifically consider mixed pixels. However, mixed pixels occupy a large proportion of the image in 419
some situations (at a coarser resolution and moderate FVC level). The trend of increasing FVC with 420
height in the LAB2 method results from bias in the training data set while the trend in the SHAR-421
LABFVC method results from the weakly bimodal distribution of images acquired at high altitudes. 422
More mixed pixels result in more blurring of foreground and background pixels, which results in more 423
pixels with enough ‘greenness’ to be automatically selected as foreground pixels for training the 424
unsupervised classification used in the LAB2 method. This introduces a bias into the nearest-neighbor 425
classification used by the LAB2 method towards foreground as spatial resolution decreases, which 426
results in increases in the estimated value of FVC with altitude. The trend of increasing FVC with height 427
in the SHAR-LABFVC method results from the failure to find a reasonable initial cut-off of the a* histogram, which is used to make an initial segmentation. Because of this failure, SHAR-LABFVC starts a back-up algorithm that uses a constant empirical threshold to conduct the classification. As the resolution decreases, this constant threshold results in a bias in the FVC estimates. For ExG, the continuous underestimation of FVC in the V6 and V8 stages is mainly due to lower inter-class variability, which leads to poor segmentation using Otsu's method (Otsu, 1979). Our research demonstrates the need for developing mixed-pixel-resistant methods for analyzing images acquired from UAVs. It is worth noting that, although the method gives accurate estimates of FVC, the resulting classified image should not be used for purposes that require very accurate spatial information about the location of foliage within an image, e.g. Chen-Cihlar clumping corrections (Chen and Cihlar, 1995), because of the large proportion of mixed pixels in high-altitude images.
The HAGFVC method was not substantially affected by illumination conditions or flight altitude. 439
The three UAV datasets were collected in distinctly different illumination environments, i.e., near noon 440
and near nightfall on cloudy days and near nightfall on a sunny day (Table 1). The variations in 441
illumination did not affect the HAGFVC method because the absolute values of a* are largely 442
independent of illumination and the method also does not depend on the absolute values of a*. A 443
sensitivity analysis showed that the threshold was insensitive to variations in the weights and Gaussian 444
parameters of the two pure components, despite the weights and standard deviations clearly decreasing 445
with increasing flight altitude. This is strong evidence that our method is relatively insensitive to the 446
level of green vegetation coverage and the quantity of mixed pixels. According to our analysis, the 447
absolute error was less than 0.07 when the resolution was less than 3.2 cm. Note that the HAGFVC 448
method only applies as long as the UAV is sufficiently close to the ground for there to be clearly defined 449
pure pixels of green vegetation and background. At very high altitudes the histogram of a* values will 450
become unimodal and an empirical threshold is used to estimate FVC. In extreme cases the images will 451
come to resemble images from high-altitude remote sensing, from which only vegetation indices can be 452
derived and pixel classification is challenging. 453
The complexity of the spatial distribution of vegetation, the variability in illumination conditions 454
(Ponzoni et al., 2014) and the angular effect (Zhao et al., 2012) caused by perspective projection, all 455
affect the accuracy of FVC estimation using the HAGFVC method by reducing the precision of searching 456
for the mean values of the Gaussian distributions. In practice, the accuracy of our method depends on 457
the precision of determining the mean values of the two components. Fluctuations were observed in the 458
FVC estimates at different spatial resolutions because of errors in determining the mean values. The relatively large fluctuations in FVC estimation at different flight altitudes in the V8 stage (RMSE of up to 0.03; Fig. 9c) are mainly caused by non-optimal mean values. Generally, non-optimal mean values
derive from two sources. The first is the representativeness of GMM for a vegetated surface. An 462
alternative model might produce better results. The second is the sub-optimal smoothing of the histogram. 463
A better smoothing algorithm might achieve more accurate determination of the initial mean values. 464
Theoretically, more accurately estimating these mean values is the key to improving the accuracy of 465
FVC estimation based on GMM decomposition. Notwithstanding the opportunities for improvement, 466
the HAGFVC method is a significant advance on existing methods to minimize the effect of mixed pixels 467
and yield accurate estimates of FVC. 468
6. Conclusions

This paper proposed a half-Gaussian fitting method (HAGFVC) to decompose a Gaussian mixture model (GMM) and estimate fractional vegetation cover (FVC) from low-altitude remote sensing (LARS) images. The algorithm uses only a portion of the pure pixels to derive the GMMs, in order to suppress the influence of mixed pixels, and classifies mixed pixels as vegetation or background at nearly equal rates of misclassification. We compared three FVC estimation methods (LAB2, SHAR-LABFVC and ExG) with the HAGFVC method and found that the HAGFVC method generated accurate and robust FVC estimates for crop fields of high, moderate and low vegetation density. In particular, when the fraction of mixed pixels was high (when a corn plant has six visible leaf collars), HAGFVC exhibited good performance, with an RMSE of 0.02 and an MBE of 0.01 at flight altitudes from 3 m to 50 m above ground level (AGL). Although the LAB2, SHAR-LABFVC and ExG methods produced good estimates (RMSEs of less than 0.04) for sparse vegetation, the large quantities of mixed pixels in moderate-density vegetation at coarse spatial resolutions reduced the accuracies of these conventional ground-based methods (RMSE of up to 0.20). Simulations showed that the theoretical RMSE of the HAGFVC method was less than 0.07 at resolutions of less than 3.2 cm. Consequently, our approach demonstrates the potential for accurately estimating FVC over large areas using UAVs and LARS.
Acknowledgments

This work was supported by the National Science Foundation of China (Grant nos. 41331171 and 61227806). The authors thank Prof. Suhong Liu (Beijing Normal University) for her kind suggestions on image simulation. We also appreciate the help of Dr. Ronghai Hu in the organization of this manuscript and of Dr. Jianbo Qi in the field campaigns and image simulation.
References 490
Bhardwaj, A., Sam, L., Akanksha, Martín-Torres, F.J., Kumar, R., 2016. UAVs as remote sensing platform in glaciology: 491
Present applications and future prospects. Remote Sens. Environ. 175, 196–204. 492
Carlson, T.N. and Ripley, D.A., 1997. On the relation between NDVI, fractional vegetation cover, and leaf area index. Remote 493
Sens. Environ. 62(3), 241-252. 494
Chapman, S. et al., 2014. Pheno-Copter: A Low-altitude, autonomous remote-sensing robotic helicopter for high-throughput 495
field-based phenotyping. Agronomy, 4(2), 279-301. 496
Chen, J.M., Cihlar, J., 1995. Plant canopy gap-size analysis theory for improving optical measurements of leaf-area index. 497
Appl. Opt. 34 (27), 6211–6222. 498
Chianucci, F., Disperati, L., Guzzi, D. and Bianchini, D., 2016. Estimation of canopy attributes in beech forests using true 499
colour digital images from a small fixed-wing UAV. Int. J. Appl. Earth Obs. Geoinf. 47, 60-68. 500
Cox, D.R., Hinkley, D.V., Rubin, D.B. and Silverman, B.W., 1989. Monographs on statistics and applied probability. 2(2), 501
273-277. 502
Coy, A., Rankine, D., Taylor, M., Nielsen, D. and Cohen, J., 2016.Increasing the accuracy and automation of fractional 503
vegetation cover estimation from digital photographs. Remote Sens. 8(7), 474. 504
Čugunovs, M., Tuittila, E.-S., Mehtätalo, L., Pekkola, L., Sara-Aho, I., Kouki, J., 2017. Variability and patterns in forest soil 505
and vegetation characteristics after prescribed burning in clear-cuts and restoration burnings. Silva Fenn. 51. 506
Woebbecke, D.M., Meyer, G.E., Von Bargen, K., Mortensen, D.A., 1995. Color indices for weed identification under various soil, residue, and lighting conditions. Trans. ASAE 38, 259–269.
Feng, Q., Liu, J. and Gong, J., 2015. UAV Remote sensing for urban vegetation mapping using random forest and texture 509
analysis. Remote Sens. 7(1), 1074-1094. 510
Gutman, G., Ignatov, A., 1997. Satellite-derived green vegetation fraction for the use in numerical weather prediction models. 511
Satell. Data Appl. Weather Clim. 19, 477–480. 512
Hsieh, P.F., Lee, L.C. and Chen, N.Y., 2001. Effect of spatial resolution on classification errors of pure and mixed pixels in 513
remote sensing. IEEE Trans. Geosci. Remote Sensing. 39(12), 2657-2663. 514
Hunt, E.R., Daughtry, C.S.T., Mirsky, S.B. and Hively, W.D., 2014. Remote sensing with simulated unmanned aircraft 515
imagery for precision agriculture applications. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 7(11), 4566-4571. 516
Jones, H. and Sirault, X., 2014. Scaling of thermal images at different spatial resolution: the mixed pixel problem. Agronomy, 517
4(3), 380-396. 518
Jung, M., Henkel, K., Herold, M. and Churkina, G., 2006. Exploiting synergies of global land cover products for carbon cycle 519
modeling. Remote Sens. Environ. 101(4), 534-553. 520
Rango, A., Laliberte, A., Herrick, J.E., Winters, C., Havstad, K., Steele, C., Browning, D., 2009. Unmanned aerial vehicle-521
based remote sensing for rangeland assessment, monitoring, and management. J. Appl. Remote Sens. 3, 033542. 522
Liu, J. and Pattey, E., 2010. Retrieval of leaf area index from top-of-canopy digital photography over agricultural crops. 523
Agric. For. Meteorol. 150(11), 1485-1490. 524
Liu, Y., Mu, X., Wang, H. and Yan, G., 2012. A novel method for extracting green fractional vegetation cover from digital 525
images. J. Veg. Sci. 23(3), 406-418. 526
Louhaichi, M., Borman, M.M. and Johnson, D.E., 2001. Spatially located platform and aerial photography for documentation 527
of grazing impacts on wheat. Geocarto Int. 16, 65-70. 528
Macfarlane, C. and Ogden, G.N., 2012. Automated estimation of foliage cover in forest understorey from digital nadir images. 529
Methods Ecol. Evol. 3(2), 405-415. 530
Matese, A., Toscano, P., Di Gennaro, S., Genesio, L. and Vaccari, F., 2015. Intercomparison of UAV, aircraft and satellite 531
remote sensing platforms for precision viticulture. Remote Sens. 7(3), 2971-2990. 532
Mesas-Carrascosa, F.J., Notario-García, M.D., Meroño De Larriva, J.E., Sánchez De La Orden, M. and García-Ferrer Porras, 533
A., 2014. Validation of measurements of land plot area using UAV imagery. Int. J. Appl. Earth Obs. Geoinf. 33, 270-279. 534
Mu, X., Hu, M., Song, W., Ruan, G. and Ge, Y., 2015.Evaluation of sampling methods for validation of remotely sensed 535
fractional vegetation cover. Remote Sens. 7(12), 16164-16182. 536
Muir, J., Schmidt, M., Tindall, D., Trevithick, R., Scarth, P., Stewart, J.B., 2011. Field measurement of fractional ground 537
cover : a technical handbook supporting ground cover monitoring for Australia. ABARES. 538
Otsu, N., 1979. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man. Cybern. 9, 62–66. 539
Pérez, A.J., López, F., Benlloch, J.V. and Christensen, S., 2000. Colour and shape analysis techniques for weed detection in 540
cereal fields. Comput. Electron. Agric. 25(3), 197-212. 541
Poblete-Echeverria, C., Federico Olmedo, G., Ingram, B. and Bardeen, M., 2017. Detection and segmentation of vine canopy 542
in ultra-high spatial resolution RGB imagery obtained from unmanned aerial vehicle (UAV): A Case Study in a Commercial 543
Vineyard. Remote Sens. 9(3), 268. 544
Ponzoni, F.J., Da Silva, C.B., Dos Santos, S.B., Montanher, O.C. and Dos Santos, T.B., 2014.Local illumination influence 545
on vegetation indices and plant area index (PAI) relationships. Remote Sens, 6(7), 6266-6282. 546
Qi, J., Xie, D., Guo, D., Yan, G., 2017. A large-scale emulation system for realistic three-dimensional (3-D) forest simulation. 547
IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 10, 4834–4843. 548
Samseemoung, G., Soni, P., Jayasuriya, H.P., Salokhe, V.M., 2012. Application of low altitude remote sensing (LARS) platform for monitoring crop growth and weed infestation in a soybean plantation. Precis. Agric. 13(6), 611-627.
Sellers, P.J., 1997. Modeling the exchanges of energy, water, and carbon between continents and the atmosphere. Science 275, 502–509.
Song, W., Mu, X., Ruan, G. and Gao, Z., 2017. Estimating fractional vegetation cover and the vegetation index of bare soil 553
and highly dense vegetation with a physically based method. Int. J. Appl. Earth Obs. Geoinf. 58, 168-176. 554
Song, W., Mu, X., Yan, G. and Huang, S., 2015. Extracting the green fractional vegetation cover from digital images using 555
a shadow-resistant algorithm (SHAR-LABFVC). Remote Sens. 7(8), 10425-10443. 556
Torres-Sánchez, J., Peña, J.M., de Castro, A.I. and López-Granados, F., 2014. Multi-temporal mapping of the vegetation 557
fraction in early-season wheat fields using images from UAV. Comput. Electron. Agric. 103, 104-113. 558
Watts, A.C., Ambrosia, V.G. and Hinkley, E.A., 2012. Unmanned aircraft systems in remote sensing and scientific research: 559
classification and considerations of use. Remote Sens. 4(12), 1671-1692. 560
Yan, G., Mu, X., Liu, Y., 2012. Fractional vegetation cover, in: Advanced remote sensing. Elsevier, pp. 415–438. 561
Zarco-Tejada, P.J., González-Dugo, V., Berni, J.A.J., 2012. Fluorescence, temperature and narrow-band indices acquired 562
from a UAV platform for water stress detection using a micro-hyperspectral imager and a thermal camera. Remote Sens. 563
Environ. 117, 322–337. 564
Zhao, J., Xie, D., Mu, X., Liu, Y. and Yan, G., 2012. Accuracy evaluation of the ground-based fractional vegetation cover 565
measurement by using simulated images. In: IGARSS 2012. Munich, Germany, pp. 3347-3350.566
... The DCP image provides the strength of high resolution but the weakness of a small footprint for vegetation monitoring. The problem of insufficient spatial representativeness might exist (e.g., P(θ) > 0.3 at Plot 5 on 13 June and Plot 4 on 28 June) because of the spatial heterogeneity of the croplands in some growth stages of maize (Li et al., 2018). As a statistical index, FD is influenced by the spatial coverage of the DCP image. ...
... However, the resolution of the DCP images might decrease with the distance between the canopy and the camera. For the DCP images with different resolutions of the same scene, the results based on the FD method would be different, because the resolution of the DCP images influences the calculation of W and P(θ) Li et al., 2018;Yan et al., 2019b). In other words, the higher resolution the DCP images, the fewer mixed pixels, and the higher estimation accuracy of the P(θ) and FD. ...
Article
Full-text available
The leaf area index (LAI), defined as one-half of the total green leaf area per unit horizontal ground surface area, is a key parameter in agriculture, forestry, ecology, and other fields. The clumping index (CI) accounts for the nonrandom spatial distribution of the foliage elements in the canopy, thereby considerably influencing the accuracy of LAI estimation with optical field-based instruments. Most of the traditional clumping effect correction methods for LAI measurements are based on the theory developed for one-dimensional (1D) data of the vegetation canopy. The development of new methods remains necessary for LAI measurement with two-dimensional (2D) data, including images from digital cover photography (DCP). We found that the fractal dimension (FD) of a 2D ground-based DCP image is an effective tool for correcting the clumping effect and estimating the LAI. The universal formula was derived to describe the relationship between the LAI and FD for randomly distributed leaves using the box-counting method (BCM) and the Boolean model. For the clumped leaves, the universal formula is related to the FD, LAI, and CI. The LAI and CI can be calculated with the FD and gap probability derived from DCP images. Eighteen simulated scenes of different vegetation structure patterns, three realistic canopy scenes of the fourth phase of the radiative transfer model intercomparison (RAMI), and field-measured data acquired from 5 plots in 4 temporal phases were provided to validate the method. The results showed good agreement with the reference (R² = 0.93 and RMSE = 0.46 for simulated data; uncertainty from 0.31 to 1.05 for realistic canopy; R² = 0.85 and RMSE = 0.34 for field-measured data). This validation with downward DCP images shows that the proposed FD method assesses the clumping effect more thoroughly compared to the clumping effect correction methods originally designed for 1D data. The FD method is expected to improve the measurement accuracy of LAI with DCP images, especially for heterogeneous canopies.
... data through the comparison of various fitting forms, including exponential fitting, Fourier fitting, polynomial fitting, power fitting, sum-of-sines fitting, etc. The expression of Gaussian fitting can be written as (Li et al., 2018; Xu et al., 2019) ...
Article
Full-text available
A key problem in wind turbine (WT) operation and maintenance is to assess turbine performance accurately. The power curve is an important indicator for evaluating the performance of wind turbines, and how to model and obtain it has long been a topic of active research. This paper proposes a novel approach for obtaining the actual power curve of wind turbines. Firstly, a basic data preprocessing algorithm is designed to handle zero and null values in the original supervisory control and data acquisition (SCADA) data. The moving average filtering (MAF) method is employed to process the wind speed, in order to account for the aggregate effect of the wind on turbine power over a given period. Based on the momentum theory of the ideal wind turbine, combined with the characteristics of the anemometer installation position, the deviation between the measured and actual wind speeds is approximately corrected. The influence of dynamic changes in air density is also considered. Then, the Gaussian fitting algorithm is used to fit the wind-power curve. The characteristics of the power curve before and after wind speed correction are compared and analyzed, and the influence of parameter uncertainty on the reliability of the power curve is also investigated. Finally, the characteristics of the power curves of four wind turbines are compared and analyzed. The results show that, among these power curves, WT3 and WT4 are the closest, WT2 is next, and WT1 deviates the most from the others. The research work provides a valuable basis for on-site performance evaluation, overhaul, and maintenance of wind turbines.
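As a hedged illustration of the Gaussian fitting step described above, the sketch below fits the standard single-term Gaussian form to synthetic wind-speed/power samples with SciPy; the data, initial guesses, and parameter names are invented for demonstration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, b, c):
    # Standard single-term Gaussian form: a * exp(-((x - b) / c)^2)
    return a * np.exp(-((x - b) / c) ** 2)

# Illustrative SCADA-like samples: wind speed (m/s) vs. normalized power.
speed = np.linspace(0.0, 25.0, 50)
power = gaussian(speed, 1.0, 12.0, 6.0) \
        + 0.02 * np.random.default_rng(1).normal(size=speed.size)

# Fit the Gaussian parameters; p0 is a rough initial guess.
(a, b, c), _ = curve_fit(gaussian, speed, power, p0=[1.0, 10.0, 5.0])
print(a, b, c)
```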
... Li et al. (2018) proposed the half-Gaussian fitting method for fractional vegetation cover estimation. They used low-altitude remote-sensing UAV images of corn acquired during three growth stages at different flight altitudes. ...
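For readers unfamiliar with the idea, the following is a minimal, assumption-laden sketch of fitting a half-Gaussian to the outer (unmixed) side of a histogram peak, which is the core notion behind half-Gaussian fitting; the greenness values, bin count, and helper names are illustrative and do not reproduce the published HAGFVC procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def half_gaussian(x, a, mu, sigma):
    # One-sided Gaussian centered at the histogram mode mu.
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fit_outer_half(values, side="right", bins=100):
    """Fit a half-Gaussian to the outer (unmixed) side of a histogram peak."""
    hist, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mode = centers[np.argmax(hist)]
    keep = (centers >= mode) if side == "right" else (centers <= mode)
    (a, mu, sigma), _ = curve_fit(
        half_gaussian, centers[keep], hist[keep],
        p0=[hist.max(), mode, values.std() or 1.0])
    return a, mu, sigma

# Illustrative greenness values for a single class (e.g., vegetation)
# whose inner tail is contaminated by mixed pixels.
rng = np.random.default_rng(2)
veg = rng.normal(25.0, 4.0, 5000)
mixed = rng.uniform(-10.0, 25.0, 2000)
print(fit_outer_half(np.concatenate([veg, mixed]), side="right"))
```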
Article
Full-text available
Because of the increasing global population, changing climate, and consumer demands for safe, environmentally friendly, and high‐quality food, plant breeders strive for higher yield cultivars by monitoring specific plant phenotypes. Developing new crop cultivars and monitoring through current methods is time‐consuming, sometimes subjective, and based on subsampling of microplots. High‐throughput phenotyping using unmanned aerial vehicle‐acquired aerial orthomosaic images of breeding trials improves and simplifies this labor‐intensive process. To perform per‐microplot phenotype analysis from such imagery, it is necessary to identify and localize individual microplots in the orthomosaics. This paper reviews the key concepts of recent studies and possible future developments regarding vegetation segmentation and microplot segmentation. The studies are presented in two main categories: (a) general vegetation segmentation using vegetation‐index‐based thresholding, learning‐based, and deep‐learning‐based methods; and (b) microplot segmentation based on machine learning and image processing methods. In this study, we performed a literature review to extract the algorithms that have been developed in vegetation and microplots segmentation studies. Based on our search criteria, we retrieved 92 relevant studies from five electronic databases. We investigated these selected studies carefully, summarized the methods, and provided some suggestions for future research. Algorithms that are commonly used for vegetation segmentation in the field are reviewed. The state‐of‐the‐art algorithms in vegetation segmentation are presented. The state‐of‐the‐art algorithms for microplot segmentation in the field are reviewed. Challenges created by lack of and gaps between plots in microplot segmentation in the field are analyzed. Recommendations are given on algorithms and direction of future research for vegetation and microplot segmentation.
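As one concrete member of the vegetation-index-based thresholding family surveyed in the review, the sketch below combines the excess-green index with Otsu's threshold; the choice of index and the scikit-image call are assumptions for illustration, not a method prescribed by the review.

```python
import numpy as np
from skimage.filters import threshold_otsu

def segment_vegetation(rgb):
    """Segment vegetation in an RGB image (float array in [0, 1]) by
    thresholding the excess-green index ExG = 2g - r - b with Otsu's method."""
    total = rgb.sum(axis=2) + 1e-6          # avoid division by zero
    r, g, b = (rgb[..., i] / total for i in range(3))
    exg = 2.0 * g - r - b                   # excess-green index
    return exg > threshold_otsu(exg)        # True where vegetation

# Illustrative use on a random image; replace with an orthomosaic tile.
rng = np.random.default_rng(3)
mask = segment_vegetation(rng.random((128, 128, 3)))
print(mask.mean())  # fraction of pixels classified as vegetation
```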
... UAS carrying multispectral and RGB (red, green, blue) cameras have been shown to be suitable for crop growth and yield monitoring (Lu, 2005; Torres-Sanchez et al., 2014; Huang et al., 2016; Kanning et al., 2018; Li et al., 2018a; Hlatshwayo et al., 2019; Yang et al., 2020; Zhang et al., 2021). Many studies have shown that RGB images can provide accurate estimates of fc for a wide range of vegetation coverage and image resolutions (Chen et al., 2016; Duan et al., 2017; Li et al., 2018b; Yan et al., 2019; Jiang et al., 2020). Studies by Hunt Jr. et al. (2005) have indicated that the triangular greenness index (TGI), an RGB index, is well correlated with plant chlorophyll content and biomass. ...
Article
Guayule (Parthenium argentatum, A. Gray), a perennial desert shrub, produces high-quality natural rubber and is targeted as a domestic natural rubber source in the U.S. While commercialization efforts for guayule are ongoing, crop management requires plant growth monitoring, irrigation requirement assessment, and final yield estimation. Such assistance for guayule management could be provided with remote sensing (RS) data. In this study, field and RS data, collected via drones, from a 2-year guayule irrigation experiment conducted at Maricopa, Arizona were evaluated. In-season field measurements included fractional canopy cover (fc), basal (Kcb) and single (Kc) crop coefficients, and final yields of dry biomass (DB), rubber (RY), and resin (ReY). The objectives of this paper were to compare vegetation indices from MS data (NDVI) and RGB data (triangular greenness index, TGI) and to derive linear prediction models for estimating fc, Kcb, Kc, and yield as functions of the MS and RGB indices. The NDVI and TGI showed similar seasonal trends and were correlated at a coefficient of determination (r2) of 0.52 and a root mean square error (RMSE) of 0.11. The prediction of measured fc as a linear function of NDVI (r2 = 0.90) was better than by TGI (r2 = 0.50). In contrast to TGI, the measured fc was highly correlated with estimated fc based on RGB image evaluation (r2 = 0.96). Linear models of Kcb and Kc, developed over the two years of guayule growth, had similar r2 values vs NDVI (r2 = 0.46 and 0.41, respectively) and vs TGI (r2 = 0.48 and 0.40, respectively). Final DB, RY, and ReY were predicted by both NDVI (r2 = 0.75, 0.53, and 0.70, respectively) and TGI (r2 = 0.72, 0.48, and 0.65, respectively). The RS-based models enable estimation of irrigation requirements and yields in guayule production fields in the U.S.
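A minimal sketch of the kind of linear prediction model described above (fc as a linear function of NDVI) is given below; the paired NDVI and fc values are invented for illustration and are not the study's data.

```python
import numpy as np

# Illustrative paired observations: plot-mean NDVI vs. measured canopy cover fc.
ndvi = np.array([0.20, 0.35, 0.50, 0.62, 0.71, 0.80])
fc   = np.array([0.10, 0.28, 0.45, 0.60, 0.72, 0.85])

# Least-squares linear model fc = slope * NDVI + intercept.
slope, intercept = np.polyfit(ndvi, fc, 1)
predicted = slope * ndvi + intercept
r2 = 1.0 - np.sum((fc - predicted) ** 2) / np.sum((fc - fc.mean()) ** 2)
print(f"fc = {slope:.2f} * NDVI + {intercept:.2f}, r2 = {r2:.2f}")
```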
... Accurate and frequent monitoring of forest canopy variables over limited spatial extents is critical for precision forestry, sustainable management, ecological modeling, and plant physiological evaluation (Ferreira et al., 2018; Sankey et al., 2017; Schiefer et al., 2020; Shen et al., 2020). The advance of close-range remote sensing allows the fine-scale measurement of canopy biophysical and biochemical variables due to its ultrahigh resolution (i.e., centimetric or even millimetric resolution) (Li et al., 2018, 2020). Unmanned aerial vehicles (UAVs), in particular, enable variable estimation at a wide range of scales (from the individual to the species, community, and ecosystem scales) with flexible spatial and temporal resolutions at affordable costs (Berra et al., 2019; Dandois and Ellis, 2013). ...
Article
Full-text available
Accurate wall-to-wall estimation of forest crown cover is critical for a wide range of ecological studies. Notwithstanding the increasing use of UAVs in forest canopy mapping, the ultrahigh-resolution UAV imagery requires an appropriate procedure to separate the contribution of understorey from overstorey vegetation, which is complicated by the spectral similarity between the two forest components and the illumination environment. In this study, we investigated the integration of deep learning with combined imagery and photogrammetric point cloud data for boreal forest canopy mapping. The procedure enables the automatic creation of training sets of tree crown (overstorey) and background (understorey) data via the combination of UAV images and their associated photogrammetric point clouds and expands the applicability of deep learning models with self-supervision. Based on UAV images with different overlap levels from 12 conifer forest plots, categorized into "I", "II" and "III" complexity levels according to the illumination environment, we compared the self-supervised deep learning-predicted canopy maps from original images with manual delineation data and found an average intersection over union (IoU) larger than 0.9 for "complexity I" and "complexity II" plots and larger than 0.75 for "complexity III" plots. The proposed method was then compared with three classical image segmentation methods (i.e., maximum likelihood, K-means, and Otsu) in plot-level crown cover estimation, and it outperformed the other methods in overstorey canopy extraction. The proposed method was also validated against wall-to-wall and pointwise crown cover estimates using UAV LiDAR and in situ digital cover photography (DCP) benchmarking methods. The results showed that the model-predicted crown cover was in line with the UAV LiDAR method (RMSE of 0.06) and deviated from the DCP method (RMSE of 0.18). We subsequently compared the new method and the commonly used UAV structure-from-motion (SfM) method at varying forward and lateral overlaps over all plots and a rugged terrain region; the results showed that the method-predicted crown cover was relatively insensitive to varying overlap (largest bias of less than 0.15), whereas the UAV SfM-estimated crown cover was seriously affected by overlap and decreased with decreasing overlap. In addition, canopy mapping over rugged terrain verified the merits of the new method, with no need for a detailed digital terrain model (DTM). The new method is recommended for use across various image overlaps, illumination conditions, and terrains because of its robustness and high accuracy. This study offers opportunities to promote forest ecological applications (e.g., leaf area index estimation) and sustainable management (e.g., deforestation).
Chapter
By analyzing the conversion algorithm between RGB and HSV, this paper proposes a fast algorithm to convert RGB space to HSV space. The algorithm adopts shift operation and lookup table instead of floating-point multiplication, which greatly improves the speed of the algorithm on FPGA. In addition, the Y component is no longer involved in the calculation during the conversion, which further reduces the computational complexity. Finally, experiments show that compared with the traditional algorithm, the algorithm can save 80% of the computing time on the FPGA platform and 46% on the PC platform. Therefore, this algorithm is widely used in real-time video analysis such as license plate recognition and flame detection.
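For reference, the sketch below implements the standard floating-point RGB-to-HSV conversion that the chapter's shift-and-lookup algorithm accelerates; it is the textbook formula, not the FPGA-optimized version described in the chapter.

```python
def rgb_to_hsv(r, g, b):
    """Standard RGB (0..1) to HSV (H in degrees, S and V in 0..1) conversion."""
    mx, mn = max(r, g, b), min(r, g, b)
    v = mx
    s = 0.0 if mx == 0 else (mx - mn) / mx
    if mx == mn:
        h = 0.0                                   # achromatic pixel
    elif mx == r:
        h = (60.0 * (g - b) / (mx - mn)) % 360.0  # red is the dominant channel
    elif mx == g:
        h = 60.0 * (b - r) / (mx - mn) + 120.0    # green is the dominant channel
    else:
        h = 60.0 * (r - g) / (mx - mn) + 240.0    # blue is the dominant channel
    return h, s, v

print(rgb_to_hsv(0.2, 0.6, 0.3))  # a greenish pixel -> hue near 135 degrees
```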
Article
Crop identification and classification is an important task for the modern agricultural sector. With the development of unmanned aerial vehicle (UAV) systems, crop identification from RGB images is experiencing a paradigm shift from conventional image processing techniques to deep learning strategies because of the breakthroughs achieved by convolutional neural networks (CNNs). UAV images are well suited to identifying different crops owing to their high spatial resolution. For precision agriculture, crop identification is a primary requirement: identifying the specific crop type grown on a parcel of land is essential for proper farming and also helps to estimate the net yield of a particular crop. Previous works are limited to identifying a single crop from RGB images captured by UAVs and have not explored multi-crop classification with deep learning techniques. A multi-crop identification tool is needed because designing a separate tool for each crop type is cumbersome; a tool that can successfully differentiate multiple crops would be valuable for agronomic experts. In contrast to existing techniques, this article presents a new conjugated dense CNN (CD-CNN) architecture with a new activation function, SL-ReLU, for intelligent classification of multiple crops from RGB images captured by UAV. CD-CNN integrates data fusion and feature map extraction in conjunction with the classification process. Initially, a dense block architecture is proposed with the new SL-ReLU activation associated with the convolution operation to mitigate the risk of unbounded convolved outputs and gradient explosion. The dense block architecture concatenates all previous-layer features when computing new features, which reduces the chance of losing important features as the CNN deepens. Two dense blocks are then conjugated with the help of a conversion block to obtain better performance. Unlike a traditional CNN, CD-CNN omits the fully connected layer, which reduces the chance of feature loss due to random weight initialization. The proposed CD-CNN achieves strong discrimination among several crop classes. Raw UAV images of five different crops were captured from different parts of India; small candidate crop regions were extracted from the raw images with Arc GIS 10.3.1 software, and the candidate regions were fed to CD-CNN for training. Experimental results show that the proposed module achieves an accuracy of 96.2% on the data concerned. Further, the superiority of the proposed network is established by comparison with other machine learning techniques, namely RF-200 and SVM, and standard CNN architectures, namely AlexNet, VGG-16, VGG-19 and ResNet-50.
Article
Full-text available
The survival rate of seedlings is a decisive factor in afforestation assessment. Generally, ground checking is more accurate than other methods. However, the survival rate of seedlings can be higher in the growing season, and it can be estimated over a larger area at a relatively lower cost by extracting tree crowns from unmanned aerial vehicle (UAV) images, which provides an opportunity for monitoring afforestation over extensive areas. At present, studies on extracting individual tree crowns under complex ground vegetation conditions are limited. Based on afforestation images obtained by airborne consumer-grade cameras in central China, this study proposes a method of extracting and fusing multi-radius morphological features to obtain candidate crowns. A random forest (RF) was used to identify the regions extracted from the images, and the recognized crown regions were then fused selectively according to distance. A low-cost individual crown recognition framework was constructed for rapid checking of planted trees. The method was tested in two afforestation areas of 5950 m2 and 5840 m2, with a population of 2418 trees (Koelreuteria) in total. Because of the complex terrain of the sample plots and the high weed coverage, the crown width of the trees and the spacing of the saplings vary greatly, which increases both the difficulty and complexity of crown extraction. Nevertheless, the precision, recall, and F-score of the proposed method reached 93.29%, 91.22%, and 92.24%, respectively, and 2212 trees were correctly recognized and located. The results show that the proposed method is robust to changes in brightness and to the splitting of multi-directional tree crowns, and it offers an automatic solution for afforestation verification.
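A hedged sketch of the general pipeline, extracting morphological features at several radii and classifying them with a random forest, is shown below; the radii, the use of grayscale openings as features, and the stand-in labels are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np
from skimage.morphology import opening, disk
from sklearn.ensemble import RandomForestClassifier

def multiradius_features(gray, radii=(3, 5, 9)):
    """Stack grayscale morphological openings at several radii as per-pixel features."""
    return np.stack([opening(gray, disk(r)) for r in radii], axis=-1)

rng = np.random.default_rng(4)
image = rng.random((64, 64))                  # stand-in for a green-band image
features = multiradius_features(image).reshape(-1, 3)
labels = (image > 0.5).astype(int).ravel()    # stand-in crown / background labels

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(features, labels)
print(clf.score(features, labels))
```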
Article
Crop management is one of the main trends in the modernization of agricultural technologies. To implement crop management, growers need accessible and effective information about the state of their crops. The aim of this work is to develop a method of plant identification in high-resolution multispectral images of continuously sown crops, using winter wheat as an example. The research was conducted on 17 March 2019 on winter wheat (Mukan variety) in the tillering phase, in production fields near the village of Horodyshche, Kyiv region. Aerial monitoring from a height of 100 m was carried out using a Slantrange 3p spectral sensor mounted on a DJI Matrice 600 UAV. To extract the reference graphical data from Slantview, a screen copy was made in the full-screen mode of the image window. Statistical processing of the graphical data from the spectral monitoring was performed in MathCad. It was found that reliably establishing the spectral portrait of the soil for pixel-by-pixel filtering of multispectral images is difficult, because soil color depends strongly on moisture, which may differ between areas that are open and those shaded by plants. A more promising way to eliminate random inclusions is to use a spectral portrait of the plants based on the intensity ratios of their spectral components. A promising parameter for assessing the condition of crops is the area of their horizontal surface, which can be determined by pixel-level analysis of the image. A filtering option is proposed, which, like the solutions implemented in the Slantview software, needs further debugging. Further research should address the methodological support for assessing the quality of the filtering of spectral vegetation monitoring data.
Article
Full-text available
The realistic reconstruction and radiometric simulation of a large-scale three-dimensional (3-D) forest scene have potential applications in remote sensing. Although many 3-D radiative transfer models concerning forest canopy have been developed, they mainly focused on homogeneous or relatively small heterogeneous scenes, which are not compatible with coarse-resolution remote sensing observations. Due to the huge complexity of forests and the inefficiency of collecting precise 3-D data of large areas, realistic simulation over large-scale forest areas remains challenging, especially in regions of complex terrain. In this study, a large-scale emulation system for realistic 3-D forest simulation is proposed. The 3-D forest scene is constructed from a representative single tree database (SDB) and airborne laser scanning (ALS) data. ALS data are used to extract tree height, crown diameter and position, which are linked to the individual trees in SDB. To simulate the radiometric properties of the reconstructed scene, a radiative transfer model based on a parallelized ray-tracing code was developed. This model has been validated with an abstract and an actual 3-D scene from the radiative transfer model intercomparison (RAMI) website and showed results comparable with other models. Finally, a 1 km × 1 km scene with more than 100 000 realistic individual trees was reconstructed and a Landsat-like reflectance image was simulated, which kept the same spatial pattern as the actual Landsat 8 image.
Article
Full-text available
The use of Unmanned Aerial Vehicles (UAVs) in viticulture permits the capture of aerial Red-Green-Blue (RGB) images with an ultra-high spatial resolution. Recent studies have demonstrated that RGB images can be used to monitor the spatial variability of vine biophysical parameters. However, for estimating these parameters, accurate and automated segmentation methods are required to extract relevant information from RGB images. Manual segmentation of aerial images is a laborious and time-consuming process. Traditional classification methods have shown satisfactory results in the segmentation of RGB images for diverse applications and surfaces; however, in the case of commercial vineyards, it is necessary to consider some particularities inherent to canopy size in the vertical trellis systems (VSP), such as the shadow effect and the differing soil conditions in the inter-rows (mixed information from soil and weeds). Therefore, the objective of this study was to compare the performance of four classification methods (K-means, Artificial Neural Networks (ANN), Random Forest (RForest) and Spectral Indices (SI)) to detect canopy in a vineyard trained on VSP. Six flights were carried out from post-flowering to harvest in a commercial vineyard (cv. Carménère) using a low-cost UAV equipped with a conventional RGB camera. The results show that the ANN and the simple SI method complemented with the Otsu method for thresholding presented the best performance for the detection of the vine canopy, with high overall accuracy values for all study days. Spectral indices presented the best performance in the detection of the Plant class (vine canopy), with an overall accuracy of around 0.99. However, considering the performance pixel by pixel, the spectral indices are not able to discriminate between the Soil and Shadow classes. The best performance in the classification of three classes (Plant, Soil, and Shadow) of vineyard RGB images was obtained when the SI values were used as input data in trained methods (ANN and RForest), reaching overall accuracy values around 0.98 with high sensitivity values for the three classes.
Article
Full-text available
Forest ecological restoration by burning is widely applied to promote natural, early-successional sites and increase landscape biodiversity. Burning is also used as a forest management practice to facilitate forest regeneration after clearcutting. Besides the desired goals, restoration burnings also affect soil biogeochemistry, particularly soil organic matter (SOM) and related soil carbon stocks, but the long-term effects are poorly understood. However, in order to study these effects, a reliable estimate of spatial variability is first needed for effective sampling. Here we investigate the spatial variability of SOM and vegetation features 13 years after burnings and in combination with variable harvest levels. We sampled four experimental sites representing distinct management and restoration treatments with an undisturbed control. While the variability of vegetation cover and biomass was generally higher in disturbed sites, soil parameter variability was not different between the four sites. The joint ecological patterns of soil and vegetation parameters across the whole sample continuum support the prior assumptions on the characteristic disturbance conditions within each of the study sites well. We designed and employed statistical simulations as a means to plan prospective sampling. Based on the data obtained here and the statistical simulations, sampling six forest sites per treatment type with 30 independent soil cores per site would provide enough statistical power to adequately capture the impacts of burning on SOM. In conclusion, we argue that an informed design-based approach to documenting the ecosystem effects of forest burnings is worth applying, both through obtaining new data and through meta-analysing existing data.
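The sketch below illustrates the kind of statistical simulation used to plan such sampling: it simulates a burning effect on SOM with between-site and within-site variability and estimates the power of a t-test on site means for six sites per treatment with 30 cores each; every effect size and variance is invented for demonstration.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(5)

def simulated_power(n_sites=6, cores=30, effect=0.5, site_sd=0.3, core_sd=1.0,
                    reps=2000, alpha=0.05):
    """Fraction of simulated experiments in which the burning effect is detected."""
    hits = 0
    for _ in range(reps):
        # Each site mean varies around its treatment mean; cores vary within sites.
        burned_means = [rng.normal(rng.normal(effect, site_sd), core_sd, cores).mean()
                        for _ in range(n_sites)]
        control_means = [rng.normal(rng.normal(0.0, site_sd), core_sd, cores).mean()
                         for _ in range(n_sites)]
        # Test at the site level to avoid pseudo-replication of cores.
        if ttest_ind(burned_means, control_means).pvalue < alpha:
            hits += 1
    return hits / reps

print(simulated_power())
```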
Article
Full-text available
The use of automated methods to estimate fractional vegetation cover (FVC) from digital photographs has increased in recent years given its potential to produce accurate, fast and inexpensive FVC measurements. Wide acceptance has been delayed because of the limitations in accuracy, speed, automation and generalization of these methods. This work introduces a novel technique, the Automated Canopy Estimator (ACE) that overcomes many of these challenges to produce accurate estimates of fractional vegetation cover using an unsupervised segmentation process. ACE is shown to outperform nine other segmentation algorithms, consisting of both threshold-based and machine learning approaches, in the segmentation of photographs of four different crops (oat, corn, rapeseed and flax) with an overall accuracy of 89.6%. ACE is similarly accurate (88.7%) when applied to remotely sensed corn, producing FVC estimates that are strongly correlated with ground truth values.
Article
Full-text available
Validation over heterogeneous areas is critical to ensuring the quality of remote sensing products. This paper focuses on the sampling methods used to validate the coarse-resolution fractional vegetation cover (FVC) product in the Heihe River Basin, where the patterns of spatial variations in and between land cover types vary significantly in the different growth stages of vegetation. A sampling method, called the mean of surface with non-homogeneity (MSN) method, and three other sampling methods are examined with real-world data obtained in 2012. A series of 15-m-resolution fractional vegetation cover reference maps were generated using the regressions of field-measured and satellite data. The sampling methods were tested using the 15-m-resolution normalized difference vegetation index (NDVI) and land cover maps over a complete period of vegetation growth. Two scenes were selected to represent the situations in which sampling locations were sparsely and densely distributed. The results show that the FVCs estimated using the MSN method have errors of approximately less than 0.03 in the two selected scenes. The validation accuracy of the sampling methods varies with variations in the stratified non-homogeneity in the different growing stages of the vegetation. The MSN method, which considers both heterogeneity and autocorrelations between strata, is recommended for use in the determination of samplings prior to the design of an experimental campaign. In addition, the slight scaling bias caused by the non-linear relationship between NDVI and FVC samples is discussed. The positive or negative trend of the biases predicted using a Taylor expansion is found to be consistent with that of the real biases.
Article
Full-text available
Taking photographs with a commercially available digital camera is an efficient and objective method for determining the green fractional vegetation cover (FVC) for field validation of satellite products. However, classifying leaves under shadows in processing digital images remains challenging and results in classification errors. To address this problem, an automatic shadow-resistant algorithm in the Commission Internationale d'Eclairage L*a*b* color space (SHAR-LABFVC) based on a documented FVC estimation algorithm (LABFVC) is proposed in this paper. The hue saturation intensity (HSI) is introduced in SHAR-LABFVC to enhance the brightness of shaded parts of the image. The lognormal distribution is used to fit the frequency of vegetation greenness and to classify vegetation and the background. Real and synthesized images are used for evaluation, and the results are in good agreement with the visual interpretation, particularly when the FVC is high and the shadows are deep, indicating that SHAR-LABFVC is shadow resistant. Without specific improvements to reduce the shadow effect, the underestimation of FVC can be up to 0.2 in the flourishing period of vegetation at a scale of 10 m. Therefore, the proposed algorithm is expected to improve the validation accuracy of remote sensing products.
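A loose sketch of the shadow-brightening idea is given below: the intensity channel is lifted before the image is mapped to CIE L*a*b* and the a* (green-red) channel is extracted for subsequent classification. The use of HSV as a stand-in for the HSI intensity component and of adaptive histogram equalization as the brightening operator are assumptions, not the published SHAR-LABFVC steps.

```python
import numpy as np
from skimage import color, exposure

def brighten_shadows(rgb):
    """Boost the intensity/value channel so shaded leaves resemble sunlit ones,
    then return the a* (green-red) channel used for green/background separation."""
    hsv = color.rgb2hsv(rgb)
    hsv[..., 2] = exposure.equalize_adapthist(hsv[..., 2])  # brighten dark regions
    lab = color.rgb2lab(color.hsv2rgb(hsv))
    return lab[..., 1]    # a*: lower (more negative) values indicate greener pixels

rng = np.random.default_rng(6)
a_star = brighten_shadows(rng.random((64, 64, 3)))  # stand-in for a field photograph
print(a_star.min(), a_star.max())
```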
Article
The normalized difference vegetation index (NDVI) values of highly dense vegetation (NDVIv) and bare soil (NDVIs), identified as the key parameters for fractional vegetation cover (FVC) estimation, are usually obtained with empirical statistical methods. However, it is often difficult to obtain reasonable values of NDVIv and NDVIs at a coarse resolution (e.g., 1 km), or in arid, semiarid, and evergreen areas. The uncertainty of estimated NDVIs and NDVIv can cause substantial errors in FVC estimations when a simple linear mixture model is used. To address this problem, this paper proposes a physically based method. The leaf area index (LAI) and directional NDVI are introduced in a gap fraction model and a linear mixture model for FVC estimation to calculate NDVIv and NDVIs. The model incorporates the Moderate Resolution Imaging Spectroradiometer (MODIS) Bidirectional Reflectance Distribution Function (BRDF) model parameters product (MCD43B1) and LAI product, which are convenient to acquire. Two types of evaluation experiments are designed: 1) with data simulated by a canopy radiative transfer model and 2) with satellite observations. The root-mean-square deviation (RMSD) for simulated data is less than 0.117, depending on the type of noise added to the data. In the real data experiment, the RMSD is 0.127 for cropland, 0.075 for grassland, and 0.107 for forest. The experimental areas respectively lack fully vegetated and non-vegetated pixels at 1 km resolution. Consequently, a relatively large uncertainty is found when using the statistical methods, and the RMSD ranges from 0.110 to 0.363 based on the real data. The proposed method conveniently produces NDVIv and NDVIs maps for FVC estimation at regional and global scales.
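The simple linear mixture (dimidiate pixel) model referred to above is commonly written as FVC = (NDVI − NDVIs) / (NDVIv − NDVIs); a minimal sketch follows, with endmember values chosen purely for illustration.

```python
import numpy as np

def fvc_linear_mixture(ndvi, ndvi_s, ndvi_v):
    """Dimidiate pixel model: FVC = (NDVI - NDVI_s) / (NDVI_v - NDVI_s), clipped to [0, 1]."""
    return np.clip((ndvi - ndvi_s) / (ndvi_v - ndvi_s), 0.0, 1.0)

ndvi = np.array([0.05, 0.30, 0.55, 0.80])
print(fvc_linear_mixture(ndvi, ndvi_s=0.05, ndvi_v=0.85))  # illustrative endmembers
```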
Article
Satellite remote sensing is an effective way to monitor vast extents of global glaciers and snowfields. However, it is limited by spatial and temporal resolutions and the high costs of data acquisition. Unmanned aerial vehicle (UAV)-based glaciological studies have gained pace in recent years because of their advantages over conventional remote sensing platforms. UAVs are easy to deploy, with the option of interchanging sensors operating in the visible, infrared, and microwave wavelengths. The high-spatial-resolution remote sensing data obtained from these UAV-borne sensors are a significant improvement over data obtained by traditional remote sensing. The cost of data acquisition is minimal, and researchers can acquire imagery according to their own schedule and convenience. We discuss significant glaciological studies that have used UAVs as remote sensing platforms. This is the first review exclusively dedicated to highlighting UAVs as a remote sensing platform in glaciology. We examine polar and alpine applications of UAVs and their future prospects in separate sections and present an extensive reference list so that readers can delve into their topics of interest. Because the technology is still largely unexplored for snow and glaciers, we place special emphasis on discussing the future prospects of utilising UAVs for glaciological research.