A Half-Gaussian Fitting Method for Estimating Fractional Vegetation
Cover of Corn Crops Using Unmanned Aerial Vehicle Images
Linyuan Lia,b, Xihan Mua,b*, Craig Macfarlanec,d, Wanjuan Songa,b, Jun Chena,b, Kai Yane, Guangjian Yana,b
a State Key Laboratory of Remote Sensing Science, Jointly Sponsored by Beijing Normal University and
Institute of Remote Sensing and Digital Earth of Chinese Academy of Sciences
b Beijing Engineering Research Center for Global Land Remote Sensing Products, Institute of Remote
Sensing Science and Engineering, Faculty of Geographical Science, Beijing Normal University, Beijing
100875, China
c CSIRO, 147 Brockway Rd, Floreat WA 6014, Australia
d School of Agriculture and Environment, Faculty of Science, The University of Western Australia,
Crawley WA, Australia
e School of Land Science and Techniques, China University of Geosciences, Beijing, China.
* Corresponding Author
E-Mail: muxihan@bnu.edu.cn (X. Mu); Tel/Fax: +86-10-5880-2041
Highlights
A half-Gaussian mixture model is proposed to extract FVC from LARS images (HAGFVC).
HAGFVC is robust to variations in spatial resolution, mixed pixels and vegetation coverage.
HAGFVC outperforms previous methods developed for proximal images.
Abstract

Accurate estimates of fractional vegetation cover (FVC) from remotely sensed images collected by unmanned aerial vehicles (UAVs) offer considerable potential for field measurement. However, most existing methods, which were originally designed to extract FVC from ground-based remotely sensed images (acquired a few meters above ground level), cannot be directly applied to aerial images because of the presence of large quantities of mixed pixels. To alleviate the negative effects of mixed pixels, we propose a new method for decomposing the Gaussian mixture model and estimating FVC, namely the half-Gaussian fitting method for FVC estimation (HAGFVC). In this method, the histograms of pure vegetation pixels and pure background pixels are first fitted using two half-Gaussian distributions in the Commission Internationale d’Eclairage (CIE) L*a*b* color space. A threshold is then determined from the fitted Gaussian parameters to generate a more accurate FVC estimate. We acquired low-altitude remote-sensing (LARS) images over a cornfield in three vegetative growth stages and at different flight altitudes. The HAGFVC method successfully fitted the half-Gaussian distributions and obtained stable thresholds for FVC estimation. The results indicate that the HAGFVC method can effectively and accurately derive FVC images, with a small mean bias error (MBE) and a root mean square error (RMSE) of less than 0.04 in all cases. In comparison, the other methods we tested performed poorly (RMSE of up to 0.36) because of the abundance of mixed pixels in LARS images, especially at high altitudes above ground level (AGL) or under moderate vegetation coverage. These results demonstrate the importance of developing image-processing methods that specifically account for mixed pixels in LARS images. Simulations indicated that the theoretical accuracy of the HAGFVC method (assuming no errors in fitting the half-Gaussian distributions) corresponds to an RMSE of less than 0.07. The method thus provides an efficient approach to estimating FVC from LARS images over large areas.
Keywords: fractional vegetation cover (FVC), unmanned aerial vehicle (UAV), low-altitude remote sensing (LARS), digital photography, half-Gaussian distribution, histogram threshold
1. Introduction
Fractional vegetation cover (FVC) plays a key role in land surface processes, including carbon and water cycles (Jung et al., 2006) and energy transfer (Sellers, 1997). It is also an important data product in numerical weather prediction (Gutman and Ignatov, 1997) and high-precision agricultural analysis (Hunt et al., 2014; Matese et al., 2015). To meet the requirements of FVC mapping and the validation of satellite products, rapid and accurate measurements of FVC are necessary (Mu et al., 2015; Song et al., 2017). Hence, various methods have been developed to measure FVC for these applications, including visual estimation, direct sampling and digital photography (Muir et al., 2011). Among these methods, photography provides the best performance in terms of efficient and accurate validation of satellite remote sensing products for high-precision applications (Yan et al., 2012).
Proximal (very close range, i.e., a few meters) sensing methods have a clear advantage over satellite remote sensing in terms of spatial resolution and flexibility. The data obtained from proximal sensing can provide highly accurate estimates of FVC directly from images (Liu and Pattey, 2010; Macfarlane and Ogden, 2012; Song et al., 2015), whereas satellite remote sensing images typically require that FVC be estimated from calibrations of vegetation indices against independent estimates of FVC from proximal sensing (Carlson and Ripley, 1997). However, traditional proximal sensing methods lack the spatial coverage needed for mapping FVC over large regions, are potentially labor intensive even over medium-scale areas, and local conditions may limit site access. Recent technological innovations have led to an increase in the availability of unmanned aerial vehicles (UAVs) (Watts et al., 2012), which potentially overcome many limitations of both traditional proximal and satellite imagery platforms. Low-altitude remote-sensing (LARS) UAVs are advantageous because of their flexibility, operational ability in a variety of environmental conditions, and capacity for mapping at intermediate spatial scales. The application of UAVs has extended to crop monitoring, precision agriculture and other Earth science studies (Bhardwaj et al., 2016; Zarco-Tejada et al., 2012). Researchers widely agree that commercial cameras mounted on UAVs are powerful tools for assessing FVC (Chianucci et al., 2016; Torres-Sánchez et al., 2014).
UAVs are flexible in terms of their flight altitude, which facilitates the collection of imagery at a range of spatial scales (Mesas-Carrascosa et al., 2014). For example, Chapman et al. (2014) deployed UAVs fitted with a fixed, wide-angle lens at heights ranging from 20 m to 80 m in order to evaluate various plant breeding trials. Generally, when UAVs are required to map FVC rapidly over larger areas, the flight altitude must be increased, which reduces the spatial resolution. Spatial resolution could be maintained by narrowing the camera field of view (i.e., increasing the focal length) as altitude increases, but this would increase the number of flights required by UAVs that frequently have only a short flight time, which would negate one of the main advantages of UAVs. As a result, UAVs are often flown at varied altitudes but a constant focal length (e.g., Samseemoung et al., 2012). However, reducing the spatial resolution of LARS imagery increases the proportion of mixed pixels, which is likely to reduce the accuracy of medium-scale FVC mapping (Hsieh et al., 2001; Jones and Sirault, 2014).

Image analysis methods developed for proximal sensing are poorly suited to estimating FVC from LARS when mixed pixels are abundant in the images. Hsieh et al. (2001) established a simulation scheme to assess the effect of spatial resolution on classification accuracy and found that classification errors increased rapidly with decreasing spatial resolution. Jones and Sirault (2014) reported that a low spatial resolution has a significant negative influence on image classification. Torres-Sánchez et al. (2014) observed a decrease in the accuracy of FVC estimates in the early growth stages of wheat when the spatial resolution of LARS imagery was reduced.
Early image analysis methods depended on supervised classification, which requires human intervention, has low operational efficiency and produces noisy results. Later, automatic classification methods were based on unsupervised clustering algorithms, category tree methods and threshold-based methods (Yan et al., 2012). Researchers have developed numerous threshold-based methods based on vegetation indices in the red-green-blue (RGB) color space; such indices include the excess green index (Woebbecke et al., 1995; Liu and Pattey, 2010), the normalized difference index (Pérez et al., 2000) and the green leaf algorithm (Chianucci et al., 2016), among others. Other color spaces, such as the Commission Internationale d’Eclairage (CIE) L*a*b* and hue-saturation-intensity (HSI), have also been used for classification (Liu et al., 2012; Macfarlane and Ogden, 2012). These automatic algorithms have modestly improved the efficiency of validation. However, they were specifically developed for proximally sensed images and are unsuited to images containing many mixed pixels. In addition, previously reported studies tended to use UAV-based commercial cameras to collect images over sparse scenes, such as early-season crops and rangeland areas (Rango et al., 2009; Torres-Sánchez et al., 2014), while densely vegetated scenes (FVC > 0.7) have seldom been studied.
In this study, we propose an image analysis method for estimating FVC, HAGFVC, that is scale invariant and specifically addresses the problem of large and variable numbers of mixed pixels in LARS images acquired from varying altitudes. The theory and implementation of the half-Gaussian fitting method for extracting FVC are described in Section 2. Three published methods, LAB2 (Macfarlane and Ogden, 2012), the Shadow-Resistant Algorithm for Extracting the Green FVC (SHAR-LABFVC; Song et al., 2015) and the excess green vegetation index (ExG; Woebbecke et al., 1995), are introduced for comparison as well. Section 3 describes the real and simulated data used to validate and analyze the HAGFVC method. In Section 4, the results of the HAGFVC method and the three other methods are compared, and an uncertainty analysis is presented. Sections 5 and 6 present the discussion and conclusions, respectively.
2. Methods
2.1. Gaussian Mixture Model for FVC
For vegetated surfaces, the CIE a* distribution of an image is usually considered to follow a Gaussian mixture model (GMM) (Coy et al., 2016; Liu et al., 2012). In proximally sensed images, which are assumed to contain almost no mixed pixels, the GMM derives from the distributions of pure vegetation and pure background material and exhibits a distinctly bimodal shape (Liu et al., 2012; Song et al., 2015). This mixture distribution function $H(x)$ can be given by:

$$H(x) = w_v\,N(x;\mu_v,\sigma_v) + w_b\,N(x;\mu_b,\sigma_b) \qquad (1)$$

where $w$, $\mu$ and $\sigma$ are the weight, mean value and standard deviation, respectively; the subscripts $v$ and $b$ indicate vegetation and background, respectively; $N(x;\mu,\sigma)$ stands for a Gaussian distribution function; and $x$ is the CIE a* value. Fig. 1a shows an example of the a* distribution for an image with negligible mixed pixels. The GMM distribution is characterized by two distinct peaks, representing vegetation and background. In this situation, it is straightforward to decompose the GMM and select a reasonable threshold to separate green vegetation from the background using automated thresholding methods (e.g., the T2 thresholding method in Liu et al., 2012).
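As a point of reference for Eq. (1), the short Python sketch below evaluates a two-component GMM of a* values (the authors' own implementation is a MATLAB GUI; the weights here are purely illustrative and the means and standard deviations are the simulation settings of Table 2):

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Gaussian density N(x; mu, sigma)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def gmm_pdf(x, w_v, mu_v, sig_v, w_b, mu_b, sig_b):
    """Two-component mixture H(x) of Eq. (1): pure vegetation + pure background."""
    return w_v * gaussian_pdf(x, mu_v, sig_v) + w_b * gaussian_pdf(x, mu_b, sig_b)

# Illustrative parameters: vegetation peaks at negative a* (green), background near +2.
a_axis = np.linspace(-40.0, 20.0, 601)
density = gmm_pdf(a_axis, w_v=0.4, mu_v=-16.0, sig_v=4.48, w_b=0.6, mu_b=2.0, sig_b=2.24)
```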
In LARS images, as the spatial resolution decreases, many mixed pixels occur. As a result, the GMM consists of three components: pure vegetation, pure background and mixed pixels. The shape of the bimodal GMM is obscured because mixed pixels render the peaks of the vegetation and background less distinct. Generally, the GMM distribution becomes weakly bimodal or even unimodal. This mixture distribution function can be expressed as:

$$H'(x) = w'_v\,N(x;\mu'_v,\sigma'_v) + w'_b\,N(x;\mu'_b,\sigma'_b) + w'_m\,f_m(x) \qquad (2)$$

where $w'$, $\mu'$ and $\sigma'$ are the weight, mean value and standard deviation after resolution reduction, respectively, and the subscript $m$ refers to the mixed pixels. $f_m(x)$ is an unknown probability density function of mixed pixels which, in reality, is located between the vegetation and background components on the a* axis because a mixed pixel is a combination of these two pure components. Fig. 1b shows an example of the a* distribution for an image with a number of mixed pixels. Each Gaussian component is more indistinct due to the presence of mixed pixels. Furthermore, as the image resolution decreases, the difficulty of decomposing the GMM increases, leading to larger errors if Eq. (1) is used.
Fig. 1. Schematic diagrams of the GMM distribution of CIE a* values at different spatial resolutions: (a) a* distribution at a high spatial resolution (i.e., proximal sensing); (b) a* distribution at a lower spatial resolution (i.e., low-altitude remote sensing).
2.2. HAGFVC method
To solve the problem caused by mixed pixels in the decomposition of the GMM, the HAGFVC method uses only pure pixels to estimate the Gaussian parameters of pure vegetation and background. Uncertain pixels distributed between the bimodal peaks of vegetation and background in the histogram are excluded; therefore, $f_m(x)$ is not used in the HAGFVC method. The pure pixels are distributed at the edges (ends) of the histogram (the green and the orange shaded areas in Fig. 2). After fitting the shaded areas with two half-Gaussian distributions, we obtain the Gaussian parameters of the pure vegetation and pure background without the influence of mixed pixels. These Gaussian parameters are then used to determine the threshold, which is applied for image segmentation and FVC estimation. LARS images were processed and analyzed using custom-written scripts via a graphical user interface (GUI) programmed in MATLAB R2013a (MathWorks, Inc., Natick, MA, USA).
The detailed steps of the HAGFVC method for estimating FVC from digital images are illustrated in Fig. 3 and listed below. Steps (3) to (5) are the essential and novel steps of the HAGFVC method.
(1) Color space transformation. The first step of image processing is to convert RGB images to the L*a*b* color space. The L*a*b* color space is device independent and simplifies pixel classification based on greenness using a* values, which represent colors ranging from green to red.
(2) Smoothing the histogram curve. Generally, the histogram of a* values from an image is noisy because of the complexity of vegetative cover and the variability of illumination. Therefore, we used a Gaussian kernel-based smoothing method (Cox et al., 1989) to smooth the histogram, reducing noise and improving the performance of subsequent processing.
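A minimal Python sketch of steps (1) and (2) is given below (the paper's implementation is a MATLAB GUI); the bin range, the 8-bit input assumption and the kernel width are illustrative choices, not the authors' exact settings:

```python
import numpy as np
from skimage.color import rgb2lab
from scipy.ndimage import gaussian_filter1d

def a_star_histogram(rgb_image, bins=np.arange(-60, 61), smooth_sigma=2.0):
    """Steps (1)-(2): per-pixel a* values plus a Gaussian-smoothed a* histogram."""
    lab = rgb2lab(rgb_image.astype(float) / 255.0)    # assumes an 8-bit RGB image
    a_star = lab[:, :, 1].ravel()                     # a*: green (negative) to red (positive)
    counts, edges = np.histogram(a_star, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    smoothed = gaussian_filter1d(counts.astype(float), smooth_sigma)  # suppress histogram noise
    return a_star, centers, smoothed
```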
(3) Determination of initial mean values. To detect pure pixels distributed at the edges (ends) of the histogram, it is necessary to determine the values of $\mu_{v0}$ and $\mu_{b0}$, which are initial estimates of $\mu_v$ and $\mu_b$. The shapes of the frequency distributions of a* values from vegetation and background differ: for vegetation, the distribution is typically flat and wide, whereas the histogram of background is sharp and narrow (Čugunovs et al., 2017) (Fig. 2). Thus, we use different methods to determine each mean value. For green vegetation, we calculate the second derivative of the smoothed curve and set $\mu_{v0}$ as the left-most local maximum of the absolute values of the negative second derivative. For background, we take the location of the right-most local maximum of the frequency as $\mu_{b0}$. Pure vegetation pixels lie to the left of $\mu_{v0}$, and pure background pixels lie to the right of $\mu_{b0}$.
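A hedged sketch of step (3); the peak-detection details (how ties and boundary bins are handled) are assumptions rather than the authors' exact rules:

```python
import numpy as np

def initial_means(centers, smoothed):
    """Step (3): initial means mu_v0 (vegetation) and mu_b0 (background).

    mu_v0 is the left-most local maximum of |negative second derivative| of the
    smoothed histogram; mu_b0 is the right-most local maximum of the frequency."""
    def local_peaks(y):
        return np.flatnonzero((y[1:-1] > y[:-2]) & (y[1:-1] >= y[2:])) + 1

    d2 = np.gradient(np.gradient(smoothed, centers), centers)   # second derivative
    curvature = np.where(d2 < 0.0, -d2, 0.0)                    # magnitude of negative curvature
    mu_v0 = centers[local_peaks(curvature)[0]]                  # left-most curvature peak
    mu_b0 = centers[local_peaks(smoothed)[-1]]                  # right-most frequency peak
    return mu_v0, mu_b0
```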
(4) Assessment of the modality of the distribution. In some cases, the a* histogram is unimodal. This occurs when mixed pixels account for a large proportion of the image or the image largely consists of one type of component. Half-Gaussian fitting is inadequate for processing unimodal histograms; hence, we determine whether the histogram is bimodal before the half-Gaussian fitting. If the difference between $\mu_{b0}$ and $\mu_{v0}$ is larger than an empirical threshold (5 in this study), the distribution is considered to be bimodal; otherwise, the histogram is treated as unimodal.
(5) Half-Gaussian fitting to estimate Gaussian parameters. Half-Gaussian fitting is performed for bimodal and weakly bimodal distributions. All the pure pixels identified in step (3) are fitted with half-Gaussian distribution curves to obtain the final ($\mu_v$, $\sigma_v$) and ($\mu_b$, $\sigma_b$). In the fitting, the distributions of pure pixels are normalized so that the weights equal 1. The half-Gaussian distribution function is expressed as:

$$f(x;\mu,\sigma) = \frac{1}{\sigma\sqrt{2\pi}}\exp\!\left(-\frac{(x-\mu)^{2}}{2\sigma^{2}}\right) \qquad (3)$$

which is fitted on the pure-pixel side of the corresponding mean only ($x \le \mu_v$ for vegetation and $x \ge \mu_b$ for background). The weights $w_v$ and $w_b$ are then obtained by calculating the ratio of each Gaussian component to the entire GMM.
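A sketch of step (5), assuming the smoothed histogram and the initial means from the previous steps; the tail normalization, starting values and the way the weights are recovered from the tail areas are plausible readings of the description, not the authors' exact code:

```python
import numpy as np
from scipy.optimize import curve_fit

def half_gauss(x, mu, sigma):
    """Half-Gaussian of Eq. (3): one side of a unit-area Gaussian."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def fit_half_gaussians(centers, smoothed, mu_v0, mu_b0):
    """Step (5): fit the pure-pixel tails of the a* histogram.

    Pixels left of mu_v0 give (mu_v, sigma_v); pixels right of mu_b0 give
    (mu_b, sigma_b). Each tail is rescaled so that its mirrored full Gaussian
    has unit area, i.e. the weights equal 1 during the fit."""
    dx = centers[1] - centers[0]
    left, right = centers <= mu_v0, centers >= mu_b0
    y_v = smoothed[left] / (2.0 * smoothed[left].sum() * dx)
    y_b = smoothed[right] / (2.0 * smoothed[right].sum() * dx)
    (mu_v, sig_v), _ = curve_fit(half_gauss, centers[left], y_v, p0=[mu_v0, 3.0])
    (mu_b, sig_b), _ = curve_fit(half_gauss, centers[right], y_b, p0=[mu_b0, 3.0])
    # Weights: share of the histogram explained by each reconstructed full Gaussian.
    total = smoothed.sum() * dx
    w_v = 2.0 * smoothed[left].sum() * dx / total
    w_b = 2.0 * smoothed[right].sum() * dx / total
    return (w_v, mu_v, abs(sig_v)), (w_b, mu_b, abs(sig_b))
```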
(6) Threshold computation. Once the Gaussian parameters are estimated, the threshold can be determined using the "T2" threshold computation method introduced by Liu et al. (2012). For bimodal cases, the threshold $T$ is derived by solving a complementary error function equation:

$$w_v\cdot\mathrm{erfc}\!\left(\frac{T-\mu_v}{\sqrt{2}\,\sigma_v}\right) = w_b\cdot\mathrm{erfc}\!\left(\frac{\mu_b-T}{\sqrt{2}\,\sigma_b}\right) \qquad (4)$$

where $\mathrm{erfc}$ is the complementary Gaussian error function. This computation method is based on the principle that the misclassification probabilities of vegetation and background are equal; the detailed derivation is given in Liu et al. (2012). Fig. 2 shows an example of the threshold (marked as a magenta solid line) obtained by solving Eq. (4). For unimodal cases, an empirical threshold (i.e., -4; Liu et al., 2012) computed from many proximal images is applied.
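The following sketch solves Eq. (4) numerically for the threshold $T$; bracketing the root between the two means is an assumption that holds when the distribution is bimodal:

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import brentq

def t2_threshold(w_v, mu_v, sig_v, w_b, mu_b, sig_b):
    """Step (6): T2 threshold of Eq. (4), i.e. the a* value at which the
    misclassification probabilities of vegetation and background are equal."""
    def imbalance(t):
        miss_veg = w_v * erfc((t - mu_v) / (np.sqrt(2.0) * sig_v))   # vegetation with a* > t
        miss_bkg = w_b * erfc((mu_b - t) / (np.sqrt(2.0) * sig_b))   # background with a* <= t
        return miss_veg - miss_bkg
    return brentq(imbalance, mu_v, mu_b)   # root lies between the two means
```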
(7) FVC calculation. The threshold is used to segment an image by classifying pixels with a* values less than or equal to this threshold as vegetation and all other pixels as background. Finally, FVC is estimated as the ratio of the number of vegetation pixels to the total number of pixels.
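Step (7) then reduces to a single comparison per pixel, e.g.:

```python
import numpy as np

def estimate_fvc(a_star, threshold):
    """Step (7): pixels with a* <= threshold are green vegetation; FVC is their fraction."""
    return float(np.mean(a_star <= threshold))
```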
Fig. 2. An example of the half-Gaussian fit of a GMM from a UAV image taken 19 m above ground level (AGL) in a cornfield. $\mu_v$ and $\mu_b$ are the mean values of the two Gaussian components.
Fig. 3. Flowchart of FVC estimation using the HAGFVC method and LARS images. The modules highlighted in orange are the novel and essential steps of the HAGFVC method.
2.3. Assessment of performance
To highlight the improvement in FVC estimation achieved by the HAGFVC method, we compared it with two other methods (LAB2 and SHAR-LABFVC) that are also based primarily on the L*a*b* color space. To broaden this comparison and further generalize our results, a third method (ExG) was included.
(1) LAB2 (Macfarlane and Ogden, 2012): This method was developed for natural vegetation and uses the green leaf algorithm (Louhaichi et al., 2001) and the a* and b* values of each pixel in the CIE L*a*b* color space to segment green vegetation with a minimum-distance-to-means classifier. The reported RMSE of LAB2 was less than 0.05 (Macfarlane and Ogden, 2012).
(2) SHAR-LABFVC (Song et al., 2015): This method uses a lognormal-GMM to characterize the CIE a* distribution of a vegetation-covered surface. In addition, it introduces the HSI color space to enhance the brightness of shaded parts of an image and improve the classification accuracy of ground-based images. The method is capable of detecting many small canopy gaps and partially overcomes the shadow effect; the authors reported a root mean square error (RMSE) of 0.025.
(3) ExG vegetation index (Woebbecke et al., 1995; Torres-Sánchez et al., 2014): This method was originally developed for weed identification and uses the green fraction of vegetation. The index is calculated in the RGB color space as ExG = 2G − R − B, where R, G and B are the red, green and blue components, respectively. An automatic threshold based on Otsu's method (Otsu, 1979) was used to segment the ExG grayscale image and estimate FVC (a sketch of this procedure is given below).
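For reference, a minimal sketch of this comparison procedure (not the original implementation); scaling the bands to [0, 1] and treating above-threshold pixels as vegetation are assumptions:

```python
import numpy as np
from skimage.filters import threshold_otsu

def exg_fvc(rgb_image):
    """ExG = 2G - R - B segmented with Otsu's threshold; FVC is the above-threshold fraction."""
    rgb = rgb_image.astype(float) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2.0 * g - r - b
    return float(np.mean(exg > threshold_otsu(exg)))
```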
Three statistics were used to assess the performance of each FVC-extraction method.

(1) Root mean square error (RMSE): measures the accuracy of FVC estimates at different resolutions by comparison with ground-based FVC observations. Since the errors are squared before they are averaged, the RMSE assigns a relatively high weight to large errors.

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\mathrm{FVC}_i-\mathrm{FVC}_t\right)^{2}} \qquad (5)$$

(2) Mean bias error (MBE): assesses the average bias of FVC estimates at different resolutions. In the MBE, the signs of the errors are not removed.

$$\mathrm{MBE} = \frac{1}{n}\sum_{i=1}^{n}\left(\mathrm{FVC}_i-\mathrm{FVC}_t\right) \qquad (6)$$

(3) Standard deviation (STD): analyzes the consistency of FVC estimates at different spatial resolutions.

$$\mathrm{STD} = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(\mathrm{FVC}_i-\overline{\mathrm{FVC}}\right)^{2}} \qquad (7)$$

where $\mathrm{FVC}_i$ is the FVC value estimated for flight altitude $i$ over a plot and $n$ denotes the number of observations. $\mathrm{FVC}_t$ is the ground-measured FVC and is treated as the true value. $\overline{\mathrm{FVC}}$ represents the average of the FVC estimated from the LARS images at different flight altitudes over a sampling plot.
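A compact sketch of Eqs. (5)-(7), applied to the FVC estimates obtained at the different flight altitudes of one plot:

```python
import numpy as np

def fvc_statistics(fvc_estimates, fvc_true):
    """RMSE and MBE against the ground-measured FVC, and STD across flight altitudes."""
    est = np.asarray(fvc_estimates, dtype=float)
    rmse = np.sqrt(np.mean((est - fvc_true) ** 2))
    mbe = np.mean(est - fvc_true)
    std = np.std(est, ddof=1)        # sample standard deviation, as in Eq. (7)
    return rmse, mbe, std
```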
3. Materials
3.1. Study area
The study area (42.24° N, 117.06° E) was located in Weichang County, Hebei Province, China (Fig. 4). Field campaigns were performed on 29 June, 11 July and 31 July 2015. These dates represent three vegetative growth stages of corn (Zea mays; Table 1): V4, V6 and V8, where Vn (n = 4, 6, 8) indicates n leaves with collars visible. We established a small sampling plot (10 m × 8 m) to measure FVC using UAV and ground-based photography.
Fig. 4. Study area located in Weichang County, Hebei Province, China (marked as the red point in the top-left frame). The orthophoto of the experimental site is shown in the bottom frame. The sampling plot is approximately 10 m × 8 m and located in a cornfield (top-right frame).
Table 1. Overview of field campaigns during three growth stages of corn in 2015. Vn (n = 4, 6, 8) indicates n leaves with collars visible. True FVC values are derived from ground-based images using the SHAR-LABFVC method.

| Date | Local time | Growth stage | Mean leaf width (cm) | Number of images | Flight height (m) | True FVC | Illumination |
|---|---|---|---|---|---|---|---|
| 28/06 | 11:30 am | V4 | 2.7 | 14 | 3-29 (step = 2 m) | 0.22 | diffuse light (cloudy day) |
| 11/07 | 06:30 pm | V6 | 4.1 | 26 | 3-53 (step = 2 m) | 0.35 | direct light (large sun zenith angle) |
| 31/07 | 05:45 pm | V8 | 8.8 | 24 | 7-53 (step = 2 m) | 0.82 | diffuse light (cloudy day) |
3.2. UAV flights and aerial images
We used the model X-601 hexacopter (manufactured by Docwell Corporation, Beijing, China), a vertical takeoff and landing aircraft with a maximum flight time of up to 20 minutes, depending on weather conditions and payload. An autopilot system provides autonomous navigation based on a Global Positioning System (GPS) signal. The platform was equipped with a Sony NEX-5R digital camera and stabilized by a stability augmentation system (SAS). This hexacopter can operate from several meters to a few kilometers above ground level (AGL).
The UAV flight pattern ranged from a lowest flight altitude (e.g., 5 m AGL) to a highest flight altitude (e.g., 60 m AGL) at intervals of 2 m to acquire images. In our study, images were captured over a different range of flight altitudes for each growth stage (Table 1). Each waypoint was located over the center of the plot. The UAV hovered for two seconds at each sample point to achieve the required positional accuracy and to ensure that the digital camera had enough time to acquire an image. Flight parameters containing WGS-84 latitude/longitude waypoints were logged using a ground control station (GCS) computer.
The Sony NEX-5R digital camera was mounted on the hexacopter to acquire nadir images. The camera has a 23.5 × 15.6 mm sensor and a maximum image size of 4912 × 3264 pixels. The focal length of the camera lens was 16 mm; thus, the pixel ground resolution was 0.3 cm at 10 m AGL. The leaf widths at the different growth stages are listed in Table 1. The camera aperture and shutter speed were set manually before takeoff, depending on the light conditions.
3.3. Ground measurements
To obtain ground measurements of FVC for validation, we used a Nikon D3000 digital camera mounted on a portable pole via an angled steel bracket. The camera was set to aperture-priority mode, automatic exposure, ISO 100 and an 18 mm focal length to produce fine-quality images in Joint Photographic Experts Group (JPEG) format. Images were captured looking vertically downward from approximately 1.5 m above the canopy. A surveyor walked along the two diagonals of the plot and took a photo every 2 m. We obtained 14 field images at times similar to those of the UAV flights in each stage. The images were uploaded to a computer for subsequent processing using both the LAB2 method and the SHAR-LABFVC method. The FVC of the entire sampling plot is the arithmetic average of the FVC extracted from each image.
3.4. Simulated images
A simulated image dataset was used to quantify the uncertainty of FVC estimation by the HAGFVC method. Images were generated using the large-scale emulation system (LESS) developed by Qi et al. (2017) for realistic three-dimensional (3D) corn scene simulation. LESS is a ray-tracing-based radiative transfer simulation model that is mainly designed for the radiometric simulation of forest canopies but can also simulate other types of scenes (such as crops). We simulated four binary images from four scenes with different FVCs (Table 2). In each image, values of 1 represent vegetation (green areas in Fig. 5) and values of 0 represent the background (black areas in Fig. 5). The binary images were aggregated to simulate images obtained at varied flight altitudes. The image resolution decreased after aggregation, and mixed pixels (orange areas in Fig. 5) appeared, with values between 0 and 1. The coarser the resolution, the greater the number of mixed pixels in the image.
Table 2. Overview of the four simulated images. μ and σ are the mean values and standard deviations of the Gaussian distributions, respectively.

| FVC | Crop type | Gaussian parameters (μ_v, σ_v, μ_b, σ_b) | Image size [pixels] | Resolution [cm] |
|---|---|---|---|---|
| 0.21, 0.30, 0.38, 0.59 | Corn | -16, 4.48, 2, 2.24 | 4912 × 3264 | 0.02 |
Fig. 5. Examples of image aggregation (simulated image with an FVC of 0.38): (a) the original binary image of the cornfield and the images aggregated by scale factors of (b) 4 × 4, (c) 8 × 8 and (d) 16 × 16 pixels.
To analyze the variation of the histogram with spatial resolution, we randomly assigned a* values to the pixels in these binary images based on the Gaussian parameters of the vegetation and background (Table 2). These parameters were chosen based on the mean statistics derived from many real proximally sensed images using the HAGFVC method. The images with a* values were then linearly aggregated to different resolutions, simulating the process by which a UAV acquires images at different flight altitudes. The numbers of mixed pixels and pure pixels are precisely known, as are the Gaussian parameters of the two pure components of each image. Fig. 6 shows the simulated CIE a* distributions before and after image aggregation. In the aggregation process, the proportion of pure pixels decreased and the proportion of mixed pixels increased. The mean values of the vegetation and background remained relatively constant, while the standard deviations of both decreased gradually as pixels were aggregated.
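A hedged Python sketch of this simulation step (block averaging is used here as the linear aggregation; the random seed and the mapping of the Table 2 parameters to the two classes are assumptions):

```python
import numpy as np
from skimage.measure import block_reduce

def simulate_aggregation(binary_mask, scale, mu_v=-16.0, sig_v=4.48,
                         mu_b=2.0, sig_b=2.24, seed=0):
    """Assign a* values to a binary scene (1 = vegetation, 0 = background) from the
    Gaussian parameters of Table 2, then aggregate by a scale factor to mimic a
    higher flight altitude; also report the resulting fraction of mixed pixels."""
    rng = np.random.default_rng(seed)
    a_star = np.where(binary_mask == 1,
                      rng.normal(mu_v, sig_v, binary_mask.shape),
                      rng.normal(mu_b, sig_b, binary_mask.shape))
    a_coarse = block_reduce(a_star, (scale, scale), np.mean)              # coarser a* image
    cover = block_reduce(binary_mask.astype(float), (scale, scale), np.mean)
    mixed_fraction = float(np.mean((cover > 0.0) & (cover < 1.0)))        # partly vegetated pixels
    return a_coarse, mixed_fraction
```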
Fig. 6. CIE a* distributions of a simulated image with an FVC of 0.38: histograms of (a) the initial binary image of the cornfield and the images aggregated by scale factors of (b) 4 × 4, (c) 8 × 8 and (d) 16 × 16 pixels.
4. Results
4.1. Fraction of mixed pixels versus flight height for LARS images
We used high-resolution classified LARS images to assess the fraction of mixed pixels versus flight height in each growth stage. The LARS images acquired at 7 m AGL were classified into vegetation and background with a pixel ground resolution of 0.2 cm (much smaller than the foliage width), so the pixels acquired at 7 m AGL were assumed to be 'pure' pixels. The classified images were then progressively aggregated to simulate varied flight altitudes, and the fractions of mixed pixels were calculated. Coarse-resolution pixels obtained at an AGL higher than 7 m were classified as mixed pixels if they did not entirely overlap with one type of 'pure' pixel in the 7 m AGL image (Fig. 7).
The fraction of mixed pixels increased with flight altitude (Fig. 7) at all growth stages, and the fractions of mixed pixels in the V6 and V8 stages were markedly larger than in the V4 stage at each altitude. Hence, images acquired in the denser crop stages have more mixed pixels than those of the early crop stage at the same flight level.
Fig. 7. The relationship between height above ground level (resolution) and the fraction of mixed pixels.
4.2. Comparison of FVC estimates
There were marked differences between the FVC estimates from the four methods, i.e., HAGFVC, LAB2, SHAR-LABFVC and ExG. In Fig. 8, the images captured at 25 m AGL in the three growth stages and cropped to an identical footprint illustrate the classification results of the HAGFVC (Fig. 8 V4b, V6b, V8b), LAB2 (Fig. 8 V4c, V6c, V8c), SHAR-LABFVC (Fig. 8 V4d, V6d, V8d) and ExG (Fig. 8 V4e, V6e, V8e) methods. The black areas represent the background, and the white areas represent green vegetation. Notably, only slight differences exist among the classified images in the V4 stage, while substantial differences can be observed among the four methods in the other two stages, especially the V6 stage.
Fig. 8. Image segmentation using the four methods: (V4a, V6a, V8a) UAV RGB images of the three growth stages at 25 m above ground level (AGL), (V4b, V6b, V8b) HAGFVC method, (V4c, V6c, V8c) LAB2 method, (V4d, V6d, V8d) SHAR-LABFVC method, and (V4e, V6e, V8e) ExG method.
The methods are compared at all flight altitudes and in the three growth stages in Fig. 9 and Table 3. Note that the true values of FVC derived using the LAB2 and SHAR-LABFVC methods differ by less than 0.05. In general, FVC estimated using the HAGFVC method is closest to the true values (Fig. 9) for most flight altitudes and growth stages.
In the V4 stage, the FVC estimates of all four methods are close to the true FVC ("TrueValue" in Table 3, i.e., the FVC derived from ground-measured images with the SHAR-LABFVC method), with RMSEs of 0.02 for the LAB2 method, 0.03 for the SHAR-LABFVC method, 0.03 for the ExG method and 0.02 for the HAGFVC method (Table 3). The results at different altitudes are consistent, with STDs of less than 0.03.
In the V6 stage, the HAGFVC method provided good results, with an RMSE of 0.02 and an MBE of 0.01, while the LAB2 and SHAR-LABFVC methods clearly overestimated FVC, with RMSEs of approximately 0.20 and MBEs of up to 0.19, and the ExG method underestimated FVC, with an RMSE of 0.09 (Table 3). Moreover, the HAGFVC method yielded stable results at different flight altitudes, with an STD of 0.02. This finding suggests that the HAGFVC method has the potential to accurately map FVC at several dozen meters AGL. By contrast, the LAB2 and SHAR-LABFVC methods yielded progressively worse results as the flight height increased; thus, only the results at the lowest flight altitude were trustworthy. The ExG method yielded stable results at different resolutions but underestimated FVC, with an MBE of -0.08 (Table 3).
In the V8 stage, the HAGFVC method provided relatively good results, with an RMSE of 0.03 and an MBE of -0.02. The SHAR-LABFVC method produced results similar to those of HAGFVC. However, the LAB2 and ExG methods generated RMSEs of more than 0.16 and up to 0.36, respectively (Table 3).
Fig. 9. FVC comparison among the four methods in the three vegetative growth stages, i.e., (a) V4, (b) V6 and (c) V8. Vn (n = 4, 6, 8) indicates n leaves with collars visible. TrueFVC-SHAR and TrueFVC-LAB2 represent the FVC derived from the field measurements using the SHAR-LABFVC and LAB2 methods, respectively.
Table 3. RMSEs, MBEs and STDs of the four FVC-estimation methods. SHAR denotes the SHAR-LABFVC method. TrueValue is the FVC derived from the field measurements using the SHAR-LABFVC method.

| Stage (TrueValue) | Statistic | HAGFVC | LAB2 | SHAR | ExG |
|---|---|---|---|---|---|
| V4 (0.22) | RMSE | 0.02 | 0.02 | 0.03 | 0.03 |
| V4 (0.22) | STD | 0.02 | 0.03 | 0.01 | 0.02 |
| V4 (0.22) | MBE | -0.01 | -0.01 | -0.03 | -0.03 |
| V6 (0.35) | RMSE | 0.02 | 0.20 | 0.20 | 0.09 |
| V6 (0.35) | STD | 0.02 | 0.08 | 0.06 | 0.02 |
| V6 (0.35) | MBE | 0.01 | 0.19 | 0.19 | -0.08 |
| V8 (0.82) | RMSE | 0.03 | 0.16 | 0.06 | 0.36 |
| V8 (0.82) | STD | 0.03 | 0.06 | 0.05 | 0.03 |
| V8 (0.82) | MBE | -0.02 | -0.15 | -0.03 | -0.36 |
The observed FVC values at different flight heights differ because an angular effect arises with increasing flight altitude (viewing becomes closer to parallel as the UAV height increases). Although the actual focal length of the lens and its field of view are fixed, cropping the image to the same region of interest at ground level narrows the effective field of view as the flight altitude increases. Correspondingly, the threshold changes slightly at different flight altitudes. Fig. 10 shows the a* distributions at two flight altitudes (5 m and 35 m AGL) in the V6 stage. The threshold, the mean values of vegetation and background, and the fitted Gaussian curves were calculated using the HAGFVC method. Note that the valley between the vegetation and background histogram peaks becomes less pronounced as the flight altitude increases, because more mixed pixels cause the overall distribution to become weakly bimodal. However, the threshold determined using the HAGFVC method is still located in the valley.
Fig. 10. Histograms of a* distributions at different flight heights in the V6 stage: (a) above ground level (AGL) of 5 m and (b) AGL of 35 m. The fitted curves, Gaussian parameters and corresponding thresholds were derived using the HAGFVC method.
4.3. Sensitivity analysis

The HAGFVC threshold is a function of the weights, mean values and standard deviations of the Gaussian distributions (Eq. 4). We used the thresholds and FVC estimates of the simulated images to quantify the sensitivity of the HAGFVC method to different spatial resolutions and, therefore, different flight altitudes. The images at different flight altitudes were simulated as described in Section 3.4. The fraction of mixed pixels increases linearly as the spatial resolution of each simulated image decreases (Fig. 11a). Figs. 11b-d illustrate the weight, mean and standard deviation against flight altitude. The mean values of the vegetation and background are almost constant, while the weights and standard deviations decrease as the flight altitude increases and the spatial resolution is reduced. The threshold derived from Eq. (4) is weakly affected by variations in the spatial resolution (Fig. 11e). Correspondingly, the FVC estimates closely agree with the true values (deviations of less than 0.07 at resolutions finer than 3.2 cm for all simulated images; Fig. 11f). These results suggest that the threshold used to segment green vegetation and the background is approximately scale invariant and that the uncertainty transferred to the FVC estimates is small.
Fig. 11. Uncertainty analysis using the four simulated images. Relationship between flight height and (a) the mean values, (b) standard deviations, (c) weights, (d) fraction of mixed pixels, (e) threshold, and (f) FVC estimates.
5. Discussion
In this study, we have demonstrated that the HAGFVC method provides a solution for estimating FVC from remotely sensed LARS images that yields consistent and accurate results at different spatial resolutions. The method was developed based on a GMM, which describes the spectral characteristics of a land surface covered by vegetation (Coy et al., 2016; Song et al., 2015). The basic concept of the HAGFVC method is to use only pure pixels to derive the Gaussian parameters. We achieved this by fitting half-Gaussian distributions to pure vegetation pixels and pure background pixels to avoid the negative influence of mixed pixels, which are located between the pure vegetation and pure background components in the histogram (Fig. 1). The HAGFVC method uses the pixels at the edges (ends) of the a* histogram, where pure pixels are mainly distributed, to reconstruct full GMMs from the half-Gaussian distributions and then generate a reasonable threshold value. The fact that the FVC estimates in this study were close to the reference values strongly suggests that the negative effect of mixed pixels on FVC estimation was suppressed by the HAGFVC method.
Compared with the other three methods, the HAGFVC method improved the FVC estimates and showed lower RMSEs, MBEs and STDs in the validation for different vegetation coverages. In the three growth stages of corn, the RMSEs and STDs of FVC estimated using the HAGFVC method were less than 0.04, while LAB2 and SHAR-LABFVC yielded larger errors and inconsistencies (RMSEs of up to 0.20 in the V6 stage), and ExG yielded quite large errors (RMSE of 0.36) in the V8 stage (Table 3). For sparse vegetation (V4 stage), when the background dominates the image, all of the methods accurately estimated the FVC and exhibited similar performance. However, in the growth stages with medium and high vegetation coverage (the V6 and V8 growth stages in this study), LAB2, SHAR-LABFVC and ExG produced considerable errors in FVC estimation at high flight altitudes and low spatial resolutions (Fig. 9 and Table 3). This is the result of the number of mixed pixels in an image increasing as the fractions of vegetation and background pixels become similar (Fig. 7). As shown in Fig. 9b, the LAB2 and SHAR-LABFVC methods distinctly overestimated FVC, and the MBE increased with flight altitude (up to 0.19, Table 3), whereas the ExG method underestimated FVC with an MBE of -0.08. In the V8 stage (Fig. 9c), the LAB2 and SHAR-LABFVC methods exhibited better performance than in the V6 stage, but worse performance than in the V4 stage. ExG yielded a considerable underestimation, with an RMSE of up to 0.36 (Table 3). Although the HAGFVC method was validated on a cornfield at one site, the method does not rely on the structure or spectral properties of the crop; it only requires information from the histogram of a* values. Thus, we expect the HAGFVC method to apply to other crop types.
Conventional methods designed for proximal sensing are greatly constrained by the unreasonable decomposition of GMMs caused by large quantities of mixed pixels. LAB2 and SHAR-LABFVC were developed to extract FVC from high-resolution images with few mixed pixels. Although ExG has been used to estimate FVC from LARS images (Torres-Sánchez et al., 2014), the effect of mixed pixels was not fully investigated. Other classical image-processing methods that have been used to segment LARS images, such as K-means, artificial neural networks (ANN), random forests and spectral index methods (Feng et al., 2015; Hu et al., 2017; Poblete-Echeverria et al., 2017), also do not specifically consider mixed pixels. However, mixed pixels occupy a large proportion of the image in some situations (at coarser resolutions and moderate FVC levels). The trend of increasing FVC with height in the LAB2 method results from bias in the training data set, while the trend in the SHAR-LABFVC method results from the weakly bimodal distribution of images acquired at high altitudes. More mixed pixels result in more blurring of foreground and background pixels, so that more pixels have enough 'greenness' to be automatically selected as foreground pixels for training the unsupervised classification used in the LAB2 method. This biases the nearest-neighbor classification used by the LAB2 method towards the foreground as the spatial resolution decreases, which results in increases in the estimated FVC with altitude. The trend of increasing FVC with height in the SHAR-LABFVC method results from the failure to find a reasonable initial cut-off of the a* histogram, which is used to make an initial segmentation. Because of this failure, SHAR-LABFVC resorts to a back-up algorithm that uses a constant empirical threshold to conduct the classification. As the resolution decreases, this constant threshold results in a bias in the FVC estimates. For ExG, the continuous underestimation of FVC in the V6 and V8 stages is mainly due to reduced inter-class variability, which leads to poor segmentation using Otsu's method (Otsu, 1979). Our research demonstrates the need to develop mixed-pixel-resistant methods for analyzing images acquired from UAVs. It is worth noting that, although the method gives accurate estimates of FVC, the resulting classified image should not be used for purposes that require very accurate spatial information about the location of foliage within an image, e.g., Chen-Cihlar clumping corrections (Chen and Cihlar, 1995), because of the large proportion of mixed pixels in high-altitude images.
The HAGFVC method was not substantially affected by illumination conditions or flight altitude. The three UAV datasets were collected in distinctly different illumination environments, i.e., near noon and near nightfall on cloudy days and near nightfall on a sunny day (Table 1). The variations in illumination did not affect the HAGFVC method because the a* values are largely independent of illumination and, moreover, the method does not depend on the absolute values of a*. A sensitivity analysis showed that the threshold was insensitive to variations in the weights and Gaussian parameters of the two pure components, despite the weights and standard deviations clearly decreasing with increasing flight altitude. This is strong evidence that our method is relatively insensitive to the level of green vegetation coverage and the quantity of mixed pixels. According to our analysis, the absolute error was less than 0.07 when the resolution was finer than 3.2 cm. Note that the HAGFVC method only applies as long as the UAV is sufficiently close to the ground for there to be clearly defined pure pixels of green vegetation and background. At very high altitudes, the histogram of a* values becomes unimodal and an empirical threshold is used to estimate FVC. In extreme cases, the images come to resemble images from high-altitude remote sensing, from which only vegetation indices can be derived and pixel classification is challenging.
The complexity of the spatial distribution of vegetation, the variability in illumination conditions (Ponzoni et al., 2014) and the angular effect (Zhao et al., 2012) caused by perspective projection all affect the accuracy of FVC estimation using the HAGFVC method by reducing the precision of the search for the mean values of the Gaussian distributions. In practice, the accuracy of our method depends on the precision with which the mean values of the two components are determined. Fluctuations were observed in the FVC estimates at different spatial resolutions because of errors in determining the mean values. The relatively large fluctuations in FVC estimation at different flight altitudes in the V8 stage (RMSE of up to 0.03 in Fig. 9c) are mainly caused by non-optimal mean values. Generally, non-optimal mean values derive from two sources. The first is the representativeness of the GMM for a vegetated surface; an alternative model might produce better results. The second is sub-optimal smoothing of the histogram; a better smoothing algorithm might achieve a more accurate determination of the initial mean values. Theoretically, estimating these mean values more accurately is the key to improving the accuracy of FVC estimation based on GMM decomposition. Notwithstanding these opportunities for improvement, the HAGFVC method is a significant advance on existing methods in minimizing the effect of mixed pixels and yielding accurate estimates of FVC.
6. Conclusions

This paper proposed a half-Gaussian fitting method (HAGFVC) to decompose a Gaussian mixture model (GMM) and estimate fractional vegetation cover (FVC) from low-altitude remote sensing (LARS) images. The algorithm uses only pure pixels to derive the GMM, suppressing the influence of mixed pixels, and classifies mixed pixels as vegetation or background at nearly equal rates of misclassification. We compared three FVC estimation methods (LAB2, SHAR-LABFVC and ExG) with the HAGFVC method and found that the HAGFVC method generated accurate and robust FVC estimates for crop fields of high, moderate and low vegetation density. In particular, when the fraction of mixed pixels was high (when a corn plant has six visible leaf collars), HAGFVC exhibited good performance, with an RMSE of 0.02 and an MBE of 0.01 at flight altitudes from 3 m to 50 m above ground level (AGL). Although the LAB2, SHAR-LABFVC and ExG methods gave good estimates (RMSEs of less than 0.04) for sparse vegetation, the large quantities of mixed pixels in moderate-density vegetation at coarse spatial resolutions reduced the accuracy of these conventional ground-based methods (RMSEs of up to 0.20). Simulations showed that the theoretical RMSE of the HAGFVC method was less than 0.07 at resolutions finer than 3.2 cm. Consequently, our approach demonstrates the potential for accurately estimating FVC over large areas using UAVs and LARS.
Acknowledgments

This work was supported by the National Science Foundation of China (Grant nos. 41331171 and 61227806). The authors thank Prof. Suhong Liu (Beijing Normal University) for her kind suggestions on image simulation. We also appreciate the help of Dr. Ronghai Hu in the organization of this manuscript and of Dr. Jianbo Qi in the field campaigns and image simulation.
References
Bhardwaj, A., Sam, L., Akanksha, Martín-Torres, F.J., Kumar, R., 2016. UAVs as remote sensing platform in glaciology: 491
Present applications and future prospects. Remote Sens. Environ. 175, 196–204. 492
Carlson, T.N. and Ripley, D.A., 1997. On the relation between NDVI, fractional vegetation cover, and leaf area index. Remote 493
Sens. Environ. 62(3), 241-252. 494
Chapman, S. et al., 2014. Pheno-Copter: A Low-altitude, autonomous remote-sensing robotic helicopter for high-throughput 495
field-based phenotyping. Agronomy, 4(2), 279-301. 496
Chen, J.M., Cihlar, J., 1995. Plant canopy gap-size analysis theory for improving optical measurements of leaf-area index. 497
Appl. Opt. 34 (27), 6211–6222. 498
Chianucci, F., Disperati, L., Guzzi, D. and Bianchini, D., 2016. Estimation of canopy attributes in beech forests using true 499
colour digital images from a small fixed-wing UAV. Int. J. Appl. Earth Obs. Geoinf. 47, 60-68. 500
Cox, D.R., Hinkley, D.V., Rubin, D.B. and Silverman, B.W., 1989. Monographs on statistics and applied probability. 2(2), 501
273-277. 502
Coy, A., Rankine, D., Taylor, M., Nielsen, D. and Cohen, J., 2016.Increasing the accuracy and automation of fractional 503
vegetation cover estimation from digital photographs. Remote Sens. 8(7), 474. 504
Čugunovs, M., Tuittila, E.-S., Mehtätalo, L., Pekkola, L., Sara-Aho, I., Kouki, J., 2017. Variability and patterns in forest soil 505
and vegetation characteristics after prescribed burning in clear-cuts and restoration burnings. Silva Fenn. 51. 506
Woebbecke, D.M., Meyer, G.E., Von Bargen, K., Mortensen, D.A., 1995. Color indices for weed identification under various soil, residue, and lighting conditions. Trans. ASAE 38, 259-269.
Feng, Q., Liu, J. and Gong, J., 2015. UAV Remote sensing for urban vegetation mapping using random forest and texture 509
analysis. Remote Sens. 7(1), 1074-1094. 510
Gutman, G., Ignatov, A., 1997. Satellite-derived green vegetation fraction for the use in numerical weather prediction models. 511
Satell. Data Appl. Weather Clim. 19, 477–480. 512
Hsieh, P.F., Lee, L.C. and Chen, N.Y., 2001. Effect of spatial resolution on classification errors of pure and mixed pixels in 513
remote sensing. IEEE Trans. Geosci. Remote Sensing. 39(12), 2657-2663. 514
Hunt, E.R., Daughtry, C.S.T., Mirsky, S.B. and Hively, W.D., 2014. Remote sensing with simulated unmanned aircraft 515
imagery for precision agriculture applications. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 7(11), 4566-4571. 516
Jones, H. and Sirault, X., 2014. Scaling of thermal images at different spatial resolution: the mixed pixel problem. Agronomy, 517
4(3), 380-396. 518
Jung, M., Henkel, K., Herold, M. and Churkina, G., 2006. Exploiting synergies of global land cover products for carbon cycle modeling. Remote Sens. Environ. 101(4), 534-553.
Rango, A., Laliberte, A., Herrick, J.E., Winters, C., Havstad, K., Steele, C., Browning, D., 2009. Unmanned aerial vehicle-521
based remote sensing for rangeland assessment, monitoring, and management. J. Appl. Remote Sens. 3, 033542. 522
Liu, J. and Pattey, E., 2010. Retrieval of leaf area index from top-of-canopy digital photography over agricultural crops. 523
Agric. For. Meteorol. 150(11), 1485-1490. 524
Liu, Y., Mu, X., Wang, H. and Yan, G., 2012. A novel method for extracting green fractional vegetation cover from digital 525
images. J. Veg. Sci. 23(3), 406-418. 526
Louhaichi, M., Borman, M.M. and Johnson, D.E., 2001. Spatially located platform and aerial photography for documentation 527
of grazing impacts on wheat. Geocarto Int. 16, 65-70. 528
Macfarlane, C. and Ogden, G.N., 2012. Automated estimation of foliage cover in forest understorey from digital nadir images. 529
Methods Ecol. Evol. 3(2), 405-415. 530
Matese, A., Toscano, P., Di Gennaro, S., Genesio, L. and Vaccari, F., 2015. Intercomparison of UAV, aircraft and satellite 531
remote sensing platforms for precision viticulture. Remote Sens. 7(3), 2971-2990. 532
Mesas-Carrascosa, F.J., Notario-García, M.D., Meroño De Larriva, J.E., Sánchez De La Orden, M. and García-Ferrer Porras, 533
A., 2014. Validation of measurements of land plot area using UAV imagery. Int. J. Appl. Earth Obs. Geoinf. 33, 270-279. 534
Mu, X., Hu, M., Song, W., Ruan, G. and Ge, Y., 2015.Evaluation of sampling methods for validation of remotely sensed 535
fractional vegetation cover. Remote Sens. 7(12), 16164-16182. 536
Muir, J., Schmidt, M., Tindall, D., Trevithick, R., Scarth, P., Stewart, J.B., 2011. Field measurement of fractional ground 537
cover : a technical handbook supporting ground cover monitoring for Australia. ABARES. 538
Otsu, N., 1979. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man. Cybern. 9, 62–66. 539
Pérez, A.J., López, F., Benlloch, J.V. and Christensen, S., 2000. Colour and shape analysis techniques for weed detection in 540
cereal fields. Comput. Electron. Agric. 25(3), 197-212. 541
Poblete-Echeverria, C., Federico Olmedo, G., Ingram, B. and Bardeen, M., 2017. Detection and segmentation of vine canopy 542
in ultra-high spatial resolution RGB imagery obtained from unmanned aerial vehicle (UAV): A Case Study in a Commercial 543
Vineyard. Remote Sens. 9(3), 268. 544
Ponzoni, F.J., Da Silva, C.B., Dos Santos, S.B., Montanher, O.C. and Dos Santos, T.B., 2014.Local illumination influence 545
on vegetation indices and plant area index (PAI) relationships. Remote Sens, 6(7), 6266-6282. 546
Qi, J., Xie, D., Guo, D., Yan, G., 2017. A large-scale emulation system for realistic three-dimensional (3-D) forest simulation. 547
IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 10, 4834–4843. 548
Samseemoung, G., Soni, P., Jayasuriya, H.P., Salokhe, V.M., 2012. Application of low altitude remote sensing (LARS) platform for monitoring crop growth and weed infestation in a soybean plantation. Precis. Agric. 13(6), 611-627.
Sellers, P.J., 1997. Modeling the exchanges of energy, water, and carbon between continents and the atmosphere. Science 275, 502-509.
Song, W., Mu, X., Ruan, G. and Gao, Z., 2017. Estimating fractional vegetation cover and the vegetation index of bare soil 553
and highly dense vegetation with a physically based method. Int. J. Appl. Earth Obs. Geoinf. 58, 168-176. 554
Song, W., Mu, X., Yan, G. and Huang, S., 2015. Extracting the green fractional vegetation cover from digital images using 555
a shadow-resistant algorithm (SHAR-LABFVC). Remote Sens. 7(8), 10425-10443. 556
Torres-Sánchez, J., Peña, J.M., de Castro, A.I. and López-Granados, F., 2014. Multi-temporal mapping of the vegetation 557
fraction in early-season wheat fields using images from UAV. Comput. Electron. Agric. 103, 104-113. 558
Watts, A.C., Ambrosia, V.G. and Hinkley, E.A., 2012. Unmanned aircraft systems in remote sensing and scientific research: 559
classification and considerations of use. Remote Sens. 4(12), 1671-1692. 560
Yan, G., Mu, X., Liu, Y., 2012. Fractional vegetation cover, in: Advanced remote sensing. Elsevier, pp. 415–438. 561
Zarco-Tejada, P.J., González-Dugo, V., Berni, J.A.J., 2012. Fluorescence, temperature and narrow-band indices acquired 562
from a UAV platform for water stress detection using a micro-hyperspectral imager and a thermal camera. Remote Sens. 563
Environ. 117, 322–337. 564
Zhao, J., Xie, D., Mu, X., Liu, Y. and Yan, G., 2012. Accuracy evaluation of the ground-based fractional vegetation cover 565
measurement by using simulated images. In: IGARSS 2012. Munich, Germany, pp. 3347-3350.566