Article

Color Indices for Weed Identification Under Various Soil, Residue, and Lighting Conditions


Abstract

Color slide images of weeds among various soils and residues were digitized and analyzed for red, green, and blue (RGB) color content. The red, green, and blue chromatic coordinates (r, g, b) of plants were very different from those of background soils and residue. To distinguish living plant material from a nonplant background, several indices of chromatic coordinates were studied and tested, and proved successful in identifying weeds. The indices included r-g, g-b, (g-b)/|r-g|, and 2g-r-b. A modified hue was also used to distinguish weeds from nonplant surfaces. The modified hue, the 2g-r-b index, and the green chromatic coordinate distinguished weeds from a nonplant background (0.05 level of significance) better than the other indices; however, the modified hue was the most computationally intensive. These indices worked well for both nonshaded and shaded sunlit conditions, and could be used in sensor design for detecting weeds for spot-spraying control.
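As an editorial illustration (a minimal Python sketch, not code from the paper; the function names and epsilon guards are assumptions), the chromatic coordinates and the indices named in the abstract can be computed from an RGB image roughly as follows:

    import numpy as np

    def chromatic_coordinates(rgb):
        # Normalize an H x W x 3 RGB array so that r + g + b = 1 per pixel.
        rgb = rgb.astype(np.float64)
        total = rgb.sum(axis=2, keepdims=True) + 1e-12  # guard for black pixels
        r, g, b = np.moveaxis(rgb / total, 2, 0)
        return r, g, b

    def weed_indices(rgb):
        # Index images studied in the paper: r-g, g-b, (g-b)/|r-g|, 2g-r-b.
        r, g, b = chromatic_coordinates(rgb)
        return {
            "r-g": r - g,
            "g-b": g - b,
            "(g-b)/|r-g|": (g - b) / (np.abs(r - g) + 1e-12),
            "2g-r-b (ExG)": 2.0 * g - r - b,
        }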


... Therefore, customized vegetation indices were developed considering the strong influence of the RE and NIR bands. These vegetation indices are inspired by the Excess Green Index (ExG) [36], which has shown effectiveness in weed discrimination [36], crop identification [37,38] and quantification [39], early-season crop monitoring [40], and multi-temporal mapping of vegetation fractions [41] using both close-range and UAV-based imagery. Thus, the assumption was made that the added weight of both the RE and NIR bands would improve the detection of phytosanitary problems (Figure 3). ...
... A visual analysis allowed us to conclude that, apart from trees of significant size (chestnuts and other trees), the amount of green vegetation in the study area was low or almost absent ...
Article
Full-text available
Phytosanitary conditions can hamper the normal development of trees and significantly impact their yield. The phytosanitary condition of chestnut stands is usually evaluated by sampling trees followed by a statistical extrapolation process, making it a challenging task, as it is labor-intensive and requires skill. In this study, a novel methodology that enables multi-temporal analysis of chestnut stands using multispectral imagery acquired from unmanned aerial vehicles is presented. Data were collected in different flight campaigns along with field surveys to identify the phytosanitary issues affecting each tree. A random forest classifier was trained with sections of each tree crown using vegetation indices and spectral bands. These were first categorized into two classes: (i) absence or (ii) presence of phytosanitary issues. Subsequently, the class with phytosanitary issues was used to identify and classify either biotic or abiotic factors. Comparison of the classification results obtained by the presented methodology with ground-truth data allowed us to conclude that phytosanitary problems were detected with an accuracy rate between 86% and 91%. As for determining the specific phytosanitary issue, rates between 80% and 85% were achieved. Higher accuracy rates were attained in the last flight campaigns, the stage when symptoms are more prevalent. The proposed methodology proved to be effective in automatically detecting and classifying phytosanitary issues in chestnut trees throughout the growing season. Moreover, it is also able to identify decline or expansion situations. It may be of help as part of decision support systems that further improve the efficient and sustainable management of chestnut stands.
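The random forest step described above can be sketched in a much-simplified, single-stage form; the feature matrix, label encoding, and all parameter values below are hypothetical placeholders, not the study's data:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Hypothetical data: one row per tree-crown section, columns holding
    # vegetation indices and spectral bands; label 1 = phytosanitary issue.
    rng = np.random.default_rng(0)
    X = rng.random((500, 12))
    y = rng.integers(0, 2, 500)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))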
... Fixed cluster centres obtained through experimentation were used. After clustering, the excess green index (a feature used in weed segmentation) (Woebbecke et al., 1995) was used to segment out the leaf pixels from the clusters. An accuracy of 90% was obtained using this method. ...
... Of particular importance to this paper was the index-based segmentation method which uses indices based on RGB spectral bands. The Excess Green Index (ExG) (Woebbecke et al., 1995), Excess Red Index (ExR) (Meyer and Neto, 2008), as well as a linear combination of the two (ExGR) can be used to accentuate the green or red components of an image while suppressing the background. This method can be adapted to images containing rodent bait since the colour of the bait is distinct from the bait station. ...
... We can therefore suppress the background pixels by channel subtraction. The method applied here is similar to the Excess Green (Woebbecke et al., 1995) and Excess Red (Meyer and Neto, 2008) index-based segmentation. To produce a difference image D_a(x, y) with suppressed background pixel intensities for bait type A, we therefore subtract the pixel intensity values in the green channel image, G(x, y), from the red channel image, R(x, y). ...
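A rough sketch of this channel-subtraction step (assuming 8-bit RGB input; the clipping convention is an assumption, not taken from the cited paper):

    import numpy as np

    def difference_image(rgb):
        # Channel subtraction in the spirit of ExG/ExR segmentation: subtract
        # the green channel G(x, y) from the red channel R(x, y) to suppress
        # background pixels for a red-toned bait type.
        rgb = rgb.astype(np.int32)          # avoid uint8 wrap-around
        diff = rgb[..., 0] - rgb[..., 1]    # R(x, y) - G(x, y)
        return np.clip(diff, 0, 255).astype(np.uint8)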
Article
The continual management of pest species and their preventative elimination is an ongoing, labour-intensive problem. Bait stations are pivotal in this management process as they are the point of contact between the rodents and the process, since the rodents need to enter the bait stations to consume the poisoned bait. Monitoring of these bait stations provides feedback on the effectiveness of the management process. However, there is a significant cost associated with periodically sending pest control experts to check the level of bait in the bait stations. This becomes even more apparent over a large geographical area. In this paper we present a method of reducing the labour component associated with regular bait level monitoring by placing a camera in the bait stations and using machine vision to provide an estimate of the amount of bait remaining and the type of bait in the station. Images of four common bait types were captured under artificial light in a closed bait station, and the computer vision algorithms proved effective in identifying the type of bait as well as providing an estimate of the bait level.
... The rationale behind using color-based vegetation indices is to outline the vegetation region of interest, e.g., crops or trees, by combining information from several bands into a single grayscale image. Many color-based indices have been developed, among others Excess Green [31], Excess Red [32], Vegetative Index [33], Visible Atmospheric Resistance Index [34], Normalized Difference Index [35], Triangular Greenness Index [36], and Visible-band Difference Vegetation Index [28]. Other indices combine two or more vegetation indices such as Excess Green minus Excess Red [25], and the Combined index [27]. ...
... Excess Green (ExG): 2g - r - b [31]. Excess Red (ExR): 1.4 × r - g [32]. Excess Green minus Excess Red (ExGR): ExG - ExR [25]. Vegetative Index (VEG): g / (r^0.667 × b^0.333) [33]. Color Index of Vegetation Extraction (CIVE): 0.441 × r - 0.881 × g + 0.385 × b + 18.78745 [24]. Visible Atmospheric Resistant Index (VARI): (g - r) / (g + r - b) [34]. Combined Index (COM): 0.25 × ExG + 0.30 × ExGR + 0.33 × CIVE + 0.12 × VEG [27]. Normalized Difference Index (NDI): (g - r) / (g + r) [35]. Triangular Greenness Index (TGI) ...
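For reference, a hedged sketch of the indices tabulated in this excerpt, assuming r, g, b are chromatic coordinates in [0, 1] and adding small epsilon guards (an assumption) against division by zero:

    import numpy as np

    def color_indices(r, g, b, eps=1e-12):
        # r, g, b: chromatic-coordinate arrays in [0, 1].
        exg = 2.0 * g - r - b
        exr = 1.4 * r - g
        exgr = exg - exr
        veg = g / (np.power(r, 0.667) * np.power(b, 0.333) + eps)
        cive = 0.441 * r - 0.881 * g + 0.385 * b + 18.78745
        vari = (g - r) / (g + r - b + eps)
        com = 0.25 * exg + 0.30 * exgr + 0.33 * cive + 0.12 * veg
        ndi = (g - r) / (g + r + eps)
        return dict(ExG=exg, ExR=exr, ExGR=exgr, VEG=veg, CIVE=cive,
                    VARI=vari, COM=com, NDI=ndi)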
... Red minus Green (R-G): r - g [31]. Green minus Blue (G-B): g - b [31]. Ratio (GB_RG). No normalization was applied to the brightness values because the vegetation indices were mainly used to identify tree markers, and normalization does not necessarily enhance the contrast in index values between trees and marsh. The calculation of vegetation indices resulted in grayscale images, as shown by the ExG and ExR images (Figure 4). ...
Article
Full-text available
Mangrove migration, or transgression in response to global climatic changes or sea-level rise, is a slow process; to capture it, understanding both the present distribution of mangroves at the individual patch (single or clumped trees) scale and their rates of change is essential. In this study, a new method was developed to delineate individual patches and to estimate mangrove cover from very high-resolution (0.08 m spatial resolution) true color (Red (R), Green (G), and Blue (B) spectral channels) aerial photography. The method utilizes marker-based watershed segmentation, where markers are detected using a vegetation index and Otsu’s automatic thresholding. Fourteen commonly used vegetation indices were tested, and shadows were removed from the segmented images to determine their effect on the accuracy of tree detection, cover estimation, and patch delineation. According to point-based accuracy analysis, we obtained adjusted overall accuracies >90% in tree detection using seven vegetation indices. Likewise, using an object-based approach, the highest overlap accuracy between predicted and reference data was 95%. The vegetation index Excess Green (ExG) without shadow removal produced the most accurate mangrove maps by separating tree patches from shadows and background marsh vegetation and detecting more individual trees. The method provides high-precision delineation of mangrove trees and patches, and the opportunity to analyze mangrove migration patterns at the scale of isolated individuals and patches.
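The index-plus-Otsu step described in this abstract could look roughly like the following sketch; the marker-based watershed stage is omitted, and scikit-image's threshold_otsu is an implementation choice here, not necessarily the authors':

    import numpy as np
    from skimage.filters import threshold_otsu

    def exg_otsu_mask(rgb):
        # ExG grayscale image followed by Otsu's automatic threshold.
        total = rgb.astype(np.float64).sum(axis=2) + 1e-12
        r, g, b = (rgb[..., i] / total for i in range(3))
        exg = 2.0 * g - r - b
        return exg > threshold_otsu(exg)  # True where vegetation is likely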
... A CVI is defined as a mathematical function of red (R), green (G), and blue (B) values in a color palette (or their normalized values denoted by r, g, and b for each image pixel). Proposed CVIs abound in the literature; ones cited frequently include Excess Green Index (ExG) (Woebbecke et al. 1995), Excess Red Index (ExR) (Meyer and Neto 2008), Green Leaf Index (GLI) (Louhaichi et al. 2001), Hue (Cheng et al. 2001), Color Index of Vegetation Extraction (CIVE) (Kataoka et al. 2003), Modified Excess Green Index (MEGI) (Mao et al. 2003), Normalized Green-Red Difference Index (NGRDI) (Hunt et al. 2005), Vegetation Index (VEG) (Hague et al. 2006), Excess Green minus Red Index (ExGR) (Meyer and Neto 2008), and more recently, Combined Indices (COM) (Guijarro et al. 2011 (COM1); Guerrero et al. 2012 (COM2)), Modified ExG (MExG), Modified Green Red Vegetation Index (MGVRI) (Bendig et al. 2015), and Red Green Blue Vegetation Index (RGBVI) (Bendig et al. 2015). By applying a CVI, a color field image of RGB channels is converted to a single-dimensional grayscale image, where vegetation colors are highlighted. ...
... H0a: The new index will significantly outperform existing indices in terms of segmentation performance. Woebbecke et al. (1995): indices r-g, g-b, (g-b)/|r-g|, ExG, Hue; species: cocklebur, velvetleaf; best: Hue. Meyer and Neto (2008): indices ExG, ExGR, NGRDI; species: soybean; best: ExGR. Golzarian et al. (2012): indices g, ExG, MEGI, g-r, ...
Article
Color vegetation indices enable various precision agriculture applications by transforming a 3D-color image into its 1D-grayscale counterpart, such that the color of vegetation pixels is accentuated, while that of nonvegetation pixels is attenuated. The quality of the transformation is essential to the outcomes of the computational analyses that follow. The objective of this article is to propose a new vegetation index, the Elliptical Color Index (ECI), which leverages the quadratic discriminant analysis of 3D-color images along a normalized red (r) versus green (g) plane. The proposed index is defined as an ellipse function of the r and g variables with a shape parameter. For comparison, the ECI's performance was evaluated along with six other indices, using as a test sample 240 color images captured from four vegetation species under different illumination and background conditions, together with the corresponding ground-truth patterns. For comparative analysis, the receiver operating characteristic (ROC) and the precision-recall (PR) curves helped quantify the overall performance of vegetation segmentation across all of the vegetation indices evaluated. For a practical appraisal of vegetation segmentation outcomes, this paper applied Gaussian filtering, and then the thresholding method of Otsu, to the grayscale images transformed by each of the indices. Overall, the test results confirmed that ECI outperforms the other indices in terms of the area under the ROC and PR curves, as well as other performance metrics, including total error, precision, and F-score.
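The ROC/PR evaluation protocol described here can be illustrated with a short sketch; the index image and ground-truth mask below are random placeholders, not the study's test sample:

    import numpy as np
    from sklearn.metrics import average_precision_score, roc_auc_score

    # Hypothetical inputs: a grayscale image produced by some vegetation
    # index, and a binary ground-truth mask of the same shape.
    rng = np.random.default_rng(0)
    index_image = rng.random((100, 100))
    ground_truth = rng.random((100, 100)) > 0.5

    labels = ground_truth.ravel().astype(int)
    scores = index_image.ravel()
    print("ROC AUC:", roc_auc_score(labels, scores))
    print("PR AUC (average precision):", average_precision_score(labels, scores))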
... The evolution of vegetative cover for each bioengineering technique adopted was evaluated by means of the vegetative cover index, calculated with code written in MATLAB (2015). The images captured (on a smartphone with a wide-angle lens) during the experimental units' monitoring period (27 months; e.g., Figure 4a) were treated and analyzed with a MATLAB script developed using the method proposed by Woebbecke et al. [44]. This semi-quantitative analysis is proposed as an effective approach to assess vegetation performance under climatological changes (dry and wet spell cycles). ...
... The unsatisfactory vegetative cover index values reported are associated with the difficulty of seed germination and establishment. The geotextiles act as a barrier between the seeds and the soil [44,48] and between the sunlight and the soil, leading to low germination rates [49]. However, the IBT-6 technique proved to prevent superficial slope erosion, as no erosive processes were reported during the inspections performed in the investigated period. ...
Article
Full-text available
Controlling and preventing soil erosion on slope surfaces is a pressing concern worldwide, and at the same time, there is a growing need to incorporate sustainability into our engineering works. This study evaluates the efficiency of bioengineering techniques in the development of vegetation in soil slopes located near a hydroelectric power plant in Brazil. For this purpose, twelve different bioengineering techniques were evaluated, in isolation and in combination, in the slopes (10 m high) of two experimental units (approximately 70 m long each) located next to the Paraíba do Sul riverbanks, in Brazil. High-resolution images of the slopes' frontal view were taken at 15-day intervals in all units for the first 90 days after installation, followed by monthly visits up to 27 months after the works were finished. The images were treated and analyzed in a computer algorithm that, based on three color bands (red-green-blue scale), helps to assess the temporal evolution of the vegetative cover index for each technique adopted. The results showed that most of the solutions showed a deficiency in vegetation establishment and were sensitive to climatological conditions, which induced changes in the vegetation's phytosanitary aspects. Techniques which provided a satisfactory vegetative cover index throughout the investigated period are pointed out.
... The maximal and minimal ExG values tend to decrease as Ganoderma attack levels increase. The ExG value for each attack level is presented in Figure 2, and the distribution of ExG values is presented in Figure 3. [16] reported that ExG has an advantage over other color indices, especially in distinguishing the greenness of plants from soils and residues. ExG also operates within the visible (RGB) wavelength range, which affects the values recorded by the sensor. ...
... The accuracy was calculated from precision sampling of Ganoderma infection levels through VSI and a field census; ExG, ExR, and CIVE had higher accuracy values of 83.4-83.6%, compared with ExB, which reached 81%. These results are in line with [16], which reported that ExG has an advantage over other colour indices, particularly in differentiating greenness. [19] also states that CIVE and ExG tend to process images clearly under different environmental conditions. Meanwhile, [20] supports these results, reporting the lowest standard deviation for ExR. ...
... Excess green (ExG) (Woebbecke et al., 1995): ...
... Woebbecke index (WI) (Woebbecke et al., 1995): ...
Thesis
Full-text available
Remote sensing can assist in monitoring the spread of invasive vegetation. The adoption of camera-carrying unmanned aerial vehicles, commonly referred to as drones, as remote sensing tools has yielded images of higher spatial resolution than traditional techniques. Drones also have the potential to interact with the environment through the delivery of bio-control or herbicide, as seen with their adoption in precision agriculture. Unlike in agricultural applications, however, invasive plants do not have a predictable position relative to each other within the environment. To facilitate the adoption of drones as an environmental monitoring and management tool, drones need to be able to intelligently distinguish between invasive and non-invasive vegetation on the fly. In this thesis, we present the augmentation of a commercially available drone with a deep machine learning model to investigate the viability of differentiating between an invasive shrub and other vegetation. As a case study, this was applied to the shrub genus Hakea, originating in Australia and invasive in several countries including South Africa. However, for this research, the methodology is important, rather than the chosen target plant. A dataset was collected using the available drone and manually annotated to facilitate the supervised training of the model. Two approaches were explored, namely, classification and semantic segmentation. For each of these, several models were trained and evaluated to find the optimal one. The chosen model was then interfaced with the drone via an Android application on a mobile device and its performance was preliminarily evaluated in the field. Based on these findings, refinements were made and thereafter a thorough field evaluation was performed to determine the best conditions for model operation. Results from the classification task show that deep learning models are capable of distinguishing between target and other shrubs in ideal candidate windows. However, classification in this manner is restricted by the proposal of such candidate windows. End-to-end image segmentation using deep learning overcomes this problem, classifying the image in a pixel-wise manner. Furthermore, the use of appropriate loss functions was found to improve model performance. Field tests show that illumination and shadow pose challenges to the model, but that good recall can be achieved when the conditions are ideal. False positive detection remains an issue that could be improved. This approach shows the potential for drones as an environmental monitoring and management tool when coupled with deep machine learning techniques and outlines potential problems that may be encountered.
... Besides the normalized visible-spectrum information of Rn (normalized red), Gn (normalized green), and Bn (normalized blue), three vegetation indices, the Excess Green index (ExG), Excess Red index (ExR), and Excess Green minus Excess Red index (ExGR), were calculated for the training and testing datasets (Woebbecke et al., 1995; Meyer and Neto, 2008). Formulas for these three vegetation indices are listed below. ...
Article
Full-text available
Rice is a globally important crop that will continue to play an essential role in feeding our world as we grapple with climate change and population growth. Lodging is a primary threat to rice production, decreasing rice yield and quality. Lodging assessment is a tedious task that requires heavy labor over a long duration due to the vast land areas involved. Newly developed autonomous crop scouting techniques have shown promise in mapping crop fields without any human interaction. By combining autonomous scouting and lodged rice detection with edge computing, it is possible to estimate rice lodging faster and at a much lower cost than with previous methods. This study presents an adaptive crop scouting mechanism for Autonomous Unmanned Aerial Vehicles (UAVs). We simulate UAV crop scouting of rice fields at multiple levels using deep neural networks and real UAV energy profiles, focusing on areas with high lodging. Using the proposed method, we can scout rice fields 36% faster than conventional scouting methods at 99.25% accuracy.
... Recent work has shown that VIs derived from consumer-grade RGB cameras were comparable to or better than spectroradiometer-derived VIs in estimating chlorophyll content and other parameters for wheat [45]. The ExG-ExR index and its constituent components, excess green (ExG) [46] and excess red (ExR) [47], were originally developed to isolate vegetation from non-vegetation in RGB photography, but the index has also been used successfully to detect vegetation stress from RGB imagery collected from a small unmanned aerial system (sUAS) [48]. The simple VCI index has been shown to be sensitive to detecting phenology changes, including the start of senescence [40]. ...
Article
Full-text available
The early detection of plant pathogens at the landscape scale holds great promise for better managing forest ecosystem threats. In Hawai'i, two recently described fungal species are responsible for increasingly widespread mortality in 'ōhi'a Metrosideros polymorpha, a foundational tree species in Hawaiian native forests. In this study, we share work from repeat laboratory and field measurements to determine if visible near-infrared and optical remote sensing can detect pre-symptomatic trees infected with these pathogens. After generating a dense time series of laboratory spectral reflectance data and red green blue (RGB) images for inoculated 'ōhi'a seedlings, seedlings subjected to extreme drought, and control plants, we found few obvious spectral indicators that could be used for reliable pre-symptomatic detection in the inoculated seedlings, which quickly experienced complete and total wilting following stress onset. In the field, we found similar results when we collected repeat multispectral and RGB imagery over inoculated mature trees (sudden onset of symptoms with little advance warning). We found selected vegetation indices to be reliable indicators for detecting non-specific stress in 'ōhi'a trees, but never providing more than five days prior warning relative to visual detection in the laboratory trials. Finally, we generated a sequence of linear support vector machine classification models from the laboratory data at time steps ranging from pre-treatment to late-stage stress. Overall classification accuracies increased with stress stage maturity, but poor model performance prior to stress onset and the sudden onset of symptoms in infected trees suggest that early detection of rapid 'ōhi'a death over timescales helpful for land managers remains a challenge.
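The linear support vector machine classification mentioned above might be set up as in the following sketch; the reflectance matrix, labels, and preprocessing are assumptions, not the study's pipeline:

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import LinearSVC

    # Hypothetical data: one row per seedling, columns = spectral reflectance
    # bands at a given time step; label 1 = inoculated/stressed, 0 = control.
    rng = np.random.default_rng(0)
    X = rng.random((120, 300))
    y = rng.integers(0, 2, 120)

    model = make_pipeline(StandardScaler(), LinearSVC(dual=False)).fit(X, y)
    print("training accuracy:", model.score(X, y))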
... ExG, excess green index: 2ρg - ρr - ρb [28]. CIVE, color index of vegetation: 0.441ρr - 0.881ρg + 0.385ρb + 18.787 [29], where ρb is the reflectance at the blue band, ρg is the reflectance at the green band, ρr is the reflectance at the red band, λb is the wavelength of the blue band, and λr is the wavelength of the red band. ...
Article
Full-text available
Red–green–blue (RGB) cameras attached to commercial unmanned aerial vehicles (UAVs) can support small-scale remote-observation campaigns by mapping an area of interest to within a few centimeters' accuracy. Vegetated areas need to be identified either for masking purposes (e.g., to exclude vegetated areas for the production of a digital elevation model (DEM)) or for monitoring vegetation anomalies, especially for precision agriculture applications. However, while detection of vegetated areas is of great importance for several UAV remote sensing applications, this type of processing can be quite challenging. Usually, healthy vegetation can be extracted in the near-infrared part of the spectrum (approximately 760-900 nm), which is not captured by visible (RGB) cameras. In this study, we explore several visible (RGB) vegetation indices in different environments using various UAV sensors and cameras to validate their performance. For this purpose, openly licensed unmanned aerial vehicle (UAV) imagery has been downloaded "as is" and analyzed. The overall results are presented in the study. As was found, the green leaf index (GLI) was able to provide the optimum results for all case studies.
... To avoid potential misclassification of target objects as irrelevant objects, it is a good approach to mask the non-vegetated area up front. For this purpose, the excess green vegetation index (ExG) [44] was calculated using Equation (4). ...
Article
Full-text available
In recent years, Unmanned Aerial Systems (UAS) have emerged as an innovative technology to provide spatio-temporal information about weed species in crop fields. Such information is a critical input for any site-specific weed management program. A multi-rotor UAS (Phantom 4) equipped with an RGB sensor was used to collect imagery in three bands (Red, Green, and Blue; 0.8 cm/pixel resolution) with the objectives of (a) mapping weeds in cotton and (b) determining the relationship between image-based weed coverage and ground-based weed densities. For weed mapping, three different weed density levels (high, medium, and low) were established for a mix of different weed species, with three replications. To determine weed densities through ground truthing, five quadrats (1 m × 1 m) were laid out in each plot. The aerial imageries were preprocessed and subjected to Hough transformation to delineate cotton rows. Following the separation of inter-row vegetation from crop rows, a multi-level classification coupled with machine learning algorithms were used to distinguish intra-row weeds from cotton. Overall, accuracy levels of 89.16%, 85.83%, and 83.33% and kappa values of 0.84, 0.79, and 0.75 were achieved for detecting weed occurrence in high, medium, and low density plots, respectively. Further, ground-truthing based overall weed density values were fairly correlated (r2 = 0.80) with image-based weed coverage assessments. Among the specific weed species evaluated, Palmer amaranth (Amaranthus palmeri S. Watson) showed the highest correlation (r2 = 0.91) followed by red sprangletop (Leptochloa mucronata Michx) (r2 = 0.88). The results highlight the utility of UAS-borne RGB imagery for weed mapping and density estimation in cotton for precision weed management.
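The Hough transformation step used to delineate crop rows could be sketched as follows with OpenCV; the input mask and all threshold values here are assumptions, not the study's settings:

    import cv2
    import numpy as np

    def crop_row_lines(vegetation_mask):
        # Probabilistic Hough line transform on a binary vegetation mask
        # (uint8, values 0/255); returns line segments as (x1, y1, x2, y2).
        lines = cv2.HoughLinesP(vegetation_mask, rho=1, theta=np.pi / 180,
                                threshold=100, minLineLength=200, maxLineGap=50)
        return [] if lines is None else [tuple(l[0]) for l in lines]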
... Statistics were applied to assess the accuracy and precision of the classification process. The excess green index [66], one of the most widely used indices in the visible spectrum, was used for vegetation identification in our test sites. Moreover, for discontinuity surface detection, researchers [67] developed a semi-automated top-down object-based methodology for extracting lineaments from airborne magnetic data. ...
Article
Full-text available
The increased development of computer vision technology, combined with the increased availability of innovative platforms with ultra-high-resolution sensors, has generated new opportunities and fields for investigation in the engineering geology domain in general and landslide identification and characterization in particular. During the last decade, so-called Unmanned Aerial Vehicles (UAVs) have been evaluated for diverse applications such as 3D terrain analysis, slope stability, mass movement hazard, and risk management. Their advantages of detailed data acquisition at low cost and effective performance identify them as leading platforms for site-specific 3D modelling. In this study, the proposed methodology has been developed based on Object-Based Image Analysis (OBIA) and fusion of multivariate data resulting from UAV photogrammetry processing, in order to take full advantage of the produced data. Two landslide case studies within the territory of Greece, with different geological and geomorphological characteristics, have been investigated in order to assess the performance of the developed landslide detection and characterization algorithm in distinct scenarios. The methodology outputs demonstrate the potential for an accurate characterization of individual landslide objects within this natural process, based on ultra-high-resolution data from close-range photogrammetry and OBIA techniques for landslide conceptualization. This study shows that UAV-based landslide modelling of the specific case sites provides a detailed, automated characterization of local-scale events with high adaptability to each site.
... Processed orthomosaics and DSMs were loaded into Quantum GIS ("QGIS") version 2.18 [36]. The excess green (ExG, [37]) vegetation index was calculated from the true-color orthomosaics using the QGIS raster calculator function. ExG and NDVI rasters were then used for threshold-based image classification to distinguish canopy from soil. ...
Article
Full-text available
Vigorous early-season growth rate allows crops to compete more effectively against weeds and to conserve soil moisture in arid areas. These traits are of increasing economic importance due to changing consumer demand, reduced labor availability, and climate-change-related increasing global aridity. Many crop species, including common bean, show genetic variation in growth rate, between varieties. Despite this, the genetic basis of early-season growth has not been well-resolved in the species, in part due to historic phenotyping challenges. Using a range of UAV- and ground-based methods, we evaluated the early-season growth vigor of two populations. These growth data were used to find genetic regions associated with several growth parameters. Our results suggest that early-season growth rate is the result of complex interactions between several genetic and environmental factors. They also highlight the need for high-precision phenotyping provided by UAVs. The quantitative trait loci (QTLs) identified in this study are the first in common bean to be identified remotely using UAV technology. These will be useful for developing crop varieties that compete with weeds and use water more effectively. Ultimately, this will improve crop productivity in the face of changing climatic conditions and will mitigate the need for water and resource-intensive forms of weed control.
... Previous studies under controlled and uncontrolled environmental conditions have suggested a variety of approaches to separate plants from a non-plant background. Color index-based approaches, such as the normalized difference index (Woebbecke et al., 1993), excess green index (Woebbecke et al., 1995), color index of vegetation extraction (Kataoka et al., 2004, 2005), and normalized green-red difference index (Hunt et al., 2005), are commonly used for image segmentation in the field. A comprehensive review of these methods has been reported in the literature (Hamuda et al., 2016). ...
... The canopy cover was calculated per time point and plot using the Excess Green vegetation index (ExG, [19]). Pixels corresponding to vegetation were differentiated from pixels corresponding to soil (Figure 1B) using simple thresholding, and the proportion of pixels classified as vegetation was then calculated. ...
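A minimal sketch of this canopy-cover calculation; the ExG threshold value below is purely illustrative, not the one used in the study:

    import numpy as np

    def canopy_cover(rgb, exg_threshold=0.05):
        # Fraction of pixels classified as vegetation by thresholding ExG.
        total = rgb.astype(np.float64).sum(axis=2) + 1e-12
        r, g, b = (rgb[..., i] / total for i in range(3))
        return float(np.mean((2.0 * g - r - b) > exg_threshold))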
Article
Full-text available
Close remote sensing approaches can be used for high throughput on-field phenotyping in the context of plant breeding and biological research. Data on canopy cover (CC) and canopy height (CH) and their temporal changes throughout the growing season can yield information about crop growth and performance. In the present study, sigmoid models were fitted to multi-temporal CC and CH data obtained using RGB imagery captured with a drone for a broad set of soybean genotypes. The Gompertz and Beta functions were used to fit CC and CH data, respectively. Overall, 90.4% of fits for CC and 99.4% of fits for CH reached an adjusted R² > 0.70, demonstrating good performance of the models chosen. Using these growth curves, parameters including maximum absolute growth rate, early vigor, maximum height, and senescence were calculated for a collection of soybean genotypes. This information was also used to estimate seed yield and maturity (R8 stage) (adjusted R² = 0.51 and 0.82). Combinations of parameter values were tested to identify genotypes with interesting traits. An integrative approach of fitting a curve to a multi-temporal dataset resulted in biologically interpretable parameters that were informative for relevant traits.
... ExG is useful for discriminating between green vegetation and non-green backgrounds [26]. This index, coupled with classified crop mean heights, has been used to predict corn grain yield with good results [38]. ...
Article
Full-text available
Crop monitoring and appropriate agricultural management practices of elite germplasm will enhance bioenergy’s efficiency. Unmanned aerial systems (UAS) may be a useful tool for this purpose. The objective of this study was to assess the use of UAS with true color and multispectral imagery to predict the yield and total cellulosic content (TCC) of newly created energy cane germplasm. A trial was established in the growing season of 2016 at the Texas A&M AgriLife Research Center in Weslaco, Texas, where 15 energy cane elite lines and three checks were grown on experimental plots, arranged in a complete block design and replicated four times. Four flights were executed at different growth stages in 2018, at the first ratoon crop, using two multi-rotor UAS: the DJI Phantom 4 Pro equipped with RGB camera and the DJI Matrice 100, equipped with multispectral sensor (SlantRange 3p). Canopy cover, canopy height, NDVI (Normalized Difference Vegetation Index), and ExG (Excess Green Index) were extracted from the images and used to perform a stepwise regression to obtain the yield and TCC models. The results showed a good agreement between the predicted and the measured yields (R2 = 0.88); however, a low coefficient of determination was found between the predicted and the observed TCC (R2 = 0.30). This study demonstrated the potential application of UAS to estimate energy cane yield with high accuracy, enabling plant breeders to phenotype larger populations and make selections with higher confidence.
... ; Modified Green Red Vegetation Index (MGRVI), Bendig et al. (2014); Red Green Blue Vegetation Index (RGBVI), Bendig et al. (2015); Excess Green Index (ExG), Woebbecke et al. (1995); Excess Red Index (ExR), Meyer et al. (1999); Excess Green minus Red Index (ExGR), Neto (2004); Grassland Index (GrassI), Bareth et al. (2015); Excess Green Combined with CHM (ExG + CHM), Viljanen et al.; indices using two spectral bands of the hyperspectral camera: Normalised Difference Vegetation Index (NDVI), Rouse et al. (1974); Ratio Vegetation Index (RVI), Pearson and Miller (1972); Modified Soil-Adjusted Vegetation Index (MSAVI), Qi et al. (1994); Optimisation of Soil-Adjusted Vegetation Index (OSAVI), Rondeaux et al. ...
Article
Full-text available
Drones offer entirely new prospects for precision agriculture. This study investigates the utilisation of drone remote sensing for managing and monitoring silage grass swards. In northern countries, grass swards are fertilised and harvested three times per season when aiming to maximise the yield. Information about the grass quantity and quality is necessary to optimise these operations. Our objectives were to investigate and develop machine-learning techniques for estimating these parameters using drone photogrammetry and spectral imaging. Trial sites were established in southern Finland for the primary growth and regrowth of grass in the summer of 2017. Remote-sensing datasets were captured four times during the primary growth season and three times during the regrowth period. Reference measurements included fresh and dry biomass and several quality parameters, such as the digestibility of organic matter in dry matter (the D-value), neutral detergent fibre (NDF), indigestible neutral detergent fibre (iNDF), water-soluble carbohydrates (WSC), the nitrogen concentration (Ncont) in dry matter (DM) and nitrogen uptake (NU). Machine-learning estimators based on random forest (RF) and multiple linear regression (MLR) methods were trained using the reference measurements and tested using independent test datasets. The best results for the biomass estimation, nitrogen amount and digestibility were obtained when using hyperspectral and 3D data, followed by the combination of multispectral and 3D data. During the training process, the best normalised root-mean-square errors (RMSE%) were 14.66% for the dry biomass and 12% for fresh biomass; the best RMSE% values for NU, the D-value and NDF were 13.6%, 1.98% and 3% respectively. For the primary growth, the accuracies of all quality parameters were better than 20% with the independent test datasets; for the regrowth, the estimation accuracies of the D-value, iNDF, NDF, Ncont and NU were better than 20%. The results showed that drone remote sensing was an excellent tool for the efficient and accurate management of silage production.
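The random forest regression and normalised RMSE (RMSE%) evaluation described above might be set up as in this sketch; the features, target values, and split are hypothetical placeholders:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    # Hypothetical data: one row per plot, columns = photogrammetric (3D)
    # and spectral features; target = fresh or dry biomass.
    rng = np.random.default_rng(0)
    X = rng.random((80, 20))
    y = rng.random(80) * 5000.0

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    pred = rf.predict(X_te)
    rmse = np.sqrt(np.mean((y_te - pred) ** 2))
    print("RMSE%:", 100.0 * rmse / y_te.mean())  # normalised RMSE as in the text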
... The Excess-Green Index (ExG) was used to discriminate vegetation from other terrestrial elements using Equations (2)-(4) (Woebbecke et al. 1995). The ExG has been shown to produce satisfactory results for vegetation discrimination in aerial images (Ponti 2013;Marcial-Pablo et al. 2019). ...
Article
Full-text available
Torrential rainfall can generate landslides, flash floods, and debris flows which might become disasters, causing loss of life and damage to property and infrastructure. To respond opportunely to hydrometeorological hazards, it is necessary to assess, rapidly and accurately, damage to the affected area. This is commonly done through time-consuming reconnaissance visits to obtain detailed field information. This paper proposes a methodology which uses: i) high-resolution satellite and RGB images from unmanned aerial vehicles (UAVs), ii) digital elevation models (DEMs), and iii) object-based image analysis (OBIA) for rapid urban flood damage assessment and estimation of the number of houses washed away, or with a total or partial roof collapse, by comparing pre- and post-event data. The case study was Tropical Storm Earl in 2016, which affected the town of Chicahuaxtla, Puebla, Mexico, due to the overflow of the Zempoloantongo River that cuts through the town, causing loss of life and severe property damage. The results indicate that the three-pronged approach proposed herein is able to discriminate changes before and after the event and improve image classification of washed-away or destroyed houses. The overall accuracy of the proposed automatic classification obtained with UAV data was 97.4%. Structural damage was not assessed in this study.
... Due to the lack of a NIR band in the UAV imagery, we use the Normalized ExG vegetation index (Woebbecke et al., 1995) to identify vegetation. The ExG can be calculated as: ...
Article
Full-text available
In-situ slum upgrading projects include infrastructural improvements such as new roads, which are perceived to improve the quality of life for the residents and encourage structural improvements at a household level. Although these physical changes are easily visible in satellite imagery, it is more difficult to track incremental improvements undertaken by the residents – which are perhaps more closely linked to the socio-economic development of the households themselves. The improved detail provided by imagery obtained from Unmanned Aerial Vehicles (UAVs) has the potential to monitor these more subtle changes in a settlement. This paper provides a framework which takes advantage of high-resolution imagery and a detailed elevation model from UAVs to detect changes in informal settlements. The proposed framework leverages expert knowledge to provide training labels for deep learning and thus avoids the cost of manual labelling. The semantic classification is then used to interpret a change mask and identify: new buildings, the creation of open spaces, and incremental roof upgrading in an informal settlement. The methodology is demonstrated on UAV imagery of an informal settlement in Kigali, Rwanda, successfully identifying changes between 2015 and 2017 with an Overall Accuracy of 95% and correctly interpreting changes with an Overall Accuracy of 91%. Results reveal that almost half the buildings in the settlement show visible changes in the roofing material, and 61% of these changed by less than 1 m². This demonstrates the incremental nature of housing improvements in the settlement.
... Triangular greenness index: TGI = R_Green - 0.39·R_Red - 0.61·R_Blue [48]. Excess green index: ExG = 2g - r - b [49]. Visible atmospherically resistant index: VARI = (R_Green - R_Red)/(R_Green + R_Red - R_Blue) [50]. Normalized difference vegetation index: NDVI = (R_NIR - R_Red)/(R_NIR + R_Red) [51]. Normalized difference red edge: NDRE = (R_NIR - R_RE)/(R_NIR + R_RE) [52]. Wide dynamic range vegetation index*: WDRVI = (α·R_NIR - R_Red)/(α·R_NIR + R_Red) [53]. * WDRVI with an α coefficient value of 0.1 presented a good relationship with corn canopy cover [53]. ...
Article
Full-text available
Corn yields vary spatially and temporally in the plots as a result of weather, altitude, variety, plant density, available water, nutrients, and planting date; these are the main factors that influence crop yield. In this study, different multispectral and red-green-blue (RGB) vegetation indices were analyzed, as well as the digitally estimated canopy cover and plant density, in order to estimate corn grain yield using a neural network model. The relative importance of the predictor variables was also analyzed. An experiment was established with five levels of nitrogen fertilization (140, 200, 260, 320, and 380 kg/ha) and four replicates, in a completely randomized block design, resulting in 20 experimental polygons. Crop information was captured using two sensors (Parrot Sequoia_4.9, and DJI FC6310_8.8) mounted on an unmanned aerial vehicle (UAV) for two flight dates at 47 and 79 days after sowing (DAS). The correlation coefficient between the plant density, obtained through the digital count of corn plants, and the corn grain yield was 0.94; this variable was the one with the highest relative importance in the yield estimation according to Garson’s algorithm. The canopy cover, digitally estimated, showed a correlation coefficient of 0.77 with respect to the corn grain yield, while the relative importance of this variable in the yield estimation was 0.080 and 0.093 for 47 and 79 DAS, respectively. The wide dynamic range vegetation index (WDRVI), plant density, and canopy cover showed the highest correlation coefficient and the smallest errors (R = 0.99, mean absolute error (MAE) = 0.028 t ha−1, root mean square error (RMSE) = 0.125 t ha−1) in the corn grain yield estimation at 47 DAS, with the WDRVI index and the density being the variables with the highest relative importance for this crop development date. For the 79 DAS flight, the combination of the normalized difference vegetation index (NDVI), normalized difference red edge (NDRE), WDRVI, excess green (EXG), triangular greenness index (TGI), and visible atmospherically resistant index (VARI), as well as plant density and canopy cover, generated the highest correlation coefficient and the smallest errors (R = 0.97, MAE = 0.249 t ha−1, RMSE = 0.425 t ha−1) in the corn grain yield estimation, where the density and the NDVI were the variables with the highest relative importance, with values of 0.295 and 0.184, respectively. However, the WDRVI, plant density, and canopy cover estimated the corn grain yield with acceptable precision (R = 0.96, MAE = 0.209 t ha−1, RMSE = 0.449 t ha−1). The generated neural network models provided a high correlation coefficient between the estimated and the observed corn grain yield, and also showed acceptable errors in the yield estimation. The spectral information registered through remote sensors mounted on unmanned aerial vehicles and its processing in vegetation indices, canopy cover, and plant density allowed the characterization and estimation of corn grain yield. Such information is very useful for decision-making and agricultural activities planning.
... One widely used transformation is to calculate color indices from the original RGB (red, green, and blue) values. For example, the excess green (ExG) index provides a clear contrast between plant objects and the soil background and has performed well in vegetation segmentation [14][15][16]. Based on the composition of the retina of the human eye (4% blue cones, 32% green cones, and 64% red cones), the excess red (ExR) index was introduced to separate leaf regions from the background [15]. ...
Article
Full-text available
Maize plant detection was conducted in this study with the goals of target fertilization and reduction of fertilization waste in weed spots and gaps between maize plants. The methods used included two types of color featuring and deep learning (DL). The four color indices used were excess green (ExG), excess red (ExR), ExG minus ExR, and the hue value from the HSV (hue, saturation, and value) color space, while the DL methods used were YOLOv3 and YOLOv3_tiny. For practical application, this study focused on performance comparison in detection accuracy, robustness to complex field conditions, and detection speed. Detection accuracy was evaluated by the resulting images, which were divided into three categories: true positive, false positive, and false negative. The robustness evaluation was performed by comparing the average intersection over union of each detection method across different sub-datasets, namely the original subset, blur-processed subset, increased-brightness subset, and reduced-brightness subset. The detection speed was evaluated by the indicator of frames per second. Results demonstrated that the DL methods outperformed the color index-based methods in detection accuracy and robustness to complex conditions, while they were inferior to color feature-based methods in detection speed. This research shows the application potential of deep learning technology in maize plant detection. Future efforts are needed to improve the detection speed for practical applications.
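The average intersection over union used for the robustness evaluation can be computed per image as in this short sketch (the empty-mask convention is an assumption):

    import numpy as np

    def intersection_over_union(pred_mask, true_mask):
        # IoU of two boolean masks; two empty masks score 1.0 by convention.
        intersection = np.logical_and(pred_mask, true_mask).sum()
        union = np.logical_or(pred_mask, true_mask).sum()
        return float(intersection / union) if union else 1.0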
... The green chromatic coordinate (GCC) accounts for the influence of scene illumination on brightness levels (Sonnentag et al., 2012; Woebbecke et al., 1995) and is often used in phenology studies (Migliavacca et al., 2011; Richardson et al., 2007). ...
Article
Full-text available
The short revisit times afforded by recently-deployed optical satellite sensors that acquire 3–30 m resolution imagery provide new opportunities to study seasonal vegetation dynamics. Previous studies demonstrated a successful retrieval of phenology with Sentinel-2 for relatively stable annual growing seasons. In semi-arid East Africa however, vegetation responds rapidly to a concentration of rainfall over short periods and consequently is subject to strong interannual variability. Obtaining a sufficient density of cloud-free acquisitions to accurately describe these short vegetation cycles is therefore challenging. The objective of this study is to evaluate if data from two satellite constellations, i.e., PlanetScope (3 m resolution) and Sentinel-2 (10 m resolution), each independently allow for accurate mapping of vegetation phenology under these challenging conditions. The study area is a rangeland with bimodal seasonality located at the 128-km² Kapiti Farm in Machakos County, Kenya. Using all the available PlanetScope and Sentinel-2 imagery between March 2017 and February 2019, we derived temporal NDVI profiles and fitted double hyperbolic tangent models (equivalent to commonly-used logistic functions), separately for the two rainy seasons locally referred to as the short and long rains. We estimated start- and end-of-season for the series using a 50% threshold between minimum and maximum levels of the modelled time series (SOS50/EOS50). We compared our estimates against those obtained from vegetation index series from two alternative sources, i.e. a) greenness chromatic coordinate (GCC) series obtained from digital repeat photography, and b) MODIS NDVI. We found that both PlanetScope and Sentinel-2 series resulted in acceptable retrievals of phenology (RMSD of ~8 days for SOS50 and ~15 days for EOS50 when compared against GCC series) suggesting that the sensors individually provide sufficient temporal detail. However, when applying the model to the entire study area, fewer spatial artefacts occurred in the PlanetScope results. This could be explained by the higher observation frequency of PlanetScope, which becomes critical during periods of persistent cloud cover. We further illustrated that PlanetScope series could differentiate the phenology of individual trees from grassland surroundings, whereby tree green-up was found to be both earlier and later than for grass, depending on location. The spatially-detailed phenology retrievals, as achieved in this study, are expected to help in better understanding climate and degradation impacts on rangeland vegetation, particularly for heterogeneous rangeland systems with large interannual variability in phenology and productivity.
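The model fitting and threshold-based retrieval described here can be illustrated with a sketch; the double hyperbolic tangent parameterisation below is an assumed form of the model family named in the abstract, and the NDVI series is synthetic:

    import numpy as np
    from scipy.optimize import curve_fit

    def double_tanh(t, vmin, amp, t_green, s_green, t_sen, s_sen):
        # Double hyperbolic tangent greening/senescence model (assumed form).
        return vmin + 0.5 * amp * (np.tanh((t - t_green) / s_green)
                                   - np.tanh((t - t_sen) / s_sen))

    # Synthetic NDVI series for one rainy season (day of season, value).
    rng = np.random.default_rng(0)
    t = np.arange(0.0, 120.0, 5.0)
    ndvi = double_tanh(t, 0.2, 0.4, 25, 8, 90, 10) + rng.normal(0, 0.02, t.size)

    p0 = [ndvi.min(), np.ptp(ndvi), 30, 10, 80, 10]
    params, _ = curve_fit(double_tanh, t, ndvi, p0=p0, maxfev=10000)

    # SOS50/EOS50: crossings of the 50% level between fitted min and max.
    dense = np.linspace(t.min(), t.max(), 2000)
    fit = double_tanh(dense, *params)
    half = fit.min() + 0.5 * (fit.max() - fit.min())
    above = fit >= half
    sos50 = dense[above.argmax()]                         # first point above 50%
    eos50 = dense[len(above) - 1 - above[::-1].argmax()]  # last point above 50%
    print(f"SOS50 = {sos50:.1f} days, EOS50 = {eos50:.1f} days")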
... of processes such as rainfall and earthquakes. Due to the rapid growth of the human population, anthropogenic activities are bound to continue expanding into landslide-prone environments; hence, recognition of the scope and magnitude of the hazard has increased (Cruden and Varnes 1996; De Blasio 2011; Woebbecke et al. 1995). ...
... The OVI method could remove the noncanopy areas with particularly critical effects on the VI [48], although areas with less impact were not completely removed. The OS method used the texture and spectral features of the UAV image for segmentation, so that the heterogeneity of spectrum and texture within the same segmentation area was small, while that between different segmentation areas was large [49,50]. ...
Article
Full-text available
The accurate estimation of the key growth indicators of rice is conducive to rice production, and rapid monitoring of these indicators can be achieved through remote sensing using the commercial RGB cameras of unmanned aerial vehicles (UAVs). However, the method of using UAV RGB images has lacked an optimized model to achieve accurate quantification of rice growth indicators. In this study, we established correlations between multi-stage vegetation indices (VIs) extracted from UAV imagery and the leaf dry biomass, leaf area index, and leaf total nitrogen for each growth stage of rice. Then, we used the optimal VI (OVI) method and an object-oriented segmentation (OS) method to remove the noncanopy area of the image to improve the estimation accuracy. We selected the OVI and the models with the best correlation for each growth stage to establish a simple estimation model database. The results showed that the OVI and OS methods to remove the noncanopy area can improve the correlation between the key growth indicators and the VI of rice. At the tillering stage and early jointing stage, the correlations between leaf dry biomass (LDB) and the Green Leaf Index (GLI) and Red Green Ratio Index (RGRI) were 0.829 and 0.881, respectively; at the early jointing stage and late jointing stage, the coefficient of determination (R²) between the Leaf Area Index (LAI) and Modified Green Red Vegetation Index (MGRVI) was 0.803 and 0.875, respectively; at the early stage and the filling stage, the correlations between the leaf total nitrogen (LTN) and the UAV-derived Excess Red Vegetation Index (ExR) were 0.861 and 0.931, respectively. By using the simple estimation model database established from the UAV-based VIs and the measured indicators at different growth stages, the rice growth indicators can be estimated for each stage. The proposed estimation model database for monitoring rice at the different growth stages is helpful for improving the estimation accuracy of the key rice growth indicators and accurately managing rice production.
... (Color + NIR)-based indices (Hamuda et al., 2016; Milioto et al., 2018; Xue and Su, 2017) have been proven effective at enhancing the inherent color properties of objects. For example, ExG (Woebbecke et al., 1995) uses chromatic coordinates to express a new representation through the transformation formula 2g - r - b, which shows a clear contrast with the non-plant background. CIVE (Kataoka et al., 2003) emphasizes the green information and obtains discriminative features to segment the crop in soybean and sugar beet images. ...
Article
Full-text available
Weed control is a global issue and has attracted great attention in recent years. Deploying autonomous robots for weed removal has great potential for building environment-friendly agriculture and saving manpower. In this paper, we propose a weed/crop segmentation network that provides better performance for precisely recognizing weeds of arbitrary shape in complex environmental conditions, and offers great support for autonomous robots to successfully reduce the density of weeds. Our deep neural network (DNN)-based segmentation model obtains persistent improvements by integrating four additional components. i) Hybrid dilated convolution and DropBlock are introduced into the classification backbone network, where the hybrid dilated convolution enlarges the receptive field, while DropBlock regularizes the weight parameters to learn robust features by randomly dropping contiguous regions. ii) A universal function approximation block is added to the front-end of the backbone network, which adaptively converts the existing RGB-NIR bands into optimized (RGB + NIR)-based indices to increase the classification performance. iii) The bridge attention block is exploited in order to make the network “globally” refer to correlated regions regardless of distance, capturing rich long-range contextual information. iv) The spatial pyramid refinement block is inserted to fuse multi-scale feature maps with different sizes of receptive fields to provide precise localization of the segmentation result, by maintaining the consistency of feature maps. We evaluate our network performance on the two challenging Stuttgart and Bonn datasets. The state-of-the-art performance on the two datasets shows that each added component has notable potential to boost the segmentation accuracy.
... As shown in the binarized image (the bottom image in Fig. 3b), the white areas correspond to the green crops shown in the top image in Fig. 3b, and the black areas indicate the background, which is soil and crop residue. The specific method for vegetation segmentation is as follows: first, the agronomic image segmentation method (Gée et al., 2008; Woebbecke et al., 1995) is applied to normalize the red (r), green (g), and blue (b) spectral components into the range of 0 to 1 according to Eqs. (8) and (9). ...
Article
Stand counting is one of the most common ways farmers assess plant growth conditions and management practices throughout the season. The conventional method for early-season stand counting is manual inspection, which is time-consuming, laborious, and spatially limited in scope. In recent years, Unmanned Aerial Vehicle (UAV) based remote sensing has been widely used in agriculture to provide low-altitude, high-spatial-resolution imagery to assist decision making. In this project, we designed a system that uses geometric descriptor information with deep neural networks to determine early-season maize stands from relatively low spatial resolution (10 to 25 mm) aerial data covering a relatively large area (10 to 25 hectares). Instead of detecting individual crops in a row, we process the entire row at one time, which significantly reduces the requirements for the clarity of the crops. In addition, our new MaxArea Mask Scoring RCNN algorithm can segment crop rows in each patch image, regardless of the terrain conditions. The robustness of our scheme was tested on data collected at two different fields in different years. The accuracy of the estimated emergence rate reached up to 95.8%. Due to the high processing speed of the system, it has the potential for real-time applications in the future.
... A time-lapse camera is a useful tool for tracking the seasonality of surface conditions. It has been widely used for the determination of phenological timings (Richardson et al. 2007; Woebbecke et al. 1995). The time-lapse camera system was used to quantify phenology at 17 sites across Alaska (Fig. 21.5). ...
Chapter
The boreal forest has acted as a sink of atmospheric CO2 owing to the slow growth of black spruce; however, forest fires and recent warming have shifted parts of the forest toward a CO2 source and have significantly modulated the physiological ecology and biogeochemistry of the boreal forest of Alaska. This chapter describes recent research findings in the boreal forest ecosystem of Alaska: (1) forest aboveground biomass (AGB) from field survey and satellite data, (2) latitudinal gradients of phenology from time-lapse camera and satellite data, (3) spatio-temporal variation of leaf area index (LAI) from the analysis of satellite data, (4) latitudinal distribution of winter- and spring-season soil CO2 emission, and (5) successional changes in CO2 and energy balance after forest fires. Mapping of forest AGB is useful for evaluating vegetation models and carbon stocks in the biogeochemical cycle. The latitudinal distribution of phenology aids understanding of recent and future phenological changes, including in post-fire recovery forests. Interannual variation of LAI reveals leaf dynamics through near-surface remote-sensing approaches combining time-lapse digital camera and satellite data. Spring carbon contributions are sensitive to subtle changes in the onset of spring. Vegetation recovery after forest fire is the major driver of the carbon balance in early succession, and increasing soil carbon emission in response to abrupt climate warming in Alaska is another significant driver of the carbon balance.
... where G_DN, R_DN, and B_DN were the green, red, and blue digital numbers (DN) of each pixel, respectively (Woebbecke et al., 1995; Richardson et al., 2009). The daily ExGI was then smoothed with a 3-day running median filter. ...
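The ExGI equation itself is elided in the snippet, so the sketch below assumes a commonly used normalized digital-number form; only the 3-day running median step follows the text directly.

import numpy as np
import pandas as pd

def exgi(r_dn, g_dn, b_dn):
    # Assumed form: excess green on chromatic coordinates of the DNs.
    total = r_dn + g_dn + b_dn
    return (2 * g_dn - r_dn - b_dn) / total

rng = np.random.default_rng(0)
r_dn, g_dn, b_dn = rng.integers(50, 200, (3, 10)).astype(float)  # synthetic DNs
days = pd.date_range("2020-05-01", periods=10, freq="D")
daily = pd.Series(exgi(r_dn, g_dn, b_dn), index=days, name="ExGI")
smoothed = daily.rolling(window=3, center=True).median()         # 3-day median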
Article
Full-text available
The accurate estimation of temporally-continuous gross primary production (GPP) is important for a mechanistic understanding of the global carbon budget, as well as the carbon exchange between land and atmosphere. Ground-based PhenoCams can provide near-surface observations of plant phenology with high temporal resolution and possess great potential for use in modeling the seasonal dynamics of GPP. However, due to the site-level empirical approaches for estimating the fraction of absorbed photosynthetically active radiation (fAPAR), a broad application of PhenoCams in GPP modeling has been restricted. In this study, the stage of vegetation phenology (Pscalar) is proposed, which is calculated from the excess green index (ExGI) derived from PhenoCam data. We integrate Pscalar with the enhanced vegetation index (EVI) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) in order to generate a daily time-series of the fAPAR (fAPARCAM), and then to estimate daily GPP (GPPCAM) with a light use efficiency model in a semi-arid grassland area from 2012 to 2014. Over the three continuous years, the daily fAPARCAM exhibited similar temporal behavior to the eddy covariance–measured GPP (GPPEC), and the overall determination coefficients (R²) were all > 0.81. GPPCAM agreed well with GPPEC, and these agreements were highly statistically significant (p < 0.01); R² varied from 0.80 to 0.87, the relative error (RE) varied from -2.9% to 2.81%, and the root mean square error (RMSE) ranged from 0.83 to 0.98 gC/m²/d. GPPCAM was then resampled to 8-day temporal resolution (GPPCAM8d), and further evaluated by comparisons with MODIS GPP products (GPPMOD17) and vegetation photosynthesis model (VPM)–derived GPP (GPPVPM). Validation revealed that the variance explained by GPPCAM8d was still the greatest among these three GPP products. The RMSE and RE of GPPCAM8d were also lower than those of the other two GPP products. The explanatory power of predictors in GPP modeling was also explored; the fAPAR was found to be the most influential predictor, followed by photosynthetically active radiation (PAR). The contributions of the environmental stress indices of temperature and water (Tscalar and Wscalar, respectively) were less than that of PAR. These results highlight the potential for PhenoCam images in high temporal resolution GPP modeling. Our GPP modeling method will help reduce uncertainties by using PhenoCam images for monitoring the seasonal development of vegetation production.
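As a rough illustration of the light-use-efficiency formulation the abstract implies, GPP = PAR × fAPAR × LUEmax × Tscalar × Wscalar, here is a minimal sketch; the LUEmax value and the constant stress scalars are placeholders, not the study's parameterization.

import numpy as np

def gpp_lue(par, fapar, lue_max=1.8, t_scalar=1.0, w_scalar=1.0):
    """Daily GPP (gC/m2/d) from PAR (MJ/m2/d), fAPAR (0-1), and stress scalars."""
    return par * fapar * lue_max * t_scalar * w_scalar

par = np.random.uniform(5, 12, 30)       # synthetic daily PAR series
fapar = np.random.uniform(0.2, 0.8, 30)  # e.g. a PhenoCam-derived fAPAR series
gpp = gpp_lue(par, fapar)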
Article
Full-text available
BACKGROUND Bushfires are becoming an increasing issue for the wine sector due to grape and vine losses and to smoke taint in wine. Smoke affects vine physiology, and smoke volatile phenols are absorbed by the plant and berry, contaminating the wine. Our hypothesis was that, for the first time, UAV (unmanned aerial vehicle)-based visible images could be used to study the physiology of smoke-affected vines and to assess the compromised vines. RESULTS Procanico vines were exposed to two smoke treatments, a week apart. Gas exchange and leaf biochemical traits were measured in the short term (30 min after smoke exposure) and in the long term (24 h after smoke exposure). Canopy damage was assessed by conventional VIs (vegetation indices) and by an innovative index derived from UAV-based visible images, the Canopy Area Health Index (CAHI). Gas exchange showed a reduction after the first smoke exposure, but the vines recovered within 24 h. The second smoke exposure led to an irreversible reduction of functional parameters. VIs exhibited significant differences, and CAHI presented a damage gradient related to the nearby bushfire. CONCLUSION Vineyard damage assessment by UAV-based visible images may represent a tool to study the physiological activity of smoke-affected vines and to quantify the loss of destroyed or damaged vines.
Article
The purpose of this study is to monitor the growth of rice on a weekly basis by multicopter. The data collected were used to 1) determine whether top-dressing was required, 2) assess the potential lodging risk, 3) estimate yield, and 4) create maps of rice growth for protein content estimation. The normalized difference vegetation index (NDVI) and green excess index (2G_RBi) were both suitable monitoring indices, and their application revealed the following. 1) The standard deviation of 2G_RBi was useful for determining the timing of top-dressing; application was estimated to be most effective 10-15 days after the maximum standard deviation was recorded. Areas with poor growth could also be identified using the NDVI of the non-productive tillering stages, as could areas where top-dressing needed to be applied. 2) To diagnose lodging, plant height was estimated from the difference between the digital surface model (DSM) before the field was prepared for planting and that on the monitoring day, and the risk of lodging 14 days before heading was shown for the entire area. 3) Yield was highly correlated with the NDVI of the heading stage, and yield maps were created using a yield estimation equation. 4) With regard to eating quality, a strong correlation was observed between the protein content of brown rice and NDVI values from the heading stage to the first half of the maturing stage (15 days after the heading stage), and accurate maps of eating quality were created. The monitoring of rice growth using a multicopter is both safe and cost-effective for individual farmers. By producing objective data and maps for assessments of top-dressing, lodging risk, yield, and protein content, the findings presented here were shown to be useful for the detailed management of crop growth in fields.
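A minimal sketch of the two monitoring indices named above; the 2G_RBi form used here is the common green excess 2G − R − B on digital numbers, an assumption since the paper's exact formula is not shown in the abstract.

import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-9)

def green_excess(r, g, b):
    return 2.0 * g - r - b               # assumed 2G_RBi form

# The per-plot standard deviation of 2G_RBi is the quantity the study
# uses to time top-dressing.
rng = np.random.default_rng(0)
r, g, b = rng.random((3, 100))
spatial_sd = green_excess(r, g, b).std()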
Article
Full-text available
To realize quick localization of maize plants, a new real-time localization approach is proposed for maize cores at the seedling stage, which can meet the basic demands for localization and quantitative fertilization in precision agriculture and reduce environmental pollution and the use of chemical fertilizers. In the first stage, pictures of maize at the seedling stage are taken in a field with a monocular camera, and the maize is segmented from the weed background of the picture. The three most effective methods (i.e., minimum cross entropy, ISODATA, and the Otsu algorithm) are then selected from six common segmentation algorithms by comparing the accuracy of maize extraction and the time efficiency of segmentation. In the second stage, the plant core is recognized and localized in the segmented maize image based on the brightness difference between the maize core and the rest of the plant, and the geometric center of the maize core is taken as the localization point. The best core-extraction results were obtained with the gray-level minimum cross entropy method. According to experimental validation on many field pictures taken under weedy conditions on sunny days, the proposed method has a minimum recognition rate of 88.37% for maize cores and is more robust at excluding weeds.
Chapter
In this paper, an enhanced approach is proposed for greenness identification in an organic carrot crop while the carrot plants were in the initial phase of true leaf development. The paper proposes gamma-correction enhancement and contrast-optimization techniques that work with vegetation index-based methods for the identification of greenness. The efficiency of the enhanced vegetation index-based methods was examined by comparing their performance with that of vegetation index-based methods that have recently been in common use. The findings showed that the accuracy of vegetation extraction by the proposed method was considerably better than that of existing methods.
Article
Full-text available
The spatial resolution of in situ unmanned aerial vehicle (UAV) multispectral images has a crucial effect on crop growth monitoring and image acquisition efficiency. However, existing studies about optimal spatial resolution for crop monitoring are mainly based on resampled images. Therefore, the resampled spatial resolution in these studies might not be applicable to in situ UAV images. In order to obtain the optimal spatial resolution of in situ UAV multispectral images for crop growth monitoring, a RedEdge Micasense 3 camera was installed onto a DJI M600 UAV flying at different heights of 22, 29, 44, 88, and 176 m to capture images of seedling rapeseed with ground sampling distances (GSD) of 1.35, 1.69, 2.61, 5.73, and 11.61 cm, respectively. Meanwhile, the normalized difference vegetation index (NDVI) measured by a GreenSeeker (GS-NDVI) and leaf area index (LAI) were collected to evaluate the performance of nine vegetation indices (VIs) and VI*plant height (PH) at different GSDs for rapeseed growth monitoring. The results showed that the normalized difference red edge index (NDRE) had a better performance for estimating GS-NDVI (R² = 0.812) and LAI (R² = 0.717), compared with other VIs. Moreover, when GSD was less than 2.61 cm, the NDRE*PH derived from in situ UAV images outperformed the NDRE for LAI estimation (R² = 0.757). At oversized GSD (≥5.73 cm), imprecise PH information and a large heterogeneity within the pixel (revealed by semi-variogram analysis) resulted in a large random error for LAI estimation by NDRE*PH. Furthermore, the image collection and processing time at 1.35 cm GSD was about three times as long as that at 2.61 cm. The results of this study suggested that NDRE*PH from UAV multispectral images with a spatial resolution around 2.61 cm could be a preferential selection for seedling rapeseed growth monitoring, while NDRE alone might have a better performance for low spatial resolution images.
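A short sketch of the NDRE and NDRE*PH predictors discussed above, assuming NDRE's standard form (NIR − RedEdge)/(NIR + RedEdge); the data here are synthetic.

import numpy as np

def ndre(nir, red_edge):
    return (nir - red_edge) / (nir + red_edge + 1e-9)

nir, red_edge = np.random.rand(2, 100)   # synthetic band reflectances
ph = np.random.uniform(0.1, 0.5, 100)    # plant height in m (synthetic)
ndre_ph = ndre(nir, red_edge) * ph       # combined spectral-structural predictor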
Article
In the American tropics, livestock production is highly restricted by forage availability. In addition, the breeding and development of new forage varieties with outstanding yield and high nutritional quality is often limited by a lack of resources and poor technology. Non-destructive, high-throughput phenotyping offers a rapid and economical means of evaluating large numbers of genotypes. In this study, visual assessments, digital colour images, and spectral reflectance data were collected from 200 Urochloa hybrids in a field setting. Partial least-squares regression (PLSR) was applied to relate visual assessments, digital image analysis and spectral data to shoot dry weight, crude protein and chlorophyll concentrations. Visual evaluations of biomass and greenness were collected in 68 min, digital colour imaging data in 40 min, and hyperspectral canopy data in 80 min. Root-mean-squared errors of prediction for PLSR estimations of shoot dry weight, crude protein and chlorophyll were lowest for digital image analysis followed by hyperspectral analysis and visual assessments. This study showed that digital colour image and spectral analysis techniques have the potential to improve precision and reduce time for tropical forage grass phenotyping.
Article
Italian ryegrass (Lolium perenne ssp. multiflorum (Lam) Husnot) is a troublesome weed species in wheat (Triticum aestivum) production in the United States, severely affecting grain yields. Spatial mapping of ryegrass infestation in wheat fields and early prediction of its impact on yield can assist management decision making. In this study, unmanned aerial systems (UAS)-based red, green and blue (RGB) imagery acquired at an early wheat growth stage in two different experimental sites was used for developing predictive models. Deep neural networks (DNNs) coupled with an extensive feature selection method were used to detect ryegrass in wheat and estimate ryegrass canopy coverage. Predictive models were developed by regressing early-season ryegrass canopy coverage (%) with end-of-season (at wheat maturity) biomass and seed yield of ryegrass, as well as biomass and grain yield reduction (%) of wheat. Italian ryegrass was detected with high accuracy (precision = 95.44 ± 4.27%, recall = 95.48 ± 5.05%, F-score = 95.56 ± 4.11%) using the best model, which included four features: hue, saturation, excess green index, and visible atmospheric resistant index. End-of-season ryegrass biomass was predicted with high accuracy (R² = 0.87), whereas the other variables had moderate to high accuracy levels (R² values of 0.74 for ryegrass seed yield, 0.73 for wheat biomass reduction, and 0.69 for wheat grain yield reduction). The methodology demonstrated in the current study shows great potential for mapping and quantifying ryegrass infestation and predicting its competitive response in wheat, allowing for timely management decisions.
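A sketch of how the four selected features might be computed per pixel, assuming VARI's usual definition (G − R)/(G + R − B); the extraction below is illustrative, not the authors' pipeline.

import numpy as np
from skimage.color import rgb2hsv

def ryegrass_features(img):
    """img: H x W x 3 float RGB in [0, 1]; returns an H x W x 4 feature stack."""
    hsv = rgb2hsv(img)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    exg = 2 * g - r - b                       # excess green index
    vari = (g - r) / (g + r - b + 1e-9)       # visible atmospheric resistant index
    return np.stack([hsv[..., 0], hsv[..., 1], exg, vari], axis=-1)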
Article
The yield and quality of fresh lettuce can be determined from the growth rate and color of individual plants. Manual assessment and phenotyping for hundreds of varieties of lettuce is very time-consuming and labor-intensive. In this study, we utilized a "Sensor-to-Plant" greenhouse phenotyping platform to periodically capture top-view images of lettuce, and datasets of over 2000 plants from 500 lettuce varieties were thus captured at eight time points during vegetative growth. Here, we present a novel object detection-semantic segmentation-phenotyping method based on convolutional neural networks (CNNs) to conduct non-invasive and high-throughput phenotyping of the growth and development status of multiple lettuce varieties. Multistage CNN models for object detection and semantic segmentation were integrated to bridge the gap between image capture and plant phenotyping. An object detection model was used to detect and identify each pot from the sequence of images with 99.82% accuracy, a semantic segmentation model was utilized to segment and identify each lettuce plant with a 97.65% F1 score, and a phenotyping pipeline was utilized to extract a total of 15 static traits (related to geometry and color) of each lettuce plant. Furthermore, the dynamic traits (growth and accumulation rates) were calculated based on the changing curves of static traits at eight growth points. The correlation and descriptive ability of these static and dynamic traits were carefully evaluated for the interpretability of traits related to digital biomass and quality of lettuce, and the observed accumulation rates of static traits more accurately reflected the growth status of lettuce plants. Finally, we validated the application of image-based high-throughput phenotyping through geometric measurement and color grading for a wide range of lettuce varieties. The proposed method can be extended to crops such as maize, wheat, and soybean as a non-invasive means of phenotype evaluation and identification.
Article
Full-text available
Fast and accurate quantification of the available pasture biomass is essential to support grazing management decisions in intensively managed fields. The increasing temporal and spatial resolutions offered by the new generation of orbital platforms, such as Planet CubeSat satellites, have improved the capability of monitoring pasture biomass using remotely sensed data. Here, we assessed the feasibility of using spectral and textural information derived from PlanetScope imagery for estimating pasture aboveground biomass (AGB) and canopy height (CH) in intensively managed fields, and the potential for enhanced accuracy by applying the extreme gradient boosting (XGBoost) algorithm. Our results demonstrated that the texture measures enhanced AGB and CH estimations compared to the performance obtained using only spectral bands or vegetation indices. The best results were found by employing the XGBoost models based only on texture measures. These models achieved moderately high accuracy in predicting pasture AGB and CH, explaining 65% and 89% of AGB (root mean square error (RMSE) = 26.52%) and CH (RMSE = 20.94%) variability, respectively. This study demonstrated the potential of using texture measures to improve the prediction accuracy of AGB and CH models based on high spatiotemporal resolution PlanetScope data in intensively managed mixed pastures.
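The abstract does not name the specific texture measures, so the sketch below assumes gray-level co-occurrence matrix (GLCM) statistics, a common choice for satellite texture, feeding an XGBoost regressor as described; all data are synthetic.

import numpy as np
import xgboost as xgb
from skimage.feature import graycomatrix, graycoprops

def glcm_features(window):
    """window: 2D uint8 image patch; returns two GLCM texture measures."""
    glcm = graycomatrix(window, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return [graycoprops(glcm, "contrast")[0, 0],
            graycoprops(glcm, "homogeneity")[0, 0]]

rng = np.random.default_rng(1)
windows = rng.integers(0, 256, (50, 32, 32), dtype=np.uint8)  # per-plot patches
X = np.array([glcm_features(w) for w in windows])
y = rng.uniform(500, 4000, 50)                 # synthetic AGB targets (kg/ha)
model = xgb.XGBRegressor(n_estimators=200).fit(X, y)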
Article
Full-text available
Timely monitoring and precise estimation of the leaf chlorophyll content of maize are crucial for agricultural practices. Scale effects are very important because the calculated vegetation indices (VIs) are crucial for quantitative remote sensing. In this study, scale effects were investigated by analyzing the linear relationships between VIs calculated from red-green-blue (RGB) images from unmanned aerial vehicles (UAVs) and ground leaf chlorophyll content of maize measured using a SPAD-502. The scale impacts were assessed by applying different flight altitudes, and the highest coefficient of determination (R²) reached 0.85. We found that VIs from images acquired at a flight altitude of 50 m were better for estimating leaf chlorophyll content using the DJI UAV platform with this specific camera (5472 × 3648 pixels). Moreover, three machine-learning (ML) methods, backpropagation neural network (BP), support vector machine (SVM), and random forest (RF), were applied for grid-based chlorophyll content estimation based on the common VIs. The average root mean square errors (RMSE) of the chlorophyll content estimations were 3.85, 3.11, and 2.90 for BP, SVM, and RF, respectively; similarly, the mean absolute errors (MAE) were 2.947, 2.460, and 2.389. Thus, the ML methods had relatively high precision in chlorophyll content estimation using VIs; in particular, RF performed better than BP and SVM. Our findings suggest that ML methods integrated with RGB images from this camera acquired at a flight altitude of 50 m (spatial resolution 0.018 m) can be effectively applied to the estimation of leaf chlorophyll content in agriculture.
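A minimal sketch of the grid-based estimation step, using a random forest on a few common RGB vegetation indices; the index set, synthetic data, and hyperparameters are illustrative rather than the authors' exact configuration.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(2)
r, g, b = rng.random((3, 300))                       # per-grid mean channels
X = np.column_stack([2 * g - r - b,                  # ExG
                     (g - r) / (g + r + 1e-9),       # normalized green-red diff.
                     g / (r + 1e-9)])                # green-red ratio
spad = 30 + 20 * X[:, 0] + rng.normal(0, 2, 300)     # synthetic SPAD readings
rf = RandomForestRegressor(n_estimators=300).fit(X, spad)
mae = mean_absolute_error(spad, rf.predict(X))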
Chapter
When segmenting green crops, green indices such as the Excess Green Index (ExG), Combined Indices 2 (COM2), and Modified Excess Green Index (MExG) are usually used and are regarded as efficient methods. However, they cannot extract green crops exactly under complex environmental conditions; in particular, they cannot segment green crops from complex soil backgrounds, such as areas of strong highlights or deep shadow on crop leaves. To address current deficiencies in green crop segmentation, this paper introduces a new crop segmentation method to extract more complete green crops. First, a pre-processing procedure divides the crop images into superpixel blocks using the Simple Linear Iterative Clustering (SLIC) algorithm; these superpixel blocks are then classified into three classes using a Classification And Regression Tree (CART) decision tree based on a seven-dimensional (7-D) feature vector constructed in this paper: only-crop blocks (OC-blocks), only-background blocks (OB-blocks), and CBE-blocks in which crop and background coexist. Finally, CBE-blocks are processed by exploiting the strength of ExG to extract the green crops within them. Experimental results show that the proposed algorithm can accurately segment green crops from soil backgrounds, even under relatively complex field conditions, with an accuracy of 98.44%, precision of 91.75%, recall of 91.78%, FPR of 0.55%, and F1-score of 89.31%.
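A sketch of the superpixel-then-classify pipeline: SLIC partitions the image, a CART decision tree labels each block, and ExG would then be applied only inside the mixed (CBE) blocks; a reduced 4-D feature vector stands in here for the paper's 7-D features.

import numpy as np
from skimage.segmentation import slic
from sklearn.tree import DecisionTreeClassifier  # a CART implementation

def block_features(img, labels):
    """Mean r, g, b and ExG per superpixel (illustrative 4-D features)."""
    feats = []
    for lab in np.unique(labels):
        px = img[labels == lab]                  # pixels of one superpixel
        r, g, b = px.mean(axis=0)
        feats.append([r, g, b, 2 * g - r - b])
    return np.array(feats)

img = np.random.rand(128, 128, 3)                # stand-in crop image
labels = slic(img, n_segments=200, compactness=10)
X = block_features(img, labels)
y = np.random.randint(0, 3, len(X))              # 0=OB, 1=OC, 2=CBE (annotated)
cart = DecisionTreeClassifier(max_depth=6).fit(X, y)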
Article
Full-text available
Soybean plant density is an important factor in successful agricultural production. Due to the high number of plants per unit area, early plant overlapping, and eventual plant loss, estimating soybean plant density in the later stages of development should enable determination of the final plant number and reflect the state of the harvest. In order to assess soybean plant density in a digital, nondestructive, and less labor-intensive way, analysis was performed on RGB images (containing three channels: RED, GREEN, and BLUE) taken with a UAV (Unmanned Aerial Vehicle) on 66 experimental plots in 2018 and 200 experimental plots in 2019. Mean values of the R, G, and B channels were extracted for each plot, then vegetation indices (VIs) were calculated and used as predictors for the machine learning model (MLM). The model was calibrated in 2018 and validated in 2019. For validation purposes, the predicted values for the 200 experimental plots were compared with the real number of plants per unit area (m2). Model validation resulted in a correlation coefficient R = 0.87, mean absolute error (MAE) = 6.24, and root mean square error (RMSE) = 7.47. The results of the research indicate the possibility of using an MLM, based on simple VI values, for the prediction of plant density in agriculture without using human labor.
Article
Full-text available
The objective of this study was to develop a low-cost method for quickly obtaining rice growth information from digital images taken with a smartphone. A new canopy parameter, the canopy volume parameter (CVP), was proposed and developed for rice using the leaf area index (LAI) and plant height (PH). The CVP was selected as the optimal parameter for characterizing rice yields during the growth period. Rice canopy images were acquired with a smartphone, and image feature parameters were extracted, including the canopy cover (CC) and numerous vegetation indices (VIs), before and after image segmentation. A rice CVP prediction model in which the CC and VIs served as independent variables was established using a random forest (RF) regression algorithm. The results revealed that the CVP was better than the LAI and PH for predicting the final yield, and that a CVP prediction model constructed with a local modelling method distinguishing different types of rice varieties was the most accurate (coefficient of determination (R²) = 0.92; root mean square error (RMSE) = 0.44). These findings indicate that digital images can be used to track the growth of crops over time and provide technical support for estimating rice yields.
Preprint
Full-text available
Logged forests cover four million square kilometres of the tropics and restoring these forests is essential if we are to avoid the worst impacts of climate change, yet monitoring recovery is challenging. Tracking the abundance of visually identifiable, early-successional species enables successional status and thereby restoration progress to be evaluated. Here we present a new pipeline, SLIC-UAV, for processing Unmanned Aerial Vehicle (UAV) imagery to map early-successional species in tropical forests. The pipeline is novel because it comprises: (a) a time-efficient approach for labelling crowns from UAV imagery; (b) machine learning of species based on spectral and textural features within individual tree crowns, and (c) automatic segmentation of orthomosaiced UAV imagery into 'superpixels', using Simple Linear Iterative Clustering (SLIC). Creating superpixels reduces the dataset's dimensionality and focuses prediction onto clusters of pixels, greatly improving accuracy. To demonstrate SLIC-UAV, support vector machines and random forests were used to predict the species of hand-labelled crowns in a restoration concession in Indonesia. Random forests were most accurate at discriminating species for whole crowns, with accuracy ranging from 79.3% when mapping five common species, to 90.5% when mapping the three most visually-distinctive species. In contrast, support vector machines proved better for labelling automatically segmented superpixels, with accuracy ranging from 74.3% to 91.7% for the same species. Models were extended to map species across 100 hectares of forest. The study demonstrates the power of SLIC-UAV for mapping characteristic early-successional tree species as an indicator of successional stage within tropical forest restoration areas. Continued effort is needed to develop easy-to-implement and low-cost technology to improve the affordability of project management.
Article
Full-text available
Perennial ryegrass (Lolium perenne L.) is one of the most important forage grass species in temperate regions of Australia and New Zealand. However, it can have poor persistence due to a low tolerance to both abiotic and biotic stresses. A major challenge in measuring persistence in pasture breeding is that the assessment of pasture survival depends on ranking populations based on manual ground cover estimation. Ground cover measurements may include senescent and living tissues and can be measured as percentages or fractional units. The amount of senescent pasture present in a sward may indicate changes in plant growth, development, and resistance to abiotic and biotic stresses. The existing tools to estimate perennial ryegrass ground cover are not sensitive enough to discriminate senescent ryegrass from soil. This study aimed to develop a more precise sensor-based phenomic method to discriminate senescent pasture from soil. Ground-based RGB images, airborne multispectral images, ground-based hyperspectral data, and ground truth samples were taken from 54 perennial ryegrass plots three years after sowing. Software packages and machine learning scripts were used to develop a pipeline for high-throughput data extraction from sensor-based platforms. Estimates from the high-throughput pipeline were positively correlated with the ground truth data (p < 0.05). Based on the findings of this study, we conclude that the RGB-based high-throughput approach offers a precision tool to assess perennial ryegrass persistence in pasture breeding programs. Improvements in the spatial resolution of hyperspectral and multispectral techniques would then be used for persistence estimation in mixed swards and other monocultures.
Article
Full-text available
Climate change and competition among water users are increasingly leading to a reduction of water availability for irrigation; at the same time, traditionally non-irrigated crops require irrigation to achieve high quality standards. In the context of precision agriculture, particular attention is given to the optimization of on-farm irrigation management, based on the knowledge of within-field variability of crop and soil properties, to increase crop yield quality and ensure an efficient water use. Unmanned Aerial Vehicle (UAV) imagery is used in precision agriculture to monitor crop variability, but in the case of row-crops, image post-processing is required to separate crop rows from soil background and weeds. This study focuses on the crop row detection and extraction from images acquired through a UAV during the cropping season of 2018. Thresholding algorithms, classification algorithms, and Bayesian segmentation are tested and compared on three different crop types, namely grapevine, pear, and tomato, for analyzing the suitability of these methods with respect to the characteristics of each crop. The obtained results are promising, with overall accuracy greater than 90% and producer’s accuracy over 85% for the class “crop canopy”. The methods’ performances vary according to the crop types, input data, and parameters used. Some important outcomes can be pointed out from our study: NIR information does not give any particular added value, and RGB sensors should be preferred to identify crop rows; the presence of shadows in the inter-row distances may affect crop detection on vineyards. Finally, the best methodologies to be adopted for practical applications are discussed.
Article
Full-text available
Nitrogen is an essential element for coffee production. However, when fertilization does not consider the spatial variability of agricultural parameters, it can generate economic losses and environmental impacts. Thus, monitoring nitrogen is essential to fertilization management, and remote sensing based on unmanned aerial vehicle imagery has been evaluated for this task. This work aimed to analyze the potential of visible-range vegetation indices, obtained with such vehicles, to monitor the nitrogen content of coffee plants in southern Minas Gerais, Brazil. We performed leaf analysis using the Kjeldahl method, and we processed the images to produce the vegetation indices using Geographic Information Systems and photogrammetry software. The images were also classified using the Color Index of Vegetation and the Maximum Likelihood Classifier. As estimator tools, we created Random Forest classification and regression models. We also evaluated the Pearson correlation coefficient between nitrogen and the vegetation indices, and performed analysis of variance and the Tukey-Kramer test to assess whether there is a significant difference between the averages of these indices in relation to nitrogen levels. However, the models were not able to predict nitrogen: the regression model obtained an R² of 0.01, and the classification model achieved an overall accuracy of 0.33 (33%) but did not distinguish between the different nitrogen levels. The correlation tests revealed that the vegetation indices are not correlated with nitrogen, since the best index was the Green Leaf Index (R = 0.21). Nevertheless, the image classification achieved a Kappa coefficient of 0.92, indicating that the tested index is efficient. Therefore, visible indices were not able to monitor nitrogen in this case, but they should continue to be explored, since they could represent a less expensive alternative.
Conference Paper
Pasture biomass information is essential to monitor forage resources in grazed areas, as well as to support grazing management decisions. The increasing temporal and spatial resolutions offered by the new generation of orbital platforms, such as Planet CubeSat satellites, have improved the capability of monitoring pasture biomass using remotely-sensed data. In a preliminary study, we investigated the potential of spectral variables derived from PlanetScope imagery to predict pasture biomass in an area of Integrated Crop-Livestock System (ICLS) in Brazil. Satellite and field data were collected during the same period (May – August 2019) for calibration and validation of the relation between predictor variables and pasture biomass using the Random Forest (RF) regression algorithm. We used as predictor variables 24 vegetation indices derived from PlanetScope imagery, as well as the four PlanetScope bands, and field management information. Pasture biomass ranged from approximately 24 to 656 g m⁻², with a coefficient of variation of 54.96%. Near Infrared Green Simple Ratio (NIR/Green), Green Leaf Algorithm (GLA) vegetation indices and days after sowing (DAS) are among the most important variables as measured by the RF Variable Importance metric in the best RF model predicting pasture biomass, which resulted in a Root Mean Square Error (RMSE) of 52.04 g m⁻² (32.75%). Accurate estimates of pasture biomass using spectral variables derived from PlanetScope imagery are promising, providing new insights into the opportunities and limitations related to the use of PlanetScope imagery for pasture monitoring.
Article
Full-text available
Random sampling is an important approach to field vegetation surveys. However, sampling surveys in desert areas are difficult because determining an appropriate quadrat size that represent the sparse and unevenly distributed vegetation is challenging. In this study, we present a methodology for quadrat size optimization based on low-altitude high-precision unmanned aerial vehicle (UAV) images. Using the Daliyaboyi Oasis as our study area, we simulated random sampling and analyzed the frequency distribution and variation in the fractional vegetation cover (FVC) index of the samples. Our results show that quadrats of 50 m × 50 m size are the most representative for sampling surveys in this location. The method exploits UAV technology to rapidly acquire vegetation information and overcomes the shortcomings of traditional methods that rely on labor-intensive fieldwork to collect species-area relationship (SAR) data. Our method presents two major advantages: (1) speed and efficiency stemming from the application of UAV, which also effectively overcomes the difficulties posed in vegetation surveys by the challenging desert climate and terrain; (2) the large sample size enabled by the use of a sampling simulation. Our methodology is thus highly suitable for selecting the optimal quadrat size and making accurate estimates, and can improve the efficiency and accuracy of field vegetation sampling surveys.